Choosing a RAID Card for ESXi

I recently built a VMware ESXi host at home. When I was researching the hardware, I learned there are a number of things to consider when choosing a RAID card for use under ESXi. This article covers those things and offers advice for anyone who is building a similar system.

Target Audience

Anyone who has stood up ESXi in an enterprise data center would probably scoff at the idea of putting storage directly in the host machine. That’s OK; this article isn’t for you. This article is for people building ESXi in a lab environment, or maybe in a small business setting, where a) clustering/HA isn’t a consideration and b) there is no shared storage present.

Objectives

My plan was to use inexpensive, high-capacity SATA drives to store all the VM images. Since I wasn’t willing to accept the risk of running everything on a single drive, I wanted two drives set up in a mirror, with the mirrored volume presented to ESXi. I was already familiar with presenting LUNs off a SAN for ESXi to use, but I hadn’t yet seen a host built using local storage, so I wasn’t sure what my options were or what their pros and cons might be.

Possible Solutions

There are basically three types of solutions that I explored, listed here from lowest to highest cost:

  1. On-board SATA controller RAID
  2. SATA/SAS host bus adapter with RAID support
  3. Dedicated RAID card

On-board SATA Controller RAID

Nearly every motherboard released in the past several years offers RAID support using the on-board SATA chipset. This is usually seen as an easy, low-cost (you could even consider it “no cost”) way of getting RAID into your machine. Even server-class motherboards made by companies like Super Micro and Intel support RAID on their SATA ports. These ports are connected to the southbridge on the motherboard, with the southbridge acting as the SATA controller. Modern motherboards have southbridge chipsets with names like ICH10R on Intel boards and SB750 on AMD boards. When RAID is enabled on these chipsets, they use a combination of the southbridge hardware and a software driver to operate the RAID array. This means that ICH10R- and SB750-based RAID arrays are heavily dependent on the operating system that’s running: the OS must have an appropriate driver that enables the RAID functionality. This is what’s known as “software RAID” because of the reliance on a software component.

ESXi’s support for software RAID is pretty clear: there is none. The only way you can use on-board SATA ports is in non-RAID mode. This eliminates on-board RAID as an option.
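If you want to see what ESXi really thinks of your disks, you can query the storage devices the host exposes. Below is a minimal sketch using VMware’s pyVmomi Python bindings; the hostname and credentials are placeholders, and this is a rough example rather than a polished tool.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder hostname/credentials; lab hosts often use self-signed certs,
    # hence the unverified SSL context.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi.lab.local", user="root", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            # Each ScsiLun here is a device ESXi itself can see. With on-board
            # "RAID" enabled you will still see every physical disk listed; a
            # true hardware RAID card presents one logical volume per array.
            for lun in host.config.storageDevice.scsiLun:
                print("   ", lun.canonicalName, "-", lun.model)
    finally:
        Disconnect(si)

If the host lists every physical disk rather than a single logical volume, the controller’s RAID mode is doing nothing for you under ESXi.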

SATA/SAS Host Bus Adapter With RAID Support

The next step up from the on-board ports is to add additional ports via a SATA or SAS HBA. Some really common examples of such cards include LSI’s SAS30x1E-R or Super Micro’s AOC-USAS-x cards, as well as LSI’s 921x or Super Micro’s AOC-USAS2-x cards. These cards are branded as RAID adapters and use LSI’s 1068E and SAS2008 chipsets, respectively. Despite the marketing, they operate much like the on-board SATA controllers: they require a software driver in order to operate in RAID mode, which makes them ineligible for use as RAID controllers with ESXi. They do, however, provide between four and eight high-performance SATA/SAS ports with more features and options than the ICH10R/SB750 ports.

When reviewing a SATA/SAS HBA, consider these points to determine whether it’s a software RAID card:

  • Does the card use one of the very popular chipsets that only support software RAID (e.g., LSI 1064E, 1068E, or SAS2008)? (Yes, I’m picking on LSI chipsets here, but that’s only because I researched them the most.)
  • Is the card simply rebranded and really using one of the above chipsets under the hood (such as the Intel SASWT4I/SASUC8I or Super Micro cards)?
  • Does the card lack its own processor (a.k.a. RAID-on-Chip, or ROC) for doing parity calculations?

If you can answer “yes” to any of these questions then you are likely looking at a card that only does software RAID.
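One low-tech way to work through that checklist is to look at the PCI IDs the card reports rather than the name on the box. The sketch below is a rough example that assumes a Linux machine with lspci available; it simply prints the storage-class controllers along with the numeric [vendor:device] IDs that lspci -nn appends. LSI silicon reports vendor ID 1000 regardless of whose sticker is on the card.

    import subprocess

    # List PCI devices with numeric [vendor:device] IDs and keep the
    # storage-class ones (RAID, SCSI/SAS, SATA, mass storage).
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if any(kw in line for kw in ("RAID", "SCSI", "SATA", "Mass storage")):
            print(line)

Once you have the vendor:device pair, a quick search tells you whether the card is really a 1068E or SAS2008 underneath the branding.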

Dedicated RAID Card

A dedicated RAID card is just that: a card that does all RAID functions itself, no software required. These cards are often identified by being able to answer “no” to the set of questions posed in the section above. You can also usually spot a RAID card at twenty paces just by looking at its price tag. These cards typically start at 1.5x – 2x the price of a SATA/SAS HBA with the same number of ports and go up from there.

Generally speaking, hardware RAID cards are well supported under ESXi, which, after ruling out the previous two solutions, made them the only viable option for my build. The only thing you need to check is that the card you’re considering is listed on the VMware HCL.

Other Considerations for a Dedicated RAID Card

Do a search in almost any VMware-related forum and you’ll see posts from people complaining about poor performance when running their datastore on a directly attached RAID array. The response is almost always “do you have write-back turned on?” Enabling write-back tells the RAID controller to cache writes in its on-board memory and indicate to the OS (i.e., ESXi) that the data has been successfully written to disk. Storing the data in memory takes a fraction of the time required to send it to the disk. At some point in the future the card flushes that data from memory down to the disk, but the advantage is that ESXi isn’t sitting around waiting for that to happen; it’s moved on to other things by then. Write-back greatly improves random write performance, and random writes are exactly the kind of workload a VM environment generates.
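To put a rough number on that, here’s a minimal sketch of a synchronous random-write test you could run inside a Linux guest whose virtual disk sits on the array. The file path, sizes, and counts are arbitrary placeholders, and a proper benchmark tool like fio will give far more rigorous results.

    import os
    import random
    import time

    # Put this file on a disk that actually lives on the RAID array,
    # not on tmpfs (/tmp is tmpfs on many distros). Path is a placeholder.
    PATH = "/mnt/datastore-disk/wb-test.bin"
    SIZE = 256 * 1024 * 1024   # 256 MiB test file
    BLOCK = 4096               # 4 KiB blocks, typical of random VM write traffic
    COUNT = 2000               # number of synchronous writes to time

    fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SIZE)
    buf = os.urandom(BLOCK)

    start = time.monotonic()
    for _ in range(COUNT):
        offset = random.randrange(SIZE // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        os.fsync(fd)   # each write must be acknowledged before the next one
    elapsed = time.monotonic() - start

    os.close(fd)
    os.unlink(PATH)
    print(f"{COUNT / elapsed:.0f} synchronous 4 KiB random writes/sec")

With write-back enabled the controller acknowledges each fsync from its cache, so the writes-per-second figure should jump dramatically compared with running the same loop against a controller whose cache is disabled.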

Now, you may wonder, since the data is stored for a time in volatile memory, what happens if the machine crashes or the power is lost before the data is sent to the disk? That’s where the battery backup unit (BBU) comes in. The BBU is like a small UPS for your RAID card; it provides battery power so that the memory on the card stays online even if the computer itself is off. That keeps the non-committed data safe in memory until the machine is back online and the data can be written to disk.

Conclusion

A hardware RAID card that is listed on the VMware HCL is the only viable solution for running an ESXi host with local storage. Consider how many SATA/SAS ports you need when sizing the card, and carefully consider whether you should install a battery backup unit so that write-back caching remains safe even if power is lost. Be aware that not all SATA/SAS cards are true RAID cards, even if the marketing material says they are. Ask the vendor and do your research online before you purchase.

24 thoughts on “Choosing a RAID Card for ESXi”

    1. Hi,

      My understanding is that even when the chip is configured in RAID mode ESXi will see each drive individually and not a single logical volume. This is because ESXi does not contain the software necessary to support the RAID function.

  1. Thanks for posting this, mate. I have spent a fair amount of cash trying to get my on-board RAID setup working with ESXi, which hasn’t come without its fair share of swearing at the screen. It’s a pity I hadn’t read this before :-( I now have a ton of 120GB SSD drives lying around. Oh well..!

  2. To clarify though, the LSISAS1064E DOES support RAID in ESXi, at least in 5.0.0 and up. For an embedded HBA, its performance isn’t bad either.

    1. Hi Lee,

      Interesting. Can you elaborate a bit? When you say it’s supported, how far does that go? Can you query the status of a RAID volume from ESXi?

  3. Joel, thanks for a great post. I learned the hard way about the *truth* of pseudo-RAID support and after much testing, shelled out for an Adaptec 2405 card. I would add two additional tips:

    1) make sure you have a UPS if your RAID card isn’t battery-backed – the benefit for write caching is significant
    2) don’t cheap out on your cables – I bought a Startech mini-SAS fan-out to go with my Adaptec card and it didn’t work and took ages to RMA

    If it’s a RAID 10 system and the price sounds too good to be true – it is, and you’re going to need at least an extra $300 for an aftermarket RAID card.

  4. Cool, erm, what about LSI SAS2008-based cards? With IR firmware they support RAID 0/1/10, and with IT firmware they operate as HBAs. But regardless of the firmware, ESXi recognizes the lack of a BBU and disables the write cache. There are rumors that one can enable it in IR firmware with LSI MSM, but there seems to be no way to enable the write cache of the HBA and/or HDDs with IT firmware. It looks like SSDs are fast enough that one doesn’t notice the disabled write cache.

    What are your experiences with RAID adapters w/o a BBU? Do you get transfer rates < 50MiB/s or do you get the full potential of your (spindle-based) HDDs?

    1. Chris, that’s a good question. I did some quick tests on a VM whose vmdk is on an LSI 9260, but my results were so wildly inconsistent that I threw them all out. I’m not sure if my test method was crap or something else was going on. So unfortunately I can’t give you a good answer for what kind of performance I’m seeing on a BBU-less card. Maybe when I take this host down I’ll poke around in the LSI BIOS to see if there’s a hint about the state of the write cache.

  5. Looking for a cheap RAID card to do a mirror (RAID 1) using VMware 5.0 or 5.1 for a simple small lab. Any thoughts?

    1. Hey Steven,

      I guess that’s kind of the unwritten side effect of needing a real card: they aren’t inexpensive.

      I use an LSI 9260-4i which was around $350 CAD (I think). That gives you an idea of price point.

  6. So that means the 9341, although it’s listed with VMware drivers and UEFI BIOS drivers, would not support a RAID 1, for example? Or would a UEFI BIOS offer other options? The power budget for a system mirror then runs around 15 W just for the card, which is quite absurd these days.

    1. Hi. I’m not totally certain what you’re getting at.

      I don’t know what bearing a UEFI BIOS has on RAID controllers. I would think it has zero bearing, but I’ve also not heard of “UEFI BIOS drivers”.

  7. Thanks for your reply. I’ve been researching the viability of RAID controllers for a system mirror (RAID 1) for an ESXi system drive and VM datastore with basic redundancy. The latest LSI 12Gb models did not appear much more expensive than their 6Gb models, or even a used M1015 these days (~$200 on eBay), but could have better features, e.g., when I decide to set the mirror up with two SSDs. LSI is an obvious choice due to their widespread use, OEMing for other companies, and first-hand VMware support. The models in question then are the 9341 or 9361, whose SAS3008 and SAS3108 ROC chips match the 2008 and 2108 6Gb implementations you were talking about.

    The issue becomes quite complex, because different levels of firmware/software are involved. Even LSI support is quite cautious in how they express themselves and calls the 9341 not fully self-contained (yes, it uses motherboard memory and does not have its own cache; the 9361 does). The official VMware HCL only lists the 9361, for what it’s worth.

    1. Basically, the LSI RAID controllers use PowerPC cores with firmware to provide their function (see for example here: http://www.lsi.com/downloads/Public/SAS%20ICs/LSI_PB_SAS3008.pdf). The level of software and encapsulation towards the BIOS and OS is what makes a “fakeRAID” or a real RAID.
    The firmware of the RAID controller presents some form of interface at the BIOS level (a control console to configure RAID sets), where its software already hooks into the motherboard (it has to, so the 9341 can obtain its cache). The question then is whether the OS-level “driver” only consists of some vectors (PCI, vendor, class) and, e.g., generic AHCI drivers to access the RAID.
    2. About the UEFI functionality from the LSI web site, it apparently affects the OS level in some cases: http://www.lsi.com/downloads/Public/Host%20Bus%20Adapters/Host%20Bus%20Adapters%20Common%20Files/SAS_SATA_12G_P2/SAS3_UEFI_BSD_P2.zip. There are also VMware 5 drivers, for what it’s worth.
    3. Some web sites claim that the LSI firmware for a 2008 or 3008 (i.e., the 9240 or 9341) would make a RAID 1 good enough for VMware to see it as a RAID drive, but Intel with its ICH10 or similar does not. That is a pity, because the power budget of these RAID cards (~15 W) is not small for today’s machines, and Intel could probably do better with RAID functionality integrated into the southbridge. I don’t know what you tried along those lines, since you state that 2008 controllers don’t work.
    4. About the 9361 line (2108- or 3108-based controllers), it is unclear to me if a battery or similar is also required for it to be fully self-contained. The BIOS of those controllers detects the presence of the battery and enables some memory-based functionality accordingly.
    5. Other points are that the 9341 lists double the MTBF and substantially better temperature tolerance (due to its simpler functionality) than the 9361.

    Hope that helps to clarify where I was coming from.

    1. Hi again.

      I think you’re way overthinking this. The 9341 and 9361 are both listed on the ESXi HCL (http://www.vmware.com/resources/compatibility/search.php?deviceCategory=io) under the “SAS-RAID” category (note that ICH10 and the like are under the “SATA” category, a non-RAID category). Those cards are supported by the megaraid_sas driver. This is the same driver that supports all of the LSI SAS/SATA RAID cards, and it allows ESXi to see the RAID volumes just fine.

      1. I wish overthinking would apply. LSI support could not answer clearly “affirmative” to very specific questions recently. Something may have changed in the list, because now the 9380 is also there (supposed to ship soon). :) An affordable answer for ESXi then clearly is the 9341 (4i lists for around 165 Euro/$200 new). Thanks for the input anyways.

        1. Good luck! If you do get one of those cards, please post a comment with the results in case others are in the same boat as you.

  8. Hi Joel

    This article made me realize I need a SATA RAID controller for my home server :)

    Fractal Design Node 304
    ASRock E3C226D2I
    Intel Xeon E3-1245 v3
    Crucial ECC 2 x 8GB
    5 x WD 1TB Red 2.5″

    All on the VMware HCL for ESXi 5.5 U2, and all devices install OK.

    What I did not know when purchasing is that software RAID or “fakeRAID” does not work with ESXi, so now I need to finish my setup by adding a RAID controller.

    Given the number of hard drives (maybe I will add an extra one), would you say that the “LSI MegaRAID SAS 9260-8i” is a good choice?

    http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=12377&deviceCategory=io&details=1&partner=50&releases=202&deviceTypes=14&vioSolutions=Standard%20-%20IO%20Devices&page=1&display_interval=500&sortColumn=Partner&sortOrder=Asc

    I don’t want to spend $200 on hardware that is likely to be deprecated in the next version of ESXi.

    Thanks again.

    1. Hey Jesper,

      The 9260-8i is on the HCL, as you pointed out, so that seems like a good choice. It’s unlikely that VMware would deprecate such a common card in the next version of ESXi, but that’s really the risk we all take. Only they know for sure what will be in or out :-)

  9. Hi,
    I’m building a home server to replace my current one and I want to run VMware instead of Windows Home Server. I figured I would need to add a RAID card, and your article confirmed it. My main concern is the heat produced by this extra card.

    I want to use a micro tower case with 4 bays, and I have a similar form factor currently using Windows software RAID. It’s quiet, sips power, and doesn’t generate too much heat, so I can toss it under an end table in my living room and forget about it. Inside that small, crammed box, though, I fear there won’t be much room for heat dissipation.

    Are there any RAID cards that might have low-heat ratings or something? Maybe a way to externalize the heat sink, if that’s even possible… or the entire RAID card. Any options for low-noise cooling?

    My priority is noise and reliability. I’m willing to pay a little more for this peace of mind. Especially if my new server lasts as long as my old one: 6 years and counting and only replaced one power supply to date.

    Thanks Joel!

    1. Hey Tony,

      Yeah, understandable :-) I don’t know of any specific low-heat cards. I’d be surprised if they generate all that much heat to begin with. Have you checked the data sheets on the LSI cards for environmental stats? You might also want to post on servethehome.net to see what other SFF enthusiasts have done.
