Next Gen NVME storage with Dell's PERC 11 controller series

Greg Linwood  |  Jun 08, 2022

I'm excited about Dell's latest PERC 11 RAID controller series!

NVME storage can now be hardware-RAIDed & hot-swapped the same way SAS SSDs/HDDs have been for many years.

This excellent new series of storage controllers now supports:

  • Hardware NVME RAID arrays
  • Hot-swappable, redundant NVME storage devices
  • PCIE Gen 4 - 16GT/s per lane, double PCIE Gen 3's 8GT/s (full duplex)
  • 8GB DDR4 2666MT/s cache
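As a rough sanity check on the PCIE Gen 4 numbers above, per-lane throughput can be worked out from the transfer rate and the 128b/130b line encoding that both Gen 3 and Gen 4 use. A back-of-envelope sketch, not a vendor spec:

```python
# Back-of-envelope PCIE bandwidth, one direction.
# Gen 3 and Gen 4 both use 128b/130b encoding, so usable
# bytes/sec per lane = (GT/s) * (128/130) / 8 bits-per-byte.

def pcie_lane_gbs(gts: float) -> float:
    """Usable GB/s per lane, one direction, after 128b/130b encoding."""
    return gts * (128 / 130) / 8

gen3 = pcie_lane_gbs(8)     # PCIE Gen 3: ~0.98 GB/s per lane
gen4 = pcie_lane_gbs(16)    # PCIE Gen 4: ~1.97 GB/s per lane

print(f"Gen 3 x1: {gen3:.2f} GB/s, Gen 4 x1: {gen4:.2f} GB/s")
print(f"Gen 4 x16: {gen4 * 16:.1f} GB/s per direction")
```

A full x16 Gen 4 link therefore lands around 31.5GB/sec per direction before protocol overheads, which is consistent with the controller throughput figures Dell quotes.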

These capabilities were already available with slower SAS SSDs/HDDs, or via software RAID with NVMEs (with severe performance degradation) but now we can have it all - high performance, redundancy and hot-swappable NVME drives.

Very cool.

Striping NVMEs into RAID arrays multiplies performance the same way it does with SAS SSDs/HDDs, or you can choose from the other familiar RAID configurations - 0, 1, 5, 6, 10, 50, 60 - to balance performance against various forms of redundancy.
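To illustrate how those RAID levels trade capacity for redundancy, here's a small sketch using standard RAID arithmetic (nothing PERC-specific; controller overheads ignored):

```python
# Usable capacity for common RAID levels, given n identical drives
# of size_tb each. Standard RAID arithmetic; controller overheads ignored.

def usable_tb(level: int, n: int, size_tb: float) -> float:
    if level == 0:                  # pure striping, no redundancy
        return n * size_tb
    if level in (1, 10):            # mirroring (10 = striped mirrors)
        return n * size_tb / 2
    if level == 5:                  # single parity drive's worth
        return (n - 1) * size_tb
    if level == 6:                  # double parity
        return (n - 2) * size_tb
    raise ValueError(f"unhandled RAID level {level}")

# e.g. eight hypothetical 3.2TB NVME drives:
for level in (0, 10, 5, 6):
    print(f"RAID {level:2}: {usable_tb(level, 8, 3.2):.1f} TB usable")
```

RAID 0 gives all 25.6TB but no redundancy; RAID 10 halves capacity for mirroring; RAID 5 and 6 give up one and two drives' worth respectively for parity.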

8GB controller cache allows writes to occur in super fast DDR4 memory, optimizing extremely fast storage even further.
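For a sense of why a DDR4 write-back cache helps even in front of NVME flash, the theoretical peak of a single DDR4-2666 channel falls out of simple arithmetic on the standard 64-bit bus width (real controllers won't hit this peak, but it shows the headroom over flash writes):

```python
# Peak theoretical bandwidth of one DDR4 memory channel:
# transfers/sec * bus width in bytes.
MTS = 2666         # mega-transfers per second (DDR4-2666)
BUS_BYTES = 8      # standard 64-bit DDR4 channel = 8 bytes/transfer

peak_gbs = MTS * BUS_BYTES / 1000
print(f"DDR4-2666 peak: {peak_gbs:.1f} GB/s per channel")
```

Acknowledged writes landing in DRAM at ~21GB/sec, with microsecond-scale latency, can then be destaged to flash at the controller's leisure.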

Dell is early to market with these new capabilities (joining the Lenovo ThinkSystem RAID 940 family), and HPE presently has no equivalent in its x86 server range.

In this article, let's review how SSDs have evolved in recent years and how these new NVME RAID controllers fill a big gap in the evolution of x86 server SSDs.

Early SSDs - powerful but tricky

Early high end SSDs such as Fusion-io were performance game changers in 2008, boosting throughput from ~100MB/sec per HDD up to ~500MB/sec and cutting response latency from milliseconds down to microseconds.

The downside was that they needed to be plugged into PCIE slots on system motherboards, and they required extra storage driver software & updates for their proprietary protocols.

PCIE attachment also meant lack of redundancy (no RAID) and no hot-swap capability as the devices were plugged into system motherboards rather than into regular disk drive bays and RAID controllers.

Installing or replacing a PCIE attached SSD required powering down the server, sliding it out of its rack (and fiddling with rear cabling), then opening the chassis and inserting the card into a PCIE slot on the motherboard or a riser card.

All of this was a little daunting for those doing it for the first time, especially on production database servers.

These issues, coupled with uncertainty about the reliability of SSD Flash memory, discouraged many from adopting these powerful but slightly tricky devices.

In 2010 I presented the state of early SSDs at Microsoft's TechEd Australia conference, including a case study with one of our early adopter Fintech clients who used them very successfully, despite their early challenges.


NVMEs - a faster, standard protocol

The new NVME protocol was introduced in 2011. It offered standardization and much higher scalability than earlier SAS/SATA or even Fusion-io's proprietary SSDs.

Microsoft introduced native support for NVME in Windows Server 2012 R2, late in 2013.

Major manufacturers such as Dell, HPE, and Lenovo quickly added front drive bays for NVME drives, making them more accessible for maintenance, including basic (non-redundant) hot-swap.

These improvements were helpful, but production database servers still couldn't survive a single NVME drive failure, and hot-swap isn't really practical on a production database server without storage redundancy.

Redundancy can be implemented with software RAID such as Windows Storage Spaces or Linux mdraid, but software RAID then becomes a significant performance bottleneck, negating much of the performance benefit of the NVME devices.
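To see where that bottleneck comes from, here's the core of what parity RAID asks the host CPU to do for every stripe - a toy sketch; real software RAID uses SIMD but still burns host cycles that a hardware controller offloads:

```python
# Parity RAID in miniature: parity is the XOR of the data chunks.
# Software RAID computes this on the host CPU for every write
# (and for every read while the array is degraded).

def xor_parity(chunks: list[bytes]) -> bytes:
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

stripe = [b"AAAA", b"BBBB", b"CCCC"]    # one stripe across three drives
parity = xor_parity(stripe)

# If one "drive" fails, XOR of the survivors plus parity rebuilds it:
rebuilt = xor_parity([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```

A hardware controller like the PERC 11 does this XOR work in dedicated silicon, which is why it avoids the host-CPU penalty.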

SAS SSDs - the current approach

SAS SSDs, which became available a few years later, were compatible with existing servers out of the box, slotting directly into existing storage controllers.

This brought them quickly into the mainstream, and SAS SSDs are currently the most common form of SSD interface used in production x86 database servers.

The current record holder for TPC-E OLTP performance benchmarks was set in November 2020 using SAS SSDs.

SAS SSDs are the contemporary approach for high performance x86 server storage and are adequate for most production database servers, but NVME interface performance is a step up from SAS.

So, let's reach for the next level..!


Dell's PERC 11 RAID controller series gives us the best of everything currently available:

  • RAID redundancy - single NVME device failures do not take production databases offline.
  • Full hot swap - add / remove NVME drives as needed & let the RAID controller resync.
  • Extreme SSD performance - stripe multiple devices into RAID Arrays for even higher performance.
  • PERC 11 attached via PCIE Gen 4 - ~30GB/sec total controller throughput.
  • 8GB DDR4 cache - write to DRAM at much higher speeds than possible with Flash based SSD.
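Putting the numbers above together, it takes surprisingly few NVME drives to saturate the controller. The sketch below assumes ~7GB/sec sequential throughput per Gen 4 x4 drive (a typical figure, not a PERC spec) against the ~30GB/sec controller ceiling:

```python
import math

CONTROLLER_GBS = 30   # ~total PERC 11 throughput quoted above
DRIVE_GBS = 7         # assumed sequential throughput per Gen 4 x4 NVME drive

drives_to_saturate = math.ceil(CONTROLLER_GBS / DRIVE_GBS)
print(f"~{drives_to_saturate} drives can saturate the controller")
```

So beyond roughly five fast drives, extra spindles in one array buy redundancy and capacity rather than additional sequential throughput - worth keeping in mind when sizing arrays.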

Most of the above is already possible with SAS SSDs but Dell's PERC 11 includes faster NVME drives, combining the redundancy & convenience of traditional RAID with extreme NVME device performance.

One trade-off with NVME SSDs is that they draw more power than SAS SSDs, which also means higher operating temperatures for the hardware they attach to (motherboard or RAID controller).

Finally, a link to Dell's User Guide and spec sheet for those interested in further details.
