Few people can deny the rising presence of solid-state drives (SSDs) in enterprise applications such as SQL Server. They have a few major advantages over their spinning-platter counterparts, namely, their increased read and random-access speeds. But given that conventional spinning-platter drives have been on the market for decades and have a great deal of proven technology behind them, is there a real incentive to push for a switch to solid-state storage devices for SQL Server -- especially given their cost?
SSDs have a number of attractive features that make them increasingly competitive with conventional disks. They consume little energy, they offer fast random-access reads, and they use the same interfaces as hard disks (e.g., Serial Advanced Technology Attachment, or SATA), so they can serve as drop-in replacements. For database administrators, SSDs’ high read speeds are a major draw, since they can theoretically relieve a major I/O bottleneck.
But there are several valid reasons not to go with solid-state storage for SQL Server. The single biggest is cost-effectiveness: whether SSDs deliver better throughput per dollar than conventional disks. When dealing with storage systems containing many disks -- as you often do with databases -- it isn’t just raw performance that matters but performance per dollar. If you can solve most of your bandwidth problems with a broad array of cheap hard disk drives, go for it. With SSDs, you could be spending up to 10 times as much money, and unless you’re getting 10 times the performance (you typically don’t), you’re better off with hard disks.
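The performance-per-dollar argument is easy to put in concrete terms. The sketch below compares sequential bandwidth per dollar for the two drive types; all prices and throughput figures are made-up round numbers for illustration, not benchmarks of any particular product.

```python
# Bandwidth-per-dollar comparison; all figures are hypothetical round numbers.
def bandwidth_per_dollar(mb_per_sec, price_usd):
    """Sequential throughput delivered per dollar spent on the drive."""
    return mb_per_sec / price_usd

# A cheap 7,200 RPM hard disk vs. an enterprise flash SSD (illustrative only):
hdd = bandwidth_per_dollar(mb_per_sec=120, price_usd=100)   # 1.200 MB/s per $
ssd = bandwidth_per_dollar(mb_per_sec=250, price_usd=2000)  # 0.125 MB/s per $

print(f"HDD: {hdd:.3f} MB/s per dollar")
print(f"SSD: {ssd:.3f} MB/s per dollar")
```

With numbers like these, an array of inexpensive disks delivers roughly ten times the sequential bandwidth per dollar -- the crux of the argument above. Substitute current street prices and measured throughput to run the comparison for your own hardware.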
A 2009 Microsoft Research paper, Migrating Server Storage to SSDs: Analysis of Tradeoffs, concluded that SSDs were not, at the time, a viable replacement for conventional hard drives in any of the server scenarios they tested. “The capacity/dollar of SSDs needs to improve by a factor of 3-3,000 for SSDs to be able to replace disks,” the authors wrote. “The benefits of SSDs as an intermediate caching tier are also limited, and the cost of provisioning such a tier was justified for fewer than 10% of the examined workloads.” SQL Server was not one of the workloads the authors tested explicitly, but they did test against a 5,000-user installation of Microsoft Exchange Server (which uses an embedded database) and didn’t find the investment worthwhile.
One issue almost inevitably comes up in any discussion of SSDs’ long-term use, though it shouldn’t be held against them: flash memory cells can withstand only a limited number of write cycles. Users and IT administrators alike have been hyperconscious of this fact ever since flash drives came on the market. In a consumer setting, where I/O isn’t as aggressive as in an enterprise environment, the write-cycle limit may not be a big deal. But in an enterprise setting, especially for applications like databases where reliability is crucial, people don’t want to bank on a technology that might torch their data.
A closer look shows the “write endurance” problem is far worse on paper than in reality, and it has been mitigated to a great extent by good design. SSD market analyst Zsolt Kerekes did his own investigation of the issue and concluded, “In a well-designed flash SSD you would have to write to the whole disk the endurance number of cycles to be in danger.” Even databases that sustain a great deal of writes don’t pose a write-endurance problem for SSDs.
Under realistic workloads, then, the write-endurance lifetime of a solid-state drive is many times longer than its likely deployment lifetime. In other words, you’re far more likely to replace an SSD because a newer, larger, faster or more energy-efficient model comes on the market than because it runs out of write cycles.
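Kerekes’ point can be made concrete with some back-of-the-envelope arithmetic. The sketch below estimates how long a drive would last under ideal wear leveling; the capacity, cycle rating and daily write volume are hypothetical figures chosen only for illustration.

```python
# Rough write-endurance lifetime estimate; all figures are hypothetical.
def endurance_years(capacity_gb, pe_cycles, daily_writes_gb):
    """Years until the drive's rated program/erase cycles are exhausted,
    assuming writes are spread evenly across all cells (ideal wear leveling)."""
    total_writable_gb = capacity_gb * pe_cycles
    return total_writable_gb / daily_writes_gb / 365

# A 200 GB drive rated for 10,000 P/E cycles, absorbing a punishing
# 500 GB of writes every single day:
years = endurance_years(200, 10_000, 500)
print(f"{years:.0f} years")  # prints "11 years"
```

Even under that punishing write load, the cells outlast any plausible deployment window by years, which is why replacement cycles, not write cycles, are what retire SSDs in practice.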
And newer models are constantly arriving, although prices have a long way to fall before SSDs become cost-effective replacements for conventional drives. Consequently, if you’re prepared to spend what flash SSD storage for a database system costs (easily on the order of thousands of dollars), you might be better off putting those resources toward other components. Adding RAM, for instance, means less of the workload is I/O-bound and may be a more cost-effective way to speed things up than dropping stacks of cash on SSDs. Your best bet is to use real-world statistics to find out how much of your database workload is irreducibly I/O-bound, and then determine whether SSDs are worth the cost.
James Hamilton of the Data Center Futures team at Microsoft crunched some numbers on when SSDs make sense in server applications and produced a useful formula for figuring the cost-effectiveness of SSDs. His formula uses a database server (a “red-hot transaction processing system,” in his words) as a test case for when SSDs might be justified. From what he’s found, random I/O to and from disks has consistently lagged behind other kinds of I/O, so it’s tempting to replace disks with solid-state storage on that basis alone. But, again, the question is how cost-effective it is to do so; if you gather real-world data from your own systems and run the numbers, you may find the costs don’t justify the gains.
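Hamilton’s approach can be boiled down to a break-even comparison: figure out how many drives of each type it takes to hit your random-I/O target, then compare the bills. The function below is a rough sketch of that idea, not his actual formula, and every price and IOPS rating in it is hypothetical.

```python
import math

# Break-even sketch in the spirit of Hamilton's analysis (not his actual
# formula); all prices and per-drive IOPS ratings are hypothetical.
def cheapest_option(target_iops, hdd_iops, hdd_price, ssd_iops, ssd_price):
    """Return the cheaper way to reach target_iops, buying whole drives."""
    hdd_cost = math.ceil(target_iops / hdd_iops) * hdd_price
    ssd_cost = math.ceil(target_iops / ssd_iops) * ssd_price
    return ("HDD", hdd_cost) if hdd_cost <= ssd_cost else ("SSD", ssd_cost)

# A "red-hot" OLTP system needing 20,000 random IOPS:
print(cheapest_option(20_000, hdd_iops=200, hdd_price=100,
                      ssd_iops=5_000, ssd_price=2_500))
```

With these made-up figures the two options come out nearly even, which is exactly why the real-world measurements the article recommends matter: a small shift in drive prices or in how random your workload is tips the balance either way.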
While SSDs are on the way to overtaking their spinning-disk counterparts in many environments, it’s still hard to justify their use in a SQL Server environment from a cost perspective. That will change as the prices on SSDs come down, or your workloads change, or both. But before you drop the cash, do the math; for the time being, your money may be put to better use somewhere else.
ABOUT THE AUTHOR
Serdar Yegulalp has been writing about computers and IT for more than 15 years for a variety of publications, including InformationWeek and Windows Magazine.
This was first published in July 2011