Solid state drives are now larger than hard disk drives: The impact for your data center

The amount of data generated daily is growing rapidly, placing ever more demand on data centers. Not only are connected users actively generating and storing large amounts of content; machines such as autonomous cars and connected planes generate orders of magnitude more.

As there is value in almost any data, it is rarely, if ever, deleted, which steadily increases the demand for storage capacity. The two main alternatives for meeting that demand are hard disk drives (HDDs) and solid state drives (SSDs).

HDD capacities have increased steadily over time, and HDDs have been the mainstream form of enterprise storage until now. But in 2015 the capacity of HDDs was surpassed for the first time by SSDs, and because SSDs are scaling at a faster rate than HDDs, we will never look back.

SSDs use NAND memory, which has a remarkable ability to scale. NAND memory consists of storage cells formed on semiconductor material. Historically, greater density was achieved through process geometry shrinks at the die level. However, this approach is nearing its physical limits, because memory cells can only be squeezed so close together.

Undeterred, the industry delivered a breakthrough in the past three years: 3D, or vertical, NAND. Instead of squeezing memory cells ever closer together in a single plane, 3D NAND stacks them vertically on top of each other, allowing SSDs to continue scaling aggressively in capacity for the foreseeable future.

Another inflection point took place during 2015: enterprise SSDs became less expensive than HDDs once data reduction technologies such as compression and deduplication are taken into account.

Compression reduces the number of bits, and hence the amount of storage, needed for a given amount of data by identifying and eliminating statistical redundancy. This works because most real-world data is statistically redundant. For example, an image may have areas of color that do not change over several pixels, so instead of encoding “red pixel, red pixel ...,” the data may be encoded as “100 red pixels.”
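To make the pixel example concrete, here is a minimal run-length encoding sketch in Python. It illustrates only the idea above; production compressors use far more sophisticated algorithms, and the function names here are ours, not from any particular product.

```python
from itertools import groupby

def rle_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    return [(len(list(run)), value) for value, run in groupby(pixels)]

def rle_decode(pairs):
    """Expand (count, value) pairs back into the original sequence."""
    return [value for count, value in pairs for _ in range(count)]

row = ["red"] * 100 + ["blue"] * 3
encoded = rle_encode(row)
print(encoded)                      # [(100, 'red'), (3, 'blue')]
assert rle_decode(encoded) == row   # lossless round trip
```

One hundred identical pixels shrink to a single (count, value) pair, which is exactly the redundancy elimination described above.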

Deduplication is a special form of data compression that eliminates duplicate copies of repeating data. While compression takes care of repeated substrings inside individual files, deduplication inspects volumes of data to identify large sections (such as entire files) that are identical and stores only a single copy.
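A toy sketch of that idea, assuming file-level deduplication keyed on a SHA-256 fingerprint (real storage arrays typically dedupe at the block level and must also handle reference counting and garbage collection):

```python
import hashlib

class DedupStore:
    def __init__(self):
        self.blobs = {}   # fingerprint -> data, stored only once
        self.files = {}   # filename -> fingerprint

    def write(self, name, data):
        fingerprint = hashlib.sha256(data).hexdigest()
        # Identical content produces the same fingerprint,
        # so a second copy adds a pointer, not more data.
        self.blobs.setdefault(fingerprint, data)
        self.files[name] = fingerprint

    def read(self, name):
        return self.blobs[self.files[name]]

store = DedupStore()
store.write("report_v1.doc", b"quarterly numbers" * 1000)
store.write("report_copy.doc", b"quarterly numbers" * 1000)
print(len(store.files))   # 2 logical files...
print(len(store.blobs))   # ...but only 1 physical copy stored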

Storage vendors claim they can achieve approximately a 3x data reduction on SSDs without negatively impacting system performance. The same cannot be said for HDDs, whose inherently slower access times cannot absorb the extra work of inline data reduction.

Therefore, when comparing SSDs and HDDs in a storage array, the effective per-gigabyte cost of SSD capacity is divided by three once these data reduction technologies are taken into account. On that basis, SSDs have already become more cost-effective than 15K RPM performance HDDs, and the cross-over against 10K RPM HDDs will happen very soon as well. This has triggered a massive transition to flash in the enterprise.
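As a back-of-the-envelope illustration of that division, assuming the ~3x reduction ratio claimed above and purely hypothetical raw prices (placeholders, not figures from this article):

```python
def effective_cost_per_gb(raw_cost_per_gb, data_reduction_ratio):
    """Each physical GB holds `ratio` logical GBs after reduction."""
    return raw_cost_per_gb / data_reduction_ratio

ssd_raw = 0.60   # hypothetical $/GB for an enterprise SSD
hdd_15k = 0.45   # hypothetical $/GB for a 15K RPM HDD (no reduction)

ssd_effective = effective_cost_per_gb(ssd_raw, 3.0)  # vendors' ~3x claim
print(f"SSD effective: ${ssd_effective:.2f}/GB vs 15K HDD: ${hdd_15k:.2f}/GB")
# SSD effective: $0.20/GB vs 15K HDD: $0.45/GB
```

Even with a higher raw price per gigabyte, the SSD's effective cost lands well below the HDD's once the reduction ratio is applied.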

SSDs have always had a strong value proposition versus HDDs: faster, more reliable and lower power. Adding lower cost and higher density makes the SSD alternative all the more compelling.


