
How SSDs conquered mobile devices and modern OSes

With flash ascendant, OS vendors have disabled defragging and supported TRIM.

Lee Hutchinson
In part one of our "SSD Revolution" series, we covered the basics of flash memory in solid state drives, walking through lots of important but esoteric details such as the difference between NAND flash and NOR, or how SSD reads and writes work. We also talked about the techniques used to make SSDs faster and to prolong their lives. But SSDs don't just exist in a vacuum—the state of solid state, such as it is, has had a significant effect on the shape of the modern mobile device landscape, which we explore now in part two.

Not long ago, flash-based MP3 players occupied the low end of the capacity spectrum and, while some brave souls were using PCMCIA compact flash cards in their laptops, you still needed a real hard disk drive to effectively boot and use Windows or OS X. Not anymore—not only are flash-powered, high-capacity MP3 players and laptops standard, but modern operating systems are quickly adapting to SSDs as the norm.

An entire class of ultrabooks—which, in spite of what the name suggests, do not contain hyperdrives, organic CPUs based on alien DNA, or anything else truly deserving of the "ultra" prefix—are now built around the MacBook Air's design philosophy of being durable, thin, light... and stuffed full of NAND flash. Laptops of this form factor seem poised to deliver on most of the promises that netbooks once made (especially portability and battery life) without falling prey to the same set of compromises that ultimately doomed netbooks to hobbyist devices.

Tablets, too, are on the rise. The tablet segment of the mobile device market didn't even meaningfully exist prior to 2010—say what you will about the iPad, but it truly sparked a revolution. Since their rise to prominence, all mainstream tablets have been exclusively flash-powered devices; there's not a hard disk to be found anywhere in the lot. While the SSD craze might be sweeping the "real" computer segment as flash storage becomes more common on desktops and laptops, the place where NAND flash most truly empowers consumers is in mobile devices.

But things were not always thus, and flash wasn't always the best choice for mobile devices to store data.

CmdrTaco's gaffe

Tough crowd

One of the most-parodied comments in the history of the Internet appeared on October 23, 2001 on tech blog and aggregator site Slashdot. Apple had just announced something called an iPod, with which it hoped to take on the already-crowded portable music player market. In the news post about the iPod launch, Slashdot editor and founder Rob Malda (known by his net handle CmdrTaco) famously wrote, "No wireless. Less space than a Nomad. Lame."

At the time, he had a point. That initial iPod contained a 1.8" hard disk drive (the discontinued MK5002MAL) manufactured by Toshiba, which provided 5GB of space to store music (or anything else, really, since the iPod's FireWire connection meant it could be used as a fast and relatively cheap external hard disk drive). Though CmdrTaco was certainly correct that the competing Creative NOMAD Jukebox had more usable capacity, he swung and missed on which player would win the market because he made the common geek mistake of believing that a device's laundry list of features tells the story of its success. Even though it had less space and didn't do wireless syncing (a feature that wouldn't show up on iOS devices until 2011!), the iPod was a heck of a lot easier to put in your pocket.

A look through the comments attached to that old Slashdot story paints a scene utterly foreign to most folks today. Yes, flash-based portable music players existed back in 2001, a time so long ago that a big chunk of today's hipster digerati were still in junior high, but those flash-based players had capacities measured in the dozens, or at best hundreds, of megabytes. A 64MB music player couldn't carry around much more than a CD or two of songs; the biggest 256MB players weren't that much of an improvement. Flash just wasn't ready for such uses, and if you wanted to take a substantial chunk of your music collection around with you, you needed a player with a hard disk drive.

I've said more than once that hard disk drives are miraculous machines, manufactured to incredibly tight tolerances and requiring extreme precision. Putting a spinning hard disk drive into a portable device which is going to bounce around clipped to your hip while jogging, therefore, is just about the worst possible thing you could do to the drive. Hard disks work best and most reliably when they're placed flat and don't move, since even tiny vibrations can bounce the drive heads around as they sit suspended on an air cushion just a few nanometers above the spinning platters, trying to focus on data tracks that are themselves only a few dozen nanometers wide. The hard drives used in portable devices had to be treated carefully and used in very specific ways.

A Creative NOMAD Jukebox, in all of its weird sort-of-CD-player-shaped chunkiness. Credit: Wikimedia Commons

That first-gen iPod, for example, didn't keep its drive spinning constantly like a desktop or laptop computer does. In addition to saving on battery power, keeping the drive spun down when not needed kept the heads safely parked in their landing zones rather than hovering over the data surface of the disk, into which they might crash if sufficiently jarred. When music was played, the drive would quickly spin up, read as much of the current song and the next song into the iPod's RAM as it could, and then spin down again. Assuming you were listening to a playlist and not rapidly clicking through your library, the disk spent most of its time spun down and the iPod just played data out of RAM.
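To make that strategy concrete, here's a toy model of the spin-up, read-ahead, spin-down cycle in Python. The buffer size, the Disk class, and the track data are all invented for illustration—this is a sketch of the idea, not anything resembling actual iPod firmware:

```
# A toy, self-contained model of the strategy described above. The buffer size,
# Disk class, and track data are all invented; no real player firmware looks like this.

RAM_BUFFER = 8          # how many "song chunks" fit in RAM at once (made up)

class Disk:
    def __init__(self, tracks):
        self.tracks = tracks
        self.spinning = False
        self.spinups = 0

    def spin_up(self):
        if not self.spinning:
            self.spinning = True
            self.spinups += 1         # each spin-up costs battery and wear

    def spin_down(self):
        self.spinning = False         # heads parked safely in their landing zone

    def read(self, index):
        assert self.spinning, "can't read from a spun-down disk"
        return self.tracks[index]

def play_all(disk):
    buffer, next_track = [], 0
    while next_track < len(disk.tracks) or buffer:
        if not buffer:                               # buffer dry: refill in one burst
            disk.spin_up()
            while next_track < len(disk.tracks) and len(buffer) < RAM_BUFFER:
                buffer.append(disk.read(next_track))
                next_track += 1
            disk.spin_down()                         # playback continues from RAM
        print("playing", buffer.pop(0), "from RAM")

player = Disk([f"song {i}" for i in range(20)])
play_all(player)
print("disk spin-ups for 20 songs:", player.spinups)   # 3, not 20
```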

Flash-based players don't require such an approach. Flash media is solid state, so it doesn't have a motor; it also doesn't particularly care whether or not it's being vibrated to death. It's entirely possible to take a hard disk player like an iPod Classic or a Creative Nomad and cause it to freak out by shaking it vigorously while changing tracks; flash doesn't care. If only it could be produced with higher capacities...

Moore's Law marches on

Times change, progress happens, and every couple of years or so the number of transistors you can cram into a given amount of space tends to double. Hard disks fit the bill for portable devices for quite some time after the portable music player market began to explode, with 1.8" disks giving way to even smaller "microdrives" (Toshiba's 2004 press release about its 0.85" hard disk makes for a fun read, predicting that the drive will soon be used in "handhelds and smart phones"—which was true, though not for very long). Even as late as 2005, it seemed certain that the smart phones of the future would be powered by teeny-tiny little hard disk drives, with Samsung and others proceeding full steam ahead with production plans.

Hard drives can be tough—but even the best are no match for flash. Credit: http://www.flickr.com/photos/kchrist/3311792967/

Teeny-tiny little hard disk drives are amazing in their own right, but in the last half of the 00s (the Aughties? Is that what we're supposed to call them?), their capacity and their speed were surpassed by solid state. Flash has always been a better choice for portable devices because of its lower power consumption and insensitivity to vibration, so as flash capacity ramped up, hard drives for portable devices lost their one remaining advantage. Any future that microdrives had in the mobile space was quickly stomped flat by the 2007 release of the first-generation iPhone, which used 4GB or 8GB of NAND flash as its medium of choice for storing music, pictures, videos, and later apps and ringtones. Within a year, every other smartphone manufacturer in the world began to release something that looked, sounded, and smelled like an iPhone; designers abandoned all thoughts of using anything other than NAND flash.

It became a classic feedback loop: as more manufacturers demanded flash for their devices, more flash needed to be produced; as more flash needed to be produced, it became worthwhile to accelerate research and development efforts in order to figure out how to make it better and cheaper, so more could be sold. From 2007 on, the mobile device market skewed exclusively in the direction of flash.

These days, portable devices with 32 or 64 GB of flash can be found everywhere. Tablets like the iPad are powering their way into homes and businesses, and solid state disks in laptops are becoming as common as spinning hard disk drives. Ultrabooks, driven by the sales success of the 2010- and 2011-generation MacBook Air, all have flash in some form or another; some have hybrid disks, others are SSD-only.

It's certainly safe at this point to say that flash has won the war in the mobile space—I don't think we'll ever see another tablet or phone based on anything other than solid state storage. The war for the proverbial desktop (which includes most laptops) is far from over, with hard disk drives still outnumbering SSDs in most traditional computers. Still, SSDs are in enough places doing enough things that modern operating systems have changed to accommodate them.

Take a hike, conventional wisdom

In the old days before Windows, back when I was in junior high—take that, hipster digerati!—things were simple. Want to keep your computer running fast? Buy a copy of the Norton Utilities for DOS and run Speed Disk to defragment the thing. If you were one of those lunatics crazy enough to use Stacker or DoubleSpace on your drive, then you made sure you were using Norton 7 so that Speed Disk supported your insanity.

A good defrag was magic for an ailing disk, and as operating systems got smarter, disk defragmentation got more automated. For example, Windows 95 included a built-in disk defragmenter that was smart enough to detect when the contents of the disk being defragmented had been modified after the defragmenting started—and to start over from the beginning whenever that happened. It was so smart, in fact, that it would often start defragmenting the C: drive and never finish, because the operating system would blithely continue doing other I/O tasks on the C: drive while it was defragmenting, sending the defragmenter back to square one over and over. (Like all tales of comi-tragedy, this one is a lot funnier now than it was at the time.) Fortunately, future versions of Windows got a lot more intelligent about handling boot volume defragmentation, and automated defragging became a transparent feature.

"If you happen to use an SSD with Windows XP or Windows Vista, make sure that you've disabled the automatic background defragmentation."

Outside of the Windows arena, holy wars were fought on USENET and later on message boards all across the Internet about whether or not OS X's HFS+ file system truly needed defragmenting; the answer, in Apple's own words, seems to be not really, but sometimes yes, maybe. In OS X 10.3, Apple introduced some limited automatic defragmenting with two modes of operation: the first takes files smaller than 20MB that have more than eight fragments and coalesces them into a fragment-free location when they are opened for reading; the second (called "Hot File Adaptive Clustering") moves small, frequently accessed read-only files to the fastest part of the disk.
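For the curious, that first mode boils down to a simple decision made at file-open time. Here's a minimal Python sketch of the rule, using the thresholds from the paragraph above; the File class and the "coalesce" step are stand-ins for what HFS+ actually does internally:

```
# A minimal sketch of the on-the-fly defrag rule described above. The thresholds
# come from the text; everything else here is a hypothetical stand-in.

SMALL_FILE_LIMIT = 20 * 1024 * 1024   # 20MB
MAX_FRAGMENTS = 8

class File:
    def __init__(self, name, size_bytes, fragments):
        self.name, self.size_bytes, self.fragments = name, size_bytes, fragments

def maybe_defragment_on_open(f):
    if f.size_bytes < SMALL_FILE_LIMIT and f.fragments > MAX_FRAGMENTS:
        f.fragments = 1               # rewrite into one contiguous, fragment-free extent
        print(f"{f.name}: coalesced on open")
    return f

maybe_defragment_on_open(File("prefs.plist", 2_000_000, 12))   # small and messy: coalesced
maybe_defragment_on_open(File("movie.mov", 900_000_000, 40))   # too big: left alone
```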

The most common GNU/Linux file systems are ext3 and ext4, which like HFS+ can be manually defragmented, but which perform their own housekeeping and don't often need to be.

Defragmenting is a file system-level operation, and it's usually needed because file systems tend to toss pieces of files wherever there's room to put them, which doesn't always mean writing a file in sequence. As a disk's contents change organically over time with writes and deletions, holes open up where files used to be, and the operating system will write partial files into those holes, fragmenting the files (yes, I'm simplifying, because otherwise we'll be here for another 10,000 words, just like last time!). If the file that gets fragmented is one that needs to be read quite often, then the hard disk drive has to perform one or more extra seek operations to read it—rather than reading the file from sequential sectors, it has to read the first part, then jump to the second part. If the file is fragmented into dozens or hundreds of pieces, which can happen, then reading that single file becomes quite a time-consuming operation. Remember the memory hierarchy—hard disk drives are the slowest part in any operation the computer performs, and saddling a read with tons of extra seek operations can waste a huge amount of time.

How disk fragmentation happens
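Here's a toy model of that process in Python, with a comically small "disk" measured in a handful of clusters. Real allocators are far smarter than this first-fit approach, but the basic failure mode—new files getting split across old holes—is the same:

```
# A toy model of how fragmentation happens: deleting a file opens a hole, and the
# file system drops pieces of the next file into whatever holes it finds.
# Cluster counts are invented; real allocators are much smarter than this.

disk = ["A"] * 4 + ["B"] * 3 + ["C"] * 5          # three files written sequentially
disk = [c if c != "B" else None for c in disk]    # delete B, leaving a 3-cluster hole

def write_file(disk, name, clusters):
    """Fill free clusters first-come-first-served, fragmenting the file if needed."""
    for i, c in enumerate(disk):
        if clusters == 0:
            break
        if c is None:
            disk[i] = name
            clusters -= 1
    disk.extend([name] * clusters)                # whatever's left goes at the end
    return disk

disk = write_file(disk, "D", 6)                   # D lands in B's old hole *and* at the end
print(disk)   # ['A','A','A','A','D','D','D','C','C','C','C','C','D','D','D']
# Reading D now takes an extra seek: one piece sits in the old hole, one at the end.
```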

With solid state, automated defragmentation is not only unnecessary but actively bad. In the first place, SSDs are mostly immune to the effects of file system fragmentation. The majority of the latency a spinning disk experiences comes from the heads having to physically swing over the right cylinder on the disk's surface, and then from waiting for the correct sector on that track to rotate underneath the heads to be read. SSDs, though, have almost no random seek penalty and no rotational latency—any given page on the entire SSD can be read just about as quickly as any other page. A truly sequential read of SSD pages will benchmark faster than a random read, because the random read involves a lot more controller overhead, but the difference is minuscule compared to the sequential vs. random read performance of even the fastest spinning hard disk. SSDs just don't care very much about fragmented files.

Of far more concern, though, is the practical effect defragmentation has on solid state disks, because defragmenting is a fast way to make a whole lot of changes to a disk. To briefly review, recall from our previous SSD feature that the NAND flash used in SSDs can only be written to one page at a time—that is, one 8,192 byte (usually) chunk, which equates to one horizontal row of NAND floating gate transistors inside of the flash chips. Further, recall that a page in an SSD cannot be overwritten. The floating gate transistors inside a NAND flash chip come from the factory containing little to no charge, and a floating gate transistor with no charge has the binary value of "1" (or "11," for a current-generation MLC floating gate holding two bits). Changing that 1 to a 0 involves pumping electrons into the floating gate and isn't too difficult, but changing a 0 back into a 1 involves convincing those electrons to leave the floating gate; much like convincing an unemployed brother-in-law to get the hell off your couch and go do something else, doing so requires quite a bit of energy.

It's very difficult to confine the erase operation to individual cells because of the amount of current involved; in fact, targeting only one cell for erasure requires even more current applied to the surrounding cells to keep them from being affected, and all this current significantly shortens the life of all of the cells. The solution is to skip the voltage-intensive process entirely and just erase an entire flash block all at once. Since a block in a current-generation SSD contains 128 or 256 pages, SSDs try to erase blocks only when necessary; all modifications to SSD pages are made by writing the changes out to a whole new page and marking the old page as stale. Then, at some point when the drive's controller feels the need to do so, good pages are separated from stale pages and blocks containing only stale pages are erased.

SSDs can only be erased one whole block at a time.
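A stripped-down model makes the asymmetry obvious. In the sketch below (plain Python, with blocks scaled down to four pages instead of 128 or 256 so the output stays readable), pages can be programmed exactly once, and the only way to make a page writable again is to erase its entire block:

```
# A stripped-down model of the constraint described above: pages can be written
# once, and only whole blocks can be erased. Sizes are scaled way down for clarity.

PAGES_PER_BLOCK = 4

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK     # None == erased, ready to program

    def program(self, page_index, data):
        assert self.pages[page_index] is None, "pages cannot be overwritten in place"
        self.pages[page_index] = data

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK     # the only way to reclaim a page

blk = Block()
blk.program(0, "v1 of the data")
try:
    blk.program(0, "v2 of the data")              # illegal: the change must go elsewhere
except AssertionError as e:
    print("rejected:", e)
blk.program(1, "v2 of the data")                  # the new version lands in a fresh page
# Reclaiming the stale copy in page 0 means erasing the whole block -- which is
# exactly why controllers first copy any still-live pages (like page 1) elsewhere.
blk.erase()
```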

Fresh, blank pages are a precious commodity, and automated management of file fragmentation by the operating system eats fresh pages like a starving lion eats crippled gazelles. An SSD doesn't know anything about what files reside on its pages—files and clusters are things that the operating system knows about, but there's a line between the file side of the house and the block side of the house which is rarely crossed. A defragmentation process run on an SSD kicks off a tremendous number of new page writes; each time the OS picks up a piece of a file and relocates it, the SSD writes the changes out as new pages. Defragging a sufficiently fragmented SSD could easily blow through that SSD's entire set of free pages, with significant performance implications. When no free pages are available, every single change requires the SSD to read an entire block into cache, erase the entire block, and then re-write the entire block with the changed page. With a single block taking up as much as 2MB, this could mean 2MB worth of reading and writing just to change a single byte.

Defragging an SSD is a bad idea.
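The arithmetic behind that worst case is worth spelling out. Using the page and block sizes mentioned above—and assuming, hypothetically, that the drive has no free pages left at all—a one-byte change balloons into a full block rewrite:

```
# Back-of-the-envelope arithmetic for the worst case described above: with no free
# pages left, changing a single byte forces a read, erase, and rewrite of a whole block.

page_size = 8192                     # bytes per page (usually), per part one
pages_per_block = 256
block_size = page_size * pages_per_block

print(f"{block_size:,} bytes rewritten")                       # 2,097,152 bytes -- about 2MB
print(f"write amplification: {block_size:,}x for a 1-byte change")
```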

The good news is that modern operating systems understand this. Windows 7 disables its automatic defragging when it senses it's installed on an SSD, and Windows 8 does the same (Windows 8, in fact, is designed from the ground up to run on solid state storage as its preferred medium). OS X's two automated defragmenting methods appear to automatically disable themselves when used with an Apple-provided SSD—at least, the ".hotfiles.btree" file used by Hot File Adaptive Clustering is missing on the SSD-equipped Macs we were able to check. (Actual technical information from Apple on HFS+ defragmenting on SSD is annoyingly difficult to come by, and non-Apple SSD users should at least be aware of the possible issue.) File systems used by most GNU/Linux distros typically do not auto-defrag, so there's nothing to disable there.

Older operating systems are a different story. If you happen to use an SSD with Windows XP or Windows Vista, make sure that you've disabled the automatic background defragmentation (this might require some registry tweaking on Windows XP). There's more than just regular defragmentation to be turned off in XP and Vista; for example, both operating systems do their best to take files referenced during boot and move them onto the quickest part of the hard disk drive. This optimization is worse than useless on SSDs, since repeated shuffling of boot files chomps up free pages. Curbing it requires another trip into the registry.

Windows 7 contains a whole slew of performance optimizations designed to use supplementary storage—like USB thumb drives—as extra cache space in order to help overcome the problems of dealing with latency-bound hard disk drives. ReadyBoost, a technology first introduced in Windows Vista, works in concert with another technology called SuperFetch to find commonly used program and data files and cache them on a thumb drive, the theory being that accessing those files off of a thumb drive will in most cases be quicker than pulling them off of the hard disk drive, if the hard disk drive is busy servicing other I/O. SuperFetch by itself performs a more limited caching role to system RAM, but when coupled with ReadyBoost, it really comes into its own, pre-caching many gigabytes of data on a thumb drive. There's also ReadyDrive, a feature which relies on the flash buffer in hybrid hard disk drives to speed up I/O. But when it finds itself on a solid state disk, Windows 7 turns off all of these features because suddenly the bottleneck they were designed to mitigate is far smaller.

In addition to disabling old features, SSDs have required some new ones—most notably, support for a command called TRIM.

TRIM isn't just for toenails

One of the most important operating system features to arrive along with flash drives is support for TRIM. Though it's commonly written in all capitals, TRIM isn't an acronym; rather, it's the name of the ATA command that the operating system can send to the SSD controller to indicate that a certain page or set of pages contains stale data.

The SSD's controller has no notion of files; the only things it understands are pages and blocks. When a file is modified by the operating system, the SSD knows to copy the relevant pages plus the changes being made into new, fresh pages, and to mark the old pages as stale; this is possible because the operating system's changes are communicated to the disk drive using logical block addressing, or LBA. The operating system tells the SSD that it is modifying a certain logical block, and the SSD controller maintains a mapping (called the FTL, or Flash Translation Layer) between logical block addresses and physical flash locations. When the operating system changes a file, it tells the controller to change one or more logical blocks. The controller makes the changes and writes out the appropriate number of new pages, updating the FTL so that the LBAs continue to point at the correct physical locations, and marking the old pages as "stale" pages. Then, at a future time dictated by the controller, garbage collection copies the good pages forward, leaves the stale pages behind, and erases their blocks.

Deletions, however, are different. Without TRIM, when the operating system deletes a file on a solid state disk, there's no way for the operating system to tell the SSD controller that the pages can be marked stale. The standardized command set that the operating system uses to communicate down the bus to the SSD has no method to say, "Hey, please mark these pages stale because I deleted the files in them." So, pages containing deleted files continue to be gathered and relocated by garbage collection along with good pages; this increases write amplification and consumes free pages.

TRIM changes this by giving the operating system that missing method to exclude pages containing deleted files from garbage collection. When you delete a file on Windows 7 or OS X (with TRIM enabled on the right kinds of disks) or your favorite GNU/Linux distro (with the right file system, a recent enough kernel, and the right arguments in your fstab file), the pages containing the deleted data are marked stale. The next time the SSD's controller sweeps through to gather up the good pages and move them on, the deleted pages are left behind to be eaten by the block erasure Langoliers. In my head, this looks extremely scary.

Without TRIM, garbage collection doesn't know about deleted files and continues to move pages containing deleted data along with good pages, increasing write amplification. TRIM tells the controller that it can stop collecting pages with deleted data so that they get left behind and erased with the rest of the block.
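To tie the last few sections together, here's a heavily simplified Flash Translation Layer in Python. It's a sketch, not a real controller: blocks are tiny, garbage collection only erases blocks that are entirely stale, and there's no wear leveling. But it shows the three behaviors described above—overwrites going to fresh pages, TRIM marking deleted data stale, and block-at-a-time erasure:

```
# A heavily simplified FTL sketch of the write path and TRIM described above.
# Real controllers are vastly more complicated; this is for illustration only.

PAGES_PER_BLOCK = 4
NUM_BLOCKS = 8

class TinySSD:
    def __init__(self):
        # every page is "free" (erased), "live" (holds current data), or "stale"
        self.state = [["free"] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]
        self.ftl = {}                          # LBA -> (block, page)

    def _next_free_page(self):
        for b in range(NUM_BLOCKS):
            for p in range(PAGES_PER_BLOCK):
                if self.state[b][p] == "free":
                    return b, p
        raise RuntimeError("no free pages left: garbage collection needed")

    def write(self, lba, data):
        if lba in self.ftl:                    # an overwrite never touches the old page...
            old_b, old_p = self.ftl[lba]
            self.state[old_b][old_p] = "stale" # ...it just marks it stale
        b, p = self._next_free_page()
        self.state[b][p] = "live"              # the new version lands in a fresh page
        self.ftl[lba] = (b, p)                 # the LBA keeps pointing at valid data

    def trim(self, lba):
        """The OS's way of saying: this logical block was deleted, stop preserving it."""
        b, p = self.ftl.pop(lba)
        self.state[b][p] = "stale"

    def garbage_collect(self):
        # Simplified: real GC also copies live pages out of mostly-stale blocks first.
        for b in range(NUM_BLOCKS):
            if "stale" in self.state[b] and "live" not in self.state[b]:
                self.state[b] = ["free"] * PAGES_PER_BLOCK   # erase the whole block

ssd = TinySSD()
ssd.write(7, "report.docx, version 1")
ssd.write(7, "report.docx, version 2")   # same LBA: old page goes stale, new page written
ssd.trim(7)                              # the file was deleted, and the OS said so
ssd.garbage_collect()                    # an all-stale block gets erased and reclaimed
print(ssd.state[0])                      # ['free', 'free', 'free', 'free']
```

Without the trim() call, that last live page would never be marked stale; garbage collection would keep dutifully copying the deleted file's data forward forever, which is exactly the write amplification problem described above.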

TRIM is a relatively recent invention; it's supported by most current SSDs and most current operating systems, but not all. Windows 7 automatically uses it; so does OS X 10.6.8 and later, but only with Apple-provided SSDs. GNU/Linux also supports TRIM, provided you're using ext4 and kernel 2.6.33 or newer.

It's worth taking a moment and examining TRIM support in OS X on non-Apple SSDs. Out of the box, TRIM is disabled in OS X for third-party SSDs, and enabling it requires an app which does some kext hacking and yields mixed results, with some folks saying that it provides a performance increase, others saying that it makes performance tank, and others saying they notice no difference at all. Other World Computing, which among other things resells a line of highly rated SSDs designed primarily for use in Macs, goes so far as to say that its drives are so amazing that they don't need TRIM at all, and that enabling it can actually make the drive less reliable. Unfortunately, the company's blog post on the subject contains no technical information about exactly why this is so, only saying that the SandForce controllers used in OWC SSDs take care of everything and that you should pay no attention to the man behind the curtain. While it's true that LSI/Sandforce does have efficient garbage collection, there is no small amount of hand-waving on OWC's part about the limitations of garbage collection over time—one blog post featured on OWC's site talking about how its Mercury Extreme SSD performs in a MacBook Pro nonsensically states that with the OWC disk installed, "System Profiler does not show TRIM support, this is not an issue, as the OWC SSDs have TRIM support built-in."

I don't doubt that its disks are fast, but OWC deserves to be taken to task for providing marketing-talk in place of actually useful technical information about whether or not TRIM support is necessary. Most damning is the fact that other SSDs based on the same SF-2281 controller as the Mercury Extreme featured in the blog post demonstrably do benefit from TRIM when used on Windows and GNU/Linux.

Bottom line: using an Apple-provided SSD is safest on OS X, since you get TRIM support. Using a non-Apple SSD is fine, too, especially one with good garbage collection (and "good" doesn't necessarily mean "aggressive" since aggressive garbage collection can exacerbate write amplification), but enabling TRIM with a third-party kext hack might cause your computer to eat all of your data. Buyer beware.

Encryption: keeping Eve out in the solid state age

Current operating systems offer whole disk encryption as a security feature, and it's one that is being used more and more often both at home and in the enterprise. Employers tend to mandate encryption because it provides some measure of control over otherwise-uncontrolled data, and because it puts something more secure than an operating system account password between a lost laptop and the potentially malicious person who finds it. Home users like it because it's nominally easy to turn on (though I have in the past spent an incredibly frustrating evening trying to enable Windows 7's BitLocker and make it work with my laptop's TPM chip) and gives them some free peace of mind about whether or not people can steal their identity along with their hardware.

For the most part, whole disk encryption isn't something that greatly affects SSDs, because most SSD controllers don't particularly care what kinds of things they're reading or writing. However, LSI's SandForce controllers once again force us to take a bit of a technical detour because, unlike other SSDs, SandForce-powered SSDs actively do things to the data they write.

Specifically, SandForce SSD controllers perform compression and deduplication on incoming data in an effort to get the write amplification on a given chunk of data below 1. Just because the operating system passes a few megabytes worth of pages down the bus to the SSD to be written doesn't necessarily mean that a SandForce-powered SSD will commit the entirety of that data to flash. Rather, the controller chops the incoming data up into small pieces and compresses those pieces, then looks at the pieces and compares them against the data already stored in flash. If the pieces are identical to other pieces already on the drive, then the controller tosses them out and doesn't bother writing them (more correctly, the controller writes out a pointer to the unique data instead of duplicating it). This lets Sandforce-powered SSDs work more efficiently than other SSDs which aren't doing compression and deduplication.

Deduplication works by not writing anything to flash more than once, but derives its efficiency from having lots of duplicate data to toss out.
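Here's a toy version of that write path in Python: chunk the incoming data, fingerprint each chunk, compress and store only the chunks the drive hasn't seen before, and record pointers for the rest. The chunk size, the SHA-256 fingerprinting, and the zlib compression are all stand-ins—the details of SandForce's actual scheme aren't public:

```
# A toy version of write-path compression and deduplication. Chunk size, hashing,
# and compression choices are illustrative stand-ins, not SandForce's real design.

import hashlib, zlib

CHUNK = 4096
stored = {}                                   # fingerprint -> compressed chunk in "flash"
pointers = []                                 # what the drive records for each chunk written

def write(data):
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        fp = hashlib.sha256(piece).hexdigest()
        if fp not in stored:
            stored[fp] = zlib.compress(piece) # compress, then commit to "flash"
        pointers.append(fp)                   # duplicates become pointers, not new writes

write(b"the same PowerPoint slide " * 1000)   # highly repetitive: compresses and dedups well
write(b"the same PowerPoint slide " * 1000)   # an identical second copy: nothing new stored
print(len(pointers), "chunks referenced,", len(stored), "chunks actually written to flash")
```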

Encryption, however, breaks compression and deduplication. Both compression and deduplication work best in situations where there is a lot of repeated information in a given chunk of data, since repeated information can be tossed out and reconstituted later. Modern encryption schemes by their very nature work to eliminate recognizable or repeated patterns in data, since those patterns can be used to puzzle out the thing that has been encrypted.

It's worth noting that SandForce-powered SSDs in fact do encrypt the data on their NAND flash chips, but that encryption is done by the controller and is transparent to the operating system. Pulling a NAND flash chip off of a Sandforce-powered SSD and plugging the chip into a reader will yield only encrypted data; the encryption protects against physical disassembly, but as long as the controller is used to read the chips, the encryption and decryption is handled automatically and the controller's compression and deduplication aren't affected. Even though the data is all encrypted, the controller holds the keys and knows what the unencrypted data looks like.

It's also possible to set up a boot-time password that you must use in order to provide the key to the controller; the presence or absence of the password doesn't affect whether or not the drive is internally encrypted, but rather prevents the disk's controller from accessing its own encryption keys unless you enter the correct password (the password is used to encrypt those keys—without a password, the keys are stored in the clear).
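A quick sketch shows why the password doesn't change whether the flash is encrypted. In the hypothetical example below, the data is always protected by an internal media key; the boot-time password merely wraps that key so the controller can't use it without you. PBKDF2 and the XOR "wrap" here are illustrative stand-ins, not what any real controller does:

```
# A hedged sketch of the idea above: the data is always encrypted with an internal
# media key; the password only controls whether the controller can unlock that key.

import hashlib, os

media_key = os.urandom(32)                    # the key that actually encrypts the flash

def wrap(media_key, password):
    kek = hashlib.pbkdf2_hmac("sha256", password, b"salt", 100_000)
    return bytes(a ^ b for a, b in zip(media_key, kek))   # toy key wrap

def unwrap(wrapped, password):
    kek = hashlib.pbkdf2_hmac("sha256", password, b"salt", 100_000)
    return bytes(a ^ b for a, b in zip(wrapped, kek))

stored = wrap(media_key, b"hunter2")             # what the drive keeps once a password is set
print(unwrap(stored, b"hunter2") == media_key)   # True: right password releases the key
print(unwrap(stored, b"wrong") == media_key)     # False: the data stays locked away
```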

Operating system-enforced whole disk encryption is something else entirely. Because the disk controller isn't involved in the encryption or decryption operations, the only things it sees are endless blocks of data with few or no repeating patterns. Even the proverbial giant PowerPoint file saved in ten different locations—something that, if unencrypted, would only be written once to flash by a SandForce-powered SSD—stymies the controller. Each of the ten copies of the file, though identical before encryption, appears to the SSD as a different chunk of random data. The SandForce controller's read and write speeds drop because it can no longer use its bag of speed-enhancing, write amplification-reducing tricks.

Identical files before encryption, and after—deduplication is no longer possible because the files contain no common information.
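A few lines of Python make the problem visible. Below, the same plaintext is "encrypted" at two different disk locations with a toy sector-dependent keystream (real whole-disk encryption uses AES in a mode like XTS, but the effect on the drive is the same): the two ciphertexts share nothing in common, so there's nothing left to deduplicate:

```
# A small demonstration of why OS-level encryption defeats deduplication: identical
# plaintext at two disk locations produces two unrelated ciphertexts. The XOR
# keystream below is a toy, used purely for illustration.

import hashlib

def toy_encrypt(plaintext, key, sector):
    # the keystream depends on the sector number, as disk encryption schemes require
    stream = hashlib.sha256(key + sector.to_bytes(8, "big")).digest()
    stream = (stream * (len(plaintext) // len(stream) + 1))[:len(plaintext)]
    return bytes(p ^ s for p, s in zip(plaintext, stream))

powerpoint = b"identical deck saved twice" * 2
copy1 = toy_encrypt(powerpoint, b"disk-key", sector=100)
copy2 = toy_encrypt(powerpoint, b"disk-key", sector=9000)

print(copy1 == copy2)                                   # False: no duplicate left to find
print(hashlib.sha256(copy1).hexdigest()[:12],
      hashlib.sha256(copy2).hexdigest()[:12])           # completely different fingerprints
```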

Fortunately, SSDs are quick enough that the performance impact of running whole-disk encryption on an SSD isn't tremendous; in fact, the penalty is often smaller than the one a traditional spinning disk pays for encryption. The security benefits of whole disk encryption do seem to more than outweigh the small performance cost. For Windows 7 and later, this means enabling BitLocker and ignoring it; TRIM continues to work with BitLocker, and the operating system will continue to pass information about deleted files down to the SSD. For current versions of OS X, whole disk encryption is provided by FileVault 2, which you can safely enable without problems both on an Apple-provided SSD with TRIM support and on third-party SSDs. Benchmarks at AnandTech show that the real-world impact of enabling FileVault 2, even on a SandForce-powered SSD, is relatively small.

Linux distros, as usual, have more options. Popular distros like Ubuntu include support out of the box for whole disk encryption; third-party applications like TrueCrypt also can be used by those who want more control and more configurability. The same caveats apply, though: there will be some performance penalty, especially with SandForce-powered drives, but that penalty will likely be small.

Kicking cost per gigabyte in the teeth

SSDs are becoming less expensive over time, but an SSD big enough to hold your main operating system and all of your music and pictures and applications is still a lot pricier than the equivalent amount of space on spinning disk. Even though, as of this writing, some SSD prices are flirting with that magical $1-per-gigabyte level, not everyone has the desire or ability to spend $500 on a 480GB SSD to hold all of their data (and no small number of folks would find a 480GB SSD laughably inadequate for their enormous data hoard).

There's another way, though. Caching is a technique as old as the hills—if you've got data on a slow medium that is repeatedly accessed, you can make things faster by temporarily holding that data in a faster medium, and flash fits the bill perfectly as a cache for spinning disk: it's much faster than disk, but also a lot less expensive than system RAM, the next step up in the memory hierarchy. In the enterprise space, companies like EMC and NetApp augment their disk arrays with solid state storage used as a supplemental cache tier. NetApp calls its technology Flash Cache, and it holds recently read data for quick access; EMC's is called FAST Cache, and it holds both reads and writes. It's a given that adding more cache to a tier of storage makes that storage faster, and both technologies can greatly enhance array performance (in addition to helping your NetApp and EMC sales reps buy new Porsches, since neither technology is a cheap add-on).

What's good for the enterprise goose is also good for the consumer gander. Intel's Z68 chipset, present on Intel-produced Sandy Bridge motherboards, includes support for a new feature Intel calls Smart Response Technology. SRT allows a computer to use a normal hard disk drive as its main storage device, but adds the ability to use a smaller SSD (up to 64GB in size) as a much faster read and write cache. Reviews of SRT show that the technology does indeed work, with some of the same caveats that apply to large caches in the enterprise: you must allow time for the cache to "warm up"—it can only speed up reads after it has actually cached some data, which means you have to use the system for a while before you see the benefit—and it works best in systems with a small, consistent workload. SRT works at the block level, not at the file level, so it has no way to differentiate between important operating system files that must remain cached and small one-off reads and writes; maximum performance is only going to be realized if you do the same stuff over and over again.
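The core idea is simple enough to sketch in a few dozen lines of Python: keep recently used disk blocks on a small, fast device and serve repeat reads from there. The LRU eviction policy and the sizes below are made up for illustration and almost certainly don't match what SRT (or bcache, discussed next) does internally, but the warm-up behavior falls right out of even this simple model:

```
# A minimal sketch of block-level caching: recently used blocks live on a small,
# fast device; repeat reads are served from there. Policy and sizes are invented.

from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # block number -> data, kept in LRU order
        self.hits = self.misses = 0

    def read(self, block, slow_disk):
        if block in self.cache:
            self.cache.move_to_end(block)     # refresh this block's LRU position
            self.hits += 1
            return self.cache[block]
        self.misses += 1                      # cold cache: must go to the spinning disk
        data = slow_disk[block]
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block
        return data

disk = {n: f"block{n}" for n in range(1000)}
cache = BlockCache(capacity_blocks=64)
for _ in range(3):                            # the same working set, three times over
    for n in range(50):
        cache.read(n, disk)
print(cache.hits, cache.misses)               # 100 hits, 50 misses: hits only after warm-up
```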

While Intel marches down this path using hardware, GNU/Linux distros are close to offering similar functionality using only software. Bcache is a block caching kernel patch which has been in development for a number of years, but which according to its creator is essentially ready for release (the fearless and/or crazy can clone the project's GitHub repository and knock themselves out today). The idea behind bcache is the same as Intel SRT (and, for that matter, the same as Microsoft's ReadyBoost): use solid state as a caching layer in front of a much larger backing store made of spinning disk, and handle as much I/O as possible from the cache.

While Windows users with Ivy Bridge CPUs and Z68-based motherboards can use SRT and Linux folks can get rolling with bcache, OS X seems somewhat left in the cold right now for SSD caching. Ivy Bridge Macs have begun appearing, and some of the lower-end MacBook Pros can be ordered with spinning disk instead of solid state, but there's no news yet as to whether or not SRT might eventually be exposed as usable functionality to the users. Given Apple's touting of the new MacBook Pro and its flash-only storage as the future direction of all its laptops, it's doubtful SRT will ever appear on an Apple laptop. Ivy Bridge in non-portable Macs remains a no-show as of this article's publication, with a much-hoped-for spec bump on desktop Mac Pros turning out to be barely a blip.

Hybrid disk drives, which pair standard spinning disks with some amount of internal flash storage, offer the same benefits as SRT in a single unit, without needing a fancy motherboard or special CPUs, and can be had for a nominal cost increase over plain hard disks. They have most of the same operational caveats as other flash-based caches—it takes a bit of time for the cache to warm up, and departures from your day-to-day workload can fill the cache with things you don't necessarily want or need cached—but compared to a full SSD, they are relatively inexpensive. Whether with a standalone drive or an integrated solution, the idea of using solid state as a caching layer for a larger hard disk is one way to get some of the performance benefits of SSDs without paying the cost-per-gigabyte premium of actually having a giant SSD.

Looking forward

We've come quite a ways, from the "lame" iPod to the iPad, a device so compelling that it's forcing its way into businesses in spite of Apple's traditional utter lack of catering to the enterprise. Operating systems react differently to SSDs than to hard disk drives, tossing out long-held best practices like defragmentation and using new techniques like TRIM to help SSDs live longer. Things are good, but might they be better? That is: where is flash going next?

Ah, but that would be telling. We'll get to that in the next part of our series.


Lee Hutchinson Senior Technology Editor
Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.