A couple of months ago, we linked to an article arguing that SSDs might never fully replace mechanical storage; that is, we may never reach a point where hard drives are no longer needed. Even today, if you have an SSD, chances are good you also have a mechanical drive installed for bulk storage, because quite simply, SSDs are still too expensive for that role.
The article, published by Enterprise Storage Forum, argued that mechanical storage is here to stay, owing to a lack of fundamental innovation in SSD technology and the limits of current die processes. Die shrinks might improve things a little, it said, but they can only go so far; entirely new technologies will need to be developed for the product to keep functioning reliably over time.
In a follow-up article, the site explains in detail how it arrived at its original claims, and according to the author, there's a huge roadblock ahead, whether or not the SSD vendors want to admit it. Those who want to delve into the full issue should read the complete article, but for those who want just the synopsis, it's rather simple.
Current SSD designs use what's called a floating-gate transistor, which stores data as charge held between the gate and the substrate. Reading and writing are accomplished by applying varying voltage levels, which the article lists as roughly +5V to activate the entire page, another +0.5V to read a cell, and +20V to write to a cell. Over time, cells actually require lower voltage levels as they get broken in.
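To make those figures a little more concrete, here's a minimal sketch of a single floating-gate cell as a Python class, using the voltages quoted above. The class, method names and wear counter are purely illustrative assumptions for the sake of the model, not anything drawn from real controller firmware.

```python
# Toy model of one floating-gate NAND cell, using the voltage figures quoted
# above (~5 V page bias, ~0.5 V read, ~20 V program/erase). All names and
# numbers here are illustrative only.

class FloatingGateCell:
    def __init__(self):
        self.charge_stored = False   # charge trapped on the floating gate
        self.wear = 0                # rough count of high-voltage stress events

    def program(self, voltage=20.0):
        """A high voltage (~20 V) forces electrons onto the floating gate."""
        assert voltage >= 20.0, "programming needs the high tunneling voltage"
        self.charge_stored = True
        self.wear += 1               # each high-voltage pulse stresses the cell

    def erase(self, voltage=20.0):
        """The same high voltage, applied in reverse, pulls the charge back off."""
        self.charge_stored = False
        self.wear += 1

    def read(self, page_bias=5.0, read_voltage=0.5):
        """Reading needs only the small bias voltages, so it barely wears the cell."""
        # A charged floating gate shifts the threshold, so the cell conducts
        # differently under the read voltage -- that difference is the stored bit.
        return 0 if self.charge_stored else 1


cell = FloatingGateCell()
cell.program()
print(cell.read(), cell.wear)   # -> 0 1
```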
Over time, these voltages wear on the cells, which is likely why any given flash technology is rated for only a limited number of read/write cycles. Wear isn't the real problem, though. The bigger issue is that because the cells are packed so closely together, applying these voltages creates a form of interference that can affect nearby cells that aren't even being used. According to the article, this can result in silent data corruption, although the end-user will rarely notice it, thanks to robust SSD controllers that verify data as it's transferred.
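The verification step is the part that keeps this invisible to users. Below is a minimal sketch of the idea, assuming a simple CRC32 check standing in for the far stronger ECC (BCH/LDPC-style) that real controllers use; the function names and page layout are invented for illustration, and this toy version only detects errors rather than correcting them.

```python
# Sketch of why silent corruption usually never reaches the user: the
# controller stores a checksum alongside each page and verifies it on read.
# Illustrative only; real ECC can also correct the flipped bits.

import zlib

def write_page(data: bytes):
    """Store the page together with a CRC32 of its contents."""
    return {"data": bytearray(data), "crc": zlib.crc32(data)}

def read_page(page):
    """Verify the stored CRC before handing data back, as a controller would."""
    if zlib.crc32(bytes(page["data"])) != page["crc"]:
        raise IOError("bit error detected -- controller would retry or correct via ECC")
    return bytes(page["data"])

page = write_page(b"user data in one flash page")

# Simulate disturb from a neighboring cell silently flipping one bit.
page["data"][3] ^= 0x01

try:
    read_page(page)
except IOError as e:
    print(e)   # the corruption is caught before it reaches the user
```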
The idea is that if this problem exists now, then shrinking the die, and with it the space between cells, will only make it worse: the same interference would affect a proportionally greater area. It's for this reason that the site, along with some scientists, believes 20nm will be the practical limit for a NAND flash chip, at which point density gains could stall.
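A back-of-the-envelope model shows the trend being described. Treating cell-to-cell coupling as roughly inversely proportional to cell spacing is an assumption made purely for illustration, as are the process-node numbers; only the direction of the trend comes from the article.

```python
# Rough illustration of the shrink problem: assume cell-to-cell interference
# scales inversely with the spacing between cells. The 1/distance relationship
# and the node sizes are stand-ins chosen to show the trend, not real physics.

def relative_coupling(process_nm, reference_nm=50):
    """Interference relative to an older 50 nm-class process (illustrative)."""
    return reference_nm / process_nm

for node in (50, 34, 25, 20):
    print(f"{node} nm cells: ~{relative_coupling(node):.1f}x the interference of 50 nm")

# 20 nm cells land at ~2.5x in this toy model: the same electric fields are
# squeezed into less space, so a disturb that was tolerable at 50 nm eats
# much further into the read margin at 20 nm.
```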
That's not to say our SSDs won't continue to get larger, because they could; it might just require more chips rather than chips built on smaller processes. Pricing could still come down as well, as yields naturally improve over time. There's also the option of moving to a 3- or 4-bit-per-cell design, but that comes with caveats of its own, namely decreased performance and lifespan.
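Those caveats follow from simple arithmetic: each extra bit per cell doubles the number of charge levels that have to fit inside the same voltage window, so the margin between adjacent levels shrinks fast. The 6V window in the sketch below is an arbitrary illustrative figure, not a datasheet value.

```python
# Why 3- and 4-bit-per-cell designs trade away speed and endurance: more bits
# means more charge levels squeezed into the same voltage window.

def level_margin(bits_per_cell, voltage_window=6.0):
    levels = 2 ** bits_per_cell                 # SLC=2, MLC=4, TLC=8, QLC=16
    return voltage_window / (levels - 1)        # spacing between adjacent levels

for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
    print(f"{name}: {2**bits:2d} levels, ~{level_margin(bits):.2f} V between levels")

# Tighter spacing means slower, more careful programming and far less tolerance
# for the charge drift caused by wear and interference -- hence the caveats.
```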
The best outcome would be revamped or brand-new technologies that break through the barriers holding current designs back, but there's no telling when those will become viable. Sooner rather than later would be our hope, though. SSDs are awesome in many ways, and we'd hate to see their development slow down from its current accelerated pace.
Or, as the follow-up article sums it up: the 5V bias voltage has to be applied to bring the cells to a conductive state, 0.5V is still needed to read a cell, and 20V to program or erase one. The existing data-corruption problems, however, may get much worse as cells shrink, because the EM fields stay the same size while neighboring cells sit closer together, so those fields hit neighbors harder and the risk of corruption only grows.