News of an amazing SSD tweak has been making the rounds lately, and as an avid proponent of SSDs, I just had to take a closer look. Most articles focus on the 300% performance increase, but since the claim seemed too good to be true, I was initially skeptical.
A research team led by Ken Takeuchi, a professor at Chuo University, announced at the 2014 IEEE International Memory Workshop that it had developed an algorithm that can increase SSD performance by up to 300% with just a small firmware tweak. Keep in mind, the IEEE is one of the foremost standards bodies in computing and electronics, so this isn’t some obscure venue. Here’s an excerpt:
In September 2013, to address this issue, the research team developed a method to prevent data fragmentation by improving the middleware that controls storage for database applications. It makes (1) the “SE (storage engine)” middleware, which assigns logical addresses when application software accesses a storage device, and (2) the FTL (flash translation layer) middleware, which converts logical addresses into physical addresses on the SSD controller side, work in conjunction. This time, the team developed a more versatile method that can be used for a wider variety of applications.
The new method forms a middleware layer called “LBA (logical block address) scrambler” between the file system (OS) and FTL. The LBA scrambler works in conjunction with the FTL and converts the logical addresses of data being written to reduce the effect of fragmentation.
In other words, a small algorithm added to the firmware of SSDs (even those currently in our systems) can potentially offer big performance gains through better efficiency of routine SSD tasks.
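To make that a bit more concrete, here’s a heavily simplified Python sketch of where such an “LBA scrambler” would sit in the write path – between the logical addresses handed down by the file system and the FTL’s logical-to-physical mapping. To be clear, this is my own toy illustration of the general idea (grouping writes so that stale pages cluster in the same flash blocks), not the team’s actual algorithm; the class names and the “stream hint” grouping are purely hypothetical.

```python
# Toy write path: file system -> LBA scrambler -> FTL -> flash.
# My own illustrative sketch of the concept, not the algorithm presented at IMW 2014.

PAGES_PER_BLOCK = 64

class FlashTranslationLayer:
    """Maps logical block addresses (LBAs) to physical (block, page) locations."""

    def __init__(self, num_blocks):
        self.mapping = {}                          # LBA -> (block, page)
        self.free_blocks = list(range(num_blocks))
        self.open_block = self.free_blocks.pop(0)
        self.next_page = 0

    def write(self, lba):
        if self.next_page == PAGES_PER_BLOCK:      # current block is full, open a new one
            self.open_block = self.free_blocks.pop(0)
            self.next_page = 0
        # Any previous copy of this LBA becomes a stale page that GC must reclaim later.
        self.mapping[lba] = (self.open_block, self.next_page)
        self.next_page += 1

class LbaScrambler:
    """Hypothetical middleware between the file system and the FTL.

    Rather than passing LBAs straight through, it batches writes it expects to be
    overwritten together (here, crudely, by a caller-supplied stream hint) so that
    stale pages end up clustered in the same flash blocks.
    """

    def __init__(self, ftl, num_streams=4):
        self.ftl = ftl
        self.queues = [[] for _ in range(num_streams)]

    def write(self, lba, stream_hint):
        queue = self.queues[stream_hint % len(self.queues)]
        queue.append(lba)
        if len(queue) >= PAGES_PER_BLOCK:          # flush a whole block's worth at once
            for queued_lba in queue:
                self.ftl.write(queued_lba)
            queue.clear()

ftl = FlashTranslationLayer(num_blocks=256)
scrambler = LbaScrambler(ftl)
for i in range(1024):
    scrambler.write(lba=i, stream_hint=i % 4)      # pretend four workloads share the drive
```

The point of grouping writes this way is that when a block is eventually garbage collected, most of its pages are already stale, so very little valid data needs to be copied out before the block can be erased.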
However, there are a few important caveats that aren’t mentioned in most of the news coverage. The first is that the typical SSD housed in a consumer PC performs garbage collection while idle, specifically to avoid the performance penalties involved. This implies the tests assume a near-constant workload on the drives, a scenario mostly reserved for server-focused (particularly database) solid-state drives.
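To illustrate why idle time matters, here’s a crude latency model of my own (the numbers and the function are made up, nothing from the paper): when there are gaps between host writes, garbage collection can run in the background and its cost is effectively invisible; under a sustained workload, every microsecond of it lands on the host.

```python
# Crude illustration (my own, not from the paper) of idle-time vs. foreground GC.

def simulate(host_write_us, gc_us_per_write, idle_us_between_writes, num_writes):
    """Return total write latency visible to the host, in microseconds."""
    visible = 0
    for _ in range(num_writes):
        gc_debt = gc_us_per_write                      # each write eventually creates GC work
        hidden = min(gc_debt, idle_us_between_writes)  # GC done during idle time is free
        leftover = gc_debt - hidden                    # leftover GC stalls the next write
        visible += host_write_us + leftover
    return visible

# Desktop-like pattern: plenty of idle time, GC cost is mostly hidden.
print(simulate(host_write_us=100, gc_us_per_write=300, idle_us_between_writes=500, num_writes=1000))
# Server/database-like pattern: no idle time, GC cost lands squarely on the host.
print(simulate(host_write_us=100, gc_us_per_write=300, idle_us_between_writes=0, num_writes=1000))
```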
Secondly, according to the results graph, the 300% performance figure only applies to the simulated SSD with less than 20% free capacity; drives with 40% or more free area were limited to gains of “only” 50% – which admittedly would still make for a rather nice boost in SSD performance.
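There’s a well-known back-of-envelope explanation for why a nearly full drive has the most to gain, and it’s worth sketching here (this is a generic approximation on my part, not a figure from the paper): the less spare area a drive has, the more still-valid pages each garbage-collected block contains, and all of those must be copied elsewhere before the block can be erased.

```python
# Back-of-envelope only (not from the paper): assume that under random writes a
# victim block chosen for garbage collection holds roughly as much valid data as
# the drive's overall fill level. Freeing one block then means rewriting that
# valid fraction first, so write amplification ~ 1 / (1 - fill_level).

for fill_level in (0.20, 0.40, 0.60, 0.80):
    write_amplification = 1 / (1 - fill_level)
    print(f"{fill_level:.0%} full -> ~{write_amplification:.1f}x internal writes per host write")
```

Under that (admittedly very rough) model, an 80% full drive performs roughly five internal writes for every host write – exactly the regime where smarter data placement has the most room to help.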
Unfortunately, that’s not all: the increase in performance also depends on the type of workload, and most workloads show only a 30-60% increase, even for the 80% full SSD. Lastly, it’s worth mentioning that these results come from a 4-way SSD simulation rather than actual drives, so the real potential of this novel firmware modification has yet to be demonstrated.
As a hardware enthusiast, I look forward to seeing the actual performance benefits of this novel firmware design. That said, the indications point to a much more reasonable 20-50% boost in performance rather than the 300% figure touted in most news blurbs. Other benefits mentioned include a 60% reduction in power consumption and a peak 55% reduction in write-erase cycles – but again, these figures likely apply only to the 80% full SSD category. Nonetheless, any boost in SSD performance is welcome, and I’m eager to see further innovation in what is, at its core, a rather old system for organizing and storing data.