Ever have one of those sudden realizations that something you meant to do six months ago still isn’t done? Well, that’s me right now, and it has to do with our proposed CPU test suite upgrade. It’s still a work-in-progress, one that I hope will be completed soon, but once again I want to open the floor for reader recommendations, or at least for thoughts on our current choices.
It’s not that our current methodology and test suite are poor, but I like to keep things fresh and revise our selections from time to time. This time around, a great reason to upgrade is the introduction of Windows 7, which we’ve already begun using as a base in some of our content (namely, P55 motherboards).
For the most part, the applications and scenarios we test with in our current CPU content aren’t going to change too much. Rather than dropping or replacing a bunch, we’ll be adding to the pile, for an even broader overview of performance, and to see where one architecture may excel or fall short. So with that, we’ll be retaining Autodesk’s 3ds Max (upgraded to version 2010), along with Cinebench R10 and POV-Ray. We’ll also be adding Autodesk’s Maya 2010, where we’ll take advantage of the very comprehensive SPECapc benchmark.
For Adobe Lightroom, we’ll be upgrading to 2.5 (unfortunately, it doesn’t look like 3.0 will be available until well after our suite is completed), and also adding similar batch processes with ACDSee Pro 3. Also on the media side, we’ve revamped our test with TMPGEnc Xpress. We’ve dropped DivX AVI as our main test codec, and will begin using MP4 and WMV instead. In all of my tests, both formats deliver much higher quality, and both are very taxing on any CPU.
SiSoftware recently released the 2010 version of SANDRA, so we’ll begin testing with that right away. The biggest feature I want to make use of is the encryption benchmark, as it’s designed to take advantage of AES-NI, the brand-new instruction set found on Intel’s upcoming Westmere processors. It’s going to be very interesting to see just what kind of difference those instructions will make for applications that can take advantage of them.
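Since AES-NI is new to most people, here’s a minimal sketch of what it looks like from the software side, using the compiler intrinsics Intel has documented. Everything here is purely illustrative: the block and round-key values are made up, the code assumes a compiler with AES-NI support (gcc with -maes, for example), and it will only actually execute on an AES-NI-capable CPU such as Westmere.

```c
/* Minimal AES-NI sketch: one encryption round via intrinsics.
 * Illustrative only; a full AES-128 encrypt performs 9 aesenc
 * rounds plus one aesenclast, using keys from a key expansion. */
#include <wmmintrin.h> /* AES-NI intrinsics (compile with -maes) */
#include <stdio.h>

int main(void)
{
    /* Hypothetical 128-bit data block and round key. */
    __m128i block = _mm_set_epi32(0x0f0e0d0c, 0x0b0a0908,
                                  0x07060504, 0x03020100);
    __m128i rkey  = _mm_set1_epi32(0x5a5a5a5a);

    /* XOR in the initial key, then run one round; on Westmere,
     * the whole round maps to a single AESENC instruction. */
    block = _mm_xor_si128(block, rkey);
    block = _mm_aesenc_si128(block, rkey);

    /* Dump the transformed block. */
    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, block);
    for (int i = 0; i < 16; i++)
        printf("%02x", out[i]);
    printf("\n");
    return 0;
}
```

Doing an entire round in one instruction, rather than through a pile of table lookups and XORs, is exactly where the speed-up should come from.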
Another entry to our test suite will be CPU2006, an industry-standard benchmark from SPEC that’s used to gauge the overall worth of a PC in various scenarios, from computing advanced algorithms, to compressing video, to compiling an application, and a lot more. This is easily the largest benchmark we’ve ever run on our machines, and in our initial tests, it took 12 hours to complete on a Core i7-975 Extreme Edition. No, I’m not kidding. The suite defaults to running three iterations, however, so I’m likely to back that down to just one, since in my experience, the results from individual runs have been incredibly consistent. If the entire suite took 12 hours on a top-of-the-line CPU, I cringe when I imagine how long it would take on a dual-core…
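For anyone curious, backing the run down to a single iteration is just a command-line tweak. A rough sketch of the invocation is below; the config file name is a placeholder, and since SPEC’s run rules require three iterations for an official result, a one-iteration run also has to be flagged as non-reportable.

```
# Single-iteration run of the CINT2006 integer suite (sketch only;
# "myconfig.cfg" stands in for your own tested config file).
runspec --config=myconfig.cfg --iterations=1 --noreportable int
```

Swapping int for fp would run the floating-point side of the suite instead.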
In addition to all this, we’ll be revising our game selection as well, since the games we’ve been using for a while are obviously out of date (especially Half-Life 2, although it still -is- a decent measure of CPU performance). There are some other benchmarks I have in mind, but I won’t talk too much about them right now, since I’m really not sure how they will play out.
So with that, I’d like to once again send out a request for input from you guys. Do you think we should include another type of benchmark or scenario? With Westmere right around the corner (beginning of January), it’s highly unlikely that I’ll be able to put a new suite into place for our launch article, but who knows…