PhysX is getting a lot of attention right now, but the reasons vary wildly. Since we haven’t taken a look at the technology in a while, this article’s goal is to see where things stand. We’ll also be taking an in-depth look at GPU PhysX performance, using both 3DMark Vantage and UT III.
As mentioned on the previous page, we are still in the early stages of GPU PhysX support, but this is as good a time as any to test performance. The AGEIA PhysX add-in card, as you may recall, had the task of off-loading heavy physics calculations from the CPU onto its dedicated PPU, where, as we found out, they could be processed much faster.
Before testing, I had a few goals in mind. The first was to find out whether running PhysX on a GPU is much faster than using the add-in PPU, and in the same vein, whether GPU-based PhysX acceleration would throttle overall gaming performance more severely than the PPU does.
Because only two current applications take advantage of the GPU acceleration, 3DMark Vantage and Unreal Tournament III, I had little choice but to use them. Luckily, one of these is automated, while the other is a heck of a lot of fun to play. Testing wouldn’t be so arduous, after all!
We’ll begin with 3DMark Vantage, since that’s the hot topic right now. For all of the testing, a single 9800 GTX was used. As it stands, NVIDIA’s latest beta driver adds PhysX support to that card, in addition to the GTX 260 and GTX 280. Don’t fret if you don’t own one of these cards, though, as support will be extended to the entire fleet of 8-series and 9-series cards with an updated driver next month.
Our system also features a high-end Intel QX9650 Quad-Core, which will help us find out just how effective a PhysX accelerator really is. Intel boasts that a Quad-Core is a crucial part of a true gaming rig, which may well be the case, but as we’ve seen in the past, physics isn’t one of its strongest points.
Although I’m running the full-fledged test, the one we’ll focus on most is CPU Test 2, which is the only test in the entire suite to take advantage of a PhysX processor. The test consists of a scene where numerous planes soar around a small area, both flying through hoops and into each other. Physics play a huge role in the scenario, in more than one way:
If no PhysX accelerator is present, then the benchmark will fall back on the CPU. All results are displayed as ‘operations per second’, although the results screen itself calls it ‘steps/s’.
The initial group of tests was completed on non-overclocked parts. The first test was performed without PhysX acceleration, the second relied on the PPU, and the third utilized the GPU. The PhysX driver used is 8.06.12, which was just released this past Wednesday.
With no acceleration, all of the work is left up to the CPU, resulting in somewhat lackluster results… at least as far as physics calculations are concerned.
Surprisingly, adding in the AGEIA PPU did less to increase our score than expected. The boost from 17.09 op/s to 28.50 op/s might seem impressive, until we take a look at the GPU results.
Yes… here’s the reason that some people are up in arms. This is a rather substantial increase, but again, I still feel the attention is deserved. The GPU is so efficient at processing physics calculations that it blows away even the dedicated PPU, more than quadrupling its average result. As you can see, that boost also affects the overall 3DMark score by a fair bit.
From a technical standpoint, the GPU was obviously much faster at handling the kind of physics processing that PhysX performs, so it may very well be that the PPU is actually a lot less capable than we first imagined.
I next wanted to see what sort of effect overclocking had on the results, for both the CPU and the GPU. Which overclock would boost physics performance more? With the PPU installed once again and the CPU (QX9650) overclocked to 3.6GHz, we saw a 9.3% increase in overall physics capability.
On to GPU acceleration. I overclocked both the CPU and GPU to see which had the greater effect. The first result is for the CPU overclock, again at 3.6GHz. The difference is once again just under 10%, at 9.6%.
Finally, putting the CPU back to stock speeds and overclocking the GPU (+21% Core MHz, +7.8% Shader MHz) pushed us up to 134.35 op/s… resulting in yet another 9.3% increase.
The increases seen here will of course scale with the size of the overclock; it’s largely coincidence that the CPU and GPU overclocks delivered nearly identical percentage gains. If the CPU were overclocked to 4.0GHz or higher, even greater differences would be seen.
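For the curious, the relative gains discussed above can be reproduced from the raw op/s figures. A minimal sketch follows; note that the stock-GPU number is back-calculated from the overclocked 134.35 op/s result and the reported 9.3% gain, so treat it as an approximation rather than a measured figure:

```python
# Figures reported above (operations per second, 3DMark Vantage CPU Test 2).
cpu_stock = 17.09   # CPU only, no PhysX acceleration
ppu_stock = 28.50   # AGEIA PPU installed
gpu_oc    = 134.35  # GPU-accelerated, with the GPU overclocked

# Back-calculate the approximate stock-GPU result from the 9.3% overclock gain.
gpu_stock = gpu_oc / 1.093

def pct_gain(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(f"PPU over CPU:        {pct_gain(cpu_stock, ppu_stock):.1f}%")  # ~66.8%
print(f"Stock GPU (est.):    {gpu_stock:.1f} op/s")                   # ~122.9
print(f"GPU-to-PPU ratio:    {gpu_stock / ppu_stock:.1f}x")           # ~4.3x
```

That roughly 4.3x ratio is where the "more than quadrupling" claim on the previous comparison comes from.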
But the proof is in the pudding. The GPU is efficient at performing PhysX calculations… far more so than even the dedicated PPU, which simply isn’t up to the sheer volume of physics the GPU can handle. The PPU adds extra cost to a gaming machine, yet offers much lower performance than what we can expect from a GPU.
The reason that’s important is that we all have a GPU… and if the power is there, we might as well tap into it. I’ll touch on this a lot more towards the end of the article, but for now, let’s head into some real-world testing with Unreal Tournament III.