Date: August 25, 2019
Author(s): Rob Williams
MAGIX updated its popular Vegas Pro video editing suite to version 17 earlier this month, bringing with it a bunch of new features and enhancements, as well as promises of performance improvements all over. We’re taking a look at where things stand today with CPU and GPU encodes, as well as playback performance.
Earlier this month, MAGIX released the latest major version of its popular video editor: Vegas Pro 17. In addition to performance polish and lots of under-the-hood tweaks, MAGIX also introduced a number of useful features, such as Nested Timeline, a unified color-grading workflow, HLG HDR color support, 8K resolution and high-DPI support, and a lot more that can be explored on the product comparison page.
There are a couple of specific performance-related changes worth noting in VP17. One is GPU-accelerated decoding for AVC and HEVC, although at the current time, the implementation works only with Intel QSV and NVIDIA NVDEC. We’d expect the next software update to add support for AMD’s decoder.
Speaking of AMD, it’s important to note that AMD’s Radeon ‘Navi’ RX 5700-series GPUs are officially unsupported for the moment, but like AMD’s decoder, Navi support is also being worked on. MAGIX told us that its development timeline didn’t align with AMD’s this go-around, but adding support is a current top priority.
By default, VP17 will select Intel QSV for its decoder, even if no Intel GPU exists in the system. That means if you have an NVIDIA card, you will need to open the File I/O tab in the settings and choose the appropriate decoder in the drop-down menu at the bottom. The ultimate goal here is improved playback performance; we have more comments on this in the playback performance section.
On a similar topic, MAGIX has improved HDR / ACES workflows and uses the GPU to accelerate the OpenColorIO code. Intel’s HEVC decoding has also been given some polish, and should offer some performance improvements in both encode and playback.
To give a rough idea of how performance can be impacted with these new optimizations, MAGIX gave us an in-house test result that saw playback jump from 1.6 FPS to 11.5 FPS in an HDR10 / 32-bit project, using Preview/Full as the quality setting, when compared to Vegas Pro 16’s most up-to-date build. In our own tests, we haven’t seen major gains like this, but we also haven’t tested with HDR content – something we’ll correct in the future.
As with Vegas Pro 16, some NVIDIA GPUs struggle to offer full acceleration unless VP17 is added as a profile inside of the NVIDIA Control Panel. Doing this is as easy as heading to the Manage 3D Settings section, and then “Program Settings”, followed by “Add”. The only trick is that you have to launch VP at least once for it to show up in this list.
If this trick is not done, LUT performance will effectively be crippled, and Median won’t fare much better. We’d suspect that some other filters could prove problematic without it, as well, but while time is infinite, our lifespans are not, so there’s only so much testing we can do.
A simple way to see if you actually need to do this fix is to travel to your VP17 local profile folder at %localappdata%\VEGAS Pro\17.0 and open gpu_video_x64. You’ll then need to search for the Interop context line and see what kind of result you get. If you see anything relating to “Failed to create CL interop”, you need to apply the “fix” in NVIDIA CP.
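For the curious, that log check can be scripted. Here’s a minimal sketch in Python; the `.log` extension, the helper’s name, and the exact surrounding message format are assumptions, but the string to search for is the one noted above:

```python
# Hypothetical helper: scan Vegas Pro 17's GPU log for the failed CL interop
# message that indicates the NVIDIA Control Panel profile "fix" is needed.
# The ".log" extension on gpu_video_x64 is an assumption.
import os
from pathlib import Path

def needs_nvidia_profile_fix(log_path=None):
    """Return True if the VP17 GPU log reports a failed CL interop context."""
    if log_path is None:
        # Default location on Windows: %localappdata%\VEGAS Pro\17.0
        log_path = (Path(os.environ.get("LOCALAPPDATA", ""))
                    / "VEGAS Pro" / "17.0" / "gpu_video_x64.log")
    log_path = Path(log_path)
    if not log_path.exists():
        # No log yet; launch Vegas Pro at least once to generate it.
        return False
    text = log_path.read_text(errors="ignore")
    return "Failed to create CL interop" in text
```

If this returns True, add VP17 as a profile in the NVIDIA Control Panel as described above, then relaunch and re-check the log.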
We’re still not sure whether MAGIX or NVIDIA is most at fault with this issue, but given that it seems to happen only on Quadro and TITAN, and not GeForce (as far as we can tell), it seems like something that should be fixed in the driver. NVIDIA’s latest Studio driver release calls out explicit support for VP17, but it does not take care of this issue.
Our Vegas Pro 17 testing doesn’t differ much from our VP16 testing, although the results between versions cannot be compared since all of the projects were recreated from scratch, and in some cases have changed slightly. That’s especially true with the FX configuration for LUT and Median. We dug into the settings for each a bit deeper to make sure both were being used to realistic effect.
MAGIX offers three brand-new FX in VP17: Slow Motion, Color Grading, and Warp Mesh. We’ve tested all three, and have found our current tests remain suitable enough for benchmarks. Our LUT test already covers color grading, while nothing can top Median in terms of using a CPU or GPU to its full potential (seen much more often in rendering than encoding).
The Slow Motion test gave us some false hope that it’d be suitable for benchmarking, since its analysis process will use any CPU you give it at well over 90%. The problem, though, is that our 18-core Intel Core i9-9980XE proved slower than the 8-core i9-9900K. The footage used seems to dramatically impact how long this process will take, but we couldn’t get realistic scaling in our testing.
As an example, one five-second clip would take over an hour to process its slow motion on an 18-core CPU. While that’s with the highest quality setting, even the default Coarse setting would have taken far longer than we’d expect, especially in comparison to the 9900K machine, which tears through the same workloads quicker. It’s not the first time we’ve bumped heads with nonsensical scaling like this, and we’re sure it won’t be the last.
That all covered, let’s get to the basic specs of our test rigs and hardware tested:
|Techgage Workstation Test Systems||
|---|---|
|Processors|AMD Ryzen Threadripper 2990WX (32-core; 3.0 GHz)<br>AMD Ryzen Threadripper 2970WX (24-core; 3.0 GHz)<br>AMD Ryzen Threadripper 2950X (16-core; 3.5 GHz)<br>AMD Ryzen Threadripper 2920X (12-core; 3.5 GHz)<br>AMD Ryzen 9 3900X (12-core; 3.8 GHz)<br>AMD Ryzen 7 3700X (8-core; 3.6 GHz)<br>AMD Ryzen 5 3600X (6-core; 3.8 GHz)<br>AMD Ryzen 5 3400G (4-core; 3.7 GHz)<br>Intel Core i9-9980XE (18-core; 3.0 GHz)<br>Intel Core i9-7900X (10-core; 3.3 GHz)<br>Intel Core i9-9900K (8-core; 3.6 GHz)<br>Intel Core i7-8700K (6-core; 4.2 GHz)|
|Motherboards|AMD X399: MSI MEG CREATION<br>AMD X470: ASUS ROG CROSSHAIR VII HERO Wi-Fi<br>AMD X570: GIGABYTE AORUS X570 MASTER<br>Intel Z390: ASUS ROG STRIX Z390-E GAMING<br>Intel X299: ASUS ROG STRIX X299-E GAMING|
|Memory|G.SKILL Flare X (F4-3200C14-8GFX)<br>4x8GB; DDR4-3200 14-14-14|
|Graphics|AMD Radeon VII (16GB)<br>AMD Radeon RX Vega 64 (8GB)<br>AMD Radeon RX 590 (8GB)<br>NVIDIA TITAN RTX (24GB)<br>NVIDIA TITAN Xp (12GB)<br>NVIDIA GeForce RTX 2080 Ti (11GB)<br>NVIDIA GeForce RTX 2080 SUPER (8GB)<br>NVIDIA GeForce RTX 2070 SUPER (8GB)<br>NVIDIA GeForce RTX 2060 SUPER (8GB)<br>NVIDIA GeForce RTX 2060 (6GB)<br>NVIDIA GeForce GTX 1080 Ti (11GB)<br>NVIDIA GeForce GTX 1660 Ti (6GB)<br>NVIDIA Quadro RTX 4000 (8GB)|
|Et cetera|Windows 10 Pro build 18362 (1903)|
|Drivers|AMD Radeon: Adrenalin 19.8.1<br>NVIDIA GeForce & TITAN: Studio 431.70<br>NVIDIA Quadro: Quadro 431.70|

All GPU-specific testing was conducted on our Intel Core i9-9900K test rig. All product links in this table are affiliated, and support the website.
We were going to include AMD’s Radeon Pro WX 8200 in our testing, but for some reason the card has been giving us some massive issues, and might need to be RMA’d. In case you missed mention of it above, neither VP17 nor VP16 supports Radeon Navi, something we’d expect to be fixed with the next released build.
All of our encodes use the respective VCE profile for AMD testing, and NVENC for NVIDIA testing. MAGIX has confirmed that these profiles are best to use over the default ones, since they are optimized for the graphics hardware available.
A reader recently asked us to test Intel’s integrated graphics solution in Vegas Pro, so we’ve done that here, except for playback, because the performance is just too poor (more on that later). We wanted AMD’s 3400G’s GPU tested as well, but as luck would have it, our older AM4 ITX motherboard won’t accept the EFI update needed to use it. A board replacement is en route, but we didn’t want to wait on it before getting this article launched.
And with all of that, let’s get right into a look at VP17 performance:
It’s humorous to see Intel’s UHD 630 graphics outpace the AVC encoders of the much more capable (overall) GPUs. Interestingly, the HEVC performance isn’t quite as strong, but it’s still not bad, either. It’s nice to see such strong performance from a solution built straight into the CPU.
As for the others, performance is pretty unpredictable. The Vega 64 doesn’t perform too well against the newer generation of cards, but somehow the RX 590 managed to do great in HEVC, even though it also sat quite a bit behind in AVC. Meanwhile, the Radeon VII topped the HEVC chart, but sits near the bottom in AVC. If only Navi worked so we could have had that perspective taken care of.
Ultimately, basic encodes like this only matter so much, since it’s when you start to use effects that the heavy workloads can begin. Let’s start with LUT:
Note that the results for the TITAN and Quadro cards would be worse here if not for the fact that VP17 was added as a profile in the NVIDIA Control Panel. After adding the profile, the TITANs exercise their strengths and leap to the top, sitting ahead of the Quadro RTX 4000. Given that performance boost, we’d imagine that NVIDIA actually has some driver optimization in place for its workstation cards, which is silly given that you need to enable a profile to get workable performance in the first place.
The Radeon VII performed well in the LUT test, but the other AMD cards fell to the bottom of this chart. To see the biggest gains in LUT performance, you simply need to avoid those older AMD architectures.
The Median denoising FX is almost as demanding as actual 3D rendering, meaning the overall performance of the graphics hardware is going to matter a lot. Here, the Intel IGP isn’t a good choice at all. It would have taken over six times as long to complete this project as the next step up, so we bailed on it so as to not give the chart a major skew.
AMD’s Radeon VII again performs great here, outperforming the entire rest of the stack. It’s a little unfortunate that this card isn’t going to be a great option for long, as it’s already entered end-of-life status, a mere six months after its launch. Something about that enterprisey architecture really jibes well with the Median FX.
We’ll tackle CPU and playback performance on the next page:
Because it’s the most grueling FX we can find in the entire Vegas Pro suite, we chose to run with it as our CPU test of choice. Note that this really is a strenuous test, and it won’t be reflective of all encodes using FX. Median just happens to use available processors to great effect.
Intel struts its strong performance here, which isn’t too uncommon for media-type tests, despite the 2990WX’s greater number of cores. In an eight-core match-up, Intel’s Core i9-9900K again conquers AMD’s eight-core option, 3700X. Dollar for dollar, though, the $500 12-core 3900X offers a big jump in performance over the equally priced 9900K.
When the GPU is introduced into the mix, not too much about the scaling changes:
Interestingly, the 3900X did make a bit of a jump here, placing ahead of every single Threadripper. That’s a good hint to a lack of optimization somewhere down the line. Oddly, while Intel’s 9900K placed ahead of the 3700X in the CPU-only test, adding the GPU into the mix changes their positions.
Meanwhile, let’s not ignore how much a slow CPU can hold back your efficiency. The performance seen on the 4-core 3400G is hard to call “awful”, but when you compare it to the rest of the tested stack, even the chips with just two additional cores, the performance advantages of bigger CPUs become super-clear.
Our playback testing is performed using 4K/60 AVC MP4 source footage, using the LUT and Median FX filters. For the LUT project, two different LUTs are used across light and dark scenes, with the highest quality setting chosen. The Median FX project uses an appropriate level of intensity for the scenes chosen, though we’ve found this FX to use the GPU just as heavily regardless of what settings are chosen.
Overall LUT performance is great across the stack. It’s important to note that the minimums seen are reflective of one scene changing to another, which is to say, one entire video file progressing to another in the timeline. That causes a slight drop at the start, but then smooths out to deliver a constant 60 FPS.
There is a little more to talk about than just this. All of our testing was conducted without the appropriate NVIDIA decoder being selected in the options (because we didn’t know about it until testing was completed). We did some extra testing with the setting enabled, and found that our minimums actually dropped, from around the 45 FPS mark you see in the chart down to 15.
While a drop like that seems to imply that the feature should be avoided for now, this is just how it’s behaved in our particular tests. You should definitely be doing your own A/B testing, because you may find a different outcome than we have. Given MAGIX’s suggestion to use this setting, we’d assume that scenarios we are not testing would exhibit stronger gains.
The LUT filter is pretty simple compared to some of the other GPU heavy-hitters, like Median. That denoising FX will use both the CPU and GPU to good effect.
We’re seeing the same bizarre sort of scaling with this test as we did with the same one in Vegas Pro 16. Even though some cards are faster than others, it doesn’t mean squat when it comes to VP’s logic. Which cards will run this test without issue is unpredictable, so the best thing to do is almost ignore Median playback entirely until it comes time to run your final encode (unless you happen to have a blessed GPU).
The same GPUs that top this chart also topped the VP16 one, with the exception of the 1080 Ti, which ran better in our old VP16 testing than it did with this latest VP17 go-around. It could be that this test in general is not too reliable, although we’re not using the filter in any special way. The fortunate thing is that Median is an extreme example of this happening, and most other filters in our tests have played back fine across the different GPUs we tested.
We published a news post the other day talking about our first impressions with Vegas Pro 17, and in there, we said the application has been stable for us overall. We’re starting to think that us saying that caused issues to begin occurring. We’ve even found the installer to be buggy, having installed the software on one occasion and finding it not installed after a reboot. We also encountered an issue on two occasions where the software would ask us to activate on every launch, forcing us to simply reinstall the OS after wasting too much time trying to fix it manually.
As for the actual usage of the application, though, it hasn’t felt much different from VP16, although based on the official forums, it does seem like some people are experiencing other issues. Fortunately, MAGIX was quick to release a post-launch build, and we get the feeling it won’t take too long for the next one, especially as the company has said that its focus on bettering AMD GPU (including Navi) support is a major priority right now.
That all said, just like with VP16, it’s hard to draw firm conclusions here, because Vegas Pro uses hardware efficiently at times, and inefficiently at others. We really hoped the Slow Motion feature would make for a good CPU test, but it’s proven itself to be an unreliable benchmark for now. Since this is a new feature, we’ll test it again if a future build adds some polish.
After posting the VP16 look, we honestly began to feel like MAGIX didn’t care enough about performance to get niggles fixed, but our viewpoint on that has changed, after finally being able to chat back-and-forth with one of its engineers. It sucks that some performance is still unpredictable, especially on the NVIDIA side, but the company seems eager to get current issues fixed. And we have to hope that NVIDIA is giving the company good support as well, since the software itself is only one part of the equation. The fact that profiles need to be added on TITAN and Quadro for full GPU acceleration is ridiculous.
We’ve provided many performance graphs here, so hopefully some of them are relevant to you. If you are still left wondering which path you should take, or have other questions, please leave a comment.
Once AMD Navi support gets added, we’ll get the RX 5700 XT and RX 5700 tested, and at the same time test the latest drivers on some of the other cards to see if performance changes for any of them.
Copyright © 2005-2019 Techgage Networks Inc. - All Rights Reserved.