
An In-depth Look At Blender 2.80 (Beta) Viewport & Rendering Performance


Date: March 5, 2019
Author(s): Rob Williams

Blender’s upcoming 2.8 version represents one of the biggest shifts the software has ever seen, something its meager bump from version 2.79 hides well. To see where things stand today on the performance front, we’re using 30 GPUs to tackle the current beta with viewport testing, as well as Eevee and Cycles rendering.



Introduction, Blender 2.8 Viewport Performance

Unlike web browser makers, which seem to be in a race to score the highest version number possible (Firefox is at 65, whereas it was 9 in 2011), Blender’s done things a little differently. To the layman, Blender 2.80 might seem like a minor update over 2.79, but the reality is, it’s a major upgrade, both under the hood, and on the surface.

This next Blender is such a big release that, a year ago, the developers sold “Flash Drive Rockets” to help fund travel for key team members to get together and better steer the 2.80 ship.

This is just scratching the surface, but 2.80 brings on a new physically-based renderer called Eevee, many updates to the preexisting Cycles renderer, improved animation tools, boosts to performance, and arguably a much more refined (and better looking) UI.

Some performance in 2.80 could change between now and the final release (whose date is still tentative), and if that happens, we’ll retest and reflect the updated numbers. If anything, we could see the viewport getting some more fine-tuning, but we’re not entirely confident that the rendering results will change very much between now and final, at least on the Cycles side.

CPUs & GPUs Tested in Blender 2.80.44 (Beta)
AMD Ryzen Threadripper 2990WX (32-core; 3.0 GHz, ~$1,729)
AMD Ryzen Threadripper 2970WX (24-core; 3.0 GHz, ~$1,119)
AMD Ryzen Threadripper 2950X (16-core; 3.5 GHz, ~$829)
AMD Ryzen Threadripper 2920X (12-core; 3.5 GHz, ~$649)
AMD Ryzen 7 2700X (8-core; 3.7 GHz, ~$299)
AMD Ryzen 5 2600X (6-core; 3.6 GHz, ~$199)
AMD Ryzen 5 2400G (4-core; 3.6 GHz, ~$140)
Intel Core i9-7980XE (18-core; 2.6 GHz, ~$1,800)
Intel Core i9-9900K (8-core; 3.6 GHz, ~$525)
AMD Radeon VII (16GB, ~$699)
AMD Radeon RX Vega 64 (8GB, ~$449)
AMD Radeon RX Vega 56 (8GB, ~$499)
AMD Radeon RX 590 (8GB, ~$279)
AMD Radeon RX 580 (8GB, ~$229)
AMD Radeon RX 570 (8GB, ~$179)
AMD Radeon RX 550 (2GB, ~$99)
AMD Radeon Pro WX 8200 (8GB, ~$999)
AMD Radeon Pro WX 7100 (8GB, ~$549)
AMD Radeon Pro WX 5100 (8GB, ~$359)
AMD Radeon Pro WX 4100 (4GB, ~$259)
AMD Radeon Pro WX 3100 (4GB, ~$169)
NVIDIA TITAN Xp (12GB, ~$1,200) (x2)
NVIDIA GeForce RTX 2080 Ti (11GB, ~$1,200)
NVIDIA GeForce RTX 2080 (8GB, ~$800)
NVIDIA GeForce RTX 2070 (8GB, ~$499)
NVIDIA GeForce RTX 2060 (6GB, ~$349)
NVIDIA GeForce GTX 1080 Ti (11GB, ~$699)
NVIDIA GeForce GTX 1080 (8GB, ~$499)
NVIDIA GeForce GTX 1070 Ti (8GB, ~$449)
NVIDIA GeForce GTX 1070 (8GB, ~$379)
NVIDIA GeForce GTX 1060 (6GB, ~$299)
NVIDIA GeForce GTX 1050 Ti (4GB, ~$139)
NVIDIA GeForce GTX 1050 (2GB, ~$109)
NVIDIA GeForce GTX 1660 Ti (6GB, ~$279)
NVIDIA Quadro RTX 4000 (8GB, ~$899)
NVIDIA Quadro P6000 (24GB, ~$4,300)
NVIDIA Quadro P5000 (12GB, ~$1,249)
NVIDIA Quadro P4000 (8GB, ~$749)
NVIDIA Quadro P2000 (5GB, ~$425)
All GPU-specific testing was conducted on our Intel Core i9-7980XE workstation.
All product links in this table are affiliated, and support the website.

The main reason we’re including professional GPUs in our testing is that we’ve received questions about whether there are pro-level enhancements in Blender. We can say that there definitely aren’t, but if you already have a current-gen ProViz card, you can use these results to see how it stacks up against other cards in a neutral design suite.

We recently upgraded our Blender testing script to include viewport testing, so in this article, we’ll take care of that, along with rendering performance in both Cycles and Eevee. There’s even a bit of heterogeneous rendering testing, and info on tile sizes, so… let’s get right to it!

Viewport Performance

Blender’s built-in viewport offers three rendering modes: Solid, Wireframe, and LookDev. We’re not charting the first two, as they run so well on the vast majority of GPUs that no real scaling would be seen between them. We can say that if you want 60 FPS out of Wireframe in heavier projects, you’ll want at least a “decent” GPU, which is to say a GeForce GTX 1060.

Blender - Techgage LookDev Viewport Test

LookDev is the most grueling of the three modes, as it loads up all of the scene’s assets and lighting effects to help you judge whether you’re going in the right direction. It’s too demanding for constant real-time work, but artists love dipping into it to better gauge a scene before continuing on.

For our viewport testing, we use the “Racing Car” project found on the official demo files page. Since charting Solid and Wireframe performance is pretty pointless (though you still don’t want a super low-end GPU), we stuck to LookDev for all of our testing, at 1080p, 1440p, and 4K.
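As a side note, LookDev is simply one of the viewport’s shading modes, so it can be toggled through Blender’s Python API as well as the UI. Here’s a minimal sketch (not our actual test script) that flips every open 3D viewport into LookDev; as far as we can tell, 'MATERIAL' is the enum value sitting behind the 2.80 beta’s LookDev button, so treat that as our assumption:

import bpy

# Walk every open window and switch each 3D Viewport to LookDev shading.
for window in bpy.context.window_manager.windows:
    for area in window.screen.areas:
        if area.type == 'VIEW_3D':
            for space in area.spaces:
                if space.type == 'VIEW_3D':
                    space.shading.type = 'MATERIAL'  # shown as "LookDev" in the 2.80 UI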

Blender 2.80 1080p Viewport Performance

Right off the bat, we can see NVIDIA’s Turing architecture make a statement at the top of the chart. The card that really stands out to us is the GTX 1660 Ti, as it delivers a good chunk of the top dogs’ performance at a $279 price point. The RTX 2060 likewise offers very strong performance, sitting right behind the GTX 1080 Ti.

Even at the arguably modest resolution of 1080p, the LookDev mode proves a serious drag on lower-end GPUs. This is largely fine if you don’t plan on moving the camera, but chances are good that you are going to do that. While you don’t need 60 FPS in a viewport like you do in actual gameplay, you don’t want it to behave like a slideshow, either; that can be literally headache-inducing.

How bad does the pain get at 1440p?

Blender 2.80 1440p Viewport Performance

1440p has about 80% more pixels than 1080p, but the performance hit over the smaller resolution isn’t exactly big. The chart-topping 2080 Ti loses just 5 FPS, while the GTX 1660 Ti once again proves an extremely strong contender at its price point.

All of these GPUs are included here for completeness, but the smaller models should honestly be avoided if you want anything close to a reasonable experience in LookDev. We’d wager you’d want at least 20 FPS, but 30 FPS is a whole lot better. The cards that can hit that at 4K are going to be part of a special club, so let’s take a look:

Blender 2.80 4K Viewport Performance

Whether for gaming or design, 4K is where the agony sets in. Only the RTX 2080 and RTX 2080 Ti managed to stay above 30 FPS at 4K, though anything WX 8200 or above should prove satisfactory. At this resolution, we can’t complain too much about lacking performance, especially when the market’s $1,200 gaming card (the 2080 Ti) barely manages to move past 40 FPS.

For most GPUs, 4K LookDev runs like a slideshow. Again, while you don’t need 60 FPS in a viewport, you don’t want the opposite problem of so few frames that rotating the camera becomes a stuttery, imprecise mess.

For both Wireframe and Solid, we’d recommend at least a GTX 1060. Beyond that GPU, frame rates bunch up from one card to the next; they don’t scale at the high end like LookDev does. The 1050 Ti in our tests hit 30 FPS in Wireframe at 4K, but the GTX 1060 – the very next step up – hit 52 FPS. The GTX 1660 Ti hits 69 FPS, which basically matches every GPU faster than it, since the mode simply doesn’t scale.

Cycles Tile Sizes, Cycles/Eevee Rendering Performance & Final Thoughts

Tile sizes are not used by Blender’s upcoming Eevee renderer, but they remain just as important as ever for Cycles – unless, of course, you already know which values to go with, which for many will be the case. But to show the true effect of different tile sizes across three different rendering modes, we decided to generate some numbers.

The tile size you choose simply dictates how big the rendering region is. On a small image, a big tile size will cover most of the image, whereas a smaller tile size will look like Cinebench’s tile rendering. It’s long been said that for CPU, 32×32 should be used, while 256×256 is fine for GPUs. Recently, someone at one of the GPU vendors told me that 512×512 is safe to use for modern GPUs, and lo and behold, there was actually an improvement to be seen over 256×256.

Tile Size            16×16    32×32    64×64    128×128   256×256   512×512
BMW (CPU)            101s*    104s     115s     149s      -         -
BMW (GPU)            71s      73s      83s      93s       88s       69s*
BMW (Hybrid)         46s*     51s      74s      158s      -         -
Classroom (CPU)      158s     156s*    159s     170s      -         -
Classroom (GPU)      117s     145s     123s     123s      112s      109s*
Classroom (Hybrid)   76s      71s*     85s      106s      -         -
Pavilion (CPU)       164s*    166s     175s     214s      -         -
Pavilion (GPU)       184s     180s*    226s     354s      209s      195s
Pavilion (Hybrid)    98s*     100s     128s     202s      -         -
Notes: Best result for each series marked with an asterisk (*).
BMW: 35 samples; Classroom: 150 samples; Pavilion: 500 samples

For CPUs, you never want to go above 32×32, as this table highlights. While you’ll be safe up to 128×128, you’re already losing performance by then; going higher is just asking for pain (which is why there are dashes instead of numbers).

Blender 2.80 can take proper advantage of heterogeneous rendering, meaning that both the CPU and GPU can jump in on the action to get the job done quicker, which is well evidenced in the table above. Tile sizes for hybrid rendering should be treated the same as CPU tile sizes, so 16×16 or 32×32. Otherwise, the CPU will choke on its oversized tiles while the GPU waits around for it to finish.

Tile size is fortunately something you don’t need to fuss over much: you just need to choose the right value for your chosen rendering device, and move on. I should note that only these three projects (BMW, Classroom, Pavilion) from the Blender demo files site would render without issue in all three modes.
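For those who script their renders, here’s a minimal sketch of how those values map to Blender 2.80’s Python API; the tile properties live on the render settings in the 2.80 beta, though treat the exact names as our assumption rather than gospel:

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

if scene.cycles.device == 'GPU':
    # 256x256 is the long-standing GPU recommendation; 512x512 helped on the newer cards we tried
    scene.render.tile_x = 256
    scene.render.tile_y = 256
else:
    # Small tiles keep every CPU thread fed
    scene.render.tile_x = 32
    scene.render.tile_y = 32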

GPU Rendering Performance

Blender 2.80 GPU Rendering Performance - BMW (Cycles) Project

The BMW Blender project is almost iconic at this point. It’s not a complex scene, but it acts as a great quick benchmark for seeing how different CPUs and GPUs scale. It simply renders quicker than the more complex Classroom and Pavilion scenes, which take even better advantage of ray tracing.
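For anyone curious how a single-frame render like this gets timed, here’s a hypothetical harness (not our actual test script) that could be launched headlessly with something along the lines of blender -b bmw27.blend --python time_render.py; the file and script names are only examples:

import time
import bpy

start = time.time()
bpy.ops.render.render(write_still=False)  # render the current frame, keep the result in memory
print("Render finished in %.1f seconds" % (time.time() - start))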

We can’t imagine that many people are going to be using multiple GPUs for Blender, but if you do, the Cycles renderer sees a major boost in performance. At the bottom end of the chart, some of the results are downright painful, and since the cards a few rungs up don’t cost much more, something like the Quadro P2000 or RX 570 should be considered the bare minimum.

Consider the fact that all of these renders are single-frame, and it’s not even a high-resolution frame. The more detailed a scene, and the higher the resolution, the longer it’ll take to render. That fact can become painful when we’re talking about 4K resolution, and especially animation.

Blender 2.80 GPU Rendering Performance - Classroom (Cycles) Project

The Classroom project doesn’t change the scaling much, although some GPUs fare better in the BMW project than in this one, thanks to the fact that there’s a lot more going on in this scene. The multi-GPU config continues to perform amazingly well, while the bottom bunch of GPUs should almost be outright avoided for this type of work.

Whereas the P2000 seemed like a decent enough cut-off point with the BMW scene, its performance here doesn’t inspire a ton of confidence. Here, the RX 580 and cards around it seem to offer the best value. Faster cards will of course continue to shave time off, but it’ll be up to you to decide the best use of your hard-earned money.

Blender 2.80 GPU Rendering Performance - Pavilion (Cycles) Project

The Pavilion scene, like the Classroom one, is very complex compared to the BMW project. That helps us get slightly different pictures of GPUs sometimes, since not all projects render the same way. Fortunately for NVIDIA, the company’s Turing-based GeForces rule the roost, though the last-gen TITAN Xp gets some props for placing so high in every single test as well.

It’s important to note that the CPU in Cycles counts for a lot, as you’d expect given it originated as a CPU renderer. We’re going to be taking a look at heterogeneous rendering performance shortly, because if your project can take proper advantage, it can change your perspective. But first, a quick look at Eevee:

Blender 2.80 GPU Rendering Performance - The White Room (Eevee) Project

Eevee is the “Extra Easy Virtual Environment Engine”, but also the name of a Pokémon, which makes Google searching for Blender-specific queries a little more complicated. While Eevee is designed from the ground up with the GPU in mind, it doesn’t currently support multiple GPUs, which is why the dual TITAN Xp entry is missing.

While it doesn’t take advantage of multiple GPUs, Eevee is a crazy-fast renderer – so fast that, in order to generate some scaling, we had to boost the sample count on the chosen project. You don’t need 1,000 samples for test renders, but that count will probably serve well enough for a final render.
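If you want to reproduce that kind of run, here’s a minimal sketch using Blender 2.80’s Python API as we understand it (the eevee sample properties are from the 2.80 beta; consider them an assumption on our part):

import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'   # switch from Cycles to Eevee
scene.eevee.taa_render_samples = 1000   # sample count used for final (F12) renders
scene.eevee.taa_samples = 16            # viewport sampling can stay modest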

Eevee’s claim to fame is that it’s going to greatly accelerate animation rendering, which is one of the reasons it was built to be mind-blowingly fast. Still, the faster your GPU, the faster test renders are going to be with Eevee. We got our test project from here.

In our benchmarking, we couldn’t get the Eevee renderer to use the CPU, but research has told us that animation would make far better use of the CPU than a straight single-frame render. Unfortunately, we’re benchmarkers, not designers, so we don’t exactly have a capable project floating around. If you have one that you would like to see represented in our testing, please reach out.

Heterogeneous Rendering Performance

When we found out that Blender 2.80 would be supporting heterogeneous rendering, we couldn’t help but jump for joy. When you can render to both the CPU and GPU at the same time, the performance gains can be downright amazing. That’s doubly true if you’re using both a super-fast CPU and GPU. You probably don’t want the pairing to be too lopsided, but in our tests, the CPU can make a bigger difference to performance than the GPU.

As mentioned before, Eevee can use both the CPU and GPU, but in straight rendering, our CPU was not touched at all (at least in our chosen project). Now, swapping 30 GPUs or so isn’t too terribly complicated, but swapping that many CPUs definitely is. So, for our hybrid tests, we chose 9 CPUs to include, with AMD providing sweet scaling from bottom to top and Intel chiming in to round things out.
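For reference, here’s a minimal sketch (not our exact setup) of enabling hybrid CPU+GPU rendering through Blender 2.80’s Python API, assuming an NVIDIA card; Radeon owners would select 'OPENCL' instead of 'CUDA':

import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # 'OPENCL' for Radeon cards
prefs.get_devices()                  # refresh the detected device list

for device in prefs.devices:
    device.use = True                # tick the GPU(s) and the CPU together

bpy.context.scene.cycles.device = 'GPU'   # 'GPU' here means "use all enabled devices"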

Blender 2.80 Heterogeneous Performance - BMW (Cycles)
Blender 2.80 Heterogeneous Performance - Classroom (Cycles)
Blender 2.80 Heterogeneous Performance - Pavilion (Cycles)

In every single one of these tests, using hybrid rendering dramatically improves performance. That’s even true with the modest 2400G quad-core, a $140 CPU. That said, we would never suggest choosing that kind of CPU for creative workloads; it’s just that if you happen to be rendering on one, it won’t hold back as much performance as you’d think when used with hybrid rendering. On its own, it’s slow.

For Cycles, a better CPU is quite obviously a better buy over a faster GPU, but again, with Eevee’s huge GPU focus, your needs could change in time. If you have both a decent CPU and GPU, you will have little to worry about.

To give a better idea of just how important a CPU can be in Cycles, here’s a look at the 18-core Intel i9-7980XE combined with every single GPU tested:

Blender 2.80 Heterogeneous Performance Comparison - Pavilion (Cycles)

As you can see, this 18-core CPU is so fast that the GPU doesn’t matter nearly as much. But if you happen to have a fast GPU as well, then your overall renders are going to finish far quicker. And again, we feel compelled to emphasize that this is just a single-frame render at a modest resolution. Complex scenes are going to take far more time to render, and animation will of course require 24+ frames for every second of footage.

Final Thoughts

We benchmark many creative applications at Techgage, and we of course have fielded many requests in the past. Interestingly, Blender 2.8 requests have been hitting us quite a bit lately. Even as this was being written, we were hit with a notification from someone else asking for this very content. It’s clear that people are excited about the coming final build!

Blender 2.8 LookDev (Eevee)
Credit: Andreas Strømberg

To summarize, there are two distinct performance avenues with Blender: viewport and rendering. For some, rendering performance might not matter so much, because they can simply sleep through the night while their computer is rendering away. But viewport performance is something that’s impossible to ignore. If your viewport is slow, then you’re going to have a frustrating time. You might even get a real headache.

For Wireframe and Solid modes, you don’t need a considerable GPU to get reasonable performance; you’ll want at least a GTX 1060 or GTX 1660 Ti to ensure 60 FPS in complex scenes. LookDev is the real performance killer, requiring an RTX 2070 to hit 60 FPS at the most modest of our three tested resolutions, 1080p. At 4K, you need a serious GPU to deliver a reasonable experience.

As it stands, NVIDIA’s Turing-based graphics cards perform extremely well in every conceivable Blender test we throw at them. We got the best frame rates out of them in the viewport tests, and the best rendering performance in both Cycles and Eevee. Fortunately, we benchmarked many cards, so hopefully the one you have your sights set on makes an appearance.

Blender 2.8 Agent 327 Render (Cycles)

On the AMD side, for top-tier Blender performance, you’ll want either the Radeon VII, or Vega 64. For modest cards, NVIDIA’s GTX 1660 Ti really does seem to be unbeatable, but there are so many price points covered here, and thus a lot of options to choose from. You just don’t want to go too low-end when it comes to rendering, unless you get some sick pleasure out of exercising your patience.

As made obvious in every one of the performance graphs here, testing was done with the beta version of Blender 2.80, and performance could change between now and the final release (which is still not set in stone). Should that happen, we’ll retest – maybe not to the same extent, since 30 GPUs is quite a bit, but we’ll have to see how this article is received before moving ahead.

If you have questions not tackled in this article, please leave a comment!
