Nvidia GeForce RTX 3090 Founders Edition: It works hard, it plays hard

The 24GB of VRAM makes it a stunning value for prosumers.

Credit: Brad Chacos/IDG

Let’s continue our look at the benefits Nvidia-specific software can provide with a pair of tools that require GeForce graphics cards. The Radeon VII, obviously, isn’t included in the next couple of tests.

Maxon’s Redshift is a GPU-accelerated biased renderer that requires a CUDA-capable graphics card. It’s been used in the real world by major companies like Jim Henson’s Creature Shop and Blizzard Entertainment. We tested our cards using the hard-to-find Redshift “Age of Vultures” benchmark in the demo version. Redshift has also implemented OptiX, which enables additional acceleration on RTX graphics cards. We enabled that for all applicable GPUs, while the GTX 1080 Ti stuck to bare CUDA. The results listed are seconds to render, so again, lower is better.
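
Redshift’s own startup checks aren’t public, but as a rough illustration of what “requires a CUDA-capable graphics card” means in practice, here’s a minimal sketch of the device query a CUDA renderer can run before rendering. The output formatting is our own, not Redshift’s.

// Minimal sketch: enumerate CUDA devices and report the properties a
// GPU renderer cares about (compute capability gates features like
// OptiX hardware ray tracing; total VRAM bounds scene size).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, compute %d.%d, %.1f GB VRAM\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}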

[Chart: Redshift render times. Credit: Brad Chacos/IDG]

Yep, it’s a whupping. The GeForce RTX 3090 is over 65 percent faster than the GeForce RTX 2080 Ti, which offered most of the performance of the $2,500 Titan RTX, albeit with lower memory capacity. As you can see, though, if your specific workloads don’t tap into the 3090’s massive 24GB memory buffer, it’s only slightly faster than the much cheaper RTX 3080.

Next up: OctaneBench 2020 v1.5. This is a canned test offered by OTOY to benchmark your system’s performance with the company’s OctaneRender, an unbiased, spectrally correct GPU engine. OctaneBench (and OctaneRender) also integrates Nvidia’s OptiX API for accelerated ray tracing performance on GPUs that support it. The RTX cards do; the GTX 1080 Ti again sticks to CUDA. The benchmark spits out a final score after rendering several scenes, and the higher it is, the better your GPU performs.
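
We won’t reproduce OctaneBench’s exact weighting here, but the published gist — each scene’s render speed is normalized against a reference GPU (historically a GeForce GTX 980, per OTOY), then averaged — boils down to arithmetic like this sketch. Every number below is invented for illustration; it’s host-only code, so no GPU is needed to follow the math.

// Hypothetical OctaneBench-style scoring: per-scene speed relative to a
// reference GPU, averaged into one number. All values are made up.
#include <cstdio>

int main() {
    const double test[4]      = {410.0, 395.0, 440.0, 425.0};  // card under test
    const double reference[4] = {100.0, 100.0, 100.0, 100.0};  // reference GPU

    double relative_sum = 0.0;
    for (int i = 0; i < 4; ++i)
        relative_sum += test[i] / reference[i];  // per-scene speedup

    // A card identical to the reference would score 100 here.
    std::printf("Score: %.1f\n", 100.0 * relative_sum / 4.0);
    return 0;
}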

[Chart: OctaneBench 2020 scores. Credit: Brad Chacos/IDG]

The GeForce RTX 3090 scores a massive 86 percent higher than the RTX 2080 Ti, and 19 percent higher than the RTX 3080.

Okay, let’s get back to non-CUDA-specific tests. When you’re talking about professional tools, viewport performance matters. Unlike in gaming, raw brute strength doesn’t always win in ProViz. SPECviewperf 13 measures GPU viewport performance using traces from 3ds Max, Maya, Siemens NX, Creo, CATIA, and SolidWorks, as well as energy and medical tests that draw on datasets typical of those industries. Both AMD and Nvidia contribute to the project as part of SPECgpc.

Higher scores are better. Also note that this particular set of specialized benchmarks is likely to score higher with Nvidia Quadro or Radeon Pro cards, thanks to their optimized professional drivers, but we don’t have any on hand to test.

[Charts: SPECviewperf 13 results, parts 1 and 2. Credit: Brad Chacos/IDG]

The GeForce RTX 3090 generally offers better viewport performance than other options, though it doesn’t smash quite as hard as it does in other tests. These are very application-dependent results. Still, if you use these tools, these benchmarks indicate you’ll notice faster viewport responsiveness with Nvidia’s new GPU.

Our final set of benchmarks examines rendering performance in DaVinci Resolve Studio 16, a production tool that “combines professional 8K editing, color correction, visual effects and audio post production.” (Nvidia provided an activation code for our testing.) It’s very popular among creative professionals, especially for 8K media editing.

We tested these GPUs using Puget Systems’ DaVinci Resolve Studio Benchmark. Puget Systems specializes in creating high-end, custom professional workstations, and the company crafted a series of benchmarks for various applications to quantify performance.

Puget offers several different benchmarks for DaVinci Resolve. We used the 4K benchmark, which requires 16GB of system RAM and at least 8GB of GPU VRAM. It tests a variety of media codecs, and each of the codecs gets put through the following gauntlet, per Puget:

“For the 4K and 8K render tests, we benchmark 5 different levels of grade:

  • “Optimized Media” - No effects applied in order to simulate the performance when generating optimized media
  • “Basic Grade” - Simple color wheel and similar adjustments with a base grade plus 4 power windows
  • “OpenFX - Lens Flare + Tilt Shift Blur + Sharpen” - Basic Grade plus four OpenFX
  • “Temporal Noise - Better 2 Frames” - Basic Grade plus a single TNR node
  • “3x Temporal Noise - Better 2 Frames” - Basic Grade plus three TNR nodes using splitter/combiner nodes

The “Optimized Media” timeline is rendered out to MXF OP1A DNxHR LB at 1920x1080, while the others are all rendered out to Quicktime DNxHR HQ at the native resolution of the timeline (UHD or 8K).”

Puget’s tool measures the frames per second each card achieves in each test for every codec, then spits out an average score for each type of test, along with an overall score. Those scores reflect performance relative to Puget’s reference workstation, which pairs a Core i9-9900K with a 24GB Titan RTX; the higher the score, the better. Nvidia GPUs use CUDA in DaVinci Resolve, while the Radeon VII leans on OpenCL.
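
Puget doesn’t spell out every step of the math in its results window, but a relative score of this kind presumably reduces to dividing measured throughput by the reference machine’s throughput. A sketch with invented FPS numbers:

// Hypothetical Puget-style relative score for one codec/grade combination.
#include <cstdio>

int main() {
    const double fps_test      = 54.0;  // card under test (invented)
    const double fps_reference = 45.0;  // i9-9900K + Titan RTX baseline (invented)

    // 100 = parity with the reference workstation; higher is better.
    std::printf("Relative score: %.0f\n", 100.0 * fps_test / fps_reference);
    return 0;  // prints 120
}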

[Charts: DaVinci Resolve Studio overall and render scores. Credit: Brad Chacos/IDG]

As the Overall Score and Basic Grade results show, you’ll notice a decent performance bump with the GeForce RTX 3090. But once you start going heavy with GPU effects, the GeForce RTX 3090 starts to roar, as evidenced in the OpenFX and Temporal Noise results.

Puget’s benchmarking tool also proved the worth of one of the GeForce RTX 3090’s defining features: its massive 24GB of GDDR6X VRAM. We hoped to run Puget’s 8K benchmark suite as well, but every Nvidia card except the 3090 ran out of memory and crashed during the attempt. If you’re editing 8K video, you need a graphics card with 24GB of VRAM, and the GeForce RTX 3090 offers it for $1,000 less than the Titan RTX did.
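
Back-of-the-envelope math shows why 8K chews through VRAM so fast. Resolve processes color internally at 32-bit float; assuming an RGBA layout (16 bytes per pixel — the exact buffer formats aren’t public), a single 8K UHD frame is already over half a gigabyte:

// Rough VRAM math for 8K UHD (7680x4320) frames at 32-bit float RGBA.
// The per-pixel format and resident-frame count are assumptions.
#include <cstdio>

int main() {
    const double pixels = 7680.0 * 4320.0;
    const double bytes_per_frame = pixels * 4 /*channels*/ * 4 /*bytes each*/;
    std::printf("One 8K float frame: %.2f GB\n", bytes_per_frame / 1e9);

    // Temporal noise reduction keeps multiple frames in flight, plus source,
    // intermediate, and output buffers for every node in the grade.
    const int frames_resident = 8;  // hypothetical
    std::printf("%d resident frames: %.1f GB\n",
                frames_resident, frames_resident * bytes_per_frame / 1e9);
    return 0;
}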

[Chart: OctaneBench out-of-core memory results. Credit: Nvidia]

Nvidia-supplied results in a memory-heavy OctaneRender test. The 3090’s 24GB of VRAM makes a massive difference to overall performance.

Exceeding your GPU’s VRAM causes some applications (like Blender and DaVinci Resolve) to fail the task outright. Other tools let you spill into general system memory when a scene exceeds your VRAM capacity, but doing so imposes a huge penalty on rendering time.

Nvidia’s reviewer’s guide walked through just such a scenario with OctaneRender. The results above show how significantly performance improves if you can keep the entire workload directly on your graphics card’s VRAM rather than going “out-of-core” to system memory.
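
As a rough sketch of the fork in the road those results illustrate, here’s the decision an out-of-core renderer effectively makes, using the CUDA runtime’s cudaMemGetInfo to check free VRAM. The 20GB scene size is hypothetical, and real renderers page assets in and out far more gracefully than this all-or-nothing check.

// If the scene fits in free VRAM, keep it on the GPU; otherwise fall back
// to pinned host memory, which the GPU reads over PCIe at a big penalty.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t scene_bytes = 20ull * 1024 * 1024 * 1024;  // hypothetical 20GB scene

    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    std::printf("VRAM: %.1f GB free of %.1f GB\n", free_b / 1e9, total_b / 1e9);

    void* buf = nullptr;
    if (scene_bytes <= free_b && cudaMalloc(&buf, scene_bytes) == cudaSuccess) {
        std::printf("Scene resides in VRAM: full-speed rendering.\n");
        cudaFree(buf);
    } else if (cudaMallocHost(&buf, scene_bytes) == cudaSuccess) {
        std::printf("Scene spills out-of-core to system RAM: big slowdown.\n");
        cudaFreeHost(buf);
    } else {
        std::printf("Allocation failed entirely.\n");
    }
    return 0;
}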

Next page: 4K and 1440p gaming benchmarks

Brad Chacos

PC World (US online)