Nvidia GeForce RTX 3090 Founders Edition: It works hard, it plays hard

The 24GB of VRAM makes it a stunning value for prosumers.
  • Brad Chacos (PC World (US online))
  • 24 September, 2020 23:00

Nvidia says its monstrous $1,500 GeForce RTX 3090 delivers “the ultimate gaming experience.” That’s very true. You can even game at 8K—not 4K, 8K—with some titles on this so-called “BFGPU.”

You probably shouldn’t buy it if all you do is game, though. Unless you’re a deep-pocketed enthusiast who doesn’t mind spending lavishly for the absolute best performance possible, the staggeringly powerful $700 GeForce RTX 3080 offers much better bang-for-buck for pure gamers. Nvidia actually calls the RTX 3080 its flagship gaming GPU, even though the RTX 3090 offers more raw horsepower.

But if you’re a professional who could save real money by creating videos or churning through GPU-intensive tasks at a massively faster clip, the GeForce RTX 3090 truly shines—especially if you can take advantage of its massive 24GB memory capacity. Nvidia also says this card delivers “Titan-class performance,” and that's no exaggeration. The GeForce RTX 3090 will absolutely melt your face (and your render times) in many content creation tasks, toppling both the last-gen RTX Titan as well as AMD’s creator-beloved Radeon VII. Oh, and at $1,500, the GeForce RTX 3090 costs a full $1,000 less than the RTX Titan it replaces in all but name.

If you need the ultimate graphics card for both work and play, Nvidia’s BFGPU tramples the competition. Let’s dig into Nvidia’s GeForce RTX 3090 Founders Edition.

Editor’s note: This comprehensive review of the GeForce RTX 3090 goes longer than most as it’s good for much more than just 4K gaming—we dive into 8K benchmarks and prosumer tasks, too. Check out Nvidia GeForce RTX 3090 tested: 5 key things you need to know for high-level takeaways of this in-depth info.

[Photo: Brad Chacos/IDG]

Nvidia GeForce RTX 3090 specs and features

Nvidia’s GeForce RTX 3090 is built using the same next-gen “Ampere” GA102 graphics processor as the GeForce RTX 3080, but it’s stuffed with more of everything—more CUDA cores, more RT and tensor cores, more SMs, more memory, a bigger memory bus, you name it. Check out our RTX 3080 review for a deeper look at Ampere’s most significant architectural changes, as we won’t be rehashing those GPU-level technical details here. You can also find more info about how the new RTX 30-series GPUs stack up against the previous generation in our GeForce RTX 30-series vs. RTX 20-series spec comparison.

Here’s a high-level look at the GeForce RTX 3090’s insides:

  • CUDA cores: 10,496
  • Boost clock: 1.7GHz
  • Memory: 24GB GDDR6X
  • Memory bus: 384-bit
  • Memory bandwidth: 936GB/s
  • RT cores: 82 (2nd-gen)
  • Tensor cores: 328 (3rd-gen)
  • NVLink SLI: Yes
  • PCIe: Gen 4
  • HDMI: 2.1
  • HDCP: 2.3
  • Display connectors: 1x HDMI 2.1, 3x DisplayPort 1.4
  • Length: 12.3 inches
  • Width: 5.4 inches
  • Height: 3-slot
  • Maximum GPU temp: 93 degrees Celsius
  • Graphics card power: 350W
  • Recommended power supply: 750W
  • Power connectors: 2x 8-pin (with supplied 12-pin adapter)

By comparison, the RTX 3080 packs 68 SMs and 8,704 CUDA cores to the RTX 3090’s 82 SMs and 10,496 CUDA cores. That gives the GeForce RTX 3090 a roughly 20-percent advantage in raw specs, though that edge doesn’t translate into 20 percent more performance in every workload. The GeForce RTX 3080 offers roughly 85 to 90 percent of the RTX 3090’s 4K gaming performance at under half the cost, as you’ll see in our benchmarks later.
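
If you like napkin math, the raw-spec gap shakes out like this. (A quick sketch: the RTX 3080’s 1.71GHz boost clock is Nvidia’s published spec rather than a number from the table above, and the FP32 figures assume the standard two FLOPs per CUDA core per clock.)

```python
# Napkin math on raw Ampere specs. The 3090's numbers come from the spec
# table above; the RTX 3080's 1.71GHz boost clock is Nvidia's published
# spec. FP32 throughput assumes 2 FLOPs (one FMA) per CUDA core per clock.
cards = {
    "RTX 3090": {"cores": 10496, "boost_ghz": 1.70},
    "RTX 3080": {"cores": 8704, "boost_ghz": 1.71},
}

for name, card in cards.items():
    tflops = card["cores"] * 2 * card["boost_ghz"] / 1000
    print(f"{name}: {tflops:.1f} TFLOPS FP32")  # ~35.7 vs. ~29.8

edge = cards["RTX 3090"]["cores"] / cards["RTX 3080"]["cores"] - 1
print(f"Raw CUDA-core advantage: {edge:.1%}")  # ~20.6%
```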

The upgraded RT and tensor cores in Ampere GPUs help with playing ray-traced games at 1440p and 4K resolution, alleviating a concern with the RTX 20-series. But Nvidia’s new-look cores can make an even more tangible difference for creators. Several creative applications now support Nvidia’s OptiX technology, which lets them tap into the specialized capabilities of RT and tensor cores to speed up tasks. In addition to the raw denoising and ray tracing speed boosts in rendering tasks, the 2nd-gen RT cores inside the RTX 3090 now support hardware acceleration for ray-traced motion blur, and applications can now adopt Nvidia’s DLSS technology to speed up their real-time visualizations. The D5 Render tool for architects already supports DLSS.

Two specs in particular should make professionals drool, though: the NVLink connector and the 24GB of GDDR6X memory.

[Image: RTX 3090 vs. RTX Titan | Nvidia]

Taking on Titans.

Nvidia teamed up with Micron to create GDDR6X, the ultra-fast new memory in the RTX 3080 and 3090. It uses advanced “PAM4” signaling technology, which transmits one of four possible signal levels per cycle rather than the usual two, encoding two bits per cycle instead of one. That lets GDDR6X move data twice as fast as previous incarnations. Again, read our RTX 3080 review for deeper Ampere details.
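
That signaling rate is exactly where the spec table’s headline bandwidth figure comes from. A rough back-of-the-envelope check, assuming the 19.5Gbps-per-pin effective data rate commonly cited for the RTX 3090’s GDDR6X:

```python
# Back-of-the-envelope GDDR6X bandwidth math for the RTX 3090. The
# 19.5Gbps-per-pin effective data rate is an assumption (the commonly
# cited figure for this card); the 384-bit bus comes from the spec table.
per_pin_gbps = 19.5   # effective gigabits per second, per pin
bus_width = 384       # memory bus width in bits (one pin per bit)

total_gbps = per_pin_gbps * bus_width
print(f"{total_gbps / 8:.0f} GB/s")  # 936 GB/s, matching Nvidia's spec
```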

The RTX 3090’s mammoth 24GB capacity is the real draw for professionals. Creators editing 8K video or churning through other memory-hungry workloads need more capacity than standard gaming-class graphics cards provide, and more VRAM helps AI and analytics pros handle larger data sets.

The 16GB of HBM2 inside AMD’s Radeon VII helped it win over creators more than gamers during its brief lifespan. The GeForce RTX 3090 offers 8GB more capacity. While HBM2 helped the Radeon VII hit a massive 1TB/s memory bandwidth, GDDR6X’s advances help the RTX 3090 hit a blistering 936GB/s. By contrast, the RTX 3080 packs in only 10GB of GDDR6X.

[Photo: Brad Chacos/IDG]

Prying off a cover reveals the RTX 3090’s NVLink connector.

The GeForce RTX 3090 is also the only RTX 30-series graphics card equipped with an NVLink connector, which helps move data between a pair of RTX 3090 cards in a multi-GPU configuration to the tune of 112.5GB/s. SLI is dead for gamers—Nvidia won’t even offer SLI profiles in its drivers beyond 2020—but this is a critical inclusion for creators and software developers who can utilize multiple linked GPUs for enhanced application performance.

The days of rigs stuffed with four high-end graphics cards may be numbered, though, especially if you use a standard PC case. Nvidia’s BFGPU needs a big-ass cooler to tame it.


Nvidia GeForce RTX 3090 Founders Edition design

The GeForce RTX 3090 Founders Edition is like the 3080 FE’s design on steroids.

It’s the same cooler, just massively bigger. While the RTX 3080 FE managed to squeeze into a 2-slot design, Nvidia expanded the 3090 Founders Edition to a full three slots.

It’s longer, too. The RTX 3080 FE measures 11.2 inches long, while the 3090 FE is 12.3 inches. This beast takes up a lot of space in your case.

[Photo: Brad Chacos/IDG]

RTX 3090 FE next to RTX 3080 FE

If you’re a gamer, that doesn’t matter. And if you’re a prosumer who only needs one or two monstrous graphics cards for your projects, the RTX 3090 Founders Edition’s size shouldn’t matter either. But if your workload scales as you add graphics cards to your system, drastically reducing completion times and potentially putting real money in your pocket, the size is something you’ll want to take into consideration.

I have a second 3090 here—MSI’s custom Gaming X Trio, which is also a beefy 3-slot design. I’d hoped to slap both in my system to test DaVinci Resolve performance. They both fit on my ATX motherboard, but the fans on the bottom-most card ran into the case wiring of my Corsair Crystal 570X RGB. You could fit two triple-slot 3090s easily into a full-tower case, but not this relatively spacious mid-tower.

[Photo: Brad Chacos/IDG]

Two triple-slot RTX 3090s are a tight squeeze in a standard PC case.

The actual cooler design could be an issue if you plan on slapping several into a system, too, as both the Founders Edition and the MSI card (review tomorrow!) exhaust hot air back into your case rather than venting it all out with a blower-style design. If you have a big case and want to cram four or more RTX 3090s inside of it after reading this review, we’ve spotted a Gigabyte RTX 3090 with a standard 2-slot, blower-style design, though it’s unknown at this point whether adopting a less potent cooler will affect real-world performance. Of course, you could also turn to custom water cooling.

There’s no doubting the Founders Edition’s effectiveness, though. The two-slot configuration held up very well on the RTX 3080, though custom models with bigger coolers managed to run even cooler and quieter. Not here. It’s clear Nvidia engineered its radical cooler with the GeForce RTX 3090 in mind, and the beefed-up 3-slot version of the Founders Edition cooler blows us away. Even with the card running at full tilt, pushing our total system power draw over 500 watts, the Founders Edition couldn’t be heard whatsoever. It maxed out at a chilly 68 degrees Celsius under load—far cooler than most custom designs on less potent GPUs, and a full 11 degrees cooler than the RTX 3080 FE.

[Photo: Brad Chacos/IDG]

Heavy metal. The entire body of the RTX 3090 Founders Edition acts like a heatsink, bristling with thick, black fins.

All those bristling metal fins and Nvidia’s unique “Flow-through” push-pull hybrid design really paid off. Custom cards will find it difficult to match the Founders Edition’s excellent cooling. For deeper details on the cooler design, check out the Founders Edition design section of our RTX 3080 review.

Like that card, the 3090 Founders Edition uses a proprietary 12-pin power connector, and Nvidia includes a bundled adapter in the box. It’s short and ugly, though some power supply makers like EVGA offer full-length 2x 8-pin to 12-pin cables if the aesthetics bother you.

Nvidia’s GeForce RTX 30-series also upgrades to PCIe 4.0, which is currently supported only on AMD Ryzen 3000 systems with an AM4 X570 or B550 motherboard. Intel does not support the blazing-fast interface. PCIe 4.0 makes little difference if you’re gaming—the raw speed of your CPU matters more, which is why we tested this card on an overclocked 5GHz Core i7-8700K instead of upgrading to a Ryzen 3000 system. But it’s possible that some professional applications can better tap into the lightning-fast interface.

[Photo: Brad Chacos/IDG]

Peer closely and you can see the proprietary 12-pin power connector nestled among the black metal fins in the center of the RTX 3090.

Nvidia is pitching the GeForce RTX 3090 as an 8K gaming GPU. As you’ll see in our testing later, it can definitely hit 60 frames per second at 8K in many games, though it’s far from universal. In any case, Nvidia equipped the GeForce RTX 3090 with an HDMI 2.1 connector capable of handling 8K/60 over a single cable—something that couldn’t happen with previous HDMI or DisplayPort connectors.  The card also packs three DisplayPort 1.4 connections.

That setup works well enough for the RTX 3080. Considering the 8K video and enthusiast-level aims of this much pricier RTX 3090, however, I would’ve preferred to see an extra HDMI 2.1 connection, even if it meant dropping a DisplayPort. That HDMI 2.1 connection is required for 8K video, and if you’ve invested in a VR headset as well—easy to imagine if you’re this class of gamer—then you’ll need to swap the cords out when switching between the two. The RTX 3090 lacks the VR-focused VirtualLink connector found on prior-gen GeForce FE cards (after no headsets embraced it). Nvidia says it decided customers are more likely to need extra DisplayPorts than two HDMI 2.1 connections, but pointed out that the custom $1,600 Asus TUF RTX 3090 includes dual HDMI ports if you need them.

[Chart: Nvidia]

Nvidia-supplied numbers for AV1 vs. H.264 decode performance

On the video technologies front, the RTX 30-series GPUs are the first to support AV1 decode (a boon for streaming 8K content) and the ability to record 8K HDR video natively at 30 fps using GeForce Experience’s Shadowplay feature. These new features potentially negate the need to invest in a discrete video capture card to share your 8K experiences with the world. 

If you’re interested in learning more about the streaming software side of the RTX 30-series, I highly recommend watching EposVox’s excellent GeForce RTX 3080 review. He specializes in streaming and video creation tools, and delves deeply into AV1, NVENC encoding performance, and actual streaming results around the 22:08 mark. It’s a review of a less powerful card, but it should still prove insightful. If you want a deeper look at AV1 specifically, he goes into detail at the 56:52 mark of this other video. Good stuff.

Our tests using Microsoft’s AV1 video extension and this 8K/60 video on a 4K monitor upscaled to 8132x4320 using Nvidia’s Dynamic Super Resolution technology lived up to the claims. Without the GeForce RTX 3090 installed, watching that video maxed out our processor, which in turn led to massive amounts of dropped frames. But the RTX 3090’s AV1 decode support took over with it installed, easing the processor’s screams and utterly eradicating all dropped frames. Again: Good stuff.

Speaking of tests, let’s get to that. Evaluating the BFGPU will take a little more work than our typical reviews.


Our test system

Our dedicated graphics card test system is a couple of years old, but it's packed with some of the fastest complementary components available to put any potential performance bottlenecks squarely on the GPU. Most of the hardware was provided by the manufacturers, but we purchased the cooler and storage ourselves.

  • Intel Core i7-8700K processor ($300 on Amazon) overclocked to 5GHz all cores
  • EVGA CLC 240 closed-loop liquid cooler ($105 on Amazon)
  • Asus Maximus X Hero motherboard
  • 64GB HyperX Predator RGB DDR4/2933 ($355 on Amazon)
  • EVGA 1200W SuperNova P2 power supply ($352 on Amazon)
  • Corsair Crystal 570X RGB case, with front and top panels removed and an extra rear fan installed for improved airflow
  • 2x 500GB Samsung 860 EVO SSDs ($70 each on Amazon)

GeForce RTX 3090 content creation benchmarks

Our graphics card test system was designed for maximizing pure gaming performance, but because so much of the GeForce RTX 3090’s value proposition lies in its prosumer chops, we also wanted to test its content creation capabilities.

We don’t have any Titans on hand, so we’re comparing Nvidia’s $1,499 RTX 3090 Founders Edition against other gaming flagships. We’ve included the step-down $699 GeForce RTX 3080 Founders Edition with 10GB of GDDR6X in the charts below, along with the $1,200 GeForce RTX 2080 Ti Founders Edition. The former flagship packs 11GB of GDDR6. The 3-year-old $700 GeForce GTX 1080 Ti includes 11GB of GDDR5X. Finally, AMD’s bizarre and short-lived $800 Radeon VII was beloved by content creators thanks to its 16GB of HBM2 memory, so we tested that as well. (It’s screaming loud compared to Nvidia’s cards.) We ran three runs of each benchmark and averaged the results.

It’s worth noting that while Nvidia offers creator-focused Studio drivers now, tied to updates to creative applications, some of these tools realize their greatest potential on Nvidia Quadro graphics cards, which get enhanced support for specialized professional applications. They cost a whole lot more, though. Your system configuration also matters for many of these tests, so your exact mileage may vary.

Let’s start with Geekbench 5. This quick cross-platform benchmark measures GPU compute performance “using workloads that include image processing, computational photography, computer vision, and machine learning.” Higher results are better, and we’ve included both OpenCL and CUDA results. Any GPU can run OpenCL, but CUDA requires specialized hardware and software from Nvidia, hence the “zero” score for the Radeon VII on that metric.

[Chart: Geekbench 5 compute results | Brad Chacos/IDG]

It’s a massive win for the GeForce RTX 3090 here, especially once you flip on CUDA.

Moving on, Luxmark 3.1 is a pure OpenCL benchmark based on the LuxRender v1.5 engine. It offers three different scenes to test. The “simple” Luxball HDR renders 217K triangles; the “medium” Neumann TLM-102 Special Edition renders 1,769K triangles; and the “complex” Hotel Lobby renders 4,973K triangles. Higher scores are better.

[Chart: Luxmark 3.1 results | Brad Chacos/IDG]

AMD’s Radeon VII holds its own against the GTX 1080 Ti and even the RTX 2080 Ti, but these new RTX 30-series GPUs wipe the floor with it.

Nvidia’s software stack is an ace in the hole. Many developers swear by CUDA software optimizations, and now that RTX is here, Nvidia’s been rolling out its “OptiX” technology that leverages all those RT and tensor cores for creative purposes.

Blender is a very popular free and open-source 3D graphics program used to create visual effects and even full-blown movies. In 2019, Blender integrated Nvidia OptiX into Cycles, its physically based path tracer for production rendering, to tap into GeForce’s RT cores for hardware-accelerated ray tracing.

We tested Blender using the Blender Open Toolkit, running two scenes: Classroom and Victor, the latter being the most strenuous scene the benchmark offers. AMD’s Radeon VII consistently crashed when trying to run Victor, but worked fine with Classroom. We tested each graphics card with the best-performing GPU acceleration possible for that card: OpenCL for the Radeon VII, CUDA for the GTX 1080 Ti, and OptiX for the trio of GeForce RTX cards. The results show total rendering time, so lower is better.
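
If you want to run this sort of test on your own scenes, here’s a minimal Blender Python sketch showing how you’d steer Cycles toward OptiX, falling back to CUDA on pre-RTX GeForce cards. It uses the standard bpy preferences API from the Blender 2.9x era; consider it a sketch, not the exact harness used for these benchmarks.

```python
# Minimal sketch: render the current scene with Cycles on the GPU,
# preferring OptiX (RT-core acceleration on RTX cards) and falling back
# to CUDA on older GeForce GPUs. Run from within Blender's Python.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
try:
    prefs.compute_device_type = "OPTIX"  # RTX-accelerated path
except TypeError:
    prefs.compute_device_type = "CUDA"   # pre-RTX fallback (e.g. GTX 1080 Ti)

prefs.get_devices()                      # refresh the detected device list
for device in prefs.devices:
    device.use = device.type != "CPU"    # enable every GPU, skip the CPU

bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)  # render one frame to the output path
```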

[Chart: Blender render times | Brad Chacos/IDG]

The Ampere GPUs inside the GeForce RTX 3080 and 3090 smash all prior options, especially in the more difficult Victor scene. Even the RTX 3080 explodes far ahead of the former RTX 2080 Ti flagship, though the 3090 starts to flex its muscles as complexity ramps up.


Let’s continue our look at the benefits Nvidia-specific software can provide with a pair of tools that require GeForce graphics cards. The Radeon VII, obviously, isn’t included in the next couple of tests.

Maxon’s Redshift is a GPU-accelerated biased renderer that requires CUDA-capable graphics cards. It’s been used in the real world by major companies like Jim Henson’s Creature Shop and Blizzard Entertainment. We tested our cards using the hard-to-find Redshift “Age of Vultures” benchmark in the demo version. Redshift has also implemented OptiX and supports additional acceleration with RTX graphics cards. We enabled that for all applicable GPUs, while the GTX 1080 Ti stuck to bare CUDA. The results listed are seconds to render, so again, lower is better.

[Chart: Redshift render times | Brad Chacos/IDG]

Yep, it’s a whupping. The GeForce RTX 3090 is over 65 percent faster than the GeForce RTX 2080 Ti, which offered most of the performance of the $2,500 RTX Titan, albeit with lower memory capacity. As you can see, though, if your specific workloads don’t tap into the 3090’s massive 24GB memory buffer, it’s only slightly faster than the much cheaper RTX 3080.

Next up: OctaneBench 2020 v1.5. This is a canned test offered by OTOY to benchmark your system’s performance with the company’s OctaneRender, an unbiased, spectrally correct GPU engine. OctaneBench (and OctaneRender) also integrate Nvidia’s OptiX API for accelerated ray tracing performance on GPUs that support it. The RTX cards do; the GTX 1080 Ti again sticks to CUDA. The benchmark spits out a final score after rendering several scenes, and the higher it is, the better your GPU performs.

[Chart: OctaneBench 2020 scores | Brad Chacos/IDG]

The GeForce RTX 3090 scores a massive 86 percent higher than the RTX 2080 Ti, and 19 percent higher than the RTX 3080.

Okay, let’s get back to non-CUDA-specific tests. When you’re talking about professional tools, viewport performance matters. Unlike with gaming, brute GPU strength doesn’t always win in ProViz. SPECviewperf 13 measures GPU viewport performance using traces from 3ds Max, Maya, Siemens NX, Creo, CATIA, and SolidWorks, as well as energy and medical tests that draw on datasets typical of those industries. Both AMD and Nvidia contribute to the project as part of SPECgpc.

Higher scores are better. Also note that this particular set of specialized benchmarks is likely to score higher with Nvidia Quadro or Radeon Pro cards thanks to their optimized professional drivers, but we don’t have any on hand to test.

[Charts: SPECviewperf 13 results | Brad Chacos/IDG]

The GeForce RTX 3090 generally offers better viewport performance than other options, though it doesn’t smash quite as hard as it does in other tests. These are very application-dependent results. Still, if you use these tools, these benchmarks indicate you’ll notice faster viewport responsiveness with Nvidia’s new GPU.

Our final set of benchmarks examines rendering performance in DaVinci Resolve Studio 16, a production tool that “combines professional 8K editing, color correction, visual effects and audio post production.” (Nvidia provided an activation code for our testing.) It’s very popular among creative professionals, especially for 8K media editing.

We tested these GPUs using Puget System’s DaVinci Resolve Studio Benchmark. Puget Systems specializes in creating high-end, custom professional workstations, and the company crafted a series of benchmarks for various applications to quantify performance.

Puget offers several different benchmarks for DaVinci Resolve. We used the 4K benchmark, which requires 16GB of system RAM and at least 8GB of GPU VRAM. It tests a variety of media codecs, and each of the codecs gets put through the following gauntlet, per Puget:

“For the 4K and 8K render tests, we benchmark 5 different levels of grade:

  • “Optimized Media” - No effects applied in order to simulate the performance when generating optimized media
  • “Basic Grade” - Simple color wheel and similar adjustments with a base grade plus 4 power windows.
  • “OpenFX - Lens Flare + Tilt Shift Blur + Sharpen” - Basic Grade plus four OpenFX
  • “Temporal Noise - Better 2 Frames” - Basic Grade plus a single TNR node
  • “3x Temporal Noise - Better 2 Frames” - Basic Grade plus three TNR nodes using splitter/combiner nodes

The “Optimized Media” timeline is rendered out to MXF OP1A DNxHR LB at 1920x1080, while the others are all rendered out to Quicktime DNxHR HQ at the native resolution of the timeline (UHD or 8K).”

Puget’s tool measures the frame rate each card achieves in each benchmark on each codec, then spits out an average score for each type of test, along with an overall score. Those scores are based on performance relative to Puget’s reference workstation with a Core i9-9900K and a 24GB RTX Titan; the higher the score, the better. Nvidia GPUs use CUDA in DaVinci Resolve, while the Radeon VII leans on OpenCL.
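
Puget doesn’t spell out every weighting detail here, but the gist is simple normalization. A purely hypothetical sketch (the test names and frame rates below are made up for illustration, not measured results):

```python
# Hypothetical sketch of Puget-style relative scoring: normalize each
# test's FPS against the reference workstation's result (Core i9-9900K
# plus 24GB RTX Titan), then average. The exact weighting may differ,
# and every number below is invented for illustration.
reference_fps = {"basic_grade": 80.0, "openfx": 40.0, "temporal_noise": 30.0}
measured_fps = {"basic_grade": 96.0, "openfx": 58.0, "temporal_noise": 45.0}

relative = [measured_fps[test] / reference_fps[test] for test in reference_fps]
overall = 100 * sum(relative) / len(relative)
print(f"Overall score: {overall:.0f}")  # 100 would match the reference rig
```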

[Charts: DaVinci Resolve Studio benchmark scores | Brad Chacos/IDG]

As the Overall Score and Basic Grade results show, you’ll notice a decent performance bump with the GeForce RTX 3090. But once you start going heavy with the GPU effects, the GeForce RTX 3090 starts to roar, as evidenced in the Open FX and Temporal Noise results.

Puget’s benchmarking tool also proved the worth of one of the GeForce RTX 3090’s key features: its massive 24GB of GDDR6X VRAM. We hoped to run Puget’s 8K benchmark suite as well, but every Nvidia card except the 3090 ran out of memory and crashed during the attempt. If you’re editing 8K video, you need a graphics card with 24GB of VRAM, and the GeForce RTX 3090 offers it for $1,000 less than the RTX Titan did.

[Chart: Nvidia]

Nvidia-supplied results in a memory-heavy OctaneRender test. The 3090’s 24GB of VRAM makes a massive difference to overall performance.

Exceeding your GPU’s VRAM makes some applications (like Blender and DaVinci Resolve) fail their tasks outright. Other tools may let you spill over into general system memory if the scene you’re working on exceeds the capacity of your VRAM, but doing so takes a huge toll on rendering time.

Nvidia’s reviewer’s guide walked through just such a scenario with OctaneRender. The results above show how significantly performance improves if you can keep the entire workload directly on your graphics card’s VRAM rather than going “out-of-core” to system memory.
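
If you want to know ahead of time whether a scene will stay “in core” on your own card, you can query free VRAM before kicking off a render. A minimal sketch using the pynvml bindings for Nvidia’s NVML library (pip install pynvml; device index 0 and the 20GiB scene budget are illustrative assumptions):

```python
# Minimal sketch: check free VRAM via NVML before starting a render, so
# you know whether a scene will fit entirely "in core." Device index 0
# assumes a single-GPU system; the 20 GiB budget is an arbitrary example.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

gib = 1024 ** 3
print(f"VRAM: {mem.used / gib:.1f} GiB used / {mem.total / gib:.1f} GiB total")
if mem.free < 20 * gib:  # a scene needing ~20 GiB would spill out-of-core
    print("Scene may spill to system RAM; expect much slower renders.")
pynvml.nvmlShutdown()
```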


GeForce RTX 3090 4K and 1440p gaming benchmarks

Phew! That was a lot of work. Let’s see how the GeForce RTX 3090 plays. Mullet time.

We’re comparing the $1,500 GeForce RTX 3090 Founders Edition against Nvidia’s $700 GeForce RTX 3080 Founders Edition, of course. We’ve also included results for a bunch of prior-gen Founders Edition cards: Nvidia’s $800 GeForce RTX 2080, $1,200 RTX 2080 Ti, and the older $700 GTX 1080 FE. (MSRP prices for the 1080 and 2080 started at $100 less, but Nvidia charged a premium for the FE models.) We’re also including the EVGA GTX 1080 Ti SC2 in our roundup, as our GTX 1080 Ti FE gave up the ghost years ago. AMD Radeon graphics cards can’t compete with Nvidia’s enthusiast-class GPUs in gaming performance, so they’re not included here.

[Photo: Brad Chacos/IDG]

The GeForce RTX 3090 FE in our test system, with EVGA’s optional full-length 12-pin cable installed because the bundled adapter is fugly.

We test a variety of games spanning various engines, genres, and graphics APIs (DirectX 11, DX12, and Vulkan). Each game is tested using its in-game benchmark at the highest possible graphics presets unless otherwise noted, with VSync, frame rate caps, real-time ray tracing or DLSS effects, and FreeSync/G-Sync disabled, along with any other vendor-specific technologies like FidelityFX. We’ve also enabled temporal anti-aliasing (TAA) to push these cards to their limits. We run each benchmark at least three times and list the average result for each test. We tested the older cards using Nvidia’s publicly available 452.06 Game Ready driver, the RTX 3080 Founders Edition model using a 452.16 driver provided early to reviewers, and the RTX 3090 FE using the publicly available 456.38 driver.

Overall, the GeForce RTX 3090 offers about 10 to 15 percent more performance than the RTX 3080 at 4K resolution, and less of an improvement at 1440p. We’ll present these 4K and 1440p gaming benchmarks but withhold additional commentary for our final analysis. Our RTX 3080 review already proved that cards this potent aren’t a good 1080p option, as cheaper cards are just as fast at that resolution.

Horizon Zero Dawn

Yep, Sony exclusives are hitting the PC now. Horizon Zero Dawn hit Steam with some performance issues, but the most egregious ones have mostly been cleared up, thanks to hard work from the developers. The game topped the sales charts for weeks after its release. It also seems to respond somewhat to PCIe 4.0 scaling, which will make this an interesting inclusion when we shift to a PCIe 4.0-based system in the future.

Horizon Zero Dawn runs on Guerrilla Games’ Decima engine, the same engine that powers Death Stranding. Ambient Occlusion can still offer iffy results if set to Ultra, so we test with that setting at Medium. Every other visual option is maxed out.

[Chart: Horizon Zero Dawn benchmarks | Brad Chacos/IDG]

Gears Tactics

Gears Tactics puts its own brutal, fast-paced spin on the XCOM-like genre. This Unreal Engine 4-powered game was built from the ground up for DirectX 12. We love being able to work a tactics-style game into our benchmarking suite.

Better yet, the game comes with a plethora of graphics options for PC snobs. More games should devote such loving care to explaining what all these visual knobs actually do. You can’t use the presets to benchmark Gears Tactics, as it intelligently scales to work best on your installed hardware, meaning that “Ultra” on one graphics card can load different settings than “Ultra” on a weaker card. We manually set all options to their highest possible settings.

Fun fact: The GeForce RTX 3080 FE is the only graphics card that doesn’t generate a “Your GPU can’t handle this” warning when enabling Glossy Reflections, and only the 3080 and the RTX 2080 Ti lack that warning for Planar Reflections. Told you these cards are monsters.

[Chart: Gears Tactics benchmarks | Brad Chacos/IDG]

Metro Exodus

One of the best games of 2019, Metro Exodus is one of the best-looking games around, too. The latest version of the 4A Engine provides incredibly luscious, ultra-detailed visuals, with one of the most stunning real-time ray tracing implementations released yet. We test in DirectX 12 mode with ray tracing, Hairworks, and DLSS disabled for our basic benchmarks.

[Chart: Metro Exodus benchmarks | Brad Chacos/IDG]


Borderlands 3

Borderlands is back! Gearbox’s game defaults to DX12, so we do as well, and it gives us a glimpse at the ultra-popular Unreal Engine 4’s performance in a traditional shooter.

[Chart: Borderlands 3 benchmarks | Brad Chacos/IDG]

Strange Brigade

Strange Brigade is a cooperative third-person shooter where a team of adventurers blasts through hordes of mythological enemies. It’s a technological showcase, built around the next-gen Vulkan and DirectX 12 technologies and infused with features like HDR support and the ability to toggle asynchronous compute on and off. It uses Rebellion’s custom Azure engine. We test using the Vulkan renderer, which is faster than DX12.

[Chart: Strange Brigade benchmarks | Brad Chacos/IDG]

Total War: Troy

The latest game in the popular Total War saga, Troy was given away free for its first 24 hours on the Epic Games Store, moving over 7.5 million copies before it went on proper sale. Total War: Troy is built using a modified version of the Total War: Warhammer 2 engine. This DX11 title looks stunning for a turn-based strategy game. We use the more intense battle benchmark scene.

[Chart: Total War: Troy benchmarks | Brad Chacos/IDG]

F1 2020

The latest in a long line of successful racing games, F1 2020 is a gem to test, supplying a wide array of both graphical and benchmarking options, making it a much more reliable (and fun) option than the Forza series. It’s built on the latest version of Codemasters’ buttery-smooth Ego game engine, complete with support for DX12 and Nvidia’s DLSS technology. We test two laps on the Australia course, with clear skies on and DLSS off.

[Chart: F1 2020 benchmarks | Brad Chacos/IDG]

Shadow of the Tomb Raider

Shadow of the Tomb Raider concludes the reboot trilogy, and it’s utterly gorgeous. Square Enix optimized this game for DX12, and recommends DX11 only if you’re using older hardware or Windows 7, so we test with DX12. Shadow of the Tomb Raider uses an enhanced version of the Foundation engine that also powered Rise of the Tomb Raider and includes optional real-time ray tracing and DLSS features.

[Chart: Shadow of the Tomb Raider benchmarks | Brad Chacos/IDG]

GTA V

This DX11 game isn’t really a visual barn-burner like the (somewhat wonky) Red Dead Redemption 2, but it still tops the Steam charts day in and day out, so we deem it more worthy of testing. RDR2 will melt your graphics card, sure, but GTA V remains so popular years after launch that upgraded versions of it will be available on the next-generation consoles. That’s staying power.

We test Grand Theft Auto V with all options turned to Very High, all Advanced Graphics options except extended shadows enabled, and FXAA. GTA V runs on the RAGE engine and has received substantial updates since its initial launch.

[Chart: GTA V benchmarks | Brad Chacos/IDG]

Rainbow Six Siege

Like GTA V, Ubisoft’s Rainbow Six Siege still dominates the Steam charts years after its launch, and it’ll be getting a visual upgrade for the next-gen consoles. The developers have poured a ton of work into the game’s AnvilNext engine over the years, eventually rolling out a Vulkan version of the game that we use to test. By default, the game lowers the render scaling to increase frame rates, but we set it to 100 percent to benchmark native rendering performance on graphics cards. Even still, frame rates soar.

[Chart: Rainbow Six Siege benchmarks | Brad Chacos/IDG]


GeForce RTX 3090 8K gaming benchmarks

Now let’s get a little weird.

[Image: Nvidia 8K gaming splash | Nvidia]

As our 4K gaming benchmarks show, the GeForce RTX 3090 is insanely powerful. It’s so powerful, in fact, that Nvidia pitched this “BFGPU” as the world’s first 8K graphics card as part of its marketing, then bolstered that claim with AV1 decoding, HDMI 2.1 for 8K/60 over a single cable, and native 8K/30 video capture with GeForce Experience’s Shadowplay feature.

The company even upgraded its fantastic Deep Learning Super Sampling (DLSS) 2.0 technology to 2.1 to support 8K gaming. Whereas DLSS 2.0 previously topped out at 4x upscaling—rendering games at 1080p, then using smart AI-based upscaling to render at 4K resolution with little to no visual degradation—DLSS 2.1 deploys “9x AI Super Resolution,” using your GPU to render visuals at 2560x1440 resolution then upscaling the image to 8K. Nvidia calls the new 8K support “Ultra performance mode.”
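
Those “4x” and “9x” figures are simply pixel-count ratios between the render resolution and the output resolution, as a quick sanity check shows:

```python
# The "4x" and "9x" DLSS labels are pixel-count ratios between the
# render resolution and the output resolution.
def pixel_ratio(render, output):
    return (output[0] * output[1]) / (render[0] * render[1])

print(pixel_ratio((1920, 1080), (3840, 2160)))  # 4.0 -> 1080p to 4K
print(pixel_ratio((2560, 1440), (7680, 4320)))  # 9.0 -> 1440p to 8K "Ultra performance"
```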

It’s a smart use of DLSS 2.0, a technology I wish more games embraced. Nvidia sent over access to beta versions of Death Stranding and Control that support 8K Ultra Performance Mode. I was, indeed, able to play those games just fine at 8K with DLSS on at High presets. Well, kinda 8K. I don’t have a pricey 8K monitor or TV, so I used Nvidia’s 4x Dynamic Super Resolution feature to unlock 8132x4320 resolution on my 4K monitor. That’s a smidge above the official 7680 x 4320 “8K” resolution. F1 2020’s DLSS 2.0 (not 2.1) also helped that game crack the hallowed 60fps barrier at the high preset.

[Chart: Nvidia]

Nvidia-supplied benchmarks of various games at 8K using High visual presets.

Look at the rest of the games in Nvidia’s 8K benchmarks above, though. They’re mostly lighter, less-intense games, with several esports titles and the fantastically optimized Forza Horizon 4 in the mix. Screwing around in Destiny 2 and Rainbow Six Siege at High settings indeed proved playable at my pseudo-8K resolution. RB6 offers wonderful resolution scaling tools that helped squeeze out more performance with minor visual compromises.

But is the RTX 3090 really an 8K graphics card? I decided to run several games in our benchmark suite at the strenuous resolution, both at our original Ultra settings, then again at High settings. (Borderlands 3 couldn’t see the higher Nvidia DSR resolutions, for some reason.) Here are the results, but keep in mind that because my DSR resolution is slightly higher than actual 8K, performance should be 3 to 5 percent higher on a native 8K display:

[Chart: 8K gaming benchmarks | Brad Chacos/IDG]

As you can see, it’s a mixed bag. Grand Theft Auto V, Strange Brigade, and F1 2020 indeed hover around 60 fps on average at 8K with High settings. Despite the 35-fps average, Gears Tactics also felt okay—not great, but okay—in real life, because it’s a slower turn-based game. Metro Exodus, Total War: Troy, Horizon Zero Dawn, and Shadow of the Tomb Raider didn’t perform well at 8K. In fact, even the menus felt sluggish in Shadow of the Tomb Raider because they’re full of animated characters and scenes.

Several games (HZD and Gears Tactics among them) include dynamic resolution scaling features that can deliver higher performance when enabled, but they work by scaling down the actual render resolution and then fitting the result to your screen. The dynamic resolution scaling you find in most games doesn’t use any of the AI smarts that DLSS 2.0 does, and it can sometimes get real ugly, real quick depending on the implementation. Play around with the feature if you think it can help you play on a 4K monitor, but keep an eye out for blurriness.

We also ran into some funkiness with games. Shadow of the Tomb Raider uses first-gen DLSS technology that tops out at 4K resolution, so we couldn’t activate it at 8K. Total War: Troy’s user interface scaled horribly at 8K, rendering incredibly small, which is a bummer in a menu-heavy game.

Bottom line? The GeForce RTX 3090 can indeed game at 8K—sometimes. The sky-high resolution still manages to bring even this beast to its knees in many triple-A games. Consider 8K gaming more of a bonus than the RTX 3090’s gold standard, unless you mostly play esports games. Here’s hoping that more developers embrace the awesome potential of DLSS 2.0 in the future, as it looks like that makes all the difference in the world, both in 8K and at lower resolutions.


Power draw, thermals, and noise

We test power draw by looping the F1 2020 benchmark at 4K for about 20 minutes after we’ve benchmarked everything else and noting the highest reading on our Watts Up Pro meter, which measures the power consumption of our entire test system. The initial part of the race, where all competing cars are onscreen simultaneously, tends to be the most demanding portion. 

This isn’t a worst-case test; we removed the Core i7-8700K’s overclock and specifically chose a GPU-bound game running at a GPU-bound resolution to gauge power consumption when the graphics card is sweating hard. If you’re playing a game that also hammers the CPU, you could see higher overall system power draw. We saw Borderlands 3 hit 590W on our system. Consider yourself warned.

[Chart: Total system power draw | Brad Chacos/IDG]

The GeForce RTX 3090 Founders Edition is the first graphics card to top 500W of total system power draw in this test. Considering the RTX 3080 FE drew 482 watts, this isn’t so bad given the RTX 3090’s higher performance. Still, you might want to replace your power supply for RTX 30-series GPUs if your current one is on the modest side. They draw significantly more juice than previous GeForce generations.

We test thermals by leaving GPU-Z open during the F1 2020 power draw test, noting the highest maximum temperature at the end.

[Chart: GPU temperatures | Brad Chacos/IDG]

At this level of performance, raw power draw probably isn’t a major consideration. Thermals and acoustics very much matter for enthusiasts, though, and Nvidia’s GeForce RTX 3090 Founders Edition excels in those departments.

While the 2-slot 3080 FE proved cool enough and quiet enough, it was bested by beefier 3-slot custom designs like the MSI Gaming X Trio. It’s clear that the 3090 was Nvidia’s true focus, though. The massive 3-slot design of the 3090 Founders Edition runs ice cold, a full 11 degrees cooler than its 3080 sibling. Nvidia’s radical cooler goes toe-to-toe with the best custom coolers we’ve ever tested, aside from over-the-top liquid-cooled models. Better yet, it’s utterly silent. You won’t hear it whatsoever even running the most strenuous 4K games or GPU-heavy DaVinci Resolve workloads.

Bravo. Nvidia’s RTX 3090 Founders Edition cooler is an engineering marvel, albeit a gigantic one. Custom-cooled third-party versions of the RTX 3090 will be hard-pressed to beat Nvidia’s thermals and acoustics.

Should you buy a GeForce RTX 3090?

It depends on what you’re doing.

[Photo: Brad Chacos/IDG]

There’s no doubt that the $1,500 GeForce RTX 3090 is indeed a “big ferocious GPU,” and the most powerful consumer graphics card ever created. The Nvidia Founders Edition delivers unprecedented performance for 4K gaming, frequently maxes out games at 1440p, and can even play at ludicrous 8K resolution in some games. It’s a beast for 3440x1440 ultrawide gaming too, as our separate ultrawide benchmarks piece shows. HDMI 2.1 and AV1 decode support are delicious cherries on top.

If you’re a pure gamer, though, you shouldn’t buy it, unless you’ve got deep pockets and want the best possible gaming performance, value be damned. The $700 GeForce RTX 3080 offers between 85 and 90 percent of the RTX 3090’s 4K gaming performance (depending on the game) for well under half the cost. It’s even closer at 1440p.

If you’re only worried about raw gaming frame rates, the GeForce RTX 3080 is by far the better buy, because it also kicks all kinds of ass at 4K and high refresh rate 1440p and even offers the same HDMI 2.1 and AV1 decode support as its bigger brother. Nvidia likes to boast that the RTX 3090 is the first 8K gaming card, and while that’s true in some games, it falls far short of the 60 frames per second mark in many triple-A titles. Consider 8K gaming a nice occasional bonus more than a core feature.

[Photo: Brad Chacos/IDG]

NVLink is a major deal for some professional workloads, and only the GeForce RTX 3090 offers it.

If you mix work and play, though, the GeForce RTX 3090 is a stunning value—especially if your workloads tap into CUDA. It’s significantly faster than the previous-gen RTX 2080 Ti, which fell within spitting distance of the RTX Titan, and it offers the same 24GB VRAM capacity as that Titan for $1,000 less.

The GeForce RTX 3090 stomps all over most of our content creation benchmarks. Performance there is highly workload-dependent, of course, but we saw speed increases of anywhere from 30 to over 100 percent over the RTX 2080 Ti in several tasks, with many falling in the 50 to 80 percent range. That’s an uplift that will make your projects render tangibly faster—putting more money in your pocket. The lofty 24GB of GDDR6X memory makes the RTX 3090 a must-have in some scenarios where the 10GB to 12GB found in standard gaming cards flat-out can’t cut it, such as 8K media editing or AI training with large data sets. That alone will make it worth buying for some people, along with the NVLink connector that no other RTX 30-series GPU includes. If you don’t need those, the RTX 3080 comes close to the RTX 3090 in raw GPU power in many tests.

Listen: You know if you need a card this powerful for professional tasks. You know if you need a 24GB memory buffer or NVLink. If you do, this is the card to get, full stop. The GeForce RTX 3090 indeed offers Titan-class performance, but without the Titan-brand price tag, and it’s the best gaming GPU in the world on top of that.

[Photo: Brad Chacos/IDG]

Nvidia’s highly custom GeForce RTX 30-series Founders Edition design worked very well with the 3080, but the expanded 3-slot version used with the GeForce RTX 3090 excels. It looks gorgeous, maxes out at a frigid 68 degrees Celsius under full load, and stays utterly silent the entire time. This unique “flow-through” cooler is exceptional here.

Enthusiast-priced hardware needs to meet enthusiast-class demands, though, and I have a couple of minor quibbles with the otherwise exceptional FE. The card’s bulk means you can only fit a couple in a standard PC case, and while the rear fan blows most of the hot air out of your case, the front fan exhausts some heat back into your case. Those could both be an issue if you plan on packing a system full of these for professional tasks, though workarounds definitely exist for each. (Gigabyte plans to offer a 2-slot RTX 3090 with a standard blower design that fully exhausts hot air.) Since this card also targets price-is-no-object gamers, I wish Nvidia had equipped the Founders Edition with two HDMI 2.1 ports for enthusiasts who want to use both an 8K display and a VR headset. (The Asus TUF 3090 does exactly that.) I still don’t like the look of the short, ugly 12-pin power adapter, either.

Those gripes may be big issues for some people, but they’re nitpicks in general.

Nvidia calls the RTX 3090 a “BFGPU,” and it’s just as potent as the BFG it takes its name from. If you need the ultimate graphics card for work and play, the GeForce RTX 3090 earns our hearty recommendation, along with our Editors’ Choice award. It topples Titans for $1,000 less and chews through 4K gaming with nary a whisper. Buy it if you can put that massive 24GB VRAM buffer and NVLink to good work—or save a bunch of money with the RTX 3080 if you’re only looking to get outrageous gaming frame rates.