AMD - Compute Utilisation

Today I noticed that when enabling ray tracing, WoW does not seem to start using the compute units of the AMD RX 7900 XTX, whereas games such as RE4 Remaster do.

Is this intentional? Will that change?

The ray tracing in WoW is extremely poorly implemented, so I'm not at all surprised it doesn't use AMD's compute. It's not just AMD though: when I enable ray tracing it doesn't use the Tensor Cores on my 2080 Ti either.

The game can’t decide how the ray tracing will be computed, as that is the job of the GPU driver and the DX12 API. The game issues commands to the driver, and the driver talks to the GPU.
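That layering can be sketched as a toy model (plain Python, not real D3D12 code; the class and method names here are invented purely for illustration): the game issues the same generic “dispatch rays” command no matter the vendor, and each driver decides how to execute it on its hardware.

```python
# Illustrative sketch only (NOT real D3D12/DXR code): the game talks to a
# generic interface; the vendor driver picks the execution strategy.

class VendorDriver:
    """Hypothetical stand-in for a GPU driver's DXR implementation."""

    def __init__(self, name: str, uses_dedicated_rt_units: bool):
        self.name = name
        self.uses_dedicated_rt_units = uses_dedicated_rt_units

    def dispatch_rays(self, width: int, height: int) -> str:
        # The driver, not the game, decides what silicon does the work.
        unit = ("dedicated RT cores" if self.uses_dedicated_rt_units
                else "general-purpose shaders")
        return f"{self.name}: tracing {width * height} rays on {unit}"

# The game-side code is identical regardless of vendor:
for driver in (VendorDriver("Nvidia", True), VendorDriver("AMD", False)):
    print(driver.dispatch_rays(1920, 1080))
```

The point of the sketch is only that the per-vendor choice lives behind the interface, which is why the same WoW build can run its ray-traced shadows on Nvidia, AMD, and Intel cards.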

What parameter and in which app are you looking at?


This is within Task Manager; it seems to display compute utilisation in RE4 Remaster etc. on the current driver.

And that is the problem. AMD has massive CPU overhead which, if you don’t have an extremely fast CPU, can and will cause issues. Nvidia has dedicated hardware for ray tracing. That is why ray tracing on AMD has literally half the performance of Nvidia.

Windows Task Manager can be limited in what it shows and how accurately. Radeon software shows more, but overall people tend to trust HWiNFO. Due to lower FPS, some GPU utilisation metrics may be lower; total utilisation should rather increase, and CPU usage should drop.

You can check some of my DXR benchmarks/analyses:

https://rkblog.dev/posts/wow/analyzing-ray-traced-shadows-world-warcraft/

https://rkblog.dev/posts/pc-hardware/ray-tracing-and-api-scaling-on-intel-arc-a380/

https://rkblog.dev/posts/wow/benchmarking-ryzen-5900x-and-rtx-3070-wow/#5


What overhead? Credible source or stop talking nonsense.

So do AMD and Intel graphics cards. Putting hardware acceleration on single-purpose silicon, as Nvidia does, doesn’t make it better on its own; Nvidia’s better design does.

WoW has simple ray-traced shadows, visible only in SL and newer zones. The effect is minor, yet it will hit your FPS noticeably. From my tests, it’s around 30% for RTX 20 cards, 20% for Intel Arc, and around 10% for RTX 30 out of combat, rising to 30%–40% in combat/motion. AMD DXR is still to be tested.


Guess I got it mixed up; it’s Nvidia that has the CPU overhead issues. My fault.

No, they DO NOT. AMD uses its shaders for ray tracing; Nvidia uses RT and Tensor Cores, both of which are infinitely more powerful than anything AMD has.

RTX and ray tracing are NOT the same thing. RTX is special code specific to Nvidia hardware. It tells the game to use Nvidia’s RT hardware, hence the enormous performance lead Nvidia has even without DLSS/FSR.

RTX is a brand name. DXR is the part of the DX12 feature set responsible for the raytracing interface, which each GPU driver then implements. Nvidia had their own API before DX12 Ultimate went public, but in the end it’s just an interface, and each driver has “specific code” for its vendor.

WoW uses the standard DX12 interface, DXR, and works with Intel, AMD, and Nvidia cards.

Enabling ray tracing lowers performance versus pure raster no matter whether it’s Nvidia or AMD. Enable it in WoW and you can lose up to around 40% of your FPS just like that.

They aren’t “infinitely more powerful”. I would recommend reading the Chips and Cheese article on the implementations. Using single-function silicon vs multi-function silicon isn’t inherently good or bad design.

https://chipsandcheese.com/2023/03/22/raytracing-on-amds-rdna-2-3-and-nvidias-turing-and-pascal/

AMD takes a more conventional approach that wouldn’t be out of place on a CPU, and uses a rigid BVH format that allows for simpler hardware. AMD’s RT accelerators don’t have to deal with variable length nodes. However, AMD BVH makes it more vulnerable to cache and memory latency, one of a GPU’s traditional weaknesses. RDNA 3 counters this by hitting the problem from all sides. Cache latency has gone down, while capacity has gone up. Raytracing specific LDS instructions help reduce latency within the shader that’s handling ray traversal. Finally, increased vector register file capacity lets each WGP hold state for more threads, letting it keep more rays in flight to hide latency. A lot of these optimizations will help a wide range of workloads beyond raytracing. It’s hard to see what wouldn’t benefit from higher occupancy and better caching.

Nvidia takes a radically different approach that plays to a GPU’s advantages. By using a very wide tree, Nvidia shifts emphasis away from cache and memory latency, and toward compute throughput. With Ada, that’s where I suspect Nvidia’s approach really starts to shine. Ampere already had a staggering SM count, with RT cores that have double the triangle test throughput compared to Turing. Ada pushes things further, with an even higher SM count and triangle test throughput doubled again. There’s a lot less divide and conquer in Nvidia’s approach, but there’s a lot more parallelism available. And Nvidia is bringing tons of dedicated hardware to plow through those intersection tests.

I suspect Nvidia’s bringing that together with the factors above to stay roughly one generation ahead of AMD in raytracing workloads. With regards to raytracing strategy, and how exactly to implement a BVH, I don’t think there’s a fundamentally right or wrong approach. Nvidia and AMD have both made significant strides towards better raytracing performance. As raytracing gets wider adoption and more use though, we may see AMD’s designs trend towards bigger investments into raytracing.
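The narrow-vs-wide tradeoff the quoted article describes can be put into back-of-the-envelope numbers (a toy model only; neither vendor’s real BVH format is this simple, and the scene size is made up): a ray descends roughly log_b(N) levels of dependent, latency-sensitive node fetches, while each node visited costs up to b parallel, throughput-sensitive box/triangle tests. Widening the tree trades the former for the latter.

```python
import math

def bvh_traversal_profile(num_primitives: int, branching_factor: int):
    """Toy model of BVH traversal cost per ray:
    - levels: dependent node fetches (serialized, latency-bound)
    - tests per node: parallel intersection tests (throughput-bound)."""
    levels = math.ceil(math.log(num_primitives, branching_factor))
    return levels, branching_factor

N = 1_000_000  # hypothetical primitive count for a scene
for b in (4, 16):  # narrower vs wider tree
    levels, tests = bvh_traversal_profile(N, b)
    print(f"branching {b:2d}: ~{levels} dependent fetches, {tests} tests per node")
```

With these made-up numbers, the wide tree halves the chain of dependent (cache-latency-exposed) fetches at the cost of four times as many intersection tests per node, which is the “shift emphasis away from latency, toward compute throughput” point made above.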

This game is very broken. I hope next year they bring back more talent, cut down on crunch, fire anyone with a bad influence on game production, delay whatever is next, and just reboot World of Warcraft back to before Warcraft 1. I have a feeling that if they do that, World of Warcraft will come back really strong.
And make the WoW subscription part of Game Pass.

Not at all true. RTX is the branding of Nvidia’s ray tracing implementation. It covers both the software behind it and their RT cores, so actual hardware.

DXR is Microsoft’s ray tracing API. It basically allows non-RTX hardware (without Nvidia’s ray tracing cores) to perform ray tracing calculations, which will hopefully be fast enough to be feasible.

Your own article proves what I said is correct? I’m confused why you’re saying I’m wrong when the quotes you posted say Nvidia uses specific hardware for ray tracing.

This is wrong. DXR API is used by Nvidia, AMD, and Intel and is a part of DX12. THERE IS NO MAGICAL Nvidia API and YOU DO NOT NEED Tensor/RT cores to do HARDWARE ACCELERATED ray tracing.

RTX is a BRAND NAME. Nvidia drivers implement the DX12 and Vulkan APIs for ray tracing and, just like any other driver, what they do in the background is their proprietary business.

Nvidia uses a SINGLE-FUNCTION solution: a part of the silicon that is used ONLY for ray tracing. AMD and Intel use a MULTI-PURPOSE solution where part of the silicon can perform raster OR ray tracing BVH traversal operations. It’s more cost-effective, as you don’t waste chip die space on a function that may sit idle if DXR is not used.

You don’t need SINGLE-FUNCTION silicon to perform DXR with HARDWARE ACCELERATION. Tensor/RT cores ARE NOT NEEDED. So stop talking nonsense and white-knighting for Nvidia. There is no need for that.

Not sure where I said you NEED any of those. I said Nvidia has a massive ray tracing performance lead BECAUSE of those hardware cores.

I never said it was a different API? Not sure why you’re disagreeing with me and then proceeding to explain how I am right? Like, what are you even doing?

That is exactly what I said… AMD uses shaders, and Nvidia has specific hardware for ray tracing… Can you not read or something?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.