Wednesday 26 December 2018

The GPU Compute Performance From The NVIDIA GeForce GTX 680 To TITAN RTX

A few days back we posted initial Linux benchmarks of the NVIDIA TITAN RTX graphics card, the company's newest flagship Titan card, which began shipping just days ago. That initial performance review included a look at the TensorFlow performance and other compute tests along with some Vulkan Linux gaming benchmarks. This article looks at a more diverse range of GPU compute benchmarks while testing thirteen NVIDIA graphics cards going back to the GTX 680 Kepler days.
Besides covering the raw Linux GPU compute performance of a diverse range of NVIDIA cards, this comparison also includes AC system power consumption, performance-per-Watt metrics, and thermal data. All of that data was generated in a fully-automated and reproducible manner using the open-source Phoronix Test Suite benchmarking software, with the AC system power data polled by PTS from a WattsUp Pro power meter.
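As a rough illustration of how a performance-per-Watt figure can be derived from polled power readings, here is a minimal Python sketch. The function name and sample values are hypothetical and only illustrative; they are not the actual Phoronix Test Suite implementation or API.

```python
# Hypothetical sketch: performance-per-Watt from a benchmark score and
# Watt samples polled from a power meter during the run. Illustrative
# only -- not how PTS implements this internally.
def perf_per_watt(score, power_samples_w):
    """Score divided by the mean of the wattmeter samples for the run."""
    avg_w = sum(power_samples_w) / len(power_samples_w)
    return score / avg_w

# Example Watt readings from a meter polled during a benchmark run.
samples = [310, 318, 315, 350, 312]
print(round(perf_per_watt(630.0, samples), 3))  # 630 / 321 W -> 1.963
```

The peak value is simply `max(power_samples_w)`, which is how the per-card peak figures later in the article would be reported.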
All of the tests were done on an Intel Core i9 9900K system running Ubuntu 18.04.1 LTS with the Linux 4.19 kernel, the NVIDIA 415.23 driver, and CUDA 10.0.
The tests today ranged from OpenCL desktop workloads like Darktable to OctaneBench 4.0 to various CUDA/OpenCL scientific programs, FAHBench, LuxMark, and others. Again, if you are interested in TensorFlow performance with different models and precision, check out the article from last week for all of those current numbers. The cards tested in this benchmarking go-around included the:
- GTX 680
- GTX 780 Ti
- GTX 970
- GTX 980
- GTX 980 Ti
- GTX TITAN X GM200
- GTX 1060
- GTX 1070
- GTX 1080
- GTX 1080 Ti
- RTX 2080
- RTX 2080 Ti
- TITAN RTX
The NVIDIA compute tests were done with the cards I had available for testing that were not busy in other rigs, sans the RTX 2070, which is currently having issues. I'm still in the process of vetting the Radeon ROCm 2.0 release and should have some comparison benchmarks there in the days ahead. Without further ado, let's check out the green GPU compute performance this Christmas.
With Darktable, the TITAN RTX is basically in line with the RTX 2080 Ti, a sign of diminishing returns in scaling even with the very high-resolution RAW images used for testing, which aim to be representative of current RAW image handling. But it is still interesting to see just how the OpenCL Darktable performance has progressed from the once very capable GTX 680 through to the ultra high-end TITAN RTX.
OctaneBench 4.0 was recently released and does handle the Turing GPUs quite well. The TITAN RTX here was 6% faster than the RTX 2080 Ti -- though not as large a margin as seen in many of the TensorFlow tests, where it was ~12% faster.
While running OctaneBench, the TITAN RTX had an average AC system power draw of 315 Watts on this 9900K system with a peak of 350 Watts, compared to an average of 299 Watts and a peak of 330 Watts for the RTX 2080 Ti.
But even with the slightly higher power draw of the TITAN RTX, the performance-per-Watt was still comparable to the leading RTX 2080 Ti.
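Working through that with the OctaneBench numbers above (a quick sanity-check on the arithmetic, not output from the benchmark itself):

```python
# Relative performance-per-Watt of the TITAN RTX vs. the RTX 2080 Ti,
# using the OctaneBench figures reported above.
titan_speedup = 1.06   # TITAN RTX was 6% faster than the RTX 2080 Ti
titan_avg_w = 315      # average AC system power draw, TITAN RTX
ti_avg_w = 299         # average AC system power draw, RTX 2080 Ti

relative_efficiency = titan_speedup / (titan_avg_w / ti_avg_w)
print(f"{relative_efficiency:.3f}")  # ~1.006 -> effectively a wash
```

A ratio of roughly 1.006 is why the two cards come out with comparable power efficiency despite the TITAN RTX's higher absolute draw.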
The Parboil scientific tests with OpenCL do very well on the RTX 2080 series.
The TITAN RTX's double-precision performance as measured by the OpenCL Mixbench was about 6% faster than the RTX 2080 Ti's. Compared to the GTX 680, it was a 3.35x performance difference.
For global memory bandwidth as measured by clpeak, the TITAN RTX was 3.75x the speed of the GTX 680 while being 5% faster than the RTX 2080 Ti, which is also equipped with GDDR6 video memory.
With FAHBench, the Folding@Home benchmark, the TITAN RTX showed a negligible performance difference versus the RTX 2080 Ti, but it was interesting to see the 9.4x spread in performance across all the cards tested.
Even on performance-per-Watt, the RTX 2080 Ti and TITAN RTX offer 4.8x the power efficiency of the Kepler-era GTX 680.
With the LuxMark OpenCL benchmarks, the TITAN RTX offered better performance than the RTX 2080 Ti while still offering either better or comparable power efficiency.
Here's a look at all of these different graphics cards and their GPU core temperatures during the span of all the GPU compute benchmarks carried out for this article. The TITAN RTX had an average temperature of 64 degrees and a peak of 79 degrees, actually a few degrees lower on all the metrics compared to the GTX 680 as well as many of the other cards tested.
The TITAN RTX also came out well in the overall AC system power consumption metrics for these tests, landing slightly ahead of the RTX 2080 Ti for these particular workloads (see more power data in the original TITAN RTX Linux benchmarks article). This article basically provides complementary data points to the original tests featuring TensorFlow, Linux gaming, and more.
If you want to see how your own Linux GPU compute performance compares to this diverse range of NVIDIA cards tested, simply install the Phoronix Test Suite and then run phoronix-test-suite benchmark 1812259-PTS-NVIDIATI26.
For those wondering about Blender rendering performance on the TITAN RTX, there are these standalone tests so far using a patched build of Blender 2.79 that has the CUDA 10 support needed for Turing cards. I'm still working on getting the Blender 2.80 beta to play nicely for benchmarking, and when that's working right I'll have a large comparison on that front.