This week the GPU Flashback Archive sets its sights on the GeForce 500 series from NVIDIA. Arriving in late 2010, the 500 series was the second round of graphics cards based on the Fermi architecture, which had limped over the line in the previous generation, ostensibly due to fabrication and yield issues. The new flagship GTX 580 arrived with a more polished take on the Fermi design that helped NVIDIA combat the threat from AMD and their popular Radeon 5000 and 6000 series cards. As ever, let’s take a look at the new GPU, the new flagship card and a few of the outstanding scores that have been submitted to HWBOT.
NVIDIA GeForce 500: Overview
To say that the NVIDIA 400 series graphics card launch was less than smooth would be a total understatement. The GF100 Fermi architecture GPU in fact arrived six months late with a significant number of cores hacked off. Blame was laid at the door of fabricators TSMC and a 40nm manufacturing process that clearly hadn’t been optimally adapted for NVIDIA’s Fermi, a monster chip boasting 3 billion transistors and a 529mm² die. While cards such as the GTX 480 had actually done well to make NVIDIA competitive in performance terms, the GTX 580 and its GF110 GPU were rather quickly shoved out the door just eight months later as a revised and improved version of the original.
Although marketed as Fermi 2.0, the GF110 is in most respects simply a moderately updated version of its predecessor, the GF100. Ryan Smith from Anandtech wrestled with the topic in memorable style on launch day in November 2010 when he said – “GF110 is a mix of old and new. To call it a brand-new design would be disingenuous, but to call it a fixed GF100 would be equally shortsighted.”
I won’t call the GF110 a fixed Fermi GPU but at a glance it seems to be exactly what NVIDIA had in mind for the GF100. In terms of Stream Processors the GF110 packs 512, the same number that we expected to see on the GF100. It boasts a few more Texture Address Filters but the same number of Render Output Units. In terms of clocks however we do see a few modestly higher defaults, with the GPU core bumped from 700MHz to 772MHz and the Shader Clock upped from 1,401MHz to 1,544MHz. The default frequency for the graphics memory was also moved up from 924MHz to 1,002MHz. The new card featured the same 1.5GB of GDDR5 using the same 384-bit memory bus. Clearly the GF110 is a faster version of its predecessor, but in reality, there’s not too much more going on under the hood. In terms of API support, the new GF110 supported DirectX 11, OpenGL 4.1, OpenCL 1.1, CUDA compute capability 2.0 and Shader Model 5.0 – identical to the first Fermi GPU.
The GTX 500 series officially launched on November 9th 2010 in the form of a new flagship offering, the GeForce GTX 580 card. It arrived with a price tag of $499 USD, replacing the GTX 480 which was then available for around $430 USD. The 400 series would persist for some time yet however, being gradually phased out and replaced by 500 series equivalents, much in the same manner as we see with today’s NVIDIA GPU refreshes. The GTX 580 would be joined by the GTX 570 in time for Christmas, with a GTX 560 Ti following soon after the New Year.
Here’s a shot of the reference design NVIDIA GeForce GTX 580:
The new card is a two-slot design the same length as the previous generation, however it uses a different cooling design from the GTX 480. The mammoth GF100 GPU had proven to be a true beast when pushed under load with tools such as FurMark. Loud whining fans were a common complaint as the cooling design struggled to deal with the heat produced. On the GTX 580 the distinctive protruding heatpipes are nowhere to be seen; instead we find a full shroud enclosing a vapor chamber and heatsink. AMD cards had benefited from vapor chamber designs, so it was perhaps an obvious choice for NVIDIA to follow suit.
On the GTX 480 we encountered for the first time a PCB using digital VRM controllers. From an overclocking perspective, this meant NVIDIA having tighter controls over what could and could not be overclocked, with special BIOSes needed to access higher core and memory frequencies. With the GTX 580 we find another aspect of digital power delivery at work for the first time. The GTX 580 card was one of the first consumer graphics cards to feature power monitoring chips.
NVIDIA was apparently rather miffed at reviewers who insisted on running FurMark and OCCT, benchmarking and stress testing apps whose workloads could almost brick the card, such were the temperatures involved with Fermi 1.0. With Fermi 2.0 NVIDIA wanted a failsafe mechanism that would allow the GPU to throttle down when faced with incredibly high workloads that the company deemed unrealistic compared to those encountered when actually playing video games. These new power monitoring chips allowed for throttling in much the same way as we encounter on today’s 1000 series cards. The throttling is not always reported in the OS or software, but the performance drops are nonetheless palpable.
Here’s a PCB comparison of the GTX 480 and GTX 580. They are almost identical with the exception of the large cutaway sections on the GTX 480 and (if you look closely) the presence of two power monitoring ICs on the upper right side of the VRM area.
The new cooling design was indeed an improvement over the previous generation, offering improved performance using slightly less power. The GF100 had a TDP of 250W, while the GF110 was a little more efficient at 244W. Both required 6-pin and 8-pin power connectors. The new card offered connectivity for 2x DVI plus a mini-HDMI port. Still no DisplayPort at this juncture.
The Most Popular NVIDIA GeForce 500 Card: The GeForce GTX 580
It’s time to take a look at the most popular NVIDIA 500 series cards in terms of submissions to the HWBOT database:
- GeForce GTX 580 – 45.62%
- GeForce GTX 560 Ti – 18.04%
- GeForce GTX 570 – 16.43%
- GeForce GTX 550 Ti (192-bit) – 6.14%
- GeForce GTX 560 – 4.19%
- GeForce GT 520 – 1.53%
- GeForce GT 540M – 1.47%
- GeForce GTX 590 – 1.19%
- GeForce GTX 560 Ti (448) – 1.15%
- GeForce GTX 560M – 0.63%
We noted how the previous generation of 400 series cards had managed to really isolate the high-end flagship card from the rest of the product stack. If you wanted to really compete on HWBOT, you simply had to have a GTX 480 card. That has been amplified somewhat with the GeForce 500 series, where we find that more than 45% of all 500 series submissions involve the flagship GTX 580 card. The GTX 560 Ti arrived in January 2011 and sold for a very tempting $250 USD, just a snip below the asking price of an AMD Radeon HD 6950 card.
Here’s a shot of the dashingly affordable GTX 560 Ti card:
The reference cooler design of the GTX 580 was faithfully reproduced by many 3rd party vendors who were pushed by NVIDIA to make sure that noise was not the issue it had been on the GTX 480 cards. Here’s a take on the GTX 580 from ASUS who opted for a rather robust twin fan, triple slot solution:
Elsewhere on the list we have several mainstream options, a mobile version and, perhaps more interestingly, the dual GPU GTX 590, which retailed for $700 USD and packed a pair of GF110 GPUs in a single card. Here’s a shot of that beautiful beast, sans shroud.
NVIDIA GeForce 500 Series: Record Scores
We can now take a look at some of the highest scores posted on HWBOT using an NVIDIA GeForce GTX 580 card, the fastest single GPU card in the 500 series lineup.
Highest GPU Frequency
Although technically speaking, GPU frequency (as with CPU frequency) is not a true benchmark, it does remain an important metric for many overclockers. Looking through the database, we find that the submission with the highest GPU core frequency using a GeForce GTX 580 card comes from a German overclocker called Kurbel. He pushed the GPU of his GeForce GTX 580 to 1,690MHz, which is a massive +117.50% beyond stock settings. The rig used also included an Intel Core i7 2600K ‘Sandy Bridge’ processor clocked at 5,657MHz (+66.38%).
You can find the 3DMark03 submission from Kurbel here on HWBOT: http://hwbot.org/submission/2190352_kurbel_3dmark03_geforce_gtx_580_182729_marks
3DMark Vantage – Performance
The highest 3DMark Vantage – Performance score submitted to HWBOT using a single NVIDIA GeForce GTX 580 card was made by the legendary dhenzjhen (Philippines). He pushed an MSI GeForce GTX 580 Lightning card to 1,550MHz (+99.49%) on the GPU core and 1,275MHz (+27.25%) on the graphics memory. With this configuration he managed a hardware first place score of 53,768 marks. The submission was actually fairly recent and was helped by an Intel Core i7 6950X ‘Broadwell-E’ chip clocked at 4,500MHz (+50.00%).
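For the curious, the overclock percentages quoted throughout this section follow a simple formula: the observed clock divided by the reference clock, minus one. Here’s a minimal sketch in Python (the function name `oc_uplift` is our own, purely for illustration), checked against the memory overclock above using the GTX 580’s 1,002MHz reference memory clock:

```python
def oc_uplift(stock_mhz: float, observed_mhz: float) -> float:
    """Percentage gain of an observed clock over its stock (reference) value."""
    return (observed_mhz / stock_mhz - 1) * 100

# GTX 580 reference memory clock of 1,002MHz pushed to 1,275MHz:
print(round(oc_uplift(1002, 1275), 2))  # → 27.25, matching the figure quoted
```

Note that the core clock percentages on HWBOT are computed against the reference clock recorded in its database, which may differ slightly from the defaults quoted in reviews.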
Here’s a close up of the LN2 cooled rig as pushed by dhenzjhen:
You can find the submission from dhenzjhen here on HWBOT: http://hwbot.org/submission/3320919_dhenzjhen_3dmark_vantage___performance_geforce_gtx_580_53768_marks
Aquamark
In the classic Aquamark benchmark we find that Hideo (Japan) is the highest scorer with a single GeForce GTX 580 card. He pushed his GTX 580 GPU clock to 1,253MHz (+61.26%) with graphics memory at 1,205MHz (+20.26%) to hit an impressive score of 632,301 marks. The score was made in October of this year and benefited greatly from an Intel Core i7 7700K ‘Kaby Lake’ chip clocked at 6,925MHz (+64.88%).
You can find the submission from Hideo here on HWBOT: http://hwbot.org/submission/3681949_hideo_aquamark_geforce_gtx_580_632301_marks
Thanks for joining us for this week’s episode of the GPU Flashback Archive series. Come back next week and join us for a look at the NVIDIA GeForce 600 series of graphics processors and cards.