Welcome back to another episode in our GPU Flashback Archive series. Following on from last week’s look at the GeForce FX series, we turn our attention to its successor, the NVIDIA GeForce 6 series. After rising to a position of relative dominance in the early years of GPU design, NVIDIA had lost ground with the GeForce 4 and subsequent FX series to ATI, who had stolen a march with their highly popular Radeon 9000 series. The stage was set for a comeback with the launch of a new GPU design and a series of cards that demanded more space in your rig and additional power to deliver a truly next-generation gaming experience. Let’s turn our minds back to 2004 and check out the technologies and features that debuted with the GeForce 6 series, plus the most popular cards of the era and the most notable scores that have been submitted here on HWBOT.
NVIDIA GeForce 6: Overview
The NVIDIA GeForce 6 series arrived in tech reviewers’ hands in April 2004, debuting with the new NV40 GPU and two graphics card models: the GeForce 6800 Ultra, which commanded a price of $499 USD, and the GeForce 6800 (often referred to as the non-Ultra) at $299 USD. Let’s first consider the GPU itself, the NV40.
The NV40 was manufactured by IBM using the same 130nm process node that had been used for the previous gen NV30 and NV35 chips. It was, however, a significantly larger beast, packing 222 million transistors compared to its predecessor’s 135 million. It was physically larger too, at 287 mm² compared to 210 mm². The NV40 was based on the Curie architecture and packed significantly more rendering power under the hood – 16 pixel shaders, 6 vertex shaders, 16 Texture Mapping Units (TMUs) and 16 Render Output Units (ROPs). That meant double, triple or even quadruple the number of rendering components compared to the previous generation. NVIDIA was playing catch-up and had no option but to pack as much as possible into its new GPU.
Another major improvement was graphics card memory. The 128-bit memory interface was replaced with a 256-bit bus that could take advantage of faster GDDR3. This meant a leap in overall bandwidth from 16GB/sec to 35.2GB/sec. The high-end cards were fitted with 256MB of GDDR3 – for the sake of perspective, that was about the same capacity as some PCs had for their entire system memory in 2004.
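Those headline bandwidth figures are easy to verify: peak memory bandwidth is simply the bus width in bytes multiplied by the effective transfer rate. A quick sketch (the 1,000MHz effective clock used for the previous generation is inferred from the quoted 16GB/sec figure, not stated explicitly above):

```python
def mem_bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth: bus width in bytes times effective transfers per second."""
    return (bus_width_bits / 8) * effective_clock_mhz / 1000

# Previous generation: 128-bit bus (1,000MHz effective inferred from the 16GB/sec figure)
print(mem_bandwidth_gb_s(128, 1000))  # 16.0
# GeForce 6800 Ultra: 256-bit bus with GDDR3 at 1,100MHz effective
print(mem_bandwidth_gb_s(256, 1100))  # 35.2
```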
Crucially, the new NV40 series of GPUs supported Shader Model 3.0, a new feature that arrived with the newly updated DirectX 9.0c API. Shader Model 3.0 allowed shader programs of more than 65,000 instructions and brought support for four Multiple Render Targets, 32-bit floating point precision, shader antialiasing and ten texture coordinate inputs per pixel. This allowed developers to add much more realism and visual depth to games, helping bring to life titles like Far Cry and Doom 3 with the bleeding-edge visuals available at Ultra High settings.
Let’s get back to the actual cards. The new flagship GeForce 6800 Ultra and its non-Ultra sibling were among the first cards to feature additional power connectors. Reviewers were told from day one that they would need a 450 watt power supply to guarantee sufficient juice. Reference design cards included a pair of standard Molex power sockets, the kind that would otherwise have been used to power your hard drives. The cards were also among the first to use a two-slot design, with a significantly beefier cooler to help deal with the heat generated.
A reference design GeForce 6800 Ultra (above).
Other major firsts were happening around this time too. Many manufacturers produced both AGP 8x and PCIe versions of the cards; indeed, 2004 sat right at the junction of the AGP-to-PCIe transition. PCI Express was, for most enthusiasts, a very new technology, and with so many AGP motherboards out in the wild it made plenty of sense to offer both options. In reality, PCIe offered minimal performance gains at the time, being largely forward-looking.
Another new feature that eventually arrived with the GeForce 6 series was SLI (Scalable Link Interface), which allowed two cards to run in tandem for even more performance. Available exclusively on nForce 4 chipset boards, its somewhat buggy and immature arrival paved the way for 3-way and eventually 4-way setups. Early SLI implementations tested PC builders in terms of power requirements, with PSUs above 500 watts being fairly rare in 2004. Early drivers often produced inconsistent performance levels, with updates coming thick and fast.
The GeForce 6800 Ultra had a GPU clocked at 400MHz with GDDR3 memory clocked at 1,100MHz (effective). The GeForce 6800 used a GPU clocked at 325MHz and memory at 600MHz. The non-Ultra card also had one fewer vertex shader.
An early SLI system from 2004 (above).
The Most Popular NVIDIA GeForce 6 Card: The GeForce 6600 GT
Let’s jump straight into the data and take a peek at the HWBOT submission numbers for the GeForce 6 series.
- GeForce 6600 GT (PCIe) – 16.66%
- GeForce 6800 GT – 6.29%
- GeForce 6600 DDR (PCIe 128-bit) – 5.19%
- GeForce 6600 GT (AGP) – 4.10%
- GeForce 6800 GS AGP (GDDR3 256-bit) – 3.94%
- GeForce 6800 GS (NV42 256MB) – 3.81%
- GeForce 6100 – 3.65%
- GeForce 6800 AGP (DDR 256-bit NV40) – 3.42%
- GeForce 6800 Ultra – 3.37%
- GeForce 6150 SE – 3.28%
As you can see, NVIDIA and its AIB partners went on to produce several variants of the GeForce 6 series cards, including the most popular model with HWBOT users, the GeForce 6600 GT (PCIe version). This card was joined by an AGP version and a PCIe version that used DDR memory on a 128-bit bus. The GeForce 6800 series was expanded with GS and GT versions that were clocked somewhat lower than the Ultra but marginally higher than the non-Ultra.
When the GeForce 6600 GT arrived, it was arguably NVIDIA’s most potent weapon in terms of market share. For mainstream gamers priced out of the $300-$500 USD segment it was a must-have component. Launching in August 2004, the GeForce 6600 GT and the non-GT model used the new NV43 GPU, which had a 128-bit memory bus and 3 vertex shaders. The GT was clocked at 500MHz with graphics memory tuned to 900MHz or 1,000MHz depending on the manufacturer and type of memory used. The GeForce 6600 GT accounts for more than 16% of all 6 series submissions on HWBOT.
A reference GeForce 6600 GT card (above).
The GeForce 6600 GT was able to outpace the competing X800 Pro from ATI in most titles, a very impressive feat for a card that retailed for around $200 USD. Despite the lower memory bandwidth and lower clocks, it hit a sweet spot that made it enormously popular. It brought great DX9 performance at a price within reach of many gamers who wanted antialiasing and anisotropic filtering, dual-screen support and decent performance at 1600×1200. It was indeed the Goldilocks of graphics cards for many of us.
In terms of GPU overclocking with NVIDIA cards, one popular option was to use Coolbits, a registry hack which offered GPU frequency control and access to various other features via the NVIDIA Driver control panel. Unlike today, NVIDIA seemed to be quite happy to let overclockers have their fun.
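For those curious, the commonly circulated Coolbits tweak of the era looked something like the fragment below. Treat this as illustrative rather than definitive – the exact key and value varied between driver versions, with a value of 3 typically unlocking the clock frequency sliders in the driver control panel:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak]
"CoolBits"=dword:00000003
```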
NVIDIA GeForce 6: Record Scores
We can now take a look at some of the highest scores posted on HWBOT using NVIDIA GeForce 6 series cards.
Highest GPU Frequency
Although, technically speaking, GPU frequency (like CPU frequency) is not a true benchmark, it remains an important metric for many overclockers. Sifting through the database, the submission with the highest GPU core frequency comes from Gorod (Russia). He pushed a GeForce 6600 GT PCI-e to 928MHz, a massive +85.60%, with graphics memory configured at 675MHz (+35.00%). His rig also included an Intel Core 2 Duo E6850 ‘Conroe’ clocked at 3.8GHz (+26.67%).
You can find the 3DMark05 submission from Gorod here on HWBOT: http://hwbot.org/submission/988815_gorod_3dmark05_geforce_6600_gt_pci_e_6661_marks
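The percentage figures HWBOT quotes are simply gains over the reference clocks. Using the 500MHz stock core of the 6600 GT mentioned earlier in the article, a quick sketch:

```python
def oc_gain_percent(stock_mhz: float, achieved_mhz: float) -> float:
    """Overclock gain expressed as a percentage above the stock clock."""
    return (achieved_mhz / stock_mhz - 1) * 100

# Gorod's 928MHz core on a 500MHz-stock GeForce 6600 GT
print(round(oc_gain_percent(500, 928), 2))  # 85.6
```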
The highest 3DMark2001 SE score submitted to HWBOT using an NVIDIA GeForce 6 card was made by Slovenian overclocker alibabar. He pushed a pair of GeForce 6800 GS cards in an SLI configuration to 660MHz (+55.29%) on the GPU core and 670MHz (+34%) on the graphics memory. With this configuration he managed a hardware first place score of 76,472 marks. The rig he used also featured an Intel Core 2 Duo E8600 ‘Wolfdale’ CPU clocked at 5.4GHz (+62.16%).
You can find the submission from alibabar here on HWBOT: http://hwbot.org/submission/871176_alibabar_3dmark2001_se_2x_geforce_6800_gs_(nv41)_256_mb_76472_marks
In the classic Aquamark benchmark, alibabar is also top dog. This time his pair of GeForce 6800 GS cards was clocked even higher at 676MHz on the core, with memory at 660MHz (+32%), hitting a score of 227,320 marks.
You can find the submission from alibabar here on HWBOT: http://hwbot.org/submission/871180_alibabar_aquamark_2x_geforce_6800_gs_(nv41)_256_mb_227320_marks
Thanks for joining us for this week’s episode of the GPU Flashback Archive series. Come back next week and join us for a look at the classic NVIDIA GeForce 7 series of graphics processors and cards.