43 Comments
JarredWalton - Thursday, October 28, 2004 - link
43 - It should be an option somewhere in the ATI Catalyst Control Center. I don't have an X800 of my own to verify this on, not to mention a lack of applications which use this feature. My comment was more tailored towards people that don't read hardware sites. Typical users really don't know much about their hardware or how to adjust advanced settings, so the default options are what they use.
Thera - Tuesday, October 19, 2004 - link
You say SM2.0b is disabled and consumers don't know how to turn it on. Can you tell us how to enable SM2.0b? Thank you.
(cross posted from video forum)
endrebjorsvik - Wednesday, September 15, 2004 - link
WOW!! Very nice article!! Does anyone have all this data collected into an Excel file or something??
JarredWalton - Sunday, September 12, 2004 - link
Correction to my last post. KiB and MiB and such are meant to be used for size calculations, and then KB and MB can be used for bandwidth calculations. Now the first paragraph (and my gripe) should be a little more clear if you didn't understand it already. Basically, the *bandwidth* companies (hard drives, and to a lesser extent RAM companies advertising bandwidth) proposed that their incorrect calculations stand and that those who wanted to use the old computer calculations should change.
There are problems, however. HDD and RAM both continue to use both calculations. RAM uses the simplified KB and MB for bandwidth, but the accepted KB and MB (KiB and MiB now) for size. HDD uses the simplified KB and MB for size, but then they use the other KB and MB for sustained transfer rates. So, the proposed change not only failed to address the problem, but the proposers basically continue in the same way as before.
JarredWalton - Saturday, September 11, 2004 - link
#38 - there are quite a few cards/chips that were only available in very limited quantities.
39 - Actually, that is only partially true. Kibibytes and mebibytes are a *proposed* change as far as I am aware, and they basically allow the HDD and RAM people to continue with their simplified calculations. I believe that KiB and MiB are meant for bandwidths, however, and not memory sizes. The problem is that MB and KB were in existence long before KiB and MiB were proposed. Early computers with 8 KB of RAM (over 40 years ago) had 8192 bytes of RAM, not 8000 bytes. When you buy a 512 MB DIMM, it is 512 * 1048576 bytes, not 512 * 1000000 bytes.
If a new standard is to be adopted for abbreviations, it is my personal opinion that the parties who did not conform to the old standard are the ones that should change. Since I often look at the low-level details of processors and GPUs and such, I do not want to have two different meanings for the same thing, which is what we currently have. Heck, there was even a class action lawsuit against hard drive manufacturers a while back about this "lie". That was the solution: the HDD people basically said, "We're right, and in the future 2^10 = KiB, 2^20 = MiB, 2^30 = GiB, etc." Talk about not taking responsibility for your actions....
It *IS* a minor point for most people, and relative performance is still the same. Basically, this is one of my pet peeves. It would be like saying, "You know what, 5280 feet per mile is inconvenient. Even though it has been this way for ages, let's just call it 5000 feet per mile." I have yet to see any hardware manufacturers actually use KiB or MiB as an abbreviation, and software that has been around for decades still thinks that a KB is 1024 bytes and an MB is 1,048,576.
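To make the two conventions in this back-and-forth concrete, here is a small Python sketch of the arithmetic (the 80 GB drive is just an arbitrary example figure):

```python
# Decimal (SI) units, as advertised by drive makers, vs. binary (IEC) units,
# as used by RAM and most software.
KB, MB, GB = 10**3, 10**6, 10**9
KiB, MiB, GiB = 2**10, 2**20, 2**30

# A "512 MB" DIMM is sized in binary units:
dimm_bytes = 512 * MiB
print(f"512 MB DIMM: {dimm_bytes:,} bytes ({dimm_bytes / MB:.1f} decimal MB)")

# A drive advertised as "80 GB" is sized in decimal units, so an OS that
# reports in binary GiB shows a smaller number:
drive_bytes = 80 * GB
print(f"80 GB drive: {drive_bytes:,} bytes = {drive_bytes / GiB:.2f} GiB as reported by the OS")
```

Running it prints 536,870,912 bytes for the DIMM and roughly 74.5 GiB for the drive, which is exactly the gap the hard drive lawsuit mentioned above was about.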
Bonta - Saturday, September 11, 2004 - link
Jarred, you were wrong about the abbreviation MB.
1 MB is 1 megabyte is (1000*1000) Bytes is 1000000 Bytes is 1 million Bytes.
1 MiB is (1024*1024) Bytes is 1048576 Bytes.
So the vid card makers (and the hard drive makers) actually have it right, and can keep smiling. It is the people that think 1MB is 1048576 Bytes that have it wrong. I can't pronounce or spell 1 MiB correctly, but it is something like 1 mibiBytes.
viggen - Friday, September 10, 2004 - link
Nice article, but what's up with the 9200 Pro running at 300MHz for core & memory? I dun remember ATI having such a card.
JarredWalton - Wednesday, September 8, 2004 - link
Oops... I forgot the link from Quon. Here it is: http://www.appliedmaterials.com/HTMAC/index.html
It's somewhat basic, but at the same time, it covers several things my article left out.
JarredWalton - Wednesday, September 8, 2004 - link
I received a link from Matthew Quon containing a recent presentation on the whole chip fabrication process. It includes details that I omitted, but in general it supports my abbreviated description of the process.
#34: Yes, there are errors that are bound to slip through. This is especially true on older parts. However, as you point out, several of the older chips were offered in various speed grades, which only makes it more difficult. Several of the as-yet unreleased parts may vary, but on the X700 and 6800LE, that's the best info we have right now. The vertex pipelines are *not* tied directly to the pixel quads, so disabling 1/4 or 1/2 of the pixel pipelines does not mean they *have* to disable 1/4 or 1/2 of the vertex pipelines. According to T8000, though, the 6800LE is a 4 vertex pipeline card.
Last, you might want to take note of the fact that I have written precisely 3 articles for Anandtech. I live in Washington, while many of the other AT people are back east. So, don't count on everything being reviewed by every single AT editor - we're only human. :)
(I'm working on some updates and corrections, which will hopefully be posted in the next 24 hours.)
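Since the pixel quads and the vertex array can be cut independently, here is a rough sketch of the peak-rate arithmetic, assuming the commonly cited 6800 Ultra configuration (16 pixel pipes, 6 vertex units, 400 MHz) against T8000's 6800LE figures (8 pixel pipes, 4 vertex units, 300 MHz), and simplifying to one pixel and one vertex per pipe per clock:

```python
def theoretical_rates(core_mhz, pixel_pipes, vertex_pipes):
    """Very rough peak figures: one pixel per pixel pipe per clock and,
    as a simplification, one vertex per vertex pipe per clock."""
    fill_mpix = core_mhz * pixel_pipes       # Mpixels/s
    vert_mverts = core_mhz * vertex_pipes    # Mvertices/s (ignores cycles per vertex)
    return fill_mpix, vert_mverts

for name, clock, ppipes, vpipes in [
    ("GeForce 6800 Ultra", 400, 16, 6),
    ("GeForce 6800LE",     300,  8, 4),
]:
    fill, verts = theoretical_rates(clock, ppipes, vpipes)
    print(f"{name}: {fill} Mpixels/s fill, {verts} Mvertices/s (peak, simplified)")
```

The point is only that halving the pixel quads and halving the vertex array are independent choices, so an 8-pipe/4-vertex LE is plausible even though it isn't exactly half of 16/6.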
T8000 - Wednesday, September 8, 2004 - link
I think it is very good to put the facts together in such a review.
I did notice three things, however:
1: I have a GF6800LE and it has 4 enabled vertex pipes instead of 5 and comes with a 300/700 GPU/mem clock.
2: Since GPU clock speeds did not increase much, they had to add more features (like pipelines) to increase performance.
3: GPU defects are less of an issue than CPU defects, since a lot of large GPUs offered the luxury of disabling parts, so that most defective GPUs can still be sold. As far as I know, this feature has never made it into the CPU market.
MODEL 3 - Wednesday, September 8, 2004 - link
A lot of mistakes for a professional hardware review site the size of Anandtech. I will only mention the clear-cut mistakes, since I have doubts about others. I am actually surprised by the number of mistakes in this article. I mean, since I live in Greece (not the center of the world in 3D technology or the hardware market), I always thought that the editors at the best hardware review sites in the world (like Anandtech) have at least the basic knowledge related to technology, and that they do research and double-check that their articles are correct. They get paid, right? If I can find their mistakes so easily (I have no technology-related degree, although I was a purchase and product manager at the best Greek IT companies), they must be doing something very, very wrong indeed. Now onto the mistakes:
ATI:
X700 6 vertex pipelines: Actually this may be no mistake, since I have no information about this new part, but it seems strange for the X700 to have the same number of vertex pipelines (6) as the X800 XT. More logical would be half as many (3) (like 6800 Ultra vs. 6600 GT) or twice as many as the X600 (4). We will see.
Radeon VE 183/183: The actual speed was 166/166 SDR, 128-bit, for ATI parts and as low as 143/143 for 3rd-party bulk parts.
Radeon 7000 PCI 166/333: The actual speed was 166/166 SDR, 128-bit, for ATI parts and as low as 143/143 for 3rd-party bulk parts (note that Anandtech lists 166 DDR, while the correct value is 166 SDR).
Radeon 7000 AGP 183/366 32/64(MB): The actual speed was 166/166 SDR for ATI parts and as low as 143/143 for 3rd-party bulk parts (note that Anandtech lists 166 DDR, while the correct value is 166 SDR); also, at launch and for a whole year afterward (if ever), a 64MB part did not exist.
Radeon 7200 64-bit RAM bus: The 7200 was exactly the same as the Radeon DDR, so the RAM bus width was 128-bit.
ATI has unofficial DX9 with SM2.0b support: Actually, ATI has official DX9.0b support, and Microsoft certified this "in-between" version of DX9. When they enable their 2.0b features they don't fail WHQL compliance, since 2.0b is an official Microsoft version (get it?). Features like 3Dc normal map compression are activated only in OpenGL mode, but 3Dc compression is not part of DX9.0b.
NVIDIA:
GF 6800LE with 8 pixel pipelines listed by Anandtech with 5 vertex pipelines: Actually this may be no mistake, since I have no information about this part, but since the 6800 GT/Ultra is built with four (4) quads of 4 pixel pipelines each, isn't it more logical for the 6800LE, with half the quads, to have half the pixel pipelines (8) AND half the vertex pipelines (3)?
GFFX 5700 3 vertex pipelines: The GFFX 5700 has half the number of pixel AND vertex pipelines of the 5900, so if you convert the vertex array of the 5900 into 3 vertex pipes (which is correct), then the 5700 would have 1.5.
GF4 4600 300/600: The actual speed is 300/325 DDR, 128-bit.
GF2MX 175/333: The actual speed is 175/166 SDR, 128-bit.
GF4MX series 0.5 vertex shader: Actually, the GF4MX series had twice the number of vertex shaders of the GF2, so the correct number of vertex shaders is 1.
According to Anandtech, the GF3 cards only show a slight performance increase over the GF2 Ultra, and that is only in more recent games: Actually, the GF3 (Q1 '01) was built on 0.18-micron technology and the yields were extremely low. In reality, GF3 parts arrived in acceptable quantity in Q3 '01 with the GF3 Ti series on 0.15-micron technology. If you check the performance in OpenGL games from Q3 '01 onward and DX8 games from Q3 '02 onward, you will clearly see the GF3 deliver double the performance of the GF2 clock for clock (GF3 Ti500 vs. GF2 Ultra).
Now, the rest of the article is not bad and I also appreciate the effort.
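A note on the core/memory shorthand disputed above: the second figure is normally the memory's effective transfer rate, equal to the physical clock for SDR and double it for DDR. A small sketch of the conversion, using the entries debated in the comment above (which figures are actually correct for each card is exactly what is in dispute there):

```python
# Convert a physical memory clock to the "effective" figure usually quoted as
# the second number in core/memory shorthand: SDR transfers once per clock,
# DDR twice.
def effective_rate(mem_clock_mhz, ddr):
    return mem_clock_mhz * (2 if ddr else 1)

cards = [
    ("Radeon 7000 (listed as 166/333)", 166, False),  # 166 MHz SDR -> 166, not 333
    ("GF4 Ti 4600 (listed as 300/600)", 325, True),   # 325 MHz DDR -> 650 effective
    ("GF2 MX (listed as 175/333)",      166, False),  # 166 MHz SDR -> 166, not 333
]
for name, clock, ddr in cards:
    kind = "DDR" if ddr else "SDR"
    print(f"{name}: {clock} MHz {kind} -> {effective_rate(clock, ddr)} MT/s effective")
```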
JarredWalton - Wednesday, September 8, 2004 - link
Sorry, ViRGE - I actually took your suggestion to heart and updated page 3 initially, since you are right about it being more common. However, I forgot to modify the DX7 performance charts. There are probably quite a few other corrections that should be made as well....
ViRGE - Tuesday, September 7, 2004 - link
Jarred, like I said, you're technically right about how the GF2 MX could be outfitted with either 128bit SDR or 64bit SDR/DDR, but you said it yourself that the cards were mostly 128bit SDR. Obviously any change won't have an impact, but in my humble opinion, it would be best to change the GF2 MX to better represent what historically happened, so that if someone uses this chart as a reference for a GF2 MX, they're more likely to be getting the "right" data.
BigLan - Tuesday, September 7, 2004 - link
Good job with the article.
Love the office reference...
"Can I put it in my mouth?"
darth_beavis - Tuesday, September 7, 2004 - link
Sorry, now it's suddenly working. I don't know what my problem is (but I'm sure it's hard to pronounce).
darth_beavis - Tuesday, September 7, 2004 - link
Actually it looks like none of them have labels. Is Anandtech not Mozilla compatible or something? Just use jpgs pleaz.
darth_beavis - Tuesday, September 7, 2004 - link
Why are there no descriptions for the columns of the graph on pg 2? Are we just supposed to guess what the numbers mean?
JarredWalton - Tuesday, September 7, 2004 - link
Yes, Questar, laden with errors. All over the place. Thanks for pointing them out so that they could be corrected. I'm sure that took you quite some time.
Seriously, though, point them out (other than omissions, as making a complete list of every single variation of every single card would be difficult at best) and we will be happy to correct them provided that they actually are incorrect. And if you really want a card included, send the details of the card, and we can add that as well.
Regarding the ATI AIW (All In Wonder, for those that don't know) cards, they often varied from the clock and RAM speeds of the standard chips. Later models may have faster RAM or core speeds, while earlier models often had slower RAM and core speeds.
blckgrffn - Tuesday, September 7, 2004 - link
Questar - if you don't like it, leave. The article clearly stated its bounds and did a great job. My $.02 - the 7500 AIW is 64 meg DDR only, unsure of the speed however. Do you want me to check that out?
mikecel79 - Tuesday, September 7, 2004 - link
#22 The GeForce256 was released in October of 1999, so this is roughly the last 5 years of chips from ATI and Nvidia. If it were to include all other manufacturers it would be quite a bit longer.
How about examples of this article being "laden with errors" instead of just stating it.
Neo_Geo - Tuesday, September 7, 2004 - link
Nice article.... BUT....
I was hoping the Quadro and FireGL lines would be included in the comparison.
As someone who uses BOTH professional (ProE and SolidWorks) AND consumer-level (games) software, I am interested in purchasing a Quadro or FireGL, but I want to compare these to their consumer-level equivalents (as each pro-level card generally has an equivalent consumer-level card with some minor, but important, optimizations).
Thanks
mikecel79 - Tuesday, September 7, 2004 - link
The AIW 9600 Pro has faster memory than the normal 9600 Pro: its memory runs at 650MHz vs. 600MHz on a normal 9600 Pro.
Here's the Anandtech article for reference:
http://www.anandtech.com/video/showdoc.aspx?i=1905...
Questar - Tuesday, September 7, 2004 - link
#20, This list is not complete at all; it would be 3 times the size if it covered the last 5 or 6 years. It covers about the last 3, and is laden with errors.
Just another example of the half-assed job this site has been doing lately.
JarredWalton - Tuesday, September 7, 2004 - link
#14 - Sorry, I went with desktop cards only. Usually, you're stuck with whatever comes in your laptop anyway. Maybe in the future, I'll look at including something like that.
#15 - Good God, Jim - I'm a CS graduate, not a graphics artist! (/Star Trek) Heheh. Actually, you would be surprised at how difficult it can be to get everything to fit. Maximum width of the tables is 550 pixels. Slanting the graphics would cause issues making it all fit. I suppose putting in vertical borders might help keep things straight, but I don't like the look of charts with vertical separators.
#20 - Welcome to the club. Getting old sucks - after a certain point, at least.
Neekotin - Tuesday, September 7, 2004 - link
great read! wow! i didn't know there were so many GPUs in the past 5-6 years. it's like more than all combined before them. guess i'm a bit old.. ;)
JarredWalton - Tuesday, September 7, 2004 - link
12/13: I updated the Radeon LE entry and resorted the DX7 page. I'm sure anyone that owns a Radeon LE already knows this, but you could use a registry hack to turn them into essentially a full Radeon DDR. (By default, the Hierarchical Z compression and a few other features were disabled.) Old Anandtech article on the subject: http://www.anandtech.com/video/showdoc.aspx?i=1473
JarredWalton - Monday, September 6, 2004 - link
Virge... I could be wrong on this, but I'm pretty sure some of the older chips could actually be configured with either SDR or DDR RAM, and I think the GF2 MX series was one of those. The problem was that you could either have 64-bit DDR or 128-bit SDR, so it really didn't matter which you chose. But yeah, there were definitely 128-bit SDR versions of the cards available, and they were generally more common than the 64-bit DDR parts I listed. The MX200, of course, was 64-bit SDR, so it got the worst of both worlds. Heh.
I think the early Radeons had some similar options, and I'm positive that such options existed in the mobile arena. Overall, though, it's a minor gripe (I hope).
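The "didn't matter which you chose" point follows from peak bandwidth being the effective data rate times the bus width in bytes, so the two GF2 MX configurations come out identical. A tiny sketch of that arithmetic, assuming the 166 MHz memory clock cited elsewhere in the thread:

```python
# Peak memory bandwidth = effective data rate x bus width in bytes.
def bandwidth_gbs(mem_clock_mhz, bus_bits, transfers_per_clock):
    return mem_clock_mhz * transfers_per_clock * 1e6 * (bus_bits // 8) / 1e9

sdr_128 = bandwidth_gbs(166, 128, transfers_per_clock=1)  # 128-bit SDR GF2 MX
ddr_64  = bandwidth_gbs(166, 64,  transfers_per_clock=2)  # 64-bit DDR GF2 MX
print(f"128-bit SDR: {sdr_128:.2f} GB/s; 64-bit DDR: {ddr_64:.2f} GB/s")  # both ~2.66 GB/s
```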
ViRGE - Monday, September 6, 2004 - link
Jarred, without getting too nit-picky, your data for the GeForce 2 MX is technically wrong; the MX used a 128bit/SDR configuration for the most part, not a 64bit/DDR configuration (http://www.anandtech.com/showdoc.aspx?i=1266&p... Note that this isn't true for any of the other MX's (both the 200 and 400 widely used 64bit/DDR), and the difference between the two configurations has no effect on the math for memory bandwidth, but it's still worth noting.
Cygni - Monday, September 6, 2004 - link
I've been working with Adrian's Rojak Pot on a very similar chart to this one for a while now. Check it out: http://www.rojakpot.com/showarticle.aspx?artno=88&...
Denial - Monday, September 6, 2004 - link
Nice article. In the future, if you could put the text at the top of the tables on an angle it would make them much easier to read.
suryad - Monday, September 6, 2004 - link
What about the Mobility X800 graphics card? I didn't see that thrown into the mix.
coldpower27 - Monday, September 6, 2004 - link
Thank you Bloodshedder, yeah, after reading a little about the Radeon LE, it's almost as good as a Radeon DDR, except with lower working frequencies. So if it's DDR, then the correct numbers are 148/296 and 32MB VRAM only.
Bloodshedder - Monday, September 6, 2004 - link
For the Radeon LE, I noticed a question mark next to the amount of RAM. I own one of these cards, and can confirm that 32MB DDR is the only configuration it comes in.
Draven31 - Monday, September 6, 2004 - link
You skipped which OpenGL version and features the various cards support... maybe add that when you add the various workstation cards to the listings...
coldpower27 - Monday, September 6, 2004 - link
Yeah, Nvidia learned its lesson last gen, when the then-new 0.13-micron process delayed the introduction of the NV30. They learned that playing it safe with a tried and tested process is a good idea for such high-complexity chips initially, though they of course plan to shift these chips to the 110nm process when it matures enough, possibly with the NV48 and R480, hopefully allowing higher clocks in the process :D, though maybe not for the R480 unless low-k is ready for 110nm by that time.
It does make more sense to use the newer manufacturing process to help save costs on the volume-shipping GPUs, as the cost savings will be accumulated much better in the mainstream and value arenas thanks to sheer volume.
We also see this with Intel: when Intel's yields on 90nm were only so-so, they introduced Prescott up to 3.2GHz in quantity, but introduced their Pentium 4 3.4GHz on the Northwood core on 0.13 micron. Over time, though, Intel is making every effort to transfer everything to 90nm, with Prescott and Prescott 2M w/1066FSB for the EE Edition.
JarredWalton - Monday, September 6, 2004 - link
8 - Intel does this as well, testing a new process on their non-flagship parts. For example, after the launch of the P4, Intel piloted their 130 nm copper technology with the Tualatin CPU before releasing the Northwood. It probably has something to do with the amount of extra time a more complex design takes to test and verify.
stephenbrooks - Monday, September 6, 2004 - link
Interesting how on the die sizes chart, I notice they're phasing in the 110nm process only for their mid-range-ish cards and sticking to the tried and tested 130nm for the high-end one. I suppose you can't blame them for that really, given it's their flagship product and all, but it could contribute to the huge die sizes.
JarredWalton - Monday, September 6, 2004 - link
Thanks, AtaStrumf - any errors in the numbers are ColdPower's fault. Heheheh. Really, he already caught a bunch of small mistakes, so hopefully the number of remaining errors is very small.
For what it's worth, there are various versions of some of the chips that have different clock speeds and RAM speeds from what is listed. The models in the chart should reflect the most common configurations, though.
BTW, the article text is now tweaked somewhat on the ATI and NVIDIA overview pages. Derek Wilson provided some additional insight on the subject of AA and AF that clarified things a little.
JarredWalton - Monday, September 6, 2004 - link
Argon was the name for the .25 micron K7, while Pluto and Orion were .18 micron.
#2 and #4: I realize you're kidding, but in all seriousness we did think about including other architectures. With the broken features on some of the more recent cards and the lack of T&L on 3dfx and older cards, we just decided to stick with the two major players. And hey - it's all fair, as we didn't include Cyrix/Via or Transmeta processors in the CPU cheatsheet! ;)
AtaStrumf - Monday, September 6, 2004 - link
OMFG, this is awesome!!!! You really outdid yourself this time! I have been collecting data on GPUs for quite a while and have been planning on making a spreadsheet just like the first two for my, so called, web site, but WOW, this rocks. Thanks for saving me a lot of work :)
When I get the time, I'll check your numbers a bit, just to make sure there aren't any typos in there.
Myrandex - Monday, September 6, 2004 - link
We can get some Matrox Parhelia action in there too to go along with the missing 3dfx =) I am wondering what 'Argon' is under the AMD platform (listed with the K6 CPUs). I don't remember ever hearing an Argon codename or anything.
Sweet article though.
Jason
CrystalBay - Monday, September 6, 2004 - link
Very nicely done...
jshaped - Monday, September 6, 2004 - link
missing option - 3DFX !!!
ye old 3DFX, how thee has served me so well