Doom 3 Performance

More so than any other game that we tested, Doom 3 is going to be held up by the X800 Pro. With the engine heavily optimized for NVIDIA's architecture - or simply a better fit for OpenGL and shadows, if you prefer - performance scaling drops off rapidly as the resolution increases. Whether future Doom 3 engine licensees will exhibit similar results is up for debate; Call of Duty and Wolfenstein: Enemy Territory, for example, don't correlate directly with Quake 3 performance. However, we expect NVIDIA to maintain some advantage over ATI in Quake 4 and Quake Wars.

At 640x480 without antialiasing, Doom 3 still gains 43% going from 1.8 GHz to 2.7 GHz. That drops off quickly to 34% at 800x600 and 18% at 1024x768. Consider our 1024x768 4xAA results something of a reality check: it doesn't matter how fast your CPU is if your bottleneck is somewhere else, and the 1% increase between 1.8 GHz and 2.7 GHz is testament to this fact. Higher quality RAM once again buys about 5% more performance relative to budget RAM, though the 2T timing at 9x300 is 12% slower than the performance RAM. It's also slower than the value RAM at 9x289, a trend seen in BF2 and several other tests.
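
To make the scaling math explicit, here's a minimal sketch (Python, using placeholder frame rates rather than our measured results) of how the percentage gains quoted above are computed:

    # Placeholder frame rates for illustration only -- not our measured results.
    def scaling(fps_slow, fps_fast):
        """Percent gain going from the 1.8 GHz config to the 2.7 GHz config."""
        return (fps_fast / fps_slow - 1.0) * 100.0

    # E.g., if 640x480 runs at 70 fps at 1.8 GHz and 100 fps at 2.7 GHz:
    print(f"{scaling(70.0, 100.0):.0f}%")  # -> 43%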

101 Comments

  • JarredWalton - Wednesday, October 5, 2005 - link

    Sorry if I missed this in the article. The reason a 3200+ may be better is the 10X multiplier vs. 9X. Sure, the DFI board used worked pretty well at either setting, but there are many boards that won't handle much above a 250 MHz CPU bus stably. Needless to say, there's a reason 2800 MHz was only included at one setting: while it still wasn't fully stable, the chip would actually run most benchmarks at 10x280, whereas 9x311 wouldn't even load Windows half the time. The extra $50 also buys added flexibility: you can try 9x300, 10x270, PC3200, PC2700, etc. to find the most stable, highest performing option.
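
    To spell out the multiplier math behind that (a quick Python sketch; the board limits are the ones mentioned above):

        # Effective CPU clock = multiplier x CPU bus (HTT) speed, in MHz.
        def cpu_clock(multiplier, bus):
            return multiplier * bus

        print(cpu_clock(10, 280))  # 2800 -- reachable with a bus speed many boards can manage
        print(cpu_clock(9, 311))   # 2799 -- needs a bus speed most boards won't run stably
        print(cpu_clock(9, 300), cpu_clock(10, 270))  # 2700 either way: the 10X flexibility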
  • Bakwetu - Wednesday, October 5, 2005 - link

    Thanks for a great article. I haven't been following developments so closely since I last upgraded (with one of the last unlocked Barton 2500+ chips), so this article was a most welcome refresher for me, as I will probably get an X2 3800+ rig in the near future.

    Last time I checked, using a bare fingertip to smear out the paste was a big no-no. I have always used either a washed razorblade or a fingertip in a clean plastic bag. The Arctic Silver once sold without silver was a fake, counterfeit product as far as I know. The real stuff, in its many forms over the years, has definitely shown itself to be a good product.
  • javalino - Wednesday, October 5, 2005 - link

    First, great article, Jarred.
    Second, I've been an Anand fan for as long as I can remember (1999-2000).
    Third, since your conclusion focuses on the overclocking dilemma: why spend so much on an overclocked (or otherwise powerful) system if your target is games, which are GPU limited? The $125, like you said, would be more useful in a video card.
    My idea is an article about the "Benefits, Costs, and Lessons Learned" of building a gaming system. How much performance do you gain running high-end cards at high resolutions and settings (like 1600x1200, plus an extra 4xAA/16xAF) on different systems? An FX vs. an overclocked A64 vs. an overclocked P4 vs. a P-M vs. an overclocked Athlon XP, for example. The conclusion would be how much you really "need" to pay for a decent gaming machine that can play all current (and maybe future) games with great image quality and performance.

    Maybe the answer is obvious - go with the best FPS-per-dollar option available - or maybe not.
  • AtaStrumf - Tuesday, October 4, 2005 - link

    Great article, Jarred!!! I really like your choice of value parts and how you critically assessed the results based on bang for the buck. And you finally did away with pages and pages of bar charts and combined them into line scaling charts. How long have I been asking for something like that??? Now we can finally see the REAL difference (or lack of it) and analyse the results properly, without having to go back and forth between tens of bar charts. Tell Anand to upgrade your graphing engine ASAP.

    I am a little worried about those voltages, though. This sure looks like a bad chip to me (OC wise) - WAY too high voltages. I would not go over 1.45-1.50V or else you risk screwing up the chip. You see, the memory controller on the chip doesn't like overly high voltages, and though it will still work, the chip will get slower eventually. It's hard to explain, really, but I know my new 2.2 GHz A64 is faster and much cooler than my old 2.4 GHz A64 (same core - Newcastle, same cooler, same RPM, same case, same ...), which I bought from some crazy overclocker (the last time I'll do that, BTW). The 2.4 GHz one gave me really shitty results in FAH for weeks. That's the only explanation I have so far, anyway. Maybe you can do an investigation into this -- burn in one A64 Venice at, say, 1.6V 24/7 for a few weeks and let's see what happens. I just don't have the $$$ and time to take the risk. I'd be very happy to hear from other forum members on this as well.

    Anyway, glad to see at least part of AT is back to the high quality standards we were used to.
  • AtaStrumf - Tuesday, October 4, 2005 - link

    Or maybe it's the SOI process that is to blame for not taking high voltages too kindly, or maybe both - I don't know yet - but I would definitely advise caution going over 1.5V (the default for 0.13 micron SOI chips). Just think about it: that's already a 15% increase, and +10% is usually the max that is still considered safe.

    You just posted that this chip seems to have changed its behavior (better OC). That may have something to do with the high voltages, and it may not be all good. I'd suggest testing it again in a few benchmarks and comparing the results.
  • JarredWalton - Wednesday, October 5, 2005 - link

    Working on it. I think I ended up benching at 1.850V for the 10x280 setting and then not dropping voltages as much as I was supposed to. I'm a little skeptical that a CPU would get slower, though. Usually, they work or they fail. We'll see.

    My thought on the "safe limit", though: what voltage does the FX-57 run at? Whatever it is, add 10 to 15% to that and you're probably still okay. Good cooling will also help; on the stock HSF, I'd be a lot more nervous going over 1.550V.
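
    To put numbers on that rule of thumb (a quick sketch; the 1.40V FX-57 stock voltage is my assumption - check your own chip's default VID):

        # Rule of thumb: stock voltage of the fastest chip on the process, plus 10-15%.
        stock_v = 1.40  # assumed FX-57 default; verify for your chip
        for margin in (0.10, 0.15):
            print(f"+{margin:.0%}: {stock_v * (1 + margin):.3f}V")
        # -> +10%: 1.540V   +15%: 1.610V (the latter only with good cooling)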
  • OvErHeAtInG - Tuesday, October 4, 2005 - link

    Very useful article - thorough yet concise. And I would like to toss in another request: add a ULi-based motherboard (such as the recently reviewed ASRock 939Dual-SATA2) to the test. How do these Venices overclock when you can only feed them +0.05V? As I recall, the standard AT Clawhammer was used in that review.

    That would be hugely useful to a lot of us wanting to transition to A64. While the thing to do is probably just to get a DFI or another top-end overclocking board, what about those of us who are not yet ready to upgrade GPUs? On second thought, you could simulate the ASRock by simply setting the Venice to the lower voltage on the DFI board and testing for the max overclock there. I imagine that would vary quite a bit from chip to chip, but just to get an idea - how much of a disadvantage is being limited in your voltage? Food for thought.
  • JarredWalton - Tuesday, October 4, 2005 - link

    I played around with voltages a bit more last night. It seems like I can hit about 2.40 GHz while only increasing the CPU voltage to 1.40V, though I didn't run all of the benchmarks to fully test that config. I'm not sure if the CPU has changed behavior over the past month, or if I was just too liberal with the voltages initially.

    For the ASRock, the fact that Wes managed to get a 500 MHz OC even with the minimal voltage adjustments is promising. Truth be told, the DFI Infinity seems to undervolt the CPU slightly, so 1.500V actually shows up as closer to 1.455V. If the ASRock is exact with its voltages, or even a bit high, I think a 2.4+ GHz overclock is a reasonably safe bet.
  • OvErHeAtInG - Wednesday, October 5, 2005 - link

    Thanks for the info, Jarred. I'm sure there's a thread on this somewhere.... :)
  • araczynski - Tuesday, October 4, 2005 - link

    I haven't seen a better argument for not wasting money on "better" memory in ages.

    With those kinds of "gains", I congratulate the companies on milking everyone with their markups on "higher end" components.
