Synthetic Gaming Performance

We've already looked at some Futuremark performance numbers, and most people are already familiar with 3DMark. You'll note that we list 3DMark as a Synthetic Gaming benchmark. This is not meant as a slam on the application, but the truth is that games and graphics are so complex these days that the only thing any benchmark can really tell you is how well that benchmark runs. Doom 3 says nothing about how Quake 4 will actually run on the same system; we'd assume it will be similar, but we won't know until the game ships and we can test it. If a game can't even give you sure knowledge of how other licensees of the same engine will perform, how can it possibly give you insight into how a different engine will run? That's 3DMark in a nutshell: some games will correlate very well with the performance results and scaling that we see in 3DMark, and others will behave completely differently. The only thing that the benchmark shows for sure is how well it runs.

Starting with the 3DMark results, note that both graphs start at non-zero values. This was done to make the differences, slight though they may be, easier to see. In 3DMark03, performance scales from 10073 to 11078 (10%). In 3DMark05, there's even less difference, as results go from 4833 to 5061 (5%). The difference between value RAM and TCCD maxes out at about 1.5% in 3DMark03 and less than 0.5% in 3DMark05. What all this means is that both of these tests are almost completely GPU limited with our X800 Pro. Using a 7800 GT or GTX would increase scores quite a bit, but even then, the 3DMark scores would still be mostly GPU limited - a typical problem with many synthetic benchmarks.
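
For clarity, the percentages are simply the gain from the slowest to the fastest result in each test. A quick sketch of the arithmetic:

```python
# Percent gain from the lowest to the highest score in each test,
# using the scores quoted above.

def percent_gain(low: float, high: float) -> float:
    return (high - low) / low * 100

print(f"3DMark03: {percent_gain(10073, 11078):.1f}%")  # ~10.0%
print(f"3DMark05: {percent_gain(4833, 5061):.1f}%")    # ~4.7%
```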

The good news is that we also get CPUMarks, which largely remove the graphics card from the picture. The CPU tests render a couple of the same scenes as the 3DMark portion of the suite, but they perform the transform and lighting operations (and some other work) on the CPU rather than the GPU. The CPU tests are also multi-threaded, though the Venice chips don't gain anything from that. (Were we to include a Pentium 4 with Hyper-Threading, it would tend to perform very well here relative to its actual gaming results.) The CPU scores are more in line with the other results that we've already seen, gaining 46% and 42% in 03 and 05, respectively.

Being able to run the entire 3DMark03/05 suites from start to finish without crashing is once again a good indication of stability. We went a step further and looped the tests for eight hours or more on the top overclocks without trouble... at least, without incident at 2700 MHz and below. The 2800 MHz overclock would crash after 30 to 60 minutes of looping, usually during the CPU portion of the test.
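
This sort of looped stability testing is easy to script if your benchmark can be launched from the command line. Here's a minimal sketch, assuming a hypothetical run_benchmark.cmd wrapper that exits with a non-zero code when a run crashes:

```python
# Rerun a benchmark until it fails or an eight-hour budget runs out.
# "run_benchmark.cmd" is a hypothetical stand-in for whatever launches
# your benchmark and reports a crash via its exit code.
import subprocess
import time

BUDGET_HOURS = 8
deadline = time.time() + BUDGET_HOURS * 3600

runs = 0
while time.time() < deadline:
    result = subprocess.run(["run_benchmark.cmd"])
    runs += 1
    if result.returncode != 0:
        print(f"Failure on run {runs} - overclock is not loop-stable")
        break
else:
    print(f"Completed {runs} runs in {BUDGET_HOURS} hours without incident")
```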

Comments

  • JarredWalton - Monday, October 3, 2005 - link

    It's tough to say how things will pan out long-term. 1.650V seems reasonably safe to me, but I wouldn't do it without a better HSF than the stock model. The 1.850V settings made me quite nervous, though. If you can get your CPU to run at 1.600V instead of 1.650V, that would be better, I think. There's also a possibility that slowing down your RAM slightly might help the CPU run at lower voltages. I'd sacrifice 5% to run what I consider a "safer" overclock, though really the thought of frying a $140 CPU doesn't concern me too much. That's less than any car repair I've had to make....
  • cryptonomicon - Monday, October 3, 2005 - link

    Well, for most overclocks a reasonable ("safe") increase of voltage is 10-15%. However, that is just a guideline; it may be more or less. There is sort of a way to find out: if you work on overclocking to the maximum of your chip while scaling the voltage, you will eventually hit a place where you have to increase the voltage dramatically just to get up the next FSB bump. For example, if you are at 2500MHz and 1.6V, and it then takes 1.75V just to get to 2600MHz, you have hit that boundary and should go back down immediately. When the voltage to CPU speed ratio is scaling consistently, things are fine. But once the voltage required becomes blatantly unbalanced, that is the logical time to stop... unless you have no concern for the longevity of the chip. (A rough numeric version of this check is sketched below, after the comments.)
  • Ecmaster76 - Monday, October 3, 2005 - link

    Finally goaded me into overclocking my P4 2.4c. I had been planning for a while but never bothered to.

    So I got bored and set the FSB to 250MHz (I went for my goal on my first try!) with a 5:4 (still DDR400) memory ratio. It works great with stock cooling + stock voltage. I will have to do some long-term analysis of stability, but since I am building a new system before the year's end, I don't really care if it catches on fire. Well, as long as it doesn't melt some of my newer nerd toys that are attached to it.
  • lifeguard1999 - Monday, October 3, 2005 - link

    I am running an AMD Athlon 64 3000+ Processor (Venice) @ 2.7 GHz, stock HSF; 1.55V Vcore; DFI LANPARTY nF4 SLI-DR. It was cool seeing you run something similar to my setup. I run value RAM and it seems that I made the right choice for me (giving up at most 5% performance). You ran at Vcores much higher than I do, so it was interesting to see the CPU handle that.

    The only thing I would add to this article is a paragraph mentioning temperatures that the CPU experienced.
  • mongoosesRawesome - Monday, October 3, 2005 - link

    Yes, I second that. Temps at those volts using your CPU cooler, as well as with maybe a few other coolers, would be very helpful. It would also be useful if you could do a few tests using different coolers to see when temps hold you back.
  • JarredWalton - Monday, October 3, 2005 - link

    I've got some tests planned for cooling in the near future. I'll be looking at CPU temps for stock (2.0 GHz) as well as 270x10 with 1.750V. I've even got a few other things planned. My particular chip wouldn't POST at more than 2.6 GHz without at least 1.650V, but that will vary from chip to chip. The XP-90 never even got warm to the touch, though, which is pretty impressive. Even with an X2 chip, it barely gets above room temperature. (The core is of course hotter, but not substantially so, I don't think.)
  • tayhimself - Tuesday, October 4, 2005 - link

    Good article, but your Vcore seems to scale up with most of the increments in speed? Did you HAVE TO raise the vcore? Usually you can leave the vcore until you really have to start pushing. Comments?
  • JarredWalton - Tuesday, October 4, 2005 - link

    2.20GHz was fine with the default 1.300V. 2.40GHz may have been okay; increasing the Vcore to 1.40V seemed to stabilize it a bit, though it may not have been completely necessary. 2.60GHz would POST with 1.450V, but loading XP locked up. 1.550V seemed mostly stable, but a few benchmarks would crash. 2.70GHz definitely needed at least 1.650V, and bumping it up a bit higher seemed to stabilize it once again. 2.80GHz was questionable at best even at 1.850V with the current cooling configuration; it wouldn't load XP at 2.80GHz with 1.750V.
  • JarredWalton - Tuesday, October 4, 2005 - link

    My memory on the voltages might be a bit off. Personal experimentation will probably be the best approach. I think I might have erred on the high side of required voltage. Still, past a certain point you'll usually need to scale voltage a bit with each bump in CPU speed. When it starts scaling faster - i.e., 0.1V more just to get from 2700 to 2800 MHz - then you're hitting the limits of the CPU and should probably back off a bit and call it good.
  • tayhimself - Tuesday, October 4, 2005 - link

    Thanks a lot for your replies. It looks like there is a fair bit of overclocking headroom even if you don't increase the Vcore too much, which helps save power/noise, etc.
    Cheers
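
For the curious, the "voltage knee" rule of thumb described in the comments above is easy to express numerically. Here's a minimal sketch; the (MHz, Vcore) pairs are the approximate ladder from Jarred's replies, and the 1.5x spike threshold is an illustrative assumption rather than a measured rule:

```python
# Flag the point where the voltage needed per extra MHz starts spiking.
# Step data approximates the ladder quoted in the replies above; the
# 1.5x threshold is an arbitrary illustrative choice.

steps = [(2200, 1.30), (2400, 1.40), (2600, 1.55), (2700, 1.65), (2800, 1.85)]

prev_cost = None
for (f0, v0), (f1, v1) in zip(steps, steps[1:]):
    cost = (v1 - v0) * 1000 / (f1 - f0)  # extra millivolts per extra MHz
    knee = prev_cost is not None and cost > 1.5 * prev_cost
    flag = "  <- knee: back off" if knee else ""
    print(f"{f0} -> {f1} MHz: {cost:.2f} mV/MHz{flag}")
    prev_cost = cost
```

Run against those values, the script flags the 2700 to 2800 MHz step (2.00 mV/MHz versus 1.00 for the previous step) - the same overclock that crashed during the looped 3DMark testing.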
