Application Performance

We'll start with some general application performance, courtesy of Winstones 2004. Winstones runs a series of scripts in a variety of real-world applications. The problem is that many of the scripts simulate user input and operate at speeds that no human can approach. Rendering an image or encoding audio or video can still take time; Word, Excel, and Outlook, on the other hand, are almost entirely user-limited. While the fastest systems do post higher scores, in everyday use the typical office applications run so fast that differentiating between the various overclocked settings is difficult, if not impossible.

We get a decent performance increase from overclocking, but nowhere near the theoretical maximum. Going from 1.8 GHz to 2.8 GHz represents a 56% increase in CPU clock speed, although other factors almost never allow us to realize that full gain in benchmarks. In the Business Winstones test, we see a range from 21.9 to 27.6, a 26% increase. The Content Creation test shows a slightly larger improvement, ranging from 28.3 to 39.7 - 40% more performance. If you prefer to look at it that way, the limited scaling in the Business test also reflects the user-limited nature of office applications.
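
If you want to double-check the scaling figures, the math is simple enough. Here's a quick back-of-the-envelope Python sketch using the scores quoted above; the script itself is purely illustrative.

```python
def pct_gain(slow: float, fast: float) -> float:
    """Percentage improvement going from the slower result to the faster one."""
    return (fast / slow - 1) * 100

# Theoretical ceiling: pure clock-speed scaling from 1.8 GHz stock to 2.8 GHz overclocked
print(f"CPU clock:        {pct_gain(1.8, 2.8):.0f}%")    # ~56%

# Measured Winstones results quoted above
print(f"Business:         {pct_gain(21.9, 27.6):.0f}%")  # ~26%
print(f"Content Creation: {pct_gain(28.3, 39.7):.0f}%")  # ~40%
```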

Similar in some ways to Winstones performance, PCMark attempts to gauge overall system performance. The results are a little more synthetic, and PCMark takes only 5 to 10 minutes to run compared to 20 to 30 minutes for the Winstones tests. PCMark also includes some 2D and 3D graphics tests, which make the GPU somewhat important to the overall score. With Windows Vista moving to more hardware acceleration for windowing tasks, though, that's not necessarily a bad thing.

The difference between the slowest and fastest scores for our configuration is about the same as in Winstones. PCMark04 goes from 3851 to 5567, a 45% increase. PCMark05 shows less of a difference, ranging from 3259 to 4146 (27%). PCMark05 is also the only benchmark that we couldn't run to completion on the 2.8 GHz overclock; a couple of the tests failed every time. Both PCMark suites serve as good stress tests of CPU overclocks, which is one of the reasons we included the results. The failure to complete PCMark05 at 2.80 GHz means that we definitely won't run this particular system at that speed long-term.

In case the graphs don't convey this well enough, our standard application scores benefited very little from the use of higher quality RAM. While the 2T command rate on the 9x300 value configuration did worse than the 9x289 value configuration, nearly all of the other tests show performance continuing to increase with CPU speed, even with slightly lower memory speeds and latencies. The biggest gap between the value and performance RAM was in Business Winstones at 2.4 GHz, and even then, it was only a 5% margin of victory.

Comments

  • JarredWalton - Monday, October 3, 2005 - link

    It's tough to say how things will pan out long-term. 1.650V seems reasonably safe to me, but I wouldn't do it without a better HSF than the stock model. The 1.850V settings made me quite nervous, though. If you can get your CPU to run at 1.600V instead of 1.650V, that would be better, I think. There's also a possibility that slowing down your RAM slightly might help the CPU run at lower voltages. I'd sacrifice 5% to run what I consider a "safer" overclock, though really the thought of frying a $140 CPU doesn't concern me too much. That's less than any car repair I've had to make....
  • cryptonomicon - Monday, October 3, 2005 - link

    Well, for most overclocks a reasonable ("safe") increase of voltage is 10-15%. However, that is just a guideline; it may be more or less. There is sort of a way to find out: if you work on overclocking to the maximum of your chip while scaling the voltage, you will eventually hit a point where you have to increase the voltage dramatically just to get up the next FSB bump. For example, if you are at 2500MHz and 1.6V, and then it takes 1.75V just to get to 2600MHz, you have hit that boundary and should go back down immediately. When the voltage to CPU speed ratio is scaling consistently, things are fine. But once the voltage required becomes blatantly unbalanced, that is the logical time to stop... unless you have no concern for the longevity of the chip.
  • Ecmaster76 - Monday, October 3, 2005 - link

    This article finally goaded me into overclocking my P4 2.4C. I had been planning to for a while but never bothered.

    So I got bored and set the FSB to 250MHz (I hit my goal on the first try!) with a 5:4 memory ratio (still DDR400). It works great with stock cooling and stock voltage. I will have to do some long-term analysis of stability, but since I am building a new system before the year's end, I don't really care if it catches on fire. Well, as long as it doesn't melt some of the newer nerd toys that are attached to it.
  • lifeguard1999 - Monday, October 3, 2005 - link

    I am running an AMD Athlon 64 3000+ Processor (Venice) @ 2.7 GHz, stock HSF; 1.55V Vcore; DFI LANPARTY nF4 SLI-DR. It was cool seeing you run something similar to my setup. I run value RAM and it seems that I made the right choice for me (giving up at most 5% performance). You ran at Vcores much higher than I do, so it was interesting to see the CPU handle that.

    The only thing I would add to this article is a paragraph mentioning temperatures that the CPU experienced.
  • mongoosesRawesome - Monday, October 3, 2005 - link

    Yes, I second that. Temps at those voltages using your CPU cooler, as well as with a few other coolers, would be very helpful. It would also be useful to see a few tests with different coolers to show when temperatures start holding you back.
  • JarredWalton - Monday, October 3, 2005 - link

    I've got some tests planned for cooling in the near future. I'll be looking at CPU temps for stock (2.0 GHz) as well as 270x10 with 1.750V. I've even got a few other things planned. My particular chip wouldn't POST at more than 2.6 GHz without at least 1.650V, but that will vary from chip to chip. The XP-90 never even got warm to the touch, though, which is pretty impressive. Even with an X2 chip, it barely gets above room temperature. (The core is of course hotter, but not substantially so I don't think.)
  • tayhimself - Tuesday, October 4, 2005 - link

    Good article, but your Vcore seems to scale up with most of the increments in speed? Did you HAVE TO raise the vcore? Usually you can leave the vcore until you really have to start pushing. Comments?
  • JarredWalton - Tuesday, October 4, 2005 - link

    2.20GHz was fine with the default 1.300V. 2.40GHz may have been okay; increasing the Vcore to 1.400V seemed to stabilize it a bit, though it may not have been completely necessary. 2.60GHz would POST with 1.450V, but loading XP locked up. 1.550V seemed mostly stable, but a few benchmarks would crash. 2.70GHz definitely needed at least 1.650V, and bumping it up a bit higher seemed to stabilize it once again. 2.80GHz was questionable at best, even at 1.850V with the current cooling configuration. It wouldn't load XP at 2.80GHz with 1.750V, though.
  • JarredWalton - Tuesday, October 4, 2005 - link

    My memory of the voltages might be a bit off; personal experimentation will probably be the best approach. I think I might have erred on the high side of required voltage. Still, past a certain point you'll usually need to scale voltage a bit with each bump in CPU speed. When it starts scaling faster - e.g. 0.1V more just to get from 2700 to 2800 MHz - then you're hitting the limits of the CPU and should probably back off a bit and call it good. (There's a rough sketch of that rule of thumb after the comments below.)
  • tayhimself - Tuesday, October 4, 2005 - link

    Thanks a lot for your replies. It looks like there is a fair bit of overclocking headroom even without increasing the Vcore too much, which helps save on power, noise, etc.
    Cheers
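
To illustrate the rule of thumb that comes up a few times in the comments above - keep pushing until the voltage needed for the next speed bump jumps disproportionately, then back off - here is a rough Python sketch. The frequency/voltage points are the approximate values Jarred quotes for his particular chip, and the 0.12V-per-100MHz cutoff is an arbitrary figure chosen purely for illustration, not a recommendation.

```python
# Approximate Vcore needed at each speed, from the comments above (GHz, volts).
# These are chip-specific, remembered-from-testing numbers; treat them as illustrative only.
ladder = [(2.2, 1.30), (2.4, 1.40), (2.6, 1.55), (2.7, 1.65), (2.8, 1.85)]

# Hypothetical cutoff: if a step costs more than ~0.12V per 100MHz, call it the knee.
MAX_V_PER_100MHZ = 0.12

for (f0, v0), (f1, v1) in zip(ladder, ladder[1:]):
    v_per_100mhz = (v1 - v0) / ((f1 - f0) * 10)  # (GHz delta) * 10 = number of 100MHz steps
    verdict = "OK" if v_per_100mhz <= MAX_V_PER_100MHZ else "back off"
    print(f"{f0:.1f} -> {f1:.1f} GHz: {v_per_100mhz:.2f}V per 100MHz ({verdict})")
```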
