The Overclocking Debate


I see quite a few posts about overclocking. It's inevitable that hobbyists (as against production engineers) will resort to all sorts of unreproducible tweaks to get their projects working faster.

It's interesting that some datasheets show a 'stepped' speed/voltage SOAR and others a linear relationship (presumably more realistic, though I expect in reality it's not a straight line at all). I'm just musing on what the limiting factors might be:

    Reliability of EEPROM writes

    General clock errors causing random errors

    Clock errors to specific modules (which ones?)

    Memory/register read/write errors

    Power dissipation problems (i.e. at high clock speeds and low voltages gates spend too high a percentage of their time in the 'crossover state' between 0 and 1)

    Failure of one or more clock sources to start/maintain

There are surely other issues; my money would be on the first or last of these. What do the panel think? If there is a likely failure mode for overclocked chips, perhaps it could be specifically tested for or worked round (for the EEPROM case, see the sketch below).

Cheers,

Joey
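
If the first of those really is the weak spot, it could indeed be tested for and worked round in software with a verify-after-write guard. A minimal sketch, assuming avr-libc's <avr/eeprom.h>; the three retries are an arbitrary choice:

Code:
#include <avr/eeprom.h>
#include <stdint.h>

/* Write one byte and read it back; retry a few times before giving up.
   Returns 1 on a verified write, 0 if it never read back correctly. */
uint8_t eeprom_write_verified(uint8_t *addr, uint8_t value)
{
    for (uint8_t tries = 0; tries < 3; tries++) {
        eeprom_update_byte(addr, value);     /* writes only if different */
        if (eeprom_read_byte(addr) == value)
            return 1;
    }
    return 0;
}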


Gotta laugh.

I'm generally against overclocking.

But a couple of weeks ago I overclocked an Xmega, significantly. I was monitoring a synchronous data line and needed the ability to watch the clock and then grab the data and store it, quickly.

With the PLL the overclocking was trivial to implement, adjust, undo, etc.

JC
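
JC doesn't show the code, but the reason the XMEGA makes this trivial is that the system clock can be moved onto the PLL output with a handful of register writes, and moved back just as easily. A minimal sketch, assuming the 2 MHz internal RC as the PLL source and an illustrative (made-up) 40 MHz target; JC's actual figures aren't given:

Code:
#include <avr/io.h>

/* Call early, with interrupts still disabled (as at reset). */
static void clock_overclock_via_pll(void)
{
    OSC.PLLCTRL = OSC_PLLSRC_RC2M_gc | 20;   /* 2 MHz x 20 = 40 MHz: out of spec */
    OSC.CTRL |= OSC_PLLEN_bm;                /* start the PLL                    */
    while (!(OSC.STATUS & OSC_PLLRDY_bm))
        ;                                    /* wait for lock                    */
    CCP = CCP_IOREG_gc;                      /* unlock the protected register    */
    CLK.CTRL = CLK_SCLKSEL_PLL_gc;           /* switch the system clock over     */
}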


If I needed to run components out of spec, I'd upgrade the components so that the project ran in spec :)

As a hobbyist I'd rather not hunt down bugs that I've introduced by being a pr***!
As an engineer I'd rather not hunt down bugs.

--greg
Still learning, don't shout at me, educate me.
Starting the fire is easy; the hardest part is learning how to keep the flame!


I'm exactly with Greg on this one - I see no reason to introduce possible bugs if I have a way around it; if the project requires overclocking, either the component selection was incorrect or, possibly, the algorithm chosen. In the latter case, of course, you might be able to fix the problem without changing hardware.

While there may be cases where it makes sense - as with JC's project above - I think it is almost always more trouble than it is worth and - in those cases where it is appropriate - it should only be attempted by someone who knows loads about the component, how it can fail, and how to debug it; which really excludes many of the hobbyists (or, gasp, student engineers) that ask about overclocking here.

Martin Jay McKee

As with most things in engineering, the answer is an unabashed, "It depends."


There are a few cases where overclocking is fair game. Certainly NEVER in any product that you intend to sell and support! However, in a geek/nerd type hobby project, why not? If it works, fine; if not, throw it away and try something else. The Uzebox video game is one (actually the only) example of an overclocked AVR project I'm familiar with. BTW, reports are that the first thing to fail in the ATmega644 during overclocking was the USARTs. I don't know if Uzebox used the EEPROM, but it could use the USARTs to drive a MIDI interface.


I envision a test setup with a pulse gen for a clock, a lab power supply, a test program, and a test LED. The test program would fill RAM and read it back, read a big table from flash, then flash 1 for OK, 2 for RAM fail, or 3 for flash fail. Keep upping the clock by a factor of 1.1 till something fails, then goose up the Vcc to 5.3 V and see if it passes at that speed. Report results here. Collect beer in any city with several AVRfreaks and a pub.

Imagecraft compiler user
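
A minimal sketch of that test program; the LED pin, the buffer and flash-region sizes, the fill pattern, and F_CPU are all placeholder choices, not anything agreed above:

Code:
#define F_CPU 8000000UL                  /* placeholder; match the pulse gen */
#include <avr/io.h>
#include <avr/pgmspace.h>
#include <util/delay.h>
#include <stdint.h>

#define LED_DDR  DDRB
#define LED_PORT PORTB
#define LED_BIT  PB0

static volatile uint8_t ram_buf[1024];   /* "fill ram and read it back" */

static void blink(uint8_t n)             /* 1 = OK, 2 = RAM fail, 3 = flash fail */
{
    for (uint8_t i = 0; i < n; i++) {
        LED_PORT |= _BV(LED_BIT);  _delay_ms(200);
        LED_PORT &= ~_BV(LED_BIT); _delay_ms(200);
    }
    _delay_ms(1000);
}

static uint8_t ram_ok(void)
{
    for (uint16_t i = 0; i < sizeof ram_buf; i++)
        ram_buf[i] = (uint8_t)(i * 31u + 7u);        /* address-derived pattern */
    for (uint16_t i = 0; i < sizeof ram_buf; i++)
        if (ram_buf[i] != (uint8_t)(i * 31u + 7u))
            return 0;
    return 1;
}

static uint8_t flash_ok(void)
{
    /* Checksum the first 4K of flash twice; marginal reads show up as
       the two passes disagreeing with each other. */
    uint16_t sum1 = 0, sum2 = 0;
    for (uint16_t a = 0; a < 4096; a++) sum1 += pgm_read_byte(a);
    for (uint16_t a = 0; a < 4096; a++) sum2 += pgm_read_byte(a);
    return sum1 == sum2;
}

int main(void)
{
    LED_DDR |= _BV(LED_BIT);
    uint8_t code = 1;
    if (!ram_ok())        code = 2;
    else if (!flash_ok()) code = 3;
    for (;;) blink(code);                /* report the result forever */
}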


kscharf:
And what did that USART do (or not do)? I'm running an m644 at 32 MHz with no sign of USART illness (bidirectional communication at a full 2 Mbd). It actually worked at 40 MHz at 6.1 V too, but that was 'a bit' unstable.


smaslan wrote:
kscharf:
And what did that USART do (or not do)? I'm running an m644 at 32 MHz with no sign of USART illness (bidirectional communication at a full 2 Mbd). It actually worked at 40 MHz at 6.1 V too, but that was 'a bit' unstable.

There are several 'flavors' of the mega644 out there. This includes the original (one USART) and the 'P', 'PA', and 'V' types. I don't know which version had the issue with overclocking and USARTs. Also, some batch lots worked better than others. IOW, YMMV!

BTW the Uzebox people also got the ATmega1284P to run at 28 MHz, but had to increase the power supply voltage to 5.6 volts or more ("overclocked and overpowered!").


Well, the ATmega644-20PU (series 617) worked normally at 32 MHz at 5 V, but I've only used the USART, SPI, timers and ports, so no wonder ...


The Uzebox application is VERY critical in system timing, since it generates NTSC/PAL graphics in real time using only software (well, there is an outboard chip that combines the processor-generated R, G, and B signals with processor-generated H and V blanking into composite or S-Video form). When the overclocking fails, the result is garbage graphics instead of Pac-Man.


Quote:
garbage graphics instead of Pac-Man
NOOOOooooooooo!!!!!!
Tell me it's not true!!!!!
Pac-Man rules, fo-eva!!!!

--greg
Still learning, don't shout at me, educate me.
Starting the fire is easy; the hardest part is learning how to keep the flame!


kscharf wrote:
There are a few cases where overclocking is fair game. [...] The Uzebox video game is one (actually the only) example of an overclocked AVR project I'm familiar with.

But the whole point of the Uzebox project is to make something that the AVR is NOT suited for; it is, as you say, a geek/nerd type project. It is also a place where you have people who "know what they are doing" developing it.

Even in my hobby work, stability is more important than "doing something cool". All of the projects "do something". Certainly, it is possible to overclock - it may not even be difficult - but I stand by the assertion that you need to know what you are doing before you try it... and you need to know why; that precludes many of the people that ask about it.

Martin Jay McKee

As with most things in engineering, the answer is an unabashed, "It depends."


Quote:
But the whole point of the Uzebox project is to make something that the AVR is NOT suited for

Perhaps, but for many hobbyists AVR is the only game in town. I'm not going to switch to a different range of processors with each new project. I've invested in some tools and the understanding to explore AVRs and I want to push the envelope.

I'm driving a 320x240 GLCD. Software scrolling at a decent speed (8 lines a second) means putting over a megabyte of data in and out of a buffer less than 4K in size, together with all the accompanying calculations and processing of the lines to be added. Without the overclocking (16 MHz at 3.3 V) I'd be scrolling at 5 lines a second. I had to add three NOPs to stabilise the initial read from the GLCD. (It's also happily running a USART at fmax.)

I get the feeling some folks would think me a sinner if I started thinking a 20 MHz crystal would make it just that bit slicker...

I can't see that harming man nor beast.

Cheers,

Joey
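
For anyone who hasn't met it, the "three NOPs" fix is just a couple of cycles of padding between strobing the read line and sampling the bus, to cover bus timing that has gone marginal at the higher clock. A sketch with invented wiring (Joey's actual connections aren't shown):

Code:
#include <avr/io.h>
#include <stdint.h>

/* Placeholder wiring -- adjust to the real board. */
#define GLCD_CTRL_PORT PORTC
#define GLCD_RD        PC0
#define GLCD_DATA_PIN  PINA

static inline uint8_t glcd_read(void)
{
    GLCD_CTRL_PORT |= _BV(GLCD_RD);             /* assert the read strobe     */
    __asm__ __volatile__("nop\n\tnop\n\tnop");  /* let the GLCD drive the bus */
    uint8_t v = GLCD_DATA_PIN;                  /* ...before sampling it      */
    GLCD_CTRL_PORT &= ~_BV(GLCD_RD);            /* release the strobe         */
    return v;
}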


Quote:

It's inevitable that hobbyists (as against production engineers) will resort to all sorts of unreproducible tweaks to get their projects working faster.

I miss two or three occurrences of the word "some" in that sentence. And I guess that some of those "some"s could actually be replaced with "a few".

It seems to me that for some reason there are hobbyists who are obsessed with speed - in many cases not needed - instead of things like robustness etc. Sometimes the need for speed is simply down to bad programming practices - a clumsy algorithm makes for slow execution. Those bad practices also tend to indicate instabilities, or even bugs.



Quote:

Sometimes the need for speed is simply down to bad programming practices - a clumsy algorithm makes for slow execution

Ah! You mean when people use C instead of hand-optimising their assembler :-)

I resorted to overclocking (1) after experimenting with algorithms and code optimisation (we're talking comparing clock cycles for different subroutines here), (2) after taking the risk of asking for advice here, and (3) alongside choosing a part with more SRAM for a bigger buffer.

I think (2) was probably the most risky step.

Funnily enough, now I have more SRAM I could switch to scrolling by line rather than column, but the eight-times-bigger buffer made so much less difference to the speed than the overclock that I have to ask: is it worth the extra effort?

Cheers,

Joey


Quote:
is it worth the extra effort?
Only you can answer that.
Did you consider using two 'frame' buffers and just switching them when you were ready? I understand that's a popular technique for improving user display interaction.

--greg
Still learning, don't shout at me, educate me.
Starting the fire is easy; the hardest part is learning how to keep the flame!
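
The swap itself is cheap - at the end of a frame you exchange two pointers, so the display is always fed from a complete image while the next one is drawn. A sketch with invented sizes and names, leaving aside whether two full buffers fit in SRAM:

Code:
#include <stdint.h>

#define BUF_SIZE 1024u              /* placeholder; whatever fits in SRAM */

static uint8_t buf_a[BUF_SIZE];
static uint8_t buf_b[BUF_SIZE];

static uint8_t *draw_buf = buf_a;   /* being rendered into         */
static uint8_t *show_buf = buf_b;   /* being pushed to the display */

/* Call once per frame, when drawing is complete. */
void swap_buffers(void)
{
    uint8_t *tmp = draw_buf;
    draw_buf = show_buf;
    show_buf = tmp;
}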


Using two frame buffers goes back to the BBC Master, and works very well (and can give nice smooth results).

But I only have 16K of RAM and the screen is 150K. Even with external SRAM I'd only be scratching the surface. This slows down rectangle moves and copies as well, and means a scroll requires relatively slow reads and re-writes. Irritatingly, the screen supports hardware scroll, but it goes the wrong way as I'm using it in portrait mode.

TBH I'm pleased that, after all the overheads and calculation, I've got it down to about 10 clocks per pixel move, but I reckon there is still some room for improvement - not in the individual read/write cycles, but in setting up the read/write areas.

Cheers,

Joey