Fine tuning a crystal on TOSC


Hi folks,

I'm working with an ATmega8 and using a 32.768 kHz crystal on the TOSC pins. I've noticed that the clock runs about 3 seconds behind over a 24-hour period. I'd like to trim this up to be less than 1 second off per day.

Tell me if this sounds like the right approach:

I have Timer2 running with a prescaler of 128, so it counts at 32768 / 128 = 256 Hz and the 8-bit timer overflows exactly once every second. This means I need to throw out 768 cycles per day to get the clock to run faster:

3 seconds * 256 cycles/overflow = 768 cycles

There are 86400 overflows per day, so I need to throw out one cycle about every 112.5 overflows. Since that isn't a whole number, I'll throw out 2 cycles every 225 overflows instead.

60 * 60 * 24 = 86400 overflows per day
86400 / 768 = 112.5 overflows per discarded cycle
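
To double-check the arithmetic, here's a throwaway sanity check (plain C on a PC; the constants are just the numbers from this post):

#include <stdio.h>

int main(void)
{
    const long xtal_hz  = 32768;
    const long prescale = 128;
    const long per_ovf  = xtal_hz / prescale;   /* 256 cycles per overflow */
    const long ovf_day  = 60L * 60 * 24;        /* 86400 overflows per day */
    const long drop_day = 3 * per_ovf;          /* 768 cycles to discard per day */

    printf("discard 1 cycle every %.1f overflows\n", (double)ovf_day / drop_day); /* 112.5 */
    printf("discard 2 cycles every %ld overflows\n", 2 * ovf_day / drop_day);     /* 225 */
    return 0;
}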

Here is my Interrupt Service Routine before I make the changes to reflect this adjustment:

#include <stdint.h>
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t ticks = 0;

//RTC one second overflow timer
ISR(TIMER2_OVF_vect)
{
  if (++ticks > 59) 
  {
    ticks = 0; //One minute has passed
    //TODO: Put code here to fine-tune the clock
  }
}

Here is the altered ISR to reflect these adjustments:

//RTC one second overflow timer, with adjustment
volatile uint8_t clk_adjust = 0; //Counts overflows between corrections

ISR(TIMER2_OVF_vect)
{
  if (++ticks > 59) 
  {
    ticks = 0; //One minute has passed
  }
  if (++clk_adjust > 224) //225 overflows have passed
  {
    TCNT2 = 2;  //Restart the count at 2, discarding 2 cycles
    clk_adjust = 0;
  }
}

Will this work as I have explained it? Is this the correct way to approach this issue?

Thanks!


You could do the trim in hardware by changing the load capacitors.


It should work, as long as you service the interrupt within 128 crystal cycles (one timer tick). You will also need to pay attention to the ASSR register when doing this, and you will lose any partial prescaler count (crystal cycles already counted toward the next timer tick), so you will still have some jitter.

One simple change would be to use CTC mode and the compare-match interrupt instead of the overflow interrupt, and then adjust the compare value; this is less likely to introduce jitter.

Writing code is like having sex.... make one little mistake, and you're supporting it for life.


glitch wrote:
It should work, as long as you service the interrupt within 128 crystal cycles (one timer tick). You will also need to pay attention to the ASSR register when doing this, and you will lose any partial prescaler count (crystal cycles already counted toward the next timer tick), so you will still have some jitter.

Are you saying I need to check that TCNT2 is ready to be written? Something like this:

while (ASSR & (1<<TCN2UB)) { ; }

In order to stall until the register is ready to be updated?
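
If so, the full guarded write would presumably look something like this (the helper name is mine; TCN2UB is the ATmega8's "TCNT2 update busy" flag):

#include <stdint.h>
#include <avr/io.h>

static void tcnt2_write_safe(uint8_t value)
{
    while (ASSR & (1 << TCN2UB)) { ; } /* a previous asynchronous write still pending? */
    TCNT2 = value;
    while (ASSR & (1 << TCN2UB)) { ; } /* wait until the new value reaches the timer domain */
}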

Regarding jitter: I have written some more code that makes this clock calibration user-adjustable. No matter the amount of correction, the correction code only executes 128 times in a 24-hour period. In the absolute worst case I lose 127 crystal cycles every time a correction is made (any more and I've missed a whole timer increment). That's 128 * 127 / 32768 ≈ 0.5 seconds per day due to jitter. Does that sound right? How can I tell how many cycles are passing during this ISR?

Also, here's the updated code. Notice the magic number is now 675 (86400 / 675 = 128 corrections per day), which allows for all the adjustments I need.

#include <stdint.h>
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t ticks = 0;
unsigned int clk_adjust = 0;
volatile signed char clk_adjust_amt = 2; //Seconds per day to adjust the clock

//RTC one second overflow timer
ISR(TIMER2_OVF_vect)
{
  ++ticks; 
  //Check to see if 675 overflows have passed
  if (++clk_adjust == 675)
  {
    //Check to see if we are adding cycles to delay the clock
    if (clk_adjust_amt < 0)
    {
      //Take back this tick; the short cycle below re-adds it
      --ticks;
      //Negative value wraps (e.g. -4 becomes 252), so the timer
      //interrupts again after a few extra cycles
      TCNT2 = (uint8_t)(clk_adjust_amt * 2);
      //65535 wraps to 0 on the next interrupt's increment
      clk_adjust = 65535;
    }
    //We are throwing out cycles to speed up the clock
    else 			
    {
      //Restart the count past zero, discarding cycles
      TCNT2 = (uint8_t)(clk_adjust_amt * 2);
      //Reset clock adjust counter
      clk_adjust = 0;
    }
  }
  //One minute has passed
  if (ticks > 59) ticks = 0;
}

edit: Moved comments to make code more readable on the forums.


You can't tell how many cycles have passed until the ISR executes; there is no way to read the count inside the prescaler. (Note that the latency could be tens, hundreds, or even thousands of cycles if you have other ISRs active in the system.) As I said earlier, switching to CTC mode eliminates this problem. The switch only requires a minor change to a few lines of code, and it simplifies your adjustment code.

Writing code is like having sex.... make one little mistake, and you're supporting it for life.


I don't see how the switch to CTC is only minor. Timer2 is an 8-bit timer, and I need it to overflow in order to track 1 second passing.

This is fine if you want to make the clock run faster, but it makes it much harder to run the clock slower.

If I change the prescaler to 256 I could compare at 127 (a period of 128 counts) and still get one second. The clock adjustment would then need two steps: one to throw out (or add) the adjustment cycles, and another, executed on the next compare, to set the compare value back to normal. Doesn't this make the issue more complicated? Would it be worth it?


WHATEVER you do, the clock WILL drift even when calibrated, unless you can keep the crystal at a constant temperature. That's why they make TCXOs like the DS32kHz.

John Samperi

Ampertronics Pty. Ltd.

www.ampertronics.com.au

* Electronic Design * Custom Products * Contract Assembly


barney_1 wrote:
I don't see how the switch to CTC is only minor. Timer2 is an 8-bit timer, and I need it to overflow in order to track 1 second passing.

OK, so you're counting overflows: the (overflow) interrupt fires when the counter rolls from 255 back to 0.

So what's the difference if you use the compare? If you set the compare value to 255, the (compare) interrupt still fires when it rolls from 255 to 0, so it behaves exactly the same in this case.

But now, in CTC mode, if you need to shave a couple of ticks off to adjust, you can set it to 253 for that one pass and it will fire when it rolls from 253 to 0, without adding any jitter into the system the way you do when you twiddle TCNT directly.
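
For reference, a rough sketch of that setup on the mega8, assuming the same asynchronous 32.768 kHz arrangement (register and bit names are from the ATmega8 datasheet; the init function is illustrative):

#include <stdint.h>
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t ticks;

static void rtc_ctc_init(void)
{
    ASSR  = (1 << AS2);                                  /* clock Timer2 from TOSC */
    TCCR2 = (1 << WGM21) | (1 << CS22) | (1 << CS20);    /* CTC mode, prescaler 128 */
    OCR2  = 255;                                         /* 256 counts = 1 second */
    while (ASSR & ((1 << TCR2UB) | (1 << OCR2UB))) { ; } /* let the async writes settle */
    TIMSK |= (1 << OCIE2);                               /* enable compare-match interrupt */
}

ISR(TIMER2_COMP_vect)
{
    if (++ticks > 59) ticks = 0;
    /* to shave 2 ticks off one second: set OCR2 = 253 for one pass,
       then restore OCR2 = 255 in the next interrupt */
}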

Writing code is like having sex.... make one little mistake, and you're supporting it for life.


I have a slightly different approach which you might consider; the basic concept is:

    - I use 10-msec interrupts and two timer counters:
      o a "raw" wrap-around counter, incremented at each 10-msec interrupt,
      o a real-time counter, incremented at each 10-msec interrupt but subject to correction;

    - for controlling correction, I use a correction mask: a list of bits marking where a timer pulse needs to be added or skipped (see below);

    - at each timer interrupt, I check whether the least significant 1 in my raw timer counter corresponds to a bit that is set in the correction mask;

    - if there is a match, the moment has come for a correction: I count this timer interrupt twice, or skip it, depending on the direction of the required correction (which is determined by the first bit of the correction mask):

      o no correction: the real-time 10-msec counter is incremented by 1,
      o positive correction: the real-time counter is incremented by 2,
      o negative correction: it is not incremented.

The correction is tuned by adjusting the correction mask (in fact, a correction polynomial); see the sketch below.

It sounds slightly complicated, but it is easy to implement, very flexible, and allows good precision (and I store the correction mask in EEPROM).
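
Here is a minimal sketch of that scheme in C (the names raw_count, rtc_count, and corr_mask are mine, and keeping the correction direction in a separate flag instead of bit 0 of the mask is a simplification):

#include <stdint.h>

volatile uint16_t raw_count;   /* free-running, wraps every 65536 ticks */
volatile uint32_t rtc_count;   /* corrected real-time 10 ms counter */
uint16_t corr_mask = 0x0004;   /* bit k set => one correction per 2^(k+1) ticks */
uint8_t  corr_skip = 0;        /* 0 = double-count on a match, 1 = skip the tick */

void tick_10ms(void)           /* call from the 10 ms timer interrupt */
{
    raw_count++;
    /* isolate the least significant 1 of the raw counter */
    uint16_t lowest = raw_count & (uint16_t)(-raw_count);
    if (corr_mask & lowest)
        rtc_count += corr_skip ? 0 : 2;  /* correction: count twice or not at all */
    else
        rtc_count += 1;                  /* normal tick */
}

The mask bits are binary-weighted: bit 0 matches on every other tick and bit 15 once per 65536 ticks, so with 10 ms ticks the finest step is one tick per roughly 655 seconds, about 15 ppm.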


Quote:

I have a slightly different approach

So do I. Unless timing requirements are very stringent (and then the precise no-drift oscillator and/or the tunable RTC chip are justified), I wouldn't worry about correcting the "ticks"--10 ms periods or the CTC count itself. Rather, I'd do it at a coarser level: every n minutes I'd skip a second or add a second, with n being the "calibration" value. Yes, a logged "cycle" could be shown a second longer or shorter than it really is. That rarely matters in many apps.

My rule-of-thumb when using DS1305/6 chips untuned in a logging app is that a minute a month is fine. Most fall well within that boundary. A connection is made to a PC every few weeks to upload the data, and the clocks can then be synchronized. A typical PC "RTC" seems to drift more than that.

Those numbers are roughly what the OP wants: he is "off" by 90 seconds a month and would like to get under 30. Now, if this is a one-off and hand-built, then I might fuss with the caps, clean the board, reduce capacitance, etc. If it's a whole batch and they are consistent--hmmmm. Is it inherent in caps not really designed for the setup? Is it the TOV "race"? [I'd almost always use CTC.]
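
A sketch of that coarse correction, under my own naming (and using the OP's numbers: with 3 s/day of drift, n = 480 minutes works out to one added second every 8 hours):

#include <stdint.h>

volatile uint8_t seconds;       /* the clock's 0-59 second counter */
static uint16_t minutes_since_adj;
static uint16_t cal_n   = 480;  /* adjust every n minutes; 480 => 3 s/day */
static int8_t   cal_dir = +1;   /* +1 = add a second (clock runs slow) */

void on_minute_tick(void)       /* call once per minute from the RTC code */
{
    if (++minutes_since_adj >= cal_n) {
        minutes_since_adj = 0;
        /* make this minute one second longer or shorter; rollover
           across the minute boundary is left to the real clock code */
        seconds = (uint8_t)((seconds + 60 + cal_dir) % 60);
    }
}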

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


My suggestion of CTC was only to minimize the change to the OP's code and program structure, where he wants the interrupts happening at 1-second intervals. I would normally have the interrupts happen at a higher rate, incrementing a software counter, and then execute things once 1 second's worth of ticks has elapsed. This allows better synchronization to external events and makes tuning easy.

As always there are many ways to accomplish the task... the above is just one of them.

Writing code is like having sex.... make one little mistake, and you're supporting it for life.


Add a trimmer capacitor. And remember to include that series resistor on the output pin (I can't remember which one it was).

To calibrate the device, the best solution would (IMHO) be to have it output a 50 Hz (or 60 Hz in the USA) square wave and then compare it with the actual AC on the wall outlet. It is easy to detect even the slightest deviation with this setup.

You will obviously need an oscilloscope to do the calibration this way. (They usually have a sync-to-line-voltage option.)


The problem with 50 Hz:
So how long does one have to stare at the scope?
The time for a one-cycle slip is 86400/(3 s * 50 Hz) = 576 seconds. More coffee required.

So does it work at all?
barney_1 is 35 ppm off: 60 Hz + 35 ppm = 60.002 Hz, since he's leftpondian.
http://www.leapsecond.com/pages/mains/ shows what mains stability is like (TVB is a time nut; questioning his reference is out). The short-term stability is 10 to 20 times worse than the requirement.

I'd take a look at the load caps. Most of the 32 kHz tuning forks I use are better than +/-20 ppm; -34 ppm is a bit much unless it's hot - tuning forks have a terrible tempco outside of the 20-38 deg C range.

For production trimming, software IS a good way to go about it.


Yeah, I'm not going to mess around with 50/60Hz.

I had thought about the "coarse" one-second adjustments, but I'd rather this be a little more refined: seconds are displayed, and if you watch at just the right time you'll see a second either last twice as long as it should or be skipped entirely.

As far as caps are concerned: it is my understanding that the mega8 has the TOSC pins optimized for a 32.768 kHz crystal and no external caps are needed. Is this wrong?


All my clocks run fast unless I adjust them. I use CTC to throw away ticks. It works well. I can get them quite accurate as long as the temperature stays constant.

I had one running for 8 days, and in that time it lost 50 milliseconds (about 12 ticks). That's roughly 2 seconds per year.

I'm now going to work on a temperature adjustment. There's a temperature sensor on board. If I can get it accurate to within 3 seconds per year across all temperatures, I will let you know. :)