Can someone please explain why Dean says we need to decrement the number of ticks by one? I'm not seeing the logic in his argument.
The only reason I can see to decrement the number of ticks is that the timer reset uses one cycle? So, in his example, 50,000 ticks is 1/20th of a second; decrement by 1 gives 49,999. The timer counts up to 49,999, then the next clock tick would be "wasted" or spent resetting to 0? And how important is that one cycle anyway? I was under the impression that the internal oscillator is a poor timekeeper, prone to significant error. Maybe on an 8-bit timer that one cycle might matter a little more?
The AVR timers will only update their count on each timer input clock tick, thus it takes one tick
to get from a count of zero to one, or from the timer's maximum value back to zero. As a result,
we need to decrement the number of ticks in our calculation by one, as shown in the above formula.
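For what it's worth, here is a minimal sketch of how that off-by-one plays out in CTC mode. It assumes an ATmega328-style Timer1 register layout (TCCR1B, OCR1A, TIFR1) and the 1 MHz timer clock from Dean's example, not his exact code; the point is just that the count sequence 0 through 49,999 is 50,000 distinct states, so the compare value is 50000 - 1:

```c
#include <avr/io.h>

int main(void)
{
    /* CTC mode, OCR1A as TOP: the timer counts 0, 1, ..., OCR1A,
     * then rolls back to 0 on the *next* timer clock tick. */
    TCCR1B |= (1 << WGM12);

    /* 1 MHz timer clock / 20 Hz target = 50,000 ticks per period.
     * Counting 0 through 49,999 covers 50,000 distinct states,
     * hence the decrement: OCR1A = 50000 - 1. */
    OCR1A = 49999;

    /* Start the timer with no prescaling (clk/1). */
    TCCR1B |= (1 << CS10);

    for (;;)
    {
        if (TIFR1 & (1 << OCF1A))    /* compare match: 1/20 s elapsed */
        {
            TIFR1 = (1 << OCF1A);    /* clear the flag by writing a 1 */
            /* ...do the periodic work here... */
        }
    }
}
```

As for how much that one cycle matters: leaving the compare value at 50,000 would make each period 50,001 ticks, i.e. about 0.002% slow, which is tiny next to the few-percent drift of the internal RC oscillator, but it costs nothing to get right.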