Timekeeping - High Level ?


Greetings Freaks -

 

This is a question about high-level "issues" of timekeeping. I am not talking about how time "ticks" are generated or about how those ticks are converted into useful time quantities. Instead, it is about long-term accuracy. And it is not, strictly, an AVR question, as it probably applies to microcontroller clocks generally, or maybe even to all clocks, generally.

 

Let me frame this question with two assumptions:

 

1. Let's assume that we have a source of clock ticks close to some standard rate, and that these ticks are moderately adjustable, probably by adjusting the timer roll-over value (though the detail of how is not important).

 

2. Let's assume that we have intermittent access to some external, higher-precision time source. Maybe this is internet time, or perhaps GPS. But the assumption is that it is intermittent, so that it cannot be used as THE internal time-base.

 

Now, the question: What sort of algorithm or process is commonly used to correct the internal time source from the external one? I assume that the internal accumulated time is not just block-updated from the external source, because that would result in either time gaps or, if the internal time were to be set back, intervals when the same time number is reported at two different times. This leads me to suspect that the "better" strategy is to speed up or slow down the internal clock so that reported time remains continuous. If this is the case, how is the decision made about how much to speed up or slow down? Or, is some other method used? 

 

Many thanks

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 

Last Edited: Sun. May 24, 2020 - 05:50 PM

I have a sneaking suspicion that the linux time is updated from NTP servers the way you suggest - but windows - at least as recently as W10 - just slams in the new time when it gets it.

 

What you might consider: calculate the number of seconds from your last time check to this one. Calculate the offset - i.e. how much you've lost or gained. Adjust your oscillator by that percentage, plus 'a bit' and regularly update your reference (remember your clock may not be particularly stable) until you are exact or within your desired accuracy limit and then remove the 'bit'.
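A minimal sketch of that idea in Python (the function name and the 2 ppm "bit" are made up for illustration; the real adjustment depends on your oscillator's trim granularity):

```python
def trim_ppm(local_elapsed, reference_elapsed, extra_ppm=2.0):
    """Correction (in ppm) to apply to the oscillator after a time check."""
    # seconds gained (+) or lost (-) against the reference over the interval
    error = local_elapsed - reference_elapsed
    # measured drift rate in parts per million
    drift_ppm = error / reference_elapsed * 1e6
    # 'a bit' extra in the same direction, so the already-accumulated error
    # is walked back too; remove it once within the desired accuracy limit
    bit = extra_ppm if error > 0 else (-extra_ppm if error < 0 else 0.0)
    return -(drift_ppm + bit)   # positive result means "speed the clock up"
```

Called at each sync, this cancels the measured drift rate plus a little more, so the clock converges on the reference instead of merely tracking its rate.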

 

Apropos of timekeeping: I have a Tissot automatic watch which has a silicon balance spring and no adjustment. It's within half a second a day, which pleases me as that's twenty times better than the spec.

 

Neil


Depends on the precision required, I'd say. 

 

My (lousy) clock occasionally skips seconds because I happen to know it's too slow, and skipping a second isn't a problem (it's kinda fun to watch and see if it will...).  Horological science occasionally throws in a 'leap second' because the Earth's rotation is slowing down; some people accept that xx:xx:60 is a valid time, and some (Google, for example) try to 'smear' it out over longer periods.  There are probably at least as many opinions on the subject as there are butts in chairs...

 

Possibly more.  S.


PS - If you're worried about duplicate times when your internal clock gets corrected by an external one, paint a sign and go protest the unmitigated horsepucky that is 'Daylight Savings Time' changes.  S.


Scroungre wrote:
paint a sign and go protest the unmitigated horsepucky that is 'Daylight Savings Time' changes
Speaking of...
Scroungre wrote:
opinions
;-)

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


This clock will ignore "daylight saving time". It is used to timestamp data, so duplicate time values could be problematic.

 

My hunch is that the external driver for "accuracy" will be to correlate between this data and data recorded from other sources.

 

Jim

 

 


 

 

Last Edited: Sun. May 24, 2020 - 06:08 PM

Then stamp the data with a serial number as well as the time.  That will guarantee monotonicity.  S.


'Real computers' speed up or slow down the system time clock in order to make corrections. The source of the correction is usually some more precise device attached to an NTP (Network Time Protocol) server.

 

If you have a *ix box to hand, look at the man page for the adjtime(2) system call, e.g.

 

adjtime() makes small adjustments to the system time, as returned by gettimeofday(2), advancing or retarding it by the time specified by the timeval delta.  If delta is negative, the clock is slowed down by incrementing it more slowly than normal until the correction is complete.  If delta is positive, a larger increment than normal is used.  The skew used to perform the correction is generally a fraction of one percent.  Thus, the time is always a monotonically increasing function.
 ...

 

From the man page for the ntpdate(8) command line program:

 

-B      Force the time to always be slewed using the adjtime(2) system call, even if the measured offset is greater than +-128 ms.  The default is to step the time using settimeofday(2) if the offset is greater than +-128 ms.  Note that, if the offset is much greater than +-128 ms in this case, it can take a long time (hours) to slew the clock to the correct value.  During this time, the host should not be used to synchronize clients.

It is generally reasonable to correct using a step-change at system boot and then skew gradually thereafter.
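A toy model of that slewing behaviour (the 500 ppm maximum skew here is an illustrative stand-in for the man page's "fraction of one percent"; class and method names are invented):

```python
class SlewingClock:
    """Toy adjtime(2)-style clock: corrections are slewed, never stepped."""

    def __init__(self, now, skew=0.0005):
        self.now = now
        self.skew = skew       # max fractional rate change while slewing
        self.pending = 0.0     # correction still to be applied

    def adjust(self, delta):
        # like adjtime(): queue the correction rather than stepping the clock
        self.pending += delta

    def tick(self, dt=1.0):
        # apply at most skew*dt of the pending correction this tick, so time
        # advances monotonically even while a negative delta is worked off
        step = max(-self.skew * dt, min(self.skew * dt, self.pending))
        self.pending -= step
        self.now += dt + step
        return self.now
```

Even when setting the clock back by 100 ms, every reading is still later than the previous one; the clock just runs 0.05% slow until the correction is absorbed.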

Last Edited: Sun. May 24, 2020 - 06:30 PM

ka7ehk wrote:
What sort of algorithm or process is commonly used to correct the internal time source from the external one?
WWVB RCCs (radio-controlled clocks) simply adjust the digital display or analog seconds hand per "recommended practice".

ka7ehk wrote:
If this is the case, how is the decision made about how much to speed up or slow down?
IIRC, GNSS sends a UTC correction to receivers; so, a machine can stay monotonic though what's displayed to the operator and what's transmitted to machines (M2M) will vary.

Ideally, a system specification will state when local time is corrected, as this information must be made available to operators (blend, jump, time quality factor and operator decision, etc.).

 

P.S.

Some computer languages require monotonic time (easier said than done)

Inaccurate time can kill.

Planck constant (the opportunity for the local universe to sync to the universes of greater scope)

 


WWVB Radio Controlled Clocks: Recommended Practices for Manufacturers and Consumers (2009 Edition)

[page 9, Table of Contents]

4. Recommended Practices for Clock Synchronization .................................11

via Help with WWVB Radio Controlled Clocks | NIST

Monotonic Time (Ada)

[edit]

Implementation Advice

...

    It is recommended that Calendar.Clock and Real_Time.Clock be implemented as transformations of the same time base.

...

The Patriot Missile Failure via ECE 4760 (Cornell University)

Planck’s constant | Definition, Units, Symbol, & Facts | Britannica

 

edit2 : Python 3

https://docs.python.org/3/library/time.html#time.monotonic

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Mon. May 25, 2020 - 12:46 AM

I implemented the "time adjust" strategy on a data-logger that spent 99% of its run time in Power-Save sleep mode. The coarse granularity of ASYNC Timer-2 interrupts (250 ms) made it perform rather badly. It would take many hours to drift back to correct time, and it would receive multiple "Time Correction" commands [over radio] during the adjusting period.

 


Hmmm... could you also apply a correction factor to any logged times?  If you kept track of when the time was resynced (last verified), then you could restate any log times since then.

 

For example, if you took data every minute & now you find the time is 5 minutes fast since syncing 3 days ago (4320 minutes)*...you might be able to instantly adjust the clock to the true time & linearly nudge all of the misreported times logged since 3 days ago.

For items from 3 days ago the nudging would be zero, building up until today, where the time was off by 300 seconds (5 min).  The time stamps would be updated and would produce a log with the most accurate records.

You could still tweak the timebase as well, to minimize the amount the log needs to be updated in future rounds.

  

* of course we hope you don't lose 5 minutes in 3 days...just an example
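That linear restatement can be sketched like so (names are invented; it simply stretches local timestamps from the interval between syncs onto the true interval, so the correction is zero at the last sync and largest at the newest record):

```python
def restate(local_stamps, t_prev_sync, t_now_local, t_now_true):
    """Linearly re-map local timestamps onto the true sync-to-sync interval."""
    # ratio of true elapsed time to locally measured elapsed time
    scale = (t_now_true - t_prev_sync) / (t_now_local - t_prev_sync)
    # each stamp keeps its proportional position within the interval
    return [t_prev_sync + (t - t_prev_sync) * scale for t in local_stamps]
```

E.g. if the local clock says 100 s have passed since the last sync but the reference says 95 s, a stamp taken at local 50 is restated to 47.5.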

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Sun. May 24, 2020 - 07:44 PM

That bugs me, avrcandies, the idea of going back into and modifying the logs.  Seems like something you should not do. 

 

I'm going to reiterate the 'slap a serial number on every data dump' idea, and then the time becomes just a curious detail.  S.


Given your apparent criteria - no large gaps, no backwards time - time stretch and compression seem your only option. Presumably there is an accuracy requirement. The real question is how much adjustment, when. Presumably you can figure out how fast your clock was ticking between synchronizations. Use that number so that your clock will have the correct time soon enough. After that, readjust.

Iluvatar is the better part of Valar.


I think I'm just re-wording the last post (or I guess now it's #11) -

 

When the time is updated with a precise value, just log the event as a time update (with the new time and also include the current mcu time), then update the mcu time, and let the viewer of the data figure out what happened. Your time update events will be known/good times, and any times logged in between can be corrected after the fact if needed since you will be able to see your mcu time errors at each time update.

 

If the precise time update says it's 08:00:00, and your mcu says 07:55:00, log the event as a time update of 08:00:00,07:55:00, then update the mcu time. A correction could be made at that point to the mcu timekeeping if wanted, but if using a 32k crystal + RTC you may not have much to correct anyway.
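A sketch of that logging scheme (the record format and function names are invented; times are seconds since midnight, so 28800 is 08:00:00 and 28500 is 07:55:00):

```python
def log_time_update(log, true_time, mcu_time):
    """Append a checkpoint record carrying both clocks to the log."""
    log.append({"type": "time_update", "true": true_time, "mcu": mcu_time})

def mcu_errors(log):
    """mcu-minus-true drift visible at each checkpoint, for post-processing."""
    return [r["mcu"] - r["true"] for r in log if r["type"] == "time_update"]
```

The viewer of the data can then see the mcu's error at every update and correct the in-between timestamps after the fact, if needed.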

 

Last Edited: Sun. May 24, 2020 - 08:15 PM

Don't go back in and mess with the logs!!  That's what bugs me.  You can append to the logs, if you want to (e.g. "This time may be incorrect" as an addendum), but don't change what happened when! 

 

Got another stupid suggestion:  If the problem is duplicate times, run your local clock deliberately slow.  That way, any corrections will only advance the clock, and there will be no duplicate times.  S.


That bugs me, avrcandies, the idea of going back into and modifying the logs.

I'm not sure why...if I download the data and use time for part of some further post-processing calculations, as the user I'd like the time numbers as accurate as possible & not wonder if they are off or not....that would be the least confusing for a user (give them time & reading).  Of course, this might be moot if we are talking about insignificant errors building up.  If you are going to be several minutes on a 1 minute sampling, that data might be intolerable.

 

The distinct advantage of modifying the times is that the resync'd time can be updated immediately & exactly.  Previously logged times are mapped to fit the interval from the last sync to the present exact time.  So as soon as the resync occurs you are back to recording exact times, with no "catching up" interval . 

I'm going to reiterate the 'slap a serial number on every data dump' idea, and then the time becomes just a curious detail.  S.

Perhaps; maybe you wouldn't actually need to record the time at all -- just use a sample number.

 

As an in-between, you could separately log the time base resync errors & apply time corrections after downloading, if significant & desired.  Tweaking the timebase can still be maintained to minimize other corrections.

 

 



Don't modify them, append to them.  "After reconnecting to our GPS, we decided this particular time entry was about forty-five seconds off...".  Encode as you will. Then you can have the best of both worlds - the original data, and further information that might be interesting and/or useful.  S.


  and let the viewer of the data figure out what happened

Perhaps, but most users would probably prefer to have times as accurate as possible and concentrate on plotting & their own calculations.

 

start  data dump

10:20  57.933

10:35  53.818

11:50  47.983

12:20  56.822

end

Notice: log start time actual was: 10:16, end time actual was: 12:23

 

As a user experience, I'd consider that a bit sub-par & say why didn't they take care of this in the reporting?

 

Don't modify them, append to them.  "After reconnecting to our GPS, we decided this particular time entry was about forty-five seconds off...".  Encode as you will. Then you can have the best of both worlds - the original data, and further information that might be interesting and/or useful.  S.

 It's probably "ok", but one of those grumbly things..how hard would it have been for them to  make it less hassle.

 

Anyhow, if the errors are kept small in the first place, my suggestion is pretty secondary, since the small error would be restated over a long interval... it is only an improvement with larger errors.


Last Edited: Sun. May 24, 2020 - 09:07 PM

avrcandies wrote:

 It's probably "ok", but one of those grumbly things..how hard would it have been for them to  make it less hassle.

 

For that we have a 'front end'.

 

Maybe it's a slightly different outlook here.  My gizmos give the user all the power... it's up to them to use it carefully.  Might be tricky in spots, but that's their problem.   S.


On a different note, I remember a very interesting article years ago about the old time mechanical clocks in cars...someone got smart & invented a truly clever "learning mechanism"...each time you readjusted the time knob it would tweak the timebase & you could end up with something having ppm accuracy.  I think they had a patent and made $$$$



I wouldn't try anything like smearing on your logger.  Rather, simply log your data as you have been doing, with your locally clocked timestamp.  But, add a new record type, which logs the externally acquired higher-precision time stamp, and associates it with the current locally clocked timestamp.  Start a sampling run with this, and insert a new one every time you acquire the external time.  This way, you handle any drift in 'post-production' as we say in the business.

 

 


 


barnacle wrote:
I have a sneaking suspicion that the linux time is updated from NTP servers the way you suggest - but windows - at least as recently as W10 - just slams in the new time when it gets it.
A time that is not slammed is available in Windows via QueryPerformanceCounter (QPC).

Whenever an updated calendar time arrives, you can link monotonic time to calendar time.

 

P.S.

PIC32 devices have a monotonic count register.

 


Linux monotonic time and real-time (dependent on UTC and/or NTP) :

https://www.kernel.org/doc/html/latest/core-api/timekeeping.html#basic-ktime-t-based-interfaces

 

General FAQ about QPC and TSC | Acquiring high-resolution time stamps - Win32 apps | Microsoft Docs

...

 

Is the performance counter monotonic (non-decreasing)?

Yes

 

...

 

MIPS32® microAptiv™ UP Processor Core Family Software User’s Manual

[bottom of page 149]

6.2.13 Count Register (CP0 Register 9, Select 0)

The Count register acts as a timer, incrementing at a constant rate, whether or not an instruction is executed, retired, or any forward progress is made through the pipeline. ...

...

[32 bits wide]

...

edit : via microAptiv Processor Core – MIPS

 


Last Edited: Mon. May 25, 2020 - 02:19 AM

ka7ehk wrote:

I assume that the internal accumulated time is not just block-updated from the external source, because that would result in either time gaps or, if the internal time were to be set back, intervals when the same time number is reported at two different times. This leads me to suspect that the "better" strategy is to speed up or slow down the internal clock so that reported time remains continuous. If this is the case, how is the decision made about how much to speed up or slow down? Or, is some other method used? 

That depends on the stability, step size and the cal-update periods, but it broadly 'self-chooses' in any system.

 

e.g. on most MCUs with reasonably good clock sources* already, you may find that your 'GPS ideal fix' is a divide by some fraction, not a whole number.

You can then dither between the two integer values each side of that fraction, such that it long-term-averages to the fraction needed.

If you want least-average deviation, using a rate-multiplier approach on that dither is better than a coarse PWM.

 

Every cal-update, you can check and revise the integer and fraction parts, and continue doing that so you track slow changes in the local clock.

 

* GPS volumes mean TCXOs with specs as good as 0.5 ppm are quite common and low cost.

There are also VCTCXOs which can couple to an MCU with a DAC, to give an analog version of the control loop above.
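The dither between the two integer divisors can be sketched with a first-order accumulator (a crude cousin of the rate-multiplier idea; the divisor value here is purely illustrative):

```python
def dithered_reloads(ideal_divisor, n):
    """Yield n integer timer reload values whose long-term average
    approaches the fractional ideal divisor."""
    base = int(ideal_divisor)
    frac = ideal_divisor - base     # fractional part to be dithered in
    acc = 0.0
    out = []
    for _ in range(n):
        acc += frac
        if acc >= 1.0:              # accumulated a whole extra count
            acc -= 1.0
            out.append(base + 1)
        else:
            out.append(base)
    return out
```

For an ideal divisor of 1000.25, every fourth reload is 1001, so the average period is exactly 1000.25 counts.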

 


Correct me if I am wrong. Doesn't all of the above assume that the drift is consistently slow or fast?

 

What if there is a diurnal temperature effect on the oscillator and/or power source... and/or seasonal temperature drift underlying the diurnal one. Aren't we talking about a year-long data logger in the Amazon?

 

A more stable clock would appear to be the more effective solution.

Ross McKenzie ValuSoft Melbourne Australia


valusoft wrote:

What if there is a diurnal temperature effect on the oscillator and or power source... and or seasonal temperature drift underlying the diurnal one. Aren't we talking about a year long data logger in the Amazon?

Check-pointing with a special record in the log, as I suggest in #21, would capture those effects, and might even prove to be a useful side channel.


 


A significant number of my customers are using my accelerometers in fairly extreme temperature situations. Some are in "boreal forests" of northern New England (U.S.A) while others are in tropical forests (Equatorial Africa, South East Asia). This means that some are seeing extended intervals below -20C and some are seeing extended intervals above +35C. Using a standard 32.768 kHz watch crystal as the timing reference means that these clocks are usually measurably slow over the span of a year.

 

Right now, I am starting pretty serious design of the next generation product and considering how to provide a more accurate time base. One method is to use a temperature compensated RTC, and simply substitute its 1 second tick for that of the default internal clock. But, there are situations that seem to argue for correcting the internal timebase rather than substituting. One of these situations is when the timer ticks need to be faster than 1 second as provided by the external RTC. 

 

No decisions, yet, just trying to weigh the factors. One of those factors is how complex time base correction is relative to substitution. 

 

You have provided some great food for thought, and I appreciate that. 

 

Thanks

Jim

 


 

 


gchapman wrote:

barnacle wrote:
I have a sneaking suspicion that the linux time is updated from NTP servers the way you suggest - but windows - at least as recently as W10 - just slams in the new time when it gets it.
Not slammed is available in Windows via QueryPerformanceCounter (QPC)

Whenever an updated calendar time arrives then can link monotonic time to calendar time.

 

 

Observed last week: my company laptop has been regularly connected to the internet but not to the company VPN for perhaps five days. At the end of that time the displayed time was ten minutes fast. Connecting the VPN (for access to some internal files) corrected the time in one hit.

 

My assumption is that Windows is set up on that machine to get NTP via a route on the company network (or possibly an internal NTP server on the domain server? I don't know much about windows these days...) and obviously wasn't seeing that until the VPN was connected. The inaccuracy of the system clock is impressive, though...

 

Neil


joeymorin wrote:

This way, you handle any drift in 'post-production' as we say in the business.

 

^^^That

 

As someone who worked in 'post' I'd say that the best way would be to get the highest quality data input and do any corrections later once the data is off your device and you can throw processing at it. So it would appear to me that the best idea is to simply log any corrections on the device and deal with it on the PC. By doing it in post you can also adapt the method used later as more information and better techniques come along.

 

I don't think we know what the clock source of the device is? If it's a crystal then how they behave, and hence the source of any errors, is very well known. If you are logging the temperature that the logger is in then it's possible to accurately model how the frequency will change. Crystal ageing can also be accurately modelled.

 

If it is a crystal, then pre-ageing the crystals is a possibility such that by the time you deploy them you have pretty much eliminated that source of error.

#1 Hardware Problem? https://www.avrfreaks.net/forum/...

#2 Hardware Problem? Read AVR042.

#3 All grounds are not created equal

#4 Have you proved your chip is running at xxMHz?

#5 "If you think you need floating point to solve the problem then you don't understand the problem. If you really do need floating point then you have a problem you do not understand."


 

Here are some tidbits that might be enjoyable in the quest for highest performance.

 

Minimize Frequency Drift In Crystals

https://www.electronicdesign.com/technologies/analog/article/21798809/minimize-frequency-drift-in-crystals

Other factors, if not addressed, can have a tremendous impact on drift. Drift from humidity or pressure, for example, can be hundreds of ppm. However, humidity and pressure can be effectively addressed during the manufacturing process by housing crystals in hermetically sealed packaging.

Another consideration in addition to the cut of a crystal is the type of electrode used. [Gold] will also age more slowly, with the overall result of less drift.  

 

This is an interesting concept--whether practical in your system is another question:

https://www.st.com/resource/en/application_note/cd00178404-extremely-accurate-timekeeping-over-temperature-using-adaptive-calibration-stmicroelectronics.pdf

Extremely accurate timekeeping over temperature using adaptive calibration

32,768 Hz watch crystals. These are readily available and relatively inexpensive, but they suffer a loss of accuracy when operated over wide temperature ranges.

Conversely, the higher speed, AT-cut crystals used with microprocessors have low drift over a wide temperature range, and can thus provide high accuracy, but their oscillators are not suitable for backup since they will draw much more current.

The purpose of this application note is to show how, using a combination of these crystal characteristics, users can get high accuracy.

 

 

 

 

 



Scroungre wrote:
That bugs me, avrcandies, the idea of going back into and modifying the logs.  Seems like something you should not do. 

Absolutely!

In the pharmaceutical world where my data-logger operates, this practice would be forbidden.

 

I took my lead from the linux ntpdate command, running this repeatedly and watching the error in adjust mode become smaller & smaller over time.

 

I came to the conclusion that Speeding up / Slowing down the RTC is the only way to guarantee exactly 1440 (24*60) 1 minute interval records in a 24hour period. Doing any other trickery may result in missing records.

 

BTW: The pharma guys analyse their data intensely. A missing record would be spotted and results in a formal "Incident" being raised. Things escalate from there-on.

 


ka7ehk wrote:

...

Right now, I am starting pretty serious design of the next generation product and considering how to provide a more accurate time base. One method is to use a temperature compensated RTC, and simply substitute its 1 second tick for that of the default internal clock. But, there are situations that seem to argue for correcting the internal timebase rather than substituting. One of these situations is when the timer ticks need to be faster than 1 second as provided by the external RTC. 

...

 

Beware 'second-system effect' ;)

 

Some RTC chips have a register for fine calibration of the clock frequency, in addition to temperature compensation, e.g. http://www.kerrywong.com/2014/07...
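Such calibration registers are typically written in quantized trim steps; a hedged sketch of the conversion (the 0.954 ppm/LSB step and signed 8-bit range here are illustrative assumptions, not from any particular datasheet):

```python
def cal_register(desired_ppm, ppm_per_lsb=0.954, reg_min=-128, reg_max=127):
    """Quantize a desired frequency correction (ppm) to a hypothetical
    RTC calibration register value, clamped to the register's range."""
    steps = round(desired_ppm / ppm_per_lsb)   # nearest whole trim step
    return max(reg_min, min(reg_max, steps))   # saturate at register limits
```

Residual error after trimming is at most half an LSB of the register's step size, which sets a floor on how closely the clock can be steered this way.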


In the pharmaceutical world where my data-logger operates, this practise would be forbidden.

Maybe/maybe not??  You push a download button & the system transfers its results to you.  If the results are provided as calibrated, does that make them unsuitable? 

 

It reports: 20.28, 27.69, 32.82, 36.93...calibrated results ready to use  -- that's what the system considers to be the final results, so it should be free to report those--does that sound acceptable?

or reports 21.3, 27.8, 32.3, 35.9  ...values uncalibrated, must apply a user linearization to a calibration algorithm. 

 

A unit stores 20 voltage readings & later reports the reading's rms value as the result --forbidden?   Seems acceptable & probably done in practice.  Like most equipment, users get a set of final results; inner working are invisible.   

Equipment's design verification process assures proper calibration implementation between the taken & reported values.

BTW: The pharma guys analyse their data intensely. A missing record would be spotted and results in a formal "Incident" being raised. Things escalate from there-on.

No doubt!  I'm working on a patient unit, so I get to see the medicals hammer out the latest algorithm.  We take data in a 3D scan space, but before it's displayed and later downloaded, the data is "rotated" to align with calibration fiducials.  No one has complained or brought any issue up, so I'd like to know if they have one.



Most of those examples involve a processing of the original data. Even the 3D scan fits into that scheme. The User linearization is a tricky one though.

Any modification of previously logged data; even just the timestamps would definitely raise concerns.

 

avrcandies wrote:
Like most equipment, users get a set of final results; inner working are invisible.

We had to comply with the DQ IQ OQ PQ process. You probably know about that; here's a simple intro for other freaks: What Are IQ, OQ, and PQ, and Why Are They Required In The Pharmaceutical Industry?

 

As part of DQ (Design Qualification) questions about inner workings are asked and form the basis of the report. Oh - and you have to fess up; you cannot hide anything.

 


Most of those examples involve a processing of the original data. Even the 3D scan fits into that scheme. The User linearization is a tricky one though.

Any modification of previously logged data; even just the timestamps would definitely raise concerns.

Yeah, could see it going either way.  Might hinge on "what is" logging? ...reading values into an array?  Moving them to an SSD?  Performing a compensation?  At what point in the chain must they remain final?  
I'd think once they are actually reported to the user or depended upon, but perhaps that is too loose.  These gray areas are where the consultants live!!! 

Time for some Memorial Day burgers!

   



ka7ehk wrote:

Right now, I am starting pretty serious design of the next generation product and considering how to provide a more accurate time base.

One method is to use a temperature compensated RTC, and simply substitute its 1 second tick for that of the default internal clock.

But, there are situations that seem to argue for correcting the internal timebase rather than substituting.

One of these situations is when the timer ticks need to be faster than 1 second as provided by the external RTC. 

If you plan to use the internal RC osc, that's a fairly poor source, so you may need a dual correction design (assuming a TCXO external RTC).

You could trim the RC to the nearest step size to correct, and use that over moderate time frames, and then apply a dithered correction to get a (e.g.) 10.000 ms averaged interrupt rate.

i.e. you get the coarse adjusted clock to the nearest step, and then use digital adjustments for the finer detail.

 

 

 


The dawn of digital watches was big news & brought a lot of surprising innovation... they went all out for the first "time computer"

https://www.youtube.com/watch?v=a5szJYA_z44
 

A visit to the jeweler & he'd change your capacitors!

 

Roger Moore as James Bond donning the Pulsar Time Computer.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Tue. May 26, 2020 - 03:32 AM

avrcandies wrote:

A visit to the jeweler & he'd change your capacitors!

:)

Yes, it was common for earlier RTC schematics to show trimmer caps, but the newest RTCs now offer digital trimming.

 

Some do the trimming by dithering caps on the xtal pins, which gives the least output jitter, as the high Q of the xtal averages that out nicely.

Others use digital divider changes, which means some CLKOUT options can have jitter. I think 1pps is usually OK.

Did someone mention an MCU (NXP?) with that cap-dither feature built in recently?


I have made "very" slow PLLs for things like this.

 

Once I made a control for a painting cabin, where I only got one pulse per rotation, and I had to make a resolution of 256 points per rotation. The problem was that the speed was not 100% constant, and if I needed to do something on the last step I would sometimes miss it.

The PLL was simply the timer that made 256 interrupts per zero pulse; the compare value was incremented or decremented by one for each zero pulse.

(It was 25+ years ago on a 2 MIPS 8051 (8032 + 32K RAM))

 

Update: it was 1 MIPS (11.092 MHz / 12)

Last Edited: Tue. May 26, 2020 - 11:15 AM

Sounds like good times! By the way, it took me a bit to figure out you probably were saying painting booth, or painting cabinet.

If I'm staying in a cabin, it will be in the woods with a fireplace & I won't be doing any painting!

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


OK, sorry.

It was an old painting "box" with 8 sprayers on a chain, so they painted in two directions, and then there was a belt where things came moving through, with liquid underneath. I made a line-scanning camera with 64 photodiodes 2 meters outside the "box" so it only painted when there was something there (so even a window frame didn't get painted inside). And there was a 2x40 display, mainly so you could adjust position and tell how wide the nozzle was.

And it all ran on an 11.092 MHz 8032 (good old Intel with 12 clocks per cycle). The code was written in Pascal.


Jim,

 

I assume you are not trying to use the external clock to calibrate your internal one.  You are just trying to correct the time.

A simple way to do this would be to speed up or slow down your internal clock (by, say, 10%) for some duration D.

 

Say your internal clock has a resolution of 1 second and your internal clock interrupts at 1 msec. That is, you usually increment the second count every 1000 interrupts.

Now, say your local time is 1000 and you get a notification that the time is 900.

You choose to slow down your clock to increment second count every 1100 ticks, from the usual 1000 ticks.

The external time goes like tx = 900 + ticks/1000

The internal time goes like ti = 1000 + ticks/1100.

You want tx = ti.  The times become the same in 1,100,000 ticks.

So, you slow down to increment every 1100 ticks for 1,100,000 ticks then reset back to increment every 1000 ticks.

 

Matt

 

Last Edited: Tue. May 26, 2020 - 12:52 PM

In general it's bad to change the internal speed, and the steps are too big anyway.

Just keep track of the error and use that as the clock tick. (The easy way is a simple DDS; even with an 8-bit add into a 16-bit accumulator the result will be better than 0.5%.)

 

 


We really do not have the requirements in
enough detail to make a good recommendation.

 

For time of day or decade, what error range is allowed?
What range of sizes is allowed for a tick, e.g. 0.999 to 1.002 second or minute?
How long can the device go between accesses to an accurate time base?
Will the device be taken from Death Valley
to Antarctica immediately after such an access?

Iluvatar is the better part of Valar.


Error spec: better than a naked 32.768 kHz crystal. No absolute spec. It's "better is good".

 

Allowable tick sizes: uncertain.

 

Expected access to accurate time base: once a day

 

Rate of temperature change: normal diurnal rates, or perhaps 30C in 12 hours?

 

Remember, this is EARLY DESIGN. The decision, at this point, is mostly about program structure and high-level processes.

 

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 

Last Edited: Tue. May 26, 2020 - 05:11 PM

As a start, I think I would just find out how good the mega4809 is with a standard 32k crystal. Use the precision temp IC to set the calibration register value on some regular basis so the temp curve can possibly be tamed. Get a couple 4809 Nano boards, add the temp IC, throw them in a freezer, oven, whatever, to see how good you can keep time by just adjusting for temp. Pulse an LED on the top of the hour/minute for a quick way to see how things are going over time (or log the data to a PC).

 

>Expected access to accurate time base: once a day

 

With a 32k crystal and a precise temperature reading, it 'should' be easy to keep good time for just 24 hours (I doubt the internal 32k oscillator would be of much use, though).

 

 

edit-

the internal temp sensor could also be used, but it probably takes some tweaking to come up with good (enough) numbers

Last Edited: Tue. May 26, 2020 - 06:09 PM

These are nearly two orders of magnitude better than a 32KHz crystal though

4V abs max :

Model TT32 HCMOS TCXOs - CTS Electronic Components | Mouser

 

5.5V max :

TG-3541CE (TCXO) Crystal Oscillators - Epson | Mouser

 

and consume roughly another microamp.

 

"Dare to be naïve." - Buckminster Fuller


 

That's pretty amazing packaging of technology.

----------------------------------------------------

I found a contender that is 2 ppm, however it is about double the current (to 5 uA)

https://www.sitime.com/datasheet...

-------------------------------------------------------

This looks to be a very interesting contender...EXTREMELY low power   $1.62 @1000 @ mouser

  • Very high Time Accuracy (best in class). 

     

    • ±1.5 ppm 0 to +50°C
    • ±3.0 ppm -40 to +85°C
    • ±7.0 ppm +85 to +105°C
  • Low power consumption: 240 nA @ 3 V. (best in class)

https://www.microcrystal.com/fileadmin/Media/Products/RTC/Datasheet/RV-8803-C7.pdf

-----

The power company does a good job counting & adjusting AC cycles over the long haul...1 ppm is 31.5 sec per year.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Tue. May 26, 2020 - 08:03 PM

The Ambiq Micro RTC can go an order of magnitude lower in current, though its RC oscillator is only periodically tuned from a 32KHz crystal (so, reduced accuracy); packaged by Abracon.

 

Ultra-Low Power RTC (Ambiq Micro)

Abracon | Abracon’s Industry Leading 22nA Time Keeping Current…

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Tue. May 26, 2020 - 11:45 PM

10 ppm is better than a second per day.

If you are close to that and your desired tick rate is once per minute,

you probably do not need to adjust your actual tick rate at all.

When you get access to the accurate clock,

decide when the next tick should be.

Iluvatar is the better part of Valar.


My tick rate goal is at least one per second. I hope for 10 per second in some cases or maybe all the time.

 

Hence, using an external RTC as a substitute time base becomes a challenge, as most that I've found only provide an interrupt at one per second. This pushes toward one of the PLL-like locking/synchronizing mechanisms.

 

Of course, that 10 per second is a very early goal and may prove to be impractical. 

 

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 

Last Edited: Wed. May 27, 2020 - 12:58 AM
