Detecting crystal frequency in firmware

30 posts / 0 new
#1

I have a project (based on an ATmega1284P) that runs just fine with a 14.74 MHz crystal on about 50 boards.  I want to upgrade the crystal on new boards to 20.00 MHz and, over time, replace the crystals on the old boards.  I have tested the firmware on a board with a 20.00 MHz crystal (after changing F_CPU) and it works fine.

 

I would like to have one copy of the firmware for both boards, so at startup the board needs to be able to detect whether the crystal is running at 14.74 or 20.00 MHz.  I do not need to detect the exact frequency, just to differentiate between 14.74 and 20.00 MHz.  I want to be able to do this on the existing boards, i.e. without modifying the old boards by adding an RTC, etc.

 

At first I thought I could use the conversion speed of the ADC, but then realized that it is also based on a divided clock.  Another thought was perhaps running the ADC "too" fast and maybe detecting conversion errors, etc.  In use, all 8 ADC pins "normally" have 3 to 8 volts on them at startup (though one or two might be at 0), but they are not normally changing in a manner that would allow me to measure slopes, etc.

 

Other chips on the board:

     MCP2515 CAN Bus Controller (SPI bus)

     TLC59116 LED Driver (I2C bus)

 

Something that would work "sometimes" would be to initialize assuming 14.74 MHz and attempt to communicate (listen for valid traffic) on the bus, but that won't work when a board is stand-alone, or is the first board that powers up on the bus.

 

I tried searching the forum for similar conversations but found none.  My apologies if I missed one ...

 

Regards,

 

Chuck Hackett

This topic has a solution.
Last Edited: Sat. Jul 15, 2017 - 06:02 PM
#2

Use the watchdog timer in interrupt mode rather than reset mode; it has its own RC clock.

 

There are numerous ways to determine the CPU clock from there. E.g. count cycles until the WD interrupt fires, or configure a timer and compare before/after counter values, ...

 

You'll need to replace all references to F_CPU with a runtime value, including references within libraries.
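The classification arithmetic behind this approach can be sketched in plain, host-testable C. The on-chip part (enabling the WDT in interrupt mode and counting a busy loop between two interrupts) is hardware-specific and omitted here; the names, the 5-cycles-per-iteration cost, and the 16 ms WDT timeout are illustrative assumptions, not values from this thread.

```c
#include <stdint.h>

/* Assumed parameters (hypothetical): a 16 ms WDT period derived from the
 * independent 128 kHz WDT RC clock, and a counting loop costing 5 CPU
 * cycles per iteration.  The WDT period is the same whichever crystal is
 * fitted, so the iteration count scales with the CPU frequency. */
#define WDT_PERIOD_MS   16UL
#define CYCLES_PER_ITER 5UL

#define EXPECTED_ITERS(f_cpu) \
    ((f_cpu) / 1000UL * WDT_PERIOD_MS / CYCLES_PER_ITER)

#define F_LOW  14745600UL
#define F_HIGH 20000000UL

/* Decision threshold: midpoint of the two expected counts.  The ~36%
 * spacing between the crystals leaves plenty of margin for WDT
 * oscillator tolerance. */
#define ITER_THRESHOLD ((EXPECTED_ITERS(F_LOW) + EXPECTED_ITERS(F_HIGH)) / 2)

uint32_t classify_clock(uint32_t iters_counted)
{
    return (iters_counted >= ITER_THRESHOLD) ? F_HIGH : F_LOW;
}
```

With these numbers the expected counts are roughly 47,000 and 64,000 iterations, so a WDT clock several percent off nominal still lands on the right side of the threshold.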

#3

Forgot to mention: I could arrange for the 14.74/20.00 info to be in EEPROM, but at the moment the EEPROM gets erased and loaded (sometimes from Studio, after the EEPROM has been erased to allow successful verification) with information about the configuration where the controller is being installed, not the controller itself.  If I only loaded the EEPROM config info via the boot loader I could reserve an area for controller-specific info, but this would make testing in Studio a bit more difficult.

 

Regards,

 

Chuck Hackett

#4

AH HA!  WDT!  Forgot about that guy, thanks much!

 

Hmm, I'll have to check the AVR libraries.  I think the only thing I'm using would be "Delay" but I'll check.  Thanks for reminding me of those.

 

Regards,

 

Chuck Hackett

#5

You could also try timing EEPROM write(s).

 

ChuckH wrote:
In use, all 8 ADC pins "normally" (but one or two might have 0) have 3 to 8 volts on them at startup

Ever hear of Absolute Maximum Ratings?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#6

On reflection I don't expect the standard libraries will have any F_CPU dependencies.

 

The issue will be things like delay loops from header files. If they're inline assembly then their constraints may require F_CPU to be a compile-time constant.

 

#7

Wasn't there a problem with that model and 20MHz and USART?  Y'all with younger memories will have to help.


#8

Chuck,

 

Could you simply read an otherwise unused input as either high (20MHz) or low (14MHz)?

 

 

Cheers,

 

Ross

 

Ross McKenzie ValuSoft Melbourne Australia

#9

I would like to have one copy of the firmware for both boards

Your project, your call, but I'd question that logic!

 

Once a board is upgraded, why in the world would its software need to know, or care, about the older 14 MHz version?

 

Clearly the code is Xtal frequency dependent, as you mentioned bus traffic.

 

I would think you would want to mess with working code as little as possible; you never know when a "small change" will ripple through to some unexpected effect elsewhere within the program.

Modifying the program to analyze the clock and tweak everything that depends upon this will clearly involve multiple changes to the code.

 

I suggest putting a colorful sticker on the boards as they have their Xtal swapped out, and then obviously loading the new code version to match the new Xtal frequency at the same time.

You can put a V2 or a date on the sticker if you so desire, but the sticker alone will suffice to show which boards have been refurbished.

 

Keep two versions of code:

ProjectXYZ14_Vx  and ProjectXYZ20_Vx

 

Once all of the boards are refurbished the 14 MHz version will be irrelevant, and only a part of the project's history.

The code will be cleaner and easier to maintain.

 

Personally, I think in the grand scheme of running the project over years, with an upgraded platform, having one version of code may sound nice, but isn't really advantageous.

 

YMMV

 

JC

 

Edit: Typo

 

 

Last Edited: Wed. Jul 12, 2017 - 03:30 PM
#10

As noted, the watchdog timer will do the trick.

Also as noted, the trick is not obviously worth doing.

I've not used Studio.

In Studio, how hard is it, from precisely the same source files, to generate two .hex files, one corresponding to -DF_CPU=14700000, the other to -DF_CPU=20000000?

International Theophysical Year seems to have been forgotten..
Anyone remember the song Jukebox Band?

#11

skeeve wrote:
In Studio, how hard is it, from precisely the same source files, to generate two .hex files, one corresponding to -DF_CPU=14700000, the other to -DF_CPU=20000000?
Easily done. You use "Add as link" so the identical sources appear within two projects in the solution, then in Properties > Symbols of one you use F_CPU=12345 and in the other F_CPU=54321 (or whatever). Now "build solution" and both projects are built.

#12

ChuckH wrote:

AH HA!  WDT!  Forgot about that guy, thanks much!

Check the specs for your exact part, as most MCU WDT oscillators are rather broad in spec - they focus on low power.

You might just manage the 36% frequency difference here?

Otherwise, you may need a two-step process. Start in RC Fast, calibrate the RC Slow from that, then switch to XTAL (wait for it to start), and then check RC Slow INT cycles.

 

 

#13

Who-me wrote:
Start in RC Fast, calibrate the RC Slow from that, then switch to XTAL

???  The '1284 has such clock-switching capabilities?

 

Who-me wrote:
Check the specs for your exact part, as most MCU WDT oscillators are rather broad in spec - they focus on low power.

At, say, room temperature and a fixed supply voltage, the datasheet hints at very close to nominal.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#14

Postman,

 

Sorry, when I mentioned 0-8v I was referring to the voltage at the pin which feeds the board not the ADC pin of the AVR itself, my bad.  The external pin (0-16v) goes through a conditioning RC network and gets divided down before it hits the AVR ADC pin.

 

So, yup, I try to stay within the part's capabilities .. but I've goofed before :-)

 

Regards,

 

Chuck

#15

Postman,

 

theusch wrote:

Wasn't there a problem with that model and 20MHz and USART?  Y'all with younger memories will have to help.

 

So far I have not had a problem with the ATMega1284p USART ... at least at 9,600 baud ...

 

Regards,

 

Chuck

#16

valusoft wrote:

Chuck,

 

Could you simply read an otherwise unused input as either high (20MHz) or low (14MHz)?

 

Cheers,

Ross

 

Ross,

 

All pins are used except for JTAG that I have reserved for debugging. 

 

DocJC wrote:

Once a board is upgraded, why in the world would it's software need to know, or care, about the older 14 MHz version? Clearly the code is Xtal frequency dependent, as you mentioned bus traffic.

 

There are something like 50 boards already installed at my location and at customer locations.  I distribute new firmware via email and they upgrade the controller via the bootloader so I want the firmware to be able to detect which board it's running on and adapt without operator intervention if at all possible. 

 

It is impractical to upgrade all boards at once (and, in many cases, unnecessary).  As time goes on the 14.74 MHz cards may get upgraded to 20 MHz, but I don't want to have to maintain two different firmware downloads or run the chance that I or a customer will load the incorrect firmware (clock) into a given board.  Much cleaner if I can just give everyone one executable for both clock versions.

 

Currently clock divisors (to maintain a given USART baud rate, CAN bus bit time, etc.) are calculated at compile time based on F_CPU.  The change will just mean detecting 14 vs. 20 MHz at startup and doing the clock divisor calculations at run time, so I think it's fairly straightforward and does not require the complication of maintaining different versions and running the chance that a customer will load the wrong one, causing them unnecessary frustration.

 

If I can't reliably detect the clock speed (i.e. due to the WDT timing variations mentioned) then I may be forced to deal with two versions; we'll see.

 

Regards,

 

Chuck Hackett

#17

Who-me wrote:

Check the specs for your exact part, as most MCU WDT oscillators are rather broad in spec - they focus on low power. You might just manage the 36% frequency choice here ?

 

These controllers are used outdoors and I would expect the ambient temperature to be no lower than 0 C or above 40 C (these are signal controllers for outdoor 7.x" gauge ride-on railroads, and we don't normally run the trains outside this temperature range due to a shortage of engineers who want to endure it smiley).

 

Using the chart that theusch provided, 0 C to 40 C @ 5.5 V gives a range of about 120.2 to 118.2 kHz, or a "drift" of about 1.7%, so it should be easy to detect the 35% difference between 14.74 and 20 MHz.  Again ... we'll see ...
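The margin arithmetic here is easy to double-check; a quick sketch (the 118.2/120.2 kHz endpoints are the figures quoted in this post):

```c
/* Relative spread of the WDT oscillator over the temperature range,
 * versus the relative spacing of the two crystal frequencies. */
double wdt_drift_pct(double f_min_khz, double f_max_khz)
{
    return (f_max_khz - f_min_khz) / f_min_khz * 100.0;
}

double xtal_delta_pct(double f_low_hz, double f_high_hz)
{
    return (f_high_hz - f_low_hz) / f_low_hz * 100.0;
}
```

This gives roughly 1.7% of WDT drift against a 35.6% crystal spacing, i.e. about a factor of twenty of margin.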

 

Thanks to all on your input and suggestions.

 

... off to do some code smiley

 

Regards,

 

Chuck Hackett

Last Edited: Wed. Jul 12, 2017 - 10:07 PM
#18

ChuckH wrote:
Using the chart that theusch provided, 0 C to 40 C @ 5.5 V gives a range of about 120.2 to 118.2 kHz, or a "drift" of about 1.7%, so it should be easy to detect the 35% difference between 14.74 and 20 MHz. Again ... we'll see ...

You only need to do the detection once, at initial "commissioning".  That should be at a normal temperature and fixed Vcc level.  Park the info in EEPROM.


#19

theusch wrote:

Wasn't there a problem with that model and 20MHz and USART?  Y'all with younger memories will have to help.

 

I may not have a younger memory but...wink

I think that problem may have been due to using the low-power xtal osc?

I have always used the full-swing xtal osc. without any 1284p USART problems, and always use 115,200 baud.

Some have speculated it could be caused by the USART0 pins being next to the xtal pins, and that a careful board layout might 'fix' that problem.

 

 

 

Last Edited: Thu. Jul 13, 2017 - 12:52 AM
#20

OK, so now I understand why you wish to have one software version.

 

It makes sense, knowing the rest of the story!

 

JC

#21

The change will just mean detecting 14 vs. 20 MHz at startup and doing the clock divisor calculations at run time

You'll only need to make a >>choice<< between two divisor settings.  No need to calculate them at run time.  That can (and should) be done at compile time.
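One way to follow this advice, sketched for the USART baud divisor (9600 baud as an example; the rounding formula is the usual add-half-divisor trick, and the function name is made up):

```c
#include <stdint.h>

#define BAUD 9600UL

/* Evaluated entirely at compile time; rounds to the nearest divisor. */
#define UBRR_FOR(f_cpu) (((f_cpu) + 8UL * (BAUD)) / (16UL * (BAUD)) - 1UL)

static const uint16_t ubrr_14m7 = UBRR_FOR(14745600UL); /* = 95  */
static const uint16_t ubrr_20m  = UBRR_FOR(20000000UL); /* = 129 */

/* At startup, pick whichever precomputed constant matches the
 * detected crystal; nothing is divided at run time. */
uint16_t ubrr_for_clock(int is_20mhz)
{
    return is_20mhz ? ubrr_20m : ubrr_14m7;
}
```

The same pattern extends to the CAN bit-timing registers: compute both settings with the preprocessor, select one at run time.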

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"Read a lot.  Write a lot."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

#22

clawson wrote:

skeeve wrote:
In Studio, how hard is it, from precisely the same source files, to generate two .hex files, one corresponding to -DF_CPU=14700000, the other to -DF_CPU=20000000?
Easily done. You use "Add as link" so the identical sources appear within two projects in the solution, then in Properties > Symbols of one you use F_CPU=12345 and in the other F_CPU=54321 (or whatever). Now "build solution" and both projects are built.

Or, you can have only one project, with two build configurations in it, each with a different define for F_CPU.

 

Build several configurations as a "one op" with the Batch Build function in the Build menu in Studio.

 

The advantage with having only one project is that when adding new source files (or deleting), you just do it once.

"He used to carry his guitar in a gunny sack, or sit beneath the tree by the railroad track. Oh the engineers would see him sitting in the shade, Strumming with the rhythm that the drivers made. People passing by, they would stop and say, "Oh, my, what that little country boy could play!" [Chuck Berry]

 

"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]

#23

Thanks again for the help.  Detection of the 14.7456 vs. 20 MHz xtal using the WDT is working great.

 

Regards,

 

Chuck Hackett

#24

Chuck,

 

Are you able/willing to share that section of the code?

 

Cheers,

 

Ross

 

Ross McKenzie ValuSoft Melbourne Australia

#25

Well, the code is kind of embedded in my particular restart logic, but I'll try separating it out into a compilable snippet if it's of interest ...

 

Regards,

 

Chuck Hackett

This reply has been marked as the solution. 
#26

For those interested, sample code is attached.  Hopefully it doesn't embarrass me! :-)

 

Note: the sample code was tested using Atmel Studio (optimization off) on an ATmega1284P with both 14.7456 MHz and 20 MHz crystals.

 

Regards,

 

Chuck Hackett

Attachment(s): 

#27

Thanks Chuck.

 

Cheers,

 

Ross

 

Ross McKenzie ValuSoft Melbourne Australia

#28

I can see a couple of issues with the code...

 

1. You're relying on ClockCheck_State being zero on power-up. Someone please correct me if I'm wrong, but SRAM contents are undefined at power-up so while it might be ok in your test environment it may stop working in the field. I suggest forcing it to ClockCheck_NotStarted for everything *except* a watchdog reset.

 

2. ClockCheck_Cycles should be volatile otherwise the loop may well be optimised away once you enable optimisation.

 

3. Since updates to ClockCheck_Cycles are not atomic, the watchdog could fire before a carry out of an LSB is written to the next significant byte, resulting in a potentially large error. Though looking at the constants you're using it may not be an issue.
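One way to sidestep both the volatile and the atomicity issues at once is to keep the cycle counter local to the counting loop and share only a one-byte flag with the ISR (a single-byte write is atomic on AVR). A hedged sketch; only ClockCheck_Cycles appears in the original code, the other names are hypothetical:

```c
#include <stdint.h>

volatile uint8_t wdt_fired;   /* set to 1 by the watchdog ISR */

/* Count busy-loop iterations until the WDT interrupt fires.  The
 * multi-byte counter is a plain local, never touched by the ISR, so no
 * torn read/write is possible; only the one-byte flag is shared. */
uint32_t count_until_wdt(void)
{
    uint32_t cycles = 0;
    while (!wdt_fired)
        ++cycles;
    return cycles;
}
```

The volatile qualifier on the flag also keeps the loop from being optimised away once optimisation is enabled.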

 

 

#29

Bit late, and maybe not necessary - but the internal temp sensor of the 328p is easy to set up and can be pretty accurate.

#30

ChuckH wrote:

It is impractical to upgrade all boards at once (and, in many cases unnecessary).  As time goes on the 14.74 Mhz cards may get upgraded to 20 Mhz but I don't want to have to maintain two different firmware downloads or run the chance that I or a customer will load the incorrect firmware (clock) into a given board.  Much cleaner if I can just give everyone one executable for both clock versions.

If you want to avoid having the incorrect firmware variant running on a board, then a simpler solution is to implement a clock self-check at power-on (using whatever clock measurement method you can find, e.g. the WDT) and, if it fails, have the firmware set some kind of error signal to the outside world and jump into an endless loop. This will surely mitigate the risk of having the wrong firmware running on a board, while still allowing a static (compile-time) clock frequency setting.
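This self-check reduces to a tolerance comparison; a minimal sketch under stated assumptions (the 10% tolerance and the measurement source are placeholders; F_CPU would come from the build, the fallback define here is only so the snippet is self-contained):

```c
#include <stdint.h>

#ifndef F_CPU
#define F_CPU 14745600UL   /* assumption: the 14.7456 MHz build */
#endif

/* Assumed tolerance: comfortably above WDT oscillator spread, well
 * below the ~36% spacing between the two crystals. */
#define TOLERANCE_PCT 10UL

/* measured_hz would come from a WDT-based measurement at power-on. */
int clock_selfcheck_ok(uint32_t measured_hz)
{
    uint32_t delta = (measured_hz > F_CPU) ? (measured_hz - F_CPU)
                                           : (F_CPU - measured_hz);
    return delta <= F_CPU / 100UL * TOLERANCE_PCT;
}
```

On failure the firmware would flash an error indication and loop forever rather than run with the wrong timing.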

 

 

/Jakob Selbing

Last Edited: Wed. Jul 19, 2017 - 07:30 PM