How to change F_CPU at runtime to use the delay functions?

#1

Hi,

 

I can successfully use the "_delay_ms()" function by defining F_CPU and then configuring the clock settings to match. However, I have to change the clock speed in my code at runtime, and when I do, the delay function no longer works as it should.

 

For example, I define F_CPU as 32000000UL at the start and the delay function works well. Then I change the clock frequency to 2 MHz at runtime and do this:

 

#define F_CPU   32000000UL
#define INT_32MHZ

int main(void)
{

    clock_init();     //Configure the clock frequency as 32MHz.

    //do application stuff.

    _delay_ms(3000);    //This delay works well.
    /////

    #undef  F_CPU
    #define F_CPU   2000000UL

    #undef  INT_32MHZ
    #define INT_2MHZ

    clock_init();     //Reconfigure the clock frequency as 2MHz.

    _delay_ms(3000);    //This delay does not work properly.

}

void clock_init(void)
{
	#ifdef INT_32MHZ

	CLKSYS_XOSC_Config( 0, 0, OSC_XOSCSEL_32KHz_gc );
	CLKSYS_Enable( OSC_XOSCEN_bm );        /* Does an OR with current Osc bits */
	do {} while ( CLKSYS_IsReady( OSC_XOSCRDY_bm ) == 0 ); /* 32 kHz Does AND of the bits */

	CLKSYS_Prescalers_Config(0xC0,0x00);				//Must divide by 4 if using 32 MHz
	CLKSYS_Enable(OSC_RC32MEN_bm);					//Enable 32 MHz Clock
	do {} while ( CLKSYS_IsReady( OSC_RC32MRDY_bm ) == 0 ); /* Wait for 32 MHz Osc. to stabilize */

	//CLKSYS_AutoCalibration_Enable( OSC_RC32MCREF_bm, true );

	OSC.DFLLCTRL	= 0b00000010;
	DFLLRC32M.CTRL |= DFLL_ENABLE_bm;

	CLKSYS_PLL_Config( OSC_PLLSRC_RC32M_gc, 4 );             /* 8 MHz (32 MHz RC / 4) * 4 = 32 MHz.  PLL min is 10 MHz */

	CLKSYS_Enable( OSC_PLLEN_bm );
	do {} while ( CLKSYS_IsReady( OSC_PLLRDY_bm ) == 0 );   /* Wait for PLL to be ready */

// 	CLKSYS_AutoCalibration_Enable( OSC_RC32MCREF_bm, true );  /* 32MHz Osc.  True= external crystal used as reference */

	CLKSYS_Main_ClockSource_Select( CLK_SCLKSEL_PLL_gc );   /* Switch main clock PLL */
//	CLKSYS_AutoCalibration_Disable(DFLLRC32M);
	clock_speed=_32MHz;

	#elif defined INT_2MHZ
	CLKSYS_XOSC_Config( 0, 0, OSC_XOSCSEL_32KHz_gc );
	CLKSYS_Enable( OSC_XOSCEN_bm );        /* Does an OR with current Osc bits */
	do {} while ( CLKSYS_IsReady( OSC_XOSCRDY_bm ) == 0 ); /* 32 kHz Does AND of the bits */

	CLKSYS_Prescalers_Config(0x00,0x00);
	CLKSYS_Enable(OSC_RC2MEN_bm);					//Enable 2 MHz Clock
	do {} while ( CLKSYS_IsReady( OSC_RC2MRDY_bm ) == 0 ); /* Wait for 2 MHz Osc. to stabilize */

//	CLKSYS_AutoCalibration_Enable( OSC_RC2MCREF_bm, true );  /* 2MHz Osc.  True= external crystal used as reference */

	OSC.DFLLCTRL	= 0b00000001;
	DFLLRC2M.CTRL |= DFLL_ENABLE_bm;
	CLKSYS_PLL_Config( OSC_PLLSRC_RC2M_gc, 1 );             /* 2 MHz * 1 = 2 MHz.  Note: below the 10 MHz PLL minimum */

	CLKSYS_Enable( OSC_PLLEN_bm );
	do {} while ( CLKSYS_IsReady( OSC_PLLRDY_bm ) == 0 );   /* Wait for PLL to be ready */
	CLKSYS_Main_ClockSource_Select( CLK_SCLKSEL_PLL_gc );   /* Switch main clock PLL */
	clock_speed=_2MHz;

	#endif
}

 

I know #define is a preprocessor directive, so I'm not sure whether this is a proper way to change a definition at runtime. I would like to understand how to achieve this.

 

 

Last Edited: Wed. Jan 4, 2017 - 11:44 AM
#2

Why not use a timer to achieve your delays? This would be more accurate than delay.h, and it would also allow you to do things in between your delays. For example, in pseudo-code form, do something like this:

 

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t timerEnabled = 0;
volatile uint16_t msRemaining = 0;

void InitTimer(void)
{
    // set up timer to trigger an interrupt every 1ms. Maybe pass a parameter to
    // set PER, which will likely be different at different clock speeds
}

ISR(some_vect_related_to_timer) // ISR that gets fired every ms
{
    if (timerEnabled == 1)
    {
        if (msRemaining > 0)
            msRemaining--;
        if (msRemaining == 0)   // expire in the same tick the count hits zero
            timerEnabled = 0;
    }
}

void StartTimer(uint16_t waitTime)
{
    msRemaining = waitTime;
    timerEnabled = 1;
}

void WaitForTimer(void)
{
    while (timerEnabled == 1)
        ;
}

void msDelay(uint16_t delayTime)
{
    StartTimer(delayTime);
    WaitForTimer();
}

Or, if you want to start the delay and do something else after it has started, before waiting for it to finish, you could do this:

StartTimer(x);
DoSomethingElse();
WaitForTimer();

 

This way you don't need to use delay.h, you don't need to worry about defining F_CPU (or changing it during run-time, something I don't think is actually possible), and your delays will be more accurate and more useful. All you'll need to do is adjust the period of your timer once you change your clock speed.
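To make that concrete, here is a minimal sketch of what InitTimer() could look like on an XMEGA, assuming TCC0 is free and clocked straight from the peripheral clock; the ticksPerMs parameter is illustrative (32000 at 32 MHz, 2000 at 2 MHz), and the placeholder vector above would then be TCC0_OVF_vect:

#include <avr/io.h>
#include <avr/interrupt.h>

void InitTimer(uint16_t ticksPerMs)      // pass F_CPU / 1000 for the current clock
{
    TCC0.PER = ticksPerMs - 1;           // one overflow per millisecond
    TCC0.CTRLA = TC_CLKSEL_DIV1_gc;      // timer runs at full peripheral clock speed
    TCC0.INTCTRLA = TC_OVFINTLVL_LO_gc;  // low-priority overflow interrupt
    PMIC.CTRL |= PMIC_LOLVLEN_bm;        // enable low-level interrupts
    sei();
}

Call it again with the new tick count whenever you change the clock.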

#3

Personally,   I specify F_CPU in the Project Properties.   Then use the PLL to achieve it in real life.

 

F_CPU is always treated as a compile-time constant.   Your delay_ms() or peripheral maths use F_CPU to ensure the application behaves at a known speed.

 

If you really want to change speed on the fly,   make f_cpu a variable.   Use f_cpu when configuring Timers etc.
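A hypothetical sketch of that idea (the f_cpu variable and uart_set_baud() names are mine, not from any library):

#include <avr/io.h>
#include <stdint.h>

volatile uint32_t f_cpu = 32000000UL;    /* updated by clock_init() whenever the speed changes */

void uart_set_baud(uint32_t baud)
{
    /* XMEGA BSEL for BSCALE = 0:  f_per / (16 * baud) - 1 */
    uint16_t bsel = (uint16_t)(f_cpu / (16UL * baud) - 1);
    USARTC0.BAUDCTRLA = (uint8_t)bsel;
    USARTC0.BAUDCTRLB = (uint8_t)((bsel >> 8) & 0x0F);
}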

 

Most apps run at full speed then sleep.   Wake up, run fast, sleep ...

You seldom want to run at fast, slow, snail, sleep ...

 

Incidentally,  the PLL lets you try 40MHz, 48MHz, ... very easily.

AtomicZombie overclocks wickedly.   I find that 48MHz is a sensible limit.

 

David.

#4

make f_cpu a variable.

No, it needs to be a known constant at compile time.

 

But if you have a module (or modules) where the speed is different, then compile them with a different define of F_CPU.
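For example, a minimal sketch of such a module (the file and function names are hypothetical; build with optimisation on, as util/delay.h requires):

/* delay_2mhz.c : compiled as its own translation unit */
#ifndef F_CPU
#define F_CPU 2000000UL     /* or pass -DF_CPU=2000000UL for this file only */
#endif
#include <stdint.h>
#include <util/delay.h>

void delay_ms_2mhz(uint16_t ms)
{
    while (ms--)
        _delay_ms(1);       /* constant argument, as _delay_ms() requires */
}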

 

But like the others said: don't change the speed!

 

And as general info: the delay will be wrong if there are interrupts (the busy-wait is stretched by time spent in the ISRs).

#5

I thought that I had implied  using Timers with f_cpu.

 

The library _delay_ms() is pretty crazy.   

 

Whereas a macro is wise for _delay_us(), you might just as well use a variable loop that counts hardware 1 ms periods when you want delay_ms(). The loop housekeeping is trivial.
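Something like this sketch, assuming a spare timer (TCC1 here, purely as an example) is already configured so that one overflow equals 1 ms at the current clock:

#include <avr/io.h>
#include <stdint.h>

void delay_ms_var(uint16_t ms)      /* hypothetical name */
{
    while (ms--) {
        TCC1.INTFLAGS = TC1_OVFIF_bm;               /* clear the overflow flag */
        while (!(TCC1.INTFLAGS & TC1_OVFIF_bm))     /* busy-wait one hardware millisecond */
            ;
    }
}

(The first iteration can be short, since the timer is mid-period when you enter.)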

 

David.

 

 

#6

You are right to suggest avoiding delay functions, as I am also using some interrupts. Actually, I am working on a project revision, and the existing software already uses soft timers. I wanted to skip the timer at first, but I guess I had better go in deeply and try to fit my requirement into their soft-timer flags. I will need a 1 ms delay, so I have to use a timer with at least microsecond sensitivity. But there were no free timers to use, so I guess it is better to make a change in the default project.

Howard_Smith wrote:

Thanks, I liked the simple example you provided, it looks very clean :) I will note this for future use.

Last Edited: Wed. Jan 4, 2017 - 01:20 PM
#7

You can normally steal a Timer for a "system timer", e.g. from a PWM Timer. You simply add an Overflow ISR().
It does not have to be an exact 1 ms, although this does make life easier.
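A sketch of the idea, assuming the PWM happens to run on TCD0 (an illustrative choice); the ISR only counts periods and leaves the PWM itself untouched:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint32_t sys_ticks;    /* hypothetical counter: one count per PWM period */

ISR(TCD0_OVF_vect)
{
    sys_ticks++;
}

/* enable with: TCD0.INTCTRLA = TC_OVFINTLVL_LO_gc;  PMIC.CTRL |= PMIC_LOLVLEN_bm;  sei(); */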
David.

#8

How do you kick the WD? (Perhaps you can make a counter there.)
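A sketch of that thought, using avr-libc's <avr/wdt.h>; the wrapper and counter names are hypothetical:

#include <avr/wdt.h>
#include <stdint.h>

volatile uint16_t wd_kicks;     /* hypothetical: counts watchdog services */

static inline void kick_watchdog(void)
{
    wdt_reset();                /* the standard avr-libc watchdog kick */
    wd_kicks++;                 /* piggy-back a coarse timebase on the kicks */
}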

 

#9

Why would you want to change the CPU frequency? I just set it and leave it.

If you don't know my whole story, keep your mouth shut.

If you know my whole story, you're an accomplice. Keep your mouth shut. 

#10

The only way to do it is to add extra functions. Say you run at 2 MHz and 32 MHz; with F_CPU defined as 2000000UL you could do:

 

#define _delay_ms_2mhz(a)   _delay_ms(a)        /* clock matches F_CPU */
#define _delay_ms_32mhz(a)  _delay_ms(a*16)     /* clock is 16x faster than F_CPU, so scale up */

 

If you are feeling fancy you can make a little macro that selects the correct one for you, but the selection is evaluated at run time, so it adds a little bit of extra delay to your loop.
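Something like this hypothetical selector, reusing the clock_speed variable your clock_init() already sets:

#define delay_ms_auto(a)  do {              \
        if (clock_speed == _32MHz)          \
            _delay_ms_32mhz(a);             \
        else                                \
            _delay_ms_2mhz(a);              \
    } while (0)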

 

For millisecond timing on XMEGA, one easy hack is to use the RTC. If you run it at 1024 Hz, each tick is 1.024 ms, close enough for many purposes. You also get an ultra-low-power sleep ability and don't waste one of the other, more useful timers.
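A minimal sketch of that setup, assuming the internal 32.768 kHz RC oscillator as the RTC source:

#include <avr/io.h>
#include <avr/interrupt.h>

void rtc_tick_init(void)
{
    OSC.CTRL |= OSC_RC32KEN_bm;                         /* enable internal 32.768 kHz osc */
    while (!(OSC.STATUS & OSC_RC32KRDY_bm))
        ;
    CLK.RTCCTRL = CLK_RTCSRC_RCOSC_gc | CLK_RTCEN_bm;   /* feed the RTC at 1.024 kHz */
    while (RTC.STATUS & RTC_SYNCBUSY_bm)
        ;
    RTC.PER = 0;                                        /* overflow on every tick */
    RTC.CTRL = RTC_PRESCALER_DIV1_gc;                   /* run the RTC undivided */
    RTC.INTCTRL = RTC_OVFINTLVL_LO_gc;                  /* low-priority tick interrupt */
    PMIC.CTRL |= PMIC_LOLVLEN_bm;
    sei();
}

Count the ticks in ISR(RTC_OVF_vect); the RTC keeps running even in power-save sleep.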

#11

mojo-chan wrote:
If you run it at 1024Hz each tick is 1.024ms
I believe 1024 Hz would be 0.9766 ms (1/1024 s = 0.9765625 ms)...

David

#12

frog_jr wrote:

mojo-chan wrote:
If you run it at 1024Hz each tick is 1.024ms
I believe 1024Hz would be 0.9766ms...

 

Can you tell I'm overworked? ;-)