After eight years of writing plain C for my Xmega projects, I took it upon myself to learn some C++ and build a C++ version of one of my telescope controllers.
The migration to objects went pretty easily, but one thing was a "silent killer" that bit me and took me a few hours to find. I use the delay_ms and delay_us functions once in a while. For example, there is an I2C joystick I use, and to read it you have to (see the sketch after this list):
1) Do an I2C write, saying "I want to read the joystick".
2) Wait a short time so the joystick can prepare the data.
3) Do an I2C read and get the results.
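Roughly, the sequence looks like this. This is just a sketch: i2c_write()/i2c_read(), JOYSTICK_ADDR, CMD_READ_JOYSTICK, and the 150 us figure are made-up stand-ins for my actual driver code.

    #include <stdint.h>

    #ifndef F_CPU
    #define F_CPU 32000000UL              // assumption: 32 MHz Xmega clock
    #endif
    #include <util/delay.h>               // avr-libc _delay_us()/_delay_ms()

    // Hypothetical driver interface, standing in for my real I2C code:
    void i2c_write(uint8_t addr, const uint8_t *data, uint8_t len);
    void i2c_read(uint8_t addr, uint8_t *data, uint8_t len);
    #define JOYSTICK_ADDR     0x20        // made-up slave address
    #define CMD_READ_JOYSTICK 0x01        // made-up "prepare a reading" command

    void joystick_poll(uint8_t *buf)
    {
        uint8_t cmd = CMD_READ_JOYSTICK;
        i2c_write(JOYSTICK_ADDR, &cmd, 1);  // step 1: ask for a reading
        _delay_us(150);                     // step 2: let it prepare (value illustrative)
        i2c_read(JOYSTICK_ADDR, buf, 4);    // step 3: collect the result
    }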
The delay at step #2 uses delay_us, and it turns out that in C++ it was running about 3 times faster than it had when compiled as C. Even looking at the signals on the scope I didn't catch it right away: all the I2C operations I saw looked correct; it was the time duration *between* the I2C operations that got me.
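For what it's worth, my mental model of these delays is a busy-wait whose loop count is computed at compile time from F_CPU, along the lines of the sketch below. my_delay_us and the 32 MHz figure are illustrative, not the actual avr-libc source; the 4-cycles-per-iteration constant is from the <util/delay_basic.h> documentation.

    #include <stdint.h>
    #include <util/delay_basic.h>   // _delay_loop_2(): 4 CPU cycles per iteration

    #ifndef F_CPU
    #define F_CPU 32000000UL        // assumption: 32 MHz Xmega clock
    #endif

    // Illustrative busy-wait modeled on how <util/delay.h> behaves: the
    // loop count is derived from F_CPU, so if F_CPU (or the optimization
    // settings) differ between the C and C++ builds, the same call
    // produces a different real-world delay.
    static inline void my_delay_us(uint16_t us)
    {
        // 4 cycles per pass of _delay_loop_2, so passes-per-us = F_CPU / 4e6
        _delay_loop_2((uint16_t)((F_CPU / 4000000UL) * us));
    }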
Why would this delay change so dramatically?
Mike in Alaska