Write to SPI register masking UART interrupt?

Hello chaps,

I was wondering if anybody has come across problems when using both SPI and UART heavily on the ATMega32?

In my application, commands are received via the UART and buffered using an interrupt-driven routine. These commands are then executed out of the buffer and any required data displayed on the LCD module connected to the SPI port. Data is coming in at 38400 baud and the display data is updated at nearly 3.7MHz.
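
For context, the receive side is basically this shape. It's a simplified sketch rather than my actual code: the buffer size and names are made up, and it's shown in avr-gcc syntax rather than the ImageCraft source.

#include <avr/io.h>
#include <avr/interrupt.h>

#define RX_BUF_SIZE 64                     /* power of two keeps the wrap cheap */

static volatile unsigned char rx_buf[RX_BUF_SIZE];
static volatile unsigned char rx_head, rx_tail;

ISR(USART_RXC_vect)                        /* ATmega32 receive-complete vector */
{
    unsigned char c = UDR;                 /* reading UDR clears the interrupt */
    unsigned char next = (rx_head + 1) & (RX_BUF_SIZE - 1);
    if (next != rx_tail) {                 /* drop the byte if the buffer is full */
        rx_buf[rx_head] = c;
        rx_head = next;
    }
}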

What I had been experiencing were lost bytes of incoming data with the processor heavily loaded, and I had assumed that data was not being buffered correctly due to something I had done in software. However, a simple test showed that the UART buffer overrun flag was being set reasonably often. After an awful lot of head scratching/banging, I disabled global interrupts just before writing data to the SPI register and re-enabled them just after... and this cures the problem. No missed bytes and no buffer overrun.

I am using date code 48/03 ATMega32-16AI TQFP parts. They run at 14.7468MHz @ 5V. All code is written in 'C' and compiled using Imagecraft 6.30D, which has been verified to generate the expected assembler output.

Whilst the workaround is OK for now, I cannot see anything anywhere which describes the potential for this issue. I have put the question to Atmel, but it is still early days for a response from them. So I was wondering if any of you guys can shed some light on this. Surely I cannot be the only person seeing this?

I look forward to some enlightened answers. And just plain old answers too!

Sacha.

Are you using the SPI-Complete interrupt?

If so, how much processing are you doing there?

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

There did not seem to be much point when running at that speed, so the SPI interrupt is definitely disabled and I am just polling. I did try the interrupt approach to see, but it made no difference to the problem. Here is the function as it stands.

void spi_out(unsigned char value)
{
	CLI(); /*appears to stop interrupt problems during sending*/
	SPDR = value;
	SEI();
	#ifndef DEBUG
	while(!(SPSR & 1<<SPIF)); /*waits if there is still a byte being sent*/
	#endif
}

My initial thought was that the problem was due to the polling, but that was wrong. As you can see, the interrupts are now only disabled while transferring data to the SPDR register. Were you thinking that the UART was overflowing during a possible SPI ISR? I can see where you are coming from.

I do have Timer1 running on interrupt, but the ISR is completed well within the time to receive one frame. The state of the UART buffer is also checked on entry to this routine to make sure that a potential UART interrupt has not been masked.
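
For what it's worth, the guard at the top of the timer ISR is roughly this (a sketch with the same includes as above, not the real code; uart_enqueue() is a made-up name for the same enqueue the receive ISR does):

extern void uart_enqueue(unsigned char c);   /* hypothetical helper */

ISR(TIMER1_COMPA_vect)
{
    if (UCSRA & (1 << RXC))     /* a received byte already waiting? */
        uart_enqueue(UDR);      /* service it by hand so it cannot be lost */

    /* ... the rest of the timer work ... */
}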

Sacha.

Hmmm--I've got apps on the Mega32, '16, & '8 that do Modbus RTU slave full-tilt-boogie at 115kbps, as fast as a master can query in test suites, and several SPI devices (e.g., DS1305 RTC, 25xxx EEPROM) are part of those apps, driven with polled SPI [I, too, see no reason to use the SPI-Complete interrupt for most apps], without problems.

I have a theory, however. I'm guessing that SOMEWHERE in your code global interrupts are turned off. It's not the CLI()/SEI() pair that is the solution, it is the SEI() itself.

I guess you could test it by just putting an SEI() at the head of the routine that does the pair now, and see if that changes the symptoms.
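
Something like this, i.e. your routine with the CLI() dropped and only the SEI() kept, would tell you quickly (untested, just the shape of the test):

void spi_out(unsigned char value)
{
	SEI();                            /* make sure globals are on, whatever the caller did */
	SPDR = value;
	#ifndef DEBUG
	while (!(SPSR & (1 << SPIF)));    /* wait for the transfer to complete */
	#endif
}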

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Not that it affects anything, but since SPDR is only 8 bits, there should be no need to force the write to be atomic by bracketing it with CLI/SEI. The 8-bit write will be atomic all on its own. The CLI/SEI pair should be used when writing to 16-bit registers, as the write could be interrupted in between bytes.
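
Where the CLI/SEI bracket does earn its keep is something like this (illustrative only, and note the blanket SEI() assumes interrupts were enabled to begin with):

void set_ocr1a(unsigned int value)
{
  CLI();            /* stop an ISR landing between the two byte writes */
  OCR1A = value;    /* compiler writes the high byte, then the low byte */
  SEI();
}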

Also, SPI is a two-way transfer, so you might consider re-writing your routine to something like the following (though it's not necessary if you never read from the SPI port).

char spi_xfer(char data) {
  SPDR = data;           
  while(!(SPSR & (1<<SPIF))) NOP(); 
  return SPDR;           
}

I have used a variation of the above successfully, in an application where I am loading data from the USART into the SPI at full rate.

The code I actually use, for reference (the only difference is the order in which things are done, to maximize data transfer):

char spi_xfer(char data) {
  char tmp;
  while(!(SPSR & (1<<SPIF))) NOP(); // wait for any uncompleted transfer to finish
  tmp = SPDR;   // get the transferred byte
  SPDR = data;   // send the next byte
  return tmp;       
}

Using the above, you must pre-load SPDR with the first TX byte before the first call to spi_xfer().
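
As a hypothetical example of using it that way (send_block() and the 0xFF dummy byte are just illustrative):

void send_block(const char *tx, char *rx, unsigned char len)
{
  unsigned char i;
  SPDR = tx[0];                       /* pre-load the first TX byte */
  for (i = 1; i < len; i++)
    rx[i - 1] = spi_xfer(tx[i]);      /* collect byte i-1, start byte i */
  rx[len - 1] = spi_xfer(0xFF);       /* dummy transfer flushes the last RX byte */
}

Note the final dummy transfer is still clocking out when this returns; the wait at the top of the next spi_xfer() call takes care of that.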

[edit]
after reading your first post, I see the reasoning for the CLI/SEI. I would agree with Lee here that the problem is elsewhere, and that you may have inadvertently disabled interrupts elsewhere, resulting in the data loss. The addition of the SEI is re-enabling them for you, resolving the problem.
[/edit]

Writing code is like having sex.... make one little mistake, and you're supporting it for life.

Lee,

You are a top man. Just enabling the interrupts before moving the byte to the SPI register does the same thing. There was no rogue CLI, but sometimes the SPI routine would be called from within an interrupt-triggered display update routine, which of course runs with global interrupts disabled. Doh!

Anyway, problem solved now. I just re-enable the interrupts on entry to the original ISR. Thanks for your help, chaps. It's greatly appreciated. :)
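
In case it helps anybody else, the change boils down to something like this (placeholder names, avr-gcc syntax rather than my actual ICC source):

#include <avr/interrupt.h>

extern void display_update(void);   /* placeholder for the real routine */

ISR(TIMER1_COMPA_vect)               /* the interrupt that triggers the update */
{
    sei();                           /* let the UART receive interrupt nest */
    display_update();                /* the long SPI work is now interruptible */
}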

Sacha.

Sacha wrote:

... would be called from within an interrupt triggered display update routine, ...

What you describe is against my principles of trouble-free microcontroller interrupt-driven programming. Others posting here seem to love nested interrupts, and in some apps they may be necessary. But my rule is: Get in; do your business; and get out. I would NEVER call a display update routine from an interrupt. You are already spending at least 2-3 character times in your ISR or you wouldn't have seen your Rx symptoms.

Think of what is happening to your stack; think of all the multiple register save-restore. By the time your waiting interrupt actually gets to run, you could have been done and out of the first ISR.

If you ever get cascading interrupts, your system will tie itself into a Gordian knot in an instant.
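
The shape I'd use instead, as a sketch with made-up names: the ISR just records that work is pending, and the main loop does the heavy lifting.

#include <avr/interrupt.h>

extern void display_update(void);    /* placeholder for the real routine */

static volatile unsigned char display_dirty;

ISR(TIMER1_COMPA_vect)
{
    display_dirty = 1;               /* get in, set a flag, get out */
}

int main(void)
{
    /* ... init ... */
    sei();
    for (;;) {
        if (display_dirty) {
            display_dirty = 0;
            display_update();        /* the long SPI work runs outside the ISR */
        }
        /* ... the rest of the main loop ... */
    }
}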

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Firstly, I agree wholeheartedly with Lee's comments regarding ISRs. You never want to place long-running code in an ISR unless the application absolutely demands it. Avoid nested interrupts at ALL costs.

Sacha wrote:
Now data is coming in at 38400 baud and the display data updated at nearly 3.7MHz.

I hope that is your shift clock to the LCD, and not the actual attempted update rate. There is NO NEED to update a display that quickly, as your eye will never catch it. For perfectly smooth animation, updating the display at a rate of 25-30Hz is more than enough, and for most data-display applications updating at around 2-4Hz is probably enough.

People get caught in a mental loop of trying to update a display as soon as the data changes, and for every change, even if it does so at 100Hz or more. The problem with this is that you will never be able to read the changing values. Updating a display is a LOW PRIORITY task, and as such should be done periodically from the main loop. And as it's such a low priority, it should be safely interruptible by any ISRs.
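
As a sketch of what I mean (rates and names made up): a periodic tick flag from a timer ISR, divided down in the main loop.

extern void display_update(void);        /* placeholder for the real routine */

static volatile unsigned char tick;      /* set by a ~100Hz timer ISR */

void poll_display(void)                  /* called continuously from the main loop */
{
    static unsigned char div;
    if (tick) {
        tick = 0;
        if (++div >= 25) {               /* 100Hz / 25 = 4Hz refresh */
            div = 0;
            display_update();            /* low priority, freely interruptible */
        }
    }
}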

Note there is an exception to the above, and that is where you are driving the display with a stream of data that cannot be interrupted, as when generating NTSC or other video. In these cases the video has a very high priority and consumes the bulk of your resources. Your actual processing will need to be fairly simple, and other interrupts will need to be virtually non-existent. If you require more power, move the intensive processing onto another micro, and use a smaller micro purely as a display controller.

Writing code is like having sex.... make one little mistake, and you're supporting it for life.

Guys,

Thanks for the advice. I did sort of realise that doing all that work in the ISR was causing me problems. And seeing that the specific display event it was triggering was not time critical, I did take it out of the ISR. It all seems fairly robust now. Nearly time to hand it over to the customer to see how badly they can punish my poor design. :lol:

Glitch, you are right. No need to panic as it's just the serial clock.

Thanks once again chaps. Nice to know there is always some good advice out there.

Sacha.