large inter-byte delay in spi_m_sync_transfer?


Short form:

I'm trying to send a packet of three bytes via SPI using spi_m_sync_transfer().  I'm seeing a 100 uSec gap between each byte.

  • is this to be expected?
  • what should I do to send 3 bytes without a gap?

 

Details:

Hardware is a SAMD11 Xplained Pro board.  The CPU and SERCOM1 are running off a 48 MHz clock (from the DFLL48M).

Dev env is Microchip Studio v7.0.2542.
CONF_GCLK_SERCOM1_CORE_FREQUENCY is 48000000 (48MHz) as expected.

CONF_SERCOM_1_SPI_BAUD is 2500000 (2.5 MHz) as expected.

Clock rate for the SPI is 2.5MHz, and measures properly on my 'scope.

 

Code:

#include "atmel_start.h"
#include "atmel_start_pins.h"

static uint8_t pixels[] = {0x01, 0xfe, 0x55};  // 3-byte test packet

int main(void)
{
    int32_t retval;  // spi_m_sync_transfer() returns bytes sent, or a negative error code
    struct spi_xfer xfer = {
        .rxbuf = NULL,
        .txbuf = pixels,
        .size = sizeof(pixels)
    };

    atmel_start_init();
    spi_m_sync_enable(&RGB_COM);

    do {
        retval = spi_m_sync_transfer(&RGB_COM, &xfer);  // write the 3-byte packet
        delay_ms(1);
    } while (retval == (int32_t)sizeof(pixels));

    // arrive here if spi_m_sync_transfer() failed
    while (1) {
        asm("nop");
    }
}

What I expect:

I expect to see a burst of three bytes sent back-to-back with a 1 mSec delay between each burst.

 

What I observe:

I see three bytes sent with a 100 uSec gap between each byte, followed by a 1 mSec delay.

 

This topic has a solution.
Last Edited: Fri. Feb 5, 2021 - 04:16 PM

Update: The short answer is that I'm code limited.  But I don't see the SERCOM buffering data the way I expected it would.  By skipping the ASF library (tip o' the hat to Lars) and cranking up optimization, I got the inter-byte gap down to ~18 uSec.  But there's still a gap.  Here's the updated code:

 

static inline void send_byte(uint8_t byte) {
    while (SERCOM1->SPI.INTFLAG.bit.DRE == 0) {
        asm("nop");  // spin until Data Register Empty
    }
    SERCOM1->SPI.DATA.reg = byte;
}

static void RGB_loop(void) {
    uint8_t *p;

    spi_m_sync_enable(&RGB_COM);
    while(true) {
        p = s_pixels;
        send_byte(*p++);
        send_byte(*p++);
        send_byte(*p++);
        delay_ms(1);
    }
}

So here's why I'm perplexed.  With low optimization, I put a breakpoint on the NOP instruction and it never got triggered, meaning that SERCOM was processing bytes faster than the code could provide them.

 

But after bit bumming and cranking up the optimization, it hit the NOP breakpoint consistently.   HOWEVER, there was still a 17 uSec gap between bytes.  This leads me to think that the SERCOM isn't double buffering the data register as I thought it would.

 

Comments?

 


fearless_fool wrote:
skipping the ASF library

Yes, I had to ditch ASF in a similar situation.

 

 


Are you letting SERCOM control the SPI /CS pin (by setting CTRLB.MSSEN)?

 

If so, the hardware will insert an inter-byte gap of at least three clock cycles even if there's more data to transmit. In addition, it will de-assert /CS for at least one cycle while doing this!

 

It's a known erratum, and the solution is to control /CS in software.
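A minimal sketch of the software-/CS workaround, using ASF4's `gpio_set_pin_level()` (here `CS_PIN` is a placeholder name, not something from the project above):

```c
/* Sketch: drive /CS from a GPIO with MSSEN disabled.
   CS_PIN is a placeholder for whatever pin START assigned. */
gpio_set_pin_level(CS_PIN, false);      /* assert /CS (active low) */
spi_m_sync_transfer(&RGB_COM, &xfer);   /* send the whole packet with /CS held low */
gpio_set_pin_level(CS_PIN, true);       /* release /CS after the burst */
```

Since the pin is under software control, the hardware never gets a chance to de-assert /CS mid-packet.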

 

Steve

Maverick Embedded Technologies Ltd. Home of Maven and wAVR.

Maven: WiFi ARM Cortex-M Debugger/Programmer

wAVR: WiFi AVR ISP/PDI/uPDI Programmer

https://www.maverick-embedded.co...


@Steve: thanks for the note. 

 

I saw the previous post about MSSEN, and it's disabled.  First, I turned off RX Enable in Atmel START, and I notice that Config/hpl_sercom_config.h has

#define CONF_SERCOM_1_SPI_MSSEN 0x0

 

 

In addition, during execution, I've used the debugger to examine SERCOM1->CTRLB->MSSEN and confirmed that it's set to zero.  So I don't think this is the issue, but I'm still seeing 15 uSec between bytes.  (I see that CTRLB->RXEN is enabled despite what I asked Atmel START to configure.  I've traced that to a bug in _spi_load_regs_master() in hpl_sercom.c, but I think it's benign...)

 

(BTW, see also https://community.atmel.com/foru... for the ongoing saga...)

 

 

 


Something doesn't add up. With the CPU running at 48 MHz and SPI at 2.5 MHz, you've got ~150 instruction cycles per SPI byte. With the SPI Tx shift and holding registers, you can pretty much double that number of instruction cycles.

 

Given the simplicity of the example code in your message #2, it seems unlikely that even the unoptimised code would exceed 300 cycles between bytes.

 

Have you verified that the CPU really is running at 48 MHz?

 

Steve



@Steve: I agree with you about things not adding up.  Circumstantial evidence is that the CPU is running at 48MHz:

 

In peripheral_clk_config.h:

#ifndef CONF_GCLK_SERCOM1_CORE_FREQUENCY
#define CONF_GCLK_SERCOM1_CORE_FREQUENCY 48000000
#endif

... and a previous version of my code wrote strings to the EDBG serial port at 115200 baud -- those came out at the right speed.

 

I changed the SPI baud rate to 2.4 MHz (since it divides evenly into 48 MHz) and I'm seeing 2.475 MHz on SCK - close enough considering that the 32 kHz oscillator is running the show.

 

What's interesting is that, with the code as written, the bytes start 15 uSec apart:

 

<I'm trying to insert a scope image here but the server is hiccuping.>

 

When I comment out the busy wait on DRE (while (SERCOM1->SPI.INTFLAG.bit.DRE == 0);) (and yes, I know that's a bad idea), I get well-formed bytes at 5 uSec apart:

 

<Still attempting to insert another scope image.>

 

I don't know why I'm not getting buffering, but it *does* seem likely that I'm still CPU bound: 48 MHz / 2.4 MHz gives me 20 clock ticks to load a new byte.  [Edit: that should have been 20 ticks per bit, or 160 ticks per byte.  See @steve's note below.]  I'm not an expert on ARM/Cortex timing, but that seems pretty tight.  That's why I'm trying the DMA approach instead (https://community.atmel.com/foru...).

 

Last Edited: Fri. Feb 5, 2021 - 04:23 PM

I've no doubt the SERCOM clock is correct, both for the current one in SPI mode and the other hooked to EDBG. However, that #define is not relevant to the CPU core clock speed.

 

If you're using START:

What are the definitions for CONF_GCLK_GEN_0_SRC and CONF_GCLK_GEN_0_DIV in hpl_gclk_config.h?

What's the definition of CONF_CPU_FREQUENCY in peripheral_clk_config.h?

Can you post a screenshot of START's clock configuration?

 

The next step is to toggle a GPIO pin in a tight loop (without using the START API) and use the 'scope to check its frequency.
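For example, a direct-register toggle loop on the SAMD11 might look like this (PA02 chosen arbitrarily; divide the observed toggle rate into the cycles per loop iteration to recover the CPU clock):

```c
/* Sketch: bypass the START API and toggle PA02 via the PORT registers.
   The frequency seen on the 'scope reveals the real CPU clock. */
PORT->Group[0].DIRSET.reg = PORT_PA02;       /* PA02 as output */
for (;;) {
    PORT->Group[0].OUTTGL.reg = PORT_PA02;   /* one toggle per pass */
}
```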

 

Steve


This reply has been marked as the solution. 

@Steve:

What are the definitions for CONF_GCLK_GEN_0_SRC and CONF_GCLK_GEN_0_DIV in hpl_gclk_config.h?

What's the definition of CONF_CPU_FREQUENCY in peripheral_clk_config.h?

Give this man a cigar!  I thought I was running the CPU off of the DFLL, but instead it was running at 1 MHz off the OSC8M:

#define CONF_GCLK_GEN_0_SRC GCLK_GENCTRL_SRC_OSC8M
#define CONF_GCLK_GEN_0_DIV 1

#define CONF_CPU_FREQUENCY 1000000

Switched it to run off the DFLL (div 4) at 12 MHz, and now I'm getting beautiful, contiguous SPI bytes.
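For reference, after that change the generated defines should read something like this (my reconstruction of the expected values, not copied from the project):

```c
#define CONF_GCLK_GEN_0_SRC GCLK_GENCTRL_SRC_DFLL48M
#define CONF_GCLK_GEN_0_DIV 4

#define CONF_CPU_FREQUENCY 12000000
```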

Many thanks for your insights and perseverance!

Last Edited: Fri. Feb 5, 2021 - 04:20 PM

No probs.

 

The good thing about the SAMDx chips is the multitude of clocking options.

The bad thing about the SAMDx chips is also the multitude of clocking options.

 

Steve



Now the only thing I need to figure out is how to make COPI (MOSI) stay low between transfers...