SPI Hardware vs bitbang


Split from :

https://www.avrfreaks.net/forum/...

 

 

 

avrcandies wrote:
The design of the SPI block is the opposite; they try to shave the timing margins down to the minimum tolerable/usable margins (in order to allow for the fastest speeds). Why use the minimum possible safety margin if you are not in a blazing hurry?

As I said in another thread discussing SPI - I think you are wrong. See my previous post:  https://www.avrfreaks.net/comment/2811871#comment-2811871

 

If you still think that the AVR SPI block is badly designed and gives you bad timing margins - please provide an example. I cannot see how any external SPI device would NOT be compatible with at least some configuration (CPOL, CPHA, frequency) of the SPI block. After all, SPI is nothing but clock and data at an arbitrary rate.

/Jakob Selbing


jaksel wrote:
As I said in another thread discussing SPI - I think you are wrong. See my previous post: https://www.avrfreaks.net/comment/2811871#comment-2811871

If you still think that the AVR SPI block is badly designed and gives you bad timing margins

Oh, I'm not saying SPI doesn't normally work. I'm saying that if you want to apply some RC filtering to the cabling to greatly increase noise resistance, bit-banging allows a much longer settling time in a high-noise environment. You might even allow 10 microseconds for the dust to settle (a moderate filter) if you are in no hurry. Perhaps the same could be achieved just by setting the SPI to its slowest possible setting.
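
Just to make that concrete, here is roughly what I mean - a minimal bit-banged write with a deliberately generous settling delay. The pin assignments (MOSI on PB3, SCK on PB5) and the 10 us figure are only placeholders for the example, not from any particular board:

#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>

#define BB_MOSI   PB3        /* placeholder pin assignments */
#define BB_SCK    PB5
#define SETTLE_US 10         /* time for the dust to settle behind the RC filter */

/* Send one byte, MSB first, clock idling low. The data is set first, given
 * SETTLE_US to stabilize, and only then is the clock pulsed, so the rising
 * edge always lands well inside the valid data window. */
static void bb_write_byte(uint8_t b)
{
    for (uint8_t i = 0; i < 8; i++) {
        if (b & 0x80) PORTB |=  (1 << BB_MOSI);
        else          PORTB &= ~(1 << BB_MOSI);
        b <<= 1;

        _delay_us(SETTLE_US);          /* data settles before the clock moves */
        PORTB |=  (1 << BB_SCK);       /* rising edge lands mid-bit */
        _delay_us(SETTLE_US);
        PORTB &= ~(1 << BB_SCK);
    }
}

(MOSI and SCK are assumed to have been made outputs beforehand.)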

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


avrcandies wrote:
Oh, I'm not saying SPI doesn't normally work...

You claimed in the other thread that for you 75% of the chips were not "compatible" with the SPI block.

 

avrcandies wrote:
they try to shave the timing margins down to the minimum tolerable/usable margins

Maybe you'd like to explain what you mean by this?

/Jakob Selbing


jaksel wrote:

avrcandies wrote:
they try to shave the timing margins down to the minimum tolerable/usable margins

Maybe you'd like to explain what you mean by this?

 

  I'll try to clarify this, if you don't mind. There is more to an SPI interface than just saying it is compatible or not. SPI is a synchronous interface, meaning that the data lines should change state in step with the clock edges. If you change the data too early relative to the clock, the slave may be late and read your new data, which is wrong. If you change the data too late, the slave may be faster than you think and read the old data.

  When you bit-bang it, you may or may not have all the lines on one port. If you don't have them on one port, you can't drive them synchronously, which limits the speed of the interface.

  The main challenge is when the master reads from the slave. Once the master flips the clock line, the signal must reach the slave, the slave needs to see it, prepare the next bit, and output the new bit early enough that the master can read it within half a clock period. To achieve this,

avrcandies wrote:
they try to shave the timing margins down to the minimum tolerable/usable margins
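
  To make that read sequence concrete, here is a rough bit-banged version of it, mode 0 style, where every delay is explicit. The pin names and the delay value are purely illustrative:

#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>

#define BB_SCK    PB5        /* illustrative pins; the MISO pin must be an input */
#define BB_MISO   PB4
#define SETTLE_US 5          /* covers the slave's clock-to-output delay plus line settling */

/* Read one byte with the clock idling low and /CS already asserted, so the
 * slave's first bit is already sitting on MISO. The slave shifts out the next
 * bit on the falling edge; the master samples just before the rising edge,
 * after the data has had plenty of time to stabilize. */
static uint8_t bb_read_byte(void)
{
    uint8_t b = 0;
    for (uint8_t i = 0; i < 8; i++) {
        _delay_us(SETTLE_US);                 /* wait out the slave's output delay */
        b = (uint8_t)(b << 1);
        if (PINB & (1 << BB_MISO)) b |= 1;    /* sample MISO while it is stable */
        PORTB |=  (1 << BB_SCK);              /* rising edge */
        _delay_us(SETTLE_US);
        PORTB &= ~(1 << BB_SCK);              /* falling edge: slave prepares the next bit */
    }
    return b;
}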

 


jaksel wrote:

avrcandies wrote:
Oh, I'm not saying SPI doesn't normally work...

You claimed in the other thread that for you 75% of the chips were not "compatible" with the SPI block.

 

 

Yes, that is true... they may require some oddball number of bits, so it's much simpler to bit-bang them; also, the bits can be sent in batches of the partial sizes needed. For example, you may need 3 bits to specify parameter A (e.g. channel 6), 2 bits for parameter B, 4 bits for parameter C (e.g. brightness 13), and finally 2 bits for parameter D... so you can figure out 3 bits, send them, elsewhere figure out 2 more bits, send those, etc. No wasteful regrouping or aligning needed.

 

just call:    spit_bits(mybyte, number_of_bits)  //call each time, as needed
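
A sketch of what such a routine could look like - the name spit_bits is from above, but the rest (MSB-of-group first, mode 0 style clocking, PB3/PB5 pins, 1 us settling) is just my guess at one way to write it:

#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>

#define BB_MOSI PB3          /* illustrative pin assignments */
#define BB_SCK  PB5

/* Send the low `nbits` bits of `value`, most significant bit of the group first.
 * Call it once per parameter group, e.g. spit_bits(channel, 3); spit_bits(gain, 2); */
static void spit_bits(uint16_t value, uint8_t nbits)
{
    for (uint8_t i = nbits; i != 0; i--) {
        if (value & (1u << (i - 1))) PORTB |=  (1 << BB_MOSI);
        else                         PORTB &= ~(1 << BB_MOSI);
        _delay_us(1);                     /* settling time; pick to suit the bus */
        PORTB |=  (1 << BB_SCK);          /* clock the bit in */
        _delay_us(1);
        PORTB &= ~(1 << BB_SCK);
    }
}

A 9-, 11- or 26-bit word then just becomes a few such calls, with no repacking into byte-sized frames.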

 

 

also:

jaksel wrote:
After all, SPI is nothing but clock and data at an arbitrary rate.

This implies there is an inherent timing margin, yes? If you desire more margin for increased noise filtering & reliability, you can get it using bit-bang. Perhaps you can get the same effect by simply increasing the SPI divider (unless it is a fixed time delta, in which case all rates have the same margin).

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


angelu wrote:

  I'll try to clarify this, if you don't mind. There is more to an SPI interface than just saying it is compatible or not. SPI is a synchronous interface, meaning that the data lines should change state in step with the clock edges. If you change the data too early relative to the clock, the slave may be late and read your new data, which is wrong. If you change the data too late, the slave may be faster than you think and read the old data.

  When you bit-bang it, you may or may not have all the lines on one port. If you don't have them on one port, you can't drive them synchronously, which limits the speed of the interface.

  The main challenge is when the master reads from the slave. Once the master flips the clock line, the signal must reach the slave, the slave needs to see it, prepare the next bit, and output the new bit early enough that the master can read it within half a clock period. To achieve this,

 

avrcandies wrote:

they try to shave the timing margins down to the minimum tolerable/usable margins

 

I still don't see what that has to do with "margins" being "shaved down" in the SPI block itself. What margins are you referring to here? Anything from the datasheet? Or are you just saying that the SPI block is *fast* in order to support high clock frequencies? Surely that is not a bad thing.

 

Regarding synchronous design, the datasheet for ATmega328 says this:

Quote:
Data bits are shifted out and latched in on opposite edges of the SCK signal, ensuring sufficient time for data signals to stabilize.

This means that incoming data is normally sampled in the middle of the data cycle, i.e. half a period after it changes and half a period before the next change. That is the optimum sampling point. And besides, you can select which clock edge is used for what, and even invert the clock altogether. So how can that NOT be compatible with any device claiming to have a standard SPI interface?

 

What could happen, though, is that CPOL and CPHA are incorrectly configured, which could lead to an SPI device working intermittently but being very susceptible to noise, sensitive to trace length differences, etc.
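
For reference, picking the mode and clock on the hardware block is only a couple of register writes. A minimal master-mode sketch for an ATmega328-style part (the F_CPU/16 divider is an arbitrary choice here):

#include <avr/io.h>

/* Hardware SPI master on an ATmega328-style pinout: SS = PB2, MOSI = PB3, SCK = PB5. */
static void spi_init(void)
{
    DDRB |= (1 << PB2) | (1 << PB3) | (1 << PB5);   /* SS, MOSI, SCK as outputs */

    /* Enable SPI, master mode, mode 0 (CPOL = 0, CPHA = 0), SCK = F_CPU/16.
     * Set the CPOL/CPHA bits to whatever the slave's datasheet asks for. */
    SPCR = (1 << SPE) | (1 << MSTR) | (1 << SPR0);
    /* SPSR |= (1 << SPI2X);    // uncomment to double the SCK frequency */
}

static uint8_t spi_transfer(uint8_t out)
{
    SPDR = out;                          /* start shifting the byte out */
    while (!(SPSR & (1 << SPIF)))        /* wait for the transfer to complete */
        ;
    return SPDR;                         /* the byte clocked in from the slave */
}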

/Jakob Selbing


avrcandies wrote:
Yes, that is true....they may require some odd ball number of bits, so it's much simpler to bitbang them, also the bits can be sent in batches of the partial sizes needed.

I wouldn't call that a matter of compatibility then. Sure, it is perhaps easier because you can use an arbitrary number of bits.

 

avrcandies wrote:

After all SPI is nothing but clock and data at arbitrary rate.

This implies there is an inherent timing margin, yes? If you desire more margin for increased noise filtering & reliability, you can get it using bit-bang.   Perhaps you can get the same effect by simply increasing the spi divider 

Since you can set an (almost) arbitrary clock frequency, you can decide the timing margins. I still don't see what this has to do with any timing margins inside the AVR SPI block being "shaved down"?

 

avrcandies wrote:
unless it is a fixed time delta, then all rates have the same margin

Are you referring to the clock-to-data delay of the SPI block? In that case, it probably has very little effect unless you are running at extremely high clock frequencies. The outgoing data and incoming data use opposite edges of the clock, which means that the clock-to-data delay of the SPI block may be as high as half a clock period before any data errors occur. I doubt it has any practical relevance at typical SPI frequencies.

/Jakob Selbing



jaksel wrote:
I still don't see what that has to do with "margins" being"shaved down" in the SPI block itself. What margins are you referring to here? Anything from the datasheet?

jaksel wrote:

Quote:

Data bits are shifted out and latched in on opposite edges of the SCK signal, ensuring sufficient time for data signals to stabilize.

  That is in theory. Here are the SPI timings for the Mega328P, from the SPI timing table in the datasheet:

  Look at t7 = 10 ns. Despite the fact that this is produced by the same device's pins, there is still a delay.

 

  Now, let's look at the case I described before, when the master reads from the slave. First, the master flips the clock line. Then the slave needs up to t15 = 15 ns to get its data onto the line. Then the master needs that data to be stable for at least t4 = 10 ns before it reads it. These t15 + t4 = 25 ns need to fit into half an SPI clock period, which means the minimum SPI clock period becomes 50 ns, or the maximum SPI clock frequency 20 MHz. If you add any circuitry in between, like level translators or isolators, things only get worse. This is more of a problem for microcontrollers whose CPU clock is higher than the peripheral clock. The aim is to have these numbers as small as possible, and this is what manufacturers are doing: they shave these timings down as much as possible.

  It is obvious that you can't achieve this when you bit-bang the SPI, especially if the pins are not on one port.
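
  If you want to play with the budget, the same arithmetic written out as a tiny calculation - the t15 and t4 values are the ones quoted above; the extra delay is just an example number for an added isolator or level translator:

#include <stdio.h>

/* Half the SPI clock period must cover the slave's SCK-to-out delay (t15),
 * the master's data setup time (t4) and any extra path delay in between. */
int main(void)
{
    const double t15_ns   = 15.0;    /* slave clock-to-output */
    const double t4_ns    = 10.0;    /* master data setup */
    const double extra_ns = 0.0;     /* e.g. 20.0 for an isolator in the MISO path */

    double half_period_ns = t15_ns + t4_ns + extra_ns;
    double f_max_mhz = 1000.0 / (2.0 * half_period_ns);

    printf("max SPI clock about %.1f MHz\n", f_max_mhz);   /* 20.0 MHz with no extras */
    return 0;
}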


angelu wrote:
The aim is to have these numbers as small as possible. And this is what manufacturers are doing: they shave down these timings as much as possible.

I totally agree with your description. However, I would not call that "shaving down margins" - I would call it making the device faster, or giving it less clock-to-data delay/latency, or whatever.

 

To me, less margin would mean that the time between the data signal being latched in and the point at which the data signal changes is reduced. In your example, if the total delay of the slave data signal is 25 ns and the master uses a 50 ns clock period (20 MHz), the margin would be 0 ns, which is obviously too small and therefore bad (high risk of data corruption). If the clock frequency is reduced, the margin increases - which is good.

 

BTW, an SPI clock of 20 MHz is pretty high considering that it is inherently limited to 1/4th of the CPU clock, i.e. it would take an 80 MHz CPU clock. So for any AVR device, the SPI clock frequency will probably be limited by other factors like trace length, ringing, reflections, slave delays, etc.

 

angelu wrote:
  It is obvious that you can't achieve this when you bit-bang the SPI especially if the pins are not on one port.

Again I totally agree.

 

But my objection was to the description of the AVR SPI block as being "incompatible" and having its "margins shaved down", suggesting that it does not work properly due to its design.

 

OTOH, if "margins shaved down" means a faster device, then I cannot see why that would be a problem, since that would only support even higher SPI clock frequencies (or more margin if the SPI clock is the same).

/Jakob Selbing


Again, I argue that if margins are incorrect, the wrong mode is probably being used.

 

The original posting was about the M328P or PB, not quite sure which. If that runs at 8 MHz, then the maximum clock rate can be no higher than 2 MHz. That means 250 ns between new data being put onto the bus by the data source (one clock edge) and data being read by the receiver (next clock edge). 250 ns is not "shaving", by any stretch of the imagination. If it were 2 or 3 or 5 ns, one might be justified in saying that, but not 250 ns.

 

With SPI running at 2 MHz, hardware is rarely a limit. Even with horrendous ringing, mismatched trace lengths, and such, it just won't be a limit. Propagation velocity on a PCB trace is about 1 ns/inch. So, a 5 inch mismatch between clock and either MISO or MOSI will only affect the 250 ns "setup" time by 5 ns; that is essentially no effect. Poorly designed hardware, such as caps on the clock or data lines, or attempts to "terminate" traces, might cause problems. But nobody does that, right?

 

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 


Bit-bang simply allows you to center your clock pulse in the middle of the data. Timing #7 shows the data going away very close to the clock going away, which may be a close shave once filters are added in a high-power RF environment (a cable going to a chip in the RF section). All of the lines going to the RF boards have some sort of filtering, and the digital lines are filtered going to sensitive RF sections. True, you could adjust your mode so that the data extends past the end of the clock, but then that makes the front edge very tight, which may not be so good for other boards in the system that need the data steady as the clock edge is going high rather than going low. Bit-bang lets you always place the clock pulse dead-center and avoid any issues, being simultaneously compatible with both.

As mentioned, it also allows you to tailor to the exact number of bits needed, so any setup is inherently compatible. 9 bits, 11 bits, 26 bits, etc. can be sent with ease, especially since each subgrouping of bits can be generated and sent without any extra packaging steps. Of course, the downside is that bit-bang puts extra work on the processor, but it's easy work, especially if you are just updating some value every 100 ms.

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!