i2c/twi slave on usi using Don Blake's code

#1

Hi Guys,

At the moment I am working on a little project that targets using ATtinys for LED lighting control (on/off/PWM dimming/etc.) over TWI/I2C. The final version will use '85s (8 pins), but I am developing on the '861 for ease of debugging. I am pretty sure the PWM will be no problem, so I didn't start on that yet.

I am using an AVR Dragon in ISP mode and avr-gcc for development; the ATtiny is programmed in-circuit on the development board (i.e. it's not on the Dragon). The development board has four LEDs and two buttons for debugging.

To start with, I am trying to get TWI/I2C working as a slave.

Of course I started with AVR application note AVR312, made some changes to get it working with avr-gcc, and of course it didn't work ;-) A start condition was triggered, but the data direction was the wrong way around and the address was halved (i.e. I was sending address 0x04 from the master and the slave responded to address 0x02).

So yes, I fell into the trap of the slow clock (1 MHz). I changed it to 8 MHz; then the direction was good and the address was good. Reception was okay, but I never managed to send a reply back to the master.

Then I switched to Don Blake's USI/TWI slave code, which at least allowed me to receive correctly on the slave, and replying was going well too. Thanks Don :-)

BUT it's still not 100% okay. The I2C bus kept locking up when more bytes were read than were available in the buffer. After a quick inspection of the code I found a comment about the behaviour on an out-of-data condition having been changed from sending a NACK to skipping any action, so that the master waits for data from the slave to appear (Arduino seems to expect that). This is NOT the behaviour I want. I want a very robust system between master and slave, which means the master must be able to "flush" data that may be left in the slave's buffer from a previous, interrupted command. It would do so by reading data until an error occurs. With the current Don Blake code, this locks up the bus.

So I re-activated the (commented-out) code that goes to the start condition state on an out-of-data condition. This did not resolve the issue, though: the slave DOES go into the start condition state, but appears to leave SDA low (actively driven), so again the bus gets locked up.

Then I added a line of code to the macro that sets the start condition, to explicitly set SDA to input, and now it works. I am not completely sure this is right, though.

- Should the "set SDA to input" line be added to the start condition macro, or is it sufficient to add it at the point where the out-of-data condition occurs?
- The read now does NOT yield an error on out-of-data; instead it returns 0xff. I am not completely sure whether that comes from the code or whether the master returns 0xff on error.
- Shouldn't the slave return a NACK (on the out-of-data condition) instead of simply going to the start condition state? I can't find any code that does so, though; I think it's not so easy to implement anyway (exact timings...)

So, to summarize, I'd like to be able to read "many" bytes from the slave; the slave should return all bytes it still has in its buffer and then post a NACK to signal that no more bytes are available. How should I approach that?

#2

Here's one reasonably simple way of handling things: when the slave finishes sending a packet's worth of data to its bus master, just have it go to the bus-idle state. Once a bus master has gone into "pull data from a slave" mode, there's really nothing the slave can do to terminate the exchange anyway, because it's the bus MASTER that generates the ACKs and NAKs during read-from-slave cycles. As long as the bus master keeps generating clocks, it will happily read an infinitely long sequence of 0xFF bytes even from a completely disconnected bus. So there needs to be some way to clue the bus master in as to how much data it should expect.

This knowledge might be implicit in your master-slave protocol (fixed-length reports, for example, where the reports are framed by the sequence). For variable length data sequences, you might have the slave prefix each report with some indication of message sequence number, and remaining bytes-in-packet information.

#3

Thank you. I was trying to steer clear of formatted messages (message lengths, CRCs, etc.), but maybe that will prove necessary after all.

It seems that part of my question comes from lack of knowledge of the i2c protocol, I will work on that ;-)

Your suggestion exactly explains the endless runs of 0xff I am getting when reading more data than is available from ANY (hardware) I2C slave, not just the ATtiny.

#4

Well, you don't need CRCs, and one option for avoiding explicit message lengths is to use a data format (ASCII characters, for example) that lets you reserve special delimiter characters (0xFF, for example) that will never be sent as part of any message's valid data. The slave can then go bus-idle the instant it sees the bus master ACK the last byte of a message, and the bus master can just issue the slave's read address and slurp in bytes until it gets a 0xFF.

#5

Talking about CRCs... how can we be sure there are never any read/write bit errors due to noise, etc.? Indeed, I've never seen any message verification on I2C.

I was suggesting CRCs as one possible way to know for sure you're not interpreting any bogus 0xff values (combined with a message structure including a message length).

Another approach would be to avoid the value 0xff altogether, which is indeed my plan at the moment. One could use a scheme with byte or bit stuffing, but that's way too complex. Another scheme would have only the low nibbles carry information; the high nibble can then hold three bits of metadata, like "more data follows", "delimiter" and "last nibble", as long as all four high bits are never 1 at the same time (=> 0xf0 | possible 0x0f data == 0xff).

I think I like this one :-) Data transfer speed isn't important here anyway.