TWI/I2C on ATTiny84 with USI: Detecting end of data?

#1

I have an I2C slave using the USI of an ATTiny84. I have code loosely based on AVR312.

However, this only lets me poll "is there any data" from the application code -- it won't let me detect that the master has authoritatively sent a full packet of data.

On the Atmegas, with a built-in TWI engine, there exists a state that lets me do that. In the interrupt handler, I can schedule a task in my micro-task-system to actually pick up the complete packet and do something with it.

On the Tiny, I can send ack or nak based on whether I have buffer space left -- but there is no interrupt for the bus going into the STOP state. There is a bit I can check in USISR, but I can't get an interrupt when this changes.

The best I can think of is to schedule a timeout within the "I received a byte" function, and if no more bytes arrive within that timeout, decide that the packet is done and hand the data off to the application code. This has three drawbacks:
1) It adds latency (not so bad)
2) It is not necessarily correct (worse)
3) It loses data if a new packet is started (stop+start condition) in the polling interval

Are there any better ideas? FWIW, I'm running the ATTiny at 8 MHz, and it's being written to by an ATmega328p running at 16 MHz, with the bus at 400 kHz, although that really shouldn't matter for this question.

Perhaps staying in the interrupt routine after shifting out the ACK bit, but clearing the interrupt flag and clock counter, and polling SCL and SDA until one of them changes, then detecting whether that's a STOP or not?
This would potentially still cause a latency increase for my main program because of the polling, but it might be better than nothing.
Also, the master could drive the slave into watchdog timeout (currently 2 seconds) this way by simply keeping the lines idle after the ACK bit is shifted out...

#2

I don't see your problem. The USI will interrupt on STOP whereas TWI does not.

With any I2C the Master determines how many bytes to read from a Slave.

Of course you can always make your Slave send a special byte when it wants to indicate 'no more data'.
The Master still has to send a ReadNAK() command to tell the Slave to stop sending any more data.

The I2C spec determines how the bus behaves.
You decide what data you send and receive.

I suggest that you either use a fixed number of bytes to send/receive, or use an 'EOL' marker, e.g. the linefeed or NUL in "an ascii line of text\n".

Or the Master can send a 'how many bytes are available' command. The Slave returns the number.
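The 'EOL marker' idea can be sketched as a small framing helper (a hypothetical function, not code from this thread): the slave appends bytes to a buffer as they arrive, and the application scans for the terminator to find a complete packet.

```c
#include <stddef.h>

/* Sketch of the 'EOL marker' framing idea (hypothetical helper): scan
 * the received bytes for a terminator so the application can tell where
 * a packet ends without needing a STOP interrupt. Returns the packet
 * length including the terminator, or 0 if no complete packet has
 * arrived yet. */
size_t eol_packet_len(const unsigned char *buf, size_t n, unsigned char eol)
{
    for (size_t i = 0; i < n; i++)
        if (buf[i] == eol)
            return i + 1;   /* bytes 0..i form one complete packet */
    return 0;               /* still waiting for the terminator */
}
```

With eol = '\n', the application would call this after each received byte and hand off the packet as soon as the return value is nonzero.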

David.

#3

Quote:
With any I2C the Master determines how many bytes to read from a Slave.

I agree! Therefore, the slave can't signal the application that "a full transmission was received" until it receives the STOP condition.

Quote:
The USI will interrupt on STOP whereas TWI does not.

The data sheet I'm reading says different.

Section 14.2, USI Overview, says that the counter interrupts when the transfer completes -- but this simply means that one byte in is ready, or the ACK bit out was sent. This cannot be used to detect a stop condition.

14.3.4, Two-wire Mode, and 14.3.5, Start Condition Detector, say that the start condition detector can generate an interrupt when detecting a start condition, and do not mention a stop condition.

14.5.3, USISR, talks about the USIPF condition flag being set to one when a stop condition is seen. It then says "Note that this is not an interrupt flag."

Finally, 14.5.4, USICR, talks about USISIE and USIOIE as interrupt masking flags, neither of which controls any stop condition interrupt.

Quote:
I suggest that you either use a fixed number of bytes to send/receive.

If I could control all the parts of the equation, that would work, but would be sub-optimal, as it requires an additional byte of payload. As it is, I would like it to work with an existing protocol.

Quote:
tell the Slave to stop sending any more data

I am using a master sender / slave receiver, so that doesn't make sense in this case.

Quote:
TWI does not.

In the 328p data sheet, it claims to generate an interrupt each time the state machine changes state. Section 21.5.5, "Control Unit," talks about how it asserts TWINT, and the status code is available in TWSR. One of the sources of TWINT is receiving a STOP or REPEATED START condition.

Additionally, I have code that seems to work fine on the 328p, emitting the end-of-packet signal to the application when a STOP or REPEATED START condition is seen.

I now want to emulate that behavior on the ATTiny84a.

So far, what I can think of is that, after the slave sends the ACK (or NAK) bit for receiving a byte, I stay within the interrupt handler, polling the SDA and SCL lines. If I see SDA transition high with SCL high, I know it's a STOP condition, and I know I'm at the end. If I see anything else, I know it's not STOP, and I can return from the interrupt handler and let the circuitry do the work of receiving the next byte.

This approach has at least two problems:
1) If the master decides to stop a transfer in the middle (gets reset, or whatnot,) then I won't detect that on the client.
2) If the master decides to just stall out, not transitioning SDA or SCL after clocking an ACK bit from the client, the client will stay locked in the interrupt handler until the watchdog fires and resets the chip.
This is not as robust as I would like the implementation to be.
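As an illustration only, the pin-watching decision described above reduces to a small classifier (hypothetical names, host-runnable; on the ATtiny84 the levels would be sampled from the port input register for the SDA/SCL pins):

```c
/* Illustration of the pin-watching idea (hypothetical): given SCL and
 * the previous/current sampled SDA level, classify what the master just
 * did on the bus. */
enum bus_event { BUS_NONE, BUS_STOP, BUS_START };

enum bus_event classify(int scl, int prev_sda, int sda)
{
    if (!scl)
        return BUS_NONE;     /* SDA may change freely while SCL is low */
    if (!prev_sda && sda)
        return BUS_STOP;     /* SDA rising while SCL high: STOP */
    if (prev_sda && !sda)
        return BUS_START;    /* SDA falling while SCL high: (repeated) START */
    return BUS_NONE;         /* no edge seen yet */
}
```

This only covers the happy path; as noted above, a stalled master would leave the polling loop spinning, so any real use needs a timeout.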

I can't believe I'm the only one who needs to reliably detect end-of-transmission on TWI on an ATTiny? Surely there is some other code out there that does this, and I'm just missing it? But AVR312 doesn't do it, and neither do any of the examples based on it that come up on Google.

Another question related to this implementation:

The diagram for the Start Sequence Detector seems to indicate that a Start Sequence interrupt will be generated if a zero bit is clocked on the bus. This is because SDA will be low, and SCL will be high. Will this actually happen? Do I need to turn off the start condition interrupt while receiving a byte, or clocking out an ACK bit?
If so, isn't there a race condition where, after I've clocked out the ACK bit, the master can generate a REPEATED START condition, before I get the interrupt for the ACK bit being clocked out and can set up the start condition detector again?

#4

I suggest that you read the I2C spec and the TWI or USI sections of the data sheet.

I presume that you are aware that AVR312 is broken. Google "AVR312 Don Blake"

If you explain a typical byte transfer, someone might offer solutions. You might explain your typical fears. e.g. Master aborting, Slave losing power, incorrect data, ...

There are ways of coping with these situations.

Quite often your Slave may have 'similarities' to an existing hardware chip e.g. PCF8583, 24Cxx, ...
This makes life far easier to explain. It also means that it is easier to debug both Master and Slave. e.g. you test the Master with the 'equivalent' hardware chip. Then make your Slave behave exactly the same.

David.

#5

Thanks for your continued replies. Hopefully we'll get somewhere!

david.prentice wrote:
I suggest that you read the I2C spec and the TWI or USI sections of the data sheet.

I have, in quite a lot of detail! What parts of my question lead you to believe that I do not understand how the TWI/I2C bus operates?

Quote:
I presume that you are aware that AVR312 is broken. Google "AVR312 Don Blake"

No, I was not. I've seen the Don Blake implementation, but it still just implements the "whatever bytes are there, are there" method of transmission, and does not give you any way of determining where one transmission ends and another one starts. It also does not give you any indication that a particular transmission has ended.

Quote:
If you explain a typical byte transfer, someone might offer solutions. You might explain your typical fears. e.g. Master aborting, Slave losing power, incorrect data, ...

I thought I was pretty clear in my previous explanation. What part of the question was not clear?

Quote:
There are ways with coping with these situations.

Such as... ?

Quote:
Quite often your Slave may have 'similarities' to an existing hardware chip e.g. PCF8583, 24Cxx, ...

I am writing a slave receiver using an ATTiny84a. The protocol behavior for the application running on the ATTiny84a relies on being told when a transfer has been completed in order to take effect. I have previously used a 328p implementation, where it works fine, because there is an interrupt available when the transmission finishes.

Now, with regards to your answers, I don't see a clear line between most of what I wrote and what your suggestions are. The single useful pointer I've gotten is that "AVR312 is broken," without a particularly good explanation of exactly what that means.
I've read the Don Blake code. It doesn't do what I need -- see above.
Even the simplest of I2C auto-increment register transfer protocols, where the first byte is a starting register number, and the following bytes are the actual data, cannot accurately be implemented on top of the Don Blake code. My use case is in turn more sophisticated than that, because register-transfer can be implemented by knowing that a byte is "first" in a packet -- I need a trigger at the "end" of a packet, with no guarantee that there will ever come a next packet.
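For reference, the auto-increment register protocol mentioned above might look like the following sketch (all names hypothetical). Note that it consumes a whole, delimited packet -- which is exactly why the stack must be able to say where packets begin and end.

```c
/* Sketch of an auto-increment register write (hypothetical helper):
 * byte 0 of a completed write packet selects the starting register;
 * each following byte is stored and the register pointer increments. */
#define NREGS 32
static unsigned char regs[NREGS];

void handle_write_packet(const unsigned char *pkt, unsigned len)
{
    if (len < 2)
        return;                          /* need register number + data */
    unsigned r = pkt[0];
    for (unsigned i = 1; i < len && r < NREGS; i++, r++)
        regs[r] = pkt[i];
}

/* Self-check: write two bytes starting at register 3. */
int regs_demo(void)
{
    unsigned char pkt[] = { 3, 0xAA, 0xBB };
    handle_write_packet(pkt, sizeof pkt);
    return regs[3] == 0xAA && regs[4] == 0xBB;
}
```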

Regarding all the other pointers, I wonder if I have failed to accurately communicate the question or my requirements, because all of the other suggestions seem to be beside the point, or even flat out wrong.

If you (or anyone else) are able and willing to help, I would greatly appreciate specific guidance on the answer to this question, which is a re-formulation of the above thread:

"Using an ATTiny84a as a receiving TWI slave implemented using the USI hardware and interrupt handlers, where the application must keep running until a full packet has arrived (MASTER has output STOP or REPEATED START condition on the bus,) how do I efficiently and robustly detect this STOP condition so I can signal the application that the transmission is complete?"

A good answer to that question is all I ask. If you need more context information, hopefully it's already available above.

#6

I never used the USI, but if it can interrupt when a byte has been sent/received, you could act on that, since the next thing you have to do is ACK or NACK the data. If you set your system up as blocks of at most xx bytes, then the master can send address + start byte location + r/w, and you know the maximum number of bytes that can be received in one go is xx. After that you just NACK further bytes that come in unless there has been a (repeated) start.
edit: (to be a bit clearer)
What you can do, even: say you have start byte locations
x (with max 5 bytes)
y (with max 6 bytes)
and z (with max 10 bytes)
The master sends an address + x and wants to send data to you.
Now you know the maximum amount of data you can receive is 5 bytes. If you then receive a 6th byte, it can be NACKed, as your system cannot handle it and it should never have happened.
If you check out the 24Cxx datasheet, it works more or less the same, with a difference (off the top of my head, as I have not used one in a very long time): when you start writing at byte 15 of a page you can only send a single byte and then have to wait for the page write to complete, whereas when you start at address 0 you can write 16 bytes in a row and only then is the page rewritten.
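A minimal sketch of this scheme (names and base addresses hypothetical): each start location carries a fixed maximum length, so the slave can decide ACK/NAK per byte without ever seeing a STOP.

```c
/* Sketch of the windowed write scheme above (hypothetical values):
 * each start byte location has a fixed maximum block length, so the
 * slave can NAK any byte that would run past its window. */
struct reg_window { unsigned char base; unsigned char max_len; };

static const struct reg_window windows[] = {
    { 0x00, 5 },    /* 'x': up to 5 bytes  */
    { 0x10, 6 },    /* 'y': up to 6 bytes  */
    { 0x20, 10 },   /* 'z': up to 10 bytes */
};

/* Return 1 (ACK) if data byte number 'count' (0-based) of a write that
 * started at 'base' still fits in that window, else 0 (NAK). */
int should_ack(unsigned char base, unsigned char count)
{
    for (unsigned i = 0; i < sizeof windows / sizeof windows[0]; i++)
        if (windows[i].base == base)
            return count < windows[i].max_len;
    return 0;   /* unknown start location: NAK everything */
}
```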

hope this helps

#7

jwatte, you have an interesting problem.

If your protocol depends on sensing the STOP condition to start acting on the variable-sized command/data transmitted from master to slave, the USI does not provide that. You always just know what byte is the first after start.

Therefore I think with the USI you could not even emulate a 24xx I2C memory, as it starts writing the data to memory only after receiving a stop, and so the write can be cancelled by sending a (repeated) start during byte transmission.

In fact, many times the I2C bus is reset by a start-stop sequence only, so basically even though a stop condition is the last thing sent on the bus, the USI will happily treat any further bytes as valid data, as it can only detect start.

May I ask for further information about your protocol?

But yes sometimes chips have all kinds of weird requirements for I2C operation. Fortunately I have usually been on the bus master side, not slave.

#8

meslomp wrote:
If you set your system up as blocks of xx bytes max

Changing the protocol was discussed above.

hope this helps

Thanks, but unfortunately, no, as it doesn't answer my question.

#9

Jepael wrote:
Therefore I think with the USI you could not even emulate a 24xx I2C memory, as it starts writing the data to memory only after receiving a stop

Yes, that's exactly my problem. It sounds as if you're saying "AVR chips with USI-driven TWI can't actually perform as full I2C slaves," which is the conclusion I'm beginning to reach as well. That's unfortunate. I may have to go back to the 328p based solution.

Quote:

May I ask for further information about your protocol?

You may, but there's not a lot more to be said. It's used as a LAN protocol on a small robotics platform, for status and control data.

Thanks for the reply -- it's the best reply I've gotten in this thread, and I appreciate it!

#10

Quote:

I may have to go back to the 328p based solution.

Surely there's something in between an 8K tiny and a 32K mega? How about a mega88PA?

#11

Of course the USI can function as an I2C slave.
Likewise the TWI can function as an I2C slave.

If you say what you want the Slave to do, I will write you a slave that will run on either USI or TWI.

However the USI version would use the "AVR312 Don Blake" solution which apparently is not allowed.

You also need to be realistic. You won't get a 400kHz bus as Slave. You don't really get 400kHz as Master either. However you can make the bus run at the speed of the slower side.

David.

#12

Quote:
If you say what you want the Slave to do, I will write you a slave that will run on either USI or TWI.

I would love to see a USI slave that correctly deals with STOP conditions and REPEATED START conditions! That would be super helpful, and I think a lot of other developers would love it, too!

What the slave needs to do:

1) I need to call init_slave(address) to set up power, the slave address, interrupts, etc.
2) The slave should respond only when it's properly addressed on the bus for a write transaction.
3) As the slave receives the packet, it puts the data into a buffer and ACKs the data, until that buffer is full, at which point the slave will NAK the data.
4) The slave must be interrupt driven, and allow other tasks to run while being addressed on the bus and receiving data.
5) When the slave receives a STOP condition or a REPEATED start condition, it will swap to using a second received buffer (double-buffering) and call a function "on_twi_receive(buffer, size)" with the received data. Then the slave code goes back to waiting to be addressed on the bus again, while application code deals with the received packet.

I have 1-4 working just fine just by reading the data sheet and writing code. It's 5) that's the problem.

It would be great if the slave also recognized bus error conditions, STOPs in the middle of a byte, repeated STARTs in the middle of a byte, master disconnect, etc, but those are not what worries me right now.
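Points 3 and 5 above can be sketched as a double-buffered receive path (all names hypothetical -- this is the desired behaviour, not working USI code; getting the STOP event that should call twi_rx_stop() is exactly what this thread is about):

```c
/* Sketch of the double-buffered receive in points 3 and 5 (names
 * hypothetical). The ISR fills one buffer; on STOP / repeated START
 * the buffers swap and the completed one awaits on_twi_receive(). */
#define TWI_BUF_SIZE 16

struct twi_rx {
    unsigned char buf[2][TWI_BUF_SIZE];
    unsigned char len;       /* bytes in the buffer being filled  */
    unsigned char fill;      /* index (0/1) of the fill buffer    */
    unsigned char ready;     /* a completed packet awaits pickup  */
    unsigned char ready_len; /* its length                        */
};

/* Per received data byte; returns 1 to ACK, 0 to NAK (buffer full). */
int twi_rx_byte(struct twi_rx *rx, unsigned char b)
{
    if (rx->len >= TWI_BUF_SIZE)
        return 0;
    rx->buf[rx->fill][rx->len++] = b;
    return 1;
}

/* On STOP or repeated START: publish the packet and swap buffers. */
void twi_rx_stop(struct twi_rx *rx)
{
    if (rx->len == 0)
        return;              /* ignore empty transactions */
    rx->ready = 1;
    rx->ready_len = rx->len;
    rx->fill ^= 1;
    rx->len = 0;
}

/* Self-check: receive "ab", STOP, then a byte of the next packet. */
int twi_rx_demo(void)
{
    struct twi_rx rx = {0};
    twi_rx_byte(&rx, 'a');
    twi_rx_byte(&rx, 'b');
    twi_rx_stop(&rx);
    if (!(rx.ready && rx.ready_len == 2 && rx.fill == 1))
        return 0;
    twi_rx_byte(&rx, 'c');
    return rx.buf[1][0] == 'c';   /* new data lands in the other buffer */
}
```

The main loop would check `ready`, call on_twi_receive() with the completed buffer, then clear `ready` once the application is done with it.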

Quote:
However the USI version would use the "AVR312 Don Blake" solution which apparently is not allowed.

It's not clear to me that you understand what the problem is. Don Blake would be "allowed" if it wasn't broken.

I think I found three problems with the Don Blake code:

1) It does not tell you which byte is the first of a packet, as opposed to follow-up bytes. This means that most I2C protocols can't be implemented.

2) It does not detect and signal a STOP condition.

3) After being addressed as a slave, if the master does a REPEATED START condition, addressing another slave, the Don Blake slave will stay addressed and receive data intended for the other slave (including the address of the other slave.)

I could be wrong -- I haven't hooked up the logic analyzer yet -- but that's what it looks like when reading the code in detail.

#13

Quote:
How about a mega88PA?

That would probably work, too! The reason I wanted the tiny was mostly for space, rather than cost, though.

#14

My offer still stands. Provide a specification for the Slave, and I will write you an implementation.

You mention a packet. You do not describe the packet. I can think of many 'packet' formats. They generally have a length field and some form of CRC or checksum. Without this information, how can I write a Slave?

Jepael mentioned a 24Cxxx. The 24Cxxx data sheet describes the chip behaviour. Yes, you can emulate with an AVR. e.g. write to page buffer, start a page-write (with STOP), poll the slave address, ACK when ready, read eeprom contents, ...

So if you can quote a chip number and say "My slave should behave like this chip and also ...". Then it can help your 'specification'.

Remember. The I2C bus describes start, stop, read, write operations. What your device does is up to you.

A slave can typically NAK a write that it disagrees with. A good example is the 24Cxxx when it is busy. However it is normally more convenient for a Master to issue a 'read status' command to the Slave.

If you are writing some packet structure to a slave, the Master generally constructs a valid packet. Of course you can let the Slave do the verification if you want. Personally, I prefer a convenient API for the Master. After all, a Slave is written once. A Master may make many requests / commands.

You can produce most behaviours that are logically possible. You can also recover from many typical error situations.

David.

#15

david.prentice wrote:
My offer still stands. Provide a specification for the Slave, and I will write you an implementation.

I already provided exactly the specification.

Quote:
You mention a packet. You do not describe the packet.

Yes I do. It's a sized sequence of bytes. That's it! No CRC. No length prefix. Just a packet of bytes that are delivered to the application code running on top of the I2C stack/MCU.

I can constrain it slightly more if it helps: I do not use zero-length packets, and I'm OK with the implementation only allowing a maximum packet size that's no shorter than 16 bytes.

If you need something concrete for the application to do, let's say that it repeats the packet out to a RF link through a device that supports variable-length payloads.

To remind you what specification I already provided:

Quote:

What the slave needs to do:

1) I need to call init_slave(address) to set up power, the slave address, interrupts, etc.
2) The slave should respond only when it's properly addressed on the bus for a write transaction.
3) As the slave receives the packet, it puts the data into a buffer and ACKs the data, until that buffer is full, at which point the slave will NAK the data.
4) The slave must be interrupt driven, and allow other tasks to run while being addressed on the bus and receiving data.
5) When the slave receives a STOP condition or a REPEATED start condition, it will swap to using a second received buffer (double-buffering) and call a function "on_twi_receive(buffer, size)" with the received data. Then the slave code goes back to waiting to be addressed on the bus again, while application code deals with the received packet.

Note that 1) is something that I, the application code running on the slave, do, on top of you, the I2C bus implementation on the slave.

I really do look forward to the implementation, even though most of your comments on this topic have seemed a bit of a non sequitur to me.

#16

I give up.

You say a fixed length packet. Yet give no length.

If the slave has no method of knowing how many bytes, how can it know when to NAK a byte?

Look at a 24Cxx data sheet. It shows example sequences.
It specifies how the chip will behave.

Look at your C standard library documentation. It specifies behaviour, arguments, results. And it often gives an example.

e.g. if the buffer length is 10

   i2c_write_str(slave, "David Prentice");
   // Acks "David Pren".  Naks "tice". stores "David Pren" 
   i2c_stop();     // swap  buffers
   i2c_write_str(slave, "Jwatte");
   // Acks "Jwatte".  stores "Jwatte\0\0\0\0".    
   i2c_restart();     // swap  buffers

I can't think of any way that the Slave can know how many bytes to ACK if you are not prepared to specify it. Or the Master tells the Slave.

I also need to know the Slave address. Yes. You do need to specify 7-bit address or 8-bit address (or 10-bit address). I understand the targets are mega328P and tiny84.

Is English your first language?

David.

#17

Quote:
I can't think of any way that the Slave can know how many bytes to ACK if you are not prepared to specify it. Or the Master tells the Slave.

Quote:
I'm OK with the implementation only allowing a maximum packet size that's no shorter than 16 bytes.

#18

david.prentice wrote:
I give up.

You haven't yet started.

david.prentice wrote:
You say a fixed length packet. Yet give no length.

First, no, I did *NOT* say fixed length packet. That's the whole point. Second, yes, I did specify a maximum packet size:

jwatte wrote:
I'm OK with the implementation only allowing a maximum packet size that's no shorter than 16 bytes.

david.prentice wrote:
I also need to know the Slave address.

That's why I conveniently specified how you'd know the address:

jwatte wrote:
1) I need to call init_slave(address) to set up power, the slave address, interrupts, etc.

For the sake of argument, let's say a seven-bit address. It's really immaterial to the protocol problem that is the crux of the matter, and decoding 10-bit addresses on USI would just add needless complexity. You're going to have a hard enough time detecting the end of a packet anyway.

Quote:
Is English your first language?

No, it's Swedish. Do you prefer that I use that?

It seems to me that the problem isn't language, though; the communication problem seems to be that you willfully do not read my posts or understand what the problem is. That, and you make claims that are entirely wrong, such as that the USI will give an interrupt on a stop condition and the TWI won't.

Given that Jepael got it in the first try, I think my description should be clear enough for any skilled engineer to understand, but I have been wrong before. If there's any part of the specification that is still unclear, I'll be happy to help point out where it's clarified.

Last Edited: Sat. Jun 16, 2012 - 06:41 PM
#19

I also don't understand why this keeps going around and around.

jwatte explained his problem loud and clear: he needs to be able to detect the STOP condition with the USI, regardless of any other aspect of I2C or the protocol run over it.

It is not about NAK or anything that depends on 24xx behaviour. Hell, even 24xx EEPROMs don't know when to NAK; they accept as many bytes as you send, it just makes no sense to send more than the page buffer can hold.

So in that sense this needs to be like a 24xx: receive any amount of data, and start doing something with it when the STOP is received.

#20

Hi,

I've the same problem. I was thinking about polling (possibly in a timer interrupt) for the stop flag. What would be the problem with that?

#21

I can think of a few problems:

1) Wastes already limited resources, if I'm also driving any other functionality (which is likely the whole reason I'm using the ATTiny in the first place :-)
2) Latency in detecting the condition.
3) I may miss a stop-and-then-start transition, which is different from a repeated-start transition.

That being said, I'm also attempting a work-around by adding polling into my main scheduler loop, which adds less than a microsecond per scheduled task, but suffers the latency of whatever my longest-running task is.

#22

I haven't tested my code yet, but I was thinking of testing the stop flag in both interrupt routines, although I think that condition should only happen when a transfer is aborted. Atmel had a test in the start interrupt, but Don Blake removed it and replaced it with his own stop detector (I think his implementation is questionable). It would be nice if there was more disclosure on the use of the stop flag (and even nicer if it could generate an interrupt).

In my main loop (quite fast) I also poll for the stop condition (this is where I think it should occur). The latency I can tolerate. The problem Don Blake apparently tries to solve is that the slave may hold the bus, which is problematic with multiple slaves. Until I've seen it happen, I have no clue how this can happen or how Don prevents it. The only real difference between Don's code and the Atmel code is Don's detection of the stop condition (SDA high either before or after SCL low) and, when that condition occurs, disabling the overflow interrupt and changing the mode to not hold SCL at overflow.

There is a further change in the handling of an empty buffer, but that does not seem relevant to the problem at hand.

#23

I believe that the stop flag is set when the detector sees the data line go from low to high while the clock is high, and the clock doesn't go back low. This is a "stop condition" on the bus, the reverse of the "start condition", which is when the data line goes low while the clock is high, and the clock then goes low.

I'm pretty sure that the stop flag will never generate an interrupt. This is the main deficiency in the I2C implementation of USI that I'm trying to work around.

Because you don't get an interrupt when the stop flag goes high, it's quite possible that a stop flag is already set in the status register by the time the next start condition arrives, and I think that's what Don Blake is trying to avoid. I personally think that writing a one to that flag (which clears it) when arming the start condition detector, before you want to test it, would be sufficient.

I also think Don Blake's code has several defects. Then again, so does the USI interface we're all trying to use, so it's pretty much "pick your poison" to get it to limp along.

Note that a slave that freezes and holds clock low "forever" can't be detected by the USI hardware alone, because you'll never get to the point where you get the next interrupt.

If I didn't need to be doing other things at the same time, I'd just run USI synchronously on the ATTiny, and detect the special cases using bit banging.

#24

I'm testing now and I'm also pretty sure that 'stop' does not generate an interrupt. I poll for the stop condition in the main loop, in my case this is fast and reliable. Just in case a new 'start' interrupt occurs before I have detected the 'stop', I also test for 'stop' at the entry of the 'start' interrupt. If a 'stop' has occurred I call the stop handler and clear the flag. The while loop does not need to test for 'stop' but could test for SDA to prevent blocking.
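A sketch of this main-loop polling pattern, with USISR mocked as a plain variable so the logic runs anywhere (on the real ATtiny84 you would test `USISR & (1 << USIPF)` and clear USIPF by writing a one to that bit, per section 14.5.3 of the datasheet):

```c
#include <stdint.h>

#define USIPF_BIT (1u << 5)     /* USIPF is bit 5 of USISR on the tiny84 */

static volatile uint8_t usisr;  /* mock of the USISR register */
static int stop_events;         /* completed packets handed off */

/* Call from the main loop. On real hardware the clear is a
 * write-one-to-USIPF; the mock just clears the bit directly. */
void poll_stop_flag(void)
{
    if (usisr & USIPF_BIT) {
        usisr &= (uint8_t)~USIPF_BIT;  /* real HW: write a 1 to USIPF */
        stop_events++;                 /* hand the finished packet off here */
    }
}

/* Self-check: one latched STOP is reported exactly once. */
int poll_demo(void)
{
    usisr = USIPF_BIT;   /* pretend the hardware latched a STOP */
    poll_stop_flag();
    poll_stop_flag();    /* flag already cleared: no double report */
    return stop_events;
}
```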

#25

Quote:
If a 'stop' has occurred I call the stop handler and clear the flag.

Thanks for your suggestions! This assumes that you're ready to accept another start immediately after the "stop" -- i.e., the work done after detecting "stop" and before detecting "start" is not "big" (for some value of "big").

Because one thing this device does is forward variable-length packets to a packet-based radio interface, and I don't know whether to actually do this or not until I can look at the whole packet, at a minimum I would need to double-buffer incoming I2C data to make this work. That might be possible, but it is yet another consumer of SRAM...

#26

My primary stop detection is still in the main loop (test stop flag, call handler, clear flag). But as you pointed out, a new start may occur before the stop is detected in the main loop, that's why I also do the same test in the start handler. There is no need to worry about duration because the USI will stretch the clock.

#27

Quote:
There is no need to worry about duration because the USI will stretch the clock.

Have you found a way to stretch the clock past the return of the start interrupt? Every approach I've tried has ended up not working.

The work I need to do before I can relinquish the incoming buffer space is significant, and can't be done inside an interrupt handler (it in turn is asynchronous and requires interrupts.)

Hence, the double-buffering :-)

#28

Hi jwatte,

I am now running into exactly the same problem as you describe (and which was clearly not understood by david.prentice). I am trying to make the code from Don Blake more robust. One thing is that a check for a stop condition (using pin testing) is done at only one stage; at every other stage there is no such check. I believe this can lead to a condition where the slave keeps holding the bus indefinitely, thereby making any other I2C communication impossible. IMHO a stop condition should ALWAYS be acted upon and lead to SCL being released and the state machine being reset.

But indeed, it seems there is no good provision for that, so I guess I'll have to check the stop condition flag at various stages. I don't like the solution of putting the check into the main loop, as I want the whole USI/TWI thing out of the main program.

Nice to read, thx.

#29

Hi,

I was going to adapt the Don Blake code to do something like that, but I didn't understand the code, partly because its coding style is 180 degrees from what I am used to, and partly because I was not familiar enough with the matter.

So I ended up rewriting it from scratch, with the code from Don next to it, and of course the datasheet and a test setup. Now I have something that works (again) and I actually do understand what happens ;-)

This version has a stop-condition detector like the one mentioned above. It's still very basic -- it polls the stop condition flag all the time -- but it works. I don't think continuous polling is necessary: the polling only needs to start when a start condition has occurred and can stop after a stop condition. When no polling is done, the MCU can "sleep" in an idle or standby sleep mode; I will experiment with that.

For ease of cooperative coding, I've put the whole thing on github, you can find it here:

https://github.com/eriksl/usitwi...

If you find bugs please send a patch.

#30

I just finished testing with entering sleep mode outside the start-data-stop frame, but unfortunately it doesn't work. It breaks the fix for the case where a write to a non-existent slave locks up the bus. I am open to suggestions ;-)

The difference it makes is 3 mA = 15 mW (= also about 15% of the total consumption), so I don't think it's worth spending a whole lot of time on.

#31

Yeah, I've found a fix. Now it works fine: the MCU is in SLEEP_MODE_IDLE when outside a start-data-stop transaction and busy-polls while inside one. Without lockups this time :-)