Odd behavior when decoding IR into uint32_t

#1

I'm using an ATmega328P with gcc, decoding IR from an NEC remote.  The NEC IR protocol has 32 data bits, each determined by the length of a single mark (on) / space (off) pair.  My algorithm is pretty simple: as pairs come in, determine whether a pair indicates a '1' based on its timings.  If so, set the bit at the current position using a counter.  I am using the USART to debug.  I have verified that I'm receiving the right number of pairs, so that is not the issue.  Here is my code - I have left out irrelevant details:

 

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdlib.h>
#include <stdbool.h>
#include "typedefs.h"

#define COM_11759_POWER_BTN 284153895
#define BIT_SET(a,b) ((a) |= (1<<(b)))

Pair data_pair;                         /* filled in elsewhere (details omitted) */
volatile uint8_t data_bit_counter = 0;  /* index of the bit currently being decoded */
volatile uint32_t decoded_data = 0;     /* the 32 NEC data bits are assembled here */

int main(void) {

    while(1) {
        /* A complete mark/space pair has arrived */
        if(data_pair.has_mark == true && data_pair.has_space == true) {
            data_pair.has_mark = false;
            data_pair.has_space = false;

            int8_t bit = nec_data_bit_from_pair(data_pair);
            if(bit != INVALID_PAIR_TIMINGS) {
                if(bit == 1) {
                    BIT_SET(decoded_data, data_bit_counter);

                    /* Debug output: bit index, then the running value */
                    char bit_count_str[8];
                    utoa(data_bit_counter, bit_count_str, 10);
                    usart_transmit_string(bit_count_str);
                    usart_transmit_string(", ");

                    char decoded_data_str[32];
                    ultoa(decoded_data, decoded_data_str, 10);
                    usart_transmit_string(decoded_data_str);
                    usart_transmit('\n');
                }
                data_bit_counter++;
            }
            if(decoded_data == COM_11759_POWER_BTN) usart_transmit_string("yes\n");
        }
    }
}

 

The problem is that the value jumps unexpectedly once I move past 16 bits.  The USART output from the above code is below.  On the left is the index of the bit being set, and on the right is the value of decoded_data after the bit is set.

 

3, 8
8, 264
9, 776
10, 1800
12, 5896
13, 14088
14, 30472
15, 4294965000
16, 4294965000
17, 4294965000
19, 4294965000
20, 4294965000
26, 4294965000
29, 4294965000
30, 4294965000
31, 4294965000

 

decoded_data never equals the value of COM_11759_POWER_BTN as I expect, so something isn't working.  Any ideas?

This topic has a solution.

Last Edited: Wed. Sep 6, 2017 - 09:36 PM
#2
#define BIT_SET(a,b) ((a) |= (1<<(b)))

Change it into:

#define BIT_SET(a,b) ((a) |= (1UL<<(b)))

Stefan Ernst

This reply has been marked as the solution. 
#3

The issue is the BIT_SET macro definition. The constant 1 used in the macro is a 16-bit signed int, so the result of 1<<(b) is also a 16-bit signed int, which is implicitly promoted to a 32-bit integer when it is OR-ed into a. This works correctly for b values up to 14. When b is 15, the value of 1<<(b) is the signed integer 0x8000, i.e. -32768 (strictly speaking, shifting a 1 into the sign bit is undefined behavior in C, but this is what avr-gcc produces in practice). Promoting that to 32 bits sign-extends it to 0xFFFF8000, so the BIT_SET macro sets all seventeen of the more significant bits of decoded_data. For subsequent calls of the BIT_SET macro, b is greater than 15, the shift count exceeds the width of int, and the result of 1<<(b) is 0, so no bits are set and the value of decoded_data doesn't change.

To solve this, you should change BIT_SET macro definition to:

#define BIT_SET(a,b) ((a) |= (1L<<(b)))

or better to:

#define BIT_SET(a,b) ((a) |= (1UL<<(b)))
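
If you want to see the promotion in isolation, here is a minimal sketch you can compile on a PC, using int16_t to stand in for the AVR's 16-bit int (the values match the log in the original post):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t decoded = 30472;                 /* the value after bit 14 is set */

    /* On the AVR, 1 << 15 lands in the sign bit of a 16-bit int,
       giving 0x8000 == -32768. */
    int16_t shifted = (int16_t)(1L << 15);

    /* Promoting -32768 to 32 bits sign-extends it to 0xFFFF8000,
       so the OR sets bits 15 through 31 in one go. */
    decoded |= shifted;
    printf("%lu\n", (unsigned long)decoded);  /* prints 4294965000 */

    /* With 1UL the shift is done in 32 bits and only bit 15 is set. */
    decoded = 30472;
    decoded |= 1UL << 15;
    printf("%lu\n", (unsigned long)decoded);  /* prints 63240 */
    return 0;
}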

 

#4

Thanks to you both!  I don't know if I ever would've found that on my own, and I was quickly starting to get frustrated.

 

So, should I have several BIT_SET macros to match the type of the variable being operated on?  e.g. BIT_SET_I8, BIT_SET_I16, BIT_SET_UI8, and so forth.  What do you guys generally do for these utility macros?

#5

As explained in #3, the problem is due to sign extension -  so that won't happen if you use unsigned values, and won't happen when a larger value is truncated into a smaller type.
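
For example, when the destination is only 16 bits wide, the sign-extended upper bits are simply cut off on the way back in - a quick sketch:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t d = 0;
    /* 0xFFFF8000 is the sign-extended pattern from post #3; converting it
       back to uint16_t keeps only the low 16 bits. */
    d |= (int32_t)0xFFFF8000;
    printf("0x%04X\n", (unsigned)d);   /* prints 0x8000 - only bit 15 survives */
    return 0;
}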

#6

These are standard 'C' language rules - not specific to AVR.

 

#7

awneil wrote:

As explained in #3, the problem is due to sign extension -  so that won't happen if you use unsigned values, and won't happen when a larger value is truncated into a smaller type.

 

The problem is data size. With an unsigned data type of insufficient size this would also happen, although the result would be different. If the data size is correct, the result will be correct with both signed and unsigned int. However, it is recommended to use unsigned values as shift operator arguments. If you checked the code with a tool like QAC, you would get a warning that a signed value is used with the shift operator.

 

Tylerlee12 wrote:

So, should I have several BIT_SET macros to match the type of the variable being operated on?  e.g. BIT_SET_I8, BIT_SET_I16, BIT_SET_UI8, and so forth.  What do you guys generally do for these utility macros?

 

I would prefer not to use a macro at all and would just write

decoded_data |= 1UL << data_bit_counter;

but this is a question of personal preference. If you want to use macros, you need a different macro for each size, i.e. 8, 16 and 32 bits; however, you don't need separate macros for signed and unsigned. You could also use just one macro, the 32-bit version, but you will likely get build warnings about implicit conversion to a narrower type and possible data loss.
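
If you do want per-size macros, a minimal sketch of what I mean (the names are just examples):

#include <stdint.h>

/* One macro per destination width; the literal's type is wide enough
   that the intermediate shift result can hold the bit. */
#define BIT_SET_8(a,b)   ((a) |= (uint8_t)(1U << (b)))
#define BIT_SET_16(a,b)  ((a) |= 1U  << (b))   /* unsigned int is 16 bits on AVR */
#define BIT_SET_32(a,b)  ((a) |= 1UL << (b))   /* unsigned long is 32 bits */

And if your compiler supports C11 _Generic (gcc does since 4.9), you can keep a single name and let the destination's type pick the literal:

#define BIT_SET(a,b) ((a) |= _Generic((a), \
        uint8_t:  1U,  \
        uint16_t: 1U,  \
        uint32_t: 1UL) << (b))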