Printing variables from RAM with printf_P


I've searched every PROGMEM and printf_P thread I could find, but for the life of me I couldn't find an answer to, or even a mention of, this seemingly simple use case:

int intVar = 1;
const char* stringVar = "testing";
printf_P(PSTR("This is a test %d for %s"), intVar, stringVar));

Is this invoking undefined behaviour? Should I use plain printf when parts of my string are stored in RAM instead of flash?


No, that looks like it should work.

 

EDIT: once the extraneous ')' is removed ;-)



Quick test (sprintf rather than printf):
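
Something along these lines (a sketch of the sort of test in question - the exact code isn't shown here, so the buffer size and strings are illustrative):

#include <avr/pgmspace.h>
#include <stdio.h>

char buf[64];
int intVar = 1;
const char* stringVar = "testing";      //argument string lives in RAM

int main(void)
{
    //format string in flash, %d and %s arguments taken from RAM
    sprintf_P(buf, PSTR("This is a test %d for %s"), intVar, stringVar);
    //buf now holds "This is a test 1 for testing"
    return 0;
}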

 

 

So it works.


Hm. I'm having some very weird issues with printf_P which I'm unable to resolve, so I thought I might be using printf_P in an undefined way, hence the thread/question. I'm using ATmega2560. Here's the output from avr-size:

 

Device: atmega2560

Program:   58514 bytes (22.3% Full)
(.text + .data + .bootloader)

Data:       6793 bytes (82.9% Full)
(.data + .bss + .noinit)

I suspect it has something to do with high RAM usage, although there's still over 1 kB left and I'm not using any large local arrays, so it shouldn't be a problem. I've also tried compiling only small parts of the code (reducing RAM usage to roughly 60%), but the issue persisted.

 

At one part of the code I have the following test call:

printf_P(PSTR("PROG-Sensor %d: %s\r\n"), 1, "junk");

 

This produces gibberish on the UART.

 

When I change it to this:

 

printf("PROGInk sensor %d: %s\r\n", 1, "junk");

It works as expected (the line is printed normally).

 

Here's my UART setup for printf (the relevant parts).

 

static FILE uart_printf_stream;

int printChar(char c, FILE* stream)
{
    //write directly to UART
    while ((UCSR0A & (1 << UDRE0)) == 0)
    {
    }
    UDR0 = c;
    return 0;
}

//relevant uart config:
uart_printf_stream.put = printChar;
uart_printf_stream.get = NULL;
uart_printf_stream.flags = _FDEV_SETUP_WRITE;
uart_printf_stream.udata = 0;
stdout = &uart_printf_stream;

I'm not sure what's wrong here.
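
For reference, avr-libc can do the same stream setup with its macro instead of poking the members by hand; as far as I can tell this is equivalent (not what I'm actually running):

//same printChar() as above, stream initialised via the avr-libc macro
static FILE uart_printf_stream = FDEV_SETUP_STREAM(printChar, NULL, _FDEV_SETUP_WRITE);

//then, in the init code:
stdout = &uart_printf_stream;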


Shantea wrote:
This produces gibberish on the UART.
What about in a debugger (even simulator)? What happens if you follow it into vfprintf() ?


Do other USART operations result in non-gibberish?

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


clawson wrote:

Shantea wrote:

This produces gibberish on the UART.

 

What about in a debugger (even simulator)? What happens if you follow it into vfprintf() ?

 

I can try that later today - not at the moment.

 

Other UART operations work fine. I've reduced my entire code to this:

 

#include <avr/pgmspace.h>
#include <stdio.h>
#include <util/delay.h>
#include <avr/io.h>

static FILE uart_printf_stream;

int printChar(char c, FILE* stream)
{
    //write directly to UART
    while ((UCSR0A & (1 << UDRE0)) == 0)
    {
    }
    UDR0 = c;
    return 0;
}

void initUARTdebug()
{
    int16_t baud_count;

    UCSR0B = (1 << TXEN0);

    uart_printf_stream.put = printChar;
    uart_printf_stream.get = NULL;
    uart_printf_stream.flags = _FDEV_SETUP_WRITE;
    uart_printf_stream.udata = 0;
    stdout = &uart_printf_stream;

    baud_count = ((F_CPU / 8) + (115200 / 2)) / 115200;

    if ((baud_count & 1) && baud_count <= 4096)
    {
        //double speed uart
        UCSR0A = (1 << U2X0);
        UBRR0 = baud_count - 1;
    }
    else
    {
        UCSR0A = 0;
        UBRR0 = (baud_count >> 1) - 1;
    }

    //8 bit, no parity, 1 stop bit
    UCSR0C = (1 << UCSZ00) | (1 << UCSZ01);
}

int main()
{
    initUARTdebug();

    while(1)
    {
        // printf("Hello world\r\n");
        printf_P(PSTR("Hello world\r\n"));
        _delay_ms(500);
    }
}

 

printf_P not working, printf working fine.

 

Output from compiler:

avr-g++ -funsigned-char -funsigned-bitfields -fdata-sections -ffunction-sections -fshort-enums -Wall -c -fno-jump-tables -fno-strict-aliasing -std=c++11 -fno-exceptions -DARCH=ARCH_AVR8 -DF_CPU=16000000UL -DNDEBUG -O2 -mmcu=atmega2560   -MD -MP -MF "build/firmware/main.d" -MT"build/firmware/main.d" -MT"build/firmware/main.o" -c "firmware/main.cpp" -o "build/firmware/main.o"
Finished building: firmware/main.cpp
Finished building target: build/x880.elf
AVR Memory Usage
----------------
Device: atmega2560

Program:    1858 bytes (0.7% Full)
(.text + .data + .bootloader)

Data:         20 bytes (0.2% Full)
(.data + .bss + .noinit)


 


I built and simulated that with a breakpoint on the while() in printChar() (which I commented out). The R24 value I saw on each call was:

48 'H'
65 'e'
6C 'l'
6C 'l'
6F 'o'
20 ' '
77 'w'
6F 'o'
72 'r'
6C 'l'
64 'd'
0D '\r'
0A '\n'
48 'H'
65 'e'
etc

There is little doubt in my mind that this is working. So any issue you are seeing is to do with the UART code itself.


I've found the issue... nothing to do with the code at all. Ugh. This is my target in the Makefile:

 

$(TARGET).elf: $(OBJECTS)
	@$(CXX) -o$(TARGET).elf $(OBJECTS) -mmcu=$(MCU) $(LDFLAGS) -Wl,-Map,$(TARGET).map
	@echo Finished building target: $@
	@avr-objcopy -O ihex -R .eeprom -R .fuse -R .lock -R .signature -R .user_signatures "$(TARGET).elf" "$(TARGET).hex"
	@srec_cat $(TARGET).hex -Intel -exclude 0x0e4 0x0e8 -Little_Endian_Maximum 0x0e4 -fill 0xff -over $(TARGET).hex -I -Output $(TARGET).hex -Intel
	@srec_cat $(TARGET).hex -Intel -Little_Endian_CRC16 -max-address $(TARGET).hex -Intel -Cyclic_Redundancy_Check_16_XMODEM -Output $(TARGET).hex -Intel
	@avr-objcopy -I ihex "$(TARGET).hex" -O binary "$(TARGET).bin"
	@avr-size -C --mcu=$(MCU) "$(TARGET).elf"

 

I've been using two srec_cat lines to write the firmware length at the beginning of flash, and a flash CRC at the end of it. If I remove those two lines, everything works. I'm not sure why.


Shantea wrote:
I'm not sure why.

At the beginning of the flash is the reset vector.  Instead of allowing the AVR to work, you have chosen to replace the vector contents with an arbitrary length value.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


OK, I've figured it out. Since I use the trick mentioned above to store the flash length, I've also defined a custom section in the linker script, right after the vectors:

 

    *(.vectors)
    KEEP(*(.vectors))
    *(.applen)

(Note that the stripped-down example above doesn't use my custom linker script.)

 

Now, in one of the source files I have this:

 

/* flash code length -- this is loaded in flash by the linker and other scripts */
const uint32_t ProgramLength __attribute__((section(".applen"))) __attribute__((used)) = 0;

Because of this, the firmware length was supposed to be written at 0xE4. However...

 

Inspecting the map file I see this:

 

 *(.vectors)
 *(.applen)
 *(.progmem.gcc*)
                0x00000000000000e4                . = ALIGN (0x2)

So, according to this, progmem starts at 0xE4, which is obviously wrong since that's where my firmware length is supposed to go. Also, there is nothing in the .applen section.

 

The fix was to add this in the linker file:

 

*(.vectors)
    KEEP(*(.vectors))
    *(.applen)
    KEEP(*(.applen))

I forgot to add KEEP. The worst thing is, I use this method on various projects and somehow, in this one, I left it out. Compiling the code again, I see this:

 

*(.vectors)
 *(.applen)
 .applen        0x00000000000000e4        0x4 build/./firmware/board/v1.28/Board.o
 *(.applen)
 *(.progmem.gcc*)
                0x00000000000000e8                . = ALIGN (0x2)

Data is present in .applen and progmem starts at 0xE8. Also, my printf_P calls are working. I feel bad about this now. 
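
One side note for anyone copying this: since .applen lives in flash, the application has to read the patched value back through pgmspace, not as an ordinary variable. A minimal sketch (the helper name is just illustrative):

#include <avr/pgmspace.h>

//declared elsewhere with __attribute__((section(".applen"))), patched by srec_cat
extern const uint32_t ProgramLength;

uint32_t getProgramLength(void)
{
    //read the 4 bytes from program memory at &ProgramLength (0xE4), not from RAM
    return pgm_read_dword(&ProgramLength);
}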


Why the temptation to store this up near the front of the flash? I'd go to the back and store your 32 bits in FLASHEND-1..FLASHEND-4

 

I do something like that here:

 

https://github.com/wrightflyer/sdbootloader

 

Now at build time for the application I used srec_cat:

srec_cat $(MSBuildProjectDirectory)\$(Configuration)\$(Output File Name).hex -intel -fill 0xFF 0x0000 0x37FE --l-e-crc16 0x37FE -o AVRAP001.bin -binary

which takes the .hex file, pads all the unused area with 0xFF up to the last 2 bytes, then fills those last two bytes with a CRC16 checksum of the entire image.
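
On the receiving side the check can then be something like this - a sketch assuming the CRC-16/XMODEM variant (zero seed, poly 0x1021, which is what <util/crc16.h>'s _crc_xmodem_update() computes) with the checksum stored little-endian at the end, as in the command above; the address is the 0x37FE example from that command line, and whatever seed/variant srec_cat was told to use has to match, of course:

#include <stdint.h>
#include <avr/pgmspace.h>
#include <util/crc16.h>                 //_crc_xmodem_update(): poly 0x1021, initial value 0

#define APP_CRC_ADDR 0x37FEUL           //image data is 0x0000..0x37FD, CRC stored at 0x37FE/0x37FF

static uint8_t app_image_ok(void)
{
    uint16_t crc = 0;

    //CRC over the whole padded image, byte by byte, straight out of flash
    for (uint32_t addr = 0; addr < APP_CRC_ADDR; addr++)
        crc = _crc_xmodem_update(crc, pgm_read_byte_far(addr));

    //--l-e-crc16 stores the checksum little-endian in the last two bytes
    uint16_t stored = pgm_read_byte_far(APP_CRC_ADDR)
                    | ((uint16_t)pgm_read_byte_far(APP_CRC_ADDR + 1) << 8);

    return crc == stored;
}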


Um. Isn't printf for taking the data from RAM, and printf_P from program memory? You use one or the other depending on where your data lies.

 

 

The largest known prime number: 2^82589933 - 1

It's easy to stop breaking the 10th commandment! Break the 8th instead. 


That's what he did (notice PSTR() usage!)
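
printf_P vs printf only decides where the format string is read from; each conversion decides where its argument is read from. A small sketch of the distinction (%S, capital, being the avr-libc extension for flash-resident argument strings):

#include <avr/pgmspace.h>
#include <stdio.h>

const char ram_str[]   = "in RAM";
const char flash_str[] PROGMEM = "in flash";

void demo(void)
{
    //format in flash; %s argument read from RAM, %S argument read from flash
    printf_P(PSTR("first arg %s, second arg %S\r\n"), ram_str, flash_str);
}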


clawson wrote:

Why the temptation to store this up near the front of the flash? I'd go to the back and store your 32 bits in FLASHEND-1..FLASHEND-4

 

I do something like that here:

 

https://github.com/wrightflyer/sdbootloader

 

Now at build time for the application I used srec_cat:

srec_cat $(MSBuildProjectDirectory)\$(Configuration)\$(Output File Name).hex -intel -fill 0xFF 0x0000 0x37FE --l-e-crc16 0x37FE -o AVRAP001.bin -binary

which takes the .hex file, pads all the unused area with 0xFF up to the last 2 bytes, then fills those last two bytes with a CRC16 checksum of the entire image.

 

I used to do it that way, but then the size of the generated binary file equals the MCU's flash size, regardless of actual flash usage. Since I'm updating firmware over UART, that takes 4x as long as it does now!


Ah that's where I have an advantage - my bootloader reads the code from SD/MMC not across UART so size/bandwidth don't really matter ;-)


Shantea wrote:

I used to do it that way, but then the size of the generated binary file equals the MCU's flash size, regardless of actual flash usage. Since I'm updating firmware over UART, that takes 4x as long as it does now!

Consider RLE:

https://en.wikipedia.org/wiki/Run-length_encoding
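
A sketch of the idea - a plain (count, value) pair encoding, so the long 0xFF padding runs collapse to a couple of bytes each (the bootloader end just expands the pairs back out):

#include <stdint.h>
#include <stddef.h>

//encode 'len' bytes from 'in' as (count, value) pairs; returns bytes written to 'out'
size_t rle_encode(const uint8_t* in, size_t len, uint8_t* out)
{
    size_t o = 0;

    for (size_t i = 0; i < len; )
    {
        uint8_t value = in[i];
        uint8_t count = 1;

        //extend the run, capped at 255 so the count fits in one byte
        while (count < 255 && i + count < len && in[i + count] == value)
            count++;

        out[o++] = count;
        out[o++] = value;
        i += count;
    }

    return o;
}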

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


Just make the bootloader know that the last two bytes are the CRC, regardless of what went before.

"Demons after money.
Whatever happened to the still beating heart of a virgin?
No one has any standards anymore." -- Giles