avrdude and bootloader

Hi,

I want to flash a bootloader file to a mega2560. The .hex file contains only the bootloader code, which is about 4 kB, but avrdude programs everything from 0x0000 up to the last address in the .hex file, not just from the bootloader start address (the part is programmed correctly; avrdude seems to fill the empty parts with 0xff). This makes programming very slow.

Does anybody know how to solve this long programming time problem?

Thanks
tmo

Has the .hex file been compiled to be programmed at the bootloader start address?

avrdude just programs the locations the hex file tells it to.

Yes, it starts at 0x3f000. There is nothing before that address in the .hex file.

bye

Which programmer was used?

I use the STK500, but with AVR Studio, not with avrdude.
There, only the used flash was programmed.
So programming the bootloader (524 bytes at 0x03fc00) into the ATmega2561 takes about one second.

Peter

The programmer is an STK200 dongle.

I don't think it is solvable without some modifications to avrdude.

I was poking around in the source a while back trying figure out why the terminal mode flash dumps were so slow.

If I remember correctly, the low-level block read/write routines (..._paged_write() and ..._paged_load()) do not take starting addresses. They take only a byte count and boldly assume that the data always starts at the beginning of the memory space.

In the case of terminal mode, it meant that the dump routines had to call the byte-access routines in order to specify starting addresses, which is much slower than reading blocks (multiple bytes), especially when going over USB.

For programming, I would expect the behavior you are seeing, since programming has to use block mode and there is no way to specify the starting block to the lower-level block read/write functions.

--- bill

Sorry, the information above relates to the stk500 code (which is what I was using) and does not really apply to the stk200.

I just looked at the code, and it appears the stk500 driver has code to avoid writing pages that are all 0xff (empty), but the parallel programmer code (which I think is what the stk200 uses) does not appear to have this optimization.

This should be a very small and easy patch to avrdude.

--- bill

> This should be a very small and easy patch to avrdude.

The more difficult part would be to generalize this. I think there are
already open bug/patch reports for this. Right now, AVRDUDE flattens
the input file into an internal memory image, thereby losing the
information about which parts of the image actually came from the
input file itself (where the input file could consist of multiple input
segments, like an application and a bootloader). The way the STK500 code
handles it is wrong in other ways: if an input file actually contains
a large block of 0xff, verification would skip that region and report
a successful programming operation even in situations where the
respective memory region still held prior data.

Alas, this will become a major code rework.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.
Please read the `General information...' article before.

I think the stk500 code only skips over writes of 0xff.
I believe that all reads are still done for verification.

While not optimal, couldn't something like this be applied to the general case?

For example if you know the part was just erased, it should contain all 0xff and so it should be perfectly safe to skip over writes of 0xff.

And when in byte write mode, this would eliminate having to read the part to see if the write was needed. The code could just write the new data if it wasn't 0xff.

It would mean that for something like a bootloader, many additional reads and verifications for flash data outside the bounds of the bootloader code are still done after all the writes have completed, but I would think that even this simple optimization would offer some time benefit.

--- bill

> For example if you know the part was just erased, it should contain
> all 0xff and so it should be perfectly safe to skip over writes of 0xff.

Well, even if not erased, a write operation to the flash (in contrast to
EEPROM) with 0xff wouldn't be able to change anything... So yes, you
convinced me: when writing, pages that are all 0xff can safely be skipped.

As that is a general thing, it could be moved out of the STK500 code
into the generic part.

Anyway, that would only solve half of the OP's problem, because the
verification read would still read the entire flash ROM (up to the end
of the input file's size), so skipping over 0xff writes would at most
save 50% of the time (unless they turn off verification, but that's
probably not a bright idea, in particular for a bootloader).

Jörg Wunsch


I haven't looked at the code yet to see how involved this is, but to help reduce the remainder of the excess I/O operations, what if the code that reads in and flattens the hex image kept track of two values:
a lowest address and a highest address.
These would be the lowest memory address in the hex image and the address of the highest byte in the image.
(I assume the highest address is already tracked in some form, since there is a byte count.)
This would essentially put bounds on the flattened memory image,
i.e. it marks the memory inside the flattened image that is "valid".

The update/write and verify code could limit its memory range to the memory within these addresses.

I'm assuming that most hex images are not sparse. If that is the case, then this simple method should help in most cases.

This would be particularly helpful for burning small bootloaders into large flash memories such as the mega2560, which is what tmo (the OP) was doing.

In his case only 4k of the 256k part needed to be updated.

--- bill
