How to fail a build if the output is too big?


Currently, AVR Studio will happily compile a project and emit a binary that requires more memory than physically present. By default it runs avr-size to show you how much memory is used. However, if the usage is > 100% then it will just show that, and the build will still succeed.

I want my makefile to actually *fail* the build if the sizes are over the limit. It's too easy to overlook informational messages.

Additionally I want to be able to set custom thresholds for failure. For example, if SRAM/data usage is 100% then there is no room for the stack. A lower percentage would be better.

For example, suppose my device has 64 kilobytes of flash but my compiled program takes up 70 kilobytes. Then I want the build to fail. I don't want to see any "build succeeded" message.

Why is this important? The firmware is going to be built as part of a much larger build script for our product's system, one that could take a significant amount of time to run and generate a lot of informational messages. An informational "flash usage 109%" message is going to get lost in the shuffle, and a developer will falsely think everything is OK.

How should I do this? I can't quite put my finger on how; perhaps my GNU make skills are not the best...


I suspect that you would need to change avr-size:

1. Make it accept thresholds, and

2. Make it exit with a value > 0 if any threshold is violated, and

3. Write your makefile so that it stops when avr-size exits with a failure value.

Tongue-in-cheek: avr-size is open source software. You are free to contribute... :wink:
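Step 3 is the easy part: make already aborts a target when any recipe command exits non-zero, so a modified avr-size would slot straight into a rule like the following sketch (the tool's threshold flags are hypothetical; no such options exist in the stock avr-size):

```make
# Sketch only: assumes a hypothetical avr-size that accepts threshold
# options and exits non-zero when a threshold is violated. make then
# stops the build automatically because the recipe line failed.
size-check: main.elf
	avr-size --max-flash=65536 --max-data=3072 $<
```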

As of January 15, 2018, Site fix-up work has begun! Now do your part and report any bugs or deficiencies here

No guarantees, but if we don't report problems they won't get much of  a chance to be fixed! Details/discussions at link given just above.

 

"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]


And, of course, an alternative would be to write a small utility that takes the output of avr-size as its input, parses it, and does that failure exit if limits are violated.
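A minimal sketch of such a utility, as a POSIX shell function (my own illustration, not an existing tool) — it assumes the default Berkeley-style avr-size output, where the second line carries the text/data/bss numbers:

```shell
# check_flash MAX_BYTES -- reads `avr-size file.elf` output on stdin and
# returns 1 (failing the build) if text+data exceeds MAX_BYTES.
check_flash() {
    max=$1
    read -r _header             # skip the "text data bss ..." header line
    read -r text data _rest     # the numbers line
    used=$((text + data))
    if [ "$used" -gt "$max" ]; then
        echo "flash overflow: $used > $max bytes" >&2
        return 1
    fi
}
```

It would be invoked from a makefile recipe along the lines of `avr-size main.elf | check_flash 65536`.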



Quote:

And, of course, an alternative would be to write a small utility that takes the output of avr-size as its input, parses it, and does that failure exit if limits are violated.

For many, that would probably be more straightforward than the avr-size mods, depending on one's skills and inclination to make the mods.

Not exactly the same, but I have a couple apps that use STK500.EXE in command-line mode. I capture the output and look for key words that indicate success, and check for a couple key words for common failures. This was in a VisualBASIC6 Windows app.

Pretty straightforward for flash/program size. SRAM gets trickier with the stack, and especially if you use a heap.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


JohanEkdahl wrote:
And, of course, an alternative would be to write a small utility that takes the output of avr-size as its input
... or the mapfile.

Enjoy! ;-)

Jan

PS. More loosely related fun at https://www.avrfreaks.net/index.p...


One thing to note about avr-size is that it can be used on .hex files:

E:\avr>dir plus.hex
 Volume in drive E is VBOX_windows
 Volume Serial Number is 0000-0805

 Directory of E:\avr

17/08/2011  09:13             2,767 plus.hex
               1 File(s)          2,767 bytes
               0 Dir(s)  294,300,901,376 bytes free

E:\avr>avr-size plus.hex
   text    data     bss     dec     hex filename
      0     974       0     974     3ce plus.hex

I would have thought it'd be fairly easy to write a script to be executed in the Makefile that parses that output and determines if "974 > threshold".


You could just pipe the avr-size output into tail and awk then use that as input into test.

In a makefile something like:

test `avr-size filename.hex |tail -1 |awk '{print $$2}'` -lt $(MAXSIZE)

Not too friendly but this will fail the make if the hex is too large.

--- bill


Or let avr-size and awk do the whole job

avr-size main.elf | awk -v MAX=100 'END{ if($2 > MAX) { print "Program uses too much SRAM"; exit 1 } }'

Just copy the default linker script into the project directory and edit it so it has the actual memory sizes. After that, add -Wl,-T,<script filename> to the project linker options.
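As a sketch, the linker option in a makefile would look something like this (myproject.x is a hypothetical local copy of the default avrN.x script, edited for the device's real memory sizes):

```make
# Pass a project-local linker script to ld via gcc. With accurate MEMORY
# region sizes in myproject.x, overflowing flash or RAM becomes a fatal
# linker error instead of a silent over-100% report.
LDFLAGS += -Wl,-T,myproject.x
```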

If anyone's interested, here's the solution I finally figured out:

%.avrsizetest: %.avrsizetestflash %.avrsizetestsram
	@true
%.avrsizetestflash: %.elf %.avrsizetestshow
	test `$(SIZE) -A $< | grep '.text\|.data\|.bootloader' | sed -r -e 's/ +/ /g' | cut -f 2 -d ' ' | xargs | tr ' ' + | bc` -le $(MAXPROGRAM)
%.avrsizetestsram: %.elf %.avrsizetestshow
	test `$(SIZE) -A $< | grep '.data\|.bss\|.noinit' | sed -r -e 's/ +/ /g' | cut -f 2 -d ' ' | xargs | tr ' ' + | bc` -le $(MAXDATA)
%.avrsizetestshow: %.elf
	$(SIZE) --mcu=$(MCU) --format=avr $<

Basically, the sequence of steps for testing is:

  1. Run avr-size and print list of all sections and their sizes.
  2. Only pick certain sections of interest that we want to sum.
  3. avr-size pads the output with lots of spaces to align it; remove the duplicate spaces.
  4. Use the cut tool to pick the second field (actual size of object).
  5. Use xargs to put all lines onto one line.
  6. Replace spaces with + symbols.
  7. Pipe the output to bc to add them.
  8. The output of all that is passed as an argument to test, which checks to be sure that we are below the maximum.

The makefile runs "avr-size --format=avr" to print the pretty output for the user. Then it tests the flash and SRAM; if either is too large, the build fails. A dependency on the avrsizetest target is what kicks things off.

Several solutions in this thread used awk, which I'm sure is fine if you have that tool available. I restricted myself to the tools included with WinAVR, which does not include awk.

There are a lot of steps above and I'm sure it can be improved and simplified. If anyone has ideas for improvement while restricting yourself to the tools included with WinAVR, please post. :) I'm no expert with the GNU / POSIX command-line tools, so I'm sure there are simpler ways (the fact that the Wikipedia page for sed says it's Turing-complete might be a clue...).


Quote:

I'm sure it can be improved and simplified

As noted above you can avr-size the .hex file and get ONE number - no need to pick out the .text/.data/.bootloader sizes from the more complex avr-size -C output on the .elf file.


johnstonmvs wrote:

Several solutions in this thread used awk, which I'm sure is fine if you have that tool available. I restricted myself to the tools included with WinAVR, which does not include awk.

There's a lot of steps above and I'm sure it can be improved and simplified. If anyone has ideas for improvement while restricting yourself to tools included with WinAVR, please post. :) I'm no expert with the GNU / POSIX command line tools, so I'm sure there are simpler ways (the fact that the Wikipedia page for "sed" says its turing complete might be a clue...).

But WinAVR does include awk.
It's called: gawk.exe

So I'd go back to using awk, since it can do it all.
See snigelen's post.

You can still have separate rules for text & data using slightly different awk parameters.

--- bill


Since awk (or gawk) is fun, some examples:

$ avr-size main.elf   
   text	   data	    bss	    dec	    hex	filename
   5570	     18	    303	   5891	   1703	main.elf

Print sum of text and data on last line

avr-size main.elf | gawk 'END{print $1+$2}'
5588

Exit with a fail-code if this sum is bigger than make variable $(MAX)

$ avr-size main.elf | gawk -v MAX=$(MAX) 'END{if ($1+$2 > MAX) exit 1}'

With option -C

$ avr-size -C main.elf
AVR Memory Usage
----------------
Device: Unknown

Program:    5588 bytes
(.text + .data + .bootloader)

Data:        321 bytes
(.data + .bss + .noinit)

Print second field on all lines starting with Program:

avr-size -C main.elf | gawk '/^Program:/{print $2}'
5588

With option -A

$ avr-size -A main.elf                               
main.elf  :
section     size      addr
.text       5570         0
.data         18   8388704
.bss         303   8388722
.stab      19740         0
.stabstr    5659         0
Total      31290

Sum up the size column (well, the second column) for lines starting with .text, .data or .bootloader

avr-size -A main.elf | gawk '/^\.(text|data|bootloader)/{sum+=$2} END{print sum}'                                             
5588
Last Edited: Tue. Nov 15, 2011 - 08:36 AM

The reason I consider parsing the mapfile superior is that the output ELF contains only the information on output sections (including the stub sections).

More particularly, my original problem was to ensure the .progmem section won't cross the 64kB boundary; checking of other sections came as a side-effect.

Oh, and I personally find using those awk-ward tools, instead of proper programming languages, for writing relatively complex programs perverse, rather than fun ;-)

JW


bperrybap wrote:
But WinAVR does include awk.
Its called: gawk.exe

Leave it to me to overlook something obvious and come up with a complicated alternative! :?

wek wrote:
Oh, and I personally find using those awk-ward tools, instead of proper programming languages, for writing relatively complex programs perverse, rather than fun ;-)

Fully agree!! awk and sed are two confusing things I don't like to deal with... They may be Turing-complete but I don't find them easy to read/write...

snigelen wrote:
With option -A
$ avr-size -A main.elf                               
main.elf  :
section     size      addr
.text       5570         0
.data         18   8388704
.bss         303   8388722
.stab      19740         0
.stabstr    5659         0
Total      31290

Sum up the size column (well, the second column) for lines starting with .text, .data or .bootloader

avr-size -A main.elf | gawk '/^\.(text|data|bootloader)/{sum+=$2} END{print sum}'                                             
5588

I tried adapting this but ran into an issue:

%.avrsizetestflash: %.elf
	@test `$(SIZE) -A $< | gawk '/^\.(text|data|bootloader)/{sum+=$2} END{print sum}'` -le $(MAXPROGRAM)

The error:

gawk: cmd. line:1: /^\.(text|data|bootloader)/{sum+=} END{print sum}
gawk: cmd. line:1:                                  ^ parse error

Seems like gawk isn't getting to see the "$2" symbol, but I don't see what I need to do to fix it. I tried escaping the dollar sign but it didn't seem to help.


This is where things get a bit messy when using make. The commands you put in the makefile are not always exactly what you type on the command line. In this case, you have to escape the $, so you will need $$2, which is what you saw in my example. The best thing to do when testing stuff like this is to not use the @, so you can see the commands that are being attempted.

But why use the test command?

Why not use awk, kind of like in snigelen's first awk example, so you can print an error message before make exits with an error?

BTW, you could also check for RAM overflows by extracting bss and data and adding them up and comparing to the RAM size.

--- bill


Quote:

Oh, and I personally find using those awk-ward tools, instead of proper programming languages, for writing relatively complex programs perverse, rather than fun

I disagree.

If we stay with using the output of avr-size as our input, the problem is largely about pattern-matching/regular-expressions/somesuch... That is fairly complicated in an "ordinary language", yes. But one of the strengths of most "scripting languages" is that they have excellent high-level support for exactly these kinds of problems.

If I need to "find the third integer that follows after the word 'smörf'", then regexes/scripting is THE tool. Any solution using an "ordinary language" will be more complicated.

I can understand that newcomers to regular expressions and the like can be baffled. To me, such a situation has much in common with e.g. makefiles. Both regexes and make are languages designed for specific problems, and this is reflected in their syntax and semantics.

There are few things that have empowered me more professionally than make and scripting languages. (AWK was the first I learned - in the eighties or nineties. ATM I'm trying to abandon Perl as my primary scripting language, in favour of Ruby.)

Since AWK is installed by WinAVR, then maybe the WinAVR Mfile makefile template should be amended with build steps that bail out if certain limits are exceeded?



bperrybap wrote:
In this case, you have to escape the $, so you will need $$2, which is what you saw in my example.

Ah - that's what I was missing... I was trying to use backslashes to escape and it wasn't working. Confusing, since backslash is usually used for escaping things. Works now!

bperrybap wrote:
Why not use awk, kind of like in snigelen's first awk example, so you can print an error message before make exits with an error?

Good point, here's a corrected version:

%.avrsizetestflash: %.elf %.avrsizetestshow
	@$(SIZE) -A $< | gawk -v MAX=$(MAXPROGRAM) '/^\.(text|data|bootloader)/{sum+=$$2} END{if (sum > MAX) exit 1}'
%.avrsizetestsram: %.elf %.avrsizetestshow
	@$(SIZE) -A $< | gawk -v MAX=$(MAXDATA) '/^\.(data|bss|noinit)/{sum+=$$2} END{if (sum > MAX) exit 1}'

This new version runs much faster than the version I originally posted, since only two executables are involved now: avr-size and gawk. It adds up the sections that are supposed to be added according to "avr-size -C". Thanks everyone!

bperrybap wrote:
BTW, you could also check for RAM overflows by extracting bss and data and adding them up and comparing to the RAM size.

Pretty much... I was already doing this, as shown above. I recommend picking a lower MAXDATA than the actual RAM on the device. There needs to be room in RAM for the stack and if data + bss is taking up all the RAM, there is no room for the stack. How much stack space is needed? That depends on your application, and there is no 100% bullet-proof way to determine it, especially if you have recursive function calls. I reserved 1 KB of stack on my 4 KB devices (MAXDATA = 3 KB) and 2 KB of stack on my 8 and 16 KB devices (MAXDATA = 6 KB and 14 KB). I'm hoping that's enough, but I haven't had time to do a more formal check.

clawson wrote:
As noted above you can avr-size the .hex file and get ONE number - no need to pick out the .text/.data/.bootloader sizes from the more complex avr-size -C output on the .elf file.

snigelen wrote:
Or let avr-size and awk do the whole job
avr-size main.elf | awk -v MAX=100 'END{ if($2 > MAX) { print "Program uses too much SRAM"; exit 1 } }'

bperrybap wrote:
In a makefile something like:

test `avr-size filename.hex |tail -1 |awk {'print $$2'} ` -lt $(MAXSIZE)

These methods aren't accurate enough for my purposes. "avr-size -C" defines flash memory usage to be text + data + bootloader, and SRAM usage to be data + bss + noinit. "avr-size -B" (the default) doesn't output these sums:

$ avr-size -C testfile.elf
AVR Memory Usage
----------------
Device: Unknown

Program:    2590 bytes
(.text + .data + .bootloader)

Data:       1090 bytes
(.data + .bss + .noinit)

$ avr-size -B testfile.elf
   text    data     bss     dec     hex filename
   2558      36    1058    3652     e44 build\PCB001001-Boot-v12.elf

$ avr-size -A testfile.elf
section            size      addr
.data                32   8388864
.text              2558     61440
.bss               1058   8388896
.fuse                 3   8519680
.lock                 1   8585216

Total             16436

Notice that none of the numbers output by "avr-size -B" match the numbers from "avr-size -C". Closer examination of the section sizes reveals why. The "text" column from "avr-size -B" does not include the "data" section; you still have to add that in. Unfortunately, the "data" column from "avr-size -B" is no good because it includes the fuse and lock sections as well as data: there's no way to determine true SRAM usage from the output of "avr-size -B" alone; you'd need prior knowledge of the fuse and lock bit section sizes. For SRAM usage from "avr-size -B", you have to add the "data" and "bss" columns, but as mentioned, the answer is wrong because of the incorrect inclusion of the "fuse" and "lock" sections. Finally, "avr-size -C" mentions the bootloader and noinit sections, which admittedly I am not familiar with, as I have never used them. I wouldn't be surprised if "avr-size -B" mishandles those as well.

For these reasons, I think summing the sections from "avr-size -A" is the safest and most accurate path. Parsing "avr-size -C" could work too, but I avoided it because I think the format of that command's output is more likely to change in the future.

JohanEkdahl wrote:
Since AWK is installed by WinAVR, then maybe the WinAVR Mfile makefile template should be amended with build steps that bail out if certain limits are exceeded?

That seems like a no-brainer to me. There is absolutely no valid reason that I can think of for why you'd want to have the output exceed device limits. Exceeding device limits should be viewed as erroneous output. Therefore, the build script should fail any build that does not meet the defined limits.

I agree that the makefile templates included in WinAVR could benefit from this little change. In fact, I would argue that Atmel should include this in the makefile generator inside AVR Studio, so that the limits are checked based on the chosen device. (If a newer WinAVR is ever released, that is. I guess there is also the new Atmel toolchain, but I have not tried it, and I have not gotten good vibes about it from this forum.)

Another way to solve this problem would be to modify avr-size so that "avr-size -C" returns an error code if device limits are exceeded.

In any of these cases, provision needs to be made for stack space (see commentary above). For example, if "avr-size -C" returns an error code, I need to be able to set a value for stack space on the command line so that the data number checked against the device limit is "data + bss + noinit + reserved_stack_space". If AVR Studio is doing checks in the makefile, it needs a text box in the project options for reserved stack space. Etc. etc.
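That kind of check can already be approximated today with gawk. A sketch (the STACK budget and the sample section lines are my illustration, standing in for real "avr-size -A" output on a hypothetical 4 KB-SRAM device):

```shell
# Fail when data + bss + noinit plus a reserved stack budget exceeds RAM.
# The printf lines mimic `avr-size -A` output for illustration; in a
# makefile the left side of the pipe would be `$(SIZE) -A $<`.
printf '.data 32 8388864\n.text 2558 61440\n.bss 1058 8388896\n' |
awk -v MAX=4096 -v STACK=1024 '
    /^\.(data|bss|noinit)/ { sum += $2 }
    END {
        if (sum + STACK > MAX) { print "SRAM budget exceeded"; exit 1 }
        print "SRAM ok: " sum " static + " STACK " stack <= " MAX
    }'
```

With these sample numbers, the static total is 1090 bytes, so the check passes (2114 of 4096 bytes budgeted).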

I wouldn't mind working on these improvements myself, except that WinAVR seems dead until evidence to the contrary comes out, and I don't work at Atmel, so I can't change the AVR Studio product itself. Besides, I have an actual job to do involving the product that the makefile is building, and that job does not include little tweaks to the AVR toolchain now that I already have a solution in the makefile (avr-size + awk) for dealing with this problem.


Quote:

The "text" column from "avr-size -B" does not include the "data" section; you still have to add that in

Why would it? It's reporting the size of .text, not ".text + .data". As you say, if you use the -B numbers, you have to do the addition that -C would do otherwise.
Quote:

There needs to be room in RAM for the stack and if data + bss is taking up all the RAM, there is no room for the stack. How much stack space is needed?

This does depend on how much you rely on locals, and whether it's just a few ints/chars or buffers full of the things, but a rough rule of thumb is that "Data:" over 75-80% is starting to get dangerous. One way to avoid this is to make all variables global (or static); then you know at compile time how much RAM they all consume, and all that's left is CALL/PUSH/POP/RET space.

I'm not sure I agree with your suggestion that Atmel should concentrate on automating this. Most humans can look at "Program: 16389 (101.3%)" and see that 101.3% means they've burst the flash size. Atmel have got loads more pressing things they need to fix before worrying about something like this.

Quote:

except that WinAVR seems dead until evidence to the contrary comes out,

Then you haven't seen Eric's promise about a new one then?


I agree that a bit of magic would be nice, here, but I have never had any problems with looking at the data size and thinking: "Self, hey, that 97% is a bit big. No room left for stack or locals. Time for some serious stuff, here!"

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 


clawson wrote:
Quote:

The "text" column from "avr-size -B" does not include the "data" section; you still have to add that in

Why would it? It's reporting the size of .text not ".text + .data". As you say if you use the -B numbers you have to do the addition that -C would do otherwise.

Well, obviously, by definition. But the whole point of this exercise is to see whether compiled output fits within device limits. For real-world use of actually verifying whether the output fits in the device, I don't see an alternative to using "avr-size -A" and adding the correct sections manually. As I noted, the output from "avr-size -B" does not give enough information to do this task and determine whether the output fits within limits. Sounds like we both realize this, but several posters seemed to indicate that "avr-size -B" had all the information needed, and I just wanted to make clear for anyone reading this thread later that there are actually some pitfalls if one tries to use "avr-size -B" (e.g. the inclusion of the fuse/lock sections; maybe I'm just being OCD, but those aren't stored in SRAM so they shouldn't be counted, even if they only amount to 4 bytes).

clawson wrote:
Quote:

There needs to be room in RAM for the stack and if data + bss is taking up all the RAM, there is no room for the stack. How much stack space is needed?
This does depend on how much you rely on locals and whether it's just a few ints/chars or buffers full of the things but a rough rule of thumb is that "Data:" over 75-80% is starting to get dangerous. One way to avoid this is to make all variables global (or static) then you know at compile time how much RAM they all consume and all that's left is CALL/PUSH/POP/RET space.

In general I agree with the idea here, but all variables? Sounds gross. I find leaving a few integer local variables and function parameters to be quite alright - especially since I don't use recursion. It really helps the readability / maintainability of the code in my experience. Not to mention the C runtime uses functions with parameters and no doubt some small local variables, as well. Of course, I put the big buffers and so on right in bss as global variables - as they should be. Large constants get placed in program memory. Etc. Perhaps this was mainly what you were referring to - keeping the big stuff in text, data, and bss sections and restricting the stack to small variables?

In the past I found pages about stack analysis and AVR at http://www.cs.utah.edu/~regehr/stacktool/ and http://docs.tinyos.net/tinywiki/index.php/Stack_Analysis. Interesting stuff to be sure, I just haven't taken the time to learn it since I don't come close to 75% usage of SRAM with data/bss and my stack isn't going to contain more than a few pushes/pops for local integer variables and return addresses. But I think that is probably the most promising path to take.

I think a good exercise would be to add a stack analysis tool to the makefile and include its results in the total SRAM usage metric that is then tested to see if device limits are being blown. E.g. enforce that "data + bss + noinit + stack_analysis_output < MAX_DATA". Maybe another day...

clawson wrote:
I'm not sure I agree with your suggestion that Atmel should concentrate on automating this. Most humans can look at "Program: 16389 (101.3%)" and see that 100.3% means they've burst the flash size. Atmel have got loads more pressing things they need to fix before worrying about something like this.

Perhaps it isn't the highest priority thing, but I think it should be on their TODO list because one could argue that this is a bad "codegen" bug: the generated output does not function correctly on the device specified on the command line to the compiler. Most compiler codegen bugs are urgent priority - behind things like compiler optimizations. (Now I realize some will argue that this is not a codegen bug since the code *would* work correctly if the hypothetical target device had no limits.)

By this logic, most humans can see a warning like "use of uninitialized variable" in the output and then fix the problem. After all, I can see that warning in the compiler output and it should be easy for me to go and look for it, right? Or even, one could argue that I should have written the code correctly and I can easily read the code to find out the bug for myself without any warning from the compiler. Of course, such warnings can be lost in the shuffle if the makefile is outputting pages full of informational text. Then bugs result where the code works on 49 customer computers but breaks on the 50th customer computer. That's why compilers output warnings and have options like "treat warnings as errors" so that the build *cannot* complete successfully without fixing the problem (or explicitly ignoring the warning). It takes the human out of the loop, so that human error (i.e. failure to carefully examine every line of makefile output) does not result in a bug.

It's even worse with this, because there is no warning. You have to manually read each number and do math and compare to see if it is < 100% (or < 75% for SRAM). This might sound stupidly trivial and a non-issue to a lot of you, but...

The reality is that I am ultimately going to be compiling 14 different AVR projects/configurations, and numerous Windows applications from one makefile. Nobody is going to try to find the output from avr-size for each of the 14 configurations and manually examine it every time they do a build to be sure that device limits aren't being blown. Can a human do it? Sure. Can I depend on every person ever using this makefile to manually check it and check it correctly 100% of the time? Especially if they have been working a few hours straight and the eyes are glazed over? No way. The only way this kind of issue will be caught 100% of the time is by stopping the build.

clawson wrote:
Quote:

except that WinAVR seems dead until evidence to the contrary comes out,

Then you haven't seen Eric's promise about a new one then?

I define evidence to the contrary to be an actual, new release of WinAVR.

I have actually seen Eric's promise; I assume you're talking about the thread at https://www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=107269? The problem is that he aimed to get a release out within 3 months - as of almost 6 months ago. The last post in that thread is from June. I realize he maintains this in his free time and that perhaps other priorities have come up in his personal life detracting from time on this project. Maybe he has lost interest. Maybe his employer does not want him spending more time on it. Or maybe he is only a few days away from releasing a new version and was merely delayed / it was harder than he expected. I have no idea. And that is fine - this is nothing personal against Eric. All I know is that for practical purposes, the project is dead for now - a new WinAVR simply does not exist and all I've found is stale communications with time estimates that have been blown, with no very recent status updates.

I also seem to recall reading somewhere that WinAVR is not completely open source. For example, the setup program not being open-sourced. So I'm not sure how easy it would really be for someone else to pick up the torch and continue maintaining it. (I seem to remember reading a post by Eric describing how easy/hard it would be for someone else to do that.) For now, my understanding is that maintenance of WinAVR hinges almost entirely on Eric, since he's the only one who can build it.

Maybe I need to move on to the new official Atmel toolchain and get to the bottom of it. It seems like that's the future of avr-gcc development on Windows, simply by virtue of the fact that it seems to be the only one with a recent version and it has actual commercial resources behind it. I'll need to more carefully read this forum to learn any potential issues with switching from WinAVR to the official toolchain (seems like I recall reading that there were some regressions, for example?). Then fix my code/makefiles so that it's compatible with the Atmel toolchain. (Like I said, I haven't really used it yet... I'm still using WinAVR because I know & trust it, and haven't had problems with the compiled outputs, which have been deployed to customers. And it seems like a number of regular members here don't like the direction Atmel is going with the new toolchain, if I'm remembering right? Doesn't exactly convince me to rush out and try the new toolchain.)


Quote:

but all variables?

No I really MEANT all variables. We're talking about 1K/2K/4K microcontrollers here. What does it matter about name scope etc. ? You'd also gain the advantage that almost all functions could be "void fn(void)" thus reducing parameter passing overhead etc.
Quote:

It's even worse with this, because there is no warning. You have to manually read each number and do math and compare to see if it is < 100% (or < 75% for SRAM). This might sound stupidly trivial and a non-issue to a lot of you, but...

It does sound trivial, and what's more, in all the posts I've read here (and that's pretty much everything since I joined) I simply cannot remember anyone ever raising this "issue". Presumably most folks can read when 100% is exceeded? If it really bothers you then edit the lib/ldscripts/avrN.x that's being used for your AVR and change this:

MEMORY
{
  text      (rx)   : ORIGIN = 0, LENGTH = 128K
  data      (rw!x) : ORIGIN = 0x800060, LENGTH = 0xffa0
  eeprom    (rw!x) : ORIGIN = 0x810000, LENGTH = 64K
  fuse      (rw!x) : ORIGIN = 0x820000, LENGTH = 1K
  lock      (rw!x) : ORIGIN = 0x830000, LENGTH = 1K
  signature (rw!x) : ORIGIN = 0x840000, LENGTH = 1K
}

to put in the actual size of text for the device. That way if you ever exceed the .text+.data size for the device you will get a fatal linker error.

The reason the regions in this are "big" is that one .x covers about 10 different AVR models so it caters for the biggest (or even bigger) but if you stick to one model then you could edit the master copy. Either that or take a local copy for each project, edit it for the device in that project, then -T it to make the linker use the edited copy.
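
As a sketch of that edit (the figures below assume an ATmega644P-class part with 64 KB flash, 4 KB SRAM starting at 0x100, and 2 KB EEPROM - verify against your device's datasheet before copying), shrinking the regions to the real device sizes makes an over-size image a fatal link error:

```
MEMORY
{
  text      (rx)   : ORIGIN = 0, LENGTH = 64K        /* real flash size      */
  data      (rw!x) : ORIGIN = 0x800100, LENGTH = 4K  /* real SRAM size       */
  eeprom    (rw!x) : ORIGIN = 0x810000, LENGTH = 2K  /* real EEPROM size     */
  fuse      (rw!x) : ORIGIN = 0x820000, LENGTH = 1K
  lock      (rw!x) : ORIGIN = 0x830000, LENGTH = 1K
  signature (rw!x) : ORIGIN = 0x840000, LENGTH = 1K
}
```

With a per-project copy (here a hypothetical my-avr5.x), the -T approach looks like:

avr-gcc -mmcu=atmega644p main.o -T my-avr5.x -o main.elf

and the linker then stops with an error along the lines of "region `text' overflowed" instead of silently producing an oversized image.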

Quote:

Maybe he has lost interest.

You clearly do not know Eric. He's passionate about GCC for AVRs. I think he may be waiting on the work being done by Georg-Johann Lay before making the 4.6/4.7 choice though.
Quote:

Maybe I need to move on to the new official Atmel toolchain and get to the bottom of it

You'd be ill advised to use that bug fest!


johnstonmvs wrote:
So I'm not sure how easy it would really be for someone else to pick up the torch and continue maintaining it.
... or start over?
Like https://www.avrfreaks.net/index.p... ?

Did not sound like a lot of people rushing to try their stuff.

JW


clawson wrote:
No I really MEANT all variables. We're talking about 1K/2K/4K microcontrollers here. What does it matter about name scope etc. ? You'd also gain the advantage that almost all functions could be "void fn(void)" thus reducing parameter passing overhead etc.

You've got to be kidding... I don't really understand how this would reduce actual parameter passing overhead. If anything, it makes it worse. Twice as bad, by my test. Yes, the actual function call will be faster but that's it. A general statement I happened to read somewhere is not to try to outsmart the compiler. This is an area where that statement applies. The compiler will try to avoid using memory and use registers instead, which reduces memory consumption and is faster. By explicitly putting things in global variables, the compiler is forced to use memory even if it didn't have to. Parameters are forced to go into memory. And local variables are forced into memory, even if they could have otherwise gone into a register. Basically, you're inventing your own (slower) calling convention. Let's look at a simple example:

// testunit.h

unsigned char Method1(unsigned char a, unsigned char b);
void Method2(void);
extern unsigned char a, b, c;

// testunit.c (functions split into separate file
// from main.c to prevent whole program optimization
// removing the functions completely)

unsigned char a, b, c;
unsigned char Method1(unsigned char a, unsigned char b) {
	return a + b;
}
void Method2(void) {
	c = a + b;
}

// main.c

#include "testunit.h"

volatile unsigned char targetMemory1, targetMemory2;
int main(void) {
	targetMemory1 = Method1(3, 5);
	a = 3;
	b = 5;
	Method2();
	targetMemory2 = c;
	return 0;
}

Method1 and Method2 do the same thing: add two numbers. Method1 uses normal C conventions for parameter passing and return values. Method2 uses your proposed method of global variables. Let's look at the disassembly:

5:        int main(void) {
// put immediate values in registers, call Method1, then store
// result into memory
+00000052:   E083        LDI       R24,0x03       Load immediate
+00000053:   E065        LDI       R22,0x05       Load immediate
+00000054:   940E0067    CALL      0x00000067     Call subroutine
+00000056:   93800101    STS       0x0101,R24     Store direct to data space
// put immediate values in registers, store in memory, call Method2,
// load result from memory, store result in target memory.
7:        	a = 3;
+00000058:   E083        LDI       R24,0x03       Load immediate
+00000059:   93800104    STS       0x0104,R24     Store direct to data space
8:        	b = 5;
+0000005B:   E085        LDI       R24,0x05       Load immediate
+0000005C:   93800102    STS       0x0102,R24     Store direct to data space
9:        	Method2();
+0000005E:   940E0069    CALL      0x00000069     Call subroutine
10:       	targetMemory2 = c;
+00000060:   91800103    LDS       R24,0x0103     Load direct from data space
+00000062:   93800100    STS       0x0100,R24     Store direct to data space
12:       }

@00000067: Method1
// Add the input numbers, then return.
3:        unsigned char Method1(unsigned char a, unsigned char b) {
+00000067:   0F86        ADD       R24,R22        Add without carry
+00000068:   9508        RET                      Subroutine return

@00000069: Method2
// Load input variables from memory, add, store result in memory,
// then return.
8:        	c = a + b;
+00000069:   91800102    LDS       R24,0x0102     Load direct from data space
+0000006B:   91900104    LDS       R25,0x0104     Load direct from data space
+0000006D:   0F89        ADD       R24,R25        Add without carry
+0000006E:   93800103    STS       0x0103,R24     Store direct to data space
9:        }
+00000070:   9508        RET                      Subroutine return

The simulator clocks Method1 at 13 cycles and Method2 at 25 cycles on ATmega644p.

The proposed Method2 is almost twice as slow as Method1, requires 3 additional bytes of memory that Method1 did not, and the code is much harder to read and maintain (especially if you used this style universally in a large project).

The only "advantage" with Method2 is that you know how much memory was required for parameter passing: 3 bytes in the global variables. Of course, this was invented because Method1 never needed those 3 bytes of memory in the first place. And you still have no idea what the stack consumption actually is: the "call" instructions pushed something onto the stack, which you can't count with avr-size.

Of course, the compiler can't put *everything* in registers and will resort to the stack sometimes. If stack consumption is a real concern, look into the static stack analysis tools I mentioned. That way stack consumption can be monitored without degrading the overall speed and memory consumption of the program.
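
(A concrete data point, assuming your toolchain's GCC is 4.6 or newer: GCC itself can emit per-function worst-case frame sizes with -fstack-usage. That's a GCC feature, not anything WinAVR-specific, so an older 4.3-era build won't have it. A makefile fragment might look like:

```make
# Assumption: avr-gcc here is built from GCC >= 4.6, which added -fstack-usage.
CFLAGS += -fstack-usage

# After a build, each translation unit gets a .su file, e.g. testunit.su:
#   testunit.c:4:15 Method1   0   static
# i.e. function, frame bytes, and whether that bound is static or dynamic.
```

The .su files don't follow calls, so you still need a tool or script to walk the call graph for a whole-program bound, but it beats guessing.)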

No offense, but I'd really hate to maintain your firmware if all your function signatures look like "void Function(void)". What a mess... I'd probably submit portions of it to http://www.thedailywtf.com!

My firmware involves 25 *.C, *.H, and *.S files and runs on micros with 4 KB minimum; the more advanced versions run on XMEGA with even more RAM. Maybe passing parameters as global variables can be done with smaller firmwares that aren't so complicated. (Even then, as I already noted, you're hurting the actual speed/memory consumption of the program - and if it's a small firmware you're probably desperately trying to improve those very metrics on a small device!) But I can't imagine trying to maintain mine with everything as a global variable. There's a reason Atmel is making AVR8 micros with 256 KB flash. My code only uses 56 KB or so. So I'd imagine there are firmwares in excess of 100 *.C/*.H files out there - some probably written by people on this forum! I'd have many words to describe such a large codebase that passes everything as a global variable when the compiler offered better options, and they're not suitable for the public...

clawson wrote:
it does sound trivial and what's more in all the posts I've read here (and that's pretty much everything since I joined) I simply cannot remember anyone ever raising this "issue". Presumably most folks can read when 100% is exceeded?

Maybe nobody else seriously thought of this issue? Most people probably have only one or two AVR configurations in a single project and no associated non-AVR projects. Or they're so disorganized they haven't considered placing all AVR and non-AVR (i.e. Windows) code in source control and then setting up a build script to compile everything in those repositories - both AVR and MS Windows code. Surfacing potential problems as build errors probably isn't even on their radar.

I know when I first learned AVR a few years ago, I simply used AVR Studio / WinAVR and kept an eye on the compiler output to see how I was doing on data/flash consumption. I only kept the code on my local machine and I was the only developer who would ever work with the code. I trusted myself as a developer to check the output and be sure that I didn't blow device limits. Each project only consisted of a few hundred lines of code and there weren't other related projects.

This scheme works for a while... and it's good enough for most people dabbling with AVR (or embedded programming in general, I'm sure). But I don't think it scales.
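
For what it's worth, the gate I originally asked about doesn't even need changes to avr-size: a makefile rule can parse its output and return nonzero itself, which fails the build. A sketch of the shell side (the thresholds, the canned numbers, and main.elf are all illustrative assumptions - a real rule would capture `avr-size -A main.elf` instead of the canned text):

```shell
# Hypothetical size gate. Limits below assume a 64 KB flash / 4 KB SRAM part,
# with SRAM capped at 75% to leave headroom for the stack.
FLASH_MAX=65536
SRAM_MAX=3072

# Canned sample standing in for real `avr-size -A main.elf` output
# (section name, size, address).
SIZE_OUTPUT=".text 58000 0
.data 120 8388864
.bss 900 8388984"

# Pull out the per-section sizes.
text=$(echo "$SIZE_OUTPUT" | awk '$1==".text"{print $2}')
data=$(echo "$SIZE_OUTPUT" | awk '$1==".data"{print $2}')
bss=$(echo "$SIZE_OUTPUT" | awk '$1==".bss"{print $2}')

# Flash holds .text plus the .data initializers; SRAM holds .data plus .bss.
flash=$((text + data))
sram=$((data + bss))

if [ "$flash" -gt "$FLASH_MAX" ] || [ "$sram" -gt "$SRAM_MAX" ]; then
    echo "Size check FAILED: flash=$flash sram=$sram" >&2
    exit 1
fi
echo "Size check passed: flash=$flash sram=$sram"
```

Hung off the default make target as a recipe, the nonzero exit makes make itself stop - no "build succeeded" message to overlook.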

clawson wrote:
If it really bothers you then edit the lib/ldscripts/avrN.x that's being used for your AVR and edit this:

I do like the sound of this scheme on the surface. But editing master copies on the computer is a bad idea: it's not in source control and you have to remember to manually duplicate them to other development computers. Too fragile.

Duplicating these scripts into the application's repository is a much better solution. But the problem with this is we have now duplicated mostly-unmodified WinAVR files. If WinAVR ever updates these linker scripts, we'll have to manually merge in these changes. Yuck.

I think it would be better if WinAVR itself installed a custom linker script for each microcontroller - that would avoid both these issues. I doubt this will happen (priorities and all that), but it does seem like the more ideal solution.

clawson wrote:
You clearly do not know Eric. He's passionate about GCC for AVRs. I think he may be waiting on the work being done by George-Johan Lay before making the 4.6/4.7 choice though.

Admittedly I don't know Eric. I certainly hope you are right and something will be coming soon. But I learned long ago that promised future products may be vaporware, even with the best of intentions. It's not finished 'til it's finished, I say!

I'd just as soon use the Atmel solution since it has the backing of a commercial company, but, as you say...

clawson wrote:
You'd be ill advised to use that bug fest!

Assuming Eric continues with WinAVR, why will we have two common builds of Windows avr-gcc (i.e. WinAVR & the Atmel toolchain) - with the independent one maintained by an Atmel employee in his free time? Atmel should have taken over WinAVR development and continued to maintain it with new devices and bug fixes. Or something similar.

This situation of an Atmel employee independently maintaining a separate toolchain because the official Atmel toolchain sucks just makes Atmel look really stupid!! (I'm sure the independent commercial compiler vendors like IAR love the current situation).


Quote:

Atmel should have taken over WinAVR development and continued to maintain it with new devices / bug fixes.

That's what everyone here thinks - God alone knows why they outsourced the preparation of "toolchain" to a team in India who don't seem to have benefited from any of the acquired knowledge they already had in house.


Quote:
Atmel should have taken over WinAVR development

All else being equal, this must be one of the scariest scenarios I can think of.

They'd start by changing the default optimization level to -O0 and then continue to wreck it from there. I'm sure they'd exchange e.g. the cross-platform GAWK that comes with it for something Windows-specific just because it's WinAVR, and someone at Atmel probably gets a lot of free lunches from Microsoft...

The really amazing thing is that they didn't decide to sponsor the independent WinAVR project, and the avr-gcc / avr-libc heroes, with whatever they might need while still letting them be independent. Then Atmel could concentrate on IDEs, and on making an official solution for running Atmel debugging tools on platforms other than Windows.

Why did Atmel decide to create its own "clone" of WinAVR? I don't know, but suspect one or more of the following:
- Pride/prestige
- Control (so that they have full control of the Toolchain/ASF combo).
- More pride/prestige, e.g. in the form of internal politics
- Customer demands (e.g. a customer that will not trust a community project like WinAVR, but would trust something more or less identical just because it says "Atmel" on the box).

None of the reasons I can think of actually makes much sense, but my guess is that it was largely politics/prestige.

Atmel taking over WinAVR - just say no. (Unless, of course, they let EW alone handle it on paid time. THAT would prolly be heaven!)


 

"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]


JohanEkdahl wrote:
Quote:
Why did Atmel decide to create its own "clone" of WinAVR? I don't know, but suspect one or more of the following:
- Pride/prestige
- Control (so that they have full control of the Toolchain/ASF combo).
- More pride/prestige, e.g. in the form of internal politics
- Customer demands (e.g. a customer that will not trust a community project like WinAVR, but would trust something more or less identical just because it says "Atmel" on the box).

None of the reasons I can think of actually makes much sense, but my guess is that it was largely politics/prestige.

Responding to customer demand is sensible even if the customer isn't.
Also, if Atmel supports its toolchain, it would likely want to be able to understand changes before they are made.

Moderation in all things. -- ancient proverb


JohanEkdahl wrote:
Why did Atmel decide to create its own "clone" of WinAVR?

Not-invented-here syndrome. Which can work out if you have competent people. But Atmel is a textbook example of what happens when you don't, and when developers and management are completely overwhelmed by a task.

Someone did sell the Toolchain idea to management, but couldn't deliver. And did Atmel a real disservice.

Someone did sell the ASF idea to management, but couldn't deliver. And did Atmel a real disservice.

Someone did sell the AS5 idea to management, but couldn't deliver. And did Atmel a real disservice.

Atmel messed up three times in a row, but the really sad thing is they don't learn and still pretend all three things are the best things since sliced bread. This happens when the arrogance is larger than the competence on developer as well as on management level.

And if you e.g. look at the ASF architecture or the GPL handling for the Toolchain, you can easily spot that competence was absent - drunk on vacation in Hawaii - when these things were planned, sold to and bought by management, and executed.

Stealing Proteus doesn't make you an engineer.


skeeve wrote:
Responding to customer demand is sensible even if the customer isn't.
Also, if Atmel supports its toolchain, it would likely want to be able to understand changes before they are made.

I can completely understand that they want some level of control over the releases - especially if they are in the position of having to provide technical support for it.

That's why they could have forked/branched WinAVR, and they could pick & choose the stuff from WinAVR that they wanted to distribute, and WinAVR could merge contributions from Atmel. Two parallel branches that regularly merge with each other. And given that Eric works at Atmel, it seems he would have been the perfect person for that role. Atmel could have retained the control they needed over their particular distribution while the community could still participate.

Tons of open source projects have commercial backing and it seems to work for them. (Google Chrome & Chromium for example?) Why not for Atmel?

Disclaimer: I'm probably talking out of my *** because I haven't actually installed the toolchain and studied it - everyone says don't bother. For all I know, it's a direct fork of WinAVR, since I haven't really looked at it in person yet.