SRAM size at compile time vs run time?


Hi All

 

How does one know at compile time whether your SRAM is too full and whether your code will crash during run time? For example, say I write some code for a micro that contains 10 different arrays of char type, each of size 10:

 

char array1[10];
char array2[10];
char array3[10];
...
char array10[10];
...

and a whole bunch of other code.

 

I then compile it in Atmel Studio and it shows me that the SRAM is at 72%. Does this 72% include all the memory required for the arrays, or does it use up more than 72% once the arrays are filled with data?

 

If so, how can I make sure my program does not crash from overwriting SRAM in the wrong place?

 

 


Atmel Studio shows the memory used for global variables. It does not show stack or heap usage. If you want to be sure that your program does not crash from overwriting SRAM, you must work out the memory used by the stack and by dynamic allocation yourself.

Last Edited: Thu. Aug 27, 2020 - 12:02 PM

awit wrote:

Atmel Studio shows the memory used for global variables. It does not show stack or heap usage. If you want to be sure that your program does not crash from overwriting SRAM, you must work out the memory used by the stack and by dynamic allocation yourself.

 

OK, thanks. My arrays are declared as globals, so are they on the heap? And Atmel Studio should be reporting these arrays in the used-up SRAM?


No, global arrays are not on the heap, but since they are global they are part of the report.

Do a little test: comment out two of the arrays and compile. See if the SRAM report is 20 bytes smaller.

Jim



jgmdesign wrote:

No, global arrays are not on the heap, but since they are global they are part of the report.

Do a little test: comment out two of the arrays and compile. See if the SRAM report is 20 bytes smaller.

Jim

 

Thanks Jim, I have done this and it does show the size decreasing. What I am unsure about is: does it allocate all of the memory at compile time even though the arrays have not been initialized with values, or does the memory used grow when the arrays are filled at run time? My optimization is set for size.


I found a reply from Clawson to another post which explains what I'm asking very well:

"Your understanding is correct. As for guidelines:

Program: take this all the way to 99.9% (even 100.0%), but the moment you go over you need to trade up to a larger AVR or find a way to reduce the code size.

Data: the figure given here is only for the static allocations known at compile time (that is, uninitialised globals/statics in .bss, initialised globals/statics in .data, and the very outside possibility that you might have something in .noinit). What it doesn't account for is the RAM that will be used for the CALL/RET stack as functions are called, and also the local variables you may be creating on entry into functions/statement blocks, which are also created on that same stack. In general (though it depends how you as the programmer split the use between globals and locals) a rule of thumb is that you need about 25-30% of the RAM for the stack, so don't let this figure go much over 70-75%. Your 80.8% is probably on the limit. If you have a large "Data" usage the chances are you don't know about "PROGMEM" (Google it).

EEPROM: you can go all the way up to (but not exceeding) 100.0% for this one. It's your choice what you want to keep non-volatile in EEPROM."

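For the PROGMEM point in the quote above, here is a minimal sketch of moving constant data into flash with avr-libc (the table name and contents are made up for illustration):

#include <avr/pgmspace.h>
#include <stdint.h>

/* Constant lookup table stored in flash instead of SRAM. */
static const uint8_t sine_table[8] PROGMEM = {0, 49, 90, 117, 127, 117, 90, 49};

uint8_t sine_lookup(uint8_t i)
{
    /* pgm_read_byte() fetches the byte from program memory at run time. */
    return pgm_read_byte(&sine_table[i & 7]);
}

With PROGMEM the table counts against flash ("Program") instead of the "Data" figure.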

There are four places your variables may be stored. For some, the placement (and hence size/usage) is known at compile time, but for others the usage is dynamic at run time.

 

Globals (and statics) with initialisation are stored in the fixed .data area and are therefore known at compile/link time.

Globals (and statics) without initialisation (and hence guaranteed 0) are stored in the fixed .bss, also known at compile/link time.

Stack-frame automatics (locals) are created on the stack on function entry at run time, so their location/size is not known at compile/link time.

malloc()'d variables are created on the heap (in GCC, the unused area between .data/.bss and the stack) at run time, so also not known at compile/link time.

 

If you want to be absolutely sure all RAM usage is accounted for at build time, stick to globals.
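A minimal sketch of those four cases, assuming avr-gcc (the names and sizes are just for illustration):

#include <stdint.h>
#include <stdlib.h>

uint8_t table[16] = {1, 2, 3};   /* initialised global -> .data, counted in the build report */
uint8_t buffer[32];              /* uninitialised global -> .bss, also counted in the report */

void demo(void)
{
    uint8_t local[8];            /* automatic -> created on the stack while demo() runs */
    uint8_t *p = malloc(24);     /* dynamic -> heap, between .data/.bss and the stack   */

    local[0] = table[0];
    buffer[0] = local[0];
    if (p) {
        p[0] = buffer[0];
        free(p);
    }
}

Only table[] and buffer[] show up in the reported "Data" figure; local[] and the malloc() block exist only at run time.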

Last Edited: Thu. Aug 27, 2020 - 01:12 PM

clawson wrote:
If you want to be absolutely sure all RAM usage is accounted for at build time, stick to globals.

Not quite sufficient. If nothing else, return addresses will be on the stack. Also, if you do something complicated enough, the compiler might need to put some temporaries on the stack, so don't do that.

 

IIRC avr-gcc can emit assembly that contains data about the stack space used. I don't remember what it is called or exactly what is emitted.
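This is most likely GCC's -fstack-usage option: compiling with it writes a .su file alongside each object file, listing the stack bytes each function needs and whether that usage is static or dynamic. For example (device and file name made up):

avr-gcc -mmcu=atmega328p -Os -fstack-usage -c main.c

then inspect main.su for the per-function numbers.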



What I  do is fill the entire SRAM with some recognizable bit pattern (I like  0x5a = 01011010) BEFORE the first initializations occur. Then, you can look at memory with the debugger and see just how much is or has been occupied by both variables and the stack. That has saved my butt several times.

 

Jim
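A minimal sketch of that fill-and-inspect approach for avr-gcc (the _end and __stack symbols come from the default avr-libc linker script; the .init3 placement and the names here are one common way to do it, not necessarily Jim's exact code):

#include <stdint.h>

#define FILL_PATTERN 0x5A

extern uint8_t _end;     /* first byte after the static data, provided by the linker script */
extern uint8_t __stack;  /* initial stack pointer (RAMEND), provided by the linker script   */

/* Placed in the .init3 section: runs after the stack pointer is set up, before main(). */
void ram_paint(void) __attribute__((naked, used, section(".init3")));
void ram_paint(void)
{
    for (uint8_t *p = &_end; p <= &__stack; p++)
        *p = FILL_PATTERN;
}

/* Call this later (or just inspect RAM in the debugger) to see how many painted
   bytes were never touched; assumes malloc() is not used. */
uint16_t ram_untouched(void)
{
    uint16_t n = 0;
    for (const uint8_t *p = &_end; p <= &__stack && *p == FILL_PATTERN; p++)
        n++;
    return n;
}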

 


 

 


You can add the -flto optimization switch to the compiler options; this enables whole-program optimization and inlines calls that would otherwise not be inlined, usually reducing stack usage. It may sometimes help if you are reaching the SRAM limit.
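For example, from the command line (device and file names made up; in Atmel Studio the same switch can be added to the project's compiler and linker flags):

avr-gcc -mmcu=atmega328p -Os -flto main.c uart.c -o firmware.elf
avr-size firmware.elf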

 

This also shows that stack usage is affected by certain compiler settings, so it's basically impossible to predict.


El Tangas wrote:
This also shows that stack usage is affected by certain compiler settings, so it's basically impossible to predict.
One can get useful upper bounds. For a function that calls no other functions, 2 or 3 bytes (for the return address) plus the sizes of its automatic variables is a likely upper bound. If the assembly shows no pushes or other stack manipulation, only the 2 or 3 bytes for the return address are used. For a function that calls other functions, add the maximum used by any one of those functions. Recursion requires a more detailed analysis.
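As a rough worked example of that bound (the function and sizes are made up):

#include <stdint.h>

/* Leaf function: calls nothing else. Per the rule above, a likely upper bound is
   2-3 bytes of return address + 8 bytes for buf (+ a few saved registers),
   i.e. on the order of 10-15 bytes of stack; less if the compiler optimises buf away. */
uint8_t checksum8(const volatile uint8_t *src)
{
    uint8_t buf[8];
    uint8_t sum = 0;

    for (uint8_t i = 0; i < 8; i++) {
        buf[i] = src[i];
        sum ^= buf[i];
    }
    return sum;
}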
