Allocating "just enough" buffer size at link time

10 posts / 0 new
#1

Hi, all!

I'm mildly interested in finding some elegant way to instantiate a statically-defined data buffer exactly big enough to serve the needs of the most demanding of several clients, when the quantity and storage needs of those clients aren't known until link time.

I'm developing a library (in the "libStaticLink.a" sense) with aspirations of being a reusable foundation for several derivative applications. The library will have a module that defines the buffer and provides the consumer function for its data. Other library modules may provide "producing" functions with known, modest buffer-size requirements. But the derivative applications may (and probably will) contain further producer functions interested in stuffing larger quantities of data into the buffer. I don't want to predeclare some "surely-big-enough" buffer size in the library module, though, since some applications might not produce big messages and would rather have the memory for other purposes.

The microcontroller linkers I got to use in the 1980s would handle this sort of thing with the mechanisms they had for figuring out how large one's stack segment needed to be: they'd just collect up all the individual "I need this much" declarations from all the modules, and set the overall size according to the largest value encountered.

Is there any way that the gnu tools can do this sort of thing?

#2

Some of these in a linker script?

http://sourceware.org/binutils/d...

Or some tool magic during the link stage, e.g. have make run a custom script before the link to collect the sizes, and then feed that information into the link.
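A minimal sketch of the linker-script route (untested; the memory-region name `ram` and the per-module symbols `__need_modA`/`__need_modB` are invented for illustration): each module advertises its requirement as an absolute symbol, and GNU ld's built-in MAX() expression reserves the largest.

```ld
SECTIONS
{
  /* Reserve the larger of the advertised requirements.
     __need_modA/__need_modB would be defined per module,
     e.g. via --defsym or an assembler .equ.  */
  .sharedbuf (NOLOAD) :
  {
    XYZbuf = .;
    . += MAX(__need_modA, __need_modB);
  } > ram
}
```

The drawback is that the script has to name every contributing symbol explicitly, so it does not scale as automatically as the common-symbol trick discussed below in the thread.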

avrfreaks does not support Opera. Profile inactive.

#3

I think what OP wants can be done without too much effort:
With gnu, definitions without initializers become common symbols.
When linking, multiple common symbols with the same name become a single symbol.
It will have the largest size and the strongest alignment.
Note that C does not require this to work.

Linking in the actual size of the buffer might be tricky.

Iluvatar is the better part of Valar.

#4

skeeve wrote:
I think what OP wants can be done without too much effort:
With gnu, definitions without initializers become common symbols.
When linking, multiple common symbols with the same name become a single symbol.
It will have the largest size and the strongest alignment.
Note that C does not require this to work.

Linking in the actual size of the buffer might be tricky.

That was the first thing I tried; I got a "symbol XYZbuf changed size from nnn to mmm" error from the linker, and the "make" processing terminated. Also, although the warning message about the size change was initially talking about my buffer growing, I experimentally changed its declared size in one of my modules, and the warning obligingly changed to describe the symbol DEcreasing in size. That made me think that the last definition encountered by the linker would win, even if the warning were suppressed.

Sorry for not posting the actual messages; I can do that when I get back to work tomorrow.

#5

Levenkay wrote:
That was the first thing I tried; I got a "symbol XYZbuf changed size from nnn to mmm" error from the linker
The objects/symbols must be in the common section, i.e. there must not be an explicit section directive like __attribute__((section(".noinit"))), and you must not use -fno-common.

Read the map file to see where the thing ends up.

As far as I understood you, the object needs a particular section, .progmem in this case?

avrfreaks does not support Opera. Profile inactive.

#6

If the size is not known at compile time, then no memory needs to be reserved at all.
Simply use malloc() and free() to establish the buffers.

Peter

#7

skeeve wrote:
I think what OP wants can be done without too much effort:
With gnu, definitions without initializers become common symbols.
When linking, multiple common symbols with the same name become a single symbol.
It will have the largest size and the strongest alignment.
Indeed. It wouldn't occur to me...

One just has to suppress the learned habit of putting a declaration into a common header...

l.c:

unsigned char a[200];
unsigned char b[100];

int main(void) {
}

l1.c:

unsigned char a[100];
unsigned char b[200];

c:\tmp>avr-gcc l.c l1.c -o l.elf -Wl,-Map=l.map

c:\tmp>

l.map:

[...]
Allocating common symbols
Common symbol       size              file

b                   0xc8              C:\Users\OM7ZZ\AppData\Local\Temp/ccp0McEx.o
a                   0xc8              C:\Users\OM7ZZ\AppData\Local\Temp/cckYlTyn.o
[...]
 *(COMMON)
 COMMON         0x00800060       0xc8 C:\Users\OM7ZZ\AppData\Local\Temp/cckYlTyn.o
                0x00800060                a
 COMMON         0x00800128       0xc8 C:\Users\OM7ZZ\AppData\Local\Temp/ccp0McEx.o
                0x00800128                b
                0x008001f0                PROVIDE (__bss_end, .)
[...]

Maybe I will learn to appreciate the mess C really is? :-)

JW

#8

danni wrote:
If the size was not known at compile time, then you need no memory to be reserved.

The size is known for a given module (source). The max size for all modules is what is not known.

danni wrote:
Simple use malloc() and free() to establish the buffers.
This implies indirect access, which may be less efficient and is not necessary here.

JW

#9

wek wrote:
skeeve wrote:
I think what OP wants can be done without too much effort:
With gnu, definitions without initializers become common symbols.
When linking, multiple common symbols with the same name become a single symbol.
It will have the largest size and the strongest alignment.
Indeed. It wouldn't occur to me...

One just has to suppress the learned habit of putting a declaration into a common header...

The below are definitions, not just declarations.
Quote:
l.c:
unsigned char a[200];
unsigned char b[100];

int main(void) {
};

l1.c:

unsigned char a[100];
unsigned char b[200];
c:\tmp>avr-gcc l.c l1.c -o l.elf -Wl,-Map=l.map

c:\tmp>

l.map: [...]
Maybe I will learn to appreciate the mess C really is? :-)

C does not require this to work.
It is a feature of typical unix-y compilers and linkers.

Iluvatar is the better part of Valar.

#10

skeeve wrote:
I wrote:
One just has to suppress the learned habit of putting a declaration into a common header...
The below are definitions, not just declarations.

Of course.

That's why one has to omit the "shared" declarations in headers - the common header is C's customary "mechanism" for avoiding differently-sized symbols, and it has to be selectively suppressed for this particular purpose.

The surprising (for me) thing is, that nothing more is needed.

Michael wrote:
I wrote:
Maybe I will learn to appreciate the mess C really is? :-)
C does not require this to work.
It is a feature of typical unix-y compilers and linkers.
I of course understand that; I would even expect the standard to say something about undefined behaviour for this case (too lazy to look it up).

Nevertheless, isn't the unix-y implementation the canonical one, and doesn't this "feature" thus stress the hackishness of the whole environment surrounding C? That's why I hate C - but in this particular case, that's what unexpectedly came in handy... ;-)

Jan