using malloc and free


I am working on a program for my SAM4S Xplained Pro. I am getting a list of files and attributes from the mounted SD card. I will then display the list via the console and the user can pick which file to copy to the NAND. Since the number of files is dynamic, I am using malloc to allocate the memory at execution time. The code below works and I don't see any immediate issues. I still need to put some fail-safes in to prevent running out of memory if the user scans a 32 GB card with thousands of files.

 

My question is: am I doing this correctly? Meaning, is there a better way? Also, does "free" actually free memory on hardware like the SAM4S? Here is a snippet of the code.

typedef struct {
    char path[150];
    int file_size;
    char name[20];
    uint32_t nand_address;
} nand_file_t;

nand_file_t* nandFiles;

const char* path = "/";

nandFiles = malloc(1 * sizeof(*nandFiles));                // dummy element for the scan
read_sd_card(path, false, nandFiles);                      // gets the number of files we can index
free(nandFiles);                                           // release the dummy before the real allocation

nandFiles = malloc(number_of_files * sizeof(*nandFiles));  // alloc the struct array
read_sd_card(path, true, nandFiles);                       // load the file attributes into the struct array
for (int ii = 0; ii < number_of_files; ii++) {
    printf("File name: %s\n", nandFiles[ii].name);
}

free(nandFiles); // free up the memory

 

"When all else fails, read the directions"


Hi,

 

the actual behaviour of malloc and free depends on the implementation, i.e. which "libc" you use (if any).

 

I'd say that it is generally a (very) bad idea to use malloc and free on devices that have no MMU. In this case dynamic allocation usually doesn't provide any real advantage over static allocation; instead it is much more limited, computationally costly, and unreliable. If you allocate and deallocate memory in multiple places, there is a risk that memory becomes fragmented so that it can no longer be used - you won't have a large enough contiguous region available.

If you use dynamic allocation in just one place, there is really no reason to do so, because you can allocate a large array statically as a local variable in a function. If you need permanent storage, you can define a global array and manage it "manually".

Depending on what's going on in the software, you can either use this whole region of memory without any restrictions (if you don't need to have more than one "big thing" in memory at a time), divide this memory into a few parts (in that case it's better to define a number of arrays of the respective types), or, if the objects can have variable size and need to be stored simultaneously, organize the storage as a list/queue (immune to fragmentation, but at the cost of some overhead for storing pointers), etc. So this can be anything from very simple to very complex, but at least you always know what's going on.
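
For example, a rough sketch of what I mean by a statically allocated table managed "manually" (MAX_FILES and the helper name here are just placeholders - pick a cap that fits your SRAM budget - and the struct is copied from your first post):

#include <stddef.h>
#include <stdint.h>

#define MAX_FILES 64   /* assumed cap - size it to whatever SRAM you can spare */

typedef struct {
    char     path[150];
    int      file_size;
    char     name[20];
    uint32_t nand_address;
} nand_file_t;

/* one statically allocated table, reused for every scan */
static nand_file_t file_table[MAX_FILES];
static size_t      file_count;

/* hypothetical helper: hands out the next free slot, or NULL when the table is full */
static nand_file_t *file_table_add(void)
{
    if (file_count >= MAX_FILES) {
        return NULL;   /* table full - the caller decides how to report it */
    }
    return &file_table[file_count++];
}

The cost is fixed and visible at link time, so you find out about running out of memory when you build the program, not at some random moment at run time.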

 

Best regards,
Adam


Thank you for your response. I see what you're saying, but perhaps you can shed some light on this example where I want to store an array of structs:

 

typedef struct {
    char path[150];
    int file_size;
    char name[20];
    uint16_t start_page;
    uint16_t start_block;
} nand_file_t;

 

I want to size the array to 700 entries. The size of the struct is 180 bytes, and 180 * 700 = 126000 bytes (126 kB). The code below hangs during the initialization of the array:

nand_file_t metadata_array[700]; // hangs on initialization

Whereas, the following does not:

 

nand_file_t *metadata_array = malloc(700 * sizeof(*metadata_array));
	

 

"When all else fails, read the directions"


"During the initialization" is not necessarily a precise statement. In case of global variables you can't really tell when the initialization happens, in case of local variables initialization may not be performed (if not explicit).

My first guess is that you don't have enough memory for this array. If I see correctly, the SAM4S Xplained comes with a SAM4S16C, which has "only" 128 kB of SRAM. Most likely the rest is used by other things. I'm not sure what happens in this case. It may be that the processor detects an error and enters an infinite loop, or that you write to some memory you shouldn't touch - I don't know what the SAM4S does. You say that malloc returns, but are you sure that it really allocates the memory, and this amount of memory? If yes, then most likely it will fail in another way at an unpredictable moment.
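
For example, something like this (just a sketch, assuming your nand_file_t definition and the usual <stdlib.h>/<stdio.h> includes) would at least tell you whether malloc gave you anything at all:

nand_file_t *metadata_array = malloc(700 * sizeof(*metadata_array));
if (metadata_array == NULL) {
    /* the heap could not satisfy the request - stop here instead of
       writing through a NULL pointer somewhere later */
    printf("allocation of %u bytes failed\n",
           (unsigned)(700 * sizeof(*metadata_array)));
    for (;;) { }   /* halt, or report the error in whatever way fits your application */
}

A non-NULL result still isn't a guarantee on a small system (the heap and the stack can still collide later), but it is a first sanity check.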

Does the compiler say anything? What does arm-none-eabi-size say? (I assume you use GCC.)

Does the program start if you limit the array to, say, 10 cells?


A.R.f. wrote:
If I can see correctly, SAM4S Xplained comes with SAM4S16C which has "only" 128kB of SRAM.

Actually the SAM4S Xplained Pro has a SAM4SD32C with 160 kB of SRAM. But you seem to be right about not having enough SRAM. I'll need to rethink how I want to manage the data coming from the NAND.

 

 

"When all else fails, read the directions"


Which compiler do you use?

I've dug out an app for the same SAM4S you have on board, added a large global array, and put in some code using this array (so that the compiler didn't optimize it out), and GCC told me:

/usr/lib/gcc/arm-none-eabi/4.8/../../../arm-none-eabi/bin/ld: ../bin/sam4s_app section `.bss' will not fit in region `ram'
/usr/lib/gcc/arm-none-eabi/4.8/../../../arm-none-eabi/bin/ld: region `ram' overflowed by 85656 bytes

In the case where the array was defined as a local variable, it was not allocated in .bss and there was no complaint at build time. This of course means unexpected trouble at a later time.

So, to check, you can make sure you define this array as global. You can also try making it much smaller, or even larger, and see whether the program works and whether the linker says anything.
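
Something along these lines (assuming the nand_file_t definition from your post; the dummy touch function is only there so the array cannot be optimized away):

/* ~126 kB placed in .bss - if it doesn't fit, the linker should refuse it */
static nand_file_t metadata_array[700];

void touch_metadata(void)
{
    /* reference the array so it is not optimized out */
    volatile char *p = (volatile char *)metadata_array;
    p[0] = 1;
    p[sizeof(metadata_array) - 1] = 1;
}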

Once I got a very strange behaviour because the linker script allocated some memory for the stack and most of the memory for the heap. I quickly ran out of memory for local variables, and with bigger code the operation of the microcontroller was "random". When the linker script was corrected (by getting rid of the heap and allocating the stack up to the end of RAM) the program worked correctly. But if you use the linker script from ASF, it should already be organized like this (at least the one I have is).