How to link a file on the SD card without using the memory of the chip

#1

Hello guys,

 

I'm trying to open a file from the SD card via a link on a webpage that is running on my board, but the problem is that I cannot send a file of more than 12 KB because I'm running out of memory.

I am using the FatFs library to open the file and read it; I save the bytes read into a buffer and send the content of the buffer to my webpage via HTTP.

char logging[1200];   /* must not be const: f_read() writes into it */

red = f_open(&file_object_read, "File.txt", FA_OPEN_EXISTING | FA_READ);
red = f_read(&file_object_read, logging, sizeof(logging), &br);

So I save the logging buffer in a virtual file and then send it to my webpage.

 

The problem is that I cannot send more than 12000 characters. 

My question is: how can I link the file directly to the SD card, without loading the content of the file into the memory of the chip, and download it from there?

 

I have seen that FatFs has a function f_forward, but I'm not sure if it's going to be suitable for this kind of approach.
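From what I can see in the FatFs documentation, f_forward streams the file through a callback, roughly like this (just my sketch - send_to_socket() is a placeholder for whatever the TCP/IP stack provides, and _USE_FORWARD has to be enabled in ffconf.h):

/* Data streaming callback: FatFs calls this with each block of file data.
   send_to_socket() is a made-up placeholder for the stack's send routine. */
UINT out_stream(const BYTE *p, UINT btf)
{
    if (btf == 0) {
        return 1;                       /* "sense" call: non-zero means ready */
    }
    return send_to_socket(p, btf);      /* number of bytes actually consumed */
}

void stream_file(void)
{
    FIL fil;
    UINT sent;
    FRESULT res = f_open(&fil, "File.txt", FA_OPEN_EXISTING | FA_READ);

    if (res != FR_OK) return;
    while (res == FR_OK && !f_eof(&fil)) {
        res = f_forward(&fil, out_stream, 512, &sent);  /* 512 bytes per call */
    }
    f_close(&fil);
}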

#2

Silly question, but why read/send all the bytes of the file in one go? The whole point of file streams and internet streams is that you can just open and send the data one byte at a time. No need to read it all THEN send. Just take the GET request and then open/read/send a byte at a time - it should be limitless.

 

If you build the data into the code you will have to rebuild the code every time the data is changed.

 

Oh, and [1200] is 1.2 KB, not 12 KB, by the way ;-)

#3

What do you mean by GET request? From the HTTP protocol or from FatFs' F_get?

#4

HTTP works by the guy at the other end saying:

GET foo.bin HTTP/1.1 

to your server. You then check the size of foo.bin on disk (or memory card) and then send back:

HTTP/1.0 200 OK
Date: Fri, 31 Dec 1999 23:59:59 GMT
Content-Type: application/octet-stream
Content-Length: 12345

after this you then send the 12345 bytes of data in the file. There are two ways you might do that:

unsigned char buffer[12345];

fileout = fopen("foo.bin", "rb");
fread(buffer, 1, 12345, fileout);
fclose(fileout);

for (i = 0; i < 12345; i++) {
    write_byte_to_internet(buffer[i]);
}

this opens the file, reads the whole thing into a memory buffer THEN sends those bytes out to the internet. The other way is:

fileout = fopen("foo.bin", "rb");
for (i = 0; i < 12345; i++) {
    write_byte_to_internet(fgetc(fileout));
}
fclose(fileout);

In this scenario you read a byte at a time and send each one out to the receiver until you have sent all the bytes. Because there is no fixed-size 12345-byte buffer involved here, you can send virtually unlimited amounts of data.
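In FatFs terms (which is what you are actually using) that second loop would look something like this - just a sketch, with write_byte_to_internet() still a made-up stand-in for your stack's send routine:

FIL fil;
FRESULT res;
UINT br;
BYTE b;

res = f_open(&fil, "foo.bin", FA_OPEN_EXISTING | FA_READ);
if (res == FR_OK) {
    for (;;) {
        res = f_read(&fil, &b, 1, &br);       /* read exactly one byte */
        if (res != FR_OK || br == 0) break;   /* stop on error or end of file */
        write_byte_to_internet(b);
    }
    f_close(&fil);
}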

 

Now it could be that the internet socket read/write stuff works "best" if you actually send a block of a number of bytes at a time (maybe the MTU size?), as it could be horribly inefficient to send just individual bytes. So there may be some solution between "buffer the entire file" and "send single bytes" where you do something like "read/send 1024 bytes at a time". This will depend on how the sockets interface you are using is implemented and whether it already handles the buffering of data into TCP/IP packets anyway (it probably does!).
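Incidentally, for the Content-Length line in the headers above, the size can come straight from FatFs' f_stat. A minimal sketch - http_send_line() here is a made-up stand-in for however you write a header line to the client socket:

FILINFO fno;
char hdr[40];

if (f_stat("foo.bin", &fno) == FR_OK) {
    http_send_line("HTTP/1.0 200 OK");
    http_send_line("Content-Type: application/octet-stream");
    sprintf(hdr, "Content-Length: %lu", (unsigned long)fno.fsize);
    http_send_line(hdr);
    http_send_line("");                /* blank line ends the headers */
    /* ...then stream the file data as described above */
}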

#5

Thank you for your answer, clawson!! But I think I didn't understand your answer, or my question was not clear enough.

 

I'm reading a file from the SD card; what I actually do is save the content of the file in a buffer, which cannot be more than 12000 bytes, and that is a problem because the file can grow to 500 MB maximum.

For example, I'm using the function f_stat to get the status of the file, and what I actually want to know from the status is only the file size, fsize.
Using your approach I can only get the last package.

My question is: how can I structure the packages in order, with a maximum package size of 12000 bytes?

 

So I guess it will look something like this:

I have a buffer:

Buffer[12000] -- the package size

 

I read the file, save the first 12 KB in the 1st package, send the package;

read the file, save the second 12 KB in the 2nd package, send the package;

....

until I reach the end of the file. With this approach I won't occupy the whole memory and can still send the whole file.

Maybe your examples show the same idea, but I couldn't get them working right..

#6

You could do it that way, but why pick this arbitrary 12000 as your buffer size? I'm sure the buffer does not need to be that big, and if that uses all the RAM then you have no room for expansion. I would have thought a 512 or 1024 byte buffer would suffice but, like I say, there are possibly two things to consider:

 

1) Does the TCP/IP stack already do packet buffering anyway? In which case you are wasting your time and RAM.

 

2) Is there some "magic" size that works well for the TCP/IP stack? As I say, there's usually an MTU (Maximum Transmission Unit), which is the "buffer size" used to packet things up into IP "chunks". You probably don't gain anything by trying to send more than the MTU size in one go anyway.
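To make that concrete, the "middle ground" might look roughly like this - only a sketch, with send_block(data, len) a made-up stand-in for whatever send routine your TCP/IP stack provides:

FIL fil;
FRESULT res;
UINT br;
static BYTE chunk[512];    /* one SD sector: a cheap, natural chunk size */

res = f_open(&fil, "Logging_File.csv", FA_OPEN_EXISTING | FA_READ);
if (res == FR_OK) {
    do {
        res = f_read(&fil, chunk, sizeof(chunk), &br);
        if (res == FR_OK && br > 0) {
            send_block(chunk, br);             /* br is short on the last chunk */
        }
    } while (res == FR_OK && br == sizeof(chunk));
    f_close(&fil);
}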

#7

OK, basically here is what I have done so far.

 

UINT total;

red = f_stat("Logging_File.csv", &fno);
red = f_open(&file_object_read, "Logging_File.csv", FA_OPEN_EXISTING | FA_READ);
red = f_read(&file_object_read, buffer, 1000, &br);
total = br;                                  /* count the bytes read so far */
while (total < fno.fsize)
{
    red = f_read(&file_object_read, buffer, 1000, &br);
    total += br;
    memcpy(logging, buffer, br);             /* copy this chunk into the logging buffer */
}
f_close(&file_object_read);

This is the reading of the file; for now I'm trying to read 1 KB of data from the file and stream it to the logging buffer.

 

The logging array is my buffer from which I send the data to the client.

 

const struct fsdata_file file_logging_js[] = {{file_excan_js, "etc/js/logging.csv", logging, sizeof(logging)}};

I'm saving the content of the buffer in a virtual file system.

 

So I can just link to the file from a button in my HTML:

 

<li><a href="etc/js/logging.csv" download>logging file</a></li>

And basically it downloads the file to the client's drive.

 

The preferred MTU, I guess, will be 1 KB, but I still can't figure out how I'm supposed to divide the whole file into packages and send it..

#8

Metio wrote:
but I still can't figure out how I'm supposed to divide the whole file into packages and send it..

Say I have a 32768-byte file. I could:

char buff[32768];

fread(buff, 1, 32768, file);
send(buff);

or I could:

char buff[16384];

fread(buff, 1, 16384, file);
send(buff);
fread(buff, 1, 16384, file);
send(buff);

or

char buff[8192];

fread(buff, 1, 8192, file);
send(buff);
fread(buff, 1, 8192, file);
send(buff);
fread(buff, 1, 8192, file);
send(buff);
fread(buff, 1, 8192, file);
send(buff);

or (also recognising that a for loop might make this easier):

char buff[2048];

for (i=0; i<16; i++) {
  fread(buff, 1, 2048, file);
  send(buff);
}

or

char buff[512];

for (i=0; i<64; i++) {
  fread(buff, 1, 512, file);
  send(buff);
}

or

char buff[64];

for (i=0; i<512; i++) {
  fread(buff, 1, 64, file);
  send(buff);
}

(I think you can see where this is headed!). Ultimately perhaps:

char buff;

for (i=0; i<32768; i++) {
  fread(&buff, 1, 1, file);
  send(buff);
}

In this last one I don't really have a buffer at all. I just have one byte that I read from the file and then send to the internet. If you do it like this you could send a 3 GB file if you like!

 

That's my point above. There's probably no need to buffer at all. Just open the file and send a byte at a time. This won't use any RAM, just one machine register to hold the "current" byte.

 

This is, of course, predicated on the fact that the internet stack will be doing some buffering itself. If you are sure it doesn't then perhaps you do want to send it as 512 lots of 64 bytes or something?
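 

And if you don't want to hard-code the file size at all, the usual shape is to loop until fread() returns 0 - again, send_block(data, len) is just a hypothetical stand-in for your stack's send routine:

unsigned char buff[512];
size_t n;
FILE *file = fopen("foo.bin", "rb");

if (file != NULL) {
    while ((n = fread(buff, 1, sizeof(buff), file)) > 0) {
        send_block(buff, n);           /* n < 512 only on the final read */
    }
    fclose(file);
}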

 

There's nothing particularly special about any of this - if you were writing an HTTP server on a PC you would probably do it in exactly the same way. There you can be absolutely sure that both the disk filing system and the internet interface will already be doing buffering for you, so you really can putc(getc()) and do it a byte at a time if you like.
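 

That is, something like this, assuming in is the opened file and out is a stdio-wrapped socket stream:

int c;

while ((c = getc(in)) != EOF) {
    putc(c, out);          /* one byte from the disk, one byte to the net */
}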