FatFS - how to reduce sector buffer?


Hi guys,

 

This is probably a long shot, but anyway... has anyone managed to shrink the FatFS sector buffer to a smaller size? AFAIK there are currently two possibilities:

 

1. use FatFS, where the sector buffer size is limited to a minimum of 512 B

2. use petitFS, where accessing byte N within a sector requires N-1 dummy byte reads (I haven't investigated this yet, but I suspect that functionality such as directory traversal/listing would not be possible without any buffer at all)

 

Rationale for my question:

- SD cards at least up to 2 GB capacity (I'm not completely sure, but...) should be readable with byte addressing by setting the CMD16 BLOCKLEN down to a minimum of one byte (see the sketch after this list)

- from a performance perspective, my current problem is that read throughput is not as critical as the latency to access data

- what I see is that FatFS, on a sector boundary, simply reads the whole sector into its buffer, and the code halts/waits for this to complete

- as this is a whole 512 B sector, what I see during performance profiling are stalls of some milliseconds (quite nice in other applications, but...)
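
For reference, a sketch of what that CMD16 (SET_BLOCKLEN) call could look like in SPI mode. Assumptions: the card is a byte-addressed SDSC card whose CSD advertises READ_BL_PARTIAL (SDHC/SDXC fix the read block length at 512, so this does not apply there), and spi_xfer()/cs_low()/cs_high() are hypothetical byte-level helpers from your own SD driver:

#include <stdint.h>

extern uint8_t spi_xfer(uint8_t b);   /* exchange one byte on SPI */
extern void cs_low(void);
extern void cs_high(void);

static uint8_t sd_cmd(uint8_t cmd, uint32_t arg)
{
    uint8_t r, i;

    cs_low();
    spi_xfer(0x40 | cmd);             /* command index with start bits */
    spi_xfer((uint8_t)(arg >> 24));   /* 32-bit argument, MSB first */
    spi_xfer((uint8_t)(arg >> 16));
    spi_xfer((uint8_t)(arg >> 8));
    spi_xfer((uint8_t)arg);
    spi_xfer(0x01);                   /* dummy CRC + stop bit (CRC is off in SPI mode) */

    for (i = 0; i < 10; i++) {        /* wait for the R1 response */
        r = spi_xfer(0xFF);
        if (!(r & 0x80)) break;
    }
    cs_high();
    spi_xfer(0xFF);                   /* extra clocks to release the bus */
    return r;                         /* 0x00 = command accepted */
}

/* e.g. shrink the read unit to 64 bytes: */
/* if (sd_cmd(16, 64) != 0x00) { ...handle error... } */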

 

My idea is to make the sector buffer a (preferably compile-time) definable size, thus:

- reducing the latency that comes from having to read a complete sector

- dissolving this latency into a more predictable pattern over the reads (better to have four 1 ms stalls than one 4 ms stall)

 

What I have tried already:

- reducing the MIN_SS/MAX_SS defines to a smaller value [64] (see the snippet after this list)

- replacing all "512" references within the SD card I/O module (mmr_avr????) with the new size [64]

- the code after these quite "mechanical" changes didn't work [no, I'm not surprised ;)]
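
For anyone following along, the first change was of this shape (these symbols live in ffconf.h; depending on the FatFs version they are spelled _MIN_SS/_MAX_SS or FF_MIN_SS/FF_MAX_SS). As noted above, FatFs assumes a sector is at least 512 bytes throughout, so this alone cannot work:

#define _MIN_SS  64   /* was 512 -- FatFs itself expects at least 512 here */
#define _MAX_SS  64   /* was 512 */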

 

Outcome of such a solution:

- within a small AVR without much RAM, and in a scenario using multiple files, I think a smaller sector buffer is the way to go to keep a reasonable latency/throughput ratio

- obviously you can't have both

 

So, now comes The Question:

 

Has any of you guys stepped over a similar issue and been able to solve it?

Do you have any ideas how to achieve this, or, in the bad case, any warnings why not to try?

 

I could probably find a solution and nail it down in code, but these days any "shortcut" would be welcome.

BTW, to give an idea what this is all about: I'm working on a multi MIDI file player, so latency during playback is not welcome, not to mention the accumulated latency of parsing multiple files that share one sector buffer (a multi-sector buffer is not viable).

 

Thanks

T.

 

 

 


The speed of reading from an SD card varies, so you need some form of elastic buffer. Considering you can get micros with hundreds of KB on chip for reasonable prices, I'd suggest that is a more viable solution.
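
For illustration, a minimal elastic (ring) buffer sketch, with illustrative names and sizes rather than anything from a library: the SD/file-system side tops it up whenever there is room, the playback side drains it byte by byte, and a slow sector read only matters once the buffer actually runs dry.

#include <stdint.h>

#define EBUF_SIZE 256                  /* uint8_t indices then wrap for free */

static volatile uint8_t ebuf[EBUF_SIZE];
static volatile uint8_t ehead, etail;  /* single writer, single reader; byte-
                                          sized so updates are atomic on AVR */

static uint8_t ebuf_count(void)
{
    return (uint8_t)(ehead - etail);
}

static int ebuf_put(uint8_t b)         /* producer: the SD read side */
{
    if (ebuf_count() == EBUF_SIZE - 1) return 0;   /* full, caller retries */
    ebuf[ehead] = b;
    ehead++;                           /* wraps at 256 automatically */
    return 1;
}

static int ebuf_get(uint8_t *b)        /* consumer: the playback side */
{
    if (ebuf_count() == 0) return 0;   /* dry: this is the stall to avoid */
    *b = ebuf[etail];
    etail++;
    return 1;
}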


You pretty much outlined it: if you are so low on RAM that you can't afford the 512, then you need to use Petit, which already handles reading/writing only part of a sector (it does dummy reads until it reaches the bytes you want).

If you are that low on RAM, you have almost certainly picked the wrong micro. If you ever plan to use FatFs, I'd assign 1 KB to it in your memory budget right away.


clawson wrote:
You pretty much outlined it: if you are so low on RAM that you can't afford the 512, then you need to use Petit, which already handles reading/writing only part of a sector (it does dummy reads until it reaches the bytes you want). If you are that low on RAM, you have almost certainly picked the wrong micro. If you ever plan to use FatFs, I'd assign 1 KB to it in your memory budget right away.

 

Dear clawson,

 

What I see (not claiming I'm right) is that in this case the latency would be present with any amount of RAM, as any read of a whole sector causes the same latency. [OK, a different micro able to run the SPI comms faster would skew the problem a bit, but...] And with many file reads (I'm thinking about ~8 files with ~16 read pointers = 8 * 16 = 128 buffers * 512 bytes per sector = 64 kB, which is a completely different resource scale) there would still be a lot of "cache miss" situations.

The micro I would like to use [32u4] is quite capable in processing; the only thing causing concern is that read latency, which would be present in any scenario unless I used a micro with resources of a different magnitude, and that would push me onto a different platform. Even then the problem would still be present until I adopted techniques such as double buffering via interrupts.

What my "mind math" tells me is that finding a way to work with smaller "sector" buffers would solve my issues/concerns. The reality is that a 5 ms latency [one full 512 B sector read] is not an issue until it is combined with reads from multiple files. In that scenario, I think it would be much better to do many reads of small amounts of data into a small buffer than a few reads into bigger buffers. Maybe I'm not clear enough about this, but I think that finding a way to make FatFS work with smaller read buffers would address my concerns. Hence the questions in my opening post.

 

[It's just code, there is a solution to this, I know. But after quite a long time, a shortcut would be welcome.]

 

T.


How often does your application read fewer than 512 bytes?


You could create another routine (my_disk_read_bytes()) to read 'n' bytes and then modify f_read() to use your my_disk_read_bytes() instead of disk_read(). The problem then becomes how to calculate how many bytes are needed.
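
A minimal sketch of what that my_disk_read_bytes() might look like for an SD card in SPI mode, borrowing the Petit-FatFs-style partial block read: issue CMD17 (READ_SINGLE_BLOCK) for the sector, clock the unwanted leading bytes through as dummies, capture only the requested window, then flush the remainder and the CRC. sd_command(), spi_xfer(), cs_low() and cs_high() are assumed helpers from your own SD layer (sd_command() is assumed to send the 6-byte command frame, return R1, and leave CS asserted), and this only applies to byte-addressed SDSC cards:

#include <stdint.h>

extern uint8_t sd_command(uint8_t cmd, uint32_t arg); /* returns R1, leaves CS low */
extern uint8_t spi_xfer(uint8_t b);                   /* exchange one SPI byte */
extern void cs_high(void);

/* Read 'count' bytes starting at byte 'offset' inside 512-byte 'sector'. */
int my_disk_read_bytes(uint8_t *buf, uint32_t sector,
                       uint16_t offset, uint16_t count)
{
    uint16_t i;

    /* SDSC cards are byte addressed, so the argument is sector * 512 */
    if (sd_command(17, sector * 512UL) != 0x00) {
        cs_high();
        return -1;                         /* CMD17 rejected */
    }
    while (spi_xfer(0xFF) != 0xFE) ;       /* wait for the data start token
                                              (a real driver would time out) */

    for (i = 0; i < offset; i++)           /* dummy-read the leading bytes */
        spi_xfer(0xFF);
    for (i = 0; i < count; i++)            /* the bytes we actually want */
        buf[i] = spi_xfer(0xFF);
    for (i = offset + count; i < 512; i++) /* flush the rest of the block */
        spi_xfer(0xFF);
    spi_xfer(0xFF);                        /* discard the 16-bit CRC */
    spi_xfer(0xFF);
    cs_high();
    spi_xfer(0xFF);                        /* release the bus */
    return 0;
}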


I don't understand this thread. If RAM is not the issue, why would you ever want to split a sector read?


clawson wrote:

I don't understand this thread. If RAM is not the issue, why would you ever want to split a sector read?

 

Due to latency. When I'm processing sector-preloaded data and hit the end of the buffer, I have to wait for the next full buffer fill [512 x 1 B read time], even when I need just one byte from the next sector for the current event processing. Even if I had enough RAM for all the files I need to process simultaneously [in sequence, of course], that would amplify the issue further [i.e. 8 files x 512 B]. So in the worst case, where I need to read just a few bytes from every file beyond the current sector buffer, this causes significant latency between processing the first and the last events. With, let's say, a 16 B buffer, there would of course be extra overhead [from starting a READ command for each chunk], but the latency in the bad cases would be a lot lower.


But FatFs always caches a sector? (That's why sizeof(FATFS) is 568 or whatever it is: 512 of those bytes are a one-sector cache.) So while you are just reading/writing a few bytes within a sector, it happens in the cache. Only when you move past the sector boundary is a flush and re-read performed.
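
Side note (an assumption worth checking against the ffconf.h in use): each open FIL object in FatFs also carries its own private data buffer unless the tiny configuration is selected, in which case file objects share the single cache inside FATFS at some speed cost:

#define _FS_TINY  1   /* FF_FS_TINY in newer releases: FIL objects share the
                         FATFS sector cache instead of each holding their own
                         512-byte buffer */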


Exactly. So in a scenario where you read 8 files and every file reads just one byte beyond its cached sector, you read the full sector buffer 8 times, just for 8 bytes from different files. Even with a separate sector buffer for every file (not possible on the 32u4, but suppose a different micro), if one event-processing pass reads 8 files and even a one-byte read in each falls beyond the current sector cache boundary, that is 8 x 512 bytes read. That's the point. With smaller buffers the worst-case latency would drop in exchange for somewhat higher average overhead and a bit lower throughput, but that is not an issue in my application.

So far, investigation reveals that bending FatFS would be too complicated, as it seems to be built around the sector size (low-level reads are referenced at sector granularity). PetitFS seems easier to modify, but support for more than one file would have to be added (a sketch of one way to do that follows).
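
For what it's worth, a sketch of one way to fake multiple read files on top of Petit FatFs without touching its internals: keep a tiny state record per logical file and re-open/re-seek before each read. TrackFile and track_read() are made-up names; pf_mount()/pf_open()/pf_lseek()/pf_read() are the real Petit FatFs calls. Saving and restoring the per-file fields inside the global FATFS (fptr, fsize, org_clust, curr_clust) instead would skip the directory walk that pf_open() repeats here.

#include "pff.h"   /* Petit FatFs: pf_mount(), pf_open(), pf_lseek(), pf_read() */

typedef struct {
    const char *path;   /* name of this track's file */
    DWORD fptr;         /* our own read pointer into that file */
} TrackFile;

static FATFS pfs;       /* mounted once at startup: pf_mount(&pfs) */

/* Read 'len' bytes from one logical file and advance its pointer.
 * pf_lseek() requires _USE_LSEEK to be enabled in pffconf.h. */
static FRESULT track_read(TrackFile *tf, void *buf, UINT len, UINT *br)
{
    FRESULT rc;

    rc = pf_open(tf->path);    /* point Petit FatFs at this file */
    if (rc != FR_OK) return rc;
    rc = pf_lseek(tf->fptr);   /* restore this file's position */
    if (rc != FR_OK) return rc;
    rc = pf_read(buf, len, br);
    tf->fptr += *br;
    return rc;
}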


Must you write a few bytes at a time to so many different files? Can't you use a different format or data structure and write all the data to a single file? What you propose seems to be the worst of all worlds! Let the smart device that you will probably plug the card into exercise ITS smarts and do all the heavy lifting. Here, your whole task is probably just to get the data onto the card. Once it's on, do the complicated stuff elsewhere.

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Jim, I just need to read, not write, that many files.


Then, can the file(s) be structured differently? Maybe even sequentially organized at creation time?

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


No, that would be too complicated, as these will be just standard MIDI files that need only be copied onto the SD card and then played together as multiple tracks. Hence my concerns about latency. MIDI comms are rather slow (~one event per 1 ms), so in this use it is better to have low read latency so the HW output buffers can be filled evenly (better to have data queued in the out buffer, sending in the background and covering a small latency, than an empty buffer that then gets clogged by a larger file read). A sketch of the kind of background output buffer I mean is below.
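
A minimal sketch of interrupt-driven MIDI output on the 32u4's USART1 (MIDI is 31250 baud, 8N1), assuming a 16 MHz clock and that sei() has been called; names and the buffer size are illustrative. The ISR drains a small ring buffer in the background, so the main loop can keep parsing files while earlier bytes are still going out; as long as this buffer never runs dry, a few ms of SD latency is invisible on the wire.

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define TXB_SIZE 64
static volatile uint8_t txb[TXB_SIZE];
static volatile uint8_t txh, txt;

static void midi_uart_init(void)
{
    UBRR1 = 31;                        /* 16 MHz / (16 * 31250) - 1 = 31 */
    UCSR1B = (1 << TXEN1);             /* enable the transmitter */
}

static void midi_send(uint8_t b)       /* queue one MIDI byte */
{
    uint8_t next = (uint8_t)((txh + 1) % TXB_SIZE);
    while (next == txt) ;              /* spin only if the buffer is full */
    txb[txh] = b;
    txh = next;
    UCSR1B |= (1 << UDRIE1);           /* kick the data-register-empty IRQ */
}

ISR(USART1_UDRE_vect)                  /* TX register empty: send next byte */
{
    if (txh == txt) {
        UCSR1B &= ~(1 << UDRIE1);      /* nothing left: stop the interrupt */
    } else {
        UDR1 = txb[txt];
        txt = (uint8_t)((txt + 1) % TXB_SIZE);
    }
}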