micro-SD memory block behavior?


Greets, folks 

 

Hope someone around has delved into this.

 

Question: In a microSD card driven by FatFS, suppose that a partial block is written and the file is closed. The file is later re-opened in append mode, and new data is added. Does the new data fill out the existing block, or does it leave that block as-is and go on to a new one? Does anyone know what happens?

 

Thanks

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 

Last Edited: Mon. Oct 22, 2018 - 02:16 PM

It over-writes. All of this is "hidden" inside the operation of the SD controller. To the outside world, floppy disks, hard drives, SSDs, and SD/MMCs appear as just a "sea" of completely rewritable 512-byte sectors. The very fact that some or all of these systems may have wear levelling, bad-block avoidance, error correction and other mechanisms going on is effectively "hidden". To the user of an SD/MMC (or floppy or HDD or SSD or CF card or ....) it just appears that you can fill a 512-byte buffer and say "write this into sector 12345" and it does that.

 

The FAT system will do read/modify/writes if it needs to. For example, in a FAT32 system, when you update the FAT you actually write a whole 512-byte sector, but (usually) only 4 bytes - one FAT entry - are changed each time. The SD/MMC or SSD or CF controller will do any necessary buffering, sector erase, and new-sector write that might be required because there happens to be a 0-to-1 bit transition in any of the bytes.


Thanks Cliff -

 

That is an immense help.

 

Jim

 


 

 


clawson wrote:

All this is "hidden" inside the operation of the SD controller. To the outside world floppy disks, hard drives, SSDs, SD/MMCs appear as just a "sea" of completely rewritable 512 byte sectors. The very fact that some/all of these systems may have wear levelling, bad block avoidance, error correction and other mechanism going on is effectively "hidden".

 

Sorry for barging in, it's just that only yesterday I started reading about "raw" SD card interface (over SPI), and was wondering how raw it really is; meaning, if I happen to write some bytes to an address in a damaged area, will the internal card controller know and do something about it, or will my data be lost? Thanks!


Yes, it'll do that. This is "bad block remapping". Say you write 512 bytes to sector 123. If the card either already knows that 123 is "damaged", or finds while doing a test read of the written data that it was not stored correctly, it will add 123 to the remap table and take the next sector from the spare "remap pool". In future, reads and writes to 123 may well be using sector 54273551 in the remap pool, but you aren't aware of this. Of course things really hot up when 54273551 in the remap pool then shows a write failure itself! It then has to remap a remap....

 

Even magnetic HDDs from Seagate and Western Digital have been doing this same remapping thing for decades.

Last Edited: Mon. Oct 22, 2018 - 10:58 PM

clawson wrote:

Yes, it'll do that.

 

Cool, thanks!

 

clawson wrote:

Even magnetic HDDs from Seagate and Western Digital have been doing this same remapping thing for decades.

 

That I knew, but I wasn't sure how intelligent these little FLASH cards are :)


Re-reading Cliff's response, there are a couple of questions (not "doubts" !) in my mind:

 

1) What do you mean by "over-write"?

 

2) Not sure my concern was addressed. Suppose you write a partial sector/block and close the file. Does FatFS know that there is more to be added to the same block on the next write? Or is that managed internally? Or does "it" not even try, leaving partial blocks scattered hither and thither? I don't, but does it matter if the file object is unmounted between successive writes? If you really did answer this concern and I simply don't understand what you wrote, my apologies!

 

Thanks

 

Jim

 


 

 

Last Edited: Tue. Oct 23, 2018 - 01:00 AM

What happens when the remap map goes bad and you have to remap the remap map?

 

--Mike

 


avr-mike wrote:
What happens when the remap map goes bad and you have to remap the remap map?
Aye, there's the rub! (actually I think they use ECC and redundancy)
ka7ehk wrote:
1) What do you mean by "over-write"?
The sectors in an SD/MMC are just bytes in a NAND array, so all the usual rules about updating NAND apply. You can write several times to the same set of bytes as long as all the bit transitions are in the 1->0 direction, but as soon as you need a 0->1 transition you have to read out the page contents, merge in the new data, erase the page, then write the updated contents back.

 

Your scenario was a file that ended early the first time. So maybe only the first 217 bytes of the 512 were written. The nature of erased flash means the remaining 295 bytes will be 0xFF. If you now reopen the file and add "hello world" from byte 217 onwards then it will just make the 1->0 transitions necessary to make the next 11 bytes of 0xFF turn into "hello world". In that sense it is writing into the SAME sector so I would call this "overwrites" though perhaps that word was badly chosen and I really should have used "appends". Of course what you may actually do is open the file, seek to the end, then seek back 11 bytes and THEN write "hello world". So now you are changing data - truly over-writing. If you did this either all the bit transitions are 1->0 and it just does a plain write, or somewhere in there a bit in the existing data has to go 0->1 to turn it into "hello world" so now the page read, update, erase, write thing would have to come into play.

 

BTW I am 99% certain the page size is NOT just 512 bytes - so that complicates things a bit further.

 

ka7ehk wrote:
Does FatFS know that there is more to be added to the same block on the next write?
FAT all works by multiples. Say you write 1,500 bytes to a file. It writes 512 to the 1st allocated sector, 512 to the 2nd allocated sector and 476 bytes into the 3rd sector. It then writes 1500 into the size field in the directory entry. If you now reopen that file later, seek to the end and write 300 bytes, it knows that it has to start writing at the 477th byte in the 3rd sector (adding to what is already there); the remaining 264 bytes then spill over into the 4th sector, where they are written. Of course, at some stage one of these writes that spans the 512-byte sector boundaries is also going to hit the allocation unit boundary too. Say the allocation unit (cluster) size is 16KB, so there are 32 sectors per AU. As it writes over the boundary of the 32nd sector it cannot just continue on to write into the adjacent, 33rd sector - for all it knows that might already be part of an existing, written file - so this is where the "FAT" actually comes into play. It now has to go back to the FAT tables, find the next unallocated AU entry, write the number of that into the AU entry for the previous link in the FAT chain, then multiply that number by the AU size (32 sectors), offset from the base of the data area, and continue writing the next 32 sectors there. (As I say, it's all about "multiples": multiples of the sector size when writing bytes and multiples of the AU size as it hits the cluster boundaries.)

 

What it does not do in the 512+512+476 case is simply "leave behind" the 476 bytes and somehow copy them to a new sector where the next 36 of the 300 being added are appended. It really does go back and update (a better word than "overwrite") the sector where the existing 476 bytes are held.

 

Like I say, if you reopen the file and don't seek to the end, but seek to somewhere before the previous end and then write, that truly is "over-writing".

 

BTW if it had been that bytes like those 476 got "orphaned", then the FAT system would have to keep a map of each and every sector to say whether it was usable or not. There just isn't room for that. The whole reason you have AU-size granularity (and hence often "wasted space" when writing short files) is that the FAT tables couldn't be any larger to support a finer granularity. In grown-up filing systems like NTFS and EXT3/EXT4 the block size (effectively the AU/cluster size) is typically 4KB, so you never waste more than 4095 bytes and on average it will be about 2048 bytes per file. But in FAT, where the AU size is typically 16KB or 32KB, you can waste up to 32767 bytes and on average 16384 bytes per file when the AU size is 32KB. On the other hand FAT is "dead simple" compared to NTFS/EXTn!

 

BTW because of the Microsoft patents on FAT (well, LFN), at one stage I looked at using an alternate format on SD/MMC. I got most of the "read" stuff working to implement UDF, but I stopped short of completing the "write" functions which (like FAT) are quite a lot more complex than reading. "Universal Disk Format" is nice because it is "open". Sadly it never seems to have got much beyond usage in optical media such as CDs/DVDs.

 

 


Thanks, very much, Cliff, for that extended description. 

 

The part about the involvement of FatFS vs the inner workings of the memory device really has to do with code execution time (and serial interface transfers). If FatFS has to manage that, then there would be more code to execute, thus longer "awake" times. BUT, if it is managed internally, then software has to be aware in order to properly manage power to the memory device.

 

My questions have been put to rest. 

 

Cheers

Jim

 


 

 


Ah but the card is probably the major power consumer in the system. So it may continue to process when "writes" have finished.


That was my point about power management. Not an issue in this application (no power switch - a serious design error) but likely in the next iteration.

 

Jim

 


 

 


WOW Cliff, you must have ditched the tablet and used a real keyboard, or maybe added a real keyboard to the tablet? Lots of long posts... ;)

John Samperi

Ampertronics Pty. Ltd.

www.ampertronics.com.au

* Electronic Design * Custom Products * Contract Assembly


This page, although old, is a very good read on the subject - beware though, it's meant for the really keen and inquisitive.

 

I found it useful when investigating a spate of SD card failures some time ago.

 

https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups%2FKernel%2FProjects%2FFlashCardSurvey

 


Thanks

 

Facing a real dilemma here. In an attempt to reduce power consumption, I am opening, writing, and closing on every write. It is now looking like the background processing in the chip takes long enough that, at the minimum write interval, it is consuming the same power as before. I expect it to be less if I increase that interval.

 

Jim

 


 

 


Opening and closing a file (on any file system) uses a lot of computing resources (and power) because of all the overheads,
e.g. locating the file in the directory structure, finding where the end-of-file is, flushing your buffered data to storage, and updating the file-system data structures.


Instead of a 'file open, write, close' method, you could do a 'file open, write, write, ..., write, save, write, ..., close' method, where the 'save' (sometimes called a flush) is a FatFS f_sync().
How often you 'save' and close (and re-open the file) will depend on your application and how much data you can afford to 'lose' when the power fails or your application stalls or restarts.
If you have a power-fail(ing) detector in your system (and a decent power-reserve), you could use that to close the file.

Last Edited: Tue. Oct 23, 2018 - 09:56 PM

The problem is that staying open is also costly. I am seeing a 10mA jump in supply current with the file system open, even when there are long gaps (minutes) between bursts of writes. This (open, write, close) was an attempt to reduce that current. But at faster rates, the memory's background processing seems to take long enough that the current stays high. FatFS itself takes 24ms to get through one open, write, close cycle, and I am guessing that the background processing in the card runs quite a bit longer than that.

 

Got to keep searching for a strategy that will help.

 

Jim

 


 

 


clawson wrote:
The sectors in an SD/MMC are just bytes in a NAND array so all the usual rules about updating NAND apply. You can write several times to the same set of bytes as long as all the bit transitions are in the 1->0 direction but as soon as you need a 0->1 transition you have to read out the page contents, update the new data, erase the page, then write the updated stuff back.

 

NAND invariably has ECC, as its basic reliability is rather poor. This means you can only write a sector at a time, otherwise your ECC doesn't work out. All this is managed 'under the hood' in the SD card and, if my memory serves correctly, you can't index into a sector when reading/writing the SD card - you tell it which sector, and you read/write from byte 0 of that sector. I think you can choose how many bytes you read or write (up to the sector size).

 

Even with NOR flash, the higher densities are relying on ECC to get the reliability up. Even micros like the Infineon XMC1100 have ECC on the flash, so with these, you must write a complete sector.


js wrote:

WOW Cliff, you must have ditched the tablet and use a real keyboard or maybe added a real keyboard to the tablet? Lots of long posts...wink

True enough - these days I do a lot of Freaks reading on my phone when out walking my cats (once they've heard a mouse they'll sit and stare at a patch of undergrowth for 20..30 minutes!) but sometimes I use my PC (often when engaged in boring conference calls!). It's much easier doing stuff on a PC!!


For some numbers, here is what I got on an 8MHz M328P system using FatFS with about 40 bytes of data per operation and an SPI clock of 2MHz. The seek operation is used because data needs to be appended to the existing data. The medium is an 8GB Speed Grade 2 Kingston uSD card that is NOT power-switched.

 

Open 0.7ms

Seek 0.77us

Write 2.8ms

Close 12.8ms

 

These numbers do NOT include the continued on-card processing time after the FatFS f_close() function completes.

 

Jim

 


 

 

Last Edited: Wed. Oct 24, 2018 - 02:29 PM

If you add a printf() to the disk_read() / disk_write() functions, I wonder how many of each are done at each step?


For every block of data to be written (a csv record), I do one open, one seek, one write, and one close.

 

Because of processing time, I am doing each of these on a separate "clock" tick (10ms data-ready interrupt from the sensor). Fortunately, the sensor data is buffered in the sensor and I can afford to wait up to 8ms after data-ready to actually read the data. This allows me to tolerate the greater-than-10ms close time. I do need to determine whether or not any of these steps grows in duration as the file length grows.

 

Jim

 

 


 

 

Last Edited: Wed. Oct 24, 2018 - 03:22 PM

No, what I was asking is for an open(), seek(), write(), close() how many sector reads and how many sector writes are involved in each operation.

 

I'll assume that during the mount() the FS already read and cached the BPB, so it should be able to calculate where the FAT and root directory are located. But then to open() (assuming CD = \) it may need to read N sectors sequentially until it finds the one that has the 32-byte directory entry with the name that matches (far more reading if LFN is enabled!). From this it can get the file size and also the starting AU. So it may need nothing more than N reads until it finds the directory entry. That gets it as far as knowing where the initial AU is, which is all it needs at this stage. The seek() could be very quick (it was in your case) or it could be quite long. If the offset is within the first AU it does not need to do any sector reading; it just updates the file pointer. But if it's to a position beyond the first AU it may need to start walking the FAT chain at this stage - or it might defer this until the actual read/write operation. The write() *may* be as simple as one sector write, or it could involve a whole host of activity if it tips over an AU boundary.

 

As I say, I'm just interested to know how many read/write each high level file operation involves. Do not assume that a write() of 40 bytes (say) is as simple as one 512 byte sector write. In fact until flush (sync) it could be that the data write is simply cached in the buffer within the FIL structure.


ka7ehk wrote:
I am seeing an 10mA jump in supply current with the file system open, even when there are long gaps (minutes) between bursts of writes.

That seems strange to me -- 10mA "quiescent" current?  How does an SD card "know" a file is open?  At 10mA, every cell phone and camera and song-playing device with an SD card would have diminished battery life, right?

 

That said, what is the Class of the cards you are using?

 

Nearly all of my SD work has been with a family of controllers with good power characteristics, so I haven't poked at your area of interest.  When logging I keep the file open, log in 512-byte chunks to avoid [most] partial-sector stuff, and f_sync() after each write.

 

A Google search indicates you are not the only one with this situation.

https://community.nxp.com/thread...

https://electronics.stackexchang... ...but that mentions 200uA idle current

https://forum.arduino.cc/index.p...

...

 

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Cliff - Not sure where I would put those signals. Don't have a print output port but DO have a diagnostic pin that can be wiggled.

 

Lee - As mentioned above, using 8GB Class 2 Kingston microSD cards.

 

Jim

 


 

 


Oh I was just interested to know what it is that "costs" in terms of reads or writes and exactly how many were going on. But if you don't have a debug console I guess that's tricky.


clawson wrote:
I guess that's tricky.

What I do during dev, especially when "pin challenged" or no convenient interface, is to log a set of interesting items to internal EEPROM and then read back with ISP and interpret.



Kartman wrote:
Even with NOR flash, the higher densities are relying on ECC to get the reliability up.
Likewise with FeRAM and MRAM.

https://www.mouser.com/new/rohmsemiconductor/rohm-feram/ (4th feature)

32KB SPI-attached FeRAM is 3.3V at 10mA max.

 

32KB SPI-attached MRAM is 3.3V at 27mA max (40MHz) to 13mA max (1MHz)

https://www.everspin.com/family/mr25h256

 


https://www.mouser.com/new/rohmsemiconductor/rohm-256k-8bit-feram/

 

Edit: MRAM

 

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Wed. Oct 24, 2018 - 06:57 PM