Question about sudden long busy delay on microSD cards (SPI)

#1

I'm testing the overwrite speed (overwriting the same file is much faster than writing a new file) of microSD cards in SPI mode, and I found that sometimes the card shows a very long delay (100 ms to 200 ms) on the SOMI pin, which indicates it is busy.

This happens on all the different microSD cards (SD or SDHC; 2GB, 4GB, 8GB, 16GB; class 2, 4, 6; FAT or FAT32) I have. I wonder if this is caused by the built-in wear-leveling function, which sometimes moves the physical write address to a new place.

Does anyone know why, and how to reduce this long delay?
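For reference, a minimal sketch of how that busy period is typically handled in SPI mode: after a write block is accepted, the card holds its data-out line low and the host just clocks dummy bytes until it reads 0xFF again. spi_xfer() and millis() below are hypothetical helpers, not from any particular library.

#include <stdint.h>

/* Hypothetical SPI byte exchange: sends one byte, returns the byte clocked in. */
extern uint8_t spi_xfer(uint8_t out);
/* Hypothetical millisecond tick counter. */
extern uint32_t millis(void);

/* After a write block is accepted, the card signals busy on DO/SOMI until it
 * has finished programming the flash. Poll until 0xFF is seen again, with a
 * timeout generous enough to cover the occasional 100-200 ms stall. */
static int sd_wait_ready(uint32_t timeout_ms)
{
    uint32_t start = millis();
    while (spi_xfer(0xFF) != 0xFF) {
        if ((millis() - start) > timeout_ms)
            return -1;              /* still busy after timeout: treat as error */
    }
    return 0;                       /* card ready for the next token/command */
}

With a timeout of a few hundred milliseconds, the occasional long stall is simply waited out instead of being misread as a card failure.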

#2

Here is a tutorial on SPI with a long list of devices that interface to SPI.

http://www.mct.net/faq/spi.html

#3

Hi, the link you provided doesn't have the info I need about the SD card's sudden long delay.

I wonder if this long delay is caused by the built-in wear-leveling design.

#4

darthvader wrote:
Hi, the link you provided doesn't have the info I need about the SD card's sudden long delay.

I wonder if this long delay is caused by the built-in wear-leveling design.

Your guess is as good as mine.

#5

I wonder if there are any old microSD cards that don't have this wear leveling?

#6

I would >>guess<< not, since wear leveling has been a recognized factor from the very start. I would REALLY hate to rely on the presence or absence of some undocumented property. If it happens, you just need to be prepared to do the right thing and wait gracefully.

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

#7

OK, then I'll have to live with this long delay and hope for the best with my data logging.

Thanks.

#8

Think carefully about what you are doing.

Overwriting a file means truncating the existing file contents, writing the data, and updating the existing directory entry.

Writing a new file means creating a new directory entry and writing a new chain of contents, then deleting the original file and directory entry.

So you have significantly more sector writes and, more importantly, more updates to the FAT tables.

I have no idea what algorithms are used to wear level the writes to the FAT table sectors. I can make some inspired guesses though.

The whole FAT methodology implies chains of links through the table. Writing to a virgin disk just writes consecutive sectors, which means that following a chain will not involve multiple sector reads.

All this is conjecture on my part. I would imagine that the designers of SD cards have addressed most FAT deficiencies, but you should be aware of potential "features".

I would certainly remain within any published timing constraints. Perhaps you will find that you get more reliable throughput by sticking to the data sheet.

David.
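One way to keep those extra FAT and directory writes out of the time-critical loop is to pre-allocate the file once before logging starts. A minimal sketch, assuming a FatFs-style API (recent f_mount signature); the file name and size here are just placeholders:

#include "ff.h"                     /* FatFs */

#define LOG_PREALLOC_BYTES (64UL * 1024 * 1024)  /* example log size, adjust */

FATFS fs;
FIL   log_file;

/* Pre-allocate the whole file once, before logging starts, so the cluster
 * chain and FAT entries already exist and the inner loop only overwrites
 * data sectors. */
FRESULT log_open(void)
{
    FRESULT rc;

    rc = f_mount(&fs, "", 1);
    if (rc != FR_OK) return rc;

    rc = f_open(&log_file, "log.bin", FA_WRITE | FA_CREATE_ALWAYS);
    if (rc != FR_OK) return rc;

    /* Seeking past the end of a file opened for writing allocates the
     * clusters up front (documented FatFs behaviour). */
    rc = f_lseek(&log_file, LOG_PREALLOC_BYTES);
    if (rc != FR_OK) return rc;

    return f_lseek(&log_file, 0);   /* rewind and start overwriting */
}

Inside the logging loop, f_write then only touches data sectors; the FAT and directory entry get updated again at f_close(), outside the time-critical part.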

#9

Hi,

I now use a new way to write data: during the data logging no FAT operations are involved, so the only delay is the microSD card's busy delay. On average I can now get about 3.3 MB/s write speed at a 30 MHz SPI clock. But the sudden long delays would still cause my high-speed data logging to stall.
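For scale, 30 MHz single-bit SPI can move at most 30 MHz / 8 ≈ 3.75 MB/s, so 3.3 MB/s is already close to 90% of the bus limit. A rough sketch of the kind of raw multi-block write (CMD25) that keeps per-block overhead low; sd_command() is a hypothetical helper, and spi_xfer()/sd_wait_ready() are the helpers sketched earlier:

/* Stream 'count' 512-byte blocks starting at 'sector' with CMD25
 * (WRITE_MULTIPLE_BLOCK). 'sector' is a block number on SDHC; standard-
 * capacity cards expect a byte address. */
static int sd_write_multi(uint32_t sector, const uint8_t *buf, uint32_t count)
{
    if (sd_command(25, sector) != 0x00)      /* R1 response, 0x00 = OK */
        return -1;

    while (count--) {
        if (sd_wait_ready(500) != 0) return -1;
        spi_xfer(0xFC);                      /* data token for CMD25 */
        for (int i = 0; i < 512; i++)
            spi_xfer(*buf++);
        spi_xfer(0xFF); spi_xfer(0xFF);      /* dummy CRC */
        if ((spi_xfer(0xFF) & 0x1F) != 0x05) /* data response: accepted */
            return -1;
    }

    if (sd_wait_ready(500) != 0) return -1;
    spi_xfer(0xFD);                          /* stop transmission token */
    return sd_wait_ready(500);               /* final busy, can be the long one */
}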

#10

So you can write a 2GB card in 10 minutes. This sounds quite good.

How long does it take to write a single 2GB file from your PC?

Does this take longer if you do not re-format the card first?

David.

#11

Writing to the SD card from a PC should be much faster than 3.3 MB/s.

I can test it now.

#12

I think on my Transcend 16GB microSDHC Class 6 I can get about 10 MB/s write speed from the PC.

On my SanDisk 2GB microSD card I can get about 4 MB/s.

#13

So if the PC device driver can achieve this throughput, so can you.

I am sure that there are publicly available algorithms, and the Linux code should show you how it is done. The manufacturers may well have public code for you.

At an inspired guess: keep a FAT allocation table in SRAM and write directly to sectors, alternately filling buffers while flash erases/writes are awaiting completion, then write the updated FAT when the file is closed.

You will always have a risk of device failure and a consequently corrupt filesystem. You just have to weigh up the pros and cons.

David.
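A minimal sketch of that buffering idea, using two ping-pong halves so the sampling side never waits on the card; all names and sizes here are made up for illustration, and sd_write_block() stands in for whatever raw write routine is used:

#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE 512

/* Hypothetical raw single-block write (see the earlier sketches). */
extern void sd_write_block(const uint8_t *blk);

static uint8_t           buf[2][BLOCK_SIZE]; /* fill one half, write the other */
static volatile uint8_t  fill_idx  = 0;      /* half currently being filled */
static volatile uint16_t fill_pos  = 0;
static volatile bool     half_full = false;

/* Called from the sensor/ADC interrupt: never blocks on the card. */
void log_push(uint8_t sample)
{
    buf[fill_idx][fill_pos++] = sample;
    if (fill_pos == BLOCK_SIZE) {
        fill_pos  = 0;
        fill_idx ^= 1;                       /* switch halves */
        half_full = true;                    /* other half ready to be written */
    }
}

/* Called from the main loop: this side tolerates the card's busy stalls. */
void log_poll(void)
{
    if (half_full) {
        half_full = false;
        sd_write_block(buf[fill_idx ^ 1]);   /* write the completed half */
    }
}

With only two 512-byte halves this rides out a stall no longer than the time it takes to fill one half, so at high sample rates a 100-200 ms busy period will still overrun it; that is where a larger FIFO or external SRAM (suggested further down) comes in.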

#14

Quote:

I have no idea what algorithms are used to wear level the writes to the FAT table sectors. I can make some inspired guesses though.

HDDs and "intelligent" memory arrays don't do anything specifically for FAT usage but more generically just write then read data to make sure it's committed without error. In the case of HDDs, if there's an error they then go through a 64-step process that involves things like increasing the write current, seeking away and back and trying again (to correct head alignment errors), and so on. With flash arrays, while there may be a repeat write attempt, the chances are that the controller just adds an entry to its remap table and writes the data to a sector in the spare sector pool immediately.

FAT is a very bad system in terms of "sector wear". Things like the FSInfo sector (6/7, is it?) in particular get hammered hard, as does the root cluster.

But only people like WD, Seagate, and SanDisk ever really need to worry about any of this (oh, and PVR manufacturers!) - as an end user/programmer you needn't worry about it, as it's all hidden inside the firmware of the intelligent controllers.

#15

Hi, the microSD card on my UC3B is used through the SPI interface; the PC uses 4-bit SD mode, so the PC should be up to 4 times faster than SPI mode.

This sudden long delay is really bad for a real-time sensor application on a device with small SRAM.

#16

Quote:

This sudden long delay is really bad for a real-time sensor application on a device with small SRAM.

Add an SRAM buffer to the project perhaps?
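Rough sizing for such a buffer: it has to hold everything that arrives while the card is stalled, so roughly input rate × worst-case stall, plus headroom to catch up afterwards. The figures below are examples, not measurements:

#include <stdint.h>

/* Worst-case FIFO size needed to ride out a card stall:
 * bytes >= input_rate_bytes_per_s * stall_s, plus some headroom. */
static uint32_t fifo_size_needed(uint32_t input_rate_Bps, uint32_t stall_ms)
{
    /* +50% headroom so the writer can also catch up after the stall */
    return (input_rate_Bps / 1000u) * stall_ms * 3u / 2u;
}

/* Example: 1 MB/s of sensor data and a 200 ms stall -> about 300 kB of buffer,
 * which is why external SRAM comes up for small-SRAM parts. */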

#17

Hehe, not at the moment.

Probably in the future.

#18

clawson wrote:

HDDs and "intelligent" memory arrays don't do anything specifically for FAT usage but more generically just write then read data to make sure it's committed without error.

Actually, the newest drives you can buy now use 4 kB sectors internally, still accessible as 512-byte sectors as usual. And they have the following optimization regarding partitioning and filesystems.

Usually the MBR is at LBA #0 and the first partition starts at LBA #63 (because back in the days before LBA, CHS addressing had a maximum of 63 sectors per track, numbered from 1 to 63). A partitioning scheme that starts at LBA #63 is not aligned to 4 kB, so the HDD manufacturers provide a jumper that adds a +1 offset: internally sector #0 is unused, sector #1 is presented to the operating system as LBA #0, and thus LBA #63 is internally sector #64 (in 512-byte sector numbers). So the hard drive aligns it internally.

Also, I've seen some USB sticks that use very odd partition offsets, not LBA #63, probably to optimize wear leveling. Or were they memory cards? I don't remember anymore.
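The arithmetic behind that +1 trick: 63 × 512 = 32256 bytes, which is not a multiple of 4096, while 64 × 512 = 32768 = 8 × 4096, exactly on a 4 kB boundary. A trivial check, with example values only:

#include <stdio.h>

int main(void)
{
    unsigned long lba = 63;                     /* classic first-partition start */
    printf("LBA %lu: offset %lu bytes, 4k-aligned: %s\n",
           lba, lba * 512, (lba * 512) % 4096 == 0 ? "yes" : "no");

    lba = 64;                                   /* with the +1 internal offset */
    printf("LBA %lu: offset %lu bytes, 4k-aligned: %s\n",
           lba, lba * 512, (lba * 512) % 4096 == 0 ? "yes" : "no");
    return 0;
}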

#19

http://elm-chan.org/docs/mmc/mmc...
At the bottom is a benchmark run on an ATmega.

"Dare to be naïve." - Buckminster Fuller

#20

I know I'm 2 years late, but I found this thread because I had the same problem (long busy delays). I found that these delays are gone if you format the card with a different tool than the Windows one. I think it's because the MS formatting tool is not fully compliant with the SD standard. There are many free tools on the net to correctly format SD cards. Once I used one of them, the speed was OK!

#21

A full format writes zeroes (or similar) to the whole device, while a quick format just writes enough of the file system to make it look empty.

Formatting does not remove the problem; it is just a fancy name for initializing the device with a file system, and it does not do anything special to make it work.

As a full format writes the whole device, it might help the wear leveling a bit, but the delays will come back.