Is there a way to close a file before an unexpected reset?

#1

Hi folks!

I am using an SD card (one file only, FAT16) with an ATmega32 over SPI, and I am thinking about unexpected conditions such as a power failure.

In my application (a kind of data logger) I open a file and it remains open for quite a while (some hours). So I have been wondering: how can I close the file before an unexpected power failure?

I know that I can detect the source of a reset by looking at the MCUSR register, doing something like this:

  #define POWERONRST  'P'
  #define EXTERNALRST 'E'
  #define BROWNOUTRST 'B'
  #define WATCHDOGRST 'W'

  char rstsource;

  ...
  if (MCUSR & 1)                  // POR = Power-on Reset
    rstsource = POWERONRST;
  else if (MCUSR & 2)             // EXR = External Reset
    rstsource = EXTERNALRST;
  else if (MCUSR & 4)             // BOR = Brown-Out Reset
    rstsource = BROWNOUTRST;
  else                            // WDR = Watchdog Reset
    rstsource = WATCHDOGRST;
  //
  MCUSR = 0;                      // Init flags
  ...

So, "rstsource" will contain a character indicating a source of the reset.

To handle this unexpected situation, I need to call the function that closes the file, say "CloseFile();".

My questions are:

1 - Is detecting the unexpected reset via the BOD (through an interrupt) the best way to do this?

2 - How much time do I have to perform the "CloseFile();" call (using an external 11.0592 MHz crystal)?

Thank you in advance.

To teach is to learn twice. So, what do you think about learning again?

#2

Your "else if" statement will not work correctly, because several bits in MCUSR can be set at the same time.
I would add a capacitor to the circuit, monitor the supply voltage, and close the file before the reset happens.

/Martin.

#3

How long you have depends on the capacitance you have and on the current consumption. The internal write operations of the SD card take far longer than actually transmitting the command to the card, so it also depends heavily on the SD card itself: how much current it draws and how long it needs to finish the write in the worst case. I guess you don't really have a specification for your card.
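
As a rough back-of-the-envelope figure (the numbers here are made up, just to show the relationship): the hold-up time of a capacitor is roughly t = C * dV / I. A 1000 uF cap discharging from 5.0 V down to a 4.0 V brown-out threshold while the board plus card draw 60 mA gives about 1000 uF * 1.0 V / 60 mA = 17 ms. SD cards can draw current spikes on the order of 100 mA while programming their flash, so measure your own worst case before trusting a number like that.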

What would be even better is to make all the data writes synchronous, so that every time the write function completes you know at least that part of the data is OK, and that the file length in the directory entry and the (cluster) file allocation table have been updated. Well, as long as the card has had enough time to actually commit the writes to flash.

So how much data do you write, and how often? Maybe every time you want to write to the card you could open the file, write, and then close it, so that the file spends most of its time closed?

#4

Quote:

What would be even better is to make all the data writes synchronous, so that every time the write function completes you know at least that part of the data is OK

Exactly - don't keep the file open. Just open it for a few milliseconds at a time each time you want to write. The only "clever" bit in doing this is knowing how to SEEK to the end of the file to append the next thing to be written. So open(), seek(), write(), close() and wait.
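
A rough sketch of that sequence with ChaN's FatFs API (the file name and record buffer are placeholders, error handling is trimmed, and f_mount() is assumed to have been done at start-up):

  #include "ff.h"

  /* Append one record, then close the file again within a few milliseconds. */
  FRESULT append_record(const void *rec, UINT len)
  {
      FIL     fil;
      UINT    bw;
      FRESULT fr;

      fr = f_open(&fil, "log.txt", FA_WRITE | FA_OPEN_ALWAYS);
      if (fr != FR_OK)
          return fr;

      fr = f_lseek(&fil, f_size(&fil));       /* seek to the end = append    */
      if (fr == FR_OK)
          fr = f_write(&fil, rec, len, &bw);  /* bw = bytes actually written */

      f_close(&fil);                          /* file spends its time closed */
      return fr;
  }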

#5

Quote:

Exactly - don't keep the file open. Just open it for a few milliseconds at a time each time you want to write. The only "clever" bit in doing this is knowing how to SEEK to the end of the file to append the next thing to be written. So open(), seek(), write(), close() and wait.

I went at it a bit differently in my SD card app.

I keep the file open all the time. When any anomaly occurs--and it could be at any time and it doesn't necessarily involve power--I "abandon" the file and begin again with the "next" fresh log file.

(Have you considered the user popping out the card? ;) )

Now, I don't lose any data--except perhaps the write in progress during a sudden power loss, or when the card is popped out mid-write. (Actually pretty unlikely if you look at the timing in my app.)

It is up to the processing program (on a PC in my case) to detect invalid/partial records at the end of the file and discard them.

I seem to get good data integrity by a) writing only full 512-byte sectors as my logging records, and b) doing an f_sync() after every successful f_write(). Yes, it takes longer--a few milliseconds more on each write IIRC. I'd think less time, though, than the open/seek/write/close approach, and I don't see that open/seek/write/close is any better. YMMV.

Quote:

f_sync

The f_sync function flushes the cached information of a writing file.
...
The f_sync function performs the same process as the f_close function, but the file is left open and read/write/seek operations on it can continue. This is suitable for applications that keep files open for a long time in write mode, such as data loggers. Performing f_sync periodically, or immediately after f_write, can minimize the risk of data loss due to a sudden blackout or an unintentional disk removal. However, f_sync immediately before f_close has no advantage, because f_close performs f_sync internally. In other words, the difference between the two functions is whether the file object is invalidated or not.
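
For reference, the write-then-sync step then boils down to something like this (a simplified sketch, not my actual application code; fil is the FIL object kept open for the whole logging session):

  #include "ff.h"

  FRESULT log_sector(FIL *fil, const BYTE sector_buf[512])
  {
      UINT    bw;
      FRESULT fr = f_write(fil, sector_buf, 512, &bw);

      if (fr == FR_OK && bw == 512)
          fr = f_sync(fil);   /* commit data, FAT and dir entry; file stays open */

      return fr;
  }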

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#6

Quote:

Have you considered the user popping out the card?

Isn't that an argument for only having the file open for a few milliseconds at a time?

#7

Quote:

Isn't that an argument for only having the file open for a few milliseconds at a time?


Nope, not IMO. IME f_sync() does the job fine, forcing the actual write and moving the file pointer. I don't have the complete timings handy, but as a guess the f_open() for the re-open and the f_seek() would take some milliseconds, so f_sync() versus f_close() is probably nearly a wash.

Now, for me the full 512 bytes is fairly convenient for my logging records anyway. I just decided early on to always put fixed-length, full-sector records into my file. It is my understanding that this makes the writing process somewhat more efficient; the gurus will have to confirm or object. It also makes the PC-side processing a little cleaner, as my app is continually being enhanced with added record types, so a mistaken or otherwise "unrecognized" record type is easily skipped and processing continues with the next. As storage space is not a concern--1 GB holds 10 years of data even on the busiest instance of this app, and we are using 2 GB cards--I tried to lay out the system described above in the most straightforward manner.
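
Just to illustrate, a fixed-length one-sector record could be laid out something like this (field names are only an example, not my actual record format):

  #include <stdint.h>

  /* A record-type byte up front lets the PC-side parser skip anything it
     does not recognize and resync on the next 512-byte boundary. */
  typedef struct {
      uint8_t  record_type;   /* which kind of record this sector holds */
      uint8_t  flags;
      uint16_t sequence;      /* spot missing or partial records        */
      uint32_t timestamp;
      uint8_t  payload[504];  /* pad the record to exactly 512 bytes    */
  } log_record_t;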

"Do something before ... unexpected event" is an oxymoron--if you can anticipate it it isn't unexpected. :twisted:

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

#8

Another solution is to preallocate a large file and then rewrite the next sector each time 512 bytes have been buffered. Nothing else has to be updated, and on a crash you lose at most the last unfilled buffer. Also, it can be done with a smaller footprint, as you don't need FatFs write capability (use it to position for reading and then do the sector write yourself).
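
A rough sketch of that idea (it assumes the preallocated file is contiguous and that its first absolute sector number has been worked out once at start-up, e.g. from the FAT layout; disk_write() is the FatFs low-level disk I/O function):

  #include "diskio.h"

  static DWORD log_first_sector;   /* absolute LBA of the preallocated file,
                                      determined once at start-up           */
  static DWORD next_sector;        /* sector index within that file         */
  static BYTE  log_buf[512];       /* one full sector of buffered log data  */

  void flush_log_sector(void)
  {
      /* The FAT and directory entry never change, so a crash costs at most
         this one not-yet-written buffer. */
      if (disk_write(0, log_buf, log_first_sector + next_sector, 1) == RES_OK)
          next_sector++;
  }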

#9

Wow! Thank you all for your replies.

I think I need some time to digest all the information.
Sorry, my mind is a little tired (I'll stay in power saving mode for a few hours).

I will return soon.

To teach is to learn twice. So, what do you think about learning again?