Software testing for RTC time setting

6 posts / 0 new
#1

Software testing is a bit new to me, and I was hoping to get some better ideas on how to test an RTC routine in an embedded system. There is a serial command-line interface to set the date and time, and I want to stress it and find out whether there are bugs. My thinking is that I would try every valid day in each month, plus one day past the end of each month; for February I would go one past the end in a non-leap year (29) and one past the end in a leap year (30). For years, I would cover the entire supported range, plus values just before our start year and just after our end year. For the time, I would try seconds > 59, minutes > 59, and hours > 23. You get the idea. The problem is I am not sure this is enough, since I am a newbie at testing. I am normally the one making the bugs, not testing for them. :)
Have any of you any other insights? I would like this to be rigorous.
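The boundary cases described above can be enumerated in a few lines of Python (the helper names here are mine, just a sketch, not any particular framework's API):

```python
import calendar

def date_boundary_cases(year):
    """Yield (year, month, day, expect_valid) tuples covering the last
    valid day of each month and the first invalid day after it."""
    for month in range(1, 13):
        last = calendar.monthrange(year, month)[1]  # number of days in this month
        yield (year, month, last, True)        # at end of range
        yield (year, month, last + 1, False)   # just past end of range

def time_boundary_cases():
    """Boundary values for the HH:MM:SS fields."""
    return [
        (23, 59, 59, True),   # maximum valid time
        (24, 0, 0, False),    # hour out of range
        (0, 60, 0, False),    # minute out of range
        (0, 0, 60, False),    # second out of range
    ]

# February has 29 days in a leap year, 28 otherwise, so both cases appear
cases_2024 = list(date_boundary_cases(2024))  # leap year
cases_2023 = list(date_boundary_cases(2023))  # non-leap year
```

Each tuple would then be sent over the serial interface and the device's accept/reject response compared against the `expect_valid` flag.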

#2

Given that your serial line is significantly faster than 50 baud, you might succeed in a finite amount of time, but only because the range you need to test happens to fit within a human life span. In general, things are not that conveniently laid out.

In my opinion, it should be enough to test every allowed day within one given month, then some invalid days, and then repeat the same test for February in a leap year and in a non-leap year. There is no need to test each month, and no need to test each day within each month within each allowed year. The same goes for testing the time: just test some valid values, then some invalid ones.

I mean, why on earth should setting April 31st succeed in one year and fail in another? The result *must* be consistent, so there is no need to loop over every single year...

[edit] Thinking twice: in the above I assumed that you will send your date/time information as a string in a certain format. If, on the other hand, you simply send a large number representing the seconds since a given start date/time, then you might indeed need to test your conversion algorithm somewhat more thoroughly.
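For that seconds-counter case, one option is to check the device's conversion against Python's datetime as a reference implementation. A minimal sketch, assuming a hypothetical device epoch of 2000-01-01 (substitute your firmware's actual epoch):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical start epoch for the device's RTC counter -- an assumption
# for this sketch; use whatever epoch your firmware actually defines.
RTC_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def reference_conversion(seconds):
    """Reference: seconds since RTC_EPOCH -> datetime, via Python's library."""
    return RTC_EPOCH + timedelta(seconds=seconds)

def check_device_conversion(device_convert, seconds):
    """Compare a device-side conversion (supplied here as a callable that
    returns a (Y, M, D, h, m, s) tuple) against the Python reference."""
    expected = reference_conversion(seconds)
    got = device_convert(seconds)
    assert got == (expected.year, expected.month, expected.day,
                   expected.hour, expected.minute, expected.second), \
        f"mismatch at {seconds}: {got} != {expected}"
```

In a real rig, `device_convert` would send the counter value over the serial link and parse the date/time the device reports back.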

Einstein was right: "Two things are unlimited: the universe and the human stupidity. But I'm not quite sure about the former..."

#3

The first rule is that you can never test enough. How rigorous do you want to be? If the software fails, what is the estimated cost impact? If you have to test someone else's code, I'd suggest a code review is a good start. It helps to work to a coding standard; I use MISRA. If code is easy to read, it has been found that defects are easier to spot and there tend to be fewer of them. I look for things like atomicity problems that may not show themselves immediately: look closely at ISRs and shared variables. If you have a software spec, compare the code with the spec. No spec? For your RTC the spec should be pretty easy, as most of us know the expected ranges of values.

Step through each line of code with a debugger. Check for range and logic errors.

Write functions that exercise the various functions under test.

If you've got this far, then you can do black-box testing, where an external device exercises the various functions.

If you want to be really rigorous, then analyse the compiler output.

#4

I agree with Kartman. Devise a suite of black-box tests. The general advice is: 'well out of range', 'at end of range', 'at end of range ±1', 'within range'.

Obviously you can add some other test cases. For example, you need to check that 23:59:59 on 28 February 2100 rolls over to 1 March 2100 (2100 is divisible by 100 but not by 400, so it is not a leap year). However, I will be long gone before then.
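Those boundary categories are easy to generate mechanically. A sketch in Python (the dictionary keys are just my labels for the classic boundary-value categories):

```python
def boundary_values(lo, hi):
    """Classic boundary-value set for a closed range [lo, hi]:
    well out of range, just past the ends, at the ends, and within."""
    return {
        "well_below": lo - 100,
        "below": lo - 1,        # at end of range - 1 (invalid side)
        "low_end": lo,          # at end of range
        "middle": (lo + hi) // 2,  # within range
        "high_end": hi,         # at end of range
        "above": hi + 1,        # at end of range + 1 (invalid side)
        "well_above": hi + 100,
    }

# Apply the same pattern to each time field
seconds = boundary_values(0, 59)
minutes = boundary_values(0, 59)
hours = boundary_values(0, 23)
```

The valid/invalid expectation for each category then drives the pass/fail check against the device's response.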

David.

#5

Testing all the "edge cases" and "corner cases" is a good start: try every rollover where seconds go 59 -> 0 and carry into minutes, then minutes into hours at 11:59:59 or 23:59:59, and so on.
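The rollover expectations can be computed from a reference library rather than by hand. A sketch in Python, where each start value would be written to the RTC, the test waits one second, and the read-back is compared against what `datetime` says should come next:

```python
from datetime import datetime, timedelta

# Rollover corner cases: set the RTC to each of these, wait one second,
# and check the device against the datetime-computed expectation.
rollover_starts = [
    datetime(2023, 6, 30, 23, 59, 59),   # month rollover
    datetime(2023, 12, 31, 23, 59, 59),  # year rollover
    datetime(2024, 2, 28, 23, 59, 59),   # leap year: 28 Feb -> 29 Feb
    datetime(2100, 2, 28, 23, 59, 59),   # 2100 is NOT a leap year
]

for start in rollover_starts:
    expected = start + timedelta(seconds=1)
    # here you would set the RTC to `start`, wait one second,
    # read it back, and compare against `expected`
    print(start, "->", expected)
```

This keeps the expected values out of the test data entirely, so a typo in a hand-computed rollover can't mask a device bug.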

When you design the software (we all do, don't we?), the design spec (or perhaps a separate document) should include a complete set of acceptance tests. This is particularly necessary when developing software for someone else, as it forms part of the contract: you can later say "but we showed that the software/device was working exactly as defined in the acceptance test spec". It's up to the customer to review that spec and define further tests if they aren't happy that it catches everything; it forms the basis of how they accept that the product is finished and working.

You should spend as much time on the test spec as you do on the main design. In our company each project has an assigned tester who writes the spec and then performs all subsequent testing independently of the designers and implementers. The spec also forms the basis of your regression testing, so that when you move from issue V2 to V3 the entire thing (or selected parts) is retested to show that fixes/improvements haven't broken anything that was previously working (i.e. a regression).

#6

Thanks for all the good suggestions. Now to get to the test code!

I should mention I am writing the test code using Robot Framework and Python. RF puts out a very nice HTML output file of results.
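For anyone following along, a Robot Framework test library is just a Python class whose methods become keywords. A minimal sketch, with the `settime` command syntax and the transport interface being assumptions (in real use the injected `transport` would be a pyserial port; substitute your firmware's actual command format):

```python
class RtcTestLibrary:
    """Keywords Robot Framework can call. The `transport` object
    (anything with write()/readline(), e.g. a pyserial Serial) is
    injected so this sketch stays hardware-free."""

    def __init__(self, transport):
        self._io = transport

    def set_rtc(self, date_str, time_str):
        """Send a hypothetical 'settime' command and return the reply line."""
        self._io.write(f"settime {date_str} {time_str}\r\n".encode("ascii"))
        return self._io.readline().decode("ascii").strip()

    def reply_should_be(self, reply, expected):
        """Fail the test case if the device reply does not match."""
        if reply != expected:
            raise AssertionError(f"expected {expected!r}, got {reply!r}")
```

A test case would then read along the lines of `${reply}=  Set Rtc  2024-02-29  23:59:59` followed by `Reply Should Be  ${reply}  OK`, and the boundary-value suites above supply the data.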