[TUT] [SOFT] Writing reliable code

Today's tutorial is on writing reliable code.
So what do we mean by reliable? Hopefully, code that can be shown to perform as we expect and to handle exceptional situations in a consistent and repeatable manner. Unfortunately, doing this can be a lot harder than it seems, but there are some basic techniques that can go a long way towards achieving this goal.

Garbage In, Garbage Out
Someone coined this term many years ago and it still holds true. In the context of AVRs and microcontrollers, what is the 'garbage'? Obviously the source of this garbage is our input signals - these could be switches, analog values, serial data, input from a human and so on.
For something as simple as a switch, consider the following:
1/ contact bounce - mechanical switches do not make/break cleanly.
2/ what is the maximum rate at which we expect the switch to be operated?
3/ transients - these happen infrequently - that's why they're called transients! They could be due to lightning activity, someone using a transmitter or mobile phone nearby, a motor or other high-current device starting, electrostatic discharge, poor connections and so on.
4/ do we have a means of controlling the operation of the switch? Say the switch in question is a limit switch and we control the motor. If we command the motor to operate, we expect the limit switch to activate within a given time. If not, we have an error (see the sketch after this list).
5/ Similar to #4: if we have a switch or sensor to measure the speed of a motor that we also control, then when we command the motor to operate we should expect the switch/sensor to give us regular signals. If not, we have an error.
6/ Induced noise - if we run a length of wire parallel to another wire carrying current, some of that signal will be induced into ours. This is something we wish to avoid, but sometimes it is unavoidable. In this instance, our input signal has unwanted signals, or noise, impressed upon it. If there is too much noise, it will render our signal useless.
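For points 4 and 5, the key is that because we command the motor ourselves, we know when the switch must respond. Here is a minimal sketch in C of that idea, assuming an AVR with the limit switch on PD3 (active low) and the motor driven from PB0 - the pin choices, tick rate and timeout are only for illustration and would need adapting to your hardware.

#include <avr/io.h>
#include <stdint.h>
#include <stdbool.h>

#define LIMIT_TIMEOUT_TICKS 200       /* 200 x 10 ms = 2 s allowed to reach the switch */

static uint16_t travel_ticks;
static bool     motor_running;
static bool     motor_fault;

static bool limit_switch_active(void) { return !(PIND & (1 << PD3)); }

void motor_start(void)
{
    PORTB |= (1 << PB0);              /* energise the motor */
    travel_ticks  = 0;
    motor_running = true;
}

/* Call once per 10 ms tick while the motor runs.  Because we commanded the
 * motor ourselves, we know the limit switch MUST operate within a set time;
 * if it doesn't, something is wrong (broken wire, jammed mechanism...).   */
void motor_supervise_tick(void)
{
    if (!motor_running)
        return;

    if (limit_switch_active()) {
        PORTB &= ~(1 << PB0);         /* reached the limit - normal stop */
        motor_running = false;
    } else if (++travel_ticks >= LIMIT_TIMEOUT_TICKS) {
        PORTB &= ~(1 << PB0);         /* timed out - stop and flag a fault */
        motor_running = false;
        motor_fault   = true;
    }
}

The same structure works for the speed sensor in #5 - reset a counter each time the sensor pulses, and flag a fault if the counter ever reaches the timeout.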

When we read the port pins or do an analog to digital conversion, we're only looking at the input state for a very small amount of time. If, in this small window of time, a transient occurs, then what we read may not be true and correct. To combat this, we need to read the input a number of times over a period of time. The more we do this, the less likely it is that a fast transient will affect a number of readings. Thus, by doing this we gain a much clearer picture of what the true status is.
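Here's a minimal sketch in C of that repeated-sampling idea applied to a pushbutton. It assumes an AVR with the switch on PD2 (active low) and that something calls debounce_tick() roughly every 5 ms - the names and numbers are only illustrative.

#include <avr/io.h>
#include <stdint.h>
#include <stdbool.h>

#define DEBOUNCE_COUNT 4   /* 4 consecutive matching samples = 20 ms at 5 ms/sample */

static uint8_t change_count;
static bool    stable_state;          /* last accepted (debounced) state */

/* Call this from a timer interrupt or the main loop every ~5 ms.
 * Returns the debounced state of the switch on PD2 (active low). */
bool debounce_tick(void)
{
    bool raw = !(PIND & (1 << PD2));  /* raw pin reading, inverted for active low */

    if (raw != stable_state) {
        /* Raw reading disagrees with the accepted state: count how long it persists. */
        if (++change_count >= DEBOUNCE_COUNT) {
            stable_state = raw;       /* the change has lasted long enough - accept it */
            change_count = 0;
        }
    } else {
        change_count = 0;             /* reading agrees - any earlier glitch is forgotten */
    }
    return stable_state;
}

A change of state is only accepted after four consecutive samples agree, so a transient that corrupts one or two samples is simply ignored.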

So how fast should we sample our inputs? To answer this we need to consider what is acceptable for our input and what is not. If it is a human-operated pushbutton switch and the input changes much faster than 20 times per second, then we've got an exceptional human operating our switch. If we have a sensor to measure RPM and the motor it is connected to can only do 10,000 RPM, then a signal showing 20,000 RPM must be an error.
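That last check costs almost nothing to code. Here's a rough sketch - the 10,000 RPM limit and the idea of tolerating a couple of bad readings before flagging a fault are just illustrative choices:

#include <stdint.h>
#include <stdbool.h>

#define MAX_PLAUSIBLE_RPM 10000u      /* the motor physically cannot exceed this */
#define MAX_BAD_READINGS  3           /* tolerate the odd glitch, not a run of them */

static uint16_t last_good_rpm;
static uint8_t  bad_count;
static bool     rpm_sensor_fault;

/* Pass each new RPM measurement through a plausibility check before using it. */
uint16_t filter_rpm(uint16_t raw_rpm)
{
    if (raw_rpm <= MAX_PLAUSIBLE_RPM) {
        last_good_rpm = raw_rpm;      /* plausible - accept and remember it */
        bad_count = 0;
    } else if (++bad_count >= MAX_BAD_READINGS) {
        rpm_sensor_fault = true;      /* repeated impossible values: a real problem */
    }
    return last_good_rpm;             /* callers always get the last plausible value */
}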

With analog signals, the sample speed affects the quality of the information we get and may introduce 'aliasing' if we don't sample fast enough. For the purposes of this discussion I'll concentrate on slow-moving values, like those from a temperature sensor, otherwise we'll get bogged down in topics that are a subject in themselves. So, for a temperature sensor, we might sample once per second, as the thing we're measuring (air temperature, for example) doesn't change that quickly.

For non-digital inputs like our temperature sensor we should also consider values that are out of range. Say a broken wire on our temperature sensor caused the input to go to maximum, and the value we process says the air temperature is 150C - this is clearly wrong, so we should test that values fall within the expected range and flag those that don't.
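Here's a minimal sketch of that range test in C, assuming a 10-bit ADC and a sensor that never legitimately reads right at either rail - the limits and the scaling are made up for illustration:

#include <stdint.h>
#include <stdbool.h>

/* 10-bit ADC: 0..1023.  A healthy temperature sensor sits well inside this
 * range; readings pinned near either rail suggest a broken or shorted wire. */
#define ADC_MIN_VALID  20
#define ADC_MAX_VALID  1000

typedef enum { SENSOR_OK, SENSOR_OPEN_OR_SHORT } sensor_status_t;

sensor_status_t check_temp_reading(uint16_t adc_value, int16_t *temp_c)
{
    if (adc_value < ADC_MIN_VALID || adc_value > ADC_MAX_VALID)
        return SENSOR_OPEN_OR_SHORT;  /* out of range - flag it, don't use it */

    /* Hypothetical scaling for illustration: 0..1023 maps to -20..+80 C. */
    *temp_c = (int16_t)((adc_value * 100L) / 1023) - 20;
    return SENSOR_OK;
}

A reading pinned at a rail almost always means a wiring fault rather than a real temperature, so we report a sensor fault instead of passing on a bogus value.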

Code Craft
Writing code is a subject in itself. It seems everyone who writes code has a different opinion of what's right and wrong. Humans have been writing code for computers as we know them for over 50 years; many mistakes have been made, and many studies of those mistakes have been done. This means we have a wealth of gained knowledge to benefit from - this is the basis of computer science. For everybody involved in writing code, it is an advantage to have some knowledge of the current state of the art. For those who don't have the time to do a computer science degree, a little reading will help. One book I've found that discusses and covers most of what we encounter in programming is 'Code Complete' by Steve McConnell, ISBN 1-55615-484-4. Steve doesn't lay down the law and say 'you must do it this way', but rather discusses things to avoid, gives the reasons why, and presents some evidence.

Well-crafted code is easy to read and thus easier to debug. The potential for fewer defects means more reliable code.

Frequently on the forums, people post their code and say 'it doesn't work - please tell me where the error is'. The common problem I see is that the piece of code does more than one thing. Say the code reads a sensor, scales the value, then outputs it - we have three operations here, and the problem could be in one or any combination of them. So rather than solving one problem at a time, we could be facing up to seven (2^3 - 1) different combinations of faults. Clearly, if the code had been written so that the three operations were separate subroutines, the poster could have isolated the subroutine(s) where the problem was located and solved it themselves. This is called 'reducing complexity'. If you think a piece of code is difficult to understand, there's a fair chance it actually is to most people.
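As a sketch of what that separation might look like (the function names and the placeholder bodies are purely illustrative):

#include <stdint.h>

/* Split the work into three small routines so each can be tested in
 * isolation rather than debugging one large lump of code.           */

static uint16_t read_sensor(void)
{
    return 512;                          /* placeholder: really start/read the ADC */
}

static uint16_t scale_reading(uint16_t raw)
{
    return (uint16_t)((raw * 500L) / 1023);  /* placeholder: 10-bit counts to 0..500 units */
}

static void output_value(uint16_t value)
{
    (void)value;                         /* placeholder: really send to a display or UART */
}

void process_channel(void)
{
    uint16_t raw    = read_sensor();     /* step 1: acquire  */
    uint16_t scaled = scale_reading(raw);/* step 2: convert  */
    output_value(scaled);                /* step 3: present  */
}

Now if the displayed value is wrong but the raw reading is right, the fault has to be in scale_reading() - one small routine to inspect instead of the whole lump.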

more to come.... work in progress