How do you visualize your code?


When you guys are creating a program for an AVR, how do you visualize your code? Do you imagine a flowchart, or is it some other nonlinear mindset?


I am not consciously aware of visualizing anything when I start or continue work on a program. I think that may be common among experienced developers.

Assuming you want to know how experienced developers do it, maybe a more useful question is something along the lines of "if you were to document your program visually (e.g. using charts), what would you use?"

In my case the answer would most likely be a class diagram. If that wasn't enough, I would probably add state transition diagrams (or maybe flowcharts).

Usually, diagrams are not needed though - the diagrams I'm talking about have a direct one-to-one relationship with the source code. With experience, you just keep coding pieces as you add them to this imaginary diagram. When reading code, you know what the diagram would look like as soon as you see the source.

AVR programs are so simple that knowing the hardware and how it maps to the software is more challenging than keeping track of the logic itself.

Sid

Life... is a state of mind


I don't know about others, but I rarely draw a flow chart these days. Drawing flow charts was the usual practice back when I programmed in Fortran...

I'll use a flow chart these days if I need to sort out a complex control algorithm, to help define the flow and make it easier to spot "special cases" and their impact on the system. (Special cases = sensor failure, switch failure, out-of-range inputs, etc.)

I spend a lot more time deciding my pin usage and module usage, (Timer/Counter #1, ADC, etc.), than I do drawing flow charts these days.

JC

Edit, as the scope of the responses seems to have grown.

I, too, use a combination of the Top Down and Bottom Up approaches. The initial main() just calls a bunch of empty subroutines that get filled in. The sensor interfaces, LCDs, communications links, etc., are all written and debugged as separate tasks of the overall project.

I, too, use block diagrams, but for the hardware, not a classic flow chart for the software.

Last Edited: Thu. Sep 20, 2012 - 04:14 PM

A drawing program is very useful in creating visual aids for code development. These visual aids can be flowcharts, state diagrams, timing diagrams, etc. The amount of detail contained in these documents depends on how complex the task is.

First comes the concept in your mind. Start at a high level, converting the concept to a list of tasks that will need to be performed. Define task priorities. Some tasks may need to be interrupt driven, and some can be done whenever you "get around to it". Make your tasks as independent as possible so they can be written and debugged independently. Of course, you may feed a task, function, or subroutine a set of simulated data to check its results.
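For instance (a hypothetical sketch - the scaling task and the 5000 mV reference here are made up for illustration), a task written as a pure function with no hardware access can be fed simulated readings and checked on a PC long before the real sensor is wired up:

```c
#include <stdint.h>

/* Hypothetical task: convert a raw 10-bit ADC reading to millivolts,
   assuming a 5000 mV reference. Pure function, no register access,
   so it can be exercised with simulated data off-target. */
uint16_t adc_to_millivolts(uint16_t raw)
{
    return (uint16_t)(((uint32_t)raw * 5000UL) / 1023UL);
}
```

Feed it the corner cases (0 and full scale) plus a mid-scale value, and compare against hand-computed results before it ever meets the hardware.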

Then start laying out a main loop while asking yourself some questions.

What is the order of the tasks?
Some tasks may depend on results from other tasks. Define the dependencies.
Some programs are entirely interrupt driven. That is, the main loop may do nothing.

--- but let's get back to documentation.

As the development continues, you can document each task to the level you feel comfortable with. If you don't have any idea how a task should work, and how it fits into the rest of your program, then you ***STOP***, and draw it out. Wandering off into a coding abyss with no idea how this piece of code is going to work, or how it works with other parts, will lead to a mess - guaranteed.

When it's documented to a level where you think you understand it, then code it. Often enough, you will find out you didn't understand it quite as well as you thought you did, and will have to make some changes.

I like Visio to document software. eBay has some old versions for sale at around $10.00, and some student versions of 2003, 2007, and 2010 for $50.00+. There are also some freeware flowcharting and drawing packages as well.

Don't skip the documentation phase. It may seem like a waste of time, but it will actually save you time in the long run.


Pen and paper for me. I work a lot with graphics and when (for example) you are trying to work out how to read NV12 and blit the output into YUYV (say) there's nothing quite like drawing out a grid of boxes to represent a small (say 8 by 8 pixel) memory map and then another for the destination and determine what goes where. After iterating a few pixels you can usually see the pattern and how to encode the loops to process an (x,y) blit.
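To make that concrete, here is a minimal sketch of the loops that fall out of such a box diagram, assuming the usual layouts (NV12 = full-resolution Y plane followed by interleaved half-resolution UV; YUYV = Y0 U Y1 V per pixel pair) and even width and height; the function name is my own:

```c
#include <stdint.h>

/* Minimal NV12 -> YUYV blit sketch. Each UV row is shared by two Y
   rows, and each U/V pair is shared by two horizontal pixels - the
   pattern you see after iterating a few pixels on paper. */
void nv12_to_yuyv(const uint8_t *y_plane, const uint8_t *uv_plane,
                  uint8_t *out, int w, int h)
{
    for (int y = 0; y < h; y++) {
        const uint8_t *uv_row = uv_plane + (y / 2) * w; /* half vertical res */
        for (int x = 0; x < w; x += 2) {
            out[y * 2 * w + 2 * x + 0] = y_plane[y * w + x];     /* Y0 */
            out[y * 2 * w + 2 * x + 1] = uv_row[x];              /* U  */
            out[y * 2 * w + 2 * x + 2] = y_plane[y * w + x + 1]; /* Y1 */
            out[y * 2 * w + 2 * x + 3] = uv_row[x + 1];          /* V  */
        }
    }
}
```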

Even when processing bits (like you need to combine bits 23..17 from that variable with bits 14..5 from this one) I find it helps to sketch out a picture of how the data will be processed. (Actually this example is too trivial and could just be done in your head.)
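As a sketch of that trivial case (the result layout here is invented purely for illustration): each field is shift-and-masked out, then OR-ed into position, with the 7-bit field from the first variable placed above the 10-bit field from the second:

```c
#include <stdint.h>

/* Combine bits 23..17 of 'a' (7 bits) with bits 14..5 of 'b'
   (10 bits) into a 17-bit result, a's field in the high bits.
   The target layout is made up - the shift/mask pattern is the point. */
uint32_t combine_fields(uint32_t a, uint32_t b)
{
    uint32_t hi = (a >> 17) & 0x7F;   /* bits 23..17 -> 7-bit field  */
    uint32_t lo = (b >> 5)  & 0x3FF;  /* bits 14..5  -> 10-bit field */
    return (hi << 10) | lo;
}
```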

Another technique that works for me is to pseudo-code main() with the high-level steps that are going to be needed:

main() {
 open_input_file()
 create_output_file()
 while(!eof(in)) {
  read_line()
  search_pattern()
  if (found) {
    write_output()
  }
 }
 close_in()
 close_out()
}

then worry later about how those things will actually be coded. If complex they may well become a separate function and a call to it in main(). If simple the step may just be replaced with a single C statement:

 FILE * fin = fopen("infile.txt", "rt");

etc. What's more, the "complex", separate functions themselves may then become a series of pseudo-coded steps later replaced by implementation, and so on.

I think the CS guys would call this "top down design".

Of course there's also the other technique of "bottom up" design. If you know you have an LCD or a keyboard you can write the driver code for that low level function in almost complete isolation. You then put that in the "toolbox". Later, when you want to bolt together an app that has both LCD and keyboard in the design, you open the toolbox and take those things (already tried, tested, working) out and then simply rustle up a bit of code to bolt them together. Arduino may be the ultimate AVR example of this technique.

I think you can combine both. I may have LCD and keyboard in the box already so I know I can code:

main() {
 init_LCD()
 init_kbd_timer();
 while(1) {
  if (kbd_data_available()) {
    read_kbd_char()
    output_to_LCD()
  }
 }
}

(this looks awfully like an Arduino "sketch" ;-))

The C++ fans would likely point out that you can have a lot more in your "toolbox" and then simply derive classes from what you've got, making small, application-specific changes to arrive at your solution. One day I'll probably see the utility of that for a few K of AVR code ;-)


I have mixed approaches.

On the one hand, it is important to take a good look at a block diagram of what needs to be controlled and how - selecting different input connections, say, and routing/processing data through the board to the output connections. It does not matter whether this is just pushbutton information or analog video or digital audio or whatever.

On the other hand, it is also important to make the simplest things first, on which more complex things are based. For example, getting an LED to blink, then making it blink at the correct speed, having a UART for debug control and debug info, then the rest of the GPIO drivers to control some simple chips, and the I2C/SPI drivers needed to control more complex chips.

Then come the chip-specific drivers, which use the I2C/SPI/GPIO layers to put those chips into the required states or modes, or to return some status info.

Then make the thing autonomous, by combining all the above, to make it react to user actions or changes in input signals.


Forgot to mention that another good approach to design is to start with a specification. Initially this may simply say

Quote:
It's going to be an alarm clock

Then you add detail:
Quote:
It's going to be an alarm clock
There'll be a 6 digit display for hh:mm:ss
There will be three buttons: one to set H, one to set M and one to set the alarm

Then you add yet more detail:
Quote:
It's going to be an alarm clock
There'll be a 6 digit display for hh:mm:ss
The display will be switchable between 12h/24h
There will be three buttons: one to set H, one to set M and one to set the alarm
If held the buttons will fast repeat
There will be two alarms settable

Then you start to think about how it might actually be implemented:
Quote:
It's going to be an alarm clock
There'll be a 6 digit display for hh:mm:ss
The display will be switchable between 12h/24h
There will be three buttons: one to set H, one to set M and one to set the alarm
If held the buttons will fast repeat
There will be two alarms settable

4K flash probably enough but develop in the 16K device just in case
about 20 bytes RAM required
Probably use HD44780 8x2 display
Buttons will each be single connection to CPU and use internal pull-ups


Then you can scour your micro datasheets and pick the MCU. If you find ones that support some kind of battery backed "RTC mode" then the CPU alone may be enough. Otherwise perhaps a DS1307 or something?

Continue this process adding more and more detail to the design spec. Eventually you may end up with something that reads like a user manual. Now you can simply code for each function that is offered in the user manual. If there are ambiguities in the design correct them with more detail and then code for that.

At the end of the process you have a full set of design notes, the outline of a user manual and some hopefully well structured code.

The wrong approach is to let your hardware designer go off and have 500 PCBs made using his favourite micro, only to find later that it cannot achieve the battery consumption you need, or that he only provided enough I/O for 3 buttons and you later realised that 4 would actually be required - so now you spend days/weeks trying to think of some clever solution to make the 500 PCBs work somehow.


Well, if I visualize my code, it is a nightmare...
If people ask me to look at their code in R, I try to find out what the results should be. For instance, if they compute standard quantities like the max, mean and median, I think they might be asked to compute an r.m.s. too, and the only way that can be achieved (at least, one that is at the same time flexible and easy) is to return a list - that way, if things evolve, the former and the later versions can be used safely.
I also try to figure out or compute what the result might be (if it is not too complicated), or what its order of magnitude should be, in numerical computations...

For devices linked to electronics, I try to figure out how to show their results with one/two/more LEDs - which are cheap - or with a scope - I have one - or with a loudspeaker...


I tend to use pseudo-code also. It's the same as a flowchart but easier to write out. I switch between high level (do_motor_control()) to low level (if(PIND & START_MASK)) as the whim strikes me.
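A sketch of how that mix might harden into C (the names START_MASK, do_motor_control() and poll_start_button() are inventions here, and the pin state is passed in as a snapshot rather than read from PIND so the fragment also runs on a PC):

```c
#include <stdint.h>

#define START_MASK 0x04    /* hypothetical: start button on port bit 2 */

static int motor_running;  /* stands in for the real motor state */

/* High-level step - still a stub, to be refined as the whim strikes */
static void do_motor_control(void)
{
    motor_running = 1;
}

/* Low-level step already concrete: on the AVR this test would read
   (PIND & START_MASK); here the pin snapshot is a parameter. */
void poll_start_button(uint8_t pins)
{
    if (pins & START_MASK)
        do_motor_control();
}
```

The point is that a high-level stub and a finished low-level pin test can coexist in the same function until each has been refined.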


For small stuff there is no visualization of code - except the obvious textual visualization. This might hold true for many larger systems also.

[Sidenote - I once heard a theoretical physicist being asked how he visualized things like the four dimensions of space-time. He didn't. Or rather - his visualization was the formulas. Not that I'm an Einstein - quite the opposite. But anyway..]

If I do use graphical things for visualization, thinking, specification, communicating or whatever, I guess it would be things like class diagrams and sequence diagrams (for those, see e.g. this). State machines are obviously very often nice to visualize as a state-transition diagram.

Once upon a time, before modern software engineering methods, object-orientation etc. got a hold, one method of structured programming was "JSP". The methodology is probably more or less outdated now, but one thing I took with me was the fact that any structured algorithm (as in only sequence, selection and iteration allowed - no GOTOs) can be expressed in a JSP diagram: a hierarchy or tree of sequence, selection and iteration nodes. Once upon a time I often thought about e.g. the code inside a function in this way. Nowadays I have more or less filtered out the graphical visualization, and "see the graph in the textual representation" (i.e. in the source code itself) - you more or less need only tilt your head to the left and read R2L... :wink:
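As an illustration (a made-up function, not taken from anyone's post): the shape of any structured function maps one-to-one onto such a tree - read top to bottom for sequence, with iteration and selection as nested nodes:

```c
/* The whole function body is a sequence node: initialise, iterate,
   deliver result. The for loop is an iteration node; the if inside
   it is a selection node. Tilt your head and the tree is there. */
int count_positives(const int *v, int n)
{
    int count = 0;                /* sequence: initialise */
    for (int i = 0; i < n; i++) { /* iteration */
        if (v[i] > 0)             /* selection */
            count++;
    }
    return count;                 /* sequence: deliver result */
}
```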

As of January 15, 2018, Site fix-up work has begun! Now do your part and report any bugs or deficiencies here

No guarantees, but if we don't report problems they won't get much of  a chance to be fixed! Details/discussions at link given just above.

 

"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]


Quote:

The C++ fans would likely point out that you can have a lot more in your "toolbox"

You can have a lot more in your "toolbox".

[There. Now we got that out of the way. :D]



Quote:

Well, if I visualize my code, it is a nightmare...

Everyone writing code should do it as if they will be knocked down by a bus tomorrow and the next guy has to pick it up and run with it in a matter of hours/days. Sadly this is the story of most commercial software development (well, OK, not THAT many bus fatalities, but people move on for lots of other reasons). Using structured design before you implement the code will help you write code that's easy to read and maintain. Programs that just "grow organically", with "this bit bolted on to account for X" and this bit "here" and "here" to account for Y, do not make for maintainable code!


I was taught to use finite state machines/flow charts when I do my programs...

but this rarely works for me. I usually don't know everything that needs to be in the program until the end, so the state machines always change and create more of a mess.

I visualize my code by what I need to get done.
My first question then is: what do I need to get done?
Blink an LED from a button? OK..

If I need to wait for a button press, I do that first.
If (button)

Now what to do when the button fires...
{
LED=1;
}

and so on.

I take one thing at a time, because I'll usually be changing it all around anyway.

Since I don't have a debugger, I put huge delays in portions of code I need to test, to freeze the code there and see what it's doing on the hardware. An LCD should be here soon to fix that..

One thing I notice is after a tough night of working on code, when I go to sleep, I'm seeing C.
All over the place. Running from it like a nightmare lol.

But as I learn, it's getting better.

One thing I like about CodeVision is that it has a notepad right next to the code window, so it makes jotting down pseudocode easy and fast; I know what I need to get done, and it's remembered for next time. It helps quite a bit to have a programming notepad open right next to your code window - ideas can be saved easily.

if (Learning_AVRs)
{
DOH();
}


Quote:

but this rarely works for me. I usually don't know everything that needs to be in the program until the end, so the state machines always change and create more of a mess.

Back in my day we were told to design the thing first, then implement it when the design was clear. Sure, there might be very small corrections required in the design that were only spotted at the point of implementation, but one shouldn't have code that just grows organically as you bolt bits on as you think of new stuff - all that should have been caught at the design phase.

In fact in the old days there were two levels of programmers: Systems Analysts were the highly paid ones with brains who actually did all the clever design work. Then "programmers" were a much lower pay grade whose job could basically be done by trained chimps - they just coded the designs the analysts had produced. Presumably there was something wrong with this approach, as it doesn't seem to be what's done these days, but I wonder.

In fact I wonder if it's because a lot of programmers these days are self taught rather than being given a formal course of instruction? (or even reading a book which seems to be a dying skill since you can now Google everything).


I used to use flow charts a LOT. A recent project implementing a complex protocol stack needed one, because the specifications were so obtuse and there were lots of timing constraints.

Mostly, these days, I start with a clear set of specifications that leads to an overall structure (a mental flow chart?), then fill in that framework with the details. This does not always work well, as in the protocol stack case. One of the things that helps me is modularization - I don't go as far as C++, but carefully designed functions can be really helpful for dividing a big project up into manageable pieces.

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 


Quote:
you can now Google everything

I think it's because even though we can google everything it's still not easy to understand for people with limited resources or education.

I can read over the same thing ten times and not understand it... until I try it for myself..

I'm just a hands-on kind of person though. Everyone is different in the way they understand and learn. Especially across multiple language barriers.

I can imagine handing my grandmother the timer pages from the mega88 datasheet... and asking her to count something. Her best bet would be Google and lots of luck.



Interestingly, Jack Ganssle wrote in his latest Embedded Muse about design methodologies (that's the real subject of this thread) in embedded designs:

Jack Ganssle wrote:
In the embedded space, UML has a zero percent market share.

In the embedded space, the Capability Maturity Model (CMM) has a zero percent market share (other than CMM1, which is chaos).

The Shlaer-Mellor process tags right along at zero percent, as does pretty much every other methodology you can name.

Rational Unified Process? Zilch. Design patterns? Nada.

(To be fair, the zero percent figure is my observation from visiting hundreds of companies building embedded systems and corresponding with thousands of engineers. And when I say zero, I mean tiny, maybe a few percent, in the noise. No doubt an army of angry vendors will write in protesting my crude approximation, but I just don’t see much use of any sort of formal process in real embedded development).

There’s a gigantic disconnect between the typical firmware engineer and methodologies. Why? What happens to all of the advances in software engineering?

Mostly they’re lost, never rising above the average developer’s horizon. Most of us are simply too busy to reinvent our approach to work. When you’re sweating 60 hours a week to get a product out the door it’s tough to find weeks or months to institute new development strategies.

Worse, since management often views firmware as a necessary evil rather than a core competency of the business they will invest nothing into process improvement.

But with firmware costs pushing megabucks per project even the most clueless managers understand that the old fashioned techniques (read: heroics) don’t scale. Many are desperate for alternative approaches. And some of these approaches have a lot to offer; properly implemented they can greatly increase product quality while reducing time to market.

Unfortunately, the methodology vendors do a lousy job of providing a compelling value proposition. Surf their sites; you’ll find plenty of heartwarming though vague tales of success. But notably absent are quantitative studies. How long will it take for my team to master this tool/process/technique? How much money will we save using it? How many weeks will it shave off my schedule?

Without numbers the vendors essentially ask their customers to take a leap of faith. Hard-nosed engineers work with data, facts and figures. Faith is a tough sell to the boss.

Will UML save you time and money? Maybe. Maybe even probably, but I’ve yet to see a profit and loss argument that makes a CEO’s head swivel with glee. The issues are complex: tool costs are non-trivial. A little one-week training course doesn’t substitute for a couple of actual practice projects. And the initial implementation phase is a sure productivity buster for some block of time.

Developers buy tools that are unquestionably essential: debuggers, compilers, and the like. Few buy methodology and code quality products. I believe that’s largely because the vendors do a poor job of selling – and proving – their value proposition.

Give us an avalanche of successful case studies coupled with believable spreadsheets of costs and time. Then, Mr. Vendor, developers will flock to your doors, products will fly off the shelves, and presumably firmware quality will skyrocket as time-to-market shrinks.

What do you think? Turned off – or on – by methodology tools? Why?


I think if you only design things you know fully,
you'll never design some of your better things.
Have to reach outside your boundaries and test yourself.
Never know what you can do. ;)
Gotta try everything and not be stuck to one ideal process.

I was unable to visualize any code at all when I designed this robot that won best of show. Had no idea if it would work at all.

Do everything by the books if you must, but don't forget to add your own twist. Somebody might come along who's never read a book in their life but yet can outdo you.



In technical school (Portland Community College, Portland, Oregon, USA, mid 1980s) computer and electronics students were taught to use the Warnier-Orr method of program/algorithm diagramming.
The web documentation on W-O found through Google is not great, but not bad. I still use W-O for anything with moderate complexity - for example, decoding the scan codes of a PS/2 keyboard. Don't forget to use really big sheets of paper. And use a light pencil with a big eraser, because when 'walking through' the W-O diagram, it will probably occur that whole sections need to be moved or changed.


When I'm designing software (which in my job is more than a single program), I often find it helps to think about how the data flows around and is manipulated. It isn't exactly a flow chart. Actually, now that I'm trying to explain it, it is probably more like a circuit diagram. Of course, I rarely, if ever, draw these out. It is only an internal representation.

In my mind, the "state" of my program is defined in small chunks. "I know that between this point and that point, these things are true". I can be wrong, which often is an indication of a bug, rather than of my own misunderstanding. Both happen, of course.
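A sketch of that habit made explicit in code (the function and its names are hypothetical): the "things that are true between this point and that point" can be written down as assertions, so being wrong shows up as a failed assert rather than a mystery bug:

```c
#include <assert.h>

/* Hypothetical example: average of n samples. The asserted and
   commented facts are exactly the small chunks of program "state"
   described above. */
int average(const int *samples, int n)
{
    assert(samples != 0 && n > 0);  /* precondition: data to work on */

    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    /* between here and the return: sum is the total of all n samples */

    return (int)(sum / n);
}
```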

Also, I spend more time visualizing my goal, rather than my design. I've got enough experience in software design, that the pieces often just "fall into place".

In short, there is no "secret" to becoming an Expert, in any field. One's own mind develops an "expertness" quality with experience. I read a book (Pragmatic Thinking and Learning) which references a study showing that 10 years is the average amount of time it takes to become "expert" at most things.

Another interesting observation is that while experts can create "rules of thumb" to aid the novice or amateur, the expertise is in knowing when to break those rules (without necessarily being able to articulate why). For example, an experienced driver may break a traffic law in order to avoid an accident, however most traffic laws are there to prevent accidents.

I'm not sure if this has strayed too far from what your question really asked, but that was what came to mind ;-)


Quote:

the expertise is in knowing when to break [the] rules (without necessarily being able to articulate why)

I wish I had written that.

I'm a book nerd. I will look for the book you referenced. Thank you.
