The Fallacy of “Design Before Implement”?


Greets, Freaks!

 

My current program has sort of grown, willy-nilly, and without much formal design. It was originally intended to be “just a little upgrade” from an existing program, leveraging everything that was muddled through in that first version.

 

Well, this second edition has become an almost total re-write. And two core routines, one to read data from a FIFO in a sensor, and the other to write to a uSD memory card, recently went through what was intended to be a “rewrite by design”.

 

I listed the requirements. I listed the various conditions they needed to respond to. I paid special attention to timing. I drew nice, and, I thought, thorough, flow charts. Only then was code for two functions committed to (virtual) paper. I was elated. All I need to do, now, is some thorough testing to verify that it does what it is supposed to do.

 

Oh, how naive! The cursory “well, does it crash?” test was passed. No biggie, I thought. Then, I started exercising them with various conditional cases. Ohhh, how sad. I had missed several of these conditional cases, and the algorithms that the functions implemented were really incorrect. For all practical purposes, both now need to be rewritten. But, even now, have I identified ALL of the conditional cases? I am not sure! I think I have, but I am not certain and I fear that I won’t know until testing after the rewrite of the rewrite happens.

 

I am now starting to believe that for most of us, especially those not part of a team in which individuals can cross-check each other, the idea of being able to implement a fully designed process is more of a fantasy than reality. Oh, you can probably do it for a BLINKY program, but for anything with any complexity, I have real doubts. It's not like the program, itself, is a mess. It is driven by an orderly FSM. Events and timing are taken care of with the care of an RTOS. The big problem seems to be that of the fundamental complexity of the thing.
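By "orderly FSM" I mean something like a table-driven dispatcher. This is only an illustrative sketch; the states and events here are hypothetical stand-ins, not my actual ones:

```c
/* Hypothetical states and events, for illustration only. */
typedef enum { ST_IDLE, ST_READING, ST_WRITING, NUM_STATES } state_t;
typedef enum { EV_FIFO_READY, EV_BUF_FULL, EV_WRITE_DONE, NUM_EVENTS } event_t;

/* One row per state, one column per event; each entry is the next
   state. Events that don't apply leave the state unchanged. */
static const state_t next_state[NUM_STATES][NUM_EVENTS] = {
    /* ST_IDLE    */ { ST_READING, ST_IDLE,    ST_IDLE    },
    /* ST_READING */ { ST_READING, ST_WRITING, ST_READING },
    /* ST_WRITING */ { ST_WRITING, ST_WRITING, ST_IDLE    },
};

static state_t state = ST_IDLE;

void dispatch(event_t ev) {
    state = next_state[state][ev];
}
```

The table is orderly, all right; the trouble is that the table has to be complete, and every empty-looking cell is really a decision.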

 

Maybe my mental capacity is too limited, or too aged, or too …. something? I can design electronic circuits in a snap and I have been doing embedded programming, off and on, for 30 years, so I’m not a newbie. But, something here just isn’t coming through. I guess that I just don't know how to manage complexity?

 

<edit> I guess that one of the conclusions that one can draw from this experience is that a good design requires absolutely perfect knowledge. And, that is something that I am unlikely, ever, to possess. </edit>

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

Last Edited: Wed. Feb 13, 2019 - 08:07 AM

My understanding is that this is the sort of thing that "Agile" ( as recently discussed here: https://www.avrfreaks.net/forum/agile-software-development) is supposed to address.  Not so much to eliminate "formal design", but to more easily permit and even encourage re-design when you inevitably discover flaws in what you started out with.  It follows that you should start writing and testing code "earlier", because that's the best way to find flaws in the design.

 

But that's a rough guess.  I retired before it became enough of a "thing" that they would offer training or heavily support a particular sub-style...

 


ka7ehk wrote:
but for anything with any complexity,
We do it on projects with 100+ engineers, 50,000 source files, tens of MB of code, so yes: if done methodically it's a technique that works very well. Your mileage may vary.

 

One of the key things is developing the unit tests from the requirements/design, so that you are testing each module against all the requirements that were originally created. Of course you do have to get the requirements right. On each project we have an "architect" who is principally responsible for creating the requirements in partnership with our customers.


ka7ehk wrote:
But, even now, have I identified ALL of the conditional cases? I am not sure! I think I have, but I am not certain and I fear that I won’t know until testing after the rewrite of the rewrite happens.
Fear not ... be aware.

Testing shows the presence, not the absence of bugs. - Edsger W. Dijkstra

from http://www.ganssle.com/tem/tem360.html#quotesandthoughts

Formal methods can prove the absence of defects, though that's a somewhat daunting computer science course (easier now than 25y ago).

Static analysis can cover cases that won't be covered by testing.

In lieu of the above, do what a CPLD or FPGA designer knows (Boolean algebra, combinatorial digital logic), though the solution space can be large (though usually sparse).

ka7ehk wrote:
I am now starting to believe that for most of us, especially not part of a team in which individuals can cross-check each other, the idea of being able to implement a fully designed process is more of a fantasy than reality.
PSP - Personal Software Process

Discipline for Software Engineering: The Complete PSP Book

The Software Process Dashboard | The Software Process Dashboard Initiative

ka7ehk wrote:
Events and timing are taken care of with the care of an RTOS.
... until the application I implemented mis-operated the RTOS (deadlock, process/task/thread starvation)

An event framework is an alternative to a RTOS; analogous to the event system in XMEGA and unified memory AVR.

ka7ehk wrote:
Maybe my mental capacity is too limited, ...
An untruth, as written by one who has a Ph.D. (you)

ka7ehk wrote:
... or too aged, ...
It's not age, it's health.

Our brains are made of cholesterol and fueled by glucose (fats that are a match to your body, complex sugars with fiber [fruits and vegetables])

There's nutritional therapy for brain function.

ka7ehk wrote:
…. something?
Welcome ... we who can't place our finger on it.


 


E.W.Dijkstra Archive: Home page

New Book About SPARK 2014 - The AdaCore Blog

learn.adacore.com - Intro to SPARK

Modern Embedded Programming: Beyond the RTOS « State Space

 

edit: strikethru

edit2: PSP has an additional book as of '05; Introduction to PSP ('96) is also in Spanish.

✎ Books by Watts S. Humphrey

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Thu. Feb 14, 2019 - 03:47 PM

I'll also make the point (as I said the other day) that this site exists to correct the errors of others. I have simply lost count of the number of posts we've seen of both simple and complex software that has clearly been thrown together and grown organically, where someone just started with an empty main() and started to fill in the blanks as thoughts occurred to them. In time the complex plate of spaghetti got to the point where they simply couldn't shoe-horn in the extra new idea they just thought of.

 

Are you really saying that is a "better" approach to professional implementation than sitting down and actually thinking about the design first?!?


TDD - Test-Driven Development

MDD - Model-Driven Development

Sometimes the system engineer drops a whopper onto the software developers, "This is what I want ... implement it."

westfw wrote:
It follows that you should start writing and testing code "earlier", because that's the best way to find flaws in the design.
A principle is that a design must be executable.

One literally "walks" a design.

Likewise with specifications.

My method is different. I do not rush into actual work. When I get an idea I start at once building it up in my imagination. I change the construction, make improvements, and operate the device entirely in my mind. - Nikola Tesla

https://www.goodreads.com/work/quotes/4756-my-inventions-the-autobiography-of-nikola-tesla

 


Trends in Embedded Software Design « Barr Code

...

 

Trend 2: Complexity Forces Programmers Beyond C

(next to last paragraph)

Thus I predict that tools that are able to reliably generate those millions of lines of C code automatically for us, based on system specifications, will ultimately take over. 

...

MPLAB® Device Blocks for Simulink® - Developer Help

 

edit: Simulink

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Wed. Feb 13, 2019 - 04:49 PM

clawson wrote:
Of course you do have to get the requirements right.
Realistic is best effort.

Auditing Firmware Teams - The Embedded Muse 364

...

  • Poor elicitation of requirements. I can't stress this enough. While getting to 100% is tough to impossible, too many teams practically abdicate their responsibility to do a good job at this. The following chart shows what typically happens. LOC is the size of the program in lines of code, the second column lists typical number of pages of the requirements document, and the last shows the document's completeness:

...

 

"Dare to be naïve." - Buckminster Fuller


OK so here's a worked example:

 

https://www.avrfreaks.net/forum/...

 

I simply pick this thread because it happened to turn up this morning. This is not having a go at OP in that thread - just using it as a timely example. It's simple, perhaps too simple to represent a real product but the design techniques could still apply.

 

You can tell early on (from the obvious copy/paste error) that this stuff is just being "made up" in the C editor and already it has led to problems.

 

Take that sequence for reading a button that has been copy/pasted. The very fact that a significant block of code has just been duplicated should ring the alarm bells already. Programs that have sections of almost identical code repeated many times already say "no pre-implementation thought went into this". If someone had sat back and thought about this first they might have spotted the fact "I need a routine to read buttons", "I will use it for one button to increment a counter", "I will use it to read another button to decrement a counter".

 

But let's wind back. How should this have been implemented?

 

1) Specification. The customer (which might also be the implementor) needs to be pretty clear about what the "device" is actually going to do. For the purposes of illustration let's assume OP's current implementation IS the final goal so the spec is something like:

 

"A device to display a counter on three 7-seg digits with two buttons to adjust the counter up or down"

 

2) Requirements. OK so what do we actually require to implement that? Now there would be a whole section here about the requirements of the hardware but let's take that as read - we have an AVR, we have three 7-segs on a port, there are "commons" to select each digit in turn, and there are two buttons wired to PINB.0 and PINB.1. The software requirements would be something like:

 

2.1) require PORTB to be set as input to read the buttons

2.1.1) require PB0 to be interpreted as "count up" button

2.1.2) require PB1 to be interpreted as "count down" button

2.2) require PORTD to have 3 (active low) outputs to act as display commons

2.3) require PORTA to have 8 segment outputs (a..g and dp)

2.4) require button signals to be debounced for N ms (to be determined by experiment)

2.5) require decimal digit to 7-segment translation.

2.5.1) Segs a..g map to A0 .. A6

2.6) require 0 to 999 counter to be maintained

2.7) at 999 do not allow further increment of counter

2.8) at 0 do not allow further decrement of counter

2.9) require to split three digit counter into separate 100, 10 and unit digits for display

2.10) require single digit display to enable common and output one digit

2.11) require to multiplex digits so each is rewritten regularly to give the effect of all on at once

2.12) require to be able to increment/decrement counter at least 10 times per second

2.13) main loop to invoke one time initialisation then loop reading buttons and adjusting counter - also driving display

 

3) Design (numbers here relate to same section in (2)). Sketch out more detail of how each requirement may be realised

 

3.1) Only PB0/PB1 are used, as inputs, so either accept the power-on default for DDRB or deliberately set bits 1,0 low
 Examine the button logic/schematic - are pull-ups/downs provided, or should PORTB be written to enable them?

3.1.1) and 3.1.2) consider specific way to isolate bit 0 and bit 1 inputs

3.2) Set bits 4,5 and 6 of DDRD as output for common drive

3.3) Set whole of PORTA as output to drive segments

3.4) may need early prototyping experiments to determine best solution - just use delays in main loop or use timer interrupt polling (possibly better)

3.5) 7 segment display to use following digit shapes:
 

Assume segments are:

[Image: 7-segment display with segments labelled a..g]

 

(picture shamelessly stolen from the internet! )

 

Provide a look up function to take 0..9 (range checked) and return bit pattern to select these digits on PORTA

 

3.5.1) Lookup data will be for bits in PORTA:

  pgfedcba
0b00111111 - 0
0b00000110 - 1
0b01011011 - 2
0b01001111 - 3
0b01100110 - 4
0b01101101 - 5
0b01111101 - 6
0b00000111 - 7
0b01111111 - 8
0b01100111 - 9

0b01111001 - F

Return 0..9 pattern or 'F' if invalid input passed.

3.6) As counter is 0..999 the most appropriate type might seem to be uint16_t. However, as the lower bound is 0, int16_t is better still, as it gives an easy test for <0 to check the lower bound. Probably global, or consider passing by reference

3.7) logic such as:

counter++;
counter = (counter > 999) ? 999 : counter; // i.e. clamp to min(counter, 999)

3.8) logic such as:

counter--;
counter = (counter < 0) ? 0 : counter; // i.e. clamp to max(counter, 0)

3.9) consider function with three member struct return to give split to individual decimal digits

3.10) function(?) to be passed new value and common drive pin to display digit

3.11) display each in turn with small following delay - should delay be part of 3.10 ?

3.12) should button reading/counter adjustment be done via ISR to guarantee temporal requirement?

 

4) Implementation. It is common to give each requirement a unique ID number and then to show which parts of the code implement that requirement. Let's call them SWR_N.M to match the 2.M numbers in section 2:

// two possible approaches - either start at high level and implement top level
// logic then "fill in the blanks" or, per requirements, develop the support
// routines first (and test each in turn) then implement top level to pull
// together. There are arguments for both approaches but maybe start with
// top level logic:

int main(void) {
    uint8_t confidence_but0 = 0;
    uint8_t confidence_but1 = 0;
    split_t split_digits;

    init();

    while(1) {
        if (button_pressed(&PINB, 0, &confidence_but0)) {
            increment_counter();
        }
        if (button_pressed(&PINB, 1, &confidence_but1)) {
            decrement_counter();
        }
        split_digits = split(counter);
        display_digit(HUNDREDS, split_digits.hundreds);
        display_digit(TENS, split_digits.tens);
        display_digit(UNITS, split_digits.units);
    }
}

Peer review (or in this case me just changing my mind!) suggests it's possibly a bad idea to use a global counter - so pass by reference to the inc/dec functions:

int main(void) {
    uint8_t confidence_but0 = 0;
    uint8_t confidence_but1 = 0;
    static int16_t counter = 0; // SWR_2.6

    init();
    while(1) {
        if (button_pressed(&PINB, 0, &confidence_but0)) {
            increment(&counter);
        }
        if (button_pressed(&PINB, 1, &confidence_but1)) {
            decrement(&counter);
        }
        etc.

Functions / types to implement then:

typedef enum {
    HUNDREDS = 4, // these are actual pin numbers in PORTD for each 7-seg
    TENS = 5,
    UNITS = 6
} digit_e;
// satisfies SWR_2.1, SWR_2.2, SWR_2.3
void init(void) {
    DDRB &= ~((1 << 1) | (1 << 0)); // two button inputs
    PORTB = ((1 << 1) | (1 << 0)); // also pull-up (this line optional - according to h/w design)
    DDRD = (1 << 6) | (1 << 5) | (1 << 4); // three common drive lines
    PORTD = (1 << 6) | (1 << 5) | (1 << 4); // display drive default high (inactive)
    DDRA = 0xFF; // outputs a..g and d.p from bit 0 upwards
}
// satisfies SWR_2.4
uint8_t button_pressed(volatile uint8_t * port, uint8_t bit, uint8_t * confidence) {
    uint8_t retval = 0;

    if (!(*port & (1 << bit))) // active low buttons
    {
        *confidence = *confidence + 1;
        if (*confidence > 80)
        {
            retval = 1;
            *confidence = 0;
        }
    }
    else
    {
        *confidence = 0; // released: restart the debounce count
    }
    return retval;
}
// satisfies SWR_2.1.1, SWR_2.7
void increment(int16_t * count) {
    *count = *count + 1;
    *count = (*count > 999) ? 999 : *count; // i.e. clamp to min(*count, 999)
}
// satisfies SWR_2.1.2, SWR_2.8
void decrement(int16_t * count) {
    *count = *count - 1;
    *count = (*count < 0) ? 0 : *count; // i.e. clamp to max(*count, 0)
}
typedef struct {
    int8_t hundreds;
    int8_t tens;
    int8_t units;
} split_t;

// satisfies SWR_2.9
split_t split(int16_t count) {
    split_t retval;
    int8_t temp;

    retval.hundreds = count / 100;
    temp = count % 100;
    retval.tens = temp / 10;
    retval.units = temp % 10;
    return retval;
}
// satisfies SWR_2.10
void display_digit(digit_e dig, int8_t val) {
    PORTD &= ~(1 << dig);
    PORTA = convert_7seg(val);
    _delay_ms(1);
    PORTD |= (1 << dig);
}
// satisfies SWR_2.5
uint8_t convert_7seg(int8_t val) {
    uint8_t segs[] = {
        0b00111111, // - 0
        0b00000110, // - 1
        0b01011011, // - 2
        0b01001111, // - 3
        0b01100110, // - 4
        0b01101101, // - 5
        0b01111101, // - 6
        0b00000111, // - 7
        0b01111111, // - 8
        0b01100111, // - 9

        0b01111001  // - F
    };
    if ((val < 0) || (val > 9)) {
        val = 10; // will display 'F'
    }
    return segs[val];
}

And I think that about does it. It effectively implements the same thing as the OP did in that other thread but (a) it is modular, (b) as such each sub-function could be written in any order, by different people, and tested in isolation, and possibly most importantly, if (2) and (3) are written down somewhere there is a complete record of the requirements and design to aid the maintainer in 3 years when he comes back to this to apply a feature or bug fix.

 

The above is simple and is not rigorous. For a real project (2) and (3) are held in an immense database - many people use "IBM Rational DOORS" or similar for the upfront architecture, requirements and design.

 

One further benefit of this is that you can share (2) and possibly (3) with the customer who ordered this design.  In (2) they might not like my design for a 7-seg '6' in (2.5)/(3.5), so they could feed back their ideas into the requirements/design, and this could trigger an auto request for a re-implementation of the consequences of (2.5), and so on.

 

I like to believe my design of this code is "better" than OP's original attempt because (2) and (3) made me sit down and contemplate exactly what I needed to achieve and how.

 

Of course this doesn't just "fall out". You might find that (2) and (3) actually take 6 months to a year on something really complex. Even here I probably spent an hour on the two just to illustrate this point.

 

My final implementation looks something like:

#include <avr/io.h>      // AVR register definitions
#include <stdint.h>
#include <util/delay.h>  // _delay_ms() (F_CPU assumed defined by the build)

typedef enum {
    HUNDREDS = 4, // these are actual pin numbers in PORTD for each 7-seg
    TENS = 5,
    UNITS = 6
} digit_e;

typedef struct {
    int8_t hundreds;
    int8_t tens;
    int8_t units;
} split_t;

// satisfies SWR_2.1, SWR_2.2, SWR_2.3
void init(void) {
    DDRB &= ~((1 << 1) | (1 << 0)); // two button inputs
    PORTB = ((1 << 1) | (1 << 0)); // also pull-up (this line optional - according to h/w design)
    DDRD = (1 << 6) | (1 << 5) | (1 << 4); // three common drive lines
    PORTD = (1 << 6) | (1 << 5) | (1 << 4); // display drive default high (inactive)
    DDRA = 0xFF; // outputs a..g and d.p from bit 0 upwards
}
// satisfies SWR_2.4
uint8_t button_pressed(volatile uint8_t * port, uint8_t bit, uint8_t * confidence) {
    uint8_t retval = 0;

    if (!(*port & (1 << bit))) // active low buttons
    {
        *confidence = *confidence + 1;
        if (*confidence > 80)
        {
            retval = 1;
            *confidence = 0;
        }
    }
    else
    {
        *confidence = 0; // released: restart the debounce count
    }
    return retval;
}

// satisfies SWR_2.1.1, SWR_2.7
void increment(int16_t * count) {
    *count = *count + 1;
    *count = (*count > 999) ? 999 : *count; // i.e. clamp to min(*count, 999)
}

// satisfies SWR_2.1.2, SWR_2.8
void decrement(int16_t * count) {
    *count = *count - 1;
    *count = (*count < 0) ? 0 : *count; // i.e. clamp to max(*count, 0)
}

// satisfies SWR_2.9
split_t split(int16_t count) {
    split_t retval;
    int8_t temp;

    retval.hundreds = count / 100;
    temp = count % 100;
    retval.tens = temp / 10;
    retval.units = temp % 10;
    return retval;
}

// satisfies SWR_2.5
uint8_t convert_7seg(int8_t val) {
    uint8_t segs[] = {
        0b00111111, // - 0
        0b00000110, // - 1
        0b01011011, // - 2
        0b01001111, // - 3
        0b01100110, // - 4
        0b01101101, // - 5
        0b01111101, // - 6
        0b00000111, // - 7
        0b01111111, // - 8
        0b01100111, // - 9

        0b01111001  // - F
    };
    if ((val < 0) || (val > 9)) {
        val = 10; // will display 'F'
    }
    return segs[val];
}

// satisfies SWR_2.10
void display_digit(digit_e dig, int8_t val) {
    PORTD &= ~(1 << dig);
    PORTA = convert_7seg(val);
    _delay_ms(1);
    PORTD |= (1 << dig);
}

int main(void) {
    uint8_t confidence_but0 = 0;
    uint8_t confidence_but1 = 0;
    int16_t counter = 0; // SWR_2.6
    split_t split_digits;

    init();
    while(1) {
        if (button_pressed(&PINB, 0, &confidence_but0)) {
            increment(&counter);
        }
        if (button_pressed(&PINB, 1, &confidence_but1)) {
            decrement(&counter);
        }
        split_digits = split(counter);
        display_digit(HUNDREDS, split_digits.hundreds);
        display_digit(TENS, split_digits.tens);
        display_digit(UNITS, split_digits.units);
    }
}

Compare this to #1 in the other thread. Which would you prefer to be maintaining ?

Last Edited: Wed. Feb 13, 2019 - 05:59 PM

My lament was largely about an unrealistic expectation on my part. That expectation was that with careful planning, enumeration of requirements, consideration of conditional cases, and all, that I, as a single, isolated individual, COULD generate a couple of moderate-complexity algorithms that were substantially without error.

 

Shame on me; XX years of experience SHOULD have taught me that the expectation was fantasy!

 

When I was writing the conditional cases, I simply forgot, for example, that behavior had to change if B event occurred before A event instead of the usual A before B. It ought to have been written down. It's not a system design requirement but one that is driven by behaviors in other parts of the system that are there as a consequence of my earlier design choices. Simply forgot.

 

The big problem is that there are many such conditional cases through the system. They aren't there because of some fundamental operating requirement but because of earlier implementation choices that are not all well documented. One of the things is that the consequences of such choices are not always apparent at the time of choosing. Those consequences often only become apparent at some later time when testing reveals it.
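For what it's worth, the sort of ordering check I forgot could even be mechanised: feed each event ordering through a fresh copy of the logic and assert the spec holds either way. This is only a toy sketch; the events and the rule are hypothetical, not my actual system:

```c
/* Toy system: 'A' = sensor ready, 'B' = card ready. Assumed rule:
   logging starts once BOTH have arrived, in either order. */
typedef struct { int sensor_ready; int card_ready; int logging; } sys_t;

static void handle(sys_t *s, char ev) {
    if (ev == 'A') s->sensor_ready = 1;
    if (ev == 'B') s->card_ready = 1;
    if (s->sensor_ready && s->card_ready) s->logging = 1;
}

/* Run one event ordering through a fresh system and report the
   final logging flag, so each ordering can be checked separately. */
int final_logging(const char *seq) {
    sys_t s = {0, 0, 0};
    while (*seq) {
        handle(&s, *seq++);
    }
    return s.logging;
}
```

Checking "AB" and "BA" against the same expected outcome is exactly the case I failed to write down.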

 

Consider a not totally hypothetical case: Let's say, for example, that the time to open a new file in FatFs grows as the number of files on the medium grows (maybe this is true). I would have never expected that as a consequence of earlier choices. So, I don't look for it and it only becomes apparent when I happen (by accident) to have a particularly full memory card. Now, how is one to know about such a "fact" (if it is a fact)? As far as I can tell, the FatFs documentation is pretty silent about things like this; you would only know that it is present by (1) designing a test to reveal it, which is unlikely if you don't suspect it, or (2) you have prior knowledge from "experience", or (3) by accident.

 

This is simply one of the hurdles faced by the isolated developer. Even with all of the fantastic experience and knowledge that is here in Freaks, you can't tap into it if you don't understand that there is a problem that you need to know more about ... until you stumble on it by accident. This is one of the things that I am finding in comparison to other previous work situations. Before, there was always someone else who I could chat with, "bounce" ideas off, plumb his or her experiences and hunches. The power of two people in collaboration is far more than 2X of our individual "power" and I tend to forget that.

 

This little experience has simply served to remind me of how much more we are responsible for, and how much more methodical we need to be, when we work alone.

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


I remember reading about a manager who told his software developers to stop asking him for help with debugging their code.  Instead they were to imagine they had a companion (I think it was a duck) and if they had a difficult problem they couldn't solve, they were to explain their problem to the duck.

 

Nearly every time the developers subsequently tried explaining their problems to the duck, they would notice where they had gone wrong and discover an approach to fixing the code.

 

"Talk to the duck!"

 

--Mike

 


I used to talk to my wife...she is still my wife by the way.....even the blank look on her face was helpful to me.

 

But of course that's the reason why updates were invented; if one doesn't stuff things up first, how is one supposed to charge money for new versions or even the updates themselves?

John Samperi

Ampertronics Pty. Ltd.

www.ampertronics.com.au

* Electronic Design * Custom Products * Contract Assembly


avr-mike wrote:

I remember reading about a manager who told his software developers to stop asking him for help with debugging their code.  Instead they were to imagine they had a companion (I think it was a duck) and if they had a difficult problem they couldn't solve, they were to explain their problem to the duck.

 

Nearly every time the developers subsequently tried explaining their problems to the duck, they would notice where they had gone wrong and discover an approach to fixing the code.

 

"Talk to the duck!"

 

--Mike

 

 

I've never talked to a duck, but I have often found the solution while trying to explain the problem to somebody else. Sometimes, I'm composing a question here, and don't post it because I saw the bone-headed mistake.

 

"We trained hard... but it seemed that every time we were beginning to form up into a team, we would be reorganized. I was to learn later in life that we tend to meet any new situation by reorganizing. And a wonderful method it can be of creating the illusion of progress while producing confusion, inefficiency and demoralization." Petronius Arbiter, approx. 2000 years ago.


Have had same experience that Torby describes.

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


Sometimes I think that Monsanto and Other Companies are putting Roundup / GMO's and other crap in our food to make us dumb enough to not be able to organize or revolt but still smart enough to keep making money they can steal.

 

On the other hand.

It seems that you have a pretty complex application and have done a major rewrite / refactoring without much testing in between.

Something like that is bound to fail.

No normal human being can oversee the consequences of all the details in such a big project.

 

There are a lot of methods that are supposed to help with writing better software, but they all require frequent and early testing of small parts.

In your first post you write: 

ka7ehk wrote:
I listed the requirements. I listed the various conditions they needed to respond to.

But in #9 you write about a whole lot of undocumented and untested behaviour.

It's very difficult to rewrite an existing (badly written?) application into something "good".

And without more discipline and thought poured into it than in the first iteration, chances are it is not going to be much better.

How many man-hours did you put in the previous version?

Doing magic with a USD 7 Logic Analyser: https://www.avrfreaks.net/comment/2421756#comment-2421756

Bunch of old projects with AVR's: http://www.hoevendesign.com


I'd say the point is whether you can usefully delegate part of the implementation.  If you're doing it all on your own, it's not such a problem because you can easily reach back into 'some other module' and tinker with it to make the next one work better.  But if it's all 'compartments' where People {ABC} are working, independently, on modules {XYZ}, then you need to plan a great deal more.

 

Most of what I do I do pretty much by myself (stop sniggering - as far as AVR design is concerned!) and although I have some plan in mind going ahead, quite a bit of it is rough.  Partly because maybe I haven't really decided exactly how to do something yet - defining module interfaces, for example - and partly because I can keep an overview of 'what's going on' and if a little change here makes a huge benefit there I can cheerfully do so, even if it violates 'the plan'.

 

S.


Torby wrote:
Sometimes, I'm composing a question here, and don't post it because I saw the bone-headed mistake.
If I had a nickel for every time that's happened, I'd be posting this under the shade of a palm tree, while sipping on a margarita.

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


ka7ehk wrote:
The big problem is that there are many such conditional cases through the system. They aren't there because of some fundamental operating requirement but because of earlier implementation choices that are not all well documented. One of the things is that the consequences of such choices are not always apparent at the time of choosing. Those consequences often only become apparent at some later time when testing reveals it.

I find a lot of the issues stem from not exploring enough of the "what if" scenarios, often due to the "now we've detailed exactly what we need, let's do it" mindset. It might even be "easy" (not always) to thoroughly define what the requirements are... I need to read the RPM from the file, meeting format specification VDAT14E-G_REVC, & display it like (  ), within 500ms of button PANEL-7A_B4 being pressed, etc. Many paragraphs of wonderful details to meet are developed & then all will work as envisioned & we can pop the champagne. However... what if the file doesn't exist? What if the disc (remember those?) is corrupt? What if the bus logic is unresponsive? What if only half of the data is read before it quits due to ESD? What if the path is invalid? What if the response is delayed due to concurrent file accesses? What if the display requires a 3 second powerup before accepting any data? What if the button is stuck in the pressed position?

 

Some of the "what ifs" we've encountered so many times that we no longer overlook them, such as knowing the inherent additional requirements for button debouncing. Except those who don't know... and write their tales of woe.

We often become the woeful when we don't think about what to look out for. It seems like a tip-of-the-iceberg problem... the main "requirements" are really just the very beginning. Don't start coding there, even if those are exquisitely detailed.

 

===

A few years ago, after some initial debugging, I was very surprised to learn we were now set up for a big demo; all the big players had already been invited to HQ. I asked: had we ever built one of the proto PCBs that were being etched? Ah, NO. Had we ever tried running the beta software on any of the actual product hardware? Ah, NO. Had we ever had an actual prototype running? Ah, NO. Had we tried assembling all of the custom mech parts from the vendors? No. But management had determined that since the software seemed to be working on generic development boards and the proto PCBs were on order (thus "done" in their mind), we'd merely merrily combine them and have a demo in a week for the bigwigs. That's a different kind of missing what-ifs!

 

 

 

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


js wrote:
I used to talk to my wife...she is still my wife by the way.
Torby wrote:
I have often found the solution while trying to explain the problem to somebody else
I was watching re-runs of old episodes of House the other day. Anyone who watches this knows that he has a team of fellow doctors he usually bounces ideas off, but in this episode he was stuck on a plane over the Arctic, so he just picked three fellow passengers at random and had the usual "exchange of ideas" with them, which naturally triggered the solution anyway!


In one of his episodes House bounces his ideas off a janitor.

 

Doing magic with a USD 7 Logic Analyser: https://www.avrfreaks.net/comment/2421756#comment-2421756

Bunch of old projects with AVR's: http://www.hoevendesign.com


House.

So realistic.

We can't rule out Lupus!

 

It was thoroughly entertaining though.

Quebracho seems to be the hardest wood.


I find a lot of the issues stem from not exploring enough of the "what if" scenarios.

I always found the difference between CCITT protocol specifications (i.e. LAPB, X.25) and Internet protocol specifications (the TCP/IP-related RFCs) telling.

The CCITT protocols (and their conformance tests) are full of timeouts and specifications for nearly every possible error case. Implementations are full of configuration for timers that mostly never get used, because those errors are nearly impossible if both sides have correct implementations; properly operating equipment never runs into most of them. The RFCs, OTOH, are full of descriptions of how the protocol works when things are going right...

(as an example, X.28 has ... ~10 separate timeouts.  Telnet, which is probably the closest Internet equivalent, says nothing about what you should do if you send a negotiation, and the other side doesn't respond with the expected answer.)
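The CCITT habit of attaching a timer to every outstanding exchange is easy to sketch. A minimal version, assuming a free-running millisecond tick; `poll_negotiation` and its state names are hypothetical, not from any RFC or CCITT recommendation:

```c
#include <assert.h>
#include <stdint.h>

typedef enum { NEG_WAITING, NEG_ANSWERED, NEG_TIMED_OUT } neg_state_t;

/* Has the peer answered before the deadline? Timestamps come from any
 * monotonic millisecond tick; unsigned subtraction handles wrap-around,
 * treating "now within half the counter range past deadline" as expired. */
neg_state_t poll_negotiation(int reply_received,
                             uint32_t now_ms, uint32_t deadline_ms)
{
    if (reply_received)
        return NEG_ANSWERED;
    if ((uint32_t)(now_ms - deadline_ms) < 0x80000000u)
        return NEG_TIMED_OUT;
    return NEG_WAITING;
}
```

The caller polls this in its main loop and, on NEG_TIMED_OUT, takes whatever recovery the protocol defines (retry, abort, fall back to defaults) instead of waiting forever.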

 


clawson wrote:
You might find that (2) and (3) actually take 6 months to a year on something really complex.
wrt '(3)' :

Jack Ganssle's blog: Software Process Improvement for Firmware

January 21, 2019

[immediately before mid-page]

Fraction of budget spent on architecture

 

"Dare to be naïve." - Buckminster Fuller


How very confusing to re-use the SPI acronym!!

 

PS: "Engineering without numbers is not engineering - it's art." is something that should be printed on T-shirts! cheeky

Last Edited: Mon. Mar 25, 2019 - 01:52 PM

An addendum to this now month-old thread...

 

What I AM beginning to think is that there is a complexity ceiling for single-designer projects that is a lot lower than for multi-designer projects. This ceiling is, I suspect, not an impenetrable one and it certainly varies with the single-designer.

 

For example, consider what a single individual could do designing and building a bicycle from "scratch" vs. an automobile that meets safety standards and has an operating life of 100,000 km while transporting four occupants. Even if that single individual knows about vehicle electrical systems and power transmission and braking and air conditioning and all the details of ergonomics, he or she would have their hands full building that vehicle alone.

 

Same principle applies to embedded engineering. Blinking LED - no sweat! Making an embedded micro-server with WiFi, LoRa, ethernet, and keyboard interfaces running from solar-powered rechargeable batteries and a 100,000 hour MTBF? Whole different complexity, whole different ball game. Many of us, here, could design a blinky in our sleep and predict every resource that would be needed in the MCU that runs it. How many of us, by ourselves, could be that certain with the hypothetical micro-server? Not I!

 

As I rethink the variety of postings, above, (and they are all enlightening), I think this is the crux of what I was feeling when I wrote the initial post.

 

One of the major challenges for single-designers is to get a sense of just where that ceiling is. This is rather equivalent to asking oneself before taking on a consulting job: "Do I know (or can I quickly learn) everything that is needed for this job?" Technical knowledge isn't always much of a help here, because what is needed for such jobs is often beyond technical knowledge: organization, project management, purchasing, translating customer requirements into a system design. For many of us, self included, these things are far more difficult than tasks such as algorithm design or coding for failure cases.

 

Enough blather. The end conclusion, for me, is, I think, that single-designers do face significant challenges that members of design teams often don't recognize. And single-designers who came from a design-team environment often don't recognize these challenges, either, until it's (almost, if they are lucky) too late.

 

Cheers

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

Last Edited: Mon. Mar 25, 2019 - 04:49 PM

Jim,

I agree. I have tried a number of times to write a lot of things down before pulling the trigger, hitting the keyboard, and actually writing the code.

Now, I do not think I am a super programmer, but during the implementation phase it always turned out that things could be done better if I did them differently, or that things I had written down simply did not work as expected in the end because, as a one-man team, I had overlooked something.

What one misses as a one-man team is the sparring partner: having discussions, or a second pair of brains and eyes, to see these things up front.

 

What I do nowadays when I start on a project is write down the global direction I want to go, what I need to do, and how I think that might be possible, without going into details, and then just start implementing and changing things as I go.

Now, I do have a set of self-made libraries that let me get a basic program up and running fairly quickly, and from there I start to dig down and implement.

For instance, all my projects till now have involved a display and a keyboard (various configs). The display is either an HD44780 or a known graphical one (years back we rejected a batch of displays for use in customer projects, but they are good to use for private stuff). So the first thing I always do is get my base compiling. I have meanwhile made a base project, so that is just copy/paste, rename the project and solution file, then add the display driver and handler and see whether, after programming, it says "Hello Marcel glad to be back" spread over all the possible lines of the display. Then I add the UART (the last couple of projects did not use it, but that too is just copying and pasting a self-made lib that I know works), get the keyboard up and running, and then start implementing menus top-down and add special functions as I need them. And of course, when I think things need change or are better done another way, I just change them.

 

I fully understand that when one does that in a multi-person project it will create hell, one gigantic mess; but, as said, on a team you can have a look together and discuss things that might look logical to you but where others will see that it will not work.


meslomp wrote:
during the implementation phase it turned out that I discovered that things could be done better if I did it differently,
But that's the reason there's a feedback loop on the V model. This image from Wikipedia:

 

[Image: V-model software engineering diagram]

 

The browny/gold arrow for "verification and validation" shows the feedback. When the tests show that the implementation does not meet the requirements you re-spin that part of the design.

 

A more detailed view of the same:

 

[Image: more detailed V-model diagram]

 

In fact if you do an image search for "V model" you will see a LOT of similar diagrams!

 

etc etc.


ok, never knew about this method, sounds right.

Just wondering... in this V model, is there only one V allowed? Or could you break your main V up into smaller V's, and thus break your design process up into a lot of smaller V's and handle them as separate projects, so to say?

As in that case it is more or less what I tend to do: first get the broad outlines on paper, break that up into a lot of different V's, and implement them one by one.

 


Well everything in the system is modular (another precept of "Agile") so you already work on isolated, small packages anyway. And, yes, different modules may have different designers/design teams.

 

If you were working alone then the same would apply. You design small parts completely in isolation as long as they then deliver the interfaces needed in the top level design. You could envisage a UART module or an ADC module or similar and the 5..10 exposed functions they might provide to support the rest of the system.
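As a sketch of what such a small, isolated module might look like: a transmit queue that exposes a handful of functions and hides its buffer internals from the rest of the system. The names and size here are hypothetical, and a real UART module would also own the data-register-empty ISR:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical "module": the rest of the system sees only txq_init /
 * txq_put / txq_get, never the buffer. Size is a power of two so the
 * index mask works; head/tail are free-running and wrap naturally. */
#define TXQ_SIZE 8u

static uint8_t txq_buf[TXQ_SIZE];
static uint8_t txq_head, txq_tail;

void txq_init(void) { txq_head = txq_tail = 0; }

int txq_put(uint8_t b)            /* 0 on success, -1 if full */
{
    if ((uint8_t)(txq_head - txq_tail) >= TXQ_SIZE) return -1;
    txq_buf[txq_head & (TXQ_SIZE - 1u)] = b;
    txq_head++;
    return 0;
}

int txq_get(void)                 /* next byte, or -1 if empty */
{
    if (txq_head == txq_tail) return -1;
    uint8_t b = txq_buf[txq_tail & (TXQ_SIZE - 1u)];
    txq_tail++;
    return b;
}
```

Because the interface is just those three functions, the module can be designed, tested, and redesigned in isolation, which is exactly the "small packages" point above.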


Here is an example of what I have been running into:

 

1. Need a new interface between a sensor and an existing system.

 

2. Carefully list the requirements for the interface. Here is the first potential problem point, as will be seen.

 

3. Draw out a flow chart to meet the requirements.

 

4. Write a meta-code implementation of the flow chart. It immediately becomes apparent that the flow chart results in a code structure that is, at the very least, awkward. There are far too many conditional loops involving computations that control the flow.

 

5. Back to the flow chart. Any way I try to do it, process flow results in what seems to be overly convoluted code. That is, it is really hard to trace progress through the code and even harder to verify proper operation.

 

6. After several iterations, I settle on what appears to be a "least bad" implementation. Is this really "design"?

 

7. Convert the meta-code to real C. Does not seem much worse but certainly not better.

 

8. Drop the code into the working project.

 

9. Add calls to the new function at appropriate places in the project.

 

10. Run it.

 

11. Oh, Sh*t! One of the calling points has a requirement that I totally forgot about (see item 2).

 

12. Do I go back and create a new flow chart followed by new meta-code followed by new C implementation, or do I "adapt, on the fly"? That is, write code without "designing"?

 

13. Inserted a couple of lines. It's only 3 or 4. How bad can that be?

 

14. Getting frustrated now. Several misspellings creep in. A semi-colon is misplaced. A closing brace is overlooked.

 

15. Takes forever to debug - that missing brace throws errors throughout the whole module.

 

16. Finally run it again. F*ck! It does not do what is really needed. I had chosen the wrong kind of data for the interface to return to the system!

 

17. Back to somewhere up in the "design sequence"!

 

Where, pray tell, does that beautiful "V" chart fit into this? It seems like so much Pie In The Sky! (Not the restaurant - I would kind of have liked to go there!) I agree that not enough thought was applied at several points, but that implies that one's mind can juggle everything that is needed and can draw upon various facts and details with perfect accuracy. Alas, I don't know how to make that happen!
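One hedge against the mind having to juggle every case (a sketch only, not a claim about Jim's actual FIFO/uSD code) is to put the conditional cases into a table, where a missing (state, event) pair is a visible hole in the source rather than a run-time surprise. All the state and event names here are hypothetical:

```c
#include <assert.h>

/* Hypothetical table-driven FSM: every (state, event) combination must
 * have an entry, so "I forgot that case" becomes a compile-time review
 * item instead of a step-16 discovery. */
typedef enum { ST_IDLE, ST_READING, ST_WRITING, NUM_STATES } state_t;
typedef enum { EV_DATA_READY, EV_DONE, EV_ERROR, NUM_EVENTS } event_t;

static const state_t next_state[NUM_STATES][NUM_EVENTS] = {
    /*              DATA_READY   DONE        ERROR   */
    /* IDLE    */ { ST_READING,  ST_IDLE,    ST_IDLE },
    /* READING */ { ST_READING,  ST_WRITING, ST_IDLE },
    /* WRITING */ { ST_WRITING,  ST_IDLE,    ST_IDLE },
};

state_t step(state_t s, event_t e) { return next_state[s][e]; }
```

Forgotten requirements can still bite, but at least the table forces an explicit decision for each cell, which is easier to review than flow-chart paths scattered through code.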

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

Last Edited: Wed. Mar 27, 2019 - 05:48 PM

I'm guessing you simply can't teach old dogs new tricks cheeky

 

 


I just don't know how to make the leap to using the "new tricks" within the constraint of an existing project! There is also the "complexity ceiling" that I referred to, above. I think that I am getting very close (to my complexity ceiling) on this one.

 

What does one do when you cannot even reliably list all of the requirements for something, due to the fact that you just can't remember everything that is going on?

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

Last Edited: Wed. Mar 27, 2019 - 06:28 PM

ka7ehk wrote:
There is also the "complexity ceiling" that I referred to, above. I think that I am getting very close (to my complexity ceiling) on this one.
A very rough rule is 10K SLOC for each one.

... divide (abstract) ... communicating sequential processes (CSP) (multi-process or multi-task)

ka7ehk wrote:
What does one do when you cannot even reliably list all of the requirements for something, due to the fact that you just can't remember everything that is going on?
best effort, iterative incremental, one does what one can do

 


Communicating Sequential Processes (CSP), by C. A. R. Hoare (PDF Version)

Jack Ganssle's blog: Software Process Improvement for Firmware

[two paragraphs on 'requirements']

 

edit : one is an individual

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Wed. Mar 27, 2019 - 07:25 PM

Software design is entirely an exercise in managing complexity. And more than that, you need to be a translator, converting what humans want to achieve into the language expected by the machine.

A simple example in AVRs is clearing an interrupt flag such as TXC0 (transmit complete). The bit is one and you want it to be zero, so logically you want to say "clear the txc bit" and have this work. The AVR, however, won't do anything if you set the bit to zero; the bit is cleared by writing a one to its location.

If you've been programming AVRs for a while, you will just know this and won't even blink when you see:

UCSR0A |= (1<<TXC0);  // clear the "transmit complete" interrupt flag

But think of how someone new to AVRs might react to seeing this line of code. "Obviously" the comment is saying one thing and the code another. A lot of time might be wasted researching this, trying to figure out if this line is the cause of whatever bug the programmer is tasked with fixing.

You might argue that anyone fixing bugs in AVR code needs to know how to clear interrupt flags, and yeah, it's certainly helpful to have read the datasheet a few times. But you can avoid the confusion entirely by writing better code:

clear_the_transmit_complete_interrupt_flag();

The intent is stated right there, and anybody who is not an AVR expert can understand it too. Perhaps its name is a bit too verbose, but this is a style issue. I prefer long descriptive function names so I never need to write comments (though this one is a tad too long even for my own taste). The function name is the comment.

Then somewhere in your project you translate this into the actions the AVR needs to take to cause the flag to be reset:

void clear_the_transmit_complete_interrupt_flag ()
{
    UCSR0A |= (1<<TXC0);  // the flag is cleared by setting it to 1
}

A comment is needed here because the behavior is surprising to someone unfamiliar with AVR internals.

So your job is to be a translator. You must translate between what you want to do as a programmer and what actions the chip needs to be told to do to achieve the goals. The vast majority of your code should be written in terms humans understand. This is for your benefit as well as anyone who later might have to or want to read your code.

 

--Mike

 


I agree wholeheartedly with Mike.  Especially:

Software design is entirely an exercise in managing complexity.

And that is something I have never been terribly good at. The examples of function naming are excellent; for me, they work far better for low-level things than mid-level. Let me give an example. Suppose that I need to create and open a data-logging file via FatFs in multiple places. The file needs a name, of course, so I include the naming algorithm in this function, along with a few other details pertinent to the logging system. So I create a handy function with a definition that starts with:

FRESULT LogFileOpen( uint8_t FileSequenceNum ) {

Then, down the line, I realize that I need to call LogFileOpen() in a new location. LogFileOpen() is in a different module than the one I am working on, so I go through my mental list of what LogFileOpen() does [BAD JUDGEMENT alert!] and conclude that it will also work properly in the new location. BUT, and that is the big one, one of those "few other details" in there does the opposite of what is needed at the new call site. Memory neurons gloss over those "unimportant" details and keep returning "it'll work there". One of the early debug runs reveals, in all its glaring ugliness, that the little unimportant detail really does matter. This, in turn, leads to insane head scratching to determine what to do.

 

Yes, it was an error to skip the detailed evaluation of the function in its new environment; I was confident that I knew what was needed and that it was OK. The big problem with this scenario is that it happens over and over. You would think that someone would learn, but you would be surprised (or maybe not) how often I come up with "Yeah, I screwed up before, but I am SOOO certain that I really do remember all the details of how it operates this time..." In spite of all the historic evidence to the contrary, there "it" goes again. I think it may be a mental mechanism similar to the gambler's: "Yes, I've been losing a lot, but this time it will all change and I'll end up with the winning hand, or pull, or roll, or whatever".
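One way out of this trap is to pull the buried "few other details" into the function's signature, so a new call site is forced to state its intent rather than rely on the author's memory. A sketch only: the append/overwrite detail, the mode enum, and the file-name format are all hypothetical, and the actual FatFs f_open() call is omitted:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical rework of LogFileOpen(): the hidden behavior (append vs.
 * overwrite, used here as a stand-in for those "few other details")
 * becomes an explicit parameter visible at every call site. */
typedef enum { LOG_APPEND, LOG_OVERWRITE } log_mode_t;

/* The naming algorithm, pulled out so it can be tested on its own.
 * Format is an assumed example, not from the original post. */
void log_file_name(char *out, size_t len, uint8_t seq)
{
    snprintf(out, len, "LOG%04u.CSV", (unsigned)seq);
}

/* Sketch of the new signature; the FatFs body is not shown here:
 *   FRESULT LogFileOpen(uint8_t FileSequenceNum, log_mode_t mode);
 */
```

A call like LogFileOpen(7, LOG_OVERWRITE) then documents the "unimportant detail" at the point of use, which is exactly where the memory neurons fail.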

 

I think there is something more than "old dog not being able to learn new tricks"!

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net

Last Edited: Wed. Mar 27, 2019 - 08:42 PM

One of the major things completely missed here, and generally completely unappreciated, is the "learning curve" of the product... not programming or design procedures, but the evolution of the product itself.

 

We made precision hydraulic positioning systems (involving electronics, precision bearings, valves, fluids, pumps, motors, sensors, etc., etc.). The wonderful system we created 10 years ago looks quite like garbage compared to today's. This is especially true when getting into new product territory.

Well, why didn't we simply design today's unit 10 years ago? Well, we really couldn't. There are ten years of things we know now that we didn't know then. This brand of fluid does this, that valve type oscillates like that, those motors overheat when., these optosensors get corrupted by, ..etc, etc, etc (ten years' worth of etc). So even though the high-level planning today might look generally similar to 10 years ago, the information behind it may be quite different.

 


When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


avrcandies wrote:
... that valve type oscillates like that, ...
Learned a lesson: the pressure relief valve oscillates, therefore the pressure oscillates, therefore the pressure limit is very briefly (though acceptably) exceeded. The software defect was limiting on peak instead of mean pressure; that led to a pressure fault, and that led to a failure, an inadvertent shutdown (my knowledge of hydraulics is from Physics 101 rather than Mechanical Engineering 401).

 

 

"Dare to be naïve." - Buckminster Fuller


Learned a lesson: the pressure relief valve oscillates, therefore the pressure oscillates, therefore the pressure limit is very briefly (though acceptably) exceeded. The software defect was limiting on peak instead of mean pressure; that led to a pressure fault, and that led to a failure, an inadvertent shutdown (my knowledge of hydraulics is from Physics 101 rather than Mechanical Engineering 401).

Yep, part of the learning curve... after several years a lot of that information builds up. If I suddenly decided to start making chocolate coins, I'd probably make some that "passed muster", sorta, and had many issues. Only after a few years would I appreciate all the details and learnings required to make chocolate coins... at least it would be a tasty development process.

 

Check out some of the precision tech needed for chocolate coins:

https://www.youtube.com/watch?v=8FgtDKjjjOc

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!


The learning curve also leaves a lot of "technical debt", which is a good term to search on.

 

--Mike

 


I'd never heard of the V model. When I still worked, we had the 'waterfall' (requirements / system design / detail design, with testing at each stage -- the ultimate test being against requirements). There was also the 'spiral model', which was a repeated waterfall with the requirements and design getting more complete at each pass.

 

Here are a few issues. Requirements SHOULD come from customers ( takes some skill to flesh them out). System design should be done by someone with system vision, other design should be done by others against system design and at some point a separate function (QA) should do some of the V&V.

 

Waterfall model can work if you pretty much know what you want and the tech is understood. Spiral should be better when there's uncertainty, but what often happens is that as you start to shoot for more complexity and robustness, the base design is found to be inadequate.

 

Hard to do either with a one person operation. 

 

I did a little QA and I think that I helped a bit, but who knows. One of the better things that we did was to have reviews so that others could look for flaws. Often helped quite a bit.

Again, hard to do with a solo project.

 

I came late into software -- and then as QA, because I was certainly not qualified for design. When I was trying to figure things out, I spent a fair amount of time reviewing the IEEE software guidance. I found it pretty interesting that the newest discipline (software) focused on the design process. A lot of IEEE standards seemed to cover technology for various specific fields, but the software section focused on the Software Development Life Cycle (SDLC) -- AKA the waterfall model. I'm pretty sure that they also expanded it as a project model.

I'm not saying that it's a bad approach -- it was just interesting.

 

Disclaimer -- these are just a few opinions -- I am certainly one of the least qualified  folks here.

 

Jim -- please don't beat yourself up. Some form of a restart or course change may be in order -- or not.

 

Good Luck,

 

hj