RTOS and low-level interrupts


Hello all!

I'm thinking of experimenting with an RTOS such as freeRTOS or MicroC/OS-II.

I am wondering how low-level interrupt service routines will affect the RTOS. For example, with a fast serial port running, a character could be received in less time than one RTOS tick, so a dedicated ISR is needed to store the data rather than a basic RTOS task. The problem is not limited to serial ports; it applies to any internal or external interrupt that needs servicing faster than the RTOS can manage.

How would this ISR for the serial port affect the RTOS tick? Can I assume that the serial ISR (if I define it as exclusive) will be serviced with only a small resulting delay to the RTOS? What would happen if the serial ISR occurred during a context switch?

Thanks all.

Tim


I can't speak for freeRTOS or uC/OS-II, but with AvrX you can mix your interrupts without interference. In other words, there are some interrupts that are handled by the kernel, such as the system timer tick, that require special calls to interact with the kernel. Others, such as the UART interrupt, can be handled completely outside the kernel using a standard SIGNAL configuration, and the kernel will neither know nor care about it. The only interference you may have is that it is always possible for an interrupt to happen while you are already in an ISR for another interrupt. This is one reason for keeping the ISR as lean as possible. You probably also want to set the timer that you use for your system timer tick to run in MODE 2 (where it automatically reloads itself rather than being reloaded in the ISR). This will help prevent jitter on the system timer tick.
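
To make that concrete, here is a minimal sketch of that kind of kernel-independent receive ISR feeding a ring buffer that a task drains later. It assumes avr-gcc/avr-libc (the ISR() macro is just the newer spelling of SIGNAL()); the 64-byte buffer, the rx_get() helper and the USART_RX_vect/UDR0 names are illustrative and vary by device:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define RX_BUF_SIZE 64                    /* power of two so the index mask works */
    static volatile uint8_t rx_buf[RX_BUF_SIZE];
    static volatile uint8_t rx_head, rx_tail;

    /* Lean receive ISR: grab the byte, stash it, get out.  The kernel
       never knows this interrupt exists. */
    ISR(USART_RX_vect)
    {
        uint8_t c = UDR0;                     /* reading UDR0 clears the RX interrupt flag */
        uint8_t next = (rx_head + 1) & (RX_BUF_SIZE - 1);
        if (next != rx_tail) {                /* silently drop the byte if the buffer is full */
            rx_buf[rx_head] = c;
            rx_head = next;
        }
    }

    /* Called from an ordinary task; returns -1 when the buffer is empty. */
    int16_t rx_get(void)
    {
        uint8_t c;
        if (rx_tail == rx_head)
            return -1;
        c = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1) & (RX_BUF_SIZE - 1);
        return c;
    }

Because the ISR does little more than move one byte into the buffer, the delay it can add to the RTOS tick stays small and bounded.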

Dave


Why are people putting OSes on microcontrollers?

And why are there C compilers for them?

Ok, I admit, I'm doing software video with mine, so it's pretty much busy all the time.

But, really....if you can't keep the program flow for a 32k program in your head without relying on the crutch of an OS....

And that's on a largish Atmel.

OSes are good for general purpose computers, where you have I/O devices being added and removed, filesystems to manage, video hardware to pre-process data for, users to log in and out.

The sort of tasks that microcontrollers are really MOST useful for are the things that general purpose computers don't do well. Timing sensitive code, 100% CPU utilization banging on some device, whatever...

I just don't get it.

The Atari 2600 guys had 8k by the end of the machine's run, and I've NEVER heard of them having an OS of any kind. Ok, so the games had a 'kernel', but that was custom written for each game, or at best a small series. And it was really concerned ONLY with the 192 visible screen lines and VSYNC (paddles being the big exception I can think of). Game logic wasn't considered part of the 'kernel'.


Quote:
But, really....if you can't keep the program flow for a 32k program in your head without relying on the crutch of an OS

I think that while it may still be possible for me to keep the overall logic of a 32K program in my head, I might find it easier to split the overall problem I'm trying to solve into several smaller pieces. Especially where my problem can be split into somewhat independent tasks (you see, even in a verbal description, RTOS concepts pop up!), for example a robot control unit, it seems only logical to handle different problems in separate tasks. Since they are unrelated to each other, there's no point in handling them in one big endless loop and one big program. An OS supports the concept of stepwise refinement. And a bug in one task won't necessarily screw up the whole thing, but maybe only the distance measurement of the robot.

I do see, on the other hand, your point about a problem which is self-contained and where there's no benefit in splitting it, or where you want ultimate control over the time and resources of your hardware. But I disagree that OSes are only for big and complex systems. Having an OS at hand, for me, is having another great tool to accomplish a given task.

-- Thilo

Einstein was right: "Two things are unlimited: the universe and human stupidity. But I'm not quite sure about the former..."


Quote:
What would happen if the serial ISR occurred during a context switch?

Nothing. Context switching executes with interrupts disabled.
Quote:
How would this ISR for the serial port affect the RTOS tick?

The tick will be delayed until the ISR has finished.

You need to use OS-controlled interrupts (handled by the kernel). In that case the ISR itself is only a calculated jump (2-4 CPU instructions) to a function where all the necessary work is done.
Using OS-controlled interrupts is better for one more reason: if a non-OS interrupt occurs, the context (32 registers) is saved on the current task's stack, which may cause a stack overflow if you are short of SRAM.
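
To sketch the kernel-aware pattern (real uC/OS-II calls - OSIntEnter(), OSIntExit(), OSSemPost() - but a hypothetical semaphore and ring buffer; the register save/restore is done by the port's assembly stub, so exactly where the context lives is port-specific):

    #include "ucos_ii.h"                    /* uC/OS-II kernel API */
    #include <avr/io.h>

    extern OS_EVENT *UartRxSem;             /* hypothetical semaphore created at startup   */
    extern volatile INT8U rx_buf[64];       /* hypothetical ring buffer shared with a task */
    extern volatile INT8U rx_head;

    /* C body of the UART receive ISR; the port's assembly wrapper saves the
       CPU context before calling this and restores it afterwards. */
    void UartRxISR(void)
    {
        OSIntEnter();                       /* tell the kernel we are inside an ISR     */
        rx_buf[rx_head++ & 0x3F] = UDR0;    /* stash the received byte (64-byte buffer) */
        OSSemPost(UartRxSem);               /* wake the task waiting for serial data    */
        OSIntExit();                        /* kernel may perform a context switch here */
    }

The receiving task pends on the semaphore with OSSemPend() and drains the buffer at task level, so the time spent inside the ISR stays short.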

Quote:
Why are people putting OSes on microcontrollers?
And why are there C compilers for them?

And why the hell did those Scandinavian guys invent the AVR? The 8051 is the BEST forever, yeeeahhhh!!!! :wink:

Cats never lie. At least, they rarely do.


Quote:

And why are there C compilers for them?

Is that a serious question? You wouldn't be from a hardware background by any chance?

There are high level languages for computers because we humans more naturally think at a high level, abstracted from the lowest-level bit twiddling.

I am quite capable of writing in assembler, and will do so if somebody else wants to pay me by the hour, but when I am writing to achieve a target (i.e. money in my pocket) writing in a high level language is MANY times more efficient.

Add to that we run the same code on PCs, H8s and AVRs - how would you do that without a compiler?

/A

If we are not supposed to eat animals, why are they made out of meat?


Thank you Micklecat. That confirms my thoughts. I hadn't thought about that possible problem with the stack overflow though. I've bought the MicroC/OS-II book from Amazon today, so hopefully that will be of use and will no doubt keep me going for a while.

henrik51 asked why put OSes on microcontrollers and talked about code size and filesystems etc. Well, I didn't mention any specific processor (I know that being in this forum implies AVR). I'm keen to have a play with the Atmel ARM processors and start to see what can really be done with a large memory model and all the things that can go along with it, such as HDDs and file systems. Does anybody know of a good way to learn? I've taken a look at numerous datasheets for the devices, but there isn't anywhere as good as AVRfreaks.net to find information.

Any good links or book recommendations gratefully accepted. I'm quite happy with PIC and AVR but have never used anything larger than this. I'm still a little confused about how the ARM devices handle memory remapping to run code from RAM rather than flash to speed things up. I'm also intrigued by the Thumb and ARM modes of operation and how they switch between the two.

Thanks all!

Tim


Quote:
I've bought the MicroC/OS-II book from amazon today ...

OT: When I'm looking for a book recommended by others, or looking for books on a topic, I usually go to Amazon.com as well--to read the reviews and see the links to related books.

When I am ready to buy, however, I go to www.half.com armed with the ISBN number and/or title & author. Many times there will be a used or remaindered copy available for a fraction of the price of new. Often there will also be links to active eBay hits for the same or very similar items.

Also, a trip to http://www.bestwebbuys.com/books/ armed with the same information will search many sources and give you prices, almost always with click-throughs to the particulars.

Be sure to verify the edition before you buy.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


AndyG wrote:
Quote:

And why are there C compilers for them?

Is that a serious question? You wouldn't be from a hardware background by any chance?

Mmm, no, but I am showing my age, I fear.

I learned programming on the 6502 (C64), and one very quickly gets good at holding much in one's head, as you're not going to stick it in a register in that poor CPU. I do love the thing, but it DOES build your mental picture skills. The Atmel can get equally trying when you realize you just ran out of registers that work in immediate mode. :)

And I like C++ just fine for large projects. Actually, I'm supposed to be working on one right now. ;)

I just don't see that a microcontroller does complicated enough activities to be worth the overhead of C. And besides, they work best in timing-critical applications. How many cycles does a >> take in C? One? Two? Three? Actually, that depends on whether the operands are in registers or they got spilled out. Are they on the stack, or in general SRAM? Timing, timing, timing. C++, on the other hand, works great when a few cycles either way doesn't matter, but being able to read your code at 2am does. :)

I was just having a conversation the other day with an assembly language guru. Did BIOS work, now on some microcontroller project. He was complaining about having to add another feature to a 2k microcontroller that was FULL. Freeing up one byte at a time. And he was moaning about assembly, and I'm like, "You get instructions in assembly that do in one cycle what C takes three or four lines to do." And he had to agree with that.

Nothing on the Atmel is coming to mind right now, but the BIT instruction on the 6502....if you designed your system right, just TRY that in C.


Quote:

I just don't see that a microcontroller does complicated enough activities to be worth the overhead of C.

Ok, maybe it's just because I'm an inexperienced EE student, but I've got plenty of projects I consider complicated enough for C, but that don't require the kind of code density or speed that assembly offers. I also don't have a whole lot of time, and from what I gather, neither do professional engineers. I'm very fluent in assembly; I learned it before C. But what I've realized is that with many projects, having a faster time to market is far more important than squeezing every last bit of performance out of a smaller, cheaper MCU. I'd rather spend a few extra bucks on more flash and put a faster crystal on it and have a project that works rather than spend another couple of months hand-writing assembly code. I have a certain number of projects that need to be done in a certain amount of time; the faster they get done, the sooner I can start on the next. If I need an ultra-fast routine then I'll use inline asm or something like that, but otherwise, if C gets the job done then I'll use it. It doesn't have to be the fastest and the smallest, it just has to work according to specifications, and it also has to be finished.

I'm posting this since I used to consider myself a die-hard assembly only developer... I thought C was for pansies. Then I decided to try it, and saw that I could do in 15 minutes what took me an hour in asm. And that was with 4 days of C experience and a year and a half of assembly.

I've considered OSes in several projects as well, for many of the same reasons (easier to write the code), and for one project wrote my own OS in asm to see how it worked. That was one I could have done in C with no OS, as I later came to realize.

JeremyB.


Quote:
And he was moaning about assembly, and I'm like, "You get instructions in assembly that do in one cycle what C takes three or four lines to do." And he had to agree with that.

Remember that lines of C don't correspond to lines of final code or to code size. You CAN get a few lines of C code turned into one asm instruction. Likewise, one line of C could be many, many lines of ASM...

And, if you can't get the C compiler to do something you need, every C compiler for the AVR lets you in-line ASM, so you can just do it yourself if you need to.

Likewise, you can always check how many cycles a certain piece of code is using by checking the compiler output. If you need it to take a constant number of cycles, you can just write that section in ASM.

There is normally a certain size past which C is actually better than ASM. The compiler doesn't really care what its ASM looks like, and also isn't thinking of the code in the same way you might. So it will make blocks of reusable code that would make no sense from a normal structured point of view, but it makes a difference.
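
For what it's worth, here is a minimal sketch of that in-line ASM escape hatch, assuming avr-gcc; the pulse_pb0() routine and its cycle counts are just an illustrative example of a section that needs a fixed cycle budget:

    #include <avr/io.h>

    /* Toggle PB0 with a hand-counted cycle budget.  The asm block is emitted
       verbatim, so its timing does not depend on the optimiser. */
    static inline void pulse_pb0(void)
    {
        asm volatile(
            "sbi %[port], 0 \n\t"           /* 2 cycles: drive PB0 high    */
            "nop            \n\t"           /* 1 cycle:  stretch the pulse */
            "cbi %[port], 0 \n\t"           /* 2 cycles: drive PB0 low     */
            :
            : [port] "I" (_SFR_IO_ADDR(PORTB)));
    }

Checking the cycle count of compiler-generated code works the same way in reverse: build with avr-gcc -S, or disassemble with avr-objdump -d, and count the instructions in the listing.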

I know there isn't a chance in hell I'll convince you away from ASM, but don't be too hard on C ;-)

Warm Regards,

-Colin



Quote:

I just don't see that a microcontroller does complicated enough activities to be worth the overhead of C. And besides, they work best in timing critical applications.

Then, respectfully, I feel that in spite of your years of experience your view is very narrow.

I've programmed commercially for 25 years, from 6502s, Z80s and Z8000s through to more modern processors and compilers. The only processor that I program in assembly today is a little 8-pin PIC. I don't need assembly (at least not in the last 4 years or so).

Although microcontrollers are ideal for "timing critical" applications, they are also ideal for a wide range of other applications and to write these off at a stroke seems bizarre to me.

/A

If we are not supposed to eat animals, why are they made out of meat?


w0067814 wrote:
...I'm keen to have a play with the Atmel ARM processors and start to see what can really be done with a large memory model and all the things that can go along with this such as HDDs and file systems. Does anybody know of a good way to learn? I've taken a look at numerous datasheets for the devices but there isn't anywhere as good as AVRfreaks.net to find information.

I use AT91x devices (with uC/OS-II) and I have a very good impression of them.
If you want more info, a private e-mail would be more appropriate.

It is obviously not as good as AVRFreaks but look at:
http://www.at91.com/

Regards,


@bluefire211

Quote:

I'm an inexperienced EE student...

It doesn't have to be the fastest and the smallest, it just has to work according to specifications, and it also has to be finished.

I know engineers with 20 years experience who have not grasped this yet. :)

Cheers
A

If we are not supposed to eat animals, why are they made out of meat?


Hmm, perhaps I came off rather confused.

Ok, professionally, I develop VoIP software. Nice large machines. 960 phone calls max. That's a huge amount of code, and there's no assembly in it. All C++. It's not practical to write it in assembly, as the code size is now >5MB. So assembly makes no sense, and C++ gives us SO many advantages. And only tiny sections of the code are timing critical, and due to the hardware we have, it's not THAT timing critical. C++ is still faster than the 10 to 20 ms response time of the OS's timer. So, yes, C++ has its advantages.

However, microcontrollers do not do VoIP. They run TV remotes. A keyscan routine doesn't need C, and the IR modulation at 30-50 kHz is way too fast for C. Ok, so TV remotes now have full LCD displays. I would argue that is past the realm of a microcontroller. They run video output. A one-cycle slip on a 16MHz processor is still highly visible on even a standard-def TV. I've got my own video-out circuit. Uses maybe 200 instructions tops. My stuff is at home, so I'm not sure. How would an RTOS or C help with clocking out a bitmap? Microcontrollers read from floppy disks. There was some discussion on these boards about whether a 16MHz AVR could even keep up with the data rate of a floppy. The fact that the guy wanted to run Linux on his floppy controller is just scary. Just clock the bits into a buffer, and wait for a command to dump them out some other port. Done. Would an RTOS really help?

If you need to do more than a tiny, tiny task, use microcontrollers for the timing critical sections, and a SBC of some type running a real OS to interpret the data coming back from the IO devices.


henrik51 wrote:

If you need to do more than a tiny, tiny task, use microcontrollers for the timing critical sections, and a SBC of some type running a real OS to interpret the data coming back from the IO devices.

I do more than "tiny, tiny tasks" with my AVRs, and I do it in C. They are more than capable for what I do. Adding an SBC to my projects would put them completely out of budget.

Regards,
Alejandro.
http://www.ocam.cl


Quote:
They run TV remotes.

OK

Silly me.

If we are not supposed to eat animals, why are they made out of meat?


AndyG wrote:
@bluefire211

Quote:

I'm an inexperienced EE student...

It doesn't have to be the fastest and the smallest, it just has to work according to specifications, and it also has to be finished.

I know engineers with 20 years experience who have not grasped this yet. :)

Cheers
A

And hence, my parents own a 2.4GHz laptop to check their email.

Do they ever do any real work with it?

No. Oh, wait. They play solitaire.

So, they own a 2.4GHz laptop, because Windows REALLY runs like a pig on a 33MHz 486.

Goodie.

So, because someone at Microsoft said, "It just has to perform to specifications...", and they have the muscle to tell the computer mfrs, "This is the platform you will build if you want to run Windows on your hardware.", machines get faster and faster with most users not actually NEEDING the speed.

If people sat back and said, "Ok....how can we do this EFFICIENTLY.", we might remember Linux a few versions back. Yes, even Linux is getting a bit heavyweight for older machines. I ran it on my 486/33. Netscape for web and email. It worked fine. Not fancy, took a while to get XWindows started (and yes, that's a pig too, but another rant there).

Honestly, that's all most computer users need. But, they buy faster and faster machines because the OSes they can get support on won't run on the older hardware. Microsoft no longer supports 95 and 98. So, if you've got a 486, they pretty much won't support you. For fairly non-technical users, that means it's time to upgrade the OS. To something bigger and slower.

Now, I like XP. I'm a geek, so I own a fast machine so that I can run it and play with it, and it's got LOTS of stuff that older versions of Windows don't have.

But, rather than making something that runs efficiently first, and has features second, we are pell-mell on features first, and the user can upgrade their hardware if they want support. And even Linux has gone down this path. My kernel is larger than the entire memory of my first PC (which, yes, was also capable of running Linux). And how much of that do I use? Very little. It's compiled down as tight as I can make it, and it's just fatter and slower every version.

I see this on the other side at work, in a sense. Yes, at some point, we have to say, "In order to get the product out, we have to add more memory, or a faster CPU." That's not before 3 or 4 people have spent a month trying to fit it onto the current platform. Why? Because if we demand the hardware be upgraded......we're obligated to pay for it. If the home computer industry worked that way as a whole, you'd see the bloat slow WAY down. Imagine that, get a call from your computer manufacturer, "Hey, umm, there's a new version of your OS out, and if you pay for it, we'll send you the computer to run it on for free. Just send us the old one back one of these days. Ok?" Yes, we have to do crap like that.

So, no, not every company welcomes the attitude of, "Just get it in and tell the customer to upgrade their hardware." And yes, we go through LOTS of interviewees to find people that think the way we do. And no, we're not going under.


Quote:
"In order to get the product out, we have to add more memory, or a faster CPU." That's not before 3 or 4 people have spent a month trying to fit it onto the current platform.

So that's (say) 17 man weeks of effort you have your software guys go through before you'll let them have more memory.

We cost a man week of senior programmer time at GBP 2000. 17 man weeks is 34000 pounds or roughly $60000 (US).


To program a TV remote control? This is what happens if you program in assembler. :) :)

But seriously, IF they then succeed in making the code fit, you save what? $1 per board? $2? Sell many of these do you? If you do, fine, THEN it's worth the cost reduction.

If they DON'T make it fit, then not only do you have to pay the premium for the extra hardware, but you have also flushed away the effort they spent.

Our volumes are more like 12000 units per year of each particular board variant and we typically port 2 man years' worth of common code across (same calculation - $375000 worth of code - THAT is what we are really selling). So we START with a processor that it will just drop on to. And then IF somebody wants to buy 100K pieces, THEN we trim the design down and shave that £1 off the production cost. We might lose £12k to begin with, but we _save_ much more than that in wasted effort by the softies.

I knew that this would come down to what I consider to be a "hardware engineer" attitude - that we (softies) are somehow "wasting" hardware by over-specifying. But at the end of the day, the microcontrollers are really only little boxes of sand with electricity leaking through. They don't mind being under-used. The SOFTWARE is what WE are selling. It's just that our customers don't realise it; they think they are buying hardware.

I am also not surprised that you have trouble finding candidates that think your way. I don't think that today's students are taught to save cycles and memory locations like we were.

In 1976, when memory was little ferrite cores woven through with wires and computers were huge, I was taught to be frugal with memory and CPU cycles. The whole Computer Science department was sharing time on the same computer.

I was also taught to design boards using a pencil and eraser at a drawing board and then red and blue crayons on a light box for layout.

Does your company still do that, or do you use Protel or similar for board layout now? On a PC with that dreadful Windows software? Shame on you!

Time has moved on, both in terms of hardware and software technology. I embrace both firmly and achieve things today that were just impossible 25 years ago.

Cheers
/A

If we are not supposed to eat animals, why are they made out of meat?


Any person who has been involved in serious embedded systems development for time-critical systems understands that the cost of code development is less than 10% of total system development and less than 1% of the lifetime cost of the system.

1) Unless code is developed in a well-documented and legible form, the costs of lifetime support are greatly increased - thus C.

2) Unless the code can be easily ported to a different device/processor in a mid-life upgrade when the initial device is no longer available, the costs of lifetime support are greatly increased - thus C.

3) Unless code is developed in a modular fashion that can be reused from project to project, the cost of testing and integration is greatly increased - thus C.

Any person who uses assembler for time-critical embedded systems for any project other than a TV remote will not be in business to provide lifetime support - FACT.

Embedded systems run missile guidance systems, radar controllers, aircraft thrust and steering controls, spacecraft communications systems, power distribution systems, etc. Imagine having an inbound missile that you are trying to destroy and your Windows system crashes a few seconds before impact!! Imagine having an X-ray machine turn on the X-ray unit and then crash without turning it off again - fried flesh!!

People who think embedded systems based on microcontrollers are used only for TV remotes have only ever used them for hobby projects.

Lachlan


Quote:
Imagine having an inbound missile that you are trying to destroy and your Windows system crashes a few seconds before impact!! Imagine having an X-ray machine turn on the X-ray unit and then crash without turning it off again - fried flesh!!

We don't need to imagine that. It already happens, and without the need of a Windows system. Please read this article, from Jack Ganssle.

Some excerpts from it:

"
The Patriot Missile

...

The air fields and seaports of Dhahran were protected by six Patriot batteries. Alpha battery was to protect the Dhahran air base. On February 25, 1991, Alpha Battery had been in operation for over 100 consecutive hours. That's the day an incoming Scud struck an Army barracks and killed 28 American soldiers.

...
The Patriots maintained a “time since last boot” timer in a single precision floating point number. Time, so critical to navigation and thus to system accuracy, was computed from this number. Patriots use a 100 msec timebase. Unhappily this 1/10 of a second number cannot be exactly represented by a floating point number. With 24 bit precision, after about 8 hours of operation enough error accumulated to degrade navigational accuracy.
...
The problem was known and understood; the solution sounds something like what we’d hear on a tech support hotline for a PC. “Can’t hit Scuds, huh? Try rebooting once in a while!” In fact, operational procedure was to reboot at 8 hour intervals until fixed software arrived.

The crew of Alpha Battery didn’t get the reboot message from tech support. After 100 hours on-line, it missed the Scud by half a kilometer.

Therac 25

AECL, at the time a Canadian Crown Corporation, developed the Therac-25 in the early 80s. It was designed to treat cancers by irradiating the patient with protons or electrons at computer-controlled energy levels. The instrument apparently had a number of design flaws, which resulted in operators constantly being presented with cryptic error messages requiring system restarts.

Over a two year period six patients received massive doses of radiation from the eleven machines installed in the US and Canada. Each incident had similar pathology - the operator would initiate treatment, but get an error message indicating no dose had been supplied. Used to the machine's quirky behavior, operators would press the "try again" button - sometimes several times. In fact, software bugs were indeed dosing the patient on each trial, with radiation levels sometimes 30 times higher than desired.
"

Regards,
Alejandro.
http://www.ocam.cl


Well, if a person is using C and they suddenly find the need for critical timing, then it is simple enough to say (CodeVision-style; with avr-gcc the equivalent is an asm volatile("...") block):
#asm
...
#endasm

No big deal.


AndyG wrote:
Quote:
"In order to get the product out, we have to add more memory, or a faster CPU." That's not before 3 or 4 people have spent a month trying to fit it onto the current platform.

So that's (say) 17 man weeks of effort you have your software guys go through before you'll let them have more memory.
Cheers
/A

No, I am one of the software guys who has to work late nights and then beg my boss to allow the upgrade. I'm always on the side of doing more with less hardware. Sometimes it cannot be done, but don't give up before you start.

So, say we decide on a hardware upgrade. Now we have to fly a service guy to that country and pay for his time and flight. The box is installed in an unattended machine room, so we have to bribe the customer to send one of HIS techs to the site to open the place up. Plus this all has to go on in the middle of the night, during the lowest usage.

Or, better yet....it's a machine that is totally inaccessible. We've had some of those, where it's simply not possible to upgrade the hardware.

Or maybe the box is in a country currently experiencing some rather violent turmoil. You should see what hazard pay is around our place. Why don't you total up hazard pay to the Middle East against the peanuts software engineers are paid and then see which one makes better sense. Multiply that across multiple sites and it gets quite expensive to do these free upgrades.

Oh, and a $1 or $2 per board? When we did our last hardware upgrade, from 512MB or 1GB of RAM, it cost a lot more than $2 per board.

Obviously, computers will grow in speed as they are asked to do more. It takes 30 mins to compile our software right now. So, if I can get a faster CPU or a faster disk or more memory, I always go for it, and the compile time always drops. Doom 3 will not run on a 1MHz 6502 no matter how 'optimal' it is.

But, for instance, the audio path is now the same length in the current version of our software as it was in earlier versions. Thus, if a customer didn't use the fancier features that wrap around simply processing audio packets, they could actually stay on the same hardware. Growth in speed and hardware is driven primarily by new features rather than a mandated speed kick.


Alejandro, I like your extract. Unfortunately it happens much too often, because equipment manufacturers often use COTS (Commercial Off The Shelf) equipment in areas where they should not. However, they usually do that trying to decrease lifetime support costs.
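
The arithmetic in that extract is easy to reproduce, by the way. A rough sketch in plain C - ordinary single-precision floats, which also carry a 24-bit significand, not the Patriot's actual arithmetic - accumulating 0.1 s per 100 ms tick for 100 hours:

    #include <stdio.h>

    int main(void)
    {
        /* Count 100 hours of uptime in 100 ms ticks by repeatedly adding 0.1,
           the way a naive "time since last boot" counter might. */
        float t = 0.0f;
        long ticks = 100L * 3600L * 10L;    /* 3,600,000 ticks */
        long i;
        for (i = 0; i < ticks; i++)
            t += 0.1f;                      /* 0.1 is not exactly representable */
        printf("accumulated: %.3f s  exact: %ld s  drift: %.3f s\n",
               t, ticks / 10L, t - (float)(ticks / 10L));
        return 0;
    }

The drift per tick is tiny, but it never stops growing while the system stays up, which is why the interim "fix" was simply to reboot every 8 hours.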

Lachlan


Quote:
So, say we decide on a hardware upgrade. Now we have to fly a service guy to that country, pay his time and flight.

So you agree with me then. You should specify hardware that you KNOW will do the job from the outset.

Just like I have been saying.

Quote:
Oh, and a $1 or $2 per board? When we did our last hardware upgrade, from 512MB or 1GB of RAM, it cost a lot more than $2 per board.

Now you have to be arguing purely for the sake of it.

YOU claim that microcontrollers are only fit for TV remotes. And you want a GB of RAM?

If we are not supposed to eat animals, why are they made out of meat?


@lachlan

Quote:
Any person who has been involved in serious embedded systems development for time-critical systems understands that the cost of code development is less than 10% of total system development...

Do you really find this? My experience is entirely the opposite.

The hardware design for a relatively straightforward board that we make is (say) 4 weeks including layout. Maybe a bit more if we have to respin.

But (at least when writing from scratch rather than porting existing code) the software development time tends to dwarf the hardware costs.

Maybe it's just that the applications I have been involved in happen to have a higher than average software content? I wouldn't have thought so, though.

Cheers
/A

If we are not supposed to eat animals, why are they made out of meat?


I would agree with Andy. I've been involved with projects where the board development took 4-6 man weeks, and the code development took nearly a man year of time. The intellectual property is typically in the code, not in the hardware design. Typically the code development far outweighs the board development.

Writing code is like having sex.... make one little mistake, and you're supporting it for life.


Sorry for the misunderstanding,

The projects I refer to involve 50 to 200 man years of development effort. The project definition stage is usually around 2-3 years. The product life cycle is around 30 years, which usually involves either one or two mid-life upgrades.

Systems are usually built around many - up to hundreds of - embedded modules, each dedicated to specific tasks.

The key is to be able to maintain the operation of the system for the life cycle indicated. Often there are COTS front ends (GUI-based displays), but these never perform critical operations.

Support of such systems represents around 90% of the lifetime cost of the system, so development cost is a very small part. The most significant costs of system development are usually product specification and integration and testing. These two components together usually contribute 60-80% of the development cost and, if done properly, will reduce life-cycle support costs. Thus software development itself is a small component.

These types of systems are built on microcontroller-based systems, especially those uCs that are relatively immune to EMC problems and have good temperature/pressure tolerance. The typical temperature tolerance of downhole oil-well evaluation systems is 450F. The typical temperature tolerance for radar guidance is -50C. I have not been involved in space systems, so I don't know what their requirements are.


AndyG wrote:
Quote:
So, say we decide on a hardware upgrade. Now we have to fly a service guy to that country, pay his time and flight.

So you agree with me then. You should specify hardware that you KNOW will do the job from the outset.

Just like I have been saying.

Quote:
Oh, and a $1 or $2 per board? When we did our last hardware upgrade, from 512MB or 1GB of RAM, it cost a lot more than $2 per board.

Now you have to be arguing purely for the sake of it.

YOU claim that microcontrollers are only fit for TV remotes. And you want a GB of RAM?

I was trying to make two points in one message. And apparently not doing a good job. For my day job, I write code for an embedded PC. It's a fairly powerful system. P3 or P4 CPU and 512 meg to 1 gig of RAM. And I write in C++. And we run a real OS. And it saves us YEARS of development time by doing that. However, the reason we picked the OS we use is because drivers are available for the hardware we need. Never mind that our project has 20 some threads all chattering away to each other, and depends HEAVILY on the process/thread communication/synchronization API available from the OS.

My point was, if we acted like most other PC software vendors and just told the customers, "OK, we've gone from version 3.0 to version 4.0. You'll need to buy a totally new computer," we'd have a lot of angry customers. As of yet, on our current physical form factor, we have had 15 or so software releases and have not yet required a hardware upgrade of the base PC board. We have, however, had some of the boards EOLed on us, and we were forced to buy new, more expensive boards and throw money down the drain on capability we don't need. Why are the boards EOLed? Because most software projects require more and more CPU power as time goes on. We, through great effort, have not. That is, to some degree, a wasted effort, as we can no longer even buy the slower CPUs on which we are perfectly happy to run. So it is frustrating.

When I see people using OSes that provide no drivers, no filesystem, no convenience features except using up a timer for their own purposes and then providing you with a multi-threading API, I realize that is part of the problem that pushes hardware to go faster and faster when many software projects have no need of the faster hardware that cuts into their profit margins. Now, as I said above, we run MANY threads and DEPEND on the thread sync system in our OS to keep our heads above water. But how many threads can you fit on a 2k microcontroller running at 32.768kHz? That's about the low end of what Atmel makes. Why do they even OFFER 128K microcontrollers? Because people write in C with full OSes sitting on top. I can't imagine that much program code on a device with no file system, no GUI, just a heck of a lot of I/O. Digital and analog I/O is really easy to code tightly in ASM. At least I find it is. I have to read the data sheet, write it in assembly, then convert it to C. If I've already written the assembly, why bother writing it in C?


Sorry, didn't notice the PC thing. :)

When it comes to OSes (and the start of this thread many moons ago) I have a great deal of sympathy with your OS standpoint. We don't use an off-the-shelf OS on microcontrollers either.

What I have been specifically taking issue with is your idea that microcontrollers somehow drive TV remotes or Furbies and nothing more serious.

We use an H8S2238 for most serious development, only moving to AVRs when we can get away with something more limited.

This has 256K of flash and 16K of RAM. It also has a pin-compatible sister chip with twice as much flash and RAM in case we have problems. Another $1 or $2 and we just keep on developing.

Our flash is not full of OS. It is full of code we have developed, in C++, that we can run on a PC, H8 or AVR. We support maybe 50 "similar" peripherals, each running a different comms protocol making them all look identical to the PC.

This is about 160K of compiled code. Not a bloated Windows application, but tight code.

Why C++? So we can use inheritance big style...

It's a new peripheral. It's a coin acceptor. It runs "cctalk". It has coin routing. Etc. etc. etc.

We take a new device (our bread and butter) and drop it into the system in an hour or two. We sell more boards. The peripheral manufacturer sells more acceptors.

Could we do this in assembly language? Yep. Could we do it in assembly language in an hour or two? Nope. Are we certain that when we drop in a new device we can expect it to work for all eternity? Yep. Because 95% of the code to talk to it is proven and has not changed.
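
Sketched here in plain C for the sake of the forum - a struct of function pointers standing in for the virtual functions we actually use in C++ - the shape of it is roughly this; the names (cctalk_poll, coin_acceptor and so on) are just placeholders:

    #include <stdint.h>
    #include <stddef.h>

    /* One common interface that every peripheral driver implements. */
    typedef struct peripheral {
        const char *name;
        int (*init)(struct peripheral *self);
        int (*poll)(struct peripheral *self);       /* run the device's own protocol */
        int (*read_event)(struct peripheral *self,
                          uint8_t *event);          /* e.g. "coin accepted, route 2" */
    } peripheral_t;

    /* A new device is just another instance of the interface; the proven
       system code around it never changes. */
    static int cctalk_init(peripheral_t *self)                 { /* open the cctalk link   */ return 0; }
    static int cctalk_poll(peripheral_t *self)                 { /* exchange cctalk frames */ return 0; }
    static int cctalk_read(peripheral_t *self, uint8_t *event) { *event = 0; return 0; }

    static peripheral_t coin_acceptor = {
        .name       = "cctalk coin acceptor",
        .init       = cctalk_init,
        .poll       = cctalk_poll,
        .read_event = cctalk_read,
    };

    /* The main loop only ever sees the common interface. */
    static peripheral_t *devices[] = { &coin_acceptor /*, &hopper, &bill_validator, ... */ };

    void service_peripherals(void)
    {
        size_t i;
        for (i = 0; i < sizeof devices / sizeof devices[0]; i++)
            devices[i]->poll(devices[i]);
    }

Dropping on a new acceptor means writing one more small driver that fills in the same interface; everything above it is untouched.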

/Andy

If we are not supposed to eat animals, why are they made out of meat?


Hmm, interesting. I will grant your point. :)

I quite enjoy C++ for the flexibility of its objects. I have, upon a couple of occasions, turned around bug fixes while the customer was still on the phone with the tech. No, that would not happen in assembly.

However, we seem to have a problem where I work that you do not, and it has pushed me toward solving problems with better coding rather than by throwing hardware at them.

Imagine, to use popular software as an example, that upon the release of Windows 2000, Microsoft's marketing department had said, "No more service packs to NT4. If a customer has a bug that is only fixed in Windows 2000, they will receive a free copy of 2000. If their machine is not powerful enough to run it, then they will receive a free computer as well, and THAT is coming out of Engineering's budget." That is, to a large extent, how our marketing department runs things. If a customer has a bug, they just get the latest version. So we performance-test the heck out of our software. There is a base feature set that accommodates most customers, and it must run at the same performance level across every version of the software. No going to the chip with twice the memory.

Why do they do this? Well, in part it's experience. We're the first department to use off-the-shelf hardware. Previously, it had been custom-designed hardware, most of which hasn't been made for 5-10 years at this point. So marketing and engineering are very used to making sure that no hardware changes are needed. We needed to rebuild the firmware for one of the cards - written in C, even. It took two weeks to get the build process right, as everyone had forgotten how to do it, and some files seem to have gone walking over the years. Also, with our product, changing out hardware is simply not an option. Most of our machines are located in inaccessible areas. Even if the customer WANTED to pull a board for us, they probably couldn't. So we either tell them to live with the bugs, or we MAKE it fit.

Yes, one customer does have bigger hardware, but only because of one feature. And one of our next tasks is to trim the memory usage back so that it fits on the same hardware it always has.

I guess what bugs me about the general attitude here is stuff like your comment, "We've always got the sister chip that's twice as big." Some of us DON'T. We have to fight long and hard to work with what's in the field. And it's very frustrating to hear people say, "Hey, as long as it's done on time, hardware be damned." We even have a former satellite designer on our team, so that is the sort of design that comes to the table: "Well, we COULD upgrade the hardware, but who's gonna pay the bill for the rocket?" So yes, if I think the project demands the flexibility, I'll write it in C, ever mindful that I'm using just THAT much more memory. If the program is just a few hundred bytes anyway, I'd write it in ASM, so that I'd have as much space as possible left over on WHATEVER part I choose.

I've been playing with a project of my own in my head, thinking about that exact decision. And I've kind of come to the conclusion, "What WOULDN'T be timing critical?" Basically, banging as hard as I can on an IR transmitter and a radio transmitter. Not a terribly large program, just very, very timing sensitive. Hardly a candidate for C. Besides, if I can bring the memory usage down, the chip size drops, and thus the board size drops. Which would be good, in this case. However, once done, it would be unlikely to see any new I/O devices, so it shouldn't fall down in that respect either.