How to select a microcontroller architecture


Hello avr freaks,
I want to know how to select a microcontroller architecture, for example an 8-bit, 16-bit, or 32-bit microcontroller. Can someone explain, please? Thanks in advance.

Last Edited: Mon. Feb 3, 2020 - 01:32 PM

Any micro choice is usually about "resources". Some jobs need a couple of K of flash, a few hundred bytes of RAM, a couple of MIPS, one UART and one ADC. Other jobs need a 2 GHz 64-bit CPU, 5 megaflops, 4 GB RAM, 3 host USB controllers and a video DSP.

 

The choice of 8/16/32-bit micro is just part of this. While you might specifically choose 32-bit for CPU power, the chances are it's simply a consequence of the choice of other resources (they don't make 8-bit micros with 4 GB addressing, etc.).


It depends on project requirements, developer familiarity and access to tools, and budget.  Second sourcing used to be important, but not any more.  If you're going to make millions of widgets, then getting the cheapest part that will do the job is the key task.  If not (and that applies to most projects), then developer familiarity and access to tools come to the forefront, as long as the family has the horsepower and peripherals to do the job.  For myself, I just use the AVR and ARM Cortex-M families now (having left behind about a dozen other families).  That gives me a device horsepower range of about 1000:1.  So far, I haven't had to look outside that range. :)

Last Edited: Sun. Feb 2, 2020 - 06:09 PM

Well, that question is so generic that answering it is hard.

Things to look at:

  • availability of support (development tools, documentation, fora like this one, etc.)
  • Heavy use of math? Then 32-bit may be better.
  • peripherals
  • price
  • Is the environment noisy? Does the MCU need to drive significant loads directly, like LEDs? 5 V operation? Better look at 8-bit first.

 

These are just some things that came to mind; there are many others.


To make an analogy, why do some folks choose to drive a Ford pickup and others a Bugatti Veyron?


An MCU's ecosystem is essential.

The Amazing $1 Microcontroller - Jay Carlson due to $1 MCU review — looking for AVR part suggestions | AVR Freaks

El Tangas wrote:
... there are many others.
Usually ranking before price is electrical power (size of energy storage, the reduced current and duration available from renewable energy, etc.); a use case is automotive tire pressure sensors.

Other criteria:

  • memory (sizing, partitioning or segmenting by MPU or MMU, address spaces)
  • licensing (OSS vs FLOSS vs proprietary vs reuse, OSH via CC-"multiple" vs proprietary vs reuse)
  • best practices (process, methods)
  • safety
  • reliability
  • security
  • requirements (infamously, Boeing 737 MAX MCAS)

 


Practical GPL Compliance - The Linux Foundation

Home | Software Package Data Exchange (SPDX) (BOM - hardware, software; technical data package to customer)

Best Practices for Open-Source Hardware 1.0 | Open Source Hardware Association

Firmware Update v18.03 | Barr Group

  • The State of Embedded Systems Safety

Functional Safety | Microchip Technology

Boeing: The 737 MAX MCAS Software Enhancement

 

"Dare to be naïve." - Buckminster Fuller


The choice is often quite arbitrary. Often, but not always, a 16 bit or 32 bit can do what an 8 bit does. Increasingly, with rapidly falling ARM (e.g. 32 bit) prices, folks will choose one of those over an 8 bit, in part because, just maybe some day, the application will really NEED the extra bit width.

 

Sometimes, there are factors like speed or memory size that force a choice. That choice is usually based on resources, not bit width, but not always. Sometimes, it is forced by some specific feature, like a WiFi radio or a video processor. When it comes to these resource things, you take whatever bit width the device comes with.

 

Jim

Jim Wagner Oregon Research Electronics, Consulting Div. Tangent, OR, USA http://www.orelectronics.net


8 bit micros are on the way out, they will be completely gone within another year or two.  So everyone will be moving on to 16 or 32 bit micros or be left hopelessly behind.  

This is undisputed; they've been saying this for at least 20 years.  

 

Given enough execution time, even a 1-bit processor can solve any problem:

https://hackaday.com/tag/mc14500/

When in the dark remember-the future looks brighter than ever.   I look forward to being able to predict the future!

Last Edited: Mon. Feb 3, 2020 - 12:12 AM

avrcandies wrote:
they've been saying this for at least 20 years.

I remember hearing it about 35 years ago. And everyone at work believed it too, so we started designing 16-bit boards. What a waste of time! wink

John Samperi

Ampertronics Pty. Ltd.

www.ampertronics.com.au

* Electronic Design * Custom Products * Contract Assembly


avrcandies wrote:
This is undisputed; they've been saying this for at least 20 years.
:-)

I think they have pretty much managed to kill off 4 bit though - only took about 50 years !

ka7ehk wrote:
The choice is often quite arbitrary.
A lot of designs are actually done with an inappropriate micro simply because the designer was already familiar with the architecture and had the knowledge and toolchain to work with a particular micro. 


El Tangas wrote:
Is the environment noisy? Does the MCU need to drive significant loads directly, like LEDs? 5V operation? Better look at 8bit first.

That has nothing to do with CPU word size (number of bits).

 

It is purely coincidental that most of the currently-available 32-bit (and higher) chips are "low voltage" (3V or less)

Top Tips:

  1. How to properly post source code - see: https://www.avrfreaks.net/comment... - also how to properly include images/pictures
  2. "Garbage" characters on a serial terminal are (almost?) invariably due to wrong baud rate - see: https://learn.sparkfun.com/tutorials/serial-communication
  3. Wrong baud rate is usually due to not running at the speed you thought; check by blinking a LED to see if you get the speed you expected
  4. Difference between a crystal, and a crystal oscillator - see: https://www.avrfreaks.net/comment...
  5. When your question is resolved, mark the solution: https://www.avrfreaks.net/comment...
  6. Beginner's "Getting Started" tips: https://www.avrfreaks.net/comment...

avrcandies wrote:
This is undisputed; they've been saying this for at least 20 years

For example:  http://www.8052mcu.com/forum/read/181200

 

But, ironically:  http://www.8052mcu.com/forumchat/read/190773

 

Warning: do not follow links to www.8052.com - change them to www.8052mcu.com

 

But I think 16 bit really is purely niche these days ?

 

I do remember 4-bit, though ...

Last Edited: Mon. Feb 3, 2020 - 09:10 AM

awneil wrote:

But I think 16 bit really is purely niche these days ?

 

Maybe, but there is an argument that 16-bits is the sweet spot for a lot of deeply embedded stuff. 16 bits is the right size for memory addresses, with 64k being a good size for most code. 16 bits is the right size for many real world measurements like voltages and temperature when done to a sensible precision.
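As a rough illustration (a sketch only; the variable names and scale factors are made up for the example), 16-bit types comfortably cover that sort of measurement at sensible resolution:

    #include <stdint.h>

    /* 16-bit quantities at "sensible" resolution for embedded measurement work. */
    uint16_t vbat_mV   = 3712;    /* battery voltage in millivolts, 0..65535 mV   */
    int16_t  temp_c100 = 2350;    /* temperature in 0.01 degC steps: 23.50 degC,  */
                                  /* range -327.68 .. +327.67 degC                */
    uint16_t adc_raw   = 0x03FF;  /* a 10- or 12-bit ADC conversion result fits   */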

#1 This forum helps those that help themselves

#2 All grounds are not created equal

#3 How have you proved that your chip is running at xxMHz?

#4 "If you think you need floating point to solve the problem then you don't understand the problem. If you really do need floating point then you have a problem you do not understand." - Heater's ex-boss



I once did a design with M16C(*) - was kind of fun.

 

 

But at the end of the day a micro is just a micro - if using C then the differences are almost totally immaterial anyway. 

 

(*) seem to remember that it was "Fujitsu" not "Renesas" at the time - but I may have dreamed that.


awneil wrote:
It is purely coincidental that most of the currently-available 32-bit (and higher) chips are "low voltage" (3V or less)

Does it matter if it's coincidental or not? It is what it is.


Suppose I am using an 8-bit microcontroller interfacing with I2C & SPI,

and a 16-bit microcontroller interfacing with I2C & SPI. I am trying to figure out what will happen, because I2C has a packet format of 9 bits (8 bits for data/address and a 9th bit for acknowledgment), and the I2C max speed is 400 kbps. If I use an 8-bit microcontroller and a 16-bit microcontroller with the same clock frequency, for example 16 MHz, will there be any difference in transmission speed?

 


Well, the original question was, "8bit , 16bit Or 32bit"

 

I think the general consensus is that the word size itself is seldom the deciding factor - it's generally going to be about "other things" - like peripherals, operating voltage etc.


Brian Fairchild wrote:
64k being a good size for most (sic?) code

 

One of the driving factors for 32-bit is that 64K is very often not enough - certainly if your device has to be connected using Bluetooth, Zigbee, Internet, etc ...


While it's not entirely cut and dried, the fact is that humans can just about cope with counting 0..255, so most communications protocols send 8 bits as the "atomic unit" (yeah, I guess there are 5..9 bit UARTs, and some SPI can do lots more than 8). If an 8-bit controller wants to send 16 bits to a 16-bit micro (why the micro data width should have any real bearing on this I don't know?) then it would just send two lots of 8. Similarly, if the 16-bit CPU has 16 bits to send to the 8-bit micro it will just send two lots of 8.

 

Again, not totally set in concrete but generally most things that pass to/fro between any micros are some multiple of 8 bits. So you don't generally send 17 or 25 or even 20. You tend to send 8 or 16 or 24 or 32 or 8*N bits.
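To make that concrete, here is a minimal C sketch (not from any post above; uart_put()/uart_get() are hypothetical byte I/O helpers) of a 16-bit value crossing the wire as two 8-bit transfers, regardless of the word size at either end:

    #include <stdint.h>

    extern void    uart_put(uint8_t b);   /* hypothetical: send one byte    */
    extern uint8_t uart_get(void);        /* hypothetical: receive one byte */

    /* Send a 16-bit value as two 8-bit transfers, high byte first. */
    static void send_u16(uint16_t v)
    {
        uart_put((uint8_t)(v >> 8));
        uart_put((uint8_t)(v & 0xFF));
    }

    /* Reassemble at the other end - the same code works on an 8, 16, 32 or 64-bit CPU. */
    static uint16_t recv_u16(void)
    {
        uint16_t hi = uart_get();
        uint16_t lo = uart_get();
        return (uint16_t)((hi << 8) | lo);
    }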

 

My 64 bit PC often "talks" to my 8 bit AVR. Neither one really cares about the bit width of the other!

 

PS this very post came to you via the internet - everything in TCP/IP packets is aligned on 8 bit boundaries and padded when it is not.

Last Edited: Mon. Feb 3, 2020 - 11:31 AM

 

uc_coder wrote:
suppose i am using 8 bit micro controller interfacing with I2C & SPI.

and 16 bit micro controller interfacing with I2C & SPI.

The whole point about standard interfaces like this is that your choice of microcontroller is entirely irrelevant:  I2C is I2C - it does not change depending on what microcontroller you use!

 

The CPU word size is purely an internal thing; it is irrelevant "outside" the chip.

 

if i will use 8 bit micro controller and 16 bit micro controller with same clock frequency for example 16MHz is there will be any difference regarding transmission speed.

The CPU clock rate is irrelevant. The I2C controller takes care of all the timing on the bus: so long as the bus speed is compatible with both Master & Slave, it is irrelevant what speed their different CPUs are running at.

 

EDIT

 

We've had this before quite recently ...

 

EDIT 2

 

The datasheet tells you how the bus speed is defined; eg, for  ATmega48 et al:

 

 

So, although it may be related to the CPU clock rate, it is not (necessarily) equal to the CPU clock rate.
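For reference, the megaAVR TWI module referred to above derives the bus clock as SCL = F_CPU / (16 + 2 x TWBR x prescaler). A minimal setup sketch, assuming an ATmega48/88/168-class part running at 16 MHz:

    #include <avr/io.h>
    #include <stdint.h>

    #define CPU_HZ  16000000UL      /* assumed CPU clock for this sketch */
    #define SCL_HZ    400000UL      /* 400 kHz I2C "fast mode" target    */

    static void twi_init(void)
    {
        TWSR = 0;   /* TWPS1:0 = 0 -> prescaler = 1 */
        /* SCL = CPU_HZ / (16 + 2*TWBR*prescaler)  ->  TWBR = 12 here */
        TWBR = (uint8_t)((CPU_HZ / SCL_HZ - 16) / 2);
    }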

 

Other chips may well have even more clocking options ...

 

Last Edited: Mon. Feb 3, 2020 - 12:15 PM

clawson wrote:
not totally set in concrete but generally most things

 

Sadly, this was not the case with the CS1232 I chose to use as an ADC for cantilever strain gauges. The 24 bits per output required 27 clocks, plus seven-bit-wide register addressing, changing from read to write and back to read in the same 'SPI' exchange... 47 bits in total, if I recall correctly. For the gravy on top, my documentation was initially all in Chinese, which I *don't* read...

 

Neil


awneil wrote:

But I think 16 bit really is purely niche these days ?

 

Even deciding what counts as '16 bit' is blurred these days.

Some ARMs have a 16-bit opcode choice, does that make them 16-bit  ;) ?

Some new 8051s have a code fetch/execute that is 24b wide, does that make them 24b ?

If you have a 32b ALU inside an 8051, is that still an 8-bit MCU ?

What about a 32b FPU inside an 8051 ? - placing a hard number on the 'bit axis' gets very hard now...

AVRs have a 16b opcode and some 16b instructions, does that make them 16b ?

TI's MSP430 is sold as a 16b part.

 

uc_coder wrote:

Hello avr freaks,
I want to know to how to select micro controller architecture for example 8bit , 16bit Or 32bit micro controller. Can some one explain me please. Thanks in advance

 

More generally useful is to look at the core MHz, the flash and RAM, the pin count/package, and the peripherals needed for the tasks you have to perform.

Notice the core itself does not appear in that list yet ?

With an HLL, the 'nominal bits' of the core matter less.

 

 


uc_coder wrote:

suppose I am using an 8-bit microcontroller interfacing with I2C & SPI,

and a 16-bit microcontroller interfacing with I2C & SPI. I am trying to figure out what will happen, because I2C has a packet format of 9 bits (8 bits for data/address and a 9th bit for acknowledgment), and the I2C max speed is 400 kbps. If I use an 8-bit microcontroller and a 16-bit microcontroller with the same clock frequency, for example 16 MHz, will there be any difference in transmission speed?

 

Transmission speed & width are defined by the peripheral, so the nominal core width has no effect on that.

What you may like to check is the other aspects of the peripheral, as it is common for 8b MCUs to have the simplest non-FIFO SPI/UARTs, whilst 32b MCUs will often have FIFOs and DMA.

HW details like that can affect the packet burst abilities - the more HW, the lazier the programmer can be ;)  (but it can get harder to get the thing working in the first place, from a 2000 page manual !)


I would generally measure a CPU's width based on the width of the CPU registers/ALU not the opcode width.



There's always the oddball

 

Though I suspect this is pure specmanship


avrcandies wrote:

There's always the oddball

 

Though I suspect this is pure specmanship

 

Wow, that deserves some sort of prize ! ;)


clawson wrote:

I would generally measure a CPU's width based on the width of the CPU registers/ALU not the opcode width.

This is how I've always understood it.  Another definition I've seen is the width of data on which the CPU can natively execute the full range of load/store, arithmetic and logic operations.


kk6gm wrote:
the width of data on which the CPU can natively execute the full range of load/store, arithmetic and logic operations.

 

I think the word "full" is the key. The AVR has a few 16-bit instructions and a fair number of 1-bit instructions, but it's neither a 16-bit nor a 1-bit architecture; only the 8-bit instructions can do the full range of operations.

 


I agree with all of the above criteria, but I have one more.

 

If the chip only comes as a ball grid array device it is instantly off my list...

 

I've got to be able to solder one up or I'll pick a different micro, and I haven't yet figured out how to lay out a BGA chip's PCB, much less solder it up in my basement.

 

JC


It's all a bit nebulous - generally it is the 'data path' which includes the ALU (another exception here might be transport-triggered architectures, where the ALU is essentially another peripheral), but there are always exceptions - the Z80 had a 4-bit ALU double pumped, the LSI11/03 had an 8-bit ALU double pumped. The PDP8/S (?) had a bit-serial ALU. The various IBM 360 models had 8/16/32-bit ALUs. The opcode width is irrelevant - Cortex-M has 16-bit opcodes as does the AVR - but they are very different in terms of architecture.

 

 

A recent project I did required 2 I2C (1 slave, 1 master), a UART for MIDI and a couple of GPIO. It also had to be 3v3. A zillion micros would fit those criteria. It came down to which dev boards I had on hand to test the code with, and Arduino support. 8/16/32 bit didn't factor in the decision. Nor would it have impacted the performance of the unit. As well, if the part was available from LCSC, then that made my job even easier as I didn't want to issue multiple orders for the one project (I'm in Australia and delivery is around $30). Before anyone pipes up and says Digikey has 'free' delivery - most of the 'jellybean' parts are 1/10th of the cost compared to Digikey.

 

So to answer the OP - broad question so there is not one simple answer. Which architecture should you learn? I think most would suggest ARM Cortex M based devices as this covers a significant range of manufacturers and parts. The reality though is 99% of a design effort tends to be in writing the code and debugging - and those skills aren't architecture/chip specific.

 

 


Kartman wrote:
most of the 'jellybean' parts are 1/10th of the cost

I remember some purchasing guy a few years back would get extremely upset when he was told to just order jellybean parts.  He said they always left a bad taste in his mouth.


It's amazing that the Motorola 68000 computer architecture is alive and well.

MCF5225x ColdFire® Microcontrollers - NXP Semiconductors | Mouser

in stock, 10 week lead time

Will be making the transition to FSF GCC 11

Kartman wrote:
Which architecture should you learn? I think most would suggest ARM Cortex M based devices as this covers a significant range of manufacturers and parts.
RISC-V is putting pressure on the folks at ARM.

 


https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/M680x0-Options.html#M680x0-Options

https://github.com/gcc-mirror/gcc/tree/master/libgcc/config/m68k

 

edit :

Status of Supported Architectures from Maintainers' Point of View - GNU Project - Free Software Foundation (FSF)

...

c       Port uses cc0.

...

via CC0Transition - GCC Wiki

due to avr-gcc and avr-g++ are deprecated now. | AVR Freaks

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Tue. Feb 4, 2020 - 02:57 AM

I've yet to come across a compelling argument as to why RISC-V would be, or is, any better than ARM. Once it is on silicon, any open source 'openness' is somewhat hidden. Arguments of 'security' and knowing what is in the chip are pure noise - only reverse-engineering the chip would prove this, and not too many people would go to that length. I think the reality will be more like Coke vs Pepsi, or Blue Steel vs Le Tigre.

Seems the hobbyist level chips with RISC-V come out of China.


Kartman wrote:
I've yet to come across a compelling argument as to why RISC-V would be or is any better than ARM.

 

Competition is good. Anyway, I recently read a bit about the architecture. RISC-V doesn't have a flags register; that's rather unusual, isn't it?


For an 8-bit micro, having a carry/borrow is handy, less so for a 32/64-bit micro. Like every engineering endeavour, there are always tradeoffs. From a C perspective, we never see the condition codes/flags, so no-one will miss them!


Kartman wrote:
From a C perspective, we never see the condition codes/flags so no-one will miss them!

I think they missed the boat in forgetting to include the carry flag... it would come in handy many times.


All roads lead to some flavor of ARM Cortex.


avrcandies wrote:

From a C perspective, we never see the condition codes/flags so no-one will miss them!

I think they missed the boat in forgetting to include the carry flag....would come in handy many times. 

 

I agree entirely; there are any number of cases - in particular, extended arithmetic and serial I/O via bit banging - where access to the carry flag would be handy and/or prevent a digression into assembly.

 

I regret that, when I used occasionally to correspond with DMR, I never asked him why not.

 

Neil


How often do you use uint64 vars? You can do extended adds without carry - it just takes a couple more instructions to achieve it. Bit banging in C doesn't need a carry bit. Rather than testing a carry, you test a bit in a register. Depending on the specifics of the architecture, this may be the same or a similar operation.
The RISC-V board I have is 64-bit - I'm not doing many 128-bit adds methinks!

Somewhere in the RISC-V blurb they explain their decision regarding the flags.
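For illustration, a minimal C sketch (not from the thread) of that "couple more instructions" version of an extended add, where the carry out of the low half is recovered with an unsigned compare instead of a flag:

    #include <stdint.h>

    /* 32-bit add built from 16-bit halves, with no hardware carry flag needed. */
    static void add32(uint16_t a_hi, uint16_t a_lo,
                      uint16_t b_hi, uint16_t b_lo,
                      uint16_t *r_hi, uint16_t *r_lo)
    {
        uint16_t lo    = (uint16_t)(a_lo + b_lo);
        uint16_t carry = (uint16_t)(lo < a_lo);   /* unsigned wrap => carry was 1 */

        *r_lo = lo;
        *r_hi = (uint16_t)(a_hi + b_hi + carry);
    }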


avrcandies wrote:

From a C perspective, we never see the condition codes/flags so no-one will miss them!

I think they missed the boat in forgetting to include the carry flag....would come in handy many times. 

 

There's at least one compiler I've come across, not AVR though, with an extra keyword to allow access to the carry flag.


toalan wrote:

All roads lead to some flavor of ARM Cortex.

 

Which just turns the 8/16/32-bit question into a 'which flavour has the best peripherals?' question.

 

In the end the CPU core doesn't really matter; it's usually the peripherals and how good they are that matters.


Brian Fairchild wrote:
Which just turns the 8/16/32-bit question into a 'which flavour has the best peripherals?' question.

Somewhat - but not so much.

 

As there are then only four to choose from (M0, M3, M4, M7), there is a far clearer gradation of compute power.

 

Although, as the M0 is the low end (low performance; low power; low cost), it does tend to come with the smallest set of peripherals - so you can still end up going for a "bigger" core just because of its peripherals.

 

And, of course, Cortex-M and Cortex-A are in wholly different ballparks.


and M23, M33 (Arm TrustZone for Cortex-M; IIRC, other enhancements)

SAM L11

32-bit Embedded Security Solutions | Microchip Technology

 

"Dare to be naïve." - Buckminster Fuller


Brian Fairchild wrote:
There's at least one compiler I've come across, not AVR though, with an extra keyword to allow access to the carry flag.
avr-gcc has "SREG". The only thing about that though is that if you:

do_something();
flags = SREG;

you've no real guarantee that the point at which you read SREG is exactly 1 opcode later than whatever you did.

 

In the C compiler where the carry was available how did it handle this? How do you know that the optimiser has not reordered stuff so it's placed some intervening code (maybe like a for() loop iterator update?) between the thing that might have set the Carry and the C level access to it ?


ISTR that Keil C51 did allow that

 

But, as clawson wrote:
you've no real guarantee that the point at which you read SREG is exactly 1 opcode later than whatever you did

 

So it did always seem rather pointless to me.


Found this:

 

http://www.keil.com/support/man/docs/armcc/armcc_chr1359124214079.htm

 

So it has "Overflow" and "Carry" but I still don't see how you direct it to "that last thing I just did" as there isn't really such a concept in C.

 

Actually, the fact that the providing header is dspfns.h suggests these might be intrinsics for some kind of DSP co-processor, which presumably leaves that CPU in the "last state" while the main processor gets on and wiggles its flags all over the place?

 

EDIT: yup more about it here:

 

http://www.keil.com/support/man/docs/armcc/armcc_chr1359124210895.htm

 

So these are intrinsics on a separate DSP doing G.723 and G.729 by the looks of it.

Last Edited: Tue. Feb 4, 2020 - 03:54 PM

clawson wrote:
you've no real guarantee that the point at which you read SREG is exactly 1 opcode later than whatever you did

I wasn't actually thinking of the AVR carry bit, per se, but a carry that represented the result of the code line's computation... say you add 2 bytes, cat = dog + frog; there would be an auxiliary carry value available immediately after that code line (but only guaranteed to represent that computation immediately after that code line), so internally all math operations would actually return 2 values, only one of which is typically used... probably too much to ask!


It sounds like they relate specifically to those "dspfns.h" routines:

Keil, an ARM company wrote:
The implementation of the European Telecommunications Standards Institute (ETSI) basic operations in dspfns.h exposes the status flags Overflow and Carry.

So, presumably, that ETSI stuff defines specifically what they do?

 

Not sure that it's saying you can just use these flags anywhere in any arbitrary 'C' code ?

 

It looks like they relate only to overflow/carry from those ETSI operations ?

 

EDIT

 

Oh - I think you just said that in your edit?

Last Edited: Tue. Feb 4, 2020 - 03:59 PM

Well, it could be done like ldiv() in <stdlib.h>, I suppose. That returns TWO results from one division. By the same token you could have an addc() or something that returned a result and a carry bit, etc. Not sure how you could do it with + - * / though? I guess you might overload the operators (C++) and have a variant where what you add/subtract/etc. is not just one value but a value and a carry result?

 

But in C, what is the need to expose carry anyway? If I want 16-bit addition on an 8-bit processor I just use "uint16_t + uint16_t". Sure, behind the scenes this might be implemented as ADD;ADC, but why does the "internal" carry in that matter to me? Or is this about the overflow: you use uint8_t = 255 + 255 and want to know about the 9th bit of the result?
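If it is about that 9th bit, here are a couple of ways to get at it in C (a sketch; __builtin_add_overflow is a GCC/Clang extension, not standard C):

    #include <stdint.h>
    #include <stdbool.h>

    /* Portable: do the add in a wider type and look at bit 8. */
    static bool add8_carry_portable(uint8_t a, uint8_t b, uint8_t *sum)
    {
        uint16_t wide = (uint16_t)a + b;
        *sum = (uint8_t)wide;
        return (wide >> 8) & 1;            /* true => carry out of bit 7 */
    }

    /* Compiler builtin: returns true when the result does not fit in *sum. */
    static bool add8_carry_builtin(uint8_t a, uint8_t b, uint8_t *sum)
    {
        return __builtin_add_overflow(a, b, sum);
    }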


Kartman wrote:
Somewhere in the RISC-V blurb they explain their decision regarding the flags.

 

Their explanation is that it creates dependencies on the status register, which is bad for out-of-order CPU optimizations. In particular, it can create unexpected dependencies, like reading the status register directly from the I/O space would on an AVR, if it had an out-of-order core. On this hypothetical AVR, this would need extra logic looking into every I/O operation, in case it accessed SREG, creating a dependency with previous ALU instructions...


Kartman wrote:
Seems the hobbyist level chips with RISC-V come out of China.
An industrial level RISC-V SoC out of Arizona :

Microchip Unveils Family Details and Opens Early Access Program for RISC-V Enabled Low-Power PolarFire SoC FPGA Family | Microchip Technology

edit :

Early Access Program for PolarFire® SoC FPGA - YouTube (1m9s)

 

"Dare to be naïve." - Buckminster Fuller

Last Edited: Wed. Feb 19, 2020 - 11:58 PM
