$1 MCU review — looking for AVR part suggestions


jaycarlson wrote:
while I had to mess with RL78-GCC, AVR-GCC and ARM-GCC compiler settings extensively to be able to get it to toggle a GPIO pin efficiently,
Tell us more about the extensive settings needed for avr-gcc when toggling a pin? With nothing but -Os on the command line, a write to PIN will be an SBI, and for an older-school AVR that requires an RMW it will be 3 opcodes. How could it be more efficient than that, with nothing more than -Os (which most IDEs and makefiles pass as the default anyway)?


But then, how good and simple a tool do you get for free!

 

Studio 7 is good but big! And if you don't know Microsoft Visual Studio it will take some time to get used to, and it can be hard to find some simple things.

 

So if the goal was to get something up and running in a hurry, I guess something like Studio 4 would have been better!

 

But that said, efficiently toggling an I/O pin with AVR-GCC should be a two-second job! (And you would spend way more time in the datasheet finding out how to do it.)

(But perhaps it's because you used a 1616 and not a "normal" mega168 or so!)

 

 

 

 

 

Last Edited: Thu. Jul 27, 2017 - 10:32 AM

I'm not necessarily talking about using an IDE anyway. If I sit at a command line and type only "avr-gcc -mmcu=at<something> -Os avr.c -o avr.elf", that will build a program that uses "tight" toggling code, so I'm still a little perplexed by the extensive-settings claim. Perhaps that original "joke" about GCC optimisation wasn't a joke at all and the OP simply does not understand how to drive GCC? While it still cannot claim to be as good as IAR, I would say the code generation is now as "tight" as just about any other choice of C compiler for the AVR, especially with G-J's changes going into v8.x.


but I tried this 

/*
 * GccApplication18.c
 *
 * Created: 27-07-2017 14:57:59
 * Author : Admin
 */ 

#include <avr/io.h>


int main(void)
{
    /* Replace with your application code */
	DDRB=0x01;
    while (1) 
    {
        PINB=0x01;
    }
}

to toggle a pin on a mega328 (it would be the same on a 168), and it's 2 lines of code I wrote, and it can't be done faster than the output code:

int main(void)
{
    /* Replace with your application code */
	DDRB=0x01;
  cc:	81 e0       	ldi	r24, 0x01	; 1
  ce:	84 b9       	out	0x04, r24	; 4
    while (1) 
    {
        PINB=0x01;
  d0:	83 b9       	out	0x03, r24	; 3
  d2:	fe cf       	rjmp	.-4      	; 0xd0 <main+0x4>

everything is default  

 

And it took less than 5 minutes from start (creating a new project) to single-stepping the ASM output code!


sparrow2 wrote:
it can't be done faster than the output code

With "manual" toggle code, indeed I can't see better.  clk/6 frequency, right?

 

clk/2 can be had with timer CTC mode.  Would require two more setup lines for the A and B timer control registers, and an empty loop [optional].
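Roughly, a minimal sketch of that CTC setup (assuming an ATmega328-class part where OC0A sits on PD6; register and bit names are the avr-libc ones):

#include <avr/io.h>

int main(void)
{
    DDRD  |= (1 << DDD6);                   /* OC0A output pin (PD6 on a mega328) */
    TCCR0A = (1 << COM0A0) | (1 << WGM01);  /* toggle OC0A on compare match, CTC mode */
    TCCR0B = (1 << CS00);                   /* clk/1, no prescaling */
    OCR0A  = 0;                             /* match every timer clock -> clk/2 square wave */

    while (1)
    {
        /* optional empty loop; the timer hardware does all the work */
    }
}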

 

Xmega, and most Cortex I've had a pee at, have an "OUTTOGGLE" register or equivalent.  Should be able to get similar performance, so I don't know exactly where the "hard to toggle" is coming from.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Can I take you back to this in #15 (a lot of posts ago)...

jaycarlson wrote:
As for compilers, I would not call AVR-GCC "high quality" — I would call it completely average, when compared to other platforms and compilers I've tested so far. Without the optimizer on, it produces fairly mediocre code (a bit-set operation was compiled into 9 instructions, in my testing), and with the optimizer configured to do anything useful, the code is very difficult to debug — I'm often forced to use assembly breakpoints, as Atmel Studio can't seem to figure out what I'm trying to do.

Clearly rubbish, or a joke, but maybe not? The OP doesn't seem to know how to operate GCC if he thinks it would ever be relevant to run without the optimizer!!


Xmega, and most Cortex I've had a pee at, ... 

Freudian slip there, Lee?

 

JC 


DocJC wrote:
Freudian slip there, Lee?

lol.  And here I just got done laughing at http://www.avrfreaks.net/comment...

Paulvdh wrote:

 back off ass fast as possible

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


clawson wrote:

Tell us more about the extensive settings needed for avr-gcc when toggling a pin?

  • With the optimizer off, GCC (on AVR, ARM, or RL78) doesn't seem to know what SFRs are — it treats everything as 16-bit memory, with multiple instructions for indirect accesses. No other compiler I've tested so far does this.
  • With the optimizer set to -O1 or -Os, AVR-GCC begins treating SFRs as SFRs (with "in" and "out" instructions), but GCC's RL78 backend did not — in fact, it *never* understood SFR memory. I tried many, many settings, to no avail.
  • With the optimizer turned on at all in AVR-GCC, many of my breakpoints stopped working, and temp variables are immediately compiled out. I tried many different combinations of "Debug level" and "Optimization level" settings in Atmel Studio, but I could never get perfect debugging along with actual register manipulation. Please help if you have suggestions — seriously! Maybe this is more the Atmel Studio debugger's fault instead of avr-gcc, but I'm sort of referring to the toolchain as a whole when talking about AVR-GCC (sorry if this offends you). At the end of the day, other compilers/toolchains didn't have this problem.

 

clawson wrote:

Clearly rubbish or a joke but maybe not? OP doesn't seem to know how to operate GCC if he thinks it would ever be relevant to run without optimizer!!

I have no problem turning on the optimizer, but I need basic breakpoints to work, too. Even when I have the optimizer cranked all the way up in most environments, breakpoints still work (though variable watches don't). I understand if a variable gets optimized out (that's fine), but to get basic breakpoints working, I end up having to read through the assembly listing (which is not as easily-accessible as in other IDEs), and set assembly breakpoints instead of C breakpoints.

 

None of this is the end of the world. I get that. GCC is perfectly fine — and I will gladly continue using it when working on AVR projects. But the whole point of this project is to compare what's out there, and compared to the other compilers I've tested on these different MCU platforms, I'd call it completely average. That was all I was saying.

 

If you want me to say something nice about AVR-GCC specifically, I will say that it's much better than the RL78's GCC implementation. Will you get off my back now?

Last Edited: Fri. Jul 28, 2017 - 12:48 AM

jaycarlson wrote:

I think you would be blown away by the Silicon Labs EFM8 stuff. Three-stage pipelined architecture, running up to 72 MHz. None of this old-school 12-cycle-machine-clock rubbish; this is a single-cycle machine that, clock-for-clock, matches the TinyAVR closely (not going to say more until I finish testing!)

 

I have some sitting on the bench back home and one of the first jobs when I get back from holiday is to run them up and do some tests.

 

They are certainly very different to the 12T 12MHz parts I first used last century. On raw MIPS they will be no slouch when compared to the AVR; I just wonder what they will be like in the real world when the lack of modern 'compiler-friendly' features starts to bite. The early design decisions made by Atmel, as documented in the PDF which gets linked to from time to time, along with their collaboration with the compiler writers, have yielded a very capable 8-bitter which seems remarkably unconstrained, unlike some other chips.

'This forum helps those who help themselves.'

 

pragmatic  adjective dealing with things sensibly and realistically in a way that is based on practical rather than theoretical consideration.


For debugging, use -Og for the optimizer!

 

If you want OK code, in the order you wrote it.

 

And it's a problem for a (good) compiler to place a breakpoint if your code doesn't exist any more because it doesn't do anything.

 

For small test programs, make sure to make "key" variables volatile or something like that; if not, GCC will not generate any code because the result isn't used for anything!
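A minimal sketch of that volatile trick (hypothetical variable name), built with something like avr-gcc -mmcu=atmega328p -Og:

#include <avr/io.h>

int main(void)
{
    volatile uint8_t key = 0;   /* volatile: GCC must keep every read and write */

    DDRB = 0x01;
    while (1)
    {
        key++;                  /* not optimized away, so a breakpoint here keeps firing */
        PINB = 0x01;            /* toggle PB0 */
    }
}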

 

Add:

Perhaps show some code where you don't like GCC's way of compiling/debugging.

Last Edited: Fri. Jul 28, 2017 - 02:58 PM

sparrow2 wrote:

 

For debugging, use -Og for the optimizer!

If you want OK code, in the order you wrote it.

And it's a problem for a (good) compiler to place a breakpoint if your code doesn't exist any more because it doesn't do anything.

For small test programs, make sure to make "key" variables volatile or something like that; if not, GCC will not generate any code because the result isn't used for anything!

Thanks for the good tips, but I'm well aware of all of these -- and none of them address what I'm saying: AVR-GCC doesn't use register accesses when the optimizer is off, but as soon as you switch the optimizer to any level at all, breakpoints can start being problematic.

 

I'm not in front of AVR Studio right now, but I believe I've got a pathological case to illustrate what I'm saying:

while(1) {
    DDRB ^= 1;
}

With the optimizer off, that single line of code will get compiled to a single instruction to write the immediate value "1" to a register (used as the xor argument), followed by 3 or 4 instructions to do an indirect fetch from 16-bit memory, an xor operation, and another 3 or 4 instructions to do an indirect write back to 16-bit memory, followed by a jump. No "out" or "in" instructions will be present, even though we're obviously dealing with SFRs.

 

Alright, that's crap, so let's turn the optimizer on at any setting -- -Og, -O1, whatever -- and if you recompile, those gross 4-instruction memory fetches turn into a single "in" instruction and a single "out" instruction. Atmel knows this is how you have to use AVR-GCC, so they make -O1 the default option. However, if I try to set a breakpoint on the DDRB toggle line, it will fire ONCE when the program starts, but never fire again. Why? Because the breakpoint is getting set on the single "load value 1" register call that happens outside the loop (since the optimizer is on!).

 

Again, I know perfectly well how to deal with this. You can go to the assembly view and set the breakpoint on the "in" instruction. But this would be easier if AVR-GCC always used "in" and "out" instructions when doing register operations, even with the optimizer off. No other compiler considers these "optimizations".

 

That was the only point I was trying to make when I said "I find GCC completely average" when compared to everything else out there. It's certainly not "the best" and it's certainly not "the worst" -- and if you know how it works, you can use it to efficiently generate AVR code; but it takes a bit more "thinking" than other compilers do.

Last Edited: Fri. Jul 28, 2017 - 04:30 PM

Brian Fairchild wrote:

 

I just wonder what they will be like in the real world when the lack of modern 'compiler-friendly' features starts to bite. The early design decisions made by Atmel, as documented in the PDF which gets linked to from time to time, along with their collaboration with the compiler writers, have yielded a very capable 8-bitter which seems remarkably unconstrained, unlike some other chips.

I think RISC cores with lots of registers were seen as the "modern, compiler-friendly" architecture back when the AVR was designed, but -- and I'm not trying to start a flame war -- more CISC-ish cores seem like they ultimately came out ahead as compilers got more and more advanced. It's much easier to write a compiler for AVR than for the 8051; however, there are compilers that work equally well for both of them.

 

Note that there are a few... uhh... eccentricities that you have to deal with. For performance reasons, Keil passes parameters to functions using predefined registers, not stacks, so Keil will throw a warning if you call a function from within itself (though you can append "reentrant" to the function declaration to force Keil to use a different strategy for passing values that is safe for re-entrant functions).
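A minimal Keil C51 sketch of that keyword (hypothetical function; without it, C51 warns about the recursion because locals and parameters are overlaid rather than stacked):

/* "reentrant" makes C51 keep parameters and locals on a simulated stack,
   so the function may safely call itself, at some cost in speed and RAM. */
unsigned char factorial(unsigned char n) reentrant
{
    if (n <= 1)
        return 1;
    return n * factorial(n - 1);
}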

 

I think the biggest one for beginners that even modern compilers don't try to solve for you is the RAM vs XRAM thing. A compiler can be instructed to assume all variables go in either RAM or XRAM, but you'll still find yourself specifying this manually. Or you can just put everything in XDATA and not care about squeezing out performance. My 16-bit signed biquad filter performance tests of the Nuvoton N76 (an 8051-derivative) produced I think 35 ksps when the buffers are in XRAM and 40 ksps when the buffers are in RAM. Huge difference, but it's not, like, you know TEN TIMES or something insane like that.
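Placing the buffers explicitly looks roughly like this (a minimal Keil C51 sketch, hypothetical buffer names):

int data  coeffs[5];     /* internal RAM: fast direct addressing   */
int xdata samples[256];  /* external RAM: slower MOVX-based access */
int pdata scratch[32];   /* paged external RAM: a middle ground    */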

 

I agree AVR is a good architecture. But for it to be "unconstrained" when compared to, e.g., the EFM8 stuff, I'd like to see it with a 72 MHz core clock and an internal LDO to allow a much smaller process with a 1.8V core. Again, clock for clock, they're about the same -- but on AVR, you hit that 20 MHz speedbump pretty quickly (and that's assuming you want to drop a crystal into your design that could cost nearly half as much as the MCU itself!)

 

So yeah, there's other things at work than just the core architecture design.

Last Edited: Fri. Jul 28, 2017 - 04:51 PM

For the 8051, Keil invented a 3-byte pointer so all memory can be reached; don't they have that any more?

 

To be ANSI C the code needs to be reentrant; if not, it's cheating, and my guess is that you can force GCC to do the same. (I remember some 8051 code where I had to have duplicate library routines so main and the ISR could do the same things; that was the BSO compiler.)

 

About the limit of 20 MHz: then there are all the Xmegas with 32 MHz, but I guess they start at around $2, and they have a DMA controller, faster ADC, etc. (take a look at something like an ATXMEGA32E5 for $2.06 @100 at Digikey), and the good thing is it's the same tool (and it runs 32 MHz from the internal clock).

 

About a crystal for 20 MHz: yes, it's sad. The chips used to be able to run at 20 MHz from a $0.15 crystal, but some of the newer chips don't do that :(

 

Comparing speed between the 8051 and AVR, I would say that a (single-clock) 8051 is in general faster (at the same clock speed) if you can stay inside the 256 bytes of internal RAM, whereas the AVR doesn't have any penalty for more RAM, and there it's normally faster.

 

But all this said, I used to say that if it's only about price, an AVR is only a contender if you need EEPROM; but at normal small quantities your development speed matters more than the chip price.

And the good thing is that the same compiler/tool handles AVRs from 1/2 Kbyte of flash up to 512 Kbyte (perhaps there is a bigger one out now).

 

 

 

 

 

 

Last Edited: Fri. Jul 28, 2017 - 08:05 PM

sparrow2 wrote:

For the 8051, Keil invented a 3-byte pointer so all memory can be reached; don't they have that any more?

Yup, they do. I'm impressed you know that detail! It's called a "generic pointer" — Keil does automatic conversion between pointer memory spaces for you, but it obviously can require extra instructions. Functionally, though, it's completely transparent to the user.
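Roughly, in Keil C51 terms (a sketch with hypothetical names):

unsigned char xdata buf[16];

unsigned char xdata *px = buf;  /* memory-specific pointer: 2 bytes, XRAM only */
unsigned char       *pg = buf;  /* generic pointer: 3 bytes, carries a memory-space
                                   tag and is resolved at run time, costing a few
                                   extra instructions per access */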

 

sparrow2 wrote:

To be ANSI C the code needs to be reentrant; if not, it's cheating, and my guess is that you can force GCC to do the same.

Yeah, like I said, Keil can generate reentrant-capable functions if you decorate the function declaration appropriately, but if a function doesn't need to be reentrant, you save a few cycles by leaving it default (non-reentrant)

 

sparrow2 wrote:

Then there are all the Xmegas with 32 MHz, but I guess they start at around $2, and they have a DMA controller, faster ADC, etc. (take a look at something like an ATXMEGA32E5 for $2.06 @100 at Digikey), and the good thing is it's the same tool (and it runs 32 MHz from the internal clock).

Yeah! I'll probably buy an Xmega at some point to play with, but not for this review. I'm curious where you think they fit into the world, in 2017, with all the Cortex-M0 stuff Atmel is doing? The SAM D10, a 48 MHz modern part, is significantly cheaper than a mega168pb, and has similar capabilities. 

sparrow2 wrote:

Comparing speed between the 8051 and AVR, I would say that a (single-clock) 8051 is in general faster (at the same clock speed) if you can stay inside the 256 bytes of internal RAM, whereas the AVR doesn't have any penalty for more RAM, and there it's normally faster.

Yup, you got it. With Silicon Labs' pipelined cores, the number of clock cycles an instruction takes is simply equal to the number of bytes long the instruction is (minus conditional branches). You have essentially three levels of granularity on the 8051 -- registers, "scratchpad" RAM, and XRAM, so MOV and math operations can take 1, 2 or 3 clock cycles, depending what you're operating on (gross simplification, but useful way of thinking about things, in my opinion).

Last Edited: Fri. Jul 28, 2017 - 09:29 PM

Yup, you got it. With Silicon Labs' pipelined cores, the number of clock cycles an instruction takes is simply equal to the number of bytes long the instruction is (minus conditional branches). You have essentially three levels of granularity on the 8051 -- registers, "scratchpad" RAM, and XRAM, so MOV and math operations can take 1, 2 or 3 clock cycles, depending what you're operating on (gross simplification, but useful way of thinking about things, in my opinion).

On an 8051 I would divide the internal RAM into two parts: the lower 128 bytes and the upper 128 bytes (unless you reserve the upper 128 as stack only).


jaycarlson wrote:

sparrow2 wrote:

Comparing speed between the 8051 and AVR, I would say that a (single-clock) 8051 is in general faster (at the same clock speed) if you can stay inside the 256 bytes of internal RAM, whereas the AVR doesn't have any penalty for more RAM, and there it's normally faster.

Yup, you got it. With Silicon Labs' pipelined cores, the number of clock cycles an instruction takes is simply equal to the number of bytes long the instruction is (minus conditional branches). You have essentially three levels of granularity on the 8051 -- registers, "scratchpad" RAM, and XRAM, so MOV and math operations can take 1, 2 or 3 clock cycles, depending what you're operating on (gross simplification, but useful way of thinking about things, in my opinion).

There is some spread in the 'faster' bands.

8051 has boolean opcodes, interrupt priority and register bank switching, and can DJNZ on any DATA memory location - code that uses those features, benefits

AVR has some 16b-data opcodes and better pointer operations, so code that uses those can look better.

The biggest difference is that AVR tops out at 16-20 MHz at 5 V, and lower MHz at lower Vcc. 8051s top out at 72 MHz (LB1) at 3 V, or 25~33 MHz at 2.2~5.5 V for other vendors.

 

The SiLabs series have what is effectively a fractional baud UART, even on the smallest parts, so peripherals can make a difference.

 

jaycarlson wrote:

 With Silicon Labs' pipelined cores, the number of clock cycles an instruction takes is simply equal to the number of bytes long the instruction is (minus conditional branches). 

Most 1T 8051s have at least some 1-byte, 1-cycle opcodes; in the better ones nearly all 1-byte opcodes are 1 cycle.

The new STC8F makes quite a leap, into a 24b opcode fetch, which means all opcodes (1,2 or 3 byte) can have a 1 cycle base - eg bit/push/pop & mov dir,dir are now all 1 cycle instructions.

 

Last Edited: Fri. Jul 28, 2017 - 10:46 PM

Who-me wrote:

The new STC8F makes quite a leap, into a 24b opcode fetch, which means all opcodes (1,2 or 3 byte) can have a 1 cycle base - eg bit/push/pop & mov dir,dir are now all 1 cycle instructions.

I ordered some from Taobao. Not sure when/if they're going to ship (seller seems a bit funny). Can't track down an English datasheet. If you have any leads, let me know!


jaycarlson wrote:

Who-me wrote:

The new STC8F makes quite a leap, into a 24b opcode fetch, which means all opcodes (1,2 or 3 byte) can have a 1 cycle base - eg bit/push/pop & mov dir,dir are now all 1 cycle instructions.

I ordered some from Taobao. Not sure when/if they're going to ship (seller seems a bit funny). Can't track down an English datasheet. If you have any leads, let me know!

Did you order parts, or the Eval modules ? Taobao seems to have a few simple break-out boards, in the Y35~39 region, that could be fine for testing.

Look to use a STC8A8K64S4A12 LQFP64S.  Ones from a vendor called GEEK+ seem to have 2017 parts ?

A few include CH340 USB-UARTs, but only one has a visible crystal; another has pads for the crystal/caps but not fitted - maybe there is a crystal-less CH340 variant now?

I thought all CH340s needed a crystal, and I quite like crystals on eval boards, as they give the baud rate high precision, allowing RC-oscillator calibration checks.

 

I've used the older 1T STC15 parts, but re the new STC8, I've been waiting a little until the errata settles down...

http://www.stcmcu.com/sample-req...


One thing is that push and pop etc. are one clock, but with a limited stack size on an 8051 you have some challenges!

What is the real stack used for with Keil? (Just return addresses, or also parameters, and what about local variables?)

 

It would not be fair to the AVR if the 8051 can use a faster model for the test that won't work in a real application with more code (read: more RAM use)!


sparrow2 wrote:

One thing is that push and pop etc. are one clock, but with a limited stack size on an 8051 you have some challenges!

The 8051 uses register bank switching for interrupts, so Stack is usually for call/return. Params can be passed in registers.

The Atmel AT89LP51Rx2 series added an extended stack option, to place Stack into XDATA, and so that frees all of DATA/IDATA for user variables, but that is somewhat rare.

(it also means some side-door stack access is not so easy).


It's more important where Keil places local variables! If it's not a stack, it can't be reentrant!


sparrow2 wrote:

It's more important where Keil places local variables! If it's not a stack, it can't be reentrant!

And as I said, Keil's C51 compiler does not generate reentrant functions unless you explicitly ask it to (by decorating the function with the "reentrant" keyword). Locals end up in registers, until Keil runs out of space, and then it starts using RAM.

Last Edited: Sun. Jul 30, 2017 - 04:43 AM

So I will just say that isn't fair for the AVR, and to be harsh, that is like a C compiler competing with ASM written with C syntax. devil 


sparrow2 wrote:

So I will just say that isn't fair for the AVR, and to be harsh, that is like a C compiler competing with ASM written with C syntax. devil 

I hear you. I wouldn't call it "fair" or "unfair" — just different strengths and weaknesses based on different design choices. I get what you're saying about "competing with ASM written with C syntax", but it's really just that the developer needs a more thorough understanding of the memory model of the platform, which you don't need for AVR. That's what made AVR look very elegant when it was introduced. For what it's worth, I've had to use reentrant functions precisely once in the three or four commercial projects I've done on 8051s, and it's easily accomplished by adding the "reentrant" keyword to your function. Keil will throw a warning (though not an error, oddly!) if you forget this. Annoying, but workable. Generally, you don't need to know what's going on under the hood, unless you really care.

 

When you declare global variables without decorating them, they'll go in whichever memory space is "default" for your memory model. Keil's "small" model places variables in RAM by default, while the "large" model places variables in XRAM by default, freeing your precious 128 bytes of RAM for locals and other stuff you need to optimize a bit. You can always override where variables are stored with the "xdata" or "data" keywords (horrible, horrible keywords — to this day, I always do a double-take when I get a weird compiler error thrown by a "void myFunction(uint8_t data)" declaration).

 

This stuff doesn't bother me as much as the 128-byte SFR limit, which is pretty easy to hit on modern MCUs with tons of peripherals. Manufacturers often use paging (sort of like bank-select statements in PIC), but unlike Microchip's XC8 compiler, Keil doesn't automatically generate SFR page select instructions, so if your Timer1 starts acting up when you try to enable Timer5, chances are you forgot to switch pages. That's a huge trap for new guys that's really annoying. Other manufacturers do all sorts of weird stuff — STC is the biggest offender in the "strange hacks" realm: they maintain Timer0/Timer1 compatibility with classic 8051 MCUs, but they turn them into auto-reload ("period") timers by putting a "hidden" reload register, aliased with the timer's value register, that's only accessible when the timer is in 13-bit mode, and stopped. It's clever, but pretty gross. They also quickly ran out of SFRs with their 6-channel 16-bit arbitrary-phase PWM module (which has almost 30 registers for configuration!), so they just gave up and dumped the whole peripheral into the end of XRAM somewhere. Hey, what do you want for a buck? wink
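The usual save/switch/restore dance looks something like this (a sketch in SiLabs-flavoured C51; SFRPAGE itself is real on the paged SiLabs parts, but the header name and page value here are assumptions, so check the device header):

#include <SI_EFM8BB3_Register_Enums.h>   /* vendor device header (assumed name) */

void enable_timer5(void)
{
    unsigned char saved = SFRPAGE;  /* remember the current SFR page */
    SFRPAGE = 0x10;                 /* page holding the Timer4/5 SFRs (assumed) */
    /* ... configure the paged timer registers here ... */
    SFRPAGE = saved;                /* restore it, or Timer1 "acts up" later */
}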

Last Edited: Sun. Jul 30, 2017 - 05:49 PM

This is a strange thread, growing fast & long

I'll also throw in a few cents in #127.

 

Just searched for "microcontroller" on octopart and ordered by price:

https://octopart.com/search?q=mi...

 

Most of the cheapest are "microcontroller supervisory circuits"

But the first page also has a 100 or so pin device from rochester ??? Probably a typo ???

https://octopart.com/sak-xc2364a...

 

On page 11 it's getting serious with an attiny9 for 27ct

https://octopart.com/search?&q=m...

 

And on page 99 (the end) we're still at 60 cent or so, so that's 900 different uC's to choose from.

https://octopart.com/search?&q=m...

 

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com


Paulvdh wrote:

This is a strange thread, growing fast & long

I'll also throw in a few cents in #127.

Please do!

 

Paulvdh wrote:

Just searched for "microcontroller" on octopart and ordered by price:

https://octopart.com/search?q=mi...

Good idea. I stumbled upon some lower-price LC87 microcontrollers; these were originally Sanyo parts, but it looks like ON Semiconductor acquired them. Looks like Micah Scott was able to dump the LC87 firmware used on a Wacom tablet, but otherwise, I had never heard of them.

 

Trying to get a dev kit ordered as we speak! I sure do love me some strange microcontrollers....


By the way, these new Tinys are a substantial improvement over the previous-generation ones I was using. PDI (UPDI? What's the difference?) feels like a normal debug interface now — no more switching back and forth between debugWIRE and weird ICSP mode to burn fuses. Also, it's nice to see much of the clock configuration done at run-time instead of with fuses. Really brings the platform in line with other products.

 

I didn't realize how different the peripherals were, either. Even the GPIO port structure is completely different. Registers are grouped as offsets-from-base-addresses, just like how most ARM peripherals work, which I've never seen on an 8-bit MCU:

PORTB.DIRSET = 1; // set B0 as an output
PORTB.OUTSET = 1; // set B0
PORTB.OUTTGL = 1; // toggle B0
PORTB.OUTCLR = 1; // clear B0

Interesting to see separate SET and CLR registers — I thought AVR always had a set-bit/clear-bit instruction, so I'm not sure why this was done? Anyone have insight into this? By the way, no, these are not preprocessor trickery — these are individual registers.

 

Sorry if this is super old news to everyone, but I think it's interesting to see such dramatic changes in a family, and I'd love to hear any background information, if anyone has any details?

Last Edited: Tue. Aug 1, 2017 - 03:44 AM

IIRC the new tiny's are based more on XMEGA - did you look at how they do things ?

The AVR does have Set/Clr bit opcodes, but of quite limited reach - the usual trade off of reach vs size applies to all MCUs

The SET CLR register approach is more general, but likely needs larger code.


Registers are grouped as offsets-from-base-addresses, just like how most ARM peripherals work, which I've never seen on an 8-bit MCU: 

XMEGA, from the beginning.

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"Read a lot.  Write a lot."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


jaycarlson wrote:
PDI (UPDI? What's the difference?) ...
PDI can be via a USART whereas UPDI can be via a UART (synchronous vs self-clocking)

LUFA AVRISP2 is on a USB megaAVR to program XMEGA PDI.

Currently, UPDI is only via Atmel-ICE, JTAGICE3, EDBG, mEDBG, or a USB UART bridge; might not take much to add UPDI to LUFA AVRISP2.

jaycarlson wrote:
... I think it's interesting to see such dramatic changes in a family, and I'd love to hear any background information, if anyone has any details?
Possibly in

http://www.avrfreaks.net/forum/attiny417-attiny814-attiny816-attiny817

 


LUFA

AVRISP-MKII Clone (2010)

http://www.fourwalledcubicle.com/AVRISP.php

http://www.avrfreaks.net/forum/attiny417-attiny814-attiny816-attiny817?page=4#comment-2147276 (UPDI via USB UART bridge)

 

"Dare to be naïve." - Buckminster Fuller


jaycarlson wrote:
I thought AVR always had a set-bit/clear-bit instruction, so I'm not sure why this was done?
Atomicity. (no RMWs)


I don't think SBI & CBI have an RMW issue (such as you'd see in the old PIC ports)...if you SBI a bit, that bit gets set to 1 & the others will be unaffected.

I seem to remember some discussion regarding automatically clearing bits (such as IRQ flags) potentially being an issue, but can't find it now.  The new instructions let you set (or clear) multiple bits at once--a great convenience.
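For example, on one of the new tinys (a minimal sketch, assuming an ATtiny817-class part and the avr-libc PORT structs shown in the earlier post):

#include <avr/io.h>

void pins_up(void)
{
    /* one atomic 8-bit store sets PB0..PB2 together; SBI/CBI could only
       do this one bit, and one instruction, at a time */
    PORTB.OUTSET = PIN0_bm | PIN1_bm | PIN2_bm;
}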

 

...I found this old comparison between pics & avr:

 

I/O
Separate PORT and PIN registers avoid read-modify-write issues with capacitively loaded pins. (Although has any AVR user never spent time wondering why their input port isn't working because they used PORTx instead of PINx...? ).

When in the dark remember-the future looks brighter than ever.


jaycarlson wrote:
! I sure do love me some strange microcontrollers....

Sometimes, when I'm really bored I watch EEVBlog.

A long time ago he took a toothbrush apart.

In it was a small, probably 6-legged device.

After a bit of research he found out it was a 4-bit uC which was not available from any store (I believe) but only directly from the manufacturer

(Probably in minimum quantities of half a million or so).

 

What's a toothbrush got to do?

An on/off button and a 2-minute timer?

Maybe it was also connected to the battery charging circuit.

 

Note:

About the strangeness of this thread:

This is #135 and #4 was marked as "the solution" :)

 

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com

Last Edited: Tue. Aug 1, 2017 - 02:01 PM

About the strangeness of this thread:

This is #135 and #4 was marked as "the solution" :)

Strange?  That's pretty typical around here...

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"Read a lot.  Write a lot."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


About the strangeness of this thread:

This is #135 and #4 was marked as "the solution" :)

Strange?  That's pretty typical around here...

"We sell solutions, not answers" is today's mantra. 

When in the dark remember-the future looks brighter than ever.


avrcandies wrote:
I don't think SBI & CBI have an RMW issue (such as you'd see in the old PIC ports)...if you SBI a bit, that bit gets set to 1 & the others will be unaffected. I seem to remember some discussion regarding automatically clearing bits (such as IRQ flags) potentially being an issue, but can't find it now. The new instructions let you set (or clear) multiple bits at once--a great convenience.

 

I will just quote from a typical AVR datasheet:

Alternatively, ADIF is cleared by writing a logical one to the flag. Beware that if doing a Read-Modify-Write on ADCSRA, a pending interrupt can be disabled. This also applies if the SBI and CBI instructions are used.

 (emphasis mine)

 

This means SBI/CBI read the whole byte, not just the bit that is modified. This can have side effects, normally on interrupt flags.
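In code, the hazard looks like this (a minimal sketch for a classic megaAVR; ADIF is cleared by writing a 1 to it):

#include <avr/io.h>

void adjust_adc_prescaler(void)
{
    /* Risky read-modify-write: if ADIF happens to be set when ADCSRA is read,
       the write puts that 1 back into ADIF and clears the pending interrupt. */
    ADCSRA |= (1 << ADPS0);

    /* Safer: mask the flag to 0 in the value written back; writing 0 to an
       interrupt flag has no effect, so a pending ADIF is left alone. */
    ADCSRA = (ADCSRA & ~(1 << ADIF)) | (1 << ADPS0);
}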


an interesting post from the past:

 

It's even more insidious than simply the sequence "IN ... ORI ... OUT" being non-atomic. (In fact, the specific example that Boxbourne gave is actually inaccurate, since most interrupt flags actually cannot be cleared by accidentally writing a "0" to them...)

In the AVR architecture, 32 of the I/O registers are directly bit-addressable. You can set and clear bits, as well as do some level of conditional branching, based entirely on the state of individual bits within the first 32 I/O registers. Bit setting and clearing on these 32 registers can happen in a single, atomic instruction, using the SBI and CBI op-codes.

But on most AVR's (the devices which are exceptions to this rule have notes in the Register Summary section of the datasheet), even these so-called bitwise operations actually operate on the whole register, even if only one bit is being changed. In a single instruction cycle (two clocks in this case), the whole I/O register is read into a scratch space, a single bit is modified, and the whole scratch register is written out to the I/O register again.

If at the point that the register was read in, there was an interrupt flag set, then that flag will be copied into the scratch space. Then the single bit will be modified. Then, the whole scratch space, including the interrupt flag, will be copied back into the I/O register. Writing a '1' to an interrupt flag generally causes that flag to be cleared, and thus you lose an interrupt. Even if you did it atomically.

Some AVR's have "fixed" the SBI/CBI op-code so that they truly only operate on single bits without any possibility of affecting the surrounding bits within the register.

..... hmmmm which ones?

When in the dark remember-the future looks brighter than ever.


I would guess that AVR cores where SBI/CBI takes just 1 cycle do not have the RMW problem. That would be xmega and AVR8L (like ATtiny 10).

 

Edit: And the new "Xtiny" like the tiny 817.

Last Edited: Tue. Aug 1, 2017 - 11:45 PM

This means SBI/CBI read the whole byte, not just the one that is modified. This can have side effects, normally on interrupt flags.

In modern AVR, SBI/CBI touch only one bit in the target SFR.  No other bits are affected.  Anything introduced in the last 10 years.  Older AVR cores were subject to RMW effects for the CBI/SBI instructions, including (I believe) the m16.

 

The quoted datasheet excerpt likely is a copy/paste error.  I don't know if there's an authoritative list of which AVRs handle CBI/SBI as RMW.  If there is one, I'd like to know.

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"Read a lot.  Write a lot."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 


avrcandies wrote:
The new instructions let you set (or clear) multiple bits at once-
Atomically - that was my point.


SBI / CBI ?

 

Some time ago I dipped a part of my littlest pinky toe in the ARM world. (STM32F103C8T6)

It's a 32 bit processor but the I/O structure (timers, etc) is apparently mostly 16 bit.

It does not have an instruction to set or clear an individual I/O bit; instead it has a 32-bit set/clear I/O register, one for each 16-bit port.

If you write a 32-bit value to that register, all the zeros in that value leave the I/O bits unchanged, and the ones either set or reset the corresponding bit on the corresponding output port.

So with a single asm instruction for a regular port write you can set or reset anything from zero to all bits in the output port without touching bits which should not be touched. Single instruction, same CPU cycle, intrinsic atomicity (is that a word, it sure is a combo of 9 letters).

Neat feature.
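In C it looks roughly like this (a sketch using CMSIS-style register names for the F103; the include name and pin choice are assumptions):

#include "stm32f10x.h"   /* CMSIS device header (assumed include name) */

void set_pb0_clear_pb1(void)
{
    /* one 32-bit write: bits 0..15 set pins, bits 16..31 reset them */
    GPIOB->BSRR = (1u << 0)          /* BS0: set PB0   */
                | (1u << (16 + 1));  /* BR1: reset PB1 */
}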

 

I'm not sure what happens when both the set and reset bits are written to. I think it would toggle the output bit, but I'm not sure.

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com

Last Edited: Wed. Aug 2, 2017 - 11:29 AM

Go on. ARM7TDMI had steering registers before the millennium. Xmega has had steering registers since 2007.
Your STM32F103 is vintage 2007. The new Tiny817 has got steering registers.
.
Oh, and the M3 STM32F103 is trashed by the Cortex-M4.
The actual GPIO performance depends on the actual PORT Silicon. The Xmega trashes M0 and a lot of M3.
.
David.


@OP

 

Have you seen...

 

https://dannyelectronics.wordpre...

 

...?

 

If you go back over several months you'll find various blogs on benchmarking various chip families.

'This forum helps those who help themselves.'

 

pragmatic  adjective dealing with things sensibly and realistically in a way that is based on practical rather than theoretical consideration.


@david.prentice

 

And what's your point?

I didn't even hint at I/O performance, dates or a trashing contest, and I haven't got a clue what a steering register is.

The only thing I pointed out is that the ..F103... has a different I/O structure and it has an inherent ability to set/reset multiple bits on an I/O port atomically.

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com

Last Edited: Wed. Aug 2, 2017 - 01:43 PM

Steering register is the name given to this mechanism. i.e. setting a bit, clearing a bit without the delay and atomicity difficulty of RMW.
My point was that this is nothing new. And the STM32F103 is a mature chip.
Furthermore, the Xmega is a similar vintage with similar mechanism and better GPIO performance.
Of course the 32-bit ARM core has a faster processing throughput.


I haven't got a clue what a steering register is.

Allows the peripherals to be mapped to different pins, rather than fixed.  Also allows 20 different functions in an 8 pin part, but you can only choose to use 4 of the 20 (: 

 

for example:

Datasheet PIC16F887 - DS41291F page 146.

11.6.7 PULSE STEERING MODE

In Single Output mode, pulse steering allows any of the PWM pins to be the modulated signal. Additionally, the same PWM signal can be simultaneously available on multiple pins.

 

Once the self-driving AVRs are introduced, the steering register may take on a new meaning.

When in the dark remember-the future looks brighter than ever.

Last Edited: Wed. Aug 2, 2017 - 02:49 PM

Perhaps I have been using the wrong terminology.

 

I have always called Freescale PCOR, ST BSRR, Xmega OUTCLR registers "IO steering" registers.

And pin-mapping registers like ST MAPR register "mapping" registers.

 

Microcontrollers have always had "Alternate Functions" on particular pins.   It is fairly recent ( < 15 years)  to move a specific function to a different pin.

It makes life easier in some respects.   But it does mean you can't always assume that a physical pin is always using its default functionality.

 

David.

Last Edited: Wed. Aug 2, 2017 - 02:52 PM

Ah, so that is the "steering" part.

Triggers a memory about a story floating on (or sunk deep into) the 'net.

It was about a uC occasionally losing bits in its I/O configuration registers. The whole uC (or FPGA?) kept running happily, just some outputs stopped outputting the right signals until the uC got a hardware reset. Then it worked all perfectly for a while, so no hardware pins blown, but the problem kept recurring.

The likely culprit was probably marginal decoupling / EMC design, whatever; but investigating and PCB revisions take time.

 

So as a (temporary) solution they put the whole I/O configuration in flash and used a periodic interrupt to rewrite it to the I/O ports.

 

 

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com


Gotta watch those weak bits....long ago, during college a student built a pretty neat "robot" & was giving a demo to the campus reporter, who was taking a bunch of up close action photos.  Apparently the camera flash disrupted a few EPROM program bits, giving the robot a serious case of spasms, nearly tearing itself apart. 

When in the dark remember-the future looks brighter than ever.
