Atmel START not fit for purpose and no alternative provided


We have been developing products around ATSAM processors for several years, over which time we and our partners have purchased tens of thousands of ATSAM processors for these designs. Previously we used ASF3. Although the code quality left something to be desired, on the whole we were happy with it and could work around the issues we came across.

 

We now have a new design using an ATSAME5x processor. When we chose this processor, we didn't realise that ASF3 doesn't support it. It appears that we are forced to use Atmel START. This has caused us several problems (see #3 and #4 for the ones that are causing us the greatest trouble):

 

#1 Our firmware is open source, which means that Windows-only tools such as Atmel Studio are not acceptable to a large part of the community. We develop our firmware using Eclipse and the ARM Developer build of gcc. If Microchip values the open source community, they should make provision for users who do not wish to use Atmel Studio or other proprietary and platform-specific tools.

 

So my plan was to create a basic project in ASF/Atmel START incorporating all the drivers we are likely to need, then move the files to Eclipse and continue development there. This showed up a couple of other issues:

 

#2 The #include file mechanism used by ASF is appalling. For every single Atmel START component that I want to use, the code generator adds the directory containing that component to the include path. This is bad programming practice for two reasons. First, it makes the include paths difficult to maintain. Second, it increases the chance of a name clash between an Atmel START include file and one of the developer's own include files, which will sometimes result in the wrong file being included.
 

Fortunately this can be resolved quite easily. In the Eclipse copy of the project I've moved all of the Atmel START code into a folder called AtmelStart and added that folder to the include path. Then in a few of the Atmel START files I had to change e.g. #include <hpl_tc_base.h> to #include <hpl/tc/hpl_tc_base.h>. In a few other places I changed <> to "" in the #include directive so that the source file's own directory is searched for the include file.

 

#3. The clock configurator in Atmel Studio is broken. I want to configure the clock system to generate a 120MHz CPU clock derived from XOSC0, which uses a 12MHz crystal. Configuring XOSC0 is no problem. However, there seems to be zero documentation in Atmel Studio on how I should generate a 120MHz CPU clock from a crystal oscillator (there is a User Guide button on the Atmel START page, but clicking on it does nothing). From the chip datasheet it looks like I can do it by feeding XOSC0 into DPLL0, then feeding DPLL0 into generic clock generator 0. So I need to configure DPLL0 to multiply the input frequency by 10. However, when I click on Settings for DPLL0, it will allow me to delete the digits in the multiplier and fractional multiplier fields, but it won't allow me to enter any new digits. I updated ASF to the latest available version yesterday; the version reads 7.0.1931.

 

#4. So to get round #3 I tried deleting just one digit from the multiplier field, giving a multiplication of 14. The clock configurator says this will give me a clock frequency of 180.75MHz, but I should be able to edit the multiplier in the generated code. So I click on Generate Project and get this every time:

 

Could not download contents from the server.
Exception thrown: the remote server returned an error: (500) Internal Server Error

 

The same happens if I delete 2 digits to get an integer multiplier of 1. I have not found any way of configuring DPLL0 that doesn't give the server error message.

 

#5. The compiler.h file still has macro definitions of 'min' and 'max' in it. These are an abomination in C development, and they break C++ compilation if you use certain standard header files. Use 'Min' and 'Max' macros if you really must. In ASF3 we were able to remove these macro definitions and change the few references to them in other ASF files to use Min and Max instead. In START we don't know what code the tool may generate that calls min and max.

 

In summary, I am getting p****d off with Microchip forcing me to use a broken tool with no alternative provided, and I am looking at STM32 processors for future designs. Our familiarity with the ATSAM peripherals and ASF have kept us using ATSAM processors so far, but that advantage is largely gone for processors not supported by ASF3, because START is not an adequate replacement.

 

One area which does appear to have improved over ASF3 is that frequently-used very short functions such as _gpio_set_level are now declared 'inline', which should avoid the need to use our own fast I/O functions in some cases; although ASF4 possibly goes too far the other way by declaring some rather complex functions 'inline'.

Last Edited: Sun. Sep 2, 2018 - 12:32 PM

I have just wasted another hour trying to get this to work. I have found:

 

- If in the clock configurator I set DPLL0 source to XOSC32K then I can configure it, and code generation works. Of course the output clock frequency is completely wrong.

- If I then change DPLL0 source and reference clock (btw why do I have to select the input clock in two places?) to XOSC0 then 3 things happen:

(a) I get a yellow warning triangle, and if I hover over it it reports "Input frequency (12MHz) is above the limit of 3.2MHz". This is regardless of the setting of Clock Divider.

(b) It reports the output frequency as 480MHz. Again, it is ignoring the clock divider.

(c) I get the Server Error if I try to generate the project.

 

I finally found a way that lets me generate code with a 120MHz CPU clock:

 

1. Feed XOSC0 into GCLK1. Set GCLK1 divisor to 4.

2. Set DPLL0 source and reference to GCLK1.

3. Set GCLK0 source to DPLL0 with divisor 1.

4. MCU and peripheral clocks mostly default to GCLK0 already.

 

I haven't tested the project to see if this works yet. This is unnecessarily convoluted. The chip allows DPLL0 to be fed directly from XOSC0 if the clock divider is set to give a division of 4 or greater. But the START clock configurator is broken in multiple ways when I try to configure that.

 

EDIT: I just tested it and it doesn't work. It's stuck waiting for DPLL0 lock. I suspect it's trying to initialise DPLL0 before GCLK1, or something like that.

 

EDIT 2: I got it to work by hand editing the code to set DPLL0 clock source to XOSC0 (in 2 places) just like I wanted to in the clock configurator but couldn't.
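For anyone hitting the same wall, the hand edit amounts to something like this (the register and macro names are taken from my copy of the SAME5x CMSIS headers and should be treated as assumptions; check them against your device pack):

```c
/* XOSC0 (12MHz) -> internal /4 prescaler -> DPLL0 x40 -> GCLK0 = 120MHz.
   Assumes XOSC0 is already configured and running. */
OSCCTRL->Dpll[0].DPLLRATIO.reg = OSCCTRL_DPLLRATIO_LDR(39)      /* x40 */
                               | OSCCTRL_DPLLRATIO_LDRFRAC(0);
OSCCTRL->Dpll[0].DPLLCTRLB.reg = OSCCTRL_DPLLCTRLB_REFCLK_XOSC0
                               | OSCCTRL_DPLLCTRLB_DIV(1);      /* /(2*(1+1)) = /4 */
OSCCTRL->Dpll[0].DPLLCTRLA.reg = OSCCTRL_DPLLCTRLA_ENABLE;

/* Wait for the DPLL to lock and its output to become ready */
while (!(OSCCTRL->Dpll[0].DPLLSTATUS.bit.LOCK &&
         OSCCTRL->Dpll[0].DPLLSTATUS.bit.CLKRDY)) { }

/* Feed DPLL0 into generic clock generator 0 (the CPU clock) */
GCLK->GENCTRL[0].reg = GCLK_GENCTRL_SRC_DPLL0
                     | GCLK_GENCTRL_DIV(1)
                     | GCLK_GENCTRL_GENEN;
```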

Last Edited: Sun. Sep 2, 2018 - 12:08 PM

So having finally managed to fool Atmel START into letting me generate a 120MHz clock by setting DPLL0 to take its input from XOSC0 behind its back, I hit some more Atmel Studio/START bugs:

 

- When I try to upload the generated .elf file to the ATSAME51 chip using Atmel ICE, I get a consistent Verify Failure. A bit of searching revealed that other users have the same problem. The workaround is to upload a .hex or .bin file generated from the .elf file instead. Unfortunately the dialog defaults back to the .elf file, so every time I have to change it to select the .hex or .bin file instead.

 

- The ASF 4 documentation has this:

usart_async_set_baud_rate

Set USART baud rate.

int32_t usart_async_set_baud_rate(
    struct usart_async_descriptor *const descr,
    const uint32_t baud_rate
)

Parameters

descr

Type: struct usart_async_descriptor Struct *const

A USART descriptor which is used to communicate via USART

baud_rate

Type: const uint32_t

A baud rate to set

So I write this code:

usart_async_set_baud_rate(&USART_0, 57600);

When I send output to the SERCOM device, I find each bit is a few tens of nanoseconds long instead of the expected 17.3us. Tracing the call reveals that the baud rate passed in isn't converted to a baud rate divisor as it should be; instead it is written directly to the baud rate divisor register of the SERCOM. This function has obviously NEVER been tested.

 

IMO Atmel START + ASF4 for the ATSAM51 is alpha test software at best. This is totally unacceptable given that the only alternative I have is to write all the drivers from scratch.

 

I regret choosing an ATSAME51-series processor for this project.


#3. The clock configurator in Atmel Studio is broken

While I share your overall impression of Start, I didn't have any trouble getting it to generate a main clock of 120MHz.

 

The datasheet is pretty clear that the input to the PLL is limited to 3.2MHz.

And that GCLK0 has to be used to drive the CPU.

So I turned on XOSC0, fed it into GCLK1, and divided by 6 to get 2MHz output.

Then I configured GCLK0 to use GCLK1 as an input, and multiplied by 60.

No complaints from Start, and the code downloaded fine.  I don't actually have a D51 system that I'm willing to burn such "maybe" code into, but I don't see any reason that it shouldn't work.

 

(I *really* don't understand why microcontroller manufacturers ALL (it's not just Microchip/Atmel) seem to routinely make clock configuration SO complicated.  Most of the time I want to say "I have THIS crystal connected here, and I want these things to run at THIS clock rate".  But NOOOO...  dozens of options to configure...)

 


eschertech wrote:
If Microchip values the open source community

Do they?

Top Tips:

  1. How to properly post source code - see: https://www.avrfreaks.net/comment... - also how to properly include images/pictures
  2. "Garbage" characters on a serial terminal are (almost?) invariably due to wrong baud rate - see: https://learn.sparkfun.com/tutorials/serial-communication
  3. Wrong baud rate is usually due to not running at the speed you thought; check by blinking a LED to see if you get the speed you expected
  4. Difference between a crystal, and a crystal oscillator - see: https://www.avrfreaks.net/comment...
  5. When your question is resolved, mark the solution: https://www.avrfreaks.net/comment...
  6. Beginner's "Getting Started" tips: https://www.avrfreaks.net/comment...

Hi westfw, thanks for replying.

 

westfw wrote:

The datasheet is pretty clear that the input to the PLL is limited to 3.2MHz.

And that GCLK0 has to be used to drive the CPU.

So I turned on XOSC0, fed it into GCLK1, and divided by 6 to get 2MHz output.

Then I configured GCLK0 to use GCLK1 as an input, and multiplied by 60.

No complaints from Start, and the code downloaded fine.  I don't actually have a D51 system that I'm willing to burn such "maybe" code into, but I don't see any reason that it shouldn't work.

 

I think we must be using different series processors - which one are you using? The SAME51N19 processor I am using does not appear to provide any facility to do clock multiplication in a GCLK, only division. The DPLLs do indeed have a restriction of maximum 3.2MHz input, but when using XOSC0 or XOSC1 as an input, the input goes through a divider first (bits 10:0 in register DPLL Control B). So I set that to 1 which gives a division ratio of 4, hence 3MHz input. Then I set the DPLL to multiply by 40 (bits 12:0 of register DPLLRATIO = 39) to get 120MHz.

 

westfw wrote:

(I *really* don't understand why microcontroller manufacturers ALL (it's not just Microchip/Atmel) seem to routinely make clock configuration SO complicated.  Most of the time I want to say "I have THIS crystal connected here, and I want these things to run at THIS clock rate".  But NOOOO...  dozens of options to configure...)

 

Although I hate being forced to use code generators such as Atmel START, if you are going to use a code generator for configuring clocks then I agree, it should be more automatic.

 

It occurred to me that the SAME example projects ought to have the clock generation configured properly. So I loaded a couple of the SAME54 eval board example projects. Neither runs the processor at anything like its full speed. The LED blink example runs the CPU at 12MHz, OSC2->GCLK0->MCLK. The TCP/IP server example runs the CPU at 48MHz, ignoring the crystal on XOSC1 and using the internal 48MHz oscillator instead. So maybe the people who generated the example projects had the same problem with START that I did.


eschertech wrote:
Neither runs the processor at anything like its full speed. 

At full speed, do you need wait states?

 

Perhaps START won't let you have high speeds unless you set the wait states first?

 

I know that's the kind of thing one would like to expect a tool like START to do for you (or, at least, give a clear warning) - but ISTR posts here that seemed to suggest it didn't ...

 



>> Then I configured GLCK0 to use the GCLK1 as an input, and multiplied by 60.

 The SAME51N19 processor I am using does not appear to provide any facility to do clock multiplication in a GCLK

Sorry, I was unclear.  I set DPLL with the "loop divider ratio" set to 59, which causes the input to be multiplied by 60, and set GCLK0 to use the DPLL with no additional division.

Same thing you did, I think, except with 2MHz frequency instead of 3MHz.

Looks like: [screenshot of the START clock configuration, not reproduced here]

 


@westfw, that's what I did eventually in START to get it to set the GCLK0 frequency to 120MHz; except I used 3MHz out from GCLK1. But the code generated by START didn't work. The debugger showed that it was waiting forever for the DPLL to lock. My guess was that it was initialising GCLK1 and DPLL0 in the wrong order, but I didn't investigate to confirm that. Instead, I got it working by hand-editing the code to feed XOSC0 into DPLL0 using an input divisor of 4, which is what I was trying to get START to do in the first place.

 

@awneil, good point, I hadn't checked the number of wait states. The ASF3 boilerplate code sets the number of wait states according to CPU speed, so I assumed that START would take care of that. The code is running, but I'll check that it is setting the correct number of wait states. START does allow me to use a 120MHz clock if I feed DPLL0 from GCLK1 instead of directly from XOSC0 using the internal input divider - as @westfw has done.


eschertech wrote:
The code is running

probably means that your wait states are OK, then.

 

it was just a thought.

 

Last Edited: Mon. Sep 3, 2018 - 01:17 PM

The code generated by START does not adjust the wait states according to clock frequency, but it doesn't need to. The datasheet says "Automatic wait state generation can be used by setting the Auto Wait State bit in the Control A register (NVMCTRL.CTRLA.AUTOWS)". The description of that register indicates that the AUTOWS bit is set by default.
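In code terms that amounts to a single bit, which is set out of reset anyway; making it explicit just documents the dependency (the field name is taken from the SAME5x CMSIS headers, so treat it as an assumption):

```c
/* Let the NVM controller insert flash wait states automatically.
   AUTOWS defaults to 1 after reset on the SAME5x. */
NVMCTRL->CTRLA.bit.AUTOWS = 1;
```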


On samd51, don’t forget to enable the cache if you want it to go fast...
https://github.com/adafruit/ArduinoCore-samd/issues/37
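Something like this, I believe (the CMCC register and bit names are from the CMSIS headers; treat them as assumptions and check the errata):

```c
/* Enable the Cortex-M Cache Controller if it isn't already running.
   The cache must be disabled while CTRL.CEN is written. */
if (!CMCC->SR.bit.CSTS) {
    CMCC->CTRL.bit.CEN = 1;
}
```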


@westfw, thanks for the tip!

 

Have you any idea how I can get START to pull in the TSENS (internal temperature sensor) driver and the reset controller driver? Neither of these appears in the list of available software components. The SUPC driver isn't listed either. I also looked in the ADC config for internal temperature sensor options, but they are not listed there either.


I am kind of happy to hear it is not only me. Coming from the Tiva C series, which is awesome but expensive, I used some STM32s, but I didn't like the mess of old and new libraries, so I came to the ATSAMs. That was going nicely with the SAMD21 and ASF3. Since I need CAN for our next product we decided on the SAME51, but didn't realise that ASF4 is something very different.

 

As far as I can see there is no option to use the TCC for PWM generation with more than one output and special settings, so I have to write my own drivers, which in my opinion is unnecessary and annoying. There are surely more people out there who want to control an H-bridge or similar, and I am not good at this!
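What I am trying to get is roughly this bare-metal sketch (untested, and the register names are taken from the SAME5x CMSIS headers, so they may not be exactly right; clock enabling and pin multiplexing are omitted):

```c
/* TCC0 in normal PWM mode with two independent duty cycles,
   e.g. the two sides of an H-bridge on WO[0] and WO[1] */
TCC0->CTRLA.bit.ENABLE = 0;
while (TCC0->SYNCBUSY.bit.ENABLE) { }

TCC0->WAVE.reg  = TCC_WAVE_WAVEGEN_NPWM; /* normal PWM */
TCC0->PER.reg   = 999;                   /* period = 1000 counts */
TCC0->CC[0].reg = 250;                   /* 25% duty on WO[0] */
TCC0->CC[1].reg = 750;                   /* 75% duty on WO[1] */

TCC0->CTRLA.bit.ENABLE = 1;
while (TCC0->SYNCBUSY.bit.ENABLE) { }
```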

 

@eschertech - do you have some open source drivers you could provide? I am not getting the TCC to work at all...