IAR project to AVR Studio 4


Hi there...

I have made a project in IAR 5.11B for AVR and I am
debugging it from AVR Studio 4.14.589 with a JTAGICE mkII.
Switching between IAR and AVR Studio is very annoying, so I
want to build and debug the project from AVR Studio only.
How can I do that with the same project?
Are there any changes to the project that have to be
made?

PS: when I load the project into AVR Studio via IDE/Debug/*.aps, the Build menu is inactive.

Thanks...


AFAIK there is no IAR plug-in for Studio. So you must build in IAR and swap screens to Studio.

Studio should detect that your object file is out of date and update the AVR via JTAG accordingly. You just need to build in IAR with UBROF-8 output for Studio.

I presume that IAR would prefer you to use their own proprietary objects and debugger (which probably communicates via JTAG). So you would never need Studio at all.

I am not an IAR user ( I have a modest will to live ).
You will probably hear from a real IAR user who has some proper experience.

David.


Hi,

I am using IAR Workbench 5.51 for an Atmel ATmega168 target. But, as I discovered, the IDE doesn't support LIVE watching of variables, which means I have to pause execution to view the latest values of variables.
a) Is there any workaround to overcome this?
b) Are there any IAR plugins for AVR Studio where I can debug with live watching of variables?

Thanks,
Madhu


But Studio doesn't have live watching of vars either - the processor must stop, then the SRAM contents are retrieved over OCD and used to update the display. This is quite a long-winded process (even approaching seconds for large AVRs), so you would not want it happening after the execution of every opcode. Studio does have an "auto-step" function to entertain the kids, which basically does run a bit, stop, update the display, run a bit, and so on. It is very, very slow and almost totally pointless.

If I were you I'd just run to breakpoints at salient points where you'd like to observe the state.


Hello Clawson,
Thanks for that. Is it so slow because the emulator uses debugWIRE instead of ISP lines?

I thought the other thread discussion was more relevant. I will avoid duplication from here on.

Thanks,
Madhu.


Quote:

Thanks for that. Is it so slow because the emulator uses debugWIRE instead of ISP lines?

Atmel are involved - say no more.

(but yes, there must be a bandwidth bottleneck somewhere - remember that JTAG/dW only run at about 1 MHz. Also the debugger is on a full-speed (12 Mbit/s) USB link).

The other thread is not more relevant - this is an AS4, not an AS5, issue, and it is discussed in this thread. But it's going to be the same in either, so where it's discussed is moot.

The key thing is not to split the discussion in two places as it dilutes it and pisses off people who reply in one place only to find their same point already made in the other.


Quote:

Is it so slow because the emulator uses debugWIRE instead of ISP lines?

It (the auto-step) is probably so slow because it needs to

1) Place a breakpoint at the next instruction (which means some communication between Studio and the chip, regardless of what interface is used)
2) Issue a "run" command. More communication.
3) Eventually (well, rather soon, but anyway...) receive a "stopped at breakpoint" message.
4) Read out everything from the AVR that might have changed (could be a lot of communication).
5) Remove the breakpoint.
6) Repeat from 1.

This description is sketchy. I do not know the technical details of the debugging protocols (they are in fact not released by Atmel, but proprietary). But this is how things more or less must go.

We often tell people here who are trying to use the simulator to "run the code on the real chip" instead.

In your case you want to see the chip running at full speed. So don't run the code with a debugger attached. It will slow things down, meaning "not at full speed".

If you can live with letting sections run at full speed and then stop to look at variables etc, then use a few well-placed breakpoints.

As of January 15, 2018, Site fix-up work has begun! Now do your part and report any bugs or deficiencies here

No guarantees, but if we don't report problems they won't get much of  a chance to be fixed! Details/discussions at link given just above.

 

"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]


What is the merit in watching variables in "real time" anyway? Either they change so fast that any display update will be a blur, or they are updated irregularly, so why not just catch that with a breakpoint? The human mind takes about 100 ms to recognise what it is seeing anyway.


As you say, fetching the watched variable after every opcode, or in "real time", is not really useful, as we cannot watch variables changing faster than 100 ms. I am new to this debugger. However, the previous debuggers (for NEC and Freescale micros) that I have used supported fetching data at something like every 1 s down to every 100 ms. I was able to watch variables change due to external events from CAN/LIN signals or due to micro hardware inputs. The variables are slow-changing, but it saved the effort of putting breakpoints at multiple places to watch them. For example, if I have written an ADC driver, it is convenient to turn an external pot to its extreme positions and just watch the variable value change to verify correct conversion. Now I will have to break execution to watch them. It's a kind of informal unit testing that helped fix bugs quickly.

Anyway, thank you guys for the clarifications. I guess I've no choice but to work with this limitation.

Thanks,
Madhu.


Quote:

I was able to watch variables change due to external event from CAN/LIN signals or due to micro hardware inputs.

But CAN/LIN surely don't operate at 10 Hz? What is the merit in getting a display update every 100 ms for a process that is operating at hundreds of kHz or MHz speeds? If you really want to watch such a transaction, get a deep-buffer logic analyser, capture a few seconds of traffic, then keep zooming in until you see individual bit transitions (there could be thousands or millions!)


I would agree. A logic analyser will capture your traffic far better.

Another alternative is a simple debug 'channel', e.g. output diagnostic info over SPI or a UART.

You compile your project with DEBUG macros that expand as appropriate. This will be less invasive than JTAG transferring every detail.

David.


Yes, CAN/LIN communication operates at rates from kbps up to 1 Mbps, and it is not possible to perform all kinds of testing, especially with signals changing at that rate. However, at the development stage of an ECU, I would simulate the other nodes using tools like CANoe. So the rate of change of the signals is under my control, as signals are changed manually using graphical panels.


I can sort of see the point ipmadhu is making. Simplified scenario: a system reads an analog signal that changes slowly - say I have a pot attached. At the same time there are other parts of the system that do not like being paused - say I'm running several software-bitbanged PWM outputs. I want to verify that the ADC is doing what I want, so I put a "running watch" on the variable holding the result of the AD conversion. I now turn the pot, wait a second, and see that the AD conversion behaves. Done!

Other uses would be to test/monitor a running system over lengths of time etc.

The usability depends in part on whether the readout (e.g. over a JTAG chain) will in any way disturb the running system, or whether the JTAG interface has its own "port" to RAM, registers, etc.

And if the debug protocol were open (ahem... are you listening, Atmel?) one could use this to write software for actually doing automated tests of firmware running on The Real Thing.

[Ducking, since this will likely be a flame-target]



As I say Studio has "Auto-Step". I just built and simulated this:

#include <avr/io.h>

int main(void) {
	DDRB = 0xFF;
	while(1) {
		PORTB ^= 0xFF;
	}
}

using auto-step and I'd say the "blobs" in the PORTB I/O view were flickering at about 10Hz.

Not sure if "auto step" works with an OCD rather than Simulator though.


Quote:

As I say Studio has "Auto-Step".

Yes, but I've always interpreted that as just a macro for "repeated breakpoints, stop-and-go". So it would really disturb a running app.

I interpreted ipmadhu's descriptions of other "live view systems" as if they were some kind of "boundary scan", but quite deep into the chip (e.g. memory), and not disturbing any real-time aspect of a running app.

My tongue-in-cheek remark on Atmel's closed debug protocols was a comment in the margins. Even if they opened up their protocols, it would not work this way, I suppose, as it seems to me that the OCD hardware in AVRs is not implemented in the way I envision above.



Quote:

Yes, but I've always interpreted that as just a macro for "repeated breakpoints, stop-and-go". So it would really disturb a running app.

But that's how this kind of function always works. I don't think JTAG on any chip is smart enough to interrogate registers/SFRs/RAM with the CPU not halted.