Atmel ICE automatically stepping over while debugging

12 posts / 0 new
#1

Dear freaks

The Atmel ICE debugging was working fine with my previous setup. Recently I did a fresh install of Windows 10 and Atmel Studio after my HDD crashed. Now I was testing a blink program, and the Atmel ICE is behaving weirdly. The code is as follows:

/*
 * Blink.c
 *
 * Created: 19-01-2018 09:34:21
 * Author : Sharanya
 */

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    /* Replace with your application code */
    DDRB |= (1 << 5);

    while (1)
    {
        PORTB |= (1 << 5);
        //_delay_ms(1000);
        PORTB &= ~(1 << 5);
        //_delay_ms(1000);
    }
}

It is starting from

DDRB |= (1<<5);

Now, after one step over, it is going to

PORTB |= (1<<5);

Then, on the next step over, as expected, to

PORTB &= ~(1<<5);

But now, here the problem arises. When I press another step over, it comes directly to

PORTB &= ~(1<<5);

ignoring the

PORTB |= (1<<5);

Please help me. Why is it happening? The Studio version is 7.0.1645. The Atmel ICE firmware was upgraded when I plugged it in. I also tried with XOR, like below:

PORTB ^= (1<<5);

But after stepping over from there, the LED toggles continuously and the yellow line marker on the left side disappears!!

Please help......
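[Editor's note] One relevant detail: on AVR, the XOR above is a read-modify-write, so it usually compiles to several opcodes, while the single-bit |= and &= can each become one SBI/CBI instruction. More opcodes per C line gives the source-level stepper more places to land. A host-side sketch (fake_portb is only a stand-in for PORTB, so this runs on a PC; the opcode remarks in the comments are the usual avr-gcc output, not guaranteed):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for PORTB so the sketch runs on a host machine. */
    volatile uint8_t fake_portb = 0x00;

    int main(void)
    {
        fake_portb ^= (1u << 5);         /* on AVR: read, flip, write back (in/eor/out) */
        printf("0x%02X\n", fake_portb);  /* bit 5 now set */
        fake_portb ^= (1u << 5);         /* same read-modify-write again */
        printf("0x%02X\n", fake_portb);  /* bit 5 clear again */
        return 0;
    }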

#2

#3

Switch to the mixed C and Asm view and step the opcodes to see what's happening. 

#4

Strange... In the disassembly, it is doing as expected. I am giving screenshots step by step.

For

DDRB |= (1<<5);

For

PORTB |= (1<<5);

For

PORTB &= ~(1<<5);

Now, here it is a two-step process in the disassembly, but a single click in the C view.

Why is it happening???
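[Editor's note] A plausible reading of those screenshots: at -O1 the whole loop body typically shrinks to about three opcodes, so one C-level "step over" can cover more than one opcode, and whichever opcode the stepper stops on decides which C line gets highlighted. A host-side sketch (the loop is bounded here so it terminates, and fake_portb stands in for PORTB; the opcode sequence in the comment is typical avr-gcc -O1 output, not guaranteed):

    #include <stdint.h>
    #include <stdio.h>

    /* On AVR at -O1 the original body tends to become just:
     *     sbi PORTB,5    ; PORTB |= (1<<5)
     *     cbi PORTB,5    ; PORTB &= ~(1<<5)
     *     rjmp back      ; while (1)
     * so the opcode view steps cleanly while the C view can appear to jump. */
    volatile uint8_t fake_portb = 0x00;

    int main(void)
    {
        for (int i = 0; i < 2; i++) {           /* bounded stand-in for while(1) */
            fake_portb |= (1u << 5);            /* on AVR: single SBI opcode */
            fake_portb &= (uint8_t)~(1u << 5);  /* on AVR: single CBI opcode */
        }
        printf("0x%02X\n", fake_portb);
        return 0;
    }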

#5

Just recently I discovered a "-Og" option for GCC.

       -Og Optimize debugging experience.  -Og enables optimizations that do not
           interfere with debugging. It should be the optimization level of choice
           for the standard edit-compile-debug cycle, offering a reasonable level
           of optimization while maintaining fast compilation and a good debugging
           experience.

But I haven't tried it myself...

Paul van der Hoeven.
Bunch of old projects with AVR's:
http://www.hoevendesign.com
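[Editor's note] A small host-side illustration of the difference the quoted man page describes (variable names are made up): at -O0 each statement below gets its own code and "step over" visits every line; at -O1 and above the whole chain can be folded to a single constant, so some lines never get an address to stop on. -Og aims for the middle ground.

    #include <stdio.h>

    int main(void)
    {
        int x = 5;
        x = x * 2;
        x = x + 3;          /* at -O1 this whole chain can collapse to "13" */
        printf("%d\n", x);  /* same result at every -O level; only the
                               stepping experience differs */
        return 0;
    }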

#6

Okk... I tried with both -O0 and -Og. In both cases it does what is expected. But previously, with my old setup, I always used -O1 and it succeeded all the time. Why does it behave like this this time? Any idea?

#7

Also, I would like to know one thing: when debugging an application while it is running in real time (say I set a breakpoint for when a counter reaches 100), the system can behave unexpectedly with -O0/-Og. How do I overcome that? Any idea? I previously always used -O1 with success... but this has made me hopeless!!
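[Editor's note] One common way to keep such a breakpoint usable at any optimization level is to make the counter volatile, so the compiler cannot cache it in a register or fold its accesses away, and the variable stays inspectable in the debugger. A host-runnable sketch (the hardware/ISR increment is replaced by a plain loop increment; names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* 'volatile' forces every access to go through memory, so the variable
       and its updates remain visible to the debugger at -O1 and above. */
    volatile uint8_t counter = 0;

    int main(void)
    {
        while (counter < 100) {
            counter++;          /* stand-in for an ISR or hardware update */
        }
        /* A breakpoint here sees counter == 100 at any -O level. */
        printf("%d\n", counter);
        return 0;
    }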

#8

SHARANYADAS wrote:
the system can behave unexpected with O0/Og. How to overcome that?

Correct the expectation!

 

See #2 & #3

#9

SHARANYADAS wrote:
Why it is happening???
Not sure what you are asking.

In the last image of post#4 the cursor is pointing to the assembly for the PORTB |= (1 << 5); instruction (to be executed next), which is exactly what the while loop is set up to do.

So what is your question?

David (aka frog_jr)

#10

@awneil

Sorry for not expressing my question properly! I read the contents of the link you provided (only your part, though). Basically, what I tried to say is that with -O1/-O2/-O3/-Os the system can behave differently compared to -O0/-Og. So my question is: which optimization level is correct for the final hex file running in real time, with which I can still debug properly? According to this link,

Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience

Also, reading what you wrote, I think I can use -Og for both debugging and the final hex file. Correct me if I am wrong!!

Last Edited: Fri. Jan 19, 2018 - 07:47 PM
#11

SHARANYADAS wrote:
with O1/O2/O3/Os,the system can behave different as compared O0/Og.

If that happens, then it is a fault in your code - you are making invalid assumptions about what the compiler does.

 

debug properly?

One can debug "properly" at any optimisation - but one has to understand that the generated machine code is not simply a 1:1 line-by-line transliteration of the source text.
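[Editor's note] A host-side illustration of that point (names are made up): at -O1 and above a loop-invariant computation is typically hoisted out of the loop, so single-stepping never lands on that line inside the loop even though the source suggests it runs every pass. The machine code is correct; it just isn't a line-by-line transliteration.

    #include <stdio.h>

    int main(void)
    {
        int base = 21;
        int total = 0;
        for (int i = 0; i < 3; i++) {
            int doubled = base * 2;   /* loop-invariant: may be computed once,
                                         before the loop, at -O1 and above */
            total += doubled;
        }
        printf("%d\n", total);
        return 0;
    }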

 

That's what the link was all about.

 

I think I can use Og for both debugging and final hex file.

Yes - that is the whole point.

 

#12

Again I learned something new. This is what I love about this site.

Thanks a lot, everybody, for your patience & feedback!!