DS1302 not responding

#1

OK, it's time for me to eat ash.

 

I just can't get this darn chip to talk to me by bit-bashing.

 

I have two routines, one sends a byte, low bit first, one receives a byte the same way.

 

To start the clock you need to send two pairs of bytes:

 

0x8E, 0x00 to write enable the registers

 

0x80, 0x00 sets seconds to zero and starts the clock

 

You can then keep sending 0x81 and read back to see if the clock has started, as the reply byte should increment by one every second, rolling over after 59.
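
In C terms the start-up boils down to this - just a sketch of the intent (my real code is assembler, and ds1302_write()/ds1302_read() are hypothetical bit-bang helpers):

#include <stdint.h>

extern void    ds1302_write(uint8_t cmd, uint8_t data);   /* hypothetical bit-bang helpers, */
extern uint8_t ds1302_read(uint8_t cmd);                  /* LSB first, CE/RST high for the pair */

void ds1302_start(void)
{
    ds1302_write(0x8E, 0x00);    /* control register: clear WP, write-enable the chip */
    ds1302_write(0x80, 0x00);    /* seconds = 0, CH bit cleared, oscillator starts */
}

uint8_t ds1302_seconds(void)
{
    return ds1302_read(0x81);    /* BCD seconds, should now tick once a second */
}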

This is the basic structure of the write - really just to prove I am setting RST/CE then clearing it after the write:

 

;RTC Macros

.macro	RTCRSTH				; raise RST/CE, then settle
		sbi	PORTB,RTCRST
		rcall	FOURTEEN_NOPS
.endmacro

.macro	RTCRSTL				; drop RST/CE, then settle
		cbi	PORTB,RTCRST
		rcall	FOURTEEN_NOPS
.endmacro

.macro	RTC_WRITE_TWO_BYTES		; @0 = command byte, @1 = data byte

	RTCRSTH

	ldi		temp,@0
	rcall	RTC_WRITE_BYTE
	rcall	FOURTEEN_NOPS
	ldi		temp,@1
	rcall	RTC_WRITE_BYTE

	RTCRSTL

.endmacro

Now this is what the timing diagram looks like:

[timing diagram image]

And this is what it looks like on the scope, bearing in mind bits are sent low bit first:

[scope trace image]

To cut to the chase, a receive looks exactly the same, except the data line is set as an input after the command byte has been written. Putting the scope on during a read shows no signal coming back from the module.
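
For reference, the read side is meant to work roughly like this in C - a sketch only, not my assembler, and the pin names and delays are placeholders:

#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#include <avr/io.h>
#include <util/delay.h>
#include <stdint.h>

#define RTCCLK  0          /* placeholder pin numbers on PORTB */
#define RTCDAT  1

/* read one byte, LSB first; assumes the command byte has just been sent
   and the clock was left low, so bit 0 is already sitting on DAT */
static uint8_t rtc_read_byte(void)
{
    uint8_t i, value = 0;

    DDRB  &= ~(1 << RTCDAT);       /* DAT becomes an input, the DS1302 drives it */
    PORTB &= ~(1 << RTCDAT);       /* no pull-up */

    for (i = 0; i < 8; i++) {
        value >>= 1;
        if (PINB & (1 << RTCDAT))  /* sample the current bit */
            value |= 0x80;
        PORTB |=  (1 << RTCCLK);   /* clock high... */
        _delay_us(1);
        PORTB &= ~(1 << RTCCLK);   /* ...and low: next bit appears on the falling edge */
        _delay_us(1);
    }
    return value;
}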

 

I don't think it is a coding issue as (1) I can see all the correct signals going to the module, and (2) since I'm using the scope to check, it doesn't matter if my reading code is duff - I would still see the return byte on the trace.

 

 

I have exhausted every avenue I can think of and am left with three possibilities: (1) the chip is duff, (2) there's a glitch between the two writes, which appears to be linked to setting the data line as an output in the data direction register, or (3) something I'm too stupid to think of.

It's me again...

Last Edited: Mon. Apr 11, 2016 - 09:27 AM
#2

The more I look at that trace the more that glitch worries me.

 

Rather than taking out the line:

 

sbi      DDRB,RTCDAT

 

 

I'm adding a short delay after the instruction to see what happens.

 

Although it's technically redundant (and there is one at the start of the data send as well that doesn't cause a glitch), there has to be one between the write and the read anyway.

 

 

...

 

Reading between the lines of the Atmel datasheet, it looks very possible that when the DDR is written the port is briefly tri-stated - but on that trace the glitch lasts for about 8 clock cycles.

It's me again...

Last Edited: Mon. Apr 11, 2016 - 09:39 AM
#3

The glitch shouldn't be an issue if you meet the required setup and hold times.

#4

You may be a hardware person. I find scope traces difficult to follow, especially when they overlap.

Since your SDIO interface is regular 8-bit clocked data, a logic analyser will show you exactly what is happening, and translate the data for you at the correct clock edge.

Anyway, the interface is easy to bit-bang. You just read the timing diagram, code the write and read functions.
Then test by writing to a static register and reading the known contents.

David.

#5

Sorry - I overlapped them so I could be sure that the clock was going high after the data was on the output pin.

 

 

david.prentice wrote:
Then test by writing to a static register and reading the known contents.

 

I'll start by writing to the write-protect register; seven of its bits are fixed, but I can see if I can set and clear the WP bit.
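
Something like this, in C shorthand, is the check I have in mind (sketch only, same hypothetical ds1302_write()/ds1302_read() helpers as before):

#include <stdint.h>

extern void    ds1302_write(uint8_t cmd, uint8_t data);
extern uint8_t ds1302_read(uint8_t cmd);

/* round-trip check on the control register: 0x8E = write it, 0x8F = read it back */
static uint8_t ds1302_wp_test(void)
{
    uint8_t ok = 1;

    ds1302_write(0x8E, 0x80);                /* set the WP bit */
    ok &= (ds1302_read(0x8F) == 0x80);

    ds1302_write(0x8E, 0x00);                /* clear it again */
    ok &= (ds1302_read(0x8F) == 0x00);

    return ok;                               /* 1 if both reads came back as written */
}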

It's me again...

Last Edited: Mon. Apr 11, 2016 - 11:49 AM
#6

Wonder how much easier it would be to simply get a chip with an I2C or SPI interface? The DS1305 with SPI, for example, looks like it might fit the bill - though while you gain about 60 bytes of NVRAM and two time-of-day alarms (and SPI), it cannot match the DS1302's 200nA timekeeping current.

#7

ARGH!

 

I just clicked 'quote' and this darn editor deleted about half a page of text.

 

 

In short - the first byte is being sent correctly, but the second byte, 0xFF, gets some of its bits pulled low. It appears the RTC chip may be interpreting the write command as a read and the two outputs are 'fighting' for the data line.

 

By triggering on the Reset line I can be sure that no 'phantom' bits are making my write command look like anything other than 0x8E (i.e. the first bit sent is low, therefore it must be a write).
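
For anyone following along, the command byte layout from the datasheet is: bit 7 always 1, bit 6 selects RAM (1) or clock (0), bits 5-1 the register address, bit 0 read (1) / write (0) - and it goes out LSB first. As a throwaway C helper (names are mine):

#include <stdint.h>

/* build a DS1302 command byte */
static uint8_t ds1302_cmd(uint8_t ram, uint8_t addr, uint8_t rd)
{
    return 0x80 | (ram ? 0x40 : 0) | ((addr & 0x1F) << 1) | (rd ? 0x01 : 0);
}

/* ds1302_cmd(0, 7, 0) == 0x8E  (write control register)
   ds1302_cmd(0, 0, 1) == 0x81  (read seconds)           */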

 

The second pair of bytes, a command followed by a read, creates an even worse mess; it looks like the chip is trying to send data, but it is very badly shaped.

 

If I unplug the RTC, the outputs from the AVR return to what I would expect - 0x8E, 0xFF, 0x8F, then nothing.

 

FWIW I am using lots of nops to clock the data at well under 500kHz.

 

My conclusion is, provisionally, that I have fried the RTC.

 

 

As for using a different chip, well, I haven't got SPI free and I have used the I2C pins for something else I would prefer not to move.  Plus, others get on fine with this chip so I OUGHT to be able to get it running fine.

It's me again...

#8

Looks like I finally got this sorted.

 

I don't know if it is bad luck or the fact I'm running at 16MHz, but basically I had to extend the pulse length to well below 0.5us. It seems to need extra time to settle between sent bytes, especially when switching from write to read.

 

In the end, instead of just going low-wait-high-wait for each clock pulse, I now go low-wait-high-low-wait, and suddenly the data started to flow back! None of the example programs I've looked at do this.

 

Aside from it possibly being a simple issue of the chip not working as fast as it is supposed to, the only material change is that the clock now goes low before the data pin is changed to an input between a write and a read.
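
In C shorthand, the per-byte write I have ended up with looks roughly like this - a sketch only, my real code is assembler with NOP chains instead of _delay_us(), the pin names are placeholders, and I have kept a token delay after the rising edge to respect the datasheet's minimum CLK-high time:

#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#include <avr/io.h>
#include <util/delay.h>
#include <stdint.h>

#define RTCCLK  0              /* placeholder pin numbers on PORTB */
#define RTCDAT  1

/* send one byte LSB first: clock low - data out - wait - clock high -
   clock low again - wait, so CLK is already low before DAT is turned
   round for a following read */
static void rtc_write_byte(uint8_t value)
{
    uint8_t i;

    DDRB |= (1 << RTCDAT);                 /* AVR drives DAT */

    for (i = 0; i < 8; i++) {
        PORTB &= ~(1 << RTCCLK);           /* clock low */
        if (value & 0x01) PORTB |=  (1 << RTCDAT);
        else              PORTB &= ~(1 << RTCDAT);
        _delay_us(1);                      /* data set-up time */
        PORTB |= (1 << RTCCLK);            /* rising edge latches the bit */
        _delay_us(1);                      /* minimum CLK-high time */
        PORTB &= ~(1 << RTCCLK);           /* back low before the next bit */
        _delay_us(1);
        value >>= 1;
    }
}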

 

Focusing on writing the rest of the program now, but once done I will experiment on eroding away the NOPs to see what works.

It's me again...

Last Edited: Fri. Apr 15, 2016 - 10:44 AM
#9

It's easy to violate timing on slow chips with a 16MHz AVR. Thus my comment in #3.

#10

Please elaborate.    Which timing constraint?

 

The C file that I sent you would create "perfect" assembler instructions.   I did not find any timing issues with 16MHz Uno hardware.

 

Yes,  as Kartman has said.   You do get chips that work far faster than their data sheets suggest.   e.g. many TFT controllers write faster but tend to read slower.

 

David.

#11

CLK to data is 200ns. So you need to sprinkle the nop()s in the right places.
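
At 16MHz that's only four cycles (62.5ns each), e.g. something like this (the function name is just an example):

#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#include <util/delay.h>

/* >= 200ns CLK-to-data delay at 16MHz; four NOPs would do the same */
static inline void clk_to_data_delay(void)
{
    _delay_us(0.25);
}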

#12

David,

 

Your code uses delay - clock high - delay - clock low - delay for writes, but clock high - delay - clock low for reads.

 

This means the first clock-low that initiates the read comes before the data pin is set as an input, which is what I ended up using.

 

Originally I only used a low-high transition to write, then used a low-high to read. This gives exactly the same sequence as your code, except the position of the DDR change is different and I was reading after the clock returned to high.

 

In the one that didn't work, I was reading the DAT line once CLK had gone low and then high again. According to the datasheet the output is transferred to DAT on the falling edge of CLK and can be read after 200ns (as Kartman says), AND it remains valid when CLK returns high. My read was therefore at least 500ns after CLK went low, but I was getting straight zeroes.

 

I suspect my problem was that DAT is actually going HI-Z when CLK rises, but I may well be wrong.

 

 

 

It's me again...

#13

Latest discovery...

 

If you use 'burst mode' to write, you can't just send the eight bytes and then move straight on to another command. You have to toggle CE/RST low and high again first.

 

This may apply to reads, but I haven't tried it.

 

Thought I was going insane because I was writing the eight clock registers in burst mode, then writing a 'set' flag to RAM. On power-up, it kept turfing me into the 'set clock' routine again because the flag was reading 0x0C instead of 0xFF.
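
In C shorthand, what finally worked was effectively this (sketch; rtc_write_byte() is the helper sketched earlier, and rtc_ce_high()/rtc_ce_low() are hypothetical routines that just drive RST/CE with a settle delay):

#include <stdint.h>

extern void rtc_ce_high(void);            /* hypothetical: raise RST/CE, then settle */
extern void rtc_ce_low(void);             /* hypothetical: drop RST/CE, then settle */
extern void rtc_write_byte(uint8_t value);

/* burst-write the eight clock/control registers (command 0xBE), then drop CE
   before starting the separate RAM write for the 'set' flag */
static void rtc_burst_set_clock(const uint8_t regs[8])
{
    uint8_t i;

    rtc_ce_high();
    rtc_write_byte(0xBE);                 /* clock burst write command */
    for (i = 0; i < 8; i++)
        rtc_write_byte(regs[i]);
    rtc_ce_low();                         /* end the burst here... */

    rtc_ce_high();                        /* ...fresh CE/RST for the next command */
    rtc_write_byte(0xC0);                 /* RAM byte 0, write */
    rtc_write_byte(0xFF);                 /* 'clock has been set' flag */
    rtc_ce_low();
}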

It's me again...

#14

Think about it. You obviously need some form of stop and restart.

Regular SPI devices often have a burst mode. They need CS to become inactive to initiate some actions. And a fresh CS active to synchronise before the next sequence.

The CE has a similar function.

#15

david.prentice wrote:

Think about it. You obviously need some form of stop and restart.

Regular SPI devices often have a burst mode. They need CS to become inactive to initiate some actions. And a fresh CS active to synchronise before the next sequence.

The CE has a similar function.

 

Not obvious, as when fetching a fixed number of bytes there's no absolute reason why it couldn't go back into 'normal' mode afterwards. But to be fair, the datasheet does say this: "Additional SCLK cycles are ignored should they inadvertently occur."

 

Well working now.

 

Having lots of fun devising a routine to convert UTC to sidereal time using integer arithmetic. If I work in seconds and multiply the required constants by 2^13, then lose the bottom bits of the result, I should be accurate to within 3 seconds a century, which is good enough. Bit tedious writing 2x8-byte multiply routines :-)
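
The fixed-point idea is just this shape, by the way - illustrative C with a single rounded constant, not my actual numbers:

#include <stdint.h>

/* scale the sidereal/solar ratio by 2^13, multiply wide, then shift the
   fraction back out; the constant here is illustrative only */
#define RATIO_Q13  8214ULL        /* 1.00273790935 * 8192, rounded */

static uint32_t solar_to_sidereal(uint32_t solar_seconds)
{
    uint64_t t = (uint64_t)solar_seconds * RATIO_Q13;
    return (uint32_t)(t >> 13);   /* lose the bottom 13 bits */
}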

 

:-)

It's me again...

Last Edited: Sun. Apr 17, 2016 - 12:13 PM
#16

Bit tedious writing 2x8 byte multiply routines :-)

Huh?  AVR GCC has built-in support for 64bit integers.

 

EDIT:  Ah, an asm project.  Thanks for the reminder David ;-)

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

Last Edited: Sun. Apr 17, 2016 - 07:11 PM
#17

It is a mystery why anyone would choose to write a whole App in ASM.
If it is your particular ambition, you write all the multi-precision math operations as a one-off.
Then you copy-paste or link with this library of maths primitives.

Personally, I would just steal the proven ASM code from existing Atmel App Notes or public GCC source code.

Hey-ho, the DS1302 functions and Unix time_t functions are already in the public domain.

I can understand time spent on real-time Video or time-critical algorithms. Efficiency is noticeable.
I can not understand days spent on Unix algorithms.

David.

#18

?

 

Is there no point to cryptic crosswords?

 

If I cut and paste someone else's work, will I understand how it works better?

 

Would I find using GCC routines more satisfying or aesthetically pleasing?

 

Is the destination more important than the journey?

 

Is there more satisfaction in building a kit than making a model from scratch?

 

 

I know that these days many people just want something that works and consider plugging 'module A' into 'Device B' to be 'making it themselves', but if the answers to the questions above are 'yes' then I might as well just buy an off-the-shelf unit.

 

 

 

Plus... If I decide to add plotting little 62x128 star maps in real time, then yes, it is all going to be very time-critical.

It's me again...

#19

If you need ASM maths, see http://elm-chan.org/cc_e.html - under 'AVR assembler libraries', toward the bottom.

John Samperi

Ampertronics Pty. Ltd.

www.ampertronics.com.au

* Electronic Design * Custom Products * Contract Assembly

#20

 

The 'tedious' bit I was referring to only meant the business of having to set up an accumulator (well, two of them) and add in all the stuff to shift bytes back and forth. 2x2 is much lighter on registers!

It's me again...

#21

I know that these days many people just want something that works and consider plugging 'module A' into 'Device B' to be 'making it themselves', but if the answers to the questions above are 'yes' then I might as well just buy an off the shelf unit.

So, when are you opening up your own semiconductor foundry? ;-)

 

In all seriousness, I don't begrudge anyone's desire to learn, nor anyone's chosen learning process.  Indeed I applaud efforts to understand from 'first principles'.  I've been known to reinvent the wheel a few times myself.  However, since you're here asking for help, it does seem appropriate to receive it without too much cheek.  This is a public forum.  You will witness as many opinions as there are members.  Do with those opinions what you will.  You might find, though, that you >>can<< learn a thing or two from those who have different experiences from yours, and different preferences than your own.

"Experience is what enables you to recognise a mistake the second time you make it."

"Good judgement comes from experience.  Experience comes from bad judgement."

"Wisdom is always wont to arrive late, and to be a little approximate on first possession."

"When you hear hoofbeats, think horses, not unicorns."

"Fast.  Cheap.  Good.  Pick two."

"We see a lot of arses on handlebars around here." - [J Ekdahl]

 

#22

 

As a yardstick, I found a website that was comparing results for using GCC on various processors to calculate a time and date from a Julian date: 18,000 cycles, or about 1,125us at 16MHz.

 

My code fetches the time and date from an RTC, converts it to a Julian date in seconds, converts to the rather shorter seconds of sidereal time, adjusts to local Greenwich Mean Sidereal Time, then extracts time in the same way you would get local time from the Julian date. 1016us.

 

My target was a millisecond, but that will do :-)

It's me again...

#23

So you are saying the C is just 10% less efficient (in execution time) than hand crafted Asm? That actually makes the C pretty good doesn't it? Also was that using <time.h> in AVR-LibC V2.0 or not? (I'm guessing not).

#24

Unless I have lost the plot somewhere,   I dug out an old DS1302 program and timed my "unixtime conversion" C functions.   (which I think originated from my old 6502 ASM code)

 

    struct tm_t *tp;
    time_t ds1302time, bumtime;
    ...
    STDFUNCTIME({tp = gmtime(&ds1302time);}, "gmtime");
    STDFUNCTIME({bumtime = mktime(tp);}, "mkime");
    printf("Date: %s %02d/%02d/%04d ", days[tp->tm_wday], tp->tm_mday, tp->tm_mon + 1, tp->tm_year + 1900);
    printf("Time: %02d:%02d:%02d bumtime=%ld\r\n", tp->tm_hour, tp->tm_min, tp->tm_sec, bumtime);

The FUNCTIME() macro reported 527us for gmtime() and 179us for mktime().

Bear in mind that this is with Codevision on a 16MHz Uno.

GCC might have faster arithmetic functions.

 

It had never crossed my mind that gmtime() efficiency would be important.  After all,  you would generally do any maths on a time_t.   tm_t is only of interest when you want something human-readable.

 

I have not looked at the GCC time.h functions.    I doubt if ASM would be any faster than C.    I am sure that Scandinavians can make dramatic improvements.

 

David.

 

Edit.   Just tried it with GCC in the AS7 Simulator.   gmtime() took 260us (4150 cycles).  mktime() took 74us.

Edit.   Ok,  I am using gmtime() rather than Julian time.   But Julian time is only gmtime() + Julian_seconds("1 Jan 1970")

In practice you will be periodically correcting your system timer with the RTC.    Because you probably want a better resolution than one second,   your system timer will count us or ms.
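
e.g. with the GCC <time.h> (from memory, untested - and remember the AVR-LibC epoch is 2000-01-01, not 1970):

#include <time.h>

/* read the DS1302 into a struct tm, then keep the library clock in step;
   system_tick() is called from a 1Hz interrupt elsewhere */
void sync_system_time(struct tm *rtc_now)
{
    set_system_time(mk_gmtime(rtc_now));
}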

Last Edited: Mon. Apr 18, 2016 - 11:03 PM
#25

clawson wrote:

So you are saying the C is just 10% less efficient (in execution time) than hand crafted Asm? That actually makes the C pretty good doesn't it? Also was that using <time.h> in AVR-LibC V2.0 or not? (I'm guessing not).

 

Er, no - my explanation may not have been clear; I'm actually doing a lot more besides. Instead of just getting a Julian date and converting it to a time, I'm fetching and converting a conventional date/time to a Julian-style one expressed as sidereal seconds*2^13 since New Year 2000, adding an offset (the Julian sidereal time at New Year), then doing the conversion to a time (but not a date, I will grant). Which I think is probably a bit more work.

It's me again...

#26

david.prentice wrote:

Unless I have lost the plot somewhere,   I dug out an old DS1302 program and timed my "unixtime conversion" C functions.   (which I think originated from my old 6502 ASM code)

 

    struct tm_t *tp;
    time_t ds1302time, bumtime;
    ...
    STDFUNCTIME({tp = gmtime(&ds1302time);}, "gmtime");
    STDFUNCTIME({bumtime = mktime(tp);}, "mkime");
    printf("Date: %s %02d/%02d/%04d ", days[tp->tm_wday], tp->tm_mday, tp->tm_mon + 1, tp->tm_year + 1900);
    printf("Time: %02d:%02d:%02d bumtime=%ld\r\n", tp->tm_hour, tp->tm_min, tp->tm_sec, bumtime);

The FUNCTIME() macro reported 527us for gmtime() and 179us for mktime().

Bear in mind that this is with Codevision on a 16MHz Uno.

GCC might have faster arithmetic functions.

 

It had never crossed my mind that gmtime() efficiency would be important.  After all,  you would generally do any maths on a time_t.   tm_t is only of interest when you want something human-readable.

 

I have not looked at the GCC time.h functions.    I doubt if ASM would be any faster than C.    I am sure that Scandinavians can make dramatic improvements.

 

David.

 

Edit.   Just tried it with GCC in the AS7 Simulator.   gmtime() took 260us (4150 cycles).  mktime() took 74us.

Edit.   Ok,  I am using gmtime() rather than Julian time.   But Julian time is only gmtime() + Julian_seconds("1 Jan 1970")

In practice you will be periodically correcting your system timer with the RTC.    Because you probably want a better resolution than one second,   your system timer will count us or ms.

 

I just took the timings from a site that was comparing the speeds of a 'modulo heavy' GCC operation on different processors and optimisations.

 

1 second is fine for me.  My scope drive has a resolution of 1 arc-second = 1/15 second, but I don't even need arc-minute accuracy to find things. So plus or minus a handful of seconds isn't critical. Doing a once-per-second update from the RTC is plenty.

It's me again...

#27

joeymorin wrote:

I know that these days many people just want something that works and consider plugging 'module A' into 'Device B' to be 'making it themselves', but if the answers to the questions above are 'yes' then I might as well just buy an off the shelf unit.

So, when are you opening up your own semiconductor foundry? ;-)

 

In all seriousness, I don't begrudge anyone's desire to learn, nor anyone's chosen learning process.  Indeed I applaud efforts to understand from 'first principles'.  I've been known to reinvent the wheel a few times myself.  However, since you're here asking for help, it does seem appropriate to receive it without too much cheek.  This is a public forum.  You will witness as many opinions as there are members.  Do with those opinions what you will.  You might find, though, that you >>can<< learn a thing or two from those who have different experiences from yours, and different preferences than your own.

 

I have learnt a fair bit over the past week.

 

I don't want to seem cheeky - that was a genuine expression of why I am using ASM, in the hope of saving the people who want to convert me to C from wasting their time :-)

 

I can just about use C - I've even managed to get an off-the-shelf robot and an Arduino working using it. But my C is like my French: I can read a newspaper, but don't expect me to write La Recherche du temps perdu.

It's me again...