Difference between 0x17 and $17

Hello! I am new to this forum. I have a question that may be childish, but I learned microprocessors a long time ago (in the seventies: 8080, TMS9900, 8085) and did some programming using line-by-line assemblers. Now that I am retired I have decided to take on a project as a challenge, using an AVR microcontroller, the ATtiny2313, and the assembler in AVR Studio 4. I have seen some assembler examples, and I would like someone to explain the difference between (ldi r16, 0xFF and out $17, r16): the 0xFF that means hexadecimal, and the $17 that is also hexadecimal.

Thanks in advance
Regards,
Manuel Silva

Those are just different radix specification conventions, accumulated over the loooong course of time. Some people like it with a buck, some with 0x, some add a trailing 'h'. I guess AVR assemblers just swallow them all. There is no difference :D

Some people here can tell interesting stories about why certain prefix/suffix was chosen where and when and some might even start a fight for their preferred notation.
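If it helps to see the equivalence concretely, here is a minimal Python sketch (parse_hex is a hypothetical helper, not part of any assembler) that accepts all three spellings and shows they denote the same number:

```python
def parse_hex(s):
    """Accept C-style 0xFF, Motorola-style $FF, and Intel-style 0FFh."""
    s = s.strip()
    if s.startswith(("0x", "0X")):
        return int(s[2:], 16)   # C / AVR assembler notation
    if s.startswith("$"):
        return int(s[1:], 16)   # Motorola-style "buck" notation
    if s[-1] in "hH":
        return int(s[:-1], 16)  # Intel-style trailing 'h'
    return int(s, 16)

print(parse_hex("0xFF"), parse_hex("$FF"), parse_hex("0FFh"))  # prints: 255 255 255
```

So 0x17 and $17 name the same byte; only the spelling convention differs, and the assembler reduces both to the same bits.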

The Dark Boxes are coming.

Thanks for the information.
Regards
Manuel Silva

Just a quick note: I tried all three in the ASM2 assembler.

LDI A,0x0F ;Works fine
LDI A,$0F ;Also Works Fine
LDI A,0Fh ;Generates an Error Message

Quote:

Operands

The following operands can be used:
...
Integer constants: constants can be given in several formats, including
Decimal (default): 10, 255
Hexadecimal (two notations): 0x0a, $0a, 0xff, $ff
Binary: 0b00001010, 0b11111111
Octal (leading zero): 010, 077
...

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

I know that $XX was the norm for Motorola, and I think the 'XXh' notation came from x86. If I remember correctly, real mode had a convention of digits, a colon, more digits, and a trailing 'h', but you couldn't just read it as a single hex value without conversion: the first group of digits specified an offset from 0 in some multiple of a value (16? 64?), while the second specified an offset from that offset, so every single location actually had multiple values that could legitimately refer to it. I'm guessing that 0xXX notation originated in the mainframe world (which would explain why it's the notation used by C & Unix in general).
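For the record, the real-mode multiplier was 16: the 8086 formed a 20-bit physical address as segment * 16 + offset, which is exactly why many segment:offset pairs name the same location. A quick Python sketch (physical is just an illustrative name):

```python
def physical(segment, offset):
    """8086 real mode: 20-bit physical address = segment * 16 + offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # shift left by 4 = multiply by 16

# Two different segment:offset pairs, one physical address:
assert physical(0xB800, 0x0000) == 0xB8000
assert physical(0xB000, 0x8000) == 0xB8000
```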

The one notation I've never understood the rationale for is octal. Hex digits neatly encapsulate a single nybble, but octal just seems like a waste of everyone's time. It doesn't divide neatly into anything, it's more work to decode in your head than hex, and involves 50% more keystrokes than hex. I'm guessing that octal made sense way back at the dawn of computing (before anyone actually came up with the idea of hex), but I can't think of any real purpose it serves today...

There's no place like ~/

The old PDP-11s and such used an OCTAL character set (before ASCII) that crunched 3 chars into its 16-bit registers. Hence the 3-character file extensions that were so common for 1/2 a century.

Back when I was your age (Eh? What's that again, sonny??) and one toggled bootstrap loaders into the computer, which then read the boot program from high-speed media like paper tape and punched cards, several computer architectures were "built" on octal. I'm most familiar with the DEC PDP-8 series. Toggle switches were in banks of 3. Items like assembler listings had addresses, op codes, etc. in octal. 12-bit memory words. So you ended up "thinking" in octal when working with the stuff, just like you might "think" in hex when working with AVR assembler listings, addresses, etc.

Another interesting way of thinking was on Burroughs business mainframes that worked in decimal, so everything (well, almost everything) was in 4-bit BCD nibbles. Nary an ASCII in sight either (except on commo lines)--real computers worked in EBCDIC.

Why, I remember when ...

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

> The old PDP-11s and such used an OCTAL character set (before ASCII)
> that crunched 3 chars into its 16-bit registers. Hence the 3
> character file extensions that were so common for 1/2 a century.

RADIX-50. But that doesn't explain the use of the octal notation at
all, as in order to squeeze three characters (out of the set
[A-Z0-9.$% ]) into one machine word, you could not align each of the
characters at an octal (or even bit) boundary. The "50" in RADIX-50
stands for 50[octal] characters available in the character set, so
40[decimal] characters. Thus, you'd need ln(40)/ln(2) = 5.322 bits
per digit. If you'd round that up to the next bit boundary, you
needed 3*6 = 18 bits, while at the tightest packing, it occupied
15.966 bits. ;-)

Google for the term to learn about the encoding used.
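For the curious, the packing itself is simple: each character becomes an index 0-39 from the set above, and three of them are combined base-40 into one 16-bit word. A Python sketch (rad50_pack is an illustrative name; the exact ordering follows DEC's definition):

```python
# The 40-character RADIX-50 set: space, A-Z, $, ., %, then the digits 0-9.
RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_pack(three_chars):
    """Pack three RADIX-50 characters into one 16-bit word, base 40."""
    a, b, c = (RAD50.index(ch) for ch in three_chars.upper())
    return (a * 40 + b) * 40 + c

assert rad50_pack("   ") == 0
assert rad50_pack("999") == 39 * 1600 + 39 * 40 + 39 == 63999  # still fits in 16 bits
```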

I think the affinity of the PDP-11 and other minicomputers of that era
for octal notation had a different motivation. Some kind of notation
was needed to represent binary data more effectively than just the
classical 0/1 binary printout, yet with the available digits, the
closest that was possible was to use digits 0 through 7, on a radix of
8. The idea to ``invent'' more digits up to 16 by appending letters
isn't such a natural one as it appears to us today, as one had to
abuse letters for that.

As for numbering forms, the PDP-11 software usually understood three
different forms of number literals (each representing a 16-bit machine
word). Numbers without a suffix were octal, numbers followed by a dot
were meant to be decimal, and IIRC RADIX-50 `numbers' were just three
characters introduced by an apostrophe.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.

Actually, those who ever hand-coded the PDP-11 in machine code know how useful octal notation was there. The entire instruction set was built from 3-bit-wide fields -- you could keep a short list of instructions handy and enter the code in some primitive monitor without even needing an assembler.
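A sketch of why that worked so well: in a PDP-11 double-operand instruction, the low five octal digits line up exactly with the 3-bit mode and register fields, so an octal listing can be read off field by field (decode_pdp11 is just an illustrative name):

```python
def decode_pdp11(word):
    """Split a 16-bit PDP-11 double-operand instruction into its fields:
    opcode (top 4 bits), then src mode/reg and dst mode/reg (3 bits each)."""
    dst_reg  =  word        & 0o7
    dst_mode = (word >> 3)  & 0o7
    src_reg  = (word >> 6)  & 0o7
    src_mode = (word >> 9)  & 0o7
    opcode   = (word >> 12) & 0o17
    return opcode, (src_mode, src_reg), (dst_mode, dst_reg)

# 012702 octal is MOV #n, R2 -- each field is literally one octal digit:
assert decode_pdp11(0o012702) == (0o01, (2, 7), (0, 2))
```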

It is of course not clear what came first; probably the real reason is the one Jörg suggested -- it just seemed more natural, and DEC engineers designed the machine to be very convenient for programmers.

I almost never use octal today, but it is forever in my heart together with the PDP-11. It would probably be cool to store some strings in firmware for a smaller AVR in R50! :)

The Dark Boxes are coming.

In the early days, the idea of the 8-bit byte as the basic building block for everything had not solidified - so there was no particular reason not to use 3-bit groupings and, hence, Octal.

Plus, as already noted, the idea of "stealing" some letters as extra digits to make a hexadecimal notation was a bit of a "quantum leap".

Of course, now that the 8-bit byte is well & truly entrenched, there really is no point to Octal - other than purely historical interest...
(and as a trap for the unwary who put leading zeros on decimal numbers... :shock: )
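That trap is easy to demonstrate: C-family languages treat a leading zero as an octal prefix, which Python's int() can mimic:

```python
# A leading zero means octal in C-family source code, so "017" is not seventeen:
assert int("017", 8)  == 15   # what a C compiler silently does with 017
assert int("017", 10) == 17   # what the unwary programmer probably meant
```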

Top Tips:

  1. How to properly post source code - see: https://www.avrfreaks.net/comment... - also how to properly include images/pictures
  2. "Garbage" characters on a serial terminal are (almost?) invariably due to wrong baud rate - see: https://learn.sparkfun.com/tutorials/serial-communication
  3. Wrong baud rate is usually due to not running at the speed you thought; check by blinking a LED to see if you get the speed you expected
  4. Difference between a crystal and a crystal oscillator: https://www.avrfreaks.net/comment...
  5. When your question is resolved, mark the solution: https://www.avrfreaks.net/comment...
  6. Beginner's "Getting Started" tips: https://www.avrfreaks.net/comment...

Now that the 8-bit byte is well & truly entrenched, it's time for us, the curious, to re-explore odd word-length designs. 9 sounds like a good word length to me! :D

The Dark Boxes are coming.

Word lengths like 12, 18 and 36 used to be very common. Octal notation fit these better than hex.
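The arithmetic behind that: octal digits carry 3 bits and hex digits carry 4, so a word width is "clean" in a radix only when the digit size divides it. All three classic widths divide by 3, but 18 does not divide by 4. A tiny Python check:

```python
# Octal digits are 3 bits wide, hex digits 4 bits wide.
for width in (12, 18, 36):
    assert width % 3 == 0   # every classic width tiles exactly into octal digits
assert 18 % 4 != 0          # but an 18-bit word can't be written in whole hex digits
```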

Laurence Boyd II

You could also count in octal on your fingers alone without the need to remove your shoes and socks :lol: (in fact you could even do it after a nasty industrial accident involving your thumbs! )

(I have happy memories of implementing a Forth interpreter on an LSI-11 in PDP-11 assembler 22 years ago, in fact)