I turned on pedantic warnings and it really doesn't like me using binary constants like 0x11000000 and says binary constants are a GCC extension. What does that mean exactly? I take it GCC wants things in hex rather than binary. Is binary non standard?
Split from: if (variable) {
Is binary non standard?
Yeah, binary (0b110101) is not standard. It's extremely common, though, especially on compilers for embedded chips.
I don't think that there is a way to disable that warning while keeping the rest of -pedantic :-(
binary constants like 0x11000000
No, that's a hexadecimal constant!
But you're right that binary constants are not in the standard.
They have been proposed, but the standards authorities have (bizarrely, IMO) rejected them.
Although they are in C++14: https://en.cppreference.com/w/cpp/language/integer_literal
Never mind. I was using a C++ compiler, which acts differently.
You can disable GCC warnings temporarily with diagnostic pragmas, but it appears these extension warnings are fixed and cannot be disabled that way:
_Pragma("GCC diagnostic push")
_Pragma("GCC diagnostic ignored \"-Wpedantic\"")
/* do something that normally causes a warning */
_Pragma("GCC diagnostic pop")
MarkThomas wrote:
binary constants like 0x11000000
No, that's a hexadecimal constant!
Sorry, typo. I meant to type 0b11000000. I changed them all to hex (0xc0).
What other sorts of things does -pedantic flag?
What other sorts of things does -pedantic flag?
https://gcc.gnu.org/onlinedocs/gcc/Warnings-and-Errors.html
Note these are the "extensions" that GCC (C not C++) provides:
https://gcc.gnu.org/onlinedocs/g...
The 0b thing is the last on that list.
If you want to build using "pure C" (so it's more portable) change this:
The options are:
https://gcc.gnu.org/onlinedocs/g...
but basically use -std=c99 (or, if adventurous, something even more recent like c11 or c17; GCC also accepts c18 as an alias for c17).
BTW I am 99% certain it was Joerg Wunsch (of AVR GCC fame) who got 0b added to GCC.
The 0b thing is the last on that list.
If it were pale, would it then be 0b wan?
They have been proposed, but the standards authorities have (bizarrely, IMO) rejected them.
Personally I rarely used binary constants, feeling that all the 0-counting can be error-prone. The interesting one is using 0123 (perhaps in a repetitive pattern of constants or comparisons or similar) and getting the rude awakening that it's octal, given the language's DEC roots.
There was a mention of C++ above -- are the standards there different w.r.t. binary constants?
0-counting can be error-prone.
Indeed - but the same applies to hex for 32-bit (and larger) with a lot of consecutive 0 or F
There are plenty of suggestions of ways to space groups of digits
There was a mention of C++ above -- are the standards there different w.r.t. binary constants?
I mentioned it: apparently, it's supported from C++14
>Personally I rarely used binary constants feeling that all the 0-counting can be error-prone.
It can be nice to use in some instances-
for example, you want to clear all flags in twi mstatus and set to idle state-
normal
TWI0.MSTATUS = TWI_WIF_bm|TWI_CLKHOLD_bm|TWI_RIF_bm|TWI_ARBLOST_bm|TWI_BUSSTATE_IDLE_gc; //clear all flags, set to idle
or a 1<< version which is even 'worse'
or looking at datasheet, just set the bits as needed, and make a comment
TWI0.MSTATUS = 0b11101101; //clear all flags, set to idle
The normal version does have an error, and let's say we are not sure about the 0b version. You need the datasheet in either case, but the 0b version is easier to verify, since you also don't need to check your use of the defines (which shouldn't be a problem, as there is a naming system for them, but you still have to deal with the _bp, _bm, _gm and _gc suffixes). The bit order is also correct in the 0b version, where the normal version's is not in this case.
More than 8 bits and it becomes too much of a good thing, so you are back to mentally converting binary to 0x notation, which is also easy enough.
Thank you gentlemen. Learned something new today, so that makes it a good day. So far, anyway...
This is the patch to Implement binary constants with a "0b" prefix
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=23479
Octal Wan: sic transit gloria mundi
I don't have the energy to fix the image. Just pretend...
What other sorts of things does -pedantic flag?
It attempts to make sure that everything for which the standard requires a diagnostic message actually produces one. Without it the compiler might stay silent in cases where it is supposed to be complaining.
Basically, it is supposed to catch all formally diagnosable violations of language rules, aka "errors". This flag mostly makes sense when you accompany it with a `-std=...` setting, where you specify the exact version of language standard you want to enforce.
Pedantically speaking, GCC/Clang become C (or C++) compilers only when you specify `-std=...` and `-pedantic`. Without these settings these implementations do not qualify as valid C (or C++) implementations.
Thanks Andrey. I guess that is important if one wants to generate code that is portable. I like the "Pedantically speaking" start of your last sentence.
Note that the GNU guys and the compiler-tester guys have a disagreement over whether a warning is a diagnostic. The GNU guys claim it is. For some reason, the other guys think it is not.
I guess that is important if one wants to generate code that is portable.
Indeed.
Most compilers have some sort of "strict" option that will disable all their proprietary extensions.
You still have to beware of implementation-defined behaviour, though ...
theusch wrote:
0-counting can be error-prone.
Indeed - but the same applies to hex for 32-bit (and larger) with a lot of consecutive 0 or F
>Personally I rarely used binary constants feeling that all the 0-counting can be error-prone.
It can be nice to use in some instances-
q.e.d. https://www.avrfreaks.net/commen... see the supposed clear of OCF0A
Note that the GNU guys and the compiler-tester guys have a disagreement over whether a warning is a diagnostic.
That is a strange way to put it. It is not a "disagreement", it is just a well-known and understood state of affairs, which has some rationale under it.
The situation can be described in a more pedantic fashion as follows:
1. There is no such thing as "warning" or "error" in realms of C and C++. Formally, there are only diagnostic messages. The standard imposes no requirements on the format or content of these messages. As far as the standard is concerned, the compiler is free to just output "Bark bark!" every time it wants to issue a diagnostic.
2. The language standard defines problematic contexts (i.e. invalid code), in which compilers are required to issue diagnostic messages. In some other problematic contexts compilers are advised, but not required to issue diagnostic messages (this is also invalid code, but for some reason difficult to detect). Such diagnostic messages are informally referred to as standard diagnostic messages. (Note, again, that as stated in #1 their content is not standardized, only the contexts in which they occur are.)
3. In addition to that compilers are allowed to issue additional diagnostic messages on their own accord, in additional contexts, which the compiler writers considered potentially problematic. In this latter case the code is valid from the standard point of view, but the compiler decided that there's something strange or dangerous about it. That would include, for example, reliance on implementation-defined behavior or on compiler extensions. Or occurrences of undefined behavior the compiler managed to detect. And so on...
4. In a perfect world standard diagnostic messages (#2) would be reported as "errors", since they indicate formally invalid code. And those additional compiler-invented messages (#3) would be reported as "warnings", since the code they apply to is formally valid.
5. The division into "errors" and "warnings" implemented in the GCC compiler in its default configuration is not even close to that "perfect world" division described in #4.
5.1. By default GCC fails to report some standard diagnostic messages (which is a big deal, since it makes GCC non-compliant).
5.2. By default GCC reports many standard diagnostic messages as "warnings" (which is not a big deal, since #1)
In short, GCC completely ignores some "errors", and reports some other "errors" as "warnings".
GCC implements a `-pedantic` flag, which is supposed to take care of 5.1. It [supposedly] makes GCC issue all standard diagnostic messages, thus ensuring the standard-compliance of the compiler.
GCC implements a `-pedantic-errors` flag, which is supposed to take care of both 5.1 and 5.2. It [supposedly] makes GCC report all standard diagnostic messages as "errors" (and abort translation).
---
For example, in GCC C you can compile this
int main(void) { char a[2] = "abcdef"; }
and you will get a mere "warning" for what is actually a "hard error" (a constraint violation).
For example, in GCC C you can compile this
int main(void) { char a[2] = "abcdef"; }
and you will get a mere "warning" for what is actually a "hard error" (a constraint violation).
That's a little scary. A big piece of C++ code I worked on for several years had pages and pages of warnings, most of which I ignored because they were there before I was. I think we were using an Intel compiler. In my hobby GCC code I turn on all the warnings and work the code until I don't get any. I never knew what -pedantic was and have just started turning it on, but it doesn't flag anything new since I changed all those binaries to hex.
q.e.d. https://www.avrfreaks.net/commen... see the supposed clear of OCF0A
Is the problem there the capital B in the binary value? Is that why it didn't work?
A big piece of C++ code I worked on for several years had pages and pages of warnings, most of which I ignored because they were there before I was.
Coincidentally, just yesterday I had to fork a copy of some library code whose master copy I would not change, as it's being used all over the place in a load of projects. Once I had my own copy, the first thing I did was work through the build to clear all the spurious warnings that had been there (but sadly "untouchable") for years. I did it for this very reason: I wanted to be able to see the warnings that my own changes might then generate.
Is the problem there the capital B in the binary value? Is that why it didn't work?
ANDI r16,0B00000100 ; (1<<OCF0A)
and yet for the micro he was building for that bit is bit 4 not bit 2. The code should have read:
ANDI r16,0B00010000 ; (1<<OCF0A)
Actually, just typing that, I almost confused myself as to which bit was bit 4 which kind of proves the point! Anyway if the OP had used:
ANDI r16,(1 << OCF0A)
then this would have worked on micros where the bit is bit 2 and it would have worked on micros where the bit is bit 4.
EDIT: by a huge coincidence (or perhaps it isn't?) this happened just a day or two ago in a tutorial thread. Read on from post #6:
https://www.avrfreaks.net/commen...
EDIT2: Ok, I see the poster in Lee's thread and mine are one and the same - so he made the mistake because he picked up some ill thought out "tutorial" code. Given the (infamous!) author of the tutorial perhaps I should not be too surprised?
Actually, just typing that, I almost confused myself as to which bit was bit 4 which kind of proves the point!
LOL, and thanks. That poster hasn't responded, and I suspect it may not have been counting per se but "trying something". I'll guess that the resolution may be trying to "see" a 1-cycle pulse as proof of "working".
Oh, I followed the link to that other thread, and I see what you mean. Well, the digging and discussion should help the poster move on.
Not a great strategy.
I agree. C++ was new to me and some of the experts looked at the warnings and said they were fine. I tried to look for new ones when I made changes. There was some kind of memory bug in that code that moved around in release builds when you put in print statements to look for it, but worked fine in debug builds so finding it with the debugger didn't work. We ran some diagnostic products on the source code that took all night to complete but never found anything. Management wouldn't pop for the good code analyzer. It was $10k, or something like that. We never did find it. One time only we had to give Intel a debug build with the new features they wanted because the bug popped up there. It made them crazy because they compared source code length, or something, to previous builds before introducing into the fab. I think the bug moved to someplace in the code that didn't get used much, but would pop up now and then. It was crazy. Always so much time pressure to get them the new features they wanted.
No, the issue (as Lee pointed out in subsequent posts) was that he used:
ANDI r16,0B00000100 ; (1<<OCF0A)
and yet for the micro he was building for that bit is bit 4 not bit 2. The code should have read:
ANDI r16,0B00010000 ; (1<<OCF0A)
Actually, just typing that, I almost confused myself as to which bit was bit 4, which kind of proves the point! Anyway, if the OP had used:
ANDI r16,(1 << OCF0A)
then this would have worked on micros where the bit is bit 2 and it would have worked on micros where the bit is bit 4.
I see. Big B, little b, all the same. I'm just starting to be able to look at assembly code and didn't follow the above arguments. Thanks.