lower case true false in stdbool.h?


I don't intend to complain, and I am sorry if it sounds that way. I am wondering if there was a specific reason to use lower case for the boolean state constants true and false. I try to stick with standards as much as possible. So to me the TRUE and FALSE were always uppercase. I know I could just as easily go and add defs for the uppercase TRUE and FALSE, but I am thinking that there is a good reason behind using lowercase. In my lack of experience my fixes tend to cause way more pain than gain.


> I am wondering if there was a specific reason to use lower case
> for the boolean state constants true and false.

Yes: that specific reason is called the ISO C99 standard.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


bnemec wrote:
I try to stick with standards as much as possible. So to me the TRUE and FALSE were always uppercase.

Was this "standard" set by windows.h by any chance? :wink:

Cliff


Quote:
Yes: that specific reason is called the ISO C99 standard.
:oops:
I feared this was a stupid question.

At the risk of showing off more of my ignorance; but with the hope of reducing it at the same time...

Quote:
Was this "standard" set by windows.h by any chance? Wink

I don't know; my assumption that true and false would be uppercase is based on the standard, or what I thought was standard, that constants should be uppercase.

I think I understand the difference (in its entirety) between

const int MY_DATA = 100;

and

#define MY_DATA 100

The first is actually executed code that declares and initializes memory space as read-only; the second is a preprocessor directive that just replaces every instance of MY_DATA in the source with 100. I think I have that part correct. I think MY_DATA is usually considered a constant either way it's declared/defined (of course not all #define directives are used as an alternative to const), but this usage of #define always makes me want to put the new term in uppercase. I suppose the MY_DATA definition can be undefined and/or redefined later in the source, so it's not truly constant; then again, I read that declared consts can be changed indirectly via pointers. Maybe I'm mistaken by thinking of the ISO C99 "true" and "false" as constants.

See how that could look like a contradiction? It's probably not a contradiction and only looks that way to me because I'm missing something.
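For illustration, here is roughly the sort of difference I mean (a toy sketch of my own, not from any reference):

#include <stdio.h>

const int MY_CONST = 100;   /* an object with a type; it has an address       */
#define   MY_MACRO 100      /* plain text substitution; no object, no address */

int main(void)
{
   const int *p = &MY_CONST;   /* fine: MY_CONST is a real object        */
   /* int *q = &MY_MACRO; */   /* meaningless: would expand to &100      */

#undef MY_MACRO                /* a macro can be undefined and redefined */
#define MY_MACRO 200           /* later in the source                    */

   printf("%d %d\n", *p, MY_MACRO);   /* prints: 100 200 */
   return 0;
}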

I've been searching the ISO C99 standard as well as the web for #define in C and any other keyword convolutions I can think of. So far most code snippets have the term in uppercase, and I found a couple of sites saying that the term is usually in upper case to identify it as a constant.

I am unable to find any standard that states that constants should be uppercase. Anyone know where that comes from?


There are all sorts of naming "conventions". As far as I know, most are just that, "conventions".

For example, I have seen MY_DATA, MyData, and kMyData (k presumably for "konstant"). There are probably as many other versions as there are programmers.

The hang-up, if there is one, may be considering "true" to be a constant rather than a keyword. But, when it comes down to it, "it is what it is".

Jim

 

Until Black Lives Matter, we do not have "All Lives Matter"!

 

 


Quote:
I am unable to find any standard that states that constants should be uppercase. Anyone know where that comes from?

Interesting question... Both the original K&R and the second edition (based on Draft-proposed ANSI C) certainly suggest that constants should be upper case, if only by their examples. In fact the second edition has the example:
enum boolean { NO, YES };
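A quick sketch of how that example reads in use (my own usage, not taken from the book):

enum boolean { NO, YES };        /* the K&R 2nd-edition example */

int main(void)
{
   enum boolean finished = NO;   /* the enumerators read like upper-case constants */

   while (finished == NO) {
      /* ... do the work ... */
      finished = YES;
   }
   return 0;
}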

Four legs good, two legs bad, three legs stable.


Quote:

I am unable to find any standard that states that constants should be uppercase. Anyone know where that comes from?

It's not a "standard" in the sense you're thinking of. The architects of the language don't have any say at all in what case you choose to assign to your application-specific constants. A strictly standards compliant compiler won't throw errors in either case.

However, many different "coding conventions" have been invented to help improve code consistency, readability and maintainability. These conventions may dictate the desirability to use upper-case and lower-case labels in various circumstances.

At the end of the day, the definition of the boolean values used by an ISO C99 compliant compiler must conform to whatever the ISO says. The standard says lower-case, therefore they are lower-case.

If your coding convention dictates that TRUE and FALSE are better than true and false, then by all means create a new header which defines them as such:

#define TRUE true
#define FALSE false

...and document the reason why you are being forced to use labels other than the ones the ISO mandates.
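As a complete header that might look something like this (just a sketch; the file and guard names are made up):

/* bool_compat.h - our coding convention wants TRUE/FALSE, so map them
 * onto the lower-case names mandated by ISO C99's <stdbool.h>.
 */
#ifndef BOOL_COMPAT_H
#define BOOL_COMPAT_H

#include <stdbool.h>

#define TRUE  true
#define FALSE false

#endif /* BOOL_COMPAT_H */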


Quote:
Maybe I'm mistaken by thinking of the ISO C99 "true" and "false" as constants.

No, they would not be considered constants; they are the values that a bool variable can take. And #defines are not really constants either; they are macros, though you can define a macro that might be considered a constant.

Quote:
I am unable to find any standard that states that constants should be uppercase. Anyone know where that comes from?

It is not a standard, just a convention. And the convention is for macros to be uppercase. Constants (that is, variables defined as const) are less likely to be uppercase.
One of the main reasons to make macros uppercase is that they can be very dangerous (especially in C++). For instance, you can make a macro look like a function call. If written and used properly, it can be fine. But written or used improperly, it can be disastrous.
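The classic illustration (my own toy example, not from any particular code base):

#include <stdio.h>

/* Looks like a function call, but it is plain text substitution. */
#define SQUARE(x) x * x

int main(void)
{
   int n = 3;

   /* SQUARE(n + 1) expands to n + 1 * n + 1, i.e. 3 + 3 + 1 = 7, not 16. */
   printf("%d\n", SQUARE(n + 1));

   /* Writing it as ((x) * (x)) fixes the precedence problem, but an
      argument with side effects (e.g. SQUARE(n++)) would still be
      evaluated twice. */
   return 0;
}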

Regards,
Steve A.

The Board helps those that help themselves.


ka7ehk wrote:
kMyData (k presumably for "konstant").

This particular way of naming things, with prefix letters giving an idea of the type and the rest of the name made of several words whose boundaries are marked by upper-case letters, is known as "Hungarian notation", named in honor of one of Microsoft's original Windows programmers, Charles Simonyi. It seems to be Microsoft's preferred naming convention in any code they deliver. The prefixing lower-case letters used are:

c - char
by - BYTE (unsigned char)
n - short
i - int
x,y - int (used as co-ords)
cx,cy - int (used as length - c = count)
b or f - BOOL (which is an int), f = flag
w - WORD (unsigned short)
l - LONG (long)
dw - DWORD (unsigned long)
fn - function
s - string
sz - string with terminating 0
h - handle
p - pointer
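
Put together, a declaration block in that style tends to read something like this (invented names, and the Windows typedefs are spelled out just so the fragment stands alone):

typedef int            BOOL;
typedef unsigned short WORD;
typedef unsigned long  DWORD;

BOOL   fDone      = 0;        /* f  - flag (BOOL)                  */
WORD   wCount     = 0;        /* w  - WORD (unsigned short)        */
DWORD  dwBaudRate = 9600;     /* dw - DWORD (unsigned long)        */
char   szName[32] = "COM1";   /* sz - string with terminating 0    */
char  *pszName    = szName;   /* p + sz - pointer to such a string */
int    cxWidth    = 640;      /* cx - int used as a length         */
int    cyHeight   = 480;      /* cy - int used as a length         */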

(personally I think it's awful and makes for very unreadable looking code - but I guess that's just a personal opinion!)

I've not seen 'k' used in this convention though.

Cliff


The way I have understood it: in C++, bool is a stronger type than just "anything zero is FALSE, anything else is TRUE". It is more like what Boolean is to Pascal. Fine, but why not "TRUE" and "FALSE"? Because those names have been used for a long time for the old "zero-is-false-all-others-is-true" kind of boolean values. You'll often find this, or something similar, in a header file:

#define FALSE 0
#define TRUE -1

So, to distinguish these new, somewhat stronger-typed bool values from the old "I'm-a-boolean-hiding-in-an-ordinal-numeric-variable" kind, some distinguishable names were needed, and the upper-case ones were already taken.

Aside, and somewhat ironic: apart from the above, I really like them in lower case. But then I grew up with Pascal, not this half-messy bad excuse for a macro-macro-macro-assembler language which is called "C". Not to mention that a Dane decided to shoot himself and the rest of the software world in the foot by inventing something called C++...


 

"Some questions have no answers."[C Baird] "There comes a point where the spoon-feeding has to stop and the independent thinking has to start." [C Lawson] "There are always ways to disagree, without being disagreeable."[E Weddington] "Words represent concepts. Use the wrong words, communicate the wrong concept." [J Morin] "Persistence only goes so far if you set yourself up for failure." [Kartman]


clawson wrote:
This particular way of naming things, with prefix letters giving an idea of the type and the rest of the name made of several words whose boundaries are marked by upper-case letters, is known as "Hungarian notation".... (personally I think it's awful and makes for very unreadable looking code - but I guess that's just a personal opinion!)

HN has been called the tactical nuclear weapon of source code obfuscation techniques:

http://mindprod.com/jgloss/unmainnaming.html

(It is item 30.)

- John


Thanks, guys, for taking the time to give me a little history (Johan) and a correction (uppercase constant names are not a standard, just a "convention").

Now I believe I can be satisfied with true and false and not complicate things by defining TRUE and FALSE.

I tend to have a desire to find out the "why it's done the way it's done" and many times those types of questions get a "Because that's the way it is" answer. Unfortunately that still leaves my mind wondering why it was done that way. If it was just because it had to be one way or the other, I can understand that; I just try to get the whole picture to help me better understand.

Thanks again.


jfiresto, Thanks! That was some funny stuff. Wish I had seen that back in my college days. Would have had to access the plotter in the Civil Engineering lab to make a poster of those guidelines to post in the CS labs.


Quote:
So, to distinguish these new somewhat stronger typed bool values from the old "I'm-a-boolean-hiding-in-a-ordinal-numeric-variable" some distinguishable names where needed, and the upper-case ones where already taken.

I think that it has more to do with the fact that all keywords in C are lowercase.

Quote:
It seems to be Microsoft's preferred naming convention in any code they deliver.

I believe that it has fallen out of favor at MS.

Quote:
HN has been called the tactical nuclear weapon of source code obfuscation techniques:

http://mindprod.com/jgloss/unmai...

(It is item 30.)


Item 30 is mostly an objection to people using HN incorrectly, inappropriately or excessively. Item 31 is really a better argument against it.

Regards,
Steve A.

The Board helps those that help themselves.


JohanEkdahl wrote:
You'll often find this, or something similar, in a header file:

#define FALSE 0
#define TRUE -1


I'd generally vote for

#define FALSE 0
#define TRUE ~FALSE

in fact, and leave it up to the compiler to decide what it considers to be true.

Koshchi wrote:
I believe that it has fallen out of favor at MS.

Ah, miracles DO happen!

(I have to admit that while we were Microsoft's biggest customer in Europe back in the 80's and early 90's, when I got quite a lot of exposure to their code, I've had little dealing with their stuff in more recent years)

Cliff


> in fact and leave it up to the compiler to decide what it considers
> to be true.

This is well-defined by the C standard, so there's not much point in
letting "the compiler decide". When evaluating, any non-zero expression
must be considered true. When generating truth values (like in == or !=
operators), a "true" value *must* be one.

Personally, I'd consider a definition of TRUE as anything other than 1 very
dangerous, given the typical beginner's "over-eagerness" in boolean tests,
as in:

#define BOOL int
#define FALSE 0
#define TRUE (~0)
...

  BOOL myresult;
  ...
  myresult = varA != varB;

  if (myresult == TRUE) {
    ...
  }

might make it really difficult to see the actual failure, because it's
really two failures in a row. The correct code (without using the
C99 feature) would be:

#define BOOL int
#define FALSE 0
#define TRUE 1
...

  BOOL myresult;
  ...
  myresult = varA != varB;

  if (myresult) {
    ...
  }
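
And for completeness, the same thing using the C99 feature (a sketch with
the same placeholders as above):

#include <stdbool.h>
...

  bool myresult;
  ...
  myresult = varA != varB;

  if (myresult) {
    ...
  }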

> I think I understand the difference (in its entirety) between
> Code:
> const int MY_DATA = 100;
> ...
> The first is actually executed code that declares and initializes
> memory space as read only

I didn't verify the standard, but this doesn't appear to be true
anymore. It has always been false in C++, and it seems to me that
GCC can now also generate code that handles MY_DATA as a genuine
constant rather than a read-only variable (i.e. something that
doesn't occupy any separate storage). Thus I figure this
interpretation is now allowable under the C99 standard.

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


I think it's dangerous to assume that logical true is always 1. I think the best definition for TRUE is (!FALSE) so:

Quote:

#define BOOL int
#define FALSE 0
#define TRUE (!FALSE)
...

BOOL myresult;
...
myresult = varA != varB;

if (myresult==TRUE) {
...
}

will now work.
--Nick--


Sorry, got my nots mixed up - I meant ! and not ~

In fact the way we define TRUE and FALSE is:

#define FALSE (1==0)
#define TRUE (1==1)


> I think it's dangerous to assume that logical true is always 1.

It isn't, unless you're talking about something other than the language
commonly known as "C". This language has a well-defined standard that any
compiler should be based on. See above: the standard for this language
mandates the value of "true" to be 1 (and always has, even in the era
of the informal K&R de-facto standard).

IIRC the comp.lang.c FAQ goes into great detail arguing about why all
these "let the compiler do it right" #defines are utter nonsense.

Yeah, googled for it:

http://c-faq.com/bool/bool2.html

Jörg Wunsch

Please don't send me PMs, use email if you want to approach me personally.


Since we're on the subject... perhaps we can set straight another one of my misunderstandings.

Quote:

#define BOOL int
#define FALSE 0
#define TRUE (~0)
...

  BOOL myresult;
  ...
  myresult = varA != varB;

  if (myresult == TRUE) {
    ...
  }


Oh my gosh, what is this? BTW, nice link, Jörg.

I thought that true and false were just meaningful values to use in place of magic numbers when setting a Boolean variable state; for things like this:

ISR (TIMER1_COMPA_vect)
{
   // refresh the four 7-segment displays.
   refresh = TRUE;   
}

instead of doing this:

ISR (TIMER1_COMPA_vect)
{
   // refresh the four 7-segment displays.
   refresh = 1;   
}

or even this:

ISR (TIMER1_COMPA_vect)
{
   // refresh the four 7-segment displays.
   refresh = 102;   
}

If I'm not mistaken, when the main function of my program has the following loop...

for( ; ; )
   {
      if( refresh )
      {
         //code to refresh display...
      }
   }

...all three methods will do the same thing according to ISO C, but the first is much more meaningful code (at least to me).

Defining true and false has no effect on how the compiler evaluates a boolean variable, right? For example I could just as well do this:

#define true 0
#define false 1

Although the obfuscation-techniques people would be happy with me, the following code will still execute MyFunction as long as the compiler follows C99, correct? (For simplicity, please ignore any optimization.)

int myBool;
myBool = 1;
if (myBool)
{
   MyFunction();
}