C Frequently Asked Questions (FAQ) Full Version

The questions answered here are divided into several categories:
1. Null Pointers
2. Arrays and Pointers
3. Order of Evaluation
4. ANSI C
5. C Preprocessor
6. Variable-Length Argument Lists
8. Memory Allocation
11. Boolean Expressions and Variables
12. Operating System Dependencies
15. Miscellaneous (Fortran to C converters, YACC grammars, etc.)
Herewith, some frequently-asked questions and their answers:
Section 1. Null Pointers
1. What is this infamous null pointer, anyway?
A: The language definition states that for each pointer type, there is
a special value — the “null pointer” — which is distinguishable
from all other pointer values and which is not the address of any
object. That is, the address-of operator & will never yield a null
pointer, nor will a successful call to malloc. (malloc returns a
null pointer when it fails, and this is a typical use of null
pointers: as a “special” pointer value with some other meaning,
usually “not allocated” or “not pointing anywhere yet.”)
A null pointer is conceptually different from an uninitialized
pointer. A null pointer is known not to point to any object; an
uninitialized pointer might point anywhere. See also questions 49,
55, and 85.
As mentioned in the definition above, there is a null pointer for
each pointer type, and the internal values of null pointers for
different types may be different. Although programmers need not
know the internal values, the compiler must always be informed
which type of null pointer is required, so it can make the
distinction if necessary (see below).
References: K&R I Sec. 5.4 pp. 97-8; K&R II Sec. 5.4 p. 102; H&S
Sec. 5.3 p. 91; ANSI Sec. 3.2.2.3 p. 38.
2. How do I “get” a null pointer in my programs?
A: According to the language definition, a constant 0 in a pointer
context is converted into a null pointer at compile time. That is,
in an initialization, assignment, or comparison when one side is a
variable or expression of pointer type, the compiler can tell that
a constant 0 on the other side requests a null pointer, and
generate the correctly-typed null pointer value. Therefore, the
following fragments are perfectly legal:
char *p = 0;
if(p != 0)
However, an argument being passed to a function is not necessarily
recognizable as a pointer context, and the compiler may not be able
to tell that an unadorned 0 “means” a null pointer. For instance,
the Unix system call “execl” takes a variable-length, null-
pointer-terminated list of character pointer arguments. To
generate a null pointer in a function call context, an explicit
cast is typically required:
execl("/bin/sh", "sh", "-c", "ls", (char *)0);
If the (char *) cast were omitted, the compiler would not know to
pass a null pointer, and would pass an integer 0 instead. (Note
that many Unix manuals get this example wrong.)
When function prototypes are in scope, argument passing becomes an
“assignment context,” and most casts may safely be omitted, since
the prototype tells the compiler that a pointer is required, and of
which type, enabling it to correctly cast unadorned 0’s. Function
prototypes cannot provide the types for variable arguments in
variable-length argument lists, however, so explicit casts are
still required for those arguments. It is safest always to cast
null pointer function arguments, to guard against varargs functions
or those without prototypes, to allow interim use of non-ANSI
compilers, and to demonstrate that you know what you are doing.
     Unadorned 0 okay:              Explicit cast required:

     initialization                 function call,
     assignment                       no prototype in scope
     comparison
     function call,                 variable argument in
       prototype in scope,            varargs function call
       fixed argument
References: K&R I Sec. A7.7 p. 190, Sec. A7.14 p. 192; K&R II
Sec. A7.10 p. 207, Sec. A7.17 p. 209; H&S Sec. 4.6.3 p. 72; ANSI
Sec. 3.3.2.2 .
3. What is NULL and how is it #defined?
A: As a matter of style, many people prefer not to have unadorned 0’s
scattered throughout their programs. For this reason, the
preprocessor macro NULL is #defined (by <stdio.h> or <stddef.h>),
with value 0 (or (void *)0, about which more later). A programmer
who wishes to make explicit the distinction between 0 the integer
and 0 the null pointer can then use NULL whenever a null pointer is
required. This is a stylistic convention only; the preprocessor
turns NULL back to 0 which is then recognized by the compiler (in
pointer contexts) as before. In particular, a cast may still be
necessary before NULL (as before 0) in a function call argument.
(The table under question 2 above applies for NULL as well as 0.)
NULL should _only_ be used for pointers; see question 8.
References: K&R I Sec. 5.4 pp. 97-8; K&R II Sec. 5.4 p. 102; H&S
Sec. 13.1 p. 283; ANSI Sec. 4.1.5 p. 99, Sec. 3.2.2.3 p. 38,
Rationale Sec. 4.1.5 p. 74.
4. How should NULL be #defined on a machine which uses a nonzero bit
pattern as the internal representation of a null pointer?
A: Programmers should never need to know the internal
representation(s) of null pointers, because they are normally taken
care of by the compiler. If a machine uses a nonzero bit pattern
for null pointers, it is the compiler’s responsibility to generate
it when the programmer requests, by writing “0” or “NULL,” a null
pointer. Therefore, #defining NULL as 0 on a machine for which
internal null pointers are nonzero is as valid as on any other,
because the compiler must (and can) still generate the machine’s
correct null pointers in response to unadorned 0's seen in pointer
contexts.
5. If NULL were defined as follows:
#define NULL (char *)0
wouldn’t that make function calls which pass an uncast NULL work?
A: Not in general. The problem is that there are machines which use
different internal representations for pointers to different types
of data. The suggested #definition would make uncast NULL
arguments to functions expecting pointers to characters to work
correctly, but pointer arguments to other types would still be
problematical, and legal constructions such as
FILE *fp = NULL;
would fail.
Nevertheless, ANSI C allows the alternate
#define NULL (void *)0
definition for NULL. Besides helping incorrect programs to work
(but only on machines with homogeneous pointers, thus questionably
valid assistance) this definition may catch programs which use NULL
incorrectly (e.g. when the ASCII NUL character was really intended).
6. I use the preprocessor macro
#define Nullptr(type) (type *)0
to help me build null pointers of the correct type.
A: This trick, though popular in some circles, does not buy much. It
is not needed in assignments and comparisons; see question 2. It
does not even save keystrokes. Its use suggests to the reader that
the author is shaky on the subject of null pointers, and requires
the reader to check the #definition of the macro, its invocations,
and _all_ other pointer usages much more carefully.
7. Is the abbreviated pointer comparison “if(p)” to test for non-null
pointers valid? What if the internal representation for null
pointers is nonzero?
A: When C requires the boolean value of an expression (in the if,
while, for, and do statements, and with the &&, ||, !, and ?:
operators), a false value is produced when the expression compares
equal to zero, and a true value otherwise. That is, whenever one
writes
if(expr)
where "expr" is any expression at all, the compiler essentially
acts as if it had been written as
if(expr != 0)
Substituting the trivial pointer expression “p” for “expr,” we have
if(p) is equivalent to if(p != 0)
and this is a comparison context, so the compiler can tell that the
(implicit) 0 is a null pointer, and use the correct value. There
is no trickery involved here; compilers do work this way, and
generate identical code for both statements. The internal
representation of a pointer does _not_ matter.
The boolean negation operator, !, can be described as follows:
!expr is essentially equivalent to expr?0:1
It is left as an exercise for the reader to show that
if(!p) is equivalent to if(p == 0)
“Abbreviations” such as if(p), though perfectly legal, are
considered by some to be bad style.
See also question 71.
References: K&R II Sec. A7.4.7 p. 204; H&S Sec. 5.3 p. 91; ANSI
Secs. 3.3.3.3, 3.3.9, 3.3.13, 3.3.14, 3.3.15, 3.6.4.1, and 3.6.5 .
8. If “NULL” and “0” are equivalent, which should I use?
A: Many programmers believe that “NULL” should be used in all pointer
contexts, as a reminder that the value is to be thought of as a
pointer. Others feel that the confusion surrounding “NULL” and “0”
is only compounded by hiding “0” behind a #definition, and prefer
to use unadorned “0” instead. There is no one right answer.
C programmers must understand that “NULL” and “0” are
interchangeable and that an uncast “0” is perfectly acceptable in
initialization, assignment, and comparison contexts. Any usage of
“NULL” (as opposed to “0”) should be considered a gentle reminder
that a pointer is involved; programmers should not depend on it
(either for their own understanding or the compiler’s) for
distinguishing pointer 0’s from integer 0’s.
NULL should _not_ be used when another kind of 0 is required, even
though it might work, because doing so sends the wrong stylistic
message. (ANSI allows the #definition of NULL to be (void *)0,
which will not work in non-pointer contexts.) In particular, do
not use NULL when the ASCII null character (NUL) is desired.
Provide your own definition
#define NUL '\0'
if you must.