
The C Preprocessor vs D

Back when C was invented, compiler technology was primitive. Installing a text macro preprocessor onto the front end was a straightforward and easy way to add many powerful features. The increasing size and complexity of programs have shown that these features come with many inherent problems. D doesn't have a preprocessor, but it provides more scalable means to solve the same problems.


Header Files

The C Preprocessor Way

C and C++ rely heavily on textual inclusion of header files. This frequently results in the compiler having to recompile tens of thousands of lines of code over and over again for every source file, an obvious source of slow compile times. What header files are normally used for is more appropriately done using a symbolic, rather than textual, insertion. This is done with the import statement. Symbolic inclusion means the compiler just loads an already compiled symbol table. Macro "wrappers" to prevent multiple #inclusion, funky #pragma once syntax, and incomprehensible fragile syntax for precompiled headers are simply unnecessary and irrelevant to D.

#include <stdio.h>

The D Way

D uses symbolic imports:

import core.stdc.stdio;

#pragma once

The C Preprocessor Way

C header files frequently need to be protected against being #include'd multiple times. To do it, a header file will contain the line:

#pragma once

or the more portable:

#ifndef __STDIO_INCLUDE
#define __STDIO_INCLUDE
... header file contents
#endif

The D Way

Completely unnecessary, since D imports modules symbolically; a module only gets imported once no matter how many times the import declaration appears.
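
For example, repeating an import is harmless, and the imported module needs no guard of any kind (a minimal sketch; the module names mylib and app are hypothetical):

// mylib.d -- no include guard is needed
module mylib;
int answer = 42;

// app.d -- importing the same module twice is harmless
module app;
import mylib;
import mylib;    // mylib's symbol table is loaded only once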


#pragma pack

The C Preprocessor Way

This is used in C to adjust the alignment for structs.

The D Way

For D classes, there is no need to adjust the alignment (in fact, the compiler is free to rearrange the data fields to get the optimum layout, much as the compiler will rearrange local variables on the stack frame). For D structs that get mapped onto externally defined data structures, there is a need, and it is handled with:

struct Foo
{
	align (4):	// use 4 byte alignment
	...
}
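
For example, to mirror a C struct declared under #pragma pack(1), the fields can be given 1-byte alignment (a sketch; the struct and field names are illustrative):

struct PackedHeader
{
	align (1):	// pack fields with 1-byte alignment, like #pragma pack(1)
	ubyte tag;
	uint  length;
}

static assert(PackedHeader.sizeof == 5);	// no padding between the fields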

Macros

Preprocessor macros add powerful features and flexibility to C. But they have downsides: macros have no concept of scope, are invisible to the debugger and to tools that parse source code, and their purely textual expansion invites subtle bugs.

Here's an enumeration of the common uses for macros, and the corresponding feature in D:

  1. Defining literal constants:

    The C Preprocessor Way

    #define VALUE	5
    

    The D Way

    enum int VALUE = 5;
    
  2. Creating a list of values or flags:

    The C Preprocessor Way

    int flags;
    #define FLAG_X	0x1
    #define FLAG_Y	0x2
    #define FLAG_Z	0x4
    ...
    flags |= FLAG_X;
    

    The D Way

    enum FLAGS { X = 0x1, Y = 0x2, Z = 0x4 }
    FLAGS flags;
    ...
    flags |= FLAGS.X;
    
  3. Distinguishing between ASCII chars and wchar chars:

    The C Preprocessor Way

    #if UNICODE
        #define dchar	wchar_t
        #define TEXT(s)	L##s
    #else
        #define dchar	char
        #define TEXT(s)	s
    #endif
    
    ...
    dchar h[] = TEXT("hello");
    

    The D Way

    dstring h = "hello";
    

    The conversion of the string constant to the right encoding happens at compile time; no macro is needed.

  4. Supporting legacy compilers:

    The C Preprocessor Way

    #if PROTOTYPES
    #define P(p)	p
    #else
    #define P(p)	()
    #endif
    int func P((int x, int y));
    

    The D Way

    Because the D compiler front end is open source, the problem of supporting legacy compiler syntax largely goes away.
  5. Type aliasing:

    The C Preprocessor Way

    #define INT 	int
    

    The D Way

    alias INT = int;
    
  6. Using one header file for both declaration and definition:

    The C Preprocessor Way

    #define EXTERN extern
    #include "declarations.h"
    #undef EXTERN
    #define EXTERN
    #include "declarations.h"
    
    In declarations.h:
    EXTERN int foo;
    

    The D Way

    The declaration and the definition are the same, so there is no need to muck with the storage class to generate both a declaration and a definition from the same source.
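    A minimal sketch (the module names config and app are illustrative, not part of the original article):
    // config.d -- the definition also serves as the declaration
    module config;
    int foo;

    // app.d -- any importer sees the declaration automatically
    module app;
    import config;
    void setFoo() { config.foo = 3; }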
  7. Lightweight inline functions:

    The C Preprocessor Way

    #define X(i)	((i) = (i) / 3)
    

    The D Way

    int X(ref int i) { return i = i / 3; }
    
    The compiler optimizer will inline it; no efficiency is lost.
  8. Assert function file and line number information:

    The C Preprocessor Way

    #define assert(e)	((e) || _assert(__LINE__, __FILE__))
    

    The D Way

    assert() is a built-in expression primitive. Giving the compiler such knowledge of assert() also enables the optimizer to know, for example, that the _assert() function never returns.
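    A minimal sketch of the built-in form (the divide function is illustrative):
    int divide(int a, int b)
    {
        assert(b != 0, "division by zero");  // file and line are reported automatically on failure
        return a / b;
    }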
  9. Setting function calling conventions:

    The C Preprocessor Way

    #ifndef _CRTAPI1
    #define _CRTAPI1 __cdecl
    #endif
    #ifndef _CRTAPI2
    #define _CRTAPI2 __cdecl
    #endif
    
    int _CRTAPI2 func();
    

    The D Way

    Calling conventions can be specified in blocks, so there's no need to change it for every function:
    extern (Windows)
    {
        int onefunc();
        int anotherfunc();
    }
    
  10. Hiding __near or __far pointer weirdness:

    The C Preprocessor Way

    #define LPSTR	char FAR *
    

    The D Way

    D doesn't support 16-bit code, mixed pointer sizes, or different kinds of pointers, so the problem is simply irrelevant.
  11. Simple generic programming:

    The C Preprocessor Way

    Selecting which function to use based on text substitution:
    #ifdef UNICODE
    int getValueW(wchar_t *p);
    #define getValue getValueW
    #else
    int getValueA(char *p);
    #define getValue getValueA
    #endif
    

    The D Way

    D enables declarations of symbols that are aliases of other symbols:
    version (UNICODE)
    {
        int getValueW(wchar[] p);
        alias getValue = getValueW;
    }
    else
    {
        int getValueA(char[] p);
        alias getValue = getValueA;
    }
    

Conditional Compilation

The C Preprocessor Way

Conditional compilation is a powerful feature of the C preprocessor, but it has its downsides: the directives quickly become entangled with the code they control, making the source hard to read, and code that is #ifdef'd out is never checked by the compiler, so errors in it go unnoticed.

The D Way

D supports conditional compilation through several mechanisms:

  1. Separating version specific functionality into separate modules.
  2. The debug statement for enabling/disabling debug harnesses, extra printing, etc.
  3. The version statement for dealing with multiple versions of the program generated from a single set of sources.
  4. The if (0) statement.
  5. The /+ +/ nesting comment can be used to comment out blocks of code.
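
A minimal sketch of the debug and version statements (the version identifier FeatureX is illustrative):

debug import std.stdio;    // compiled only when the -debug switch is passed

void process()
{
    debug writeln("entering process()");    // extra printing in debug builds only

    version (FeatureX)
    {
        // compiled only when -version=FeatureX is passed
    }
    else
    {
        // fallback for all other builds
    }
}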

Code Factoring

The C Preprocessor Way

It's common in a function to have a repetitive sequence of code to be executed in multiple places. Performance considerations preclude factoring it out into a separate function, so it is implemented as a macro. For example, consider this fragment from a byte code interpreter:

unsigned char *ip;	// byte code instruction pointer
int *stack;
int spi;		// stack pointer
...
#define pop()		(stack[--spi])
#define push(i)		(stack[spi++] = (i))
while (1)
{
    switch (*ip++)
    {
	case ADD:
	    op1 = pop();
	    op2 = pop();
	    result = op1 + op2;
	    push(result);
	    break;

	case SUB:
	...
    }
}

This suffers from numerous problems:

  1. The macros must evaluate to expressions and cannot declare any variables. Consider the difficulty of extending them to check for stack overflow/underflow.
  2. The macros exist outside of the semantic symbol table, so remain in scope even outside of the function they are declared in.
  3. Parameters to macros are passed textually, not by value, meaning that the macro implementation needs to be careful to not use the parameter more than once, and must protect it with ().
  4. Macros are invisible to the debugger, which sees only the expanded expressions.

The D Way

D neatly addresses this with nested functions:

ubyte* ip;		// byte code instruction pointer
int[] stack;		// operand stack
int spi;		// stack pointer
...

int pop()        { return stack[--spi]; }
void push(int i) { stack[spi++] = i; }

while (1)
{
    switch (*ip++)
    {
	case ADD:
	    op1 = pop();
	    op2 = pop();
	    push(op1 + op2);
	    break;

	case SUB:
	...
    }
}

The problems addressed are:

  1. The nested functions have the full expressive power of D functions available. The array accesses are already bounds checked (adjustable by a compile-time switch).
  2. Nested function names are scoped just like any other name.
  3. Parameters are passed by value, so there is no need to worry about side effects in the parameter expressions.
  4. Nested functions are visible to the debugger.

Additionally, nested functions can be inlined by the implementation resulting in the same high performance that the C macro version exhibits.


#error and Static Asserts

Static asserts are user defined checks made at compile time; if the check fails, the compiler issues an error and the compilation fails.

The C Preprocessor Way

The first way is to use the #error preprocessing directive:

#if FOO || BAR
    ... code to compile ...
#else
#error "there must be either FOO or BAR"
#endif

This has the limitations inherent in preprocessor expressions (i.e. integer constant expressions only, no casts, no sizeof, no symbolic constants, etc.).

These problems can be circumvented to some extent by defining a static_assert macro (thanks to M. Wilson):

#define static_assert(_x) do { typedef int ai[(_x) ? 1 : 0]; } while(0)

and using it like:

void foo(T t)
{
    static_assert(sizeof(T) < 4);
    ...
}

This works by causing a compile time semantic error if the condition evaluates to false. The limitations of this technique are a sometimes very confusing error message from the compiler, along with an inability to use a static_assert outside of a function body.

The D Way

D has the static assert, which can be used anywhere a declaration or a statement can be used. For example:

version (FOO)
{
    class Bar
    {
	const int x = 5;
	static assert(Bar.x == 5 || Bar.x == 6);

	void foo(T t)
	{
	    static assert(T.sizeof < 4);
	    ...
	}
    }
}
else version (BAR)
{
    ...
}
else
{
    static assert(0);	// unsupported version
}

Template Mixins

D template mixins superficially look just like using C's preprocessor to insert blocks of code and parse them in the scope where they are instantiated. But the advantages of mixins over macros are:

  1. Mixins substitute in parsed declaration trees that pass muster with the language syntax, macros substitute in arbitrary preprocessor tokens that have no organization.
  2. Mixins are in the same language. Macros are a separate and distinct language layered on top of C++, with its own expression rules, its own types, its distinct symbol table, its own scoping rules, etc.
  3. Mixins are selected based on partial specialization rules, macros have no overloading.
  4. Mixins create a scope, macros do not.
  5. Mixins are compatible with syntax parsing tools, macros are not.
  6. Mixin semantic information and symbol tables are passed through to the debugger, macros are lost in translation.
  7. Mixins have override conflict resolution rules, macros just collide.
  8. Mixins automatically create unique identifiers as required using a standard algorithm, macros have to do it manually with kludgy token pasting.
  9. Mixin value arguments with side effects are evaluated once, macro value arguments get evaluated each time they are used in the expansion (leading to weird bugs).
  10. Mixin argument replacements don't need to be ‘protected’ with parentheses to avoid operator precedence regrouping.
  11. Mixins can be written as normal D code of arbitrary length; multiline macros have to be backslash line-spliced, can't use // end-of-line comments, etc.
  12. Mixins can define other mixins. Macros cannot create other macros.
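
As a rough illustration of the difference, here is a minimal template mixin (the names Counter and Widget are purely illustrative):

// a reusable block of declarations, parsed and type checked as ordinary D code
mixin template Counter(T)
{
    T count;
    void increment() { ++count; }
}

struct Widget
{
    mixin Counter!int;    // instantiated in Widget's scope
}

void main()
{
    Widget w;
    w.increment();
    assert(w.count == 1);
}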