dmd.lexer

Implements the lexical analyzer, which converts source code into lexical tokens.

Specification: Lexical

Authors: Walter Bright

Source: lexer.d

  • Declaration

    struct CompileEnv;

    Values to use for various magic identifiers

    • Declaration

      uint versionNumber;

      __VERSION__

    • Declaration

      const(char)[] date;

      __DATE__

    • Declaration

      const(char)[] time;

      __TIME__

    • Declaration

      const(char)[] vendor;

      __VENDOR__

    • Declaration

      const(char)[] timestamp;

      __TIMESTAMP__

    • Declaration

      bool previewIn;

      in means [ref] scope const and accepts rvalues

    • Declaration

      bool ddocOutput;

      collect embedded documentation comments

    • Declaration

      bool masm;

      use MASM inline asm syntax
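
    As a rough usage sketch (assuming CompileEnv is importable from dmd.lexer as documented here, and using illustrative values rather than real defaults), a CompileEnv can be filled in before constructing a Lexer:

      import dmd.lexer : CompileEnv;

      CompileEnv makeEnv()
      {
          CompileEnv env;
          env.versionNumber = 2108;                   // reported by __VERSION__
          env.date = "Jan  1 2024";                   // reported by __DATE__
          env.time = "00:00:00";                      // reported by __TIME__
          env.vendor = "Example D Compiler";          // reported by __VENDOR__
          env.timestamp = "Mon Jan  1 00:00:00 2024"; // reported by __TIMESTAMP__
          env.previewIn = false;  // keep the current `in` semantics
          env.ddocOutput = false; // do not collect Ddoc comments
          env.masm = false;       // DMD-style inline asm, not MASM
          return env;
      }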

  • Declaration

    class Lexer;

    • Declaration

      bool Ccompile;

      true if compiling ImportC

    • Declaration

      ubyte boolsize;

      size of a C _Bool, default 1

    • Declaration

      ubyte shortsize;

      size of a C short, default 2

    • Declaration

      ubyte intsize;

      size of a C int, default 4

    • Declaration

      ubyte longsize;

      size of C long, 4 or 8

    • Declaration

      ubyte long_longsize;

      size of a C long long, default 8

    • Declaration

      ubyte long_doublesize;

      size of C long double, 8 or D real.sizeof

    • Declaration

      ubyte wchar_tsize;

      size of C wchar_t, 2 or 4

    • Declaration

      ErrorSink eSink;

      send error messages through this interface

    • Declaration

      CompileEnv compileEnv;

      compile environment (version, vendor, date, time, etc.)

    • Declaration

      nothrow scope this(const(char)* filename, const(char)* base, size_t begoffset, size_t endoffset, bool doDocComment, bool commentToken, ErrorSink errorSink, const CompileEnv* compileEnv);

      Creates a Lexer for the source code base[begoffset..endoffset+1]. The last character, base[endoffset], must be null (0) or EOF (0x1A).

      Parameters

      const(char)* filename

      used for error messages

      const(char)* base

      source code, must be terminated by a null (0) or EOF (0x1A) character

      size_t begoffset

      starting offset into base[]

      size_t endoffset

      the last offset to read into base[]

      bool doDocComment

      handle documentation comments

      bool commentToken

      comments are returned as TOK.comment tokens

      ErrorSink errorSink

      where error messages go, must not be null

      CompileEnv* compileEnv

      version, vendor, date, time, etc.
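
      A minimal construction sketch, assuming ErrorSinkStderr from dmd.errorsink as the error sink (any ErrorSink implementation works) and passing null for compileEnv to use defaults:

        import dmd.errorsink : ErrorSinkStderr;
        import dmd.lexer : Lexer;

        void lexBuffer()
        {
            // The source buffer must end with 0 or 0x1A; endoffset indexes that terminator.
            static const(char)[] src = "int x = 3;\0";
            scope lexer = new Lexer("example.d", src.ptr, 0, src.length - 1,
                                    false,                  // doDocComment
                                    false,                  // commentToken
                                    new ErrorSinkStderr(),  // must not be null
                                    null);                  // compileEnv: defaults
        }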

    • Declaration

      nothrow this(const(char)* filename, const(char)* base, size_t begoffset, size_t endoffset, bool doDocComment, bool commentToken, bool whitespaceToken, ErrorSink errorSink, const CompileEnv* compileEnv = null);

      Alternative entry point for DMDLIB; adds a whitespaceToken parameter

    • Declaration

      nothrow scope @safe this(ErrorSink errorSink);

      Used by unit tests to create a mock Lexer

    • Declaration

      final nothrow void resetDefineLines(const(char)[] slice);

      Reset the lexer to lex #define lines

    • Declaration

      final nothrow void nextDefineLine();

      Set up for the next #define line. p should be at the start of the next line.

    • Declaration

      final const pure nothrow @nogc @property @safe bool empty();

      Range interface

    • Declaration

      pure nothrow @safe Token* allocateToken();

      Return Value

      a newly allocated Token.

    • Declaration

      final nothrow TOK peekNext();

      Look ahead at the next token's value.

    • Declaration

      final nothrow TOK peekNext2();

      Look two tokens ahead and return that token's value.
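
      A small lookahead sketch; TOK.assign (the token for =) is assumed from dmd.tokens:

        import dmd.lexer : Lexer;
        import dmd.tokens : TOK;

        // Peek one and two tokens ahead without consuming anything.
        bool assignWithinTwoTokens(Lexer lexer)
        {
            const TOK next  = lexer.peekNext();   // token after the current one
            const TOK next2 = lexer.peekNext2();  // token after that
            return next == TOK.assign || next2 == TOK.assign;
        }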

    • Declaration

      final nothrow void scan(Token* t);

      Scan the next token in the buffer into t.

      Parameters

      Token* t

      the Token to store the scanned result in
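
      A sketch of the basic scanning loop; Token, its value field, and TOK.endOfFile are assumed from dmd.tokens:

        import dmd.lexer : Lexer;
        import dmd.tokens : Token, TOK;

        void scanAll(Lexer lexer)
        {
            Token t;
            do
            {
                lexer.scan(&t);
                // t.value holds the TOK of the token just lexed
            } while (t.value != TOK.endOfFile);
        }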

    • Declaration

      final nothrow Token* peekPastParen(Token* tk);

      tk is on the opening (. Look ahead and return the token past the matching closing ).

    • Declaration

      final nothrow TOK hexStringConstant(Token* t);

      Lex hex strings: x"0A ae 34FE BD"

    • Declaration

      nothrow bool parseSpecialTokenSequence();

      Parse a special token sequence, such as a #line directive.

      Return Value

      true if the special token sequence was handled

    • Declaration

      final nothrow void poundLine(ref Token tok, bool linemarker);

      Parse a line/file preprocessor directive:

        #line linnum [filespec]

      Allow __LINE__ for linnum and __FILE__ for filespec. Also accept the linemarker format (see the examples after the parameter list):

        linnum [filespec] {flags}

      There can be zero or more flags, each one of the digits 1..4, and they must be in ascending order. The flags are ignored.

      Parameters

      Token tok

      the token we are on, which holds the linnum of the linemarker

      bool linemarker

      true if the linemarker format is used and the lexer is positioned on linnum
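
      For illustration, the two accepted input forms (the line numbers and file names here are made up):

        #line 42 "example.d"
        # 42 "example.c" 1 3

      The first is the #line directive form; the second is the linemarker form, whose trailing flags are ignored.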

    • Declaration

      final nothrow void skipToNextLine(OutBuffer* defines = null);

      Scan forward to the start of the next line.

      Parameters

      OutBuffer* defines

      send the characters to defines, if it is not null

    • Declaration

      static pure nothrow const(char)* combineComments(const(char)[] c1, const(char)[] c2, bool newParagraph);

      Combine two document comments into one, separated by an extra newline if newParagraph is true.
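
      A small usage sketch (the comment strings are illustrative):

        import dmd.lexer : Lexer;

        void mergeDocComments()
        {
            // With newParagraph = true, an extra newline separates the two
            // comments in the combined result.
            const(char)* merged = Lexer.combineComments("First paragraph.",
                                                        "Second paragraph.", true);
        }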