
dmd.lexer

Implements the lexical analyzer, which converts source code into lexical tokens.

Specification: Lexical (https://dlang.org/spec/lex.html)

Authors:

Source: lexer.d

class Lexer;
pure nothrow this(const(char)* filename, const(char)* base, size_t begoffset, size_t endoffset, bool doDocComment, bool commentToken);
Creates a Lexer for the source code base[begoffset..endoffset+1]. The last character, base[endoffset], must be null (0) or EOF (0x1A).
Parameters:
const(char)* filename used for error messages
const(char)* base source code, must be terminated by a null (0) or EOF (0x1A) character
size_t begoffset starting offset into base[]
size_t endoffset the last offset to read into base[]
bool doDocComment whether to gather documentation comments
bool commentToken whether comments are returned as TOK.comment tokens instead of being skipped
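A minimal usage sketch, assuming the DMD front-end modules (dmd.lexer, dmd.tokens) are on the import path; the function name lexAll and the input handling are illustrative, not part of the API. Note the required null terminator on the source buffer:

```d
import dmd.lexer;
import dmd.tokens;

// Hypothetical helper: scan every token in `src` until end of file.
void lexAll(const(char)[] src)
{
    // The buffer passed to Lexer must end in null (0) or EOF (0x1A).
    auto text = src ~ '\0';
    scope lex = new Lexer("example.d", text.ptr, 0, text.length - 1,
                          /*doDocComment*/ false, /*commentToken*/ false);
    Token tok;
    do
    {
        lex.scan(&tok);       // fill `tok` with the next token
        // inspect tok.value (a TOK) here
    } while (tok.value != TOK.endOfFile);
}
```

The constructor takes offsets into the buffer rather than a slice, so begoffset/endoffset let a caller lex a sub-range of a larger buffer without copying.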
pure nothrow @safe Token* allocateToken();
Returns:
a newly allocated Token.
final nothrow TOK peekNext();
Look ahead at next token's value.
final nothrow TOK peekNext2();
Look two tokens ahead and return that token's value.
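The peek functions support parser-style lookahead without consuming input. A short sketch, assuming lex is an already-constructed Lexer whose current token has been scanned:

```d
// Decide how to parse based on the upcoming token, without consuming it.
if (lex.peekNext() == TOK.leftParen)
{
    // the next token is '(' -- e.g. a call or declarator follows
}
else if (lex.peekNext2() == TOK.assign)
{
    // the token after next is '=' -- e.g. an initializer follows
}
```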
final nothrow void scan(Token* t);
Scan the next token from the source buffer and store it in *t.
final nothrow Token* peekPastParen(Token* tk);
Given tk positioned on an opening (, look ahead and return the token just past the matching closing ).
static pure nothrow const(char)* combineComments(const(char)[] c1, const(char)[] c2, bool newParagraph);
Combine two document comments into one, separated by an extra newline if newParagraph is true.
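A hedged sketch of combining doc comments; the comment strings are illustrative, and the exact formatting of the result (beyond the extra paragraph-separating newline) is not specified here:

```d
import dmd.lexer;

// Merge two documentation comments; `true` requests a paragraph break
// (an extra newline) between them in the combined text.
const(char)* merged = Lexer.combineComments("First summary line.",
                                            "Further details.",
                                            /*newParagraph*/ true);
```

Because the function is static and pure, it can be called without a Lexer instance, e.g. when a declaration accumulates doc comments from several source locations.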