On 09/11/2013 05:01 PM, Dicebot wrote:
std.d.lexer is a standard module for lexing D code, written by Brian Schott

---- Input ----

Code: https://github.com/Hackerpilot/phobos/tree/master/std/d

Documentation:
http://hackerpilot.github.io/experimental/std_lexer/phobos/lexer.html

...

(Commenting on what's visible in the documentation only for now.)

auto config = ...
... .byToken(config) ...

Seems to be a natural candidate for manual partial specialization.

enum config = ...
... .byToken!config() ...
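
A minimal sketch of the difference (hypothetical names, not the module's actual API): with a run-time config every flag is a run-time branch, while a compile-time config lets the lexer specialize itself with static if.

struct Config { bool skipComments; }

// run-time configuration: the flag is tested while lexing
string lexRuntime(string src, Config cfg)
{
    if (cfg.skipComments) { /* skip comments */ }
    return src;
}

// compile-time configuration: the flag is resolved at compile time
string lexCompiletime(Config cfg)(string src)
{
    static if (cfg.skipComments) { /* comment skipping compiled in */ }
    return src;
}

unittest
{
    enum cfg = Config(true);               // known at compile time
    auto a = lexRuntime("int x;", cfg);
    auto b = lexCompiletime!cfg("int x;");
    assert(a == b);
}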

uint line; ushort column; // is there overflow checking?
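
(To illustrate the concern: without a check, a ushort column silently wraps once a line exceeds 65535 characters.)

unittest
{
    ushort column = ushort.max;
    ++column;            // no overflow check
    assert(column == 0); // column information is now wrong
}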

"Check to see if the token is of the same type and has the same string representation as the given token."

Tokens with the same string representation are always of the same type, so this seems redundant.

Furthermore, I'd expect (!a.opCmp(b)) === (a == b).
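
I.e. the usual consistency requirement between the two overloads (sketch, for any type T defining both):

void checkConsistency(T)(T a, T b)
{
    assert((a.opCmp(b) == 0) == (a == b));
}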

Why provide the operator overloads at all? They don't implement essential or natural functionality.


"includeSpecialTokens". It's not clear what this flag does.


"If the input range supports slicing, the caching layer aliases itself away and the lexing process is much more efficient."

It might be more sensible to require the user to manually wrap his range.
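
E.g. something along these lines (a sketch; the byToken call is left commented out since it depends on the final API), buffering the input into an array so the slicing path is taken:

import std.algorithm : joiner;
import std.array : array;
import std.stdio : File;

void lexFile(string path)
{
    auto bytes = File(path).byChunk(4096).joiner.array; // ubyte[], supports slicing
    // auto tokens = bytes.byToken(config); // no caching layer needed
}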

"pure nothrow bool isOperator(const TokenType t);
pure nothrow bool isOperator(ref const Token t);
pure nothrow bool isKeyword(const TokenType t);
pure nothrow bool isKeyword(ref const Token t);
..."

IMO we should get rid of these.



TokenType naming seems inconsistent. E.g.: & is amp, = is assign, == is equal, but &= is bitAndEqual and && is logicAnd.

IMO better: & is and, = is assign, &= is andAssign and && is andAnd.

Of course, it might be best to use a template instead. Tok!"&", Tok!"&=" and Tok!"&&".
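
A sketch of such a template (hypothetical member names; a real implementation would cover all operators):

enum TokenType { amp, ampAssign, ampAmp, assign, equal /* ... */ }

template Tok(string s)
{
    static if (s == "&")       enum Tok = TokenType.amp;
    else static if (s == "&=") enum Tok = TokenType.ampAssign;
    else static if (s == "&&") enum Tok = TokenType.ampAmp;
    else static if (s == "=")  enum Tok = TokenType.assign;
    else static if (s == "==") enum Tok = TokenType.equal;
    else static assert(false, "unknown token: " ~ s);
}

static assert(Tok!"&&" == TokenType.ampAmp);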
