On 11/05/2012 13:50, Ary Manzana wrote:
On 5/11/12 4:22 PM, Roman D. Boiko wrote:
What about line and column information?
The indices of the first code unit of each line are stored inside the lexer,
and a function computes a Location (line number, column number, file
specification) for any index. This way the size of a Token instance is reduced
to the minimum. It is assumed that a Location can be computed on demand and is
not needed frequently, so the column is calculated by walking backwards to the
previous end of line, etc. It will be possible to calculate locations either
taking special token sequences (e.g., #line 3 "ab/c.d") into account, or
discarding them.
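A minimal sketch of that scheme, assuming a table of line-start indices is kept by the lexer (the names `LineTable`, `lineStarts`, and `locate` are illustrative, not from any actual implementation; a binary search over the stored line starts is used here in place of the reverse walk):

```d
struct Location
{
    string file;
    uint line;    // 1-based line number
    uint column;  // 1-based column, in code units
}

struct LineTable
{
    string file;
    size_t[] lineStarts; // index of the first code unit of each line

    // Compute a Location on demand for any code-unit index:
    // binary-search the line containing `index`, then derive the
    // column from the line's start. No per-token storage is needed.
    Location locate(size_t index) const
    {
        size_t lo = 0, hi = lineStarts.length;
        while (hi - lo > 1)
        {
            immutable mid = lo + (hi - lo) / 2;
            if (lineStarts[mid] <= index)
                lo = mid;
            else
                hi = mid;
        }
        return Location(file,
                        cast(uint)(lo + 1),
                        cast(uint)(index - lineStarts[lo] + 1));
    }
}
```

Handling #line directives would mean keeping a second, parallel table of overrides and consulting it (or not) depending on which of the two views the caller asked for.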

But then how do you compute line numbers efficiently (if a reverse walk is
efficient at all)?

Usually tokens are used and then discarded. I mean, somebody using the lexer
asks for tokens, processes them (for example, to highlight code or to build an
AST) and then discards them. So you can reuse the same Token instance. If you
want to peek at the next token, or keep a buffer of tokens, you can use a
freelist ( http://dlang.org/memory.html#freelists , one of the many nice
things I learned by reading DMD's source code ).
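For reference, the freelist idiom from the linked page looks roughly like this (a sketch, with the token payload elided):

```d
class Token
{
    // ... token payload (type, text, line, column, etc.) ...

    private Token next;           // freelist link, reused between allocations
    private static Token freelist; // head of the freelist

    // Pop a recycled Token if one is available, else allocate a new one.
    static Token allocate()
    {
        if (Token t = freelist)
        {
            freelist = t.next;
            return t;
        }
        return new Token;
    }

    // Push a discarded Token back onto the freelist for reuse.
    static void deallocate(Token t)
    {
        t.next = freelist;
        freelist = t;
    }
}
```

The point of the pattern is that tokens churn quickly, so recycling instances avoids putting pressure on the GC.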

So adding line and column information does not waste a lot of memory: just 8
more bytes for each token in the freelist.

SDC uses a struct, Location, to store such data.

Every token and AST element has a Location data member.

As it is a value type, there is no need for a freelist or anything of that
sort. When the token is discarded, the location is discarded with it. The same
goes for AST nodes.
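In other words, the layout is roughly this (field names are illustrative; SDC's actual definitions may differ):

```d
enum TokenType { Identifier, IntegerLiteral, /* ... */ }

// Plain value type: copied into each token, no heap allocation,
// no freelist; it lives and dies with the token that holds it.
struct Location
{
    string filename;
    uint line;
    uint column;
}

struct Token
{
    TokenType type;
    string value;
    Location location; // discarded together with the token
}
```

The trade-off against the index-only approach above is a larger Token, in exchange for never having to recompute line and column.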
