The code to cache bound and unbound symbol tables of the standard
library appears to be working and will soon be committed. On my Mac
the calculation being cached takes about 1/2 a second (excluding parse
times), so roughly 1/2 a second should be saved on every compilation.
This should speed up testing. It isn't as much as I'd hoped, but then
I hadn't really checked the stats to see how long binding was taking :)
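
Roughly, the mechanism is just marshalling the tables to disk and
reloading them on the next run instead of rebinding. Here's a minimal
sketch of the idea in OCaml; the record type, field names and cache
path are illustrative only, not what flxg actually uses:

    (* Sketch only: cache the bound/unbound symbol tables by
       marshalling them to disk, and reload them on later runs.
       The type [tables] and the file name are made up. *)
    type tables = { bound : string list; unbound : string list }

    let libtab_file = "lib/std.libtab"

    let save_tables (t : tables) : unit =
      let oc = open_out_bin libtab_file in
      Marshal.to_channel oc t [];
      close_out oc

    let load_tables () : tables option =
      if Sys.file_exists libtab_file then begin
        let ic = open_in_bin libtab_file in
        let t : tables = Marshal.from_channel ic in
        close_in ic;
        Some t
      end else None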

The total time saved by caching, however, is a little bigger: 42 seconds.
That's how slow the parser is :) [That figure includes desugaring and
macro processing, but honestly, those are fast compared to the parsing itself.]

WARNING: there is no dependency checking. If the library is changed,
the changes will not be noticed, and any diagnostics will point to the
wrong lines of the library files. To work around this, do one of the following:

* blow away lib/std.libtab, or
* use the --force option on compilation, or
* touch lib/std.flx, or
* blow away your --cache_dir if you're using one


Yeah, I plan to fix this :) Basically it just means comparing the
timestamp of the libtab cache with the latest timestamp of the head
library file and all the include files. Unfortunately, this can only
be done AFTER loading the cache, since the cache itself contains the
names of the include files we need to check :) The check will be a bit
slow because it has to hit the filesystem.
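
Something along these lines should do (a sketch only; the helper
names are made up and error handling is omitted):

    (* Sketch: the cache is stale if it's missing, or if any source
       file it depends on is newer than the cache file itself. *)
    let mtime f = (Unix.stat f).Unix.st_mtime

    let cache_is_stale ~cache_file ~included_files =
      if not (Sys.file_exists cache_file) then true
      else begin
        let cache_time = mtime cache_file in
        List.exists
          (fun f -> not (Sys.file_exists f) || mtime f > cache_time)
          included_files
      end

The same check naturally extends to the other dependencies mentioned below.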

Technically, this caching AND the parse caching should also be
sensitive to changes in a few other things, including the grammar
and macro head files, and of course the flxg executable itself.
[How do I get the timestamp of the running executable in a way
that works on Windows? Oh, and on all the Unix platforms?
There is one reliable way: get flx to pass it into flxg ... :)]
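
In OCaml, Sys.executable_name plus Unix.stat might already be good
enough, though I haven't verified how reliable Sys.executable_name is
on every platform, which is why passing the path in from flx is the
safe option. Sketch:

    (* Sketch: use the mtime of the running flxg binary as another
       input to the staleness check. Sys.executable_name may not be
       reliable everywhere; an explicit path from flx would be safer. *)
    let flxg_mtime () =
      (Unix.stat Sys.executable_name).Unix.st_mtime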

The other thing is the library path, as that affects which files
are used for an include.
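
One cheap way to handle that would be to store a digest of the search
path in the cache and reject the cache on reload if it has changed.
For example (illustrative only):

    (* Sketch: digest the include path so a changed search path
       invalidates the cache. *)
    let path_digest include_dirs =
      Digest.to_hex (Digest.string (String.concat "\x00" include_dirs))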

The cached library code isn't optimised at all. It probably could
be, if I temporarily made the WHOLE set of entries in the table
roots (so nothing got lost). It's not clear that's worthwhile.

At this time, parser syntax isn't cached. It used to be!
Caching the grammar would save a fixed overhead on every
parse, including that of the "main program". However, it isn't clear
this is worthwhile, because we would only be caching the parsed
grammar, not the actual Dypgen state machine, and building that
probably takes a lot longer than parsing the grammar: the
initial grammar used to parse the grammar is hard coded
and reasonably small, so even if Dypgen has to build that
automaton, it probably doesn't take long.

No, there is little chance of rewriting Dypgen in Felix :)
The rest of the compiler, perhaps ...

It's also not clear how slow gcc is at compiling the generated
output. At present even the flxg stats are screwed up
(by ME! I have to go back and make sure the timers are
all in the right places and the counters are being properly
updated, which is slightly non-trivial with the new algorithm).

--
john skaller
skal...@users.sourceforge.net




