Hello,

I bought a Palm Tungsten E two days ago, one of the reasons being to have 
Wikipedia with me all the time (I bought a 512 MB flash card with it).

Today I tried to pluck the terodump static tree (164 MB). As expected (given 
the previous results seen on the list), the Python parser was killed by the 
kernel at 124,000 pages because of excessive memory consumption. I'm motivated 
to try further ideas, for instance a direct conversion from the Wikipedia wiki 
format to Plucker's internal one. But I have some questions:
- The file format reference from CVS says that the record id is only 2 bytes. 
Should I understand that there can only be 65536 pages in a Plucker file? If 
so, do you plan to overcome this limitation?
- Does anyone plan to improve the parser so it can handle big documents?
- Is there a way to search only within a selected set of pages? With 
Wikipedia, neither searching the whole 164 MB nor a single page with 250,000 
links seems realistic to me.
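To make the record-id concern concrete, here is a minimal sketch (assuming the record id is stored as an unsigned 16-bit integer, as a 2-byte field in the format reference would suggest; the byte order is my assumption, not taken from the spec):

```python
import struct

# A 2-byte unsigned id can address at most 2**16 distinct records.
MAX_RECORDS = 2 ** 16  # 65536

def pack_record_id(uid: int) -> bytes:
    """Pack a record id into a 2-byte big-endian field.

    Raises ValueError for ids that do not fit -- which is exactly
    what would happen past the 65536th page of a Wikipedia dump.
    """
    if not 0 <= uid < MAX_RECORDS:
        raise ValueError(f"record id {uid} does not fit in 2 bytes")
    return struct.pack(">H", uid)

print(MAX_RECORDS)            # 65536
print(pack_record_id(65535))  # b'\xff\xff'
```

So unless the id width is enlarged (or pages are split across several Plucker files), 65536 is a hard ceiling on the number of addressable records.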

Thanks,

Steph
_______________________________________________
plucker-list mailing list
[EMAIL PROTECTED]
http://lists.rubberchicken.org/mailman/listinfo/plucker-list
