Can I borrow a copy of that script to mess with?

--Wes


Stéphane Magnenat said:
> Hello,
>
>> In defense of Plucker, I will point out that
>> commercial programs suffer from this as well.
>> TomeRaider, which has versions for Windows, Palm OS,
>> Pocket PC, Psion, Symbian, Nokia and Sony Ericsson,
>> has the full (~150mb) WikiPedia split into 9 files for
>> their Palm version.
>
> Yesterday I modified the wiki2static.pl and wiki2staticz.pl scripts to
> create a static Wikipedia tree suitable for Plucker. It removes the
> images and all intra-wikipedia links, then creates one index per letter
> (_ 0..9 a..z), and finally calls Spider on that index with a recursion
> level of 2.
>
> Unfortunately, Spider dies on my machine with 512 MB of RAM on letter a
> (fewer than 20'000 articles) because it exceeds memory (I was running
> in init 3 without anything else). So I need to try on a computer with
> 1 GB of RAM, but the Plucker creation tools definitely need improvements
> if one wants to pluck things bigger than small web sites. The Perl
> script that makes the static version of Wikipedia runs fine while
> parsing the whole English Wikipedia (about 300'000 articles).
>
> Is anyone interested in Wikipedia on Plucker and ready to give
> suggestions?
>
> Stephane
> _______________________________________________
> plucker-list mailing list
> [EMAIL PROTECTED]
> http://lists.rubberchicken.org/mailman/listinfo/plucker-list
>
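The per-letter indexing step described above could look roughly like this. This is a sketch in Python rather than the actual Perl of wiki2static.pl, and the directory layout and file names here are assumptions for illustration, not the real scripts' output:

```python
# Sketch: group static article files into one index page per leading
# character (_ 0..9 a..z), so a spider with recursion level 2 can reach
# every article from a top page that links to the per-letter indexes.
import os
import string
from collections import defaultdict

def build_indexes(article_dir, index_dir):
    buckets = defaultdict(list)  # leading character -> article filenames
    for name in sorted(os.listdir(article_dir)):
        first = name[0].lower()
        # Anything that is not a-z or 0-9 falls into the "_" bucket.
        key = first if first in string.ascii_lowercase + string.digits else "_"
        buckets[key].append(name)
    os.makedirs(index_dir, exist_ok=True)
    for key, names in buckets.items():
        links = "\n".join(
            '<a href="../articles/%s">%s</a><br>' % (n, n) for n in names
        )
        path = os.path.join(index_dir, "index_%s.html" % key)
        with open(path, "w") as f:
            f.write("<html><body>\n%s\n</body></html>\n" % links)
```

With at most 38 index pages, the spider only needs depth 2 (index page, then article) from a home document linking to them; the memory problem presumably comes from Spider holding all fetched pages at once, not from the index structure itself.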


-- 
-- Wesley Mason
-- TDPSoft.net
