At 12:00 AM 01/27/02 +0800, Stas Bekman wrote:
>so we have about 3MB of source code in 134 files (and it will more likely
>be 6MB, with 200+ files, once the 2.0 docs are done). Do you think it's
>possible to grep through that in a reasonable response time? Remember that
>there will be a lot of I/O from opening and closing so many files.
It's not like mod_perl is a high-volume site. And it's running on a much
faster machine than mine:
~/modperl-docs > find src -name '*.pod' | wc -l
105
~/modperl-docs > time find src -name '*.pod' | xargs fgrep '$|' | wc -l
23
real 0m0.033s
user 0m0.030s
sys 0m0.010s
That seems reasonable enough, even if it were ten times slower.
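
FWIW, the same literal search could be done straight from Perl (say, inside
a handler) instead of shelling out to find/fgrep. Rough, untested sketch,
assuming the src/*.pod layout shown in the transcript above:

#!/usr/bin/perl
# Untested sketch: walk src/, look for a literal string in *.pod files.
use strict;
use warnings;
use File::Find;

my $term = '$|';    # literal string to search for
my @matches;

find( sub {
    return unless /\.pod$/;
    open my $fh, '<', $_ or return;
    while ( my $line = <$fh> ) {
        push @matches, "$File::Find::name: $line"
            if index( $line, $term ) >= 0;
    }
    close $fh;
}, 'src' );

print scalar(@matches), " matching lines\n";
print for @matches;

Using index() rather than a regex means characters like $ and | don't need
escaping, which is exactly the sort of token we'd want to search for.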
>> All the reverse-indexing engines parse at indexing time, so it will always
>> be an issue of defining what makes up a word.
>>
>> Let me ask Avi Rappoport if there's something good for searching code.
>
>I think that Randy's setup was quite satisfactory, but nextrieve was even
>better. What do you think about nextrieve?
I don't know much about it. It's not open source, and it's not free. I
really doubt it integrates with Template Toolkit.
Could we feed the POD source into Parse::RecDescent and get it to tokenize
the Perl code? That would be more fun.
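
Something along these lines, maybe. Untested sketch; the grammar and rule
names are just made up for illustration, nothing mod_perl-specific:

#!/usr/bin/perl
# Untested sketch: a tiny Parse::RecDescent grammar that keeps Perl-ish
# tokens (e.g. $|, @args, %opts) intact instead of splitting on punctuation.
use strict;
use warnings;
use Parse::RecDescent;

my $grammar = q{
    tokens   : token(s)                 { $item[1] }
    token    : variable | word
    variable : /\$\||[\$\@\%]\w+/
    word     : /\S+/
};

my $parser = Parse::RecDescent->new($grammar)
    or die "Bad grammar\n";

my $text   = 'local $| = 1;';
my $tokens = $parser->tokens($text);
print "$_\n" for @$tokens;

The point would be that the indexer could then treat things like $| as
single searchable tokens, which gets back to the "what makes up a word"
problem above.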
--
Bill Moseley
mailto:[EMAIL PROTECTED]
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]