Try setting your Period higher and using the -n option to restrict the
number of pages it processes in a single invocation ...
for instance, set Period to 1 week and -n to 20k ...
that way each run only processes 20k expired pages, and those pages
won't expire again for another week ...
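As a rough sketch of what that looks like in practice (assuming the
classic indexer.conf syntax where Period is given in seconds -- check
your UdmSearch version's docs for the exact form):

```
# indexer.conf: treat documents as fresh for one week
# (604800 seconds = 7 days)
Period 604800

# then invoke the indexer capped at 20k pages per run,
# e.g. from a nightly cron job:
#   indexer -n 20000
```

With ~100K pages and a one-week Period, five or six capped runs like
this cycle through the whole site, instead of one marathon re-index.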
On Thu, 22 Jun 2000, J C Lawrence wrote:
>
> What are the ways to speed indexing?
>
> My site (<100K pages) is now taking the better part of a day to
> index, despite the fact that most content hasn't changed on each
> re-index.
>
> I've built the pthreads based indexer and have been running it with
> various numbers of threads ranging from 1 to 150 (dual PII-333,
> Linux and MySQL). While there were performance gains with the
> increased thread count, they were small (10%-ish, no matter the
> thread count). System load was not the gating factor.
>
> --
> J C Lawrence Home: [EMAIL PROTECTED]
> ----------(*) Other: [EMAIL PROTECTED]
> --=| A man is as sane as he is dangerous to his environment |=--
>
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: [EMAIL PROTECTED] secondary: scrappy@{freebsd|postgresql}.org
______________
If you want to unsubscribe send "unsubscribe udmsearch"
to [EMAIL PROTECTED]