Greetings Dmitri!

Very curious to know:

1)  How many URLs are in your 46 gigs of data (13G binary plus 33G MySQL)?

2)  How quickly do you display results?

3)  How long does an index run take?

4)  How is your relevance?

5)  Have you noticed a degradation in performance with more URLs?

6)  Do you break the database up into categories?  I'm not sure that one
huge 46 gig database is the way to go!  All it would take is one core dump
and your data is lost.  To experience this first hand, simply reboot your
PC while index is running.  Every time this has happened to me, index would
not work again: I could still search the existing data, but index always
returned errors.  Of course, this could be just me.

Many more questions to ask.

Regards,

John


----- Original Message -----
From: "Dmitri Kovalsky" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 14, 2002 7:57 AM
Subject: [aseek-users] Memory utilization and scalability


> I have a 13G binary database and a 33G MySQL database, and I have two
> problems:
>
> - index -D always fails with "MySQL server has gone away", even though I
> have MySQL configured from my-huge.cnf, with hundreds of megabytes for
> primary keys and buffers.
>
> - searchd always consumes 1.5G of RAM.
>
> My machine is a dual 1 GHz Pentium III with 2 GB of RAM running Red Hat
> 7.2 with the latest updates.
> So what I'd like to know is, first, whether I have something misconfigured
> or just need more horsepower, and second, whether searchd's memory use is
> normal or I should buy more RAM.
>
> Also, what is the best way to use more than one machine for aspseek?
>
> Best Regards,
> Dmitri
>
>