On Tue, 21 Jun 2005, Kevin Burton wrote:

> >Out of curiosity, how many queries are we talking and what sort of
> >complexity level? I've had replication setups do 600 (simple) updates/s
> >and slaving was current most of the time and never more than 1 second
> >behind.
> >
> >
> Mostly INSERTS.. We're running about 300qps at full speed and doing
> selects on slaves will cause it to fall behind.
>
> Reducing the connection count allows it to NOT fall behind, but then I
> lose throughput.  I'm not happy with either situation.

Sounds like you may be due for a system redesign? :)

Depending on your setup, splitting your data into "levels" can be a
lifesaver. By "levels" I mean keeping as little data as possible in the
database you query often, and archiving/moving older data onto another
database. Offloading reads onto something like memcached can also give
you great speed increases, but it comes with some gotchas to be aware
of. Just my two cents; I obviously don't know anything about the setup
you are working with. :)


Atle
-
Flying Crocodile Inc, Unix Systems Administrator


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]