Javier Diaz <[EMAIL PROTECTED]> wrote on 02/07/2004 10:28:32:
> We have some tables to record different data about the activity on our
> website. The number of INSERT, DELETE and UPDATE operations on these
> tables is huge (it could be more than 3000 a second). So far we don't
> have any speed problems; all these tables are optimised with the right
> indexes and everything is working OK. But now we are writing a Report
> Tool to get some stats and figures from these tables. We have been
> doing a few tests, and with any SELECT query taking more than one
> second, or a few simultaneous SELECTs, we have a real mess: lots of
> LOCKS. We definitely cannot afford to slow down the web site, and we
> have been thinking of a few possible solutions:
>
> 1- Create a duplicate once a day of each of the tables we need to
> query from the Report Tool. We can do this in the period of least
> activity on the site.
> This is the safest solution, because we will be running all the
> SELECTs against the duplicates and there is no risk of causing
> problems on the site, but we will have the inconvenience that we
> cannot get the latest figures, only the previous day's.
Sounds like a nasty kludge to me. I really wouldn't be keen on it at all.
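If you did go that route anyway, the nightly copy is only a couple of statements per table; a minimal sketch (the table name activity_log is just a placeholder, and this assumes MySQL 4.1+ for CREATE TABLE ... LIKE):

```sql
-- Run during the quiet period. CREATE TABLE ... LIKE copies the
-- column definitions and indexes of the source table (4.1+); on
-- older servers, CREATE TABLE ... SELECT works but drops indexes.
DROP TABLE IF EXISTS activity_log_report;
CREATE TABLE activity_log_report LIKE activity_log;
INSERT INTO activity_log_report SELECT * FROM activity_log;
```

The report tool then queries activity_log_report, so its long SELECTs never lock the live table.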
>
> 2- Use InnoDB instead of MyISAM, but we are not sure if this will be
> good enough
It strikes me that this is what InnoDB is designed for. In your situation,
this is the first thing I would try.
> 3- Replication of these tables to another server ??
This would also work: it just uses more resources (another computer of
nearly equal power) and more development time (scripts to stop and start
replication). If you can borrow the replicated machine, you could test
the InnoDB solution on that: change the tables on the replicated server
to InnoDB and try running your report generator there. If the
replication doesn't fall behind, the main server will probably handle it.
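Because replication replays the SQL statements, the slave's tables don't have to use the same storage engine as the master's, so you can do the test entirely on the slave; a sketch (table name is a placeholder):

```sql
-- On the slave only: pause replication, convert, resume.
-- The master's MyISAM tables are untouched.
STOP SLAVE;
ALTER TABLE activity_log ENGINE = InnoDB;
START SLAVE;
```

Then run the report queries against the slave while watching SHOW SLAVE STATUS: if Seconds_Behind_Master stays near zero under the full write load plus reporting, InnoDB on the main server is a fairly safe bet.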
Alec
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]