mos wrote:
At 01:20 PM 4/14/2010, Carsten Pedersen wrote:
Been there, done that. It's a maintenance nightmare.

Why is it a maintenance nightmare? I've been using this technique for a couple of years to store large amounts of data and it has been working just fine.

In a previous reply, you mentioned splitting the tables on a daily basis, not yearly. That's an enormous difference. It's one thing to fiddle with a set of merge tables once a year to create a new instance. Quite another when it has to be done every day. If you want to change the table structure, you'll have to do that on every single one of the underlying tables. That might be fine for five year-tables, but it's no fun if you need to do it for hundreds of tables.
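To make the daily-split pain concrete, here's a rough sketch of what that maintenance looks like (table and column names are invented for illustration, not taken from the OP's schema):

  -- One MyISAM table per day, all with identical structure
  CREATE TABLE log_20100412 (
    id  INT NOT NULL,
    msg VARCHAR(255),
    KEY (id)
  ) ENGINE=MyISAM;
  CREATE TABLE log_20100413 LIKE log_20100412;
  CREATE TABLE log_20100414 LIKE log_20100412;

  -- A MERGE table glued over the day-tables
  CREATE TABLE log_all (
    id  INT NOT NULL,
    msg VARCHAR(255),
    KEY (id)
  ) ENGINE=MERGE UNION=(log_20100412, log_20100413, log_20100414)
    INSERT_METHOD=LAST;

  -- A schema change now means one ALTER per underlying table,
  -- and the MERGE definition has to be kept in step as well
  ALTER TABLE log_20100412 ADD COLUMN source VARCHAR(32);
  ALTER TABLE log_20100413 ADD COLUMN source VARCHAR(32);
  ALTER TABLE log_20100414 ADD COLUMN source VARCHAR(32);
  ALTER TABLE log_all      ADD COLUMN source VARCHAR(32);

  -- And every single day the UNION list has to be rewritten to
  -- pick up the new day-table (and drop the expired one)
  ALTER TABLE log_all UNION=(log_20100413, log_20100414, log_20100415);

With yearly tables that ALTER loop runs a handful of times; with daily tables it has to be scripted and run across hundreds of tables.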

If your merge table consists of 30 underlying tables*, a search in the table will result in 30 separate searches, one per table. Also, MySQL will need one file descriptor per underlying table *per client accessing that table*, plus one shared file descriptor per index file. So if 30 clients are accessing a merge table that consists of 30 days' worth of data, that's 930 file descriptors (30 x 30 = 900 for the data files, plus 30 shared ones for the index files) for the OS to keep track of. Clearly, this doesn't scale well.
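(If anyone wants to watch that effect on a running server, the numbers show up with something like:

  SHOW GLOBAL STATUS LIKE 'Open_files';
  SHOW GLOBAL VARIABLES LIKE 'open_files_limit';

and open_files_limit is the ceiling that descriptor count has to fit under.)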

*Approx. 1 month's worth of tables in your suggested solution, which also fits with the OP saying that ~90k out of about ~3.2 million rows get deleted every day.

/ Carsten


