Hi Kabel

Yes, I did, but it won't do the job for us. I didn't explain the whole use case: we are dealing with a 50-billion-row table which we want
to split into 1-million-row tables, and then dynamically break each
of these into smaller pieces in order to speed up n^2 near-neighbor
joins. If we partition the 1-million-row table from the very beginning,
we will end up with an unmanageable number of tables/files.
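For what it's worth, the "break each chunk into smaller pieces" step is essentially spatial blocking: instead of comparing every pair of rows in a 1-million-row chunk (n^2 comparisons), you bin rows into cells of the search radius and only compare a row against its own and adjacent cells. A minimal sketch of that idea in Python, assuming 2-D points and a fixed radius (the function name, data layout, and test data are illustrative, not from this thread):

```python
from collections import defaultdict

def near_neighbor_pairs(points, radius):
    """Grid-blocked near-neighbor join: bin 2-D points into square
    cells of side `radius`, then compare each point only against
    points in its own and the 8 adjacent cells, instead of all
    n^2 pairs."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // radius), int(p[1] // radius))].append(p)

    pairs = []
    for (cx, cy), members in cells.items():
        # Scan this cell and its neighbors; the p < q tuple
        # comparison emits each unordered pair exactly once.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for q in cells.get((cx + dx, cy + dy), []):
                    for p in members:
                        if p < q and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2:
                            pairs.append((p, q))
    return pairs
```

With roughly uniform data this brings the cost per chunk down from O(n^2) to near O(n), which is the same effect as the dynamic sub-splitting described above, just done in memory instead of in separate tables.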

thanks,
Jacek



Have you looked into MySQL partitioning? If you're using version 5.1, it might really help; just partition the big table on chunk ID.

http://dev.mysql.com/doc/refman/5.1/en/partitioning.html

kabel


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org
