Mysqlhotcopy can copy the files anywhere. Just have it copy under your
MySQL data directory and the new copy will be visible instantly. E.g.
copy to /var/lib/mysql/data/offline_copy and you will instantly see a
new database named "offline_copy" with the copy of your table in it.
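As a sketch of that, assuming a MyISAM table named `oldtable` in a database `mydb` and a data directory of /var/lib/mysql/data (both names are placeholders, not from the thread):

```sh
# Take a locked, consistent snapshot of mydb.oldtable and write it into
# a new directory under the MySQL data dir; the directory name shows up
# as a database called "offline_copy" as soon as the copy finishes.
# Note: mysqlhotcopy only handles MyISAM and ARCHIVE tables.
mysqlhotcopy --user=root --password=secret \
    mydb./oldtable/ /var/lib/mysql/data/offline_copy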
Brian Dunning wrote:
> Thanks
At 01:54 PM 9/27/2006, you wrote:
I have a very busy 14,000,000-record table. I made a new version of
the table that's more efficient, and now I just need to copy over
the data. Just about anything I try swamps the machine and locks up
MySQL with "too many connections" because it's so damn busy.
The problem when I try this is that the database gets locked up:
INSERT INTO newtable2 SELECT * FROM oldtable;
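One hedged way to soften that single giant statement — assuming oldtable has an integer primary key `id`, which the thread doesn't actually say — is to copy in ranges with INSERT LOW_PRIORITY, so each statement holds the MyISAM table lock only briefly and yields to waiting readers:

```sql
-- Copy in batches so no single statement locks the table for long.
-- LOW_PRIORITY makes the insert wait until no clients are reading.
INSERT LOW_PRIORITY INTO newtable2
SELECT * FROM oldtable WHERE id BETWEEN 1 AND 500000;

INSERT LOW_PRIORITY INTO newtable2
SELECT * FROM oldtable WHERE id BETWEEN 500001 AND 1000000;
-- ...repeat for each range until all 14 million rows are moved.
```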
On Sep 27, 2006, at 12:37 PM, Dan Buettner wrote:
Brian, I'm not sure there's a quick way to copy 14 million records, no
matter how you slice it. Disabling the indexes on the destination
table might help - but then you've got to devote some time to when you
re-enable them.
Thanks Chris, this sounds great, but when I read about mysqlhotcopy I
didn't see a way to make it create a live table that's open within
the same database; it seems to want only to create a separate backup
file in some directory.
On Sep 27, 2006, at 6:10 PM, Wagner, Chris (GEAE, CBTS) wrote:
This is a situation where you should use mysqlhotcopy. That gives you a
snapshot of the current table and lets you work on it "offline" while the
real table is available.
hotcopy table A to B
blank table A to allow inserts
work on table B
merge A into B
delete A
rename B to A
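In SQL, the steps above might sketch out like this. The database names `mydb` and `tmpdb` are placeholders; the copy made by mysqlhotcopy lands in `tmpdb` and plays the role of "B":

```sql
-- 1. hotcopy A to B, done at the shell:
--      mysqlhotcopy mydb./A/ /var/lib/mysql/data/tmpdb
-- 2. blank A so new inserts keep landing in a small, fast table
TRUNCATE TABLE A;
-- 3. work on the copy (tmpdb.A) offline: restructure, index, etc.
-- 4. merge the rows that arrived in A while you worked
INSERT INTO tmpdb.A SELECT * FROM A;
-- 5. delete A
DROP TABLE A;
-- 6. rename B to A; RENAME TABLE is fast and can move a table
--    between databases on the same server
RENAME TABLE tmpdb.A TO mydb.A;
```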
--
Chris Wagner
CBTS
G
The table switch-a-roo scheme would accomplish this - it lets you copy
the data into the duplicate table, and can run as long as needed since
it won't be tying up a table that your users are trying to access.
Then once the move is completed, the table rename operation should
complete very quickly.
This is the kind of thing I've been trying, but anything like this
locks up the machine, all the users get errors, and I have to restart
mysql. This is why I'm looking for something like a "LOW PRIORITY"
solution, hoping that it won't try to use resources until they're
available.
On Sep 27, 2006, at 12:37 PM, Dan Buettner wrote:
Brian, I'm not sure there's a quick way to copy 14 million records, no
matter how you slice it. Disabling the indexes on the destination
table might help - but then you've got to devote some time to when you
re-enable them.
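For the index-disabling idea, a minimal sketch (MyISAM only, and note that DISABLE KEYS skips maintenance only of non-unique indexes):

```sql
-- Stop maintaining non-unique indexes during the bulk copy...
ALTER TABLE newtable2 DISABLE KEYS;
INSERT INTO newtable2 SELECT * FROM oldtable;
-- ...then rebuild them in one pass, which is usually much faster
-- than updating them row by row during the insert.
ALTER TABLE newtable2 ENABLE KEYS;
```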
You might try this workaround, where you're copying into a duplicate
of the table:
I'm guessing what's happening is that your "import" is locking the
table, putting everything else on hold. People keep connecting and
getting put on hold until you run out of connections. It's not that
your machine is "so busy"; it just can't do two things at once.
One of the limitations of MyISAM tables is that writes lock the whole
table.