> -Original Message-
> From: Rob Wultsch [mailto:wult...@gmail.com]
> Sent: Thursday, August 05, 2010 6:05 PM
> To: Daevid Vincent
> Cc: MySQL List
> Subject: Re: Possible tricks to ALTER on huge tables?
>
> Having a significant amount of overhead for unused columns will without doubt
I had to do this trick with a few million rows in the table. What I did
was to create a new table with the required structure, then did an
"INSERT INTO ... SELECT FROM", starting with the newest data first,
because that made sense for my application. Then I renamed the old table
and the new one. A rough sketch of the approach is below.
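Something like the following, assuming a hypothetical `orders` table
with an auto-increment primary key `id` (all names here are made up for
illustration):

    -- Build the new structure alongside the old table.
    CREATE TABLE orders_new LIKE orders;
    ALTER TABLE orders_new ADD COLUMN status TINYINT NOT NULL DEFAULT 0;

    -- Copy rows in chunks, newest first; the new column is last in the
    -- table, so the SELECT supplies a literal for it.
    INSERT INTO orders_new
      SELECT o.*, 0 FROM orders o
      WHERE o.id > 900000000;
    -- ...repeat with progressively older id ranges...

    -- Atomically swap the tables once the copy is complete.
    RENAME TABLE orders TO orders_old, orders_new TO orders;

The RENAME TABLE at the end swaps both names in one atomic step, so
readers never see a missing table.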
YMMV
andu
Any optimization that works for your case is fair game. I would use
EXPLAIN on the SQL statements, and also try importing without the
indexes; that will shed more light. Even if you optimize the query,
concurrent access demanding a big load of data will get other operations
stuck... and that alone is a good reason. A sketch of importing without
indexes follows.
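A minimal sketch of that, again with hypothetical names (`orders_new`,
`idx_customer`, `customer_id` are assumptions, not from the thread): the
secondary index is dropped before the bulk copy and rebuilt once at the
end, which is usually much faster than maintaining it row by row.

    -- Check how the copy query will execute before running it.
    EXPLAIN SELECT o.* FROM orders o WHERE o.id > 900000000;

    -- Drop the secondary index on the target, bulk-load, then rebuild.
    ALTER TABLE orders_new DROP INDEX idx_customer;
    INSERT INTO orders_new SELECT o.*, 0 FROM orders o;
    ALTER TABLE orders_new ADD INDEX idx_customer (customer_id);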
Daevid Vincent wrote:
We currently have some tables that are approaching 1 BILLION rows (a
real billion, with nine zeros, not that silly six-zero version). Trying
to do an ALTER on them to add a column can sometimes take hours.
A few years ago I tested possible table structures for an
app
hi guys,
we have a MySQL replication setup in our production environment.
Master: mysql-5.0.77
Slave: mysql-5.1.46
Recently, the slave has shown many of these errors and the SQL thread
was stopped:
Last_SQL_Error: Error 'No data - zero rows fetched, selected, or
processed' on query. Default database
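That message corresponds to MySQL error 1329 (ER_SP_FETCH_NO_DATA),
typically raised when a stored-procedure cursor FETCH returns no rows,
and it reportedly surfaces in mixed 5.0/5.1 replication setups. If,
after inspecting the failing statement, the event is safe to skip,
replication can be resumed with the standard commands (a sketch; verify
the statement first):

    -- Inspect the failing statement and position first:
    SHOW SLAVE STATUS\G

    -- If the event is safe to skip, step over it and resume:
    STOP SLAVE;
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
    START SLAVE;

Alternatively, slave-skip-errors=1329 in the slave's my.cnf ignores this
error class entirely, at the cost of possibly masking real problems.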