Thanks for the responses; replies inline:

On Sun, Mar 9, 2008 at 7:49 PM, Tim McDaniel <[EMAIL PROTECTED]> wrote:
> > On Sat, Mar 8, 2008 at 11:55 PM, Waynn Lue <[EMAIL PROTECTED]> wrote:
>  >> Subject: Re: mysqldump on specific columns only?
>
> >> I'm trying to dump all the columns of a table but one in a
>  >> restructuring of my schema.
>
>  By "dump", I assume you mean "output", not "delete".

Indeed, I meant "output."

>  I suspect I'm too inexperienced in MySQL or I don't understand the
>  question, because I don't see why you haven't mentioned
>  ALTER TABLE DROP COLUMN.  Or, if you don't want to touch the original
>  database but rather a copy database: mysqldump, load it into another
>  database, ALTER TABLE DROP COLUMN.  It's true that you wrote
>
>
>  >> since the files are going to be rather big.
>
>  The second notion has the disadvantage that the data will be copied
>  three times (mysqldump, load, ALTER TABLE).  On the other hand, you
>  have the full reliability of mysqldump, with its care in copying
>  CREATE TABLEs and data correctly.
>
>  Am I perhaps misunderstanding?

ALTER TABLE unfortunately takes longer than dumping and recreating the
tables.  I started a previous thread on this, and the consensus was
that recreating everything is faster, so I'm taking advantage of this
downtime to restructure our schema and add a few other columns we'll
need.  The initial impetus was expanding an INT to a BIGINT on a
column involved in a foreign key: that means dropping the FK, widening
the INT to BIGINT in two tables, then recreating the FK.  I also want
to add three more columns while I'm at it.  In this case, the cost of
all those operations is significantly higher than just dumping all the
tables, truncating/dropping them, recreating them with the new
definitions, and then importing.
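
For concreteness, the in-place path I'm comparing against would be
roughly the following (table, column, and constraint names are made up
just to sketch it; this isn't our actual schema):

  -- made-up names; each ALTER rewrites the whole table, which is the cost in question
  ALTER TABLE child  DROP FOREIGN KEY fk_child_parent;
  ALTER TABLE parent MODIFY id BIGINT NOT NULL;         -- MODIFY needs the full new column definition
  ALTER TABLE child  MODIFY parent_id BIGINT NOT NULL;
  ALTER TABLE child
    ADD COLUMN extra_a INT,
    ADD COLUMN extra_b INT,
    ADD COLUMN extra_c INT;
  ALTER TABLE child
    ADD CONSTRAINT fk_child_parent
    FOREIGN KEY (parent_id) REFERENCES parent (id);

Since each of those statements copies the table, doing a single
dump/reload pass instead comes out ahead for us.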

>  On Sun, 9 Mar 2008, Rob Wultsch <[EMAIL PROTECTED]> wrote:
>  > I would probably use [SELECT] INTO OUTFILE. Any particular reason it
>  > does not work?
>
>  It's not well-suited for "restructuring of my schema".  mysqldump
>  emits SQL that can be used immediately to create the table with
>  exactly the same data types and column names and keys and other
>  properties (like NOT NULL), and to insert data into columns with
>  quoting and escaping as needed, and nice little flourishes like DROP
>  TABLE IF EXISTS, LOCK TABLES, UNLOCK TABLES.

I'm actually not going to be using the exact same schema anyway, so
SELECT ... INTO OUTFILE does work, but let me continue that thought below.

>  > mysql> select c1,c2 from t INTO OUTFILE 'outfile';
>  > Query OK, 4 rows affected (0.00 sec)
>  >
>  >
>  > outfile contents:
>  > 1     2
>  > 4     5
>  > 7     8
>  > 10    11
>
>  That provides only the data to insert.  Further, it would require
>  considerable massaging to get it into the form of INSERT statements,
>  though for all I know MySQL provides other mechanisms to load that
>  sort of data.  But those are tab-separated columns: what if one of the
>  columns were a text field with a literal tab?

MySQL does provide LOAD DATA INFILE
(http://dev.mysql.com/doc/refman/5.0/en/load-data.html), but I was
hoping to do it through mysqldump instead of SELECT INTO OUTFILE
because I assumed it would be faster.  If that's not the case, though,
I'll go ahead and use SELECT INTO OUTFILE and LOAD DATA INFILE.  I
guess almost any character is fine as a delimiter, since my fields
should all just be integer IDs...
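
If I go that route, something like this is what I have in mind (table
and column names and the path are just placeholders); the matching
FIELDS/LINES clauses keep the export and import in sync, and the
quoting would cover me if a non-ID column ever sneaks in:

  -- placeholder names and path, just to sketch the export/import pair
  SELECT id, parent_id, other_id
    FROM old_table
    INTO OUTFILE '/tmp/old_table.txt'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

  LOAD DATA INFILE '/tmp/old_table.txt'
    INTO TABLE new_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (id, parent_id, other_id);

The explicit column list on LOAD DATA also means the new columns in
the restructured table can just pick up their defaults.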

Thanks,
Waynn
