Indexes issue importing tablespaces

2014-10-10 Thread Ruben Cardenal
 

Hi, 

I have this problem on several different instances of 5.6.20. I follow
all the steps as stated in
http://dev.mysql.com/doc/refman/5.6/en/tablespace-copying.html and get
no errors or warnings, either in the CLI or the MySQL log. 

(13:23:02) [borrame]  alter table creditLine discard tablespace;
Query OK, 0 rows affected (0.30 sec) 

(copy operation of the .cfg and .ibd files from the origin server) 

(13:23:19) [borrame]  alter table creditLine import tablespace;
Query OK, 0 rows affected (44.35 sec) 

2014-10-10 13:26:42 1657 [Note] InnoDB: Importing tablespace for table
'letsbonus/creditLine' that was exported from host 'dualla'
2014-10-10 13:26:42 1657 [Note] InnoDB: Phase I - Update all pages
2014-10-10 13:27:04 1657 [Note] InnoDB: Sync to disk
2014-10-10 13:27:25 1657 [Note] InnoDB: Sync to disk - done!
2014-10-10 13:27:26 1657 [Note] InnoDB: Phase III - Flush changes to
disk
2014-10-10 13:27:26 1657 [Note] InnoDB: Phase IV - Flush complete
2014-10-10 13:27:26 1657 [Note] InnoDB: borrame.creditLine autoinc
value set to 87313435 

After this, the indexes look empty: 

(13:27:26) [borrame]  show index from creditLine;
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table      | Non_unique | Key_name                       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| creditLine |          0 | PRIMARY                        |            1 | id               | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idClient            |            1 | idClient         | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idCreditSubTypology |            1 | idCreditTypology | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idCountry           |            1 | idCountry        | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idAffiliate         |            1 | idAffiliate      | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | endDate                        |            1 | endDate          | A         |           0 |     NULL | NULL   | YES  | BTREE      |         |               |
| creditLine |          1 | status                         |            1 | status           | A         |           0 |     NULL | NULL   |      | BTREE      |         |               |
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
7 rows in set (0.00 sec) 

I have to optimize or null-alter the table to get them working: 

(13:27:34) [borrame]  alter table creditLine engine = InnoDB;
Query OK, 0 rows affected (12 min 57.41 sec)
Records: 0 Duplicates: 0 Warnings: 0 

(13:51:17) [borrame]  show index from creditLine;
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table      | Non_unique | Key_name                       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| creditLine |          0 | PRIMARY                        |            1 | id               | A         |    32237680 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idClient            |            1 | idClient         | A         |    16118840 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idCreditSubTypology |            1 | idCreditTypology | A         |        1792 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idCountry           |            1 | idCountry        | A         |        8967 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idAffiliate         |            1 | idAffiliate      | A         |           2 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | endDate                        |            1 | endDate          | A         |      293069 |     NULL | NULL   | YES  | BTREE      |         |               |
| creditLine |          1 | status                         |            1 | status           | A         |           4 |     NULL | NULL   |      | BTREE      |         |               |
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
7 rows in set (0.00 sec) 

Is this a known issue? Or is it something I'm missing? I've checked the
docs and saw nothing related to this. 

Thanks, 

Rubén. 
 

Re: Indexes issue importing tablespaces

2014-10-10 Thread Wagner Bianchi
Did you check if an ANALYZE TABLE is enough in this case?

--
Wagner Bianchi
Mobile: +55.31.8654.9510

 On 10/10/2014, at 09:06, Ruben Cardenal my...@ruben.cn wrote:
 
 
 
 [quoted original message snipped]


Re: Indexes issue importing tablespaces

2014-10-10 Thread Ruben Cardenal
 

Hi Wagner, 

Yes! Analyze solves the situation in a moment. 

(14:21:09) [borrame]  alter table creditLine discard tablespace;
Query OK, 0 rows affected (0.41 sec) 

(14:21:21) [borrame]  alter table creditLine import tablespace;
Query OK, 0 rows affected (23.48 sec) 

(14:24:55) [borrame]  analyze table creditLine;
++-+--+--+
| Table | Op | Msg_type | Msg_text |
++-+--+--+
| borrame.creditLine | analyze | status | OK |
++-+--+--+
1 row in set (0.16 sec) 

(14:25:09) [borrame]  show index from creditLine;
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table      | Non_unique | Key_name                       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| creditLine |          0 | PRIMARY                        |            1 | id               | A         |    32237680 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idClient            |            1 | idClient         | A         |    16118840 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idCreditSubTypology |            1 | idCreditTypology | A         |        5050 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idCountry           |            1 | idCountry        | A         |        8161 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | creditLine_idAffiliate         |            1 | idAffiliate      | A         |        1794 |     NULL | NULL   |      | BTREE      |         |               |
| creditLine |          1 | endDate                        |            1 | endDate          | A         |       64995 |     NULL | NULL   | YES  | BTREE      |         |               |
| creditLine |          1 | status                         |            1 | status           | A         |           4 |     NULL | NULL   |      | BTREE      |         |               |
+------------+------------+--------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
7 rows in set (0.00 sec) 

(14:25:14) [borrame]  

Thanks, 

Rubén. 

On 2014-10-10 14:19, Wagner Bianchi wrote: 

 Did you check if an ANALYZE TABLE is enough in this case?
 
 --
 Wagner Bianchi
 Mobile: +55.31.8654.9510 
 
 On 10/10/2014, at 09:06, Ruben Cardenal my...@ruben.cn wrote:
 
 [quoted original message snipped]

Importing a database as a data file directory

2014-10-05 Thread Tim Johnson
I have a dual-boot OS X/Ubuntu 12.04 arrangement on a Mac mini. The
Ubuntu system has failed and I am unable to boot it. 

I have one database on the Ubuntu partition that was not backed up. 

I am able to mount the Ubuntu partition with fuse-ext2 from Mac OS X,
so I can read and copy the MySQL data files at /var/lib/mysql on the
Ubuntu partition.

I presume that I should be able to retrieve the database by just
copying it to /opt/local/var/db/mysql5 - the location of the MySQL
data files on the Mac partition - and setting ownership and
permissions.

So, this is a "Help me before I hurt myself" sort of question: are
there any caveats and gotchas to consider?

thanks
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Importing a database as a data file directory

2014-10-05 Thread Jan Steinman
 So, this is a "Help me before I hurt myself" sort of question: Are
 there any caveats and gotchas to consider?

Do you know if the database was shut down properly? Or did Ubuntu crash and die 
and your partition become unbootable while the database was in active use?

Either way, you need to make sure MySQL is shut down when you move the files, 
and then repair them after starting.

I've had good experiences moving MyISAM files that way, but bad experiences 
moving InnoDB files. I suspect the latter are more aggressively cached.

 Mass media must constantly manipulate and deceive us in order to sell 
products... The most fundamental deception perpetrated on the public is that 
consumption of material goods is the source of human happiness. A secondary 
deception is hiding the fact that such consumption leads to major collateral 
damage -- the possible end of human life on the planet. -- Pat Murphy
 Jan Steinman, EcoReality Co-op 





Re: Importing a database as a data file directory

2014-10-05 Thread Reindl Harald


On 05.10.2014 at 21:29, Tim Johnson wrote:

[quoted question snipped]


in case of MyISAM a no-brainer

* stop the db server
* copy the folder there
* set permissions
* start the server
* run mysql_upgrade --force -u root -p

well, in case of replication you might want to rebuild the slave(s) from 
scratch, but that's it - I do this regularly to rsync whole databases 
from stopped replication slaves, as well as the document root, onto testing 
machines
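
As a concrete sketch of those steps on a typical Linux box (the mount
point, database name, and service name here are assumptions - adjust
for your own setup):

```shell
# Stop the destination server so no table files are open during the copy.
sudo service mysql stop

# Copy the database directory (MyISAM: .frm/.MYD/.MYI files) into the datadir.
sudo rsync -a /mnt/ubuntu/var/lib/mysql/mydb/ /var/lib/mysql/mydb/

# Make sure the server's user owns the copied files.
sudo chown -R mysql:mysql /var/lib/mysql/mydb

# Start the server, then check/repair tables and system schema.
sudo service mysql start
mysql_upgrade --force -u root -p
```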






Re: Importing a database as a data file directory

2014-10-05 Thread Tim Johnson
* Jan Steinman j...@ecoreality.org [141005 13:12]:
  So, this is a Help me before I hurt myself sort of question: Are
  there any caveats and gotchas to consider?
 
 Do you know if the database was shut down properly? Or did Ubuntu
 crash and die and your partition become unbootable while the
 database was in active use?
  The database had been shut down, no symptom occurred when the OS
  was booted, I just couldn't reboot (for starters) 

 Either way, you need to make sure MySQL is shut down when you move the files, 
 and then repair them after starting.
  Good tip. 

 I've had good experiences moving MyISAM files that way, but bad
 experiences moving InnoDB files. I suspect the latter are more
 aggressively cached.
  They are MyISAM ... 
  Thank you
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com




Re: Importing a database as a data file directory

2014-10-05 Thread Tim Johnson
* Reindl Harald h.rei...@thelounge.net [141005 13:12]:
 
 On 05.10.2014 at 21:29, Tim Johnson wrote:
 [quoted question snipped]
 
 in case of MyISAM a no-brainer
  Yup. MyISAM ... 
 * stop the db server
 * copy the folder there
 * set permissions
 * start the server
 * run mysql_upgrade --force -u root -p
 
  Great! thanks for the detail

 well, in case of replication you might want to rebuild the slave(s) from 
 scratch, but that's it - doing this regularly to rsync whole databases 
 from stopped replication slaves as well as the document root on testing 
 machines
 Got it. 

Thank you


-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com




Re: Importing a database as a data file directory

2014-10-05 Thread Reindl Harald


On 05.10.2014 at 22:39, Jan Steinman wrote:

I've had good experiences moving MyISAM files that way, but bad experiences
moving InnoDB files. I suspect the latter are more aggressively cached


simply no, no and no again

independent of innodb_file_per_table = 1 there is *always* a global
tablespace (ibdata1), and you just can't move InnoDB databases around
at the file-system level - there is no but or if, and it has nothing to
do with caching

if caching mattered in that context, it would simply not be part of
the game in the case of a stopped service

http://www.xaprb.com/blog/2012/09/07/measuring-free-space-in-innodbs-global-tablespace/





Re: Importing SQL dumps into MySQL through Command line

2013-03-31 Thread Manuel Arostegui
2013/3/31 Norah Jones nh.jone...@gmail.com

 Hi,

 To source a SQL dump I can use the source command, but if I need to do the
 same thing from the command line without going to the SQL prompt, can I
 achieve that?



Hello,

You mean cat sqldump.sql | mysql -uwhatever -pwhatever whatever_database ?

Manuel.
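
Equivalently, any of these work from the shell (the user, password, and
file names here are placeholders):

```shell
# Pipe the dump into the client:
cat sqldump.sql | mysql -u myuser -p mydatabase

# Or use input redirection:
mysql -u myuser -p mydatabase < sqldump.sql

# Or run the client's own source command non-interactively:
mysql -u myuser -p mydatabase -e "source sqldump.sql"
```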


Re: importing mysql structures only or restore ignoring non-existing tables and columns?

2011-10-31 Thread Luis Motta Campos
On 29 Oct 2011, at 22:59, luci spam wrote:

 I have 2 servers (1 for development, 1 for service)
 
 I keep ADD/DELETE columns and CREATE/DELETE indexes on my development
 server, so these 2 servers have similar but different MySQL data structures.
 
 I know there's an option to export structures only (like --no-data).
 
 Is there a way (except 3rd-party software like mysqldiff.org) to import
 structure only into existing data?

 Alternatively, is there a way to import only data, ignoring non-existing
 tables and columns? (I thought this may do the trick if I back up data -
 import structure - restore the data.)


You can use SELECT...INTO OUTFILE and LOAD DATA INFILE... to achieve what I 
believe you're asking here, save the eventual misunderstanding. ;-)
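
A sketch of that idea, using SELECT ... INTO OUTFILE (the multi-row
counterpart of LOAD DATA INFILE). Table and column names here are
invented, and the file path must be writable/readable by the server:

```sql
-- On the source server: dump only the columns both schemas share.
SELECT id, name, price
  INTO OUTFILE '/tmp/items.txt'
  FROM items;

-- On the destination server: load just those columns; columns that
-- exist only in the destination table keep their defaults.
LOAD DATA INFILE '/tmp/items.txt'
  INTO TABLE items (id, name, price);
```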

Cheers
--
Luis Motta Campos
is a DBA, Foodie, and Photographer





importing mysql structures only or restore ignoring non-existing tables and columns?

2011-10-29 Thread luci spam
I have 2 servers (1 for development, 1 for service)

I keep ADD/DELETE columns and CREATE/DELETE indexes on my development
server, so these 2 servers have similar but different MySQL data structures.

I know there's an option to export structures only (like --no-data).

Is there a way (except 3rd-party software like mysqldiff.org) to import
structure only into existing data?

Alternatively, is there a way to import only data, ignoring non-existing
tables and columns? (I thought this may do the trick if I back up data -
import structure - restore the data.)

thanks in advance


Re: Importing User credentials from mysql.sql file???

2010-08-03 Thread Nunzio Daveri
Thanks Paul, I guess then all I need to do is remove the root entries at the 
beginning of the insert statements and then just insert the rest into 5.X; since 
the column names are there, it should just import with no problems, right?

Thanks again :-)

Nunzio





From: Paul DuBois paul.dub...@oracle.com
To: Nunzio Daveri nunziodav...@yahoo.com
Cc: mysql@lists.mysql.com
Sent: Tue, August 3, 2010 12:09:05 AM
Subject: Re: Importing User credentials from mysql.sql file???


On Aug 2, 2010, at 3:57 PM, Nunzio Daveri wrote:

 [quoted original question snipped]

Ask the admin to re-dump the data, this time using mysqldump --complete-insert
so that the INSERT statements include the column names.

-- 
Paul DuBois
Oracle Corporation / MySQL Documentation Team
Madison, Wisconsin, USA
www.mysql.com


  

Importing User credentials from mysql.sql file???

2010-08-02 Thread Nunzio Daveri
Hello Gurus, I just upgraded several servers from MySQL 4.1 to 5.1 and also 
wanted to know how to extract the user names, passwords and credentials from 
the mysql.sql file (around 22 of them per server - have 8 servers total). The 
contract admin emailed me a SQL file which is a dump of the default mysql 
database from the 4.1 version, and I am trying to see if I can just grep the 
INSERT INTO... statements out of the mysql.sql file and import them into 
the 5.1.

When I tried it on our test box it keeps on saying:

ERROR 1136 - Column count doesn't match value count at row 2?

Can someone please tell me how I can extract the data out of the 4.1 mysql.sql 
file and inject it into the 5.1 version please?

Thanks in advance...

Nunzio


  

Re: Importing User credentials from mysql.sql file???

2010-08-02 Thread Paul DuBois

On Aug 2, 2010, at 3:57 PM, Nunzio Daveri wrote:

 [quoted original question snipped]

Ask the admin to re-dump the data, this time using mysqldump --complete-insert
so that the INSERT statements include the column names.
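
A sketch of that re-dump (server names, credentials, and file names are
placeholders):

```shell
# Dump the 4.1 grant tables with column names spelled out in every
# INSERT, so the rows still line up against 5.1's wider mysql.* tables.
mysqldump --complete-insert -u root -p mysql > mysql.sql

# Load the dump into the 5.1 server, then fix up the system tables.
mysql -u root -p mysql < mysql.sql
mysql_upgrade -u root -p
```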

-- 
Paul DuBois
Oracle Corporation / MySQL Documentation Team
Madison, Wisconsin, USA
www.mysql.com





Importing files as elements

2010-03-27 Thread Demetris


Hi all,

I would like to populate one column of my table with XML files -
meaning each element in that column will be an XML file. Is this
possible, and how can it be done? Or would it be more efficient to
store the files in the filesystem and instead keep a set of pointers
as elements in that column that point to those files?

Thanks very much
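
For concreteness, the two options being weighed look roughly like this
(a sketch; the table and column names are invented, and LOAD_FILE
requires the FILE privilege plus a path readable by the server host):

```sql
-- Option 1: store the whole XML document in a text column.
CREATE TABLE docs (
  id  INT AUTO_INCREMENT PRIMARY KEY,
  xml MEDIUMTEXT
);
INSERT INTO docs (xml) VALUES (LOAD_FILE('/tmp/example.xml'));

-- Option 2: keep the files on disk and store only a pointer.
CREATE TABLE doc_paths (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  path VARCHAR(255) NOT NULL
);
INSERT INTO doc_paths (path) VALUES ('/var/data/xml/example.xml');
```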





Importing table contents

2010-01-02 Thread Patrice Olivier-Wilson
I have 2 databases, different domains. Both have a table named 'tips'... 
both have different contents in the table.

Using phpMyAdmin for GUI.

I want to export databaseA tips as SQL (done), then import the content into 
databaseB tips. But when I run that operation, databaseB says that 
there is already a table named tips in databaseB.


Yep, know that... I want to bring in the contents...not make a new table.

Any help, most appreciated

Thanks as always

--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





Re: Importing table contents

2010-01-02 Thread Gary Smith

Patrice Olivier-Wilson wrote:
I have 2 databases, different domains. Both have a table named 
'tips'... both have different contents in the table.

Using phpMyAdmin for GUI.

I want to export databaseA tips as sql (done) then import content into 
databaseB tips. But when I run that operation, the databaseB says that 
there is already a table named tips in databaseB.


Yep, know that... I want to bring in the contents...not make a new table.

Any help, most appreciated
When you export, phpMyAdmin has the option to add DROP TABLE. This will 
drop the existing table structure and create a new one as it was when it 
was exported. Is this what you're after?


Gary




Re: Importing table contents

2010-01-02 Thread Patrice Olivier-Wilson

Gary Smith wrote:

Patrice Olivier-Wilson wrote:
[quoted question snipped]
When you export, PHPMyAdmin has the option to add drop table. This 
will drop the existing table structure and create a new one as it was 
when it was exported. Is this what you're after?


Gary


I have data I need to keep in both DBs... just trying to merge.



--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





Re: Importing table contents

2010-01-02 Thread Gary Smith

Patrice Olivier-Wilson wrote:

I have data I need to keep in both db just trying to merge.

There's two ways around this:

First is to not export the structure (uncheck structure). The second is 
to export with IF NOT EXISTS. This should (IIRC) do a CREATE TABLE IF 
NOT EXISTS, so it'll do what you're wanting to do.


Do you have any primary keys/auto increment columns that are going to 
overlap or anything like that?


Cheers,

Gary




Re: Importing table contents

2010-01-02 Thread Patrice Olivier-Wilson

Gary Smith wrote:

Patrice Olivier-Wilson wrote:
[quoted text snipped]


Gave it a try, got this:
MySQL said:

#1062 - Duplicate entry '1' for key 1

--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





Re: Importing table contents

2010-01-02 Thread Gary Smith

Patrice Olivier-Wilson wrote:

Gave it a try got this:
MySQL said:

#1062 - Duplicate entry '1' for key 1
Yeah, that's what I was saying in my previous mail. It looks like 
you've got a primary key on one of your columns, and you're attempting 
to insert data into it with a duplicate primary key (i.e. what the error 
message says). The easiest way to get around this one would be to write 
a query that pulls all of the columns apart from the primary key, and 
then replace that field with '' or somesuch.


For instance, let's say you've got a schema of the following:

table1(primarykey,column2,column3,column4,column5)

primarykey is obviously a primary key. You'd do something along the 
lines of select '',column2,column3,column4,column5 from table1;


Then export that resultset to an SQL file.

Anyone else aware of an easier way to do this? I've got into some bad 
habits over the years, but I'm not aware of another way to do what 
Patrice is trying to do.
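
If both copies of the table live on the same server, a cross-database
INSERT...SELECT may be simpler than exporting at all. A sketch (the
schema names follow the thread, but the column names are invented;
NULL in the key position lets AUTO_INCREMENT assign fresh ids):

```sql
-- Copy rows from databaseA.tips into databaseB.tips, letting the
-- destination's AUTO_INCREMENT column generate new, non-colliding keys.
INSERT INTO databaseB.tips (tip_text, tip_author)
SELECT tip_text, tip_author
  FROM databaseA.tips;
```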


Cheers,

Gary




Re: Importing table contents

2010-01-02 Thread Patrice Olivier-Wilson

Gary Smith wrote:

Patrice Olivier-Wilson wrote:
[quoted text snipped]

If I export both db tables into CSV, combine them, and then import back, that 
should do it, methinks...

or just create a new table called tips2 and merge the 2 into one...

--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





Re: Importing large databases faster

2009-12-18 Thread Brent Clark

On 17/12/2009 17:46, mos wrote:

Load Data ... is still going to be much faster.

Mike



Hiya

If you're on Linux and using LVM, look at mylvmbackup.

HTH

Brent Clark




Re: Importing large databases faster

2009-12-17 Thread Jay Ess

Madison Kelly wrote:

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. 
The dump takes about 12.5h to finish, generating a ~172 GB file. When 
I try to load it though, *after* manually dumping the old databases, 
it takes 1.5~2 days to load the same databases. I am guessing this is, 
at least in part, due to indexing.


My question is; Given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql
I use the -e -v -f -q -Q -K parameters for mysqldump on large 
tables/databases. It does what you are asking for: it disables key 
generation until all of the data is inserted. It also uses multi-row insert 
statements rather than an individual insert statement for every row, which 
speeds things up considerably.
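For reference, those short flags map to these long options per the mysqldump documentation for 5.x (a command-line sketch; credentials and paths are placeholders):

```sh
# -e/--extended-insert : multi-row INSERT statements
# -q/--quick           : stream rows instead of buffering them in memory
# -Q/--quote-names     : backquote identifiers
# -K/--disable-keys    : wrap each table's data in
#                        ALTER TABLE ... DISABLE KEYS / ENABLE KEYS (MyISAM)
# -f/--force, -v/--verbose : keep going on errors, report progress
mysqldump -e -v -f -q -Q -K --all-databases -psecret > /path/to/backup.sql
```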





Re: Importing large databases faster

2009-12-17 Thread mos

At 03:59 AM 12/17/2009, you wrote:

Madison Kelly wrote:

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The 
dump takes about 12.5h to finish, generating a ~172 GB file. When I try 
to load it though, *after* manually dumping the old databases, it takes 
1.5~2 days to load the same databases. I am guessing this is, at least in 
part, due to indexing.


My question is; Given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql
I use the -e -v -f -q -Q -K parameters for mysqldump on large 
tables/databases. It does what you are asking for: it disables key 
generation until all of the data is inserted. It also uses multi-row insert 
statements rather than an individual insert statement for every row, which 
speeds things up considerably.



Load Data ... is still going to be much faster.

Mike 






Importing large databases faster

2009-12-16 Thread Madison Kelly

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The 
dump takes about 12.5h to finish, generating a ~172 GB file. When I try 
to load it though, *after* manually dumping the old databases, it takes 
1.5~2 days to load the same databases. I am guessing this is, at least 
in part, due to indexing.


My question is; Given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql

How can I go about efficiently loading it into a new database? 
Specifically, can I disable triggers, indexes and what not until after 
load finishes? I can only imagine that a single "ok, go create your 
indexes now" at the end would be faster. Perhaps some way to hold off 
commits from happening as often? The target server has 32Gb of RAM, so I 
suspect I should be able to hold things in memory and commit to disk 
relatively rarely.


I am currently loading via this command:

mysql -psecret < /path/to/backup.sql
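A sketch of one way to hold off commits and key checks during such a load (assumes an InnoDB target; these are the standard bulk-load session settings, run inside the mysql client):

```sql
-- Speculative speed-ups for a bulk restore session:
SET autocommit = 0;          -- commit once at the end, not per statement
SET unique_checks = 0;       -- skip uniqueness checks during load
SET foreign_key_checks = 0;  -- skip FK validation during load

SOURCE /path/to/backup.sql;

SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;
```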

The source and destination MySQL versions are:

Source:
mysql  Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64) 
using readline 5.0


Dest:
mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using 
readline 5.1


The reason for the discrepancy is that the old server was setup from 
source on CentOS 4.x by a previous tech and the destination server is 
the stock version from CentOS 5.x. The source server will be phased out 
soon, so no real attempt at maintaining matching versions was done.


Thanks!

Madi




RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
There are scripts out there, such as the Maatkit mk-parallel-dump/restore, that 
can speed up this process by running in parallel.

However, if you're doing this every week on that large a dataset, I'd just 
use filesystem snapshots.  Your backup/restore would then only take as long 
as it takes for you to scp the database from one machine to another.

Regards,
Gavin Towey


-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 12:56 PM
To: mysql@lists.mysql.com
Subject: Importing large databases faster

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The
dump takes about 12.5h to finish, generating a ~172 GB file. When I try
to load it though, *after* manually dumping the old databases, it takes
1.5~2 days to load the same databases. I am guessing this is, at least
in part, due to indexing.

My question is; Given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql

How can I go about efficiently loading it into a new database?
Specifically, can I disable triggers, indexes and what not until after
load finishes? I can only imagine that a single "ok, go create your
indexes now" at the end would be faster. Perhaps some way to hold off
commits from happening as often? The target server has 32Gb of RAM, so I
suspect I should be able to hold things in memory and commit to disk
relatively rarely.

I am currently loading via this command:

mysql -psecret < /path/to/backup.sql

The source and destination MySQL versions are:

Source:
mysql  Ver 14.13 Distrib 5.1.19-beta, for unknown-linux-gnu (x86_64)
using readline 5.0

Dest:
mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (x86_64) using
readline 5.1

The reason for the discrepancy is that the old server was setup from
source on CentOS 4.x by a previous tech and the destination server is
the stock version from CentOS 5.x. The source server will be phased out
soon, so no real attempt at maintaining matching versions was done.

Thanks!

Madi



This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com


Re: Importing large databases faster

2009-12-16 Thread Madison Kelly

Gavin Towey wrote:

There are scripts out there, such as the Maatkit mk-parallel-dump/restore, that 
can speed up this process by running in parallel.

However, if you're doing this every week on that large a dataset, I'd just 
use filesystem snapshots.  Your backup/restore would then only take as long 
as it takes for you to scp the database from one machine to another.

Regards,
Gavin Towey


Thanks! Will the Maatkit script work on a simple --all-databases dump?

As for the copy, it's a temporary thing. This is just being done weekly 
while we test out the new server. Once it's live, the new server will 
indeed be backed up via LVM snapshots. :)


Madi




RE: Importing large databases faster

2009-12-16 Thread Gavin Towey
I don't think so, I'm pretty sure you have to use mk-parallel-dump to get the 
data in a format it wants.  The docs are online though.

Regards,
Gavin Towey

-Original Message-
From: Madison Kelly [mailto:li...@alteeve.com]
Sent: Wednesday, December 16, 2009 4:35 PM
To: Gavin Towey
Cc: mysql@lists.mysql.com
Subject: Re: Importing large databases faster

Gavin Towey wrote:
 There are scripts out there, such as the Maatkit mk-parallel-dump/restore, that 
 can speed up this process by running in parallel.

 However, if you're doing this every week on that large a dataset, I'd just 
 use filesystem snapshots.  Your backup/restore would then only take as long 
 as it takes for you to scp the database from one machine to another.

 Regards,
 Gavin Towey

Thanks! Will the Maatkit script work on a simple --all-databases dump?

As for the copy, it's a temporary thing. This is just being done weekly
while we test out the new server. Once it's live, the new server will
indeed be backed up via LVM snapshots. :)

Madi





Re: Importing large databases faster

2009-12-16 Thread Shawn Green

Madison Kelly wrote:

Hi all,

I've got a fairly large set of databases I'm backing up each Friday. The 
dump takes about 12.5h to finish, generating a ~172 GB file. When I try 
to load it though, *after* manually dumping the old databases, it takes 
1.5~2 days to load the same databases. I am guessing this is, at least 
in part, due to indexing.


My question is; Given an empty target DB and a dump file generated via:

ssh r...@server mysqldump --all-databases -psecret > /path/to/backup.sql

How can I go about efficiently loading it into a new database? 
Specifically, can I disable triggers, indexes and what not until after 
load finishes? I can only imagine that a single "ok, go create your 
indexes now" at the end would be faster. Perhaps some way to hold off 
commits from happening as often? The target server has 32Gb of RAM, so I 
suspect I should be able to hold things in memory and commit to disk 
relatively rarely.


I am currently loading via this command:

mysql -psecret < /path/to/backup.sql



For that kind of dump, that kind of restore is what you get. Your 
current dump is generating gigabytes of INSERT statements that must be 
parsed and then processed.


To get a faster restore, use a different sort of dump. I suggest you 
compare your current process to one that uses the --tab option of 
mysqldump (to save the data) then uses LOAD DATA INFILE ... to 
repopulate your server.  This is the fastest known method to populate a 
table other than a direct file copy.
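A sketch of that approach (the database name, directory, and thread count are placeholders; the server must be able to write to the --tab directory):

```sh
# Dump schema as per-table .sql files and data as tab-delimited .txt files:
mysqldump --tab=/backup/dump -psecret mydb

# Restore: recreate the tables, then bulk-load each data file
# (mysqlimport issues LOAD DATA INFILE under the hood).
cat /backup/dump/*.sql | mysql -psecret mydb
mysqlimport -psecret --use-threads=4 mydb /backup/dump/*.txt
```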

--
Shawn Green, MySQL Senior Support Engineer
Sun Microsystems, Inc.
Office: Blountville, TN






Re: Newbie question: importing cvs settings - followup

2009-09-30 Thread Patrice Olivier-Wilson

Thanks again for assistance. FYI, I did track this thread down

http://ask.metafilter.com/57007/Missing-commas-in-CSV-file

(excerpt:
Maybe there is a space or something in the 14th column of the first 15 rows.
posted by. on February 14, 2007


It's a bug in Excel (not something you did wrong.)
posted by . February 14, 2007

)


--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





RE: Newbie question: importing cvs settings

2009-09-29 Thread Jerry Schwartz

-Original Message-
From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com]
Sent: Sunday, September 27, 2009 10:19 AM
Cc: 'mysql'
Subject: Re: Newbie question: importing cvs settings

Back again... I have 192 records to import, and tried my extra line at
the end hoping for a work around, but nope, it failed at line 17 again.


Invalid field count in CSV input on line 17.

Anyone have an idea why this might be happening?

[JS] This is just a shot in the dark, but Excel can be rather surprising when 
it puts out a CSV file. Depending upon the data, and exactly how you've 
specified the export, it can put double-quotes in unexpected places.

If you leave out the 17th line of data what happens?

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com





Patrice Olivier-Wilson wrote:
 Yep, typo ...:-(

 I did some screen shots of 2 tests. A workaround solution is to make a
 final entry in the csv file that I don't really need. Then everything
 up to that point gets imported ok using CSV method. The LOAD DATA
 method did not enter anything.

 My earlier assumption about line 17 was false. It was dependent on how
 many rows, and I had been using tests with same amount of data.
 As I said, very beginner level, so thanks for the patience.

 screenshots at
 http://biz-comm.com/mysqlprojects/

 thank you







--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





Re: Newbie question: importing cvs settings

2009-09-29 Thread Patrice Olivier-Wilson

Jerry Schwartz wrote:

[JS] This is just a shot in the dark, but Excel can be rather surprising when 
it puts out a CSV file. Depending upon the data, and exactly how you've 
specified the export, it can put double-quotes in unexpected places.


If you leave out the 17th line of data what happens?

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com


  
  

Thanks Jerry, Gavin and John:

Sorry for not an immediate response to all of your suggestions. Other 
demands were pulling at me since I first asked for assistance.



I opened the .csv file with Text Wrangler, and the commas are missing at 
about line 17



portfolio_ID,portfolio_sort_ID,portfolio_title,portfolio_bodycopy,portfolio_image,portfolio_before
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg,
,1,Kitchens,,123.jpg
,1,Kitchens,,123.jpg
,1,Kitchens,,123.jpg

So not sure why that is happening. I'm on a Mac, using Excel 2008. But 
at least you all have helped me find what it is doing, so now, I can at 
least pull into a txt file and make corrections manually.


My thanks to all of you for your help and patience.

(above represents an empty portfolio_ID, a filled in sort_ID, title, 
empty bodycopy, image, empty before)


Thank you.

--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





RE: Newbie question: importing cvs settings

2009-09-28 Thread Gavin Towey
1. Try opening the csv file in a text editor; viewing it in a spreadsheet 
seems to be hiding some extra formatting or lines that may be causing 
problems.

2. Try importing through the mysql CLI.  From the screenshot you posted, it 
looks like PMA is parsing the file and creating an insert statement for each 
line.  It may be handling some of that data incorrectly.  Using the CLI you'll 
get better feedback about what the error, if any, is.
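A sketch of such a CLI import (the table name, file path, and terminators are assumptions; adjust them to match the actual file — Excel on a Mac may emit \r or \r\n line endings):

```sql
LOAD DATA LOCAL INFILE '/path/to/portfolios.csv'
INTO TABLE portfolios
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;   -- skip the header row
```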

Regards,
Gavin Towey

-Original Message-
From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com]
Sent: Saturday, September 26, 2009 11:02 AM
To: 'mysql'
Subject: Re: Newbie question: importing cvs settings

Yep, typo ...:-(

I did some screen shots of 2 tests. A workaround solution is to make a
final entry in the csv file that I don't really need. Then everything up
to that point gets imported ok using CSV method. The LOAD DATA method
did not enter anything.

My earlier assumption about line 17 was false. It was dependent on how
many rows, and I had been using tests with same amount of data.
As I said, very beginner level, so thanks for the patience.

screenshots at
http://biz-comm.com/mysqlprojects/

thank you

John wrote:
 I assume you mean csv not cvs!

 What is the error you get when the import fails? What version of MySQL are
 you using? Can you post the output of SHOW CREATE TABLE for the table you
 are trying to load the file in to and a sample of the csv which is failing
 to load?

 Do you get the same error if you try and load the files using MySQL client
 and the LOAD DATA INFILE command? (See this link for details on how to use
 LOAD DATA INFILE http://dev.mysql.com/doc/refman/5.1/en/load-data.html)

 Regards

 John Daisley
 MySQL & Cognos Contractor

 Certified MySQL 5 Database Administrator (CMDBA)
 Certified MySQL 5 Developer (CMDEV)
 IBM Cognos BI Developer

 Telephone +44 (0)7812 451238
 Email j...@butterflysystems.co.uk

 -Original Message-
 From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com]
 Sent: 26 September 2009 17:08
 To: mysql
 Subject: Newbie question: importing cvs settings

 Greetings:

 I have a project for which need to import cvs files into db.

 I can do so up to a point. The import will only do 16 lines,
 consistently. Error is failing at line 17.

 Steps:

 create table fields in Excel document, where they all match database fields
 enter information in several of the columns, but not all as client will
 be filling it in online (leaving ID blank)
 save excel to .cvs
 log into phpMyAdmin
 import cvs


 I've experimented with several settings in the import, but consistently,
 it fails at line 17, even with different .cvs files.


 Any guidance, most appreciated.




--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com




The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.




Re: Newbie question: importing cvs settings

2009-09-27 Thread Patrice Olivier-Wilson
Back again... I have 192 records to import, and tried my extra line at 
the end hoping for a work around, but nope, it failed at line 17 again.



Invalid field count in CSV input on line 17.

Anyone have an idea why this might be happening?



Patrice Olivier-Wilson wrote:

Yep, typo ...:-(

I did some screen shots of 2 tests. A workaround solution is to make a 
final entry in the csv file that I don't really need. Then everything 
up to that point gets imported ok using CSV method. The LOAD DATA 
method did not enter anything.


My earlier assumption about line 17 was false. It was dependent on how 
many rows, and I had been using tests with same amount of data.

As I said, very beginner level, so thanks for the patience.

screenshots at
http://biz-comm.com/mysqlprojects/

thank you



 






--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





RE: Newbie question: importing cvs settings

2009-09-27 Thread John
Patrice,

Can you post the output of SHOW CREATE TABLE for the table you are having
difficulty inserting into? Without knowing the table structure its very
difficult to work out why your data load is failing.

Regards

John Daisley
MySQL & Cognos Contractor

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer (CMDEV)
IBM Cognos BI Developer

Telephone +44 (0)7812 451238
Email j...@butterflysystems.co.uk


-Original Message-
From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com] 
Sent: 27 September 2009 15:19
Cc: 'mysql'
Subject: Re: Newbie question: importing cvs settings

Back again... I have 192 records to import, and tried my extra line at 
the end hoping for a work around, but nope, it failed at line 17 again.


Invalid field count in CSV input on line 17.

Anyone have an idea why this might be happening?



Patrice Olivier-Wilson wrote:
 Yep, typo ...:-(

 I did some screen shots of 2 tests. A workaround solution is to make a 
 final entry in the csv file that I don't really need. Then everything 
 up to that point gets imported ok using CSV method. The LOAD DATA 
 method did not enter anything.

 My earlier assumption about line 17 was false. It was dependent on how 
 many rows, and I had been using tests with same amount of data.
 As I said, very beginner level, so thanks for the patience.

 screenshots at
 http://biz-comm.com/mysqlprojects/

 thank you


  




-- 
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com



No virus found in this incoming message.
Checked by AVG - www.avg.com 
Version: 8.5.409 / Virus Database: 270.13.113/2396 - Release Date: 09/26/09
05:51:00





Newbie question: importing cvs settings

2009-09-26 Thread Patrice Olivier-Wilson

Greetings:

I have a project for which need to import cvs files into db.

I can do so up to a point. The import will only do 16 lines, 
consistently. Error is failing at line 17.


Steps:

create table fields in Excel document, where they all match database fields
enter information in several of the columns, but not all as client will 
be filling it in online (leaving ID blank)

save excel to .cvs
log into phpMyAdmin
import cvs


I've experimented with several settings in the import, but consistently, 
it fails at line 17, even with different .cvs files.



Any guidance, most appreciated.

--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





RE: Newbie question: importing cvs settings

2009-09-26 Thread John
I assume you mean csv not cvs!

What is the error you get when the import fails? What version of MySQL are
you using? Can you post the output of SHOW CREATE TABLE for the table you
are trying to load the file in to and a sample of the csv which is failing
to load?

Do you get the same error if you try and load the files using MySQL client
and the LOAD DATA INFILE command? (See this link for details on how to use
LOAD DATA INFILE http://dev.mysql.com/doc/refman/5.1/en/load-data.html)

Regards

John Daisley
MySQL & Cognos Contractor

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer (CMDEV)
IBM Cognos BI Developer

Telephone +44 (0)7812 451238
Email j...@butterflysystems.co.uk

-Original Message-
From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com] 
Sent: 26 September 2009 17:08
To: mysql
Subject: Newbie question: importing cvs settings

Greetings:

I have a project for which need to import cvs files into db.

I can do so up to a point. The import will only do 16 lines, 
consistently. Error is failing at line 17.

Steps:

create table fields in Excel document, where they all match database fields
enter information in several of the columns, but not all as client will 
be filling it in online (leaving ID blank)
save excel to .cvs
log into phpMyAdmin
import cvs


I've experimented with several settings in the import, but consistently, 
it fails at line 17, even with different .cvs files.


Any guidance, most appreciated.

-- 
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com








Re: Newbie question: importing cvs settings

2009-09-26 Thread Patrice Olivier-Wilson

Yep, typo ...:-(

I did some screen shots of 2 tests. A workaround solution is to make a 
final entry in the csv file that I don't really need. Then everything up 
to that point gets imported ok using CSV method. The LOAD DATA method 
did not enter anything.


My earlier assumption about line 17 was false. It was dependent on how 
many rows, and I had been using tests with same amount of data.

As I said, very beginner level, so thanks for the patience.

screenshots at
http://biz-comm.com/mysqlprojects/

thank you

John wrote:

I assume you mean csv not cvs!

What is the error you get when the import fails? What version of MySQL are
you using? Can you post the output of SHOW CREATE TABLE for the table you
are trying to load the file in to and a sample of the csv which is failing
to load?

Do you get the same error if you try and load the files using MySQL client
and the LOAD DATA INFILE command? (See this link for details on how to use
LOAD DATA INFILE http://dev.mysql.com/doc/refman/5.1/en/load-data.html)

Regards

John Daisley
MySQL & Cognos Contractor

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer (CMDEV)
IBM Cognos BI Developer

Telephone +44 (0)7812 451238
Email j...@butterflysystems.co.uk

-Original Message-
From: Patrice Olivier-Wilson [mailto:b...@biz-comm.com] 
Sent: 26 September 2009 17:08

To: mysql
Subject: Newbie question: importing cvs settings

Greetings:

I have a project for which need to import cvs files into db.

I can do so up to a point. The import will only do 16 lines, 
consistently. Error is failing at line 17.


Steps:

create table fields in Excel document, where they all match database fields
enter information in several of the columns, but not all as client will 
be filling it in online (leaving ID blank)

save excel to .cvs
log into phpMyAdmin
import cvs


I've experimented with several settings in the import, but consistently, 
it fails at line 17, even with different .cvs files.



Any guidance, most appreciated.

  



--
Patrice Olivier-Wilson
888-385-7217
http://biz-comm.com





Importing CSV into MySQL

2009-09-16 Thread Tim Thorburn

Hi,

I'm sure I'm missing something quite obvious here, but the caffeine 
hasn't quite kicked in yet.  As the subject says, I'm importing a csv 
file into MySQL 5.1.36 on WinXP using phpMyAdmin 3.3.2 (Apache 2.2.11 
and PHP 5.3.0 should it matter).  I've done this many times, however I'm 
now getting a strange error regarding a character in my csv file.  I've 
tried both MyISAM and InnoDB tables (I don't think that would make any 
difference) and switching between utf8_general_ci and utf8_unicode_ci 
(thought this might make a difference, but sadly no).


The error message I get is #1366 - Incorrect string value: '\xE8s' when 
it attempts to insert the word Radès.  I'm using a simple varchar, 
nothing really special there.  After googling for a while, it seems the 
most common option is to use PHP to convert the character into UTF-8 
format before putting it into MySQL - the problem is some of these csv's 
are several hundred MB apiece, so I'd much rather simply import than go 
this route.


Any thoughts?

TIA,
-Tim


RE: Importing CSV into MySQL

2009-09-16 Thread Gavin Towey
Hi Tim,

Try using LOAD DATA INFILE from the mysql CLI.  PMA can often introduce 
unexpected behavior for export/import.

Regards,
Gavin Towey

-Original Message-
From: Tim Thorburn [mailto:webmas...@athydro.com]
Sent: Wednesday, September 16, 2009 7:14 AM
To: mysql@lists.mysql.com
Subject: Importing CSV into MySQL

Hi,

I'm sure I'm missing something quite obvious here, but the caffeine
hasn't quite kicked in yet.  As the subject says, I'm importing a csv
file into MySQL 5.1.36 on WinXP using phpMyAdmin 3.3.2 (Apache 2.2.11
and PHP 5.3.0 should it matter).  I've done this many times, however I'm
now getting a strange error regarding a character in my csv file.  I've
tried both MyISAM and InnoDB tables (I don't think that would make any
difference) and switching between utf8_general_ci and utf8_unicode_ci
(thought this might make a difference, but sadly no).

The error message I get is #1366 - Incorrect string value: '\xE8s' when
it attempts to insert the word Radès.  I'm using a simple varchar,
nothing really special there.  After googling for a while, it seems the
most common option is to use PHP to convert the character into UTF-8
format before putting it into MySQL - the problem is some of these csv's
are several hundred MB apiece, so I'd much rather simply import than go
this route.

Any thoughts?

TIA,
-Tim





Re: Duplicate key name when importing mysql dump file

2009-06-19 Thread Isart Montane
Hi Jason,

if you run mysql with -f it will ignore any errors and continue importing

cat aac.sql | mysql -f -u root AAC

Isart

On Wed, Jun 17, 2009 at 8:59 PM, Jason Novotny jason.novo...@gmail.com wrote:

  Hi,

   I'm trying to import a dumpfile like so:

 cat aac.sql | mysql -u root AAC

 It all runs fine until I get something like:

 ERROR 1061 (42000) at line 5671: Duplicate key name 'FK_mediaZip_to_zipSet'


 Is there a way I can tell it to ignore or replace the key?

 Thanks, Jason





Re: Duplicate key name when importing mysql dump file

2009-06-19 Thread ars k
Hi Jason,
You may have to run ANALYZE TABLE ... for the particular table for which you
are seeing the error, so it will rebuild the index statistics. This would be
the best way to preserve your data.

We can use the method Mr. Isart suggested, but it will ignore the error
and may also lead to data loss.

Regards,
Vinodh.k


On Sat, Jun 20, 2009 at 12:19 AM, Isart Montane isart.mont...@gmail.com wrote:

 Hi Jason,

 if you run mysql with -f it will ignore any errors and continue importing

 cat aac.sql | mysql -f -u root AAC

 Isart

 On Wed, Jun 17, 2009 at 8:59 PM, Jason Novotny jason.novo...@gmail.com
 wrote:

   Hi,
 
I'm trying to import a dumpfile like so:
 
  cat aac.sql | mysql -u root AAC
 
  It all runs fine until I get something like:
 
  ERROR 1061 (42000) at line 5671: Duplicate key name
 'FK_mediaZip_to_zipSet'
 
 
  Is there a way I can tell it to ignore or replace the key?
 
  Thanks, Jason
 
 
 



Duplicate key name when importing mysql dump file

2009-06-17 Thread Jason Novotny
  
Hi,


   I'm trying to import a dumpfile like so:

cat aac.sql | mysql -u root AAC

It all runs fine until I get something like:

ERROR 1061 (42000) at line 5671: Duplicate key name 'FK_mediaZip_to_zipSet'


Is there a way I can tell it to ignore or replace the key?

Thanks, Jason




Importing MS SQL exported CSV files adds spaces

2008-11-03 Thread Ali, Saqib
I exported a large data set from Microsoft SQL Server in CSV
format. However, whenever I try to import that data into a MySQL server
running on Linux, it adds a space between each character in each
field.

Essentially:
Saqib Ali
becomes
S a q i b  A l i

I have tried to use the dos2unix cmd on linux, but that didn't help either.

I am using the following SQL to import the data:
LOAD DATA LOCAL INFILE '/home/saqib/data.csv' INTO TABLE apps FIELDS
TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';


Any thoughts?


saqib
http://doctrina.wordpress.com/




Re: Importing MS SQL exported CSV files adds spaces

2008-11-03 Thread Ali, Saqib
try OPTIONALLY ENCLOSED BY '"'

already tried that. no help :(

saqib
http://doctrina.wordpress.com/




Re: Importing MS SQL exported CSV files adds spaces

2008-11-03 Thread Grant Allen

Ali, Saqib wrote:

I exported a large data set from Microsoft SQL Server in CSV
format. However, whenever I try to import that data into a MySQL server
running on Linux, it adds a space between each character in each
field.

Essentially:
Saqib Ali
becomes
S a q i b  A l i

I have tried to use the dos2unix cmd on linux, but that didn't help either.

I am using the following SQL to import the data:
LOAD DATA LOCAL INFILE '/home/saqib/data.csv' INTO TABLE apps FIELDS
TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';



Your SQL Server export will be Unicode, UTF-8 or UTF-16 ... thus the 
appearance of the extra spaces.  (That's just coincidence, they're not 
really extra spaces ... the text editor you're viewing the file with is 
treating the data as ASCII, and not recognising the multi-byte nature of 
the characters.)


Change your MySQL character set for the apps table appropriately (i.e. 
make it match the character set of the data), and see what happens.  
Alternatively, create a staging table with the right character set to 
load the data into first, and then convert it using an insert ... 
cast... style statement into your apps table (basic ETL).
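Grant's first suggestion - making the table's character set match the data - might look like the following sketch. It assumes the export turns out to be UTF-8; if it is really UTF-16/ucs2, converting the file to UTF-8 first (e.g. with iconv) may be necessary, since MySQL versions of that era cannot LOAD DATA from ucs2 files:

```sql
-- Sketch under the assumption that the exported file is UTF-8 encoded.
ALTER TABLE apps CONVERT TO CHARACTER SET utf8;

LOAD DATA LOCAL INFILE '/home/saqib/data.csv' INTO TABLE apps
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```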


Ciao
Fuzzy
:-)


Dazed and confused about technology for 20 years
http://fuzzydata.wordpress.com/





Importing - Adding Fields Into MySql From A List

2008-04-14 Thread revDAVE
Newbie question!

I have a list of field names from another database (not mysql) - like:

name
phone1
phone2
street
city
state
zip
info
etc (a bunch more fields)

Q: Is there a way I can add these to an existing empty/blank table?

Maybe I can use:

- phpMyAdmin ?
- sql commands with php - loop thru a list of these names?
- import field names from CSV?
- some other method?

I tried a test with php and got NO errors - but no result either ( looked
with phpMyAdmin after - the table didn't add or drop the fields... Nothing
changed)

<?php
  
  $sql = 'ALTER TABLE `ztest` ADD `myfield2` VARCHAR(10) NOT NULL;';
  $sql = 'ALTER TABLE `ztest` DROP `myfield1`;';
  
?>



I have phpMyAdmin, and if there's a way to add tables with PHP - maybe that would
work also.

If I can just get all the field names in the table as text fields - that
would be ok for now - then I can individually change the field type by hand
w phpMyAdmin...




--
Thanks - RevDave
Cool @ hosting4days . com
[db-lists]







RE: Importing - Adding Fields Into MySql From A List

2008-04-14 Thread Garris, Nicole
 
It sounds like you want to easily create a new MySQL table that is a
copy of a table in a different DBMS.

The way I would do it is generate a DDL script from the other DBMS
(create table etc.) For example, SQL Server has a generate script
wizard that does it for you automatically. Then take the DDL and modify
it to meet your MySQL needs. Then you can run the script in MySQL Query
Browser (make sure you're pointed at the right database, or put a USE
command at the beginning of your script). Presto, you have the table you
want.
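Applied to revDAVE's field list, the MySQL-ified DDL might look something like this (sketch: the table name and all column types are guesses to tighten up afterwards):

```sql
-- Sketch: "contacts" and every type here are placeholders; refine once
-- the real data types from the source DBMS are known.
CREATE TABLE contacts (
  name    VARCHAR(100),
  phone1  VARCHAR(20),
  phone2  VARCHAR(20),
  street  VARCHAR(100),
  city    VARCHAR(50),
  state   VARCHAR(20),
  zip     VARCHAR(10),
  info    TEXT
);
```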

-Original Message-
From: revDAVE [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 14, 2008 8:25 AM
To: mysql@lists.mysql.com
Subject: Importing - Adding Fields Into MySql From A List

Newbie question!

I have a list of field names from another database (not mysql) - like:

name
phone1
phone2
street
city
state
zip
info
etc (a bunch more fields)

Q: Is there a way I can add these to an existing empty/blank table?

Maybe I can use:

- phpMyAdmin ?
- sql commands with php - loop thru a list of these names?
- import field names from CSV?
- some other method?

I tried a test with php and got NO errors - but no result either (
looked with phpMyAdmin after - the table didn't add or drop the
fields... Nothing
changed)

<?php
  
  $sql = 'ALTER TABLE `ztest` ADD `myfield2` VARCHAR(10) NOT NULL;';
  $sql = 'ALTER TABLE `ztest` DROP `myfield1`;';
  
?>



I have phpMyAdmin, and if there's a way to add tables with PHP - maybe that
would work also.

If I can just get all the field names in the table as text fields - that
would be ok for now - then I can individually change the field type by
hand w phpMyAdmin...




--
Thanks - RevDave
Cool @ hosting4days . com
[db-lists]









Re: Importing and exporting from MySQL, escape slash problem

2008-03-05 Thread Sebastian Mendel

Dave M G schrieb:

PHP List, MySQL List

In my PHP environment, I have Magic Quotes turned off, and I use the 
mysql_real_escape_string() function to clean strings of SQL syntax before 
inserting them into my database.


So the data stored in my database does not have escape characters in it. 
Particularly, double and single quotes don't have slashes in front of them.


This seems to work fine so long as I'm reading data into and out of the 
database from within my scripts.


However, when I backup and import databases - I use the phpMyAdmin 
interface - they have escape slashes in front of every double and single 
quote characters. I'm not sure if it's on the export or import where 
they get added in.


what version of phpMyAdmin?


I've looked through the phpMyAdmin online documentation, and I can't see 
any option to control the presence of escape slashes. It seems to me 
that if it adds them in when exporting, it should take them out when 
importing. Or vice versa, but in either case be consistent.


I just want my database to be exactly as it is before any export or 
import options.


I'm a little muddled as to where I'm making the mistake. Can anyone 
advise on the best practice for preserving my database as-is when 
backing up and restoring?


this 'bug' is unknown to me, did you try to reproduce it on the phpMyAdmin demo 
servers?


http://pma.cihar.com/

http://wiki.cihar.com/pma/Getting_Help

--
Sebastian




Re: [PHP] Importing and exporting from MySQL, escape slash problem

2008-03-04 Thread Dave M G

Richard, Jed,

Thank you for replying.

Richard said:

It's possible that there is an .htaccess file in phpMyAdmin that has
Magic Quotes on that is messing you up...


The .htaccess file for phpMyAdmin says php_flag magic_quotes_gpc Off, 
so I guess that means I'm okay there.



Other than that, it's specific to phpMyAdmin, so maybe ask those guys
what they did...


I joined their list through Sourceforge, but I haven't seen any mail 
from it, and any mail I send gets bounced back to me. I'm not sure what 
the issue is.


Jed said:
If you're having trouble importing a mysql dump using phpMyAdmin, it might be simpler not to use it, and use mysqldump instead. 


I suppose I'll have to if I can't get phpMyAdmin to behave. It's too 
bad, though, as phpMyAdmin is so convenient otherwise.


--
Dave M G




Re: [PHP] Importing and exporting from MySQL, escape slash problem

2008-03-02 Thread Richard Lynch
It's possible that there is an .htaccess file in phpMyAdmin that has
Magic Quotes on that is messing you up...

Other than that, it's specific to phpMyAdmin, so maybe ask those guys
what they did...

On Sat, March 1, 2008 7:38 pm, Dave M G wrote:
 PHP List, MySQL List

 In my PHP environment, I have Magic Quotes turned off, and I use the
 mysql_real_escape_string() function to clean strings of SQL syntax before
 inserting them into my database.

 So the data stored in my database does not have escape characters in
 it.
 Particularly, double and single quotes don't have slashes in front of
 them.

 This seems to work fine so long as I'm reading data into and out of
 the
 database from within my scripts.

 However, when I backup and import databases - I use the phpMyAdmin
 interface - they have escape slashes in front of every double and
 single
 quote characters. I'm not sure if it's on the export or import where
 they get added in.

 I've looked through the phpMyAdmin online documentation, and I can't
 see
 any option to control the presence of escape slashes. It seems to me
 that if it adds them in when exporting, it should take them out when
 importing. Or vice versa, but in either case be consistent.

 I just want my database to be exactly as it is before any export or
 import options.

 I'm a little muddled as to where I'm making the mistake. Can anyone
 advise on the best practice for preserving my database as-is when
 backing up and restoring?

 Thanks for any advice.

 --
 Dave M G

 --
 PHP General Mailing List (http://www.php.net/)
 To unsubscribe, visit: http://www.php.net/unsub.php




-- 
Some people have a gift link here.
Know what I want?
I want you to buy a CD from some indie artist.
http://cdbaby.com/from/lynch
Yeah, I get a buck. So?





Re: Importing and exporting from MySQL, escape slash problem

2008-03-02 Thread Jed Reynolds

Dave M G wrote:

PHP List, MySQL List

In my PHP environment, I have Magic Quotes turned off, and I use the 
mysql_real_escape_string() function to clean strings of SQL syntax before 
inserting them into my database.


So the data stored in my database does not have escape characters in 
it. Particularly, double and single quotes don't have slashes in front 
of them.


This seems to work fine so long as I'm reading data into and out of 
the database from within my scripts.


However, when I backup and import databases - I use the phpMyAdmin 
interface - they have escape slashes in front of every double and 
single quote characters. I'm not sure if it's on the export or import 
where they get added in.


It probably depends on what you mean by exporting... if you do a SELECT 
* INTO OUTFILE it might remove them. If you're doing a mysqldump, it 
will add them because it's constructing SQL statements, and they have 
to be escaped.


I would take phpMyAdmin out of the equation and just use mysqldump. You 
should have no problem doing something like creating a scratch table, 
dumping it, and re-importing from the dump.


mysql> CREATE TABLE B LIKE A;
mysql> INSERT INTO B SELECT * FROM A WHERE c LIKE "%'%" LIMIT 10;
bash$ mysqldump --opt frumble B > b.sql
bash$ mysql -u -p frumble < b.sql

This should  replace table B in database frumble without incident. If 
you're having trouble importing a mysql dump using phpMyAdmin, it might 
be simpler not to use it, and use mysqldump instead.


HTH

Jed




Importing and exporting from MySQL, escape slash problem

2008-03-01 Thread Dave M G

PHP List, MySQL List

In my PHP environment, I have Magic Quotes turned off, and I use the 
mysql_real_escape_string() function to clean strings of SQL syntax before 
inserting them into my database.


So the data stored in my database does not have escape characters in it. 
Particularly, double and single quotes don't have slashes in front of them.


This seems to work fine so long as I'm reading data into and out of the 
database from within my scripts.


However, when I backup and import databases - I use the phpMyAdmin 
interface - they have escape slashes in front of every double and single 
quote characters. I'm not sure if it's on the export or import where 
they get added in.


I've looked through the phpMyAdmin online documentation, and I can't see 
any option to control the presence of escape slashes. It seems to me 
that if it adds them in when exporting, it should take them out when 
importing. Or vice versa, but in either case be consistent.


I just want my database to be exactly as it is before any export or 
import options.


I'm a little muddled as to where I'm making the mistake. Can anyone 
advise on the best practice for preserving my database as-is when 
backing up and restoring?


Thanks for any advice.

--
Dave M G




Re: importing a single table from mysqldump

2007-10-19 Thread Craig Huffstetler
A little bit easier of a way to do this could be the following command:

mysql -u[user] -h[host] -p [database] < [mysql dump file]

Make sure the database you're importing into is EMPTY (or at least the
TABLES you are importing to are empty...)

On 10/19/07, Werner Van Belle [EMAIL PROTECTED] wrote:

 Hello,

 If it is a dump you can pipe it into mysql. If you have a csv like file
 you
 can import it with LOAD DATA LOCAL INFILE like things. An example below:

 DROP TABLE IF EXISTS EnsgDescriptions;
 CREATE TABLE IF NOT EXISTS EnsgDescriptions
 (stable_id VARCHAR(128) PRIMARY KEY,
 description  VARCHAR(128));
 LOAD DATA LOCAL INFILE 'imports/ensgdescriptions.csv'
 INTO TABLE EnsgDescriptions;
 Wkr (don't write this in your script :-),

 --
 Dr. Werner Van Belle
 http://werner.sigtrans.org/





Re: importing a single table from mysqldump

2007-10-19 Thread Werner Van Belle
Hello,

If it is a dump you can pipe it into mysql. If you have a csv like file you 
can import it with LOAD DATA LOCAL INFILE like things. An example below:

DROP TABLE IF EXISTS EnsgDescriptions;
CREATE TABLE IF NOT EXISTS EnsgDescriptions
(stable_id VARCHAR(128) PRIMARY KEY,
description  VARCHAR(128));
LOAD DATA LOCAL INFILE 'imports/ensgdescriptions.csv'
INTO TABLE EnsgDescriptions;
Wkr (don't write this in your script :-),

-- 
Dr. Werner Van Belle
http://werner.sigtrans.org/




importing a single table from mysqldump

2007-10-19 Thread James Graham
Hi,
I have a 250 MB dump and need to extract some data.

I know how to export a single table, but not import a single table using
mysqldump.

Any ideas?

Thanks




Importing data from excel sheet

2007-04-07 Thread sam rumaizan
I have created table in mysql with 12 fields 
   
  Field1Field2 Field3  Field4 ……… Field12
   
   
  I have an excel sheet with 12 columns and 150 rows.
  My question is how can I import all of the columns from the excel sheet to my 
table without losing any information.  
   



 

 
-
Don't be flakey. Get Yahoo! Mail for Mobile and 
always stay connected to friends.

Re: Importing data from excel sheet

2007-04-07 Thread Andrew Dashin

Hi, sam

You can try to export table to file from Excel in CSV format.
And then import data from this file to mysql.
Something like this should help you:

LOAD DATA INFILE 'yourtabledata.txt' INTO TABLE yourtable FIELDS 
TERMINATED BY ',' ENCLOSED BY '' LINES TERMINATED BY '\r\n';


sam rumaizan wrote:
I have created table in mysql with 12 fields 
   
  Field1Field2 Field3  Field4 ……… Field12
   
   
  I have an excel sheet with 12 columns and 150 rows.
  My question is how can I import all of the columns from the excel sheet to my table without losing any information.  
   




 

 
  


--
Andrew Dashin





Handling of NULL values when importing data from SQL Server BCP files

2007-03-28 Thread Dan Buettner

In the category of terrible, horrible, no good, very bad (but at least
documented) software behavior, I bumped into this today:
http://bugs.mysql.com/bug.php?id=14770
where the LOAD DATA INFILE command does not respect the default value of a
column if no value is supplied in the file.  Instead, it assigns zero to
numeric columns, empty string to character columns, etc., per
http://dev.mysql.com/doc/refman/5.0/en/load-data.html

This is awful!  I mean, it's documented behavior and all, but it's still
just bloody awful!

Has anyone else found a graceful solution to this problem?  The thing is, my
data files may have real zeros in numeric columns, whereas NULL of course
is used to indicate an unknown value.  When I migrate this application from
SQL Server to MySQL in the next couple of weeks, suddenly I'll have a lot of
zeros where I previously had NULLs.

Dan
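One workaround that may help here (sketch: it assumes a MySQL version that allows user variables in LOAD DATA ... SET, and the table/column names are hypothetical) is to read the ambiguous column into a variable and map empty fields back to NULL explicitly:

```sql
-- Sketch: 't', 'id', and 'qty' are invented names; adjust the delimiter
-- to match the BCP file format.
LOAD DATA INFILE '/tmp/table.bcp'
INTO TABLE t
FIELDS TERMINATED BY '\t'
(id, @qty)
SET qty = NULLIF(@qty, '');  -- an empty field becomes NULL instead of 0
```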


Importing Multiple SQL Files/Database Tables

2007-03-20 Thread David Blomstrom
I'm in the process of moving a MySQL database with about 170 tables from my PC 
to a new MacBook Pro. On my PC, I exported each database table as a SQL file, 
then copied a folder containing all these files to my Mac.

I'd like to know if there's a way to import all these SQL files into a database 
on my Mac, or do I have to import them one at a time. I generally update my 
online database by deleting a table, then clicking a tab in phpMyAdmin that 
fetches a pop-up SQL window. However, it only allows me to import one SQL file 
at a time.

Regardless of how I import these SQL files, I next need to know if I can copy 
or move more than one table at a time to another database. I want to dump 
everything into a temporary database, then move only the tables I decide to 
keep into a permanent database.

I installed a program called MAMP, which includes MySQL 5 and the current 
phpMyAdmin. When I opened phpMyAdmin, I discovered that I can check multiple 
tables. But when I click the Operations tab to copy/move them to another 
database, it looks like I can only do one at a time.

Thanks.
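For the second part - moving several tables at once between databases on the same server - RENAME TABLE accepts a list of renames and can cross database boundaries, so one statement can move a whole batch (sketch; the database and table names here are invented):

```sql
-- Sketch with invented names; each pair moves one table out of the
-- temporary database into the permanent one.
RENAME TABLE scratch.countries TO keepers.countries,
             scratch.regions   TO keepers.regions,
             scratch.cities    TO keepers.cities;
```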

 
-
No need to miss a message. Get email on-the-go 
with Yahoo! Mail for Mobile. Get started.

Importing Text File Into mySQL

2006-11-17 Thread John Kopanas

I have a text file with over 500K rows of data in it.  The problem is
that the data is not separated by commas; instead each field occupies a fixed
range of characters.  So for instance:

ID 1 -11
NAME 12-50
COMPANY_NAME 51-100
...

How would you parse and import this data into MySQL?

Thanks for your help :-).

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




Re: Importing Text File Into mySQL

2006-11-17 Thread Bruce Ferrell


I did a little shell script to do it.  The key was the shell variable IFS.

Normally IFS holds space, tab, and newline; to make it work right I set it
as follows:

IFS="
"

Yes, that's a newline between the quotes.


John Kopanas wrote:

I have a text file with over 500K rows of data in it.  The problem is
that the data is no seperated by commas but instead  occupy a certain
amount of characters.  So for instance:

ID 1 -11
NAME 12-50
COMPANY_NAME 51-100
...

How would you parse import this data into mysql?

Thanks for your help :-).



--
One day at a time, one second if that's what it takes





Re: Importing Text File Into mySQL

2006-11-17 Thread Dan Nelson
In the last episode (Nov 17), John Kopanas said:
 I have a text file with over 500K rows of data in it.  The problem is
 that the data is no seperated by commas but instead  occupy a certain
 amount of characters.  So for instance:
 
 ID 1 -11
 NAME 12-50
 COMPANY_NAME 51-100
 ...
 
 How would you parse import this data into mysql?

Create a table that matches your layout exactly, then 

LOAD DATA ...
FIELDS TERMINATED BY ''
ENCLOSED BY ''; 

It's mentioned in one of the comments at
http://dev.mysql.com/doc/refman/5.0/en/load-data.html

-- 
Dan Nelson
[EMAIL PROTECTED]




Re: Re: Importing Text File Into mySQL

2006-11-17 Thread John Kopanas

I am trying to figure out how this would work.  How does LOAD DATA
figure out where one column begins and another ends when some of the
data are addresses with spaces in them?

On 11/17/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 17), John Kopanas said:
 I have a text file with over 500K rows of data in it.  The problem is
 that the data is no seperated by commas but instead  occupy a certain
 amount of characters.  So for instance:

 ID 1 -11
 NAME 12-50
 COMPANY_NAME 51-100
 ...

 How would you parse import this data into mysql?

Create a table that matches your layout exactly, then

LOAD DATA ...
FIELDS TERMINATED BY ''
FIELDS ENCLOSED BY '';

It's mentioned in one of the comments at
http://dev.mysql.com/doc/refman/5.0/en/load-data.html

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




Re: Importing Text File Into mySQL

2006-11-17 Thread Dan Nelson
In the last episode (Nov 17), John Kopanas said:
 On 11/17/06, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Nov 17), John Kopanas said:
  I have a text file with over 500K rows of data in it.  The problem
  is that the data is no seperated by commas but instead occupy a
  certain amount of characters.  So for instance:
 
  ID 1 -11
  NAME 12-50
  COMPANY_NAME 51-100
  ...
 
  How would you parse import this data into mysql?
 
 Create a table that matches your layout exactly, then
 
 LOAD DATA ...
 FIELDS TERMINATED BY ''
 FIELDS ENCLOSED BY '';
 
 It's mentioned in one of the comments at
 http://dev.mysql.com/doc/refman/5.0/en/load-data.html

 I am trying to figure out how this would work?  How does LOAD DATA
 figure out when one column begins and another ends when some of the
 data are addresses with spaces in them?

It goes by the field widths in the table you're loading into.  So set
it up like:

id VARCHAR(10),
name VARCHAR(38),
company_name VARCHAR(49),

etc.  If you've got any numeric fields, you'll probably have to make
them varchars now, then convert them to numbers with an ALTER TABLE
later (unless your fields happen to hit the magic widths listed at 
http://dev.mysql.com/doc/refman/5.0/en/load-data.html ).  Actually,
using DECIMAL fields should work, since you can specify their widths.
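Putting Dan's suggestion together with the layout from the original post, the whole load might be sketched as follows. The empty TERMINATED BY / ENCLOSED BY strings tell LOAD DATA to use fixed-row format, splitting each line purely by the declared column widths (widths below assume the stated ranges are inclusive: ID 1-11, NAME 12-50, COMPANY_NAME 51-100):

```sql
-- Sketch: column widths computed from the fixed-width layout.
CREATE TABLE import_raw (
  id           CHAR(11),
  name         CHAR(39),
  company_name CHAR(50)
);

LOAD DATA LOCAL INFILE '/path/to/data.txt'
INTO TABLE import_raw
FIELDS TERMINATED BY '' ENCLOSED BY ''
LINES TERMINATED BY '\n';
```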

-- 
Dan Nelson
[EMAIL PROTECTED]




Re: Importing Text File Into mySQL

2006-11-17 Thread Visolve DB Team

Hi,

LOAD DATA has restrictions only on BLOB, TEXT, ENUM and fixed-width data; other
types are equally supported [as with an INSERT statement].  Likewise for
mysqlimport.


If the data file has no delimiter other than spaces [even the name,
address, etc. contain spaces], then the target table will not be the one that
is expected, though it is possible to import the data. It also depends on the
width of the fields, and whether the data and the field names correspond.


Thanks
ViSolve DB Team.
- Original Message - 
From: John Kopanas [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Saturday, November 18, 2006 4:35 AM
Subject: Importing Text File Into mySQL



I have a text file with over 500K rows of data in it.  The problem is
that the data is no seperated by commas but instead  occupy a certain
amount of characters.  So for instance:

ID 1 -11
NAME 12-50
COMPANY_NAME 51-100
...

How would you parse import this data into mysql?

Thanks for your help :-).

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info









RE: utf8 importing problem

2006-10-30 Thread Jerry Schwartz
Most likely the UTF8 is still in the database, but whatever program you are
using to view it is not displaying UTF8 properly. MySQL's command line
program will not, for example, even if you SET NAMES utf8.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Saturday, October 28, 2006 2:22 AM
 To: mysql@lists.mysql.com
 Subject: utf8 importing problem

 I use MySQL database with utf8 character set and utf8_czech_ci
 collation.
 It works well on Linux server but when I try to
 export the data and import into the same database but running
 on XP machine the utf8 is
 gone.Instead of a proper coding there are some strange characters.

 I used
 mysqldump --default-character-set=utf8 mimi > /home/Result.sql
 to export data to /home/Result.sql file on Linux machine.
 Then I downloaded the file to my XP and here I used
 mysql --default-character-set=utf8 mimi < Result.sql
 to import data.
 Is it correct?

 Any help would be appreciated

 L.

 --- End of forwarded message ---










RE: utf8 importing problem

2006-10-30 Thread Jerry Schwartz
Remember that my MySQL skills are at the beginner level; and this whole
Unicode / utf8 business always gives me a headache.

Any Unicode or utf8 characters with diacritical marks will look funky in
DOS.

Normally what I do is take my data, convert it from utf8 to utf8, and see if
the results match what I originally had. I'm not sure how reliable this
technique is, but it will at least recognize files which have characters
that are not utf8-encoded. This is counting on the fact that PHP doesn't
recognize this as a null conversion.

My particular problem was that I had files that were created in Windows
applications, using CP1252 encoding. I needed to get these into utf8, and
wanted to test my results. I have a PHP script to do this. My translator
seems to work: at least, my results look right after the translation. What
disturbs me is that the translated (utf8) files also seem to be CP1252,
which seems counter-intuitive. I did this awhile ago, so it may be that
CP1252 has alternate encodings that are a superset of utf8 (and I've
forgotten).

How are you looking at the data? Try directing some of it from MySQL into a
text file, and open the text file with Notepad. It will give you a choice of
opening the file as ANSI, Unicode, or utf8. Try opening the file in Unicode:
if it is not Unicode, then the letters with diacritical marks should look
wrong (or be missing).

If MySQL's engine is as blind as PHP, you can do this in SQL. The problem,
as I see it, is that the engine already believes the data is UTF8 so it
might not work.

SELECT COUNT(*) FROM table1 WHERE CONVERT(field1 USING utf8) != field1;

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Monday, October 30, 2006 1:05 PM
 To: Jerry Schwartz
 Subject: RE: utf8 importing problem

 Jerry,
 I checked the imported data ( sql file) and the data are in
 utf8 coding.
 Is there a way how to check the imported data in a table
 itself,  to eliminate the problem in the
 application?

 Thank you for your reply
 L.



  Most likely the UTF8 is still in the data base, but
 whatever program you are
  using to view it is not displaying UTF8 properly. MySQL's
 command line
  program will not, for example, even if you SET NAMES utf8.
 
 
  Regards,
 
  Jerry Schwartz
  Global Information Incorporated
  195 Farmington Ave.
  Farmington, CT 06032
 
  860.674.8796 / FAX: 860.674.8341
 
 
   -Original Message-
   From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
   Sent: Saturday, October 28, 2006 2:22 AM
   To: mysql@lists.mysql.com
   Subject: utf8 importing problem
  
   I use MySQL database with utf8 character set and utf8_czech_ci
   collation.
   It works well on Linux server but when I try to
   export the data and import into the same database but running
   on XP machine the utf8 is
   gone.Instead of a proper coding there are some strange characters.
  
   I used
   mysqldump --default-character-set=utf8 mimi > /home/Result.sql
   to export data to /home/Result.sql file on Linux machine.
   Then I downloaded the file to my XP and here I used
   mysql --default-character-set=utf8 mimi < Result.sql
   to import data.
   Is it correct?
  
   Any help would be appreciated
  
   L.
  
   --- End of forwarded message ---
  
  
  
 
 
 







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



utf8 importing problem

2006-10-28 Thread [EMAIL PROTECTED]
I use MySQL database with utf8 character set and utf8_czech_ci 
collation. 
It works well on a Linux server, but when I try to 
export the data and import it into the same database running on an XP machine, the 
utf8 is 
gone. Instead of the proper encoding there are some strange characters.

I used 
mysqldump --default-character-set=utf8 mimi > /home/Result.sql 
to export data to the /home/Result.sql file on the Linux machine. 
Then I downloaded the file to my XP machine, and here I used 
mysql --default-character-set=utf8 mimi < Result.sql 
to import the data. 
Is it correct? 

Any help would be appreciated

L.

--- End of forwarded message ---




Importing a Database (.mpb file)

2006-10-08 Thread David Blomstrom
Hopefully this will be the last question in this series. :)

I want to copy a database from my PC to my Apple laptop. I installed MySQL's 
GUI Tools on both computers, created a file named Backup.mpb on my PC, then put 
a copy of it on my Mac. Now I'm trying to figure out how to get Backup.mpb into 
my new MySQL program.

I thought perhaps I could import it with phpMyAdmin, just as I import SQL 
files, but that didn't work. If the database is named Sky, do I have to 
create that database in my new MySQL program before I can import the Sky 
backup? What's the easiest way to import it?

Thanks.







Re: Importing a Database (.mpb file)

2006-10-08 Thread Daniel Kasak

David Blomstrom wrote:

Hopefully this will be the last question in this series. :)

I want to copy a database from my PC to my Apple laptop. I installed MySQL's 
GUI Tools on both computers, created a file named Backup.mpb on my PC, then put 
a copy of it on my Mac. Now I'm trying to figure out how to get Backup.mbp into 
my new MySQL program.

I thought perhaps I could import it with phpMyAdmin, just as I import SQL files, but that 
didn't work. If the database is named Sky, do I have to create that database 
in my new MySQL program before I can import the Sky backup? What's the easiest way to 
import it?
  


What format is the data in? I don't know what an 'mpb' file is. Did you 
mean 'mdb'? If so, the easiest way is to use one of the data transfer 
utilities listed on the MySQL website. There are some plugins for MS 
Access that will set up your tables and export the data for you. It's 
been a very long time since I looked at these - probably 5 years or so.


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au




Error 1064 when importing 4.0 dump into 4.1 via command line

2006-09-26 Thread Curious George

I dumped a database from a 4.0 mysql and am attempting to move it to a
server running 4.1 - using the command line:
$ mysql -u root -pmypassword empty4.1db < 4.0dump.sql

The result:
ERROR 1064 (42000) at line 2: You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server version for the right
syntax to use near '77, 137289, 4)' at line 1

When I look at the file using vi, I see lots of ^M (all over the place). I
believe that these are DOS line breaks and I know how to get rid of them,
but it is a 35MB file and I don't know if I can do it over my slow
connection from home (via SSH).

My question is: are those ^Ms my problem? Or is that normal and I have
some other issue?

Thanks in advance for any pointers.


Re: Error 1064 when importing 4.0 dump into 4.1 via command line

2006-09-26 Thread Carlos Proal

I don't think that is the problem, but what do you mean by a slow
connection? Can't you run the dos2unix command on the remote server?

The error occurred on line 2. Did you see any special word in that line?
Can you share that line with us? Remember that each version may
reserve different words.

Carlos
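If the ^M carriage returns do turn out to be the culprit, they can be stripped without dos2unix: `tr -d '\r' < 4.0dump.sql > 4.0dump.unix.sql` on the server or, as a self-contained Python sketch, with placeholder file names:

```python
# Create a stand-in for a dump with DOS line endings, then strip the
# carriage returns -- the same effect as dos2unix or tr -d '\r'.
with open("dump_dos.sql", "wb") as f:
    f.write(b"INSERT INTO t VALUES (77, 137289, 4);\r\n")

data = open("dump_dos.sql", "rb").read().replace(b"\r", b"")
with open("dump_unix.sql", "wb") as f:
    f.write(data)
```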


On 9/26/06, Curious George [EMAIL PROTECTED] wrote:

I dumped a database from a 4.0 mysql and am attempting to move it to a
server running 4.1 - using the command line:
$ mysql -u root -pmypassword empty4.1db < 4.0dump.sql

The result:
ERROR 1064 (42000) at line 2: You have an error in your SQL syntax; check
the manual that corresponds to your MySQL server version for the right
syntax to use near '77, 137289, 4)' at line 1

When I look at the file using vi, I see lots of ^M (all over the place). I
believe that these are DOS line breaks and I know how to get rid of them,
but it is a 35MB file and I don't know if I can do it over my slow
connection from home (via SSH).

My question is: are those ^Ms my problem? Or is that normal and I have
some other issue?

Thanks in advance for any pointers.







Re: Importing large data sets

2006-07-13 Thread sheeri kritzer

On 6/22/06, Scott Haneda [EMAIL PROTECTED] wrote:

I have two chunks of data to import. One is in this format:
"01001 - AGAWAM, MA",01001,0,0,291,249,0,42.070206,-72.622739
Where it is comma-separated and partially quoted.

The other is in this format:
"99502 ANCHORAGE, AK","256","265","1424","1962","1131","528","643","6209","99502","61.096163","-150.093943"
Where everything is in quotes and comma-separated.

Can someone tell me how to rapidly import all this data into a table.



Check out mysqlimport:

http://dev.mysql.com/doc/refman/5.0/en/mysqlimport.html

particularly the

--fields-optionally-enclosed-by
and
--fields-terminated-by

options.

I'm sure it's too late for you, but you do NOT need to edit things in
a text editor.

-Sheeri
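Python's csv module applies the same optionally-enclosed-by rule, which makes it easy to sanity-check both formats before running mysqlimport. The quotes in the sample lines below are re-added by assumption (the archive stripped them), so verify against your real files:

```python
import csv
import io

# First sample: only the city field is quoted ("partially quoted").
partially_quoted = '"01001 - AGAWAM, MA",01001,0,0,291,249,0,42.070206,-72.622739\n'
# Second sample: every field is quoted.
fully_quoted = ('"99502 ANCHORAGE, AK","256","265","1424","1962","1131",'
                '"528","643","6209","99502","61.096163","-150.093943"\n')

for line in (partially_quoted, fully_quoted):
    row = next(csv.reader(io.StringIO(line)))
    # The comma embedded in the first field stays inside that field
    # either way, as --fields-optionally-enclosed-by='"' would keep it.
    print(row[0], "->", len(row), "fields")
```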




Re-importing a mysqldump file

2006-06-25 Thread Ian Barnes
Hi,

 

I need to auto re-import a mysqldump file, but when importing it I need to
make a certain field a value for all information imported.  For example my
db looks like this:

 

Id

Name

Value

Serverid

 

Now, on the remote server, name and value get exported, and when I re-import
it here, I need id to auto-increment and serverid to be set by something
that I specify depending on what file I'm importing. 

 

Is this possible? Or would the best way be to import the dumped file into a
temp table and then select out of the temp table into my correct table ?

 

Thanks for any help!

Ian
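The temp-table route suggested above does work. Here is a sketch of the pattern, using SQLite from Python purely so the example is self-contained (table and column names are hypothetical; with MySQL the staging table would be filled by replaying the dump rather than by executemany):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Target table: id auto-increments, serverid is stamped at import time.
cur.execute("CREATE TABLE target_tbl ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "name TEXT, value TEXT, serverid INTEGER)")

# Staging table matching the dump's exported columns (name, value).
cur.execute("CREATE TEMPORARY TABLE staging (name TEXT, value TEXT)")
cur.executemany("INSERT INTO staging VALUES (?, ?)",
                [("quota", "100"), ("region", "eu")])

# Copy across, stamping every row with the serverid chosen for this file.
server_id = 7
cur.execute("INSERT INTO target_tbl (name, value, serverid) "
            "SELECT name, value, ? FROM staging", (server_id,))

print(cur.execute("SELECT id, name, serverid FROM target_tbl").fetchall())
```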



Re: Re-importing a mysqldump file

2006-06-25 Thread John Meyer

Ian Barnes wrote:

Is this possible? Or would the best way be to import the dumped file into a
temp table and then select out of the temp table into my correct table ?




Anyway to use a trigger?

--
Online library -- http://pueblonative.110mb.com
126 books and counting.




RE: Re-importing a mysqldump file

2006-06-25 Thread Ian Barnes
Hi,

No unfortunately not...

Cheers
Ian

 -Original Message-
 From: John Meyer [mailto:[EMAIL PROTECTED]
 Sent: 25 June 2006 05:41 PM
 To: mysql@lists.mysql.com
 Subject: Re: Re-importing a mysqldump file
 
 Ian Barnes wrote:
  Is this possible? Or would the best way be to import the dumped file
 into a
  temp table and then select out of the temp table into my correct table ?
 
 
 
 Anyway to use a trigger?
 
 --
 Online library -- http://pueblonative.110mb.com
 126 books and counting.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]





Importing large data sets

2006-06-22 Thread Scott Haneda
I have two chunks of data to import. One is in this format:
"01001 - AGAWAM, MA",01001,0,0,291,249,0,42.070206,-72.622739
Where it is comma-separated and partially quoted.

The other is in this format:
"99502 ANCHORAGE, AK","256","265","1424","1962","1131","528","643","6209","99502","61.096163","-150.093943"
Where everything is in quotes and comma-separated.

Can someone tell me how to rapidly import all this data into a table.
-- 
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.






Re: Importing large data sets

2006-06-22 Thread Karl Larsen

Scott Haneda wrote:

I have two chunks of data to import. One is in this format:
"01001 - AGAWAM, MA",01001,0,0,291,249,0,42.070206,-72.622739
Where it is comma-separated and partially quoted.

The other is in this format:
"99502 ANCHORAGE, AK","256","265","1424","1962","1131","528","643","6209","99502","61.096163","-150.093943"
Where everything is in quotes and comma-separated.

Can someone tell me how to rapidly import all this data into a table.
  
   Well, you need to design a table that has columns for each set of 
data you're entering, and then you will use an INSERT INTO table_name VALUES 
( your data ); and you're going to need to change all those " to '. In 
Linux I use the joe editor and it's a simple matter to change all " to '.


Karl





RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

This is all I could see just before it happened:

mysql> show processlist;
+----+------+-----------+---------+---------+------+----------------+--------------------------------+
| Id | User | Host      | db      | Command | Time | State          | Info                           |
+----+------+-----------+---------+---------+------+----------------+--------------------------------+
| 11 | root | localhost | testing | Query   | 0    | creating table | CREATE TABLE `upgrade_history` (
  `upgrade_id` int(10) NOT NULL auto_increment,
  `upgrade_vers |
| 12 | root | localhost |         | Query   | 0    |                | show processlist               |
+----+------+-----------+---------+---------+------+----------------+--------------------------------+
2 rows in set (0.00 sec)


Then, after that, the following happened for 60 seconds and then it timed
out:

mysql> show processlist;
+----+------+-----------+---------+---------+------+-------+------------------+
| Id | User | Host      | db      | Command | Time | State | Info             |
+----+------+-----------+---------+---------+------+-------+------------------+
| 11 | root | localhost | testing | Sleep   | 0    |       |                  |
| 12 | root | localhost |         | Query   | 0    |       | show processlist |
+----+------+-----------+---------+---------+------+-------+------------------+
2 rows in set (0.00 sec)


I have put more RAM in (I now have 768MB, as well as 3GB of swap).

Thanks for the suggestions!

Cheers
Ian


 -Original Message-
 From: mos [mailto:[EMAIL PROTECTED]
 Sent: 09 June 2006 07:15 PM
 To: mysql@lists.mysql.com
 Subject: Re: Importing 3Gb File
 
 At 10:20 AM 6/8/2006, you wrote:
 Hi,
 
 I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
 and im coming across the following error:
 
 mysql: Out of memory (Needed 178723240 bytes)
 mysql: Out of memory (Needed 178719144 bytes)
 
 That error comes up after about 30 minutes worth of import and I would
 guess
 about half way through the import.
 
 What does Show Processlist say its doing just before the crash? I've had
 problems with Load Data on a very large table 500 million rows because the
 machine did not have enough memory to build the index. The data was loaded
 just fine, it's building the index that hung it out to dry because that
 eats up memory like crazy. How much memory do you have on your machine?
 The
 cheapest solution might be to go out and get a few more gb of RAM.
 
 Mike
 
 
 
 The file in question is a mysqldump
 -all-databases file from another server that im trying to import onto my
 desktop machine. I have tried to alter the my.cnf file a bit, and this is
 what it looks like:
 
 [client]
 #password   = your_password
 port= 3306
 socket  = /tmp/mysql.sock
 
 [mysqld]
 port= 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer = 64M
 max_allowed_packet = 8M
 table_cache = 512
 sort_buffer_size = 8M
 net_buffer_length = 8M
 myisam_sort_buffer_size = 45M
 set-variable=max_connections=300
 
 # Replication Master Server (default)
 # binary logging is required for replication
 #log-bin
 
 # required unique id between 1 and 2^32 - 1
 # defaults to 1 if master-host is not set
 # but will not function as a master if omitted
 server-id   = 1
 
 #bdb_cache_size = 4M
 #bdb_max_lock = 1
 
 # Uncomment the following if you are using InnoDB tables
 #innodb_data_home_dir = /var/db/mysql/
 #innodb_data_file_path = ibdata1:10M:autoextend
 #innodb_log_group_home_dir = /var/db/mysql/
 #innodb_log_arch_dir = /var/db/mysql/
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 #innodb_buffer_pool_size = 16M
 #innodb_additional_mem_pool_size = 2M
 # Set .._log_file_size to 25 % of buffer pool size
 #innodb_log_file_size = 5M
 #innodb_log_buffer_size = 8M
 #innodb_flush_log_at_trx_commit = 1
 #innodb_lock_wait_timeout = 50
 
 [mysqldump]
 quick
 max_allowed_packet = 16M
 
 [mysql]
 no-auto-rehash
 # Remove the next comment character if you are not familiar with SQL
 #safe-updates
 
 [isamchk]
 key_buffer = 10M
 sort_buffer_size = 20M
 read_buffer = 2M
 write_buffer = 2M
 
 [myisamchk]
 key_buffer = 10M
 sort_buffer_size = 20M
 read_buffer = 2M
 write_buffer = 2M
 
 [mysqlhotcopy]
 interactive-timeout
 
 
 Does anyone have any advice as to what I could change to make it import,
 and
 not break half way through. The command im running to import is: mysql -n
 -f
 -p  alldb.sql
 
 Thanks in advance,
 Ian
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]

RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

Yes, I don't actually know whether I have very large BLOBs, but the
possibility certainly exists.

I am running FreeBSD, so I don't have the ulimit program; the only program I
have is called limits, and this is what I get when running it:

[EMAIL PROTECTED] /home # limits
Resource limits (current):
  cputime  infinity secs
  filesize infinity kb
  datasize   524288 kb
  stacksize   65536 kb
  coredumpsize infinity kb
  memoryuseinfinity kb
  memorylocked infinity kb
  maxprocesses 5446
  openfiles   10893
  sbsize   infinity bytes
  vmemoryuse   infinity kb
[EMAIL PROTECTED] /home #

I have upped the RAM in the unit from 512MB to 768MB, so I think I should
have enough RAM now.

Any other ideas?

Thanks,
Ian

 -Original Message-
 From: Jeremy Cole [mailto:[EMAIL PROTECTED]
 Sent: 09 June 2006 05:01 PM
 To: Ian Barnes
 Cc: mysql@lists.mysql.com
 Subject: Re: Importing 3Gb File
 
 Hi Ian,
 
  I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
  and im coming across the following error:
 
  mysql: Out of memory (Needed 178723240 bytes)
  mysql: Out of memory (Needed 178719144 bytes)
 
 That error message comes from some single place trying to allocate 178MB
 at a single time.  Do you have large BLOBs in your data?
 
 This error message means that mysqld is being denied memory by the OS,
 either because you are actually out of memory, or because your ulimit
 has been reached (more likely).
 
 Check your ulimits for your system with ulimit -a, and adjust if
 necessary in the mysql.server or mysqld_safe script (those both run as
 root, so can adjust ulimits upwards).
 
 Regards,
 
 Jeremy
 
 --
 Jeremy Cole
 MySQL Geek, Yahoo! Inc.
 Desk: 408 349 5104
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]





RE: Importing 3Gb File

2006-06-11 Thread mos

At 04:41 PM 6/11/2006, Ian Barnes wrote:

Hi,



Then, after that, the following happened for 60 seconds and then it timed
out:

I have put more RAM in (I now have 768Mb's, as well as 3Gig of SWAP).

Thanks for the suggestions!

Cheers
Ian



Ian,
Timed out? Wasn't there an error? Have you looked at the MySQL Log 
files? http://dev.mysql.com/doc/refman/5.0/en/error-log.html
You're trying to load 3.2gb of data with only 768mb of ram and 3gb of swap 
and that may not be enough. Also make sure your MySQL data directory has 
enough disk space. You will need more than 3gb of space free in your data 
directory (I'm assuming you're using MyISAM tables and not InnoDb). With 
indexes you could easily be looking at double or triple that size (6-9gb of 
disk space).
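Free space in the data directory can be checked before starting (`df -h /path/to/datadir` from the shell, or from Python); the path below is a placeholder for whatever your datadir actually is:

```python
import shutil

# shutil.disk_usage reports totals for the filesystem holding the
# given path; substitute your real MySQL datadir for "/".
total, used, free = shutil.disk_usage("/")
print(f"free: {free / 2**30:.1f} GiB of {total / 2**30:.1f} GiB")
```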


Mike




 -Original Message-
 From: mos [mailto:[EMAIL PROTECTED]
 Sent: 09 June 2006 07:15 PM
 To: mysql@lists.mysql.com
 Subject: Re: Importing 3Gb File

 At 10:20 AM 6/8/2006, you wrote:
 Hi,
 
 I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
 and im coming across the following error:
 
 mysql: Out of memory (Needed 178723240 bytes)
 mysql: Out of memory (Needed 178719144 bytes)
 
 That error comes up after about 30 minutes worth of import and I would
 guess
 about half way through the import.

 What does Show Processlist say its doing just before the crash? I've had
 problems with Load Data on a very large table 500 million rows because the
 machine did not have enough memory to build the index. The data was loaded
 just fine, it's building the index that hung it out to dry because that
 eats up memory like crazy. How much memory do you have on your machine?
 The
 cheapest solution might be to go out and get a few more gb of RAM.

 Mike



 The file in question is a mysqldump
 -all-databases file from another server that im trying to import onto my
 desktop machine. I have tried to alter the my.cnf file a bit, and this is
 what it looks like:
 
 [client]
 #password   = your_password
 port= 3306
 socket  = /tmp/mysql.sock
 
 [mysqld]
 port= 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer = 64M
 max_allowed_packet = 8M
 table_cache = 512
 sort_buffer_size = 8M
 net_buffer_length = 8M
 myisam_sort_buffer_size = 45M
 set-variable=max_connections=300
 
 # Replication Master Server (default)
 # binary logging is required for replication
 #log-bin
 
 # required unique id between 1 and 2^32 - 1
 # defaults to 1 if master-host is not set
 # but will not function as a master if omitted
 server-id   = 1
 
 #bdb_cache_size = 4M
 #bdb_max_lock = 1
 
 # Uncomment the following if you are using InnoDB tables
 #innodb_data_home_dir = /var/db/mysql/
 #innodb_data_file_path = ibdata1:10M:autoextend
 #innodb_log_group_home_dir = /var/db/mysql/
 #innodb_log_arch_dir = /var/db/mysql/
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 #innodb_buffer_pool_size = 16M
 #innodb_additional_mem_pool_size = 2M
 # Set .._log_file_size to 25 % of buffer pool size
 #innodb_log_file_size = 5M
 #innodb_log_buffer_size = 8M
 #innodb_flush_log_at_trx_commit = 1
 #innodb_lock_wait_timeout = 50
 
 [mysqldump]
 quick
 max_allowed_packet = 16M
 
 [mysql]
 no-auto-rehash
 # Remove the next comment character if you are not familiar with SQL
 #safe-updates
 
 [isamchk]
 key_buffer = 10M
 sort_buffer_size = 20M
 read_buffer = 2M
 write_buffer = 2M
 
 [myisamchk]
 key_buffer = 10M
 sort_buffer_size = 20M
 read_buffer = 2M
 write_buffer = 2M
 
 [mysqlhotcopy]
 interactive-timeout
 
 
 Does anyone have any advice as to what I could change to make it import,
 and
 not break half way through. The command im running to import is: mysql -n
 -f
 -p  alldb.sql
 
 Thanks in advance,
 Ian
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]






Re: Importing 3Gb File

2006-06-09 Thread Jeremy Cole

Hi Ian,


I am trying to import a 3.2Gb sql dump file back into my sql server (4.1.12)
and im coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)


That error message comes from some single place trying to allocate 178MB 
at a single time.  Do you have large BLOBs in your data?


This error message means that mysqld is being denied memory by the OS, 
either because you are actually out of memory, or because your ulimit 
has been reached (more likely).


Check your ulimits for your system with ulimit -a, and adjust if 
necessary in the mysql.server or mysqld_safe script (those both run as 
root, so can adjust ulimits upwards).
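From Python the same limits are visible via the standard resource module (Unix only); this only inspects them, and raising a hard limit still requires root, as noted above:

```python
import resource

# RLIMIT_DATA caps the data segment -- the limit that FreeBSD's
# "datasize" line and ulimit -d report. RLIM_INFINITY means no cap.
soft, hard = resource.getrlimit(resource.RLIMIT_DATA)
for name, val in (("soft", soft), ("hard", hard)):
    shown = "unlimited" if val == resource.RLIM_INFINITY else val
    print(f"{name} data-segment limit: {shown}")
```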


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104




Re: Importing 3Gb File

2006-06-09 Thread mos

At 10:20 AM 6/8/2006, you wrote:

Hi,

I am trying to import a 3.2Gb sql dump file back into my sql server (4.1.12)
and im coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)

That error comes up after about 30 minutes worth of import and I would guess
about half way through the import.


What does Show Processlist say it's doing just before the crash? I've had 
problems with Load Data on a very large table (500 million rows) because the 
machine did not have enough memory to build the index. The data was loaded 
just fine, it's building the index that hung it out to dry because that 
eats up memory like crazy. How much memory do you have on your machine? The 
cheapest solution might be to go out and get a few more gb of RAM.


Mike




The file in question is a mysqldump
--all-databases file from another server that I'm trying to import onto my
desktop machine. I have tried to alter the my.cnf file a bit, and this is
what it looks like:

[client]
#password   = your_password
port= 3306
socket  = /tmp/mysql.sock

[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 64M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 8M
net_buffer_length = 8M
myisam_sort_buffer_size = 45M
set-variable=max_connections=300

# Replication Master Server (default)
# binary logging is required for replication
#log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1

#bdb_cache_size = 4M
#bdb_max_lock = 1

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout


Does anyone have any advice as to what I could change to make it import, and
not break halfway through? The command I'm running to import is: mysql -n -f
-p < alldb.sql

Thanks in advance,
Ian


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






RE: Importing 3Gb File

2006-06-09 Thread Burke, Dan


 At 10:20 AM 6/8/2006, you wrote:
 Hi,
 
 I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
 and im coming across the following error:
 
 mysql: Out of memory (Needed 178723240 bytes)
 mysql: Out of memory (Needed 178719144 bytes)
 
 That error comes up after about 30 minutes worth of import and I
would
 guess
 about half way through the import.
 
 What does Show Processlist say its doing just before the crash? I've
had
 problems with Load Data on a very large table 500 million rows because
the
 machine did not have enough memory to build the index. The data was
loaded
 just fine, it's building the index that hung it out to dry because
that
 eats up memory like crazy. How much memory do you have on your
machine?
 The
 cheapest solution might be to go out and get a few more gb of RAM.
 

If indexing is the problem when loading the dump, are there not options
to have mysqldump not disable the indexes until the import is complete,
so that the index is generated as records are inserted, instead of all
in one shot?

Ah, here it is... I've never used this before, but a quick test shows it
does what I assumed it does.  (I'm running 5.0, so mileage may vary in
4.1)

mysqldump --disable-keys=false database [table]

Of course, this is assuming that you have the luxury of re-exporting
this data from the original source. 

Dan.





Importing 3Gb File

2006-06-08 Thread Ian Barnes
Hi,

I am trying to import a 3.2Gb sql dump file back into my sql server (4.1.12)
and im coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)

That error comes up after about 30 minutes worth of import and I would guess
about half way through the import. The file in question is a mysqldump
--all-databases file from another server that I'm trying to import onto my
desktop machine. I have tried to alter the my.cnf file a bit, and this is
what it looks like:

[client]
#password   = your_password
port= 3306
socket  = /tmp/mysql.sock

[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 64M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 8M
net_buffer_length = 8M
myisam_sort_buffer_size = 45M
set-variable=max_connections=300

# Replication Master Server (default)
# binary logging is required for replication
#log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1

#bdb_cache_size = 4M
#bdb_max_lock = 1

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout


Does anyone have any advice as to what I could change to make it import, and
not break halfway through? The command I'm running to import is: mysql -n -f
-p < alldb.sql 

Thanks in advance,
Ian





Re: Importing 3Gb File

2006-06-08 Thread Kishore Jalleda

On 6/8/06, Ian Barnes [EMAIL PROTECTED] wrote:


Hi,

I am trying to import a 3.2Gb sql dump file back into my sql server (
4.1.12)
and im coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)

That error comes up after about 30 minutes worth of import and I would
guess
about half way through the import. The file in question is a mysqldump
--all-databases file from another server that I'm trying to import onto my
desktop machine. I have tried to alter the my.cnf file a bit, and this is
what it looks like:

[client]
#password   = your_password
port= 3306
socket  = /tmp/mysql.sock

[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 64M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 8M
net_buffer_length = 8M
myisam_sort_buffer_size = 45M
set-variable=max_connections=300

# Replication Master Server (default)
# binary logging is required for replication
#log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1

#bdb_cache_size = 4M
#bdb_max_lock = 1

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout


Does anyone have any advice as to what I could change to make it import,
and
not break halfway through? The command I'm running to import is: mysql -n
-f
-p < alldb.sql

Thanks in advance,
Ian


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Assuming that you have dumped your databases using mysqldump, what options
did you give to mysqldump? As of 4.1, --opt is enabled by default, and
this enables the --quick option, which basically forces mysqldump to
retrieve one row at a time instead of buffering the whole table into memory
and then writing out the result.

So if you have the --quick option enabled in mysqldump, you should not be
getting the out-of-memory errors. Also, I see you are using the -n option
with the mysql CLI, which does not buffer SQL statements/queries into memory
before flushing them; but if the dump itself consists of large rows of table
data flushed into one large SQL statement, then the mysql CLI would still treat
it as one query, so I am sure you have to change the way you dump your
tables...

Kishore Jalleda
http://kjalleda.googlepages.com/projects
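If re-dumping is possible, mysqldump's --skip-extended-insert flag produces one INSERT per row, at the cost of a much slower import. Purely as an illustration (the table name and values below are invented, and the sed pattern assumes no "),(" sequence occurs inside the string data itself), an existing extended INSERT can also be split after the fact:

```shell
# Illustration only: split one multi-row INSERT into one-row INSERTs.
# Table name and values are made up; on a real dump, string values
# containing "),(" would break this naive pattern.
printf "INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');\n" |
  sed "s/),(/);\nINSERT INTO t VALUES (/g"
```

Splitting a multi-gigabyte dump this way is slow, but it keeps each statement small enough to fit within max_allowed_packet.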


FW: Importing 3Gb File

2006-06-08 Thread Ian Barnes
Sorry, forgot to send to the list as well. My reply is at the bottom.

 -Original Message-
 From: Ian Barnes [mailto:[EMAIL PROTECTED]
 Sent: 08 June 2006 09:58 PM
 To: 'Kishore Jalleda'
 Subject: RE: Importing 3Gb File
 
 
 
  -Original Message-
  From: Kishore Jalleda [mailto:[EMAIL PROTECTED]
  Sent: 08 June 2006 06:18 PM
  To: Ian Barnes
  Cc: mysql@lists.mysql.com
  Subject: Re: Importing 3Gb File
 
  On 6/8/06, Ian Barnes [EMAIL PROTECTED] wrote:
  
   Hi,
  
   I am trying to import a 3.2Gb sql dump file back into my MySQL server
   (4.1.12), and I'm coming across the following error:
  
   mysql: Out of memory (Needed 178723240 bytes)
   mysql: Out of memory (Needed 178719144 bytes)
  
   That error comes up after about 30 minutes worth of import, and I would
   guess about half way through the import. The file in question is a
   mysqldump --all-databases file from another server that I'm trying to
   import onto my desktop machine. I have tried to alter the my.cnf file a
   bit, and this is what it looks like:
  
   [client]
   #password   = your_password
   port= 3306
   socket  = /tmp/mysql.sock
  
   [mysqld]
   port= 3306
   socket  = /tmp/mysql.sock
   skip-locking
   key_buffer = 64M
   max_allowed_packet = 8M
   table_cache = 512
   sort_buffer_size = 8M
   net_buffer_length = 8M
   myisam_sort_buffer_size = 45M
   set-variable=max_connections=300
  
   # Replication Master Server (default)
   # binary logging is required for replication
   #log-bin
  
   # required unique id between 1 and 2^32 - 1
   # defaults to 1 if master-host is not set
   # but will not function as a master if omitted
   server-id   = 1
  
   #bdb_cache_size = 4M
   #bdb_max_lock = 1
  
   # Uncomment the following if you are using InnoDB tables
   #innodb_data_home_dir = /var/db/mysql/
   #innodb_data_file_path = ibdata1:10M:autoextend
   #innodb_log_group_home_dir = /var/db/mysql/
   #innodb_log_arch_dir = /var/db/mysql/
   # You can set .._buffer_pool_size up to 50 - 80 %
   # of RAM but beware of setting memory usage too high
   #innodb_buffer_pool_size = 16M
   #innodb_additional_mem_pool_size = 2M
   # Set .._log_file_size to 25 % of buffer pool size
   #innodb_log_file_size = 5M
   #innodb_log_buffer_size = 8M
   #innodb_flush_log_at_trx_commit = 1
   #innodb_lock_wait_timeout = 50
  
   [mysqldump]
   quick
   max_allowed_packet = 16M
  
   [mysql]
   no-auto-rehash
   # Remove the next comment character if you are not familiar with SQL
   #safe-updates
  
   [isamchk]
   key_buffer = 10M
   sort_buffer_size = 20M
   read_buffer = 2M
   write_buffer = 2M
  
   [myisamchk]
   key_buffer = 10M
   sort_buffer_size = 20M
   read_buffer = 2M
   write_buffer = 2M
  
   [mysqlhotcopy]
   interactive-timeout
  
  
    Does anyone have any advice as to what I could change to make it
    import, and not break half way through? The command I'm running to
    import is: mysql -n -f -p < alldb.sql
  
   Thanks in advance,
   Ian
  
  
  
  
 
   [snip]
 
 Hi Kishore,
 
 Thanks for the info!
 
  I don't know how the file was dumped (I know it was via mysqldump), but I
  assume it was the default dump method (mysqldump -uuser -p --all-databases
  > alldb.sql).

  I have tried running it with the -q option and it still fails with that
  message. (mysql -f -q -p < /home/iandb.sql)
 
 Any other ideas?
 
 Cheers
 Ian





Re: FW: Importing 3Gb File

2006-06-08 Thread Dan Buettner
Ian, I ran into a similar situation not too long ago when setting up 
replication.  I don't recall if the error message was identical, but it was 
certainly close.


My problem was that we had customized our original server (and 
mysqldump) to allow 16 MB packets, which is essentially a setting 
governing the largest SQL command allowed.  When I did the install on 
the slave machine, I didn't remember to tweak those same settings, so 
the initial import kept failing until I did.


I see in your my.cnf file you've got an 8M setting for mysqld and a 16M 
setting for mysqldump - you should set those to both be the same, first off.


As for your import problem - is it possible this 3.2 GB dumpfile 
contains some large binary data?  Or that for some other reason the 
people at the source might have chosen to drastically increase the 
max_allowed_packet setting on their server and their mysqldump?


The numbers you show below indicate a max_allowed_packet setting of 180M 
might work for you, allowing you to finish the import.
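A rough way to sanity-check that before retrying the import -- a sketch only, since it treats every ';' as a statement boundary and semicolons inside string values will skew it -- is to measure the longest statement in the dump:

```shell
# Rough check: byte length of the longest ';'-separated chunk in a dump.
# Semicolons inside string values will throw this off, so treat the
# printed number as an approximation. Sample data stands in for the
# real dump file here.
cat > /tmp/sample.sql <<'EOF'
INSERT INTO t VALUES (1),(2),(3);
INSERT INTO t VALUES (4);
EOF
awk -v RS=';' 'length($0) > max { max = length($0) } END { print max + 0 }' /tmp/sample.sql
```

On a real dump, point the awk command at the dump file itself and compare the printed byte count against the max_allowed_packet value you plan to set.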


Note you need to set the large size for the server AND the client, AND 
mysqldump if you're using it too.
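For example, keeping the three settings in step in my.cnf might look like this (the 180M figure is only the estimate from the error messages above, not a general recommendation):

```ini
# illustrative values -- pick a size larger than your biggest statement
[mysqld]
max_allowed_packet = 180M

[mysql]
max_allowed_packet = 180M

[mysqldump]
max_allowed_packet = 180M
```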

http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

HTH,
Dan



Ian Barnes wrote:

[snip]

Re: importing a dumpfile from with the mysql client

2006-05-07 Thread Luke Vanderfluit

Hi.

sheeri kritzer wrote:


Ah, I get it.I don't know if there's a way to do that, but why not
just put the SQL statements in the file?

-Sheeri


Thanks George.
You're right, that's what I meant.

Sheeri, it would be quite possible to put them in the dumpfile, if the 
dumpfile were editable (-:

The dumpfile we are using is so large that I can't open it with any editor.

So again the question is:
If the remote session is still active but I haven't carried out those 
commands, and the database has been updated, modified, manipulated etc. 
in the meantime, is there a danger in closing the session or carrying 
out the commands several days later?


The commands that I am talking about are:
 mysql> SET FOREIGN_KEY_CHECKS = 1;
 mysql> COMMIT;

Thanks.
Kind regards.


On 5/5/06, George Law [EMAIL PROTECTED] wrote:


I think what he is saying is that he began the transaction in a
command line client
session from one location but was not able to give the

 mysql> SET FOREIGN_KEY_CHECKS = 1;
 mysql> COMMIT;

commands in the same session. (ie - I owe, I owe, it's off to work I go
:)  )

This is a good question, one I have wondered about myself.

Is there a way in mysql to attach to a session to issue a commit?




[snip]







--
Luke Vanderfluit.
Analyst/Programmer.
Internode Systems Pty. Ltd.





Re: importing a dumpfile from with the mysql client

2006-05-05 Thread sheeri kritzer

On 5/4/06, Luke Vanderfluit [EMAIL PROTECTED] wrote:

[snip]


I started this process remotely then went to the site to finish it.
But when the dump finished (several hours later) I was not able to
execute the following commands from my original location.

mysql> SET FOREIGN_KEY_CHECKS = 1;
mysql> COMMIT;


What do you mean you were not able?  Did you get an error?  Was the
server hung?  Did the keyboard stick to make you unable to type the
commands?


My question is:
Since the import completed the database has grown in size and been
backed up etc.
Yet from the original session I have not executed those 2 commands.

Is it safe to execute them? Or would executing them cause corruption or
other unforeseen stuff?
Is it unnecessary to execute them?
Is it safe to kill that original mysql session?


Those 2 commands, in and of themselves, aren't dangerous.  It's what's
in the script that could be the problem.  However, since you really
didn't mention what error you got, it's not easy to figure out how to
fix it.

It depends what tables you're running these on.  If you're running
them on MyISAM tables, deadlocking cannot happen.  Then again,
transactions are meaningless too.  If you're running on InnoDB you
have the possibility of deadlocking, but MySQL is pretty good about
avoiding that.

If other users/applications are writing to the SAME data then there's
the possibility of inconsistent data, but I wouldn't expect
corruption.

If you post what errors you actually got, I might be able to help.
Also, are you using the same db and tables that other apps/users are
using?  Does the dumpfile contain DROP TABLE IF EXISTS commands?

-Sheeri


--
Luke Vanderfluit.
Analyst/Programmer.
Internode Systems Pty. Ltd.





RE: importing a dumpfile from with the mysql client

2006-05-05 Thread George Law
I think what he is saying is that he began the transaction in a
command line client 
session from one location but was not able to give the

 mysql> SET FOREIGN_KEY_CHECKS = 1;
 mysql> COMMIT;

commands in the same session. (ie - I owe, I owe, it's off to work I go
:)  )

This is a good question, one I have wondered about myself.

Is there a way in mysql to attach to a session to issue a commit?




-Original Message-
From: sheeri kritzer [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 05, 2006 3:02 PM
To: Luke Vanderfluit
Cc: MySQL List
Subject: Re: importing a dumpfile from with the mysql client

[snip]






Re: importing a dumpfile from with the mysql client

2006-05-05 Thread sheeri kritzer

Ah, I get it.I don't know if there's a way to do that, but why not
just put the SQL statements in the file?

-Sheeri

On 5/5/06, George Law [EMAIL PROTECTED] wrote:

[snip]





importing a dumpfile from with the mysql client

2006-05-04 Thread Luke Vanderfluit

Hi.

I recently imported a dumpfile into mysql 4.1.18.
I did this using the 'source' syntax from inside the mysql client.

This is the syntax I used:

mysql> create database dbname;
mysql> use dbname;
(first unset the creation of bin-logs in my.cnf)
mysql> SET SQL_LOG_BIN = 0;
(then some tips to fast-import dump files:)
mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> SET AUTOCOMMIT=0;
mysql> SOURCE foo.out-20060419-23

I started this process remotely then went to the site to finish it.
But when the dump finished (several hours later) I was not able to 
execute the following commands from my original location.


mysql> SET FOREIGN_KEY_CHECKS = 1;
mysql> COMMIT;

My question is:
Since the import completed the database has grown in size and been 
backed up etc.

Yet from the original session I have not executed those 2 commands.

Is it safe to execute them? Or would executing them cause corruption or 
other unforeseen stuff?

Is it unnecessary to execute them?
Is it safe to kill that original mysql session?

Since the commands were executed from a single session, and the database 
has been extensively modified in the meantime, would executing those 
commands from the same original session now cause problems?


Thanks.
Kind regards.
Luke.

--
Luke Vanderfluit.
Analyst/Programmer.
Internode Systems Pty. Ltd.





Importing raw MySQL files

2006-04-17 Thread Christopher Beale

Hi everyone.

I used to run mysql on a local server here, which has some databases 
which are very important to me on it. The server was running mysql on 
Arch Linux, but unfortunately this crashed. I was not able to rebuild the 
operating system, but I was able to recover the raw database files off of 
the hard disk. Is there a way of importing these to my MySQL 3.2 server? 
(I believe that Arch Linux was running 5.0). I have tried simply placing 
them in the mysql database folder, but I get errors such as "Incorrect 
information in file: './my0007/ee_pm.frm'..." when I try and perform any 
operations in PhpMyAdmin.


Any help would be appreciated as I do not want to have to rebuild the 
databases from scratch.


Cheers
Chris




RE: Importing raw MySQL files

2006-04-17 Thread Logan, David (SST - Adelaide)
Hi Chris,

Looks like you may have to either upgrade your current server to the
version Arch Linux was running, or install a second temp server of
that version, export the databases, and then import them into the ver 3.2
server.

Personally, I'd look closely at upgrading if at all possible.

Regards


---
** _/ **  David Logan 
***   _/ ***  ITO Delivery Specialist - Database
*_/*  Hewlett-Packard Australia Ltd
_/_/_/  _/_/_/    E-Mail: [EMAIL PROTECTED]
   _/  _/  _/  _/     Desk:   +618 8408 4273
  _/  _/  _/_/_/  Mobile: 0417 268 665
*_/   **
**  _/    Postal: 148 Frome Street,
   _/ **  Adelaide SA 5001
  Australia 
invent   
---

-Original Message-
From: Christopher Beale [mailto:[EMAIL PROTECTED] 
Sent: Monday, 17 April 2006 7:46 PM
To: mysql@lists.mysql.com
Subject: Importing raw MySQL files

[snip]





Re: Importing raw MySQL files

2006-04-17 Thread Dan Buettner

[snip]



Chris, you may be in luck.  MySQL MyISAM table files are binary 
compatible across platforms.  That is to say that database table 
files from Arch Linux will work fine on Mac, Solaris, Windows, etc., 
without modification.  I'm not as familiar with InnoDB tablespaces 
but they may work as well.  Definitely try this with a copy of your 
files, not the originals.


In all likelihood you need to update to at least the same version of 
the server software you were running on the now-crashed server.  In 
other words, if you update to 5.0.x and place the table files in the 
proper path, you should be OK.


MySQL 3.2 is a very old version and likely simply isn't compatible 
with tables created in 5.0.  If the situation were reversed (you had 
a 3.2 server that crashed and you wanted to bring the tables into 
5.0) that might work fine.
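Mechanically, moving a MyISAM table is just copying its three files (.frm, .MYD, .MYI) into a directory named after the database under the server's datadir. A sketch with stand-in paths (on a real system, stop mysqld first, work on copies of the recovered files, and make sure the copied files end up readable by the mysql user):

```shell
#!/bin/sh
# Stand-in directories so the sketch can run anywhere; on a real system
# SRC is where the recovered files were saved and DATADIR is the
# server's data directory (often /var/lib/mysql or /var/db/mysql).
SRC=/tmp/recovered/my0007
DATADIR=/tmp/datadir
mkdir -p "$SRC" "$DATADIR/my0007"
touch "$SRC/ee_pm.frm" "$SRC/ee_pm.MYD" "$SRC/ee_pm.MYI"   # placeholder files

# every MyISAM table is three files; all three must be copied together
cp "$SRC/ee_pm.frm" "$SRC/ee_pm.MYD" "$SRC/ee_pm.MYI" "$DATADIR/my0007/"

ls "$DATADIR/my0007"   # lists the three copied files
```

After copying, running myisamchk against the .MYI files is a cheap consistency check before starting the server.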


Hope this helps,
Dan





Re: // confusing syntax error when importing //

2006-04-04 Thread sheeri kritzer
How did you try to do it on the remote system?

I copied and pasted your query to a server running MySQL
4.1.12-standard-log, and another running MySQL 5.0.19-standard-log,
and they both created the table just fine.

Have you tried copying and pasting the table creation itself to see if
that works?

Or using the commandline to do it on the remote machine --

mysql -u root -p -h host db < file.sql

-Sheeri

On 4/3/06, julian oliver [EMAIL PROTECTED] wrote:
 --apologies if you receive this email from another account--

 hi,

 i exported a *.sql using phpMyAdmin on a system running:

 mysql  Ver 14.12 Distrib 5.0.19, for pc-linux-gnu (i486) using
 readline 5.1

 i'm trying to import it from this localhost to a remote host
 running:

 mysql  Ver 14.12 Distrib 5.0.18, for pc-linux-gnu (i486) using
 readline 5.1

 the 13MB file fails during import with this error:

 //-

 Error

 SQL query:

 -- phpMyAdmin SQL Dump
 -- version 2.8.0.2-Debian-3
 -- http://www.phpmyadmin.net
 --
 -- Host: localhost
 -- Generation Time: Apr 03, 2006 at 07:09 PM
 -- Server version: 5.0.19
 -- PHP Version: 4.4.2-1+b1
 --
 -- Database: `nuke`
 --
 -- 
 --
 -- Table structure for table `cur`
 --
 CREATE TABLE `cur` (
 `cur_id` int( 8 ) unsigned NOT NULL AUTO_INCREMENT ,
 `cur_namespace` tinyint( 2 ) unsigned NOT NULL default '0',
 `cur_title` varchar( 255 ) CHARACTER SET latin1 COLLATE latin1_bin NOT
 NULL default '',
 `cur_text` mediumtext NOT NULL ,
 `cur_comment` tinyblob NOT NULL ,
 `cur_user` int( 5 ) unsigned NOT NULL default '0',
 `cur_user_text` varchar( 255 ) CHARACTER SET latin1 COLLATE latin1_bin
 NOT NULL default '',
 `cur_timestamp` varchar( 14 ) CHARACTER SET latin1 COLLATE latin1_bin
 NOT NULL default '',
 `cur_restrictions` tinyblob NOT NULL ,
 `cur_counter` bigint( 20 ) unsigned NOT NULL default '0',
 `cur_is_redirect` tinyint( 1 ) unsigned NOT NULL default '0',
 `cur_minor_edit` tinyint( 1 ) unsigned NOT NULL default '0',
 `cur_is_new` tinyint( 1 ) unsigned NOT NULL default '0',
 `cur_random` double unsigned NOT NULL default '0',
 `cur_touched` varchar( 14 ) CHARACTER SET latin1 COLLATE latin1_bin NOT
 NULL default '',
 `inverse_timestamp` varchar( 14 ) CHARACTER SET latin1 COLLATE
 latin1_bin NOT NULL default '',
 UNIQUE KEY `cur_id` ( `cur_id` ) ,
 KEY `cur_namespace` ( `cur_namespace` ) ,
 KEY `cur_title` ( `cur_title` ( 20 ) ) ,
 KEY `cur_timestamp` ( `cur_timestamp` ) ,
 KEY `cur_random` ( `cur_random` ) ,
 KEY `name_title_timestamp` ( `cur_namespace` , `cur_title` ,
 `inverse_timestamp` ) ,
 KEY `user_timestamp` ( `cur_user` , `inverse_timestamp` ) ,
 KEY `usertext_timestamp` ( `cur_user_text` , `inverse_timestamp` ) ,
 KEY `namespace_redirect_timestamp` ( `cur_namespace` , `cur_is_redirect`
 , `cur_timestamp` )
 ) ENGINE = MYISAM DEFAULT CHARSET = latin1 PACK_KEYS =1 AUTO_INCREMENT
 =1028;

 MySQL said: Documentation
 #1064 - You have an error in your SQL syntax.  Check the manual that
 corresponds to your MySQL server version for the right syntax to use
 near 'collate
 +latin1_bin NOT NULL default '',
   `cur_text` mediumtext


 //-

 as i know very little about mysql generally, i'm having a hard time
 deriving what the right syntax might be here.
 out of interest the same *.sql imports on the local system just fine
 using mysql on the commandline ('mysql -u user -p database < file.sql'),
 but on the remote machine fails with the same error as seen above.

 if someone could point me in the right direction i'd be very grateful.

 cheers,

 julian
 --
   julian oliver
   [EMAIL PROTECTED]

 --
 http://www.fastmail.fm - Does exactly what it says on the tin







