Problem w/ mysqldump

2009-09-02 Thread Victor Subervi
Hi:
I have the following python code:
import os
os.system("mysqldump -u root -pPASSWORD --opt spreadsheets > dump.sql")
This nicely creates the file...but the file is empty! The database exists
and has lots of data, I double-checked it. If there is nothing wrong with my
code, is there some way to do the same thing from within MySQL? Can I at
least print everything to screen and copy it? Where would I find my database
in Windoze?
TIA,
Victor
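[Editor's note: os.system() returns the command's exit status, but nothing in the posted snippet checks it, so a failing command silently leaves an empty file. A sketch of a more defensive version -- user, password, and database name are taken from the post; the helper names are mine:]

```python
import subprocess

def mysqldump_argv(db, user="root", password="PASSWORD"):
    # Build the argument vector; note -pPASSWORD has no space after -p.
    return ["mysqldump", "-u", user, "-p" + password, "--opt", db]

def dump_database(db, outfile="dump.sql", **kw):
    # Write mysqldump's stdout straight to the file -- no shell, so no
    # redirect to lose -- and surface a nonzero exit status.
    with open(outfile, "wb") as f:
        result = subprocess.run(mysqldump_argv(db, **kw), stdout=f)
    if result.returncode != 0:
        raise RuntimeError("mysqldump failed with status %d" % result.returncode)
```

dump_database('spreadsheets') would then either produce a non-empty dump.sql or raise, instead of silently writing nothing.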


join to return first record for each patient number

2009-09-02 Thread william drescher

I have two tables:
PtActive
   ptNum  // the patient's number
   user   // the user who made this patient active
   expires// when the patient becomes inactive again
primary index: PtNum

PtName
  ptNum
  sequence
  lname
  fname
primary index: ptNum, sequence

The table PtName may have multiple rows with the same ptNum (if 
the patient changes his/her name).


I am going mildly nuts trying to devise a query that will 
retrieve only the lowest-sequence PtName row (ie: their current 
name) for all active patients for this user.


in PHP
I tried:
$sql = "select PtName.ptNum, lname, fname from PtName, PtActive 
where PtName.ptNum = PtActive.ptNum and PtActive.user = 
'$currentUser' order by PtName.ptNum, PtName.nameSequence";


but this retrieves all names for this patient.

I tried:
$sql = "select distinct PtName.ptNum, lname, fname from PtName, 
PtActive where PtName.ptNum = PtActive.ptNum and PtActive.user = 
'$currentUser' order by PtName.ptNum, PtName.nameSequence";


but this retrieves all names for all active patients.

I tried a subquery
$sql = "select ptNum, lname, fname from PtName where ptNum 
= (select ptNum from PtActive where PtActive.user = '$currentUser' 
limit 1)";


but this returns all the names for the first active patient.
If I remove the "limit 1", it fails with the error message 
"Subquery returns more than 1 row".


Help !

bill
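[Editor's note: this is the classic groupwise-minimum problem: join the two tables, then keep only the PtName row whose sequence is lowest for its ptNum, via a correlated subquery. A sketch using Python's bundled sqlite3 purely to exercise the query against the post's table layout; the same SELECT should run unchanged on MySQL. It assumes, per the post, that the lowest sequence is the current name, and the sample data is invented:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE PtActive (ptNum INTEGER PRIMARY KEY, user TEXT, expires TEXT);
CREATE TABLE PtName (ptNum INTEGER, sequence INTEGER, lname TEXT, fname TEXT,
                     PRIMARY KEY (ptNum, sequence));
INSERT INTO PtActive VALUES (1, 'drescher', '2009-12-31'), (2, 'drescher', '2009-12-31');
INSERT INTO PtName VALUES (1, 1, 'Smith', 'Ann'), (1, 2, 'Jones', 'Ann'),
                          (2, 1, 'Doe', 'Bob');
""")

# Groupwise minimum: keep only the row whose sequence is the lowest
# among all PtName rows sharing its ptNum.
rows = cur.execute("""
SELECT n.ptNum, n.lname, n.fname
FROM PtName n
JOIN PtActive a ON a.ptNum = n.ptNum
WHERE a.user = ?
  AND n.sequence = (SELECT MIN(n2.sequence)
                    FROM PtName n2
                    WHERE n2.ptNum = n.ptNum)
ORDER BY n.ptNum
""", ("drescher",)).fetchall()
print(rows)  # [(1, 'Smith', 'Ann'), (2, 'Doe', 'Bob')]
```

One current name per active patient, even though patient 1 has two name rows.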



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



mysqldump warning or actual error?

2009-09-02 Thread Matt Neimeyer
My local windows machine has mysql 5.1.33 installed on it. One of my
Mac OSX dev servers has some 4.1 flavor of MySQL on it.

When I try to do something like the following: mysqldump -h devserver
-u me -p somedb > dump.sql

I get the following:

mysqldump: Error: 'Table 'information_schema.files' doesn't exist'
when trying to dump tablespaces

It looks like it creates the export fine, but I've been ssh-ing into
the dev box and doing it locally there just in case.

Should I be worried? Is there some option that would suppress that
(that I didn't see in mysqldump --help)? Is it truly harmless?

Thanks

Matt




Re: mysqldump warning or actual error?

2009-09-02 Thread Michael Dykman
If you look at the options for mysqldump more closely, you will see
that you can specify the version of the server that will be importing
the result file (for example, --compatible=mysql40). This causes
mysqldump to tailor the SQL syntax to the target platform.

 - michael dykman

On Wed, Sep 2, 2009 at 11:03 AM, Matt Neimeyer m...@neimeyer.org wrote:
 My local windows machine has mysql 5.1.33 installed on it. One of my
 Mac OSX dev servers has some 4.1 flavor of MySQL on it.

 When I try to do something like the following: mysqldump -h devserver
 -u me -p somedb > dump.sql

 I get the following:

 mysqldump: Error: 'Table 'information_schema.files' doesn't exist'
 when trying to dump tablespaces

 It looks like it creates the export fine but I've been ssh-ing into
 the dev box and doing it locally there just in case

 Should I be worried? Is there some option that would suppress that
 (that I didn't see in mysqldump --help)? Is it truly harmless?

 Thanks

 Matt






-- 
 - michael dykman
 - mdyk...@gmail.com

Don’t worry about people stealing your ideas. If they’re any good,
you’ll have to ram them down their throats!

   Howard Aiken




Looking for real configuration data

2009-09-02 Thread Wei Zheng
[Sorry if you receive multiple copies of this message.] 
[Please feel free to forward the message to others who may be
interested.] 

Hi, 

We are a computer systems research group at the Computer Science
department at Rutgers University, and are conducting research on
simplifying the software configuration process.  The idea is to 
leverage the configurations of existing users of a piece of software to
ease the configuration process for each new user of the software. 

The reason for this message is that we would like to collect a large
number of deployed configurations to help evaluate our ideas. Thus, we
ask systems administrators and end users to submit information about
their configurations for any software that they have had to configure,
such as Apache, MySQL, and Linux. 

We hope that you have a few minutes to take our survey which is located
at: http://vivo.cs.rutgers.edu/massconf/MassConf.html As an incentive,
all surveys completed in their entirety will be entered into a drawing
of a number of $50 gift certificates (from Amazon.com). 

Important: Our work is purely scientific, so we have no interest in any
private or commercially sensitive information that may come along with
your configuration data.  We will make sure that no such information is
ever made public.  In fact, if you wish, you are more than welcome to
anonymize or remove any sensitive information from the configuration
data you send us. 

If you have any questions regarding this message or our work, feel free
to email Wei Zheng (wzheng at cs dot rutgers dot edu). 


Thanks for your time, 

Wei Zheng 
PhD student, Vivo Research Group (http://vivo.cs.rutgers.edu) 
Rutgers University 





Re: Problem w/ mysqldump

2009-09-02 Thread Emile van Sebille

On 9/2/2009 3:43 AM Victor Subervi said...

Hi:
I have the following python code:
import os
os.system("mysqldump -u root -pPASSWORD --opt spreadsheets > dump.sql")


First, test this at the system command line -- you'll likely get an 
empty file there as well, so calling from within python simply does the 
same.


Then read the mysqldump docs for the command arguments and supply the 
database name...


Emile



This nicely creates the file...but the file is empty! The database exists
and has lots of data, I double-checked it. If there is nothing wrong with my
code, is there some way to do the same thing from within MySQL? Can I at
least print everything to screen and copy it? Where would I find my database
in Windoze?
TIA,
Victor







Re: mysqldump warning or actual error?

2009-09-02 Thread Mikhail Berman

Hi Matt,

The error you are getting is very particular to information_schema 
database.


Information_schema does NOT actually have tables, they are views:

INFORMATION_SCHEMA is the information database, the place that stores 
information about all the other databases that the MySQL server 
maintains. Inside INFORMATION_SCHEMA there are several read-only 
tables. They are actually views, not base tables, so there are no files 
associated with them. 


http://dev.mysql.com/doc/refman/5.1/en/information-schema.html

Therefore mysqldump generates an error when trying to dump tables that do 
not exist.


Regards,

Mikhail Berman

Matt Neimeyer wrote:

My local windows machine has mysql 5.1.33 installed on it. One of my
Mac OSX dev servers has some 4.1 flavor of MySQL on it.

When I try to do something like the following: mysqldump -h devserver
-u me -p somedb  dump.sql

I get the following:

mysqldump: Error: 'Table 'information_schema.files' doesn't exist'
when trying to dump tablespaces

It looks like it creates the export fine but I've been ssh-ing into
the dev box and doing it locally there just in case

Should I be worried? Is there some option that would supress that
(that i didn't see in mysqldump --help)? Is it truely harmless?

Thanks

Matt

  


Re: Best practice to disable log bin for mysql_upgrade

2009-09-02 Thread Russell E Glaue
Does anyone have any information regarding my question?

Is the 'share/mysql_fix_privilege_tables.sql' script distributed with MySQL the
same as the script (represented as '[fix_priv_tables]') generated internally by
the mysql_upgrade command that contains SQL statements to upgrade the tables in
the mysql database?

If it isn't the same, is there a way to generate this [fix_priv_tables] script
outside of an actual upgrade, so that I could apply it myself?

-RG
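[Editor's note: one session-scoped workaround -- an assumption on my part, not something mysql_upgrade supports directly -- is to apply the upgrade SQL yourself with the session variable sql_log_bin switched off, which keeps those statements out of the binary log for that one connection (requires the SUPER privilege) while the rest of the server keeps replicating. A sketch in Python; the helper names are illustrative:]

```python
import subprocess

def wrap_without_binlog(sql_text):
    # SET SESSION sql_log_bin = 0 turns binary logging off for this
    # connection only; other sessions keep logging normally.
    return "SET SESSION sql_log_bin = 0;\n" + sql_text

def apply_fix_priv_tables(path="share/mysql_fix_privilege_tables.sql"):
    # Feed the wrapped upgrade script to the mysql client on stdin,
    # so none of its statements replicate to the other master.
    with open(path) as f:
        script = wrap_without_binlog(f.read())
    subprocess.run(["mysql", "-u", "root", "-p", "mysql"],
                   input=script.encode(), check=True)
```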


Russell E Glaue wrote:
 According to this page in the MySQL 5.0 Manual:
 http://dev.mysql.com/doc/refman/5.0/en/mysql-upgrade.html
 -
 mysql_upgrade executes the following commands to check and repair tables and to
 upgrade the system tables:
 
 mysqlcheck --all-databases --check-upgrade --auto-repair
 mysql < [fix_priv_tables]
 
 ...
 [fix_priv_tables] represents a script generated internally by mysql_upgrade that
 contains SQL statements to upgrade the tables in the mysql database.
 -
 
 Is there a way to generate this [fix_priv_tables] script outside of an actual
 upgrade, so that I could apply it myself?
 
 Is this [fix_priv_tables] script the same as the
 share/mysql_fix_privilege_tables.sql script distributed with MySQL?
 
 -RG
 
 
 
 Russell E Glaue wrote:
 I am upgrading minor MySQL versions, 5.0.67 to 5.0.84
 I have a master-master replication setup, and want to upgrade both 
 installations.
 The ideal procedure for upgrading mysql using the mysql_upgrade command is to
 have binary logging turned off during the execution of mysql_upgrade.

 My situation is I do not want to turn off binary logging for the entire 
 server,
 I would like to turn off binary logging just for the session of the
 mysql_upgrade connection.

 mysql_upgrade does not support anything like --disable-log-bin (which seems
 like a feature that should be supported for this CLI app). (1)
 So it seems my only option is to turn off binary logging for the entire 
 server
 while I execute mysql_upgrade. Which also means blocking write access to the
 server while it runs so that statements I do want logged for replication do 
 not
 occur while binary logging is off.

 Is there another simple way to achieve this? Or what is best practice that
 achieves the least amount of down time?



 Alternately, there are sql files in the share directory of each archive:

 mysql-5.0.67-linux-i686-glibc23/share/
 |-- fill_help_tables.sql
 |-- mysql_fix_privilege_tables.sql
 |-- mysql_system_tables.sql
 |-- mysql_system_tables_data.sql
 |-- mysql_system_tables_fix.sql
 `-- mysql_test_data_timezone.sql
 mysql-5.0.84-linux-i686-glibc23/share/
 |-- fill_help_tables.sql
 |-- mysql_fix_privilege_tables.sql
 |-- mysql_system_tables.sql
 |-- mysql_system_tables_data.sql
 |-- mysql_system_tables_fix.sql
 `-- mysql_test_data_timezone.sql

 Is it true (yes or no) that the difference between these 6 sql files from one
 distribution to the next is what would be changed if the mysql_upgrade was
 executed to upgrade an installation from 5.0.67 to 5.0.84 ?



 According to this: http://forge.mysql.com/worklog/task.php?id=4991

 There are two scripts: share/mysql_system_tables.sql and
 share/mysql_system_tables_fix.sql
 These two scripts comprise the content of: 
 share/mysql_fix_privilege_tables.sql

 The following is true about these three files:
 cat share/mysql_system_tables.sql share/mysql_system_tables_fix.sql > 
 share/mysql_fix_privilege_tables.sql

 To upgrade the schema of the mysql server, only the
 share/mysql_fix_privilege_tables.sql sql script need be applied.

 Is this correct?
 Is it true for MySQL 5.0.x ?



 References:
 (1) http://bugs.mysql.com/bug.php?id=46638 mysql_upgrade needs
 --disable-log-bin option
 (2) http://forums.mysql.com/read.php?20,275140,275140#msg-275140 MySQL 
 Forums
 :: General :: Best practice to disable log bin for mysql_upgrade
 (3) http://forums.mysql.com/read.php?11,274796,274796#msg-274796 MySQL 
 Forums
 :: Install :: How to best disable log bin for mysql_upgrade


 -RG

 




RE: Problem w/ mysqldump

2009-09-02 Thread Daevid Vincent
While not python, maybe this bash script will give you some clues?
http://daevid.com/content/examples/daily_backup.php

Also, please don't cross post to multiple lists. Not everyone on this mySQL
list is on the python list and vice versa. It's just bad netiquette. 

 -Original Message-
 From: news [mailto:n...@ger.gmane.org] On Behalf Of Emile van Sebille
 Sent: Wednesday, September 02, 2009 7:18 AM
 To: mysql@lists.mysql.com
 Cc: python-l...@python.org
 Subject: Re: Problem w/ mysqldump
 
 On 9/2/2009 3:43 AM Victor Subervi said...
  Hi:
  I have the following python code:
  import os
  os.system("mysqldump -u root -pPASSWORD --opt spreadsheets > dump.sql")
 
 First, test this at the system command line -- you'll likely get an 
 empty file there as well, so calling from within python 
 simply does the 
 same.
 
 Then read the mysqldump docs for the command arguments and supply the 
 database name...
 
 Emile
 
 
  This nicely creates the file...but the file is empty! The 
 database exists
  and has lots of data, I double-checked it. If there is 
 nothing wrong with my
  code, is there some way to do the same thing from within 
 MySQL? Can I at
  least print everything to screen and copy it? Where would I 
 find my database
  in Windoze?
  TIA,
  Victor
  
 
 
 





Re: Problem w/ mysqldump

2009-09-02 Thread Victor Subervi
I checked my own backup script from earlier years and everything was good.
You know, if I could simply figure out where the data was actually stored,
in what file, I could copy it over to another computer. Any ideas?
Thanks,
V

On Wed, Sep 2, 2009 at 4:09 PM, Daevid Vincent dae...@daevid.com wrote:

 While not python, maybe this bash script will give you some clues?
 http://daevid.com/content/examples/daily_backup.php

 Also, please don't cross post to multiple lists. Not everyone on this mySQL
 list is on the python list and vice versa. It's just bad netiquette.

  -Original Message-
  From: news [mailto:n...@ger.gmane.org] On Behalf Of Emile van Sebille
  Sent: Wednesday, September 02, 2009 7:18 AM
  To: mysql@lists.mysql.com
  Cc: python-l...@python.org
  Subject: Re: Problem w/ mysqldump
 
  On 9/2/2009 3:43 AM Victor Subervi said...
   Hi:
   I have the following python code:
   import os
   os.system("mysqldump -u root -pPASSWORD --opt spreadsheets > dump.sql")
 
  First, test this at the system command line -- you'll likely get an
  empty file there as well, so calling from within python
  simply does the
  same.
 
  Then read the mysqldump docs for the command arguments and supply the
  database name...
 
  Emile
 
 
   This nicely creates the file...but the file is empty! The
  database exists
   and has lots of data, I double-checked it. If there is
  nothing wrong with my
   code, is there some way to do the same thing from within
  MySQL? Can I at
   least print everything to screen and copy it? Where would I
  find my database
   in Windoze?
   TIA,
   Victor
  
 
 
 






I want to change my sqldatabase server from one machine to another

2009-09-02 Thread Uma Mudumba
Can you please send me a script to do that?
I have MySQL Reference Manual for version 3.23.55.
/* Copyright Abandoned 1996, 1999, 2001 MySQL AB
   This file is public domain and comes with NO WARRANTY of any kind */

/* Version numbers for protocol & mysqld */

#ifdef _CUSTOMCONFIG_
#include "custom_conf.h"
#else
#define PROTOCOL_VERSION        10
#define MYSQL_SERVER_VERSION    "3.23.55"
#define MYSQL_SERVER_SUFFIX     ""
#define FRM_VER                 6
#define MYSQL_VERSION_ID        32355
#define MYSQL_PORT              3306
#define MYSQL_UNIX_ADDR         "/tmp/mysql.sock"
#define MYSQL_CONFIG_NAME       "my"

/* mysqld compile time options */
#ifndef MYSQL_CHARSET
#define MYSQL_CHARSET           "latin1"
#endif
#endif


I have mysqlhotcopy scripts, but that copies only to the same server. I want to 
copy the database to another machine.
All my scripts should access the database from the new machine and new location.

Thanks,
Uma-



RE: Problem w/ mysqldump

2009-09-02 Thread Daevid Vincent
If it's MYISAM tables, then they're all self-contained in folders in
/var/lib/mysql/spreadsheets. Remember to shut down mysql before copying the
files, or you could copy corrupt files.

If you're using INNODB, then the schema is in that folder, but the actual
data is in the /var/lib/mysql/ib* files. Along with all the other INNODB
databases you may have on the system -- all mashed together. Hence the need
for a mysql dump tool ;-)

Mysql dump is the better way to go in either situation.

 -Original Message-
 From: Victor Subervi [mailto:victorsube...@gmail.com] 
 Sent: Wednesday, September 02, 2009 1:43 PM
 To: Daevid Vincent; mysql@lists.mysql.com
 Subject: Re: Problem w/ mysqldump
 
 I checked my own backup script from earlier years and 
 everything was good.
 You know, if I could simply figure out where the data was 
 actually stored,
 in what file, I could copy it over to another computer. Any ideas?
 Thanks,
 V
 
 On Wed, Sep 2, 2009 at 4:09 PM, Daevid Vincent 
 dae...@daevid.com wrote:
 
  While not python, maybe this bash script will give you some clues?
  http://daevid.com/content/examples/daily_backup.php
 
  Also, please don't cross post to multiple lists. Not 
 everyone on this mySQL
  list is on the python list and vice versa. It's just bad netiquette.
 
   -Original Message-
   From: news [mailto:n...@ger.gmane.org] On Behalf Of Emile 
 van Sebille
   Sent: Wednesday, September 02, 2009 7:18 AM
   To: mysql@lists.mysql.com
   Cc: python-l...@python.org
   Subject: Re: Problem w/ mysqldump
  
   On 9/2/2009 3:43 AM Victor Subervi said...
Hi:
I have the following python code:
import os
     os.system("mysqldump -u root -pPASSWORD --opt spreadsheets > dump.sql")
  
   First, test this at the system command line -- you'll 
 likely get an
   empty file there as well, so calling from within python
   simply does the
   same.
  
   Then read the mysqldump docs for the command arguments 
 and supply the
   database name...
  
   Emile
  
  
This nicely creates the file...but the file is empty! The
   database exists
and has lots of data, I double-checked it. If there is
   nothing wrong with my
code, is there some way to do the same thing from within
   MySQL? Can I at
least print everything to screen and copy it? Where would I
   find my database
in Windoze?
TIA,
Victor
   
  
  
  
 
 
 
 
 





Re: Problem w/ mysqldump

2009-09-02 Thread Victor Subervi
Not in Windoze. The only folders I have inside the Programs/MySQL are Docs,
bin and Shared
V

On Wed, Sep 2, 2009 at 5:29 PM, Daevid Vincent dae...@daevid.com wrote:

 If it's MYISAM tables, then they're all self contained in folders in
 /var/lib/mysql/spreadsheets. Remember that if you do copy the files, to
 shut
 down mysql first or you could copy corrupt files.

 If you're using INNODB, then the schema is in that folder, but the actual
 data is in the /var/lib/mysql/ib* files. Along with all the other INNODB
 databases you may have on the system -- all mashed together. Hence the need
 for a mysql dump tool ;-)

 Mysql dump is the better way to go in either situation.

  -Original Message-
  From: Victor Subervi [mailto:victorsube...@gmail.com]
  Sent: Wednesday, September 02, 2009 1:43 PM
  To: Daevid Vincent; mysql@lists.mysql.com
  Subject: Re: Problem w/ mysqldump
 
  I checked my own backup script from earlier years and
  everything was good.
  You know, if I could simply figure out where the data was
  actually stored,
  in what file, I could copy it over to another computer. Any ideas?
  Thanks,
  V
 
  On Wed, Sep 2, 2009 at 4:09 PM, Daevid Vincent
  dae...@daevid.com wrote:
 
   While not python, maybe this bash script will give you some clues?
   http://daevid.com/content/examples/daily_backup.php
  
   Also, please don't cross post to multiple lists. Not
  everyone on this mySQL
   list is on the python list and vice versa. It's just bad netiquette.
  
-Original Message-
From: news [mailto:n...@ger.gmane.org] On Behalf Of Emile
  van Sebille
Sent: Wednesday, September 02, 2009 7:18 AM
To: mysql@lists.mysql.com
Cc: python-l...@python.org
Subject: Re: Problem w/ mysqldump
   
On 9/2/2009 3:43 AM Victor Subervi said...
 Hi:
 I have the following python code:
 import os
  os.system("mysqldump -u root -pPASSWORD --opt spreadsheets > dump.sql")
   
First, test this at the system command line -- you'll
  likely get an
empty file there as well, so calling from within python
simply does the
same.
   
Then read the mysqldump docs for the command arguments
  and supply the
database name...
   
Emile
   
   
 This nicely creates the file...but the file is empty! The
database exists
 and has lots of data, I double-checked it. If there is
nothing wrong with my
 code, is there some way to do the same thing from within
MySQL? Can I at
 least print everything to screen and copy it? Where would I
find my database
 in Windoze?
 TIA,
 Victor

   
   
   
  
  
  
  
 




Re: I want to change my sqldatabase server from one machine to another

2009-09-02 Thread Wolfgang Schaefer
Uma Mudumba wrote:
 Can you please send me a script to do that?
 I have MySQL Reference Manual for version 3.23.55.
 /* Copyright Abandoned 1996, 1999, 2001 MySQL AB
This file is public domain and comes with NO WARRANTY of any kind */

 /* Version numbers for protocol & mysqld */

 #ifdef _CUSTOMCONFIG_
 #include "custom_conf.h"
 #else
 #define PROTOCOL_VERSION        10
 #define MYSQL_SERVER_VERSION    "3.23.55"
 #define MYSQL_SERVER_SUFFIX     ""
 #define FRM_VER                 6
 #define MYSQL_VERSION_ID        32355
 #define MYSQL_PORT              3306
 #define MYSQL_UNIX_ADDR         "/tmp/mysql.sock"
 #define MYSQL_CONFIG_NAME       "my"

 /* mysqld compile time options */
 #ifndef MYSQL_CHARSET
 #define MYSQL_CHARSET           "latin1"
 #endif
 #endif


 I have mysqlhotcopy scripts, but that copies only to the same server. I want to 
 copy the database to another machine.
 All my scripts should access the database from the new machine and new 
 location.

 Thanks,
 Uma-


   
Why don't you use replication to sync the new machine? Once both
databases are synced you can switch to the new machine. In this case you
should not even have any downtime.

http://dev.mysql.com/doc/refman/4.1/en/replication-howto.html

cheers,
wolfgang





Re: upgrading from 4.1 to 5.0 trick

2009-09-02 Thread Hank
On Fri, Aug 28, 2009 at 9:18 AM, Shawn Green shawn.gr...@sun.com wrote:

 Hank wrote:

 Hello All,
  I'm in the process of upgrading my database from 4.1 to 5.0 on CentOS.
  I've been testing the mysqlcheck --check-upgrade   --auto-repair
 command,
 and on one of my MYISAM tables, it's taking forever to upgrade the table.
  It has about 114 million rows, and I'm guessing it needs to be upgraded
 due
 to the VARCHAR columns. Anyway, it's been running for a day and a half,
 and
 I finally had to kill it.

 So will this old trick still work?  I've done this many times on 4.1
 with
 great success:

 In mysql 5.0 - I create two new empty tables, one identical to the
 original
 and one identical but with no indexes.  I name these tables with _ion
 and
 _ioff suffixes.

 I then do an insert into table_ioff select * from source, which inserts just
 the original data into the new table, but doesn't have to rebuild any
 indexes.  I then flush the tables.

 Then in the file system, I swap the table_ion.frm and table_ion.MYI
 files with the table_ioff ones.  Flush tables again.

  I then just use myisamchk -r to repair the index file.  It runs in about
 an
 hour.

 Can I do this same thing to upgrade the tables, instead of using
 mysqlcheck, which seems to be rebuilding the table row-by-row, instead of
 sorting (which myisamchk does).

 thanks.

 -Hank


 Hello Hank,

 Your technique will work within the following narrow limits of operation:

 * This will only work for MyISAM tables.

 * myisamchk is dangerous to run against any table that is in active use as
 it operates at the file level and has caused corruptions with live tables.
  Whenever possible either stop the server or prevent access from MySQL to
 that table with a FLUSH TABLES WITH READ LOCK before using myisamchk.

 http://dev.mysql.com/doc/refman/5.0/en/flush.html

 Alternatively, you should be able to match or improve this import then
 index process if you use an ALTER TABLE ... DISABLE KEYS command before
 the import followed by an ALTER TABLE ... ENABLE KEYS command after the
 import or if you use LOAD DATA INFILE ... . Also if you can import all of
 the data to an empty table in a single batch (statement), the indexes will
 be computed only once using the batch-index algorithm (it's a sort, not a
 merge) and that will also save processing time.

 http://dev.mysql.com/doc/refman/5.0/en/alter-table.html
 http://dev.mysql.com/doc/refman/5.0/en/load-data.html

 http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html

 The overall problem is still that the on-disk structure of the 5.0 tables
 has changed and that you still need to perform some kind of dump-restore or
 rebuild of the data as part of the conversion.
 Warmest regards,
 Shawn Green, MySQL Senior Support Engineer
 Sun Microsystems, Inc.
 Office: Blountville, TN


 Hello Shawn,

 Thanks for your reply.   Yes, I have all of your conditions covered.
1. They are myisam tables
2. This is not a production system, so other people aren't accessing the
tables.
3. And your last comment about dump/restore is taken care of (in my original
note) since I am creating a new table (without indexes) in mysql 5.0, and
then inserting all the data from the old table into the new one.  Then I'm
swapping the MYI/frm files, and then rebuilding the new table.

I've tested this several times now, and it works like a charm.

Finally, I don't like to use the ALTER TABLE DISABLE/ENABLE KEYS statements,
since they operate silently -- I have no idea what they're doing, or how
long to expect the process to take. It would be very nice if those commands
had some built-in progress meter or feedback/callback method.


Speeding up a pretty simple correlated update query

2009-09-02 Thread Hank
Hello All,
  I have a legacy application which was written using a compound primary key
of an item number (non unique) along with a category ID. The combination of
the item number and category ID make the records unique.

  I am in the process of replacing the compound (VARCHAR) keys with a
unique integer key in these tables.

So I have created an item_seq table and assigned a unique sequence number to
each compound key -- it looks like this (all tables are myisam tables, and
mysql version 5.0)

desc item_seq;
+----------+------------------+------+-----+---------+----------------+
| Field    | Type             | Null | Key | Default | Extra          |
+----------+------------------+------+-----+---------+----------------+
| seq      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| itemid   | char(11)         | NO   | MUL |         |                |
| category | char(4)          | NO   |     |         |                |
+----------+------------------+------+-----+---------+----------------+

I also have my main transactional table with about 180,000,000 rows -- it
looks like this:

desc item_trans;
+----------+------------------+------+-----+---------+-------+
| Field    | Type             | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+-------+
| seq      | int(10) unsigned | NO   | MUL |         |       |
| itemid   | char(11)         | NO   | PRI |         |       |
| category | char(4)          | NO   | PRI |         |       |
| transid  | int(10)          | NO   | PRI |         |       |
+----------+------------------+------+-----+---------+-------+

Currently the seq field is null for the entire table.  So of course, I
want to update the main transaction table with the new sequence number.

So I've disabled all the keys on the item_trans table -- since I am
updating every row, it wouldn't (shouldn't) be using the index anyway.

Here is my correlated update query:

 update item_trans i, item_seq is
 set i.seq=is.seq
 where is.itemid=i.itemid and is.category=i.category;

  If I run an explain on the select version of the update, this is what I
get:

+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
| id | select_type | table      | type   | possible_keys | key    | key_len | ref                                       | rows      | Extra |
+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
|  1 | SIMPLE      | item_trans | ALL    | PRIMARY       | NULL   | NULL    | NULL                                      | 178948797 |       |
|  1 | SIMPLE      | item_seq   | eq_ref | itemid        | itemid | 20      | g.item_trans.itemid,g.item_trans.category |         1 |       |
+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+

... which is exactly what I would expect it to do.  Update every record of
the item_trans table, and do a full index lookup on the items_seq table.

SO... I've been running this query to update item_trans, and it's been
running for 5 days now.

I've also tried running this with the primary key index on the item_trans
table (but not the seq index), and that ran slower in my initial tests.

Are there any faster ways to update 180 million records with a correlated
update query?  And I'm fairly certain that trying to do this in PHP
one-record at a time would take much longer than a SQL solution.

Thanks,

-Hank


RE: Speeding up a pretty simple correlated update query

2009-09-02 Thread Gavin Towey
Do you know that if you create the seq column on the original table as an 
auto_increment primary key, it will fill in the numbers automatically?  There's 
no need to create the values in another table and update with a join.

Regards,
Gavin Towey

-Original Message-
From: Hank [mailto:hes...@gmail.com]
Sent: Wednesday, September 02, 2009 4:35 PM
To: mysql@lists.mysql.com
Subject: Speeding up a pretty simple correlated update query

Hello All,
  I have a legacy application which was written using a compound primary key
of an item number (non unique) along with a category ID. The combination of
the item number and category ID make the records unique.

  I am in the process of replacing the compound (VARCHAR) keys with a
unique integer key in these tables.

So I have created an item_seq table and assigned a unique sequence number to
each compound key -- it looks like this (all tables are MyISAM, MySQL
version 5.0):

desc item_seq;
+----------+------------------+------+-----+---------+----------------+
| Field    | Type             | Null | Key | Default | Extra          |
+----------+------------------+------+-----+---------+----------------+
| seq      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| itemid   | char(11)         | NO   | MUL |         |                |
| category | char(4)          | NO   |     |         |                |
+----------+------------------+------+-----+---------+----------------+

I also have my main transactional table with about 180,000,000 rows -- it
looks like this:

desc item_trans;
+----------+------------------+------+-----+---------+-------+
| Field    | Type             | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+-------+
| seq      | int(10) unsigned | NO   | MUL |         |       |
| itemid   | char(11)         | NO   | PRI |         |       |
| category | char(4)          | NO   | PRI |         |       |
| transid  | int(10)          | NO   | PRI |         |       |
+----------+------------------+------+-----+---------+-------+

Currently the seq field is null for the entire table.  So of course, I
want to update the main transaction table with the new sequence number.

So I've disabled all the keys on the item_trans table -- since I am
updating every row, it wouldn't (shouldn't) be using the index anyway.

Here is my correlated update query:

 update item_trans i, item_seq s
 set i.seq = s.seq
 where s.itemid = i.itemid and s.category = i.category;

  If I run an explain on the select version of the update, this is what I
get:

+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
| id | select_type | table      | type   | possible_keys | key    | key_len | ref                                       | rows      | Extra |
+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
|  1 | SIMPLE      | item_trans | ALL    | PRIMARY       | NULL   | NULL    | NULL                                      | 178948797 |       |
|  1 | SIMPLE      | item_seq   | eq_ref | itemid        | itemid | 20      | g.item_trans.itemid,g.item_trans.category |         1 |       |
+----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+

... which is exactly what I would expect it to do: scan every record of
the item_trans table and do an eq_ref index lookup on the item_seq table
for each row.

SO... I've been running this query to update item_trans, and it's been
running for 5 days now.

I've also tried running this with the primary key index on the item_trans
table (but not the seq index), and that ran slower in my initial tests.

Are there any faster ways to update 180 million records with a correlated
update query?  I'm fairly certain that doing this in PHP one record at a
time would take much longer than a SQL solution.

Thanks,

-Hank

The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.




Re: Speeding up a pretty simple correlated update query

2009-09-02 Thread Hank
Hello Gavin,
 That's what I did with the first, one-to-one table to create the unique seq
field mapping to each item/category combination.  The problem is with the
TRANSACTION table, where there are multiple instances of each item/category
pair.  If I just put an auto_increment primary key on that table, I'd get a
unique transaction ID, which is not what I want.  I want to populate the
transaction table with the new integer seq key created in the first table.

I guess I should have stated that my overall objective here is to eventually
drop the VARCHAR itemid and category fields from the transaction table,
leaving only the new item sequence id (plus transid) as the primary key.
There are many tables throughout the schema that use this compound key, and
I would be converting them all.  It's just that this is the largest table,
the correlated update is taking a long time, and I'm looking for a better
solution (if one exists).  Thanks.

-Hank
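[Editor's note] One alternative to the five-day in-place update, not proposed in the thread itself but a common pattern for bulk key rewrites: build the replacement table with a single INSERT ... SELECT off the join, then swap it in. Table and column names below follow the posts; the data and the `item_trans_new` name are invented for this sqlite3 sketch (MySQL would finish with an atomic RENAME TABLE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item_seq (seq INTEGER PRIMARY KEY, itemid TEXT, category TEXT);
CREATE TABLE item_trans (itemid TEXT, category TEXT, transid INTEGER);
INSERT INTO item_seq (itemid, category) VALUES ('A1','X'), ('B2','X');
INSERT INTO item_trans VALUES ('A1','X',1), ('A1','X',2), ('B2','X',3);

-- Build the new table in one pass instead of updating rows in place;
-- a bulk INSERT ... SELECT avoids per-row index maintenance.
CREATE TABLE item_trans_new (seq INTEGER NOT NULL, transid INTEGER NOT NULL);
INSERT INTO item_trans_new (seq, transid)
SELECT s.seq, t.transid
FROM item_trans t
JOIN item_seq s ON s.itemid = t.itemid AND s.category = t.category;

-- Swap the new table in; in MySQL this step would be RENAME TABLE.
DROP TABLE item_trans;
ALTER TABLE item_trans_new RENAME TO item_trans;
""")

rows = con.execute(
    "SELECT seq, transid FROM item_trans ORDER BY transid").fetchall()
print(rows)
```

The rebuilt table also drops the VARCHAR itemid/category columns in the same pass, which matches the stated end goal above.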


On Wed, Sep 2, 2009 at 7:50 PM, Gavin Towey gto...@ffn.com wrote:

 Do you know that if you create the seq column on the original table as an
 auto_increment primary key, MySQL will fill in the numbers automatically?
  There's no need to create the values in another table and update with a
 join.

 Regards,
 Gavin Towey

 -Original Message-
 From: Hank [mailto:hes...@gmail.com]
 Sent: Wednesday, September 02, 2009 4:35 PM
 To: mysql@lists.mysql.com
 Subject: Speeding up a pretty simple correlated update query

 Hello All,
  I have a legacy application which was written using a compound primary key
 of an item number (non-unique) along with a category ID. The combination of
 the item number and the category ID makes the records unique.

  I am in the process of replacing the compound (VARCHAR) keys with a
 unique integer key in these tables.

 So I have created an item_seq table and assigned a unique sequence number
 to each compound key -- it looks like this (all tables are MyISAM, MySQL
 version 5.0):

 desc item_seq;
 +----------+------------------+------+-----+---------+----------------+
 | Field    | Type             | Null | Key | Default | Extra          |
 +----------+------------------+------+-----+---------+----------------+
 | seq      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
 | itemid   | char(11)         | NO   | MUL |         |                |
 | category | char(4)          | NO   |     |         |                |
 +----------+------------------+------+-----+---------+----------------+

 I also have my main transactional table with about 180,000,000 rows -- it
 looks like this:

 desc item_trans;

 +----------+------------------+------+-----+---------+-------+
 | Field    | Type             | Null | Key | Default | Extra |
 +----------+------------------+------+-----+---------+-------+
 | seq      | int(10) unsigned | NO   | MUL |         |       |
 | itemid   | char(11)         | NO   | PRI |         |       |
 | category | char(4)          | NO   | PRI |         |       |
 | transid  | int(10)          | NO   | PRI |         |       |
 +----------+------------------+------+-----+---------+-------+

 Currently the seq field is null for the entire table.  So of course, I
 want to update the main transaction table with the new sequence number.

 So I've disabled all the keys on the item_trans table -- since I am
 updating every row, it wouldn't (shouldn't) be using the index anyway.

 Here is my correlated update query:

  update item_trans i, item_seq s
  set i.seq = s.seq
  where s.itemid = i.itemid and s.category = i.category;

  If I run an explain on the select version of the update, this is what I
 get:


 +----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
 | id | select_type | table      | type   | possible_keys | key    | key_len | ref                                       | rows      | Extra |
 +----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+
 |  1 | SIMPLE      | item_trans | ALL    | PRIMARY       | NULL   | NULL    | NULL                                      | 178948797 |       |
 |  1 | SIMPLE      | item_seq   | eq_ref | itemid        | itemid | 20      | g.item_trans.itemid,g.item_trans.category |         1 |       |
 +----+-------------+------------+--------+---------------+--------+---------+-------------------------------------------+-----------+-------+

 ... which is exactly what I would expect it to do: scan every record of
 the item_trans table and do an eq_ref index lookup on the item_seq table
 for each row.

 SO... I've been running this query to update item_trans, and it's been
 running for 5 days now.

 I've also tried running this with the primary key index on the item_trans
 table (but not the seq index), and that ran slower in my initial