Re: IP Resolution
Eric,

Thanks for the advice. I was not reporting a bug. This is a live installation of a database I have to get working. I was hoping somebody might give me some compilation options to make MySQL understand IP addresses, or something. I cannot upgrade the database, as this is our production level and changing it would require months of testing. In time, when I do, I can tell you if this may be a bug candidate.

So, if anybody knows why all IP addresses are seen by MySQL as 0.0.0.0, I would very much like to know. I note that netstat and other IP-reporting programs have no problem.

Regards, Ben Clewett.

Eric Bergen wrote:
MySQL 4.1.14 is the current version. You should always upgrade to the latest release and test your problem before trying to report bugs.

Ben Clewett wrote:
Dear MySQL,

My MySQL 4.1.9 has lost the ability to work out what IP address clients are connecting from. Eg:

$ mysqladmin processlist
+----+------+---------------+------+---------+------+-------+------------------+
| Id | User | Host          | db   | Command | Time | State | Info             |
+----+------+---------------+------+---------+------+-------+------------------+
|  5 | test | 0.0.0.0:55049 | test | Sleep   |   10 |       |                  |
|  6 | root | localhost     |      | Query   |    0 |       | show processlist |
+----+------+---------------+------+---------+------+-------+------------------+

This is annoying as I can't authenticate users based on their IP address. I suspect this may be a clash between IPv6 and IPv4. It happened after a patch was applied to the AIX operating system and MySQL was recompiled. Would any members have experience or ideas of how this problem may be resolved?

Regards, Ben

-- MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: How to test the MySQL Server (windows version) ?
Hi!

lu ming wrote:
Hi! I tried to run the perl scripts on the Windows platform yesterday, but an error occurred [[...]]

I can just repeat: this script is still under construction, and adaptations are still needed to make tests work correctly on Windows platforms. If you run it, you are on your own, and we are not investing any effort in supporting it externally now; we are fully concentrating on improving it for our own internal tests. When this is finished, we can announce it and then work on the problems which you and other users may experience.

Jörg

-- Joerg Bruehe, Senior Production Engineer
MySQL AB, www.mysql.com
Office: (+49 30) 417 01 487
VoIP: [EMAIL PROTECTED]
Re: DBDesigner4 under Linux
I have a symbolic link to libmysqlclient.so.10.0.0. When I run startdbd for the first time, symbolic links are created.

Thanks, Bruno
Re: DBDesigner4 under Linux
If you have libmysqlclient.so.10.0.0, for example at /usr/lib/, try to add a new symbolic link:

ln -s /usr/lib/libmysqlclient.so.10.0.0 /usr/lib/libmysqlclient.so

and run /sbin/ldconfig.

Good luck.

Bruno Cochofel wrote:
I have a symbolic link to libmysqlclient.so.10.0.0. When I run startdbd for the first time, symbolic links are created. Thanks, Bruno
Re: How to test the MySQL Server (windows version) ?
Hi, Joerg,

Thanks for your patient answer! I hope that we will be able to run the test suite on Windows systems soon.

Best regards, luming
Re: SQL statement help
Turn off your autocommit (if using InnoDB) and run the queries. See how many rows each updates and compare them. If it matches, commit; else roll back.

C.F. Scheidecker Antunes wrote:
Another thing is that table2 might not have a match.

C.F. Scheidecker Antunes wrote:
I actually forgot something. I need the two statements combined into one. The reason is that table_two has multiple values, so I need the minimal ID from table 2. Thanks. Maybe I am working too much here and too tired to see the right statement.

J.R. Bullington wrote:
I am not the smartest on the list, but I think this would do it for you:

UPDATE table_one t1, table_two t2 SET t1.ID = t2.ID WHERE t2.num = t1.num AND t1.ID = 0

J.R.

-----Original Message-----
From: C.F. Scheidecker Antunes [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 16, 2005 1:45 PM
To: mysql@lists.mysql.com
Subject: SQL statement help
Importance: High

Hello All,

I have two MySQL 4.1 tables. The first table has an ID field which is set to zero. The second one has ID fields and filenames. I need a single SQL statement to set the ID from table 1 to the ID from table 2 ONLY IF the ID on table one is zero. That is, if the ID on table one is not set (still equal to zero), grab the ID from table2 where the num of table2 is equal to that of table1.

table one: Title, num, ID (set to zero initially)
table two: num, ID, filename

SET table1.ID = someID if table1.ID is EQUAL to zero. Set the ID on table1 from table 2 where the num on table 2 equals the num on table1, only if the ID on table 1 is zero.

Thank you in advance, C.F.
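Since table_two can hold several rows per num, the single-statement version needs the smallest matching ID. A minimal sketch of one way to do this in MySQL 4.1, using a derived table (table and column names are taken from the thread; this is untested against the poster's actual schema):

```sql
-- Set each unassigned table_one.ID to the smallest table_two.ID
-- that shares the same num. Only rows with ID = 0 are touched.
UPDATE table_one t1,
       (SELECT num, MIN(ID) AS min_id
          FROM table_two
         GROUP BY num) t2
   SET t1.ID = t2.min_id
 WHERE t1.num = t2.num
   AND t1.ID = 0;
```

The derived table collapses table_two to one row per num before the join, which avoids the multiple-match problem the poster describes.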
Re: DBDesigner4 under Linux
OK, I have the DBDesigner4 libs under /usr/local/lib/DBDesigner4. I tried to do what you suggested and found that I already have a link to libmysqlclient.so.14.0.0. Maybe this is the problem, don't you think? I don't know which program uses that lib...

Ady Wicaksono wrote:
If you have libmysqlclient.so.10.0.0, for example at /usr/lib/, try to add a new symbolic link:

ln -s /usr/lib/libmysqlclient.so.10.0.0 /usr/lib/libmysqlclient.so

and run /sbin/ldconfig. Good luck.

Bruno Cochofel wrote:
I have a symbolic link to libmysqlclient.so.10.0.0. When I run startdbd for the first time, symbolic links are created. Thanks, Bruno
Re: renaming the database
Octavian Rasnita wrote:
Is there a command for renaming a MySQL database?

Sorry, there is no command for that. For small databases you can use administration tools like phpMyAdmin, etc., which will do it for you by duplicating the database structure and data and then dropping the 'old' database. For big databases this will take quite long, so it's probably better to stick with the old name (what's in a name? ;-) )

Regards, Jigal.
Datatype conversion
Hi all,

I have a strange problem here. I have around 600 tables in an Oracle DB (on 8i, so I am not able to use the MySQL Migration Toolkit!). Now I want to move all of these to MySQL. I am initially trying to convert the schema from Oracle to MySQL. My problem is that in the existing Oracle DB I am using the same datatype, NUMBER, for storing both integers and decimal values. But during migration I want to distinguish whether a given NUMBER field in Oracle is actually holding integers or decimals. Is this possible to do, or do I have to do it manually?

Thank you, sujay
AW: limited threads to two but 25 waiting?!
Hello:

After having a look at `show processlist` I realized that queries will be locked if there is an UPDATE or INSERT statement; otherwise the STATUS is as regular: copying to temp, sorting, etc. So not all queries will be locked automatically. (But when checking, there were lots of INSERT INTOs, so every query seemed to be locked.)

Some SELECT queries:

SELECT kk2.k_id FROM katalog_katalog kk
LEFT JOIN katalog_katalog kk1 ON k_id=kk1.kat_id
LEFT JOIN katalog_katalog kk2 ON kk1.k_kat_id=kk2.kat_id
WHERE kk.kat_id=34678 AND kk1.k_kategorie_id=56 AND kk2.k_kategorie_id=24;

An INSERT INTO/UPDATE looks like this:

INSERT INTO katalog SET kategorie_id=36, titel='foo foo bar', artikel='bar bar bar foo'

The UPDATE gets an additional `WHERE id=1234`.

About the CREATE statements: two of many tables hopefully give an insight.

//--- snip
CREATE TABLE `katalog` (
  `id` int(11) NOT NULL auto_increment,
  `kategorie_id` int(11) NOT NULL default '0',
  `datum` datetime NOT NULL default '0000-00-00 00:00:00',
  `titel` varchar(200) collate latin1_german2_ci default NULL,
  `untertitel` text collate latin1_german2_ci,
  `einleitung` text collate latin1_german2_ci,
  `artikel` text collate latin1_german2_ci,
  `quelle` text collate latin1_german2_ci,
  `bild_id` int(11) default NULL,
  `txt1` text collate latin1_german2_ci,
  `txt2` text collate latin1_german2_ci,
  `txt3` text collate latin1_german2_ci,
  `txt4` text collate latin1_german2_ci,
  PRIMARY KEY (`id`),
  KEY `kategorie_id_idx` (`kategorie_id`),
  KEY `datum_idx` (`datum`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_german2_ci PACK_KEYS=1;

and

CREATE TABLE `katalog_katalog` (
  `kat_id` int(11) NOT NULL default '0',
  `kategorie_id` int(11) NOT NULL default '0',
  `k_id` int(11) NOT NULL default '0',
  `k_kategorie_id` int(11) NOT NULL default '0',
  KEY `kat_id_idx` (`kat_id`),
  KEY `kategorie_id_idx` (`kategorie_id`),
  KEY `k_id_idx` (`k_id`),
  KEY `k_kategorie_id_idx` (`k_kategorie_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_german2_ci
PACK_KEYS=1;
//--- snap

I do have the skip-bdb option written in /etc/mysql/my.cnf. The idea that skip-bdb was not working came from the numbers shown in `show variables`:

bdb_cache_size | 8388600

Maybe my guess was not right. Thanks so far for the help!!!

yours, mathias

-----Original Message-----
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]]
Sent: Sunday, 16 October 2005 14:45
To: mysql@lists.mysql.com
Subject: Re: limited threads to two but 25 waiting?!

Hello. Please send to the list your queries and the CREATE statements for the tables which are used by your queries. Include the output of SHOW PROCESSLIST.

"2. When writing the skip-bdb option in my.cnf, the server still reserves memory for BerkeleyDB as well as InnoDB. But writing something wrong into the file will cause mysql to throw an error."

Are you sure that the server reserves the memory for BDB? According to http://dev.mysql.com/doc/refman/5.0/en/bdb-start.html it shouldn't. It could show the values of different bdb-related variables, but it isn't using them. To what file are you trying to write?

M.E. Koch wrote:
Hi, I have searched and tried and have no clue why the db on 4.1.11-Debian_4sarge2 behaves like this. I have no TABLE LOCK query anywhere in my code! About the server (LAMP, 2 x 3 GHz, 4 GB RAM):

1. Problem: mysql SHOW PROCESSLIST gives me a list of 25 threads waiting for their work. Even under heavy load, `pgrep mysql` will just show two PIDs doing something, even if there are only SELECT queries on the DB; therefore the server gets really slow. The case gets even worse if there is an UPDATE or INSERT statement (LOCK problem).

2. When writing the skip-bdb option in my.cnf, the server still reserves memory for BerkeleyDB as well as InnoDB. But writing something wrong into the file will cause mysql to throw an error.

I have no idea anymore where to look or what to check. Please help. Any tuning tips are welcome!
mysql> show variables; gives me that:

+--------------------------+---------+
| Variable_name            | Value   |
+--------------------------+---------+
| back_log                 | 50      |
| basedir                  | /usr/   |
| bdb_cache_size           | 8388600 |
| bdb_home                 |         |
| bdb_log_buffer_size      | 0       |
| bdb_logdir               |         |
| bdb_max_lock             | 1       |
| bdb_shared_data          | OFF     |
| bdb_tmpdir               |         |
| binlog_cache_size        | 32768   |
| bulk_insert_buffer_size  | 8388608 |
| character_set_client     | latin1  |
| character_set_connection | latin1  |
| character_set_database
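To see why the SELECT above participates in the lock pile-up, EXPLAIN shows which of the katalog_katalog indexes the self-join actually uses. A sketch of the query as posted (I have qualified the ambiguous k_id as kk.k_id, which is an assumption about the poster's intent; output depends on the data):

```sql
EXPLAIN SELECT kk2.k_id
  FROM katalog_katalog kk
  LEFT JOIN katalog_katalog kk1 ON kk.k_id = kk1.kat_id
  LEFT JOIN katalog_katalog kk2 ON kk1.k_kat_id = kk2.kat_id
 WHERE kk.kat_id = 34678
   AND kk1.k_kategorie_id = 56
   AND kk2.k_kategorie_id = 24;
```

With MyISAM, any long-running SELECT like this holds a table-level read lock that INSERT/UPDATE statements must wait behind, which matches the behavior described in the message.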
Re: renaming the database
Thanks. I want to rename the database in order to keep it as a backup, then temporarily create a new one with the same name as a test, because more programs use that database name.

Teddy

----- Original Message -----
From: Jigal van Hemert [EMAIL PROTECTED]
To: Octavian Rasnita [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, October 17, 2005 12:11 PM
Subject: Re: renaming the database

Octavian Rasnita wrote: Is there a command for renaming a MySQL database?

Sorry, there is no command for that. For small databases you can use administration tools like phpMyAdmin, etc., which will do it for you by duplicating the database structure and data and then dropping the 'old' database. For big databases this will take quite long, so it's probably better to stick with the old name (what's in a name? ;-) )

Regards, Jigal.
Re: renaming the database
What about renaming the folder indicating the database name? I work with Windows XP and renaming a folder works well.

Regards, Cor

----- Original Message -----
From: Jigal van Hemert [EMAIL PROTECTED]
To: Octavian Rasnita [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, October 17, 2005 12:11 PM
Subject: Re: renaming the database

Octavian Rasnita wrote: Is there a command for renaming a MySQL database?

Sorry, there is no command for that. For small databases you can use administration tools like phpMyAdmin, etc., which will do it for you by duplicating the database structure and data and then dropping the 'old' database. For big databases this will take quite long, so it's probably better to stick with the old name (what's in a name? ;-) )

Regards, Jigal.
Re: renaming the database
C.R. Vegelin wrote:
What about renaming the folder indicating the database name? I work with Windows XP and renaming a folder works well.

Cor,

Have you tried it with InnoDB tables or anything other than MyISAM tables? InnoDB uses a single tablespace per server (unless specified that it should use a file per table, but then it still uses a general tablespace), so renaming a directory or folder will probably only confuse InnoDB and prevent it from starting.

Regards, Jigal.
Re: renaming the database
I think you can just rename the directory that the database lives in. If you read the documentation for CREATE DATABASE on dev.mysql.com, you'll see they discuss renaming the directory (although it does not directly say this can be done to rename the database, it comes really close to that). Based on the documentation, the database name is simply a directory name, no more and no less. Renaming tables has its own command syntax; you can look it up.

Bob Cochran

Octavian Rasnita wrote: Hi, Is there a command for renaming a MySQL database? Thank you. Teddy
Re: Query dies silently
Gleb Paharenko wrote:
Hello. "I get 142k rows back in just over 4 seconds." It means that on the server side the query works, so the problem probably is in PHP.

Good call. I did some testing, and sure enough, the mysql_query() is successful. The failure comes in trying to retrieve the rows using:

while ($row = mysql_fetch_assoc($queryID)) { $arrResults[] = $row; }

So it would seem that I need to take this issue to a different list. Isn't the PHP-level logging high enough?

"Increase the memory_limit variable in your php.ini."

I did try to do an ini_set() on the memory_limit variable, but it appears that my host does not have this enabled.

Thanks for the tips. -Erich-
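If memory_limit cannot be raised, one workaround is to fetch the result in slices and let PHP process each slice before requesting the next, so the full 142k-row result never has to fit in one array. A sketch of the server-side part (the table name, batch size, and ORDER BY column are illustrative assumptions, not from the thread):

```sql
-- Fetch the result in 10,000-row slices; the client loops,
-- increasing the offset by the batch size until no rows return.
SELECT *
  FROM some_table       -- hypothetical; the thread never names the table
 ORDER BY id            -- a stable ORDER BY keeps the slices consistent
 LIMIT 10000 OFFSET 0;  -- next iterations: OFFSET 10000, 20000, ...
```

Each slice is appended to the output (or processed and discarded) before the next query runs, keeping peak PHP memory roughly one batch in size.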
Re: move log files to a different disk.
Hi Friends,

Any help on this would be of great help.

regards, anandkl

On 10/17/05, Ananda Kumar [EMAIL PROTECTED] wrote:
Hi All, I am using version 5 of MySQL on Linux. I have placed my log files on the same disk as the datafiles; can you please let me know how I can move the log files to a different disk?

regards, anandkl
Re: renaming the database
Robert L Cochran [EMAIL PROTECTED] wrote on 10/17/2005 07:37:26 AM:
I think you can just rename the directory that the database lives in. If you read the documentation for CREATE DATABASE on dev.mysql.com, you'll see they discuss renaming the directory (although it does not directly say this can be done to rename the database, it comes really close to that). Based on the documentation, the database name is simply a directory name, no more and no less. Renaming tables has its own command syntax; you can look it up.

Bob Cochran

Octavian Rasnita wrote: Hi, Is there a command for renaming a MySQL database? Thank you. Teddy

The only way I have done it has been to create an empty database with the name I want. Then I used RENAME TABLE to move all of the tables into the new database. Sure, it takes a lot of RENAME TABLE statements, but it works. If these are InnoDB tables, all I am doing is moving metadata, and that is FAST. For MyISAM and other file-based storage engines, it moves files from one folder to another. For some file systems, that is also just a metadata shift and will still be FAST. Others will require a physical move of the data from one location to another (it is a good thing that those filesystems are becoming quite rare these days).

I have a 2.1GB database with all InnoDB tables in it that I wanted to change the name of. It took me longer to write my RENAME TABLE script than it did to actually move the data.

http://dev.mysql.com/doc/refman/4.1/en/rename-table.html

Shawn Green, Database Administrator, Unimin Corporation - Spruce Pine
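The rename-via-RENAME TABLE approach Shawn describes looks like this in practice (database and table names here are placeholders; the target database must already exist, and one RENAME TABLE statement is needed per table):

```sql
-- 1. Create the empty database under the new name.
CREATE DATABASE new_db;

-- 2. Move each table across; RENAME TABLE works between databases
--    on the same server. Repeat for every table in old_db.
RENAME TABLE old_db.customers TO new_db.customers;
RENAME TABLE old_db.orders    TO new_db.orders;

-- 3. Once old_db is empty, drop it.
DROP DATABASE old_db;
```

A script generating one RENAME TABLE line per table (from SHOW TABLES output) keeps this manageable for databases with many tables.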
Re: Datatype conversion
Sujay Koduri [EMAIL PROTECTED] wrote on 10/17/2005 06:25:58 AM:
Hi all, I have a strange problem here. I have around 600 tables in an Oracle DB (on 8i, so I am not able to use the MySQL Migration Toolkit!). Now I want to move all of these to MySQL. I am initially trying to convert the schema from Oracle to MySQL. My problem is that in the existing Oracle DB I am using the same datatype, NUMBER, for storing both integers and decimal values. But during migration I want to distinguish whether a given NUMBER field in Oracle is actually holding integers or decimals. Is this possible to do, or do I have to do it manually? Thank you, sujay

Unless you can somehow extract some additional information about your fields, you will not be able to automate such a conversion. How were you planning to make the decision by hand?

You could probably write something to dump your Oracle schema as a MySQL-ready script, complete with the numeric data type conversions (based on how you want to decide which columns are integers and which are not). Then it becomes a simple matter of playing that script through the CLI (use the source command) to actually create your destination tables.

Shawn Green, Database Administrator, Unimin Corporation - Spruce Pine
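One place that extra information can come from is Oracle's own data dictionary: for NUMBER columns, the declared DATA_SCALE usually tells you whether decimals were intended. A sketch to run on the Oracle side (columns declared as plain NUMBER have NULL precision/scale and would still need a manual check or a scan of the actual data):

```sql
-- Classify NUMBER columns by declared scale as a starting point
-- for choosing MySQL integer vs. DECIMAL types.
SELECT table_name,
       column_name,
       data_precision,
       data_scale,
       CASE
         WHEN data_scale = 0 THEN 'integer-like'
         WHEN data_scale > 0 THEN 'decimal'
         ELSE 'unspecified - inspect the data'
       END AS suggested_mysql_type
  FROM user_tab_columns
 WHERE data_type = 'NUMBER'
 ORDER BY table_name, column_name;
```

The output of this query can feed the schema-dumping script Shawn suggests, leaving only the 'unspecified' columns for manual review.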
RE: Datatype conversion
Yes, exactly, that's what I am planning to do now.

Thank you, sujay

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 17, 2005 7:35 PM
To: Sujay Koduri
Cc: mysql@lists.mysql.com
Subject: Re: Datatype conversion

Sujay Koduri [EMAIL PROTECTED] wrote on 10/17/2005 06:25:58 AM:
Hi all, I have a strange problem here. I have around 600 tables in an Oracle DB (on 8i, so I am not able to use the MySQL Migration Toolkit!). Now I want to move all of these to MySQL. I am initially trying to convert the schema from Oracle to MySQL. My problem is that in the existing Oracle DB I am using the same datatype, NUMBER, for storing both integers and decimal values. But during migration I want to distinguish whether a given NUMBER field in Oracle is actually holding integers or decimals. Is this possible to do, or do I have to do it manually? Thank you, sujay

Unless you can somehow extract some additional information about your fields, you will not be able to automate such a conversion. How were you planning to make the decision by hand?

You could probably write something to dump your Oracle schema as a MySQL-ready script, complete with the numeric data type conversions (based on how you want to decide which columns are integers and which are not). Then it becomes a simple matter of playing that script through the CLI (use the source command) to actually create your destination tables.

Shawn Green, Database Administrator, Unimin Corporation - Spruce Pine
help me please
Hello! I have MySQL (charset is utf-8) + PHP 5.0.5. Latvian characters (auczsnsczikl) are not displayed correctly in MySQL Query Browser, but in phpMyAdmin they are OK. When I use PHP to view data with Latvian characters in IE, the Latvian characters are converted to '?' symbols. Please help!

Regards, Aleksejs
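A common cause of '?' output in this situation is the client connection being negotiated as latin1 while the data is stored as utf8. Assuming the tables really are utf8 (the charset names below are the standard MySQL 4.1/5.0 ones), one thing to try is forcing the connection character set right after connecting:

```sql
-- Tell the server this client sends and expects utf8.
-- Run immediately after opening the connection (e.g. via
-- mysql_query() in PHP on this 2005-era setup).
SET NAMES utf8;

-- Verify what the session is actually using:
SHOW VARIABLES LIKE 'character_set%';
```

The web page itself must also declare UTF-8 (an HTTP Content-Type header or an equivalent meta tag), or IE will guess a different encoding regardless of what MySQL returns.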
Confusion Over Numeric Types
Hi,

When I create a table using my ISP's web control panel and I create a column with a type of TINYINT, it automatically creates a column with a type of TINYINT(4). After looking at the documentation on the MySQL site, it is not exactly clear what this means: http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html

Is TINYINT(4) the same as an INT(4)?

Thanks for your advice.
Re: renaming the database
And, of course, you need to grant permissions for the new database name.

Bob

[EMAIL PROTECTED] wrote:
Robert L Cochran [EMAIL PROTECTED] wrote on 10/17/2005 07:37:26 AM:
I think you can just rename the directory that the database lives in. If you read the documentation for CREATE DATABASE on dev.mysql.com, you'll see they discuss renaming the directory (although it does not directly say this can be done to rename the database, it comes really close to that). Based on the documentation, the database name is simply a directory name, no more and no less. Renaming tables has its own command syntax; you can look it up.

Bob Cochran

Octavian Rasnita wrote: Hi, Is there a command for renaming a MySQL database? Thank you. Teddy

The only way I have done it has been to create an empty database with the name I want. Then I used RENAME TABLE to move all of the tables into the new database. Sure, it takes a lot of RENAME TABLE statements, but it works. If these are InnoDB tables, all I am doing is moving metadata, and that is FAST. For MyISAM and other file-based storage engines, it moves files from one folder to another. For some file systems, that is also just a metadata shift and will still be FAST. Others will require a physical move of the data from one location to another (it is a good thing that those filesystems are becoming quite rare these days).

I have a 2.1GB database with all InnoDB tables in it that I wanted to change the name of. It took me longer to write my RENAME TABLE script than it did to actually move the data.

http://dev.mysql.com/doc/refman/4.1/en/rename-table.html

Shawn Green, Database Administrator, Unimin Corporation - Spruce Pine
Re: renaming the database
Robert L Cochran [EMAIL PROTECTED] wrote on 10/17/2005 11:00:33 AM:
And, of course, you need to grant permissions for the new database name.

Bob

[EMAIL PROTECTED] wrote:
Robert L Cochran wrote on 10/17/2005 07:37:26 AM:
I think you can just rename the directory that the database lives in. If you read the documentation for CREATE DATABASE on dev.mysql.com, you'll see they discuss renaming the directory (although it does not directly say this can be done to rename the database, it comes really close to that). Based on the documentation, the database name is simply a directory name, no more and no less. Renaming tables has its own command syntax; you can look it up.

Bob Cochran

Octavian Rasnita wrote: Hi, Is there a command for renaming a MySQL database? Thank you. Teddy

The only way I have done it has been to create an empty database with the name I want. Then I used RENAME TABLE to move all of the tables into the new database. Sure, it takes a lot of RENAME TABLE statements, but it works. If these are InnoDB tables, all I am doing is moving metadata, and that is FAST. For MyISAM and other file-based storage engines, it moves files from one folder to another. For some file systems, that is also just a metadata shift and will still be FAST. Others will require a physical move of the data from one location to another (it is a good thing that those filesystems are becoming quite rare these days).

I have a 2.1GB database with all InnoDB tables in it that I wanted to change the name of. It took me longer to write my RENAME TABLE script than it did to actually move the data.

http://dev.mysql.com/doc/refman/4.1/en/rename-table.html

Shawn Green, Database Administrator, Unimin Corporation - Spruce Pine

Good catch! Alternatively, if he wanted to move all of his permissions to the new database name, he could simply update any old entries in the mysql.db and mysql.columns_priv tables and then issue a FLUSH PRIVILEGES command:
UPDATE mysql.db SET Db = 'new_dbname' WHERE Db = 'old_dbname';
UPDATE mysql.columns_priv SET Db = 'new_dbname' WHERE Db = 'old_dbname';
FLUSH PRIVILEGES;

This way, if he wants to archive his old database, he can grant a new set of permissions (with new GRANT statements). And if he just wants to rename his database (migrating any existing permissions to the new name), he can do that, too.

Thanks!

Shawn Green, Database Administrator, Unimin Corporation - Spruce Pine
Re: Confusion Over Numeric Types
Hi Shaun,

On Monday, 17 October 2005 at 16:56, Shaun wrote:
http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html Is TINYINT(4) the same as an INT(4)?

No, they only have the same display width; see the following quote from the manual:

"Another extension is supported by MySQL for optionally specifying the display width of an integer value in parentheses following the base keyword for the type (for example, INT(4)). This optional display width specification is used to left-pad the display of values having a width less than the width specified for the column."

If you don't add a display width, MySQL uses a default display width; for TINYINT that is 4, enough for the widest value, -128 (sign plus three digits). The display width doesn't change the bytes used to store a specific type, so TINYINT always uses 1 byte of storage and INT always uses 4 bytes of storage; storage is independent of the display width.

Dirk
Re: Confusion Over Numeric Types
Shaun wrote:
Hi, when I create a table using my ISP's web control panel and I create a column with a type of TINYINT, it automatically creates a column with a type of TINYINT(4). After looking at the documentation on the MySQL site, it is not exactly clear what this means: http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html Is TINYINT(4) the same as an INT(4)? Thanks for your advice.

TINYINT(4) means a type that holds a value from -128 to 127, displayed with a width of (4) character positions. The part in parentheses is the input/output display parameter, not the number of bytes of storage.
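The display width only becomes visible when output is padded, e.g. with ZEROFILL. A small sketch to try (the table name is made up for the example; note that ZEROFILL also implies UNSIGNED):

```sql
-- Display width pads output; it does not limit the stored range.
CREATE TABLE width_demo (
  a TINYINT(4) ZEROFILL,  -- padded to 4 digits on output
  b TINYINT               -- default display width of 4, no padding
);

INSERT INTO width_demo VALUES (7, 7);

SELECT a, b FROM width_demo;
-- a is displayed as 0007, b as 7; both columns still occupy
-- 1 byte of storage and accept the same range of values.
```

The same experiment with INT(4) would show identical padding but 4 bytes of storage, which is the difference the thread is about.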
Re: renaming the database
Hello. This is a frequently asked question. See: http://lists.mysql.com/mysql/188094

For more solutions, search the archives at: http://lists.mysql.com

Octavian Rasnita wrote: Hi, Is there a command for renaming a MySQL database? Thank you. Teddy

-- For technical support contracts, go to https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
Gleb Paharenko, [EMAIL PROTECTED], MySQL AB / Ensita.NET, www.mysql.com
RE: Non-linear degradation in bulk loads?
About 6GB... Is there any way to forcibly limit this?

-JF

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 16, 2005 2:23 AM
To: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?

Jon,

your hardware/OS combination is quite new and unusual. It might indeed be an OS problem. We observed on a 64-bit RHEL 4 that when the file cache of the OS grew bigger than 4 GB, the file I/O performance dropped to one tenth of normal. You would not expect that kind of behavior from a 64-bit OS. When you see the slowdown, what does 'top' say about the OS file cache size?

"The database is our main site's database, but we've dramatically reduced the load on that machine over the past couple months through careful optimization of our code. The box is a dual, dual-core Opteron with 8GB of RAM, running a 32-bit Linux 2.4 kernel and MySQL 4.0.20 (32-bit of course). We have 1GB allocated to the buffer pool, and our usual 1GB * 3 log files. 8 I/O threads. Load on the box sits at around 6-7, with a large (50%) amount of time spent in wait state, but actual disk throughput to our software RAID array (no longer on a SAN...) is quite low -- 6-9k blocks/s out, 1-6k blocks/s in."

Regards, Heikki

----- Original Message -----
From: Jon Frisby [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Friday, October 14, 2005 10:39 PM
Subject: RE: Non-linear degradation in bulk loads?

I've tried tweaking the structure of the schema to have, for example, a PRIMARY KEY index on email, no other indexes, and then insert in sorted order -- made no improvement whatsoever. Another clue that leads me to believe that this may be an OS issue: starting a large cp on the same box (from a local filesystem other than the one the InnoDB data pool was on, to NFS) caused MySQL to become COMPLETELY backlogged (we went from ~15-20 connections at any given instant to 750 (our max_connections setting)).
-JF

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 12, 2005 8:15 AM
To: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?

Jon,

hmm... maybe one of the indexes inevitably is in a random order.

Please post a typical

SHOW INNODB STATUS\G

when the inserts happen slowly.

What is your my.cnf like?

Regards,

Heikki
Innobase/Oracle

-----Original Message-----
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Wednesday, October 12, 2005 3:08 AM
Subject: RE: Non-linear degradation in bulk loads?

"Two solutions: 1) sort the rows to be inserted on the key 'email' before inserting. 2) Or: http://dev.mysql.com/doc/mysql/en/innodb-tuning.html If you have UNIQUE constraints on secondary keys, starting from MySQL 3.23.52 and 4.0.3, you can speed up table imports by temporarily turning off the uniqueness checks during the import session: SET UNIQUE_CHECKS=0; For big tables, this saves a lot of disk I/O because InnoDB can use its insert buffer to write secondary index records in a batch. But make sure you do not have any duplicates in the rows!"

After sending my mail, I discovered SET UNIQUE_CHECKS=0, and subsequent to that it also occurred to me to try putting the data in in sorted order. Unfortunately, UNIQUE_CHECKS=0 did not work, and even the combination of both did not work. First chunk (3.4m rows) was ~1.5 minutes, second was ~5 minutes...

At this point I'm inclined to believe that there is something very wrong with the disk subsystem because of this and other problems (doing a large cp from the data pool filesystem to another filesystem brought the database to a near-halt, among other things).
As a stop-gap solution, I created the table with no indexes and loaded all the data (loaded in linear time), and plan on doing a CREATE UNIQUE INDEX on the table. Will this happen in linear time, or near-linear time?

*sigh*

-JF

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
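The sorted-insert plus UNIQUE_CHECKS approach discussed above can be sketched as follows (the table name and data file are hypothetical, not Jon's actual schema):

```sql
-- Bulk-load an InnoDB table that carries a UNIQUE secondary key on email.
-- Requires MySQL 3.23.52 / 4.0.3 or later, and data known to be duplicate-free.
SET UNIQUE_CHECKS=0;

-- Feed rows pre-sorted on the PRIMARY KEY so the clustered index grows
-- sequentially; the insert buffer batches the secondary-index writes.
LOAD DATA INFILE '/tmp/subscribers.sorted.csv' INTO TABLE subscribers;

SET UNIQUE_CHECKS=1;
```

Remember to turn the checks back on afterwards; with them off, InnoDB will silently accept duplicate keys.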
RE: Non-linear degradation in bulk loads?
Actually, I believe we're running 32-bit, with bigmem... Does similar behavior occur in such a scenario?

-JF
Re: Non-linear degradation in bulk loads?
Jon, I do not know. Why not install a 64-bit Linux on your computer?

Regards, Heikki
Oracle/Innobase

----- Original message -----
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Monday, October 17, 2005 8:46 PM
Subject: RE: Non-linear degradation in bulk loads?

Actually, I believe we're running 32-bit, with bigmem... Does similar behavior occur in such a scenario?

-JF
RE: Non-linear degradation in bulk loads?
Sorry to spam the group, but I just noticed that I asserted we were on a 2.4 kernel -- we're on 2.6.9-11 as part of CentOS 4.1...

-JF
Re: Non-linear degradation in bulk loads?
Jon, I am not 100% sure that the problem we saw was on a 64-bit Linux. It might have been 32-bit. Anyway, since CentOS is a clone of RHEL, this might be the same file cache phenomenon. I do not know if one can force the file cache to stay smaller than 4 GB. You can try running some dummy programs that occupy a few GB of memory.

Regards, Heikki
Oracle/Innobase

----- Original message -----
From: Jon Frisby [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Monday, October 17, 2005 8:49 PM
Subject: RE: Non-linear degradation in bulk loads?

Sorry to spam the group, but I just noticed that I asserted we were on a 2.4 kernel -- we're on 2.6.9-11 as part of CentOS 4.1...

-JF
RE: Non-linear degradation in bulk loads?
We only upgraded to CentOS 4.1 due to an emergency data center migration... We weren't prepared to undergo the risk of a 64-bit upgrade at the same time. I believe we're experimenting with a 64-bit kernel now as part of our efforts to diagnose and resolve the I/O issue.

-JF

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
Sent: Monday, October 17, 2005 10:52 AM
To: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?

Jon, I do not know. Why not install a 64-bit Linux on your computer?

Regards, Heikki
Oracle/Innobase
Re: renaming the database
Shawn wrote:

I have a 2.1GB database with all InnoDB tables in it that I wanted to change the name of. It took me longer to write my RENAME TABLE script than it did to actually move the data.

It's easier and less error-prone to do it in an sproc which prepares and executes the rename cmds. The only hitch is that MySQL doesn't yet let you prepare a DROP DATABASE cmd.

SET GLOBAL log_bin_trust_routine_creators=TRUE;
DROP PROCEDURE IF EXISTS RenameDatabase;
DELIMITER |
CREATE PROCEDURE RenameDatabase( IN oldname CHAR(64), IN newname CHAR(64) )
BEGIN
  DECLARE sname CHAR(64) DEFAULT NULL;
  DECLARE rows INT DEFAULT 1;
  CREATE DATABASE IF NOT EXISTS newname;
  REPEAT
    SELECT table_name INTO sname
    FROM information_schema.tables AS t
    WHERE t.table_schema = oldname
    LIMIT 1;
    SET rows = FOUND_ROWS();
    IF rows = 1 THEN
      SET @scmd = CONCAT( 'RENAME TABLE ', oldname, '.', sname, ' TO ', newname, '.', sname );
      PREPARE cmd FROM @scmd;
      EXECUTE cmd;
      DEALLOCATE PREPARE cmd;
    ELSE
      SET @scmd = CONCAT( "UPDATE mysql.db SET Db = '", newname, "' WHERE Db = '", oldname, "'" );
      PREPARE cmd FROM @scmd;
      EXECUTE cmd;
      SET @scmd = CONCAT( "UPDATE mysql.columns_priv SET Db = '", newname, "' WHERE Db = '", oldname, "'" );
      PREPARE cmd FROM @scmd;
      EXECUTE cmd;
      DEALLOCATE PREPARE cmd;
      FLUSH PRIVILEGES;
      -- MySQL does not yet support PREPARE DROP DATABASE:
      -- SET @scmd = CONCAT( 'DROP DATABASE ', oldname );
      -- PREPARE cmd FROM @scmd;
      -- EXECUTE cmd;
      -- DEALLOCATE PREPARE cmd;
    END IF;
  UNTIL rows = 0 END REPEAT;
END;
|
DELIMITER ;

PB

- [EMAIL PROTECTED] wrote:

Robert L Cochran [EMAIL PROTECTED] wrote on 10/17/2005 07:37:26 AM:

I think you can just rename the directory that the database lives in. If you read the documentation for CREATE DATABASE on dev.mysql.com, you'll see they discuss renaming the directory (although it does not directly say this can be done to rename the database, it comes really close to that). Based on the documentation, the database name is simply a directory name, no more and no less.
Renaming tables has its own command syntax; you can look it up.

Bob Cochran

Octavian Rasnita wrote:

Hi, Is there a command for renaming a MySQL database? Thank you. Teddy

The only way I have done it has been to create an empty database with the name I want, then use RENAME TABLE to "move" all of the tables into the new database. Sure, it takes a lot of RENAME TABLE statements, but it works. If these are InnoDB tables, all I am doing is moving metadata, and that is FAST. For MyISAM or other file-based storage engines, it copies files from one folder to another. For some file systems, that is also just a metadata shift and will still be FAST. Others will require a physical move of the data from one location to another (good thing those filesystems are becoming quite rare these days).

I have a 2.1GB database with all InnoDB tables in it that I wanted to change the name of. It took me longer to write my RENAME TABLE script than it did to actually move the data.

http://dev.mysql.com/doc/refman/4.1/en/rename-table.html

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
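The table-by-table approach Shawn describes looks like this in plain SQL (database and table names are placeholders):

```sql
-- Move every table from olddb to newdb; for InnoDB this only rewrites
-- metadata, so it is fast regardless of table size.
CREATE DATABASE IF NOT EXISTS newdb;
RENAME TABLE olddb.t1 TO newdb.t1,
             olddb.t2 TO newdb.t2;
-- ...one entry per table, then, once olddb is empty:
DROP DATABASE olddb;
```

Note that privileges granted on the old database name are not carried over; they live in the mysql.db and mysql.columns_priv tables, as the stored procedure above shows.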
Re: move log files to a different disk.
Ananda Kumar [EMAIL PROTECTED] wrote on 10/17/2005 09:16:25 AM:

Hi Friends, Any help on this would be of great help. regards anandkl

On 10/17/05, Ananda Kumar [EMAIL PROTECTED] wrote:

Hi All, I am using version 5 of MySQL on Linux. I have placed my log files on the same disk as the datafiles; can you please let me know how I can move the log files to a different disk. regards anandkl

Did you RTFM?

http://dev.mysql.com/doc/refman/5.0/en/server-options.html
http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html
http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
http://dev.mysql.com/doc/refman/5.0/en/innodb-start.html

These all have options to control where different log files wind up. You didn't say which log files you wanted to move, so I had to give you all of them...

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
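For reference, the relevant my.cnf settings look roughly like this (paths are hypothetical; for the InnoDB redo logs, shut the server down cleanly, move the existing ib_logfile* files to the new directory, then restart):

```
[mysqld]
# error log and binary log on a second disk
log-error                 = /disk2/mysql/log/mysqld.err
log-bin                   = /disk2/mysql/binlog/mysql-bin
# InnoDB redo log directory
innodb_log_group_home_dir = /disk2/mysql/iblogs
```

Putting the redo and binary logs on a different spindle than the datafiles lets the mostly-sequential log writes proceed without seeking against data I/O.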
Re: renaming the database
Errm, needed to take the grants-table updates out of the repeat loop:

DROP PROCEDURE IF EXISTS RenameDatabase;
DELIMITER |
CREATE PROCEDURE RenameDatabase( IN oldname CHAR(64), IN newname CHAR(64) )
BEGIN
  DECLARE sname CHAR(64) DEFAULT NULL;
  DECLARE rows INT DEFAULT 1;
  DECLARE total INT DEFAULT 0;
  CREATE DATABASE IF NOT EXISTS newname;
  REPEAT
    SELECT table_name INTO sname
    FROM information_schema.tables AS t
    WHERE t.table_schema = oldname
    LIMIT 1;
    SET rows = FOUND_ROWS();
    IF rows = 1 THEN
      SET @scmd = CONCAT( 'RENAME TABLE ', oldname, '.', sname, ' TO ', newname, '.', sname );
      PREPARE cmd FROM @scmd;
      EXECUTE cmd;
      DEALLOCATE PREPARE cmd;
      SET total = total + 1;
    END IF;
  UNTIL rows = 0 END REPEAT;
  IF total > 0 THEN
    SET @scmd = CONCAT( "UPDATE mysql.db SET Db = '", newname, "' WHERE Db = '", oldname, "'" );
    PREPARE cmd FROM @scmd;
    EXECUTE cmd;
    SET @scmd = CONCAT( "UPDATE mysql.columns_priv SET Db = '", newname, "' WHERE Db = '", oldname, "'" );
    PREPARE cmd FROM @scmd;
    EXECUTE cmd;
    DEALLOCATE PREPARE cmd;
    FLUSH PRIVILEGES;
    -- MySQL does not yet support PREPARE DROP DATABASE:
    -- SET @scmd = CONCAT( 'DROP DATABASE ', oldname );
    -- PREPARE cmd FROM @scmd;
    -- EXECUTE cmd;
    -- DEALLOCATE PREPARE cmd;
  END IF;
END;
|
DELIMITER ;

PB
Re: SQL statement help
C.F. Scheidecker Antunes [EMAIL PROTECTED] wrote on 10/16/2005 01:58:56 PM:

I actually forgot something: I need the two statements combined into one. The reason is that table_two has multiple values, so I need the minimal ID from table 2. Thanks. Maybe I am working too much here and too tired to see the right statement.

J.R. Bullington wrote:

I am not the smartest on the list, but I think this would do it for you:

UPDATE table_one t1, table_two t2 SET t1.ID = t2.ID WHERE t2.num = t1.num AND t1.ID = 0

J.R.

-----Original Message-----
From: C.F. Scheidecker Antunes [mailto:[EMAIL PROTECTED]
Sent: Sunday, October 16, 2005 1:45 PM
To: mysql@lists.mysql.com
Subject: SQL statement help
Importance: High

Hello All, I have two MySQL 4.1 tables. The first table has an ID field which is set to zero. The second one has ID fields and filenames. I need a single SQL statement to set the ID from table 1 to the ID from table 2 ONLY IF the ID on table one is zero. That is, if the ID on table one is not set (still equal to zero), grab the ID from table2 where the num of table2 is equal to that of table1.

table one: Title, num, ID (set to zero initially)
table two: num, ID, filename

SET table1.ID = someID if table1.ID is EQUAL to zero. Set the ID on table1 from table 2 where the num on table 2 equals the num on table1, only if the ID on table 1 is zero.

Thank you in advance,
C.F.

Once you identify that you want one row chosen by a min or max value from a set of matching rows, you can use any of the patterns for group-wise maximum. Start here: http://dev.mysql.com/doc/refman/4.1/en/examples.html and look especially here: http://dev.mysql.com/doc/refman/4.1/en/example-maximum-column-group-row.html and here: http://dev.mysql.com/doc/refman/4.1/en/example-maximum-row.html for a pattern you can use.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
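One way to write the group-wise minimum as a single multi-table UPDATE, following the patterns Shawn links to (an untested sketch using the column names given in the thread):

```sql
-- For each num, pick the smallest ID in table_two, then copy it into
-- table_one rows whose ID is still unset (zero).
UPDATE table_one t1
  JOIN ( SELECT num, MIN(ID) AS min_id
         FROM table_two
         GROUP BY num ) t2 ON t2.num = t1.num
SET t1.ID = t2.min_id
WHERE t1.ID = 0;
```

The derived table collapses table_two to one row per num first, which is what makes J.R.'s original join safe when table_two has multiple rows for the same num.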
SSL connection error
Hi all,

I'm back with a new subject; maybe the last one was not attractive :) I'm using mysql 4.1.11-2 on Fedora Core 4. I need to set up mysql connections over SSL. I followed the mysql manual instructions: created certificates and keys for the CA, the client and the server, and added the ssl-ca, ssl-cert and ssl-key options to the client and mysqld sections of /etc/my.cnf. My problem is that mysqld logs an error saying it has no permission to read the certificate file; I've been stuck on this for more than 3 days. I'm attaching the ls output, the my.cnf file, the mysqld.log file and a fragment of the mysqld strace output with the open syscall returning an error.

Thanks in advance for your time and interest. Best regards,

--
Israel Fdez. Cabrera
[EMAIL PROTECTED]

# ls -l / | grep etc
drwxr-xr-x 83 root root  12288 Oct 15 16:50 etc
# ls -l /etc | grep pki
drwxr-xr-x  7 root root   4096 Oct 14 17:51 pki
# ls -l /etc/pki
total 104
drwxr-xr-x 3 root root   4096 Oct 14 21:46 CA
drwxr-xr-x 3 root root   4096 Oct  8 16:54 dovecot
-rwxr-xr-x 1 root root   1088 Oct  8 16:54 gencert.sh
-rwxr-xr-x 1 root root   1056 Oct  8 16:54 gencert.sh~
-rw-r--r-- 1 root root    236 Oct  8 16:54 index.txt
-rw-r--r-- 1 root root     21 Oct  8 16:54 index.txt.attr
-rw-r--r-- 1 root root     21 Oct  8 16:54 index.txt.attr.old
-rw-r--r-- 1 root root    118 Oct  8 16:54 index.txt.old
drwxr-xr-x 2 root root   4096 Oct  8 16:54 newcerts
drwxr-xr-x 2 root root   4096 Oct  8 16:54 rpm-gpg
-rw-r--r-- 1 root root      3 Oct  8 16:54 serial
-rw-r--r-- 1 root root      3 Oct  8 16:54 serial.old
drwxr-xr-x 5 root root   4096 Oct 14 17:51 tls
# ls -l /etc/pki/tls
total 40
lrwxrwxrwx 1 root root     19 Oct  8 16:54 cert.pem -> certs/ca-bundle.crt
drwxr-xr-x 2 root root   4096 Oct 15 14:18 certs
drwxr-xr-x 2 root root   4096 Oct  8 16:54 misc
-r--r--r-- 1 root root   7998 Oct 14 17:59 openssl.cnf
drwxr-xr-x 2 root root   4096 Oct  8 16:54 private
# ls -l /etc/pki/tls/certs
total 492
-rw-r--r-- 1 root root  427833 Oct  8 16:54 ca-bundle.crt
-rw-r--r-- 1 root root    3617 Oct 14 21:46 client-cert.pem
-rw-r--r-- 1 root mysql    887 Oct  8 16:54 client-key.pem
-rw-r--r-- 1 root mysql    769 Oct  8 16:54 client-req.pem
-rw-r--r-- 1 root root     610 Oct  8 16:54 make-dummy-cert
-rw-r--r-- 1 root root    2240 Oct  8 16:54 Makefile
-rw-r--r-- 1 root root    3617 Oct 14 21:46 server-cert.pem
-rw-r--r-- 1 root root     887 Oct 14 21:46 server-key.pem
-rw-r--r-- 1 root mysql    769 Oct  8 16:54 server-req.pem

strace fragment:

open("/etc/pki/tls/certs/server-cert.pem", O_RDONLY) = -1 EACCES (Permission denied)
write(2, "Error when connection to server ..."..., 42) = 42
write(2, "1872:error:0200100D:system libra..."..., 122) = 122
write(2, "1872:error:20074002:BIO routines..."..., 70) = 70
write(2, "1872:error:140AD002:SSL routines..."..., 88) = 88
write(2, "Unable to get certificate from '..."..., 68) = 68
open("/etc/pki/CA/cacert.pem", O_RDONLY) = -1 EACCES (Permission denied)
open("/etc/pki/tls/cert.pem", O_RDONLY) = -1 EACCES (Permission denied)
time([1129246383]) = 1129246383
open("/dev/urandom", O_RDONLY|O_NONBLOCK|O_NOCTTY) = -1 EACCES (Permission denied)
open("/dev/random", O_RDONLY|O_NONBLOCK|O_NOCTTY) = -1 EACCES (Permission denied)
open("/dev/srandom", O_RDONLY|O_NONBLOCK|O_NOCTTY) = -1 ENOENT (No such file or directory)
socket(PF_FILE, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/egd-pool"}, 19) = -1 ENOENT (No such file or directory)
close(3) = 0
socket(PF_FILE, SOCK_STREAM, 0) = 3
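One detail worth noting from the traces above: server-cert.pem is world-readable (0644, root:root), and even /dev/urandom returns EACCES, which classic Unix file permissions cannot explain. On Fedora Core 4 that pattern is consistent with an SELinux denial rather than a permissions problem. A diagnostic sketch (hedged: commands assume stock FC4 with SELinux enforcing, and must be run as root):

```shell
# If world-readable files and /dev/urandom both return EACCES, suspect
# SELinux. As a quick test, switch to permissive mode and retry:
setenforce 0
service mysqld restart

# If mysqld can now read the certificates, the denials were SELinux.
# Rather than leaving enforcement off, either relabel the certificate
# files into a context mysqld may read, or move them under /var/lib/mysql
# (mysqld's own context) and point ssl-ca/ssl-cert/ssl-key there; then:
setenforce 1
```

Check /var/log/audit/audit.log (or /var/log/messages) for "avc: denied" entries naming mysqld to confirm before changing anything.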
Strange error in query with joins and subquery
Just recently (possibly since upgrading to MySQL 5.0.13 RC), I've been getting the following error with queries like the one below:

Unknown column 'photos.id' in 'on clause' (1054)

SELECT photos.*
FROM photos
LEFT JOIN tags_photos ON tags_photos.photo = photos.id
WHERE 1
  AND tags_photos.tag IN (6)
  AND NOT EXISTS (
        SELECT 1
        FROM tags AS t
        LEFT OUTER JOIN tags_photos AS t2
          ON ( t.id = t2.tag AND t2.photo = photos.id )
        WHERE t.id IN (6)
          AND t2.tag IS NULL )
GROUP BY photos.id

This was working previously, and there is definitely an `id` column in the `photos` table. The table definitions follow:

CREATE TABLE `photos` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `filename` varchar(100) NOT NULL,
  `caption` varchar(200) NOT NULL,
  `description` text NOT NULL,
  `user` int(10) unsigned NOT NULL default '0',
  `exif` longtext NOT NULL,
  `uploaded` int(10) unsigned NOT NULL default '0',
  `modified` int(10) unsigned NOT NULL,
  `privacy` enum('public','friends-family','friends','family','private') NOT NULL default 'public',
  `rights` enum('copyright','creative-commons','public-domain') NOT NULL default 'copyright',
  `notes` text NOT NULL,
  `offensive` tinyint(1) unsigned NOT NULL,
  `resizeMode` varchar(5) NOT NULL,
  `allowOriginal` tinyint(1) NOT NULL default '1',
  `licenseURL` varchar(200) NOT NULL,
  `licenseName` varchar(50) NOT NULL,
  `licenseButton` varchar(250) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `user` (`user`),
  KEY `uploaded` (`uploaded`),
  KEY `privacy` (`privacy`),
  KEY `modified` (`modified`),
  CONSTRAINT `photos_ibfk_1` FOREIGN KEY (`user`) REFERENCES `users` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

CREATE TABLE `tags_photos` (
  `tag` int(10) unsigned NOT NULL default '0',
  `photo` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY (`tag`,`photo`),
  KEY `photo` (`photo`),
  CONSTRAINT `tags_photos_ibfk_1` FOREIGN KEY (`tag`) REFERENCES `tags` (`id`),
  CONSTRAINT `tags_photos_ibfk_2` FOREIGN KEY (`photo`) REFERENCES `photos` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

CREATE TABLE `tags` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `tagName` varchar(50) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
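If what 5.0.13 now rejects is the outer reference photos.id inside the subquery's ON clause, one possible workaround (untested; intended to be equivalent, reading as "no tag in the set lacks a link to this photo") is to push the correlation down into a nested NOT EXISTS so that no ON clause refers to the outer table:

```sql
SELECT photos.*
FROM photos
LEFT JOIN tags_photos ON tags_photos.photo = photos.id
WHERE tags_photos.tag IN (6)
  AND NOT EXISTS (
        SELECT 1
        FROM tags AS t
        WHERE t.id IN (6)
          AND NOT EXISTS (
                -- correlation lives in a plain WHERE, not an ON clause
                SELECT 1
                FROM tags_photos AS t2
                WHERE t2.tag = t.id
                  AND t2.photo = photos.id ) )
GROUP BY photos.id;
```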
Innodb buffer pool allocation on Windows 2000
I have a Windows 2000 host with 3.5 GB of RAM. I have configured MySQL 4.1.14 with innodb_buffer_pool_size = 1024M; when I start mysqld I get the following error:

051013 16:46:00  InnoDB: Error: cannot allocate 1073758208 bytes of
InnoDB: memory with malloc! Total allocated memory
InnoDB: by InnoDB 16975556 bytes. Operating system errno: 8
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
InnoDB: We keep retrying the allocation for 60 seconds...
InnoDB: Fatal error: cannot allocate the memory for the buffer pool
051013 16:47:00 [ERROR] Can't init databases
051013 16:47:00 [ERROR] Aborting
051013 16:47:01 [Note] C:\Program Files\MySQL\bin\mysqld: Shutdown complete

If I lower the value to 512M it comes up OK. Not being an in-depth Windows admin, what do I look for in the registry or elsewhere to allow MySQL to have a 1024M innodb_buffer_pool_size?

Thanks,
Jo
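For what it's worth, this looks like an address-space limit rather than a registry setting: a 32-bit mysqld on Windows 2000 gets only 2 GB of user-mode address space, and once DLLs and other allocations fragment it, a single contiguous 1 GB malloc for the buffer pool can fail even with 3.5 GB of physical RAM. One often-cited option (hedged: verify for your edition; Windows 2000 honors this switch only on Advanced Server/Datacenter, and the executable must be large-address-aware) is the /3GB flag on the boot entry in boot.ini:

```ini
[operating systems]
; Illustrative boot.ini entry; the ARC path must match your system.
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /fastdetect /3GB
```

Failing that, the practical answer is usually to keep the buffer pool comfortably under 1 GB on 32-bit Windows, or move to a 64-bit platform.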
RE: Non-linear degradation in bulk loads?
Side question: If I use a 64-bit MySQL build on a 64-bit kernel, is it safe and sane to allocate, say, 6GB to the InnoDB buffer pool? On an 8GB box with a 64-bit software stack, what is the optimum memory allocation for a pure-InnoDB (* - MyISAM used only for grant tables) mysql server running as the sole application on the machine?

-JF

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 17, 2005 10:55 AM
To: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?

Jon,

I am not 100% sure that the problem we saw was in a 64-bit Linux. It might have been 32-bit. Anyway, since CentOS is a clone of RHEL, this might be the same file cache phenomenon. I do not know if one can force the file cache to stay smaller than 4 GB. You can try running some dummy programs that occupy a few GB of memory.

Regards,
Heikki
Oracle/Innobase

- Alkuperäinen viesti -
Lähettäjä: Jon Frisby [EMAIL PROTECTED]
Vastaanottaja: Heikki Tuuri [EMAIL PROTECTED]; mysql@lists.mysql.com
Lähetetty: Monday, October 17, 2005 8:49 PM
Aihe: RE: Non-linear degradation in bulk loads?

Sorry to spam the group, but I just noticed that I asserted we were on a 2.4 kernel -- we're on 2.6.9-11 as part of CentOS 4.1...

-JF

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 16, 2005 2:23 AM
To: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?

Jon,

your hardware/OS combination is quite new and unusual. It might indeed be an OS problem. We observed on a 64-bit RHEL 4 that when the file cache of the OS grew bigger than 4 GB, the file I/O performance dropped to one tenth of normal. You would not expect that kind of behavior from a 64-bit OS. When you see the slowdown, what does 'top' say about the OS file cache size?

The database is our main site's database, but we've dramatically reduced the load on that machine over the past couple of months through careful optimization of our code.
The box is a dual, dual-core Opteron with 8GB of RAM, running a 32-bit Linux 2.4 kernel and MySQL 4.0.20 (32-bit, of course). We have 1GB allocated to the buffer pool, and our usual 1GB * 3 log files. 8 I/O threads. Load on the box sits at around 6-7, with a large (50%) amount of time spent in wait state, but actual disk throughput to our software RAID array (no longer on a SAN...) is quite low -- 6-9k blocks/s out, 1-6k blocks/s in.

Regards,
Heikki

- Original Message -
From: Jon Frisby [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Friday, October 14, 2005 10:39 PM
Subject: RE: Non-linear degradation in bulk loads?

I've tried tweaking the structure of the schema to have, for example, a PRIMARY KEY index on email, no other indexes, and then insert in sorted order -- made no improvement whatsoever. Another clue that leads me to believe that this may be an OS issue: starting a large cp on the same box (from a local filesystem other than the one the InnoDB data pool was on, to NFS) caused MySQL to become COMPLETELY backlogged (we went from ~15-20 connections at any given instant to 750, our max_connections setting).

-JF

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 12, 2005 8:15 AM
To: mysql@lists.mysql.com
Subject: Re: Non-linear degradation in bulk loads?

Jon,

hmm... maybe one of the indexes inevitably is in a random order.

Please post a typical

SHOW INNODB STATUS\G

when the inserts happen slowly.

What is your my.cnf like?

Regards,
Heikki
Innobase/Oracle

- Alkuperäinen viesti -
Lähettäjä: Jon Frisby [EMAIL PROTECTED]
Vastaanottaja: Heikki Tuuri [EMAIL PROTECTED]; mysql@lists.mysql.com
Lähetetty: Wednesday, October 12, 2005 3:08 AM
Aihe: RE: Non-linear degradation in bulk loads?

Two solutions:
1) Sort the rows to be inserted on the key 'email' before inserting.
2) Or: http://dev.mysql.com/doc/mysql/en/innodb-tuning.html

If you have UNIQUE constraints on secondary keys, starting from MySQL 3.23.52 and 4.0.3 you can speed up table imports by temporarily turning off the uniqueness checks during the import session:

SET UNIQUE_CHECKS=0;

For big tables, this saves a lot of disk I/O because InnoDB can use its insert buffer to write secondary index records in a batch. But make sure you do not have any duplicates in the rows!

After sending my mail, I discovered SET UNIQUE_CHECKS=0, and subsequent to that it also occurred to me to try putting the data in, in sorted order.
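Putting the two suggestions together, a bulk-load session might look like this (sketch only; the table name and input file are hypothetical, and this is safe only if the input is known to be duplicate-free):

```sql
-- Skip uniqueness checks so InnoDB can batch secondary-index writes
-- through its insert buffer.
SET UNIQUE_CHECKS=0;

-- Input pre-sorted on the PRIMARY KEY (email), so rows append to the
-- clustered index sequentially instead of in random order.
LOAD DATA INFILE '/tmp/emails.sorted.csv' INTO TABLE emails;

SET UNIQUE_CHECKS=1;
```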