Re: null data in table question
Hi,

Don't forget to apply the corresponding changes to your queries. With NULLs allowed you can write, for example:

select ... from ... where middleInitial is NULL;

Also be aware of how NULLs are indexed in some storage engines: in Oracle, for example, such values are not indexed. I'm not sure about InnoDB, but it is probably similar. Note too that unique indexes treat NULLs specially: most engines allow several rows with NULL in a uniquely indexed column.

Mathias

Selon Martijn Tonies [EMAIL PROTECTED]:

Hi Scott,

I have created a simple web-based application and used MySQL for data storage. All has worked well, but I do have a simple question I would like to ask the group. I have some web-based forms that match the table structure. Most of the important fields have validation, and I ensure good data goes into the table. But I have a few columns in the table, such as middleInitial, where I do not validate the data, and the database shows a NULL when I do a select * from it. Is a NULL acceptable in the database, or is there something I should do on columns where the user may not enter data?

Given that NULL means "unknown", and there is no separate value/state for "not applicable", NULLs don't belong in places where you actually want to store nothing or empty. An empty string is an empty string; why not insert that instead?

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
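To illustrate the two conventions discussed above, here is a sketch; the table name and sample values are invented for the example, and only the middleInitial column comes from the original question:

```sql
-- NULL convention: rows where the middle initial was never entered.
-- Note that "= NULL" never matches anything; IS NULL is required.
SELECT * FROM people WHERE middleInitial IS NULL;

-- Empty-string convention, as Martijn suggests:
INSERT INTO people (firstName, middleInitial, lastName)
VALUES ('Scott', '', 'Smith');
SELECT * FROM people WHERE middleInitial = '';
```

Whichever convention you choose, use it consistently, since a column that mixes NULLs and empty strings needs both predicates in every query.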
mysql 4.0 to 4.1 migration and charset problems
Hi,

I have a MySQL 4.0 db with this configuration:

set-variable = default-character-set=latin2
set-variable = character-set=latin2

Now I'm trying to migrate to MySQL 4.1.12. My current config is:

character-set-server=latin2
collation-server=latin2_general_ci

The problem is the connection/results/client character set. MySQL 4.1 uses 'latin1' by default, so in all queries (against the previously latin2 MySQL 4.0 database) I don't get latin2 characters, just '?' characters. That's obviously due to the latin1 default in 4.1.

SET NAMES 'latin2' from the client side of course cures the whole problem, but I can't fix every possible application that my users have installed (and with thousands of users this is simply impossible). What's the proper way of dealing with this problem?

I've tried some nasty hacks like init-connect=SET NAMES latin2, and this half-works: I see latin2 characters in server responses, but it breaks phpMyAdmin, for example (characters are broken in results; when I drop the init-connect hack, phpMyAdmin works nicely). So init-connect isn't usable.

When connecting with the mysql command-line client I get latin1 by default, but I can change easily to latin2 using

[mysql]
default-character-set=latin2

in a .my.cnf file. That's great, but it works for the mysql command-line client _only_, while my primary concern is PHP.

Now for the funny part. There is no .my.cnf file, and I connect with the mysql command line:

mysql> \s
--------------
mysql  Ver 14.7 Distrib 4.1.12, for pld-linux-gnu (powerpc) using readline 4.3

Connection id:          2
Current database:
Current user:           [EMAIL PROTECTED]
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         4.1.12-log
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin2
Db     characterset:    latin2
Client characterset:    latin1
Conn.  characterset:    latin1
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 34 min 35 sec

Threads: 1  Questions: 12  Slow queries: 0  Opens: 11  Flush tables: 1  Open tables: 0  Queries per second avg: 0.006
--------------

mysql> show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin2                     |
| character_set_results    | latin1                     |
| character_set_server     | latin2                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
7 rows in set (0.00 sec)

As you can see, client/connection/results are latin1, BUT:

mysql> select @@global.character_set_client, @@global.character_set_connection, @@global.character_set_database, @@global.character_set_results, @@global.character_set_results, @@global.character_set_server;
+-------------------------------+-----------------------------------+---------------------------------+--------------------------------+--------------------------------+-------------------------------+
| @@global.character_set_client | @@global.character_set_connection | @@global.character_set_database | @@global.character_set_results | @@global.character_set_results | @@global.character_set_server |
+-------------------------------+-----------------------------------+---------------------------------+--------------------------------+--------------------------------+-------------------------------+
| latin2                        | latin2                            | latin2                          | latin2                         | latin2                         | latin2                        |
+-------------------------------+-----------------------------------+---------------------------------+--------------------------------+--------------------------------+-------------------------------+
1 row in set (0.00 sec)

Now it tells me that these are latin2!?

Anyway, the primary question is how to deal with character sets in a MySQL 4.0 to 4.1 conversion (how to get latin2 as the default in client connections)? I've talked with a few people on #mysql on freenode but we haven't found any solution.

--
Arkadiusz Miśkiewicz   PLD/Linux Team
http://www.t17.ds.pwr.wroc.pl/~misiek/
http://ftp.pld-linux.org/
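For reference, a sketch of the settings discussed in this thread, compiled in one place. Whether a given PHP application picks up option files depends on how it initializes libmysqlclient, so treat this as a starting point rather than a guarantee:

```ini
[mysqld]
character-set-server = latin2
collation-server     = latin2_general_ci
# Forces latin2 on every new connection (not applied to SUPER users),
# but as noted above it confuses clients such as phpMyAdmin that
# negotiate their own character set:
# init-connect = SET NAMES latin2

[client]
# Honoured by the mysql command-line client and other tools that read
# option files; libmysql-based applications (PHP among them) generally
# only read option files if they explicitly ask the library to.
default-character-set = latin2
```

The underlying issue is that in 4.1 the client, connection, and results character sets are negotiated per connection, so a server-side default alone cannot cover clients that announce latin1 themselves.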
Re: mysql repl
Hello.

Could you examine the slave relay log and the master binary log with the mysqlbinlog utility to find which statement causes the duplicate entry? Which versions of MySQL do you use on your master and slave?

See:
http://dev.mysql.com/doc/mysql/en/mysqlbinlog.html
http://dev.mysql.com/doc/mysql/en/slave-logs.html
http://dev.mysql.com/doc/mysql/en/show-slave-status.html

MaFai [EMAIL PROTECTED] wrote:

Dear mysql@lists.mysql.com:

Our db uses the InnoDB engine. We have no hot backup tool to dump the data. If we lock the tables with the following command before we dump the data:

FLUSH TABLES WITH READ LOCK;

it seems OK; it can lock InnoDB too. But replication then breaks, even though it starts up smoothly, due to a duplicate entry. Has anyone tried locking InnoDB tables to make a snapshot? Shutting down the master db would have too great an effect, so we hope to avoid that.

Best regards.

MaFai
[EMAIL PROTECTED]
2005-07-06

Gleb Paharenko
[EMAIL PROTECTED]
MySQL AB / Ensita.NET
www.mysql.com
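A sketch of the inspection Gleb suggests. The log file names, paths, and table name below are examples only; take the real names and positions from SHOW SLAVE STATUS and SHOW MASTER STATUS:

```sh
# On the slave: dump the relay log and look at the statement around the
# position where the SQL thread stopped.
mysqlbinlog /var/lib/mysql/slave-relay-bin.001 > relay.sql

# On the master: dump the binary log covering the same period.
mysqlbinlog /var/lib/mysql/mysql-bin.001 > master.sql

# Search both dumps for inserts into the table named in the
# "Duplicate entry" error ('your_table' is a placeholder):
grep -n "INSERT INTO your_table" relay.sql master.sql
```

If the same insert appears twice in the relay log but once in the master binlog, the problem is on the slave side (for example, a snapshot taken at a binlog position earlier than the data it contains); if it appears once in each, the row likely already existed in the snapshot.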
Misconfigured master - server id was not set
Hi,

I have a slave/master setup that is special in two ways:

1) The slave connects through an stunnel
2) The slave replicates only one db

As soon as I start the slave process, the server connects but fails with the following log messages:

Jul  6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Slave SQL thread initialized, starting replication in log 'mysql-bin.001' at position 227973, relay log './ijssel1-relay-bin.001' position: 4
Jul  6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Slave I/O thread: connected to master '[EMAIL PROTECTED]:3307', replication started in log 'mysql-bin.001' at position 227973
Jul  6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Error reading packet from server: Misconfigured master - server id was not set (server_errno=1236)
Jul  6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Got fatal error 1236: 'Misconfigured master - server id was not set' from master when reading data from binary log
Jul  6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Slave I/O thread exiting, read up to log 'mysql-bin.001', position 227973
Jul  6 14:58:18 ijssel1 mysqld[11755]: 050706 14:58:18 Error reading relay log event: slave SQL thread was killed

On the master side I see the stunnel connecting. To rule out corrupted binary logs, I did a RESET MASTER on the master. I verified with SHOW VARIABLES and SHOW BINLOG EVENTS that the master has server-id 1, the slave 2, and that server-id 1 is correctly used in the binlogs.

Any ideas?

Jan.
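For reference, a sketch of the minimal settings involved (the database name is invented). Error 1236 is produced by a master whose running server-id is unset, so with a tunnel in the path it is worth verifying that the mysqld the stunnel actually forwards to is the one whose my.cnf was edited, and not, say, a second instance or a loop back to the slave itself:

```ini
# master my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf
[mysqld]
server-id       = 2
replicate-do-db = mydb    # placeholder database name
```

server-id takes effect only at startup, so after any change the relevant mysqld must be restarted; on each host, SELECT @@server_id over the same tunnel the slave uses confirms which instance is really answering.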
Re: Network drive
I don't think I've understood your question. I guess that in the case of a network failure you would get the same behaviour as a power shutdown. So, about the networked drives: anyone?

--- Martijn Tonies [EMAIL PROTECTED] wrote:

Hi Ruben,

I would like to ask a short, quick and simple question. Is it possible to have the following line:

innodb_data_home_dir=X:/data/

in a my.ini config file? I'm using Windows XP and MySQL 4.1.12; X: is a network drive mapped to a Linux folder using Samba, and all the permissions are set and tested. I have seen this posted many times but without many replies. I want to use a folder on a mapped drive to save my InnoDB data. Is this possible?

I don't know the answer to this particular question for MySQL, but I do have one question of my own: if the database engine doesn't have control over the files and/or drive, what should it do in the case of a network failure? Let alone the latency of a networked file...

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com

Rúben Carvalho
Mac OS X Table Corruption
I am running into repeatable table corruption with MySQL 4.x on Mac OS X 10.x. I previously had a MySQL 3.x install on a 10.2 client and never had any major problems at all. I upgraded to MySQL 4.x and have subsequently installed MySQL 4.x (from the supplied pkgs) on my 10.2 client, a 10.3 Server and a 10.4 Server (Tiger) install. The typical application here is standalone servers running MySQL, Apache and PHP 4/5 serving a handful of small websites.

Things run along fine until, without warning, my PHP/MySQL queries fail, returning no data when there should be some. As a temporary fix I created a cron job to run:

/usr/local/mysql/bin/myisamchk -ov

on all my tables every 4 hours. This fixes the data issues, but the problem is I am losing data:

- recovering (with keycache) MyISAM-table '/var/mysql/data/xxx.MYI'
Data records: 17
Data records: 14

Now, this was last night at midnight. I recall manually adding the 3 records that evening, and the previous cron job had seen only 14 records. It seems that the fix is flushing my new data down the toilet.

The only common thread I can see is that I use MacSQL (an old version, 2.6.3) for running test queries etc. rather than the CLI. Can a MySQL client cause this corruption? All my systems have different OSes and versions of MySQL and PHP. They all have UPSes. The old 10.2 system (soon to be retired) has only a single drive. The new 10.4 system has mirrored drives in a RAID set-up. The 10.3 install (recently wiped) had the same set-up. I think I can rule out a hardware issue and an OS/MySQL issue. It even shows up on tables I never interact with directly, i.e. my Movable Type db for my blogs: I create an entry and the next day it has dropped out of the database.
Here is the full myisamchk output on the subject table (after adding my data back in):

Checking MyISAM file: /var/mysql/data/.MYI
Data records: 18   Deleted blocks: 2
- check file-size
myisamchk: error: Size of datafile is: 876  Should be: 1160
- check record delete-chain
myisamchk: error: record delete-link-chain corrupted
- check key delete-chain
- check index reference
- check data record references index: 1
myisamchk: error: Found key at page 1024 that points to record outside datafile
- check record links
myisamchk: error: Record-count is not ok; is 15  Should be: 18
myisamchk: warning: Found 0 deleted space.  Should be 80
myisamchk: warning: Found 0 deleted blocks  Should be: 2
myisamchk: warning: Found 15 parts  Should be: 20 parts
MyISAM-table '/var/mysql/data/oilfielddepot_r6/subnavbar.MYI' is corrupted
Fix it using switch -r or -o

so I run:

myisamchk -vr /var/mysql/data/xx.MYI
- recovering (with sort) MyISAM-table '/var/mysql/data/xx.MYI'
Data records: 18
- Fixing index 1
- Searching for keys, allocating buffer for 45 keys
- Dumping 15 keys
Data records: 15

Poof!... gone again. I then re-inserted my data again, ran a 'REPAIR TABLE ...' directly from the client, and the data seems to stick. I jump through these hoops each time, and the problem seems to go away; then out of the blue it comes back. It's driving me crazy.

Any ideas at all out there? Flush tables? A different repair system?

Dan T
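One detail in the procedure above is worth flagging (an observation about the tooling, not a confirmed diagnosis of this case): myisamchk operates on the .MYI/.MYD files directly, bypassing the server, so if mysqld still has the tables open with unflushed key blocks when the cron job runs, the external repair can itself discard rows that exist only in the server's buffers. A safer sketch of the same cron job, with example paths and a placeholder database name:

```sh
# Make the server close and flush all tables before any external tool
# touches the files (or stop mysqld entirely for the duration):
mysqladmin -u root -p flush-tables

# Only then check/repair from outside the server:
myisamchk -r /var/mysql/data/mydb/*.MYI

# Alternatively, let the server do the work and avoid the race:
#   mysql> CHECK TABLE subnavbar;
#   mysql> REPAIR TABLE subnavbar;
```

The fact that REPAIR TABLE issued from a client "seems to stick" while the cron-driven myisamchk loses rows is at least consistent with this explanation.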
Re: Network drive
Ruben Carvalho [EMAIL PROTECTED] wrote on 07/06/2005 11:06:10 AM:

I guess that in the case of a network failure you would get the same behaviour as a power shutdown. So, about the networked drives: anyone?

RDBMS over a network: NOT recommended. Not only can you not enforce OS-level locking on your files (maybe you can; I guess it may depend on your device and interface protocols), but the MOST COMMON bottleneck to database performance is disk I/O. If you go with networked storage, you not only suffer through disk lag (seek time + rotational positioning before the operation can start) but you also incur network overhead on top of it, IN BOTH DIRECTIONS. Unless your network device is flash-only (all memory, no disks), you just cut your throughput by at least 75%.
And even if your device is flash-only, you will reduce your data throughput by 25-50% (all performance numbers are rough estimates pulled out of my a**, but based on the number of extra network hops needed to get at and read your files). I don't care how fast your network is: networked storage can't be as fast as local disks. Again, I highly discourage using networked storage for anything but the most trivial database uses (small file sizes, low traffic, etc.). For any application that requires even modest performance, spend your money on a fast RAID configuration. You will be much happier in the long run.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Re: Mac OS X Table Corruption
I had someone on the mac-osx server admin list point this out to me:

http://docs.info.apple.com/article.html?artnum=107852#sql

which concerns a byte-ordering issue resolved in the upgrade from OS X 10.3.2 to 10.3.3. Now, in my case I could have upgraded without performing this procedure, and I then moved these dbs to 10.4. The thing is that I never use the default Apple MySQL install; I always use the MySQL-supplied pkg installers. Also, I have seen the issue on 10.2. Perhaps it is a MySQL issue, and as I upgraded the MySQL versions (as did Apple) I messed things up along the way.

Could the dump and restore perhaps fix this? Can I still do it without downgrading?

Dan T

On Jul 6, 2005, at 9:26 AM, Dan Tappin wrote:

I am running into repeatable table corruption with MySQL 4.x on Mac OS X 10.x. [...]
Re: Mac OS X Table Corruption
Dan Tappin [EMAIL PROTECTED] wrote on 07/06/2005 11:26:13 AM:

I am running into repeatable table corruption with MySQL 4.x on Mac OS X 10.x. [...] Any ideas at all out there? Flush tables? A different repair system?

I'm not a Mac guru, but what you describe sounds like file system corruption. Can you do a diagnostic scan of your hard drives looking for bad sectors? You can move your data to another portion of the disk if you run an ALTER TABLE to make some trivial change: ALTER TABLE creates a new copy of the table somewhere else on the disk (applying your change) and drops the old table when it's through. And no, database clients *usually* never cause table corruption (there may be the rare occasion...).
The maintenance of the table structures is the responsibility of the MySQL server, not the client, so I would discourage looking in that direction for long. The way you describe your system, it almost sounds as though you have physically shared the same files between all of your servers. Doing that could cause the corruption you describe; each server requires its OWN set of files. Are there ANY other processes that may be attempting to write directly to your database files? Your data files should be protected and isolated from user-type files and from direct contact by any process other than your MySQL server. Another option may be to rebuild your databases by first dumping your data, removing and reinstalling your servers, then reloading your data (one at a time, of course). I would save that as an option of last resort.

HTH...

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
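A sketch of the two in-server options mentioned above; the table name is taken from the myisamchk output earlier in the thread, so adjust for the actual schema:

```sql
-- A "null" ALTER rebuilds the table: MySQL writes fresh data/index
-- files elsewhere on disk and drops the old ones when done.
ALTER TABLE subnavbar ENGINE = MyISAM;

-- Or check and repair from within the server, so no external tool
-- touches the files while mysqld has them open:
CHECK TABLE subnavbar;
REPAIR TABLE subnavbar;
```

Both statements take a write lock on the table for their duration, so on a busy table they are best run in a quiet window.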
Re: Network drive
Well, thank you very much for your explanation. My problem is that I would like to have the data files saved on a machine behind a proxy, but the server running on a machine outside the proxy (the clients don't have access to the machine behind the proxy). Any ideas?

Thank you

--- [EMAIL PROTECTED] wrote:

RDBMS over a network: NOT recommended. [...] For any application that requires even modest performance, spend your money on a fast RAID configuration. You will be much happier in the long run.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Rúben Carvalho
Re: innodb crashes during heavy usage with exceeded memory error
Thanks a lot for your quick reply. Since we are not using MyISAM tables (except for the system tables), I deallocated memory from MyISAM and allocated it to InnoDB. When I allocated close to 1.7G to the InnoDB buffer pool, mysql crashed more often, so I decreased innodb_buffer_pool_size to 1G. Which memory parameter do you suggest increasing, and how do I alter the InnoDB tablespace autoextend size?

This is the error that causes mysql to crash:

050704 18:19:21  InnoDB: Warning: could not allocate 100892621 + 100 bytes to retrieve
InnoDB: a big column. Table name `sessions/horde_sessionhandler`
050704 18:37:16  InnoDB: ERROR: the age of the last checkpoint is 483189148,
InnoDB: which exceeds the log group capacity 483180135.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
050704 18:38:23  InnoDB: Fatal error: cannot allocate 100892688 bytes of
InnoDB: memory with malloc! Total allocated memory

The table it's complaining about is a session table which stores web session information. The data in that table is basically throw-away data and gets deleted when the user logs out. The session data column in that table is defined as a LONGBLOB. The average row length is not that big, but once in a while the application inserts a row for a user with session data that can exceed 200MB. Since we have lots of concurrent users, if the application inserts a few rows with 200MB of data simultaneously, InnoDB runs out of memory and crashes.

Even though we have 8G of memory, I'm not able to start mysql if I allocate more than 2G to innodb_buffer_pool_size. Do you know how I can allocate more than 2G of memory to InnoDB on a 32-bit machine? That might solve our problem.

Thanks for your suggestions.

Kasthuri

On Jul 5, 2005, at 2:36 PM, Mir Islam wrote:

You definitely should increase the memory sizes in your my.cnf file. The settings that you have are for a very small setup.
You also need to allocate more space for the InnoDB tablespace autoextend increment. So instead of 10M, try something like 50M and see which setting works better. The reason is that under heavy load, if InnoDB has to constantly extend the tablespace, it will not have the opportunity to do anything else; extending by 100M at a time reduces the number of times it has to grow the tablespace. Are you doing any deletes/updates at the same time? You said 50M inserts to a table: is that figure per day? How large (in bytes) is an average row? Lastly, look at the my.cnf for a large setup that ships with MySQL. I think it is called my-large.cnf; that will give you some help on settings.

On 7/5/05, Kasthuri Ilankamban [EMAIL PROTECTED] wrote:

Hi,

We are running mysql version 4.1.7 with InnoDB on an i686 running the 2.4.26 Linux kernel with 8G of memory. Mysql crashes consistently during heavy usage with a fatal InnoDB error. We are running a high-volume front-end application which often inserts 50M of data into a row in an InnoDB table. I don't know whether these inserts are causing the memory overflow. Anyway, I have included our my.cnf file and the error logs from the last crash below. Any help would be greatly appreciated. Thanks in advance.
Kasthuri

--- /etc/my.cnf ---

key_buffer = 8M
max_allowed_packet = 128M
read_buffer_size = 512K
sort_buffer_size = 512K
myisam_sort_buffer_size = 5M
thread_cache = 1024
table_cache = 1024
query_cache_size = 32M
max_connections = 1000
wait_timeout = 300

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /var/mysql/data
innodb_data_file_path = ibdata1:10M;ibdata2:1G;ibdata3:1G:autoextend
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1024M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DSYNC
innodb_lock_wait_timeout = 10

--- MySQL error log ---

050704 18:19:21  InnoDB: Error: cannot allocate 101892621 bytes of memory for
InnoDB: a BLOB with malloc! Total allocated memory
InnoDB: by InnoDB 1989384815 bytes. Operating system errno: 12
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
050704 18:19:21  InnoDB: Warning: could not allocate 100892621 + 100 bytes to retrieve
InnoDB: a big column. Table name `sessions/horde_sessionhandler`
050704 18:37:16  InnoDB: ERROR: the age of the last checkpoint is 483189148,
InnoDB: which exceeds the log group capacity 483180135.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
050704 18:38:23  InnoDB: Fatal error: cannot allocate 100892688 bytes of
InnoDB: memory
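Following the rule of thumb in the error message itself ("combined size of log files at least 10 times bigger than the largest such row"), a ~200MB session row would call for roughly 2G of combined redo log capacity, against the 2 x 256M configured above. A sketch, with values chosen only to illustrate the arithmetic:

```ini
# 4 files x 512M = 2G combined capacity, i.e. >= 10 x a ~200M BLOB row.
innodb_log_files_in_group = 4
innodb_log_file_size      = 512M
```

Note that in MySQL 4.x, changing innodb_log_file_size requires a clean shutdown and removal of the old ib_logfile* files before restarting; check the manual's procedure before trying this on a production server. It also addresses only the checkpoint-age error, not the malloc failures from retrieving the BLOB itself.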
Re: Network drive
Ruben Carvalho [EMAIL PROTECTED] wrote on 07/06/2005 12:54:24 PM: Well, thank you very much for your explanation. My problem is I would like to have the data files being saved in a machine behind a proxy but the server running in a machine outside the proxy (the clients don't have access to the machine behind the proxy). Any ideas? Thank you --- [EMAIL PROTECTED] wrote: Ruben Carvalho [EMAIL PROTECTED] wrote on 07/06/2005 11:06:10 AM: I think I haven't understood your question. I guess that in case of a network failure you can have the same behavior as a power shutdown. About the networked drives? Anyone? --- Martijn Tonies [EMAIL PROTECTED] wrote: Hi Ruben, I would like to ask a short, quick and simple question. Is it possible to have the following line: innodb_data_home_dir=X:/data/ in a my.ini config file? I'm using Windows XP, mysql 4.1.12, X: is a mapped network drive to a Linux folder using samba, all the permissions are set and tested. I have seen this posted many times but without many replies. I want to use a folder in a mapped drive to save my InnoDB data. Is this possible? I don't know this particular answer for MySQL, but I do have 1 question: If the database engine doesn't have control over the files and/or drive, what should it do in the case of a network failure? Let alone the latency of a networked file... With regards, Martijn Tonies Database Workbench - tool for InterBase, Firebird, MySQL, Oracle MS SQL Server Upscene Productions http://www.upscene.com Database development questions? Check the forum! http://www.databasedevelopmentforum.com Rúben Carvalho RDBMS over a network: NOT recommended. Not only can you not enforce OS-level locking on your files (maybe you can, I guess it may depend on your device and interface protocols) but the MOST COMMON bottleneck to database performance is disk I/O. 
If you went with networked storage, you are not only going to suffer through disk lag (seek time + rotational positioning before the operation can start) but you are incurring network overhead on top of it IN BOTH DIRECTIONS. Unless your network device is flash-only (all memory, no disks), you just cut your throughput by at least 75%. And even if your device is flash-only you will reduce your data throughput by 25-50% (all performance numbers are rough estimates pulled out of my a** but based on the number of extra network hops necessary to get at and read your files). I don't care how fast your network is, networked storage can't be as fast as local disks. Again, it is highly discouraged to use networked storage for anything but the most trivial database uses (small file sizes, low traffic, etc). For any application that requires even modest performance, spend your money on a fast RAID configuration. You will be much happier in the long run. Shawn Green Database Administrator Unimin Corporation - Spruce Pine Rúben Carvalho There are different kinds of secure setups. As a first idea, your database server and your web server DO NOT need to be on the same machine. There are MANY ways to setup a secure web system. How many of each type of networking component are at your disposal (proxy servers, firewalls, web servers, network interface cards, routers, etc.)? Different types of security are available with different hardware/software configurations. Basically, it all boils down to keeping the users only where you want the users to be (outside of your network) and allowing only certain servers (or even just particular processes on those servers) to access your internal resources. Shawn Green Database Administrator Unimin Corporation - Spruce Pine
Re: Network drive
Ruben Carvalho wrote: Well, thank you very much for your explanation. My problem is I would like to have the data files being saved in a machine behind a proxy but the server running in a machine outside the proxy (the clients don't have access to the machine behind the proxy). Any ideas? Thank you

snip

By proxy do you mean firewall? If so, open up/forward the MySQL port. Much simpler than trying to get file sharing working through the proxy. I have clients that connect to the MySQL server through a firewall and there are no issues. Just open up the correct ports and you should be set. It's much more difficult (IMHO) to open up ports for file sharing. Also, if it is a true proxy, this means you will incur even more lag due to the proxy with file sharing. It may be better to keep the files on the machine that is accessible by the clients, then use whatever kind of firewall software/hardware is necessary to keep the computer more secure (if that's the aim of the proxy). If it's necessary to keep the files on the computer behind the proxy for backup purposes, then ftp|sftp etc. through the proxy|firewall with a hot backup|db dump may be the best option. Again, a quick breakdown of what you are trying to do may lead to better suggestions from the list. 
-- Thanks, James -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Network drive
I don't have a web server; I mean, I have a standalone Java application running on my clients, and the application calls the database. Is there any way of having something listening for my application's calls on my open machine (outside the proxy), with this something then calling the database running behind the proxy?

snip

___ Yahoo! Messenger - NEW crystal clear PC to PC calling worldwide with voicemail http://uk.messenger.yahoo.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Network drive
something = ODBC is the first thing that comes to mind. You can set specific permissions on the ODBC connection, and you only have to open up one port (3306 or whatever you choose) in your proxy/firewall. Whenever your app calls the ODBC connection, the connection is made and everything is happy. Just my $.02 J.R. -Original Message- From: Ruben Carvalho [mailto:[EMAIL PROTECTED] Sent: Wednesday, July 06, 2005 1:20 PM To: [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Subject: Re: Network drive I don't have a web server, I mean, I have a standalone java application running in my clients and the application calls the database. Is there any way of having something listening to my application calls in my open machine (outside the proxy) and this something would then call the database running behind the proxy?

snip
Re: innodb crashes during heavy usage with exceeded memory error
Kasthuri Ilankamban wrote: [snip] The table that it's complaining about is a session table which stores web session information. Basically the data in that table is throw-away data and gets deleted when the user logs out. The session data column in that table is defined as a long blob. The average length of the row is not that big, but once in a while the application inserts a row for a user with session data that can exceed 200MB. Since we have lots of concurrent users, if the application inserts a few rows with 200MB of data simultaneously, InnoDB runs out of memory and crashes. Even though we have 8 gig of memory, I'm not able to start mysql if I allocate more than 2 gig to innodb_buffer_pool_size. Do you know how I can allocate more than 2 gig of memory to InnoDB on a 32-bit machine? That might solve our problem. [snip] Kasthuri, Maybe it's time to re-think your application architecture? A 200-meg BLOB is quite large for a highly-concurrent system, considering that MySQL will have to read/save it in its entirety _and_ allocate network buffers for it, so essentially you're allocating _400_ megs or so _per_ client. (Not to mention that many of your web sessions are sending 200 megs of data around your network between your appserver(s) and your database, which is a performance issue as well.) -Mark - -- Mark Matthews MySQL AB, Software Development Manager - Connectivity www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Network drive
He doesn't need ODBC to make a connection, his Java app is doing that already... I agree with J.R., you should move your database server behind your firewall and just open the one port (3306). If you would prefer, you can designate your database server to use a different port (42000 for example) so that anyone that hits your site with a port scanner won't automatically detect that you are running a MySQL server. Just adjust your Java clients and firewall to use the same port number. This is not a great security suggestion as I personally do not believe in security through obscurity but the combination of using an unreserved port and opening only that port through your firewall at least creates two obstacles to anyone wanting to do damage. For a more secure client-server config, enable SSL and require it on all connections. If you add up the effects of SSL + strange port + firewall you can see how each layer contributes to the overall security of your application. Using SSL, even your logins are encrypted. Without it, anyone sniffing your network traffic can see both the queries and the responses. Nothing that goes over the internet can be perfectly secure but there are levels of security that are both reasonable and practical. How much security you need depends entirely on how sensitive your data is. Shawn Green Database Administrator Unimin Corporation - Spruce Pine J.R. Bullington [EMAIL PROTECTED] wrote on 07/06/2005 01:27:39 PM: something = ODBC is the first thing that comes to mind. You can set specific permissions on the ODBC and you don't have to open up but 1 port (3306 or whatever you choose) in your proxy / firewall. Whenever your app calls the ODBC, the connection is made and everything is happy. Just my $.02 J.R. 
-Original Message- From: Ruben Carvalho [mailto:[EMAIL PROTECTED] Sent: Wednesday, July 06, 2005 1:20 PM To: [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Subject: Re: Network drive

snip
Finding all Childless Parent Records v. 3.32
Hi all, I have a table: id, name, parent_id I need to find all records that have no children. I know how to do it using a sub select, but I need to do this in version MySQL 3.32 and I am not sure how. Thanks, Charles -- RightCode, Inc. 900 Briggs Road #130 Mount Laurel, NJ 08054 P: 856.608.7908 F: 856.439.0154 E: [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Data type DOUBLE and DATETIME
I want to store a date of this format mmddhhmmss.ss Should I use DOUBLE as the data type? Yes I do need all the .sss's. Or should I use DATETIME and then have another column to store the fraction of seconds? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Mac OS X Table Corruption
On Jul 6, 2005, at 10:26 AM, [EMAIL PROTECTED] wrote: I'm not a Mac guru, but what you describe sounds like file system corruption. Can you do a diagnostic scan of your hard drives looking for bad sectors? You can move your data to another portion of the disk if you run an ALTER TABLE to make some trivial change. ALTER TABLE will create a new copy of the table somewhere else on the disk (applying your change) and drop the old table when it's through. I've tried both of these. The way you describe your system, it almost sounds as though you have physically shared the same files between all of your servers. Doing that could cause the corruption you describe. Each server requires its OWN set of files. Are there ANY other processes that may be attempting to write directly to your database files? Your data files should be protected and isolated from other user-type files and from direct contact by any process other than your MySQL server. Nope... they each have their own files. Nothing else is touching them. Another option may be to rebuild your databases by first dumping your data, removing and reinstalling your servers, then reloading your data (one at a time, of course). I would save that as an option of last resort. I think that's where I am heading. Dan T -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Finding all Childless Parent Records v. 3.32
Charles Kline [EMAIL PROTECTED] wrote on 07/06/2005 02:25:12 PM: Hi all, I have a table: id, name, parent_id I need to find all records that have no children. I know how to do it using a sub select, but I need to do this in version MySQL 3.32 and I am not sure how. Thanks, Charles -- RightCode, Inc. 900 Briggs Road #130 Mount Laurel, NJ 08054 P: 856.608.7908 F: 856.439.0154 E: [EMAIL PROTECTED] Use a temp table CREATE TEMPORARY TABLE tmpParents SELECT DISTINCT parent_id FROM tablename; SELECT t.id, t.name FROM tablename t LEFT JOIN tmpParents tp ON tp.parent_id = t.id WHERE tp.parent_id is null; DROP TEMPORARY TABLE tmpParents; Shawn Green Database Administrator Unimin Corporation - Spruce Pine
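For completeness, the same result is possible without the temp table, as a single self LEFT JOIN (an anti-join). A small sketch, run against SQLite purely to demonstrate the shape of the query (the table and rows are made up); the SQL itself uses nothing newer than LEFT JOIN, so it should suit old MySQL versions that lack subqueries:

```python
import sqlite3

# Sketch: find childless rows with a self LEFT JOIN anti-join.
# A parent row p is childless when no row c has c.parent_id = p.id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, parent_id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "root", None), (2, "child", 1), (3, "leaf", 2)])
rows = conn.execute(
    "SELECT p.id, p.name FROM t p "
    "LEFT JOIN t c ON c.parent_id = p.id "
    "WHERE c.id IS NULL ORDER BY p.id").fetchall()
print(rows)  # only the rows that nothing points at as a parent
```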
Re: innodb crashes during heavy usage with exceeded memory error
A big part of your problem seems to be that 32-bit OSes cannot allocate more than 2G of memory to a single process, and with each client connection needing so much buffer space, you quickly exceed that limit. Short of moving to a 64-bit OS, the only solution I have come across that allows MySQL to utilize more than 2G of total memory is to run multiple concurrent mysqld processes on the same machine. This can be done by configuring them to replicate from each other (A->B and B->A), and then writing your application so it connects to either A or B (or C, D, etc.). It can also be done with MySQL Cluster (this was talked about in the documentation and email list extensively some months ago when I was testing the cluster), but that may be beyond what you are trying to accomplish. Of course, either of these solutions may put too much stress on another part of your system (disk, cpu); it all depends on what you are doing. As for the InnoDB tablespace extension, this is taken from the manual (section 15.8): InnoDB increases the size of that file automatically in 8MB increments when it runs out of space. Starting with MySQL 4.0.24 and 4.1.5, the increment size can be configured with the option innodb_autoextend_increment, in megabytes. The default value is 8. Regards, Devananda Kasthuri Ilankamban wrote: Thanks a lot for your quick reply. Since we are not using MyISAM tables (except for the system tables), I deallocated memory from MyISAM and allocated it to InnoDB. When I allocated close to 1.7G to the InnoDB buffer pool, mysql used to crash more often, so I decreased innodb_buffer_pool_size to 1G. What memory parameter do you suggest I increase, and how do you alter the InnoDB tablespace extension? This is the error that causes mysql to crash: 050704 18:19:21 InnoDB: Warning: could not allocate 100892621 + 100 bytes to retrieve InnoDB: a big column. Table name `sessions/horde_sessionhandler` 050704 18:37:16 InnoDB: ERROR: the age of the last checkpoint is 483189148, InnoDB: which exceeds the log group capacity 483180135. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 050704 18:38:23 InnoDB: Fatal error: cannot allocate 100892688 bytes of InnoDB: memory with malloc! Total allocated memory The table that it's complaining about is a session table which stores web session information. Basically the data in that table is throw-away data and gets deleted when the user logs out. The session data column in that table is defined as a long blob. The average length of the row is not that big, but once in a while the application inserts a row for a user with session data that can exceed 200MB. Since we have lots of concurrent users, if the application inserts a few rows with 200MB of data simultaneously, InnoDB runs out of memory and crashes. Even though we have 8 gig of memory, I'm not able to start mysql if I allocate more than 2 gig to innodb_buffer_pool_size. Do you know how I can allocate more than 2 gig of memory to InnoDB on a 32-bit machine? That might solve our problem. Thanks for your suggestions. Kasthuri -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
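Putting the manual quote into my.cnf terms, a minimal sketch of the autoextend setting (values are illustrative, not a tuned recommendation for this workload):

```ini
# Sketch: keep the existing tablespace layout but grow it in larger steps
# so extension happens less often under load.
# innodb_autoextend_increment is available from MySQL 4.0.24 / 4.1.5.
[mysqld]
innodb_data_home_dir = /var/mysql/data
innodb_data_file_path = ibdata1:10M;ibdata2:1G;ibdata3:1G:autoextend
innodb_autoextend_increment = 100   # extend by 100MB instead of the default 8
```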
Re: MAX on UNSIGNED INT Column
Jacob S. Barrett wrote: I have a column of type UNSIGNED INT which holds a 32bit counter. When the value of the field exceeds 2147483647 (signed max) the value of MAX on the column returns a negative number. Possibly this bug, fixed in 4.1.12? http://bugs.mysql.com/bug.php?id=9298 -- Keith Ivey [EMAIL PROTECTED] Smokefree DC http://www.smokefreedc.org Washington, DC -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
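For anyone curious what that bug looks like mechanically, it is the classic unsigned-to-signed reinterpretation. A small Python sketch (the counter value is made up):

```python
import struct

# Sketch: an unsigned 32-bit counter above 2147483647 shows up negative
# once the same bit pattern is read back as a signed 32-bit value,
# which is the symptom described for MAX() above.
unsigned_value = 3_000_000_000                       # > 2**31 - 1
as_signed = struct.unpack("<i", struct.pack("<I", unsigned_value))[0]
print(unsigned_value, "->", as_signed)
```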
Re: Data type DOUBLE and DATETIME
Haisam K. Ido [EMAIL PROTECTED] wrote on 07/06/2005 02:35:54 PM: I want to store a date of this format mmddhhmmss.ss Should I use DOUBLE as the data type? Yes I do need all the .sss's. Or should I use DATETIME and then have another column to store the fraction of seconds? For me, how this data should be stored would depend on what you will need it for later. My first idea is to convert the mmddhhmmss portion to an integer using UNIX_TIMESTAMP() and add that integer value to your .ss information to create and store the value as a DOUBLE. That way you can do direct subtraction to get the difference between any two times in seconds or fractions of a second (as everything will be in the same unit). If you leave it as a packed number, you will get some very odd results if you try to do direct math on the value. Look at the mathematical representation of the difference between two time values just one second apart: 2005-07-06 12:01:59 packs as 20050706120159 2005-07-06 12:02:00 packs as 20050706120200 If we subtract those two values we get 41, not the 1 we might want. However, if we first convert them through UNIX_TIMESTAMP(), everything works as expected: UNIX_TIMESTAMP('2005-07-06 12:01:59') = 1120665719 UNIX_TIMESTAMP('2005-07-06 12:02:00') = 1120665720 The difference between those two values is only 1 (second). This, IMHO, makes it a much more convenient way to do time math. Will this work for what you need? Shawn Green Database Administrator Unimin Corporation - Spruce Pine
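The same arithmetic can be checked outside MySQL; a small Python sketch using the example timestamps above:

```python
from datetime import datetime

# Sketch illustrating the point above: subtracting packed YYYYMMDDhhmmss
# numbers gives nonsense, while real time values subtract cleanly.
t1, t2 = 20050706120159, 20050706120200
print(t2 - t1)  # 41, not the 1 second actually elapsed

d1 = datetime(2005, 7, 6, 12, 1, 59)
d2 = datetime(2005, 7, 6, 12, 2, 0)
print((d2 - d1).total_seconds())  # 1.0
```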
Re: MAX on UNSIGNED INT Column
On Wednesday 06 July 2005 11:42 am, Keith Ivey [EMAIL PROTECTED] wrote: Jacob S. Barrett wrote: I have a column of type UNSIGNED INT which holds a 32bit counter. When the value of the field exceeds 2147483647 (signed max) the value of MAX on the column returns a negative number. Possibly this bug, fixed in 4.1.12? http://bugs.mysql.com/bug.php?id=9298 Bingo. Sorry I didn't see that one when searching. I am on 4.1.10 so it looks like I will be upgrading the box this weekend. Thanks, Jake -- Jacob S. Barrett Chief Technology Officer PogoZone LLC email: [EMAIL PROTECTED] web: www.pogozone.com voice: 360-676-8772 fax: 360-733-3941 address: 114 W. Magnolia Street Suite 417 Bellingham, Washington 98225 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: innodb crashes during heavy usage with exceeded memory error
Matthew has already responded. But I will also add, you must rethink how your application is coded. I can not possibly imagine a situation where a user session needs 200meg insert. No matter what database you use you will have a terrible time trying to scale this application. If you give more details about your application and what is being stored as session data perhaps we can help. snip -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Help - need to quickly optimize a record count!
I am cross-posting this to the PHP and the MySQL lists because I'm not sure in which technology my solution will lie. I have a pretty busy PHP/MySQL site that executes the following query a lot:

select count(*) as `count` from terms;

My MySQL account was disabled by my ISP because this query was causing resource issues, so I need to find a different way to know the record count in that table. A few records are added once every 5 minutes by a cron job. The record count is constant the rest of the time. No records are ever deleted.

Is it possible to create some kind of server-side variable, in which the cron job could store the record count, which would be accessible to all scripts and would stay the same until it gets reset? Or is there a less intense MySQL query I should be using instead? Thanks in advance for any suggestions.
UDF failure
Hello, I'm trying to build a UDF for MySQL 4.1.12. I'm running on Linux (Fedora Core 1). My UDF seemed to blow up the server, so I went to the source and tried to build udf_example.cc. Using the precompiled binaries for Linux and the source distribution for the same version (4.1.12), I was unable to build udf_example.cc as a shared object (I followed the instructions in the source -- they didn't work, so I used what seemed to make the most sense for building my object).

When building the shared object the compiler spits out a warning from asm/atomic.h (#warning Using kernel header in userland program. BAD!). I'm using gcc version 3.3.2 20031022 (Red Hat Linux 3.3.2-1) on a Fedora Core 1 machine. I built the shared object with:

gcc -I/usr/include/mysql -shared -o udf_example.so udf_example.cc

When attempting to load a new function with the mysql CLI client, the client complains that the server has gone away and reconnects with a connection id of 1. Nothing is listed in mysql.func either. There needs to be better documentation of the process for building UDFs. The code I need to run can be run nicely by MySQL, but I can't declare a stored function as documented.
The stored function is fairly simple:

DELIMITER //
CREATE FUNCTION distance(z1 VARCHAR(10), z2 VARCHAR(10)) RETURNS REAL
BEGIN
  DECLARE x REAL;
  DECLARE y REAL;
  DECLARE miles REAL;
  SET x = 69.1*(lat - lat2);
  SET y = 69.1*(lon1 - lon2) * COS(lon1*57.3);
  SET miles = SQRT(x*x+y*y);
  RETURN miles;
END //

The mysql client spits back:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(z1 VARCHAR(10), z2 VARCHAR(10)) RETURNS REAL

Using the example given in the documentation I get:

mysql CREATE FUNCTION hello (s CHAR(20)) RETURNS CHAR(50) - RETURN CONCAT('Hello, ',s,'!');
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(s CHAR(20)) RETURNS CHAR(50) RETURN CONCAT('Hello, ',s,'!')' at line 1

DELIMITER ; //

Can someone elucidate the mystery of either stored functions or UDFs? Thanks, in advance, Nic -- Nic Stevens - [EMAIL PROTECTED]
Re: Help - need to quickly optimize a record count!
Brian Dunning [EMAIL PROTECTED] wrote on 07/06/2005 04:43:11 PM: I am cross-posting this to the PHP and the MySQL lists because I'm not sure in which technology my solution will lie. I have a pretty busy PHP/MySQL site that executes the following query a lot: select count(*) as `count` from terms; My MySQL account was disabled by my ISP because this query was causing resource issues, so I need to find a different way to know the record count in that table. A few records are added once every 5 minutes by a cron job. The record count is constant the rest of the time. No records are ever deleted. Is it possible to create some kind of server-side variable, in which the cron job could store the record count, which would be accessible to all scripts, and would stay the same until it gets reset? Or is there a less-intense MySQL query I should be using instead? Thanks in advance for any suggestions. You could create a new table that you populate once with SELECT COUNT(*) then update that table every time your CRON job runs. That way you don't have to keep performing the COUNT() query when you could look up the value from a table. Shawn Green Database Administrator Unimin Corporation - Spruce Pine
Re: Help - need to quickly optimize a record count!
Or even make a .txt file with the cron job and just include that txt file in your PHP: <? include_once('figures.txt'); ?> Then, in the background, with Perl (or PHP) and cron, you will update that .txt file :-) So it will be 1 query per 15 minutes :-) Peter

[EMAIL PROTECTED] wrote: Brian Dunning [EMAIL PROTECTED] wrote on 07/06/2005 04:43:11 PM: I am cross-posting this to the PHP and the MySQL lists because I'm not sure in which technology my solution will lie. I have a pretty busy PHP/MySQL site that executes the following query a lot: select count(*) as `count` from terms; My MySQL account was disabled by my ISP because this query was causing resource issues, so I need to find a different way to know the record count in that table. A few records are added once every 5 minutes by a cron job. The record count is constant the rest of the time. No records are ever deleted. Is it possible to create some kind of server-side variable, in which the cron job could store the record count, which would be accessible to all scripts, and would stay the same until it gets reset? Or is there a less-intense MySQL query I should be using instead? Thanks in advance for any suggestions. You could create a new table that you populate once with SELECT COUNT(*) then update that table every time your CRON job runs. That way you don't have to keep performing the COUNT() query when you could look up the value from a table. Shawn Green Database Administrator Unimin Corporation - Spruce Pine

-- Best regards, Peter http://AboutSupport.com
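The same cache-file idea works in any language. A minimal Python sketch (the file location, function names, and the count value are placeholders I made up, not anything from the original site):

```python
from pathlib import Path
import tempfile

# Cache file the cron job writes and the page scripts read.
# (Placeholder path; a real site would pick a stable location.)
count_file = Path(tempfile.gettempdir()) / "terms_count.txt"

def write_count(count: int) -> None:
    # Called from the cron job right after its inserts finish.
    count_file.write_text(str(count))

def read_count() -> int:
    # Called from the page scripts; no COUNT(*) query needed.
    return int(count_file.read_text())

write_count(12345)
print(read_count())   # 12345
```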
Re: UDF failure
Hello.

> When attempting to load a new function with the mysql cli client the client complains that the server has gone away and reconnects with connection

Check the MySQL error log; the server could have died while loading the UDF. Stored procedures appeared only in version 5. This works fine for me on 5.0.7:

mysql create function hello (s char(20)) returns char(50) RETURN CONCAT('Hello, ',s,'!');//
Query OK, 0 rows affected (0.00 sec)

Nic Stevens [EMAIL PROTECTED] wrote: Hello, I'm trying to build a UDF for MySQL 4.1.12. I'm running on Linux (Fedora Core 1). My UDF seemed to blow up the server, so I went to the source and tried to build udf_example.cc. Using the precompiled binaries for Linux and the source distribution for the same version (4.1.12), I was unable to build udf_example.cc as a shared object (I followed the instructions in the source -- they didn't work, so I used what seemed to make the most sense for building my object). When building the shared object the compiler spits out a warning from asm/atomic.h (#warning Using kernel header in userland program. BAD!). I'm using gcc version 3.3.2 20031022 (Red Hat Linux 3.3.2-1) on a Fedora Core 1 machine. I built the shared object with gcc -I/usr/include/mysql -shared -o udf_example.so udf_example.cc. When attempting to load a new function with the mysql cli client, the client complains that the server has gone away and reconnects with a connection id of 1. Nothing is listed in mysql.func either. There needs to be better documentation of the process for building UDFs. The code I need to run can be run nicely by MySQL, but I can't declare a stored function as documented.
The stored function is fairly simple: DELIMITER // CREATE FUNCTION distance(z1 VARCHAR(10), z2 VARCHAR(10)) RETURNS REAL BEGIN DECLARE x REAL; DECLARE y REAL; DECLARE miles REAL; SET x = 69.1*(lat - lat2); SET y = 69.1*(lon1 - lon2) * COS(lon1*57.3); SET miles = SQRT(x*x+y*y); RETURN miles; END // The mysql client spits back: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(z1 VARCHAR(10), z2 VARCHAR(10)) RETURNS REAL Using the example given in the documentation I get: mysql CREATE FUNCTION hello (s CHAR(20)) RETURNS CHAR(50) - RETURN CONCAT('Hello, ',s,'!'); ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(s CHAR(20)) RETURNS CHAR(50) RETURN CONCAT('Hello, ',s,'!')' at line 1 DELIMITER ; // Can someone elucidate the mystery of either stored functions or UDFs?

-- Gleb Paharenko, MySQL AB / Ensita.NET, www.mysql.com (For technical support contracts, goto https://order.mysql.com/?ref=ensita This email is sponsored by Ensita.NET http://www.ensita.net/)
Re: Help - need to quickly optimize a record count!
Hello. What engine do you use? 'SELECT COUNT(*) FROM table_name' is highly optimized, but only for MyISAM tables. Have you thought about a dedicated table for such a purpose, or about adding an auto_increment column to the table?

Brian Dunning [EMAIL PROTECTED] wrote: I am cross-posting this to the PHP and the MySQL lists because I'm not sure in which technology my solution will lie. I have a pretty busy PHP/MySQL site that executes the following query a lot: select count(*) as `count` from terms; My MySQL account was disabled by my ISP because this query was causing resource issues, so I need to find a different way to know the record count in that table. A few records are added once every 5 minutes by a cron job. The record count is constant the rest of the time. No records are ever deleted. Is it possible to create some kind of server-side variable, in which the cron job could store the record count, which would be accessible to all scripts, and would stay the same until it gets reset? Or is there a less-intense MySQL query I should be using instead? Thanks in advance for any suggestions.

-- Gleb Paharenko, MySQL AB / Ensita.NET
Re: why are these two queries deadlocking?
Hello. Have you seen: http://dev.mysql.com/doc/mysql/en/innodb-locks-set.html In my opinion, the deadlock could appear in your case according to that page, because both REPLACE and INSERT can set next-key locks, and DELETE generally sets record locks on every index record that is scanned while processing the SQL query.

Brady Brown [EMAIL PROTECTED] wrote: SHOW INNODB STATUS indicates these two queries are deadlocking:

(1) REPLACE INTO TMP_pixel_xfer SELECT * FROM user_question q INNER JOIN user_session s USING(user_session_id) WHERE user_question_id BETWEEN '27853011' AND '27891923' ORDER BY s.user_id

(2) DELETE t from TMP_user_client_report t LEFT JOIN user_question u USING(user_session_id,question_id) WHERE u.user_id IS NULL

I execute 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED' before executing each respective query. Note that the two queries both join to user_question, but neither query changes any data in user_question. Only data in their respective TMP_ tables is modified. But apparently there are row locks set on user_question anyway. Why is this? And shouldn't each query be using its own fresh copy of user_question since the isolation level is set to READ COMMITTED beforehand? Is there another strategy I can use to avoid the deadlock? Any insight is greatly appreciated. One last note that may or may not be relevant: I began to experience this deadlock only after I upgraded from mysql 4.0 to 4.1. Thanks, Brady

-- Gleb Paharenko, MySQL AB / Ensita.NET
Re: Misconfigured master - server id was not set
Hello. Setting the master to 127.0.0.1 could produce a problem. MASTER_HOST and MASTER_PORT are the hostname (or IP address) of the master host and its TCP/IP port. Note that if MASTER_HOST is equal to localhost, then, as in other parts of MySQL, the port may be ignored (if Unix socket files can be used, for example). See: http://dev.mysql.com/doc/mysql/en/change-master-to.html

Jan Schneider [EMAIL PROTECTED] wrote: Hi, I have a slave-master setup that is special in two ways: 1) The slave connects through an stunnel 2) The slave replicates only one db. As soon as I start the slave process, the server connects but fails with the following log messages:

Jul 6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Slave SQL thread initialized, starting replication in log 'mysql-bin.001' at position 227973, relay log './ijssel1-relay-bin.001' position: 4
Jul 6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Slave I/O thread: connected to master '[EMAIL PROTECTED]:3307', replication started in log 'mysql-bin.001' at position 227973
Jul 6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Error reading packet from server: Misconfigured master - server id was not set (server_errno=1236)
Jul 6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Got fatal error 1236: 'Misconfigured master - server id was not set' from master when reading data from binary log
Jul 6 14:58:11 ijssel1 mysqld[11755]: 050706 14:58:11 Slave I/O thread exiting, read up to log 'mysql-bin.001', position 227973
Jul 6 14:58:18 ijssel1 mysqld[11755]: 050706 14:58:18 Error reading relay log event: slave SQL thread was killed

On the master side I see the stunnel connecting. To rule out corrupted binary logs, I did a RESET MASTER on the master. I verified with SHOW VARIABLES and SHOW BINLOG EVENTS that the master has server-id 1, the slave 2, and that server-id 1 is correctly used in the binlogs. Any ideas? Jan.
-- Gleb Paharenko, MySQL AB / Ensita.NET
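For anyone hitting the same error message: the server ids it complains about are set in each server's my.cnf. A minimal sketch (the id values and log name here are generic examples, not Jan's actual configuration):

```ini
# master's my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave's my.cnf
[mysqld]
server-id = 2
```

In Jan's case SHOW VARIABLES already confirmed that both ids were set, which is why the reply points at the 127.0.0.1/stunnel connection instead.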
noob : advice on design?
Dear Group, I have just completed the Sams MySQL in 24 Hours book and feel like I now have a reasonable understanding of creating queries, etc. However, the one area that seemed sadly lacking was database design. I recently purchased a book named Database Design for Mere Mortals, which is very slow going and is going to take me forever to get through its 550 pages. I'm not looking for a quick fix, but there has to be some middle ground. Is there anywhere I can go to get a reasonable working knowledge of database design, just so I can get started with the task I have been given, while I make my way through this book in the meantime? So, where do I go to learn about the initial design, seeing as it is so crucial?
mysql user name length
Hi all, Is there any reason why I shouldn't increase the size of the allowable user names in mysql to varchar(32) instead of the default varchar(16)? I couldn't really find much on it, but wanted to ask if anyone knows of any troubles this may cause. Thanks, Tim.
Re: mysql user name length
On Wed, Jul 06, 2005 at 03:46:02PM -0700, Tim Traver wrote: Is there any reason why I shouldn't increase the size of the allowable user names in mysql to varchar(32) instead of the default varchar(16)? Couldn't really find much on it, but wanted to ask if anyone knows of any troubles this may cause... Yes, there are a number of places within the server that only expect the username to be 16 characters, and they will almost certainly break in the face of longer usernames. Jim Winstead MySQL Inc.
Re: UDF failure
Looks like you're confusing prepared statements with functions/stored procedures. You don't compile a function, and you're using 4.1; you need MySQL 5. (Specifically 5.0.3, I think; you'd have to look that up, but you would want 5.0.7 anyway. Maybe I have that confused with triggers, can't remember.) Prepared statements are C code using the C API. You have SQL language code that looks correct, but you're implementing it incorrectly (with the wrong version). Functions are not compiled, like you are doing with gcc.

Prepared Statements: http://dev.mysql.com/doc/mysql/en/c-api-prepared-statements.html
Functions: http://dev.mysql.com/doc/mysql/en/stored-procedures.html

Hope that helps. Greg

On 7/6/05, Gleb Paharenko [EMAIL PROTECTED] wrote: Hello. When attempting to load a new function with the mysql cli client the client complains that the server has gone away and reconnects with connection Check MySQL error log, server could die while loading the UDF. Stored procedures appeared only in version 5. This works fine for me on 5.0.7. mysql create function hello (s char(20)) returns char(50) RETURN CONCAT('Hello, ',s,'!');// Query OK, 0 rows affected (0.00 sec) Nic Stevens [EMAIL PROTECTED] wrote: Hello, I'm trying to build a UDF for MySQL 4.1.12. I'm running on Linux (Fedora Core 1). My UDF seemed to blow up the server, so I went to the source and tried to build udf_example.cc. Using the precompiled binaries for Linux and the source distribution for the same version (4.1.12), I was unable to build udf_example.cc as a shared object (I followed the instructions in the source -- they didn't work, so I used what seemed to make the most sense for building my object). When building the shared object the compiler spits out a warning from asm/atomic.h (#warning Using kernel header in userland program. BAD!). I'm using gcc version 3.3.2 20031022 (Red Hat Linux 3.3.2-1) on a Fedora Core 1 machine.
I built the shared object with gcc -I/usr/include/mysql -shared -o udf_example.so udf_example.cc When attempting to load a new function with the mysql cli client the client complains that the server has gone away and reconnects with connection id of 1. Nothing is listed in mysql.funcs either. There needs to be better documentation of the process for building UDF's. The code I need to run can be run nicely by MySQL but I can't declare a stored function as documented. The stored function is fairly simple: DELIMITER // CREATE FUNCTION distance(z1 VARCHAR(10), z2 VARCHAR(10)) RETURNS REAL BEGIN DECLARE x REAL; DECLARE y REAL; DECALRE miles REAL; SET x = 69.1*(lat - lat2); SET y = 69.1*(lon1 - lon2) * COS(lon1*57.3); SET miles = SQRT(x*x+y*y); RETURN miles; END // The mysql client spits back : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(z1 VARCHAR(10), z2 VARCHAR(10)) RETURNS REAL Using the example given in the documentation I get: mysql CREATE FUNCTION hello (s CHAR(20)) RETURNS CHAR(50) - RETURN CONCAT('Hello, ',s,'!'); ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(s CHAR(20)) RETURNS CHAR(50) RETURN CONCAT('Hello, ',s,'!')' at line 1 DELIMITER ; // Can someone elucidate the mystery of either stored functions or UDF's? 
-- Gleb Paharenko, MySQL AB / Ensita.NET

-- Greg Fischer, 1st Byte Solutions, http://www.1stbyte.com
Datediff
I am using 4.0.18-standard, so I do not have DATEDIFF, but I need the ability to do the same. Does anyone know another simple trick to get the days between two dates? -- - Scott Haneda Tel: 415.898.2602 http://www.newgeo.com Novato, CA U.S.A.
Re: Datediff
Scott Haneda wrote: I am using 4.0.18-standard, so I do not have DATEDIFF, but I need the ability to do the same. Does anyone know another simple trick to get the days between two dates?

to_days(SomeDate) - to_days(SomeOtherDate) will give you the number of days between the two.

-- Daniel Kasak IT Developer NUS Consulting Group Level 5, 77 Pacific Highway North Sydney, NSW, Australia 2060 T: (+61) 2 9922-7676 / F: (+61) 2 9922-7989 email: [EMAIL PROTECTED] website: http://www.nusconsulting.com.au
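The TO_DAYS() subtraction behaves like plain date subtraction. A Python analogue of Daniel's trick (the dates here are my own example values):

```python
from datetime import date

# Equivalent of to_days('2005-07-06') - to_days('2005-01-01'):
# subtracting two dates yields the number of days between them.
d1 = date(2005, 7, 6)
d2 = date(2005, 1, 1)
print((d1 - d2).days)   # 186
```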
Re: noob : advice on design?
Fast and dirty: http://www.geekgirls.com/menu_databases.htm Do the design from scratch on the right side. Monty Harris wrote: Dear Group, I have just completed the Sams MySQL in 24 Hours book and feel like I now have a reasonable understanding of creating queries, etc. However, the one area that seemed sadly lacking was database design. I recently purchased a book named Database Design for Mere Mortals, which is very slow going and is going to take me forever to get through its 550 pages. I'm not looking for a quick fix, but there has to be some middle ground. Is there anywhere I can go to get a reasonable working knowledge of database design just so I can get started with the task I have been given? In the meantime I can make my way through this book. So, where do I go to learn about the initial design, seeing as it is so crucial? -- Respectfully, Ligaya Turmelle Life is a game so have fun
Re: mysql 4.0 to 4.1 migration and charset problems
Hello. Unfortunately, I can give explanations only for your query about selecting @@global.xxx variables. I think the server returns correct results, because you're selecting global variables, while character_set_client, character_set_connection, and character_set_results are session variables. With SET NAMES you're setting the @@character_xxx variables, which are synonyms for @@session.character_xxx.

How did you perform your upgrade? If you haven't used mysqldump you could get some problems. Make the dump, and restore it setting the correct connection variables for the mysql program. Be aware that mysqldump may put SET NAMES at the beginning of the dump file. Use set-names=latin2 for it. See: http://dev.mysql.com/doc/mysql/en/mysqldump.html

Arkadiusz Miskiewicz [EMAIL PROTECTED] wrote: Hi, I have a mysql 4.0 db with this configuration:

set-variable = default-character-set=latin2
set-variable = character-set=latin2

Now I'm trying to migrate to mysql 4.1.12. My current config is:

character-set-server=latin2
collation-server=latin2_general_ci

The problem is the connection/reply/client character set. mysql 4.1 by default uses 'latin1', so in all queries (from the previously latin2 mysql 4.0 database) I don't have latin2 characters, just '?' chars. That's obvious, because of the default latin1 in 4.1. SET NAMES 'latin2' from the client side of course cures the whole problem, but... I can't fix every possible application that my users have installed (and if there are thousands of users this is simply impossible). What's the proper way of dealing with this problem? I've tried some nasty hacks like init-connect=SET NAMES latin2 and this half-works: I see latin2 characters in server responses... but it breaks, for example, phpMyAdmin (characters are broken in results; when I drop the init-connect hack phpMyAdmin works nicely). So init-connect isn't usable.
Now when connecting with the mysql command line client I by default get latin1, but I can change easily to latin2 using

[mysql]
default-character-set=latin2

in the .my.cnf file. That's great, but this works for the mysql cmd line client _only_, while my primary concern is php. Now is the funny part: there is no .my.cnf file, I connect with the mysql cmdline.

mysql \s
--
mysql Ver 14.7 Distrib 4.1.12, for pld-linux-gnu (powerpc) using readline 4.3

Connection id:          2
Current database:
Current user:           [EMAIL PROTECTED]
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         4.1.12-log
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin2
Db characterset:        latin2
Client characterset:    latin1
Conn. characterset:     latin1
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 34 min 35 sec

Threads: 1  Questions: 12  Slow queries: 0  Opens: 11  Flush tables: 1  Open tables: 0  Queries per second avg: 0.006
--

mysql show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin2                     |
| character_set_results    | latin1                     |
| character_set_server     | latin2                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
7 rows in set (0.00 sec)

As you can see, client/connection/results are latin1, BUT:

mysql select @@global.character_set_client, @@global.character_set_connection, @@global.character_set_database, @@global.character_set_results, @@global.character_set_results, @@global.character_set_server;

| @@global.character_set_client | @@global.character_set_connection | @@global.character_set_database | @@global.character_set_results | @@global.character_set_results | @@global.character_set_server |
| latin2 | latin2 | latin2