RE: 4.1.12 Crashing on Mandrake 10.1
I'm not sure, but I know when I installed it yesterday via RPM it kept core dumping and restarting. Glad I still had 4.1.11 available.

Donny

-----Original Message-----
From: Douglas K. Fischer [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 19, 2005 1:18 PM
To: mysql@lists.mysql.com
Subject: Re: 4.1.12 Crashing on Mandrake 10.1

Mark Hughes wrote:
> A lot of the RPMs for 4.1.12 seem to have been removed from the download
> page, so I presume there's a known problem:
> http://dev.mysql.com/downloads/mysql/4.1.html

I noticed this when I just went to download 4.1.12. What's going on? Is 4.1.12 unstable, or is this strictly a build/package issue? Any ETA on the return of the RPMs?

Cheers,
Doug

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: SATA vs SCSI
Kevin,

I am in the same boat you are: I can't store anything in memory, I just have too much data. I've got 2 TB on one box right now. I did get a quote last week for that much memory; I think it was 4 million just for the memory.

> Also.. if you have a high cache hit rate you can effectively have memory
> tables (in theory at least). I just haven't seen anywhere near 20k qps.

20k qps is not that hard to do with all-memory tables, especially if you are only using the primary key and nothing else. We have one quad Opteron that was pushing over 13k queries per second without using any memory tables at all. All queries were simple selects using only the primary key.

But we found that in many cases we were wasting so much time doing so many single selects. Now we are only running about 1k qps on a box, but we are doing thousands of IN queries instead. So we get lower qps, but faster overall performance.

Donny
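Donny's trade of many single-key selects for a few IN queries can be sketched against a hypothetical lookup table (the table and column names here are invented for illustration):

```sql
-- Instead of issuing one round trip per key:
SELECT word, hits FROM word_stats WHERE word_id = 101;
SELECT word, hits FROM word_stats WHERE word_id = 102;
SELECT word, hits FROM word_stats WHERE word_id = 103;

-- ...batch the keys into a single primary-key IN lookup:
SELECT word, hits
FROM word_stats
WHERE word_id IN (101, 102, 103);
```

Each element of the IN list is still a fast primary-key probe, but the client pays for one round trip and one parse instead of N, which is where the "lower qps, faster overall" effect comes from.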
RE: Preventing slaves from falling behind masters...
With MySQL you should ONLY use RAID10; everything else is not worth your time. As long as you are using 15k SCSI drives on both your master and your slave, your slave should rarely ever fall behind, especially if you are doing less than 1,000 inserts per second on the master.

The only time our slaves ever fall behind is when we delete 40 records from table A and then insert 40 new ones. But we do that for 600k records, so we do a few million deletes and inserts in a short period of time.

Donny

-----Original Message-----
From: Kevin Burton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 11, 2005 7:30 PM
To: mysql@lists.mysql.com
Subject: Preventing slaves from falling behind masters...

If you're running in a master/slave environment, and your application is using the slave too often, replication can fall behind, which can then confuse your application.

This can happen if the IO performance of the master and the slaves is equivalent and you're performing INSERT/UPDATE/DELETE with a load of 1 on the master. Then, when the transactions move to the slave, all the IO is used up and any additional SELECTs will just cause the slave to fall behind.

Has anyone else seen this?

One way I was thinking of solving this is to use RAID5 on our master and RAID0 on the slaves, so that the master is a hard bottleneck. Then the slaves have no problem running transactions via replication and still have capacity available to run SELECTs.

Any other ideas?

--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
See irc.freenode.net #rojo if you want to chat.
Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html

Kevin A. Burton
Location - San Francisco, CA
AIM/YIM - sfburtonator
Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
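Whether a slave is actually lagging is easy to measure from the slave itself; a minimal sketch (run on the slave, not the master):

```sql
-- Seconds_Behind_Master in this output reports how far the slave's SQL
-- thread lags behind the master's binlog (the column exists from
-- MySQL 4.1.1 on).
SHOW SLAVE STATUS\G

-- On older 4.0 servers, the usual fallback is comparing the binlog file
-- and position here against SHOW MASTER STATUS run on the master.
```

Watching this value under a write-heavy load is a more direct test of the RAID choices discussed above than waiting for the application to notice stale reads.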
RE: Mysql tuning - server Crash 1
Heikki,

I sent this to a few friends of mine who work on Fedora quite a bit:

> As a general note, Fedora Cores are not considered stable.

None of them wanted to comment officially, but they asked that you show proof, especially since most of RH4 is Fedora. I know I've used FC1, FC2, and FC3, and on FC2 and FC3 both the 32- and 64-bit versions. Aside from certain IDE problems (and I would never use IDE on a MySQL server anyway), we've never had any problems with Fedora at all.

They wanted me to send you some benchmarks comparing Fedora, SUSE, and some of the other distros running MySQL, but the site they sent me, which shows Fedora beating all of them, is currently down. Oh well.

Donny

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 27, 2005 2:58 PM
To: mysql@lists.mysql.com
Subject: Re: Mysql tuning - server Crash 1

Max,

----- Original Message -----
From: Deluxe Web [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Sunday, February 27, 2005 2:17 PM
Subject: Re: Mysql tuning - server Crash 1

> Hi Heikki,
> http://lists.mysql.com/mysql/180583
> http://lists.mysql.com/mysql/180581
> Running fedora core1
> Mysql: 4.0.16-Max-log
> If you check http://lists.mysql.com/mysql/180583 (.err) you will see
> errors. Does that contain ALL relevant info? Let me know if you have
> any questions.

You should upgrade to 4.1.10. We might get better error diagnostics then.

As a general note, Fedora Cores are not considered stable.

> Thank you. I appreciate your help!!
> Max

Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables
http://www.innodb.com/order.php
RE: INSERT queries hang on amd64
I know this may sound strange, but have you turned InnoDB on on the box, even if you don't use it? I have 8 amd64 boxes and have never experienced the problem you are describing. They range from single-proc to quad-proc; never this problem, but all of them have InnoDB turned on. Turn it on and see what happens; it could be a bug inside the MySQL code.

Donny

-----Original Message-----
From: Michel Buijsman [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 16, 2005 5:41 AM
To: mysql@lists.mysql.com
Subject: INSERT queries hang on amd64

My problem: INSERT queries hang on amd64. This looks a lot like Don MacAskill's bug report in http://bugs.mysql.com/bug.php?id=3483, which is listed as closed, but the bug is apparently still there, so maybe it should be reopened...

I've just upgraded from 4.1.9 to 4.1.10 (binary) on a dual Opteron with 8G RAM, running Debian 3.1 pure64, gcc-3.4. Also tried 4.0.23 last week, but that had the same problems. Kernels I've tried are 2.6.11-rc1-mm1 and 2.6.11-rc2.

I'm running 2 boxes in a replication setup. Using the amd64 as the master (or standalone) gave me pretty much what Don describes in bug 3483: insert queries start hanging after a while, with no way to kill them except kill -9 from the OS, which is a bit rough. ;-)

The master or standalone setup required quite a bit of poking to get it to hang in a test environment, or just a few short hours running as a production server. I haven't been able to narrow it down to one specific thing, but it appears to have something to do with indexes, because it ran fine after dropping all of them.

I hacked up a test script that throws a random selection of selects and inserts at it at random intervals between 0 and 2 seconds, from 20 concurrent threads. I could trigger the hang by running a REPAIR TABLE while the test script was running. The repair thread would wait for its turn, then lock everything else out and do its thing, and when it was done the first insert after that would hang indefinitely.
(Left it hanging over the weekend; nothing.)

Adding skip-concurrent-insert fixes the problem, but cripples performance, so that's not really an option.

Then I tried to run one of the amd64 boxes as a slave off a Xeon box: same problem. The replication thread hangs within seconds of starting the server, and again won't budge for anything except kill -9. This is on a server without _any_ other connections except for a processlist.

I've run out of things to try, so I hope someone here can help...

my.cnf:

[mysqld]
# snipped replication setup
user=mysql
socket=/tmp/mysql.sock
skip-locking
set-variable = key_buffer=2G
set-variable = table_cache=1024
set-variable = sort_buffer=16M
set-variable = read_buffer=16M
set-variable = max_allowed_packet=10M
set-variable = thread_cache=64
set-variable = thread_stack=512K
set-variable = tmp_table_size=16M
set-variable = interactive_timeout=600
set-variable = wait_timeout=600
set-variable = max_connections=1024
set-variable = query_cache_type=2
set-variable = query_cache_size=100M
set-variable = join_buffer_size=8M
set-variable = thread_concurrency=4
set-variable = myisam_sort_buffer_size=64M
skip-innodb

--
Michel Buijsman
tty.nl -- 2dehands.nl
RE: What is the max length of IN() function?
Actually, I've done a test with this in the past, and we could not find a limit. But there is a magic number where the optimizer stops doing a good job of optimizing the query and it starts to get really slow. In our case we were using words and phrases, so we would have something like:

IN ('a', 'apple', 'apple car', 'car', 'c')

etc. We found that once it hit about 200 or so entries, the query went from 0.00 seconds to about 2-3 seconds, sometimes much more.

Donny

-----Original Message-----
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 16, 2005 9:07 PM
To: Daevid Vincent
Cc: mysql@lists.mysql.com
Subject: RE: What is the max length of IN() function?

> -----Original Message-----
> From: Daevid Vincent
> Sent: Wednesday, February 16, 2005 20:59
> To: mysql@lists.mysql.com
> Subject: What is the max length of IN() function?
>
> I tried to find this function on the dev.mysql.com site, but good luck
> finding it... ;-)
>
> Can someone tell me what the maximum length is for this function?
> SELECT * FROM foo WHERE bar IN (1, 2, 3, 4, ... N);
> How many entries can there be between 1 and N? Hundreds? Thousands?
> Millions?

From http://dev.mysql.com/doc/mysql/en/comparison-operators.html: "The number of values in the IN list is only limited by the max_allowed_packet value."

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa
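The documented limit Tom quotes can be checked directly on any server; a quick sketch (the `foo`/`bar` table is Daevid's hypothetical):

```sql
-- The hard cap on an IN() list is the statement size, which must fit
-- within max_allowed_packet:
SHOW VARIABLES LIKE 'max_allowed_packet';

-- A large but indexed IN list is still evaluated as a series of index
-- probes:
SELECT * FROM foo WHERE bar IN (1, 2, 3, 4 /* ... up to N */);
```

As Donny's test suggests, the practical limit is usually plan quality rather than the packet size: well before the statement stops fitting, the optimizer's handling of a very long list can degrade.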
RE: 2 gigs limits on MyISAM indexes?
What does the error log say? Anything?

Donny

-----Original Message-----
From: Frank Denis (Jedi/Sector One) [mailto:[EMAIL PROTECTED]
Sent: Friday, January 28, 2005 10:42 AM
To: Mat
Cc: mysql@lists.mysql.com
Subject: Re: 2 gigs limits on MyISAM indexes?

On Fri, Jan 28, 2005 at 04:00:24PM +0000, Mat wrote:
> What Operating System are you running this on?

Linux 2.6, 64 bits. MySQL 4.1.9.

> Also, is there anything in the error log?

Nothing, but as soon as I restart the server, it enters a strange state where all slots are full with unauthenticated connections that never die. No real query seems to be running any more.
RE: Connection performance, suggestions?
Attempt to connect to MySQL via IP address, and make sure on the MySQL box that you add the connecting boxes to your hosts file. Your problem should go away then.

Donny

-----Original Message-----
From: Larry Lowry [mailto:[EMAIL PROTECTED]
Sent: Saturday, January 22, 2005 9:25 PM
To: Peter J Milanese; mysql@lists.mysql.com
Subject: Re: Connection performance, suggestions?

Actually I was just trying to see why the same code against MySQL is so much slower than SQL Server. I tried the same code using the data provider from CoreLabs and found the connections occur 4 times faster than with the MySQL AB connector. The slowness just seems to be in the connector.

Now I know my method seems silly, but to my boss it makes sense. We have web sites that make these connections. They are obviously stateless, and each makes its own connection. Where it takes 22 seconds to connect to MySQL 100 times, I can in the same loop execute a select statement and bring back a 700k binary field, adding only about 0.5 seconds to the whole loop time. Half a second to dig into a database with 400k records and yank the record 100 times, but 22 seconds just to open and close the connection. Something just does not seem right.

Thanks for your input.

Larry Lowry

----- Original Message -----
From: Peter J Milanese [EMAIL PROTECTED]
To: Larry Lowry [EMAIL PROTECTED]
Sent: Saturday, January 22, 2005 10:22 AM
Subject: Re: Connection performance, suggestions?

You are not simulating. Your scripting produces 100 linear connections. You are attempting, by theory, to simulate simultaneous connections. Big difference.

-
Sent from my NYPL BlackBerry Handheld.

----- Original Message -----
From: Larry Lowry [EMAIL PROTECTED]
Sent: 01/22/2005 10:36 AM
To: Peter J Milanese [EMAIL PROTECTED]
Subject: Re: Connection performance, suggestions?
----- Original Message -----
From: Peter J Milanese [EMAIL PROTECTED]
To: Larry Lowry [EMAIL PROTECTED]; mysql mysql@lists.mysql.com
Sent: Friday, January 21, 2005 12:57 PM
Subject: Re: Connection performance, suggestions?

> A single transaction logs into the db several times?

Not really. I am simulating the connection process of a bunch of browser-based/web-services transactions.

> I assume it's a browser based transaction, no?

Correct.

> Are you limiting connections (in my.conf)? Have you tuned the config?
> If yes, how so?

Here is my My.INI:

[mysqld]
basedir=C:/MySql
datadir=D:/MySqlData
language=C:/MySql/share/english
port=3306
key_buffer_size=512M
table_cache=64
net_buffer_length=1M
max_allowed_packet=3M
query_cache_limit = 1M
query_cache_size = 32M
query_cache_type = 1
read_buffer_size=2M
read_rnd_buffer_size=8M
skip-innodb

Larry

--Original Message--
From: Larry Lowry
To: mysql
Sent: Jan 21, 2005 12:49 PM
Subject: Connection performance, suggestions?

Before I ask this question I must state that I love MySQL and want to use it more, but I work in a shop where we mostly use SQL Server 2000. In trying to use MySQL (4.12 and 4.19) we are seeing some performance issues. I hope someone can help me with this.

I am in a Windows 2003 Server environment, running mostly ASP.NET applications. I have narrowed the problem down to the speed at which the database connections open and close. The following code opens a MySQL database 100 times. This takes 21 to 23 seconds. I know I should only open the connection once, but this represents the ASP environment, where we are using a cluster of web servers. MySQL is running on an Athlon 64 3500+ with 2 gigs of memory. It is the only process on the server.

The SQL Server code below does the same thing, except against SQL Server 2000. This only takes .1 to .4 seconds. SQL Server is running on a PIII at 1.2 GHz with 1 GB of memory. It's a crappy old box for testing.

I am using the stock Framework 1.1 SqlClient and MySql.Data.MySqlClient version 1.0.3.31712.
All test boxes are on the same network switch. Can anyone help explain this difference? Is it the database engines or the data connectors? Any help would be appreciated.

Thanks,
Larry Lowry

'MySql Code
Dim sDBCS As String = "Server=DB;UserId=userid;Password=pass;Database=images"
Dim i As Long
Dim ti As Long = Microsoft.VisualBasic.Timer()
Dim db As MySqlConnection
db = New MySqlConnection(sDBCS)
For i = 1 To 100
    db.Open()
    db.Close()
Next
db.Dispose()
tbEnd.Text = Microsoft.VisualBasic.Timer() - ti

'SQL Server
Dim sDBCS As String = "Data Source=db;User Id=userid;Password=pass;Initial Catalog=images"
Dim i As Long
Dim ti As Long = Microsoft.VisualBasic.Timer()
Dim db As SqlConnection
db = New SqlConnection(sDBCS)
For i = 1 To 100
    db.Open()
    db.Close()
Next
db.Dispose()
tbEnd.Text = Microsoft.VisualBasic.Timer() - ti

-
Sent from my NYPL BlackBerry Handheld.
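Donny's hosts-file advice earlier in this thread targets the slow reverse-DNS lookup MySQL performs on each new connection. An alternative, assuming the account grants use IP addresses rather than hostnames, is to disable hostname resolution on the server entirely; a my.ini sketch:

```
# my.ini sketch: skip the reverse-DNS lookup on every new connection.
# Caveat: with this set, entries in the grant tables must be specified
# by IP address (or localhost), not by hostname.
[mysqld]
skip-name-resolve
```

If per-connection time drops sharply after this change, the 22-second loop above was dominated by name resolution rather than by the connector itself.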
RE: Max connections being used every 10-12 day.
Fredrik,

I may be losing my mind, but I don't think I am: according to your SHOW INDEX, you have multiple indexes on the same fields, which is absolutely worthless and actually makes things slower. For example id, which you have as PRIMARY, should not have any other indexes on it, but in the EXPLAIN you have PRIMARY,id,id_2,id_3. Get rid of id, id_2, and id_3. You need to do this for everything you have duplicates of.

Next, going by your original query, the real query you should try to run should look like this:

select A.id, A.parent
from art A
inner join art B using (id)
where A.id = 60 or B.parent = 60
order by A.date;

Donny

-----Original Message-----
From: Fredrik Carlsson [mailto:[EMAIL PROTECTED]
Sent: Monday, January 03, 2005 11:08 AM
To: Donny Simonton
Cc: mysql@lists.mysql.com
Subject: Re: Max connections being used every 10-12 day.

mysql> describe art;
+--------+--------------+------+-----+---------+----------------+
| Field  | Type         | Null | Key | Default | Extra          |
+--------+--------------+------+-----+---------+----------------+
| id     | int(11)      |      | PRI | NULL    | auto_increment |
| parent | int(11)      | YES  | MUL | NULL    |                |
| bodyid | int(11)      | YES  |     | NULL    |                |
| lst    | varchar(80)  | YES  | MUL | NULL    |                |
| mf     | varchar(80)  | YES  |     | NULL    |                |
| mt     | varchar(80)  | YES  |     | NULL    |                |
| subc   | varchar(200) | YES  | MUL | NULL    |                |
| sdate  | varchar(45)  | YES  |     | NULL    |                |
| batch  | varchar(80)  | YES  | MUL | NULL    |                |
| mgid   | varchar(90)  | YES  |     | NULL    |                |
| date   | datetime     | YES  | MUL | NULL    |                |
+--------+--------------+------+-----+---------+----------------+
11 rows in set (0.12 sec)

mysql> show index from art;
(all BTREE; showing Key_name, Seq_in_index, Column_name, Cardinality)
PRIMARY  1  id      405011
id       1  id      405011
date     1  date    405011
lst      1  lst        213
id_2     1  id      405011
id_2     2  parent  405011
lst_2    1  lst        213
lst_2    2  parent   57858
lst_2    3  date    405011
batch    1  batch      141
batch    2  lst       1177
lst_3    1  lst        213
lst_3    2  parent   57858
lst_3    3  batch    67501
id_3     1  id      405011
id_3     2  parent  405011
id_3     3  date    405011
parent   1  parent   57858
subc     1  subc     40501
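Donny's advice about the duplicate indexes, applied to Fredrik's table, would look something like this (a sketch; verify with SHOW INDEX and EXPLAIN before and after, since dropping an index a live query depends on can hurt):

```sql
-- id is already the PRIMARY KEY, and since id is unique, the
-- single-column index `id` and the (id, ...) composites id_2/id_3 add
-- nothing the primary key does not already provide.
ALTER TABLE art
  DROP INDEX id,
  DROP INDEX id_2,
  DROP INDEX id_3;

-- The single-column `lst` index is a left prefix of both lst_2 and
-- lst_3, so queries on lst alone can use either composite instead:
ALTER TABLE art DROP INDEX lst;
```

Fewer redundant indexes also means less work on every INSERT and DELETE, which matters given the connection pile-ups described in this thread.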
RE: Max connections being used every 10-12 day.
Fredrik,

What exactly are you trying to accomplish? Personally, I don't recommend using UNION unless absolutely necessary, since most people don't really understand when it should be used, and I don't think it should be used in this case either.

select A.id, A.parent, B.id, B.parent
from art A
inner join art B using (id)
order by A.date;

See if that gives you the same results as the original query, and then EXPLAIN it to see if you get anything different. Also, what is the table structure, including the indexes on the table?

Donny

-----Original Message-----
From: Fredrik Carlsson [mailto:[EMAIL PROTECTED]
Sent: Sunday, January 02, 2005 5:18 AM
To: Donny Simonton
Cc: mysql@lists.mysql.com
Subject: Re: Max connections being used every 10-12 day.

It is a single PIII 500MHz, so I just changed thread_concurrency to 2 :), thanks.

The slow query log doesn't show that many slow queries, but it did show a lot of queries that weren't using any index. Can these queries cause some kind of occasional lockup?

Is there a faster way to perform this query?

(SELECT id,parent FROM art WHERE id=495098 ORDER BY date)
UNION
(SELECT id,parent FROM art WHERE parent=495098 ORDER BY date);

EXPLAIN shows the following:

+-------+-------+----------------------+---------+---------+-------+------+-----------------------------+
| table | type  | possible_keys        | key     | key_len | ref   | rows | Extra                       |
+-------+-------+----------------------+---------+---------+-------+------+-----------------------------+
| art   | const | PRIMARY,id,id_2,id_3 | PRIMARY | 4       | const |    1 |                             |
| art   | ref   | parent               | parent  | 5       | const |    2 | Using where; Using filesort |
+-------+-------+----------------------+---------+---------+-------+------+-----------------------------+

// Fredrik Carlsson

Donny Simonton wrote:
> What kind of box is this? According to your my.cnf, it looks like it's
> either a dual with hyperthreading or a quad box.
>
> I don't see that you have your slow query log turned on; this should be
> the first thing you do, in my opinion. This is what mine looks like:
>
> ### Slow Query Information ###
> log-long-format
> log-slow-queries
> log-queries-not-using-indexes
> set-variable = long_query_time=3
>
> Then go in and fix all of those that are showing up in the slow query
> log. With 4-5 queries per second, you should NEVER fill up the 200
> connections unless you just have some awful queries, or you have some
> tables that are getting corrupted and repaired during that time.
>
> Donny
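One small fix to Fredrik's query as posted: in MySQL, an ORDER BY inside each parenthesized SELECT of a UNION does not order the combined result; the sort has to be applied to the union as a whole, and the sort column must appear in the select list. A sketch:

```sql
-- A single ORDER BY after the last part sorts the whole UNION result:
(SELECT id, parent, date FROM art WHERE id = 495098)
UNION
(SELECT id, parent, date FROM art WHERE parent = 495098)
ORDER BY date;
```

This keeps each half of the UNION as a cheap indexed lookup while still returning the rows in date order.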
RE: Max connections being used every 10-12 day.
The inner join should ALWAYS return a faster result than the UNION if you have the indexes set up correctly. Can you send me the EXPLAIN of the inner-join version, and also the full table structure and the indexes on the table? This should be fairly easy to solve.

-----Original Message-----
From: Fredrik Carlsson [mailto:[EMAIL PROTECTED]
Sent: Sunday, January 02, 2005 4:45 PM
To: mysql@lists.mysql.com
Subject: Re: Max connections being used every 10-12 day.

The inner join statement returned the same stuff, but it was not as fast as the UNION, and the inner join seems to use more CPU resources.

Could these UNION queries really be the problem behind my occasional lockups and all 200 connections being used? I mean, the server is not that loaded, and the HTTP logs show amazingly low HTTP traffic the night of the lockup. I upgraded MySQL to 4.0.22 and tuned thread_concurrency down to 2 to see if that helps.

When these lockups occur, it doesn't seem like MySQL is freeing the connections. For example, the last time it happened was around 02:00 a couple of days ago, and when I checked the server 14 hours later (16:00) it still said that max_connections was full and mysqld was idling, using 0% of the CPU. If the queries were queued up, wouldn't MySQL at least show some activity?

// Fredrik

Donny Simonton wrote:
> Fredrik,
>
> What exactly are you trying to accomplish? Personally, I don't recommend
> using UNION unless absolutely necessary, since most people don't really
> understand when it should be used, and I don't think it should be used
> in this case either.
>
> select A.id, A.parent, B.id, B.parent from art A inner join art B
> using (id) order by A.date;
>
> See if that gives you the same results as the original query and then
> explain it to see if you get anything different. Also, what is the
> table structure, including indexes, of the table?
>
> Donny
RE: [PHP] phpMyAdmin w/ winXP - IIS w/PHP 4.3 w/mysql 4.1.8
The problem is not with phpMyAdmin; the problem is with PHP. If you install PHP 4.3, it will not work with MySQL 4.1.8, or any version of MySQL 4.1 or 5.0. It will only work if you turn on the short-passwords option in 4.1; I've not tried it on 5.0 lately. You can get it installed, but it takes a little work. This is not a phpMyAdmin bug at all, it's not really a PHP bug, and it's not a MySQL bug. I reported it to both phpMyAdmin and php.net over a year ago.

Think this is a problem? Wait until you get a $40k 64-bit machine and try to install PHP on it from source because you want to use PHP 4.3 and MySQL 4.1, and you find out you can't install anything because 64-bit systems install stuff in different places than PHP is expecting. And the PHP devel team has no plans to fix it, so you have to hack the configure script to get it to work.

Donny

-----Original Message-----
From: GH [mailto:[EMAIL PROTECTED]
Sent: Saturday, January 01, 2005 1:15 PM
To: Willy Sudiarto Raharjo; php-general; mysql@lists.mysql.com
Subject: Re: [PHP] phpMyAdmin w/ winXP - IIS w/PHP 4.3 w/mysql 4.1.8

It would be nice if phpMyAdmin would kindly note that on their website... Also, when I run a phpinfo() it says I have the 3.23.49 client; could this be a contributing factor?

On Sat, 1 Jan 2005 15:55:27 +0700, Willy Sudiarto Raharjo [EMAIL PROTECTED] wrote:

> > Has anyone had any problems installing phpMyAdmin with the above
> > configuration? I get an error about the MySQL client and
> > authentication methods?
> > MySQL Error: 1251 : Client does not support authentication protocol
> > requested by server
>
> MySQL 4.1.x is using a different authentication protocol, so it may
> break phpMyAdmin functionality. Use 4.0.x if you want to use phpMyAdmin,
> or maybe you should wait for the next release.
>
> --
> Willy Sudiarto Raharjo
> Registered Linux User : 336579
> Public-key : http://www.informatix.or.id/willy/public-key.txt
> Blog : http://willysr.blogspot.com
> OOo Documentation Project (ID) : http://project.informatix.or.id
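For reference, the "short passwords" workaround Donny mentions is done on the 4.1 server side by giving the affected account a pre-4.1-style password hash; a sketch (the account name and password here are placeholders):

```sql
-- Run as a privileged user on the MySQL 4.1 server; 'webuser'@'localhost'
-- is a hypothetical account used by the old 3.23/4.0 client library.
SET PASSWORD FOR 'webuser'@'localhost' = OLD_PASSWORD('secret');
```

Alternatively, starting mysqld with the old_passwords option makes all newly set passwords use the old hash format, so every legacy client can authenticate, at the cost of the weaker pre-4.1 hashing.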
RE: Max connections being used every 10-12 day.
What kind of box is this? According to your my.cnf, it looks like it's either a dual with hyperthreading or a quad box.

I don't see that you have your slow query log turned on; this should be the first thing you do, in my opinion. This is what mine looks like:

### Slow Query Information ###
log-long-format
log-slow-queries
log-queries-not-using-indexes
set-variable = long_query_time=3

Then go in and fix all of those that are showing up in the slow query log. With 4-5 queries per second, you should NEVER fill up the 200 connections unless you just have some awful queries, or you have some tables that are getting corrupted and repaired during that time.

Donny

-----Original Message-----
From: Fredrik Carlsson [mailto:[EMAIL PROTECTED]
Sent: Saturday, January 01, 2005 4:37 AM
To: mysql@lists.mysql.com
Subject: Max connections being used every 10-12 day.

Hi list,

I have a rather serious problem that I really don't know how to solve. Every 8-12 days my MySQL server stops responding, and I get the error code that indicates that max_connections is full. The problem is that I have checked all of my code over and over again to see that every connection is closed properly, and they are. No persistent connections are being used, and the max-connections error always occurs at night, 02:00-03:00; my httpd logs show no unusual amount of traffic at the time.

The last time this happened I tuned wait_timeout down to 15 seconds to see if that helped, but no effect. :( The server is running NetBSD 1.6.2 and MySQL 4.0.21.

I really need help on this one, because I don't know what is causing max_connections to be used up all at once, or how to reproduce the error. I only know that it happens very periodically, and 'show full processlist' hardly ever shows any connections, not even the day/hours before the error. The server has about 4-5 queries per second.

According to the manual, max_connections has one connection reserved for the superuser, but I have never been able to use that extra connection to see which user is eating up all the connections.

// Fredrik Carlsson

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 280M
max_allowed_packet = 32M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 64M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
max_connections = 200
wait_timeout = 15
connect_timeout = 5

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
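When the server wedges with all slots taken, a couple of status queries on that reserved superuser connection narrow down who is holding them; a sketch:

```sql
-- Every current connection, with its user, state, command, and how long
-- it has been in that state:
SHOW FULL PROCESSLIST;

-- The high-water mark of simultaneous connections since startup, and the
-- current count:
SHOW STATUS LIKE 'Max_used_connections';
SHOW STATUS LIKE 'Threads_connected';
```

Capturing SHOW FULL PROCESSLIST at the moment of the 02:00-03:00 pile-up (for example from a cron job) would show whether the slots are held by long-running queries, by sleeping connections that were never closed, or by unauthenticated connections.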
RE: scanning 2 rows slow index fast 26GB MyISAM
Bryan,

SELECT COUNT(*) is basically a different query than selecting locationid or any of your other fields. I have tables with way more than a billion rows of information, some in InnoDB and some in MyISAM, and neither of them, even heavily loaded, will take as long as yours is taking.

I recommend that you try this. Run and EXPLAIN:

select locationid from matrix where accountid = 11 and wordid = 72 and position = 'Body';

How many results do you get? Then run and EXPLAIN:

select locationid from matrix where accountid = 11 and wordid = 72 and position = 'Body' and date > now() - interval 10 day;

How many results do you get on this query? I am betting the problem is that you only have results in the past 10 days, and nothing before that with accountid = 11, wordid = 72, and position = 'Body', which would then cause a scan on the date, since it doesn't have anything before that. Just a theory.

Donny

-----Original Message-----
From: Bryan Heitman [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 23, 2004 2:02 PM
To: Sergio Salvi
Cc: mysql@lists.mysql.com
Subject: Re: scanning 2 rows slow index fast 26GB MyISAM

Hi Sergio,

All of your suggestions deal with key optimization; I do not believe I have a key issue here. Remember that SELECT COUNT(*), an index-only query, returns in .06 seconds, which is very quick.

The real question is why it takes 5 minutes to retrieve the row data for these 2 rows that the index found so quickly. Why the delay, and why the heavy read activity on the MYD file? The time it takes does not make a lot of sense to me. Does MyISAM not handle large MYD files with a billion rows well, such that I should split my data across many tables instead? I have certainly not run across this issue before, but this is the first time I have a table with a billion rows.
mysql> show index from matrix;
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| matrix |          1 | myKey    |            1 | AccountID   | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            2 | WordID      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            3 | Position    | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey    |            4 | date        | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| matrix |          1 | myKey2   |            1 | LocationID  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
+--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
5 rows in set (0.00 sec)

- Original Message - From: Sergio Salvi [EMAIL PROTECTED] To: Bryan Heitman [EMAIL PROTECTED] Cc: mysql@lists.mysql.com Sent: Thursday, December 23, 2004 12:01 PM Subject: Re: scanning 2 rows slow index fast 26GB MyISAM

On Thu, 23 Dec 2004, Bryan Heitman wrote: My mistake! Here you go: Ok, no prob :)

CREATE TABLE `matrix` (
  `WordID` int(11) unsigned NOT NULL default '0',
  `LocationID` int(11) unsigned NOT NULL default '0',
  `Position` enum('Body','From','Subject','To','Mailbox','File') NOT NULL default 'Body',
  `times` int(11) unsigned NOT NULL default '0',
  `MyOrder` int(11) unsigned NOT NULL default '0',
  `AccountID` int(11) unsigned NOT NULL default '0',
  `date` timestamp(19) NOT NULL,
  KEY `myKey` (`AccountID`,`WordID`,`Position`,`date`),
  KEY `myKey2` (`LocationID`)
) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 DATA DIRECTORY='/home/imap/fuse3.disk2/SQL/search/'

Oops, I forgot to ask you to send the output of show index from matrix. But your index myKey looks good; you could try changing the order of the fields in your key. Try creating an index with your fields ordered by the Cardinality value from the show index from matrix output (in ascending order). Also, what happens if you don't specify the date value in your query? Check the time it takes and the explain output.
Another thing I would suggest is to create (or replace) your index, trying all (or almost all) of the possible combinations regarding the order of the keys in your index. It helped me in some situations, and sometimes it's better for me to keep two indices with the same keys but different order, because of my different selects. Hope that helps! []s, Sergio
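Sergio's suggestion of reordering the composite index can be sketched like this; the new column order below is purely illustrative (you would pick it from the actual cardinalities on your data):

```sql
-- Hypothetical reordering of myKey; rebuilding an index on a
-- billion-row MyISAM table will take a while.
ALTER TABLE matrix DROP INDEX myKey;
ALTER TABLE matrix ADD INDEX myKey (WordID, AccountID, Position, date);

-- Compare the plan before and after the change:
EXPLAIN SELECT LocationID FROM matrix
 WHERE AccountID = 11 AND WordID = 72 AND Position = 'Body';
```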
RE: Optimising a query on a large table.
Rob, First of all, your query is pretty badly laid out. Unless you need every field from the table returned, only ask for the specific fields you need. And do you have an index on the combination of person_id + session_start? If not, your query will always be slow. This is how I would write it:

Select bla, bla2 from table where person_id = 10 and session_start between '2004-09-01 00:00:00' and '2004-10-01 00:00:00'

Then, why would you order by session_start, when the odds are that you added the data to the table in time order anyway? So why waste the server's time ordering something that may already be ordered for you automatically. But you would know that better than any of us. A query like this should take no longer than 0.1 seconds to execute in most cases, even with a few gigs of data. Donny

-Original Message- From: news [mailto:[EMAIL PROTECTED] On Behalf Of Rob Keeling Sent: Thursday, December 23, 2004 11:55 AM To: mysql@lists.mysql.com Subject: Optimising a query on a large table.

I have a 152MB MyISAM table that I am trying to execute a simple select statement on. I need to retrieve all rows with a given index, sorted by date. This is taking a very long period of time to execute. What can I do to speed up the query? The sql is:

SELECT * FROM table WHERE (person_id LIKE 10) AND (session_start > '2004-09-01 00:00:00') AND (session_start < '2004-10-01 00:00:00') Order by session_start

Thanks, Rob Keeling - -- I love deadlines. I love the whooshing noise they make as they go by. - Douglas Adams

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
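A sketch of the fix Donny describes; Rob's real table name was elided, so `sessions` and the index name here are made up:

```sql
-- Composite index so the WHERE clause is satisfied from the index,
-- and the ORDER BY comes back in index order for free:
ALTER TABLE sessions ADD INDEX idx_person_start (person_id, session_start);

SELECT session_id, session_start      -- ask only for the fields you need
  FROM sessions
 WHERE person_id = 10                 -- = instead of LIKE on an integer
   AND session_start BETWEEN '2004-09-01 00:00:00' AND '2004-10-01 00:00:00'
 ORDER BY session_start;
```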
RE: MYSQL is getting slow
What are some typical queries for a given schema? I have no idea; that's the whole problem, it's a proprietary product.

Nothing is proprietary when it comes to mysql. You can turn on the slow query log, which is the first thing: do you have it turned on? What is the slow query time set to? Are any showing up? If so, then it must be time to add some indexes until they go away. The next thing you can always do is turn on the general query log, which will pretty much log everything, and then you can see everything that is being done. I've used a similar product that worked with snort in the past; it worked fine for us, but snort is boring. You need to pull the data right from the switch, using something like netflow, and logging like 600mb/s of netflow logs to mysql is much more fun! Donny

-Original Message- From: Patrick Marquetecken [mailto:[EMAIL PROTECTED] Sent: Friday, December 10, 2004 3:03 AM To: [EMAIL PROTECTED] Subject: Re: MYSQL is getting slow

On Thu, 9 Dec 2004 12:34:36 -0800 Dathan Pattishall [EMAIL PROTECTED] wrote: This is a very broad question, I sometimes do this myself. But to answer I know, but where must i begin ... the question to the best of my ability I have to ask a few. Are you using RAID? If so what RAID level? no raid What are your my.cnf settings? standard except for the bind address What type of Indexes are you using? On the largest table, which has only 2 fields, there are primary indexes on both fields. And each table has its own primary index, and some other normal indexes. What does vmstat and iostat say?
NIDS-console distfiles # free
             total       used       free     shared    buffers     cached
Mem:        507508     497872       9636          0       2232     399024
-/+ buffers/cache:      96616     410892
Swap:       506036      62384     443652

procs ---memory-- ---swap-- -io --system-- cpu
 r b swpd free buff cache si so bi bo in cs us sy id wa
 0 2 62384 5044 2380 40163611 251187 522 15 6 79 0
 1 0 62384 4412 2392 40229600 451113 1272 1859 17 5 79 0
 0 2 62384 4504 1840 40274400 455672 1325 1977 16 8 76 0
 2 0 62412 4960 1808 40140800 3959 0 1170 1866 35 13 52 0

What Filesystem are you using? ext3 What are some typical queries for a given schema? I have no idea; that's the whole problem, it's a proprietary product. Typically for most organizations mysql dedicated on the box below is wicked fast, even when not optimized, because of system cache etc. But if you're running out of diskspace then that's a problem in itself. I got a lot of disk space left.

-Original Message- From: Patrick Marquetecken [mailto:[EMAIL PROTECTED] Sent: Thursday, December 09, 2004 7:49 AM To: [EMAIL PROTECTED] Subject: MYSQL is getting slow

Hi, I have 3 snort sensors logging to a central mySQL database. After two weeks the size of the database is about 3.3GB and the machine is getting slow. As I'm not used to working with mySQL, my question is how far can I go before the machine stops responding? HP Pentium4 - 2.2mhz - 512 mb, 99% used and no swap, HD of 40GB. TIA Patrick

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
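Turning on the slow query log, as suggested above, is a my.cnf change. A sketch using MySQL 4.x option names; the log path and the 2-second threshold are made-up examples:

```ini
# my.cnf, [mysqld] section (MySQL 4.x option names; restart required)
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2
log-long-format        # also log queries that use no indexes at all
```

After restarting, `SHOW VARIABLES LIKE 'long_query_time'` confirms what the server is actually using, and the log names every query worth indexing.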
RE: Why is this simple query so slow?
Aaron, Three things. 1. Do a show create table Offers_To_Buy 2. And why in the world would you have force index(scdd) when your where clause is on subcatID? If you can explain what you are trying to do, I'm sure many people can help you get exactly what you are looking for. 3. Why so many indexes? Do you search on every one of those fields? If not, then you are probably wasting diskspace and speed. Donny -Original Message- From: Aaron [mailto:[EMAIL PROTECTED] Sent: Friday, December 10, 2004 8:12 PM To: [EMAIL PROTECTED] Subject: Why is this simple query so slow? Hi all , I have a relatively simple query that is taking longer than I think it should. Can anyone possibly give me some idea why this might be or any potential bottleneck areas I might want to check out? thanks! Here is some information. The query below takes around 8 seconds, and returns 3253 rows. Mysql Version: 4.1.7-standard-log Operating System: Linux 2.4.20-8smp Query:SELECT ID FROM Offers_To_Buy FORCE INDEX(scdd) WHERE subcatID = 72 ; ... ... 
3253 rows in set (8.00 sec)

Explain says:

mysql> EXPLAIN SELECT ID FROM Offers_To_Buy FORCE INDEX(scdd) WHERE subcatID = 72 ;
+----+-------------+---------------+------+---------------+------+---------+-------+------+-------------+
| id | select_type | table         | type | possible_keys | key  | key_len | ref   | rows | Extra       |
+----+-------------+---------------+------+---------------+------+---------+-------+------+-------------+
|  1 | SIMPLE      | Offers_To_Buy | ref  | scdd          | scdd | 4       | const | 2988 | Using where |
+----+-------------+---------------+------+---------------+------+---------+-------+------+-------------+
1 row in set (0.02 sec)

mysql> SHOW INDEXES FROM Offers_To_Buy ;
+---------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table         | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Offers_To_Buy |          1 | ID          |            1 | ID          | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | oldtitle    |            1 | oldtitle    | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | user        |            1 | userID      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | date        |            1 | deletedate  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | subcategory |            1 | subcatID    | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | country     |            1 | country     | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | source      |            1 | source      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | scdd        |            1 | subcatID    | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | scdd        |            2 | deletedate  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | keywords    |            1 | keywords    | NULL      |        NULL |     NULL | NULL   | YES  | FULLTEXT   |         |
| Offers_To_Buy |          1 | bid         |            1 | bid         | NULL      |        NULL |     NULL | NULL   |      | FULLTEXT   |         |
+---------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
11 rows in set (0.00 sec)

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
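One concrete instance of Donny's third point: the `scdd` index already starts with `subcatID`, so the single-column `subcategory` index is redundant for equality lookups and can likely be dropped. A sketch; verify with EXPLAIN on your own workload before dropping anything:

```sql
-- subcategory(subcatID) is a prefix of scdd(subcatID, deletedate),
-- so the optimizer can use scdd for subcatID-only lookups too.
ALTER TABLE Offers_To_Buy DROP INDEX subcategory;

-- Without the FORCE INDEX hint, the optimizer should now pick scdd:
EXPLAIN SELECT ID FROM Offers_To_Buy WHERE subcatID = 72;
```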
RE: MySQL 4.1.7 Network slowdown
I have a bunch of 4.1.7 boxes all running linux, some 32 bit and some 64 bit, and I've never experienced the problem you are having. Are you having the problem if you connect from a remote linux box? The final question: does the windows box have reverse DNS set up for it? If not, add it to the /etc/hosts file on your fc3 linux box and reconnect to mysql. Donny

-Original Message- From: Frank Febbraro [mailto:[EMAIL PROTECTED] Sent: Tuesday, December 07, 2004 1:01 PM To: [EMAIL PROTECTED] Subject: Re: MySQL 4.1.7 Network slowdown

This might be related to a bug I filed a couple months ago, assuming your server is running on Windows http://bugs.mysql.com/bug.php?id=5787

Sorry, I did not mention it, we are running on Linux, Fedora Core 3 to be exact. My client machine (the remote machine in this mix) is a windows machine though

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL 4.1.7 Network slowdown
I've been using mysql 4.1 since the first day it was out; it's all been trial and error. If I remember correctly, I found it on 4.1.0 when I was doing a processlist, and noticed that some of the boxes connecting to mysql had reverse and others didn't. So now we actually go to the /etc/hosts file and name all of our boxes something just in the hosts file so it will be easier to know exactly what everything is. You might still want to file that as a bug and you can at least have mysql determine if it's a bug or not. They will probably say it's on some documentation like on page 13,645 paragraph 12 or something so they won't consider it a bug. But at least this way somebody else may know about the problem. Donny -Original Message- From: Frank Febbraro [mailto:[EMAIL PROTECTED] Sent: Tuesday, December 07, 2004 3:09 PM To: Donny Simonton; [EMAIL PROTECTED] Subject: Re: MySQL 4.1.7 Network slowdown The final question, does the windows box have reverse DNS setup for it? If not add it to the /etc/hosts file on your fc3 linux box. And reconnect to mysql. WOW! That was it. Things are lightning fast now. Sorry for going completely down the wrong path. What would have been the best way to figure this out, not knowing this could have been the case? I truly apprecuate the help. Regards, Frank -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL 4.1.7 Network slowdown
It was probably attempting to do a reverse lookup and nothing exists, so it just has to time out. Donny

-Original Message- From: Frank Febbraro [mailto:[EMAIL PROTECTED] Sent: Tuesday, December 07, 2004 3:23 PM To: Donny Simonton; [EMAIL PROTECTED] Subject: Re: MySQL 4.1.7 Network slowdown

I've been using mysql 4.1 since the first day it was out; it's all been trial and error. If I remember correctly, I found it on 4.1.0 when I was doing a processlist, and noticed that some of the boxes connecting to mysql had reverse and others didn't. So now we actually go to the /etc/hosts file and name all of our boxes something just in the hosts file so it will be easier to know exactly what everything is. You might still want to file that as a bug and you can at least have mysql determine if it's a bug or not. They will probably say it's in some documentation like on page 13,645 paragraph 12 or something so they won't consider it a bug. But at least this way somebody else may know about the problem.

So do you think the 5+ second delay was that it was actually getting a reverse lookup, just from somewhere that responded really slowly, or it tried, got some timeout, then gave up? Thanks again...so happy. Frank

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
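For reference, the two usual fixes for this reverse-DNS stall, sketched below; the hostname and IP are made up, and note that with skip-name-resolve all host-based GRANTs must use IP addresses:

```ini
# Option 1: /etc/hosts on the MySQL server -- give each client IP a name
# so the reverse lookup resolves instantly (hypothetical entry):
#   192.168.1.50   winclient1

# Option 2: my.cnf, [mysqld] section -- never do reverse lookups at all:
skip-name-resolve
```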
RE: Table Locking Problem? Very Slow MyISAM DB - PLS HELP!
Andrew, Do you have the slow query log turned on? What does one of your tables look like, and one of the 8 queries you talk about? It very well could just be an index problem. And what is the size of the data? Donny

-Original Message- From: Andrew Nelson [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 11:22 PM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: Table Locking Problem? Very Slow MyISAM DB - PLS HELP!

The reason I ask is because eight select statements should not bog down a production server. On the MySQL side, is anything being written to the slow query log? On the application side, is there any virus scanning or similar activity being performed? Does iostat show any heavy reading or writing activity? Is memory being swapped? What is the server load? Do you have a high wait time, or is CPU usage the only symptom?

Nothing is being written very often. Maybe a few email accounts added/modified each day. iostat shows very small disk activity (around 0.4Mb/s average). No memory is being swapped. We do have a high wait time - the email is queueing up and the website interface to update the database (in PHP) times out when it's busy, and I figured it's the CPU - I don't get what is so computational; the databases are quite small (it's just a userbase - about 200Kb). I appreciate your help, Thanks, Andrew.

Andrew Nelson wrote: Hi Victor, How did you deduce that the database server is the bottleneck? Are all your processes running on the same machine? Because 'ps -aux' shows it running at 94% of the CPU, and when I stop/start the mysql server, it seems to be ok again for another hour. Any ideas?

Andrew Nelson wrote: Hi, I have a MySQL 3.23.55 server managing accounts on my exim mail server. The table type on all tables is MyISAM. I have the MTA performing various queries for each incoming email - determining mail aliases, vacation messages and filtering rules etc - but they're all pretty much SELECT statements.
I know this isn't ideal and I've started replacing runtime queries with processes that search text files instead (generated every few minutes etc), but it should still be able to cope, I would have thought? At its busiest, it's performing about 8 trivial queries per second. It's a Xeon 2.6Ghz machine with 1Gb of RAM (running on FreeBSD) but it's grinding to a halt. I have to keep stopping and restarting the MySQL server to regain speed. As there are many processes trying to access the same tables to do SELECTs, I thought it might be a locking issue. BDB didn't seem to help - can anyone suggest something that might help? Thanks, Andrew.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
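A quick way to test the table-locking theory from this thread, using status counters that exist in MySQL 3.23/4.x:

```sql
-- A high Table_locks_waited count relative to Table_locks_immediate
-- suggests contention on MyISAM's table-level locks.
SHOW STATUS LIKE 'Table_locks%';

-- And see what the busy threads are actually doing (look for
-- "Locked" in the State column):
SHOW FULL PROCESSLIST;
```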
RE: MySQL support for AMD64
I've got 3 amd64 machines running mysql: one with 32 gigs of memory and 2 with 16 gigs. All of them are quad 848's. We use fedora core 2 on all of our boxes. 2 of the boxes are pushing over 3000 queries per second, and one is over 4k per second. Personally, I have about 30 mysql boxes, and I will never buy a non-64-bit machine again. Donny

-Original Message- From: Steve Poirier [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 8:29 AM To: 'Lynn Bender'; [EMAIL PROTECTED] Subject: RE: MySQL support for AMD64

I would recommend Raid 10 over Raid 5 even if it's kind of a big hit on your storage capacity. http://www.experts-exchange.com/Storage/Q_20640972.html I'm successfully running a Master/Slave setup with the following machines: Quad Opteron 64 / 32G RAM, Dual Opteron 64 / 16G RAM. Using gentoo compiled from scratch (stage 1). 100% stability around 1000 queries / second. _ Steve

-Original Message- From: Lynn Bender [mailto:[EMAIL PROTECTED] Sent: November 30, 2004 2:23 PM To: [EMAIL PROTECTED] Subject: MySQL support for AMD64

I just received a box with the following specs: Dual AMD64, 8G ram, Two 3ware 2.4 terabyte RAID 5 arrays. My company has been using Redhat for most of its production machines. 1. Does anyone have any success/horror stories running MySQL 4.0.x on RHES 3 / AMD64? 2. Does anyone have alternate recommendations for running MySQL databases in the terabyte range on AMD64? Thanks, Lynn Bender

UnsubCentral Secure Email List Suppression Management Neutral. Bonded. Trusted. You are receiving this commercial email from a representative of UnsubCentral, Inc.
13171 Pond Springs Road, Austin, TX 78729 Toll Free: 800.589.0445 To cease all communication with UnsubCentral, visit http://www.unsubcentral.com/unsubscribe or send an email to [EMAIL PROTECTED]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: 4.0.18 - 4.1.7 LEFT JOIN problem
Move this: tries.status IN ('running','waitkill','preemption') to a where clause and remove it from the join. I've never actually tried to do an IN in a join before; I personally don't think it should work. Donny

-Original Message- From: Sergei Golubchik [mailto:[EMAIL PROTECTED] Sent: Tuesday, November 23, 2004 2:13 PM To: Dilipan Sebastiampillai Cc: [EMAIL PROTECTED] Subject: Re: 4.0.18 - 4.1.7 LEFT JOIN problem

Hi! On Nov 23, Dilipan Sebastiampillai wrote: Hi! On Nov 22, Dilipan Sebastiampillai wrote: I moved a db from 4.0.18 to 4.1.7 and I realise that the SAME sql query using LEFT JOIN gives me different results. The result from 4.0.18 does a real LEFT JOIN, but 4.1.7 differs sometimes and I don't want that. I think it comes from an 'optimization'. How can I make a query without using the LEFT JOIN optimization of MySQL 4.1.7? What is the query? Can you provide a repeatable test case? The answer is amazingly wrong! Have a look:

mysql> SELECT hosts.name,hosts.hostId, tries.hostId, tries.status FROM hosts LEFT JOIN tries ON tries.hostId=hosts.hostId AND tries.status IN('running','waitkill','preemption') LIMIT 20;
+---------+--------+--------+---------+
| name    | hostId | hostId | status  |
+---------+--------+--------+---------+
| chimp13 |   1530 |   1393 | running |
| chimp13 |   1530 |   1485 | running |
| chimp13 |   1530 |   1418 | running |
| chimp13 |   1530 |   1499 | running |

I agree that it doesn't look right. But the query alone is not enough for me to repeat the bug. I need also both tables, hosts and tries. If they are big, you may try to remove unrelated rows, or create a completely independent test case. Actually you can even upload big tables if you don't want to spend time on a test case. But only with a repeatable test case can you make sure that the bug won't be present in 4.1.8.
Regards, Sergei -- Sergei Golubchik [EMAIL PROTECTED], MySQL AB, Senior Software Developer, Osnabrueck, Germany, www.mysql.com

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
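For reference, an IN in a join condition is valid SQL; the difference between the two placements Donny mentions is semantic, not syntactic. A sketch using the thread's tables:

```sql
-- ON-clause filter: every host appears; the tries columns come back
-- NULL for hosts with no try in the status list. This preserves
-- LEFT JOIN semantics.
SELECT hosts.name, tries.status
  FROM hosts
  LEFT JOIN tries
    ON tries.hostId = hosts.hostId
   AND tries.status IN ('running','waitkill','preemption');

-- WHERE-clause filter: hosts without a matching try are dropped
-- entirely, i.e. this behaves like an INNER JOIN.
SELECT hosts.name, tries.status
  FROM hosts
  LEFT JOIN tries ON tries.hostId = hosts.hostId
 WHERE tries.status IN ('running','waitkill','preemption');
```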
RE: 4.1.8
I think a lot of it has to do with the fact that 4.1 is now production and a lot more people are now using it than before. So you are bound to have new bugs crop up, or in some cases people think they are bugs and they just haven't read the manual. You can see what's already been fixed for 4.1.8 at http://dev.mysql.com/doc/mysql/en/News-4.1.8.html Donny

-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 2:35 PM To: [EMAIL PROTECTED] Subject: 4.1.8

Does 4.1.8 address any of the issues I am reading about in 4.1.7? Are all of these issues valid, or are you finding that it's lazy coding? Thanks, Donny Lairson President http://www.gunmuse.com 469 228 2183

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) and Replication
Michael, Normally I would let a fellow domain registrar fend for themselves, but I'm feeling nice today. :) Do a show create table contacts and see what the charset is set to. I bet the character set on the slave is different. Are you running this from the command line? Or are you getting this error in the error_log? Donny

-Original Message- From: Michael Grubb [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 2:49 PM To: [EMAIL PROTECTED] Subject: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) and Replication

Hello all, I'm having a very peculiar and nerve-wracking experience with MySQL 4.1.7 running on Fedora Core 2. On the master the query executes just fine, however on the slave I get: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) for operation '='' on query. Default database: 'BRITEERNO'. Query: 'Update contacts set firstname = '',lastname = '',email = '' ,company = '',address1 = '',address2 = '' ,address3 = '',city = '',state = 'XX' ,zip = '', country = 'XX',phone = '' ,fax = '',user = '',pass = '',changed_on = now() ,status = 'X' where RNK_id = ''' I'm completely and utterly stumped by this; I have no idea of where to go from here. Any help will be greatly appreciated. Thanks, Michael Grubb 000domains.com Below is output from my master and slave show variables command.
Show variables on the master:

character_set_client      latin1
character_set_connection  latin1
character_set_database    latin1
character_set_results     latin1
character_set_server      latin1
character_set_system      utf8
character_sets_dir        /usr/share/mysql/charsets/
collation_connection      latin1_swedish_ci
collation_database        latin1_swedish_ci
collation_server          latin1_swedish_ci

Show variables on the slave:

character_set_client      latin1
character_set_connection  latin1
character_set_database    latin1
character_set_results     latin1
character_set_server      latin1
character_set_system      utf8
character_sets_dir        /usr/share/mysql/charsets/
collation_connection      latin1_swedish_ci
collation_database        latin1_swedish_ci
collation_server          latin1_swedish_ci

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
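When master and slave really do differ, the usual fix in MySQL 4.1 is to check the table's own charset and, if needed, convert it. A sketch; back up the table before converting, since CONVERT TO rewrites the data:

```sql
-- Run on both master and slave; the DEFAULT CHARSET in the output is
-- what actually matters for the comparison in the failing UPDATE:
SHOW CREATE TABLE contacts;

-- Hypothetical repair if one side ended up utf8:
ALTER TABLE contacts
  CONVERT TO CHARACTER SET latin1 COLLATE latin1_swedish_ci;
```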
RE: Poor Select speed on simple 1 table query
Mos, Personally, I never use LIKE for anything. I would add a fulltext index myself and call it a day. But that's me. Donny

-Original Message- From: mos [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 2:40 PM To: [EMAIL PROTECTED] Subject: Poor Select speed on simple 1 table query

It doesn't get any simpler than this. :) The Select statement takes way too long to complete.

select rcd_id, company_name from company where company_name like 'fra%'
12357 rows fetched (86.08 seconds)

However if it returns just the column value from the index, it is quite fast:

select company_name from company where company_name like 'fra%'
12357 rows fetched (0.14 sec)

So by referencing a column (Rcd_Id or Cust_Name) from the data file, it becomes 600x slower compared to just referencing the value from the index by itself, namely Company_Name. I've run Analyze on the table; I've even repaired it and rebuilt the index with no increase in speed. The table has 10 million rows in it.

CREATE TABLE `company` (
  `Rcd_Id` int(4) NOT NULL auto_increment,
  `Company_Name` char(30) NOT NULL default '',
  `Cust_Name` char(15) default NULL,
  PRIMARY KEY (`Rcd_Id`),
  KEY `CompanyName_Index` (`Company_Name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

mysql> explain select rcd_id, company_name from company where company_name like 'fra%';
+----+-------------+---------+-------+-------------------+-------------------+---------+------+-------+-------------+
| id | select_type | table   | type  | possible_keys     | key               | key_len | ref  | rows  | Extra       |
+----+-------------+---------+-------+-------------------+-------------------+---------+------+-------+-------------+
|  1 | SIMPLE      | company | range | CompanyName_Index | CompanyName_Index | 30      | NULL | 10505 | Using where |
+----+-------------+---------+-------+-------------------+-------------------+---------+------+-------+-------------+

mysql> explain select company_name from company where company_name like 'gre%';
+----+-------------+---------+-------+-------------------+-------------------+---------+------+-------+--------------------------+
| id | select_type | table   | type  | possible_keys     | key               | key_len | ref  | rows  | Extra                    |
+----+-------------+---------+-------+-------------------+-------------------+---------+------+-------+--------------------------+
|  1 | SIMPLE      | company | range | CompanyName_Index | CompanyName_Index | 30      | NULL | 10505 | Using where; Using index |
+----+-------------+---------+-------+-------------------+-------------------+---------+------+-------+--------------------------+

So is a
600x slower query typical of queries that reference the data portion of the table compared to queries that reference just the indexed columns? Is there any way to speed it up? TIA Mike

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Poor Select speed on simple 1 table query
You could always add an index on company_name + rcd_id. That technically shouldn't help, but I've seen crazier things before. Donny

-Original Message- From: mos [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 4:23 PM To: [EMAIL PROTECTED] Subject: RE: Poor Select speed on simple 1 table query

At 03:32 PM 11/15/2004, Donny Simonton wrote: Mos, Personally, I never use LIKE for anything. I would add a fulltext index myself and call it a day. But that's me. Donny

Donny, Unfortunately I can't. The query must return all rows that *start* with a certain phrase. It's similar to doing a search on a range of values using between 'fra' and 'frazzz', but this too is very very slow (324 seconds to return 62k rows). The LIKE operator will use the index if the wildcard is not used in the first character position. The Explain command shows the index is being used, and the thing that I think is slowing it down is not the Where clause but the reference to Rcd_Id, which is not in the index. It's almost like MySQL is returning the results from the index file and then doing a non-indexed table join to the table data to get the Rcd_Id. Mike

-Original Message- From: mos [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 2:40 PM To: [EMAIL PROTECTED] Subject: Poor Select speed on simple 1 table query

It doesn't get any simpler than this. :) The Select statement takes way too long to complete.

select rcd_id, company_name from company where company_name like 'fra%'
12357 rows fetched (86.08 seconds)

However if it returns just the column value from the index, it is quite fast:

select company_name from company where company_name like 'fra%'
12357 rows fetched (0.14 sec)

So by referencing a column (Rcd_Id or Cust_Name) from the data file, it becomes 600x slower compared to just referencing the value from the index by itself, namely Company_Name. I've run Analyze on the table; I've even repaired it and rebuilt the index with no increase in speed. The table has 10 million rows in it.
CREATE TABLE `company` ( `Rcd_Id` int(4) NOT NULL auto_increment, `Company_Name` char(30) NOT NULL default '', `Cust_Name` char(15) default NULL, PRIMARY KEY (`Rcd_Id`), KEY `CompanyName_Index` (`Company_Name`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 mysql explain select rcd_id, company_name from company where company_name like 'fra%'; ++-+-+---+---+ ---+-+--+---+-+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | ++-+-+---+---+ ---+-+--+---+-+ | 1 | SIMPLE | company | range | CompanyName_Index | CompanyName_Index | 30 | NULL | 10505 | Using where | ++-+-+---+---+ ---+-+--+---+-+ mysql explain select company_name from company where company_name like 'gre%'; ++-+-+---+---+ ---+-+--+---+--+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | ++-+-+---+---+ ---+-+--+---+--+ | 1 | SIMPLE | company | range | CompanyName_Index | CompanyName_Index | 30 | NULL | 10505 | Using where; Using index | ++-+-+---+---+ ---+-+--+---+--+ So is a 600x slower query typical of queries that reference the data portion of the table compared to queries that reference just the indexed columns? Is there any way to speed it up? TIA Mike -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
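Donny's suggested index on company_name + rcd_id would in fact make the slow query a covering (index-only) query, which is exactly why the second EXPLAIN shows "Using index" and runs fast. A sketch with a made-up index name:

```sql
-- With both referenced columns in one index, MySQL can answer the
-- query from the index file alone, never touching the large MYD file:
ALTER TABLE company ADD INDEX name_id (Company_Name, Rcd_Id);

-- EXPLAIN should now show "Using index" in the Extra column:
EXPLAIN SELECT Rcd_Id, Company_Name
  FROM company
 WHERE Company_Name LIKE 'fra%';
```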
RE: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) and Replication
Got me stumped on this one. I've been using 4.1 for over a year now and I haven't run into this problem, except when somebody was using phpMyAdmin or something. Maybe somebody else can help with this one, sorry. Donny

_ From: Michael Grubb [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 4:23 PM To: Donny Simonton Cc: [EMAIL PROTECTED] Subject: Re: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) and Replication

Donny, I certainly appreciate your help. show create table contacts shows the same output on both the master and the slave ('latin1'). This is being run by the slave's SQL thread, and the error is shown in the Last error line of show slave status and the error log. Thanks again, Michael

Donny Simonton wrote: Michael, Normally I would let a fellow domain registrar fend for themselves, but I'm feeling nice today. :) Do a show create table contacts and see what the charset is set to. I bet the character set on the slave is different. Are you running this from the command line? Or are you getting this error in the error_log? Donny

-Original Message- From: Michael Grubb [mailto:[EMAIL PROTECTED] Sent: Monday, November 15, 2004 2:49 PM To: [EMAIL PROTECTED] Subject: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) and Replication

Hello all, I'm having a very peculiar and nerve-wracking experience with MySQL 4.1.7 running on Fedora Core 2. On the master the query executes just fine, however on the slave I get: Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) for operation '='' on query. Default database: 'BRITEERNO'.
Query: 'Update contacts set firstname = '',lastname = '',email = '',company = '',address1 = '',address2 = '',address3 = '',city = '',state = 'XX',zip = '',country = 'XX',phone = '',fax = '',user = '',pass = '',changed_on = now(),status = 'X' where RNK_id = '''

I'm completely and utterly stumped by this; I have no idea where to go from here. Any help will be greatly appreciated.

Thanks,
Michael Grubb
000domains.com

Below is output from my master and slave show variables command.

Show variables on the master:
  character_set_client      latin1
  character_set_connection  latin1
  character_set_database    latin1
  character_set_results     latin1
  character_set_server      latin1
  character_set_system      utf8
  character_sets_dir        /usr/share/mysql/charsets/
  collation_connection      latin1_swedish_ci
  collation_database        latin1_swedish_ci
  collation_server          latin1_swedish_ci

Show variables on the slave:
  character_set_client      latin1
  character_set_connection  latin1
  character_set_database    latin1
  character_set_results     latin1
  character_set_server      latin1
  character_set_system      utf8
  character_sets_dir        /usr/share/mysql/charsets/
  collation_connection      latin1_swedish_ci
  collation_database        latin1_swedish_ci
  collation_server          latin1_swedish_ci
RE: mysql NOT operator
Why would you have to write NOT, when your keyword = a will not return b to begin with? But if this was really just an example, you would do this:

SELECT data_id from table WHERE keyword = a AND keyword != b

Donny

-Original Message-
From: L a n a [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 06, 2004 10:35 PM
To: [EMAIL PROTECTED]
Subject: mysql NOT operator

Hello, Could you please tell me how I can write an SQL statement in PHP when I'd like a boolean-style search in one field, with an "except" or NOT condition. What I mean here is that I can execute the following:

1. SELECT data_id from table WHERE keyword = a AND keyword =b
2. SELECT data_id from table WHERE keyword = a OR keyword =b

However, the NOT operator gives an error:

3. SELECT data_id from table WHERE keyword = a NOT keyword =b ( returns sql error)

Could you please help? Thank you, Lana

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
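For what it's worth, if each data_id can have several keyword rows and what Lana actually wants is "tagged a but not tagged b", a plain AND on a single row can't express that; a NOT IN subquery can. A hedged sketch using Python's sqlite3 (the table name and sample data here are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kw (data_id INTEGER, keyword TEXT)")
con.executemany("INSERT INTO kw VALUES (?, ?)",
                [(1, "a"), (1, "b"), (2, "a"), (3, "b")])

# A single row can't equal 'a' AND 'b' at once, so keyword = 'a'
# alone already excludes the 'b' rows (Donny's point):
only_a = con.execute(
    "SELECT data_id FROM kw WHERE keyword = 'a'").fetchall()

# "Has keyword 'a' but NOT keyword 'b' on any row" needs a subquery:
a_not_b = con.execute("""
    SELECT DISTINCT data_id FROM kw
    WHERE keyword = 'a'
      AND data_id NOT IN (SELECT data_id FROM kw WHERE keyword = 'b')
""").fetchall()
```

With the sample rows, `only_a` matches ids 1 and 2, while `a_not_b` keeps only id 2, since id 1 also carries keyword 'b'.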
RE: 4.1.7 serious problems
Are you connecting from the local mysql command line client? Or from some other box? I saw this happen once before, but we were using an old client. Donny -Original Message- From: Heikki Tuuri [mailto:[EMAIL PROTECTED] Sent: Saturday, November 06, 2004 4:06 AM To: [EMAIL PROTECTED] Subject: Re: 4.1.7 serious problems Ugo, - Original Message - From: Ugo Bellavance [EMAIL PROTECTED] Newsgroups: mailing.database.myodbc Sent: Friday, November 05, 2004 3:42 PM Subject: Re: 4.1.7 serious problems Gleb Paharenko wrote: Hi. There were several posts in list like yours. Do you use InnoDB tables? Try to increase values of key_buffer_size, read_buffer_size and so on. InnoDB is enabled but no InnoDB table is used yet (coming soon). However, it crashes with only 1 client connected. There is still plenty of free memory. if mysqld crashes that easily, then you may have a hardware fault, or the OS version in that computer is buggy. You can try running memtestx86 or memburn: http://v.iki.fi/~vherva/memburn.c You can also try upgrading the kernel. Thanks, Best regards, Heikki Tuuri Innobase Oy Foreign keys, transactions, and row level locking for MySQL InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables http://www.innodb.com/order.php Order MySQL technical support from https://order.mysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: data not getting passed to mysql
The problem is not a MySQL problem, it's a PHP problem. It's probably because you don't have register_globals = On. The new versions of PHP have register_globals off by default. If you turn it on in the php.ini and restart Apache, I bet it will work. Then again, I could just blame it on Novell and call it a day. :)

Donny

-Original Message-
From: tom miller [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 02, 2004 9:43 AM
To: [EMAIL PROTECTED]
Subject: data not getting passed to mysql

I'm not sure if this is related to SuSE 9.1 or what; I'm beginning with MySQL and PHP. I'm running:

mysql 4.0.21-standard
suse 9.1, kernel version 2.6.5-7.10b-default
php version 4.3.4 - default install
apache 2.0.49 - prefork

I created an address book using PHP, and when I go through and fill in all the required fields and click on submit, it just resets the form. I was previously using this on Mandrake 9.2 and it worked flawlessly, however I was using Apache 1.3. I figured it was something wrong in my PHP scripting, but I took a sample calculator from php.net and it too was not passing data to MySQL. I have searched Google many times over and different forums and never found my answer, or I'm going in the wrong direction. I was looking into the MySQL modules that Apache uses, but that led to dead ends. I looked into how PHP handles the data, and that too led me to a dead end. I wiped my machine out and reinstalled the OS, figuring I had missed something or there was a corrupt file, but that apparently was not the case. If someone could make some helpful suggestions as to what's causing this, I would be much appreciative.

thanks

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Looking for null values.
I think you would have to do one column at a time. Like this:

Select * from QA where title is NULL;

Or you could get a little more crazy with something like this:

Select * from QA where (title is NULL) or (blabla is NULL) or (jimbob is NULL) or (theskyisfall is NULL);

Donny

-Original Message-
From: Scott Hamm [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 02, 2004 10:22 AM
To: [EMAIL PROTECTED]
Subject: Looking for null values.

How do I use SQL to look for null values in ANY column?

SELECT * FROM QA WHERE * = null;

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
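Since SQL has no `WHERE * IS NULL`, the OR list Donny describes can be generated from the table's own column list instead of typed by hand. A sketch in Python with sqlite3 (the `qa` table and its columns are invented stand-ins for Scott's table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE qa (id INTEGER, title TEXT, blabla TEXT)")
con.executemany("INSERT INTO qa VALUES (?, ?, ?)",
                [(1, "t", "x"), (2, None, "y"), (3, "u", None)])

# Pull the column names from the schema, then build
# "col1 IS NULL OR col2 IS NULL OR ..." automatically.
cols = [row[1] for row in con.execute("PRAGMA table_info(qa)")]
where = " OR ".join(f"{c} IS NULL" for c in cols)
rows = con.execute(f"SELECT * FROM qa WHERE {where}").fetchall()
```

In MySQL the same column list would come from `SHOW COLUMNS FROM qa` (or the information_schema in later versions); only rows 2 and 3, each with one NULL, come back here.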
RE: MySQL speed
Why not just add an index on touser+hidden. Problem solved.

Donny

-Original Message-
From: Dan Nelson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 14, 2004 11:48 AM
To: Dirk Schippers
Cc: [EMAIL PROTECTED]
Subject: Re: MySQL speed

In the last episode (Sep 14), Dirk Schippers said:

For several years I have been hosting a popular website using PHP and MySQL. As the site is growing and evolving, speed is becoming more and more important. With my latest inventions on the website, I notice that the website is becoming slow and I want to find out what's causing this. And this is the question that makes me totally confused: how is it possible that the following query:

SELECT COUNT(*) FROM messages WHERE touser = 20 AND hidden = 0

with a key on touser can take up to 1 second (I even had moments where it would take up to 18 seconds!!!) even if EXPLAIN says MySQL only has to walk through 2500 rows thanks to the key on touser. (Oh yeah, I did an ANALYSE on the table.) If I think of making my own program walking through a datafile with 2500 items, checking every item for the flag hidden, I would think that should not take even 0.01 second! Of course MySQL is more complicated than this, but I think it still is a very big difference.

That's up to 2500 random disk seeks, and even the fastest SCSI disks do only 300 seeks/sec (ATA disks max at ~150). Best case is where all the records are in memory and it doesn't have to hit the disk at all, but depending on how many tables you have and your RAM, that may not always be true. Try creating a multicolumn index on (touser,hidden), which will let mysql process the query without doing any record fetches at all.

-- Dan Nelson [EMAIL PROTECTED]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
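Dan's multicolumn-index advice can be seen directly in a query plan: once both filter columns live in one index, the COUNT(*) is answered from the index alone, with none of the per-row seeks he describes. A sketch with Python's sqlite3 standing in for MySQL (the `messages` rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages "
            "(id INTEGER PRIMARY KEY, touser INTEGER, hidden INTEGER)")
# The composite index Dan recommends: both WHERE columns together.
con.execute("CREATE INDEX touser_hidden_idx ON messages (touser, hidden)")
con.executemany("INSERT INTO messages (touser, hidden) VALUES (?, ?)",
                [(20, 0), (20, 0), (20, 0), (20, 1), (7, 0)])

(count,) = con.execute(
    "SELECT COUNT(*) FROM messages "
    "WHERE touser = 20 AND hidden = 0").fetchone()

# The plan shows the count is satisfied from the index alone:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM messages "
    "WHERE touser = 20 AND hidden = 0").fetchall()
print(count, plan)
```

SQLite labels this a "COVERING INDEX" search; MySQL's EXPLAIN would show "Using index" in the Extra column for the equivalent `(touser,hidden)` key.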
RE: best-performing CPU + platform for MySQL now? Opteron? OpenBSD? SuSE?
I can verify that a quad Opteron 2.2 runs about a million times better than a quad Xeon 3.06. The Opteron can handle more than the 3 gigs of memory that a 32-bit box is limited to. Right now in my quad Opteron we have 32 gigs of memory and MySQL is using 16.8 gigs of it. We run Fedora Core 2, with the RPM built by MySQL. We don't run anything else any longer.

Donny

-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 14, 2004 2:11 PM
To: Miles Keaton
Cc: [EMAIL PROTECTED]
Subject: Re: best-performing CPU + platform for MySQL now? Opteron? OpenBSD? SuSE?

On Mon, Sep 06, 2004 at 12:48:37PM -0700, Miles Keaton wrote:

If my company wants to get the best-performing fastest platform for a MySQL server, what would it be these days? Opteron? Dual? Quad? And on a related note... If a 64-bit CPU, then I'm assuming it would need an operating system designed for that 64-bit CPU, to get best performance, right? I know that OpenBSD has an amd64 version and that the OpenBSD developers seem to say that Opteron is their favorite (and most-currently-developed) CPU. I've used OpenBSD in the past and like it a lot. Is anyone here using MySQL on OpenBSD+Opteron in a high-load situation?

MySQL works quite well on Opteron machines. However, OpenBSD is a poor platform choice for running MySQL. It's known to run much better on FreeBSD or Linux (depending on your particular preference).

Jeremy
-- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: best-performing CPU + platform for MySQL now? Opteron? OpenBSD? SuSE?
MySQL released this a few weeks ago: http://www.mysql.com/news-and-events/press-release/release_2004_27.html

As far as personal benchmarks, it's fast. Real fast. With a quad Xeon (which was more expensive than the quad Opteron) our master server had a normal load of 2-3. With the quad Opteron it's less than .25.

Donny

-Original Message-
From: Brian Abbott [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 14, 2004 4:32 PM
To: [EMAIL PROTECTED]; 'Donny Simonton'
Cc: 'Miles Keaton'
Subject: RE: best-performing CPU + platform for MySQL now? Opteron? OpenBSD? SuSE?

Do you guys have metrics on this that you would be willing to share? We are looking at upgrading to the Opteron (from the Xeon) at the moment. Any information would be very helpful.

Brian Abbott

-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 14, 2004 2:21 PM
To: Donny Simonton
Cc: [EMAIL PROTECTED]; 'Miles Keaton'
Subject: Re: best-performing CPU + platform for MySQL now? Opteron? OpenBSD? SuSE?

On Tue, Sep 14, 2004 at 03:05:07PM -0500, Donny Simonton wrote:

I can verify that a quad opteron 2.2 runs about a million times better than a quad xeon 3.06. The opteron can handle more than 3 gigs of memory which is a 32 bit limitation. Right now in my quad opteron we have 32 gigs of memory and MySQL is using 16.8 gigs of the memory. We run fedora core 2, with the rpm built by MySQL. We don't run anything else any longer.

And we've had good but limited experiences so far with 64 bit FreeBSD 5 on amd64 (also a quad w/32GB).

Jeremy
-- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: The UNION makes us strong^H^H^H^Hcrazy
Why not just do it like this? You will need to compare the results yourself to make sure, but the numbers should be the same, and you don't have to do a union.

select empssn, paycode_1, payrate_1 from paympe where paycode_1 != '000' or paycode_2 != '000'

That might need to be an AND rather than an OR.

Donny

-Original Message-
From: Andy Bakun [mailto:[EMAIL PROTECTED]
Sent: Friday, September 10, 2004 2:29 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: The UNION makes us strong^H^H^H^Hcrazy

On Fri, 2004-09-10 at 14:12, [EMAIL PROTECTED] wrote:

According to the docs, this should work in versions past MySQL 4, and I seem to be running a version rather later than that: mysql Ver 11.18 Distrib 3.23.52, for pc-linux (i686). What am I doing wrong here? I have two valid SELECT statements; the field sizes and types are the same (indeed, empssn is the same field). This SHOULD provide me with what I'm looking for, but...

The command: mysql --version shows the version of the mysql client program. You are running version 11.18 of the mysql client program from the MySQL 3.23.52 distribution, or what is commonly called MySQL 3. The syntax allowed in queries is dependent on the version of the mysqld (and other) binaries that the SERVER uses. I suspect you are running mysqld from the MySQL 3.23.52 distribution also. This is a version that is earlier than MySQL 4, and doesn't support UNIONs.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
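Donny's rewrite folds the two UNIONed SELECTs into a single pass with OR, which also avoids UNION's duplicate-elimination work. A quick check of the idea in Python with sqlite3 (the `paympe` sample rows are invented; real data would need his suggested result comparison):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE paympe (
    empssn TEXT, paycode_1 TEXT, payrate_1 REAL, paycode_2 TEXT)""")
con.executemany("INSERT INTO paympe VALUES (?, ?, ?, ?)",
                [("111", "000", 0.0, "000"),    # no pay codes: excluded
                 ("222", "A01", 10.0, "000"),   # matches on paycode_1
                 ("333", "000", 0.0, "B02")])   # matches on paycode_2

# One scan with OR instead of two SELECTs glued together by UNION;
# each row that passes either test comes back exactly once.
rows = con.execute(
    "SELECT empssn, paycode_1, payrate_1 FROM paympe "
    "WHERE paycode_1 != '000' OR paycode_2 != '000'").fetchall()
```

Note this works on MySQL 3.23 too, which is Andy's point: the poster's server predates UNION support, but OR needs nothing newer.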
RE: please explain why this query isn't optimized
The other simple solution is to do something like this:

Select changed FROM archived_stats order by changed DESC limit 0,1

I haven't actually tried it; it may be just as slow. If you always know that you have dates in changed for the past week or past day, this may be an even better solution for you:

Select changed from archived_stats where changed > DATE_SUB(NOW(), INTERVAL 1 DAY) order by changed DESC limit 0,1

Hope this helps.

Donny

-Original Message-
From: Dave Dyer [mailto:[EMAIL PROTECTED]
Sent: Saturday, September 04, 2004 2:58 AM
To: Dan Nelson
Cc: Donny Simonton; [EMAIL PROTECTED]
Subject: Re: please explain why this query isn't optimized

Getting the same answer, from a simpler query, in infinitely less time, just seems wrong to me.

Makes perfect sense. Simpler queries *are* easier to optimize, you know :)

Makes perfect sense. Thanks, I think the relevant points have been covered.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
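The ORDER BY ... DESC LIMIT 1 trick works because, with an index on the column, the engine can read a single entry from the end of the index instead of scanning the table. A sketch of the pattern with Python's sqlite3 (the date values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE archived_stats (changed TEXT)")
con.execute("CREATE INDEX changed_idx ON archived_stats (changed)")
con.executemany("INSERT INTO archived_stats VALUES (?)",
                [("2004-09-01",), ("2004-09-03",), ("2004-09-02",)])

# Equivalent to SELECT MAX(changed), but phrased as "walk the
# index backwards and stop after one row".
(newest,) = con.execute(
    "SELECT changed FROM archived_stats "
    "ORDER BY changed DESC LIMIT 1").fetchone()
```

The `WHERE changed > DATE_SUB(...)` variant in the email adds a range condition that lets the server skip straight to the recent slice of the index before taking the top row.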
RE: please explain why this query isn't optimized
It would help if you would say how many entries you have for changed = 0 and how many are greater than 0. Since changed is a timestamp, you should never get an entry of 0, so the query changed > 0 will always do a full table scan. This is definitely not a bug.

Donny

-Original Message-
From: Dave Dyer [mailto:[EMAIL PROTECTED]
Sent: Friday, September 03, 2004 7:04 PM
To: [EMAIL PROTECTED]
Subject: please explain why this query isn't optimized

Before I post it as a bug, perhaps someone can explain why this query is not optimized to use the index (it has to examine all 287k rows).

mysql> explain SELECT MAX(changed) FROM archived_stats where changed > 0;
+----------------+-------+---------------+---------+---------+------+---------+--------------------------+
| table          | type  | possible_keys | key     | key_len | ref  | rows    | Extra                    |
+----------------+-------+---------------+---------+---------+------+---------+--------------------------+
| archived_stats | range | changed       | changed | 4       | NULL | 2878820 | Using where; Using index |
+----------------+-------+---------------+---------+---------+------+---------+--------------------------+

whereas this query is optimized:

mysql> explain SELECT MAX(changed) FROM archived_stats;
+------------------------------+
| Comment                      |
+------------------------------+
| Select tables optimized away |
+------------------------------+

The table in question:

mysql> describe archived_stats;
+--------------+---------------+------+-----+---------------------+-------+
| Field        | Type          | Null | Key | Default             | Extra |
+--------------+---------------+------+-----+---------------------+-------+
| number       | char(32)      |      | MUL |                     |       |
| bad_login    | int(11)       |      |     | 0                   |       |
| good_login   | int(11)       |      |     | 0                   |       |
| last_login   | timestamp(14) | YES  |     | NULL                |       |
| batch_flow   | int(11)       |      |     | 0                   |       |
| upload_image | int(11)       |      |     | 0                   |       |
| page_proof   | int(11)       |      |     | 0                   |       |
| process_form | int(11)       |      |     | 0                   |       |
| changed      | timestamp(14) | YES  | MUL | 00000000000000      |       |
| sync_date    | datetime      | YES  |     | 0000-00-00 00:00:00 |       |
+--------------+---------------+------+-----+---------------------+-------+
10 rows in set (0.03 sec)

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Installing MySQL 4.1 from RPM on Fedora2
Start mysql on FC2 with:

/etc/rc.d/init.d/mysql start

If that fails, then look at the MySQL error log.

Donny

-Original Message-
From: Danesh Daroui [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 02, 2004 3:04 PM
To: [EMAIL PROTECTED]
Subject: Installing MySQL 4.1 from RPM on Fedora2

Hi all, I have installed the Server, Client, Benchmark, Shared, Embedded and Compact-Shared RPMs, version 4.1, on a Fedora2 system. They have been installed successfully, but when I type:

Shell> mysql -u root

it says that it can not open the socket, and I think it is because mysqld is not alive. When I type:

Shell> mysqld

it returns a fatal error and refers me to the security notes about installation. Also, I thought that a user account named mysql should be created automatically when the RPMs are installed, but there is no user account with the name mysql!!! Please help me to fix the problem and install MySQL on my Linux system.

Regards, Danesh Daroui

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Replication bug?
David, I haven't ever attempted to delete the slave user on the master, and since I only run replication on 4.1 boxes and not 4.0 boxes, I won't be able to help much. But I would probably submit it to http://bugs.mysql.com and they can verify that it is a bug. But they will probably not recommend deleting the slave user again. :)

Donny

-Original Message-
From: Logan, David (SST - Adelaide) [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 01, 2004 8:53 PM
To: MySQL List
Subject: Replication bug?

Hi Folks, We are trying to put a monitoring solution in place at a client and have come up against something during testing. If the replication user disappears off the master and the slave cannot log in, the Slave_IO_Thread still shows running and no error in the last error number field. Does anybody know if this is intentional? I can't find any references to this in the doco. This is MySQL 4.0.20 and RH Advanced Server 2.1. After deleting the user on the master, the following message appears in the log, but the slave status shows a healthy relationship.

040902 10:54:13 Slave I/O thread: error connecting to master '[EMAIL PROTECTED]:3307': Error: 'Access denied for user: '[EMAIL PROTECTED]' (Using password: YES)' errno: 1045

mysql> show slave status \G
*************************** 1. row ***************************
          Master_Host: hpim202-98.aus.hp.com
          Master_User: repl
          Master_Port: 3307
        Connect_retry: 60
      Master_Log_File: hpim202-98-bin.001
  Read_Master_Log_Pos: 913879
       Relay_Log_File: MAU023W-relay-bin.010
        Relay_Log_Pos: 305
Relay_Master_Log_File: hpim202-98-bin.001
     Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
      Replicate_do_db:
  Replicate_ignore_db:
           Last_errno: 0
           Last_error:
         Skip_counter: 0
  Exec_master_log_pos: 913879
      Relay_log_space: 301
1 row in set (0.00 sec)

Regards
David Logan
Database Administrator
HP Managed Services
139 Frome Street, Adelaide 5000 Australia
+61 8 8408 4273 - Work
+61 417 268 665 - Mobile
+61 8 8408 4259 - Fax

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Problem running MySQL in high school lab
Todd, I don't use Windows XP as a production machine, but I do run MySQL on my personal machine running Windows XP, I run the Windows version of MySQL. Is there any reason that you are using Cygwin to run MySQL when you can run the MySQL windows binaries without any problems? The only thing I can think of is you are trying to teach them linux as well. I know in the MySQL training classes offered by MySQL they are always taught using Windows 2000 or XP, and they use the standard MySQL windows installer. Donny -Original Message- From: Todd O'Bryan [mailto:[EMAIL PROTECTED] Sent: Monday, August 30, 2004 8:09 PM To: [EMAIL PROTECTED] Subject: Problem running MySQL in high school lab I'm trying to teach my students how to use MySQL, and have installed it on all the lab machines along with Cygwin. Originally, I had the permissions set wrong and my students couldn't start the server, but I fixed that, and now mysqld works fine. Unfortunately, if you then mysql -u root, after a rather short period of time, the program crashes and pops an error message to the screen. The message, which I should have written down but didn't, says that an assertion has failed in ftell.c (not sure about the filename, but the gist is right) and stream != NULL (that I'm sure of) and then the program dies. I don't have similar problems when I'm logged in as me (which has Administrator privileges) or the machine Administrator. It must be a permissions problem, but I don't know what I need to give the students to prevent it. The MySQL stuff on the local machines need not be secure, so I've given full access to all users in the entire /cygwin/usr/local/ directory and its subdirectories, which is where I installed MySQL and all the other packages we're going to be playing with. 
There are some kids in there who don't need the temptation of being logged in as an Administrator, and since we're going to be using JDBC later for which the MySQL server will need to be running almost constantly in the background, I'd like to get this resolved with the least amount of temptation. The lab is all Windows XP Professional machines, and the students log into a domain hosted by a server in another teacher's lab. Any ideas appreciated, Todd P.S. If you could cc me any replies, I'd appreciate it, since I read the list on digest. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: replication threads on different CPUs
Luke, As far as I know you can't do that in MySQL; it would have to be at the kernel level. Replication threads really don't use much CPU anyway, at least not on the master, since all it's doing is basically reading a binary file. Now the kernel itself does do something like you are talking about, but I have no idea how it decides what is going to use which proc. Now, my master box, which is only running MySQL and which is replicating to 2 different slaves but also handles all selects and inserts for about 6 webservers, looks like this:

Cpu0 : 10.1% us, 4.0% sy, 0.0% ni, 71.5% id, 12.6% wa, 0.5% hi, 1.2% si
Cpu1 :  2.2% us, 1.1% sy, 0.0% ni, 94.0% id,  2.5% wa, 0.0% hi, 0.1% si
Cpu2 : 10.3% us, 4.1% sy, 0.0% ni, 70.7% id, 12.9% wa, 0.8% hi, 1.3% si
Cpu3 :  2.9% us, 1.4% sy, 0.0% ni, 92.4% id,  3.2% wa, 0.0% hi, 0.2% si
Cpu4 :  8.9% us, 3.5% sy, 0.0% ni, 80.1% id,  5.8% wa, 0.6% hi, 1.2% si
Cpu5 :  2.3% us, 1.1% sy, 0.0% ni, 93.7% id,  2.7% wa, 0.0% hi, 0.1% si
Cpu6 :  8.8% us, 3.4% sy, 0.0% ni, 80.5% id,  5.6% wa, 0.6% hi, 1.2% si
Cpu7 :  2.5% us, 1.2% sy, 0.0% ni, 93.2% id,  3.0% wa, 0.0% hi, 0.2% si

So the load is being shared by all of the procs, just no idea what decides where.

Donny

-Original Message-
From: Crouch, Luke H. [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 26, 2004 10:36 AM
To: [EMAIL PROTECTED]
Subject: replication threads on different CPUs

I'm a bit of a linux newbie and a newbie to replication, so I'll try to ask this question simply...
we're using mysql 4 to do replication, and I notice on the master I have this from using mytop:

Id    User  Host/IP     DB    Time  Cmd     Query or State
--    ----  -------     --    ----  ---     --------------
2670  root  localhost   rmps     0  Query   show full processlist
2668  repl  rh-mysql-4         409  Binlog  Has sent all binlog to slave; waiting for binlog to be updated
2666  repl  rh-mysql-2         411  Binlog  Has sent all binlog to slave; waiting for binlog to be updated
2667  repl  rh-mysql-3         411  Binlog  Has sent all binlog to slave; waiting for binlog to be updated

it looks like three separate threads are running on the master here, one for each slave. in this particular machine, we have 2 HT processors, so 4 possible CPU threads. if it's not done automatically by Linux (RedHat 9) or MySQL, can I configure these threads to use their own CPU to maximize the performance? I assume MySQL or Linux will do this automatically. we have the same machines in use for the slaves... so can I make the slave IO thread execute on one processor, and the slave SQL thread execute on another? does this happen automatically as well?

thanks,
-L

Luke Crouch
918-461-5326
[EMAIL PROTECTED]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Slow Queries on Fast Server?
John, From my experience it depends a lot more on how big your data is, not just the amount of rows. Which version of MySQL are you running? Are you using a MySQL-built binary (downloaded from mysql.com)? I'm using 4.1.3 and I have a table with a char(68) column and 29 million rows that is fulltext indexed, and all of my queries similar to yours take 0.1 to 0.2 seconds max. Also, if you provided your full table structure, including the indexes, that would help.

Donny

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 26, 2004 8:08 AM
To: [EMAIL PROTECTED]
Subject: Re: Slow Queries on Fast Server?

I'm gathering by the lack of response that perhaps MySQL is incapable of executing a count of the number of fulltext matches on 3 million rows. I really thought that MySQL 4 was supposed to be able to handle such a load. I still think my configuration may be to blame?

- John

-- Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:

mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text | 0       |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)

[EMAIL PROTECTED] wrote:

Have you checked the Optimization section of the manual yet? http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance. The query is correct. The fulltext index is correct, as I built the fulltext index on the single column (took 9 minutes) and even did repair and optimize on the table... so I don't think it's the index. I'm thinking it's the server config...
- John [EMAIL PROTECTED] wrote: I'm running into a problem with some queries running on a dedicated mysql server (2.0 GHz, 2GB RAM). Fulltext searching really exemplifies this as most MATCH, AGAINST queries are taking 5-20 seconds. Performance was excellent for some reason one day (0.2 - 0.75 seconds) but it was only fast for a day or so. Here's the rundown: TABLE: fulltext_table (some_id, the_text) Rows: 3,237,981 Type: MyISAM Size: 920.8 MB QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue'); or QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue') LIMIT 0, 20; Both are problematic. I even tried placing a limit of 2 on the first query but it didn't improve anything. The table has a fulltext index on the column and is optimized. No other users are connected to the server. Is there a RED FLAG in here somewhere? MySQL configuration settings (using my-huge.cnf template): key_buffer = 500M max_allowed_packet = 1M table_cache = 512 sort_buffer_size = 10M read_buffer_size = 2M myisam_sort_buffer_size = 64M #thread_cache = 8 thread_concurrency = 8 #- Modifications --- # ft_min_word_len = 3 set-variable = table_cache=1024 set-variable = max_heap_table_size=64M set-variable = tmp_table_size=128M set-variable = query_cache_limit=2M query_cache_type=1 Performance Test: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white'); +--+ | COUNT(*) | +--+ |95074 | +--+ 1 row in set (27.83 sec) Statistics for vmstat 1 (my apologies if this doesn't look pretty): --- procs memory swap io system cpu r b swpd free buff cache si sobibo incs us sy id wa 0 0 19500 17800 42432 177172800 060 11330 0 0 99 1 0 1 19500 21524 42428 1765728 7240 960 0 536 444 5 1 82 12 0 1 19500 19512 42424 176782000 2348 0 912 592 0 1 50 49 0 1 19500 17788 42424 176954000 1980 0 868 588 0 1 51 48 0 1 19500 17568 42424 176976000 2300 0 723 401 0 0 50 49 0 1 19500 17704 42428 176962000 193620 662 364 0 0 51 49 0 1 
19500 17560 42428 176976400 2224 0 696 400 0 0 51 49 0 1 19500 17504 42424 176982400 2136 0 670 380 0 0 51 49 0 1 19500 17616 42424 176971200 2228 0 693 415 0 0 51 49 0 1 19508 17608 42420 176972408 2348 8 692 389 0 0 50 50 0 1 19508 17532 42428 17697920
RE: table conversion problems
Sergei, I don't know much about InnoDB, but MyISAM doesn't have a 4 gig limit unless you are using a dynamic-format table. A table is fixed format when every column is fixed width (int, char, etc.), with no text, varchar, or blob columns. As long as you don't use the last ones, you don't have a 4 gig limit. As far as your questions about InnoDB, can't help you there, except that in my case, when I have switched tables to InnoDB, I copy them in chunks to speed up the process. That's what it says on the InnoDB website, so that's what we did.

Donny

-Original Message-
From: Sergei Skarupo [mailto:[EMAIL PROTECTED]
Sent: Thursday, August 26, 2004 8:36 PM
To: Mysql List (E-mail)
Subject: table conversion problems

Hi all, I started a conversion from MyISAM to InnoDB; it's been almost two days and the statement is still executing... The (MyISAM) data table size is almost 4G. There were two reasons for this conversion: to start supporting transactions and to avoid the 4G limit of MyISAM tables; this table was created without explicitly specifying MAX_ROWS and AVG_ROW_LENGTH. The avg row length is 28 bytes; there's only a primary key comprised of 3 integers. The state of the thread that's performing the conversion is "Copy to tmp table". We need to start updating the table as soon as possible... Is there a way to monitor the progress? Is the tmp table allocated in the InnoDB tablespace? What are the consequences of killing the thread? Will it waste whatever InnoDB tablespace has already been used for this conversion? I'm using MySQLCC. How long may it take to cancel this statement by pushing the "Cancel execution and clear results" button? Sometimes it takes a while... What does this button actually do? Thanks in advance for your help!

-- Sergei

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: 1 day 28 min insert
Matt, I know you have gotten a lot of recommendations, but I have 3 for you that I don't think anybody has mentioned. 1. Try a merge table. We had 1 table with about 750 million rows in it, and every once in a while we would need to do something crazy to it and it would be locked up for hours. We decided to break it up into 10 different tables, based on the last digit of one of the fields. Inserts went directly into one of the 10 tables, but whenever we did a select and didn't know which table to search in, we used the merge table, and it was just fine. This definitely helped us. 2. Get MySQL to come to your office and have them do some consulting. Or have them do it online; personally I recommend onsite consulting. It's not very expensive, especially if it solves your problem. We have MySQL coming out to our office in a little over a week from now, and the consultant is staying for 3 days. Do I really need the consultant to come in? Not really, but if they can tweak, improve, or help us with just a few things, it's worth every penny. For example, two weeks ago I was about to buy another quad Xeon 3.4ghz machine, and before I ordered it I contacted our consultant just to ask their opinion, and he told me to go with a 64bit machine. So now I have a quad AMD 64 2.2 with 32 gigs of memory coming in next Wednesday for me to play with. You can find the information on the mysql site somewhere. 3. The last option is cheaper than #2, but it can sometimes work just as well. Find the next time MySQL will be holding a training class in your area, and go to the class. Then while you are in the class, have the instructor look at your tables, queries, etc... and see what they think. I've been to 3 of them now; some of the instructors can baffle you with how much they know, and some of them are more book taught. As one instructor asked me, if you can write a 12 table join without looking at your table structure, why are you here? 
My response: I have this problem, want to take a look? My problem was resolved by the end of the session. So it's up to you, but personally I recommend #2. You can have them come in with a list of 100 questions, and go right down the list and have them answer every one of them. I may even have them set up my new little toy when he comes in, you never know. Donny -Original Message- From: Remigiusz Sokołowski [mailto:[EMAIL PROTECTED] Sent: Friday, August 20, 2004 1:12 AM To: matt ryan; [EMAIL PROTECTED] Subject: Re: 1 day 28 min insert matt ryan wrote: The table is 9 gig, and the index is 8 gig; unfortunately the primary unique key is almost every column. If I were to make it one using concat, it would be huge. I tried making those fields a hash, but it did not work, I had duplicate hashes for non-duplicate records!! Matt If I understood well, you have in your index almost all the data you have in your table? Why not add a field for a unique key (auto increment if you want less work)? It reduces the size of your primary index and thus speeds up working with it! Best regards -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
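Option #1 above (splitting one big MyISAM table into N tables plus a MERGE table) looks roughly like this. The names and the split into 10 tables are illustrative, not the poster's actual schema; `TYPE=` is the 4.0-era spelling of `ENGINE=`.

```sql
-- Ten identical MyISAM tables, log_0 .. log_9, chosen by the last digit of a field:
CREATE TABLE log_0 (k INT NOT NULL, v INT NOT NULL, KEY (k)) TYPE=MyISAM;
-- ...log_1 through log_9 created the same way...

-- A MERGE table that unions them, for selects that can't target one table:
CREATE TABLE log_all (k INT NOT NULL, v INT NOT NULL, KEY (k))
  TYPE=MERGE UNION=(log_0,log_1,log_2,log_3,log_4,log_5,log_6,log_7,log_8,log_9);

-- Inserts go straight to the right underlying table; broad selects hit the merge:
INSERT INTO log_3 VALUES (13, 42);
SELECT * FROM log_all WHERE k = 13;
```

The win is that heavy maintenance (index builds, repairs) locks only one tenth of the data at a time.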
RE: anyone heard an ETA for MySQL 4.1? is it steady?
I've been using it since 4.1.0 was released. Works great for me. I wouldn't use 4.0 or 3.x because they're missing so many features that 4.1 has. And we have servers that use 4.0.x and 3.x, and every time I have to deal with them I cringe. We have one mysql server pushing over 3500 queries/second right now using 4.1.3, and about 15 others running 4.1.x. So in my opinion it's stable. Does it have bugs? Yes, but so do 3.x and 4.0.x. So do PHP, Apache, Perl, and Windows. The bugs are usually off-the-wall bugs, though, and even if you find a bug in 4.1.x, there are usually other ways to solve the problem anyway. Just my 2 cents. Donny -Original Message- From: Miles Keaton [mailto:[EMAIL PROTECTED] Sent: Friday, August 20, 2004 12:17 AM To: [EMAIL PROTECTED] Subject: anyone heard an ETA for MySQL 4.1? is it steady? Programming a new site that will be launched in a month. We'd like to use MySQL 4.1 but aren't sure how production-ready it is. Anyone heard how the development is coming along, or when it will be officially released? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: mysql 4.0.20 uses just one CPU on Gentoo (now: glibc patches)
As mysql will tell you, have you tried using the mysql-supplied binary or the mysql-supplied rpm? I bet that will solve your problems. One thing I learned from mysql a long time ago: don't compile it yourself unless you absolutely have to. Donny -Original Message- From: Mike Blazer [mailto:[EMAIL PROTECTED] Sent: Friday, August 20, 2004 7:06 PM To: Pete Harlan Cc: [EMAIL PROTECTED] Subject: Re: mysql 4.0.20 uses just one CPU on Gentoo (now: glibc patches) Thanks Pete! I just recompiled mysql with the emerge installer with all gentoo.org patches - same result. Just one CPU works. I'll report the BUG to both MySQL AB and Gentoo. Thanks Mike Pete Harlan wrote: It would be nice if the manual were updated to say something about it, but here's the answer I got when I asked a similar question a few weeks ago. HTH, --Pete Egor Egorov [EMAIL PROTECTED] writes: ...skipped ... -- == Mike Blazer [EMAIL PROTECTED] == -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Problem with Mysql 4.1.3: Error #1267 - Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE)
This is not a mysql problem, it is a phpmyadmin problem. You must be using 2.6.0 rc1; download 2.6.0 beta 2 and you won't have the problem any more. I have complained to them about it, but I'm not exactly sure what they are going to do. Donny -Original Message- From: Martin Rytz [mailto:[EMAIL PROTECTED] Sent: Wednesday, August 18, 2004 9:35 AM To: [EMAIL PROTECTED] Subject: Problem with Mysql 4.1.3: Error #1267 - Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) Hi All I am very confused about MySQL 4.1.3. I have problems with the character set and the collation. Under 4.0.20 I never had such problems. I downloaded the beta files for version 4.1.3b and unzipped them to c:\mysql. Then I loaded my data within a script with load data infile into my tables... The point is that all tables now have the collation 'latin1_swedish_ci'?!!? Is this the default collation? Why does this collation appear? Interesting is that the meta-data (all tables in the mysql db) has the 'latin1_swedish_ci' collation too. The problem shows up in phpmyadmin: if I run a select like SUBSTRING_INDEX( domain, '.', -2 ) from url_cat, the following error message appears: Error #1267 - Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE). I found out that if I use the following statement select SUBSTRING_INDEX( domain, _latin1'.', -2 ) from url_cat it works. But the problem is, I won't change all my old scripts from 4.0!!! I also found out that if I use mysql in DOS, everything works without error!!! Do I have to change something in phpmyadmin (I have version 2.6.0rc1)? I am very confused about it and I don't know which character set and collation I have to choose. I live in Switzerland... Which character set and collation should I choose? Is there a 'standard' character set/collation I should use? Do I have to set parameters in the my.cnf file? My system runs under Windows XP with SP2. 
Thank you in advance for your help. Yours Martin Rytz DBA from Switzerland -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
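For anyone hitting the same error outside phpMyAdmin: the usual cause is a client connection talking utf8 to latin1 tables, and one common workaround (a general 4.1 technique, not specific to this poster's setup) is to declare the connection character set explicitly so string literals stop being utf8:

```sql
-- Make the connection use latin1, so literals get latin1_swedish_ci
-- and no longer clash with the latin1 table columns:
SET NAMES latin1;

-- After which the original query works without the _latin1 introducer:
SELECT SUBSTRING_INDEX(domain, '.', -2) FROM url_cat;
```

Updating phpMyAdmin, as suggested above, fixes where that wrong connection charset was coming from in the first place.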
RE: Kernel panic when mysql stop command issued
I know that 2.6.7 works just fine with MySQL 4.0.20. I don't use Max though. Donny -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Monday, August 16, 2004 12:11 AM To: Demetrios Stavrinos Cc: [EMAIL PROTECTED] Subject: Re: Kernel panic when mysql stop command issued On Sat, Aug 14, 2004 at 03:01:06PM -0700, Demetrios Stavrinos wrote: Kernel panic: Fatal exception in interrupt...In Interrupt handler - not syncing message appears when the mysql-max stop is issued. Other than that everything works. I changed hardware (everything new) and re-installed Linux and MySQL and upgraded to latest 2.6.3 from mdk (It was happening with the previous 2.6.3 also). Problem is repeatable 4 out of 5 tries. Linux 2.6.3-15mdkenterprise #1 SMP Fri Jul 2 20:07:05 mysql MySQL-Max-4.0.20-3mdk. Has any one heard or seen anything like it? Try a different kernel. If MySQL is able to screw with the kernel, it's a kernel bug--or a weird hardware problem manifesting itself as one. Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Replication blocked
There is only one thread for replication on the slave. It does one step at a time. If you run mysqlbinlog on one of the binary log files on your master, you will see exactly how it all works. Multi-threaded replication would probably cause thousands of problems. Unless it was threaded per table, but even that would still cause problems because of multi-table deletes and updates. Donny -Original Message- From: Batara Kesuma [mailto:[EMAIL PROTECTED] Sent: Friday, August 13, 2004 1:00 AM To: [EMAIL PROTECTED] Subject: Replication blocked Hi, I have 2 DB servers, running as master and slave. I just added an index to one of my tables on the master; it took about 12 minutes. While the index was being added, I had insert/update queries to other tables. On the master this was no problem at all. The problem is that on the slave these queries were blocked by the previous 12 minute query. Does this mean that there is only 1 thread to run the SQL from the master? Can this be set to multithread? Thank you very much. Regards, bk -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Optimizer Index Weirdness
Have you tried using BETWEEN instead of >= and <=? We have found that BETWEEN in some cases works better than >=/<=. Not saying it will make it use the correct index. Donny -Original Message- From: David Griffiths [mailto:[EMAIL PROTECTED] Sent: Friday, July 30, 2004 9:35 PM To: MySQL List Subject: Optimizer Index Weirdness We have a table with 40 million rows. It has statistics on traffic from our website. Logs are processed once a night, and the data from those logs are added. Our table (traffic_boats, InnoDB) has three columns of interest: day INT yearmonth INT stem_base VARCHAR(100) There is an index on day, an index on yearmonth, an index on stem_base, an index on (day, yearmonth), and an index on (day, yearmonth and stem_base). I added the last two today to try to fix the performance issues we are having. A typical query would look like: SELECT * FROM traffic_boats WHERE stem_base = '' AND yearmonth = 200407 AND day >= 07 AND day <= 27; An explain plan shows that the optimizer is picking the index on stem_base. It *should* be picking the composite index on (day, yearmonth and stem_base). The greater-than-less-than is throwing it off. I can add a USE INDEX to force it to use the index I want it to, but that's a little hokey (and it gives me flashbacks to the days that I managed one of those commercial RDBMSes where tuning was a nightmare). I've tried analyze table and optimize table (it's InnoDB) without luck. What's really weird is that the optimizer comes up with a bad count of rows to be examined. 
If I let the optimizer pick the index:

mysql> EXPLAIN SELECT * FROM traffic_boats WHERE stem_base = '' AND yearmonth = 200407 AND day >= 07 AND day <= 27;
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
| table         | type | possible_keys                                       | key      | key_len | ref   | rows  | Extra       |
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
| traffic_boats | ref  | idx_yearmonth,idx_day,idx_stem,ymd_stem_idx,ymd_idx | idx_stem | 100     | const | 42600 | Using where |
+---------------+------+-----------------------------------------------------+----------+---------+-------+-------+-------------+
1 row in set (0.02 sec)

It thinks it needs to examine 42600 rows. If I force the correct index:

mysql> EXPLAIN SELECT * FROM traffic_boats USE INDEX (ymd_stem_idx) WHERE stem_base = '' AND yearmonth = 200407 AND day >= 07 AND day <= 27;
+---------------+-------+---------------+--------------+---------+------+---------+-------------+
| table         | type  | possible_keys | key          | key_len | ref  | rows    | Extra       |
+---------------+-------+---------------+--------------+---------+------+---------+-------------+
| traffic_boats | range | ymd_stem_idx  | ymd_stem_idx | 108     | NULL | 4019400 | Using where |
+---------------+-------+---------------+--------------+---------+------+---------+-------------+
1 row in set (0.00 sec)

It thinks it needs to examine 4,019,400 rows. If I ran this query without the USE INDEX it would take a few minutes. If I force the index, it takes 20 seconds. You would think that using stem_base, day and yearmonth would be much more selective than using just stem_base. Anyone got some insight into this? David -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
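The BETWEEN rewrite suggested at the top of this thread, applied to David's query (the stem_base literal was lost in the archive and is shown as a placeholder):

```sql
SELECT * FROM traffic_boats
WHERE stem_base = '...'       -- placeholder; the original literal is unknown
  AND yearmonth = 200407
  AND day BETWEEN 07 AND 27;  -- equivalent to day >= 07 AND day <= 27
```

BETWEEN is exactly equivalent to the pair of range comparisons, so at worst it changes nothing; the hope is only that the optimizer costs the range differently.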
RE: Mysql growing pains, 4 days to create index on one table!
Matt, I've been reading this thread for a while, and at this point I would say that you need to provide the table structures and queries that you are running. For example, we have one table that has 8 billion rows in it and is close to 100 gigs, and we can hammer it all day long without any problems. It really depends on how you are doing things. But as far as your point about mysql not using multiple indexes: it does, you just have to create an index on multiple fields at one time. I've got tables with 10 fields in one index; the trick with mysql is that your where clause must use the leading fields of that index for mysql to really take advantage of it. But I would definitely send the list your table structure with your indexes and some of your selects and inserts. You can always change the names of things if you don't want people to know the names of everything. Just my 2 cents. Donny -Original Message- From: matt ryan [mailto:[EMAIL PROTECTED] Sent: Thursday, July 15, 2004 9:32 AM To: [EMAIL PROTECTED] Subject: Re: Mysql growing pains, 4 days to create index on one table! Tim Brody wrote: You may find that the 'dic' KEY isn't necessary, as it's the first part of your PRIMARY KEY. I've found better performance for multi-column keys by putting the columns in order of least variance first, e.g. for a list of dates: 1979-04-23 1979-07-15 1980-02-04 1980-06-04 you want a key on (YEAR-MONTH-DAY). If you can, you could put the index/data on different disks - not sure how you would do that in Windows (INDEX DIRECTORY option for CREATE TABLE?). You should definitely put the binary log file on another disk, but again not something I've used. I've found MySQL to be a royal pain working with multi-GB tables (my biggest is 12GB+13GB index). I've learnt that MySQL is a bit like a confused puppy - it doesn't know why it wet the floor, but it expects you to know that pained expression means you need to move it somewhere else ... 
I need the DIC in the key to keep the record unique; I have thousands with everything identical except the DIC. I was confused on the multi-key index issue: I thought it would seek faster if I put the most unique field up front, which I do on most tables; I did not on this one though. I have one large raid array now, so I can't split the data or put the binary log on another disk. I found mysql was great up to about 3 gig, then everything hit the brakes and got really really really slow. I'm scared of joins; every time I do a join in mysql on indexed fields, the performance is horrible, because the where clause is not a field that's in the join, performance is poopy. I wish mysql could use multiple indexes like oracle to narrow down the results; I've got some simple queries that take hours due to single index use, but every query field is indexed. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
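The multi-column-index point above can be illustrated with a small hypothetical table: MySQL can use a composite index only when the query filters on a leftmost prefix of its columns.

```sql
-- Hypothetical table; least-varying column first, as Tim suggests:
CREATE TABLE events (
  y SMALLINT NOT NULL,
  m TINYINT  NOT NULL,
  d TINYINT  NOT NULL,
  payload INT NOT NULL,
  KEY ymd (y, m, d)
);

-- Can use the ymd index: filters on the leftmost prefix (y, m):
SELECT * FROM events WHERE y = 1980 AND m = 6;

-- Cannot use it effectively: skips the leading column y entirely:
SELECT * FROM events WHERE m = 6 AND d = 4;
```

This is why "every field is indexed" doesn't help a query whose WHERE clause doesn't start where any single composite index starts.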
RE: error 27
Yes, if you are using a dynamic table, which means it has varchars, text, or blobs, the limit is 2 gigs. If you are using a fixed table, which uses chars only, then there is no limit that I have seen. Donny -Original Message- From: J S [mailto:[EMAIL PROTECTED] Sent: Friday, June 25, 2004 9:38 AM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: error 27 Could this problem be due to the size of my tables? Is there a limit on how big the table can be? I'm using mysql-standard-4.0.20. -rw-rw 1 mysql mysql 2147483647 Jun 25 01:49 internet_usage.MYD -rw-rw 1 mysql mysql 622724096 Jun 25 01:49 internet_usage.MYI Hi, I got an error 27. DBD::mysql::st execute failed: Got error 27 from table handler at /home/u752359/logfile.pl line 144, PS_F line 3079464. The FAQs say: Check whether you have hit the 2 Gb limit. If that is not the case, shut down the MySQL server and repair the table with (my)isamchk. How do I check if I have a 2GB limit? I logged on to mysql and ran a select from the table successfully. Do I still need to run myisamchk? Thanks, js. _ Want to block unwanted pop-ups? Download the free MSN Toolbar now! http://toolbar.msn.co.uk/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] _ Stay in touch with absent friends - get MSN Messenger http://www.msn.co.uk/messenger
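A quick way to check which case you are in: SHOW TABLE STATUS reports the row format and the table's current size against its maximum. The columns shown are real; the values here are only what one might expect for this table.

```sql
SHOW TABLE STATUS LIKE 'internet_usage';
-- Look at Row_format (Fixed vs Dynamic) and compare
-- Data_length against Max_data_length. A Data_length of
-- 2147483647 sitting at Max_data_length means the file is full.
```

The .MYD file above is exactly 2147483647 bytes (2^31 - 1), which is consistent with a full dynamic-format data file.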
RE: error 27
Yep, modify the uid from a varchar to a char. It will make your table bigger, because a char uses all 10 characters, but it will allow you to get past the 2 gig limit. It will take a while for the table to be modified, but it's definitely worth the wait. Also, personally I would change the ip from a bigint to an int, if that is really an IP address like it seems. Just my opinion. No matter what, make a backup of your data before making any changes. Donny -Original Message- From: J S [mailto:[EMAIL PROTECTED] Sent: Friday, June 25, 2004 1:17 PM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: error 27 I don't really understand the difference (I need to read up a bit more). My (default mysql) table internet_usage has the following columns: uid varchar (10) ip bigint time datetime urlid int size int Is there something I can do to this to fix it so that it can grow larger than 2 GB? The 'uid' is a mix of chars and ints, e.g u752352. Yes, if you are using a dynamic table, which means it has varchars, text, or blobs, the limit is 2 gigs. If you are using a fixed table, which uses chars only, then there is no limit that I have seen. Donny -Original Message- From: J S [mailto:[EMAIL PROTECTED] Sent: Friday, June 25, 2004 9:38 AM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: error 27 Could this problem be due to the size of my tables? Is there a limit on how big the table can be? I'm using mysql-standard-4.0.20. -rw-rw 1 mysql mysql 2147483647 Jun 25 01:49 internet_usage.MYD -rw-rw 1 mysql mysql 622724096 Jun 25 01:49 internet_usage.MYI Hi, I got an error 27. DBD::mysql::st execute failed: Got error 27 from table handler at /home/u752359/logfile.pl line 144, PS_F line 3079464. The FAQs say: Check whether you have hit the 2 Gb limit. If that is not the case, shut down the MySQL server and repair the table with (my)isamchk. How do I check if I have a 2GB limit? I logged on to mysql and ran a select from the table successfully. Do I still need to run myisamchk? 
Thanks, js. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
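The two changes suggested above, sketched against the posted schema. NOT NULL is shown only for illustration; back the table up first, as noted, since the ALTER rewrites every row.

```sql
ALTER TABLE internet_usage
  MODIFY uid CHAR(10) NOT NULL,     -- fixed-width: every row now spends the full 10 bytes,
                                    -- but the table becomes fixed-format
  MODIFY ip INT UNSIGNED NOT NULL;  -- an IPv4 address fits in an unsigned 32-bit int

-- With ip stored as a number, INET_ATON()/INET_NTOA() convert
-- between dotted quads and integers:
SELECT INET_NTOA(ip) FROM internet_usage LIMIT 1;
```

Note the fixed-format benefit only holds once no variable-width column (varchar/text/blob) remains in the table, which is the case for this schema after the change.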
RE: Multi-row INSERTs
Russ, We use #2 currently, and we are actually about to switch back to inserting them one at a time. The reason is very simple. In our case we have an insert statement that will insert a maximum of 600 entries at a time. But we could have up to 25 different programs running, each possibly doing its own insert of 600 records. The problem is that, say, the first one takes 3 seconds to insert all 600, and 1 second after the first one starts, the second program tries to insert; it will need to wait for the first one to finish. So what ends up happening is that if all 25 programs try to insert at the same time, all 25 inserts can take about 5 minutes, because they are all waiting on each other. Now if you didn't need the auto_increment id, then you could just use an insert delayed, which would be a million times faster for the program itself, but not necessarily for mysql. If you do one insert at a time and each of the 25 programs starts inserting one at a time, in theory they would all finish at the same time. Would it be faster than the massive inserts? Again, it should be slower, but we have found that it's faster in the long run. But it could also be that the table we are inserting these records into has 252 million rows in it right now. So I would definitely benchmark it yourself before taking my word for it. We have run into the same problem with INSERT ... ON DUPLICATE KEY UPDATE...: with a small table, when it's mainly doing inserts, it's super fast. But with a table with 44 million rows and only 3 columns, it takes about 1-2 seconds to do the update part of the insert. Again, we found this by noticing that when the table was small, or when it's mainly doing inserts, the command is super fast, but as time goes on it gets slower. 
Donny -Original Message- From: Russ Brown [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 09, 2004 7:45 AM To: [EMAIL PROTECTED] Subject: Re: Multi-row INSERTs On Wed, 9 Jun 2004 12:58:01 +0200, Jigal van Hemert [EMAIL PROTECTED] wrote: Anyway, my question is this. If I do a single-statement multi-line insert, are the auto-increment IDs of the rows inserted guaranteed to be sequential? Bear in mind also that I'm using InnoDB tables here. Conversely, if I know for a fact that it is not guaranteed, I know that I need to think of something else. :-) Will locking the table work for you? If you lock the table for writing no other process can slip a query in between your queries for certain... Regards, Jigal. Unfortunately locking the table isn't an option as the table is being accessed extremely regularly by other clients performing similar inserts. The key to this is speed and overhead: at present I'm inserting the rows individually and recording each row's ID as I go. However I want to be able to reduce the number of queries involved, so I've though of two possibilities: 1) If the inserted rows have sequential IDs in the same order that they appeared in the INSERT statement, I can do them all in one go, use LAST_INSERT_ID to get the ID of the first and derive the rest by incrementing in the application logic. 2) Insert them all in one statement and then select them back to get each row's ID. Now, I know that 2) will work, and it will allow me to reduce the number of queries per process from N (where N is on average about 9) to 2. However, if 1) will work it will allow me to reduce the number of queries to 1, (plus a call to LAST_INSERT_ID), and wouldn't involve having to match up the rows from the second query in 2). If nobody knows the answer I'll just go with 2, but I thought it was an interesting bit of trivia in addition to being useful to my specific circumstance. Thanks. 
-- Russ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
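For reference, Russ's option 1 relies on multi-row INSERT plus LAST_INSERT_ID(); a sketch with a hypothetical table:

```sql
CREATE TABLE items (
  id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(32) NOT NULL
);

-- One statement, many rows:
INSERT INTO items (name) VALUES ('a'), ('b'), ('c');

-- LAST_INSERT_ID() returns the id of the FIRST row of the batch;
-- whether the rest are exactly first+1, first+2, ... is the open
-- question in this thread. (For MyISAM, and for 4.x-era InnoDB,
-- the whole statement holds the auto-increment generator, so the
-- batch comes out consecutive.)
SELECT LAST_INSERT_ID();
```

Under that assumption, option 1 derives the other ids arithmetically in application logic instead of selecting them back.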
RE: Problems maintaining large high-availability search
1. The timeout is set to 5 min, because of the number of queries, there are a lot of unused http processes that linger with connections, and the only way to seemingly keep MySQL connections available is to keep timeouts short. What about using a connection pool, like Apache::DBI? It should solve most of your problems (in theory...) Why not just use a master-slave system? Then you wouldn't have to worry about it: all inserts go to the master, and all selects to the slave. However, I have updates that take over an hour at a time, and the 5 min timeout will kill the update process. If I change the timeout to 2 hours, Apache will eat up all the connections. I'm a little confused at the behaviour. Probably annoying, but not confusing behaviour - if I understand your problem correctly. An hour is not bad; I just had one that took right at 97 hours before it finally finished. With a master-slave system you wouldn't have to worry about this problem anymore. Any suggestions appreciated. It seems to me the only answer is to maintain a completely separate MySQL server with a 2 hour timeout on the same box, hotcopy the database, do the updates, and hotcopy it back, which I would *really* prefer not to do. There has got to be an easier way - any suggestions? This doesn't sound like a very nice solution. Same as above, but an even easier solution is to not delete the records you don't need anymore. Just add a status flag, and update the flag to deleted or whatever, and change your selects to only look for ones that aren't deleted. Then you never have to optimize again. (In theory.) 2. The other problem I have is that because I am rotating so many records daily and the queries are so complex and the tables/indexes so large, I want to keep the db OPTIMIZEd, but running an OPTIMIZE TABLE on this database also takes over an hour and creates timeouts on queries. At the same time, I want to get the optimize done as soon as possible. 
IMHO you'd be better off by optimizing your databases so that an OPTIMIZE command would not be needed at all. See above. I think you should change the application instead of hacking MySQL. (Been there, done that.) Probably you should place a layer between Apache and MySQL. Or do the slow queries in a separate process (definitely not in Apache). Or split the slow queries into smaller ones. IMHO the first thing I would do is add a status flag and see how that works. Then I would go to a master-slave replication system if the flag option doesn't work as well as it should. It works for us without any problems. Donny -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
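The status-flag idea above, sketched on a hypothetical table: rows are never physically deleted, so the data file never fragments and OPTIMIZE TABLE stops being necessary.

```sql
-- Add a flag instead of physically deleting rows:
ALTER TABLE listings ADD COLUMN deleted TINYINT NOT NULL DEFAULT 0;

-- "Delete" becomes a cheap in-place update:
UPDATE listings SET deleted = 1 WHERE id = 12345;

-- Selects just filter on the flag:
SELECT * FROM listings WHERE deleted = 0 AND region = 'west';
```

If a large fraction of rows ends up flagged, putting the flag at the front of a composite index (or purging flagged rows in a scheduled off-peak batch) keeps the selects cheap.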
can you drop multiple indexes at one time?
I definitely don't see this in the documentation anywhere, but can you drop multiple indexes at one time with an alter table? Donny
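For what it's worth, ALTER TABLE accepts several comma-separated clauses, so multiple indexes can go in one statement (table and index names hypothetical):

```sql
-- One ALTER, one table rebuild, instead of one rebuild per index:
ALTER TABLE t
  DROP INDEX idx_a,
  DROP INDEX idx_b,
  DROP INDEX idx_c;
```

On a big MyISAM table this matters, since each separate ALTER copies the whole table.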
RE: 4.1 Beta
You could always buy Oracle and charge an extra million or two. :)) Donny -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Tuesday, June 01, 2004 4:20 AM To: [EMAIL PROTECTED] Cc: 'Jonathan Soong'; [EMAIL PROTECTED] Subject: RE: 4.1 Beta Donny Simonton [EMAIL PROTECTED] wrote on 01/06/2004 03:12:43: We have been using 4.1 in a production environment since about a month after 4.1.0 was released. And we have run into a bug or two now and then, but that usually happens the first day we try something out. Overall, I won't install any other version. I would entirely agree with this, if it were my personal call. But the main reason I want 4.1 is the excellent character set support, in particular for Japanese customers. And regardless of the actual quality of the underlying software, I will have great difficulty convincing Japanese customers that something labelled alpha is suitable for use in a critical environment, which is where it will be. Beta or Gamma would be difficult enough, but alpha is effectively impossible. Luckily I have three months before the possible first shipment to Japan. But it is still a problem shipping software which, while I know is entirely fit for purpose, is labelled by its own designers as unstable. OTOH, I obviously *need* the character sets for these customers. Alec -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: 4.1 Beta
Well, in that case, download the source. Modify the source to no longer include the word alpha. So instead of calling it, 4.1.2-alpha-standard-log, call it 4.1.2-super-duper-standard-log. 99% of the code is the same core code that is in 4.0.x and 3.23.x. It just has a few more features. Donny -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Tuesday, June 01, 2004 7:39 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: RE: 4.1 Beta But then I would have to do without the excellent MySQL support. Donny Simonton [EMAIL PROTECTED] wrote on 01/06/2004 13:26:00: You could always buy Oracle and charge an extra million or two. :)) Donny -Original Message- Donny Simonton [EMAIL PROTECTED] wrote on 01/06/2004 03:12:43: We have been using 4.1 in a production environment since about a month after 4.1.0 was released. And we have run into a bug or two now and then, but that usually happens the first day we try something out. Overall, I won't install any other version. I would entirely agree with this, if it were my personal call. But the main reason I want 4.1 is the excellent character set support, in particular for Japanese customers. And regardless of the actual quality of the underlying software, I will have great difficulty convincing Japanese customers that something labelled alpha is suitable for use in a critical environment, which is where it will be. Beta or Gamma would be difficult enough, but alpha is effectively impossible. Luckily I have three months before the possible first shipment to Japan. But it is still a problem shipping software which, while I know is entirely fit for purpose, is labelled by its own designers as unstable. OTOH, I obviously *need* the character sets for these customers. 
Alec -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Upgrade 4.1.1 to 4.1.2
rpm -U xxx -Original Message- From: Larry Lowry [mailto:[EMAIL PROTECTED] Sent: Tuesday, June 01, 2004 9:55 AM To: [EMAIL PROTECTED] Subject: Upgrade 4.1.1 to 4.1.2 I'm running 4.1.1 on RH 9. I want to upgrade this to 4.1.2. I'm using the rpm from the MySQL site. If I just try to install it with rpm -i then I get a bunch of errors about conflicting files from the 4.1.1-1 install. If I try to uninstall first with rpm -e the version 4.1.1 then it tells me that it is not installed. I seem to always have this problem with rpm packages. What is the best way to get this installed? TIA Larry Lowry -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
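Spelled out, the rpm -U answer looks roughly like this; the package file names are illustrative, not the exact ones from the MySQL download page:

```console
# -U upgrades in place: it installs 4.1.2 and removes the 4.1.1
# packages in one step, so there are no file conflicts.
rpm -U MySQL-server-4.1.2-0.i386.rpm MySQL-client-4.1.2-0.i386.rpm

# If "rpm -e" claims 4.1.1 is not installed, the installed package
# name probably differs from the file name. List what is actually
# installed first:
rpm -qa | grep -i mysql
```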
RE: rework this
It depends on which version of MySQL you are using. Since you are using a subquery, I assume you are using a fairly new version. Delete audit_log_records from audit_log_records, audit_log where audit_log_records.tracker_id = audit_log.tracker_id and audit_log.operation='A' That should be it. It's called a multi-table delete, but you are only deleting from one table. You can't use left join or inner join with a multi-table delete or update. You have to do a comma join. Donny -Original Message- From: Bob Lockie [mailto:[EMAIL PROTECTED] Sent: Monday, May 31, 2004 2:37 PM To: MySQL Mailing List Subject: rework this I need help with this SQL. I want to delete all records from the 'audit_log_records' table that have 'tracker_id' columns that are the same as those from the audit_log table that have 'A' for the operation type. delete from audit_log_records where tracker_id=(select tracker_id from audit_log where audit_log.operation='A'); -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
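Both forms, spelled out against the thread's own table names (the subquery form needs IN rather than =, since the subquery returns many tracker_ids, and requires MySQL 4.1):

```sql
-- Multi-table delete with a comma join (MySQL 4.0+);
-- only rows from audit_log_records are deleted:
DELETE audit_log_records
FROM audit_log_records, audit_log
WHERE audit_log_records.tracker_id = audit_log.tracker_id
  AND audit_log.operation = 'A';

-- Subquery form (MySQL 4.1+): IN, not =, because the
-- subquery can return more than one row:
DELETE FROM audit_log_records
WHERE tracker_id IN
    (SELECT tracker_id FROM audit_log WHERE operation = 'A');
```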
RE: Root users permissions not working
Peter, Actually, mysql -uUSERNAME -pPASSWORD works just fine. Donny -Original Message- From: Sunmaia [mailto:[EMAIL PROTECTED] Sent: Monday, May 31, 2004 7:44 PM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: Root users permissions not working hi this may be the problem or it may just be a typo! you are showing #mysql -u root -psomepassword which should be #mysql -u root -p somepassword (note the space after -p) the first will not pass the password, which would explain the lack of privileges. Otherwise start with skip-grant-tables and reset the root password. If you do, is there a possibility it may have been hacked? Peter -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: 01 June 2004 01:26 To: [EMAIL PROTECTED] Subject: Root users permissions not working Hi Guys, I have been using mysql for a year now without any problems. Recently, I just noticed that my root user is not able to create any databases when I log in. I have tried to change my root password and try again, but it does not work: -- #mysql -u root -psomepassword Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 52 to server version: 3.23.58 Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> create database slugz; ERROR 1044: Access denied for user: '[EMAIL PROTECTED]' to database 'slugz' mysql> -- I am able to do use databasename and show table commands but cannot make any changes. I have been able to before without any problems and I have not made any configuration changes to mysql. Is there something that I might be missing? Any help would be greatly appreciated.
Thanks in advance, -Simran -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] Email has been scanned for viruses and SPAM by Trader Mailmanager www.trader.uk.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
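On the -p disagreement above, Donny is right about the client's parsing: a password given on the command line must abut -p with no space; with a space, the next word is read as the database name and mysql prompts for the password. Roughly:

```console
# Password attached to -p (no space) -- authenticates directly:
mysql -u root -psomepassword

# With a space, "somepassword" becomes the DEFAULT DATABASE
# and the client prompts for the password instead:
mysql -u root -p somepassword
Enter password:
```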
RE: 4.1 Beta
Jonathan, I have no idea, but I can tell you about the stability of the 4.1 tree from my experience. If you are using MySQL for anything besides prepared statements, I would say don't worry if it's beta or not. It works and works great! The only reason I mention anything about prepared statements is that I was upgrading a few machines from 4.1.1 to 4.1.2, so I decided to print out the change log and read all of the new additions/bug fixes. And to me it seems like 45% of them were with prepared statements in one way or another. We have been using 4.1 in a production environment since about a month after 4.1.0 was released. And we have run into a bug or two now and then, but that usually happens the first day we try something out. Overall, I won't install any other version. Donny -Original Message- From: Jonathan Soong [mailto:[EMAIL PROTECTED] Sent: Monday, May 31, 2004 8:22 PM To: [EMAIL PROTECTED] Subject: 4.1 Beta Hi Guys, I'm just wondering if anyone could tell me when 4.1 will be moving to Beta? I was told by a MySQL guy at Linux Conf AU 2004 (January) that 4.1 would be in Beta by March!! As such, we have been developing applications on 4.1, but are hesitant to roll them into production on Alpha software. I've looked all over the site and through the mailing list archives for an answer, and have been checking the Mysql site every week. Does anyone know when 4.1 will be Beta!!!?? :) Cheers Jon -- Jonathan Soong Information Services Institute of Medical and Veterinary Science (IMVS) Email: [EMAIL PROTECTED] Web : http://www.imvs.sa.gov.au Tel : +61 8 82223095 Fax : +61 8 82223147 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL Benchmark.
Now the fun part becomes which linux distro do you use? Which is faster? Because trust me, each distro will benchmark differently. Let the games begin! Donny -Original Message- From: JG [mailto:[EMAIL PROTECTED] Sent: Saturday, May 29, 2004 4:34 PM To: [EMAIL PROTECTED] Subject: Re: MySQL Benchmark. At 11:57 AM 5/27/2004 -0700, you wrote: At 11:02 AM 5/27/2004 -0700, you wrote: Hi, I am sure all FreeBSD users are sick and tired of saying this. USE 4.10 Lots of people blindly follow version numbers but 5.x is a lot different than 4.x in ways I don't yet feel comfortable with using on production machines. Only one of those issues is with benchmark numbers, but that is certainly one of them. Thank you, Eric I'll try FreeBSD 4.x later (with LinuxThreads) but I don't see how it will do much better. - Jeremy I tried 4.10 FreeBSD with LinuxThreads using default mysql configuration settings and then using the my-huge.cnf in original and modified forms. The results were no where near those produced by an out-of-the-box linux install (Mandrake) with their standard mysql package installation. I've been on a few FreeBSD mailing lists with this information and tried all that was suggested and more. Nothing helped get anywhere near the base linux numbers. I'm out of time and can't take a performance hit like this on this expensive hardware, so I'm going with Linux. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Server capabilities
Jose, First I would recommend fixing your site. Warning: main(db/db.php): failed to open stream: No such file or directory in /home/uc0208ea/public_html/mainfile.php on line 77 Fatal error: main(): Failed opening required 'db/db.php' (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/uc0208ea/public_html/mainfile.php on line 77 But as far as your scenario goes, to me it's all about diskspace. In your examples you are saying that you need about 10 terabytes of diskspace. At least in the first example. First, I would determine what your budget would be, then worry about how you can fit your system inside of that budget. The next biggest thing is how many people will be hitting it at one time? I know of a company that has a $200,000 mysql server, with 2 identical backup servers that are running as slaves. Do you know they get about 100 queries/second? A 486 with 256mb of memory could handle 100 queries/second. That's my 2 cents. Donny -Original Message- From: tachu [mailto:[EMAIL PROTECTED] Sent: Friday, May 28, 2004 6:52 PM To: [EMAIL PROTECTED] Subject: Server capabilities OK I have one more scenario I need some help with. Say im using master and slaves replication and i have N ammount of servers would mysql be able to have say 10 databases with about 30 tables per db and a total of say 100 megs of data per/db. would the server be able to manage that. considering i would have about 5 mysql servers and in front of that 50 web servers serving php pages? or my second scenario is i have those main 50 servers each with one mysql server running in it and 2000 dbs per server or in the last scenario i have those same 50 servers with 1 db and 6 tables. what issues/limitation do you see in this setup? Thanks any help is appreciated Jose E. 
Avila -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Is MySQL 4.1 ready?
Wait until 4.1.2 is out in the next few days. I've been using it since 4.1.0; besides a few little bugs here and there, which almost every version has, it's very stable. I won't install any other version on any of my machines. Donny -Original Message- From: Marc Greenstock [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 25, 2004 11:27 PM To: [EMAIL PROTECTED] Subject: Is MySQL 4.1 ready? I would like to hear from people who have tested version 4.1 to determine if I should upgrade now or wait until it's in production release? Marc. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Is MySQL 4.1 ready?
I was told they found a bug that they wanted to fix. So soon is what I was told. Probably by the end of this week or the beginning of next week. Donny -Original Message- From: Marc Greenstock [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 25, 2004 11:40 PM To: [EMAIL PROTECTED] Subject: Re: Is MySQL 4.1 ready? Wait until 4.1.2 is out in the next few days. - Do you know where I can find any indication of when 4.1.2 is expected to be released? Marc. Donny Simonton [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Wait until 4.1.2 is out in the next few days. I've been using it since 4.1.0, besides a few little bugs here and there, which almost every version has, it's very stable. I won't install any other version of any of my machines. Donny -Original Message- From: Marc Greenstock [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 25, 2004 11:27 PM To: [EMAIL PROTECTED] Subject: Is MySQL 4.1 ready? I would like to hear from people who have tested version 4.1 to determine if I should upgrade now or wait until it's in production release? Marc. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: upgraded hardware: new server is faster, but GROUP BY operations are slower???
Sounds like a tmp drive issue to me. Maybe you had reiserfs on your old tmp partition and now you have ext3 or something like that. Donny -Original Message- From: Charles, Tony (Exchange) [mailto:[EMAIL PROTECTED] Sent: Friday, May 21, 2004 12:47 PM To: [EMAIL PROTECTED] Subject: upgraded hardware: new server is faster, but GROUP BY operations are slower??? Hi all, Just bought a new server, which was supposed to improve the performance of our app. The new machine has the same OS (Redhat 8), same MySQL (4.0.18), and same my.cnf. The problem is that this (frequently run) query actually runs 41% slower! select * from LEG L, LEG_DETAIL D, DEAL_LEGS G where L.latest_id = D.latest_id and D.latest_id = G.latest_id group by L.dealid limit 75; If I remove the GROUP BY, then the new box returns the results 14% faster than the old server. (So what's so special about GROUP BY?) I ran the benchmark scripts on both boxes, and it seems the following operations are slower on the new server, for some reason:

count_distinct (1000)
count_distinct_2 (1000)
count_distinct_group (1000)
count_distinct_group_on_key (1000)
count_distinct_group_on_key_parts (1
count_distinct_key_prefix (1000)
count_group_on_key_parts (1000)
count_on_key (50100)
select_distinct (800)
select_group (2911)

Can anyone suggest why the new machine might be slower, ONLY IN THE ABOVE areas? Just in case these numbers help, here are lines from the RUN file, for two of the slow operations above:

Operation                       seconds  usr   sys   cpu   tests
count_distinct_group            19.00    1.17  0.08  1.25  1000
count_distinct_group (new box)  26.00    0.39  0.26  0.65  1000
count_distinct                  12.00    0.51  0.02  0.53  1000
count_distinct (new box)        15.00    0.10  0.01  0.11  1000

Any advice at all would be very much appreciated! Thanks, Tony *** Bear Stearns is not responsible for any recommendation, solicitation, offer or agreement or any information about any transaction, customer account or account activity contained in this communication.
*** -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
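Following up the tmp-drive theory in the reply above: a GROUP BY that can't use an index often builds an on-disk temporary table, so the filesystem behind tmpdir can dominate. A few quick checks to run on both boxes (paths are illustrative):

```console
# Where does mysqld write temporary tables?
mysql -e "SHOW VARIABLES LIKE 'tmpdir'"

# Which filesystem type backs that path (ext3 vs reiserfs, ...)?
df -T /tmp

# How many temporary tables are spilling to disk?
mysql -e "SHOW STATUS LIKE 'Created_tmp%'"
```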
RE: thread stack issues
I have this same problem on 4.1.1 as well. Donny -Original Message- From: Steven Roussey [mailto:[EMAIL PROTECTED] Sent: Thursday, May 20, 2004 7:33 PM To: [EMAIL PROTECTED] Subject: thread stack issues Since going from 4.0.18 to 4.0.20 (or 4.0.19) I now receive these warnings on startup: 040520 14:55:21 mysqld started 040520 14:55:21 Warning: Asked for 196608 thread stack, but got 126976 /usr/sbin/mysqld: ready for connections. Version: '4.0.20-standard' socket: '/tmp/mysql.sock' port: 3306 I noticed on another server that it had the same problem with v4.0.18. So some servers have a problem with this version and others do not. All have the warning with 4.0.20. They are configured differently. What configuration options would be effecting this? -steve-- -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Storage Solution Question
We have a backend MySQL processing server that will probably run out of disk space in about 60 days. The data cannot be archived off, because it is always used and changed many times a day. The server currently has six 72-gig SCSI 15k drives in it. We have it raided with 2 drives together for the OS and tmp. And then the data drive is 204 gigs. We currently have 66 gigs free, and we are adding about 1.2 gigs of new data per day. I don't think we would ever get over a terabyte of data, but you never know. What are my options? What are solutions that people have used, that have worked? I just know whatever the solution is has to be fast! Because we do thousands of inserts and selects at the same time. Thanks. Donny
RE: MySQL limits.
Let's see if I can give you some ideas. -Original Message- From: RV Tec [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 18, 2004 8:28 AM To: [EMAIL PROTECTED] Subject: MySQL limits. We have a database with approximately 135 tables (MyISAM). Most of them are small, but we have 5 tables with 8.000.000 records. And that number is to increase at least 1.000.000 records per month (until the end of the year, the growing rate might surpass 2.000.000 records/month). So, today our database size is 6GB. That's an average size for most applications. The server handles about 35-40 concurrent connections. We have a lot of table locks, but that does not seem to be a problem. Most of the time it works really well. Table locks in my opinion are bad. Especially with 35 concurrent connections. On one of my servers we currently have 1498 threads running, we are averaging 2044.431 queries per second, and 1 slow query for the past month. I restarted mysql on the wrong box by accident. But I would still consider these numbers to be nothing compared to some others around here. From time to time (2 weeks uptime or so), we have to face a Signal 11 crash (which is pretty scary, since we have to run a myisamchk that takes us offline for at least 1 hour). We believe this signal 11 is related to the MySQL server load (since we have changed OS's and hardware -- RAM mostly). What does it say in the mysql_error_log when this happens? MySQL will usually dump the reason out in the error log and it's pretty easy to solve after that. Have you considered using the binary version of MySQL instead of compiling from source? Our server is one P4 3GHz, 2GB RAM (400MHz), SCSI Ultra160 36GB disks (database only) running on OpenBSD 3.5. We are aware that OpenBSD might not be the best OS for this application... at first, it was chosen for its security. Now we are looking (if that helps) at an OS with LinuxThreads (FreeBSD perhaps?). Sorry, can't help you with BSD. Linux for me all of the way.
The fact is that we are running MySQL on a dedicated server that keeps the load between 0.5 and 1.5. CPU definitely is not a problem. The memory could be a problem... our key_buffer is set to 384M, according to the recommendations at my-huge.cnf. So, it seems we have a lot of free memory. We have already tried to increase key_buffer (along with the other settings), but it does not seem to hurt or to improve our performance (although the memory use increases). 384 for key_buffer is probably fine with 2 gigs of memory. Some will say that you can go up to 1/2 of the memory, but I like to stay around 400 myself. But it really varies based on what you are doing. We had to do a lot of testing of our application to find the right number. To track down this signal 11, we have just compiled MySQL with debug and returned to the original my-huge.cnf recommendations. Now it seems we are running on an overclocked 486 66MHz. That's what debug does. Use the binary, that's my recommendation. Is there any way to prevent this signal 11 from happening, or is it a message that we have exceeded MySQL's capability? Exceeded MySQL's capability? I don't think you have scratched the surface yet. Error messages are just that, an error of some type. Without knowing the version of MySQL you are running, it's even harder to know. Is MySQL able to handle such load with no problems/turbulences at all? If so, what would be the best hardware/OS configuration? For me, I buy dual proc Xeons with hyperthreading. 2 or 4 gigs of memory. Fedora Linux, RPM install of mysql 4.1.1 (4.1.2 is getting close!), Apache 2.x, and PHP. I install apache and php on all of our servers no matter what, because you never know when you need them. I know many people will tell you to buy Opterons; we just haven't bought one yet, since our vendor of choice doesn't offer them yet. What is the largest DB known to the MySQL community?
I've heard that cox communications is fairly large, at least according to this: http://www.mysql.com/news-and-events/press-release/release_2003_21.html It says theirs is about 600 gigs. But I am sure there are larger ones around. On one server we have about 170 gigs right now of databases. Donny If it's needed, I can provide DMESG, MySQL error log, compile options and some database statistics. Thanks a lot for your help! Best regards, RV Tec -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
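The key_buffer settings discussed above live in my.cnf; as a sketch using the thread's own numbers (not a universal recommendation):

```ini
# /etc/my.cnf -- MyISAM index cache.
# my-huge.cnf suggests ~384M for a 2 GB box; Donny stays
# around 400M rather than the "half of RAM" rule of thumb.
[mysqld]
key_buffer = 384M
```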
RE: OPTIMIZE TABLE and mySQL replication
Actually, if you are using 4.1.1 optimize table does get passed to the slave. This is from the 4.1.1 change log. ANALYZE TABLE, OPTIMIZE TABLE, REPAIR TABLE, and FLUSH statements are now stored in the binary log and thus replicated to slaves. This logging does not occur if the optional NO_WRITE_TO_BINLOG keyword (or its alias LOCAL) is given. Exceptions are that FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK are not logged in any case. For a syntax example, see section 14.5.4.2 FLUSH Syntax. Donny -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Friday, May 14, 2004 2:46 PM To: Jim Cc: [EMAIL PROTECTED] Subject: Re: OPTIMIZE TABLE and mySQL replication On Fri, May 14, 2004 at 03:10:05PM -0400, Jim wrote: Hi List, What are some issues relating to using OPTIMIZE TABLE and replication? Does running OPTIMIZE TABLE on a master DB cause the optimizations to be passed on to the slaves? It does not. The command doesn't replicate. -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
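The quoted changelog entry translates to syntax like this (table name illustrative):

```sql
-- On 4.1.1+ this is written to the binary log and
-- therefore replicated to slaves:
OPTIMIZE TABLE my_table;

-- NO_WRITE_TO_BINLOG (alias LOCAL) keeps it out of the
-- binary log, restoring the old master-only behavior:
OPTIMIZE NO_WRITE_TO_BINLOG TABLE my_table;
OPTIMIZE LOCAL TABLE my_table;
```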
RE: OPTIMIZE TABLE and mySQL replication
It surprised me at first, but then I was actually happy about it. Donny -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Friday, May 14, 2004 4:26 PM To: Donny Simonton Cc: [EMAIL PROTECTED]; 'Jim' Subject: Re: OPTIMIZE TABLE and mySQL replication On Fri, May 14, 2004 at 03:26:28PM -0500, Donny Simonton wrote: Actually, if you are using 4.1.1 optimize table does get passed to the slave. This is from the 4.1.1 change log. ANALYZE TABLE, OPTIMIZE TABLE, REPAIR TABLE, and FLUSH statements are now stored in the binary log and thus replicated to slaves. This logging does not occur if the optional NO_WRITE_TO_BINLOG keyword (or its alias LOCAL) is given. Exceptions are that FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK are not logged in any case. For a syntax example, see section 14.5.4.2 FLUSH Syntax. Ugh. That's the *default*? Gee, that won't surprise anyone, I'm sure... :-( -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: OPTIMIZE TABLE and mySQL replication
Yes. -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Friday, May 14, 2004 4:26 PM To: Donny Simonton Cc: [EMAIL PROTECTED]; 'Jim' Subject: Re: OPTIMIZE TABLE and mySQL replication On Fri, May 14, 2004 at 03:26:28PM -0500, Donny Simonton wrote: Actually, if you are using 4.1.1 optimize table does get passed to the slave. This is from the 4.1.1 change log. ANALYZE TABLE, OPTIMIZE TABLE, REPAIR TABLE, and FLUSH statements are now stored in the binary log and thus replicated to slaves. This logging does not occur if the optional NO_WRITE_TO_BINLOG keyword (or its alias LOCAL) is given. Exceptions are that FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK are not logged in any case. For a syntax example, see section 14.5.4.2 FLUSH Syntax. Ugh. That's the *default*? Gee, that won't surprise anyone, I'm sure... :-( -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Database structure
Here are a few examples of my tables: table name, # of records, type, and size. The database currently has 898 million records in it and it's right over 100 gigs.

Table            Records      Type    Size
Phrase           49,769,178   MyISAM  5.3 GB
Volume           9,671,996    MyISAM  1.1 GB
Word             7,790,076    MyISAM  942.2 MB
WordMagic        128,881,167  MyISAM  6.0 GB
WordMagicScores  111,060,572  MyISAM  7.4 GB
WordWatcher      44,270,528   MyISAM  4.3 GB
WordPhrases      11,154,414   MyISAM  450.9 MB
WordRelated      13,685,867   MyISAM  2.7 GB
WordRelated2     13,194,313   MyISAM  2.6 GB
WordScore        68,437,613   MyISAM  12.7 GB
WordScoreTemp    118,723,375  MyISAM  25.3 GB
WordSearch       188,769,835  MyISAM  11.5 GB
WordStem         15,623,221   MyISAM  417.3 MB

Donny -Original Message- From: Brent Baisley [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 11, 2004 10:09 AM To: Ronan Lucio Cc: [EMAIL PROTECTED] Subject: Re: Database structure Where did you read that 25 million records would be a problem? I've heard of people with billions of records in one table. The only question would be performance, but indexes would largely take care of that. You may run into issues with the physical size of the table and the underlying OS not being able to create a large enough file (i.e. 4GB). This is a problem with the OS, not MySQL. InnoDB would allow you to work around file size limitation in the OS by splitting the database into separate files. On May 11, 2004, at 10:10 AM, Ronan Lucio wrote: Hi, I'm working on a project for a database that should grow to more than 25,000,000 clients. For all I've read in the MySQL manual pages, that's too many records to place in only one table. So, my main doubt is how to divide it. I would divide the client table into a few tables according to the different kinds of clients. Even with some duplicated records and some difficulties importing and exporting clients from one table to another, it should keep the database load cooler. But, I think I'll need to place all logins and access levels in the same table. Would it be a problem? Any idea how I can deal with it? I'm thinking of using InnoDB tables.
Thanks, Ronan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- Brent Baisley Systems Architect Landover Associates, Inc. Search Advisory Services for Advanced Technology Environments p: 212.759.6400/800.759.0577 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
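The per-table breakdown Donny posted can be pulled straight from the server; the database name and table pattern here are illustrative:

```sql
-- Engine, row count, and on-disk size per table;
-- Data_length and Index_length are reported in bytes:
SHOW TABLE STATUS FROM mydb LIKE 'Word%';
```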
RE: Why are compiled queries 50% slower than parsed queries
Kevin, I've been using 4.1.1 since it was released and I've never heard of compiled queries. Are you talking about prepared statements? Donny -Original Message- From: Kevin Cowley [mailto:[EMAIL PROTECTED] Sent: Thursday, April 29, 2004 1:35 PM To: [EMAIL PROTECTED] Mysql. Com Subject: Why are compiled queries 50% slower than parsed queries OK I've just been completing some benchmarking comparing compiled queries (new in 4.1.1 and above) with standard parsed queries. For both INSERT queries insert into table( col1, col2, col3) values(?,?,?) and SELECT queries select col1, col2, col3 where col1 between ? and ? the compiled query is 50% slower.

Parsed:

Insert
run  total records  time      average time  mean time  mean records  max time   min time
0    24851          7.610451  0.000306      0.000239   1008          105588.00  231.00

Select
run  rows  time      average time
0    61    0.075193  0.007519
1    7     0.049296  0.004930
2    41    0.092106  0.009211
3    9     0.048571  0.004857
4    16    0.048425  0.004843
5    62    0.092040  0.009204
6    6     0.066477  0.006648
7    1     0.280004  0.028000
8    8     0.047900  0.004790
9    4     0.113344  0.011334
10   0     0.214862  0.021486
11   0     0.140650  0.014065
12   1     0.251094  0.025109
13   0     0.123815  0.012382
14   0     0.133870  0.013387
15   6     0.306981  0.030698
16   0     0.074054  0.007405
17   0     0.095875  0.009588
18   1     0.126500  0.012650
19   8     0.109567  0.010957

Compiled:

Insert
run  total records  time       average time  mean time  mean records  max time   min time
0    24851          18.224807  0.000733      0.000506   1117          303256.00  438.00

Select
run  rows  time      average time
0    0     0.086140  0.086140
1    0     0.062718  0.062718
2    0     0.109377  0.109377
3    0     0.062499  0.062499
4    0     0.062543  0.062543
5    0     0.109723  0.109723
6    0     0.085447  0.085447
7    0     0.132177  0.132177
8    0     0.062524  0.062524
9    0     0.133001  0.133001
10   0     0.526721  0.526721
11   0     0.525792  0.525792
12   0     0.524818  0.524818
13   0     0.133818  0.133818
14   0     0.525871  0.525871
15   0     0.527045  0.527045
16   0     0.085946  0.085946
17   0     0.108599  0.108599
18   0     0.526602  0.526602
19   0     0.119850  0.119850

Anyone care to shed some light on this?
Kevin Cowley RD Tel: 0118 902 9099 (direct line) Email: [EMAIL PROTECTED] Web: http://www.alchemetrics.co.uk ** ALCHEMETRICS LIMITED (ALCHEMETRICS) Mulberry Park, Fishponds Road, Wokingham, Berkshire, RG41 2GX Tel: +44 (0) 118 902 9000Fax: +44 (0) 118 902 9001 This e-mail is confidential and is intended for the use of the addressee only. If you are not the intended recipient, you are hereby notified that you must not use, copy, disclose, otherwise disseminate or take any action based on this e-mail or any information herein. If you receive this transmission in error, please notify the sender immediately by reply e-mail or by using the contact details above and then delete this e-mail. Please note that e-mail may be susceptible to data corruption, interception and unauthorised amendment. Alchemetrics does not accept any liability for any such corruption, interception, amendment or the consequences thereof. ** -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
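For readers wondering what "compiled queries" means in this thread: these are 4.1's server-side prepared statements. Kevin's benchmark presumably went through the client library's mysql_stmt_* calls; later 4.1 releases also added an SQL-level syntax, which as a sketch (table and column names illustrative) looks like:

```sql
-- Prepare once, then execute repeatedly with new parameters:
PREPARE ins FROM 'INSERT INTO t (col1, col2, col3) VALUES (?, ?, ?)';
SET @a = 1, @b = 2, @c = 3;
EXECUTE ins USING @a, @b, @c;
DEALLOCATE PREPARE ins;
```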
RE: mysqld too busy to check its grant tables?
Jeremy, We have also seen the problem on Linux a while back; we haven't had the problem lately though. Donny -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Monday, April 26, 2004 3:36 PM To: MySQL Cc: [EMAIL PROTECTED] Subject: Re: mysqld too busy to check its grant tables? On Mon, Apr 26, 2004 at 01:27:08PM -0700, MySQL wrote: On Mon, 26 Apr 2004, Jeremy Zawodny wrote: Yeah, we've been seeing this bug a bit too. I'm trying to isolate it and figure out if it's a FreeBSD related problem or something more MySQL specific. Are you using FreeBSD's native threads or LinuxThreads? FreeBSD native threads. Interesting. Are you using wildcard hosts in your grants, like %.example.com? Yes. Okay. I have a hunch that it's somehow corrupting entries in the acl_cache. I'm working to see if I can prove this, since it's a bit hard to reproduce on demand. Is LinuxThreads becoming stable enough to become production worthy? Heck yes. That's all we've used for the last 1.5 - 2 years at Yahoo. What would you say is the greatest performance gain, or most positive result of this? Two things: 1. I/O is much faster. 2. MySQL can use all CPUs in a SMP box. It's been a really big win for us. The machine this database is on has 15k rpm SCSI drives which I've seen spike up to unreal usage (60MB/s), which means that we can't really improve much on performance without splitting the database onto more hardware. Just curious why you are placing all your bets on LinuxThreads. We had horrible performance on native threads, instability, and couldn't take advantage of SMP boxes. If you've ever looked into FreeBSD 4.x's poor excuse for threads you'd see why. It's all async calls instead of real threading. :-( Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: mysqld too busy to check its grant tables?
We have tried the same thing; flush hosts and flush privileges did nothing. Refresh reload did nothing. Once the load went down the problem normally went away. We have tried shutting down mysql and starting it back up; that helped until the load went back up. Rebooting the box usually helped, but once the load went back up, the problem came right back. Now the box I am talking about is a 4.0.x box. I have also seen it on 3.23.x. I have a few boxes running 4.1.1, and we have not seen this problem on 4.1. But that could probably just be a fluke. Donny -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Monday, April 26, 2004 3:49 PM To: Donny Simonton Cc: [EMAIL PROTECTED]; 'MySQL' Subject: Re: mysqld too busy to check its grant tables? On Mon, Apr 26, 2004 at 03:44:03PM -0500, Donny Simonton wrote: Jeremy, We have also seen the problem on linux a while back; we haven't had the problem lately though. In our case, we've found that neither FLUSH HOSTS nor FLUSH PRIVILEGES has any effect. Was that the case for you folks as well? Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: mysqld too busy to check its grant tables?
The main one we have noticed it on quite a few times is a mail server that stores all of the mail in mysql. And it receives mail for about 300,000 domains, so it's normally fairly busy. Currently the load average on the box is 6, and for the past 10 minutes, 7.41. So the load average on the box is not very high, especially for this box. The box is a dual proc, currently using 55% system, 30% user, with about 15% idle. But we aren't having that problem right now. Hope that helps. Donny -Original Message- From: Jeremy Zawodny [mailto:[EMAIL PROTECTED] Sent: Monday, April 26, 2004 4:21 PM To: Donny Simonton Cc: [EMAIL PROTECTED]; 'MySQL' Subject: Re: mysqld too busy to check its grant tables? On Mon, Apr 26, 2004 at 04:11:37PM -0500, Donny Simonton wrote: We have tried the same thing, flush hosts and flush privileges did nothing. Refresh reload did nothing. Once the load went down the problem normally went away. We have tried shutting down mysql and starting it back up, that helped until the load went back up. Rebooting the box usually helped, but once the load went back up, the problem came right back. Now the box I am talking about is a 4.0.x box. I have also seen it on 3.23.x. I have a few boxes running 4.1.1, and we have not seen this problem on 4.1. But that could probably just be a fluke. Weird. Our problems seem not to be load related at all. When you say load do you mean busy CPU or high load average? Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ [book] High Performance MySQL -- http://highperformancemysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: SLOW 22million rows, 5 hour query?
Use insert delayed, and you will cut your time in half. At least with my experience. But also how long does it actually take to run the query itself. Giving a summary explain doesn't help much. You really need a table structure that the select is using and a full explain. Donny -Original Message- From: Tim Cutts [mailto:[EMAIL PROTECTED] Sent: Saturday, April 24, 2004 6:02 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: SLOW 22million rows, 5 hour query? On 23 Apr 2004, at 9:48 pm, [EMAIL PROTECTED] wrote: Relevant `explain` details: Full table scan: 22,371,273 rows, Using temporary; Using filesort The filesort is a giveaway. Can you increase the sort buffer size so that the sort can happen in memory rather than having to use a file to sort? Tim -- Dr Tim Cutts Informatics Systems Group Wellcome Trust Sanger Institute Hinxton, Cambridge, CB10 1SA, UK -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
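Tim's suggestion above can be tried per session before committing to a my.cnf change. The 64MB value below is purely illustrative, not a recommendation, and the SET SESSION syntax assumes MySQL 4.0.3 or later:

```sql
-- Illustrative sketch: give this session a larger sort buffer (64MB here)
-- so the ORDER BY / GROUP BY sort can happen in memory instead of on disk.
SET SESSION sort_buffer_size = 67108864;

-- The server-wide default can instead be raised in my.cnf, e.g.:
-- set-variable = sort_buffer=64M
```

Re-running EXPLAIN on the query afterward won't change ("Using filesort" names the algorithm, not literal file use), but the wall-clock time should drop if the sort now fits in memory.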
RE: SLOW 22million rows, 5 hour query?
Cliff, still no explain, still no table structure. Until that happens, enjoy the 5 hour club. Donny -Original Message- From: Cliff Daniel [mailto:[EMAIL PROTECTED] Sent: Saturday, April 24, 2004 6:41 PM To: Donny Simonton Cc: 'Tim Cutts'; [EMAIL PROTECTED] Subject: Re: SLOW 22million rows, 5 hour query? http://dev.mysql.com/doc/mysql/en/INSERT_SELECT.html Do not use DELAYED with INSERT ... SELECT. With respect to the table structure... can you explain how the structure (assuming you are going down the path of indexes) affects the query when you have to read every single row regardless? Cliff Donny Simonton [EMAIL PROTECTED] writes: Use insert delayed, and you will cut your time in half. At least with my experience. But also how long does it actually take to run the query itself? Giving a summary explain doesn't help much. You really need a table structure that the select is using and a full explain. Donny -Original Message- From: Tim Cutts [mailto:[EMAIL PROTECTED] Sent: Saturday, April 24, 2004 6:02 AM To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: SLOW 22million rows, 5 hour query? On 23 Apr 2004, at 9:48 pm, [EMAIL PROTECTED] wrote: Relevant `explain` details: Full table scan: 22,371,273 rows, Using temporary; Using filesort The filesort is a giveaway. Can you increase the sort buffer size so that the sort can happen in memory rather than having to use a file to sort? 
Tim -- Dr Tim Cutts Informatics Systems Group Wellcome Trust Sanger Institute Hinxton, Cambridge, CB10 1SA, UK -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: update if insert fails
Actually, if you are using 4.1.x: INSERT INTO table (a,b,c) VALUES (1,2,3) ON DUPLICATE KEY UPDATE c=3; One of the best new features, because an insert is faster than an update, an update is faster than a delete, and replace is the slowest command you can run. These are based on my benchmarks from about 6 months ago. This is definitely one of my favorite commands now. Donny -Original Message- From: B. Fongo [mailto:[EMAIL PROTECTED] Sent: Friday, April 23, 2004 7:23 AM To: 'Andy Ford'; [EMAIL PROTECTED] Mysql. Com Subject: RE: update if insert fails Use REPLACE instead. It is similar to INSERT except that it will replace any old record that matches with the new one. A replace statement may look like this: REPLACE INTO TestTable (TestId, User) VALUES (007,'Bond'); HTH Babs || -Original Message- || From: Andy Ford [mailto:[EMAIL PROTECTED] || Sent: Friday, April 23, 2004 1:59 PM || To: [EMAIL PROTECTED] || Subject: update if insert fails || || Hi || || Is there such a statement where, if the insert fails (due to a duplicate || record) an update will happen? || || Thanks || || Andy || || || -- || MySQL General Mailing List || For list archives: http://lists.mysql.com/mysql || To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
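A minimal sketch of the difference Donny describes, using a made-up table (ON DUPLICATE KEY UPDATE requires MySQL 4.1.0 or later):

```sql
-- Hypothetical table for illustration only
CREATE TABLE counters (
  id INT NOT NULL PRIMARY KEY,
  hits INT NOT NULL DEFAULT 0
);

-- 4.1.x: one statement that inserts, or updates the row if the key exists
INSERT INTO counters (id, hits) VALUES (1, 1)
  ON DUPLICATE KEY UPDATE hits = hits + 1;

-- Pre-4.1 fallback: REPLACE deletes the old row and inserts a new one,
-- which is why it benchmarks slowest, and it silently resets any
-- columns you don't supply to their defaults
REPLACE INTO counters (id, hits) VALUES (1, 1);
```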
RE: Performance
I would bet 99.9% of the problem is that you aren't using indexes. Run an EXPLAIN on your queries; that will show you where the problem probably is. Donny -Original Message- From: Teus van Arkel [mailto:[EMAIL PROTECTED] Sent: Thursday, April 22, 2004 6:11 AM To: [EMAIL PROTECTED] Subject: Performance Hello, We are setting up a MySQL database for working with and manipulating large amounts of data. I'm using a Visual Basic script for manipulating particular rows in every record in a test database with almost 65,000 records. This script checks the spaces in a row and puts them into several variables. With these variables we later get blocks of this data and put them into another row. Currently we are working with MS Access but are testing MySQL for performance. Until now MySQL has been slower than MS Access. In this VB script we connect to the MySQL server with ODBC and ADODB, server-side queries and optimistic locking. The Red Hat MySQL server (Pentium III 733MHz, 128MB memory) shows 100% CPU usage when I run the Visual Basic code; memory is not a problem! Does somebody have some ideas to increase the performance of MySQL? With thanks in advance. Teus -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
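As a sketch of Donny's advice (the table and column names here are invented for the example), EXPLAIN shows immediately whether a query is scanning the whole table:

```sql
-- type: ALL with key: NULL in the EXPLAIN output means a full table scan
EXPLAIN SELECT * FROM mytable WHERE somecol = 'somevalue';

-- Add an index on the filtered column, then re-run the EXPLAIN;
-- type should change to ref (or range) with the new key listed
ALTER TABLE mytable ADD INDEX idx_somecol (somecol);
```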
RE: Searching the Docs
Lou, I normally download the html version of the manual and have a little search feature just for it. And you are correct, I've looked for a few of your examples and can't find any information on them besides you can turn them on or off basically. Not in the mood to bust out the code to figure out exactly what they do though. Sorry. Donny -Original Message- From: Lou Olsten [mailto:[EMAIL PROTECTED] Sent: Wednesday, April 21, 2004 3:35 PM To: [EMAIL PROTECTED] Subject: Re: Searching the Docs I use the online docs extensively as I am still very much in learning mode with MySQL. However, I've been frustrated recently because it appears I cannot search for an EXACT string literal, which brings me back a ton of hits I don't want. For example, I'm trying to search for the dynamic system variable called convert_character_set, but it returns results with convert or set etc., when I only want to see hits for the exact string. Is there a search type I can use, or some quoting system, or anything that will allow me to search in this manner? I've got several dynamic variables that I cannot find definitions for: convert_character_set error_count slave_compressed_protocol sql_big_tables sql_low_priority_updates sql_max_join_size sql_slave_skip_counter warning_count Thanks, Lou -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL Website
You can always go to a mirror. http://mysql.mirrors.pair.com/ Even though Jim, I think that's his name, would probably like to know about the problem. Donny -Original Message- From: Brad Teale [mailto:[EMAIL PROTECTED] Sent: Wednesday, April 21, 2004 7:37 AM To: 'Peter Burden'; Lehman, Jason (Registrar's Office) Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: RE: MySQL Website It appears to be the web server. I can reach mysql.com just fine in a traceroute, but can't get a HEAD or webpage to come up!

traceroute to mysql.com (66.35.250.190), 30 hops max, 38 byte packets
 1 cdm-208-180-236-1.cnro.cox-internet.com (208.180.236.1) 18.963 ms 10.260 ms 12.200 ms
 2 cdm-208-180-1-50.cnro.cox-internet.com (208.180.1.50) 7.622 ms 9.933 ms 9.904 ms
 3 cdm-208-180-1-73.cnro.cox-internet.com (208.180.1.73) 17.948 ms 17.666 ms 14.908 ms
 4 dllsbbrc01-gew0402.ma.dl.cox-internet.com (66.76.45.145) 128.870 ms 182.677 ms 91.958 ms
 5 dllsdsrc01-gew0303.rd.dl.cox.net (68.1.206.5) 23.685 ms 26.633 ms 22.810 ms
 6 dllsbbrc01-pos0101.rd.dl.cox.net (68.1.0.144) 23.805 ms 26.595 ms 27.092 ms
 7 12.119.145.125 (12.119.145.125) 79.373 ms 78.874 ms 75.386 ms
 8 gbr6-p30.dlstx.ip.att.net (12.123.17.54) 75.101 ms 79.933 ms 74.823 ms
 9 tbr2-p013701.dlstx.ip.att.net (12.122.12.89) 82.161 ms 80.284 ms 77.678 ms
10 ggr2-p390.dlstx.ip.att.net (12.123.17.85) 78.322 ms 75.077 ms 81.961 ms
11 dcr2-so-4-0-0.Dallas.savvis.net (208.172.139.225) 76.214 ms 77.886 ms 76.674 ms
12 dcr2-loopback.SantaClara.savvis.net (208.172.146.100) 108.356 ms 105.723 ms 112.343 ms
13 bhr1-pos-0-0.SantaClarasc8.savvis.net (208.172.156.198) 95.535 ms 88.560 ms 84.063 ms
14 csr1-ve243.SantaClarasc8.savvis.net (66.35.194.50) 88.678 ms 86.770 ms 85.408 ms
15 66.35.212.174 (66.35.212.174) 89.425 ms 89.129 ms 98.684 ms
16 mysql.com (66.35.250.190) 87.200 ms 85.178 ms 87.600 ms

Thanks, Brad Teale Universal Weather and Aviation, Inc. 
mailto:[EMAIL PROTECTED] -Original Message- From: Peter Burden [mailto:[EMAIL PROTECTED] Sent: Wednesday, April 21, 2004 7:35 AM To: Lehman, Jason (Registrar's Office) Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: MySQL Website Lehman, Jason (Registrar's Office) wrote: I should have been clearer. I can't reach the website. I can get to lists.mysql.com with no problem except for the fact that images won't pull form www.mysql.com but I definitely come to a grinding halt when I try to reach www.mysql.com. I can't do a tracert because the university has shut that off here. But I guess it is working for everyone else. I'm experiencing similar problems - using both Mozilla and IE. 'wget' eventually got the HTML but it took nearly 2 minutes. The headers don't suggest anything strange. This is also a University site with 'traceroute' disabled and everything accessed through a cache. www.netcraft.com's site analysis also doesn't suggest anything untoward. -Original Message- From: Rhino [mailto:[EMAIL PROTECTED] Sent: Tuesday, April 20, 2004 6:31 PM To: Lehman, Jason (Registrar's Office) Subject: Re: MySQL Website - Original Message - From: Lehman, Jason (Registrar's Office) [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, April 20, 2004 11:53 AM Subject: MySQL Website Does anyone know what is going on with the MySQL website? It appears to be undergoing a major redesign. The sections appear to be organized differently and the style sheets have also changed. Or did you have something else in mind? Rhino -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Gripe with MySQL
Everybody should remember as well, if you run rm -rf /*.* on your server you will delete everything from your server, but linux will stay running. Even though that's not documented either. If you use a client like PHPMyadmin or one of the other 80 million that are around you won't have to worry about error checking because they have already done it for you. Now as far as your clients/customers, if you don't have error checking in yourself, that's your problem not mysql's problem. Donny -Original Message- From: Michael McTernan [mailto:[EMAIL PROTECTED] Sent: Tuesday, April 20, 2004 9:04 AM To: Stormblade Cc: [EMAIL PROTECTED] Subject: RE: Gripe with MySQL Hi, ENUM columns can also trip you up if you insert a value outside of the ENUM; an empty string is inserted instead. This is documented behaviour (mysql.com seems to be going slowly though, so can't dig a reference right now), even if it is undesired in some cases. Thanks, Mike -Original Message- From: news [mailto:[EMAIL PROTECTED] Behalf Of Stormblade Sent: 20 April 2004 01:42 To: [EMAIL PROTECTED] Subject: Gripe with MySQL Ok. Love MySQL and I will be using it for my personal use and recommending it to clients as a lower cost alternative. I've only been using it for a very short time but there one major gripe I have with it and I believe it's just a design thing. MySQL seems to put the burden of error checking and such on the client. - All fields have a default value even when I don't tell it to? - Allow Null will only result in an error if I explicitly try to set the field to null. These are the two things that I really dislike. I think it's a poor design to rely on clients only for error checking. MySQL supports foreign keys. This is good because the database handles referential integrity. But it shouldn't stop there. I should also be able to tell the database not to allow a field to be empty/null and it should not put anything in there that I have not told it to. One scenario I can think of is this. 
My company uses MySQL as it's database. Different departments implement various interfaces to this database. All it would take is ONE client to have ONE bad SQL and although the insert works (Thanks to default values being put in) the data is not valid. I've only been working with MySQL for a little bit so this is just my first impressions. I'll be very happy to be told I'm wrong or that future updates (5.0 perhaps) will change some of the things I've mentioned. Relying on clients for database integrity is a bad idea in my experience. -- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Stormblade (Shaolin Code Warrior) Software Developer (15+ Years Programming exp.) My System: http://www.anandtech.com/mysystemrig.html?rigid=1683 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: when 4.1.2 release
4.1.2 will probably not be beta or gamma. Not sure why. I've been using 4.1.1 in a production environment since it was released. I love it! We still use 4.0.x or 3.23.x on some of our older stuff, and every time I have to use it I get aggravated. Once you use it and you find all of the differences in speed and functionality, you don't want to go back. Donny -Original Message- From: electroteque [mailto:[EMAIL PROTECTED] Sent: Monday, April 19, 2004 4:12 PM To: Victoria Reznichenko; [EMAIL PROTECTED] Subject: RE: when 4.1.2 release Huh as in production ? -Original Message- From: Victoria Reznichenko [mailto:[EMAIL PROTECTED] Sent: Tuesday, April 20, 2004 5:52 AM To: [EMAIL PROTECTED] Subject: Re: when 4.1.2 release Marek Lewczuk [EMAIL PROTECTED] wrote: Hello, when do you plan to release 4.1.2 version ? It will be released in several weeks. -- For technical support contracts, goto https://order.mysql.com/?ref=ensita This email is sponsored by Ensita.net http://www.ensita.net/ Victoria Reznichenko [EMAIL PROTECTED] MySQL AB / Ensita.net www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: How can I avoid filesort with BETWEEN and ORDER BY
Steven, In your case, your query doesn't even use an index, and you are using an ORDER BY ... DESC. Now what I would recommend is something like this; change your query just to test this out. Instead of: SELECT p.* FROM p_cat c, p_ad p WHERE p.cat = c.id AND c.lft BETWEEN 4 AND 5 ORDER BY p.date DESC LIMIT 0,30; try: SELECT * FROM p_cat INNER JOIN p_ad ON p_cat.id = p_ad.cat AND p_cat.lft = 4 ORDER BY p_ad.date ASC LIMIT 0,30; Also add an index on id + lft on the p_cat table. And you also don't have an index on p_ad.date, which is what you are trying to order by. Sorry, I had to rewrite the query because aliases drive me insane. Now in this case, you will see that with lft I have it set to do an exact match, not a range, which is what BETWEEN will give you. If you were only doing BETWEEN two numbers like 4,5 or 100,101, I would personally recommend using IN. But that's my preference. Now with your ORDER BY, if you do the ORDER BY ASC, you won't have many if any problems. But you, like a lot of people, need to order by DESC, which MySQL doesn't support very well, at least if you look at EXPLAIN. But there are workarounds to solve the problem, if you are like me and want to have 0 slow queries. Donny -Original Message- From: Steven Ducat [mailto:[EMAIL PROTECTED] Sent: Sunday, April 11, 2004 5:32 PM To: [EMAIL PROTECTED] Subject: How can I avoid filesort with BETWEEN and ORDER BY I am trying to optimize a query using both BETWEEN and ORDER BY but after months of reading and research I still can not get the hang of this. 
Details are as follows: 2 tables:

CREATE TABLE `p_ad` (
  `id` int(11) NOT NULL auto_increment,
  `cat` mediumint(9) NOT NULL default '0',
  `title` varchar(50) default NULL,
  `description` text,
  `location` varchar(50) default NULL,
  `pcode` varchar(8) default NULL,
  `pcode_id` smallint(4) default NULL,
  `ph` varchar(50) default NULL,
  `email` varchar(50) default NULL,
  `user_id` int(11) NOT NULL default '0',
  `date` timestamp(14) NOT NULL,
  `price` decimal(10,2) default NULL,
  `email_priv` tinyint(1) default '0',
  PRIMARY KEY (`id`),
  KEY `cat_pc_date` (`cat`,`pcode_id`,`date`),
  KEY `c_p_d` (`cat`,`pcode`,`date`),
  KEY `user` (`user_id`),
  KEY `cat_date` (`cat`,`date`)
) TYPE=MyISAM;

CREATE TABLE `p_cat` (
  `id` mediumint(9) NOT NULL auto_increment,
  `name` varchar(50) NOT NULL default '',
  `parent` mediumint(11) default '0',
  `lft` mediumint(11) NOT NULL default '0',
  `rgt` mediumint(11) NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `LFT` (`lft`),
  KEY `PARENT` (`parent`)
) TYPE=MyISAM;

Query as follows:

EXPLAIN SELECT p.* FROM p_cat c, p_ad p WHERE p.cat = c.id AND c.lft BETWEEN 4 AND 5 ORDER BY p.date DESC LIMIT 0,30;

+-------+-------+----------------------------+------+---------+------+-------+---------------------------------+
| table | type  | possible_keys              | key  | key_len | ref  | rows  | Extra                           |
+-------+-------+----------------------------+------+---------+------+-------+---------------------------------+
| p     | ALL   | cat_pc_date,c_p_d,cat_date | NULL |    NULL | NULL | 60002 | Using temporary; Using filesort |
| c     | range | PRIMARY,LFT                | LFT  |       3 | NULL |     1 | Using where                     |
+-------+-------+----------------------------+------+---------+------+-------+---------------------------------+

Is there any way I can get a query like this to avoid using a temporary table and filesort? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
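Donny's suggestions in the previous message can be applied to these exact tables. This is only a sketch; whether the optimizer actually drops the temporary table and filesort depends on the MySQL version and data distribution:

```sql
-- Composite index so the join on id plus the lft comparison is covered
ALTER TABLE p_cat ADD INDEX id_lft (id, lft);

-- Exact match on lft instead of BETWEEN; p_ad's existing cat_date index
-- on (cat, date) can then supply rows in date order for the ORDER BY
SELECT p_ad.*
FROM p_cat
INNER JOIN p_ad ON p_cat.id = p_ad.cat
WHERE p_cat.lft = 4
ORDER BY p_ad.date ASC
LIMIT 0, 30;
```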
RE: Too Many Connections
Run mysqladmin extended-status and look for something like this: | Max_used_connections | 138 | If it says 512, meaning you have already used as many connections as your max_connections setting allows, then you need to raise it. If your number is much lower and you are getting that problem, it's a different problem, but that's just what mysql is reporting. Donny -Original Message- From: Mark Susol | Ultimate Creative Media [mailto:[EMAIL PROTECTED] Sent: Wednesday, April 07, 2004 10:35 PM To: [EMAIL PROTECTED] Subject: Too Many Connections What is the best way to diagnose the root cause of this error? What scripts are doing the connecting and totalling them up? Warning: mysql_connect(): User ultimated has already more than 'max_user_connections' active connections I have a very active phpBB but I'm on a new server and it's not pulling a server load over 0.5. I ran some data before (a crontab php script gathered the info for me every 5 minutes for several weeks) and the problem happened before related to server loading, not necessarily how many users I had on that site posting. That was an older Cobalt RaQ4. I seemed to be having a lot of search bots accessing the site then.

[mysqld]
set-variable = max_connections=512
set-variable = max_user_connections=200
set-variable = key_buffer=64M
set-variable = table_cache=256
set-variable = sort_buffer=4M
set-variable = wait_timeout=300

I've only had this problem this week; it's run 3 weeks fine. I do have a corrupted MYI file according to myisamchk. Mark Súsol --- u l t i m a t e CreativeMedia Web | Print | CD Media | eCommerce www.ultimatecreativemedia.com Ph: 301-668-0588 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
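The same numbers are available from the mysql client, which can be easier than parsing mysqladmin output. Note that the warning in the original message is about max_user_connections, a separate per-user ceiling, so that variable is worth checking too:

```sql
-- Peak concurrent connections since the server started
SHOW STATUS LIKE 'Max_used_connections';

-- The configured ceilings to compare against
SHOW VARIABLES LIKE 'max_connections';
SHOW VARIABLES LIKE 'max_user_connections';

-- See who is currently connected and what each thread is running
SHOW PROCESSLIST;
```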
RE: MySQL 5.0 - What is really available?
Another option is DB Designer 4 (http://fabforce.net/dbdesigner4/); I never quite used it because I live by phpMyAdmin, but I know a few people who use it. Donny -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Tuesday, March 30, 2004 10:26 AM To: [EMAIL PROTECTED] Subject: MySQL 5.0 - What is really available? I've downloaded and installed 5.0, it seems to be working fine. However, I need a graphical user interface to create tables and such (when I say I need, trust me, I need). I'm running on XP (that explains a lot I know) So: Is Control Center for 5.0 available? Is MyODBC 5.0 available? I'm having a hard time finding from the rather extensive 'manual.html' what other than the actual database engine is available. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL installation in Fedora Core 1
Sami, I run Fedora Core 1 and haven't had any problems. This is all you do if you want MySQL 4.1.1:

wget http://www.mysql.com/get/Downloads/MySQL-4.1/MySQL-server-4.1.1-1.i386.rpm/from/http://mysql.mirrors.pair.com/
wget http://www.mysql.com/get/Downloads/MySQL-4.1/MySQL-client-4.1.1-0.i386.rpm/from/http://mysql.mirrors.pair.com/
wget http://www.mysql.com/get/Downloads/MySQL-4.1/MySQL-devel-4.1.1-0.i386.rpm/from/http://mysql.mirrors.pair.com/
rpm -i MySQL-server-4.1.1-1.i386.rpm
rpm -i MySQL-client-4.1.1-0.i386.rpm
rpm -i MySQL-devel-4.1.1-0.i386.rpm

Sorry, word wrap sucks. Donny -Original Message- From: Sami Maisniemi [mailto:[EMAIL PROTECTED] Sent: Tuesday, March 30, 2004 5:11 PM To: [EMAIL PROTECTED] Subject: MySQL installation in Fedora Core 1 I intend to build up a web server that is based on Fedora Core 1. Installation and configuration of Apache and PHP4 was pretty simple, but there are some problems in configuration of MySQL. I installed the MySQL server RPM included in the distribution. Even in the MySQL pages it is mentioned that running /etc/rc.d/init.d/mysqld start is enough for the post-installation process. However, there is no such file. Running mysql leads to an error: [EMAIL PROTECTED] smaisnie]# mysql ERROR 2002: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) Does anyone have any idea how to carry out the post-installation process? Regards Sami Maisniemi -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Best Performing Hardware/OS/MySQL?
SCSI, 15,000 RPM drives and a decent amount of memory, 2-16 gigs. Dual procs definitely do help; we have tried it with dual procs with hyperthreading and without, and with hyperthreading it seems to be much faster. Besides that, you can run it on any OS; we use Fedora, with Linux 2.6.x. But that's our choice. But just say no to IDE drives! Donny -Original Message- From: Chad Attermann [mailto:[EMAIL PROTECTED] Sent: Monday, March 29, 2004 3:56 PM To: [EMAIL PROTECTED] Subject: Best Performing Hardware/OS/MySQL? Hello All, I have been a MySQL user for some time and have always run MySQL on older generation Sun servers running Solaris 8. I now seem to be outgrowing my setup and I (and I'm sure others on the list) would appreciate input from the MySQL community as to which hardware, OS, and MySQL flavor/version combinations are best for running MySQL. I expect responses from you all to be subjective, and that's OK. Any information about experiences by seasoned MySQL professionals, especially those that have experimented with many different combinations of hardware and OS, will save those of us planning ahead a lot of time and grief. Thanks a lot in advance for your input. Chad Attermann [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: High load with a few queries
Chris, You would have to send the table structures, including any indexes, and also the real queries. It would also help if you would send an EXPLAIN of your select statements. 100+ a minute is not much; I have one server currently doing: Queries per second avg: 3157.235 Yes, that's per second, not per minute. So I would probably say the problem is all in your select, which probably doesn't have an index or something. Donny -Original Message- From: Chris [mailto:[EMAIL PROTECTED] Sent: Monday, March 29, 2004 8:04 PM To: [EMAIL PROTECTED] Subject: High load with a few queries Hi Guys, I've got a script that I've been working on, written in Perl, which interfaces to MySQL via the DBI module. The script only does two statements within the entire script, so I didn't think it would be too taxing on the machine. Turns out that when the script is executed around 100 times a minute, the load on the machine skyrockets (load average around 42.10). Obviously this is not good, so my question is, where can I start on optimizing mysql for high usage (if you can even call it that)? Basically the first select statement goes and selects a column value from a row that matches another column based on a username string. So for example: select data from mytable where username='theuser' Then the second statement does: insert into myothertable values('blah','blah') mytable is small, no more than like 140 rows. myothertable is large, and the table where all data gets dumped to. As you can see, this is pretty basic. I never did much with queries being sent this fast (100+ a minute) so any advice is welcome. Thanks. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
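A sketch of the first things to check for the two statements Chris describes (table and column names taken from his message); with 100 executions a minute, the connect/disconnect overhead of the Perl script itself is also worth a look:

```sql
-- Even a 140-row table benefits: without an index this SELECT
-- scans the whole table on every one of the ~100 calls per minute
ALTER TABLE mytable ADD INDEX idx_username (username);
EXPLAIN SELECT data FROM mytable WHERE username = 'theuser';

-- If myothertable is MyISAM, INSERT DELAYED returns to the client
-- immediately and batches the writes (but never mix DELAYED with
-- INSERT ... SELECT, per the manual)
INSERT DELAYED INTO myothertable VALUES ('blah', 'blah');
```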
RE: ORDER DESC vs. ORDER ASC exec time
Personally, it's an unexpected flaw that I hope one day will be fixed. But I'm not holding my breath, even though they seem to be planning for it. But it could also be because of the query cache. To benchmark something like this, you really need to add SQL_NO_CACHE to your select statement to get accurate numbers. Donny -Original Message- From: Vadim P. [mailto:[EMAIL PROTECTED] Sent: Monday, March 29, 2004 8:16 PM To: [EMAIL PROTECTED] Subject: ORDER DESC vs. ORDER ASC exec time Hi all, Just noticed that a simple query that returns only 14 rows is 10X slower when ORDER .. DESC is used compared to ORDER .. ASC. The table has about 700,000 records, indexed on the field the table is being ordered by. Is this expected behavior? MySQL 4.0.18 running under OpenBSD 3.4 Intel/PIII 900MHz/2GB RAM

mysql> SELECT DISTINCT billingCycle FROM PhoneCalls ORDER BY billingCycle DESC;
+--------------+
| billingCycle |
+--------------+
| 2004-04-01   |
| 2004-03-01   |
| 2004-02-01   |
| 2004-01-01   |
| 2003-12-01   |
| 2003-11-01   |
| 2003-10-01   |
| 2003-09-01   |
| 2003-08-01   |
| 2003-07-01   |
| 2003-06-01   |
| 2003-05-01   |
| 2003-04-01   |
| 2003-01-01   |
+--------------+
14 rows in set (14.77 sec)

mysql> SELECT DISTINCT billingCycle FROM PhoneCalls ORDER BY billingCycle;
+--------------+
| billingCycle |
+--------------+
| 2003-01-01   |
| 2003-04-01   |
| 2003-05-01   |
| 2003-06-01   |
| 2003-07-01   |
| 2003-08-01   |
| 2003-09-01   |
| 2003-10-01   |
| 2003-11-01   |
| 2003-12-01   |
| 2004-01-01   |
| 2004-02-01   |
| 2004-03-01   |
| 2004-04-01   |
+--------------+
14 rows in set (1.06 sec)

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
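Donny's SQL_NO_CACHE point applies directly to Vadim's test: with the query cache enabled, the second of two identical SELECTs can be served from memory, so the two timings are not comparable. A minimal sketch of the fairer benchmark:

```sql
-- Force both directions to bypass the query cache (available in 4.0.1+)
SELECT SQL_NO_CACHE DISTINCT billingCycle FROM PhoneCalls ORDER BY billingCycle DESC;
SELECT SQL_NO_CACHE DISTINCT billingCycle FROM PhoneCalls ORDER BY billingCycle ASC;
```

Running each a few times and averaging gives a cleaner picture of whether the DESC scan itself is really 10X slower.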
RE: mysql 1gb memory limit?
I've got a server with 24 gigs in it and it works just fine, about three more with 8 gigs, and a few with 2 gigs. All are running 4.1.1, and all of them run without any problems on Xeons with hyperthreading.

Donny

-----Original Message-----
From: Jigal van Hemert [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 4:39 AM
To: [EMAIL PROTECTED]
Subject: mysql 1gb memory limit?

In order to get the maximum performance from our servers we're reading every bit we can lay our hands on about performance tuning. This document (in German):

http://www.phpconference.de/2003/slides/business_track/kneschke_webserver-performance-tuning.pdf

says on page 9 that MySQL 4.1.x on 32-bit Intel Linux can only use 1GB!?!? Is this true? Any way to overcome this limit?

Regards,
Jigal.
RE: mysql 1gb memory limit?
Peter,

I have never heard of such a limit, and I have been using it for a while. And --big-tables (which, BTW, you can't easily search for on mysql.com because of the minimum of 4 characters in full-text indexing) says this:

--big-tables
Allow large result sets by saving all temporary sets on file. This option prevents most "table full" errors, but also slows down queries for which in-memory tables would suffice. Since Version 3.23.2, MySQL is able to handle large result sets automatically by using memory for small temporary tables and switching to disk tables where necessary.

So --big-tables has nothing to do with any limit.

Donny

-----Original Message-----
From: Peter J Milanese [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 7:32 AM
To: Donny Simonton
Cc: 'Jigal van Hemert'; [EMAIL PROTECTED]
Subject: RE: mysql 1gb memory limit?

Yes. There's a limit. Start mysql with --big-tables. I think there's a finer way of doing it, just don't remember what it was ;)

P

Donny Simonton [EMAIL PROTECTED] 03/09/2004 08:00 AM
To: 'Jigal van Hemert' [EMAIL PROTECTED], [EMAIL PROTECTED]
cc:
Subject: RE: mysql 1gb memory limit?

I've got a server with 24 gigs in it and it works just fine, about three more with 8 gigs, and a few with 2 gigs. All are running 4.1.1, and all of them run without any problems on Xeons with hyperthreading.

Donny

-----Original Message-----
From: Jigal van Hemert [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 4:39 AM
To: [EMAIL PROTECTED]
Subject: mysql 1gb memory limit?

In order to get the maximum performance from our servers we're reading every bit we can lay our hands on about performance tuning. This document (in German):

http://www.phpconference.de/2003/slides/business_track/kneschke_webserver-performance-tuning.pdf

says on page 9 that MySQL 4.1.x on 32-bit Intel Linux can only use 1GB!?!? Is this true? Any way to overcome this limit?

Regards,
Jigal.
RE: mysql 1gb memory limit?
Peter,

There is no 1 gig limit that I am aware of. I have been using MySQL 4.1 since the day it was released; when 4.1.1 came out I switched about half of our machines to it, and when 4.1.2 comes out in the next week or so, I will switch our stuff that is on 4.1.x to that as well. We are using Fedora Core 1 and we don't have a memory limit at all. Linux does have a 2-gig memory limit per process or thread, but MySQL can definitely use more than 1 gig of memory. If it couldn't, I wouldn't be using it.

Donny

-----Original Message-----
From: Peter J Milanese [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 9:06 AM
To: Donny Simonton
Cc: 'Jigal van Hemert'; [EMAIL PROTECTED]
Subject: RE: mysql 1gb memory limit?

Donny-

While I appreciate your bluntness, I did have this issue a while back with 4.1.x. In your email, the reference to big-tables stated that it prevents "table full" errors. That would indicate that it does affect limits, or works around them, likely a limit set by temporary tables (in memory).

The other thing that I did was increase block sizes on the filesystem storing the data. I set it to allow 2TB file sizes (16TB filesystem), as my largest table pushes about 1/4 of that. The 32-bit 2.4.x Linux kernel defaults to a 2TB maximum, so that's the other limit you'd have to deal with (with LVM).

Back to the point: the 1-gig limit stated in the initial email can be overcome. Things you have to keep in mind are which OS to choose, which architecture, and the underlying filesystem.

P

Donny Simonton [EMAIL PROTECTED] 03/09/2004 09:09 AM
To: 'Peter J Milanese' [EMAIL PROTECTED]
cc: 'Jigal van Hemert' [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: RE: mysql 1gb memory limit?

Peter,

I have never heard of such a limit, and I have been using it for a while. And --big-tables (which, BTW, you can't easily search for on mysql.com because of the minimum of 4 characters in full-text indexing) says this:

--big-tables
Allow large result sets by saving all temporary sets on file.
This option prevents most "table full" errors, but also slows down queries for which in-memory tables would suffice. Since Version 3.23.2, MySQL is able to handle large result sets automatically by using memory for small temporary tables and switching to disk tables where necessary.

So --big-tables has nothing to do with any limit.

Donny

-----Original Message-----
From: Peter J Milanese [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 7:32 AM
To: Donny Simonton
Cc: 'Jigal van Hemert'; [EMAIL PROTECTED]
Subject: RE: mysql 1gb memory limit?

Yes. There's a limit. Start mysql with --big-tables. I think there's a finer way of doing it, just don't remember what it was ;)

P

Donny Simonton [EMAIL PROTECTED] 03/09/2004 08:00 AM
To: 'Jigal van Hemert' [EMAIL PROTECTED], [EMAIL PROTECTED]
cc:
Subject: RE: mysql 1gb memory limit?

I've got a server with 24 gigs in it and it works just fine, about three more with 8 gigs, and a few with 2 gigs. All are running 4.1.1, and all of them run without any problems on Xeons with hyperthreading.

Donny

-----Original Message-----
From: Jigal van Hemert [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 4:39 AM
To: [EMAIL PROTECTED]
Subject: mysql 1gb memory limit?

In order to get the maximum performance from our servers we're reading every bit we can lay our hands on about performance tuning. This document (in German):

http://www.phpconference.de/2003/slides/business_track/kneschke_webserver-performance-tuning.pdf

says on page 9 that MySQL 4.1.x on 32-bit Intel Linux can only use 1GB!?!? Is this true? Any way to overcome this limit?

Regards,
Jigal.
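As a practical footnote to the thread above: how much memory mysqld actually uses is governed by its buffer settings rather than any hard-coded 1-gig cap, and a quick way to see what a 4.x server is configured with is to inspect the relevant variables (standard variable names; this assumes nothing beyond a running server):

```sql
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'sort_buffer%';
```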
RE: subquery and order by
I'd never actually tried that before; it definitely seems like a bug to me.

SELECT * FROM WordScoreTemp WHERE word = ( SELECT word FROM Word WHERE word = 'mysql' )

The above works fine.

SELECT * FROM WordScoreTemp WHERE word = ( SELECT word FROM Word WHERE word = 'mysql' ) ORDER BY score;

does not.

Donny

-----Original Message-----
From: van der Scheun, Willem (GXS) [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 10:27 AM
To: '[EMAIL PROTECTED]'
Subject: subquery and order by

Hi,

I'm new to MySQL; I just installed 4.1.1 and ran into trouble combining a subquery and ORDER BY. I guess the three queries below show my problem.

mysql> SELECT host,facility,priority,date,time FROM logs WHERE host=(select ip from hostip where host='ams602.avctr.gxs.com');
+---------------+----------+----------+------------+----------+
| host          | facility | priority | date       | time     |
+---------------+----------+----------+------------+----------+
| 204.90.248.18 | auth     | info     | 2004-03-06 | 02:00:03 |
| 204.90.248.18 | auth     | info     | 2004-03-09 | 02:00:03 |
| 204.90.248.18 | mail     | info     | 2004-03-09 | 04:15:08 |
| 204.90.248.18 | mail     | info     | 2004-03-09 | 04:15:09 |
| 204.90.248.18 | auth     | info     | 2004-03-09 | 09:17:26 |
+---------------+----------+----------+------------+----------+
5 rows in set (0.01 sec)

mysql> SELECT host,facility,priority,date,time FROM logs WHERE host=(select ip from hostip where host='ams602.avctr.gxs.com') order by date;
Empty set (0.00 sec)

mysql> SELECT host,facility,priority,date,time FROM logs WHERE host=(select ip from hostip where host='ams602.avctr.gxs.com') and facility='auth' order by date;
+---------------+----------+----------+------------+----------+
| host          | facility | priority | date       | time     |
+---------------+----------+----------+------------+----------+
| 204.90.248.18 | auth     | info     | 2004-03-06 | 02:00:03 |
| 204.90.248.18 | auth     | info     | 2004-03-09 | 02:00:03 |
| 204.90.248.18 | auth     | info     | 2004-03-09 | 09:17:26 |
+---------------+----------+----------+------------+----------+
3 rows in set (0.01 sec)

The first query selects a few records from a syslog database where entries are stored with the IP address, but which I want to search using the hostname.
The second query is supposed to order the output by date but, to my big surprise, does not give any results. When I extend the query with a condition on a second field and do the ORDER BY date, I do get a result. Am I missing something here?

Thanks,
Willem
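Until the subquery behavior Donny confirms above is fixed, one common workaround is to rewrite the scalar subquery as a join. A sketch using Willem's table and column names (it assumes each hostname maps to a single ip row in hostip; with multiple matches the join returns the union of them):

```sql
-- Workaround sketch: express the lookup as a join instead of a
-- scalar subquery, which sidesteps the subquery + ORDER BY problem.
SELECT l.host, l.facility, l.priority, l.date, l.time
FROM logs l, hostip h
WHERE l.host = h.ip
  AND h.host = 'ams602.avctr.gxs.com'
ORDER BY l.date;
```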
RE: Optimizing Queries
Chris,

Is it faster if you remove the IS NOT NULL checks? I know that's not the result you want, but we have found that IS NOT NULL will do a full scan. We normally use it with a join, though; since you are using one table, I'm not sure how it would affect it.

Donny

-----Original Message-----
From: Chris Fossenier [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 10:38 AM
To: [EMAIL PROTECTED]
Subject: Optimizing Queries

Hello,

I'm trying to determine the best way to optimize the query below. Right now it is taking around 9 minutes and we need it to take no more than 30 seconds (we can get it under 30s on MS SQL):

explain select count(distinct(phone)) as TOTAL
FROM speedlink
WHERE county IN ('247','085','145','285','215','211')
AND state = 'GA'
AND ( homeowner = 'Y'
   OR probable_homeowner IN ('8','9')
   OR homeowner_probability_model BETWEEN '080' AND '102' )
AND phone IS NOT NULL
AND first IS NOT NULL
AND last IS NOT NULL

           id: 1
  select_type: SIMPLE
        table: speedlink
         type: ref
possible_keys: idx_state,idx_county,idx_phone,idx_homeowner,idx_hpm,idx_ph,idx_first,idx_last
          key: idx_state
      key_len: 3
          ref: const
         rows: 2840162
        Extra: Using where
1 row in set (0.00 sec)

Here are some of my thoughts on what could be done to speed it up, but I haven't implemented these yet:

1) Make the phone field UNIQUE on data load. This would reduce the data for other queries to be run, but maybe it makes sense to have a few table sets.
2) Change the numeric fields from varchars to ints, smallints, or something like that.
3) Not sure if NULL values are slower or faster than using a comparison with ' '; interested in feedback.
4) Split the table into multiple files? I'm not sure how to do this but have seen it mentioned in some articles.
Some information on the table:
- ISAM
- 120 million rows
- 26 fields in total
- 23 fields indexed (all fields in the above query are indexed)
- speedlink.MYD is 12GB, speedlink.MYI is 24GB

Some info on the server:
- Quad Xeon 900MHz
- 4GB RAM
- DB is stored on an EMC Symmetrix storage system (fibre channel SAN)

Any/all assistance is appreciated.

Thanks.
Chris.
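One direction consistent with Donny's reply and the EXPLAIN above (the optimizer picks idx_state alone and then checks roughly 2.8 million rows against the rest of the WHERE clause) is a composite index covering the equality and IN-list columns together. The index name is made up for illustration, and whether it pays off depends on the data's selectivity:

```sql
-- Hypothetical composite index: equality column first, then the
-- IN-list column, so MySQL can narrow by state and county before
-- applying the remaining filters row by row.
ALTER TABLE speedlink ADD INDEX idx_state_county (state, county);
```

Rewriting the three OR'd homeowner conditions as a UNION of separately indexable queries is another technique from this era worth testing.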
RE: mysql 1gb memory limit?
If you turn on highmem support you can easily get to 24 gigs of memory. I think with 2.6 it's up to 64 gigs, and there are even patches for 512 gigs. Not sure if those patches work, but the highmem support works just fine.

Donny

-----Original Message-----
From: Igor Dorovskoy [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 1:25 PM
To: 'Donny Simonton'; 'Jigal van Hemert'; [EMAIL PROTECTED]
Subject: RE: mysql 1gb memory limit?

Hi Donny,

What server hardware do you use to support 24G of RAM beyond the 32-bit limit? Please advise your choice of hardware and software configuration, and how long and how reliably it has worked for you on hyperthreaded Xeons.

Best regards,
Igor
ua3qrz

-----Original Message-----
From: Donny Simonton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 8:00 AM
To: 'Jigal van Hemert'; [EMAIL PROTECTED]
Subject: RE: mysql 1gb memory limit?

I've got a server with 24 gigs in it and it works just fine, about three more with 8 gigs, and a few with 2 gigs. All are running 4.1.1, and all of them run without any problems on Xeons with hyperthreading.

Donny

-----Original Message-----
From: Jigal van Hemert [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 09, 2004 4:39 AM
To: [EMAIL PROTECTED]
Subject: mysql 1gb memory limit?

In order to get the maximum performance from our servers we're reading every bit we can lay our hands on about performance tuning. This document (in German):

http://www.phpconference.de/2003/slides/business_track/kneschke_webserver-performance-tuning.pdf

says on page 9 that MySQL 4.1.x on 32-bit Intel Linux can only use 1GB!?!? Is this true? Any way to overcome this limit?

Regards,
Jigal.