RE: Need older version of mysql (current version seemingly corrupts FTS tables)
> See: http://downloads.mysql.com/archives.php

Thank you. Nice link to have around.

> Key 3 is the FTS key. The others are a UNIQUE KEY (#1) and a KEY (#2).
> Do you have the same values for full-text parameters (ft_min_word_len, for example)?

Not at first. I had noticed that not long after I sent my message, and I rebuilt all the tables with myisamchk -rf *.MYI. That eliminated the "Duplicate entry" errors, but not the "Incorrect key file... Try to repair it" error. I did it again with 4.0.18 and got the same error. :( Next I ran myisamchk -o *.MYI to be really sure, and I even used the 4.0.18 binary. Then I tried running 4.0.23 with debugging turned on (somewhat) and still got the error. I'll try again with 4.0.18 (or .17) with debugging and see if it hits the same error at the same place.

Here is what I had in the trace file (note that the query is a REPLACE command that is often used to update an entry). Do I need more fine-grained debug info to find the error?

mysql_change_db: info: Use database: search
do_command: info: Command on TCP/IP (9) = 3 (Query)
dispatch_command: query: replace into forums_posts_1239959 (forumid,messageid,parent,rootmessageid,deleted,deleted_marked,approved,autorespond,loginid,ip,user_id,author,email,title,message,search_forumid) values (2255626,1108700026,0,1108700026,'no','no','yes','no',1524436,1100232325,'1cc48d0a485629a91e2b5634c122a339','[EMAIL PROTECTED]','[EMAIL PROTECTED]','Message text deleted for privacy, but I can send it if needed','fid2255626')
thr_lock: info: write_wait.data: 0x0
mi_get_status: info: key_file: 28418048 data_file: 39236852
mi_write: error: Got error: 121 on write
_mi_writeinfo: info: operation: 1 tot_locks: 1
_mi_ck_delete: info: root_page: 19973120
_mi_prefix_search: info: key: '004'
_mi_prefix_search: info: key: '004'
_mi_prefix_search: info: key: '004'
_mi_prefix_search: info: key: '004'
d_search: error: Didn't find key
mi_update: error: key: 2 errno: 126
_mi_writeinfo: info: operation: 1 tot_locks: 1
my_message_sql: error: Message: 'Incorrect key file for table: 'forums_posts_1239959'. Try to repair it'
thr_unlock: info: updating status: key_file: 28418048 data_file: 39236852
mi_lock_database: info: changed: 1 w_locks: 0

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Need older version of mysql (current version seemingly corrupts FTS tables)
I have replaced one server with another; the new one has everything new (RHEL 3, newest updates) and MySQL 4.0.23 (the old one was RH9 and MySQL 4.0.18). We now get table corruption constantly (it only takes a minute before several tables get marked as crashed). I'd like to revert to 4.0.18 (which I stopped updating after having some other issue, which I can no longer remember). Where can I download it?

Anyhow, with the errors, I get these in my application's log (mysql does not log any errors):

Duplicate entry 'Some text here ---f' for key 3
Incorrect key file for table: 'table_messages_1'. Try to repair it

Key 3 is the FTS key. The others are a UNIQUE KEY (#1) and a KEY (#2). I'd like to make sure it is not a mysql version issue. I have seen similar behavior in the 4.1 series, and don't want to try it now (and do all the table conversions -- possibly each way if it fails). Thanks!

-steve-
RE: Help with query performance anomaly
For production systems, I would never let the mysql optimizer guess a query plan when there are joins of big tables and you know exactly how it should behave. Once you think a query is finished, you should optimize it yourself. Use STRAIGHT_JOIN and USE INDEX, as described in the manual:

http://dev.mysql.com/doc/mysql/en/JOIN.html
"STRAIGHT_JOIN is identical to JOIN, except that the left table is always read before the right table. This can be used for those (few) cases for which the join optimizer puts the tables in the wrong order."

http://dev.mysql.com/doc/mysql/en/SELECT.html
"The use of USE INDEX, IGNORE INDEX, FORCE INDEX to give the optimizer hints about how to choose indexes is described in section 14.1.7.1, JOIN Syntax."

-steve-
RE: Upgrading MySQL erased all data
Thanks. I passed this on and he found what was lost. I guess since the data directory was owned by mysql, he could not find the databases when doing a MacOS file search. I impressed upon him to use a separate datadir, as we do with our servers, thus bypassing this whole thing. Thanks again!

-steve-
Upgrading MySQL erased all data
Installing MySQL 4.1.7 (upgrading from 4.1.3) on MacOS X erased the contents of /usr/local/mysql/data -- the privileges and data of the previous installation. FYI.

Luckily (and unfortunately) we have a backup of that database from last week. (The guy who did it here in the office is still in a bit of a frenzy, though.)

-s
RE: Problems with MySQL 4.0.20
> Thank you very much for your bug report! And sorry if I doubted your report at the beginning; I hadn't thought of the rpm script.

No problem. I sometimes get bug reports that I know are impossible! Yet they weren't. This one I would barely have noticed if it had not knocked all the slaves offline.

(mysql, query -- just in case.)

-steve-
RE: Problems with MySQL 4.0.20
> We start mysql with 'service mysql start' (we install from the RPM for Linux). I've never seen mysql create binlog files under the name root before, and after reverting to an old version, it doesn't again. It created a big mess, with all the slaves stuck at the end of an older binlog, not advancing to the next one, and complaining about corruption. Unfortunately, I don't have the contents of the log (I think the size of the file was 79 bytes), since a script here checks that all the slaves are at a certain point and then deletes the logs on the master.
>
> Log:
>
> 040519 17:53:41 mysqld started
> /usr/sbin/mysqld: ready for connections.
> Version: '4.0.18-standard-log' socket: '/tmp/mysql.sock' port: 3306
> 040520 16:58:54 /usr/sbin/mysqld: Normal shutdown
> 040520 16:58:56 /usr/sbin/mysqld: Shutdown Complete
> 040520 16:58:56 mysqld ended
> 040520 16:59:10 mysqld started
> 040520 16:59:10 Warning: Asked for 196608 thread stack, but got 126976
> /usr/sbin/mysqld: ready for connections.
> Version: '4.0.20-standard-log' socket: '/tmp/mysql.sock' port: 3306
> 040520 16:59:14 Failed to open log (file '/binlogs/binlog.032', errno 13)
> 040520 16:59:34 Aborted connection 134 to db: 'db' user: 'aaa' host: `something.i' (Got an error writing communication packets)
> 040520 16:59:36 Aborted connection 544 to db: 'db' user: 'aaa' host: `something.i' (Got an error writing communication packets)
> 040520 16:59:36 Aborted connection 541 to db: 'db' user: 'aaa' host: `something.i' (Got an error writing communication packets)

Binary logs are created by the mysqld daemon (after mysqld possibly changes to the uid of 'mysql' if --user=mysql was used). So in any case, if mysqld is running as user mysql (no matter whether it was 'mysql' that started mysqld or 'root' that ran 'mysqld --user=mysql'), the binary logs are created by 'mysql'.

If you have some binary logs created by 'root', it means mysqld was run as 'root'; this is what you should really check. If you can provide us with the way you started mysqld ('service mysql start', whatever) and a listing of 'ps -elf | grep mysqld', we may be able to check whether it is a MySQL bug -- but that is quite unlikely, from the above reasoning. Thank you!

--
Mr. Guilhem Bichot [EMAIL PROTECTED]
MySQL AB, Full-Time Software Developer
Bordeaux, France  www.mysql.com
RE: Problems with MySQL 4.0.20
> Hmm, I don't see any changes in ft-related files since 4.0.18 that could cause it (there were bugfixes, but they affect only *searching* -- that is, MATCH -- and not *updating*). Can you create a test case?

Well, I put up a file in the secret folder a few days ago, as referenced in a bug report: http://bugs.mysql.com/?id=3870

There is a select statement in the log file that crashes the server. I put the files up and posted the bug from a remote computer, so I couldn't write much about it at the time. The table is fine according to 'check table the_table_name'. The select crashes it. The select also crashes it in older versions of mysql! Doing a repair in the old version and then doing the select in the old version is OK. That is why I came to the conclusion that the file is corrupt. CHECK TABLE does not find the corruption, however.

Another note on this: the tables I had the most problems with had FTS indices. I can't say it is more than coincidental just yet; I can't conclude that it is a cause-and-effect relationship at this time. Even returning to the older versions of mysql is not getting rid of all our problems (we are seeing extremely high loads on the same stream of queries as usual). Selectively repairing tables has helped. It may be that it is not FTS-related and we should repair all tables. We are going to try that tonight.
Problems with MySQL 4.0.20
We had some servers that were upgraded from 4.0.17/18 to 4.0.20 and had several problems thereafter:

1. Tables with FTS indices became corrupted, with queries on them causing segfaults on the servers.

2. Binlog files were being created with ownership of root, not mysql. MySQL then complains that it cannot read the file and goes and creates another (which is fine and owned by mysql). All slaves to the master then die with corruption warnings about the master.

3. All servers suddenly have a lot of connection errors:
Aborted connection 109 to db: 'xyz' user: 'aaa' host: `something.i' (Got timeout reading communication packets)

4. Thread stack warnings:
Warning: Asked for 196608 thread stack, but got 126976

Reverting to 4.0.17/18 fixed everything, except one server still reports #4 (better than all servers reporting it). All FTS tables needed to be repaired (using the older version -- I didn't test or trust the newer one).

-steve-
thread stack issues
Since going from 4.0.18 to 4.0.20 (or 4.0.19), I now receive these warnings on startup:

040520 14:55:21 mysqld started
040520 14:55:21 Warning: Asked for 196608 thread stack, but got 126976
/usr/sbin/mysqld: ready for connections.
Version: '4.0.20-standard' socket: '/tmp/mysql.sock' port: 3306

I noticed on another server that it had the same problem with v4.0.18. So some servers have a problem with this version and others do not. All have the warning with 4.0.20. They are configured differently. What configuration options would be affecting this?

-steve-
Quirk with max()
I've seen a quirk in Mysql behavior over the years when dealing with max(). In a query such as this:

select max(somecol) from sometbl where id=# and otherthing=#

(the index is on id, but not on otherthing) we see the query run just fine (0.x seconds) almost all of the time. But about 30 times a day it will take 4-20 seconds to run. It is not a common query, and it is only slow occasionally, but it accounts for 100% of the slow queries logged. If I change the query to this, it never shows up in the slow query log:

select somecol from sometbl where id=# and otherthing=# order by 1 desc limit 1

It has behaved like this for at least 5 years, in all versions I have had experience with, up to and including 4.0.18. Someone else here had changed the query to use max(), and that caused the slow query log to have data, which, despite half a billion queries a day, should never happen. The above workaround still works, so it is no big deal. I am a little surprised that this still happens, though, and am curious whether anyone else has seen this as well, or is it just me?

-steve-
Re: Feature request related to COMPRESS and UNCOMPRESS functions
> Here is the background: Anyone that is running a huge system like MARC, with millions of uncompressed blob records in huge tables, needs to be able to migrate, in real time and without downtime, to compressed blobs. Therefore, we need a way to know whether a given field is compressed or not.

I hear you on that! We did the compression on the application end. When we started compressing, all of the blobs in the table were uncompressed except newly added ones. We took advantage of the fact that zlib fails on decompressing plain data. So we wrote a function my_decompress() that takes the blob, tries to decompress it, and if that fails just returns the original (assumed to be already uncompressed). It works great, and decompression gets divided among the webservers, which scales better than having MySQL do it.

However, you should develop a way to take tables offline. Lack of proper table maintenance can slow things down by a factor of 10 or more (and it is one of the reasons we cannot use InnoDB).

-steve-
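The try-to-decompress-and-fall-back idea above can be sketched like this (a minimal Python sketch; `my_decompress` here is a stand-in for the application-side helper described in the post, not a MySQL function):

```python
import zlib

def my_decompress(blob: bytes) -> bytes:
    """Return the decompressed blob; if zlib rejects it, the blob
    was stored uncompressed, so return it unchanged."""
    try:
        return zlib.decompress(blob)
    except zlib.error:
        return blob
```

This lets old uncompressed rows and newly compressed rows coexist in the same table while the migration proceeds, since every read goes through the same function.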
Re: [4.1.1] 1062 errors on non-unique index during data load
I saw something like this as well. Using 4.1.2 made it go away. Try doing a bk pull of the dev version of 4.1.2 and give it a go.

-steve-
RE: Advise on High Availability configuration
I am wary of something so 'do it yourself'. Have you looked at Red Hat's clustering solution?

http://www.redhat.com/software/rha/cluster/
http://www.redhat.com/software/rha/cluster/manager/

I don't think it has any issue with InnoDB, key buffers, etc. I believe this solution works best for failover situations. Also, if you have machine A doing A-type work and B doing B-type work, then if one goes down, the other will do both A-type and B-type work until the downed machine comes back up. I think if the A-type and B-type work are both MySQL, you may have to use different ports for connections and use skip-name-resolve (and set the names of error files, binlog files, etc.) in my.cnf to eliminate issues with moving between machines.

Also, there is Veritas Cluster Server, which has a MySQL module.

Emic has load balancing as well as failover, offering these items:
i) online backup capability
ii) dynamic load balancing
iii) fault management with fast failovers
iv) high availability
v) performance scalability with each added server node
vi) no requirement for a shared SCSI RAID array

http://www.emicnetworks.com/
http://www.emicnetworks.com/products/mysql.html

Lastly, there is MySQL itself, which bought a company to add clustering:
http://www.mysql.com/press/release_2003_30.html

I have not used any of them, though I am evaluating all of them at the moment.

-steve-
RE: 4.1.1 FTS 2-level?
Thanks for the additional information. When 4.1.2 comes out, I'll give it a test and return with some stats on real-world result times (for my data set, at least).

-steve-
Re: Spatial Extension in MySQL 4.1.1-alpha
You did an insert this way:

mysql> insert into geom values (GeomFromText('POINT(1,1)'));

and expected results this way:

mysql> select AsText(g) from geom;
+------------+
| AsText(g)  |
+------------+
| POINT(1 1) |
+------------+
1 row in set (0.00 sec)

The formatting of the POINT coordinates is different. I think you should be inserting POINT(1 1), not POINT(1,1). It is the comma.

-steve-
4.1.1 FTS 2-level?
Does MySQL 4.1.1 have the two-level index system integrated into it for full-text searches? Thanks. :)

-steve-
2,000,000 rows in FULLTEXT
> * MATCH ... AGAINST() in natural language mode now treats words that are present in more than 2,000,000 rows as stopwords.

Arg! Is this really true only for natural-language mode? It seems to be doing it for boolean mode too. Does this change have anything to do with index creation? That is, could I downgrade back a version and have the index still work OK?

Sincerely,
Steven Roussey
http://Network54.com/
Re: Suboptimal index-usage with FULLTEXT-search
> > The WHERE clause for these searches sometimes is a combination of different columns in the table, and I have noticed some very sub-optimal index usage.
>
> Adding correct estimations for MATCH is in the todo. Until it's done, you can use USE INDEX/IGNORE INDEX as a simple workaround.

FORCE INDEX works in this case. I don't think USE INDEX does, since the FTS index always reports one match, and that overrides the USE INDEX hint. However, at this point the FTS index is not used at all.

If the original poster is always doing a query where the other columns narrow down the matches significantly, then the optimizer would decide never to really use the fulltext index. Say, in the fictional case of a table of messages that belong to a million different forums: when searching inside one forum, the fulltext index would likely always report more matches than the key on the forumid.

This won't really get fixed unless you can have a composite index of normal and fulltext indexes. Even tsearch2, which is coming in Postgres 7.4, doesn't let you do a composite index. :( It does have some nice language configuration stuff (for example, you can tell it that text is to be parsed as English, and it will then send it to an English stemmer and an English stopword list -- I don't know if you can preprocess it).

There is a very ugly workaround for this: you could fake a composite index. Assuming that you create a separate copied table for FTS, scan all the words in each document and prefix them with the value of the other column. First drop all the one- and two-character words (and the stopwords) yourself. Then, instead of "I wanted to say hello", insert something like this as the message:

f31334wanted f31334say f31334hello

Do the same with the search terms, and suddenly things work a lot faster. And uglier...

Sincerely,
Steven Roussey
http://Network54.com/
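The fake-composite-index trick above can be sketched in Python (a hedged illustration of the transform described in the post; the stopword set and the `f<forumid><word>` prefix format are assumptions for the example, not anything MySQL does itself):

```python
# Words to skip entirely, in addition to anything shorter than 3 chars.
STOPWORDS = {"the", "and", "for", "that"}

def fake_fulltext_row(forum_id: int, message: str) -> str:
    """Build the text stored in the shadow FTS table: drop one- and
    two-character words and stopwords, then prefix each remaining word
    with the forum id so MATCH can only find same-forum rows."""
    keep = []
    for word in message.lower().split():
        if len(word) <= 2 or word in STOPWORDS:
            continue
        keep.append(f"f{forum_id}{word}")
    return " ".join(keep)
```

For example, `fake_fulltext_row(31334, "I wanted to say hello")` produces "f31334wanted f31334say f31334hello"; the same transform must be applied to the user's search terms before running MATCH ... AGAINST.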
RE: InnoDB and lots of UPDATES
> Use a transaction:
>
> begin
> update ...
> update ...
> ...
> update ...
> commit;
>
> This way you will only have a sync to disk at each commit instead of at every update.

This won't help -- I'm not doing a batch process. Each update comes from a different connection...

--steve-
Re: MySQL 4.0.14 stops responding to PHP 4.3.2
> No, it turns out this is not the key. With mysql_connect() I'm actually failing MORE often than with mysql_pconnect() -- so far it hasn't stayed up 15 minutes without error. (Fortunately, I have a cron job checking on it and restarting.)

After the failed connection attempt, there will be an error message you can get from mysql_error(). What is it?

Also, rather than restarting, will mysqladmin flush-hosts fix it as well? (We have to run flush-hosts every hour in a cron job. I never bothered to figure out the root cause, as this fixes things.)

--steve-
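For reference, the hourly flush-hosts workaround mentioned above is just a crontab entry along these lines (a sketch; the mysqladmin path may differ, and this assumes credentials are supplied via the invoking user's ~/.my.cnf):

```
# min hour dom mon dow  command
0 * * * *  /usr/bin/mysqladmin flush-hosts
```

This clears the host cache that blocks clients after too many connection errors, without restarting the server.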
InnoDB and lots of UPDATES
I have a question about InnoDB and how it would handle updates on the order of 3,000-5,000 a second. The UPDATEs each update a single record via a primary key. With MyISAM, MySQL takes a table lock, thus serializing the updates. There are a few selects too, though a couple of orders of magnitude less often. The table locks have the potential to cause problems at this volume. So, to avoid the table locks, I have considered using InnoDB. However, it syncs to disk after every UPDATE, and I don't think that will work. The disk is a RAID 10 array of six 15K drives (3x2). Any suggestions?

--steve-
RE: Lots of FULLTEXT stuff (suggestions)
> Thanks for replying. Your posts that I've found when searching for FULLTEXT information have had great ideas. :-) Searching millions of posts efficiently and effectively isn't easy. :-(

Heh. FULLTEXT does not scale very well once the files get bigger than your RAM. The redesign of the index, where it gets normalized, will help quite a bit in reducing the size of the files. For large tables, it will help immensely.

> Most 1-3 letter words that you don't want indexed should be stopwords anyway, right? So why NOT index the ones that are left? Doesn't seem like it'd make the index much larger to me. BTW, what is your min_word_len value?

I haven't really thought about it, although I don't see any value in one-letter words (or numbers). I use min_word_len=3 and my own stop list, which is a merge of stopwords in many languages. I made both changes at the same time and ended up with a slightly smaller index.

--steve-
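For reference, the two settings mentioned above go in my.cnf something like this (a sketch; the stopword file path is hypothetical, and after changing either variable the FULLTEXT indexes have to be rebuilt, e.g. with REPAIR TABLE ... QUICK):

```ini
[mysqld]
# index only words of three or more characters
ft_min_word_len = 3
# merged multi-language stopword list (hypothetical path)
ft_stopword_file = /etc/mysql/stopwords.txt
```

Both are server-wide settings, so they apply to every FULLTEXT index on the server.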
RE: Lots of FULLTEXT stuff (suggestions)
> STEMMING! (controlled more finely than server level, I hope), multi-byte character set support, proximity operators. Anything to get it closer to Verity's full-text functionality. ;-)

Yes, all these things would be nice... :)

> And the FULLTEXT index shouldn't always be chosen for non-const join types when another index would find fewer rows first.

The short answer is that it doesn't work that way (also, I think this is why there are no composite indexes between integer and fulltext indexes). The two systems don't know anything about each other.

> Also, are the current MySQL versions using the 2-level full-text index format yet? I'm thinking not?

No. MySQL 4.1.0 has some low-level support for this, but FTS needs to be altered (quite a bit, I'd guess) to use it. So there is hope that it will come in the 4.1.x line, but no guarantee.

> In November 2001, he said the new .frm format would be here this year. It's been almost 2 years since then, so when is it due? ;-/

I think it was pushed back to version 5.1. I'd figure another two years.

--steve-
RE: Slow results with simple, well-indexed query
> Here's the CREATEs, somewhat edited to remove parts not relevant to this discussion, to save space:

I never actually looked at your JOIN statement more than a quick glimpse, but I will (though not just right now). Before I do, can you try this (I still don't have the data, or I'd play with it myself):

mysql> EXPLAIN SELECT cg.cw FROM cg USE INDEX(q_id_2),q,cit,sref
    -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id
    -> AND cg.cw LIKE 't%'
    -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
    -> ORDER BY cg.cw
    -> LIMIT 1000,10;

I wanted it to use that new index, and it chose not to, so we can show it the way. (You can also try FORCE INDEX instead of USE INDEX.) In the meantime, I'll actually read over your table descriptions and the JOIN.

--steve-
RE: Slow results with simple, well-indexed query
> All the indexes were single indexes, partly because I haven't yet made the effort to understand composite indexes. I guess it's time ;-).

Oh. There are better places to start than this list. ;) The manual can be a great starting place, and several people on this list have written books about MySQL which are great for getting started. You can look at Amazon.com, etc.

> mysql> EXPLAIN SELECT cg.cw FROM cg,q,cit,sref
>     -> WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id
>     -> AND cg.cw LIKE 't%'
>     -> AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
>     -> ORDER BY cg.cw
>     -> LIMIT 1000,10;

You do a range on the first and last tables in the chain. :( It is best to normalize your table structure. I should have noticed that up front; the 'simple, well-indexed query' threw me off. Normalizing is a great thing to learn, and probably the first thing to understand after how to do a SELECT and composite indexes.

> What does this mean for regular searching? In most cases, there will be some criteria entered that need to be searched on, and the id fields will also be needed for the joins. For example, in the database, one might want to search based on cg.exp (fulltext), sref.rdr, sref.cd (the date field), sref.kbd, cit.w, and various other ones I've edited out of this display to save space, and often a combination of several of these at once. How should I set up indexes for the potential searches that might be executed? (I should mention that this is a read-only database; it's built from a parsed SGML file and is never added to directly, if that's an issue.)

Well, I would rewrite the table design. :) It looks like it is taking its structure from the SGML format. Personally, I'd use that as an intermediate format to populate another set of tables in normal form. Again, a book will help here. Sorry I can't be of more help. At least we went from 1 min 15 sec to 4.05 sec.

--steve-
RE: Slow results with simple, well-indexed query
Hmmm, just in case you can't change the table layout... Run this through MySQL. First I get rid of the other index I made, then add chained indexes so there is no need for data file lookup. Also, one direction of the query table join chain was not always using the indexes for the where. One direction is preferable (not knowing what the data is) since then we can use the index for the sort, but the other direction may have benefits that outweigh that, so that is why we used the composite index I last suggested. Anyhow, just to be clear, lets force all composite indexes for this query in both direction and force the optimizer to use both directions and see what we get (and sorry if I misspell or something, since I don't have your DB to check against). Run this and send back the results: # Get rid of the index I added before ALTER TABLE cg DROP INDEX q_id_2; # Get a baseline for direction 1 SELECT STRAIGHT_JOIN cg.cw FROM cg,q,cit,sref WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; EXPLAIN SELECT STRAIGHT_JOIN cg.cw FROM cg,q,cit,sref WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; # Get a baseline for direction 2 SELECT STRAIGHT_JOIN cg.cw FROM sref,cit,q,cg WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; EXPLAIN SELECT STRAIGHT_JOIN cg.cw FROM sref,cit,q,cg WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; # add indexes ALTER TABLE cg ADD INDEX dir1(cw,q_id), ADD INDEX dir2(q_id,cw); ALTER TABLE q ADD INDEX dir1(id,cit_id), ADD INDEX dir2(cit_id,id); ALTER TABLE cit ADD INDEX 
dir1(id,sref_id), ADD INDEX dir2(sref_id,id); ALTER TABLE sref ADD INDEX dir1(id,cd), ADD INDEX dir2(cd,id); # Get a new result for direction 1 SELECT STRAIGHT_JOIN cg.cw FROM cg USE INDEX(dir1), q USE INDEX(dir1), cit USE INDEX(dir1), sref USE INDEX(dir1) WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; EXPLAIN SELECT STRAIGHT_JOIN cg.cw FROM cg USE INDEX(dir1), q USE INDEX(dir1), cit USE INDEX(dir1), sref USE INDEX(dir1) WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; # Get a new result for direction 2 SELECT STRAIGHT_JOIN cg.cw FROM sref USE INDEX(dir2), cit USE INDEX(dir2), q USE INDEX(dir2), cg USE INDEX(dir2) WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; EXPLAIN SELECT STRAIGHT_JOIN cg.cw FROM sref USE INDEX(dir2), cit USE INDEX(dir2), q USE INDEX(dir2), cg USE INDEX(dir2) WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id AND cg.cw LIKE 't%' AND sref.cd DATE_SUB(CURDATE(), INTERVAL 6 MONTH) ORDER BY cg.cw LIMIT 1000,10; # Send results back via email!! -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Slow results with simple, well-indexed query
After looking over your results, I would keep the dir1 index, at least on the first and last tables. But since this data is read-only, why not reformulate the data for the queries you are going to make? This is the opposite of normalizing; it requires more disk space and is not flexible, but it will be fast. Of course, it depends on what you are doing and how many types of queries you have. This 'normalize by queries' approach, or what I refer to as 'selective denormalization', likely won't appear in any books. It is a technique I was taught by DBAs who have been doing this sort of thing for Fortune 100 companies for decades. I use it only rarely, and only where the data is needed in real time and the structure of the data and its quantity would not normally return results so quickly. It would likely work in your case also.

All the best,
--steve-

PS: Get a big, fast SCSI RAID 10 array of 15K drives (stripe the mirrors). Actually, since this data is read-only and copies are stored elsewhere, you could get by with RAID 1. I'm guessing you are being held back by the disk or your memory buffers.
Oh, and just for fun:

# Order the files
ALTER TABLE cg ORDER BY dir1;
ALTER TABLE q ORDER BY dir1;
ALTER TABLE cit ORDER BY dir1;
ALTER TABLE sref ORDER BY dir1;

# Get a new result for direction 1
SELECT STRAIGHT_JOIN cg.cw
  FROM cg USE INDEX(dir1), q USE INDEX(dir1), cit USE INDEX(dir1), sref USE INDEX(dir1)
  WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id
  AND cg.cw LIKE 't%' AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
  ORDER BY cg.cw LIMIT 1000,10;

# Order the files
ALTER TABLE cg ORDER BY dir2;
ALTER TABLE q ORDER BY dir2;
ALTER TABLE cit ORDER BY dir2;
ALTER TABLE sref ORDER BY dir2;

# Get a new result for direction 2
SELECT STRAIGHT_JOIN cg.cw
  FROM sref USE INDEX(dir2), cit USE INDEX(dir2), q USE INDEX(dir2), cg USE INDEX(dir2)
  WHERE cg.q_id = q.id AND q.cit_id = cit.id AND cit.sref_id = sref.id
  AND cg.cw LIKE 't%' AND sref.cd > DATE_SUB(CURDATE(), INTERVAL 6 MONTH)
  ORDER BY cg.cw LIMIT 1000,10;
Slow results with simple, well-indexed query
Executing just the search on the word table, with no joins to the table with the dates, is still slow: Then it is not worthwhile to focus on anything else until you fix that. Are the contents of this field always in lower case? If so, then change the column to a binary type. The explain says: rows: 318244 Extra: Using where; Using filesort That means it is sorting all 318,244 (est.) records first, then going down to the 3000th and giving you five records. Just a guess. See if that helps, then we can move on to the join. --steve-
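The column-type change suggested above would look something like this; the table and column names are placeholders, since the real schema never appeared in the thread:

```sql
-- MySQL 4.0-era syntax: the BINARY attribute makes comparisons and
-- sorting byte-wise (case-sensitive), skipping case-folding work on
-- a column whose values are known to be all lower case already.
ALTER TABLE words MODIFY cw VARCHAR(64) BINARY NOT NULL;
```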
RE: How to create a stop word file?
how to separate each stop word in the list A different word on each line. -steve-
RE: Slow results with simple, well-indexed query
No, the contents can be of mixed case. Where does that leave things? **Index the length of the entire column.** It then should not need to do the filesort. Actually, the binary option would not have really helped. The explain should say 'Using index'. Get back to me on this and tell me the results. In a working environment I'd never be querying on this table alone; it would always be joined to other tables that would limit things in some way, but these don't seem to be affecting things. The suggestions other people have made to try to get it to do the smaller queries first don't seem to be having much effect, unfortunately. Optimize the join once you know how to optimize its parts. One thing at a time. --steve-
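The 'index the entire column' advice above might be sketched like this; the table, column, and index names are stand-ins, since the real definitions never appeared in the thread:

```sql
-- If the current index only covers a prefix, e.g. INDEX cw (cw(10)),
-- replace it with one covering the whole column, so ORDER BY can read
-- rows straight out of the index in sorted order:
ALTER TABLE words DROP INDEX cw, ADD INDEX cw (cw);

-- EXPLAIN should then show 'Using index' and the filesort should vanish:
EXPLAIN SELECT cw FROM words
 WHERE cw LIKE 't%' ORDER BY cw LIMIT 3000,5;
```

With the full column in the index, the LIMIT can also stop after reading the rows it needs rather than sorting every match first.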
RE: Slow results with simple, well-indexed query
GOD! OK, sorry, I wasn't quite expecting this: Wow! :) But what's the explanation for this huge improvement? Again, I was always told the opposite, and the Manual itself says: ... Yes, and it is true (usually). But your EXPLAIN showed a filesort, and that is bad. What happens is that the resultset is sorted only on the first few characters of that column (those covered by the index). Since you requested an ORDER BY, it had to go back and fully sort the resultset. If you have the index cover the whole column, then this step is not needed. Even better is that due to the limit, it can safely go right to the part of the table it needs, and once it gets the 5 rows, it is done (rather than getting all of them for the sort step). Even better in this particular case is that all the information needed is in the index (the MYI file), so it did not even need to do a read on the data file (MYD). Less disk access is a good thing... Hmm. When I returned to the multiple-table query that started this thread, And it was slow. Yeah, one thing at a time. It makes it easier for people reading this list now or in the future (if it comes up in a search result) if we go over things one item at a time. Since I never saw the whole table definitions (the indexes in particular), I'll have to try and guess through it. So try this: ALTER TABLE cg ADD INDEX (q_id,cw); Tell me how that works and send the EXPLAIN. The point here is that now you are doing a join and you are using both columns to qualify the resultset. So we should use a composite index rather than have individual ones (of which MySQL will choose only one). Also, you can change the line AND cg.cw BETWEEN 't' AND 'tzzz' to AND cg.cw LIKE 't%' for better readability (how many zzz's are enough, eh?). Personal preference. --steve-
RE: MYSQL Scalability on SMPs
It looks like Igor committed it to the 4.1 tree on the 2nd of this month: I'd assume that this change is necessary but not sufficient for the MyISAM table type's table-locking issue... I know, I know, there is InnoDB for that, but there are reasons not to use it despite this particular wonderful advantage. --steve-
Re: Searching on Two Keys with OR?
ORing on two different fields is what I have been asking about :). This is not optimized, and I don't think it is set to be optimized until 5.1 (as per someone else's comment). Using a composite index was suggested This is bad information. It works for AND, not for OR. You have two workarounds: temp tables and unions. You can have it use one index, though. And you can give it a hint on which index it ought to use if you think you know better than the optimizer. At any rate, this is one of MySQL's deficiencies that many of us have worked around for a long time. --steve-
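The two workarounds named above (temp tables and unions) can be sketched like this; the table and column names are hypothetical:

```sql
-- Slow form: OR across two differently-indexed columns defeats the
-- optimizer, which will pick at most one index:
--   SELECT * FROM t WHERE a = 1 OR b = 2;

-- Workaround 1: UNION, so each branch can use its own index
-- (plain UNION, not UNION ALL, also removes rows matched by both):
SELECT * FROM t WHERE a = 1
UNION
SELECT * FROM t WHERE b = 2;

-- Workaround 2: temp table, filling in each half separately
-- (the a <> 1 condition keeps rows from being inserted twice):
CREATE TEMPORARY TABLE r SELECT * FROM t WHERE a = 1;
INSERT INTO r SELECT * FROM t WHERE b = 2 AND a <> 1;
SELECT * FROM r;
```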
Re: replication problem
So does anyone else have any ideas what is going on here? Shall I report this as a bug? Did you post how you set up the servers to load the different my.cnf files? Hopefully you don't have one at a default location. Otherwise, it sounds like the config information is not properly set -- either some user setup error that is eluding all of us, or a config loading error in MySQL. I'd like to check how the config files are located, etc., before doing a bug report. My two cents. PS: I'd also have both servers running the latest version, just in case it was a bug that was already fixed. --steve-
Re: MYSQL Scalability on SMPs
MyISAM performance is limited right now by a global lock in the key cache. However, I believe there is work going on to fix that in the 4.1 tree. Really? I thought it was going to be fixed in the 5.1 tree, which will be years away from production quality. 4.1 would be really cool, but it seems so soon (non-InnoDB)... --steve-
RE: RAID hardware suggestions/experience
After testing a lot of different configurations (which was quite a headache), I came up with the following. First of all, for both speed and reliability, you will want SCSI. The list of reasons is quite long for SCSI, and as you are doing research on the subject, it is an obvious choice and I don't need to list them here. Get drives with 15K RPM, since disk seek time is a killer in database applications. U160 or U320 SCSI 3, with lots of cache on the drive (should be standard). I've found U160 to be sufficient, but U320 might be better for backups, etc. We do have U320 controllers now, to be ready for the future. Next, I found RAID 10 to be the best combination of redundancy and speed. It is not cheap, though. I have not tested hardware RAID (which is a shame -- it is a big hole in my experience), but use software RAID. Either way, position the mirrors such that each mirror set (2 drives) is on separate channels. This way, if your SCSI controller (or RAID controller) has a channel die, the whole array can still function (even with half of the drives down). Then stripe your (3-4) mirrors. Don't stripe too many: more striped sets increase performance, but syncing the rotations of many drives degrades performance, so there are diminishing returns. For our calculations, 3-4 mirrors were sufficient. Most of our RAID sets are six drives (3 stripes of 2 mirrors). For one, we wanted more space, and it has 8 drives (4x2). Don't forget to install spares at the same time. I like using external SCSI disk enclosures, so you can swap servers with less headache. -steve-
Re: transactions with php
If you're using a non-persistent connection, PHP will close the connection when the script terminates, and the MySQL server will roll back implicitly. For a persistent connection, the connection remains open and you may not see the rollback behavior you expect. I thought this was fixable now. A while back Monty said that changing users on a connection would reset the connection automatically. He was talking about the next version (which was several versions back of the 4 series). Resetting the connection (according to this theory) would set all the per-connection variables to their defaults and roll back any non-committed transactions. My PHP Mysql extension is a bit hacked up (and I have more to do!), so I can't remember the default now. But I think it should add a 'change user' command when the page ends on any persistent connection. Change to a blank user. So in theory, web pages would then be safe for transactions. A really ugly hack (assuming a Mysql server version as described above) would be (in PHP) to connect persistently to mysql, then change the user to a dummy, and then change the user to the one you want again. Doing this at the start of every page should then make it transaction safe. Can someone from MySQL confirm that changing users will reset the connection and roll back unfinished transactions? And starting in what version? -steve-
RE: Triggers
Just a couple of quick notes: 1. While I use PHP CLI for a lot of things (can we say cron?), it is not a sufficient replacement for triggers. What happens when someone is using the Mysql command prompt to alter data? Or using a non-PHP application? 2. While I agree that having application code rather than stored procedures is a much better bet for code management, it is only true if you are dealing with a **very simple system** such as a web site. There are whole new levels of complexity when the system includes a web site, in-house VB and Java applications, software from other providers (CRM, whatever), etc., all accessing the same data repository. In these cases, stored procedures, triggers, views, etc., are necessary for data integrity and code management. Just a note: while the website this email is attached to is a complex web application, it is still a simple system. I don't mean to imply that websites are simple. Just the system (webserver + database) is simple. -steve-
RE: RAID hardware suggestions/experience
What sort of throughput are you seeing in that setup? God, I can't remember anymore. I can run a test again though. If you have one you want me to run, just send it. We don't have other people's money to spend, so all our disks are U160 18GB 15K IBM. They were less than $100 each when we got them. They work great! We only care about throughput when we do a clean backup. Application performance is our measuring stick. Nothing like an FTS query on a big ass table to do a test of both simultaneously. At any rate, one server is just a replication failover. I can shut it down for a little while and do another test. Then I can post back to the list. -steve-
RE: RAID hardware suggestions/experience
2 x 2.8 GHz Xeon, 4 GB of RAM, 5 15K SCSI drives, ICP SCSI RAID control card with 1 Gb of ram on it. I just bought 30 of these boxes to build out my mysql farm for close to 400-600 queries a second with 60 connections a second of mixed reads/writes. What kind of queries are you doing? Our simple dual Athlon, with software RAID and the disks I mentioned before, does 3000+ queries a second. I've pushed it to 8000 before, but it got too slow for me. 60/40 read/write. -steve-
RE: RAID hardware suggestions/experience
A lot of table scans due to bitmasked column values. Such that the above query will not utilize a key. That statement gave me a cold shiver up my spine. You could try an inverted index or match-cache technique, or denormalization. These types of techniques are very app-specific, but can reduce things by a factor of 10 or more. (And that assumes things are properly normalized as a starting point.) None may work for you, though. Who knows. Ugh. Table scans. I don't know if I can sleep tonight. I feel for you. At least I know why you need 30 database servers. That has got to be a sight! I'd love to have such hardware at my disposal! Wow. Have some fun! -steve-
RE: How to have AUTO_INCREMENT ignoring 0?
Just a note on this subject. We have a field that uses 0 to mean something special too. It was a bad idea that is on my TODO list to fix some day. (The corresponding table used 0 to mean something special, and then joined to the table with the autoindex. The fix is to use NULL in that other table and do a left join. I just don't like left joins if I can avoid them.) The note: In version 3.23.x, doing an ALTER TABLE would also change the value of the row with autoindex value of 0. Version 4.0.x does not. So you *really* need to watch out if you are using 3.23.x this (unsupported) way. If you want mysqldump to make a safe backup, you will need to change it yourself. You can do the dump in order, and after the first insert for the zero row, do an update to make it really zero, then go on as normal. Of course, it's best to avoid all this and not use zero. Headaches await... -steve-
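The dump fixup described above might look like this; the table name and row values are made up for illustration:

```sql
-- On restore, inserting 0 into an AUTO_INCREMENT column generates a
-- fresh value instead of storing 0, so the special zero row comes
-- back with the wrong id:
INSERT INTO accounts (id, name) VALUES (0, 'anonymous');

-- Immediately force that row back to its special value before the
-- rest of the dump runs:
UPDATE accounts SET id = 0 WHERE name = 'anonymous';
```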
RE: transactions with php
In http://www.mysql.com/doc/en/News-4.0.6.html * mysql_change_user() will now reset the connection to the state of a fresh connect (Ie, ROLLBACK any active transaction, close all temporary tables, reset all user variables etc..) So it is in there, starting with MySQL 4.0.6. -steve-
RE: Problem with replication and corrupting tables
Quick question: Are the binlog and relaylog files the same format? Initial tests seem to indicate that they are the same. Can I use mysqlbinlog -o Relay_Log_Pos Relay_Log_File | mysql to get the slave more up to date (without having the slave SQL thread running)? I tried the above but the Relay_Log_Pos from 'show slave status' seemed way past the end of the file as it returned no results. :( How do I get a proper offset from which to start? Being able to do this would isolate the issue squarely at the slave SQL thread if the above had no issues. Also, I uploaded a small trace file that shows the corruption. It is the smallest I was able to make last night (about 72MB -- 6MB gzipped). It is in the secret folder. Hopefully it will help. -steve- - Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
RE: Using mysqlbinlog to restore blob data
Rick, I am able to restore from logs that had binary data (even though the output looked real strange and messed up the terminal window). I did have a problem once when I tried filtering data between mysqlbinlog and mysql. Be careful if you do that. What version of mysql are you using? I have no idea about the -t option for mysqlbinlog, but I'd guess it is similar to the 'load data/table from master' command inside mysql. (mysqlbinlog can connect to a remote server rather than using a local file). -steve-
RE: Problem with replication and corrupting tables
Hi, mysqlbinlog -j Relay_Log_Pos Relay_Log_File | mysql works fine. I used -o instead of -j before. So I answered my last question. When doing this: mysqlbinlog -j Relay_Log_Pos Relay_Log_File | more I see that it had advanced to the query after the one with the problem in the trace file. In fact, the query succeeded and was there after a REPAIR TABLE .. USE_FRM. Now that I got the above to work, I ran it. And I found a surprising result (to me): it still failed. So the problem is not with the replication code per se. So maybe I can make a test case.
RE: Problem with replication and corrupting tables
Hi, And fixed. Sorry for the waste of time. Only 4 days before I was set to replace the disk the database was on, and it is going bad. :( -steve-
RE: Problem with replication and corrupting tables
An update. I'm now running the debug version on the slave. I could not trace out 'info' since it wrote way too much to the trace file. What I did find that was unique when the table crashed is this: handle_slave_sql: query: insert into forums_posts_new_3 ( w_search: error: Got errno: 0 from key_cache_read mi_write: error: Got error: 126 on write my_message_sql: error: Message: 'Incorrect key file for table: 'forums_posts_new_3'. Try to repair it' sql_print_error: error: Slave: error 'Incorrect key file for table: 'forums_posts_new_3'. Try to repair it' on query 'insert into forums_posts_new_3 I don't know why it had a problem with error zero from key_cache_read -- that seems to be the oddest thing in the log. It appears nowhere else. I'll keep digging. Does no one else have a problem with a slave stopping and corrupting its own tables? The only thing about the insert query that may be seen as odd is that it has binary data in it. That is, one of the fields looks like this: 'xÚÍTKoã6^P¾÷W~L}ÉÅÕf~K^B^EÚÝ,~\®~Qm7Û\0qÐ`~O#~S~R^HQ¤AÒQÕ_ßoH%V^N=ôP`~Q~Ge~ J3ó=ffKIó¡Ó~AZ~Ò$^O~MiRGm`¥é`9FbGø0^[´K?Ó~MN~T:m^B^]9à$RòHc~\^NÄ^T~S^O^S~M ^Fi~X^F^_Ø^Rç^PÒN~QoȤ~J^^ðÕé¿^R)~^ò»Þ¨H^G^^4Õ|èQ^TW^]¾LåC·~@±(~DòRÇèX½»¾¿ú.
ÿûÌ©~[(²Q^[Z^?~Y¨Áw^].@^A~Oa\0;I;~B¨°e² ^^ݶß[~^~Lk¡~@~K^UÝ¡b2~@2ÊoÀ^Wîå?^E¹~\^Y^O^\z(a ^N^PGyö.3i~Bw~I¢æ~\\\N~NæÐ~_~N4\{®n~R^D~]~F£~DË~EàYel\\Ø~U*#Ô¥Æf`ò²^N¾×ùi^P w ®~N±Z~_ù¯·B¯ã^DÊ~R¥~H?~Cȶ|X~S`Uùdö~!àú£w^W~I~N§~T~QL/~D~A3Û~P~I®~V%ÿÔ0»õ^mÖ Yù^?Éü~GxmMJVÓíé0Q`^S~A~Cá~E+ÜgßîP³áÁØ~IÐ_³qq^CÍ}E×@^F^Es0^]:è èñ¡ð^UÍ~K1^U=~JÍR@ù¿¡{¦ágc7P?^U?^P7áb^BÞÜ^ZÞYá~C~FÅKk~^æ21C~K*Ø~Y87»^A~U ¥;ø^SL˽ÀÃ~IɶN^WA?×zåÛ^C^NiäܨÆÍ^e~]*úÝwÎM~R_Ò·n¹^A^T±~CJ^_~Vɾj| ^Yø~B~G^Ua^H^T+%7-ÚF/Así!Â0ÑV ^?áÐê~Jö`8^C±¦í^Rf¶^E\~L~N:^EiÄ~O:ê~Ph~O^TCÖD~VÆÑ2P~Kj~]^L·$^Qí^A¯fP~W:þ ÷´Ó¨~Wt^HF l^zÎ^Q¥tís~_| qGÀíµ\\^D#^H*UN®~D^A2~DL^BÚWU%)T`×g~WæÀç~Y^\±e~T^_^]E~OõC£?Y%vÕAs~_·~QË^XÀC¡ ^OB¦^Oç`~T*hÞ^^S^HÞ^_caÚ^[k^QóÓåiCÙ}9t~A¹t^A~J^CcQ¤A^W\4ÅÕ^YÀ~\ì~GKé3ý:^ S¸¢{ç^MQ[Ù^Ay%ü\^G¯ÂS¦^R^S^P~W^TbR͹û~\ÂÎ| E¿Ü`xq^A}åâr~H;^O~[^Z#éÊ8c8e¹| HÙãe^R^T®ç~~_^[M~WÅ!kp@^^ÜÏ~Ijû²~B^Vµ÷i^ÿ~_$~LŦ^\wnÉ2¥à~F~]Vk*´¡lãå\0ÝÞå~_ ^U½3CK1^\Þ¯ß~X~A[^]ßt|NUk~Z58~Jïßþ^HMTêðpõÿ^G~\AþçRtïc\\Ä×á*ÿ}s^\éáÓ~N®w;º ÙíöôøÛí-mo^_·_÷t^?÷ëçÕê~[õå^_-:^G^W' I'm starting to run out of ideas... -steve-
RE: Bug in ORDER BY
I'm glad you found the problem! Sorry my suggestion did not work. I'm still confused about why you have quotes around the field names in the ORDER BY part of the query, though. All the best! -steve-
RE: Problem with replication and corrupting tables
Below is a trace (--debug=d,enter,exit,info,error,query,general,where: O,/tmp/mysqld.trace) of the slave thread. This is the best I can do as far as a bug report. No other queries were running and the slave I/O thread was idle (I firewalled its connection to the master/rest of the world). Without the SQL slave thread all is OK. This server can do any number of normal operations without error. The IO slave works fine. The SQL slave normally causes corruption, but has also caused a crash (a backtrace is in the first message of this thread). The error in this more detailed log seems different than in the previous log. But both point to the key cache. Why the SQL slave thread would cause something bad to happen in the key cache is beyond me. Another day... Very tired, -steve- my_b_seek: enter: pos: 0 my_malloc: exit: ptr: 84dc248 my_malloc: exit: ptr: 84bffd8 my_malloc: exit: ptr: 8525b18 handle_slave_sql: query: insert into forums_posts_new_0 ( forumid, messageid, parent, title, author, message, approved, email, ip, rootmessageid,loginid,autorespond,user_id ) values (32380, 1045077656, 0, 'Faculty experts available to discuss issues involving Korea', 'UM', 'http://www.umich.edu/news/Releases/2003/Feb03/r020703a.html', 'yes', '', inet_aton('244.118.132.197'), 1045077656, 0,'no','4a119100a6134a6dee9964dc257ea582' ) my_malloc: exit: ptr: 8522f60 set_lock_for_tables: enter: lock_type: 7 for_update: 1 check_access: enter: want_access: 2 master_access: 4294967295 hash_search: exit: found key at 26 my_malloc: exit: ptr: 8512f48 mi_get_status: info: key_file: 302662656 data_file: 1911596088 mi_write: enter: isam: 56 data: 57 _mi_make_key: exit: keynr: 0 w_search: enter: page: 64677888 key_cache_read: enter: file 56, filepos 64677888, length 1024 find_key_block: enter: file 56, filepos 64677888 _mi_bin_search: exit: flag: 1 keypos: 2 w_search: enter: page: 12455936 key_cache_read: enter: file 56, filepos 12455936, length 1024 find_key_block: enter: file 56, filepos 12455936 
_mi_bin_search: exit: flag: 1 keypos: 4 w_search: enter: page: 8588288 key_cache_read: enter: file 56, filepos 8588288, length 1024 find_key_block: enter: file 56, filepos 8588288 _mi_bin_search: exit: flag: 1 keypos: 31 w_search: enter: page: 8554496 key_cache_read: enter: file 56, filepos 8554496, length 1024 find_key_block: enter: file 56, filepos 8554496 _mi_bin_search: exit: flag: 1 keypos: 28 _mi_insert: enter: key_pos: bfefc8ae key_cache_write: enter: file 56, filepos 8554496, length 1024 find_key_block: enter: file 56, filepos 8554496 _mi_make_key: exit: keynr: 1 w_search: enter: page: 118468608 key_cache_read: enter: file 56, filepos 118468608, length 1024 find_key_block: enter: file 56, filepos 118468608 _mi_bin_search: exit: flag: 1 keypos: 1 w_search: enter: page: 7552 key_cache_read: enter: file 56, filepos 7552, length 1024 find_key_block: enter: file 56, filepos 7552 _mi_bin_search: exit: flag: 1 keypos: 23 w_search: enter: page: 71856128 key_cache_read: enter: file 56, filepos 71856128, length 1024 find_key_block: enter: file 56, filepos 71856128 _mi_bin_search: exit: flag: 1 keypos: 11 w_search: enter: page: 71792640 key_cache_read: enter: file 56, filepos 71792640, length 1024 find_key_block: enter: file 56, filepos 71792640 w_search: error: page 71792640 had wrong page length: 26656 w_search: exit: Error: 126 mi_write: error: Got error: 126 on write print_error: enter: error: 126 my_message_sql: error: Message: 'Incorrect key file for table: 'forums_posts_new_0'. Try to repair it' thr_unlock: info: updating status: key_file: 302662656 data_file: 1911596088 flush_key_blocks_int: enter: file: 56 blocks_used: 8647 blocks_changed: 1 send_error: enter: sql_errno: 0 err: Incorrect key file for table: 'forums_posts_new_0'. Try to repair it close_thread_tables: info: thd-open_tables=0x84f4fc0 mi_extra: enter: function: 2 sql_print_error: error: Slave: error 'Incorrect key file for table: 'forums_posts_new_0'. 
Try to repair it' on query 'insert into forums_posts_new_0 ( forumid, messageid, parent, title, author, message, approved, email, ip, rootmessageid,loginid,autorespond,user_id ) values (32380, 1045077656, 0, 'Faculty experts available to discuss issues involving Korea', 'UM', 'http://www.umich.edu/news/Releases/2003/Feb03/r020703a.html', 'yes', '', inet_aton('144.118.132.197'), 1045077656, 0,'no','4a119100a6134a6dee9964dc257ea586' )', error_code=1034 sql_print_error: error: Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with SLAVE START. We stopped at log 'binlog.004' position 116581764 ~THD(): info: freeing host my_malloc: exit: ptr: 84aa508 hash_init: enter: hash: 84aa9b0 size: 16 my_malloc: exit: ptr: 84c74b8 vio_new: enter: sd=90 my_malloc:
RE: Bug in ORDER BY
SELECT * FROM EventList ORDER BY 'EventDate', 'EventOrder' LIMIT 50; I'm surprised you happened to get anything in order. Maybe the message got simplified by the list manager, but did you really mean to order by a constant string? Why not: SELECT * FROM EventList ORDER BY `EventDate`, `EventOrder` LIMIT 50; Or better: SELECT * FROM EventList ORDER BY EventDate, EventOrder LIMIT 50; This wasn't the query you had the issue with, but it is easier to quote. ;) Same theory applies. Discard this if the list manager had mangled your post. -steve-
Problem with replication and corrupting tables
Hi all, I have a problem with replication that, while easily repeatable for me, I cannot come up with a way for others to repeat without all our tables and binlogs (tens of gigabytes). So I'm simply going to describe things here and see if anyone else has experienced anything similar or might have some suggestions. After thinking about using replication for what seems like forever, I finally got around to it. Both the master and the slave are v4.0.10. I started it up and all seemed to work well for a while. Maybe a few hours. Then I found that a table got corrupted on the slave: ERROR: 1034 Incorrect key file for table: 'forums_posts_new_0'. Try to repair it 030215 10:01:12 Slave: error 'Incorrect key file for table: 'forums_posts_new_0'. Try to repair it' on query 'insert into forums_posts_new_0... Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with SLAVE START. We stopped at log 'binlog.003' position 97273308 At this point the slave SQL thread stopped. The IO thread continued. A couple of days later I noticed the error, repaired the table, and started the slave thread again. With the IO thread so far ahead, the SQL thread could pump through the queries much faster. Now it only takes 3-4 minutes before another table gets corrupted. However, it is not just any table. I have tables 'forums_posts_new_0' to 'forums_posts_new_9' that hold messages. Out of all the tables, only these get corrupted. If I repair the table, then start the slave, it will work for 5-15 minutes until another table is corrupted. Repeat. Repeat. Repeat. I checked the drives and the file system for errors and found no problems. The machine that acts as a slave to the master is also used for data-warehouse and FTS operations, has lots of disk access on its database, and has no errors. I have tried stopping data warehouse and FTS operations while the slave runs, but it makes no difference.
BTW: Sometimes the slave crashes when doing replication (and in the following example, only replication). Example of a backtrace: 0x806f53b handle_segfault + 447 0x826ae18 pthread_sighandler + 184 0x8296b07 memcpy + 39 0x823703d _mi_balance_page + 649 0x8236994 _mi_insert + 392 0x82367d2 w_search + 518 0x8236793 w_search + 455 0x8236793 w_search + 455 0x8236793 w_search + 455 0x8236793 w_search + 455 0x8236482 _mi_ck_write_btree + 142 0x82363e9 _mi_ck_write + 65 0x823602f mi_write + 591 0x80c257d write_row__9ha_myisamPc + 101 0x80a17f5 write_record__FP8st_tableP12st_copy_info + 513 0x80a110d mysql_insert__FP3THDP13st_table_listRt4List1Z4ItemRt4List1Zt4List1Z4Item 15enum_duplicates + 1129 0x807ad7a mysql_execute_command__Fv + 6598 0x807d226 mysql_parse__FP3THDPcUi + 146 0x80add97 exec_event__15Query_log_eventP17st_relay_log_info + 427 0x80e3faa exec_relay_log_event__FP3THDP17st_relay_log_info + 542 0x80e4aca handle_slave_sql + 602 0x82685cc pthread_start_thread + 220 0x829dd8a thread_start + 4 After the above crash, mysqld restarted and the slave continued to run for a while without error. Weird. For a while... -steve-
Help debugging a stuck FTS thread
V4.0.9 How can I use the mysqld.sym file via gdb? It doesn't like the format. Copying the stacktrace into another file, where I have to edit out a bunch of junk from gdb for resolve_stack_dump, is a bit slow. It seems that two processes have hung for a while. All the slots have filled up even though I have set up the my.cnf file with: set-variable= max_connections=83 set-variable= max_user_connections=80 The extra three connections (so I can look inside) do not seem to work for me. :( Here is a quick look at ps: ... 23436 ?S 0:03 /root/mysql-standard-4.0.9-gamma-pc-linux-i686/bin/mysqld --defaults-extra-file=/root/mysql-standar 23442 ?S 0:03 /root/mysql-standard-4.0.9-gamma-pc-linux-i686/bin/mysqld --defaults-extra-file=/root/mysql-standar 23443 ?S 0:02 /root/mysql-standard-4.0.9-gamma-pc-linux-i686/bin/mysqld --defaults-extra-file=/root/mysql-standar 22623 ?R311:57 /root/mysql-standard-4.0.9-gamma-pc-linux-i686/bin/mysqld --defaults-extra-file=/root/mysql-standar 15068 ?R324:51 /root/mysql-standard-4.0.9-gamma-pc-linux-i686/bin/mysqld --defaults-extra-file=/root/mysql-standar Here are details on the two in runnable state: # resolve_stack_dump -s bin/mysqld.sym -n stack22623 0x8245857 _ftb_climb_the_tree + 295 0x8245e1d ft_boolean_find_relevance + 521 0x8245b97 ft_boolean_read_next + 695 0x80c439d ft_read__9ha_myisamPc + 53 0x82fa9e8 get_next__9FT_SELECT + 32 0x80bebf9 find_all_keys__FP13st_sort_paramP10SQL_SELECTPPUcP11st_io_cacheN23 + 989 0x80be480 filesort__FP8st_tableP13st_sort_fieldUiP10SQL_SELECTUlUlPUl + 736 0x809cb88 create_sort_index__FP13st_join_tableP8st_orderUlUl + 304 0x8092e4a mysql_select__FP3THDP13st_table_listRt4List1Z4ItemP4ItemP8st_orderT4T3T4 UlP13select_result + 6714 0x80913d6 handle_select__FP3THDP6st_lexP13select_result + 102 0x807942a mysql_execute_command__Fv + 950 0x807ce26 mysql_parse__FP3THDPcUi + 146 0x807853b dispatch_command__F19enum_server_commandP3THDPcUi + 1475 0x8077f6d do_command__FP3THD + 149 0x80777af handle_one_connection + 635
0x826726c pthread_start_thread + 220 # resolve_stack_dump -s bin/mysqld.sym -n stack15068 0x8245854 _ftb_climb_the_tree + 292 0x8245e1d ft_boolean_find_relevance + 521 0x8245b97 ft_boolean_read_next + 695 0x80c439d ft_read__9ha_myisamPc + 53 0x82fa9e8 get_next__9FT_SELECT + 32 0x80bebf9 find_all_keys__FP13st_sort_paramP10SQL_SELECTPPUcP11st_io_cacheN23 + 989 0x80be480 filesort__FP8st_tableP13st_sort_fieldUiP10SQL_SELECTUlUlPUl + 736 0x809cb88 create_sort_index__FP13st_join_tableP8st_orderUlUl + 304 0x8092e4a mysql_select__FP3THDP13st_table_listRt4List1Z4ItemP4ItemP8st_orderT4T3T4 UlP13select_result + 6714 0x80913d6 handle_select__FP3THDP6st_lexP13select_result + 102 0x807942a mysql_execute_command__Fv + 950 0x807ce26 mysql_parse__FP3THDPcUi + 146 0x807853b dispatch_command__F19enum_server_commandP3THDPcUi + 1475 0x8077f6d do_command__FP3THD + 149 0x80777af handle_one_connection + 635 0x826726c pthread_start_thread + 220 This server is doing FTS exclusively. (IN BOOLEAN MODE). Before killing everything, is there a way to see what it is waiting on? -steve- sql query - Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
Re: proposal: new back end tailored to data mining very large tables
First of all, I'd try optimizing your app before writing a whole new back end; as such, I'd keep to the normal mysql list.

For example, even if the indexes are big, try to index all the columns that might be searched. Heck, start by indexing all of them. If the data is read-only, try myisampack. Or do the index thing above, use InnoDB for this app, and be sure to select only the columns that you need. InnoDB does not read the whole record if it does not need to, even in a table scan (which is the worst-case scenario you are calculating). All your calculations assume a full table scan, which can be avoided by a good choice of indexes and by using InnoDB to avoid whole-record retrieval. Am I missing something? Pulling data from a small 14GB table should not be a problem. My machine ($10K) deals with 100GB of data and does 5000 to 1 queries per second.

Also, your reference to denormalization didn't make any sense to me. What level of normal form are you expecting?

Sincerely,
Steven Roussey
http://Network54.com/
Re: How to speed things up in MySQL ?
> I guess the reason for this is that I have some blob fields which are
> all used... (each record consists of approx 600 KB...)

There it is. MySQL's MyISAM will get the whole record even if it only needs a part. You can try InnoDB, as it does it differently.

[At some point I may try to add that capability to MyISAM. I've been researching good ways to compress text and still search it in its compressed state. Then I hope to add the ability of MyISAM tables to automatically compress TEXT/VARCHAR/CHAR (non-binary) fields without worrying about uncompressing everything if someone does a LIKE on that field. BLOBs can use gzip-type compression. And neither the data read nor the decompression should occur if the field is not used. When I get more time...]

However, back to your issue. Have you tried adding INDEX(ProductionYear,ID)? That would prevent MySQL from having to read the data file at all, since it could find everything it needs in the index file.

My speed guesses:

o If you don't change the structure of the table or add an index, then switching to InnoDB would improve performance quite a bit.
o Adding an INDEX(ProductionYear) to the above InnoDB table would speed things even more.
o Adding INDEX(ProductionYear,ID) to the MyISAM table would be faster still.

-steve-
http://Network54.com
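The covering-index effect described above can be sketched with SQLite (used here only as a portable stand-in for the MyISAM table; the table name "films" and the extra BLOB column are assumptions, and only INDEX(ProductionYear, ID) comes from the post). When a query touches only indexed columns, the plan reports a covering index and the data rows are never read:

```python
import sqlite3

# Hypothetical table standing in for the poster's MyISAM table with
# large blob fields; only the (ProductionYear, ID) index is from the post.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE films (ID INTEGER, ProductionYear INTEGER, Data BLOB)")
con.execute("CREATE INDEX year_id ON films (ProductionYear, ID)")
con.executemany("INSERT INTO films VALUES (?, ?, x'00')",
                [(i, 1990 + i % 20) for i in range(100)])

# Both ID and ProductionYear live in the index, so SQLite answers the
# query from the index alone and says so in the plan's detail column.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT ID FROM films WHERE ProductionYear = 1999"
).fetchall()
covered = any("COVERING INDEX" in row[3] for row in plan)
```

The same reasoning is why the post predicts INDEX(ProductionYear,ID) beats a plain INDEX(ProductionYear): the extra column makes the index cover the whole query.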
RE: RE: MySQL in combination with PHP problem
> I'm using temporary tables for this, but there is a problem. Temporary
> tables are visible through the entire connection. So in future one
> browser window can interact with (can display) results from another
> browser window. Does anyone have a suggestion how to solve this?

You could do something like:

DROP TABLE IF EXISTS tmp1;
CREATE TEMPORARY TABLE tmp1 ...

At some point MySQL was going to change the change-user call to also reset the connection to a default state (this would drop the temp tables, roll back uncommitted transactions, etc.). I imagine this is done on the server side. Does 4.0.9 support this? If so, I can produce a patch to fix PHP to use it. I sent it a long time ago, but you know how things are...

-steve-
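The scoping rule behind this can be sketched in Python with SQLite as a stand-in for MySQL (file and table names here are made up): a TEMP table is visible only on the connection that created it, so the leak in the post only happens because a persistent connection is shared across scripts and sees its own old table.

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)

# Temp table created on connection "a" only.
a.execute("CREATE TEMP TABLE tmp1 (id INTEGER)")
a.execute("INSERT INTO tmp1 VALUES (1)")

seen_by_a = a.execute("SELECT COUNT(*) FROM tmp1").fetchone()[0]
try:
    b.execute("SELECT COUNT(*) FROM tmp1")
    seen_by_b = True
except sqlite3.OperationalError:  # no such table: tmp1
    seen_by_b = False

# The suggested fix: start every script with drop-and-recreate, which is
# safe whether or not an earlier script left tmp1 behind on this link.
a.execute("DROP TABLE IF EXISTS tmp1")
a.execute("CREATE TEMP TABLE tmp1 (id INTEGER)")
```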
RE: mysqlstat (WAS Re: Optimizing Ext3 for MySQL)
> Hmm. mytop 1.4 will have a feature that may help you

1.4? I'm still on 1.0. Guess I'm behind the curve. Jeremy, can you add something to protect against binary data coming across in a query and messing up the terminal window? Leave it running a while and all of a sudden it is a big mess. Yikes!

> It'll be sorta like a vmstat that watches the output of SHOW STATUS,
> mostly the Com_* counters

Poor man's version:

watch mysqladmin extended-status

Sincerely,
Steven Roussey
http://Network54.com/
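The vmstat-style view of SHOW STATUS boils down to diffing two counter snapshots. A minimal sketch (the snapshots below are invented dicts, not real server output; a real tool would fetch them with `SHOW STATUS` over a client connection):

```python
# Diff two SHOW STATUS snapshots taken "seconds" apart and report
# per-second rates for the cumulative Com_* command counters.
def com_rates(before, after, seconds):
    return {name: (after[name] - before[name]) / seconds
            for name in after if name.startswith("Com_")}

# Made-up snapshot values for illustration only.
before = {"Com_select": 120000, "Com_insert": 9000, "Uptime": 3600}
after  = {"Com_select": 123000, "Com_insert": 9050, "Uptime": 3610}

rates = com_rates(before, after, 10)  # 10-second sampling window
```

Non-Com_* counters (like Uptime) are filtered out, which is roughly what the proposed mytop/vmstat-style display would do.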
RE: mysqlstat (WAS Re: Optimizing Ext3 for MySQL)
> One of my favorite Borg quotes is: Crude but effective.

:-) I like that. ;)

This got me thinking again about a feature I'd like to see in mysqld. I'd like to add something like SQL_STATISTICS to SELECT/UPDATE/INSERT statements. The idea would be that if this keyword was used, then mysqld would calculate additional statistics for that query (which you could get later with SHOW SQL_STATISTICS or something). The statistics would include actual numbers from the processing of the query:

# of index records read
# of data section records read
# of bytes requested from the file system (read/write) [this being the most important]
# CPU time
# Disk read, write, and wait times
etc...

Such information would be extremely valuable. For example, it wasn't until I put fulltext on a separate server that I discovered what a disk-read hog it was:

Main server: 3000 q/s  Disk read: 540 KB/s   Load: 1
FTS server:  2 q/s     Disk read: 7600 KB/s  Load: 12

(taken from http://marc.theaimsgroup.com/?l=mysql&m=104042853614294&w=2)

Sadly, I will have to write my own FTS system soon. I'll be using mysql to do it, so hopefully Serg will be able to port whatever ideas I settle on back inside the mysqld engine (assuming they're worth porting!). I'll start by moving the ideas of the cosine vector search from C to SQL. Then I can try a lot of different things without constant recompiling. I'll also get the advantage of having it work with several languages on a per-record basis, rather than a per-table basis... Now if only I had a paying job, I could focus on it and get it done quicker...

Sincerely, Steven Roussey http://Network54.com/
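For reference, the cosine-vector scoring mentioned above is just the textbook formula; a rough sketch in plain Python (raw term counts only; a real ranker like the one being proposed would add TF-IDF weighting, stemming, stopwords, and so on, and MySQL's actual FTS internals differ):

```python
import math
from collections import Counter

# Cosine similarity between two documents' term-count vectors:
# dot(a, b) / (|a| * |b|), with 0.0 for an empty vector.
def cosine(doc_a, doc_b):
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)
    norm = (math.sqrt(sum(n * n for n in a.values()))
            * math.sqrt(sum(n * n for n in b.values())))
    return dot / norm if norm else 0.0

score = cosine("mysql full text search", "full text search in mysql")
```

Identical documents score 1.0 and documents with no shared terms score 0.0, which is what makes the measure usable as a relevance ranking.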
Aborted connections.../mysqladmin flush-hosts
Since updating from 4.0.7 to 4.0.9, I have noticed mysqld having problems with connections from our webserver, eventually denying access until I issue a mysqladmin flush-hosts. Shutting down the server and restarting tends to work for a longer period of time. I can't say for sure if the issue lies with the version change or is just coincident with it. Can anyone confirm this behavior?

Sincerely, Steven Roussey http://Network54.com/
Re: Optimizing Ext3 for MySQL
I use ext3, have a qps of anywhere from 2800-8000, and use the defaults with no problems.

Have you tried:

iostat -k 1

to look at your disk access? What kind of disks are they anyhow? IDE or SCSI? RAIDed? In what fashion?

Lastly, you said that this is a script that is running, right? The table that gets updated, is it fixed or dynamic? Using blobs?

Might also look at:

vmstat 1

and look at CPU usage...

-steve-
RE: How can we tell if we're maxing out MySQL/InnoDB capacity?
> Yesterday happened to be one of the busiest days for us ever on our
> MySQL backed web site. For the entire day MySQL was hit with up to 1200
> queries/second, and many queries were being delayed at least 2-15 seconds.

I know how you feel. We were hitting 7700 queries/second today until the web server went into swap space and ruined everything. :(

> What other statistics can I look at?

Besides 'iostat -k 1', I'd try 'vmstat 1'.

> Our hardware is dual P3 1GHz, 2GB of RAM, and about 56GB of IDE RAID-1
> backed disk (3ware escalade).

I love IDE RAID for a workstation (great for video and audio production), but for a database server I'd suggest SCSI RAID. What is the disk rotation speed? Disk seek is very important for databases. (Today, in fact, our main server will be getting a multichannel 10-disk SCSI 15K rpm striped RAID array.)

But the real question is: are you CPU or disk bound? If it doesn't point really heavily at either, then it is both. :(

-s
Mysql versions, disks, speed, FTS, and EXPLAIN feature request (SELECT [CALC_RESOURCES])
Hi all! I wanted to thank the MySQL team for making such a great product! We moved from 3.23 to 4.0.x a couple of months ago and everything works great. Just upgraded to 4.0.6 and glad to see it work out of the box without a rev 4.0.6a. Those glibc issues were such a pain! 4.0.5a and 4.0.6 have been as solid as any 3.23.x in our experience.

Just a note to users of Full Text Search: put it on some other machine. FTS was basically clearing the MySQL and Linux caches with all its read data. Putting it on a separate machine let all the other stuff run just fine. Short example:

Main server: 3000 q/s  Disk read: 540 KB/s   Load: 1
FTS server:  2 q/s     Disk read: 7600 KB/s  Load: 12

Our queries are not representative of anyone else's! YMMV! Just a note on how FTS can really read a lot of data and how moving it can really clear things up.

Which brings me to an EXPLAIN feature request: to have EXPLAIN RESOURCES SELECT or something similar to be able to show resource usage instead of the query/index plan. Resources like CPU, disk read, and disk write. Likely it should not actually be EXPLAIN, since it would do the operation (where EXPLAIN does not). So maybe SELECT [CALC_RESOURCES] ... followed by a SHOW RESOURCES_USAGE or something.

-steve-

Main server: Load is 1.0

Server version: 4.0.6-gamma-standard-log
Protocol version: 10
Connection: Localhost via UNIX socket
UNIX socket: /tmp/mysql.sock
Uptime: 14 hours 18 min 27 sec

Threads: 228  Questions: 152864656  Slow queries: 2877  Opens: 89075  Flush tables: 1  Open tables: 2825  Queries per second avg: 2967.842

# iostat -k 10
avg-cpu:  %user  %nice   %sys  %idle
          23.05   0.00  19.30  57.65

Device:     tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
dev3-0     0.60       0.00      11.20        0      112
dev8-0    31.60      97.60      78.80      976      788
dev8-1    27.50      80.80      73.60      808      736
dev8-2    30.20      92.00      68.80      920      688
dev8-3    29.00      91.20      70.00      912      700
dev8-4    26.90      75.60      69.60      756      696
dev8-5    30.70      98.40      74.40      984      744

FTS server: Load is 12

Server version: 4.0.6-gamma-standard-log
Protocol version: 10
Connection: Localhost via UNIX socket
UNIX socket: /tmp/mysql.sock
Uptime: 15 hours 50 min 12 sec

Threads: 36  Questions: 132274  Slow queries: 2527  Opens: 332  Flush tables: 1  Open tables: 64  Queries per second avg: 2.320

# iostat -k 10
Device:     tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
dev3-0   268.40    7623.20       3.60    76232       36
RE: Odd Slowness in 4.0.5a with binlog
Thanks for your comments!

> I am not sure why you observe this load (AFAIK, blocked threads should
> not change the load), but it explains well why 0 processes are running.
> They are all waiting.

The mysqld segfault with 0 processes does not happen consistently, but it happens often enough to warrant mentioning. (That is, the slowness of the server and the high load always occur when the binlog disk is full, but the segfault only occurs sometimes when this happens.) When the binlog disk gets full, the queries per second drop to a miserable 500 qps while the load shoots up.

I was thinking about your comment about the threads waiting. Maybe they wait for a while, slowing down each query (resulting in more having to be processed simultaneously), which cascades to bring the server to a crawl. There may also be a related issue where sometimes this wait gets stuck (mutex issue?) and then what you described happens.

Sincerely, Steven Roussey http://Network54.com/
Odd Slowness in 4.0.5a with binlog
I have the binlog files stored on a separate drive (actually a partition on a drive other than the RAID array for the tables). When this partition becomes full, mysql's load average goes from 1.5 to 25 and stays there until more space is available on the partition that holds the binlog.

This isn't the biggest deal in the world, since I should make sure there is enough space, but why is this happening? It never happened with 3.23.x, although it used to cause a segfault in 4.0.2. (MySQL 4.0.5 actually did segfault, but a while after it ran out of space, and with 0 processes running. How it had 0 processes running is beyond me...)

Sincerely, Steven Roussey http://Network54.com/
InnoDB and ALTER TABLE ... ORDER BY
Does InnoDB support ALTER TABLE ... ORDER BY ...? If it weren't for this command, we would never get the continuous great performance we get from MySQL. And it keeps us from ever really considering InnoDB. :(

Sincerely, Steven Roussey http://Network54.com/
Manual stuff missing
There seems to be nothing in the manual about a lot of things: for example, the utilities mysqldumpslow and mysqlcheck.

Sincerely, Steven Roussey http://Network54.com/
RE: CPU Load extremely high with MySQL 4.0.4
OK, it seems to be working. The load is not spiraling out of control. :)

Sincerely, Steven Roussey http://Network54.com/

-----Original Message-----
From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]

> Before I make a 4.0.5a release of the Linux binaries (and finally
> announce 4.0.5), could someone please test if the following package
> solves the load problem?
MySQL saturating Fast Ethernet after 3500 qps?
Hi all,

Is anyone with experience running MySQL at over 3000 queries per second willing to shed some light for me? By my guess, full-duplex Fast Ethernet can only handle 8000-1 packets per second. Doing 3000+ queries per second then ought to be saturating the network, right? The reason I ask is that the client is seeing slow queries, but the MySQL server is not. Will Gigabit Ethernet alleviate this problem? TIA!

Sincerely, Steven Roussey http://Network54.com/
Re: MySQL + PHP : Commands out of sync; You can't run this command now
Comment out the body of _restore_connection_defaults in the php_mysql.c file in PHP. Recompile, etc. Or don't use persistent connections. This should be fixed in PHP 4.3. See http://bugs.php.net/?id=19529 for more info.

Sincerely, Steven Roussey http://Network54.com/
Re: CPU Load extremely high with MySQL 4.0.4
> > With 3.23.53a everything works fine.
>
> MySQL 4.0.4 was compiled against a wrongly built glibc, which leads to
> the above mentioned behaviour. Please download MySQL 4.0.5, which has
> just been released and should fix this problem:

:( Sadly, 4.0.5 does _not_ fix this issue. I just tried it and our load went from 1 to 145 in 20 seconds...

Sincerely, Steven Roussey http://Network54.com/
Re: How much data can MySQL push out?
> They've been using Replication for a long time at Slashdot.

Really? If I were to go by their Alexa traffic rating of 1390, I'd think it would not even be necessary. Our traffic ranking is 859 and we don't need to do anything like that. Maybe Alexa is not a good measure. :( I like our ranking there... I'll check what our bandwidth utilization is. We don't have a problem yet.

Sincerely, Steven Roussey http://Network54.com/
RE: Tips for LARGE system
> handling from 3M to 20M records/day with potentially a lot of
> processing, live inserts/updates etc.

How many records do you foresee? (There are some things to look up in the manual to give MySQL a hint at the total size of the table so it can start using longer internal pointers.) How big are the records? What kind of processing? Are you separating your transactional system from the reporting system? What is the pattern of use of the records?

Unfortunately, your information is not enough to go on. For example, we do 200,000,000+ queries a day and barely break a load of 1.00 on Linux. But we have had a lot of time to optimize things.

> 3) Are there any guidelines for estimating more precisely what hardware
> I will need?

Will your application be processor or disk bound? I'd guess disk bound, so get lots of RAM (fast RAM, like DDR or whatever) and as many fast disks as you can afford, and stripe them. Personally, I built our machine from parts at Fry's Electronics. I think you could put something together for a reasonable amount of money. Do note, it has been my experience that hardware costs are inversely exponentially proportional to the optimization of the database structure and queries. Fully normalizing and then selectively denormalizing will make huge differences in throughput. As will optimizing queries. (Optimizing specifically for the database in use helps tons too.)

> 4) I cannot foresee all the possible growth, nor will the initial
> budget be huge. Is it then worth planning for building out / clustering
> for some redundancy and some load balancing upfront?

Other people can tell you more about replication and how it is different from clustering or load balancing. And you can use it for backup, in a way, too. It depends on your requirements, though. If things are partitionable, then some thinking about that upfront will do you a world of good later. Depends on your application, though.

> 7) I'd also appreciate any input from people who have used official
> mysql support before.

We have used their support and it was excellent.

Sincerely, Steven Roussey http://Network54.com/
Re: Updated MySQL 3.23.53a binaries now available
> o The Intel Linux binaries (normal and Max) have now been built against
> a glibc with static-nss enabled again. This should resolve the problem
> with mysqld failing to change to another user ID, as well as the
> notorious load issue that has been plaguing previous releases. We have
> tested this binary on different Linux distributions and were not able
> to reproduce these problems anymore. Please contact me directly if you
> still observe any abnormal behaviour on Linux with this binary.

Load issue resolved. Whew! Looking forward to 4.0.5 with the same fix. Such a relief! Thanks for all the work to resolve this!

Sincerely, Steven Roussey http://Network54.com/?pp=e
Changes to PHP/MySQL extension
I have had problems with PHP and MySQL and have made some changes.

First off, in http://bugs.php.net/bug.php?id=19529, MySQL is complaining about things not being cleaned up. This is because any query that returns results (which ones don't -- any?) must have them fetched. In the case of an unbuffered query, we need to eat the rest of the rows before exiting (like we do when a new query is run while an old unbuffered query was not finished). I removed the warning in this case, but you all can change it as you please.

The case that is hitting me (and EVERYONE out there using persistent connections with current revs) is that there is a ROLLBACK, but there is nothing to get the results of the rollback. This means that the next script to use this persistent connection will generally fail on the first query, but might do all right on the others. For this reason, I recommend anyone using PHP 4.2.3 (and maybe later versions as well) turn off persistent connections for now.

Also, in CVS there is something to reset AUTOCOMMIT. It also did not clean up, and that causes additional issues. I removed it rather than fixed it. Who says AUTOCOMMIT=1 should be the default for a certain server? That is user-configurable on the server side. Personally, I like the idea of resetting all the variables that might have been changed (including that one). There are a lot. No good way to do it right now.

Oh, what else? Ah... the code in CVS for mysql.connection.timeout. That rocks! However, it sets the default to zero and then checks for -1 as a sign not to include the option. Oops. So the default timeout value is set to nothing when the user doesn't do anything. That is an unpredictable change in behavior.

I have some changes below. This is my first time even looking at this code, so look for any mistakes.
static int _restore_connection_defaults(zend_rsrc_list_entry *rsrc TSRMLS_DC)
{
	php_mysql_conn *link;
	char query[128];
	char user[128];
	char passwd[128];

	/* check if it's a persistent link */
	if (Z_TYPE_P(rsrc) != le_plink)
		return 0;
	link = (php_mysql_conn *) rsrc->ptr;

	if (link->active_result_id) do {
		int type;
		MYSQL_RES *mysql_result;

		mysql_result = (MYSQL_RES *) zend_list_find(link->active_result_id, &type);
		if (mysql_result && type == le_result) {
			if (!mysql_eof(mysql_result)) {
				while (mysql_fetch_row(mysql_result));
			}
			zend_list_delete(link->active_result_id);
			link->active_result_id = 0;
		}
	} while (0);

	/* rollback possible transactions */
	strcpy(query, "ROLLBACK");
	if (mysql_real_query(&link->conn, query, strlen(query)) != 0) {
		MYSQL_RES *mysql_result = mysql_store_result(&link->conn);
		mysql_free_result(mysql_result);
	}

	/* unset the currently selected db */
#if MYSQL_VERSION_ID > 32329
	strcpy(user, (char *) (&link->conn)->user);
	strcpy(passwd, (char *) (&link->conn)->passwd);
	mysql_change_user(&link->conn, user, passwd, "");
#endif
	return 0;
}

And change the two copies of this:

	if (connect_timeout != -1)

to:

	if (connect_timeout >= 0)

My 2 cents for the day.

Sincerely, Steven Roussey http://Network54.com/?pp=e
FW: Bug #19529 [Com]: Occational Commands out of sync errors
I thought I'd cross post this thread here. Maybe someone with MySQL API experience can assist. Is there a way to reset a connection? That is, send one command that can keep a connection open, but reset all the variables and rollback unfinished transactions? I don't use transactions, but I can see where this would be part of the need. Sincerely, Steven Roussey http://Network54.com/?pp=e -Original Message- From: PHP Bug Database [mailto:[EMAIL PROTECTED]] Sent: Monday, October 07, 2002 5:03 am To: [EMAIL PROTECTED] Subject: Bug #19529 [Com]: Occational Commands out of sync errors ATTENTION! Do NOT reply to this email! To reply, use the web interface found at http://bugs.php.net/?id=19529 ID: 19529 Comment by: [EMAIL PROTECTED] Reported By: [EMAIL PROTECTED] Status: Feedback Bug Type: MySQL related Operating System: Linux 2.4.18 PHP Version: 4.2.3 Assigned To: georg New Comment: Removing the ROLLBACK seems to have fixed the problem for me too. Reading the MySQL docs on the error message in question, it would seem that just adding a mysql_free_result call before executing the ROLLBACK query might fix things. I noticed that the PgSQL extension does something similar when rolling back transactions at shutdown. Previous Comments: [2002-10-06 14:51:51] [EMAIL PROTECTED] Currently, neither mysql 4.x or 3.x supports enough functionality for handling some problems when using persistent connections, e.g. restoring session variables to global variables, restoring auto_commit, unsetting user variables etc. The probably error is not MySQL-version dependend. The 4.x clientlib is 100% backwards compatible to MySQL 3.x ( .23). For some more information, it would be useful, if you could send me some sources... assigned to myself. Georg [2002-10-06 12:23:42] [EMAIL PROTECTED] The problem seems to have disappeared when the ROLLBACK was removed. [2002-10-06 11:02:28] [EMAIL PROTECTED] Scratch the above; I was looking at the current CVS version. 
In 4.2.3 the function is still called _rollback_mysql_transactions and only does the ROLLBACK, nothing more. I just disabled the ROLLBACK as well, so if neither Erik nor myself see any more errors, I think it's safe to assume that's where the problem is. But shouldn't it always be safe to execute a rollback in MySQL, even if transactions aren't in use? [2002-10-06 10:14:36] [EMAIL PROTECTED] I agree that the most plausible cause of the problem is in the _restore_connection_defaults function, which is the one responsible for doing the ROLLBACK. That theory would seem to be supported by the fact that the problem disappears when disabling persistent connections (since the function does nothing when the connection isn't persistent). What I don't get is why executing ROLLBACK on a straight-MyISAM database would cause problems. Is that a bug in MySQL? If the ROLLBACK isn't the problem, it would have to be either the SET AUTOCOMMIT=1 or the stuff about unsetting the selected DB. Since my application uses only one DB and no transactions, I'm going to just disable the entire function and recompile. [2002-10-06 09:58:10] [EMAIL PROTECTED] What happens is that the mysql connection that gets the problem is locked up. It won't ever work again. After a while all of our mysql connections showed the problem making the site completely unusable. As of v4.2.3 php does a ROLLBACK each time connection is re-used. ROLLBACKS will give an error if updates have been done on a MyISAM table, I have removed that query now. Will see if that solves it. It looks like all us are running quite many queries per second? I see the problem on a site running 60+ queries per second. The remainder of the comments for this report are too long. 
To view the rest of the comments, please view the bug report online at http://bugs.php.net/19529 - Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
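For anyone who lands on this thread later: the mysql_free_result-before-ROLLBACK idea discussed above can be sketched with the MySQL C API. This is only a sketch of the ordering, not the actual PHP extension code; the helper name and surrounding structure are invented here, it requires libmysqlclient to build, and error handling is minimal.

```c
#include <stdio.h>
#include <mysql.h>   /* requires libmysqlclient headers/library */

/* Sketch: before re-using a persistent connection, drain any pending
 * result set first. Issuing a new command (such as ROLLBACK) while a
 * result set is still unfetched is what produces the client-side
 * "Commands out of sync" error. */
static int reset_connection(MYSQL *conn, MYSQL_RES *pending)
{
    if (pending != NULL)
        mysql_free_result(pending);        /* free before any new command */

    if (mysql_query(conn, "ROLLBACK") != 0) {   /* now safe to send */
        fprintf(stderr, "ROLLBACK failed: %s\n", mysql_error(conn));
        return -1;
    }
    return 0;
}
```

The point is purely the ordering: free (or fetch to completion) any outstanding result set before sending the next statement on the same connection.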
MYSQL_OPT_CONNECT_TIMEOUT
If MYSQL_OPT_CONNECT_TIMEOUT is set before mysql_connect() or mysql_real_connect() and the value is set to zero, what is the expected behavior? (Reason: PHP now does this as the default.) Sincerely, Steven Roussey http://Network54.com/?pp=e
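For context, here is roughly how that option is set through the C API. This is a sketch: the wrapper function is invented, it assumes libmysqlclient is available, and the open question above is precisely what a value of 0 means here.

```c
#include <stddef.h>
#include <mysql.h>   /* requires libmysqlclient headers/library */

/* Sketch: the timeout option must be set between mysql_init() and the
 * actual connect call. seconds == 0 is the case PHP now passes by
 * default; whether that means "no timeout" or "OS default" is the
 * question raised above. */
MYSQL *connect_with_timeout(const char *host, const char *user,
                            const char *pass, unsigned int seconds)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return NULL;

    mysql_options(conn, MYSQL_OPT_CONNECT_TIMEOUT, (const char *)&seconds);

    return mysql_real_connect(conn, host, user, pass,
                              NULL /* db */, 0 /* port */,
                              NULL /* socket */, 0 /* client flags */);
}
```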
MySQL errors in newer PHP 4.2.3
Since updating to 4.2.3, we have been getting intermittent errors of Commands out of sync. Anyone else see this? Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: MySQL errors in newer PHP 4.2.3
Hmm, do you use pconnect or connect? I noticed someone changed our PHP script to use persistent connections (likely why the server is running slower). I am wondering if a cancelled connection is being reused, or if it's just a more general bug in the mysql client code in PHP.

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
From: Jocelyn Fournier [mailto:[EMAIL PROTECTED]]

Hi, Same problem for me, although it was already here with 4.2.0 (well, it seems to also be a high-QPS problem...). The problem seems to disappear with an Apache restart, but sometimes appears again randomly. Regards, Jocelyn

----- Original Message -----
From: Steven Roussey [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Mysql [EMAIL PROTECTED]
Sent: Friday, September 20, 2002 2:14 AM
Subject: MySQL errors in newer PHP 4.2.3

Since updating to 4.2.3, we have been getting intermittent errors of Commands out of sync. Anyone else see this? Sincerely, Steven Roussey http://Network54.com/?pp=e
Re: upgrade 3.23 - 4.0.3 question
- The real cause for the question lies in the problem that occurred after I upgraded to 4.0.3 WITHOUT recompiling the DBI module: the new MySQL version worked without any visible errors, however it was unable to process more than about 30 queries per second, while the normal rate for this server is more than 100 queries/sec. How can it be so slow? Evidently something went wrong, but what could it be? -

Would you be willing to try using 4.0.2 instead of 4.0.3? If you do not have the problem with 4.0.2, then the issue lies in something in 4.0.3. At least I have a problem with 4.0.3 in that it works fine for a few queries, but when I let it go at a normal 3000 queries per second it choked up and died (much like 3.23.51 -- by the way, which 3.23.x are you using now?). Thanks!

Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: MySQL 4.0 and LIKE %string%
That optimization is for fields without an index AFAIK. Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: 4.0.3 crashes every 5 seconds.
I am right now looking over the logs. I had the regular log on and the binlog on to test it. The binlog is helpful -- I'm looking at the last queries in each (since a new one was created every time MySQL crashed). The only thing in common is that they are always (100%) an update or replace. Some examples:

update logins set expire=1030501300 where loginid=2961860;

and

SET TIMESTAMP=1030501283; update logins_plus set pageviews = pageviews+1 where transactionid = 1016040153 and loginid=54522 and period_begin='2002-08-13' and period_end = '2002-09-13';

and

replace into presence.chatrooms_users(chatroomid,login,msgtime) values ( 65637, 'joe', 1030501249 );

So maybe at write__9MYSQL_LOGP3THDPCcUil + 1210 there is a problem with data sent via update/replace. However, the server likely crashes before the log is written, so this is potentially meaningless... (One of the binlogs did not have any queries inside.) On the other hand, the log file (not binlog) shows a bunch of connections before the thread id goes back to 1. And just to make it more confusing, sometimes the last entry in a binlog can't be found in the log file. I wish there were a web-accessible version of the source so I could quickly go look at it (like PHP's lxr.php.net)...

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
From: Jocelyn Fournier [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 27, 2002 11:48 pm
To: [EMAIL PROTECTED]; Mysql
Subject: Re: 4.0.3 crashes every 5 seconds.

Hi, I'm happy to know I'm not the only one who has this segfault problem. Do you have any idea what kind of queries cause those segfaults? (I unfortunately also have a nil pointer) Thanks and regards, Jocelyn

----- Original Message -----
From: Steven Roussey [EMAIL PROTECTED]
To: Mysql [EMAIL PROTECTED]
Sent: Wednesday, August 28, 2002 5:02 AM
Subject: 4.0.3 crashes every 5 seconds.

I downloaded a copy of 4.0.3 from the download page and it crashes every five seconds!
Resolving the stack trace gives:

0x806ebdb handle_segfault__Fi + 447
0x8261718 pthread_sighandler + 184
0x828cf8f memcpy + 31
0x80a89ce write__9MYSQL_LOGP3THDPCcUil + 1210
0x80783fd dispatch_command__F19enum_server_commandP3THDPcUi + 4105
0x80773e5 do_command__FP3THD + 93
0x8076c2e handle_one_connection__FPv + 674
0x825edfc pthread_start_thread + 204
0x82941fa thread_start + 4

and

0x806ebdb handle_segfault__Fi + 447
0x8261718 pthread_sighandler + 184
0x828cf97 memcpy + 39
0x80a89ce write__9MYSQL_LOGP3THDPCcUil + 1210
0x80783fd dispatch_command__F19enum_server_commandP3THDPcUi + 4105
0x80773e5 do_command__FP3THD + 93
0x8076c2e handle_one_connection__FPv + 674
0x825edfc pthread_start_thread + 204
0x82941fa thread_start + 4

keep repeating. Unfortunately, the query field is (nil). :( Nice to have the sym file... I guess I rushed to download too quickly... I'll write back with more info soon.

Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: 4.0.3 crashes every 5 seconds.
It did not occur to me until after I sent the last email that the binlog does not log every query. For our site, it does not even log every database. So I'm going to look a bit harder at the other log file... Sincerely, Steven Roussey http://Network54.com/?pp=e
How to supply two database names
I have two database names that I would like the binlog to ignore; how do I do that? binlog-ignore-db=db1 works OK for db1, but binlog-ignore-db=db1 db2 and binlog-ignore-db=db1,db2 do not work. Sincerely, Steven Roussey http://Network54.com/?pp=e
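As far as I know, this option takes a single database name per occurrence, so covering two databases means repeating the option line, e.g. in my.cnf (db1/db2 stand in for the real database names):

```ini
[mysqld]
# one database per line; the option may be given multiple times
binlog-ignore-db = db1
binlog-ignore-db = db2
```

The same repeat-the-option pattern applies on the command line (--binlog-ignore-db=db1 --binlog-ignore-db=db2).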
RE: 4.0.3 crashes every 5 seconds.
Also: OS is Red Hat Linux 7.3, glibc is 2.2.5, kernel is 2.4.18-10smp, two Athlon MPs and 1.5 GB RAM.

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 28, 2002 12:49 am
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Jocelyn Fournier
Subject: Re: 4.0.3 crashes every 5 seconds.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi, On Wednesday 28 August 2002 05:02, Steven Roussey wrote:

I downloaded a copy of 4.0.3 from the download page and it crashes every five seconds! Resolving the stack trace gives: 0x806ebdb handle_segfault__Fi + 447 0x8261718 pthread_sighandler + 184 0x828cf8f memcpy + 31 0x80a89ce write__9MYSQL_LOGP3THDPCcUil + 1210 0x80783fd dispatch_command__F19enum_server_commandP3THDPcUi + 4105 0x80773e5 do_command__FP3THD + 93 0x8076c2e handle_one_connection__FPv + 674 0x825edfc pthread_start_thread + 204 0x82941fa thread_start + 4 and 0x806ebdb handle_segfault__Fi + 447 0x8261718 pthread_sighandler + 184 0x828cf97 memcpy + 39 0x80a89ce write__9MYSQL_LOGP3THDPCcUil + 1210 0x80783fd dispatch_command__F19enum_server_commandP3THDPcUi + 4105 0x80773e5 do_command__FP3THD + 93 0x8076c2e handle_one_connection__FPv + 674 0x825edfc pthread_start_thread + 204 0x82941fa thread_start + 4 keep repeating. Unfortunately, the query field is (nil). :( Nice to have the sym file... I guess I rushed to download too quickly... I'll write back with more info soon.

Ouch! Thanks for the report. Can you please give us some more details? Which operating system (kernel, glibc), which package did you use? Fortunately I did not send out the announcement yet... Bye, LenZ - -- For technical support contracts, visit https://order.mysql.com/?ref=mlgr __ ___ ___ __ / |/ /_ __/ __/ __ \/ /Mr.
Lenz Grimmer [EMAIL PROTECTED] / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Production Engineer /_/ /_/\_, /___/\___\_\___/ Hamburg, Germany ___/ www.mysql.com

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE9bIB6SVDhKrJykfIRAtPKAJ9cpRmfSenXKY9lJUX5R2rp+lbTMQCfYg9Z eRdHC8jtmR+Tn9qvV0CchM8= =v1qU
-----END PGP SIGNATURE-----
RE: 4.0.3 crashes every 5 seconds.
Lastly, we use the .tar.gz file of the Linux binary made by MySQL AB. Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
OS is Red Hat Linux 7.3, glibc is 2.2.5, kernel is 2.4.18-10smp, two Athlon MPs and 1.5 GB RAM.
4.0.3 crashes every 5 seconds.
I downloaded a copy of 4.0.3 from the download page and it crashes every five seconds! Resolving the stack trace gives:

0x806ebdb handle_segfault__Fi + 447
0x8261718 pthread_sighandler + 184
0x828cf8f memcpy + 31
0x80a89ce write__9MYSQL_LOGP3THDPCcUil + 1210
0x80783fd dispatch_command__F19enum_server_commandP3THDPcUi + 4105
0x80773e5 do_command__FP3THD + 93
0x8076c2e handle_one_connection__FPv + 674
0x825edfc pthread_start_thread + 204
0x82941fa thread_start + 4

and

0x806ebdb handle_segfault__Fi + 447
0x8261718 pthread_sighandler + 184
0x828cf97 memcpy + 39
0x80a89ce write__9MYSQL_LOGP3THDPCcUil + 1210
0x80783fd dispatch_command__F19enum_server_commandP3THDPcUi + 4105
0x80773e5 do_command__FP3THD + 93
0x8076c2e handle_one_connection__FPv + 674
0x825edfc pthread_start_thread + 204
0x82941fa thread_start + 4

keep repeating. Unfortunately, the query field is (nil). :( Nice to have the sym file... I guess I rushed to download too quickly... I'll write back with more info soon.

Sincerely, Steven Roussey http://Network54.com/?pp=e
4.0.2 mysqld.sym.gz is empty
I can't look up a MySQL 4.0.2 crash since the mysqld.sym.gz is empty! Can someone at MySQL email me the file, please. Pretty please. Thanks! Sincerely, Steven Roussey http://Network54.com/?pp=e
FULLTEXT out of slow query log
Is there a way to not have mysql put fulltext searches in the slow query log? Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: Load problems with 3.23.51
Just a note: I tried MySQL 4.0.2 and it works fine. It seems to be only 3.23.51 as built by MySQL itself that has the issue. Releases before, and now a release after (albeit a 4.0.x version), work fine.

Sincerely, Steven Roussey http://Network54.com/?pp=e

I have MySQL 3.23.47 running on our server. I skipped 48 through 50 and tried 51. No dice. It does not handle load; CPU and the load average go through the roof. I'm using Red Hat Linux 7.2 and the official MySQL binaries. It appears to be slow to connect, causing a 0.5 to 1.0 second delay on connection. Using persistent connections from PHP does not make much of a difference. I thought it might be the hostname lookup changes, so I chose skip-grant-tables. This doesn't actually skip the hostname lookup though and had no effect. Most queries are shorter than 1 second, so this problem causes catastrophic problems by making queries last multiple times longer, which makes the number of concurrent queries jump exponentially. This is a bad thing. And sadly makes 3.23.51 unusable.
RE: Load problems with 3.23.51
Just an update: I installed a fresh copy of Red Hat 7.3 (SMP Athlon) and a new copy of MySQL 3.23.51, but the problem remains. Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: Load problems with 3.23.51
?     root   crond    - 040 nanosleep
1044  ?      xfs      xfs - 140 do_select
1080  ?      daemon   atd - 040 nanosleep
1112  ?      root     miniserv - 040 do_select
1117  tty2   root     mingetty - 100 read_chan
1118  tty3   root     mingetty - 100 read_chan
1119  tty4   root     mingetty - 100 read_chan
1120  tty5   root     mingetty - 100 read_chan
1121  tty6   root     mingetty - 100 read_chan
1285  tty1   root     mingetty - 100 read_chan
14125 ?      root     sshd - 140 do_select
409   ?      root     php - 000 nanosleep
9097  ?      root     sshd - 140 do_select
9099  pts/0  root     bash - 100 wait4
9369  pts/0  root     safe_mys - 100 wait4
9397  pts/0  mysql    mysqld - 100 do_select
9399  pts/0  mysql    mysqld - 040 do_poll
9400  pts/0  mysql    mysqld - 040 rt_sigsuspend
9401  pts/0  mysql    mysqld - 040 tcp_data_wait
9402  pts/0  mysql    mysqld - 040 tcp_data_wait
9403  pts/0  mysql    mysqld - 040 tcp_data_wait
9404  pts/0  mysql    mysqld - 040 tcp_data_wait
9405  pts/0  mysql    mysqld - 040 tcp_data_wait
9406  pts/0  mysql    mysqld - 040 tcp_data_wait
9407  pts/0  mysql    mysqld - 040 tcp_data_wait
9408  pts/0  mysql    mysqld - 040 tcp_data_wait
9409  pts/0  mysql    mysqld - 040 tcp_data_wait
9410  pts/0  mysql    mysqld - 040 tcp_data_wait
9411  pts/0  mysql    mysqld - 040 tcp_data_wait
9412  pts/0  mysql    mysqld - 040 tcp_data_wait
9413  pts/0  mysql    mysqld - 040 tcp_data_wait
9414  pts/0  mysql    mysqld - 040 tcp_data_wait
9415  pts/0  mysql    mysqld - 040 rt_sigsuspend
9416  pts/0  mysql    mysqld - 040 tcp_data_wait
9417  pts/0  mysql    mysqld - 040 tcp_data_wait
9418  pts/0  mysql    mysqld - 040 tcp_data_wait
9419  pts/0  mysql    mysqld - 040 tcp_data_wait
9420  pts/0  mysql    mysqld - 040 tcp_data_wait
9421  pts/0  mysql    mysqld - 040 tcp_data_wait
more of same
9854  pts/0  mysql    mysqld - 040 rt_sigsuspend
9855  pts/0  mysql    mysqld - 040 rt_sigsuspend
9856  pts/0  mysql    mysqld - 040 tcp_data_wait
9857  pts/0  mysql    mysqld - 040 tcp_data_wait
9858  pts/0  mysql    mysqld - 040 tcp_data_wait
9859  pts/0  mysql    mysqld - 040 rt_sigsuspend
9860  pts/0  mysql    mysqld - 040 tcp_data_wait
9861  pts/0  mysql    mysqld - 040 rt_sigsuspend
9862  pts/0  mysql    mysqld - 040 tcp_data_wait
9865  pts/0  mysql    mysqld - 040 rt_sigsuspend
9866  pts/0  mysql    mysqld - 040 tcp_data_wait
9870  pts/0  mysql    mysqld - 040 tcp_data_wait
9873  pts/0  mysql    mysqld - 040 rt_sigsuspend
9874  pts/0  mysql    mysqld - 040 tcp_data_wait
9875  pts/0  mysql    mysqld - 040 rt_sigsuspend
9876  pts/0  mysql    mysqld - 040 tcp_data_wait
9878  pts/0  mysql    mysqld - 040 rt_sigsuspend
9885  pts/0  root     ps - 100 -
9886  pts/0  root     more - 000 read_chan

This is obviously different. 3.23.47 is in tcp_data_wait. 3.23.51 is usually doing nothing (!) or in suspend. Odd.

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
From: Michael Bacarella [mailto:[EMAIL PROTECTED]]
Sent: Saturday, June 29, 2002 11:20 pm
To: Steven Roussey
Subject: Re: Load problems with 3.23.51

Can you humor me and try running the command: ps -eo pid,tt,user,fname,tmout,f,wchan on that system a few times? Mine was far too hosed for this command to ever get off the ground. I think that'd help determine if there's some syscall that's sucking up all that CPU. -M
RE: Broken build of 3.23.51?
From: Michael Bacarella [mailto:[EMAIL PROTECTED]]

Not the case for me. I tried the prebuilt binary and then my own built binary within 30 minutes of each other (the time of day when we had the lowest loads) and there was a clear difference: unusable (40 loadavg) vs. humming along smoothly (0.5 loadavg). I saw my load go over 340 (which I did not think was possible) where it usually is 5. I'm not using the max version as you are, so that is not part of it. The only thing I can think of is the glibc stuff that MySQL does specially for their releases. Also, are you using Linux or something else?

Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: Load problems with 3.23.51
| ... | ... | ... | ... | ... | ... | cleaning up | select ... |
| 242 | webuser | webserver | main | Init DB | 19 | | |
| 243 | webuser | webserver | main | Query | 19 | | |
| 246 | webuser | webserver | logging | Sleep | 2 | | |
| 247 | webuser | webserver | logging | Init DB | 2 | cleaning up | |
| 249 | webuser | webserver | logging | Query | 1 | closing tables | update ... |
| 250 | webuser | webserver | logging | Sleep | 43 | | |
| 256 | webuser | webserver | main | Sleep | 2 | | |
| 259 | webuser | webserver | main | Init DB | 19 | | |
| 262 | webuser | webserver | main | Init DB | 19 | | |
| 264 | webuser | webserver | logging | Init DB | 19 | | |
| 267 | webuser | webserver | main | Init DB | 19 | | |
| 270 | webuser | webserver | main | Init DB | 19 | | |
| 274 | webuser | webserver | main | Query | 19 | | |
| 275 | webuser | webserver | logging | Sleep | 0 | | |
| 276 | webuser | webserver | logging | Init DB | 10 | | |
| 278 | webuser | webserver | logging | Query | 1 | closing tables | update ... |
| 280 | webuser | webserver | main | Query | 1 | closing tables | select ... |
| 285 | webuser | webserver | logging | Init DB | 6 | cleaning up | |
| 288 | webuser | webserver | logging | Query | 1 | closing tables | update ... |
| 295 | webuser | webserver | logging | Init DB | 2 | cleaning up | |
| 296 | webuser | webserver | | Connect | | login | |
| 300 | root | localhost | | Query | 0 | | show processlist |
| 301 | root | localhost | | Connect | | login | |
| 302 | unauthenticated user | webserver | | Connect | | login | |
| 303 | unauthenticated user | webserver | | Connect | | login | |
| 304 | unauthenticated user | webserver | | Connect | | Reading from net | |
| 305 | unauthenticated user | webserver | | Connect | | login | |
| 306 | unauthenticated user | webserver | | Connect | | login | |
| 307 | unauthenticated user | webserver | | Connect | | Reading from net | |
| 308 | unauthenticated user | webserver | | Connect | | login | |
+-----+----------------------+-----------+---------+---------+------+------------------+------------------+

The process list was taken a little bit after the other (my ssh tool didn't have enough lines saved to get both simultaneously). Note the uptime in between.

Could you by any chance check, using the slow query log, if there is some specific query that is causing problems?

All queries seem to be slow. Not every query going in, but every type. Almost everything in there is for 7-8 seconds (0 for lock). Simple stuff like 'select col from table where id_col = 123' where id_col is an auto-inc primary key. Explain works fine (but slow too). BTW -- the smack tests use UNIX sockets, right? That test works fine, but the real load is from TCP connections, which dies. Could this have anything to do with it? Tomorrow I'll try doing my own compile with gcc 3.1. Note the other guy who had the same problem, which went away after he compiled it himself: http://marc.theaimsgroup.com/?l=mysql&m=102537522606976&w=2 In that case, I doubt he has the altered glibc compiled in. Could changes there have this effect?
Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: Load problems with 3.23.51
I tried 'skip-name-resolve' but it had no impact. :( So it may have nothing to do with name resolution. Here are the results in file RUN-mysql-Linux_2.4.16_0.13smp_i686: I'm going to run the tests on .47 next to see if there is any difference.

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
From: Michael Widenius [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 24, 2002 4:24 am
To: [EMAIL PROTECTED]
Cc: Steven Roussey; [EMAIL PROTECTED]
Subject: Re: Load problems with 3.23.51

Hi!

Jeremy == Jeremy Zawodny [EMAIL PROTECTED] writes:

Jeremy On Sat, Jun 22, 2002 at 05:25:59PM -0700, Steven Roussey wrote:

Hi all, I have MySQL 3.23.47 running on our server. I skipped 48 through 50 and tried 51. No dice. It does not handle load; CPU and the load average go through the roof. I'm using Red Hat Linux 7.2 and the official MySQL binaries. It appears to be slow to connect, causing a 0.5 to 1.0 second delay on connection. Using persistent connections from PHP does not make much of a difference. I thought it might be the hostname lookup changes, so I chose skip-grant-tables. This doesn't actually skip the hostname lookup though and had no effect. Most queries are shorter than 1 second, so this problem causes catastrophic problems by making queries last multiple times longer, which makes the number of concurrent queries jump exponentially. This is a bad thing. And sadly makes 3.23.51 unusable. Does anyone else note these types of issues?

Jeremy As another data point for you, I've got 3.23.51 running on our master
Jeremy quite well. The difference is that I built it from source (to get a
Jeremy critical InnoDB patch). I don't recall which compiler the MySQL folks
Jeremy used (and which glibc), but my source build used Debian Woody's gcc
Jeremy 2.95.4.

We are using gcc 2.95.3 and a patched glibc, the latter being one that we used in many builds before. This is the first email I have gotten saying that 3.23.51 would be slow.
Steven, could you try to run the MySQL benchmark suite on your machine and post me the results?

cd sql-bench
perl run-all-tests --log

The file I am interested in is the summary file named 'output/RUN-*'

Regards, Monty
RE: Load problems with 3.23.51
12.00 7.13 4.47 11.60 80
select_big_str 19.00 6.42 3.65 10.07 1
select_column+column 1.00 0.29 0.32 0.61 1
select_diff_key 76.00 0.25 0.03 0.28 500
select_distinct 4.00 0.77 0.23 1.00 800
select_group 21.00 0.90 0.21 1.11 2911
select_group_when_MANY_tables 27.00 1.31 0.43 1.74 1
select_join 1.00 0.22 0.07 0.29 100
select_key 77.00 44.41 8.29 52.70 20
select_key2 85.00 42.51 6.77 49.28 20
select_key2_return_key 82.00 40.64 6.48 47.12 20
select_key2_return_prim 83.00 41.88 7.19 49.07 20
select_key_prefix 87.00 43.34 7.64 50.98 20
select_key_prefix_join 3.00 1.31 0.84 2.15 100
select_key_return_key 76.00 40.18 6.97 47.15 20
select_many_fields 9.00 3.07 2.79 5.86 2000
select_query_cache 49.00 4.04 0.41 4.45 1
select_query_cache2 49.00 3.85 0.33 4.18 1
select_range 88.00 3.09 1.62 4.71 410
select_range_key2 10.00 3.91 0.74 4.65 25010
select_range_prefix 12.00 3.93 0.77 4.70 25010
select_simple 1.00 0.22 0.34 0.56 1
select_simple_join 1.00 0.24 0.09 0.33 500
update_big 11.00 0.00 0.00 0.00 10
update_of_key 13.00 2.29 1.18 3.47 5
update_of_key_big 7.00 0.02 0.01 0.03 501
update_of_primary_key_many_keys 11.00 0.03 0.00 0.03 256
update_with_key 58.00 10.90 7.23 18.13 30
update_with_key_prefix 18.00 4.10 2.43 6.53 10
wisc_benchmark 2.00 1.06 0.37 1.43 114
TOTALS 2098.00 436.22 126.65 562.87 2667247

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
From: Michael Widenius [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 24, 2002 4:24 am
To: [EMAIL PROTECTED]
Cc: Steven Roussey; [EMAIL PROTECTED]
Subject: Re: Load problems with 3.23.51

Hi!

Jeremy == Jeremy Zawodny [EMAIL PROTECTED] writes:

Jeremy On Sat, Jun 22, 2002 at 05:25:59PM -0700, Steven Roussey wrote:

Hi all, I have MySQL 3.23.47 running on our server. I skipped 48 through 50 and tried 51. No dice. It does not handle load; CPU and the load average go through the roof. I'm using Red Hat Linux 7.2 and the official MySQL binaries. It appears to be slow to connect, causing a 0.5 to 1.0 second delay on connection. Using persistent connections from PHP does not make much of a difference.
I thought it might be the hostname lookup changes, so I chose skip-grant-tables. This doesn't actually skip the hostname lookup though and had no effect. Most queries are shorter than 1 second, so this problem causes catastrophic problems by making queries last multiple times longer, which makes the number of concurrent queries jump exponentially. This is a bad thing. And sadly makes 3.23.51 unusable. Does anyone else note these types of issues?

Jeremy As another data point for you, I've got 3.23.51 running on our master
Jeremy quite well. The difference is that I built it from source (to get a
Jeremy critical InnoDB patch). I don't recall which compiler the MySQL folks
Jeremy used (and which glibc), but my source build used Debian Woody's gcc
Jeremy 2.95.4.

We are using gcc 2.95.3 and a patched glibc, the latter being one that we used in many builds before. This is the first email I have gotten saying that 3.23.51 would be slow. Steven, could you try to run the MySQL benchmark suite on your machine and post me the results?

cd sql-bench
perl run-all-tests --log

The file I am interested in is the summary file named 'output/RUN-*'

Regards, Monty
RE: Load problems with 3.23.51
I used the MySQL builds myself. Funny thing is that I use your tool 'mytop', which is very cool by the way, to watch things, and it pauses for about 5-8 seconds when refreshing with .51 and is almost instant with .47.

Sincerely, Steven Roussey http://Network54.com/?pp=e

-----Original Message-----
As another data point for you, I've got 3.23.51 running on our master quite well. The difference is that I built it from source (to get a critical InnoDB patch). I don't recall which compiler the MySQL folks used (and which glibc), but my source build used Debian Woody's gcc 2.95.4.

That could have something to do with it...
Load problems with 3.23.51
Hi all, I have MySQL 3.23.47 running on our server. I skipped 48 through 50 and tried 51. No dice. It does not handle load; CPU and the load average go through the roof. I'm using Red Hat Linux 7.2 and the official MySQL binaries. It appears to be slow to connect, causing a 0.5 to 1.0 second delay on connection. Using persistent connections from PHP does not make much of a difference. I thought it might be the hostname lookup changes, so I chose skip-grant-tables. This doesn't actually skip the hostname lookup though and had no effect. Most queries are shorter than 1 second, so this problem causes catastrophic problems by making queries last multiple times longer, which makes the number of concurrent queries jump exponentially. This is a bad thing. And sadly makes 3.23.51 unusable. Does anyone else note these types of issues?

Sincerely, Steven Roussey http://Network54.com/?pp=e
RE: MySQL 3.23.44 not using indexes
The corollary to the above quote is that MySQL cannot use indexes with an OR clause at the base level.

MySQL *does* use the index on a different server, though, it just doesn't use the index on that server. Compare the results of EXPLAIN queries on the two servers (after ANALYZE TABLE was run):

[Steven Roussey] Yes, I retract my corollary. MySQL cannot use indexes on an OR clause if there is no common prefix to the same index. It can have a base-level OR and use an index, but only if all the clauses in the OR use the same index (specifically, some prefix of the index). My bad. Always read the manual -- it's better than me!

[The situation that could be optimized in MySQL for some future TODO is the case where you would be asking it to use multiple indexes for the query.]

However, my guess is that either you need to run ANALYZE on the other table on the other server, or it is returning too many rows, so MySQL does a table scan because it determines that is faster. Indeed, the EXPLAIN tends to bear that out. Are the tables defined the same on both servers? Is the data the same? Is the query the same? Are both analyzed?
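The retraction above can be sketched with an invented table and indexes, purely for illustration (behavior as described for MySQL of the 3.23/4.0 era, before any multi-index optimization):

```sql
-- Invented schema for illustration:
-- CREATE TABLE t (a INT, b INT, c INT, KEY idx_ab (a, b), KEY idx_c (c));

-- Index usable: every OR branch constrains a prefix of the SAME index
-- (column a is a prefix of idx_ab).
SELECT * FROM t WHERE a = 1 OR a = 2;

-- Table scan: the branches would need two DIFFERENT indexes (idx_ab
-- and idx_c), which the optimizer of this era will not combine.
SELECT * FROM t WHERE a = 1 OR c = 2;

-- Compare the `key` column of the plans to see the difference:
EXPLAIN SELECT * FROM t WHERE a = 1 OR c = 2;
```

As with the thread's advice, run ANALYZE TABLE first and check EXPLAIN on both servers before concluding anything about index use.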