Re: Switching legacy tables from 4.0 to 4.1 -AND- from default to UTF-8 .... ADVICE ANYONE?
On 7/15/05, Bruce Dembecki [EMAIL PROTECTED] wrote: This process has worked for us, taking our latin1 4.0 databases and turning them into utf8 4.1 databases. UTF8 data we had already put in our 4.0 database, despite its latin1 encoding, was correctly exported out of 4.0 and correctly converted on its way into 4.1.

Wow! Really? This part amazes me. So the MySQL import process does the conversion of the special characters into the newly defined encoding format? (From latin1 to utf-8, in my case.) See, we do webhosting for clients around the world, and right now our default-encoded MySQL 4.0 databases have Swedish and Hebrew characters in them. I'm concerned that if I dumped them as latin1/default in 4.0, but then imported as utf-8 in 4.1, the non-ASCII characters would be imported in the wrong encoding. (Assuming, yes, that I would set our new 4.1 databases to do ALL utf-8 in /etc/my.cnf and gladly take the small performance/size hit.)

Oh, and one more thing: the conversion worked for us on our Hong Kong boards, where they have a lot of Japanese speakers, the Chinese speakers of course, and a lot of English messages, and even all-Chinese messages with email addresses in regular text... so yes, not only did it work for us, it worked for us with a multitude of different character sets in the very same table (even in the same column). Gotsta love utf8.

Best Regards, Bruce

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
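The "pass-through" effect Bruce describes can be sketched in a few lines of Python (a standalone illustration of the encoding mechanics, not MySQL itself): UTF-8 bytes stored in a latin1 column survive a dump and reload because latin1 maps every one of the 256 byte values to a character, so the bytes are never mangled as long as nothing reinterprets them.

```python
# Why UTF-8 bytes can survive inside a latin1 column: latin1 maps all
# 256 byte values to characters, so bytes -> latin1 text -> bytes is
# lossless, and a later utf8 import can read the original bytes back.
original = "Zürich 中文".encode("utf-8")      # real UTF-8 bytes

# What a latin1 4.0 server "thinks" it stores: mojibake text...
as_latin1_text = original.decode("latin-1")

# ...but re-encoding that text as latin1 recovers the exact bytes,
# which a 4.1 utf8 import then interprets correctly.
round_tripped = as_latin1_text.encode("latin-1")

assert round_tripped == original
print(round_tripped.decode("utf-8"))  # prints: Zürich 中文
```

This is also why the danger runs the other way: data that really is latin1 (Swedish characters entered as latin1 bytes) must be converted, not passed through, or it arrives in 4.1 as invalid utf8.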
Multitable selection
Hello, I have 10 tables:

table0: id, description, date
table1: id, description, date
...
table9: id, description, date

Every table has 1 000 000 rows. How can I select all rows from these 10 tables ordered by date? Which is faster: UNION, a temporary table, or something else? (mysql 4.1)

Sincerely, Michael, http://xoib.com/ http://3d2f.com/ http://qaix.com/ http://ryxi.com/ http://gyxe.com/ http://gyxu.com/ http://xywe.com/ http://xyqe.com/
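The UNION approach can be sketched as follows (a toy using SQLite from Python rather than MySQL 4.1, with one row per table instead of a million; the table and column names follow the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create table0..table9 with the schema from the post, one sample row each.
for i in range(10):
    cur.execute(f"CREATE TABLE table{i} (id INTEGER, description TEXT, date TEXT)")
    cur.execute(f"INSERT INTO table{i} VALUES (?, ?, ?)",
                (i, f"row from table{i}", f"2005-07-{i + 1:02d}"))

# UNION ALL (not UNION) skips the implicit de-duplication pass;
# a single ORDER BY at the end sorts the combined result.
union_sql = " UNION ALL ".join(
    f"SELECT id, description, date FROM table{i}" for i in range(10)
)
rows = cur.execute(union_sql + " ORDER BY date").fetchall()
print(rows[0])  # the earliest date across all ten tables
```

With 10 x 1M rows the ORDER BY over the union cannot use a single index, so whichever approach is chosen, the sort itself dominates; UNION ALL at least avoids the duplicate-elimination cost that plain UNION adds.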
Performance difference? : [SELECT ON Merge TableAB] vs [(SELECT ON Table A) UNION (SELECT ON TABLE B) ]
Hi everyone, Take for example two identical tables A and B, and a MERGE table merging both of them. Would there be any performance advantage in doing a select (with a WHERE criterion on an indexed column) on the MERGE table, as opposed to doing a union of two selects with the same WHERE criterion? Many Thanks!
authentication - which hostname is used?
Hi, when an IP address is reverse-mapped to multiple names, which one is used by mysql for user authentication? Right now (4.1.11) it looks like the first record is used, which I'm not sure is good enough. Shouldn't mysql check all the returned names and see if one of them authenticates? /Per Jessen, Zürich
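A quick way to see what a reverse lookup actually hands back is the OS resolver interface (a standalone Python sketch; it probes the resolver, not MySQL's internal matching, which appears to use only the primary name):

```python
import socket

# gethostbyaddr returns (primary_name, alias_list, address_list).
# When a PTR lookup yields several names, only one lands in the
# "primary" slot -- the one MySQL 4.1 appears to compare against
# the Host column of its grant tables.
name, aliases, addresses = socket.gethostbyaddr("127.0.0.1")
print("primary:", name)
print("aliases:", aliases)
```

The output depends on the local resolver configuration, which is exactly Per's point: if the grant table lists one of the aliases rather than the primary name, authentication fails.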
Separate disk for logs, DRBD ...
Dear All,

Am planning on making MySQL write its data files to disk1 and log files to disk2. My questions are:

1. I know I can put the connection, slow, query, and InnoDB logs on disk2. Is it also possible (and advisable) to put the binary logs with them?
2. If disk2 is slower than disk1 (like when disk1 is 15k RPM while disk2 is 10k RPM), will it slow down any data-related operations?
3. I'm thinking of using DRBD to replicate changes on one MySQL master server to another box. Does anyone here have a similar setup? I plan on buying 2 identical servers with 3 disks each: one for the OS, another for data, and the last one for logs. If the logs disk crashes, will MySQL be able to write logs to the logs disk on the 2nd server via DRBD?

Regards, Danny
slow query
Hi, I have a problem with a query: it runs very slowly, and I am trying to analyze it with EXPLAIN. I have read that if I use STRAIGHT_JOIN in a SELECT, the server will read the tables in the order I specified them in the query. I have tried to EXPLAIN a query that uses STRAIGHT_JOIN, but the tables in the result don't appear in that order. Here is the query:

explain select STRAIGHT_JOIN sql_calc_found_rows
    a.pre_title, a.title, a.post_title,
    substring(a.body, 1, n.preview_size) as preview,
    a.body_hash, a.date, a.time, length(a.body) as size,
    a.id_categories, n.name as newspaper, sc.category,
    count(act.id) as visitors, count(aco.hash) as comments
from articles a
inner join newspapers n on (a.id_newspapers=n.id)
inner join sections s on (a.id_sections=s.id and n.id=s.id_newspapers)
inner join sections_categories sc on (a.id_categories=sc.id)
left join articles_count act on (a.body_hash=act.hash_articles)
left join articles_comments aco on (a.body_hash=aco.hash_articles)
where n.active_view='yes' and s.active_view='yes'
  and a.date between '2005-01-01' and '2005-12-31'
group by a.body_hash
order by visitors desc, comments desc, a.title
limit 0,30\G

The result is below, but I have no idea if there is something wrong with my query or what the problem could be. Thank you for your advice. Teddy

*** 1. row ***
id: 1  select_type: SIMPLE  table: aco  type: system
possible_keys: hash_articles  key: NULL  key_len: NULL  ref: NULL
rows: 0  Extra: const row not found
*** 2. row ***
id: 1  select_type: SIMPLE  table: a  type: index
possible_keys: PRIMARY,id_newspapers,date,id_categories,id_sections
key: body_hash  key_len: 32  ref: NULL  rows: 89285
Extra: Using where; Using temporary; Using filesort
*** 3. row ***
id: 1  select_type: SIMPLE  table: n  type: eq_ref
possible_keys: PRIMARY  key: PRIMARY  key_len: 4
ref: presa.a.id_newspapers  rows: 1  Extra: Using where
*** 4. row ***
id: 1  select_type: SIMPLE  table: s  type: eq_ref
possible_keys: PRIMARY,id_newspapers,active_view  key: PRIMARY  key_len: 8
ref: presa.n.id,presa.a.id_sections  rows: 1  Extra: Using where
*** 5. row ***
id: 1  select_type: SIMPLE  table: sc  type: eq_ref
possible_keys: PRIMARY  key: PRIMARY  key_len: 2
ref: presa.a.id_categories  rows: 1  Extra: Using where
*** 6. row ***
id: 1  select_type: SIMPLE  table: act  type: ref
possible_keys: hash_articles  key: hash_articles  key_len: 32
ref: presa.a.body_hash  rows: 1  Extra:

Teddy
Re: Why does query load faster after executing 2nd time? (Query Caching DISABLED, Key-Cache already fully loaded)
Filesystem buffering?

pow wrote: Hi everyone, I'm puzzling over why a query loads faster the second time I execute it. I am sure it is not query-cached, because that is off. I also made sure that the key that is used was already cached before I even executed the query the first time, so it is not as if the 2nd execution used the key cache and the first attempt did not. I am executing the queries directly at the MySQL command prompt. Is there some other type of caching that I am missing? Thanks! Pow
datetime/timestamps/4.1.12
Hello, I just upgraded to 4.1.12 from 4.0.22 and my timestamps changed format to %Y-%m-%d %H:%i:%s. So I added the following line to my my.cnf file:

datetime_format=%Y%m%d%H%i%s

which is the format I prefer. I restarted the server and my timestamps still appear as %Y-%m-%d %H:%i:%s. The 'show variables' command now confirms that the format is what I defined in my.cnf, but the output of the query does not change... What am I missing? Thanks, Michael
more queries vs a bigger one
Hi, I have a big query that involves searching across several tables, and I think this might be slower than issuing several smaller queries. What do you think, is this generally true? The query searches a big table, but it also counts the number of records from 2 other tables based on a criterion, and usually the intermediate result is a large number of records, though the final result is limited using LIMIT 0,30. So I am wondering... could it work faster if I don't COUNT(*) the records in those 2 tables, but instead get the result (only 30 records), and then for each record use a separate query to get that count? I don't know, could 31 queries work faster than a single bigger, complex query? Thank you. Teddy
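The two strategies can be sketched side by side (a SQLite-in-Python toy with invented table names, so only the shape of the comparison carries over to Teddy's schema): one big query with a grouped count, versus 30 rows fetched first and then one COUNT(*) per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE comments (article_id INTEGER)")
cur.executemany("INSERT INTO articles VALUES (?, ?)",
                [(i, f"article {i}") for i in range(100)])
cur.executemany("INSERT INTO comments VALUES (?)",
                [(i % 10,) for i in range(50)])

# Strategy 1: one big query -- count per article, then LIMIT.
big = cur.execute("""
    SELECT a.id, a.title, COUNT(c.article_id) AS n_comments
    FROM articles a LEFT JOIN comments c ON c.article_id = a.id
    GROUP BY a.id ORDER BY a.id LIMIT 30
""").fetchall()

# Strategy 2: fetch the 30 rows first, then one COUNT(*) per row
# (31 queries in total).
rows = cur.execute(
    "SELECT id, title FROM articles ORDER BY id LIMIT 30").fetchall()
small = [
    (rid, title,
     cur.execute("SELECT COUNT(*) FROM comments WHERE article_id = ?",
                 (rid,)).fetchone()[0])
    for rid, title in rows
]

assert big == small  # same answer either way
```

Which one is faster depends on the server: 31 round trips add per-query overhead, but the big query may count rows for records that LIMIT then throws away, so the per-row version can win when the grouped count touches far more data than the 30 final rows.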
RE: datetime/timestamps/4.1.12
Hello, Never mind - duh - datetime is not timestamp (one day I'll fulfill my promise to myself not to work on Sundays, when my 'duh' level is a bit higher). So from what I gather, the backward compatibility comes in the form of adding a +0 (string to int). This is most inconvenient and annoying. Any plans on rectifying this, or has anyone found a workaround, or are we left to go SIOH (hint: OH stands for "our hat")? :-}

Later... Michael

-----Original Message-----
From: DePhillips, Michael P
Sent: Sun 7/17/2005 11:05 AM
To: mysql@lists.mysql.com
Subject: datetime/timestamps/4.1.12

Hello, I just upgraded to 4.1.12 from 4.0.22 and my timestamps changed format to %Y-%m-%d %H:%i:%s. So I added the following line to my my.cnf file:

datetime_format=%Y%m%d%H%i%s

which is the format I prefer. I restarted the server and my timestamps still appear as %Y-%m-%d %H:%i:%s. The 'show variables' command now confirms that the format is what I defined in my.cnf, but the output of the query does not change... What am I missing? Thanks, Michael
RE: datetime/timestamps/4.1.12
Mike, have you tried creating a new table with a datetime field of some sort? Try adding some data and check whether the new format you specified in the my.cnf file is applied there. See if that works. -sam

-----Original Message-----
From: DePhillips, Michael P
Sent: Sun 7/17/2005 11:05 AM
To: mysql@lists.mysql.com
Subject: datetime/timestamps/4.1.12

Hello, Never mind - duh - datetime is not timestamp (one day I'll fulfill my promise to myself not to work on Sundays, when my 'duh' level is a bit higher). So from what I gather, the backward compatibility comes in the form of adding a +0 (string to int). This is most inconvenient and annoying. Any plans on rectifying this, or has anyone found a workaround, or are we left to go SIOH (hint: OH stands for "our hat")? :-}

Later... Michael
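Another workaround sketch, done entirely client-side (plain Python, not a MySQL feature, so it sidesteps the datetime_format question): take the '%Y-%m-%d %H:%i:%s' string the server returns and reformat it in the application. Note Python spells the minute specifier %M where MySQL uses %i.

```python
from datetime import datetime

# The server returns DATETIME values as 'YYYY-MM-DD HH:MM:SS' text;
# reformat on the client to the compact form preferred in the thread.
raw = "2005-07-17 11:05:00"  # sample value, not real data from the post
compact = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S").strftime("%Y%m%d%H%M%S")
print(compact)  # 20050717110500
```

This is the same idea as the +0 trick (forcing a different representation), just without relying on server-side backward-compatibility behavior.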
Re: more queries vs a bigger one
Hello, approximately how long does it take your big query to run as it is now? Are these queries appending to a table, or are they building a result (from a chain of queries)? Have you tried separating them out? Any difference? -sam

> Hi, I have a big query that involves searching across several tables, and I think this might be slower than issuing several smaller queries. What do you think, is this generally true? The query searches a big table, but it also counts the number of records from 2 other tables based on a criterion, and usually the intermediate result is a large number of records, though the final result is limited using LIMIT 0,30. So I am wondering... could it work faster if I don't COUNT(*) the records in those 2 tables, but instead get the result (only 30 records), and then for each record use a separate query to get that count? I don't know, could 31 queries work faster than a single bigger, complex query? Thank you. Teddy
Re: Why does query load faster after executing 2nd time? (Query Caching DISABLED, Key-Cache already fully loaded)
Hello there, I have seen this question before; I cannot remember exactly when, but it was a while ago. My advice is to go to the mysql.com website and search through the mailing list archive using a search term like "must execute query twice" or something to that effect. -sam

> Filesystem buffering? pow wrote: Hi everyone, I'm puzzling over why a query loads faster the second time I execute it. I am sure it is not query-cached, because that is off. I also made sure that the key that is used was already cached before I even executed the query the first time, so it is not as if the 2nd execution used the key cache and the first attempt did not. I am executing the queries directly at the MySQL command prompt. Is there some other type of caching that I am missing? Thanks! Pow
Re: mysql forgets user passwords
It's always a good idea to keep the thread on the list. That way, more people can help solve the problem, and others may benefit from the answers.

Chris Fonnesbeck wrote: Here is what happens:

mysql> GRANT ALL ON test.* TO [EMAIL PROTECTED] IDENTIFIED BY 'testing';
Query OK, 0 rows affected (0.09 sec)
mysql> Bye
Oliver:~ chris$ mysql mysql -u joe -p
Enter password:
ERROR 1045 (28000): Access denied for user 'joe'@'localhost' (using password: YES)

That doesn't tell us anything. You only granted joe access to the test db, so he cannot access the mysql db; hence this error is expected. In any case, logging in is not the test. I need you to do all the steps I asked, in order, to help us narrow this down. Could you:

1) Log in as root, run SHOW GRANTS FOR [EMAIL PROTECTED]; and send the output.
2) Restart the server in your usual way (you still haven't specified what that is).
3) Log in as root once more, enter SHOW GRANTS FOR [EMAIL PROTECTED]; and send that output.

If my suspicion is correct, joe won't exist after the restart. Even if I'm wrong, however, that will tell us something. I can't make any progress, though, until you do all three steps in order and report the results.

Michael
Re: Separate disk for logs, DRBD ...
> 1. I know I can put the connection, slow, query, and InnoDB logs on disk2. Is it also possible (and advisable) to put the binary logs with them?

We log to the OS disk and keep data on the data disks... binary logs go to the log disks. The exception here is our InnoDB logs: in the event of a crash, InnoDB needs them to rebuild itself, so they go with the data disks... but that's just us.

> 2. If disk2 is slower than disk1 (like when disk1 is 15k RPM while disk2 is 10k RPM), will it slow down any data-related operations?

It may be an issue for InnoDB logging, unless you move the InnoDB logs to the data disk like we do. It's not an issue for binary logs etc.; they are handled by their own threads.

> 3. I'm thinking of using DRBD to replicate changes on one MySQL master server to another box. Does anyone here have a similar setup? I plan on buying 2 identical servers with 3 disks each: one for the OS, another for data, and the last one for logs.

I don't know DRBD, so I can't speak to that, but I can say what we did with three disks and why... one disk for OS and logging, two disks mirrored for data. Now the why. Firstly, we are old-fashioned when it comes to IT-type stuff; everything is built redundant... mirrored disks give us a level of protection for our data. Next is performance... mirrored disks in most operating systems (including ours) will read from both disks like a striped disk, so reads are pretty much twice as fast as from a single disk. Clearly writes take normal amounts of time. So we get redundancy and double the read performance by using two disks mirrored for data. The OS isn't using much disk IO, so putting logs on a different disk than the OS doesn't buy you much in most cases. (As a side note, we also use a hardware RAID card to run the mirror, rather than the operating system... that way there's no performance hit on the OS in writing to the mirror, but again that's just us.)

Best Regards, Bruce
error: 'Can't create a new thread (errno 12).
Hello everyone, I have seen this message crop up recently when connections are denied:

mysqladmin: connect to server at 'localhost' failed
error: 'Can't create a new thread (errno 12). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug'

Unfortunately, I haven't been able to find anything in the manual relating to this or an OS bug. The server is running Debian 3.1, mysql 4.0.23, with kernel 2.6.7. I have the results of status and extended-status around the time of the outages:

/var/log/apache# mysqladmin status
Uptime: 63863  Threads: 14  Questions: 14224034  Slow queries: 8  Opens: 675  Flush tables: 1  Open tables: 64  Queries per second avg: 222.727

/var/log/apache# mysqladmin status
mysqladmin: connect to server at 'localhost' failed
error: 'Can't create a new thread (errno 12). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug'

/var/log/apache# mysqladmin status
Uptime: 64007  Threads: 10  Questions: 14261583  Slow queries: 8  Opens: 675  Flush tables: 1  Open tables: 64  Queries per second avg: 222.813

/var/log/apache# mysqladmin extended-status
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| Aborted_clients          | 23226      |
| Aborted_connects         | 28087      |
| Bytes_received           | 1193761149 |
| Bytes_sent               | 3902943211 |
| Com_admin_commands       | 2          |
| Com_alter_table          | 1          |
| Com_analyze              | 0          |
| Com_backup_table         | 0          |
| Com_begin                | 0          |
| Com_change_db            | 484513     |
| Com_change_master        | 0          |
| Com_check                | 73         |
| Com_commit               | 0          |
| Com_create_db            | 0          |
| Com_create_function      | 0          |
| Com_create_index         | 0          |
| Com_create_table         | 1          |
| Com_delete               | 826135     |
| Com_delete_multi         | 0          |
| Com_drop_db              | 0          |
| Com_drop_function        | 0          |
| Com_drop_index           | 0          |
| Com_drop_table           | 0          |
| Com_flush                | 0          |
| Com_grant                | 0          |
| Com_ha_close             | 0          |
| Com_ha_open              | 0          |
| Com_ha_read              | 0          |
| Com_insert               | 60970      |
| Com_insert_select        | 0          |
| Com_kill                 | 0          |
| Com_load                 | 0          |
| Com_load_master_data     | 0          |
| Com_load_master_table    | 0          |
| Com_lock_tables          | 0          |
| Com_optimize             | 0          |
| Com_purge                | 0          |
| Com_rename_table         | 0          |
| Com_repair               | 0          |
| Com_replace              | 21         |
| Com_replace_select       | 0          |
| Com_reset                | 0          |
| Com_restore_table        | 0          |
| Com_revoke               | 0          |
| Com_rollback             | 0          |
| Com_savepoint            | 0          |
| Com_select               | 5946078    |
| Com_set_option           | 4          |
| Com_show_binlog_events   | 0          |
| Com_show_binlogs         | 0          |
| Com_show_create          | 0          |
| Com_show_databases       | 2          |
| Com_show_fields          | 4          |
| Com_show_grants          | 0          |
| Com_show_keys            | 27         |
| Com_show_logs            | 0          |
| Com_show_master_status   | 0          |
| Com_show_new_master      | 0          |
| Com_show_open_tables     | 0          |
| Com_show_processlist     | 203        |
| Com_show_slave_hosts     | 0          |
| Com_show_slave_status    | 0          |
| Com_show_status          | 11         |
| Com_show_innodb_status   | 0          |
| Com_show_tables          | 14         |
| Com_show_variables       | 0          |
| Com_slave_start          | 0          |
| Com_slave_stop           | 0          |
| Com_truncate             | 0          |
| Com_unlock_tables        | 0          |
| Com_update               | 275842     |
| Connections              | 687129     |
| Created_tmp_disk_tables  | 0          |
| Created_tmp_tables       | 0          |
| Created_tmp_files        | 0          |
| Delayed_insert_threads   | 0          |
| Delayed_writes           | 0          |
| Delayed_errors           | 0          |
| Flush_commands           | 1          |
| Handler_commit           | 0          |
| Handler_delete           | 4455       |
| Handler_read_first       | 797        |
| Handler_read_key         | 6067818    |
| Handler_read_next        | 1286221638 |
| Handler_read_prev        | 1193582    |
| Handler_read_rnd         | 5116834    |
| Handler_read_rnd_next    | 3844277714 |
| Handler_rollback         | 0
RE: optimize a sql statement
Now I've reduced the SQL statement to something simpler. Here is the statement:

SELECT ol_i_id
FROM orders, order_line
WHERE order_line.ol_o_id = orders.o_id
GROUP BY ol_i_id

Here is the EXPLAIN output:

1, 'SIMPLE', 'orders', 'index', 'PRIMARY', 'PRIMARY', '4', '', 259231, 'Using index; Using temporary; Using filesort'
1, 'SIMPLE', 'order_line', 'ref', 'PRIMARY,order_line_ol_o_id', 'PRIMARY', '4', 'tpcw.orders.o_id', 1, ''

Can it be optimized?

From: 王 旭 [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Subject: optimize a sql statement
Date: Sat, 16 Jul 2005 18:24:15 +0800

Here is my SQL statement:

SELECT SUM(ol_qty) sumolqty, ol_i_id
FROM orders, order_line
WHERE orders.o_id = order_line.ol_o_id
  AND orders.o_id > (SELECT MAX(o_id)-1 FROM orders)
  AND NOT (order_line.ol_i_id = 5000)
  AND orders.o_c_id IN
      (SELECT o_c_id
       FROM orders, order_line
       WHERE orders.o_id = order_line.ol_o_id
         AND orders.o_id > (SELECT MAX(o_id)-1 FROM orders)
         AND order_line.ol_i_id = 5000)
GROUP BY ol_i_id
ORDER BY sumolqty DESC
LIMIT 50

Here is the EXPLAIN output:

1, 'PRIMARY', 'orders', 'range', 'PRIMARY', 'PRIMARY', '4', '', 19398, 'Using where; Using temporary; Using filesort'
1, 'PRIMARY', 'order_line', 'ref', 'PRIMARY,order_line_ol_i_id,order_line_ol_o_id', 'order_line_ol_o_id', '4', 'tpcw.orders.o_id', 1, 'Using where'
3, 'DEPENDENT SUBQUERY', 'orders', 'ref', 'PRIMARY,orders_o_c_id', 'orders_o_c_id', '5', 'func', 1, 'Using where; Using index'
3, 'DEPENDENT SUBQUERY', 'order_line', 'ref', 'PRIMARY,order_line_ol_i_id,order_line_ol_o_id', 'order_line_ol_o_id', '4', 'tpcw.orders.o_id', 1, 'Using where'
4, 'SUBQUERY', '', '', '', '', '', '', , 'Select tables optimized away'
2, 'SUBQUERY', '', '', '', '', '', '', , 'Select tables optimized away'

This SQL statement's performance is too bad. Please help me optimize it. Thanks!
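A common rewrite for queries like this in the MySQL 4.x era was to flatten the dependent IN subquery (which EXPLAIN shows being re-evaluated per row) into a join against a derived table, so the customer set is computed once. Here is a toy SQLite-in-Python sketch of that rewrite under a minimal invented dataset (only the shape of the query matches the post, with item 5000 as the pivot; the MAX(o_id) range condition is omitted for brevity), showing both forms agree:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (o_id INTEGER PRIMARY KEY, o_c_id INTEGER)")
cur.execute("CREATE TABLE order_line "
            "(ol_o_id INTEGER, ol_i_id INTEGER, ol_qty INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % 3) for i in range(1, 7)])
cur.executemany("INSERT INTO order_line VALUES (?, ?, ?)",
                [(1, 5000, 1), (1, 10, 2), (2, 20, 3),
                 (4, 5000, 1), (4, 30, 4), (6, 40, 5)])

# Original shape: dependent IN subquery, re-checked row by row.
subquery = """
    SELECT SUM(ol_qty) AS sumolqty, ol_i_id
    FROM orders JOIN order_line ON orders.o_id = order_line.ol_o_id
    WHERE ol_i_id <> 5000
      AND o_c_id IN (SELECT o_c_id
                     FROM orders JOIN order_line ON o_id = ol_o_id
                     WHERE ol_i_id = 5000)
    GROUP BY ol_i_id ORDER BY sumolqty DESC
"""

# Rewrite: materialize the buyer set once as a derived table and join.
joined = """
    SELECT SUM(ol_qty) AS sumolqty, ol_i_id
    FROM (SELECT DISTINCT o_c_id
          FROM orders JOIN order_line ON o_id = ol_o_id
          WHERE ol_i_id = 5000) AS buyers
    JOIN orders USING (o_c_id)
    JOIN order_line ON orders.o_id = order_line.ol_o_id
    WHERE ol_i_id <> 5000
    GROUP BY ol_i_id ORDER BY sumolqty DESC
"""

assert cur.execute(subquery).fetchall() == cur.execute(joined).fetchall()
```

In MySQL 4.1 the derived-table form avoids the DEPENDENT SUBQUERY rows in EXPLAIN; whether it actually wins depends on the size of the buyer set and the indexes available.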