Re: [PHP] Because you guys/gals/girls/women/insert pc term here are a smart lot
> So where's the advantage of VARCHAR?
>
> Less space on disc = less data retrieved from disc = faster data retrieval - sometimes. If you have small columns, a small number of rows, or both, then CHAR columns may be faster. If you have large columns of varying actual length, lots of rows, or both, then VARCHAR columns may be faster.

I still think a CHAR field would be faster than a VARCHAR because of the fixed row length (assuming everything else is fixed). Perhaps someone from the MySQL list could clarify...?

--
Richard Heyes
HTML5 Graphing for FF, Chrome, Opera and Safari: http://www.rgraph.org (Updated January 4th)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: [PHP] Because you guys/gals/girls/women/insert pc term here are a smart lot
On Wed, Jan 7, 2009 at 2:26 PM, Richard Heyes rich...@php.net wrote:
> I still think a CHAR field would be faster than a VARCHAR because of the fixed row length (assuming everything else is fixed). Perhaps someone from the MySQL list could clarify...?

Say that your column length goes up to 2000 bytes, but on average is less than 512 bytes (i.e. one disk block). What would be faster: reading 1 disk block (VARCHAR), or reading 4 disk blocks (CHAR)?

--
Celsius is based on water temperature. Fahrenheit is based on alcohol temperature. Ergo, Fahrenheit is better than Celsius. QED.
Re: [PHP] Because you guys/gals/girls/women/insert pc term here are a smart lot
There are other factors. If a table is completely fixed in size it makes for a faster lookup time, since the offset is easier to compute. This is true, at least, for MyISAM tables. All books on tuning that I have read have said that CHAR makes for more efficient lookup and comparison than VARCHAR.

Also, I was told by the instructor at a MySQL class that all VARCHAR columns are converted to CHAR when stored in memory. Can anyone else confirm this?

On Wed, Jan 7, 2009 at 7:26 AM, Richard Heyes rich...@php.net wrote:
> I still think a CHAR field would be faster than a VARCHAR because of the fixed row length (assuming everything else is fixed). Perhaps someone from the MySQL list could clarify...?

--
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com
Re: [PHP] Because you guys/gals/girls/women/insert pc term here are a smart lot
On 1/7/09, Jim Lyons jlyons4...@gmail.com wrote:
> Also, I was told by the instructor at a MySQL class that all VARCHAR columns are converted to CHAR when stored in memory. Can anyone else confirm this?

That's my recollection, also, derived from a MySQL class. IIRC, the char length is equal to the longest varchar record in the column.

David
Re: Limit within groups
On Tue, Jan 6, 2009 at 3:13 PM, Jerry Schwartz jschwa...@the-infoshop.com wrote:
> Each account has multiple customers, and each customer has multiple sales. I want to get the top 20 customers for each account.

http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/

Keep reading: it talks about top N per group ("Select the top N rows from each group") later on in the post.

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
Re: [PHP] Because you guys/gals/girls/women/insert pc term here are a smart lot
On Wed, Jan 7, 2009 at 9:17 AM, David Giragosian dgiragos...@gmail.com wrote:
> That's my recollection, also, derived from a MySQL class. IIRC, the char length is equal to the longest varchar record in the column.

Actually it's a fixed-length buffer big enough to hold the worst-case possible value, not the worst-case existing value. In bytes, no less: if it's a utf8 varchar(100), that's 300 bytes, even if the biggest value in the table is one character.

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
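The fixed-vs-dynamic row difference being discussed can be seen directly on a server. A minimal sketch (table names here are made up for illustration, assuming a MyISAM/utf8 setup):

```sql
-- Hypothetical comparison: two tables differing only in column type.
-- CHAR rows are fixed-length on disk; VARCHAR rows are length-prefixed.
CREATE TABLE t_char    (name CHAR(100))    ENGINE=MyISAM DEFAULT CHARSET=utf8;
CREATE TABLE t_varchar (name VARCHAR(100)) ENGINE=MyISAM DEFAULT CHARSET=utf8;

INSERT INTO t_char    VALUES ('x');
INSERT INTO t_varchar VALUES ('x');

-- Row_format should show Fixed for t_char and Dynamic for t_varchar, and
-- Avg_row_length shows the CHAR table padding every row to full width.
SHOW TABLE STATUS LIKE 't\_%';
```

This also illustrates Baron's in-memory point: implicit in-memory temporary tables only support fixed-length rows, which is why a utf8 VARCHAR(100) occupies the full 300 bytes there regardless of the stored data.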
Re: Setup a replication slave without stopping master
> Hi Jed, if you are using LVM you might consider snapshotting; however, doing a live snapshot without stopping the mysql server would only work if you were copying only MyISAM tables. mysqlhotcopy would probably be better, but either way you need to flush your tables, which will briefly lock them, so they can get onto disk. In contrast, InnoDB actually needs to shut down to cleanly close its table structures before you can physically copy the filesystem.

Actually, not true -- an LVM snapshot (or other snapshot) is a great way to take a backup of InnoDB. You just need a truly atomic snapshot, and then you can let InnoDB run its recovery routine on the snapshot to get back to a consistent state.

> Why do I leave it firewalled? Because once you start writing to an LVM volume that's been snapshotted, you start copying disk extents like mad, creating a high load condition that can force queries to reach connect_timeout. I have my connect_timeout set pretty low in my environment.

That will depend a lot on the workload. Here are some good links for further reading:

http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/
http://www.mysqlperformanceblog.com/2008/06/09/estimating-undo-space-needed-for-lvm-snapshot/

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
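For readers following along, the MySQL side of the snapshot procedure described above can be sketched roughly like this (the lvcreate command and volume names are hypothetical; it runs in a second shell while the lock is held):

```sql
-- Sketch of the server-side steps for an atomic LVM-snapshot backup.
FLUSH TABLES WITH READ LOCK;   -- quiesce writes so the snapshot is consistent
SHOW MASTER STATUS;            -- note binlog file/position for seeding a slave

-- In another shell, while the lock is held (hypothetical volume names):
--   lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql

UNLOCK TABLES;                 -- the lock only needs to span the snapshot itself
-- Then mount the snapshot, copy the datadir off-box, and let InnoDB run its
-- normal crash recovery on the copy to reach a consistent state.
```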
Re: Setup a replication slave without stopping master
Claudio,

> ehm... the problem is exactly that. On a production server you cannot stop or lock the server, so I need the replication slave mainly for backups (actually MySQL replication is simply great for this).

Just don't rely on the slave to BE the backup. You can use it to make it easier to take backups, but don't fall into the slave==backup trap. An accidental DROP TABLE can show you why slave != backup.

> NOTE: I have tried the InnoDB Hot Backup tool today but it was locking the production server!

Right, it does that :) So it sounds like you have a mixture of InnoDB and MyISAM tables, which is what I was trying to ask you in my previous message. In this case you have to do some dirty tricks. It depends on your situation. For example, you can get the InnoDB data out with --single-transaction; you can get the MyISAM data with flush-and-rsync, then repair and replace missing rows, etc. There is no good solution. But with a little work, and some knowledge of your workload, you can get most of the data out without too much trouble, leave the troublesome tables for last, roll up your sleeves for those, and then fix the differences with mk-table-checksum and/or other approaches.

Have fun.

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
Group by question
A question on grouping I've never been able to solve...

create table j (proj char(3), id int, score double, cpid char(32), team char(10));
insert into j values('aaa',1,100,'a','team1');
insert into j values('bbb',2,200,'a','team1');
insert into j values('ccc',3,300,'a','team2');
insert into j values('aaa',4,100,'b','team2');
insert into j values('bbb',5,300,'b','team1');
insert into j values('ccc',6,400,'b','team1');
insert into j values('aaa',7,101,'c','team1');
insert into j values('bbb',8,302,'c','team2');
insert into j values('ccc',9,503,'c','team2');

mysql> select * from j;
+------+------+-------+------+-------+
| proj | id   | score | cpid | team  |
+------+------+-------+------+-------+
| aaa  |    1 |   100 | a    | team1 |
| bbb  |    2 |   200 | a    | team1 |
| ccc  |    3 |   300 | a    | team2 |
| aaa  |    4 |   100 | b    | team2 |
| bbb  |    5 |   300 | b    | team1 |
| ccc  |    6 |   400 | b    | team1 |
| aaa  |    7 |   101 | c    | team1 |
| bbb  |    8 |   302 | c    | team2 |
| ccc  |    9 |   503 | c    | team2 |
+------+------+-------+------+-------+
9 rows in set (0.00 sec)

mysql> select cpid,sum(score),team from j group by cpid;
+------+------------+-------+
| cpid | sum(score) | team  |
+------+------------+-------+
| a    |        600 | team1 |
| b    |        800 | team2 |
| c    |        906 | team1 |
+------+------------+-------+
3 rows in set (0.00 sec)

Using MAX or MIN on the team gives different but not necessarily closer results.

mysql> select cpid,sum(score),max(team) from j group by cpid;
+------+------------+-----------+
| cpid | sum(score) | max(team) |
+------+------------+-----------+
| a    |        600 | team2     |
| b    |        800 | team2     |
| c    |        906 | team2     |
+------+------------+-----------+
3 rows in set (0.00 sec)

mysql> select cpid,sum(score),min(team) from j group by cpid;
+------+------------+-----------+
| cpid | sum(score) | min(team) |
+------+------------+-----------+
| a    |        600 | team1     |
| b    |        800 | team1     |
| c    |        906 | team1     |
+------+------------+-----------+
3 rows in set (0.00 sec)

For cpid = 'b' there are 2 rows with team1 and only 1 with team2, yet the original query gives team2, as it just uses the first row it encounters under MySQL's slightly illegal (but useful!) extension of allowing columns in the select list that are not in the group by.
The question is: is there any way to modify this query so that it would return the team having the most entries? Theoretically, what I would like:

+------+------------+-------+
| cpid | sum(score) | team  |
+------+------------+-------+
| a    |        600 | team1 |
| b    |        800 | team1 |
| c    |        906 | team2 |
+------+------------+-------+

If not, is there an easy way to add another column, say mostteam char(10), and run an update statement on the whole table which would put the correct value in?

Regards

Phil

--
Distributed Computing stats
http://stats.free-dc.org
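One way to get the per-group "most entries" value Phil asks for, sketched against his own table j: a correlated subquery in the select list picks, for each cpid, the team with the most rows.

```sql
-- For each cpid, count rows per team and keep the team with the most
-- entries (ties would be broken arbitrarily by the ORDER BY).
SELECT j.cpid,
       SUM(j.score) AS total,
       (SELECT t.team
          FROM j AS t
         WHERE t.cpid = j.cpid
         GROUP BY t.team
         ORDER BY COUNT(*) DESC
         LIMIT 1) AS mostteam
  FROM j
 GROUP BY j.cpid;
-- On the sample data: (a, 600, team1), (b, 800, team1), (c, 906, team2)
```

The subquery re-scans j once per group, so on a large table an index on (cpid, team) would matter; for the mostteam-column variant, the same subquery can feed an UPDATE.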
Re: Group by question
Phil:
> is there any way to modify this query so that it would return the team having the most entries?

See "Within-group aggregates" at http://www.artfulsoftware.com/queries.php

PB
Locking database when 'creating sort index'
When we run a large query, other queries start to back up once the large one gets to the 'creating sort index' phase. This lock seems to affect the whole server, all databases... does anyone know what may be causing this?

Thanks in advance

--
David Scott
Re: Locking database when 'creating sort index'
On Wed, Jan 7, 2009 at 6:20 PM, David Scott critt...@desktopcreatures.com wrote:
> When we run a large query other queries start to back up when the large one gets to the 'creating sort index' phase, this lock seems to affect the whole server, all databases... does anyone know what may be causing this?

More specifics would be good, but if the other queries don't access that table, perhaps I/O? Check your (myisam_)sort_buffer_size and tmp_table_size settings. Also, creating indexes on the field that you're sorting on might be useful, depending on the query.

--
Celsius is based on water temperature. Fahrenheit is based on alcohol temperature. Ergo, Fahrenheit is better than Celsius. QED.
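The "index on the sort field" suggestion can be checked with EXPLAIN. A hypothetical illustration (table and column names assumed, not from the original poster):

```sql
-- If EXPLAIN shows "Using temporary; Using filesort" in the Extra column,
-- the server is building a sort index rather than reading rows in order.
EXPLAIN SELECT * FROM orders WHERE status = 'open' ORDER BY created_at;

-- An index covering both the filter and the sort can let MySQL skip the
-- filesort pass entirely for this shape of query:
ALTER TABLE orders ADD INDEX idx_status_created (status, created_at);
```

Whether the optimizer actually uses the index for the sort depends on the joins and GROUP BY in the real query, so this is only a starting point.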
Re: Locking database when 'creating sort index'
At 11:20 AM 1/7/2009, you wrote:
> When we run a large query other queries start to back up when the large one gets to the 'creating sort index' phase, this lock seems to affect the whole server, all databases... does anyone know what may be causing this?

David,

Can you provide us with more info?

1) Is this an InnoDB table or MyISAM?
2) What version of MySQL are you using?
3) Are you using Create Index or Alter Table? Can you give us the syntax you are using?
4) How long does it take? Can you give us the table structure and # of indexes?
5) Are the queries that are backed up referencing the table you are building the index on?
6) Can you provide us with a Show Processlist?

This should help the members of this list give you a better, more informed answer. Offhand I suspect your key_buffer_size may be too low and MySQL is attempting to build the index on disk rather than in memory. If the index can be built in memory it will be 10x faster than building the index on disk. That is why adding as much RAM as possible to your server will help. This is set in your my.cnf file:

# Size of the Key Buffer, used to cache index blocks for MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache rows. Even if you're not using
# MyISAM tables, you should still set it to 8-64M as it will also be
# used for internal temporary disk tables.
key_buffer_size=500M

If you increase your key_buffer_size from the default value to 30% of your memory, you should get indexes built faster.

Mike
Problem with mysql_secure_installation script in mysql-5.0.67
Description:

I'm installing a new instance of MySQL using 5.0.67 (source) and get the following from mysql_secure_installation:

Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
./mysql_secure_installation: !: execute permission denied

By default, a MySQL installation has an anonymous user, allowing anyone to log into MySQL without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.

Remove anonymous users? [Y/n] Y
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
... Failed!

How-To-Repeat: Just install the server, run mysql_install_db, start the server and run mysql_secure_installation.
Fix:
Submitter-Id: ?
Originator: Steve Holmes
Organization: Purdue University Computing Center
MySQL support: none, at the moment
Synopsis: Problem with mysql_secure_installation script in mysql-5.0.67
Severity: serious
Priority: medium
Category: mysql
Class: sw-bug
Release: mysql-5.0.67 (Source distribution)

Server: ./mysqladmin Ver 8.41 Distrib 5.0.67, for pc-solaris2.10 on i386
Copyright (C) 2000-2006 MySQL AB
This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to modify and redistribute it under the GPL license

Server version: 5.0.67-log
Protocol version: 10
Connection: Localhost via UNIX socket
UNIX socket: /tmp/mysql.sock
Uptime: 3 min 8 sec
Threads: 1  Questions: 6  Slow queries: 0  Opens: 12  Flush tables: 1  Open tables: 6  Queries per second avg: 0.032

C compiler: gcc (GCC) 4.2.1
C++ compiler: gcc (GCC) 4.2.1

Environment:
System: SunOS sun1.ics.purdue.edu 5.10 Generic_138889-02 i86pc i386 i86pc
Architecture: i86pc
Some paths: /usr/local/bin/perl /usr/local/bin/make /usr/local/bin/gcc

GCC: Using built-in specs.
Target: i386-pc-solaris2.10
Configured with: /export/src/bin/gcc/gcc/configure --prefix=/opt/gcc --with-gnu-as --with-as=/usr/sfw/bin/gas --without-gnu-ld --with-ld=/usr/ccs/bin/ld --with-mpfr=/usr/local --enable-languages=c,c++,fortran,objc,obj-c++
Thread model: posix
gcc version 4.2.1

Compilation info (call): CC='gcc' CFLAGS='-O3 -fno-omit-frame-pointer' CXX='gcc' CXXFLAGS='-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti' LDFLAGS='-L/usr/local/lib' ASFLAGS=''
Compilation info (used): CC='gcc' CFLAGS=' -DDBUG_OFF -O3 -fno-omit-frame-pointer -DHAVE_RWLOCK_T' CXX='gcc' CXXFLAGS=' -DDBUG_OFF -O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti -fno-implicit-templates -fno-exceptions -fno-rtti -DHAVE_RWLOCK_T' LDFLAGS='-L/usr/local/lib ' ASFLAGS=''

LIBC:
lrwxrwxrwx 1 root root 9 Dec 31 17:39 /lib/libc.so -> libc.so.1
-rwxr-xr-x 1 root bin 1411368 Nov 18 11:11 /lib/libc.so.1
lrwxrwxrwx 1 root root 19 Dec 31 17:38 /usr/lib/libc.so -> ../../lib/libc.so.1
lrwxrwxrwx 1 root root 19 Dec 31 17:38 /usr/lib/libc.so.1 -> ../../lib/libc.so.1

Configure command: ./configure '--prefix=/opt/mysql-5.0.67' '--localstatedir=/var/mysql' '--with-extra-charsets=complex' '--enable-thread-safe-client' '--enable-local-infile' '--enable-assembler' '--with-libwrap' '--with-embedded-server' '--enable-shared' '--with-embedded-privilege-control' '--with-archive-storage-engine' '--with-federated-storage-engine' '--with-openssl' 'CC=gcc' 'CFLAGS=-O3 -fno-omit-frame-pointer' 'CPPFLAGS=-I/usr/local/include' 'CXXFLAGS=-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti' 'CXX=gcc' 'LDFLAGS=-L/usr/local/lib'
RE: Limit within groups
> -----Original Message-----
> From: baron.schwa...@gmail.com [mailto:baron.schwa...@gmail.com] On Behalf Of Baron Schwartz
> Sent: Wednesday, January 07, 2009 9:54 AM
> To: Jerry Schwartz
> Cc: mysql@lists.mysql.com
> Subject: Re: Limit within groups
>
> http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/
>
> Keep reading, it talks about top N per group later on in the post.

[JS] Thanks for the pointer. Unless I'm misunderstanding them, none of the techniques look very promising. I need the top 20 customers for each of 15 or more accounts, but each customer has many records, so I need to rank them by sum() for each customer and then select the first 20 for each account. It looks as though I'd have to build a separate query for each account, and then UNION them all together. I'm trying to avoid that, especially since some of the account IDs are actually Japanese words. What I really need is some kind of "for each" functionality. My goal is tantalizingly just out of reach.
Re: Locking database when 'creating sort index'
1) InnoDB
2) 5.0.51 on Linux
3) No, a SELECT with a bunch of JOINs, a WHERE, group and order
4) 37 seconds
5) Yes
6) SHOW PROCESSLIST does not show anything, just the user; what are you looking for?

2009/1/7 mos mo...@fastmail.fm:
> Can you provide us with more info?
Re: Locking database when 'creating sort index'
Oh, and we increased key_buffer_size to 1200M (30% of RAM); no change.

2009/1/7 David Scott critt...@desktopcreatures.com:
> 1) InnoDB
> 2) 5.0.51 on Linux
> 3) No, a SELECT with a bunch of JOINs, a WHERE, group and order
> 4) 37 seconds
> 5) Yes
> 6) SHOW PROCESSLIST does not show anything, just the user; what are you looking for?
Re: Locking database when 'creating sort index'
At 01:07 PM 1/7/2009, David Scott wrote:
> 1) InnoDB
> 2) 5.0.51 on Linux
> 3) No, a SELECT with a bunch of JOINs, a WHERE, group and order
> 4) 37 seconds
> 5) Yes
> 6) SHOW PROCESSLIST does not show anything, just the user; what are you looking for?

David,

I was looking to see if the other queries were waiting on table locks. So from what you're telling me, the other queries are executing, just more slowly. Is that correct? Or are they queued up waiting for your large query to finish?

I'm not that familiar with InnoDB, but I did find a thread by Heikki a while back with a user having a similar problem:

http://markmail.org/message/kmplwau22ualufmg#query:related%3Akmplwau22ualufmg+page:1+mid:s6ufcutbxika2a44+state:results

So you need to determine whether the MySQL server is disk bound or CPU bound, and that will help in solving the problem.

Mike
Re: Limit within groups
On Wed, Jan 7, 2009 at 1:48 PM, Jerry Schwartz jschwa...@the-infoshop.com wrote:
> [JS] Thanks for the pointer. Unless I'm misunderstanding them, none of the techniques look very promising. I need the top 20 customers for each of 15 or more accounts, but each customer has many records so I need to rank them by sum() for each customer and then select the first 20 for each account.

This is one way to do what you need, depending on your data (doesn't always work):

select type, variety, price
from fruits
where (
    select count(*) from fruits as f
    where f.type = fruits.type and f.price < fruits.price
) <= 2;

Vary the 2 to get what you want.

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
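Jerry's case (rank by a sum, then take the top 20 per account) can be handled the same way by aggregating first in a derived table. A sketch with hypothetical table and column names, sales(account_id, customer_id, amount), since the real schema wasn't posted; note the correlated COUNT re-aggregates per row, so it's only practical for modest data sizes:

```sql
-- Rank customers by total sales within each account, keep the top 20.
SELECT c1.account_id, c1.customer_id, c1.total
  FROM (SELECT account_id, customer_id, SUM(amount) AS total
          FROM sales
         GROUP BY account_id, customer_id) AS c1
 WHERE (SELECT COUNT(*)
          FROM (SELECT account_id, customer_id, SUM(amount) AS total
                  FROM sales
                 GROUP BY account_id, customer_id) AS c2
         WHERE c2.account_id = c1.account_id
           AND c2.total > c1.total) < 20
 ORDER BY c1.account_id, c1.total DESC;
```

If this is too slow, materializing the per-customer totals into a regular scratch table first (not a TEMPORARY table, which MySQL cannot reference twice in one query) keeps the aggregation to a single pass.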
Re: Group by question
Hello,

I think the following query would help you.

For ascending:
select cpid, sum(score), team from j group by cpid order by sum(score);

For descending:
select cpid, sum(score), team from j group by cpid order by sum(score) desc;

Thanks!

2009/1/7 Phil freedc@gmail.com:
> The question is, is there any way to modify this query so that it would return the team having the most entries?