Do I need to use GROUP BY to do this?
Hi,

I have a table where I keep sales transactions, and I'm trying to write a query that will count the number of transactions per day. My test data looks like:

  --
  -- Table structure for table `sales_activity`
  --

  CREATE TABLE `sales_activity` (
    `sales_id` int(11) NOT NULL auto_increment,
    `sales_date` datetime NOT NULL default '0000-00-00 00:00:00',
    `sales_type` tinyint(4) NOT NULL default '0',
    PRIMARY KEY (`sales_id`)
  ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11;

  --
  -- Dumping data for table `sales_activity`
  --

  INSERT INTO `sales_activity` VALUES (1, '2008-06-15 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (2, '2008-06-15 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (3, '2008-06-15 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (4, '2008-06-15 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (5, '2008-06-15 13:00:00', 2);
  INSERT INTO `sales_activity` VALUES (6, '2008-06-15 13:00:00', 2);
  INSERT INTO `sales_activity` VALUES (7, '2008-06-16 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (8, '2008-06-16 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (9, '2008-06-17 13:00:00', 1);
  INSERT INTO `sales_activity` VALUES (10, '2008-06-17 13:00:00', 1);

I would like to get a count of the number of transactions where sales_type = 1 for each date, i.e. the result set should look like:

  2008-06-15    4
  2008-06-16    2
  2008-06-17    2

What type of query do I need to get that information?

Thanks!
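Yes, GROUP BY on the date part of the datetime is the usual answer here. A minimal sketch of the idea, using SQLite through Python so it can be run as-is (DATE() behaves the same way in MySQL 4.1+ for this purpose):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales_activity (
    sales_id   INTEGER PRIMARY KEY,
    sales_date TEXT NOT NULL,
    sales_type INTEGER NOT NULL DEFAULT 0)""")

# Same test data as in the post above.
rows = [
    (1, '2008-06-15 13:00:00', 1), (2, '2008-06-15 13:00:00', 1),
    (3, '2008-06-15 13:00:00', 1), (4, '2008-06-15 13:00:00', 1),
    (5, '2008-06-15 13:00:00', 2), (6, '2008-06-15 13:00:00', 2),
    (7, '2008-06-16 13:00:00', 1), (8, '2008-06-16 13:00:00', 1),
    (9, '2008-06-17 13:00:00', 1), (10, '2008-06-17 13:00:00', 1),
]
conn.executemany("INSERT INTO sales_activity VALUES (?, ?, ?)", rows)

# Group rows of the wanted sales_type by calendar day and count each group.
result = conn.execute("""
    SELECT DATE(sales_date) AS day, COUNT(*) AS transactions
    FROM sales_activity
    WHERE sales_type = 1
    GROUP BY DATE(sales_date)
    ORDER BY day""").fetchall()

for day, n in result:
    print(day, n)
```

The same SELECT, run against the MySQL table, should produce the three-row result set described in the post.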
Re: How to select data if not in both tables?
Thanks, this worked exactly like I had hoped.

Grant

Peter Brawley [EMAIL PROTECTED] wrote:

  Grant,

  > If I want to select all the products that are in the product_table, but
  > not in the sale_table, how do I make the query? The product_table has
  > all the products, but the sale_table is a subset of the product_table.

  SELECT *
  FROM product_table p
  LEFT JOIN sale_table s USING (prod_id)
  WHERE s.prod_id IS NULL;

  PB
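For the archive, the anti-join pattern in Peter's answer can be sketched end-to-end. This uses SQLite through Python with made-up product data, but the LEFT JOIN ... IS NULL idiom is identical in MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product_table (prod_id INTEGER PRIMARY KEY, item_name TEXT, price REAL);
    CREATE TABLE sale_table   (prod_id INTEGER PRIMARY KEY, sale_price REAL);

    INSERT INTO product_table VALUES (1, 'widget', 9.99), (2, 'gadget', 19.99), (3, 'gizmo', 4.99);
    INSERT INTO sale_table VALUES (2, 14.99);  -- only the gadget is on sale
""")

# Products with no matching row in sale_table: the LEFT JOIN fills the sale
# side with NULLs for unmatched products, and the WHERE keeps only those rows.
not_on_sale = conn.execute("""
    SELECT p.item_name
    FROM product_table p
    LEFT JOIN sale_table s USING (prod_id)
    WHERE s.prod_id IS NULL
    ORDER BY p.item_name""").fetchall()
```

The `!=` cross-join attempt from the original question fails because it pairs every product with every non-matching sale row, rather than testing for absence; the LEFT JOIN / IS NULL form tests absence directly.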
How to select data if not in both tables?
Hi,

I have 2 tables like:

  product_table: prod_id, item_name, price, data, data, etc.
  sale_table: prod_id, sale_price

If I want to select all the products that are in the product_table, but not in the sale_table, how do I make the query? The product_table has all the products, but the sale_table is a subset of the product_table. I tried:

  SELECT item_name
  FROM product_table, sale_table
  WHERE product_table.prod_id != sale_table.prod_id

but I'm not getting the results I expect from that. Sorry this code isn't exact, I'm not at my development PC at the moment.

Thanks,
Grant
Re: Column in field list is ambiguous
Thanks Michael, that was the solution. This was the first time I've seen that error and now it makes plenty of sense.

Thanks,
Grant

Michael Stassen [EMAIL PROTECTED] wrote:

  Grant Giddens wrote:
  > Hi, I'm using MySQL 4.1.14 and getting this error:
  >
  >   Column 'asin' in field list is ambiguous
  >
  > This is the query that is giving me the error:
  >
  >   SELECT pn_pricecompare_searchdata.prod_id,
  >     MATCH (asin,upc,...) AGAINST ('warcraft' IN BOOLEAN MODE) AS score
  >   FROM pn_pricecompare_searchdata
  >   LEFT JOIN pn_pricecompare_product
  >     ON (pn_pricecompare_product.prod_id = pn_pricecompare_searchdata.prod_id)
  >   WHERE MATCH (asin,upc,...) AGAINST ('warcraft' IN BOOLEAN MODE)
  >   ORDER BY score DESC LIMIT 0,10
  >
  > The query works fine if I take out the LEFT JOIN. ...

  Is there a column named asin in the pn_pricecompare_product table? If so,
  try qualifying asin with the correct table name in your query (use the
  alias).

  Michael
Column in field list is ambiguous
Hi,

I'm using MySQL 4.1.14 and getting this error:

  Column 'asin' in field list is ambiguous

This is the query that is giving me the error:

  SELECT pn_pricecompare_searchdata.prod_id,
    MATCH (asin,upc,isbn,ean,title,brand,manufacturer,mpn,model,artist,author,platform,format,label,studio,publisher)
    AGAINST ('warcraft' IN BOOLEAN MODE) AS score
  FROM pn_pricecompare_searchdata
  LEFT JOIN pn_pricecompare_product
    ON (pn_pricecompare_product.prod_id = pn_pricecompare_searchdata.prod_id)
  WHERE MATCH (asin,upc,isbn,ean,title,brand,manufacturer,mpn,model,artist,author,platform,format,label,studio,publisher)
    AGAINST ('warcraft' IN BOOLEAN MODE)
  ORDER BY score DESC LIMIT 0,10

The query works fine if I take out the LEFT JOIN:

  SELECT pn_pricecompare_searchdata.prod_id,
    MATCH (asin,upc,isbn,ean,title,brand,manufacturer,mpn,model,artist,author,platform,format,label,studio,publisher)
    AGAINST ('warcraft' IN BOOLEAN MODE) AS score
  FROM pn_pricecompare_searchdata
  WHERE MATCH (asin,upc,isbn,ean,title,brand,manufacturer,mpn,model,artist,author,platform,format,label,studio,publisher)
    AGAINST ('warcraft' IN BOOLEAN MODE)
  ORDER BY score DESC LIMIT 0,10

The searchdata table is MyISAM with fulltext enabled on the fields. The product table is an InnoDB table. I searched all over and can't find a solution. I did find one possible bug report that might be related:

http://bugs.mysql.com/bug.php?id=15607

Is there any workaround I can use? Any suggestions would be greatly appreciated.

Thanks,
Grant
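The error itself is easy to reproduce in any SQL engine once the same column name appears in both joined tables. A small SQLite sketch (table and column data invented) showing both the failure and the qualified-column fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Both tables carry an 'asin' column, as in the post.
    CREATE TABLE searchdata (prod_id INTEGER, asin TEXT);
    CREATE TABLE product    (prod_id INTEGER, asin TEXT);
    INSERT INTO searchdata VALUES (1, 'B000001');
    INSERT INTO product    VALUES (1, 'B000001');
""")

# Unqualified 'asin' exists in both joined tables, so the engine refuses it.
try:
    conn.execute("""SELECT asin FROM searchdata sd
                    LEFT JOIN product p ON p.prod_id = sd.prod_id""")
    ambiguous = False
except sqlite3.OperationalError:
    ambiguous = True   # SQLite reports: ambiguous column name: asin

# Qualifying the column with the table alias resolves the ambiguity.
row = conn.execute("""SELECT sd.asin FROM searchdata sd
                      LEFT JOIN product p ON p.prod_id = sd.prod_id""").fetchone()
```

In the query from the post, that means writing pn_pricecompare_searchdata.asin (or an alias) inside the MATCH column list instead of the bare asin.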
Re: innodb questions
Ware,

Thanks so much for your help. This is what I'm planning on doing; please let me know if you see any problems here. This is on my test server, so losing data isn't the end of the world. This is what I plan on doing, in order:

1. Back up the database via mysqldump.
2. Purge the master logs via the PURGE MASTER LOGS command. Maybe I should just delete them? It's a test server and I don't plan on going back to a previous snapshot.
3. Shut down the mysql server.
4. Edit my.cnf to remove log-bin.
5. Edit my.cnf to add innodb_file_per_table.
6. Move the tablespace (I guess this is the ib_logfile0, ib_logfile1, and ibdata1 files).
7. Start the mysql server. I think at this point the tablespace will be recreated. Am I right?
8. Re-import my data from the dump file from step 1.

This leads me to several questions though:

1. My web host server is running CentOS Linux. How do I know its maximum file size? Should I limit the ibdata1 type files such as:

  innodb_data_file_path = /ibdata/ibdata1:100M:autoextend:max:2000M

What if my table is larger than 2G?

2. If I change my tables (add/drop an index and such) do I need to do this mysqldump, recreate-the-tablespace, re-import process each time?

3. What if I'm running multiple databases with InnoDB tables on one mysql server? I guess if I use innodb_file_per_table it will help me keep the file sizes down and make it easier to drop and reload individual databases. Am I correct here?

Thanks,
Grant

Ware Adams [EMAIL PROTECTED] wrote:

  On Jan 31, 2006, at 9:54 PM, Grant Giddens wrote:
  > Since changing these tables, I've noticed some large files in my
  > /var/lib/mysql directory. This is on my test server and I'm running
  > gentoo linux. The files in this directory look like:
  >
  > /var/lib/mysql/gentoo1-bin.01 (1 Gig in size)
  > /var/lib/mysql/gentoo1-bin.01 (1 Gig in size)
  > ...
  > /var/lib/mysql/gentoo1-bin.60 (1 Gig in size)

  These all look like binary log files; they aren't exclusive to InnoDB. You
  must have enabled binary logging in your my.cnf file as well. This is
  covered pretty extensively in the manual. Basically they are used for
  replication and recovery. In the latter case you can take a point-in-time
  snapshot from a known binary log position and then, if need be, apply the
  logs (which is basically re-running the queries in them) to get back to
  where your database was at any point in time.

  If you don't need any of these functions you can either get rid of the
  files or turn off binary logging. However, you probably don't want to just
  delete them from the file system. Have a look at the 'show master logs'
  and 'purge master logs' commands. This will delete the files and keep
  mysqld's internal index of the binary logs accurate.

  > /var/lib/mysql/ibdata1 (10.0 Gig in size)

  This is your InnoDB tablespace; you need it. You also need your
  ib_logfile0 etc. files.

  > 3. Can I limit the size of the ibdata1 file?

  Only by limiting data in your tables. Also, in the shared tablespace
  (which you are using) you can't shrink this file. You can switch to
  innodb_file_per_table (see the InnoDB manual) so that when you drop a
  table you get its space back (but not on deletion of individual records).
  However, even doing this you cannot delete your ibdata1 file or any shared
  tablespace files. You can recreate your entire tablespace (see below), but
  even then you'll need a (small) shared ibdata file.

  > Is it too late to resize it?

  Yes, but you could use mysqldump to dump all data to text files, delete
  (or move) the tablespace, redefine it and then re-import.

  > 4. What can I set to reduce the size of these files?
  Use innodb_file_per_table, turn off binary logging if you don't need it,
  and make sure index and field types are appropriate.

  > My innodb variables are:

  These came through poorly spaced, but I think it would help a lot to read
  the InnoDB sections of the manual (it's pretty manageable in size). InnoDB
  is really fantastic for certain applications, including heavy write load
  to large tables with concurrent reads. We've used it for several years on
  several billion records with 170 qps, 50% of which is writes. There's no
  way to do this in MySQL other than InnoDB, realistically.

  That said, it has its own learning curve. It's really an entirely new
  database engine, so there's lots to learn even if you've used mysql/myisam
  for years. In particular, the tablespace layout and the dependency on
  files other than ones linked to a particular table is a little daunting.

  --Ware
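For reference, the two my.cnf changes discussed in this thread might look like the sketch below. The paths and sizes are placeholders, and (as Ware notes) the tablespace still has to be rebuilt via dump/restore before innodb_file_per_table affects existing tables:

```ini
[mysqld]
# Per-table .ibd files instead of growing the shared tablespace
innodb_file_per_table

# Keep a small shared ibdata file with a hard cap (placeholder sizes)
innodb_data_file_path = ibdata1:100M:autoextend:max:2000M

# Binary logging is disabled by omitting (or commenting out) the log-bin line
# log-bin
```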
innodb questions
Hi,

I recently converted some of my tables to InnoDB from MyISAM. I don't need transactions or rollback capability; I switched because I needed row-level locking. These are large tables with many rows and lots of INSERTs and UPDATEs.

Since changing these tables, I've noticed some large files in my /var/lib/mysql directory. This is on my test server and I'm running gentoo linux. The files in this directory look like:

  /var/lib/mysql/gentoo1-bin.01 (1 Gig in size)
  /var/lib/mysql/gentoo1-bin.01 (1 Gig in size)
  ...
  /var/lib/mysql/gentoo1-bin.60 (1 Gig in size)
  /var/lib/mysql/ibdata1 (10.0 Gig in size)

Since I'm new to InnoDB tables, I assume that's where these files came from. Based on these file sizes, I need to get rid of them to save some disk space. My questions are:

1. Do I need them?
2. Can I delete them?
3. Can I limit the size of the ibdata1 file? Is it too late to resize it?
4. What can I set to reduce the size of these files?

My innodb variables are (session and global values are identical):

  innodb_additional_mem_pool_size   33554432
  innodb_autoextend_increment       8
  innodb_buffer_pool_awe_mem_mb     0
  innodb_buffer_pool_size           536870912
  innodb_data_file_path             ibdata1:10M:autoextend
  innodb_data_home_dir              (empty)
  innodb_fast_shutdown              ON
  innodb_file_io_threads            4
  innodb_file_per_table             OFF
  innodb_flush_log_at_trx_commit    1
  innodb_flush_method               (empty)
  innodb_force_recovery             0
  innodb_lock_wait_timeout          50
  innodb_locks_unsafe_for_binlog    OFF
  innodb_log_arch_dir               (empty)
  innodb_log_archive                OFF
  innodb_log_buffer_size            8388608
  innodb_log_file_size              5242880
  innodb_log_files_in_group         2
  innodb_log_group_home_dir         ./
  innodb_max_dirty_pages_pct        90
  innodb_max_purge_lag              0
  innodb_mirrored_log_groups        1
  innodb_open_files                 300
  innodb_table_locks                ON
  innodb_thread_concurrency         8

Please let me know if I need to post any more variables.
Thanks,
Grant
Re: Most efficient way to design this table
How did you get these byte calculations? I thought an INT took up 4 bytes and a char(10) would take 10 bytes.

http://dev.mysql.com/doc/refman/4.1/en/storage-requirements.html

C.R.Vegelin [EMAIL PROTECTED] wrote:

  Hi Grant,

  I suggest changing both key fields to integers. Numeric keys take less
  space and are faster. Your product_feature table then may have:

  - product_id INT unsigned
  - feature INT unsigned

  having a primary key of 8 bytes instead of 141 bytes. Thus a smaller
  index, fewer disk reads and more speed.

  HTH, Cor Vegelin

  ----- Original Message -----
  From: Grant Giddens
  Sent: Tuesday, January 17, 2006 2:28 AM
  Subject: Most efficient way to design this table

  > Hi, I currently have a table with 2 columns, product_id and feature.
  > Each product_id can have multiple features. ... Is there a better way
  > to index this data?
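Cor's suggestion of replacing the wide char/varchar pair with integer surrogate keys via a lookup table can be sketched like this. SQLite through Python here, and all names and sample features are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Lookup table: each distinct feature string gets a small integer id.
    CREATE TABLE feature (
        feature_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL UNIQUE);

    -- Link table: two integers per row instead of char(13) + varchar(128),
    -- so the composite primary key index is far narrower.
    CREATE TABLE product_feature (
        product_id INTEGER NOT NULL,
        feature_id INTEGER NOT NULL,
        PRIMARY KEY (product_id, feature_id));

    INSERT INTO feature (name) VALUES ('waterproof'), ('wireless');
    INSERT INTO product_feature VALUES (1, 1), (1, 2);
""")

# The composite primary key still rejects duplicate product/feature pairs,
# preserving the original design's uniqueness guarantee.
try:
    conn.execute("INSERT INTO product_feature VALUES (1, 1)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The trade-off is one extra join (through the feature lookup table) when you need the feature text back, in exchange for a much smaller key_buffer footprint.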
Re: key_buffer_size vs innodb_buffer_pool_size
I don't know if I still fully understand. Say I have a database with a good mix of MyISAM and InnoDB type tables. If my server has 1 gig of RAM and I set key_buffer_size=256M and innodb_buffer_pool_size=256M, then wouldn't mysql be constrained to 512M of the RAM and leave the remaining 512M to the other server services?

For a large database, are there any guides that I can use to determine how much RAM would be optimal for my particular database?

Thanks,
Grant

Eric Bergen [EMAIL PROTECTED] wrote:

  The difference in recommended size comes from the different techniques
  each storage engine uses for caching data. MyISAM (key_buffer_size) only
  stores indexes, where innodb_buffer_pool_size stores both indexes and
  data. MyISAM relies on the operating system to cache data in RAM, which
  is why you don't want to use all available memory for the key buffer.

  On 1/14/06, Grant Giddens wrote:
  > Hi, After reading through the example my.cnf files (large, huge, etc),
  > I started to wonder what the difference was between the myisam
  > key_buffer_size and the innodb innodb_buffer_pool_size. ...
--
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net
Most efficient way to design this table
Hi,

I currently have a table with 2 columns, product_id and feature. Each product_id can have multiple features. My two columns are:

  product_id is type char(13)
  feature is type varchar(128)

In order to make sure I don't have the same feature listed twice for a particular product, I have set the PRIMARY KEY on (product_id, feature). I have lots of products and lots of features. Is this design bloating my available key_buffer memory with too much data? Is there a better way to index this data?

Thanks,
Grant
key_buffer_size vs innodb_buffer_pool_size
Hi,

After reading through the example my.cnf files (large, huge, etc.), I started to wonder what the difference was between the MyISAM key_buffer_size and the InnoDB innodb_buffer_pool_size. I realize that they are two different table types, but some of the docs say to set key_buffer_size to 25%-50% of the overall system memory, while the comments for innodb_buffer_pool_size say it can be set to 50%-80% of the overall system memory. Maybe I don't fully understand the difference between the two, because I don't understand why they have different memory recommendations.

Are there any FAQs on the my.cnf file? How would you set these two variables if you had an even mix of MyISAM and InnoDB tables? Where can I learn more about tweaking the my.cnf file? The mysql online documentation is good, but I need a more basic description of these two variables and all the other my.cnf settings.

Thanks,
Grant
Can I do a boolean search and get the row count in 1 SQL query?
Hi,

I have a web app where I am doing a boolean search. I only want to return 10 results per page, but I'd also like to know how many total rows match the search query. I'm currently performing this with 2 query statements:

1. To get the actual rows via the search:

  SELECT $product_column[title],
    MATCH (title) AGAINST ('$q' IN BOOLEAN MODE) AS score
  FROM $product_table
  WHERE MATCH (title) AGAINST ('$q' IN BOOLEAN MODE)
  ORDER BY score DESC
  LIMIT $lower_limit,10

2. To get the total number of results:

  SELECT COUNT(*) AS num_results
  FROM $product_table
  WHERE MATCH (title) AGAINST ('$q' IN BOOLEAN MODE)

The queries might be a bit off as I don't have the exact code in front of me right now. Is there a way I can combine this into 1 query?

Thanks,
Grant
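One way to avoid running the full MATCH twice in MySQL 4.x is SQL_CALC_FOUND_ROWS plus FOUND_ROWS(). Strictly speaking it is still two statements, but the second is essentially free, since it just reads the count MySQL saved while executing the first. A sketch, keeping the placeholder variables from the post:

```sql
-- First statement: one page of results; SQL_CALC_FOUND_ROWS tells MySQL to
-- also compute how many rows there would have been without the LIMIT.
SELECT SQL_CALC_FOUND_ROWS $product_column[title],
       MATCH (title) AGAINST ('$q' IN BOOLEAN MODE) AS score
FROM $product_table
WHERE MATCH (title) AGAINST ('$q' IN BOOLEAN MODE)
ORDER BY score DESC
LIMIT $lower_limit, 10;

-- Second statement: cheap, just returns the saved count.
SELECT FOUND_ROWS() AS num_results;
```

Note that FOUND_ROWS() must be read on the same connection, before any other intervening SELECT.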
Re: Do I need to tweak my server variables for this SELECT statement?
Thanks Hank. I'll try to split up the query into 2 separate queries.

I think that the reason the original query is so slow is that I don't have enough RAM allocated to mysql. When the original query takes place, I see a "Copying to tmp table on disk" process. I believe it's writing all the data to the disk and then sorting it. I'd like to try tweaking the my.cnf file to allow mysql to use more RAM. I just need someone to help me edit the file because I'm not quite sure what I'm doing...

Thanks,
Grant

Hank [EMAIL PROTECTED] wrote:

  > Now that I know what's causing the slow queries, what can I do to fix it?

  The only thing I can suggest is breaking up the query into two parts: the
  first part to retrieve just the product codes and salesrank, and sort and
  limit that; then save the result in a temp table or use application code
  to retrieve and print the rest of the product info. Sorting 300,000+
  records in that huge result set is going to take some time (although it
  shouldn't take 10 minutes).

  -Hank
Re: Do I need to tweak my server variables for this SELECT statement?
Thanks James. I'll definitely give that a try. My test server has 1 gig of RAM. The my-huge.cnf example says that it's mainly for servers where mysql is the main process. On my production server, I have 1 gig of RAM, but it also runs apache, multiple websites, mysql, DNS, an ftp server, etc. It's a dedicated server that only hosts my sites, so I can tweak the configuration. Should I use my-large.cnf as a starting point, or should I be OK with my-huge.cnf?

Thanks,
Grant

James Harvard [EMAIL PROTECTED] wrote:

  The reason I suggested that you just use the alternative my-huge.cnf file
  is because that is a ready-prepared config file optimised for systems with
  lots of RAM for MySQL to use. You don't need to know which variable to
  change - it's already done for you. You may want/need to tweak stuff
  later, of course, but my-huge.cnf is a better starting point than my.cnf.

  James Harvard

  At 6:45 am -0800 23/12/05, Grant Giddens wrote:
  > I think that the reason the original query is so slow is that I don't
  > have enough RAM allocated to mysql. ...
Re: Do I need to tweak my server variables for this SELECT statement?
Hi James. Thanks for the tips. I tried your SQL call below of querying just one column. The query didn't speed up. I think I am running out of RAM and thus caching the temp table to disk. My server is currently using the default my.cnf file. I will try the large and huge example .cnf files to increase the table cache limits.

Like I mentioned before, I am tweaking the .cnf files by blind trial and error. I would appreciate anyone with experience looking over my current setup and proposed setup. I'm not sure how much RAM to allocate to each server variable.

Here is a link to my current server variables:
http://retailretreat.com/mysql/server_variables.php.htm

Here is a link to my current my.cnf file:
http://retailretreat.com/mysql/my.cnf.txt

Here is a link to my proposed my.cnf file. I'm not sure if there are any errors or mistakes in the file.
http://retailretreat.com/mysql/my-new.cnf.txt

Thanks,
Grant

James Harvard [EMAIL PROTECTED] wrote:

  I've recently been doing some big table query optimisation, and after
  getting the query tweaked to hit the right index the query time is very
  livable-with, despite the fact that MySQL seems to be examining more rows
  for my query than for yours. However, the 'rows' column of the explain
  output is just an estimate.

  I'm not an expert on this subject, but I do have a couple of ideas that
  should only take you a couple of minutes to test, until a big-table-guru
  steps in. :-)

  You could try starting MySQL using the 'mysql/support-files/my-huge.cnf'
  example config file to see if that makes any difference. This is for a
  large system with memory of 1G-2G where the system runs mainly MySQL. It
  may be you're running into a limit on the size of temporary table MySQL
  will build in RAM, so it's doing it on disc instead. I would try this
  first.
  Also - and this really is just a guess - I wonder whether part of the
  problem is that you're retrieving a lot of data (lots of columns) and then
  sorting the resulting huge temporary table, only to use the first 10 rows.
  Try selecting just one column to see if that helps. If it does, then you
  could maybe use a sub-query or application code to feed the list of 10
  prod_id values into a query that gets all the columns you need.

    SELECT pn_pricecompare_product.prod_id
    FROM pn_pricecompare_catprod, pn_pricecompare_product
    WHERE ((pn_pricecompare_catprod.category = '283155')
      AND (pn_pricecompare_catprod.asin = pn_pricecompare_product.asin))
    ORDER BY pn_pricecompare_product.salesrank ASC
    LIMIT 0,10

  It might be the case that you could create an index that MySQL could use
  to optimise the 'order by', retrieving the rows in sorted order, but I'm
  not up on that sort of optimisation and don't know for sure - you'd have
  to check out the relevant manual section.

  Another factor may be the table format - fixed versus dynamic row length.
  If you need to come back on this, maybe we could see a 'show create table'
  for the two tables? Also, if you enable the slow query log, it tells you
  the actual number of rows examined - might be useful if you continue to
  have trouble.

  HTH, good luck,
  James Harvard

  > For the first time, I'm working with a really large database. I have 1
  > SQL statement that brings my server to its knees. This setup is
  > currently on my home development PC, and not in production. The server
  > is running apache, samba, and mysql under gentoo linux. I'm the only
  > user, so there is virtually no load on the server. The server has 1 gig
  > of RAM.
  >
  > I've got 2 tables: one that holds a list of products, the other holds a
  > list of categories that each product is associated with. My SELECT
  > statement just grabs 10 products that are associated with a specific
  > category. The product table has 650,000 rows and the category table has
  > 8,150,000 rows.
  > My SELECT statement is:
  >
  >   SELECT pn_pricecompare_catprod.category, pn_pricecompare_catprod.asin,
  >     pn_pricecompare_product.title, pn_pricecompare_product.prod_id,
  >     pn_pricecompare_product.image_small, pn_pricecompare_product.brand,
  >     pn_pricecompare_product.manufacturer, pn_pricecompare_product.mpn,
  >     pn_pricecompare_product.model, pn_pricecompare_product.artist,
  >     pn_pricecompare_product.author, pn_pricecompare_product.binding,
  >     pn_pricecompare_product.label, pn_pricecompare_product.audiencerating,
  >     pn_pricecompare_product.studio, pn_pricecompare_product.releasedate,
  >     pn_pricecompare_product.numberofpages, pn_pricecompare_product.pubdate,
  >     pn_pricecompare_product.publisher, pn_pricecompare_product.searchindex,
  >     pn_pricecompare_product.lowest_price, pn_pricecompare_product.num_merchants
  >   FROM pn_pricecompare_catprod, pn_pricecompare_product
  >   WHERE ((pn_pricecompare_catprod.category = '283155')
  >     AND (pn_pricecompare_catprod.asin = pn_pricecompare_product.asin))
  >   ORDER BY pn_pricecompare_product.salesrank ASC
  >   LIMIT 0,10
  >
  > Sometimes this takes 10 minutes to run.
Re: Do I need to tweak my server variables for this SELECT statement?
Thanks Hank. I will try this. When you say the table descriptions, do you mean for me to post the CREATE TABLE syntax of how I created the tables?

Thanks,
Grant

Hank [EMAIL PROTECTED] wrote:

  I don't think the problem is going to be solved with the my.cnf file.
  Here's what I would try:

  1. Run and time the original query without the ORDER BY or LIMIT clauses.

  2. Run and time the following breakdown queries, to see if the indexes
     are at least working correctly:

    -- test catprod
    SELECT pn_pricecompare_catprod.category, pn_pricecompare_catprod.asin
    FROM pn_pricecompare_catprod
    WHERE pn_pricecompare_catprod.category = '283155'

    -- test product
    SELECT pn_pricecompare_product.title, pn_pricecompare_product.prod_id,
      pn_pricecompare_product.image_small, pn_pricecompare_product.brand,
      pn_pricecompare_product.manufacturer, pn_pricecompare_product.mpn,
      pn_pricecompare_product.model, pn_pricecompare_product.artist,
      pn_pricecompare_product.author, pn_pricecompare_product.binding,
      pn_pricecompare_product.label, pn_pricecompare_product.audiencerating,
      pn_pricecompare_product.studio, pn_pricecompare_product.releasedate,
      pn_pricecompare_product.numberofpages, pn_pricecompare_product.pubdate,
      pn_pricecompare_product.publisher, pn_pricecompare_product.searchindex,
      pn_pricecompare_product.lowest_price, pn_pricecompare_product.num_merchants
    FROM pn_pricecompare_product
    WHERE pn_pricecompare_product.asin IN ()
    ORDER BY pn_pricecompare_product.salesrank ASC
    LIMIT 0,10

    -- test just getting a count of the join result
    SELECT count(*)
    FROM pn_pricecompare_catprod, pn_pricecompare_product
    WHERE pn_pricecompare_catprod.category = '283155'
      AND pn_pricecompare_catprod.asin = pn_pricecompare_product.asin

  The results of those queries should shed some light on where the problem
  is being introduced. Also, table descriptions of both tables would be
  helpful in locating the problem.

  -Hank
Re: Do I need to tweak my server variables for this SELECT statement?
OK guys, here are my tables and test SELECT queries. Before every query I did a FLUSH TABLES command. Before starting these tests, I did an ANALYZE and OPTIMIZE on all the tables. Hopefully the formatting of this email isn't too bad.

product table:

show create table pn_pricecompare_product:

  CREATE TABLE `pn_pricecompare_product` (
    `prod_id` int(10) unsigned NOT NULL auto_increment,
    `asin` varchar(10) NOT NULL default '',
    `upc` varchar(14) default NULL,
    `isbn` varchar(10) default NULL,
    `ean` varchar(13) default NULL,
    `title` varchar(255) default NULL,
    `brand` varchar(64) default NULL,
    `manufacturer` varchar(64) default NULL,
    `mpn` varchar(32) default NULL,
    `model` varchar(32) default NULL,
    `artist` varchar(64) default NULL,
    `author` varchar(64) default NULL,
    `binding` varchar(64) default NULL,
    `label` varchar(64) default NULL,
    `audiencerating` varchar(64) default NULL,
    `studio` varchar(64) default NULL,
    `releasedate` date default NULL,
    `numberofpages` mediumint(8) unsigned NOT NULL default '0',
    `pubdate` date default NULL,
    `publisher` varchar(64) default NULL,
    `searchindex` varchar(32) default NULL,
    `added` date default NULL,
    `active` tinyint(1) NOT NULL default '1',
    `image_small` varchar(255) default NULL,
    `image_medium` varchar(255) default NULL,
    `image_large` varchar(255) default NULL,
    `item_height` varchar(64) default NULL,
    `item_length` varchar(64) default NULL,
    `item_weight` varchar(64) default NULL,
    `item_width` varchar(64) default NULL,
    `package_height` varchar(64) default NULL,
    `package_length` varchar(64) default NULL,
    `package_weight` varchar(64) default NULL,
    `package_width` varchar(64) default NULL,
    `list_price` decimal(12,2) unsigned default NULL,
    `lowest_price` decimal(12,2) unsigned default NULL,
    `num_merchants` smallint(6) default NULL,
    `salesrank` int(10) unsigned default NULL,
    PRIMARY KEY (`prod_id`),
    UNIQUE KEY `asin` (`asin`),
    KEY `upc` (`upc`),
    KEY `isbn` (`isbn`),
    KEY `ean` (`ean`),
    KEY `salesrank` (`salesrank`),
    KEY `brand` (`brand`),
    KEY `manufacturer` (`manufacturer`),
    KEY `mpn` (`mpn`),
    KEY `model` (`model`),
    KEY `author` (`author`),
    KEY `artist` (`artist`),
    KEY `lowest_price` (`lowest_price`),
    KEY `added` (`added`),
    FULLTEXT KEY `title` (`title`,`brand`,`manufacturer`,`mpn`,`model`,`artist`,`label`,`studio`,`author`,`publisher`,`upc`,`isbn`)
  ) TYPE=MyISAM

desc pn_pricecompare_product:

  Field           Type                    Null  Key  Default  Extra
  prod_id         int(10) unsigned              PRI  NULL     auto_increment
  asin            varchar(10)                   UNI
  upc             varchar(14)             YES   MUL  NULL
  isbn            varchar(10)             YES   MUL  NULL
  ean             varchar(13)             YES   MUL  NULL
  title           varchar(255)            YES   MUL  NULL
  brand           varchar(64)             YES   MUL  NULL
  manufacturer    varchar(64)             YES   MUL  NULL
  mpn             varchar(32)             YES   MUL  NULL
  model           varchar(32)             YES   MUL  NULL
  artist          varchar(64)             YES   MUL  NULL
  author          varchar(64)             YES   MUL  NULL
  binding         varchar(64)             YES        NULL
  label           varchar(64)             YES        NULL
  audiencerating  varchar(64)             YES        NULL
  studio          varchar(64)             YES        NULL
  releasedate     date                    YES        NULL
  numberofpages   mediumint(8) unsigned              0
  pubdate         date                    YES        NULL
  publisher       varchar(64)             YES        NULL
  searchindex     varchar(32)             YES        NULL
  added           date                    YES   MUL  NULL
  active          tinyint(1)                         1
  image_small     varchar(255)            YES        NULL
  image_medium    varchar(255)            YES        NULL
  image_large     varchar(255)            YES        NULL
  item_height     varchar(64)             YES        NULL
  item_length     varchar(64)             YES        NULL
  item_weight     varchar(64)             YES        NULL
  item_width      varchar(64)             YES        NULL
  package_height  varchar(64)             YES        NULL
  package_length  varchar(64)             YES        NULL
  package_weight  varchar(64)             YES        NULL
  package_width   varchar(64)             YES        NULL
  list_price      decimal(12,2) unsigned  YES        NULL
  lowest_price    decimal(12,2) unsigned  YES   MUL  NULL
  num_merchants   smallint(6)             YES        NULL
  salesrank       int(10) unsigned        YES   MUL  NULL

show create table pn_pricecompare_catprod:

  CREATE TABLE `pn_pricecompare_catprod` (
    `category` int(10)
Re: Do I need to tweak my server variables for this SELECT statement?
It'll be a few hours before I get home and can do this query with the variations you mentioned. I don't know if I need the separate index key on `salesrank` or not. I thought it would speed up my query by having it. In the previous email I included the results from doing an EXPLAIN. It shows:

table                    type    possible_keys  key      key_len  ref                           rows    Extra
pn_pricecompare_catprod  ref     PRIMARY,asin   PRIMARY  4        const                         369308  Using where; Using index; Using temporary; Using f...
pn_pricecompare_product  eq_ref  asin           asin     10       pn_pricecompare_catprod.asin  1

The formatting is funky from my cut and paste, but it looks like 369,308 rows for this particular query. I think the product table has about 650,000 rows, and the category table has about 8,150,000 rows. Thanks, Grant

Hank [EMAIL PROTECTED] wrote: Ok, so the next step would be to try the original query with just the LIMIT clause, and then just the ORDER BY (but not both). The results of the select count(*) query would be helpful to know just how many records mysql is trying to sort and limit. And do you really need a separate index key on `salesrank` on the product table? -Hank

__ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com
Re: Do I need to tweak my server variables for this SELECT statement?
OK, I tried the following queries:

1. Use the ORDER BY and not the LIMIT -- still slow
2. Use the LIMIT and not the ORDER BY -- fast

So it looks like it's building a whole temp table with all the items and then doing the ORDER BY. Just as a test, I removed the INDEX from the `salesrank` column. It didn't make a difference, as my results were just like the ones above. Now that I know what's causing the slow queries, what can I do to fix it? Thanks, Grant

Hank [EMAIL PROTECTED] wrote: Ok, so the next step would be to try the original query with just the LIMIT clause, and then just the ORDER BY (but not both). The results of the select count(*) query would be helpful to know just how many records mysql is trying to sort and limit. And do you really need a separate index key on `salesrank` on the product table? -Hank

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
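One approach that sometimes cures this pattern (a sketch, not tested against this schema, and it assumes you can afford to denormalize `salesrank` into the category table) is to give MySQL a single composite index that satisfies both the WHERE and the ORDER BY, so it can stop after the first 10 rows instead of sorting all ~369,000:

```sql
-- Hypothetical denormalization: copy salesrank into pn_pricecompare_catprod
-- so the category filter and the sort can be served by one index.
ALTER TABLE pn_pricecompare_catprod ADD COLUMN salesrank int(10) unsigned;

UPDATE pn_pricecompare_catprod cp, pn_pricecompare_product p
SET cp.salesrank = p.salesrank
WHERE cp.asin = p.asin;

ALTER TABLE pn_pricecompare_catprod
  ADD INDEX category_rank (category, salesrank);

-- With (category, salesrank), MySQL can read the ten lowest-salesrank rows
-- for the category straight from the index, then join only those ten rows
-- to the product table -- no temp table, no filesort.
SELECT p.*
FROM pn_pricecompare_catprod cp
JOIN pn_pricecompare_product p ON p.asin = cp.asin
WHERE cp.category = '283155'
ORDER BY cp.salesrank ASC
LIMIT 0, 10;
```

The cost is keeping the copied `salesrank` in sync whenever the product table's value changes.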
Do I need to tweak my server variables for this SELECT statement?
Hi, For the first time, I'm working with a really large database. I have one SQL statement that brings my server to its knees. This setup is currently on my home development PC, not in production. The server is running apache, samba, and mysql under gentoo linux. I'm the only user, so there is virtually no load on the server. The server has 1 gig of RAM. I've got two tables: one holds a list of products, the other holds a list of categories that each product is associated with. My SELECT statement just grabs 10 products that are associated with a specific category. The product table has 650,000 rows and the category table has 8,150,000 rows. My SELECT statement is:

SELECT pn_pricecompare_catprod.category, pn_pricecompare_catprod.asin,
       pn_pricecompare_product.title, pn_pricecompare_product.prod_id,
       pn_pricecompare_product.image_small, pn_pricecompare_product.brand,
       pn_pricecompare_product.manufacturer, pn_pricecompare_product.mpn,
       pn_pricecompare_product.model, pn_pricecompare_product.artist,
       pn_pricecompare_product.author, pn_pricecompare_product.binding,
       pn_pricecompare_product.label, pn_pricecompare_product.audiencerating,
       pn_pricecompare_product.studio, pn_pricecompare_product.releasedate,
       pn_pricecompare_product.numberofpages, pn_pricecompare_product.pubdate,
       pn_pricecompare_product.publisher, pn_pricecompare_product.searchindex,
       pn_pricecompare_product.lowest_price, pn_pricecompare_product.num_merchants
FROM pn_pricecompare_catprod, pn_pricecompare_product
WHERE ((pn_pricecompare_catprod.category = '283155')
  AND (pn_pricecompare_catprod.asin = pn_pricecompare_product.asin))
ORDER BY pn_pricecompare_product.salesrank ASC
LIMIT 0,10

Sometimes this takes 10 minutes to execute. When this occurs, I can hear the hard drive thrashing. If I do an EXPLAIN, I get:

table                    type    possible_keys  key      key_len  ref                           rows    Extra
pn_pricecompare_catprod  ref     PRIMARY,asin   PRIMARY  4        const                         355416  Using where; Using index; Using temporary; Using f...
pn_pricecompare_product  eq_ref  asin           asin     10       pn_pricecompare_catprod.asin  1

When the query executes and I check the processes, I see "Copying to tmp table on disk". After googling for this statement, I found several pages that indicate I might have to tweak the my.cnf file. I checked my my.cnf file, and it's just the default file. I found the example huge, large, medium, and small .cnf files and plan on using them to try to optimize my my.cnf file. Other than trial and error, I really don't know what I'm doing. Here is a link to my current server variables: http://retailretreat.com/mysql/server_variables.php.htm Here is a link to my current my.cnf file: http://retailretreat.com/mysql/my.cnf.txt Here is a link to my proposed my.cnf file. I'm not sure if there are any errors or mistakes in the file. http://retailretreat.com/mysql/my-new.cnf.txt Am I right that I need to tweak the my.cnf file? I think I optimized my tables with INDEXes correctly already. What should I look for first? Thanks, Grant __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com
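For reference, the variables most often implicated in "Copying to tmp table on disk" are the per-session sort/temp buffers and the MyISAM key buffer. A hedged starting point for a 1 GB box might look like the fragment below -- the numbers are illustrative guesses to benchmark against, not recommendations, and some spellings differ on very old servers (e.g. `key_buffer` instead of `key_buffer_size`):

```ini
# my.cnf sketch -- illustrative values only; tune and re-test
[mysqld]
key_buffer_size      = 256M   # MyISAM index cache (global)
sort_buffer_size     = 4M     # per-connection filesort buffer
read_rnd_buffer_size = 4M     # per-connection buffer for reading sorted rows
tmp_table_size       = 64M    # size of in-memory temp tables before spilling to disk
```

Raising `tmp_table_size` only helps if the temp table actually fits in memory; a query that materializes hundreds of thousands of wide rows will spill to disk regardless, which is why fixing the query/index is usually the bigger win.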
Can I weight full-text searches in boolean mode?
Hi, I have a table with consumer products. It has columns for product name and brand. If I do a full-text search in boolean mode for a term like "apple ipod" I get results like:

Apple iPod (brand: Apple)
TuneDock for Apple iPod (brand: Belkin)

Since both of these results match "apple" and "ipod", they are returned with the same score. Since "apple" also matches the brand for the product from Apple, I'd like to give it a little boost when ranking so that it is returned with a higher score. I think this would give me better search results. Right now both of these are returned with a score of 2, since they both have 2 matching words. If I could weight it, I would want:

Apple iPod (brand: Apple) (score=3)
TuneDock for Apple iPod (brand: Belkin) (score=2)

Is this possible? Thanks, Grant
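One common workaround is to sum two MATCH scores so that a brand hit adds a bonus to the relevance. A sketch (the table name `products` is made up, and it assumes a combined FULLTEXT index on (title, brand) plus a separate FULLTEXT index on brand, since each MATCH column list should correspond to an index):

```sql
-- Base score from the combined index, plus +1-style bonus when the
-- brand column itself matches 'apple'.
SELECT title, brand,
       MATCH (title, brand) AGAINST ('apple ipod' IN BOOLEAN MODE)
     + MATCH (brand)        AGAINST ('apple'      IN BOOLEAN MODE) AS score
FROM products
WHERE MATCH (title, brand) AGAINST ('+apple +ipod' IN BOOLEAN MODE)
ORDER BY score DESC;
```

You can also multiply one of the MATCH terms by a constant if a brand match should count for more or less than a full point.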
Need help with fulltext search and left join
Hi, I keep getting errors on this query and I'm not sure why. I'm using mysql version 4.0.22. Any ideas?

SELECT pn_coupons_store.store_name, pn_coupons_store.store_name_short,
pn_coupons_coupons.store_id, pn_coupons_coupons.coupon_id,
pn_coupons_dealtype.dealtype_name, pn_coupons_coupons.coupon_desc,
pn_coupons_coupons.coupon_link, pn_coupons_coupons.use_preurl,
pn_coupons_coupons.coupon_tracking, pn_coupons_coupons.coupon_banner,
pn_coupons_coupons.coupon_drm, pn_coupons_coupons.coupon_code,
pn_coupons_coupons.coupon_legal_text, pn_coupons_coupons.coupon_submitter,
pn_coupons_coupons.coupon_added, pn_coupons_coupons.coupon_start,
pn_coupons_coupons.coupon_exp, pn_coupons_coupons.coupon_hits,
pn_coupons_network.network_id,
MATCH (coupon_desc) AGAINST ('electronics' IN BOOLEAN MODE) AS score
FROM pn_coupons_coupons
WHERE MATCH (store_name) AGAINST ('electronics' IN BOOLEAN MODE),
LEFT JOIN pn_coupons_store ON (pn_coupons_store.store_id = pn_coupons_coupons.store_id)
LEFT JOIN pn_coupons_dealtype ON (pn_coupons_dealtype.dealtype_id = pn_coupons_coupons.dealtype_id)
LEFT JOIN pn_coupons_network ON (pn_coupons_network.network_id = pn_coupons_store.store_network_id)
ORDER BY score DESC

Thanks, Grant
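The syntax error is most likely the comma and the WHERE clause sitting before the LEFT JOINs: the joins belong in the FROM clause, and WHERE must come after all of them. Also, `store_name` lives in pn_coupons_store, so it can only be matched once that table is joined in; matching on `coupon_desc` (which is in the FROM table and already used for the score) avoids that. An untested corrected sketch, with the SELECT list abbreviated:

```sql
SELECT s.store_name, s.store_name_short,
       c.store_id, c.coupon_id, d.dealtype_name,
       c.coupon_desc, c.coupon_link,   -- ... remaining coupon columns ...
       n.network_id,
       MATCH (c.coupon_desc) AGAINST ('electronics' IN BOOLEAN MODE) AS score
FROM pn_coupons_coupons c
LEFT JOIN pn_coupons_store s    ON s.store_id    = c.store_id
LEFT JOIN pn_coupons_dealtype d ON d.dealtype_id = c.dealtype_id
LEFT JOIN pn_coupons_network n  ON n.network_id  = s.store_network_id
WHERE MATCH (c.coupon_desc) AGAINST ('electronics' IN BOOLEAN MODE)
ORDER BY score DESC;
```

If the intent really was to search store names, the MATCH (store_name) condition can go in the WHERE clause after the joins instead, provided pn_coupons_store has a FULLTEXT index on that column.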
QUERY ordering clarification
Hi, If I do a query like:

SELECT prodname, price FROM prod_table
WHERE sku IN ($sku1, $sku2, $sku3, $sku4)

will my results always be ordered in $sku1, $sku2, $sku3, $sku4 order? I can't really do an ORDER BY prodname or price here. I just want to make sure that this type of query will always return the results in $sku1, $sku2, $sku3, $sku4 order. Thanks, Grant
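No -- without an ORDER BY clause, SQL makes no guarantee about row order, and IN (...) does not impose one; the server is free to return rows in whatever order it reads them. If the results must come back in the listed SKU order, MySQL's FIELD() function can express that directly (literal SKU values shown here in place of the PHP variables):

```sql
-- FIELD() returns the 1-based position of sku within the list,
-- so ordering by it reproduces the list's order exactly.
SELECT prodname, price
FROM prod_table
WHERE sku IN ('sku1', 'sku2', 'sku3', 'sku4')
ORDER BY FIELD(sku, 'sku1', 'sku2', 'sku3', 'sku4');
```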
SELECT help
Hi, I am trying to do a select from two tables.

Table1: sku, title
Table2: sku, feature

SELECT table1.title, table2.feature FROM table1, table2
WHERE table1.sku IN ($sku1, $sku2, $sku3)
ORDER BY FIELD(table1.sku, $sku1, $sku2, $sku3) ASC

That seems to work to some extent, but I am getting way too many results (more than 3). It's returning all combinations of sku and feature even if they don't share the same sku. I modified the select to:

SELECT table1.title, table2.feature FROM table1, table2
WHERE (table1.sku = table2.sku)
AND table1.sku IN ($sku1, $sku2, $sku3)
ORDER BY FIELD(table1.sku, $sku1, $sku2, $sku3) ASC

That seemed to work almost correctly. I have some items in table2 that don't have a feature and therefore don't have a row associated with them. For example, if I have 3 items in each table, the above select works fine. If I have 3 items in table1 and 2 items in table2, the above query only gives me 2 results. table1 will always be fully populated and table2 might be missing some features. How can I run my query to get 3 results and, if the feature is missing, still return table1.title and NULL for the feature? Thanks, Grant
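This is exactly what a LEFT JOIN is for: it keeps every row from the left table and fills the right table's columns with NULL wherever no matching sku exists. A sketch with literal SKU values in place of the PHP variables:

```sql
-- All three table1 rows come back; feature is NULL when table2
-- has no row for that sku.
SELECT table1.title, table2.feature
FROM table1
LEFT JOIN table2 ON table1.sku = table2.sku
WHERE table1.sku IN ('sku1', 'sku2', 'sku3')
ORDER BY FIELD(table1.sku, 'sku1', 'sku2', 'sku3') ASC;
```

The first query's explosion of results was a cross join (every table1 row paired with every table2 row), which is what the comma-style FROM produces when no join condition ties the tables together.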
Need help figuring out indexes for faster SELECT queries
I have a new project I'm working on, and I was wondering if anyone could help me optimize my selects for speed. I have a table with about 500,000 entries. The table structure I'm using is (via my PHP commands):

$sql = "CREATE TABLE $store_data_table (
    $store_data_column[sku] VARCHAR(10) NOT NULL PRIMARY KEY,
    $store_data_column[prod_name] VARCHAR(127),
    $store_data_column[thumb_url] VARCHAR(127),
    $store_data_column[msrp] DECIMAL(12,2) NOT NULL,
    $store_data_column[price] DECIMAL(12,2) NOT NULL,
    $store_data_column[disc] DECIMAL(12,2) NOT NULL,
    $store_data_column[cat1] VARCHAR(63) NOT NULL,
    $store_data_column[cat2] VARCHAR(63) NOT NULL,
    $store_data_column[cat3] VARCHAR(63) NOT NULL,
    $store_data_column[cat4] VARCHAR(63) NOT NULL,
    INDEX (`cat1`,`cat2`,`cat3`,`cat4`))";

I do lots of queries like selecting the categories:

$sql = "SELECT DISTINCT $store_data_column[cat1] FROM $store_data_table
        WHERE $store_data_column[cat1] != 'N/A'
        ORDER BY $store_data_column[cat1] ASC";

and

$sql = "SELECT DISTINCT $store_data_column[cat2] FROM $store_data_table
        WHERE $store_data_column[cat1] = '$url_params[cat1]'
        AND $store_data_column[cat2] != 'N/A'
        ORDER BY $store_data_column[cat2] ASC";

and

$sql = "SELECT DISTINCT $store_data_column[cat3] FROM $store_data_table
        WHERE $store_data_column[cat1] = '$url_params[cat1]'
        AND $store_data_column[cat2] = '$url_params[cat2]'
        AND $store_data_column[cat3] != 'N/A'
        ORDER BY $store_data_column[cat3] ASC";

etc...
Also, I do selects like:

$sql = "SELECT $store_data_column[sku], $store_data_column[prod_name],
        $store_data_column[price], $store_data_column[cat1],
        $store_data_column[cat2], $store_data_column[cat3],
        $store_data_column[cat4]
        FROM $store_data_table
        WHERE $store_data_column[cat1] = '$url_params[cat1]'
        ORDER BY $order_by_clause LIMIT $item_offset, $max_items";

all the way to:

$sql = "SELECT $store_data_column[sku], $store_data_column[prod_name],
        $store_data_column[price], $store_data_column[cat1],
        $store_data_column[cat2], $store_data_column[cat3],
        $store_data_column[cat4]
        FROM $store_data_table
        WHERE $store_data_column[cat1] = '$url_params[cat1]'
        AND $store_data_column[cat2] = '$url_params[cat2]'
        AND $store_data_column[cat3] = '$url_params[cat3]'
        AND $store_data_column[cat4] = '$url_params[cat4]'
        ORDER BY $order_by_clause LIMIT $item_offset, $max_items";

Originally I didn't have the INDEX on categories. Adding the index I have now sped up the queries greatly, but I was wondering if there were any other tips to squeeze some more performance out of the selects.
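One property of the composite index (cat1, cat2, cat3, cat4) worth checking with EXPLAIN: MySQL can only use it for a leftmost prefix of the column list. So the queries above, which always constrain cat1 first, all benefit, but a query filtering only on cat2 would fall back to a full scan. A sketch (`store_data` stands in for $store_data_table, category values are made up):

```sql
-- Uses the composite index: cat1, cat2 form a leftmost prefix.
EXPLAIN SELECT sku, prod_name, price
FROM store_data
WHERE cat1 = 'Electronics' AND cat2 = 'Audio';

-- Cannot use it: cat1 is absent, so there is no leftmost prefix.
EXPLAIN SELECT sku, prod_name, price
FROM store_data
WHERE cat2 = 'Audio';
```

If you ever need to filter on a lower-level category alone, a second index starting with that column would cover it.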
Need help optimizing full-text searches
Hi, I currently have a table with 2 text fields, both of which are marked as fulltext. I'm using the full-text boolean search method with fairly good results. My problems, however, are:

1. I am on a shared host provider, so I can't tweak the full-text search options such as stop words or minimum query length.
2. I would like to be able to search for words as short as 2 characters, but I'm limited by the default mysql configuration to 4 characters.
3. I'd like to get around the stopwords/stop characters. A lot of people search for "802.11g", but that search always comes up empty because I think the "." character is messing up my searches.
4. Since I'm using the full-text search with a small amount of data, I would like all the results to be returned, even if the keywords are in more or less than 50% of the rows.
5. I really like the +, -, and "" operators for searching in full-text boolean mode, so I'd like to preserve that functionality.

Since I'm on a shared host provider and can't really optimize the mysql configuration, are there any suggestions to help me improve my search results? Thanks, Grant
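Without access to the server's word-length and stopword settings, one workaround is to maintain your own keyword side-table, populated by your own tokenizer at insert time, and consult it for the terms full-text mishandles (short words, terms with dots) while keeping boolean-mode MATCH for everything else. A rough sketch with made-up names:

```sql
-- Hypothetical side index: one row per (keyword, product). Because you
-- control tokenization, '802.11g' and 2-letter words survive intact.
CREATE TABLE product_keyword (
  keyword varchar(32) NOT NULL,
  prod_id int unsigned NOT NULL,
  PRIMARY KEY (keyword, prod_id)
);

-- Exact keyword lookup: no minimum length, no stopwords, no 50% rule.
SELECT p.*
FROM product_keyword k
JOIN products p ON p.prod_id = k.prod_id
WHERE k.keyword = '802.11g';
```

Note that in boolean mode the 50% threshold already does not apply, so problem 4 may only bite in natural-language mode; the side table mainly addresses problems 2 and 3.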
Insert data if not duplicate based on order
Hi, I have a table that has many rows. I want to be able to insert a new row only if one of its fields is unique among the last 20 rows when I sort them by date. Currently, before I insert a new row, I select a specific field for 20 rows and run a for loop in php looking for a match with the data I'm getting ready to insert. If a match occurs, I skip the insert command. This normally works fine, but I have seen occasions when 2 people visit my website at the same exact time. In this case, sometimes I might insert the data twice because I think I'm spinning in the for loop for 2 site visitors simultaneously. I know this is bad syntax, but this is what I imagine such an insert query to look like:

INSERT INTO $table (datax, datay, dataz) VALUES ('x', 'y', 'z')
IF 'x' unique WHEN ordering by 'z' DESC LIMIT 20

In this example, 'z' is a datetime field and 'x' is a varchar. I don't mind duplicate entries in my table, but I don't want duplicate entries to occur within 20 rows when sorting by date. Is this possible to do with one INSERT command? I saw the IGNORE statement in the mysql manual, but I don't think it really applies to what I'm trying to do here. Any suggestions would be appreciated. Thanks, Grant
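If the server supports subqueries (MySQL 4.1+), the check and the insert can be collapsed into a single statement, which closes the race between two simultaneous visitors because the server evaluates it atomically; on 4.0 you would instead wrap the existing PHP check in LOCK TABLES ... UNLOCK TABLES. A sketch using the placeholder names from the pseudocode (`my_table` stands in for $table):

```sql
-- Insert ('x','y','z') only if 'x' does not already appear in the
-- 20 newest rows, newest-first by the datetime column dataz.
INSERT INTO my_table (datax, datay, dataz)
SELECT 'x', 'y', 'z'
FROM DUAL
WHERE NOT EXISTS (
    SELECT 1
    FROM (SELECT datax
          FROM my_table
          ORDER BY dataz DESC
          LIMIT 20) AS recent
    WHERE recent.datax = 'x'
);
```

The derived table is needed because MySQL does not allow LIMIT directly inside some subquery forms; wrapping the LIMIT in its own SELECT sidesteps that restriction.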