Re: Querying large table
In news:[EMAIL PROTECTED], Shadow [EMAIL PROTECTED] wrote:
> I need to get the number of items in a specific category, so I use
>
>   SELECT COUNT(*) FROM items WHERE ctg='ctg'
>
> But each query takes ~10 seconds. It's really slow.

You may add an index on `items`.`ctg` if there is none.

Maciek

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
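A sketch of the suggested fix, using the table and column names from the thread (the index name is hypothetical):

```sql
-- Add an index on the filtered column so the count no longer scans the table.
ALTER TABLE items ADD INDEX idx_ctg (ctg);

-- EXPLAIN should now show the query using the new index:
EXPLAIN SELECT COUNT(*) FROM items WHERE ctg = 'ctg';
```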
need help urgent
How do I retrieve data from three consecutive tuples at a time until the end of the table? Suppose the field name is ID and I want to retrieve values of another field, LAT, such that LAT values for IDs 1,2,3 are taken, then for 3,4,5, then for 4,5,6, and so on until the table ends.

--
View this message in context: http://www.nabble.com/need-help-urgent-tf3491135.html#a9749875
Sent from the MySQL - General mailing list archive at Nabble.com.
Re: need help urgent
Hi Raksha,

Open a cursor over "SELECT id FROM table_name", then loop over it, selecting the LAT value for each id returned by the cursor; close the loop once all the IDs have been processed.

regards
anandk

On 3/30/07, raksha [EMAIL PROTECTED] wrote:
> How do I retrieve data from three consecutive tuples at a time until the
> end of the table? Suppose the field name is ID and I want to retrieve
> values of another field, LAT, such that LAT values for IDs 1,2,3 are
> taken, then for 3,4,5, then for 4,5,6, and so on until the table ends.
>
> --
> View this message in context: http://www.nabble.com/need-help-urgent-tf3491135.html#a9749875
> Sent from the MySQL - General mailing list archive at Nabble.com.
a 'safe' way to move data?
I'm trying to move data between 2 tables:

  INSERT INTO new_table SELECT * FROM old_table LIMIT 5;
  DELETE FROM old_table LIMIT 5;

This is the only process that deletes data from old_table. Can I be *sure* that the LIMIT in these 2 queries will address the same data set? (If I don't keep the LIMIT small, I/O gets too high, so I'm moving the data slowly in batches.)

Thanks,

--
Ian P. Christian ~ http://pookey.co.uk
Re: a 'safe' way to move data?
Hi Christian,

Before deleting the data from old_table, take a backup first:

  CREATE TABLE new_table_bck SELECT * FROM old_table LIMIT 5;

But I feel that, instead of using LIMIT, you should select the rows based on a date or some other condition, so that you can be sure the same data gets inserted into the new table and deleted from the old one.

regards
anandkl

On 3/30/07, Ian P. Christian [EMAIL PROTECTED] wrote:
> I'm trying to move data between 2 tables:
>
>   INSERT INTO new_table SELECT * FROM old_table LIMIT 5;
>   DELETE FROM old_table LIMIT 5;
>
> This is the only process that deletes data from old_table. Can I be *sure*
> that the LIMIT in these 2 queries will address the same data set? (If I
> don't keep the LIMIT small, I/O gets too high, so I'm moving the data
> slowly in batches.)
>
> Thanks,
>
> --
> Ian P. Christian ~ http://pookey.co.uk
Re: need help urgent
Or you could also do this:

  SELECT LAT FROM TABLE A WHERE ID IN (SELECT B.ID FROM TABLE B);

I hope this is what you're looking for; please let us know.

regards
anandkl

On 3/30/07, Ananda Kumar [EMAIL PROTECTED] wrote:
> Hi Raksha,
>
> Open a cursor over "SELECT id FROM table_name", then loop over it,
> selecting the LAT value for each id returned by the cursor; close the
> loop once all the IDs have been processed.
>
> regards
> anandk
>
> On 3/30/07, raksha [EMAIL PROTECTED] wrote:
>> How do I retrieve data from three consecutive tuples at a time until the
>> end of the table? Suppose the field name is ID and I want to retrieve
>> values of another field, LAT, such that LAT values for IDs 1,2,3 are
>> taken, then for 3,4,5, then for 4,5,6, and so on until the table ends.
>>
>> --
>> View this message in context: http://www.nabble.com/need-help-urgent-tf3491135.html#a9749875
>> Sent from the MySQL - General mailing list archive at Nabble.com.
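For the overlapping three-row windows the original question describes, a self-join is another option. This is only a sketch: the table name `readings` is hypothetical, and it assumes ID is a gapless integer key.

```sql
-- Each output row is one three-row window: (ID, ID+1, ID+2),
-- so consecutive windows overlap, as in the question.
SELECT a.ID  AS window_start,
       a.LAT AS lat1,
       b.LAT AS lat2,
       c.LAT AS lat3
FROM readings a
JOIN readings b ON b.ID = a.ID + 1
JOIN readings c ON c.ID = a.ID + 2
ORDER BY a.ID;
```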
Re: Querying large table
It's taking a long time because your filter is external to the table, so you can't use an index. You want to focus your search on the category table, where you can quickly narrow down the number of records to search:

  SELECT ctg, COUNT(itemid)
  FROM categories
  JOIN items ON ctgID=itemCtgID
  WHERE ctg='ctg'
  GROUP BY ctg;

----- Original Message -----
From: Shadow [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, March 29, 2007 6:59 PM
Subject: Querying large table

> Hey, guys.
>
> I have 2 tables: categories and items.
>   COUNT(*) categories = 63 833
>   COUNT(*) items = 742 993
>
> I need to get the number of items in a specific category, so I use
>
>   SELECT COUNT(*) FROM items WHERE ctg='ctg'
>
> But each query takes ~10 seconds. It's really slow.
> Can anybody propose some optimization?
>
> Thanks.
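For the join above to be fast, the join column on the large table also needs an index. A sketch, using the column names from the reply's query (the index name is hypothetical):

```sql
-- Index the foreign-key column on the big table so the join
-- from categories can look up matching items via the index.
ALTER TABLE items ADD INDEX idx_item_ctg (itemCtgID);
```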
Re: a 'safe' way to move data?
No, you can't assure the same data will be addressed without at least including an ORDER BY. Even then you would need to make sure that the first X records in that order would not change. For instance, if you order by entered_date DESC, the data set would change because any new records would get included in the LIMIT.

----- Original Message -----
From: Ian P. Christian [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, March 30, 2007 8:18 AM
Subject: a 'safe' way to move data?

> I'm trying to move data between 2 tables:
>
>   INSERT INTO new_table SELECT * FROM old_table LIMIT 5;
>   DELETE FROM old_table LIMIT 5;
>
> This is the only process that deletes data from old_table. Can I be *sure*
> that the LIMIT in these 2 queries will address the same data set? (If I
> don't keep the LIMIT small, I/O gets too high, so I'm moving the data
> slowly in batches.)
>
> Thanks,
>
> --
> Ian P. Christian ~ http://pookey.co.uk
Re: a 'safe' way to move data?
Brent Baisley wrote:
> No, you can't assure the same data will be addressed without at least
> including an ORDER BY. Even then you would need to make sure that the
> first X records in that order would not change. For instance, if you
> order by entered_date DESC, the data set would change because any new
> records would get included in the LIMIT.

Will it not always use the natural order of the table in selects/deletes, and therefore return results in the order in which they were inserted?

--
Ian P. Christian ~ http://pookey.co.uk
Re: a 'safe' way to move data?
In the last episode (Mar 30), Ian P. Christian said:
> Brent Baisley wrote:
>> No, you can't assure the same data will be addressed without at least
>> including an ORDER BY. Even then you would need to make sure that the
>> first X records in that order would not change. For instance, if you
>> order by entered_date DESC, the data set would change because any new
>> records would get included in the LIMIT.
>
> Will it not always use the natural order of the table in selects/deletes,
> and therefore return results in the order in which they were inserted?

If you know that no one else has inserted or deleted records between your two commands, the commands will address the same records. To be completely safe, you would want to use InnoDB tables, then select only the primary keys of the 50,000 records you're interested in, using the FOR UPDATE clause (to keep others from modifying those records while you're doing the move). Then INSERT INTO newtable SELECT * FROM oldtable WHERE primarykey IN ( your 50,000 keys ), then DELETE FROM oldtable WHERE primarykey IN ( your 50,000 keys ), then COMMIT, which will make your insertions and deletions truly atomic.

--
Dan Nelson [EMAIL PROTECTED]
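Dan's recipe, sketched as SQL. The primary-key column name `id` is an assumption, and the key list returned by the first SELECT has to be carried over into the later statements by the application:

```sql
START TRANSACTION;

-- Lock the batch so nothing else can modify it during the move.
SELECT id FROM old_table ORDER BY id LIMIT 5 FOR UPDATE;

-- Using the keys returned above:
INSERT INTO new_table SELECT * FROM old_table WHERE id IN (/* keys from the SELECT */);
DELETE FROM old_table WHERE id IN (/* keys from the SELECT */);

COMMIT;  -- the insert and delete become visible atomically
```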
Re: a 'safe' way to move data?
Dan Nelson wrote:
> To be completely safe, you would want to use InnoDB tables, then select
> only the primary keys of the 50,000 records you're interested in, using
> the FOR UPDATE clause (to keep others from modifying those records while
> you're doing the move). Then INSERT INTO newtable SELECT * FROM oldtable
> WHERE primarykey IN ( your 50,000 keys ), then DELETE FROM oldtable WHERE
> primarykey IN ( your 50,000 keys ), then COMMIT, which will make your
> insertions and deletions truly atomic.

Ah, of course - a far better idea. Thanks :)

--
Ian P. Christian ~ http://pookey.co.uk
Database Replication Failover
Dear MySQL,

I'm looking into availability and wonder if any member might be able to help me. I have two databases, one primary and one full replica. Normally the primary is used for data input, and reports are drawn from the replica. If I lose the primary, can any members recommend software which:

- Stops the replication daemon.
- Sets the replication server to read/write.
- Shuts down the primary.
- Routes traffic to the replication server.

Any advice or ideas would be very useful...

Regards,

Ben Clewett.
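The database-side steps above can be done with plain MySQL statements on the replica; this is only a sketch of those two steps (shutting down the primary and re-routing traffic happen outside MySQL, e.g. in a load balancer or heartbeat script):

```sql
-- On the replica, after the primary is lost:
STOP SLAVE;               -- stop the replication I/O and SQL threads
SET GLOBAL read_only = 0; -- allow writes on the promoted server
```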
CVS-Like System For Database Changes
Hello!

I'm just getting into using CVS to track changes to my source code for PHP projects that I'm working on, and it's exactly what my organization needed. However, there does not appear to be a way to track changes to MySQL databases in the same way.

Basically, as the structure of tables is changed to meet the requirements of new features, I'd like a way to record those changes (both structural table changes and also default table data, such as a table of states or zip codes or whatever) in a CVS-type system (preferably integrated with CVS directly), so that when a customer uses CVS to get the newest version of the code for their project, they can also get (and automatically apply) all changes to their database for the new version.

Does such a system exist? How do other people cope with these types of updates?

Thanks for any guidance!

Tim Gustafson
(831) 425-4522 x 100
(831) 621-6299 Fax
Re: CVS-Like System For Database Changes
We keep all of the schema (one file per table) in SVN (Subversion), with a directory to represent each database. As the schema evolves, we have had no trouble tracking changes through resource history, and we are able to extract diffs on every committed change. It works like a charm and would probably work equally well with CVS.

- michael

On 3/30/07, Tim Gustafson [EMAIL PROTECTED] wrote:
> Hello!
>
> I'm just getting into using CVS to track changes to my source code for PHP
> projects that I'm working on, and it's exactly what my organization needed.
> However, there does not appear to be a way to track changes to MySQL
> databases in the same way.
>
> Basically, as the structure of tables is changed to meet the requirements
> of new features, I'd like a way to record those changes (both structural
> table changes and also default table data, such as a table of states or
> zip codes or whatever) in a CVS-type system (preferably integrated with
> CVS directly), so that when a customer uses CVS to get the newest version
> of the code for their project, they can also get (and automatically apply)
> all changes to their database for the new version.
>
> Does such a system exist? How do other people cope with these types of
> updates?
>
> Thanks for any guidance!
>
> Tim Gustafson
> (831) 425-4522 x 100
> (831) 621-6299 Fax

--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong. Some models are useful.
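One common way to make tracked schema changes applicable at the customer's end (an assumption on my part, not something either poster described) is to keep numbered migration scripts in CVS alongside the code, plus a version table that each script updates. The table and the example ALTER are hypothetical:

```sql
-- Records which migration scripts have already been applied.
CREATE TABLE schema_version (
  version    INT NOT NULL PRIMARY KEY,
  applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Each migration script makes its changes, then records itself:
ALTER TABLE customers ADD COLUMN state CHAR(2);  -- example change
INSERT INTO schema_version (version) VALUES (1);
```

An update tool can then compare the highest version in the table against the scripts checked out from CVS and apply only the missing ones.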
Tricky Sorting
I'm trying to figure out the best way to do a tricky bit of sorting. I'm pretty sure it's entirely possible with an IFNULL or something, but I always feel like I hit a barrier when I get to a certain level of complexity in my MySQL.

In any case, I have some magazines, each of which has a DATE, a VOLUME, and an ISSUE. The sorting is usually simple, in that the date includes a year and a month and you can sort by that. However, sometimes magazines get delayed and they start putting only a year on their issues, omitting the months. Worse, they sometimes randomly change numbers (or names). Ideally, I'd like things to sort by the date, unless there's no month, in which case it falls back to the issue number.

So for example, this would be a correct sort:

  1996-01-01 Original Mag V1 #1
  1996-02-01 Original Mag V1 #2
  1996-00-00 Original Mag V1 #3
  1996-00-00 Original Mag V1 #4
  1996-05-01 Original Mag V1 #5
  1996-06-01 Replacement Mag V1 #1

ORDER BY date, volume, issue does this:

  1996-00-00 Original Mag V1 #3
  1996-00-00 Original Mag V1 #4
  1996-01-01 Original Mag V1 #1
  1996-02-01 Original Mag V1 #2
  1996-05-01 Original Mag V1 #5
  1996-06-01 Replacement Mag V1 #1

ORDER BY volume, issue does this:

  1996-01-01 Original Mag V1 #1
  1996-06-01 Replacement Mag V1 #1
  1996-02-01 Original Mag V1 #2
  1996-00-00 Original Mag V1 #3
  1996-00-00 Original Mag V1 #4
  1996-05-01 Original Mag V1 #5

None of it's quite ideal. ORDER BY YEAR(date), volume, issue is what I'm using right now because it puts things in the right ballpark.

Shannon
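One conditional-sort sketch along the IFNULL lines the poster suspected. The table and column names are assumptions, and it relies on two further assumptions: missing months are stored as MONTH(date) = 0, and issue numbers roughly track months within a year (which holds for the sample data, where #3 and #4 land between February and May):

```sql
-- Sort by month when it is known; when the month is zero,
-- substitute the issue number as a stand-in for the month.
SELECT date, title, volume, issue
FROM magazines
ORDER BY
  YEAR(date),
  volume,
  IF(MONTH(date) = 0, issue, MONTH(date)),
  issue;
```

On the sample data this produces the "correct sort" shown above, but it will misplace rows once issue numbers drift far from the month they shipped in.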