Re: Use of MySQL with large tables
We have a table with over 27 million rows and have had trouble with the JDBC driver on large queries. Our solution has been to use loops with a more restrictive WHERE clause. It's not mission critical, so I have not investigated very seriously. But I recently ran across the --quick option for the mysql client. We began to implement this, but noticed that it prevented large updates (50,000 rows) when using our application, so we had to turn it off. It might work for you, though.

Kevin Kotun
Hydrologist
Everglades National Park

[Original message follows]

We are encountering two issues when using MySQL with large tables (by large, we're talking 1 million rows). Our application is written in Java and we are using the mm.mysql JDBC driver. We run our applications using both Oracle and MySQL. Below are the issues that we have experienced.

1. The mm.mysql driver caches all results from a large selection in memory before returning the ResultSet to the calling method. This means that any SELECT returning a lot of rows runs the risk of an OutOfMemoryError. Oracle doesn't have this problem: its ResultSet contains only some of the rows matching the query, and more are retrieved as ResultSet.next() is called. One of our developers has already discussed the issue with Mark Matthews, and the impression seems to be that MySQL does not support true cursors. Once you start a query, you must read the entire result set before issuing any other queries. The lack of cursors really limits what types of applications can be implemented using MySQL as a backend. Are there any discussions regarding the implementation of cursors that would support canceling and scrolling, or is the solution just to keep adding memory (we're running on a PII 450 with 128 MB RAM)?

2. After we hit problem 1, we decided to work around it by using the LIMIT directive and retrieving the entire table in chunks of 25,000 rows by issuing a series of queries.
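The chunked retrieval described in point 2 can be sketched as follows. This is a minimal illustration, not the poster's actual code; the table name, connection URL, and credentials are placeholders:

```java
import java.sql.*;

public class ChunkedSelect {
    static final int CHUNK = 25000;

    // Builds one offset-based chunk query, as in the workaround above.
    static String chunkQuery(String table, long offset, int chunk) {
        return "SELECT * FROM " + table + " LIMIT " + offset + ", " + chunk;
    }

    public static void main(String[] args) throws Exception {
        // URL, user, password, and table name are hypothetical.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass");
             Statement st = conn.createStatement()) {
            long offset = 0;
            while (true) {
                int rows = 0;
                try (ResultSet rs = st.executeQuery(
                        chunkQuery("measurement", offset, CHUNK))) {
                    while (rs.next()) {
                        rows++; // process the row here
                    }
                }
                if (rows < CHUNK) break; // short chunk => end of table
                offset += CHUNK;
            }
        }
    }
}
```

Each chunk is a fully materialized ResultSet, so memory use stays bounded by the chunk size rather than the table size.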
Unfortunately, each LIMIT query takes longer and longer to return results, so the overall select slows down dramatically as the offset grows. As an example, we tried to query all of the rows of a table containing 18 columns and 1.8 million rows (on a PII 450 with 128 MB). A partial summary of the results:

Selected rows       Time for this select   Cumulative time
0-24999             00:00:29               00:00:29
25000-49999         00:00:30               00:00:59
50000-74999         00:00:32               00:01:31
...
500000-524999       00:05:52               01:01:04
525000-549999       00:06:03               01:07:07
...
1000000-1024999     00:11:10               03:50:07
...
1775000-1799999     00:27:50               14:15:34

Is this phenomenon expected behavior, and will it be addressed sometime in the future, or is it just the way it goes?

Stephen L. Faustino
Senior Software Engineer
SecureLogix Corporation
Direct/Vmail: (210) 402-9669 x949
mailto:[EMAIL PROTECTED]

--
Before posting, please check:
  http://www.mysql.com/manual.php (the manual)
  http://lists.mysql.com/ (the list archive)
To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
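A note on the slowdown measured in the post above: with an offset-based `LIMIT N, 25000`, the server must generate and discard the first N rows before returning the chunk, so each successive chunk costs more than the last. The "loops with a more restrictive where clause" approach mentioned at the top of the thread avoids this by resuming from the last key seen, letting the server seek via the index. A sketch, assuming a hypothetical integer primary key column `id`:

```java
import java.sql.*;

public class KeysetScan {
    // Restrictive-WHERE chunk query: resumes after the last key seen,
    // so each chunk starts with an index seek instead of an offset scan.
    static String keysetQuery(String table, String keyCol, long lastKey, int chunk) {
        return "SELECT * FROM " + table
             + " WHERE " + keyCol + " > " + lastKey
             + " ORDER BY " + keyCol + " LIMIT " + chunk;
    }

    public static void main(String[] args) throws Exception {
        final int CHUNK = 25000;
        // URL, credentials, table, and key column are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass");
             Statement st = conn.createStatement()) {
            long lastKey = -1;      // assumes keys are >= 0
            boolean more = true;
            while (more) {
                more = false;
                try (ResultSet rs = st.executeQuery(
                        keysetQuery("measurement", "id", lastKey, CHUNK))) {
                    while (rs.next()) {
                        lastKey = rs.getLong("id"); // remember resume point
                        more = true;
                        // process the row here
                    }
                }
                // loop ends when a chunk comes back empty
            }
        }
    }
}
```

With this shape, each chunk takes roughly constant time regardless of how far into the table the scan has progressed.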
Crash Recovery
I have a table with over 26 million records that experiences a high volume of traffic. Recently, query results became a little screwy; upon investigation it was found that the table and the index were fouled up. Running isamchk on the table yielded:

Checking ISAM file: /usr8/hydrology/measurement
Data records: 27034726   Deleted blocks: 0
- check file-size
./bin/isamchk: warning: Size of datafile is: 11  Should be: 3554138
- check delete-chain
- check index reference
- check data record references index: 1
./bin/isamchk: error: Found key at page 1024 that points to record outside datafile
ISAM-table '/usr8/hydrology/measurement' is corrupted
Fix it using switch -r or -o

and running it on the index yielded the same warnings and errors for ISAM-table '/usr8/hydrology/measurement.ISM'.

Trying to fix the table as suggested reduces the datafile to a size of 0 bytes; before the fix it is over 4 GB. I have tried several different options for isamchk, including --quick, which does not kill the data file but still does not work. So the question is: since isamchk can detect the 27 million records, what is my best hope for getting them all out? SELECT * results in garbage; any thoughts would be appreciated. I can get about 90% of the records out with carefully crafted WHERE clauses, but I am a little concerned about their integrity. What is the probability that the integrity of these records can be trusted? They look OK, and a few random checks that I have done show no problems.

Thanks to all.

Kevin Kotun, P.E.
Hydrologist
Everglades National Park
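The "carefully crafted WHERE clauses" salvage described above can be sketched as a range walk that dumps whatever the server can still read and skips ranges that fail. The key column name, range size, and upper bound are assumptions, not from the post:

```java
import java.io.*;
import java.sql.*;

public class SalvageDump {
    // One range of the salvage scan: [lo, lo + span).
    static String rangeQuery(String table, String keyCol, long lo, long span) {
        return "SELECT * FROM " + table + " WHERE " + keyCol + " >= " + lo
             + " AND " + keyCol + " < " + (lo + span);
    }

    public static void main(String[] args) throws Exception {
        final long SPAN = 100000, MAX_KEY = 28000000; // assumed bounds
        // URL, credentials, table, and key column are hypothetical.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass");
             PrintWriter out = new PrintWriter(new FileWriter("salvage.txt"))) {
            for (long lo = 0; lo < MAX_KEY; lo += SPAN) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             rangeQuery("measurement", "id", lo, SPAN))) {
                    ResultSetMetaData md = rs.getMetaData();
                    while (rs.next()) {
                        StringBuilder line = new StringBuilder();
                        for (int c = 1; c <= md.getColumnCount(); c++) {
                            if (c > 1) line.append('\t');
                            line.append(rs.getString(c));
                        }
                        out.println(line);
                    }
                } catch (SQLException e) {
                    // This range hit the corrupted region; note it and move on.
                    System.err.println("skipped " + lo + "-" + (lo + SPAN)
                            + ": " + e.getMessage());
                }
            }
        }
    }
}
```

Narrowing SPAN around the ranges that fail helps locate the corrupted region and recover rows on either side of it.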
Re: Copying from one table to another.
I obviously don't know what the error message is, but:

Update tbl1, tbl2 set tbl1.CategoryInTable1 = tbl2.CategoryInTable 2 where ...
                                                                  ^
This space may be a problem.

Good luck; try to provide the error message.

-- kevin
Re: Copying table V.2.0
I think you need a new approach. It appears that UPDATE does not support the join you are attempting. It seems that you are trying to update the value of a column in one table with the value of a column from another table. If this is the case, I would use a shell script to first extract the values from tblSNLC and then update them directly into tblSNOrg.

Kevin

[Original message follows]

Thanks for your response. Here are the different errors for each SQL command I enter.

Command:
update tblSNLC,tblSNOrg set tblSNOrg.OrgSponsor1=tblSNLC.SponsorFull where tblSNOrg.OrgZip=tblSNLC.Zip;
Error:
Failed to execute SQL : You have an error in your SQL syntax near 'tblSNOrg set tblSNOrg.OrgSponsor1=tblSNLC.SponsorFull where tblSNOrg.OrgZip=tblS' at line 1

Command:
update tblSNLC,tblSNOrg set OrgSponsor1=SponsorFull where OrgZip=Zip;
Error:
Failed to execute SQL : You have an error in your SQL syntax near 'tblSNOrg set OrgSponsor1=SponsorFull where OrgZip=Zip' at line 1

THIS NEXT ONE SEEMS THE CLOSEST

Command:
update tblSNOrg set OrgSponsor1=tblSNLC.SponsorFull where OrgZip=tblSNLC.Zip;
Error:
Failed to execute SQL : Unknown table 'tblSNLC' in field list

I don't know why it wouldn't find the table. Thanks for any continued help.
-eric

On 9/12/2001 10:46 AM, Kevin Kotun [EMAIL PROTECTED] wrote:
> I obviously dont know what the error message is but [snip]
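The multi-table UPDATE forms above fail because MySQL at the time of this thread accepted only a single table in UPDATE, so the parser rejects the two-table forms and the single-table form cannot see tblSNLC. Kevin's two-step suggestion (extract the values, then update directly) can be sketched in Java instead of a shell script; the table and column names come from the posted commands, everything else is a placeholder:

```java
import java.sql.*;
import java.util.*;

public class TwoStepUpdate {
    // The single-table UPDATE applied once per extracted pair.
    static String updateSql() {
        return "UPDATE tblSNOrg SET OrgSponsor1 = ? WHERE OrgZip = ?";
    }

    public static void main(String[] args) throws Exception {
        // URL and credentials are hypothetical.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass")) {
            // Step 1: extract zip -> sponsor pairs from tblSNLC.
            Map<String, String> sponsorByZip = new HashMap<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT Zip, SponsorFull FROM tblSNLC")) {
                while (rs.next()) {
                    sponsorByZip.put(rs.getString("Zip"),
                                     rs.getString("SponsorFull"));
                }
            }
            // Step 2: apply each pair with a single-table UPDATE,
            // which the server's parser does accept.
            try (PreparedStatement ps = conn.prepareStatement(updateSql())) {
                for (Map.Entry<String, String> e : sponsorByZip.entrySet()) {
                    ps.setString(1, e.getValue());
                    ps.setString(2, e.getKey());
                    ps.executeUpdate();
                }
            }
        }
    }
}
```

Note that if tblSNLC has several rows per Zip, the map keeps only one sponsor per zip code; whether that is acceptable depends on the data.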