We have a table with over 27 million rows and have had trouble with JDBC on
large queries. Our solution has been to use loops with a more restrictive
WHERE clause. It's not mission critical, so I have not investigated very
thoroughly.
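For what it's worth, the keyed loop looks roughly like the following in JDBC.
The table name `readings` and the indexed key column `id` are placeholders for
illustration, not our real schema:

```java
import java.sql.*;

// Sketch of chunked retrieval keyed on an indexed column. Table and
// column names are hypothetical; adjust CHUNK to taste.
public class ChunkedScan {
    static final int CHUNK = 25000;

    // Each chunk is an index range scan starting past the last row seen,
    // so no pass has to skip over an ever-growing offset.
    static String chunkQuery() {
        return "SELECT id, value FROM readings WHERE id > ? ORDER BY id LIMIT " + CHUNK;
    }

    static void scan(Connection conn) throws SQLException {
        long lastId = -1;
        try (PreparedStatement ps = conn.prepareStatement(chunkQuery())) {
            while (true) {
                ps.setLong(1, lastId);
                int rows = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id"); // high-water mark for the next chunk
                        rows++;
                        // ... process the row here ...
                    }
                }
                if (rows < CHUNK) break; // a short chunk means we reached the end
            }
        }
    }
}
```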
But I recently ran across the --quick option for the mysql client. We began
to implement this, but noticed that it prevented large updates (> 50,000
rows) when using our application, so we had to turn it off; still, it might
work for you.
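For context, --quick makes the mysql client fetch rows one at a time
(mysql_use_result) instead of buffering the whole result set, which is
essentially the behavior missing on the JDBC side. The drivers descended from
mm.mysql (MySQL Connector/J) stream rows when a statement is forward-only and
read-only with a fetch size of Integer.MIN_VALUE; whether a given mm.mysql
build already honors this is an assumption worth checking against its docs:

```java
import java.sql.*;

// Sketch: the JDBC analogue of the client's --quick. Streaming via a
// fetch size of Integer.MIN_VALUE is documented Connector/J behavior;
// treat it as an assumption for older mm.mysql builds.
public class StreamingSelect {
    static Statement streamingStatement(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,   // no scrolling back
                ResultSet.CONCUR_READ_ONLY);   // no updatable cursor
        stmt.setFetchSize(Integer.MIN_VALUE);  // driver-specific "stream, don't buffer" hint
        return stmt;
    }
}
```

The same caveat we hit with --quick would apply: while a streamed result set
is open, that connection cannot run other statements, and a slow reader can
hold locks on the table, which is probably why it interfered with our large
updates.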
Kevin Kotun
Hydrologist
Everglades National Park
>
> We are encountering two issues when using MySQL with large tables (by large,
> we're talking > 1 million rows). Our application is written in Java and we
> are using the mm.mysql JDBC driver. We run our applications using both
> Oracle and MySQL. Below are issues that we have experienced.
>
> 1. The mm.mysql driver caches all results from a large selection in memory
> before returning the ResultSet to the calling method. This means that any
> select returning a lot of rows runs the risk of getting an OutOfMemoryError.
> Oracle doesn't have this problem, as the ResultSet only contains some of the
> rows matching the query, and more are retrieved as ResultSet.next() is
> called. One of our developers has already discussed the issue with Mark
> Matthews, and the impression seems to be that MySQL does not support true
> cursors. Once you start a query, you must read the entire result set before
> issuing any other queries. The lack of cursors seems to really limit what
> types of applications can be implemented using MySQL as a backend. Are
> there any discussions regarding the implementation of cursors which would
> support canceling and scrolling, or is the solution to just keep adding
> memory (we're running on a PII 450 w/ 128MB RAM)?
>
> 2. After we incurred problem number 1, we decided to work around it by using
> the LIMIT directive and retrieving the entire table in chunks of 25000 rows
> by issuing a series of queries. Unfortunately, each LIMIT query takes longer
> and longer to return results: the per-chunk time grows roughly in proportion
> to the offset, so the overall select takes quadratic time. As an example, we
> tried to query all of the rows for a
> table containing 18 columns with 1.8 million rows (on a PII 450 w/ 128M). A
> partial summary of the results is in the following table:
>
>   Selected rows       Time for this select   Cumulative time
>   0-24999             00:00:29               00:00:29
>   25000-49999         00:00:30               00:00:59
>   50000-74999         00:00:32               00:01:31
>   ...
>   500000-524999       00:05:52               01:01:04
>   525000-549999       00:06:03               01:07:07
>   ...
>   1000000-1024999     00:11:10               03:50:07
>   ...
>   1775000-1799999     00:27:50               14:15:34
>
> Is this phenomenon expected behavior, and will it be addressed sometime in
> the future, or is it just the way it goes?
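
The timings above are consistent with each LIMIT pass rescanning every row
before its offset. A back-of-the-envelope check with the same figures (1.8
million rows in chunks of 25,000):

```java
// Rough cost model: a LIMIT offset,chunk query walks past `offset` rows
// before returning the chunk, so total rows touched is about n^2/(2*chunk).
public class OffsetCost {
    static long rowsTouched(long n, long chunk) {
        long total = 0;
        for (long off = 0; off < n; off += chunk) {
            total += Math.min(off + chunk, n); // rows the server walks for this pass
        }
        return total;
    }

    public static void main(String[] args) {
        // 72 passes over 1.8M rows touch ~65.7M rows in total,
        // versus the 1.8M actually returned.
        System.out.println(rowsTouched(1_800_000, 25_000)); // prints 65700000
    }
}
```

Keying each chunk on an indexed column (WHERE id > last_seen ... LIMIT 25000)
keeps every pass at roughly constant cost instead.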
>
>
> Stephen L. Faustino
> Senior Software Engineer
> SecureLogix Corporation
> Direct/Vmail (210)402-9669x949
> mailto:[EMAIL PROTECTED]
>
---------------------------------------------------------------------
Before posting, please check:
http://www.mysql.com/manual.php (the manual)
http://lists.mysql.com/ (the list archive)
To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php