Neal Clark <[EMAIL PROTECTED]> writes:
> comments?
Looks like the right idea. If you have a lot of rows to process,
you'll benefit by fetching in batches, e.g.
my $sth = $dbh->prepare(qq{FETCH FORWARD 1000 FROM my_cur});
$sth->execute;
# iterate through the result set here, then repeat the FETCH until it returns no rows
-Doug
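
A fuller sketch of that batched-cursor pattern through DBI might look like the
following; the connection details, table, column and cursor names are only
placeholders, not anything from this thread.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# A plain cursor only exists inside a transaction, so open one around the loop.
$dbh->begin_work;
$dbh->do(q{DECLARE my_cur CURSOR FOR SELECT id, payload FROM big_table});

my $fetch = $dbh->prepare(q{FETCH FORWARD 1000 FROM my_cur});
while (1) {
    $fetch->execute;
    my $got = 0;
    while (my $row = $fetch->fetchrow_hashref) {
        $got++;
        # do stuff with $row->{id} and $row->{payload}
    }
    last if $got == 0;   # FETCH returned nothing, so the cursor is exhausted
}

$dbh->do(q{CLOSE my_cur});
$dbh->commit;
$dbh->disconnect;

Only one batch of rows is held on the client at any time, which is the point
of the exercise.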
---
Okay, I don't have any PostgreSQL tables big enough to verify this is
doing what I think it is (namely, only keeping one row from my result
set in memory at a time), and I still don't really know much about
cursors or pg, but this appears to be working.
---
Thanks for all the replies, everyone. Not really knowing what a cursor
is, I suppose I have some work to do. I can do the SELECT/LIMIT/OFFSET
approach, but that seems like kind of a headache, esp. when it's hard
to predict what # of rows will max out memory.
On Mon, Mar 12, 2007 at 08:38:52AM -0400, Douglas McNaught wrote:
> You are restricted to staying in a transaction while the cursor is
> open, so if you want to work outside of transactions LIMIT/OFFSET
> is your only way.
http://www.postgresql.org/docs/8.2/interactive/sql-declare.html
"If WITH H
"Albe Laurenz" <[EMAIL PROTECTED]> writes:
> So there is no automatic way of handling it.
>
> You will probably have to consider it in your code and use
> SELECT statements with a LIMIT clause.
Either that, or explicitly DECLARE a CURSOR and use FETCH from that
cursor in batches. You can do the FETCHes through DBI like any other
statement.
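
For comparison, a rough sketch of the LIMIT/OFFSET style Albe suggests, with
placeholder table and column names; it needs no transaction, but large OFFSETs
get progressively slower because the skipped rows still have to be scanned.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { RaiseError => 1 });

my $batch  = 1000;
my $offset = 0;

# the ORDER BY keeps successive pages consistent
my $sth = $dbh->prepare(q{SELECT id, payload FROM big_table
                          ORDER BY id LIMIT ? OFFSET ?});
while (1) {
    $sth->execute($batch, $offset);
    my $got = 0;
    while (my $row = $sth->fetchrow_hashref) {
        $got++;
        # do stuff with $row
    }
    last if $got < $batch;   # a short batch means we have reached the end
    $offset += $batch;
}
$dbh->disconnect;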
Neal Clark wrote:
> my $sth = $dbh->prepare(qq{SOME_QUERY});
> $sth->execute;
> while (my $href = $sth->fetchrow_hashref) {
> # do stuff
> }
>
[...]
>
> So with mysql, I can just say $dbh->{mysql_use_result} = 1, and
> then it switches so that the fetchrow_hashref calls are actually
> pulling rows from the server one at a time instead of buffering the
> whole result set in memory.
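
For reference, the MySQL-side behavior being described looks roughly like this
under DBD::mysql (placeholder names again); DBD::Pg has no equivalent switch,
which is why the thread keeps coming back to cursors.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'password',
                       { RaiseError => 1 });
$dbh->{mysql_use_result} = 1;   # stream rows from the server instead of buffering them client-side

my $sth = $dbh->prepare(q{SELECT id, payload FROM big_table});
$sth->execute;
while (my $href = $sth->fetchrow_hashref) {
    # each call pulls the next row over the wire
}
$dbh->disconnect;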
---
Hi.
I am in the middle of moving a product from MySQL to PostgreSQL. One of
the tables is relatively big, with 100M+ rows and growing, each of which
has a column that usually contains between 1 and 500k of data (the 'MYD'
file is currently 94G).