On Aug 21, 2007, at 12:04 AM, Tom Lane wrote:
If you need to deal with very large result sets, the standard advice
is to use a cursor so you can pull a few hundred or thousand rows
at a time via FETCH.
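In psql that pattern might look something like the sketch below; the cursor name and batch size are illustrative, and the query stands in for whatever large SELECT is being run:

```sql
BEGIN;

-- Declare a server-side cursor over the big query; the result is
-- not materialized on the client all at once.
DECLARE big_cur CURSOR FOR
    SELECT * FROM compliance_2006;

-- Pull rows in manageable batches; repeat until FETCH returns no rows.
FETCH 1000 FROM big_cur;

CLOSE big_cur;
COMMIT;
```

Because only one batch is held client-side at a time, memory use stays flat no matter how large the full result set is.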
In case it's not obvious... in this case you might want to dump the
output of that query in
Bill Thoen <[EMAIL PROTECTED]> writes:
> I ran VACUUM ANALYZE just before I launched this and there were no other
> postgress jobs running. I'm the only user as well. I also ran EXPLAIN
> prior to the run and got this:
> Nested Loop  (cost=11.71..28800.34 rows=7219 width=584)
>   ->  Seq Scan o
Something is really screwy here. I tried what looked like a simpler task
than I tried when I started this message thread. The only unusual issue
here is that the table compliance_2006 is rather big (over 18 million
records). The table ers_regions is pretty small (about 3100 records) and
all the
Bill Thoen <[EMAIL PROTECTED]> writes:
> Tom, here's the "explain" results: Does this help explain what went wrong?
> (And yes, I think there will be a *lot* of groups.)
> explain select count(*) as count,fips_st_cd, fips_cnty_cd, farm_nbr,
> tract_nbr, field_nbr,crop_cd,crop_status_cd,practice_cd
Tom, here's the "explain" results: Does this help explain what went wrong?
(And yes, I think there will be a *lot* of groups.)
explain select count(*) as count,fips_st_cd, fips_cnty_cd, farm_nbr,
tract_nbr, field_nbr,crop_cd,crop_status_cd,practice_cd,seq_nbr from
compliance_2006 group by fips_st_
Bill Thoen <[EMAIL PROTECTED]> writes:
> I knew this would take some time, but what I didn't expect was that about
> an hour into the select, my mouse and keyboard locked up and also I
> couldn't log in from another computer via SSH. This is a Linux machine
> running Fedora Core 6 and PostgreSQL is
Bill Thoen <[EMAIL PROTECTED]> wrote:
>
> I'm new to PostgreSQL and I ran into a problem I don't want to repeat. I have
> a database with a little more than 18 million records that takes up about
> 3GB. I need to check to see if there are duplicate records, so I tried a
> command like this:
>
> SELE
I'm new to PostgreSQL and I ran into a problem I don't want to repeat. I have
a database with a little more than 18 million records that takes up about
3GB. I need to check to see if there are duplicate records, so I tried a
command like this:
SELECT count(*) AS count, fld1, fld2, fld3, fld4 FROM My
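The message is cut off above, but the usual shape of a duplicate-hunting query like this is a GROUP BY over the candidate key columns with a HAVING filter; the table name below is a stand-in, since the original name is truncated:

```sql
-- my_table is a placeholder; the real table name is cut off above.
SELECT count(*) AS count, fld1, fld2, fld3, fld4
FROM my_table
GROUP BY fld1, fld2, fld3, fld4
HAVING count(*) > 1;   -- keep only groups that occur more than once
```

On a table of 18 million rows this still has to aggregate every row, which is why the cursor/FETCH advice earlier in the thread matters if the result set itself is large.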