Jorge Arévalo <jorge.arev...@deimos-space.com> writes:
> I'm executing this query:

> SELECT x, y, another_field FROM generate_series(1, 10) x,
> generate_series(1, 10) y, my_table

> The field 'another_field' belongs to 'my_table', and that table has
> 360000 entries. On a 64-bit machine with 4GB RAM, Ubuntu 10.10, and
> postgres 8.4.7, the query works fine. But on a 32-bit machine with
> 1GB RAM, Ubuntu 9.10, and postgres 8.4.7, the query process is killed
> after consuming about 80% of available memory. On the 64-bit machine
> the query also takes about 60-70% of available memory, but it
> completes.

You mean the backend, or psql?  I don't see any particular backend bloat
when I do that, but psql eats memory because it's trying to absorb and
display the whole query result.  
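(For scale, assuming the row counts quoted above: the two generate_series
calls cross-join with my_table, so the result set is 10 * 10 * 360000 =
36,000,000 rows, e.g.

    -- each source in the FROM list multiplies the row count
    SELECT 10 * 10 * 360000 AS expected_rows;  -- 36000000

and by default psql tries to hold all of that in memory at once before
printing anything.)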

> Is this normal? I mean, postgres has to deal with millions of rows,
> OK, but shouldn't it start swapping instead of crashing? Is it a
> question of postgres configuration?

Try "\set FETCH_COUNT 1000" or so.

                        regards, tom lane
