> Hello,
>
> I'm executing this query:
>
> SELECT x, y, another_field FROM generate_series(1, 10) x,
> generate_series(1, 10) y, my_table

Well, do you realize this is a cartesian product? It gives

10 x 10 x 360,000 = 36,000,000

rows in the end. I'm not sure how wide the third table is (how many
columns etc.), but this may occupy a lot of memory.
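
You can check the estimate without actually running the query - EXPLAIN
prints the expected row count of the join. The estimate for
generate_series() may be off (the planner assumes a default row count for
set-returning functions), but the cross join is obvious from the plan:

    EXPLAIN SELECT x, y, another_field
      FROM generate_series(1, 10) x,
           generate_series(1, 10) y,
           my_table;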

> The field 'another_field' belongs to 'my_table'. And that table has
> 360,000 entries. On a 64-bit machine with 4GB RAM, Ubuntu 10.10 and
> postgres 8.4.7, the query works fine. But on a 32-bit machine with
> 1GB RAM, Ubuntu 9.10 and postgres 8.4.7, the query process is killed
> after taking about 80% of the available memory. On the 64-bit machine
> the query also takes about 60-70% of the available memory, but it finishes.
> And this happens even if I simply get x and y:
>
> SELECT x, y FROM generate_series(1, 10) x, generate_series(1, 10) y,
> my_table

The result is still 36 million rows, so I guess there's not a big difference.
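
If you want the exact number without shipping all those rows to the
client, something like this only sends the count back:

    SELECT count(*)
      FROM generate_series(1, 10) x,
           generate_series(1, 10) y,
           my_table;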

> Is this normal? I mean, postgres has to deal with millions of rows, OK,
> but shouldn't it start swapping instead of crashing? Is it a question
> of postgres configuration?

I guess that's the OOM killer killing one of the processes. See

http://en.wikipedia.org/wiki/Out_of_memory

So it's a matter of the operating system, not PostgreSQL - the kernel
decides there's not enough memory, picks one of the processes and kills
it. PostgreSQL is just the victim here.
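
If you really do need the full result on the client, one way to avoid
buffering all 36 million rows at once is to fetch them in chunks through
a cursor - just a sketch, the cursor name and batch size are arbitrary:

    BEGIN;
    DECLARE c CURSOR FOR
        SELECT x, y, another_field
        FROM generate_series(1, 10) x,
             generate_series(1, 10) y,
             my_table;
    FETCH 1000 FROM c;   -- repeat until it returns no rows
    CLOSE c;
    COMMIT;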

Tomas

