That is exactly what I did:

\o a_lot_room_to_hold_my_result
select * from a_table

one of three things happens:
1. psql runs out of memory for the query result
2. the psql process is killed
3. PostgreSQL crashes

"If you have a very large table you can exhaust memory on the client 
side unless you are writing the data directly to a file."
How can I write directly to a file, besides "\o" and pg_dump?
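One option that avoids buffering the whole result set in the client is COPY, which streams rows straight to a file (the table and file names below are just placeholders):

```sql
-- Server-side: the file is written on the database server host
-- (requires superuser privileges).
COPY a_table TO '/tmp/a_table.txt';
```

In psql there is also the client-side variant, which streams the rows over the connection and writes them to a local file:

```
\copy a_table to 'a_table.txt'
```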

We have 4 GB of RAM and shared_buffers = 32768; it is a dedicated test box,
and the table is about 2 GB.

Thanks,

-----Original Message-----
From: Steve Crawford [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 10, 2005 11:00 AM
To: Lee Wu; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] select * and save into a text file failed

On Friday 10 June 2005 9:33 am, Lee Wu wrote:
> Even without saving to file, it is still killed:
>...
> My_db=# select * from a_table;
> Killed
>...

The previous examples don't work for me. In psql try this:
--First set the output to a file
\o 'my_output.txt'

--Now run the query
select * from mytable;

--Quit and check your results
\q

If you have a very large table you can exhaust memory on the client 
side unless you are writing the data directly to a file.
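Another way to keep client-side memory bounded is to fetch the result in batches through a cursor rather than in one go (the cursor name and batch size here are arbitrary):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM a_table;
-- Pull 1000 rows at a time; repeat the FETCH until it returns no rows.
FETCH 1000 FROM big_cur;
CLOSE big_cur;
COMMIT;
```

Each FETCH only materializes one batch on the client, so memory use stays flat no matter how large the table is.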

Cheers,
Steve


