John R Pierce wrote:
> On 8/27/2013 6:49 PM, 高健 wrote:
>> For a query and insert action,
>> Firstly, the data is pulled into the private memory of the backend
>> process that is servicing the client.
> if you're just writing this data into another table, why not do
> it all in SQL ?
>
> INSERT INTO
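(The message is cut off above; a minimal sketch of the pure-SQL approach being suggested, where `target`, `source_a`, and `source_b` are hypothetical names standing in for the customer's tables:)

```sql
-- The joined result never leaves the server: no 3,000,000-row
-- result set is shipped to the Java client and re-inserted row
-- by row; the server moves the rows directly into the target.
INSERT INTO target (id, name, amount)
SELECT a.id, a.name, b.amount
FROM   source_a a
JOIN   source_b b ON b.a_id = a.id;
```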
On 8/27/2013 6:49 PM, 高健 wrote:
For a query and insert action,
Firstly, the data is pulled into the private memory of the backend
process that is servicing the client.
If you're returning a single result of 3 million records then yes, you're
going to need memory to store that entire result set before you can process it.
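(If the rows really must come to the client, the standard alternative is to fetch them in batches through a server-side cursor instead of materializing one huge result set. A sketch with hypothetical table names:)

```sql
BEGIN;
-- The cursor holds the query open on the server; the client
-- pulls rows in chunks, so client memory stays bounded.
DECLARE big_join CURSOR FOR
    SELECT a.id, b.amount
    FROM   source_a a
    JOIN   source_b b ON b.a_id = a.id;
FETCH 10000 FROM big_join;   -- repeat until no rows are returned
CLOSE big_join;
COMMIT;
```

The PostgreSQL JDBC driver does this for you when autocommit is off and `Statement.setFetchSize()` is set to a nonzero value, so the Java program need not issue DECLARE/FETCH by hand.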
Hi:
Now the situation is this:
In the testing environment,
even when my customer changed shared_buffers from 1024MB to 712MB or 512MB,
the total memory consumption stayed almost the same.
I think that PG always uses as many resources as it can.
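(One reason lowering shared_buffers alone may not change the total: shared_buffers is only the shared cache, while each backend can additionally allocate up to work_mem per sort or hash operation. Both can be inspected from SQL, and work_mem can be lowered per session without a restart:)

```sql
SHOW shared_buffers;        -- size of the shared page cache
SHOW work_mem;              -- per-sort/per-hash memory, per backend
SET work_mem = '4MB';       -- applies to the current session only
```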
On Sun, Aug 25, 2013 at 11:08 PM, 高健 wrote:
> Hello:
>
> Sorry to disturb you.
>
> I am now encountering a serious problem: there is not enough memory.
>
> My customer reported that when they run a program, the total
> memory and disk I/O usage both reached the threshold value (80%).
>
> That pr
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of 高健
Sent: Monday, August 26, 2013 2:08 AM
To: pgsql-general
Subject: [GENERAL] Is there any method to limit resource usage in PG?
Hello:
Sorry to disturb you.
I am now encountering a serious
On 8/25/2013 11:08 PM, 高健 wrote:
That program is written in Java.
It uses JDBC to pull data out of the DB; the query joins
some tables together and returns about 3,000,000 records.
Then the program uses JDBC again to write the records row by row,
inserting them into another table.
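(If the data must pass through the client at all, grouping rows into multi-row INSERTs, or using JDBC's `addBatch()`/`executeBatch()`, avoids one network round trip per row. A sketch with hypothetical names and values:)

```sql
-- One statement, one round trip, many rows.
INSERT INTO target (id, amount) VALUES
    (1, 10.0),
    (2, 20.5),
    (3, 30.0);
```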
Hello:
Sorry to disturb you.
I am now encountering a serious problem: there is not enough memory.
My customer reported that when they run a program, the total
memory and disk I/O usage both reached the threshold value (80%).
That program is written in Java.
It uses JDBC to pull out data