andrew klassen wrote:
> I am using the C library interface, and for these particular transactions
> I issue the PREPARE statements up front. Then, as I get requests, I issue a
> BEGIN, followed by at most 300 EXECUTEs, and then a COMMIT. That is the
> general scenario. What value beyond 300 should I try?
Make sure ...
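For context, a minimal libpq sketch of the batching pattern described above; the statement name, table, and SQL are made up for illustration:

#include <stdio.h>
#include <libpq-fe.h>

/* Illustrative only: "queue_insert" and the work_queue table are made-up names. */

/* Prepare the statement once, right after connecting. */
static void preload(PGconn *conn)
{
    PGresult *res = PQprepare(conn, "queue_insert",
        "INSERT INTO work_queue (id, payload) VALUES ($1, $2)", 2, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PREPARE failed: %s", PQerrorMessage(conn));
    PQclear(res);
}

/* Run up to n EXECUTEs inside a single transaction (n = 300 in the thread). */
static void run_batch(PGconn *conn, int n)
{
    PQclear(PQexec(conn, "BEGIN"));

    for (int i = 0; i < n; i++)
    {
        char idbuf[32];
        const char *values[2] = { idbuf, "some payload" };

        snprintf(idbuf, sizeof idbuf, "%d", i);
        PGresult *res = PQexecPrepared(conn, "queue_insert", 2, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "EXECUTE failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQclear(PQexec(conn, "COMMIT"));
}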
On Wed, 4 Jun 2008, Mathieu Gilardet wrote:
> To be more specific, we have a heavily loaded server with SCSI disks
> (RAID 0 on a SAS controller), for a total of 500GB. At present there is
> 16GB of RAM, representing about 2.5% of the database size.
That's a reasonable ratio. Being able to hold somewhere around ...
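As a reference point, the memory-related settings usually tuned on a box like this look something like the following; the values are only an illustration for 16GB of RAM, not a recommendation for this particular workload:

# postgresql.conf (illustrative values for a 16GB server, 8.3 era)
shared_buffers = 4GB            # commonly started around 1/4 of RAM
effective_cache_size = 12GB     # planner hint: shared_buffers + expected OS cache
work_mem = 32MB                 # per sort/hash step; multiply by concurrent operations
maintenance_work_mem = 512MB    # VACUUM, CREATE INDEX
max_fsm_pages = 2000000         # free space map sizing (8.3 and earlier)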
> I am using the C library interface ... What value beyond 300 should I try?
> Thanks.
Do you have PREPARE sta...
> I am using the C library interface ... What value beyond 300 should I try?
Also, how might COPY (which involve...
andrew klassen wrote:
> I'll try adding more threads to update the table as you suggest.
You could try materially increasing the update batch size too. As an
exercise, you could see what the performance of COPY is by backing out the
data and reloading it from a suitable file.
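As a rough sketch of that exercise in psql, with a hypothetical table name and file path (\copy does the same thing from the client side if the server account cannot use /tmp):

-- dump the current rows out to a flat file
COPY work_queue TO '/tmp/work_queue.dat';

-- empty the table and time a single bulk reload
TRUNCATE work_queue;
\timing
COPY work_queue FROM '/tmp/work_queue.dat';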
Matthew Wakeling wrote:
> If you're running a "work queue" architecture, that probably means you
> only have one thread doing all the updates/inserts? It might be worth
> going multi-threaded, and issuing inserts and updates through more
> than one connection. Postgres is designed pretty well to scale ...
I agree that the disks are probably very busy. I do have 2 disks, but they
are for redundancy. Would it help to put the data, indexes, and xlog on
separate disk partitions?
I'll try adding more threads to update the table as you suggest.
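For what it's worth, a hedged sketch of how that split is usually done, assuming a second disk mounted at a hypothetical /mnt/disk2 and a hypothetical index name:

-- put indexes on their own spindle via a tablespace
CREATE TABLESPACE fastidx LOCATION '/mnt/disk2/pg_indexes';
ALTER INDEX work_queue_pkey SET TABLESPACE fastidx;

-- pg_xlog is usually relocated at the filesystem level:
--   stop the server, move $PGDATA/pg_xlog to the other disk,
--   and leave a symlink in its place before restarting.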
Hi,
I'm trying to make use of a cluster of 40 nodes that my group has,
and I'm curious if anyone has experience with PgPool's parallel query
mode. Under what circumstances could I expect the most benefit from
query parallelization as implemented by PgPool?
We are not using connection pooling; however, the clients are tunneling
through SSH. I forgot to mention that in my first message. Does that
make any difference?
Thank you,
Lewis Kapell
Computer Operations
Seton Home Study School
Lewis Kapell <[EMAIL PROTECTED]> writes:
> ... By examining the logs, I have observed that the
> backend pid for a particular client sometimes changes during a session.
That is just about impossible to believe, unless perhaps you have a
connection pooler in the loop somewhere?
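A cheap way to confirm what is happening is to log the session's own backend PID from the client; within a single connection this value should never change:

-- run once right after connecting, and again before the suspect request
SELECT pg_backend_pid();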
On Wed, 4 Jun 2008, Lewis Kapell wrote:
> The client sends its authorization information immediately before
> sending the data, and also with the data chunk.
Well, I have no idea why the backend pid is changing, but here it looks
like you have a classic concurrency problem caused by checking a var...
I am using multiple threads, but only one worker thread for inserts/updates
to this table.
I don't mind trying to add multiple threads for this table, but my guess is
it would not help, because the overall tps rate is decreasing so
dramatically. Since the cpu time consumed by the corresponding ...
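For reference, a minimal sketch of the more-than-one-connection approach Matthew suggested, with a hypothetical conninfo string and worker count; each thread gets its own PGconn, since a libpq connection must not be used by more than one thread at a time:

#include <stdio.h>
#include <pthread.h>
#include <libpq-fe.h>

#define NWORKERS 4   /* assumed worker count, purely illustrative */

/* Each worker owns its own connection and drains the shared work queue. */
static void *worker(void *arg)
{
    long id = (long) arg;
    PGconn *conn = PQconnectdb("dbname=mydb");   /* hypothetical conninfo */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "worker %ld: %s", id, PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }

    /* ... pull items from the work queue and run the same
     * BEGIN / EXECUTE x N / COMMIT batches as the single-threaded case ... */

    PQfinish(conn);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];

    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *) i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}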
I have a Windows application which connects to a Postgres (8.3) database
residing on our company server. Most of the application's users work
from their homes, so the application was developed with a lot of
security checks.
When a client connects to the database, a random hash is generated and ...
Hello,
I am wondering whether there is any rule for determining the amount of RAM
according to the disk size (anticipating that all, or nearly all, of the
available space will be used) when designing a server.
Of course, the fsm settings imply that as the db size grows, more memory
will be in use for tracking ...
On Tue, 3 Jun 2008, andrew klassen wrote:
> Basically, I have a somewhat constant rate of inserts/updates that go
> into a work queue and then get passed to postgres.
> The cpu load is not that high, i.e. plenty of idle cpu. I am running an
> older version of FreeBSD and the iostat output is not very ...