On Tuesday, March 13, 2012 at 11:15, Merlin Moncure wrote:

> 2012/3/13 François Beausoleil <franc...@teksol.info>:
> >  
> > I'll go with the COPY, since I can live with the batched requirements just 
> > fine.
>  
> 30-40 'in transaction' i/o bound inserts is so slow as to not really
> be believable unless each record is around 1 megabyte because being in
> transaction removes storage latency from the equation. Even on a
> crappy VM. As a point of comparison my sata workstation drive can do
> in the 10s of thousands. How many records are you inserting per
> transaction?
>  
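Just so we're talking about the same thing, this is what I understand by batched inserts: many rows per transaction, one COMMIT per batch instead of one per message. A rough sketch only (Python/psycopg2, with a made-up table and columns):

    import psycopg2

    # Hypothetical connection settings, table and columns; the point is one
    # COMMIT per batch of messages rather than one per message.
    conn = psycopg2.connect("dbname=mydb user=myuser")

    def insert_batch(messages):
        cur = conn.cursor()
        # All rows share one transaction, so there is a single fsync at
        # COMMIT time instead of one per row.
        cur.executemany(
            "INSERT INTO events (source_id, payload) VALUES (%s, %s)",
            [(m["source_id"], m["payload"]) for m in messages],
        )
        conn.commit()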


I took the time to gather statistics about the database server: 
https://gist.github.com/07bbf8a5b05b1c37a7f2

The files are a series of roughly 30-second samples taken while the system is under 
production load. When I quoted 30-40 transactions per second, I was actually 
referring to the number of messages processed from my message queue. Going by 
the PostgreSQL numbers, xact_commit tells me I'm managing 288 commits per second. 
That's much better than I anticipated.
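For what it's worth, a figure like that can be derived by sampling pg_stat_database.xact_commit twice and dividing the delta by the interval; a quick, purely illustrative sketch (Python/psycopg2, connection string made up):

    import time
    import psycopg2

    # Illustrative only: sample xact_commit twice, compute commits/second.
    conn = psycopg2.connect("dbname=mydb user=myuser")
    conn.autocommit = True
    cur = conn.cursor()

    def commits_per_second(interval=30):
        cur.execute("SELECT xact_commit FROM pg_stat_database"
                    " WHERE datname = current_database()")
        before = cur.fetchone()[0]
        time.sleep(interval)
        cur.execute("SELECT xact_commit FROM pg_stat_database"
                    " WHERE datname = current_database()")
        after = cur.fetchone()[0]
        return (after - before) / float(interval)

    print(commits_per_second())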

Anyway, if anybody has suggestions on how I could increase throughput, I'd 
appreciate it. My message queues are almost always backed up by 1M messages, and 
that backlog is at least partially due to PostgreSQL: if the DB can write faster, 
I can manage my backlog better.

I'm still planning on going with batch processing, but I need to do something 
ASAP to give me just a bit more throughput.
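Concretely, the batch path I have in mind is along these lines: drain a chunk of messages from the queue and load them with a single COPY per transaction. Again just a sketch (Python/psycopg2; table, columns and message shape are made up):

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=myuser")

    def flush_batch(messages):
        # Build a tab-separated buffer and load it with one COPY: one round
        # trip and one COMMIT for the whole batch. Escaping of tabs/newlines
        # inside payload is ignored here for brevity.
        buf = io.StringIO()
        for m in messages:
            buf.write("%s\t%s\n" % (m["source_id"], m["payload"]))
        buf.seek(0)
        cur = conn.cursor()
        cur.copy_from(buf, "events", columns=("source_id", "payload"))
        conn.commit()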

Thanks!
François


