On Sat, Nov 22, 2008 at 4:54 PM, Ciprian Dorin Craciun
<[EMAIL PROTECTED]> wrote:
> On Sat, Nov 22, 2008 at 11:51 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>> On Sat, Nov 22, 2008 at 2:37 PM, Ciprian Dorin Craciun
>> <[EMAIL PROTECTED]> wrote:
>>>
>>>    Hello all!
>> SNIP
>>>    So I would conclude that relational stores will not make it for
>>> this use case...
>>
>> I was wondering whether you guys are having to do all individual
>> inserts or if you can batch some number together into a transaction.
>> Being able to put more than one into a single transaction is a huge
>> win for pgsql.
>
>    I'm aware of the performance difference between one insert at a
> time and x batched inserts in a single operation / transaction. That
> is why, in the case of Postgres, I am using COPY <table> FROM STDIN
> with 5k batches... (I've even tried 10k, 15k, 25k, 50k, 500k, and 1m
> inserts / batch, with no improvement...)
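
For anyone following along, the batched COPY approach described above
looks roughly like this in Python with psycopg2. This is a minimal
sketch, not Ciprian's actual code: the connection string, the
"samples" table and its columns, and the incoming_stream() generator
are all hypothetical placeholders.

    import io
    import psycopg2  # assumes psycopg2 is installed

    BATCH_SIZE = 5000  # the batch size mentioned above

    def incoming_stream():
        # Stand-in for the real data source; yields (ts, sensor, value).
        for i in range(20000):
            yield (i, i % 16, i * 0.5)

    def flush(cur, conn, rows):
        # Push one batch through a single COPY ... FROM STDIN round trip.
        # COPY's default text format is tab-separated, newline-terminated.
        buf = io.StringIO()
        for r in rows:
            buf.write("\t".join(str(c) for c in r) + "\n")
        buf.seek(0)
        cur.copy_expert("COPY samples (ts, sensor, value) FROM STDIN", buf)
        conn.commit()

    conn = psycopg2.connect("dbname=test")  # placeholder DSN
    cur = conn.cursor()

    batch = []
    for row in incoming_stream():
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            flush(cur, conn, batch)
            batch = []
    if batch:
        flush(cur, conn, batch)  # flush the final partial batch

The point is that each COPY amortizes parsing and network overhead
across the whole batch, which is why it beats row-at-a-time INSERTs;
past a few thousand rows per batch the gain flattens out, consistent
with what Ciprian reports.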

I've had exactly the same experience with Postgres during an attempt
to use it as a store for large-scale incoming streams of data at a
rate very comparable to what you're looking at (~100k/sec). We
eventually ended up rolling our own solution.

-- 
- David T. Wilson
[EMAIL PROTECTED]
