2010/8/19 Samuel Gendler:
> On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga wrote:
>> Samuel Gendler wrote:
>>>
>>> When running pgbench on a db which fits easily into RAM (10% of RAM =
>>> -s 380), I see transaction counts a little less than 5K. When I go to
>>> 90% of RAM (-s 3420), transaction rate dropped to around 1000 (at a
>>> fairly wide range of concurrencies).
I am. I was giving mean numbers
On Wed, Aug 18, 2010 at 12:56 PM, Craig James wrote:
> On 8/18/10 12:24 PM, Samuel Gendler wrote:
>>
>> With barriers off, I saw a transaction rate of about 1200. With
>> barriers on, it was closer to 1050. The test had a concurrency of 40
>> in both cases.
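In case anyone wants to reproduce the barriers-on/off comparison: on ext3 the toggle is a mount option. A sketch (the device and mount point below are made up; disabling barriers is only sane behind a battery-backed write-back cache like the PERC's):

```shell
# /etc/fstab line for the data volume with write barriers disabled
# (hypothetical device and mount point)
/dev/sdb1  /var/lib/pgsql  ext3  noatime,barrier=0  0 2

# or flip it on a live system without editing fstab:
mount -o remount,barrier=0 /var/lib/pgsql   # barriers off
mount -o remount,barrier=1 /var/lib/pgsql   # barriers back on
```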
Samuel Gendler wrote:
> When running pgbench on a db which fits easily into RAM (10% of RAM =
> -s 380), I see transaction counts a little less than 5K. When I go to
> 90% of RAM (-s 3420), transaction rate dropped to around 1000 (at a
> fairly wide range of concurrencies). At that point, I decided to
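For reference, the runs being described map to pgbench invocations roughly like the following. The -s and -c values come from the thread; the database name, duration, and thread count are illustrative, and -j needs pgbench 9.0 or later:

```shell
# initialize at the 90%-of-RAM scale discussed above (-s 3420 on a 48GB box)
pgbench -i -s 3420 pgbench

# run with 40 concurrent clients for a fixed duration (600 s here)
pgbench -c 40 -j 4 -T 600 pgbench
```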
On 8/18/10 12:24 PM, Samuel Gendler wrote:
> With barriers off, I saw a transaction rate of about 1200. With
> barriers on, it was closer to 1050. The test had a concurrency of 40
> in both cases.

I discovered there is roughly 10-20% "noise" in pgbench results after running
the exact same test over
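One way to put a number on that noise is to repeat the identical run several times and look at the spread. A minimal sketch, with made-up tps figures standing in for real runs:

```python
import statistics

# tps results from repeated, identical pgbench runs (illustrative values)
tps_runs = [1180, 1050, 1210, 990, 1130]

mean = statistics.mean(tps_runs)
stdev = statistics.stdev(tps_runs)  # sample standard deviation
# peak-to-peak spread as a fraction of the mean
spread = (max(tps_runs) - min(tps_runs)) / mean

print(f"mean={mean:.0f} tps, stdev={stdev:.0f}, spread={spread:.1%}")
# prints: mean=1112 tps, stdev=91, spread=19.8%
```

With spreads like that, single runs can't distinguish a 1200-vs-1050 result from noise; comparing means over several runs can.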
I'm just starting the process of trying to tune new hardware, which is
2x quad core xeon, 48GB RAM, 8x300GB SAS 15K drives in RAID 1+0,
2x72GB 15K SAS drives in RAID 1 for WAL and system. It is a PERC 6/i
card with BBU. Write-back cache is enabled. The system volume is
ext3. The large data partition
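For a box in that class (48GB RAM, BBU write-back cache), starting points along these lines were commonly suggested for 8.4/9.0-era servers. Every number here is an assumption to benchmark against, not a recommendation:

```
# postgresql.conf sketch (illustrative starting points, not tuned values)
shared_buffers = 8GB                # a modest fraction of the 48GB of RAM
effective_cache_size = 36GB         # rough OS cache + shared_buffers estimate
checkpoint_segments = 64            # spread checkpoint I/O out
checkpoint_completion_target = 0.9
wal_buffers = 16MB
```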