If you are inserting a lot of data into the same table, table extension locks
are a problem: the table is extended in only 8kB increments, so if you have a
lot of clients hitting/expanding the same table you are going to have a lot of
overhead.
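One common way to cut down on that churn (my own sketch, not from the original post) is to batch rows into a single COPY per client instead of many small INSERTs, so the table grows in fewer, larger steps. The database, table, and file names below are hypothetical:

```shell
# Illustrative: load a whole batch in one COPY rather than per-row INSERTs.
# "results" and /tmp/results.csv are made-up names.
psql -d mydb -c "\copy results FROM '/tmp/results.csv' WITH CSV"
```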
Given a baseline postgresql.conf config and a couple of DL580s (40 cores/256GB
memory each), I noticed a large overhead for pgbouncer; has anyone seen this before?
$ pgbench -h `hostname -i` -j 32 -p 4320 -U asgprod -s 500 -c 32 -S -T 60
pgbench_500
Scale option ignored, using pgbench_branches table
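For reference (my note, not from the original post): with `-S` runs, pgbench ignores `-s` and derives the scale from the row count of pgbench_branches; `-s` is only honored at initialization time. A hypothetical init at scale 500:

```shell
# Initialize the test database once at scale factor 500; -s only matters with -i.
pgbench -i -s 500 -h `hostname -i` -p 4320 -U asgprod pgbench_500
```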
Check your pagecache settings; when doing heavy IO writes of a large file you
can basically force a Linux box to completely stall. At some point, once the
pagecache has reached its limit, it will force all IO to go synchronous, at
least from my understanding. We are still fighting with this but lots
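A sketch of the knobs involved, assuming a reasonably recent Linux kernel: lowering the dirty-page thresholds makes the kernel start writeback earlier, instead of stalling everything once the hard limit is hit. Values here are illustrative, not recommendations:

```shell
# Illustrative /etc/sysctl.conf fragment; tune for your RAM and IO subsystem.
vm.dirty_background_ratio = 5   # start background writeback at 5% of RAM dirty
vm.dirty_ratio = 10             # block writers / force writeback at 10% dirty
```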
Are your stats updated on the table after you added the index?
- run the bad query with EXPLAIN VERBOSE on (you should send this anyway)
- check what the difference is between expected rows vs. actual rows
- make sure that your work_mem is high enough if you are sorting; if not,
you'll see it
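A hedged sketch of the first two steps (database, table, and column names are hypothetical); large gaps between estimated and actual row counts usually point at stale statistics:

```shell
# EXPLAIN ANALYZE shows estimated vs. actual rows side by side; VERBOSE adds
# the output columns (this syntax works on 8.4).
psql -d mydb -c "EXPLAIN ANALYZE VERBOSE SELECT tag, sum(v) FROM results GROUP BY tag;"
```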
The naked query runs a very long time and gets stuck in slow semop() syscalls
in my strace output; the explain plans probably have more to do with our
statistics problems, which we are clubbing through slowly but surely. The main
concern was why the semop() calls are consistently slow; a restart of
I have a question on how the analyzer works in this type of scenario.
We calculate some results and COPY INTO some partitioned tables, then use
some SELECTs to aggregate the data back out into reports. Every once in a
while the aggregation step picks a bad plan due to stats on the
Am I reading this right in that the sort is taking almost 8 seconds?
GroupAggregate (cost=95808.09..95808.14 rows=1 width=142) (actual
time=14186.999..14694.524 rows=315635 loops=1)
Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7,
s.d8, s.d9, s.d10, sum(sq.v)
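Not necessarily: the GroupAggregate's startup time includes everything beneath it, i.e. the sort feeding it. Roughly, the aggregate node itself accounts for total minus startup time; a quick check on the figures quoted above:

```shell
# Exclusive-ish time of the GroupAggregate node: total actual time minus
# startup time (startup includes the underlying Sort finishing).
awk 'BEGIN { printf "%.3f ms\n", 14694.524 - 14186.999 }'
```

So the aggregate step itself is around half a second; the bulk of the ~14.2s startup is spent below it.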
When purchasing the Intel 7500 series, please make sure to check the hemisphere
mode of your memory configuration. There is a HUGE difference in memory
performance (around 50% of speed) if you don't populate all the memory slots on
the controllers properly.
Just some information on our setup:
- HP DL585 G6
- 4 x AMD Opteron 8435 (24 cores)
- 256GB RAM
- 2 FusionIO 640GB PCI-SSD (RAID0)
- dual 10Gb Ethernet.
- we have several tables that we store calculated values in.
- these are inserted by a compute farm that calculates the results and stores
This can be resolved by partitioning the disk with a larger write spare area so
that the cells don't have to be recycled so often. There is a lot of
misinformation about SSDs; there are some great articles on AnandTech that
really explain how the technology works and some of the differences
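A sketch of the over-provisioning idea, assuming the drive shows up as /dev/sdb: leave a slice of the device unpartitioned so the controller always has spare cells to recycle into. Device name and percentages are hypothetical:

```shell
# Illustrative: partition only ~80% of the device, leaving ~20% as spare area.
parted /dev/sdb mkpart primary 0% 80%
```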
If it's an HP box you can also turn this off via the BIOS, in your RBSU:
"Starting with HP ProLiant G6 servers that utilize Intel® Xeon® processors,
setting the HP Power Profile Option in RBSU to Maximum Performance Mode sets
these recommended additional low-latency options for minimum BIOS
Scott, are you really moving that much data through memory? 70-80GB/sec is the
limit of the new Intel 7500 series in a best-case scenario.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Scott Marlowe
During heavy write periods we get the "checkpoints are occurring too
frequently" warning. What's the real knock-on effect of checkpointing too
often? The documents don't really say what is wrong with checkpointing too
often; does it cause blocking, IO contention, etc.? From my understanding it's just IO
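Mostly it is IO: each checkpoint forces all dirty buffers to disk, and frequent checkpoints also mean more full-page writes into WAL. A hedged postgresql.conf sketch for spacing checkpoints out on an 8.x-era server (values illustrative, not recommendations):

```shell
# postgresql.conf fragment (illustrative values)
checkpoint_segments = 64            # allow more WAL between checkpoints (default 3)
checkpoint_completion_target = 0.9  # spread checkpoint writes over the interval
checkpoint_warning = 30s            # keep warning if checkpoints still come too fast
```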
You have also run analyze verbose, and checked to make sure you don't have a
ton of bloated indexes?
- check the process with strace -p PID
- check the diskIO with iostat, not vmstat
- run ANALYZE VERBOSE, and possibly REINDEX the database, or CLUSTER the larger
tables.
- dump from
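The first two steps above can be sketched as the following commands (PID stands in for the actual postgres backend's pid):

```shell
# Attach to the backend and summarize time spent per syscall:
strace -c -p PID
# Per-device IO utilization and service times, 1-second samples:
iostat -x 1
```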
Are you going to RAID the SSD drives at all? You would likely be better off
with a couple of things:
- increasing RAM to 256GB on each server to cache most of the databases (easy,
and cheaper than SSD)
- move to FusionIO
- move to SLC-based SSD; warning: not many RAID controllers will get the
This has gotten a lot better with the 2.x drivers as well.
I'm completely aware of the FusionIO and its advantages/disadvantages. I'm
currently getting the following pgbench results, but still only hitting the
array for about 800MB/sec, short of the 3GB/sec that it's capable of. This is
Test,
Sorry, trying to fix why my email is getting formatted to bits when posting to
the list.
- John
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Strange, John W
Sent: Tuesday, January 04, 2011 1:01 PM
Has anyone had a chance to recompile and try a larger block size than
8192 with PostgreSQL 8.4.x? I'm finally getting around to tuning some FusionIO
drives that we are setting up. We are looking to set up 4 FusionIO drives per
server, and then use pgpool to scale them to 3 servers so that we
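For reference, 8.4's configure script grew a switch for this, so it no longer requires editing pg_config.h by hand (a sketch; the install prefix is hypothetical):

```shell
# Build PostgreSQL 8.4 with a 32kB block size (must be a power of 2, up to 32):
./configure --with-blocksize=32 --prefix=/opt/pgsql-32k
make && make install
# Note: a cluster initdb'd with one block size cannot be opened by binaries
# built with another.
```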