On Mon, Mar 16, 2009 at 10:17 PM, Greg Smith wrote:
> On Tue, 17 Mar 2009, Gregory Stark wrote:
>
>> I think pgbench is just not that great a model for real-world usage
>
> pgbench's default workload isn't a good model for anything. It wasn't a
> particularly real-world test when the TPC-B it's b
On Tue, 17 Mar 2009, Gregory Stark wrote:
Hm, well the tests I ran for posix_fadvise were actually on a Perc5 -- though
who knows if it was the same under the hood -- and I saw better performance
than this. I saw about 4MB/s for a single drive and up to about 35MB/s for 15
drives. However this w
On Tue, 17 Mar 2009, Gregory Stark wrote:
I think pgbench is just not that great a model for real-world usage
pgbench's default workload isn't a good model for anything. It wasn't a
particularly real-world test when the TPC-B it's based on was created, and
that was way back in 1990. And pg
On Mon, 16 Mar 2009, Joe Uhl wrote:
Now when I run vmstat 1 30 it looks very different (below).
That looks much better. Obviously you'd like some more headroom on the
CPU situation than you're seeing, but that's way better than having so
much time spent waiting for I/O.
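Greg's point about I/O wait can be checked directly. A minimal sketch (assumes a Linux host with procps `vmstat` installed; the guard makes it a no-op elsewhere):

```shell
# Sample system activity once a second, five times.  The columns that
# matter for this thread: "b" (processes blocked on I/O), "wa" (CPU time
# stalled waiting on I/O), and "id" (idle headroom).
if command -v vmstat >/dev/null 2>&1; then
    vmstat 1 5
fi
```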
max_connections
Greg Smith writes:
> On Mon, 16 Mar 2009, Joe Uhl wrote:
>
>> Here is vmstat 1 30. We are under peak load right now so I can gather
>> information from the real deal
>
> Quite helpful, reformatting a bit and picking an informative section:
>
> procs ---memory-----swap- io
Greg Smith writes:
> On Mon, 16 Mar 2009, Gregory Stark wrote:
>
>> Why would checkpoints force out any data? It would dirty those pages and then
>> sync the files marking them clean, but they should still live on in the
>> filesystem cache.
>
> The bulk of the buffer churn in pgbench is from the
On Mon, Mar 16, 2009 at 2:50 PM, Joe Uhl wrote:
> I dropped the pool sizes and brought things back up. Things are stable,
> site is fast, CPU utilization is still high. Probably just a matter of time
> before issue comes back (we get slammed as kids get out of school in the
> US).
Yeah, I'm gue
I dropped the pool sizes and brought things back up. Things are
stable, site is fast, CPU utilization is still high. Probably just a
matter of time before issue comes back (we get slammed as kids get out
of school in the US).
Now when I run vmstat 1 30 it looks very different (below). W
On Mon, 16 Mar 2009, Joe Uhl wrote:
Here is vmstat 1 30. We are under peak load right now so I can gather
information from the real deal
Quite helpful, reformatting a bit and picking an informative section:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b
Here is vmstat 1 30. We are under peak load right now so I can gather
information from the real deal :)
Had an almost complete lockup a moment ago, number of non-idle
postgres connections was 637. Going to drop our JDBC pool sizes a bit
and bounce everything.
procs ---memory
On Monday 16 March 2009, Joe Uhl wrote:
> Right now (not under peak load) this server is running at 68% CPU
> utilization and its SATA raid 10 is doing about 2MB/s writes and
> 11MB/s reads. When I run dd I can hit 200+MB/s writes and 230+MB/s reads,
> so we are barely using the available IO.
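Joe's dd figures can be reproduced with a rough sequential test along these lines (a sketch; the scratch path and sizes are arbitrary, not from the thread):

```shell
# conv=fdatasync makes dd flush to disk before reporting its rate, so the
# write figure is not just page-cache speed.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
# For an honest read figure, drop the page cache first (needs root):
#   echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/ddtest of=/dev/null bs=1M
rm -f /tmp/ddtest
```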
On 03/16/09 11:08, Gregory Stark wrote:
"Jignesh K. Shah" writes:
Generally when there is dead constant.. signs of classic bottleneck ;-) We
will be fixing one to get to another.. but knocking bottlenecks is the name of
the game I think
Indeed. I think the bottleneck we're interes
On Mon, Mar 16, 2009 at 1:00 AM, Nagalingam, Karthikeyan
wrote:
> Hi,
> we are in the process of finding the best solution for Postgresql
> deployment with storage controller. I have some query, Please give some
> suggestion for the below
>
> 1) Can we get customer deployment scenarios for pos
Our production database is seeing very heavy CPU utilization - anyone
have any ideas/input considering the following?
CPU utilization gradually increases during the day until it approaches
90%-100% at our peak time. When this happens our transactions/sec
drops and our site becomes very slo
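One quick way to see whether the load comes from a pile-up of active backends is to count non-idle connections. A sketch for the 8.2/8.3-era servers in this thread (assumes local psql access; the guard skips everything when no server is reachable; on 9.2+ the column is `state` and the test becomes `state <> 'idle'`):

```shell
if command -v psql >/dev/null 2>&1 && psql -At -c "SELECT 1;" >/dev/null 2>&1; then
    # On 8.x, idle backends show the literal string '<IDLE>' in current_query.
    psql -At -c "SELECT count(*) FROM pg_stat_activity
                 WHERE current_query <> '<IDLE>';"
fi
```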
Note, some have mentioned that my client breaks inline formatting. My only
comment is after Kevin's signature below:
On 3/16/09 9:53 AM, "Kevin Grittner" wrote:
I wrote:
> One more reason this point is an interesting one is that it is one
> that gets *worse* with the suggested patch, if only b
I wrote:
> One more reason this point is an interesting one is that it is one
> that gets *worse* with the suggested patch, if only by half a
> percent.
>
> Without:
>
> 600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005
>
> with:
>
> 600: 80: Medium Throughput: 82241.000 Avg Mediu
On Mon, 16 Mar 2009, Gregory Stark wrote:
Why would checkpoints force out any data? It would dirty those pages and then
sync the files marking them clean, but they should still live on in the
filesystem cache.
The bulk of the buffer churn in pgbench is from the statement that updates
a row in
On Sat, 14 Mar 2009, Heikki Linnakangas wrote:
I think the elephant in the room is that we have a single lock that needs to
be acquired every time a transaction commits, and every time a backend takes
a snapshot.
I like this line of thinking.
There are two valid sides to this. One is the elep
On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:
> First of all, I did pose this question first on the pgsql-admin
> mailing list.
> The issue is that during a restore on a remote site, (Postgres 8.2.5)
>
> archived logs are taking an average of 35-40 seconds apiece to
> restore.
Ar
First of all, I did pose this question first on the pgsql-admin mailing
list.
And I know it is not appreciated to post across multiple mailing lists, so I
apologize in advance. I do not make it a practice to do so, but this being
a performance issue I think I should have inquired on this list
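On an 8.2 warm standby like this, every archived segment is fetched by one invocation of restore_command, so any per-file overhead (ssh startup, decompression, a slow archive mount) multiplies across segments. A minimal recovery.conf sketch, with placeholder paths that are illustrative rather than from the thread:

```
# recovery.conf on the standby; %f is the segment file name, %p the
# path where the server wants it restored
restore_command = 'cp /mnt/archive/%f "%p"'
```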
wrote:
> On Fri, 13 Mar 2009, Kevin Grittner wrote:
>> If all data access is in RAM, why can't 80 processes
>> keep 64 threads (on 8 processors) busy? Does anybody else think
>> that's an interesting question, or am I off in left field here?
>
> I don't think that anyone is arguing that it's no
Greg Smith writes:
> On Mon, 16 Mar 2009, m...@bortal.de wrote:
>
>> Any idea why my performance collapses at 2GB Database size?
I don't understand how you get that graph from the data above. The data above
seems to show your test databases at 1.4GB and 2.9GB. There are no 1GB and 2GB
data points
"Jignesh K. Shah" writes:
> Generally when there is dead constant.. signs of classic bottleneck ;-) We
> will be fixing one to get to another.. but knocking bottlenecks is the name of
> the game I think
Indeed. I think the bottleneck we're interested in addressing here is why you
say you weren'
On Mon, 16 Mar 2009, m...@bortal.de wrote:
Any idea why my performance collapses at 2GB Database size?
pgbench results follow a general curve I outlined at
http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm and
the spot where performance drops hard depends on how big of a w
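The curve on that page can be traced by rerunning pgbench at growing scale factors; a sketch (database name, scale list, and client count are arbitrary choices; the guard makes it a no-op without pgbench and a reachable server):

```shell
if command -v pgbench >/dev/null 2>&1 && createdb bench 2>/dev/null; then
    for s in 10 100 1000; do          # roughly 15MB of data per unit of scale
        pgbench -i -s "$s" bench      # (re)initialize tables at this scale
        pgbench -c 8 -t 10000 bench   # 8 clients, 10k transactions each
    done
    dropdb bench
fi
```

Throughput typically holds steady while the working set fits in shared_buffers, sags once it only fits in OS cache, and drops hard once it exceeds RAM.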
Hello List,
I would like to pimp my Postgres setup. To make sure I don't have slow
hardware, I tested it on three different environments:
1.) Native Debian Linux (Dom0) and 4GB of RAM
2.) Debian Linux in Xen (DomU) 4GB of RAM
3.) Blade with SSD Disk 8GB of RAM
Here are my results: http://i39.ti
Hi,
we are in the process of finding the best solution for Postgresql
deployment with storage controller. I have some query, Please give some
suggestion for the below
1) Can we get customer deployment scenarios for postgresql with storage
controller. Any flow diagram, operation diagram and im
2009/3/14 decibel
> On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:
>
>> f...@redhat.com (Frank Ch. Eigler) writes:
>>
>>> For a prepared statement, could the planner produce *several* plans,
>>> if it guesses great sensitivity to the parameter values? Then it
>>> could choose amongst them at run