On Tue, May 12, 2009 at 8:59 PM, Scott Carey wrote:
> The $ cost of more CPU power on larger machines ends up such a small %
> chunk, especially after I/O cost. Sure, the CPU with HyperThreading and the
> turbo might be 40% more expensive than the other CPU, but if the total
system cost is 5% more for 15% more performance . . .
I'm sorry, but I'm confused. Everyone keeps talking about connection
pooling, but Dimitri has said repeatedly that each client makes a
single connection and then keeps it open until the end of the test,
not that it makes a single connection per SQL query. Connection
startup costs shouldn't be an issue.
On Tue, May 12, 2009 at 12:49 PM, Tom Lane wrote:
> 1. There is no (portable) way to pass the connection from the postmaster
> to another pre-existing process.
[Googles.] It's not obvious to me that SCM_RIGHTS is non-portable,
and Windows has an API call WSADuplicateSocket() specifically for this purpose.
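For readers unfamiliar with the mechanism: SCM_RIGHTS is the POSIX ancillary-data facility for handing an open descriptor to another process over a UNIX-domain socket; the receiver gets it back symmetrically with recvmsg(). A minimal sketch of the sending side, assuming the two processes already share such a socket (names are illustrative, not from any proposed patch):

/* Minimal sketch of descriptor passing with SCM_RIGHTS: send one open
 * fd across an already-established UNIX-domain socket.  Names are
 * illustrative, not from any proposed patch. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int
send_fd(int channel, int fd_to_pass)
{
    struct msghdr   msg;
    struct iovec    iov;
    char            dummy = 'F';    /* must carry at least one data byte */
    char            cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsgbuf;
    msg.msg_controllen = sizeof(cmsgbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;   /* "pass these descriptors" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(channel, &msg, 0) < 0 ? -1 : 0;
}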
The $ cost of more CPU power on larger machines ends up such a small %
chunk, especially after I/O cost. Sure, the CPU with HyperThreading and the
turbo might be 40% more expensive than the other CPU, but if the total
system cost is 5% more for 15% more performance . . .
It depends on how CPU lim
On Tue, May 12, 2009 at 11:22 AM, Dimitri wrote:
> Robert, what I'm testing now is 256 users max. The workload is growing
> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max
> throughput is reached on the number of users equal to 2 * number of
> cores, but what's important for me here - database should continue to
> keep the workload!
On Tue, May 12, 2009 at 8:05 PM, Greg Smith wrote:
> Anand did SQL Server and Oracle test results, the Nehalem system looks like
> a substantial improvement over the Shanghai Opteron 2384:
>
> http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=6
> http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=7
* Aidan Van Dyk (ai...@highrise.ca) wrote:
> But, what really does preforking give us? A 2 or 3% improvement? The
> forking isn't the expensive part, the per-database setup that happens is
> the expensive part...
Obviously that begs the question - why not support pre-fork with specific
databases?
Anand did SQL Server and Oracle test results, the Nehalem system looks
like a substantial improvement over the Shanghai Opteron 2384:
http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=6
http://it.anandtech.com/IT/showdoc.aspx?i=3536&p=7
--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.c
On Tue, 12 May 2009, Thomas Finneid wrote:
> on a database with 260 GB of data and an index size of 109 GB on separate
> raid disks. There are
> 85 city_ids, 2000 street_ids per city,
> 20 house_ids per street per city,
> 5 floor_ids per house per street per city
You shou
* Joshua D. Drake [090512 19:27]:
> Apache solved this problem back when it was still called NCSA HTTPD. Why
> aren't we preforking again?
Of course, preforking and connection pooling are totally different
beasts...
But, what really does preforking give us? A 2 or 3% improvement? The
forking isn't the expensive part, the per-database setup that happens is
the expensive part...
On Tue, 2009-05-12 at 20:34 -0400, Aidan Van Dyk wrote:
> * Joshua D. Drake [090512 19:27]:
>
> > Apache solved this problem back when it was still called NCSA HTTPD. Why
> > aren't we preforking again?
>
> Of course, preforking and connection pooling are totally different
> beasts...
>
Yes and
On Tue, May 12, 2009 at 5:53 PM, Angel Alvarez wrote:
> we suffer from a 'more optimal' superlative misuse
>
> there is no 'more optimal' thing, just a simple 'better' thing.
>
> I'm not a native English speaker but I think it still applies.
>
> Well, this is a superlative list so all of you deserve a be
Dear List mates,
"more optimal plan..."
"more optimal configuration..."
we suffer from a 'more optimal' superlative misuse
there is no 'more optimal' thing, just a simple 'better' thing.
I'm not a native English speaker but I think it still applies.
Well, this is a superlative list so all of you deserve
On Tue, May 12, 2009 at 5:49 PM, Tom Lane wrote:
> See previous discussions. IIRC, there are two killer points:
>
> 1. There is no (portable) way to pass the connection from the postmaster
> to another pre-existing process.
The Apache model is to have all the backends call accept. So incoming
connections
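Schematically, the model being described is this (a toy sketch in plain C, not Apache or PostgreSQL source; the port and worker count are made up):

/* Toy sketch of the prefork model: children inherit the listening
 * socket and all block in accept(); the kernel gives each incoming
 * connection to exactly one worker. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#define NUM_WORKERS 8

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int     listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5433);        /* made-up port */

    if (bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
        listen(listen_fd, 128) < 0)
        return 1;

    for (int i = 0; i < NUM_WORKERS; i++)
    {
        if (fork() == 0)                /* child: a preforked worker */
        {
            for (;;)
            {
                int client = accept(listen_fd, NULL, NULL);

                /* ... authenticate, serve, close ... */
                close(client);
            }
        }
    }
    for (;;)
        pause();                        /* parent only supervises */
}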
Dimitri Fontaine wrote:
> A much better idea to solve this, in my opinion, would be to have
> pgbouncer as a postmaster child, integrated into PostgreSQL. It allows
> for choosing whether you want session pooling, transaction pooling or
> statement pooling, which is a more deterministic way
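For readers who haven't used pgbouncer, the three modes work like this: in session pooling a server connection is assigned to a client for its whole session; in transaction pooling it is returned to the pool at each transaction end; in statement pooling it is returned after every single statement, which rules out multi-statement transactions.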
Hi
I have the following table (theoretical):
table apartment_location (
    city_id   int,
    street_id int,
    house_id  int,
    floor_id  int,
    owner     string
    ...
)
index .. ( city_id, street_id, house_id, floor_id ) tablespace indexspace;
on a database with 260 GB of data and an index size of 109 GB on separate
raid disks. There are
On Tue, May 12, 2009 at 4:24 PM, Simon Riggs wrote:
>
> On Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:
>> On Tue, May 12, 2009 at 12:49 PM, Tom Lane wrote:
>> > 1. There is no (portable) way to pass the connection from the postmaster
>> > to another pre-existing process.
>>
>> [Googles.]
Anyone on the list had a chance to benchmark the Nehalems yet? I'm
primarily wondering if their promise of performance from 3 memory
channels holds up under typical pgsql workloads. I've been really
happy with the behavior of my AMD Shanghai based server under heavy
loads, but if the Nehalems mu
Cory Coager wrote:
> I'm running version 8.1.11 on SLES 10 SP2. I'm trying to improve this
> query and unfortunately I cannot change the application. For some
> reason the planner is making a bad decision sometimes after an analyze
> of table objectcustomfieldvalues.
>
> The query is:
> SELECT D
On Tue, May 12, 2009 at 1:00 PM, Dimitri wrote:
> On MySQL there is no changes if I set the number of sessions in the
> config file to 400 or to 2000 - for 2000 it'll just allocate more
> memory.
I don't care whether the setting affects the speed of MySQL. I want
to know if it affects the speed of PostgreSQL.
On Tue, 2009-05-12 at 21:24 +0100, Simon Riggs wrote:
> On Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:
> > On Tue, May 12, 2009 at 12:49 PM, Tom Lane wrote:
> > > 1. There is no (portable) way to pass the connection from the postmaster
> > > to another pre-existing process.
> >
> > [Google
Although nobody wants to support it, he should try the patch that Jignesh K.
Shah (from Sun) proposed that makes ProcArrayLock lighter-weight. If it
makes 32 cores much faster, then we have a smoking gun.
Although everyone here is talking about this as an 'unoptimal' solution, the
fact is there is no evidence that a connection pooler will fix the
scalability from 16 > 32 cores.
Hi,
On 12 May 2009, at 18:32, Robert Haas wrote:
implement this same logic internally? IOW, when a client disconnects,
instead of having the backend exit immediately, have it perform the
equivalent of DISCARD ALL and then stick around for a minute or two
and, if a new connection request arrives
On Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:
> On Tue, May 12, 2009 at 12:49 PM, Tom Lane wrote:
> > 1. There is no (portable) way to pass the connection from the postmaster
> > to another pre-existing process.
>
> [Googles.] It's not obvious to me that SCM_RIGHTS is non-portable,
> and Windows has an API call WSADuplicateSocket() specifically for this purpose.
On Tue, 2009-05-12 at 11:30 -0700, Scott Carey wrote:
> the fact is there is no evidence that a connection pooler will fix the
> scalability from 16 > 32 cores.
There has been much analysis over a number of years of the effects of
the ProcArrayLock, specifically the O(N^2) effect of increasing numbers
of connections: each snapshot must scan the whole proc array, which is
O(N), and with N backends each taking snapshots the aggregate cost
grows as O(N^2).
Andres Freund wrote:
> Naturally it would still be nice to be good in this not optimal workload...
I find it hard to justify wasting our scarce development resources on
optimizing such a contrived workload.
--
Alvaro Herrera        http://www.CommandPrompt.com/
The
Robert Haas writes:
> AIUI, whenever the connection pooler switches to serving a new client,
> it tells the PG backend to DISCARD ALL. But why couldn't we just
> implement this same logic internally? IOW, when a client disconnects,
> instead of having the backend exit immediately, have it perform the
> equivalent of DISCARD ALL and then stick around for a minute or two
> and, if a new connection request arrives
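In libpq terms, the pooler behaviour Robert describes amounts to roughly this loop (a sketch under the assumption of one idle backend connection being reused; pgbouncer itself speaks the wire protocol directly rather than using libpq like this, and the conninfo is a placeholder):

/* Sketch of session reuse with DISCARD ALL: one long-lived server
 * connection serves a series of logical clients.  Error handling
 * trimmed for brevity. */
#include <libpq-fe.h>

static void serve_one_client(PGconn *conn)
{
    (void) conn;    /* ... run whatever SQL this logical client sends ... */
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* made-up conninfo */

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    for (int i = 0; i < 1000; i++)               /* 1000 logical clients */
    {
        serve_one_client(conn);

        /* Wipe session state (GUCs, temp tables, prepared statements)
         * before handing the same backend to the next client. */
        PQclear(PQexec(conn, "DISCARD ALL"));
    }
    PQfinish(conn);
    return 0;
}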
No, they keep connections till the end of the test.
Rgds,
-Dimitri
On 5/12/09, Joshua D. Drake wrote:
> On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:
>> Robert, what I'm testing now is 256 users max. The workload is growing
>> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max
On Tue, 2009-05-12 at 17:22 +0200, Dimitri wrote:
> Robert, what I'm testing now is 256 users max. The workload is growing
> progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max
> throughput is reached on the number of users equal to 2 * number of
> cores, but what's important for me here - database should continue to
> keep the workload!
On 5/12/09, Stefan Kaltenbrunner wrote:
> Dimitri wrote:
>> Hi Stefan,
>>
>> sorry, I did not have time to bring all the details into the toolkit -
>> but at least I published it instead of telling a "nice story" :-)
>
> fair point and appreciated. But it seems important that benchmarking
> results can be verified by others as well...
Dimitri wrote:
> Of course the Max throughput is reached on the number of users equal
> to 2 * number of cores
I'd expect that when disk I/O is not a significant limiting factor,
but I've seen a "sweet spot" of (2 * cores) + (effective spindle
count) for loads involving a lot of random I/O.
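To make that rule of thumb concrete with invented numbers: a 32-core server with an effective spindle count of 12 would peak somewhere near (2 * 32) + 12 = 76 active connections.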
Good point! I missed it.. - will 20MB be enough?
Rgds,
-Dimitri
On 5/12/09, Julian v. Bock wrote:
> Hi
>
>> "D" == Dimitri writes:
>
> D> current postgresql.conf:
>
> D> #
> D> max_connections = 2000 # (change requires restart)
> D> temp_buffers = 200MB
>
>
Robert, what I'm testing now is 256 users max. The workload is growing
progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max
throughput is reached on the number of users equal to 2 * number of
cores, but what's important for me here - database should continue to
keep the workload! - res
Matthew Wakeling writes:
> On Tue, 12 May 2009, Simon Riggs wrote:
>> No, we spawn then authenticate.
> But you still have a single thread doing the accept() and spawn. At some
> point (maybe not now, but in the future) this could become a bottleneck
> given very short-lived connections.
More
Matthew Wakeling wrote:
> On Tue, 12 May 2009, Simon Riggs wrote:
>>> won't connect operations be all handled by a
>>> single thread - the parent postmaster?
>> No, we spawn then authenticate.
> But you still have a single thread doing the accept() and spawn. At some
> point (maybe not now, but in the future) this could become a bottleneck
> given very short-lived connections.
On Tue, 12 May 2009, Simon Riggs wrote:
>> won't connect operations be all handled by a
>> single thread - the parent postmaster?
> No, we spawn then authenticate.
But you still have a single thread doing the accept() and spawn. At some
point (maybe not now, but in the future) this could become a bottleneck
given very short-lived connections.
On Tue, 2009-05-12 at 16:00 +0100, Matthew Wakeling wrote:
> won't connect operations be all handled by a
> single thread - the parent postmaster?
No, we spawn then authenticate.
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
On Tue, 12 May 2009, Stefan Kaltenbrunner wrote:
> But what I get from your answer is that you are basically doing one
> connect/disconnect per client and the testcase you are talking about has
> 256 clients?
Correct me if I'm wrong, but won't connect operations be all handled by a
single thread - the parent postmaster?
On Tue, May 12, 2009 at 8:59 AM, Dimitri wrote:
> Wait wait, currently I'm playing the "stress scenario", so there are
> only 256 sessions max, but think time is zero (full stress). Scenario
> with 1600 users is to test how solid the database is just keeping a huge
> number of users, but doing only one transaction per second
Dimitri wrote:
> Hi Stefan,
> sorry, I did not have time to bring all the details into the toolkit -
> but at least I published it instead of telling a "nice story" :-)
fair point and appreciated. But it seems important that benchmarking
results can be verified by others as well...
The client
Hi
> "D" == Dimitri writes:
D> current postgresql.conf:
D> #
D> max_connections = 2000 # (change requires restart)
D> temp_buffers = 200MB
temp_buffers are kept per connection and not freed until the session
ends. If you use some kind of connection pooling
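The arithmetic behind that warning is worth spelling out: with max_connections = 2000 and temp_buffers = 200MB, the worst case - every session actually filling its temp buffers - is 2000 x 200MB, roughly 400GB, held until those sessions end.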
Hi Stefan,
sorry, I did not have time to bring all the details into the toolkit -
but at least I published it instead of telling a "nice story" :-)
The client process is a binary compiled with libpq. The client
interprets a scenario script and publishes via SHM the time spent on
each SQL request. I
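From that description, the client's main loop is presumably something of this shape (a sketch only, not db_STRESS source; the conninfo, query, iteration count, and the SHM publishing step are placeholders):

/* Rough shape of the described client: hold one libpq connection for
 * the whole run, time each request. */
#include <libpq-fe.h>
#include <stdio.h>
#include <sys/time.h>

static double now_sec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* kept until test end */

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    for (int i = 0; i < 100000; i++)
    {
        double  t0 = now_sec();

        PQclear(PQexec(conn, "SELECT 1"));       /* scenario query */

        /* The real kit publishes this via shared memory instead. */
        printf("%.6f\n", now_sec() - t0);
    }
    PQfinish(conn);
    return 0;
}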
Wait wait, currently I'm playing the "stress scenario", so there are
only 256 sessions max, but think time is zero (full stress). Scenario
with 1600 users is to test how solid the database is just keeping a huge
number of users, but doing only one transaction per second (very low
global TPS comparing t
Dimitri wrote:
> Folks, before you start to think "what a dumb guy doing a dumb thing" :-))
> I'll explain a few details:
> for more than 10 years I've been using a db_STRESS kit
> (http://dimitrik.free.fr/db_STRESS.html) to check databases
> performance and scalability. Until now I was very happy with results
> it gave me
For the moment I'm not even considering any scalability issues on the
Read+Write workload - it may always be related to the storage box, and
storage latency or controller/cache efficiency may play a big part.
The problem I'm considering is a scalability issue on the Read-Only
workload - only selects, no disk access
Tom Lane said the following on 05/11/2009 07:02 PM:
>> where we're off by a factor of 1500+ :-(
> I think most likely the ~~ operator is the biggest problem.
> Unfortunately 8.1's estimator for ~~ is not terribly bright. You could
> try increasing your statistics target but I don't think it will help
> much
On Tue, 2009-05-12 at 12:19 +0200, Dimitri wrote:
> What I'm trying to do now is to understand what exactly is the
> problem.
You're running with 1600 users, which is above the scalability limit
uncovered (by Sun...) during earlier benchmarking. The scalability
issues are understood but currently
Dimitri wrote:
> What I discovered so far with all your help:
> - the impact of a planner
> - the impact of the analyze target
> - the impact of prepare / execute
> - scalability limit on 32 cores
You've received good advice on how to minimize the impact of the first
three points, and using those
On Tue, 2009-05-12 at 13:16 +0200, Dimitri wrote:
> Wow, Simon! :-))
>
> yes, I'm working in Sun Benchmark Center :-))
> (I'm not using my Sun email on public lists only to avoid spam)
>
> and as I came here asking questions it's probably proving my
> intentions to show PostgreSQL in its best light, no?..
Wow, Simon! :-))
yes, I'm working in Sun Benchmark Center :-))
(I'm not using my Sun email on public lists only to avoid spam)
and as I came here asking questions it's probably proving my
intentions to show PostgreSQL in its best light, no?.. - I never liked
"not honest" comparisons :-))
Regards,
On Tue, 2009-05-12 at 12:19 +0200, Dimitri wrote:
> For my big surprise, MySQL was faster!
Ours too.
** I bet you $1000 that I can improve the performance of your benchmark
results with PostgreSQL. You give me $1000 up-front and if I can't
improve your high end numbers I'll give you $2000 back.
Folks, before you start to think "what a dumb guy doing a dumb thing" :-))
I'll explain a few details:
for more than 10 years I've been using a db_STRESS kit
(http://dimitrik.free.fr/db_STRESS.html) to check databases
performance and scalability. Until now I was very happy with results
it gave me
On Tue, May 12, 2009 at 3:00 AM, Евгений Василев
wrote:
> I have the following table:
>
> CREATE TABLE "temp".tmp_135528
> (
> id integer NOT NULL,
> prid integer,
> group_id integer,
> iinv integer,
> oinv integer,
> isum numeric,
> osum numeric,
> idate timestamp without time zone,
> odate timestamp without time zone,
It's just one of the test conditions - "what if there are 2000 users?" - I
know I may use pgpool or others, but I also need to know the limits of
the database engine itself. For the moment I'm limiting to 256
concurrent sessions, but config params are kept like for 2000 :-)
Rgds,
-Dimitri
On 5/12/09
On Mon, May 11, 2009 at 6:31 PM, Dimitri wrote:
> Hi Kevin,
>
> PostgreSQL: 8.3.7 & 8.4
> Server: Sun M5000 32cores
> OS: Solaris 10
>
> current postgresql.conf:
>
> #
> max_connections = 2000 # (change requires restart)
Are you sure about the 2000
I have the following table:
CREATE TABLE "temp".tmp_135528
(
    id integer NOT NULL,
    prid integer,
    group_id integer,
    iinv integer,
    oinv integer,
    isum numeric,
    osum numeric,
    idate timestamp without time zone,
    odate timestamp without time zone,
    CONSTRAINT t_135528_pk PRIMARY KEY (id)
Hi,
Dimitri writes:
>>> So, why I don't use prepare here: let's say I'm testing the worst
>>> stress case :-) Imagine you have thousands of such kind of queries -
>>> you cannot prepare all of them! :-)
>>
>> Thousands? Surely there'll be a dozen or three of most common queries,
>> to which you pass different parameters.
Hi,
On 05/12/2009 12:46 AM, Dimitri wrote:
> So, why I don't use prepare here: let's say I'm testing the worst
> stress case :-) Imagine you have thousands of such kind of queries -
> you cannot prepare all of them! :-) or you'll maybe prepare it once,
> but as I showed previously in this thread prepa
Nice to know. But again, if this cache is kept only on the client side
it'll always be lost on disconnect. And if clients are "short-lived"
it won't help.
BTW, is there an option to say "make the execution plan as simple as
possible"? If you're sure about your data and your indexes - you don't
need to spend
>> So, why I don't use prepare here: let's say I'm testing the worst
>> stress case :-) Imagine you have thousands of such kind of queries -
>> you cannot prepare all of them! :-)
>
> Thousands? Surely there'll be a dozen or three of most common queries,
> to which you pass different parameters.
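That is the standard pattern with libpq: prepare each of the common statements once per session, then execute them with varying parameters. A minimal sketch (the table, statement name, and conninfo are invented for illustration):

/* Sketch: one-time parse/plan via PQprepare, many executions via
 * PQexecPrepared with different parameters. */
#include <libpq-fe.h>
#include <stdio.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Parse and plan once ... */
    PQclear(PQprepare(conn, "get_obj",
                      "SELECT * FROM objects WHERE id = $1", 1, NULL));

    /* ... then execute many times with different parameters. */
    for (int id = 1; id <= 1000; id++)
    {
        char        buf[16];
        const char *params[1] = { buf };

        snprintf(buf, sizeof(buf), "%d", id);
        PQclear(PQexecPrepared(conn, "get_obj", 1, params,
                               NULL, NULL, 0));
    }
    PQfinish(conn);
    return 0;
}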