On Thu, 12 Mar 2009, Jignesh K. Shah wrote:
That said, the testkit that I am using is a lightweight OLTP-type
workload which a user runs against a pre-known schema, and between the
various transactions that it does it emulates a wait time of 200ms.
After re-reading about this all again at
http://
On Thu, 12 Mar 2009, Scott Carey wrote:
Furthermore, if the problem were due to too much concurrency in the
database with active connections, it's hard to see how changing the lock
code would change the result the way it did?
What I wonder about is if the locking mechanism is accidentally turn
> It's worth ruling out given that even if the likelihood is small, the fix is
> easy. However, I don't see the throughput drop from peak as more
> concurrency is added that is the hallmark of this problem, usually with a
> lot of context switching and a sudden increase in CPU use per transaction.
On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
And again, as this is the third time I am saying: the test users also have some
latency built up in them, which is what is generally exploited to get more users
than the number of CPUs on the system, but that's the point we want to exploit..
Otherwise if a
On 3/12/09 1:35 PM, "Greg Smith" wrote:
On Thu, 12 Mar 2009, Jignesh K. Shah wrote:
> As soon as I get more "cycles" I will try variations of it but it would
> help if others can try it out in their own environments to see if it
> helps their instances.
What you should do next is see whether y
On 3/12/09 11:28 AM, "Tom Lane" wrote:
Scott Carey writes:
> They are not meaningless. It is certainly more to understand, but the test
> is entirely valid without that. In a CPU bound / RAM bound case, as
> concurrency increases you look for the throughput trend, the %CPU use trend
> and t
On Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens wrote:
>
> effective_cache_size = 4GB
Only 4GB with 64GB of RAM?
About logging, we have 3 partitions:
- data
- index
- everything else, including logging.
Usually, we log to a remote syslog (a dedicated log server for the
whole server
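For context on the point being made, a dedicated 64GB server would more typically be configured along these lines. The exact numbers below are rule-of-thumb assumptions of mine, not values from the thread:

```
# postgresql.conf -- illustrative values for a dedicated 64GB server;
# not taken from the thread.
# effective_cache_size is only a planner hint (no memory is allocated),
# and a common rule of thumb is 50-75% of RAM:
effective_cache_size = 48GB
# shared_buffers is real shared memory; ~25% of RAM is a common start:
shared_buffers = 16GB
```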
On Thu, 12 Mar 2009, Jignesh K. Shah wrote:
As soon as I get more "cycles" I will try variations of it but it would
help if others can try it out in their own environments to see if it
helps their instances.
What you should do next is see whether you can remove the bottleneck your
test is ru
On 03/12/09 15:10, Alvaro Herrera wrote:
Tom Lane wrote:
Scott Carey writes:
They are not meaningless. It is certainly more to understand, but the test is
entirely valid without that. In a CPU bound / RAM bound case, as concurrency
increases you look for the throughput trend, the
>>> "Jignesh K. Shah" wrote:
> What we have is a pool of 2000 users and we start making each user
> do a series of transactions on different rows and see how much the
> database can handle linearly before some bottleneck (system or
> database) kicks in and there can be no more linear increase in
>
Tom Lane wrote:
> Scott Carey writes:
> > They are not meaningless. It is certainly more to understand, but the test
> > is entirely valid without that. In a CPU bound / RAM bound case, as
> > concurrency increases you look for the throughput trend, the %CPU use trend
> > and the context swit
On 03/12/09 13:48, Scott Carey wrote:
On 3/11/09 7:47 PM, "Tom Lane" wrote:
All I'm adding, is that it makes some sense to me based on my
experience in CPU / RAM bound scalability tuning. It was expressed
that the test itself didn't even make sense.
I was wrong in my understanding of wha
At 11:44 AM 3/12/2009, Kevin Grittner wrote:
I'm probably more inclined to believe that his change may have merit
than many here, but I can't accept anything based on this test until
someone answers the question, so far ignored by all responses, of
where the bottleneck is at the low end which
Scott Carey writes:
> They are not meaningless. It is certainly more to understand, but the test
> is entirely valid without that. In a CPU bound / RAM bound case, as
> concurrency increases you look for the throughput trend, the %CPU use trend
> and the context switch rate trend. More infor
On 3/12/09 10:53 AM, "Tom Lane" wrote:
"Kevin Grittner" writes:
> You misunderstood me. I wasn't addressing the effects of his change,
> but rather the fact that his test shows a linear improvement in TPS up
> to 1000 connections for a 64 thread machine which is dealing entirely
> with RAM -- n
On 3/12/09 10:09 AM, "Gregory Stark" wrote:
Ram-resident use cases are entirely valid and worth testing, but in those use
cases you would want to have about as many processes as you have processors.
Within a factor of two or so, yes. However, where in his results does it show
that there are 10
On 03/12/09 11:13, Kevin Grittner wrote:
Scott Carey wrote:
"Kevin Grittner" wrote:
I'm a lot more interested in what's happening between 60 and 180
than over 1000, personally. If there was a RAID involved, I'd put
it down to better use of the numerous spindles, but when it'
"Kevin Grittner" writes:
> You misunderstood me. I wasn't addressing the effects of his change,
> but rather the fact that his test shows a linear improvement in TPS up
> to 1000 connections for a 64 thread machine which is dealing entirely
> with RAM -- no disk access. Where's the bottleneck th
On 3/11/09 7:47 PM, "Tom Lane" wrote:
Scott Carey writes:
> If there is enough lock contention and a common lock case is a short lived
> shared lock, it makes perfect sense. Fewer readers are blocked waiting
> on writers at any given time. Readers can 'cut' in line ahead of writers
>
Databases are usually IO bound; vmstat results can confirm individual
cases and setups. In case the server is IO bound, the entry point should
be setting up properly performing IO. RAID10 helps to a great extent in
improving IO bandwidth by parallelizing the IO operations; the more
spindles the better. Al
On 3/12/09 8:13 AM, "Kevin Grittner" wrote:
>>> Scott Carey wrote:
> "Kevin Grittner" wrote:
>
>> I'm a lot more interested in what's happening between 60 and 180
>> than over 1000, personally. If there was a RAID involved, I'd put
>> it down to better use of the numerous spindles, but when i
Grzegorz Jaśkiewicz writes:
> So please, don't say that this doesn't make sense because he tested it
> against a RAM disk. That was precisely the point of the exercise.
What people are tip-toeing around saying, which I'll just say right out in the
most provocative way, is that Jignesh has simply *misc
On 3/12/09 7:57 AM, "Jignesh K. Shah" wrote:
On 03/11/09 22:01, Scott Carey wrote:
Re: [PERFORM] Proposal of tunable fix for scalability of 8.4
On 3/11/09 3:27 PM, "Kevin Grittner" wrote:
If you want to make this more fair, instead of freeing all shared locks, limit
the count to some numb
>>> Grzegorz Jaśkiewicz wrote:
> Scalability is something that is affected by everything, and fixing
> this makes sens as much as looking at possible fixes to make raids
> more scalable, which is looked at by someone else I think.
> So please, don't say that this doesn't make sense because he tes
On Thu, Mar 12, 2009 at 3:13 PM, Kevin Grittner
wrote:
Scott Carey wrote:
>> "Kevin Grittner" wrote:
>>
>>> I'm a lot more interested in what's happening between 60 and 180
>>> than over 1000, personally. If there was a RAID involved, I'd put
>>> it down to better use of the numerous spind
>>> Scott Carey wrote:
> "Kevin Grittner" wrote:
>
>> I'm a lot more interested in what's happening between 60 and 180
>> than over 1000, personally. If there was a RAID involved, I'd put
>> it down to better use of the numerous spindles, but when it's all
>> in RAM it makes no sense.
>
> If
On 03/11/09 22:01, Scott Carey wrote:
On 3/11/09 3:27 PM, "Kevin Grittner" wrote:
I'm a lot more interested in what's happening between 60 and 180 than
over 1000, personally. If there was a RAID involved, I'd put it down
to better use of the numerous spindles, but when it's all
"Jignesh K. Shah" wrote:
> On 03/11/09 18:27, Kevin Grittner wrote:
>> "Jignesh K. Shah" wrote:
>>> Rerunning similar tests on a 64-thread UltraSPARC T2plus based
>>> server config
>>
>>> (IO is not a problem... all in RAM .. no disks):
>>> Time:Users:Type:TPM: Response Time
>>> 60: 100: M
On Thursday 12 March 2009 14:38:56 Frank Joerdens wrote:
> I just put the patched .deb on staging and we'll give it a whirl there
> for basic sanity checking - we currently have no way to even
> approximate the load that we have on live for testing.
Is it a capacity problem or a tool suite problem
On Thu, Mar 12, 2009 at 1:45 AM, Tom Lane wrote:
[...]
> You could try changing _IOLBF
> to _IOFBF near the head of postmaster/syslogger.c and see if that helps.
I just put the patched .deb on staging and we'll give it a whirl there
for basic sanity checking - we currently have no way to even
app
Nagalingam, Karthikeyan wrote:
Hi,
Can you guide me? Where is the entry point to get the
documentation for PostgreSQL performance tuning and optimization of
PostgreSQL with a storage controller?
Your recommendations and suggestions are welcome.
Regards
Karthikeyan.N
Take a look at
htt
Hi,
Can you guide me? Where is the entry point to get the documentation
for PostgreSQL performance tuning and optimization of PostgreSQL with
a storage controller?
Your recommendations and suggestions are welcome.
Regards
Karthikeyan.N
On Thu, Mar 12, 2009 at 2:05 AM, Andrew Dunstan wrote:
> It is buffered at the individual log message level, so that we make sure we
> don't multiplex messages. No more than that.
OK. So if the OP can afford multiplexed queries by using a log
analyzer supporting them, it might be a good idea to t