On Tue, Aug 20, 2013 at 1:57 AM, Andres Freund wrote:
> On 2013-08-19 15:17:44 -0700, Jeff Janes wrote:
>> On Wed, Aug 7, 2013 at 7:40 AM, Merlin Moncure wrote:
>>
>> > I agree; at least then it's not unambiguously better. If you (in
>> > effect) swap all contention on allocation from a lwlock to a spinlock
>> > it's not clear if you're improving things
On Mon, Aug 19, 2013 at 5:02 PM, Jeff Janes wrote:
>
> My concern is how we can ever move this forward. If we can't recreate
> it on a test system, and you probably won't be allowed to push
> experimental patches to the production system, what's left?
>
> Also, if the kernel is introducing new
On Wed, Aug 7, 2013 at 7:40 AM, Merlin Moncure wrote:
> I agree; at least then it's not unambiguously better. If you (in
> effect) swap all contention on allocation from a lwlock to a spinlock
> it's not clear if you're improving things; it would have to be proven
> and I'm trying to keep things
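
To spell out the tradeoff: here is a hedged pthreads sketch (not
PostgreSQL's LWLock or s_lock code) of the two contention behaviors being
compared. A sleeping lock parks waiters in the kernel; a spinlock makes
waiters burn CPU, and if the holder has been de-scheduled they spin until
it runs again, so the contention has moved rather than disappeared.

#include <pthread.h>

static pthread_mutex_t    sleep_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_spinlock_t spin_lock;

/* Call once before using the spinlock variant. */
static void
locks_init(void)
{
    pthread_spin_init(&spin_lock, PTHREAD_PROCESS_PRIVATE);
}

/* Sleeping lock: a contended waiter blocks in the kernel and frees the
 * CPU for whoever currently holds the lock. */
static void
alloc_with_sleeping_lock(void)
{
    pthread_mutex_lock(&sleep_lock);
    /* ... allocation critical section ... */
    pthread_mutex_unlock(&sleep_lock);
}

/* Spinlock: a contended waiter busy-waits.  If the holder was preempted,
 * every waiter spins through its timeslice before the holder can finish. */
static void
alloc_with_spinlock(void)
{
    pthread_spin_lock(&spin_lock);
    /* ... allocation critical section ... */
    pthread_spin_unlock(&spin_lock);
}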
On Sat, Aug 17, 2013 at 10:55 AM, Robert Haas wrote:
> On Mon, Aug 5, 2013 at 11:49 AM, Merlin Moncure wrote:
>> *) What I think is happening:
>> I think we are again getting burned by getting de-scheduled while
>> holding the free list lock. I've been chasing this problem for a long
>> time now (for example, see:
>> http://postgresql.1045698.n5.nabble.com
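
To make the suspected failure mode concrete: below is a minimal C sketch,
with hypothetical names standing in for the actual PostgreSQL
buffer-manager code, of a clock sweep that runs entirely under one lock.
A backend de-scheduled anywhere inside the loop blocks every other
backend's buffer allocation for the length of its involuntary wait.

#include <pthread.h>

#define NBUFFERS 16384                  /* stand-in for NBuffers */

static pthread_mutex_t buf_freelist_lock = PTHREAD_MUTEX_INITIALIZER;
static int      usage_count[NBUFFERS];  /* clock-sweep usage counters */
static unsigned next_victim = 0;        /* clock hand */

static int
get_victim_buffer(void)
{
    int victim;

    pthread_mutex_lock(&buf_freelist_lock);
    /* The whole sweep runs under the lock; being preempted in this loop
     * is the "de-scheduled while holding the free list lock" case. */
    for (;;)
    {
        victim = (int) (next_victim++ % NBUFFERS);
        if (usage_count[victim] == 0)
            break;
        usage_count[victim]--;
    }
    pthread_mutex_unlock(&buf_freelist_lock);
    return victim;
}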
On Wed, Aug 14, 2013 at 7:00 PM, Merlin Moncure wrote:
> Performance testing this patch is a real bugaboo for me; the VMs I have to
> work with are too unstable to give useful results :-(. Need to scrounge up
> a donor box somewhere...
While doing performance tests in this area, I always had
On 8/14/13 8:30 AM, Merlin Moncure wrote:
> Performance testing this patch is a real bugaboo for me; the VMs I have to
> work with are too unstable to give useful results :-(. Need to scrounge up
> a donor box somewhere...
I offered a server or two to the community a while ago but I don't think
a
>> Sent: Thursday, August 08, 2013 12:09 AM
>> To: Andres Freund
>> Cc: PostgreSQL-development; Jeff Janes
>> Subject: Re: [HACKERS] StrategyGetBuffer optimization, take 2
>>
>> On Wed, Aug 7, 2013 at 12:07 PM, Andres Freund wrote:
>> > On 2013-08-07 09:40:24 -0500, Merlin Moncure wrote:
>> >> > I don't think the unlocked increment of nextVictimBuffer is a good
>> >> > idea though.
On 2013-08-07 09:40:24 -0500, Merlin Moncure wrote:
> > I don't think the unlocked increment of nextVictimBuffer is a good idea
> > though. nextVictimBuffer jumping over NBuffers under concurrency seems
> > like a recipe for disaster to me. At the very, very least it will need a
> > good wad of comments
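
For concreteness, a sketch of the unlocked increment under discussion,
using a GCC atomic builtin and stand-in names rather than the actual
PostgreSQL code. The raw counter is deliberately allowed to run past
NBuffers, so every reader must reduce it modulo NBuffers; that is exactly
the invariant that would need the "good wad of comments".

#include <stdint.h>

#define NBUFFERS 16384                  /* stand-in for NBuffers */

static uint32_t nextVictimBuffer = 0;   /* monotonically increasing ticket */

/* Advance the clock hand without any lock.  The fetch-and-add can push
 * the raw counter far beyond NBUFFERS under concurrency; the modulo maps
 * each ticket back into [0, NBUFFERS).  Note that uint32 overflow is only
 * seamless here because NBUFFERS is a power of two dividing 2^32. */
static uint32_t
clock_sweep_tick(void)
{
    uint32_t ticket = __sync_fetch_and_add(&nextVictimBuffer, 1);

    return ticket % NBUFFERS;
}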
> optimization 2: refcount is examined during buffer allocation without
> a lock. If it's > 0, the buffer is assumed pinned (even though it may not
> in fact be) and the sweep continues
+1.
I think this should not lead to many problems, since a lost update
cannot, IMO, lead to a disastrous result. At most,
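
A sketch of why the lost update is benign, using a hypothetical stand-in
for the buffer header rather than the real BufferDesc: the unlocked read
can only err toward skipping a usable buffer, never toward handing out a
pinned one, since any candidate is re-checked under its header lock
before being returned.

/* Hypothetical stand-in for the real buffer header. */
typedef struct BufDescSketch
{
    unsigned refcount;      /* pin count */
    unsigned usage_count;   /* clock-sweep usage counter */
} BufDescSketch;

/* Unlocked peek during the sweep.  A stale nonzero refcount merely makes
 * the sweep pass over a buffer it could have used; it cannot cause a
 * pinned buffer to be victimized, because the final check is repeated
 * under the buffer header lock before the buffer is handed out. */
static int
looks_unpinned(const BufDescSketch *buf)
{
    return buf->refcount == 0;
}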
On 2013-08-05 10:49:08 -0500, Merlin Moncure wrote:
> optimization 4: remove free list lock (via Jeff Janes). This is the
> other optimization: one backend will no longer be able to shut down
> buffer allocation
I think splitting off the actual freelist checking into a spinlock makes
quite a bit of sense
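
Roughly, the split being suggested looks like the sketch below;
freelist_lock, firstFreeBuffer, and freeNext are illustrative stand-ins,
not the actual identifiers. The free list gets its own short spinlock, so
backends that fall through to the clock sweep no longer serialize on the
same lock.

#include <pthread.h>

typedef struct StrategySketch
{
    pthread_spinlock_t freelist_lock;   /* protects only the free list */
    int                firstFreeBuffer; /* head of free list, -1 if empty */
    int               *freeNext;        /* per-buffer next-free links */
} StrategySketch;

/* Pop from the free list under its own short spinlock.  Returns -1 if
 * the list is empty, in which case the caller falls back to the clock
 * sweep, which is synchronized independently. */
static int
try_get_free_buffer(StrategySketch *s)
{
    int buf = -1;

    pthread_spin_lock(&s->freelist_lock);
    if (s->firstFreeBuffer >= 0)
    {
        buf = s->firstFreeBuffer;
        s->firstFreeBuffer = s->freeNext[buf];
    }
    pthread_spin_unlock(&s->freelist_lock);
    return buf;
}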
My $company recently acquired another postgres-based $company and
migrated all their server operations into our datacenter. Upon
completing the move, the newly migrated database server started
experiencing huge load spikes.
*) Environment description:
Postgres 9.2.4
RHEL 6
32 cores
virtualized (