>>> As in 200%+ slower.
>> Have you tried PTHREAD_MUTEX_ADAPTIVE_NP ?
> Yes.
OK, if this can be validated, we might now have a new case for which my
suggestion would not be helpful. Reviewed, optimized code with short critical
sections and no hotspots by design could indeed be an exception where t
On 10/06/15 17:17, Andres Freund wrote:
> On 2015-06-10 16:07:50 +0200, Nils Goroll wrote:
>> On larger Linux machines, we have been running with spin locks replaced by
>> generic posix mutexes for years now. I personally haven't looked at the code
>> for ages
On 10/06/15 17:12, Jan Wieck wrote:
> for (...)
> {
>     s_lock();
>     // do something with a global variable
>     s_unlock();
> }
OK, I understand now, thank you. I am not sure if this test case is appropriate
for the critical sections in postgres (if it were, we'd not have the problem we
are
> So optimizing for no and moderate contention isn't something
> you can simply forgo.
Let's get back to my initial suggestion:
On 10/06/15 16:07, Nils Goroll wrote:
> I think it would
> still be worth considering to do away with the roll-your-own spinlocks on
> systems whose po
On 10/06/15 16:18, Jan Wieck wrote:
>
> I have played with test code that isolates a stripped down version of s_lock()
> and uses it with multiple threads. I then implemented multiple different
> versions of that s_lock(). The results with 200 concurrent threads are that
> using a __sync_val_compare_and_swap
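For reference, a stripped-down test-and-set lock of the kind such a harness
might hammer could look roughly like the sketch below. This is purely
illustrative (the names, the sched_yield() back-off and the loop counts are
mine, not PostgreSQL's s_lock() or Jan's test code):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Illustrative only: a minimal test-and-set lock built on the GCC __sync
 * builtins, exercised by several threads bumping one global counter. */
typedef volatile int slock_t;

static slock_t my_lock;
static long counter;

static void my_s_lock(slock_t *l)
{
    /* __sync_val_compare_and_swap returns the previous value; 0 means we won */
    while (__sync_val_compare_and_swap(l, 0, 1) != 0)
        sched_yield();          /* naive back-off instead of pure spinning */
}

static void my_s_unlock(slock_t *l)
{
    __sync_lock_release(l);     /* writes 0 with release semantics */
}

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++)
    {
        my_s_lock(&my_lock);
        counter++;              /* "do something with a global variable" */
        my_s_unlock(&my_lock);
    }
    return arg;
}

int main(void)
{
    pthread_t th[8];

    for (int i = 0; i < 8; i++)
        pthread_create(&th[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        pthread_join(th[i], NULL);
    printf("counter = %ld\n", counter);   /* expect 8 * 100000 */
    return 0;
}

(Compile with gcc -pthread; varying the thread count is what exposes the
contention behaviour being discussed.)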
On 10/06/15 16:20, Andres Freund wrote:
> That's precisely what I referred to in the bit you cut away...
I apologize, yes.
On 10/06/15 16:25, Tom Lane wrote:
> Optimizing for misuse of the mechanism is not the way.
I absolutely agree and I really appreciate all efforts towards lockless data
structures
On 10/06/15 16:05, Andres Freund wrote:
> it'll nearly always be beneficial to spin
Trouble is that postgres cannot know whether the process holding the lock is
actually running, so if it isn't, all we're doing is burning cycles and making
the problem worse.
Contrary to that, the kernel does know, so for
On larger Linux machines, we have been running with spin locks replaced by
generic posix mutexes for years now. I personally haven't looked at the code for
ages, but we maintain a patch which pretty much does the same thing still:
Ref: http://www.postgresql.org/message-id/4fede0bf.7080...@schokola.d
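For illustration, setting up a mutex so that it can replace a spinlock living
in shared memory mapped by several backend processes might look roughly like
this. This is a sketch under the assumption of glibc (PTHREAD_MUTEX_ADAPTIVE_NP
is a non-portable extension), not the actual patch:

#define _GNU_SOURCE             /* exposes PTHREAD_MUTEX_ADAPTIVE_NP on glibc */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: attribute setup for a mutex intended to replace a spinlock.
 * In reality the mutex itself would be placed in the shared memory segment. */
static void shared_mutex_init(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;

    if (pthread_mutexattr_init(&attr) != 0)
        abort();
    /* usable from all processes mapping the shared memory */
    if (pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED) != 0)
        abort();
    /* spin briefly in user space before sleeping in the kernel */
    if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP) != 0)
        abort();
    if (pthread_mutex_init(m, &attr) != 0)
        abort();
    pthread_mutexattr_destroy(&attr);
}

int main(void)
{
    pthread_mutex_t m;

    shared_mutex_init(&m);      /* here on the stack; in the patch: in shm */
    pthread_mutex_lock(&m);
    puts("locked");
    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    return 0;
}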
Just FYI: We have worked around these issues by running regular (scripted and
thus controlled) vacuums on all tables but the active ones and adding L2 ZFS
caching (l2arc). I hope to get back to this again soon.
Hi Jeff and all,
On 23/05/15 22:13, Jeff Janes wrote:
> Are you sure it is the read IO that causes the problem?
Yes. The trouble here is that we are talking about a 361 GB table:
List of relations
 Schema | Name | Type | Owner | Size
On 23/05/15 16:50, Tom Lane wrote:
>> > as many before, I ran into the issue of a postgresql database (8.4.1)
> *Please* tell us that was a typo.
Yes it was, my sincere apologies. It's 9.4.1
Nils
Hi,
as many before, I ran into the issue of a postgresql database (8.4.1)
- committing many transactions
- to huge volume tables (3-figure GB in size)
- running the xid wrap vacuum (to freeze tuples)
where the additional read IO load has a negative impact to the extent of the
system becoming unusable
This is really late, but ...
On 08/21/12 11:20 PM, Robert Haas wrote:
> Our sinval synchronization mechanism has a somewhat weird design that
> makes this OK.
... I don't want to miss the chance to thank you, Robert, for the detailed
explanation. I have backported b4fbe392f8ff6ff1a66b488eb7197eef
Hi,
I am reviewing this one-year-old change again before backporting it to 9.1.3 for
production use.
ATM, I believe the code is correct, but I don't want to miss the chance to spot
possible errors, so please let me dump my brain on some points:
- IIUC, SIGetDataEntries() can return 0 when i
> Should we do something to plug this, and if so, what? If not, should
> we document the danger?
I am not sure I understood the intention of the question correctly, but if
the question is whether pg should try to work around misuse of signals, then
my answer would be a definite no.
IMHO,
Robert,
> 1. How much we're paying for this in the uncontended case?
Using glibc, we have the overhead of an additional library function call, which
we could eliminate by pulling in the code from glibc/nptl or from another source
of proven reference code.
The pgbench results I had posted before
btw, I really need to let go of this topic to catch up before going away at the
end of the week.
Thanks, Nils
> 3.1.7?
Sorry, that was a typo. 9.1.3.
Yes, I had mentioned the version in my initial posting. This is the version I
need to work on as long as 9.2 is in beta.
> A major scalability bottleneck caused by spinlock contention was fixed
> in 9.2 - see commit b4fbe392f8ff6ff1a66b488eb7197eef9e1770a
just a quick note: I got really interesting results, but the writeup is not done
yet. Will get back to this ASAP.
Jeff,
without further ado: Thank you, I will go away, run pgbench according to your
advice and report back.
Nils
Hi Jeff,
>>> It looks like the hacked code is slower than the original. That
>>> doesn't seem so good to me. Am I misreading this?
>>
>> No, you are right - in a way. This is not about maximizing tps, this is about
>> maximizing efficiency under load situations
>
> But why wouldn't this maximiz
Hi Robert,
> Spinlock contention causes tps to go down. The fact that tps didn't
> change much in this case suggests that either these workloads don't
> generate enough spinlock contention to benefit from your patch, or
> your patch doesn't meaningfully reduce it, or both. We might need a
> test
> test runs on an IBM POWER7 system with 16 cores, 64 hardware threads.
Could you add the CPU type / clock speed, please?
Thank you, Robert.
As this patch was not targeted towards increasing tps, I am happy to hear
that your benchmarks also suggest that performance is "comparable".
But my main question is: how about resource consumption? For the issue I am
working on, my current working hypothesis is that spinnin
> You need at the very, very least 10s.
ok, thanks.
Robert Haas wrote:
> FWIW, I kicked off a looong benchmarking run on this a couple of days
> ago on the IBM POWER7 box, testing pgbench -S, regular pgbench, and
> pgbench --unlogged-tables at various client counts with and without
> the patch; three half-hour test runs for each test configu
>> Using futexes directly could be even cheaper.
> Note that below this you only have the futex(2) system call.
I was only referring to the fact that we could save one function and one library
call, which could make a difference for the uncontended case.
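To make that concrete, here is a minimal futex-based lock along the lines of
Ulrich Drepper's "Futexes Are Tricky" (a sketch with names of my own, not the
proposed patch): the uncontended path is a single compare-and-swap in user
space, and the futex(2) system call is only reached when the lock is already
held.

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Lock word states: 0 = unlocked, 1 = locked, 2 = locked with waiters. */
static long sys_futex(int *addr, int op, int val)
{
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void flock_acquire(int *f)
{
    int c = __sync_val_compare_and_swap(f, 0, 1);

    while (c != 0)
    {
        /* mark the lock contended, then sleep until it is released */
        if (c == 2 || __sync_val_compare_and_swap(f, 1, 2) != 0)
            sys_futex(f, FUTEX_WAIT, 2);
        c = __sync_val_compare_and_swap(f, 0, 2);
    }
}

static void flock_release(int *f)
{
    /* __sync_fetch_and_sub returns the old value; 2 means there may be waiters */
    if (__sync_fetch_and_sub(f, 1) != 1)
    {
        *f = 0;
        sys_futex(f, FUTEX_WAKE, 1);
    }
}

int main(void)
{
    int lock = 0;

    flock_acquire(&lock);       /* uncontended: no system call at all */
    flock_release(&lock);
    return 0;
}

(Linux-specific; to work across backends the lock word would of course have to
live in shared memory.)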
> It's
> still unproven whether it'd be an improvement, but you could expect to
> prove it one way or the other with a well-defined amount of testing.
I've hacked the code to use adaptive pthread mutexes instead of spinlocks; see
the attached patch. The patch is for the git head, but it can easily be
> But if you start with "let's not support any platforms that don't have this
> feature"
This will never be my intention.
Nils
Hi Merlin,
> _POSIX_THREAD_PROCESS_SHARED
sure.
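(As an aside, whether a platform advertises that option can be probed with a
tiny program like the one below; this is just an illustrative sketch, not part
of any patch.)

#include <stdio.h>
#include <unistd.h>

int main(void)
{
#ifdef _SC_THREAD_PROCESS_SHARED
    /* sysconf() returns -1 if the option is not supported at run time */
    long v = sysconf(_SC_THREAD_PROCESS_SHARED);

    printf("_POSIX_THREAD_PROCESS_SHARED: %ld\n", v);
    return v > 0 ? 0 : 1;
#else
    puts("sysconf knows no _SC_THREAD_PROCESS_SHARED on this platform");
    return 1;
#endif
}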
> Also, it's forbidden to do things like invoke i/o in the backend while
> holding only a spinlock. As to your larger point, it's an interesting
> assertion -- some data to back it up would help.
Let's see if I can get any. ATM I've only got indic
Hi,
I am currently trying to understand what looks like really bad scalability of
9.1.3 on a 64-core, 512 GB RAM system: the system runs OK at 30% usr, but only
marginal amounts of additional load seem to push it to 70%, and the application
becomes highly unresponsive.
My current understanding b