Alexander Motin wrote:

... patterns, I've noticed serious congestion with spinning on the global
pbuf_mtx mutex inside getpbuf() and relpbuf(). Since that code is already
very simple, I've tried to optimize probably the only thing possible there:
switch bswlist from TAILQ to SLIST. As I can see, the b_freelist field of
struct buf is really used as a TAILQ in some other places, so I've just
added another SLIST_ENTRY field. And the result appeared surprising -- I can
no longer reproduce the issue at all. Maybe it was just unlucky
synchronization ...
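A minimal sketch of the change being described -- not the actual patch, and
with simplified, partly hypothetical declarations (the b_freeslist field and
the *_locked helper names are invented here) -- would keep the TAILQ entry
for the code that still needs it and give the free list its own SLIST entry:

#include <stddef.h>
#include <sys/queue.h>

struct buf {
        TAILQ_ENTRY(buf)  b_freelist;   /* still used as a TAILQ elsewhere */
        SLIST_ENTRY(buf)  b_freeslist;  /* hypothetical new SLIST linkage */
        /* ... other fields ... */
};

static SLIST_HEAD(, buf) bswlist = SLIST_HEAD_INITIALIZER(bswlist);

/* Take a pbuf off the free list; caller holds pbuf_mtx. */
static struct buf *
getpbuf_locked(void)
{
        struct buf *bp;

        bp = SLIST_FIRST(&bswlist);
        if (bp != NULL)
                SLIST_REMOVE_HEAD(&bswlist, b_freeslist);
        return (bp);
}

/* Put a pbuf back on the free list; caller holds pbuf_mtx. */
static void
relpbuf_locked(struct buf *bp)
{
        SLIST_INSERT_HEAD(&bswlist, bp, b_freeslist);
}

Each operation now reads and writes only the list head and the element
itself; no second element's back-link is touched.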
On Sat, Jun 29, 2013 at 10:06:11AM +0300, Alexander Motin wrote:
> I understand that a lock attempt will steal the cache line from the lock
> owner. What I don't quite understand is why avoiding that helps
> performance in this case. Indeed, having the mutex on its own cache line
> will not let other cores steal ...
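For reference, the arrangement being discussed -- a mutex that owns its
cache line -- is normally obtained with explicit alignment. A hypothetical,
kernel-flavored declaration, assuming FreeBSD headers:

#include <sys/param.h>          /* CACHE_LINE_SIZE, via machine/param.h */
#include <sys/lock.h>
#include <sys/mutex.h>

/*
 * Sketch only: align the mutex to a cache-line boundary so that spinning
 * waiters contend on a line that holds nothing but the lock itself.
 * FreeBSD also provides struct mtx_padalign for this purpose.
 */
static struct mtx pbuf_mtx __aligned(CACHE_LINE_SIZE);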
Adrian Chadd replied:

.. i'd rather you narrow down _why_ it's performing better before committing
it. Otherwise it may just creep up again after someone does another change
in an unrelated part of the kernel.

You're using instructions-retired; how about using L1/L2 cache loads,
stores, etc.? There's a lot more CPU ...
From Alexander Motin's response:

... that lock spinning there consumes incomparably more CPU time than the
locked region itself could consume.
To which Adrian Chadd answered:

On 28 June 2013 08:37, Alexander Motin m...@freebsd.org wrote:
>> Otherwise it may just creep up again after someone does another change
>> in an unrelated part of the kernel.
> Big win or small, TAILQ is still heavier than STAILQ, while it is not
> needed there at all.

You can't make that assumption.
And, in another reply quoting Adrian:

On Fri, Jun 28, 2013 at 08:14:42AM -0700, Adrian Chadd wrote:
> .. i'd rather you narrow down _why_ it's performing better before
> committing it. Otherwise it may just creep up again after someone does
> another change in an unrelated part of the kernel.

Or penalize some other set of machines.
Another reply took up what each insert actually touches:

... and the current element. A doubly-linked LIST needs to modify both the
head as well as the old first element, which may not be in cache (and may
not be in the same TLB, either). I don't recall exactly what [S]TAILQ
touches, but the doubly-linked version still has to modify more entries.
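To make the write-counting concrete, here are hand-simplified equivalents of
the two sys/queue.h head-insert operations (stand-in types, not the real
macros or kernel structures):

#include <stddef.h>

struct elem;

struct slist_head { struct elem *slh_first; };
struct tailq_head { struct elem *tqh_first; struct elem **tqh_last; };

struct elem {
        struct elem  *sle_next;         /* SLIST linkage */
        struct elem  *tqe_next;         /* TAILQ linkage */
        struct elem **tqe_prev;
};

/* SLIST_INSERT_HEAD: two writes, both to memory the caller is already
 * touching (the new element and the list head). */
static void
slist_insert_head(struct slist_head *head, struct elem *elm)
{
        elm->sle_next = head->slh_first;
        head->slh_first = elm;
}

/* TAILQ_INSERT_HEAD: four writes, one of which dirties the *old* first
 * element -- a cache line (and possibly a TLB entry) the caller has no
 * other reason to own. */
static void
tailq_insert_head(struct tailq_head *head, struct elem *elm)
{
        elm->tqe_next = head->tqh_first;
        if (head->tqh_first != NULL)
                head->tqh_first->tqe_prev = &elm->tqe_next;
        else
                head->tqh_last = &elm->tqe_next;
        head->tqh_first = elm;
        elm->tqe_prev = &head->tqh_first;
}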
Alexander Motin followed up with hardware-counter numbers:

... -- the first lock acquisition causes 78% of them. Later memory accesses,
including the lock release, hit the same cache line and are almost free.
With a clean kernel using TAILQ, I see RESOURCE_STALLS.ANY spread almost
equally between lock acquisition, bswlist access, and lock release. It looks
like ...
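A speculative reading of those numbers: if the lock and the list head share
a cache line, the first atomic on the lock pays the whole line-transfer
cost, and the list update and the unlock then hit a line the core already
owns exclusively. A hypothetical layout (not the actual kernel
declarations):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/queue.h>

struct buf;

/*
 * Hypothetical: pack the mutex and the free-list head into one
 * cache-line-aligned object, so the lock acquisition pulls the line in
 * and every later access in the critical section is local.
 */
static struct {
        struct mtx              lock;
        SLIST_HEAD(, buf)       head;
} bswfree __aligned(CACHE_LINE_SIZE);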
From an older thread, December 2001 -- Aleksander Rozman wrote:

Hi!

I have finally started with my work on that protocol I was telling you about
(ax.25), but now I have come to a problem. Some of the old structs for
networking were changed, and now they use the TAILQ macros. There is almost
no information about these macros, so I am quite in the dark... I looked at
the source code, but I am still putting something wrong in, so I get a lot
of errors. Is there any ...
On Sat, 29 Dec 2001 00:57:48 +0100, Aleksander Rozman [EMAIL PROTECTED] wrote:
> I have finally started with my work on that protocol I was telling you
> about (ax.25), but now I have come to a problem. Some of the old structs
> for networking were changed, and now they use the TAILQ macros. ...

man queue
man TAILQ_FIRST
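Since queue(3) is terse for a first-time user, here is a minimal,
self-contained TAILQ example (hypothetical names, ordinary userland C rather
than kernel code):

#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct node {
        int value;
        TAILQ_ENTRY(node) entries;      /* linkage used by the macros */
};

TAILQ_HEAD(node_list, node);

int
main(void)
{
        struct node_list head = TAILQ_HEAD_INITIALIZER(head);
        struct node *np;
        int i;

        /* Build a three-element queue. */
        for (i = 0; i < 3; i++) {
                np = malloc(sizeof(*np));
                if (np == NULL)
                        abort();
                np->value = i;
                TAILQ_INSERT_TAIL(&head, np, entries);
        }

        /* Walk it front to back. */
        TAILQ_FOREACH(np, &head, entries)
                printf("%d\n", np->value);

        /* Tear it down. */
        while ((np = TAILQ_FIRST(&head)) != NULL) {
                TAILQ_REMOVE(&head, np, entries);
                free(np);
        }
        return (0);
}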