On Thu, 18 Feb 2021 at 15:15, Jason A. Donenfeld wrote:
>
[...]
> >
> > > +static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
> > > +{
> > > + WRITE_ONCE(NEXT(skb), NULL);
> > > + WRITE_ONCE(NEXT(xchg_release(&queue->head, skb)), skb);
> > > +}
>
On Thu, Feb 18, 2021 at 3:04 PM Björn Töpel wrote:
>
> On Thu, 18 Feb 2021 at 14:53, Jason A. Donenfeld wrote:
> >
> > Hey Bjorn,
> >
> > On Thu, Feb 18, 2021 at 2:50 PM Björn Töpel wrote:
> > > > +
> > > > +static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *s
On Thu, 18 Feb 2021 at 14:53, Jason A. Donenfeld wrote:
>
> Hey Bjorn,
>
> On Thu, Feb 18, 2021 at 2:50 PM Björn Töpel wrote:
> > > +
> > > +static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
> > > +{
> > > + WRITE_ONCE(NEXT(skb), NULL);
> > > +
Hey Bjorn,
On Thu, Feb 18, 2021 at 2:50 PM Björn Töpel wrote:
> > +
> > +static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb)
> > +{
> > + WRITE_ONCE(NEXT(skb), NULL);
> > + smp_wmb();
> > + WRITE_ONCE(NEXT(xchg_relaxed(&queue->head, skb)), skb);
On Mon, 8 Feb 2021 at 14:47, Jason A. Donenfeld wrote:
>
> Having two ring buffers per-peer means that every peer results in two
> massive ring allocations. On an 8-core x86_64 machine, this commit
> reduces the per-peer allocation from 18,688 bytes to 1,856 bytes, which
> is a 90% reduction. Nin
"Jason A. Donenfeld" writes:
> On Wed, Feb 17, 2021 at 7:36 PM Toke Høiland-Jørgensen wrote:
>> Are these performance measurements based on micro-benchmarks of the
>> queueing structure, or overall wireguard performance? Do you see any
>> measurable difference in the overall performance (i.e
On Wed, Feb 17, 2021 at 7:36 PM Toke Høiland-Jørgensen wrote:
> Are these performance measurements based on micro-benchmarks of the
> queueing structure, or overall wireguard performance? Do you see any
> measurable difference in the overall performance (i.e., throughput
> drop)?
These are fr
"Jason A. Donenfeld" writes:
> Having two ring buffers per-peer means that every peer results in two
> massive ring allocations. On an 8-core x86_64 machine, this commit
> reduces the per-peer allocation from 18,688 bytes to 1,856 bytes, which
> is a 90% reduction. Ninety percent! With some sing
On Tue, Feb 9, 2021 at 4:44 PM Jason A. Donenfeld wrote:
>
> Hi Dmitry,
>
> Thanks for the review.
>
> On Tue, Feb 9, 2021 at 9:24 AM Dmitry Vyukov wrote:
> > Strictly speaking, 0.15% is for delaying the newly added item only. This
> > is not a problem, we can just consider that push has not finish
Hi Dmitry,
Thanks for the review.
On Tue, Feb 9, 2021 at 9:24 AM Dmitry Vyukov wrote:
> Strictly speaking, 0.15% is for delaying the newly added item only. This
> is not a problem, we can just consider that push has not finished yet
> in this case. You can get this with any queue. It's just that c
On Mon, Feb 8, 2021 at 2:38 PM Jason A. Donenfeld wrote:
>
> Having two ring buffers per-peer means that every peer results in two
> massive ring allocations. On an 8-core x86_64 machine, this commit
> reduces the per-peer allocation from 18,688 bytes to 1,856 bytes, which
> is a 90% reduction. N
Having two ring buffers per-peer means that every peer results in two
massive ring allocations. On an 8-core x86_64 machine, this commit
reduces the per-peer allocation from 18,688 bytes to 1,856 bytes, which
is a 90% reduction. Ninety percent! With some single-machine
deployments approaching 400,