Thanks Song and Andrii for the response.

Our use case is global rate-limiting of incoming TCP connections, and we want
to implement a token bucket algorithm in XDP for this purpose.
The plan is to have a map that is credited with 'x' tokens per second, and the
same map is decremented whenever we receive a new TCP connection request.
Most of our systems are 64-core machines. Since every core would try to update
the map in parallel, the problem I foresee is losing an update while the tokens
are being refilled. Losing a decrement when a packet arrives is probably
acceptable, as that is less critical, but I cannot afford to lose the refill,
since that could mean none of the connections are processed until the next
second.
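
To make this concrete, here is a rough, untested sketch of the XDP side
(the map name, single-entry layout, and fail-open choice are just
illustrative, not something we have settled on), using the atomic add that
Andrii mentions below so that concurrent decrements from different cores are
never lost:

    /* Sketch only: one global token counter in a single-entry array map. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __s64);
    } token_map SEC(".maps");

    SEC("xdp")
    int rate_limit(struct xdp_md *ctx)
    {
        __u32 key = 0;
        __s64 *tokens = bpf_map_lookup_elem(&token_map, &key);

        if (!tokens)
            return XDP_PASS;        /* fail open if the lookup fails */

        if (*tokens <= 0)
            return XDP_DROP;        /* bucket empty */

        /* Compiles to BPF_XADD: the decrement itself cannot be
         * overwritten by another core. The check above races with the
         * decrement, so the count can briefly dip slightly negative,
         * but no update is silently lost. */
        __sync_fetch_and_add(tokens, -1);
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

My open question is the refill side: if it is also done with an atomic add
(for example from another BPF program driven by a timer), the increment
cannot be overwritten either, but a plain bpf_map_update_elem() from
userspace would overwrite whatever decrements race with it.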

Let me know if any more details are needed.

Regards,
Kanthi

PS: I am trying a workaround using a two-map approach (an additional map for
filling the tokens for the next second), sketched below, but I would love to
hear if there is something cleaner and better.
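
Roughly, the two-bucket workaround I have in mind looks like this (again just
a guess at the shape, using a 2-entry array instead of two maps): userspace
pre-fills the bucket for the *next* second while XDP consumes from the bucket
for the *current* second, so the refill never races with in-flight decrements.

    /* Sketch only; assumes the same includes and license as above. */
    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 2);
        __type(key, __u32);
        __type(value, __s64);
    } buckets SEC(".maps");

    SEC("xdp")
    int rate_limit_2slot(struct xdp_md *ctx)
    {
        /* Slot flips every second; userspace refills the other slot. */
        __u32 slot = (__u32)(bpf_ktime_get_ns() / 1000000000ULL) & 1;
        __s64 *tokens = bpf_map_lookup_elem(&buckets, &slot);

        if (!tokens || *tokens <= 0)
            return XDP_DROP;

        __sync_fetch_and_add(tokens, -1);   /* lost-update-free decrement */
        return XDP_PASS;
    }

The downside is that any tokens left over in the old slot at the second
boundary are simply discarded unless the refiller carries them over.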


On Wed, May 27, 2020 at 1:29 AM Andrii Nakryiko <andrii.nakry...@gmail.com>
wrote:

> On Fri, May 22, 2020 at 1:07 PM Kanthi P <pavuluri.kan...@gmail.com>
> wrote:
> >
> > Hi,
> >
> >
> > I’ve been reading that hash map’s update element is atomic and also that
> we can use BPF_XADD to make the entire map update atomically.
> >
> >
> > But I think that doesn’t guarantee that these updates are thread safe,
> meaning one cpu core can overwrite other core’s update.
> >
> >
> > Is there a clean way of keeping them thread safe. Unfortunately I can’t
> use per-cpu maps as I need global counters.
> >
> >
> > And spin locks sounds a costly operation. Can you please throw some
> light?
>
> Stating that spin locks are costly without empirical data seems
> premature. What's the scenario? What's the number of CPUs? What's the
> level of contention? Under light contention, spin locks in practice
> would be almost as fast as atomic increments. Under heavy contention,
> spin locks would probably be even better than atomics because they
> will not waste as much CPU, as a typical atomic retry loop would.
>
> But basically, depending on your use case (which you should probably
> describe to get a better answer), you can either:
>   - do atomic increment/decrement if you need to update a counter (see
> examples in kernel selftests using __sync_fetch_and_add);
>   - use map with bpf_spin_lock (there are also examples in selftests).
>
> >
> >
> > Regards,
> >
> > Kanthi
> >
> > 
>
