Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-14 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Thu, 10 Dec 2020 11:33:40 -0800
> On Thu, Dec 10, 2020 at 02:58:10PM +0900, Kuniyuki Iwashima wrote:
> 
> [ ... ]
> 
> > > > I've implemented one-by-one migration only for the accept queue for now.
> > > > In addition to the concern about the TFO queue,
> > > You meant this queue:  queue->fastopenq.rskq_rst_head?
> > 
> > Yes.
> > 
> > 
> > > Can "req" be passed?
> > > I did not look up the lock/race in details for that though.
> > 
> > I think if we rewrite the part that frees TFO requests to work like the
> > accept queue's, using reqsk_queue_remove(), we can also migrate them.
> > 
> > In this patchset, when selecting a listener for the accept queue, the TFO
> > queue of the same listener is also migrated to another listener in order
> > to prevent a TFO spoofing attack.
> > 
> > If the requests in the accept queue are migrated one by one, I am wondering
> > to which listener the requests in the TFO queue should be migrated to
> > prevent the attack, or whether they should simply be freed.
> > 
> > I think users need not know that such requests are kept in the kernel to
> > prevent attacks, so passing them to an eBPF prog is confusing. But
> > redistributing them randomly without the user's intention can make some
> > irrelevant listeners unnecessarily drop new TFO requests, so this is also
> > bad. Moreover, freeing such requests seems not so good from a security
> > point of view.
> The current behavior (during process restart) also does not carry over this
> security queue.  Will not carrying them in this patch make it less secure
> than the current behavior during process restart?

No, I thought I could make it more secure.


> Do you need it now, or is it something that can be considered later
> without changing uapi bpf.h?

No, I do not need it for any other reason, so I will simply free the
requests in the TFO queue.
Thank you.
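A user-space sketch of the option settled on above (draining and freeing the TFO queue instead of migrating it); the struct and function names here are made up for illustration, and the real kernel would drop each reference with reqsk_put() rather than free():

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model: drain a singly linked list of requests, freeing each node.
 * This mirrors the decision above to simply free the TFO queue
 * (fastopenq.rskq_rst_head) of a closing listener instead of migrating
 * it. Not kernel code: the kernel would use reqsk_put() on each entry.
 */
struct toy_rst_req {
	struct toy_rst_req *dl_next;
};

/* Free every request on the list; return how many were freed. */
static int toy_drain_rst_queue(struct toy_rst_req *head)
{
	int freed = 0;

	while (head) {
		struct toy_rst_req *next = head->dl_next;

		free(head);
		head = next;
		freed++;
	}
	return freed;
}
```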


> > > > ---8<---
> > > > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > > > index a82fd4c912be..d0ddd3cb988b 100644
> > > > --- a/net/ipv4/inet_connection_sock.c
> > > > +++ b/net/ipv4/inet_connection_sock.c
> > > > @@ -1001,6 +1001,29 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > >  }
> > > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > > >  
> > > > +static bool inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk,
> > > > +					 struct request_sock *req)
> > > > +{
> > > > +	struct request_sock_queue *queue = &inet_csk(nsk)->icsk_accept_queue;
> > > > +	bool migrated = false;
> > > > +
> > > > +	spin_lock(&queue->rskq_lock);
> > > > +	if (likely(nsk->sk_state == TCP_LISTEN)) {
> > > > +		migrated = true;
> > > > +
> > > > +		req->dl_next = NULL;
> > > > +		if (queue->rskq_accept_head == NULL)
> > > > +			WRITE_ONCE(queue->rskq_accept_head, req);
> > > > +		else
> > > > +			queue->rskq_accept_tail->dl_next = req;
> > > > +		queue->rskq_accept_tail = req;
> > > > +		sk_acceptq_added(nsk);
> > > > +		inet_csk_reqsk_queue_migrated(sk, nsk, req);
> > > need to first resolve the question raised in patch 5 regarding
> > > the update on req->rsk_listener though.
> > 
> > In the unhash path, it is also safe to call sock_put() for the old listener.
> > 
> > In inet_csk_listen_stop(), the sk_refcnt of the listener is >= 1. If the
> > listener does not have immature requests, sk_refcnt is 1 and the listener
> > is freed in __tcp_close().
> > 
> >   sock_hold(sk) in __tcp_close()
> >   sock_put(sk) in inet_csk_destroy_sock()
> >   sock_put(sk) in __tcp_close()
> I don't see how it is different here than in patch 5.
> I could be missing something.
> 
> Let's continue the discussion on the other thread (patch 5) first.

The listening socket holds two kinds of refcounts: one for itself and one
per request (n). I think the listener still holds its own refcount at least
in inet_csk_listen_stop(), so sock_put() here never frees the listener.
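The refcount argument can be illustrated with a minimal user-space model (hypothetical toy types, not kernel code): the listener keeps one reference for itself plus one per request, so a single sock_put() from the migration path cannot drop the count to zero.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of sk_refcnt: one reference for the socket itself plus one
 * per queued request. A put from the migration path releases only a
 * request's reference, so the listener survives until the final put
 * (the kernel's inet_csk_destroy_sock() path). Hypothetical types. */
struct toy_sock {
	int  refcnt;
	bool freed;
};

static void toy_sock_hold(struct toy_sock *sk)
{
	sk->refcnt++;
}

static void toy_sock_put(struct toy_sock *sk)
{
	if (--sk->refcnt == 0)
		sk->freed = true;	/* models freeing the socket */
}
```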


Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-10 Thread Martin KaFai Lau
On Thu, Dec 10, 2020 at 02:58:10PM +0900, Kuniyuki Iwashima wrote:

[ ... ]

> > > I've implemented one-by-one migration only for the accept queue for now.
> > > In addition to the concern about the TFO queue,
> > You meant this queue:  queue->fastopenq.rskq_rst_head?
> 
> Yes.
> 
> 
> > Can "req" be passed?
> > I did not look up the lock/race in details for that though.
> 
> I think if we rewrite the part that frees TFO requests to work like the
> accept queue's, using reqsk_queue_remove(), we can also migrate them.
> 
> In this patchset, when selecting a listener for the accept queue, the TFO
> queue of the same listener is also migrated to another listener in order
> to prevent a TFO spoofing attack.
> 
> If the requests in the accept queue are migrated one by one, I am wondering
> to which listener the requests in the TFO queue should be migrated to
> prevent the attack, or whether they should simply be freed.
> 
> I think users need not know that such requests are kept in the kernel to
> prevent attacks, so passing them to an eBPF prog is confusing. But
> redistributing them randomly without the user's intention can make some
> irrelevant listeners unnecessarily drop new TFO requests, so this is also
> bad. Moreover, freeing such requests seems not so good from a security
> point of view.
The current behavior (during process restart) also does not carry over this
security queue.  Will not carrying them in this patch make it less secure
than the current behavior during process restart?
Do you need it now, or is it something that can be considered later
without changing uapi bpf.h?

> > > ---8<---
> > > diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> > > index a82fd4c912be..d0ddd3cb988b 100644
> > > --- a/net/ipv4/inet_connection_sock.c
> > > +++ b/net/ipv4/inet_connection_sock.c
> > > @@ -1001,6 +1001,29 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > >  }
> > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > >  
> > > +static bool inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk,
> > > +					 struct request_sock *req)
> > > +{
> > > +	struct request_sock_queue *queue = &inet_csk(nsk)->icsk_accept_queue;
> > > +	bool migrated = false;
> > > +
> > > +	spin_lock(&queue->rskq_lock);
> > > +	if (likely(nsk->sk_state == TCP_LISTEN)) {
> > > +		migrated = true;
> > > +
> > > +		req->dl_next = NULL;
> > > +		if (queue->rskq_accept_head == NULL)
> > > +			WRITE_ONCE(queue->rskq_accept_head, req);
> > > +		else
> > > +			queue->rskq_accept_tail->dl_next = req;
> > > +		queue->rskq_accept_tail = req;
> > > +		sk_acceptq_added(nsk);
> > > +		inet_csk_reqsk_queue_migrated(sk, nsk, req);
> > need to first resolve the question raised in patch 5 regarding
> > the update on req->rsk_listener though.
> 
> In the unhash path, it is also safe to call sock_put() for the old listener.
> 
> In inet_csk_listen_stop(), the sk_refcnt of the listener is >= 1. If the
> listener does not have immature requests, sk_refcnt is 1 and the listener
> is freed in __tcp_close().
> 
>   sock_hold(sk) in __tcp_close()
>   sock_put(sk) in inet_csk_destroy_sock()
>   sock_put(sk) in __tcp_close()
I don't see how it is different here than in patch 5.
I could be missing something.

Let's continue the discussion on the other thread (patch 5) first.

> 
> 
> > > +   }
> > > +	spin_unlock(&queue->rskq_lock);
> > > +
> > > +   return migrated;
> > > +}
> > > +
> > >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
> > > 					 struct request_sock *req, bool own_req)
> > >  {
> > > @@ -1023,9 +1046,11 @@ EXPORT_SYMBOL(inet_csk_complete_hashdance);
> > >   */
> > >  void inet_csk_listen_stop(struct sock *sk)
> > >  {
> > > +	struct sock_reuseport *reuseport_cb =
> > > +		rcu_access_pointer(sk->sk_reuseport_cb);
> > > 	struct inet_connection_sock *icsk = inet_csk(sk);
> > > 	struct request_sock_queue *queue = &icsk->icsk_accept_queue;
> > > 	struct request_sock *next, *req;
> > > +	struct sock *nsk;
> > >  
> > > /* Following specs, it would be better either to send FIN
> > >  * (and enter FIN-WAIT-1, it is normal close)
> > > @@ -1043,8 +1068,19 @@ void inet_csk_listen_stop(struct sock *sk)
> > > WARN_ON(sock_owned_by_user(child));
> > > sock_hold(child);
> > >  
> > > +		if (reuseport_cb) {
> > > +			nsk = reuseport_select_migrated_sock(sk,
> > > +					req_to_sk(req)->sk_hash, NULL);
> > > +			if (nsk) {
> > > +				if (inet_csk_reqsk_queue_migrate(sk, nsk, req))
> > > +					goto unlock_sock;
> > > +				else
> > > +					sock_put(nsk);
> > > +			}
> > > +		}
> > > +
> > > 
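The list manipulation in the quoted inet_csk_reqsk_queue_migrate() can be sketched in user space as follows (hypothetical toy types; the real function additionally takes queue->rskq_lock and fixes up req->rsk_listener):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the tail-append in inet_csk_reqsk_queue_migrate():
 * a request is linked onto the new listener's accept queue only while
 * the target is still TCP_LISTEN; otherwise the caller keeps ownership
 * (and would sock_put() the candidate listener). Locking omitted. */
enum toy_state { TOY_TCP_LISTEN, TOY_TCP_CLOSE };

struct toy_req {
	struct toy_req *dl_next;
};

struct toy_listener {
	enum toy_state state;
	struct toy_req *head, *tail;
	int backlog;			/* models sk_ack_backlog */
};

static bool toy_queue_migrate(struct toy_listener *nsk, struct toy_req *req)
{
	if (nsk->state != TOY_TCP_LISTEN)
		return false;

	req->dl_next = NULL;
	if (nsk->head == NULL)
		nsk->head = req;	/* WRITE_ONCE() in the kernel */
	else
		nsk->tail->dl_next = req;
	nsk->tail = req;
	nsk->backlog++;			/* sk_acceptq_added() */
	return true;
}
```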

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-09 Thread Martin KaFai Lau
On Thu, Dec 10, 2020 at 01:57:19AM +0900, Kuniyuki Iwashima wrote:
[ ... ]

> > > > I think it is a bit complex to pass the new listener from
> > > > reuseport_detach_sock() to inet_csk_listen_stop().
> > > > 
> > > > __tcp_close/tcp_disconnect/tcp_abort
> > > >  |-tcp_set_state
> > > >  |  |-unhash
> > > >  | |-reuseport_detach_sock (return nsk)
> > > >  |-inet_csk_listen_stop
> > > Picking the new listener does not have to be done in
> > > reuseport_detach_sock().
> > > 
> > > IIUC, it is done there only because it prefers to pick
> > > the last sk from socks[] when bpf prog is not attached.
> > > This seems to get in the way of exploring other potential
> > > implementation options.
> > 
> > Yes.
> > This is just an idea, but we can reserve the last index of socks[] to hold the
> > last 'moved' socket in reuseport_detach_sock() and use it in
> > inet_csk_listen_stop().
> > 
> > 
> > > Merging the discussion on the last socks[] pick from another thread:
> > > >
> > > > I think most applications start new listeners before closing listeners;
> > > > in this case, selecting the moved socket as the new listener works well.
> > > >
> > > >
> > > > > That said, if it is still desired to do a random pick by the kernel when
> > > > > there is no bpf prog, it probably makes sense to guard it with a sysctl,
> > > > > as suggested in another reply.  To keep it simple, I would also keep this
> > > > > kernel-pick consistent instead of having the request socket do something
> > > > > different from the unhash path.
> > > >
> > > > Then, is this way better to keep kernel-pick consistent?
> > > >
> > > >   1. call reuseport_select_migrated_sock() without sk_hash from any path
> > > >   2. generate a random number in reuseport_select_migrated_sock()
> > > >   3. pass it to __reuseport_select_sock() only for select-by-hash
> > > >   (4. pass 0 as sk_hash to bpf_run_sk_reuseport not to use it)
> > > >   5. do migration per queue in inet_csk_listen_stop() or per request in
> > > >  receive path.
> > > >
> > > > I understand it is beautiful to keep consistency, but also think
> > > > the kernel-pick with heuristic performs better than random-pick.
> > > I think discussing the best kernel pick without explicit user input
> > > is going to be a dead end. There is always a case that
> > > makes this heuristic (or guess) fail.  e.g. what if multiple
> > > sk(s) being closed are always the last one in the socks[]?
> > > all their child sk(s) will then be piled up at one listen sk
> > > because the last socks[] is always picked?
> > 
> > There can be such a case, but it means the newly opened listeners are
> > closed earlier than the old ones.
> > 
> > 
> > > Let's assume the last socks[] is indeed the best for all cases.  Then why
> > > doesn't the in-progress req pick it this way?  I feel the implementation
> > > is doing what is convenient at that point.  And that is fine, I think
> > 
> > In this patchset, I originally assumed four things:
> > 
> >   migration should be done
> > (i)   from old to new
> > (ii)  to redistribute requests as evenly as possible
> > (iii) to keep the order of requests in the queue
> >   (resulting in splicing queues)
> > (iv)  in O(1) for scalability
> >   (resulting in fix-up rsk_listener approach)
> > 
> > I selected the last socket in the unhash path to satisfy the above four
> > because the last socket changes at every close() syscall if the application
> > closes from the oldest socket.
> > 
> > But when receiving an ACK or retransmitting a SYN+ACK, we cannot get the
> > last 'moved' socket. Even if we reserve the last 'moved' socket in the last
> > index by the idea above, we cannot be sure the last socket has changed after
> > close() for each req->listener. For example, we have listeners A, B, C, and
> > D, and then call close(A) and close(B), and receive the final ACKs for A
> > and B; then both of them are assigned to C. In this case, A for D and B for
> > C is desired. So, selecting the last socket in socks[] for incoming
> > requests cannot realize (ii).
> > 
> > This is why I selected the last moved socket in the unhash path and a
> > random listener in the receive path.
> > 
> > 
> > > for kernel-pick, it should just go for simplicity and stay with
> > > the random(/hash) pick instead of pretending the kernel knows how the
> > > application must operate.  It is fine if the pick is wrong; the kernel
> > > will eventually move the children/reqs to the surviving listen sk.
> > 
> > Exactly. Also the heuristic way is not fair for every application.
> > 
> > After reading the idea below (migrated_sk), I think random-pick is better
> > for simplicity, along with passing each sk.
> > 
> > 
> > > [ I still think the kernel should not even pick if
> > >   there is no bpf prog to instruct how to pick
> > >   but I am fine as long as there is a sysctl to
> > >   guard this. ]
> > 
> > Unless different applications listen on the same port, random-pick can save
> > connections which would 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-09 Thread Kuniyuki Iwashima
From:   Kuniyuki Iwashima 
Date:   Wed, 9 Dec 2020 17:05:09 +0900
> From:   Martin KaFai Lau 
> Date:   Tue, 8 Dec 2020 19:09:03 -0800
> > On Tue, Dec 08, 2020 at 05:17:48PM +0900, Kuniyuki Iwashima wrote:
> > > From:   Martin KaFai Lau 
> > > Date:   Mon, 7 Dec 2020 23:34:41 -0800
> > > > On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> > > > > From:   Martin KaFai Lau 
> > > > > Date:   Mon, 7 Dec 2020 12:33:15 -0800
> > > > > > On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > > > > > > From:   Eric Dumazet 
> > > > > > > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > > > > > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > > > > > > This patch lets reuseport_detach_sock() return a pointer to struct
> > > > > > > > > sock, which is used only by inet_unhash(). If it is not NULL,
> > > > > > > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > > > > > sockets from the closing listener to the selected one.
> > > > > > > > > 
> > > > > > > > > Listening sockets hold incoming connections as a linked list of
> > > > > > > > > struct request_sock in the accept queue, and each request has a
> > > > > > > > > reference to a full socket and its listener. In
> > > > > > > > > inet_csk_reqsk_queue_migrate(), we only unlink the requests from the
> > > > > > > > > closing listener's queue and relink them to the head of the new
> > > > > > > > > listener's queue. We do not process each request and its reference
> > > > > > > > > to the listener, so the migration completes in O(1) time complexity.
> > > > > > > > > However, in the case of TCP_SYN_RECV sockets, we take special care
> > > > > > > > > in the next commit.
> > > > > > > > > 
> > > > > > > > > By default, the kernel selects a new listener randomly. In order to
> > > > > > > > > pick out a different socket every time, we select the last element
> > > > > > > > > of socks[] as the new listener. This behaviour is based on how the
> > > > > > > > > kernel moves sockets in socks[]. (See also [1])
> > > > > > > > > 
> > > > > > > > > Basically, in order to redistribute sockets evenly, we have to use
> > > > > > > > > an eBPF program called in the later commit, but as a side effect of
> > > > > > > > > such default selection, the kernel can redistribute old requests
> > > > > > > > > evenly to new listeners for a specific case where the application
> > > > > > > > > replaces listeners by generations.
> > > > > > > > > 
> > > > > > > > > For example, we call listen() for four sockets (A, B, C, D), and
> > > > > > > > > close the first two in turn. The sockets move in socks[] as below.
> > > > > > > > > 
> > > > > > > > >   socks[0] : A <-.      socks[0] : D          socks[0] : D
> > > > > > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > > > > > >   socks[2] : C   |      socks[2] : C --'
> > > > > > > > >   socks[3] : D --'
> > > > > > > > > 
> > > > > > > > > Then, if C and D have newer settings than A and B, and each socket
> > > > > > > > > has a request (a, b, c, d) in its accept queue, we can redistribute
> > > > > > > > > old requests evenly to new listeners.
> > > > > > > > > 
> > > > > > > > >   socks[0] : A (a) <-.      socks[0] : D (a + d)      socks[0] : D (a + d)
> > > > > > > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > > > > > > > >   socks[2] : C (c)   |      socks[2] : C (c) --'
> > > > > > > > >   socks[3] : D (d) --'
> > > > > > > > > 
> > > > > > > > > Here, (A, D) or (B, C) can have different application settings, but
> > > > > > > > > they MUST have the same settings at the socket API level; otherwise,
> > > > > > > > > unexpected errors may happen. For instance, if only the new
> > > > > > > > > listeners have TCP_SAVE_SYN, old requests do not have SYN data, so
> > > > > > > > > the application will face inconsistency and cause an error.
> > > > > > > > > 
> > > > > > > > > Therefore, if there are different kinds of sockets, we must attach
> > > > > > > > > an eBPF program described in later commits.
> > > > > > > > > 
> > > > > > > > > Link: 
> > > > > > > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > > > > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > > > > > > Signed-off-by: Kuniyuki Iwashima 
> > > > > > > > > ---
> > > > > > > > >  include/net/inet_connection_sock.h |  1 +
> > > > > > > > >  include/net/sock_reuseport.h   |  2 +-
> > > > > > > > >  net/core/sock_reuseport.c  

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-09 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Tue, 8 Dec 2020 19:09:03 -0800
> On Tue, Dec 08, 2020 at 05:17:48PM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau 
> > Date:   Mon, 7 Dec 2020 23:34:41 -0800
> > > On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> > > > From:   Martin KaFai Lau 
> > > > Date:   Mon, 7 Dec 2020 12:33:15 -0800
> > > > > On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > > > > > From:   Eric Dumazet 
> > > > > > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > > > > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > > > > > This patch lets reuseport_detach_sock() return a pointer of 
> > > > > > > > struct sock,
> > > > > > > > which is used only by inet_unhash(). If it is not NULL,
> > > > > > > > inet_csk_reqsk_queue_migrate() migrates 
> > > > > > > > TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > > > > sockets from the closing listener to the selected one.
> > > > > > > > 
> > > > > > > > Listening sockets hold incoming connections as a linked list of 
> > > > > > > > struct
> > > > > > > > request_sock in the accept queue, and each request has 
> > > > > > > > reference to a full
> > > > > > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we 
> > > > > > > > only unlink
> > > > > > > > the requests from the closing listener's queue and relink them 
> > > > > > > > to the head
> > > > > > > > of the new listener's queue. We do not process each request and 
> > > > > > > > its
> > > > > > > > reference to the listener, so the migration completes in O(1) 
> > > > > > > > time
> > > > > > > > complexity. However, in the case of TCP_SYN_RECV sockets, we 
> > > > > > > > take special
> > > > > > > > care in the next commit.
> > > > > > > > 
> > > > > > > > By default, the kernel selects a new listener randomly. In 
> > > > > > > > order to pick
> > > > > > > > out a different socket every time, we select the last element 
> > > > > > > > of socks[] as
> > > > > > > > the new listener. This behaviour is based on how the kernel 
> > > > > > > > moves sockets
> > > > > > > > in socks[]. (See also [1])
> > > > > > > > 
> > > > > > > > Basically, in order to redistribute sockets evenly, we have to 
> > > > > > > > use an eBPF
> > > > > > > > program called in the later commit, but as the side effect of 
> > > > > > > > such default
> > > > > > > > selection, the kernel can redistribute old requests evenly to 
> > > > > > > > new listeners
> > > > > > > > for a specific case where the application replaces listeners by
> > > > > > > > generations.
> > > > > > > > 
> > > > > > > > For example, we call listen() for four sockets (A, B, C, D), 
> > > > > > > > and close the
> > > > > > > > first two by turns. The sockets move in socks[] like below.
> > > > > > > > 
> > > > > > > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > > > > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > > > > >   socks[2] : C   |  socks[2] : C --'
> > > > > > > >   socks[3] : D --'
> > > > > > > > 
> > > > > > > > Then, if C and D have newer settings than A and B, and each 
> > > > > > > > socket has a
> > > > > > > > request (a, b, c, d) in their accept queue, we can redistribute 
> > > > > > > > old
> > > > > > > > requests evenly to new listeners.
> > > > > > > > 
> > > > > > > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] 
> > > > > > > > : D (a + d)
> > > > > > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] 
> > > > > > > > : C (b + c)
> > > > > > > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > > > > > > >   socks[3] : D (d) --'
> > > > > > > > 
> > > > > > > > Here, (A, D) or (B, C) can have different application settings, 
> > > > > > > > but they
> > > > > > > > MUST have the same settings at the socket API level; otherwise, 
> > > > > > > > unexpected
> > > > > > > > error may happen. For instance, if only the new listeners have
> > > > > > > > TCP_SAVE_SYN, old requests do not have SYN data, so the 
> > > > > > > > application will
> > > > > > > > face inconsistency and cause an error.
> > > > > > > > 
> > > > > > > > Therefore, if there are different kinds of sockets, we must 
> > > > > > > > attach an eBPF
> > > > > > > > program described in later commits.
> > > > > > > > 
> > > > > > > > Link: 
> > > > > > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > > > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > > > > > Signed-off-by: Kuniyuki Iwashima 
> > > > > > > > ---
> > > > > > > >  include/net/inet_connection_sock.h |  1 +
> > > > > > > >  include/net/sock_reuseport.h   |  2 +-
> > > > > > > >  net/core/sock_reuseport.c  | 10 +-
> > > > > > > >  net/ipv4/inet_connection_sock.c| 30 
> > > > > > > > ++
> > > > > > > >  net/ipv4/inet_hashtables.c |  9 +++--
> > > > > > > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > > > > > > 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-08 Thread Martin KaFai Lau
On Tue, Dec 08, 2020 at 05:17:48PM +0900, Kuniyuki Iwashima wrote:
> From:   Martin KaFai Lau 
> Date:   Mon, 7 Dec 2020 23:34:41 -0800
> > On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> > > From:   Martin KaFai Lau 
> > > Date:   Mon, 7 Dec 2020 12:33:15 -0800
> > > > On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > > > > From:   Eric Dumazet 
> > > > > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > > > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > > > > This patch lets reuseport_detach_sock() return a pointer of 
> > > > > > > struct sock,
> > > > > > > which is used only by inet_unhash(). If it is not NULL,
> > > > > > > inet_csk_reqsk_queue_migrate() migrates 
> > > > > > > TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > > > sockets from the closing listener to the selected one.
> > > > > > > 
> > > > > > > Listening sockets hold incoming connections as a linked list of 
> > > > > > > struct
> > > > > > > request_sock in the accept queue, and each request has reference 
> > > > > > > to a full
> > > > > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we 
> > > > > > > only unlink
> > > > > > > the requests from the closing listener's queue and relink them to 
> > > > > > > the head
> > > > > > > of the new listener's queue. We do not process each request and 
> > > > > > > its
> > > > > > > reference to the listener, so the migration completes in O(1) time
> > > > > > > complexity. However, in the case of TCP_SYN_RECV sockets, we take 
> > > > > > > special
> > > > > > > care in the next commit.
> > > > > > > 
> > > > > > > By default, the kernel selects a new listener randomly. In order 
> > > > > > > to pick
> > > > > > > out a different socket every time, we select the last element of 
> > > > > > > socks[] as
> > > > > > > the new listener. This behaviour is based on how the kernel moves 
> > > > > > > sockets
> > > > > > > in socks[]. (See also [1])
> > > > > > > 
> > > > > > > Basically, in order to redistribute sockets evenly, we have to 
> > > > > > > use an eBPF
> > > > > > > program called in the later commit, but as the side effect of 
> > > > > > > such default
> > > > > > > selection, the kernel can redistribute old requests evenly to new 
> > > > > > > listeners
> > > > > > > for a specific case where the application replaces listeners by
> > > > > > > generations.
> > > > > > > 
> > > > > > > For example, we call listen() for four sockets (A, B, C, D), and 
> > > > > > > close the
> > > > > > > first two by turns. The sockets move in socks[] like below.
> > > > > > > 
> > > > > > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > > > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > > > >   socks[2] : C   |  socks[2] : C --'
> > > > > > >   socks[3] : D --'
> > > > > > > 
> > > > > > > Then, if C and D have newer settings than A and B, and each 
> > > > > > > socket has a
> > > > > > > request (a, b, c, d) in their accept queue, we can redistribute 
> > > > > > > old
> > > > > > > requests evenly to new listeners.
> > > > > > > 
> > > > > > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> > > > > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > > > > > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > > > > > >   socks[3] : D (d) --'
> > > > > > > 
> > > > > > > Here, (A, D) or (B, C) can have different application settings, 
> > > > > > > but they
> > > > > > > MUST have the same settings at the socket API level; otherwise, 
> > > > > > > unexpected
> > > > > > > error may happen. For instance, if only the new listeners have
> > > > > > > TCP_SAVE_SYN, old requests do not have SYN data, so the 
> > > > > > > application will
> > > > > > > face inconsistency and cause an error.
> > > > > > > 
> > > > > > > Therefore, if there are different kinds of sockets, we must 
> > > > > > > attach an eBPF
> > > > > > > program described in later commits.
> > > > > > > 
> > > > > > > Link: 
> > > > > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > > > > Signed-off-by: Kuniyuki Iwashima 
> > > > > > > ---
> > > > > > >  include/net/inet_connection_sock.h |  1 +
> > > > > > >  include/net/sock_reuseport.h   |  2 +-
> > > > > > >  net/core/sock_reuseport.c  | 10 +-
> > > > > > >  net/ipv4/inet_connection_sock.c| 30 
> > > > > > > ++
> > > > > > >  net/ipv4/inet_hashtables.c |  9 +++--
> > > > > > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/include/net/inet_connection_sock.h 
> > > > > > > b/include/net/inet_connection_sock.h
> > > > > > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > > > > > --- a/include/net/inet_connection_sock.h
> > > > > > > +++ 
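[The socks[] compaction and default pick described in the quoted commit message can be modelled in a few lines. This is a Python simulation of the C array logic, not kernel API; the names are illustrative:]

```python
def detach(socks, closing):
    """Model of reuseport_detach_sock()'s array compaction and default pick.

    Mirrors the quoted C:
        reuse->socks[i] = reuse->socks[reuse->num_socks];
        nsk = i == reuse->num_socks ? reuse->socks[i - 1] : reuse->socks[i];
    """
    i = socks.index(closing)
    last = len(socks) - 1          # reuse->num_socks after the decrement
    socks[i] = socks[last]         # move the last socket into the hole
    socks.pop()                    # shrink the array
    # The moved socket becomes the migration target; if the closing socket
    # was itself in the last slot, fall back to its left neighbour.
    return socks[i - 1] if i == len(socks) else socks[i]

socks = ["A", "B", "C", "D"]
print(detach(socks, "A"), socks)   # D ['D', 'B', 'C']
print(detach(socks, "B"), socks)   # C ['D', 'C']
```

[Detaching A then B hands their queues to D and C respectively, matching the redistribution diagram in the commit message.]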

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-08 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Tue, 8 Dec 2020 00:13:28 -0800
> On Tue, Dec 08, 2020 at 03:27:14PM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau 
> > Date:   Mon, 7 Dec 2020 12:14:38 -0800
> > > On Sun, Dec 06, 2020 at 01:03:07AM +0900, Kuniyuki Iwashima wrote:
> > > > From:   Martin KaFai Lau 
> > > > Date:   Fri, 4 Dec 2020 17:42:41 -0800
> > > > > On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> > > > > [ ... ]
> > > > > > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > > > > > index fd133516ac0e..60d7c1f28809 100644
> > > > > > --- a/net/core/sock_reuseport.c
> > > > > > +++ b/net/core/sock_reuseport.c
> > > > > > @@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct 
> > > > > > sock *sk2, bool bind_inany)
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(reuseport_add_sock);
> > > > > >  
> > > > > > -void reuseport_detach_sock(struct sock *sk)
> > > > > > +struct sock *reuseport_detach_sock(struct sock *sk)
> > > > > >  {
> > > > > > struct sock_reuseport *reuse;
> > > > > > +   struct bpf_prog *prog;
> > > > > > +   struct sock *nsk = NULL;
> > > > > > int i;
> > > > > >  
> > > > > > spin_lock_bh(&reuseport_lock);
> > > > > > @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
> > > > > >  
> > > > > > reuse->num_socks--;
> > > > > > reuse->socks[i] = reuse->socks[reuse->num_socks];
> > > > > > +   prog = rcu_dereference(reuse->prog);
> > > > > Is it under rcu_read_lock() here?
> > > > 
> > > > reuseport_lock is locked in this function, and we do not modify the 
> > > > prog,
> > > > but is rcu_dereference_protected() preferable?
> > > > 
> > > > ---8<---
> > > > prog = rcu_dereference_protected(reuse->prog,
> > > >  lockdep_is_held(&reuseport_lock));
> > > > ---8<---
> > > It is not only reuse->prog.  Other things also require rcu_read_lock(),
> > > e.g. please take a look at __htab_map_lookup_elem().
> > > 
> > > The TCP_LISTEN sk (selected by bpf to be the target of the migration)
> > > is also protected by rcu.
> > 
> > Thank you, I will use rcu_read_lock() and rcu_dereference() in v3 patchset.
> > 
> > 
> > > I am surprised there is no WARNING in the test.
> > > Do you have the needed DEBUG_LOCK* config enabled?
> > 
> > Yes, DEBUG_LOCK* was 'y', but rcu_dereference() without rcu_read_lock()
> > does not show warnings...
> I would at least expect the "WARN_ON_ONCE(!rcu_read_lock_held() ...)"
> from __htab_map_lookup_elem() should fire in your test
> example in the last patch.
> 
> It is better to check the config before sending v3.

It seems ok, but I will check it again.

---8<---
[ec2-user@ip-10-0-0-124 bpf-next]$ cat .config | grep DEBUG_LOCK
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_DEBUG_LOCKDEP=y
CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
---8<---


> > > > > > diff --git a/net/ipv4/inet_connection_sock.c 
> > > > > > b/net/ipv4/inet_connection_sock.c
> > > > > > index 1451aa9712b0..b27241ea96bd 100644
> > > > > > --- a/net/ipv4/inet_connection_sock.c
> > > > > > +++ b/net/ipv4/inet_connection_sock.c
> > > > > > @@ -992,6 +992,36 @@ struct sock *inet_csk_reqsk_queue_add(struct 
> > > > > > sock *sk,
> > > > > >  }
> > > > > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > > > > >  
> > > > > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock 
> > > > > > *nsk)
> > > > > > +{
> > > > > > +   struct request_sock_queue *old_accept_queue, *new_accept_queue;
> > > > > > +
> > > > > > +   old_accept_queue = &inet_csk(sk)->icsk_accept_queue;
> > > > > > +   new_accept_queue = &inet_csk(nsk)->icsk_accept_queue;
> > > > > > +
> > > > > > +   spin_lock(&old_accept_queue->rskq_lock);
> > > > > > +   spin_lock(&new_accept_queue->rskq_lock);
> > > > > I am also not very thrilled on this double spin_lock.
> > > > > Can this be done in (or like) inet_csk_listen_stop() instead?
> > > > 
> > > > It will be possible to migrate sockets in inet_csk_listen_stop(), but I
> > > > think it is better to do it just after reuseport_detach_sock() because 
> > > > we
> > > > can select a different listener (almost) every time at a lower cost by
> > > > selecting the moved socket and pass it to inet_csk_reqsk_queue_migrate()
> > > > easily.
> > > I don't see the "lower cost" point.  Please elaborate.
> > 
> > In reuseport_select_sock(), we pass sk_hash of the request socket to
> > reciprocal_scale() and generate a random index for socks[] to select
> > a different listener every time.
> > On the other hand, we do not have request sockets in unhash path and
> > sk_hash of the listener is always 0, so we have to generate a random number
> > in another way. In reuseport_detach_sock(), we can use the index of the
> > moved socket, but we do not have it in inet_csk_listen_stop(), so we have
> > to generate a random number in inet_csk_listen_stop().
> > I think it is at lower cost to use the index of the moved socket.
> Generating a random number is not a big deal for the migration code path.
> 
> Also, I 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-08 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Mon, 7 Dec 2020 23:34:41 -0800
> On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau 
> > Date:   Mon, 7 Dec 2020 12:33:15 -0800
> > > On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > > > From:   Eric Dumazet 
> > > > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > > > This patch lets reuseport_detach_sock() return a pointer of struct 
> > > > > > sock,
> > > > > > which is used only by inet_unhash(). If it is not NULL,
> > > > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > > sockets from the closing listener to the selected one.
> > > > > > 
> > > > > > Listening sockets hold incoming connections as a linked list of 
> > > > > > struct
> > > > > > request_sock in the accept queue, and each request has reference to 
> > > > > > a full
> > > > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only 
> > > > > > unlink
> > > > > > the requests from the closing listener's queue and relink them to 
> > > > > > the head
> > > > > > of the new listener's queue. We do not process each request and its
> > > > > > reference to the listener, so the migration completes in O(1) time
> > > > > > complexity. However, in the case of TCP_SYN_RECV sockets, we take 
> > > > > > special
> > > > > > care in the next commit.
> > > > > > 
> > > > > > By default, the kernel selects a new listener randomly. In order to 
> > > > > > pick
> > > > > > out a different socket every time, we select the last element of 
> > > > > > socks[] as
> > > > > > the new listener. This behaviour is based on how the kernel moves 
> > > > > > sockets
> > > > > > in socks[]. (See also [1])
> > > > > > 
> > > > > > Basically, in order to redistribute sockets evenly, we have to use 
> > > > > > an eBPF
> > > > > > program called in the later commit, but as the side effect of such 
> > > > > > default
> > > > > > selection, the kernel can redistribute old requests evenly to new 
> > > > > > listeners
> > > > > > for a specific case where the application replaces listeners by
> > > > > > generations.
> > > > > > 
> > > > > > For example, we call listen() for four sockets (A, B, C, D), and 
> > > > > > close the
> > > > > > first two by turns. The sockets move in socks[] like below.
> > > > > > 
> > > > > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > > >   socks[2] : C   |  socks[2] : C --'
> > > > > >   socks[3] : D --'
> > > > > > 
> > > > > > Then, if C and D have newer settings than A and B, and each socket 
> > > > > > has a
> > > > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > > > requests evenly to new listeners.
> > > > > > 
> > > > > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> > > > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > > > > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > > > > >   socks[3] : D (d) --'
> > > > > > 
> > > > > > Here, (A, D) or (B, C) can have different application settings, but 
> > > > > > they
> > > > > > MUST have the same settings at the socket API level; otherwise, 
> > > > > > unexpected
> > > > > > error may happen. For instance, if only the new listeners have
> > > > > > TCP_SAVE_SYN, old requests do not have SYN data, so the application 
> > > > > > will
> > > > > > face inconsistency and cause an error.
> > > > > > 
> > > > > > Therefore, if there are different kinds of sockets, we must attach 
> > > > > > an eBPF
> > > > > > program described in later commits.
> > > > > > 
> > > > > > Link: 
> > > > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > > > Signed-off-by: Kuniyuki Iwashima 
> > > > > > ---
> > > > > >  include/net/inet_connection_sock.h |  1 +
> > > > > >  include/net/sock_reuseport.h   |  2 +-
> > > > > >  net/core/sock_reuseport.c  | 10 +-
> > > > > >  net/ipv4/inet_connection_sock.c| 30 
> > > > > > ++
> > > > > >  net/ipv4/inet_hashtables.c |  9 +++--
> > > > > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > > > > 
> > > > > > diff --git a/include/net/inet_connection_sock.h 
> > > > > > b/include/net/inet_connection_sock.h
> > > > > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > > > > --- a/include/net/inet_connection_sock.h
> > > > > > +++ b/include/net/inet_connection_sock.h
> > > > > > @@ -260,6 +260,7 @@ struct dst_entry 
> > > > > > *inet_csk_route_child_sock(const struct sock *sk,
> > > > > >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > > > >   struct request_sock *req,
> > > > > >   struct 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-08 Thread Martin KaFai Lau
On Tue, Dec 08, 2020 at 03:27:14PM +0900, Kuniyuki Iwashima wrote:
> From:   Martin KaFai Lau 
> Date:   Mon, 7 Dec 2020 12:14:38 -0800
> > On Sun, Dec 06, 2020 at 01:03:07AM +0900, Kuniyuki Iwashima wrote:
> > > From:   Martin KaFai Lau 
> > > Date:   Fri, 4 Dec 2020 17:42:41 -0800
> > > > On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> > > > [ ... ]
> > > > > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > > > > index fd133516ac0e..60d7c1f28809 100644
> > > > > --- a/net/core/sock_reuseport.c
> > > > > +++ b/net/core/sock_reuseport.c
> > > > > @@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct 
> > > > > sock *sk2, bool bind_inany)
> > > > >  }
> > > > >  EXPORT_SYMBOL(reuseport_add_sock);
> > > > >  
> > > > > -void reuseport_detach_sock(struct sock *sk)
> > > > > +struct sock *reuseport_detach_sock(struct sock *sk)
> > > > >  {
> > > > >   struct sock_reuseport *reuse;
> > > > > + struct bpf_prog *prog;
> > > > > + struct sock *nsk = NULL;
> > > > >   int i;
> > > > >  
> > > > >   spin_lock_bh(&reuseport_lock);
> > > > > @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
> > > > >  
> > > > >   reuse->num_socks--;
> > > > >   reuse->socks[i] = reuse->socks[reuse->num_socks];
> > > > > + prog = rcu_dereference(reuse->prog);
> > > > Is it under rcu_read_lock() here?
> > > 
> > > reuseport_lock is locked in this function, and we do not modify the prog,
> > > but is rcu_dereference_protected() preferable?
> > > 
> > > ---8<---
> > > prog = rcu_dereference_protected(reuse->prog,
> > >    lockdep_is_held(&reuseport_lock));
> > > ---8<---
> > It is not only reuse->prog.  Other things also require rcu_read_lock(),
> > e.g. please take a look at __htab_map_lookup_elem().
> > 
> > The TCP_LISTEN sk (selected by bpf to be the target of the migration)
> > is also protected by rcu.
> 
> Thank you, I will use rcu_read_lock() and rcu_dereference() in v3 patchset.
> 
> 
> > I am surprised there is no WARNING in the test.
> > Do you have the needed DEBUG_LOCK* config enabled?
> 
> Yes, DEBUG_LOCK* was 'y', but rcu_dereference() without rcu_read_lock()
> does not show warnings...
I would at least expect the "WARN_ON_ONCE(!rcu_read_lock_held() ...)"
from __htab_map_lookup_elem() should fire in your test
example in the last patch.

It is better to check the config before sending v3.

[ ... ]

> > > > > diff --git a/net/ipv4/inet_connection_sock.c 
> > > > > b/net/ipv4/inet_connection_sock.c
> > > > > index 1451aa9712b0..b27241ea96bd 100644
> > > > > --- a/net/ipv4/inet_connection_sock.c
> > > > > +++ b/net/ipv4/inet_connection_sock.c
> > > > > @@ -992,6 +992,36 @@ struct sock *inet_csk_reqsk_queue_add(struct 
> > > > > sock *sk,
> > > > >  }
> > > > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > > > >  
> > > > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk)
> > > > > +{
> > > > > + struct request_sock_queue *old_accept_queue, *new_accept_queue;
> > > > > +
> > > > > + old_accept_queue = &inet_csk(sk)->icsk_accept_queue;
> > > > > + new_accept_queue = &inet_csk(nsk)->icsk_accept_queue;
> > > > > +
> > > > > + spin_lock(&old_accept_queue->rskq_lock);
> > > > > + spin_lock(&new_accept_queue->rskq_lock);
> > > > I am also not very thrilled on this double spin_lock.
> > > > Can this be done in (or like) inet_csk_listen_stop() instead?
> > > 
> > > It will be possible to migrate sockets in inet_csk_listen_stop(), but I
> > > think it is better to do it just after reuseport_detach_sock() because we
> > > can select a different listener (almost) every time at a lower cost by
> > > selecting the moved socket and pass it to inet_csk_reqsk_queue_migrate()
> > > easily.
> > I don't see the "lower cost" point.  Please elaborate.
> 
> In reuseport_select_sock(), we pass sk_hash of the request socket to
> reciprocal_scale() and generate a random index for socks[] to select
> a different listener every time.
> On the other hand, we do not have request sockets in unhash path and
> sk_hash of the listener is always 0, so we have to generate a random number
> in another way. In reuseport_detach_sock(), we can use the index of the
> moved socket, but we do not have it in inet_csk_listen_stop(), so we have
> to generate a random number in inet_csk_listen_stop().
> I think it is at lower cost to use the index of the moved socket.
Generating a random number is not a big deal for the migration code path.

Also, I still fail to see how a particular kernel pick will help
in the migration case.  The kernel has no clue
on how to select the right process to migrate to without
a proper policy signal from the user.  They are all as bad as
a random pick.  I am not sure this migration feature is
even useful if there is no bpf prog attached to define the policy.
That said, if it is still desired to do a random pick by kernel when
there is no bpf prog, it probably makes sense to 
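
[For reference, the reciprocal_scale() helper discussed in the quoted exchange maps a 32-bit hash onto an index in [0, ep_ro) without a division. A Python model of the kernel formula `(u32)(((u64)val * ep_ro) >> 32)`:]

```python
def reciprocal_scale(val, ep_ro):
    """Python model of the kernel's reciprocal_scale(): scale a 32-bit
    value uniformly onto [0, ep_ro) using a multiply and shift."""
    return ((val & 0xffffffff) * ep_ro) >> 32

# With a request socket's sk_hash, each hash lands on a (pseudo)random
# slot of socks[]; a listener's sk_hash of 0 always yields slot 0, which
# is why the unhash path needs another source of randomness.
print(reciprocal_scale(0, 4))            # 0
print(reciprocal_scale(0x80000000, 4))   # 2
print(reciprocal_scale(0xffffffff, 4))   # 3
```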

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Mon, 7 Dec 2020 22:54:18 -0800
> On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> 
> > @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
> >  
> > reuse->num_socks--;
> > reuse->socks[i] = reuse->socks[reuse->num_socks];
> > +   prog = rcu_dereference(reuse->prog);
> >  
> > if (sk->sk_protocol == IPPROTO_TCP) {
> > +   if (reuse->num_socks && !prog)
> > +   nsk = i == reuse->num_socks ? reuse->socks[i - 1] : reuse->socks[i];
> I asked in the earlier thread if the primary use case is to only
> use the bpf prog to pick.  That thread did not come to
> a solid answer but did conclude that the sysctl should not
> control the behavior of the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE prog.
> 
> From this change here, it seems it is still desired to only depend
> on the kernel to random pick even when no bpf prog is attached.

I wrote this way only to split patches into tcp and bpf parts.
So, in the 10th patch, eBPF prog is run if the type is
BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.
https://lore.kernel.org/netdev/20201201144418.35045-11-kun...@amazon.co.jp/

But it introduces a breakage, so I will move the
BPF_SK_REUSEPORT_SELECT_OR_MIGRATE validation into the 10th patch so that
the type is only available after the 10th patch.

---8<---
case BPF_PROG_TYPE_SK_REUSEPORT:
switch (expected_attach_type) {
case BPF_SK_REUSEPORT_SELECT:
case BPF_SK_REUSEPORT_SELECT_OR_MIGRATE: <- move to 10th.
return 0;
default:
return -EINVAL;
}
---8<---


> If that is the case, a sysctl to guard here for not changing
> the current behavior makes sense.
> It should still only control the non-bpf-pick behavior:
> when the sysctl is on, the kernel will still do a random pick
> when there is no bpf prog attached to the reuseport group.
> Thoughts?

If different applications listen on the same port without an eBPF prog, I
think the sysctl is necessary. But honestly, I am not sure such a case
really exists or that the sysctl is needed.

If a patchset with the sysctl is more acceptable, I will add it back in the
next spin.


> > +
> > reuse->num_closed_socks++;
> > reuse->socks[reuse->max_socks - 
> > reuse->num_closed_socks] = sk;
> > } else {
> > @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
> > call_rcu(&reuse->rcu, reuseport_free_rcu);
> >  out:
> > spin_unlock_bh(&reuseport_lock);
> > +
> > +   return nsk;
> >  }
> >  EXPORT_SYMBOL(reuseport_detach_sock);


Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Martin KaFai Lau
On Tue, Dec 08, 2020 at 03:31:34PM +0900, Kuniyuki Iwashima wrote:
> From:   Martin KaFai Lau 
> Date:   Mon, 7 Dec 2020 12:33:15 -0800
> > On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > > From:   Eric Dumazet 
> > > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > > This patch lets reuseport_detach_sock() return a pointer of struct 
> > > > > sock,
> > > > > which is used only by inet_unhash(). If it is not NULL,
> > > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > > sockets from the closing listener to the selected one.
> > > > > 
> > > > > Listening sockets hold incoming connections as a linked list of struct
> > > > > request_sock in the accept queue, and each request has reference to a 
> > > > > full
> > > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only 
> > > > > unlink
> > > > > the requests from the closing listener's queue and relink them to the 
> > > > > head
> > > > > of the new listener's queue. We do not process each request and its
> > > > > reference to the listener, so the migration completes in O(1) time
> > > > > complexity. However, in the case of TCP_SYN_RECV sockets, we take 
> > > > > special
> > > > > care in the next commit.
> > > > > 
> > > > > By default, the kernel selects a new listener randomly. In order to 
> > > > > pick
> > > > > out a different socket every time, we select the last element of 
> > > > > socks[] as
> > > > > the new listener. This behaviour is based on how the kernel moves 
> > > > > sockets
> > > > > in socks[]. (See also [1])
> > > > > 
> > > > > Basically, in order to redistribute sockets evenly, we have to use an 
> > > > > eBPF
> > > > > program called in the later commit, but as the side effect of such 
> > > > > default
> > > > > selection, the kernel can redistribute old requests evenly to new 
> > > > > listeners
> > > > > for a specific case where the application replaces listeners by
> > > > > generations.
> > > > > 
> > > > > For example, we call listen() for four sockets (A, B, C, D), and 
> > > > > close the
> > > > > first two by turns. The sockets move in socks[] like below.
> > > > > 
> > > > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > > >   socks[2] : C   |  socks[2] : C --'
> > > > >   socks[3] : D --'
> > > > > 
> > > > > Then, if C and D have newer settings than A and B, and each socket 
> > > > > has a
> > > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > > requests evenly to new listeners.
> > > > > 
> > > > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> > > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > > > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > > > >   socks[3] : D (d) --'
> > > > > 
> > > > > Here, (A, D) or (B, C) can have different application settings, but 
> > > > > they
> > > > > MUST have the same settings at the socket API level; otherwise, 
> > > > > unexpected
> > > > > error may happen. For instance, if only the new listeners have
> > > > > TCP_SAVE_SYN, old requests do not have SYN data, so the application 
> > > > > will
> > > > > face inconsistency and cause an error.
> > > > > 
> > > > > Therefore, if there are different kinds of sockets, we must attach an 
> > > > > eBPF
> > > > > program described in later commits.
> > > > > 
> > > > > Link: 
> > > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > > Signed-off-by: Kuniyuki Iwashima 
> > > > > ---
> > > > >  include/net/inet_connection_sock.h |  1 +
> > > > >  include/net/sock_reuseport.h   |  2 +-
> > > > >  net/core/sock_reuseport.c  | 10 +-
> > > > >  net/ipv4/inet_connection_sock.c| 30 
> > > > > ++
> > > > >  net/ipv4/inet_hashtables.c |  9 +++--
> > > > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > > > 
> > > > > diff --git a/include/net/inet_connection_sock.h 
> > > > > b/include/net/inet_connection_sock.h
> > > > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > > > --- a/include/net/inet_connection_sock.h
> > > > > +++ b/include/net/inet_connection_sock.h
> > > > > @@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const 
> > > > > struct sock *sk,
> > > > >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > > > struct request_sock *req,
> > > > > struct sock *child);
> > > > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
> > > > >  void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct 
> > > > > request_sock *req,
> > > > >  unsigned long timeout);
> > > > >  

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Martin KaFai Lau
On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:

> @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
>  
>   reuse->num_socks--;
>   reuse->socks[i] = reuse->socks[reuse->num_socks];
> + prog = rcu_dereference(reuse->prog);
>  
>   if (sk->sk_protocol == IPPROTO_TCP) {
> + if (reuse->num_socks && !prog)
> + nsk = i == reuse->num_socks ? reuse->socks[i - 1] : reuse->socks[i];
I asked in the earlier thread if the primary use case is to only
use the bpf prog to pick.  That thread did not come to
a solid answer but did conclude that the sysctl should not
control the behavior of the BPF_SK_REUSEPORT_SELECT_OR_MIGRATE prog.

From this change here, it seems it is still desired to only depend
on the kernel to random pick even when no bpf prog is attached.
If that is the case, a sysctl to guard here for not changing
the current behavior makes sense.
It should still only control the non-bpf-pick behavior:
when the sysctl is on, the kernel will still do a random pick
when there is no bpf prog attached to the reuseport group.
Thoughts?

> +
>   reuse->num_closed_socks++;
>   reuse->socks[reuse->max_socks - 
> reuse->num_closed_socks] = sk;
>   } else {
> @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
>   call_rcu(&reuse->rcu, reuseport_free_rcu);
>  out:
>   spin_unlock_bh(&reuseport_lock);
> +
> + return nsk;
>  }
>  EXPORT_SYMBOL(reuseport_detach_sock);



Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Mon, 7 Dec 2020 12:33:15 -0800
> On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> > From:   Eric Dumazet 
> > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > This patch lets reuseport_detach_sock() return a pointer of struct sock,
> > > > which is used only by inet_unhash(). If it is not NULL,
> > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > sockets from the closing listener to the selected one.
> > > > 
> > > > Listening sockets hold incoming connections as a linked list of struct
> > > > request_sock in the accept queue, and each request has reference to a 
> > > > full
> > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only 
> > > > unlink
> > > > the requests from the closing listener's queue and relink them to the 
> > > > head
> > > > of the new listener's queue. We do not process each request and its
> > > > reference to the listener, so the migration completes in O(1) time
> > > > complexity. However, in the case of TCP_SYN_RECV sockets, we take 
> > > > special
> > > > care in the next commit.
> > > > 
> > > > By default, the kernel selects a new listener randomly. In order to pick
> > > > out a different socket every time, we select the last element of 
> > > > socks[] as
> > > > the new listener. This behaviour is based on how the kernel moves 
> > > > sockets
> > > > in socks[]. (See also [1])
> > > > 
> > > > Basically, in order to redistribute sockets evenly, we have to use an 
> > > > eBPF
> > > > program called in the later commit, but as the side effect of such 
> > > > default
> > > > selection, the kernel can redistribute old requests evenly to new 
> > > > listeners
> > > > for a specific case where the application replaces listeners by
> > > > generations.
> > > > 
> > > > For example, we call listen() for four sockets (A, B, C, D), and close 
> > > > the
> > > > first two by turns. The sockets move in socks[] like below.
> > > > 
> > > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > >   socks[2] : C   |  socks[2] : C --'
> > > >   socks[3] : D --'
> > > > 
> > > > Then, if C and D have newer settings than A and B, and each socket has a
> > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > requests evenly to new listeners.
> > > > 
> > > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > > >   socks[3] : D (d) --'
> > > > 
> > > > Here, (A, D) or (B, C) can have different application settings, but they
> > > > MUST have the same settings at the socket API level; otherwise, 
> > > > unexpected
> > > > error may happen. For instance, if only the new listeners have
> > > > TCP_SAVE_SYN, old requests do not have SYN data, so the application will
> > > > face inconsistency and cause an error.
> > > > 
> > > > Therefore, if there are different kinds of sockets, we must attach an 
> > > > eBPF
> > > > program described in later commits.
> > > > 
> > > > Link: 
> > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > Signed-off-by: Kuniyuki Iwashima 
> > > > ---
> > > >  include/net/inet_connection_sock.h |  1 +
> > > >  include/net/sock_reuseport.h   |  2 +-
> > > >  net/core/sock_reuseport.c  | 10 +-
> > > >  net/ipv4/inet_connection_sock.c| 30 ++
> > > >  net/ipv4/inet_hashtables.c |  9 +++--
> > > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > > 
> > > > diff --git a/include/net/inet_connection_sock.h 
> > > > b/include/net/inet_connection_sock.h
> > > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > > --- a/include/net/inet_connection_sock.h
> > > > +++ b/include/net/inet_connection_sock.h
> > > > @@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const 
> > > > struct sock *sk,
> > > >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > >   struct request_sock *req,
> > > >   struct sock *child);
> > > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
> > > >  void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct 
> > > > request_sock *req,
> > > >unsigned long timeout);
> > > >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock 
> > > > *child,
> > > > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > > > index 0e558ca7afbf..09a1b1539d4c 100644
> > > > --- a/include/net/sock_reuseport.h
> > > > +++ b/include/net/sock_reuseport.h
> > > > @@ -31,7 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Kuniyuki Iwashima
From:   Martin KaFai Lau 
Date:   Mon, 7 Dec 2020 12:14:38 -0800
> On Sun, Dec 06, 2020 at 01:03:07AM +0900, Kuniyuki Iwashima wrote:
> > From:   Martin KaFai Lau 
> > Date:   Fri, 4 Dec 2020 17:42:41 -0800
> > > On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> > > [ ... ]
> > > > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > > > index fd133516ac0e..60d7c1f28809 100644
> > > > --- a/net/core/sock_reuseport.c
> > > > +++ b/net/core/sock_reuseport.c
> > > > @@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct 
> > > > sock *sk2, bool bind_inany)
> > > >  }
> > > >  EXPORT_SYMBOL(reuseport_add_sock);
> > > >  
> > > > -void reuseport_detach_sock(struct sock *sk)
> > > > +struct sock *reuseport_detach_sock(struct sock *sk)
> > > >  {
> > > > struct sock_reuseport *reuse;
> > > > +   struct bpf_prog *prog;
> > > > +   struct sock *nsk = NULL;
> > > > int i;
> > > >  
> > > > spin_lock_bh(&reuseport_lock);
> > > > @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
> > > >  
> > > > reuse->num_socks--;
> > > > reuse->socks[i] = reuse->socks[reuse->num_socks];
> > > > +   prog = rcu_dereference(reuse->prog);
> > > Is it under rcu_read_lock() here?
> > 
> > reuseport_lock is locked in this function, and we do not modify the prog,
> > but is rcu_dereference_protected() preferable?
> > 
> > ---8<---
> > prog = rcu_dereference_protected(reuse->prog,
> >  lockdep_is_held(&reuseport_lock));
> > ---8<---
> It is not only reuse->prog.  Other things also require rcu_read_lock(),
> e.g. please take a look at __htab_map_lookup_elem().
> 
> The TCP_LISTEN sk (selected by bpf to be the target of the migration)
> is also protected by rcu.

Thank you, I will use rcu_read_lock() and rcu_dereference() in v3 patchset.


> I am surprised there is no WARNING in the test.
> Do you have the needed DEBUG_LOCK* config enabled?

Yes, DEBUG_LOCK* was 'y', but rcu_dereference() without rcu_read_lock()
does not show warnings...


> > > > if (sk->sk_protocol == IPPROTO_TCP) {
> > > > +   if (reuse->num_socks && !prog)
> > > > +   nsk = i == reuse->num_socks ? 
> > > > reuse->socks[i - 1] : reuse->socks[i];
> > > > +
> > > > reuse->num_closed_socks++;
> > > > reuse->socks[reuse->max_socks - 
> > > > reuse->num_closed_socks] = sk;
> > > > } else {
> > > > @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
> > > > call_rcu(&reuse->rcu, reuseport_free_rcu);
> > > >  out:
> > > > spin_unlock_bh(&reuseport_lock);
> > > > +
> > > > +   return nsk;
> > > >  }
> > > >  EXPORT_SYMBOL(reuseport_detach_sock);
> > > >  
> > > > diff --git a/net/ipv4/inet_connection_sock.c 
> > > > b/net/ipv4/inet_connection_sock.c
> > > > index 1451aa9712b0..b27241ea96bd 100644
> > > > --- a/net/ipv4/inet_connection_sock.c
> > > > +++ b/net/ipv4/inet_connection_sock.c
> > > > @@ -992,6 +992,36 @@ struct sock *inet_csk_reqsk_queue_add(struct sock 
> > > > *sk,
> > > >  }
> > > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > > >  
> > > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk)
> > > > +{
> > > > +   struct request_sock_queue *old_accept_queue, *new_accept_queue;
> > > > +
> > > > +   old_accept_queue = &inet_csk(sk)->icsk_accept_queue;
> > > > +   new_accept_queue = &inet_csk(nsk)->icsk_accept_queue;
> > > > +
> > > > +   spin_lock(&old_accept_queue->rskq_lock);
> > > > +   spin_lock(&new_accept_queue->rskq_lock);
> > > I am also not very thrilled on this double spin_lock.
> > > Can this be done in (or like) inet_csk_listen_stop() instead?
> > 
> > It will be possible to migrate sockets in inet_csk_listen_stop(), but I
think it is better to do it just after reuseport_detach_sock() because we
can select a different listener (almost) every time at a lower cost, simply
by selecting the moved socket and passing it to
inet_csk_reqsk_queue_migrate().
> I don't see the "lower cost" point.  Please elaborate.

In reuseport_select_sock(), we pass sk_hash of the request socket to
reciprocal_scale() and generate a random index for socks[] to select
a different listener every time.
On the other hand, we do not have a request socket in the unhash path, and
sk_hash of the listener is always 0, so we have to generate a random number
in another way. In reuseport_detach_sock(), we can use the index of the
moved socket, but we do not have it in inet_csk_listen_stop(), so we would
have to generate a random number there.
I think using the index of the moved socket is the cheaper option.


> > sk_hash of the listener is 0, so we would have to generate a random number
> > in inet_csk_listen_stop().
> If I read it correctly, it is also passing 0 as the sk_hash to
> bpf_run_sk_reuseport() from reuseport_detach_sock().
> 
> Also, how is the 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Martin KaFai Lau
On Thu, Dec 03, 2020 at 11:14:24PM +0900, Kuniyuki Iwashima wrote:
> From:   Eric Dumazet 
> Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > This patch lets reuseport_detach_sock() return a pointer to struct sock,
> > > which is used only by inet_unhash(). If it is not NULL,
> > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > sockets from the closing listener to the selected one.
> > > 
> > > Listening sockets hold incoming connections as a linked list of struct
> > > request_sock in the accept queue, and each request has a reference to a full
> > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only unlink
> > > the requests from the closing listener's queue and relink them to the head
> > > of the new listener's queue. We do not process each request and its
> > > reference to the listener, so the migration completes in O(1) time
> > > complexity. However, in the case of TCP_SYN_RECV sockets, we take special
> > > care in the next commit.
> > > 
> > > By default, the kernel selects a new listener randomly. In order to pick
> > > out a different socket every time, we select the last element of socks[] 
> > > as
> > > the new listener. This behaviour is based on how the kernel moves sockets
> > > in socks[]. (See also [1])
> > > 
> > > Basically, in order to redistribute sockets evenly, we have to use an eBPF
> > > program called in the later commit, but as the side effect of such default
> > > selection, the kernel can redistribute old requests evenly to new 
> > > listeners
> > > for a specific case where the application replaces listeners by
> > > generations.
> > > 
> > > For example, we call listen() for four sockets (A, B, C, D), and close the
> > > first two by turns. The sockets move in socks[] like below.
> > > 
> > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > >   socks[2] : C   |  socks[2] : C --'
> > >   socks[3] : D --'
> > > 
> > > Then, if C and D have newer settings than A and B, and each socket has a
> > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > requests evenly to new listeners.
> > > 
> > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > >   socks[3] : D (d) --'
> > > 
> > > Here, (A, D) or (B, C) can have different application settings, but they
> > > MUST have the same settings at the socket API level; otherwise, unexpected
> > > error may happen. For instance, if only the new listeners have
> > > TCP_SAVE_SYN, old requests do not have SYN data, so the application will
> > > face inconsistency and cause an error.
> > > 
> > > Therefore, if there are different kinds of sockets, we must attach an eBPF
> > > program described in later commits.
> > > 
> > > Link: 
> > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > Reviewed-by: Benjamin Herrenschmidt 
> > > Signed-off-by: Kuniyuki Iwashima 
> > > ---
> > >  include/net/inet_connection_sock.h |  1 +
> > >  include/net/sock_reuseport.h   |  2 +-
> > >  net/core/sock_reuseport.c  | 10 +-
> > >  net/ipv4/inet_connection_sock.c| 30 ++
> > >  net/ipv4/inet_hashtables.c |  9 +++--
> > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/include/net/inet_connection_sock.h 
> > > b/include/net/inet_connection_sock.h
> > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > --- a/include/net/inet_connection_sock.h
> > > +++ b/include/net/inet_connection_sock.h
> > > @@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const 
> > > struct sock *sk,
> > >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > struct request_sock *req,
> > > struct sock *child);
> > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
> > >  void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock 
> > > *req,
> > >  unsigned long timeout);
> > >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock 
> > > *child,
> > > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > > index 0e558ca7afbf..09a1b1539d4c 100644
> > > --- a/include/net/sock_reuseport.h
> > > +++ b/include/net/sock_reuseport.h
> > > @@ -31,7 +31,7 @@ struct sock_reuseport {
> > >  extern int reuseport_alloc(struct sock *sk, bool bind_inany);
> > >  extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
> > > bool bind_inany);
> > > -extern void reuseport_detach_sock(struct sock *sk);
> > > +extern struct sock *reuseport_detach_sock(struct sock *sk);
> > >  extern struct sock 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-07 Thread Martin KaFai Lau
On Sun, Dec 06, 2020 at 01:03:07AM +0900, Kuniyuki Iwashima wrote:
> From:   Martin KaFai Lau 
> Date:   Fri, 4 Dec 2020 17:42:41 -0800
> > On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> > [ ... ]
> > > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > > index fd133516ac0e..60d7c1f28809 100644
> > > --- a/net/core/sock_reuseport.c
> > > +++ b/net/core/sock_reuseport.c
> > > @@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct sock 
> > > *sk2, bool bind_inany)
> > >  }
> > >  EXPORT_SYMBOL(reuseport_add_sock);
> > >  
> > > -void reuseport_detach_sock(struct sock *sk)
> > > +struct sock *reuseport_detach_sock(struct sock *sk)
> > >  {
> > >   struct sock_reuseport *reuse;
> > > + struct bpf_prog *prog;
> > > + struct sock *nsk = NULL;
> > >   int i;
> > >  
> > >   spin_lock_bh(&reuseport_lock);
> > > @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
> > >  
> > >   reuse->num_socks--;
> > >   reuse->socks[i] = reuse->socks[reuse->num_socks];
> > > + prog = rcu_dereference(reuse->prog);
> > Is it under rcu_read_lock() here?
> 
> reuseport_lock is locked in this function, and we do not modify the prog,
> but is rcu_dereference_protected() preferable?
> 
> ---8<---
> prog = rcu_dereference_protected(reuse->prog,
>  lockdep_is_held(&reuseport_lock));
> ---8<---
It is not only reuse->prog.  Other things also require rcu_read_lock(),
e.g. please take a look at __htab_map_lookup_elem().

The TCP_LISTEN sk (selected by bpf to be the target of the migration)
is also protected by rcu.

I am surprised there is no WARNING in the test.
Do you have the needed DEBUG_LOCK* config enabled?

> > >   if (sk->sk_protocol == IPPROTO_TCP) {
> > > + if (reuse->num_socks && !prog)
> > > + nsk = i == reuse->num_socks ? reuse->socks[i - 
> > > 1] : reuse->socks[i];
> > > +
> > >   reuse->num_closed_socks++;
> > >   reuse->socks[reuse->max_socks - 
> > > reuse->num_closed_socks] = sk;
> > >   } else {
> > > @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
> > >   call_rcu(&reuse->rcu, reuseport_free_rcu);
> > >  out:
> > >   spin_unlock_bh(&reuseport_lock);
> > > +
> > > + return nsk;
> > >  }
> > >  EXPORT_SYMBOL(reuseport_detach_sock);
> > >  
> > > diff --git a/net/ipv4/inet_connection_sock.c 
> > > b/net/ipv4/inet_connection_sock.c
> > > index 1451aa9712b0..b27241ea96bd 100644
> > > --- a/net/ipv4/inet_connection_sock.c
> > > +++ b/net/ipv4/inet_connection_sock.c
> > > @@ -992,6 +992,36 @@ struct sock *inet_csk_reqsk_queue_add(struct sock 
> > > *sk,
> > >  }
> > >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> > >  
> > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk)
> > > +{
> > > + struct request_sock_queue *old_accept_queue, *new_accept_queue;
> > > +
> > > + old_accept_queue = &inet_csk(sk)->icsk_accept_queue;
> > > + new_accept_queue = &inet_csk(nsk)->icsk_accept_queue;
> > > +
> > > + spin_lock(&old_accept_queue->rskq_lock);
> > > + spin_lock(&new_accept_queue->rskq_lock);
> > I am also not very thrilled on this double spin_lock.
> > Can this be done in (or like) inet_csk_listen_stop() instead?
> 
> It will be possible to migrate sockets in inet_csk_listen_stop(), but I
> think it is better to do it just after reuseport_detach_sock() because we
> can select a different listener (almost) every time at a lower cost by
> selecting the moved socket and passing it to inet_csk_reqsk_queue_migrate()
> easily.
I don't see the "lower cost" point.  Please elaborate.

> 
> sk_hash of the listener is 0, so we would have to generate a random number
> in inet_csk_listen_stop().
If I read it correctly, it is also passing 0 as the sk_hash to
bpf_run_sk_reuseport() from reuseport_detach_sock().

Also, how is the sk_hash expected to be used?  I don't see
it in the test.


Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-05 Thread Kuniyuki Iwashima
I'm sending this mail just for logging because I failed to send mails only 
to LKML, netdev, and bpf yesterday.


From:   Martin KaFai Lau 
Date:   Fri, 4 Dec 2020 17:42:41 -0800
> On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
> [ ... ]
> > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > index fd133516ac0e..60d7c1f28809 100644
> > --- a/net/core/sock_reuseport.c
> > +++ b/net/core/sock_reuseport.c
> > @@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct sock 
> > *sk2, bool bind_inany)
> >  }
> >  EXPORT_SYMBOL(reuseport_add_sock);
> >  
> > -void reuseport_detach_sock(struct sock *sk)
> > +struct sock *reuseport_detach_sock(struct sock *sk)
> >  {
> > struct sock_reuseport *reuse;
> > +   struct bpf_prog *prog;
> > +   struct sock *nsk = NULL;
> > int i;
> >  
> > spin_lock_bh(&reuseport_lock);
> > @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
> >  
> > reuse->num_socks--;
> > reuse->socks[i] = reuse->socks[reuse->num_socks];
> > +   prog = rcu_dereference(reuse->prog);
> Is it under rcu_read_lock() here?

reuseport_lock is locked in this function, and we do not modify the prog,
but is rcu_dereference_protected() preferable?

---8<---
prog = rcu_dereference_protected(reuse->prog,
 lockdep_is_held(&reuseport_lock));
---8<---


> > if (sk->sk_protocol == IPPROTO_TCP) {
> > +   if (reuse->num_socks && !prog)
> > +   nsk = i == reuse->num_socks ? reuse->socks[i - 
> > 1] : reuse->socks[i];
> > +
> > reuse->num_closed_socks++;
> > reuse->socks[reuse->max_socks - 
> > reuse->num_closed_socks] = sk;
> > } else {
> > @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
> > call_rcu(&reuse->rcu, reuseport_free_rcu);
> >  out:
> > spin_unlock_bh(&reuseport_lock);
> > +
> > +   return nsk;
> >  }
> >  EXPORT_SYMBOL(reuseport_detach_sock);
> >  
> > diff --git a/net/ipv4/inet_connection_sock.c 
> > b/net/ipv4/inet_connection_sock.c
> > index 1451aa9712b0..b27241ea96bd 100644
> > --- a/net/ipv4/inet_connection_sock.c
> > +++ b/net/ipv4/inet_connection_sock.c
> > @@ -992,6 +992,36 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> >  }
> >  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
> >  
> > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk)
> > +{
> > +   struct request_sock_queue *old_accept_queue, *new_accept_queue;
> > +
> > +   old_accept_queue = &inet_csk(sk)->icsk_accept_queue;
> > +   new_accept_queue = &inet_csk(nsk)->icsk_accept_queue;
> > +
> > +   spin_lock(&old_accept_queue->rskq_lock);
> > +   spin_lock(&new_accept_queue->rskq_lock);
> I am also not very thrilled on this double spin_lock.
> Can this be done in (or like) inet_csk_listen_stop() instead?

It will be possible to migrate sockets in inet_csk_listen_stop(), but I
think it is better to do it just after reuseport_detach_sock() because we
can select a different listener (almost) every time at a lower cost, simply
by selecting the moved socket and passing it to
inet_csk_reqsk_queue_migrate().

sk_hash of the listener is 0, so we would have to generate a random number
in inet_csk_listen_stop().


Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-04 Thread Martin KaFai Lau
On Tue, Dec 01, 2020 at 11:44:10PM +0900, Kuniyuki Iwashima wrote:
[ ... ]
> diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> index fd133516ac0e..60d7c1f28809 100644
> --- a/net/core/sock_reuseport.c
> +++ b/net/core/sock_reuseport.c
> @@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct sock 
> *sk2, bool bind_inany)
>  }
>  EXPORT_SYMBOL(reuseport_add_sock);
>  
> -void reuseport_detach_sock(struct sock *sk)
> +struct sock *reuseport_detach_sock(struct sock *sk)
>  {
>   struct sock_reuseport *reuse;
> + struct bpf_prog *prog;
> + struct sock *nsk = NULL;
>   int i;
>  
>   spin_lock_bh(&reuseport_lock);
> @@ -242,8 +244,12 @@ void reuseport_detach_sock(struct sock *sk)
>  
>   reuse->num_socks--;
>   reuse->socks[i] = reuse->socks[reuse->num_socks];
> + prog = rcu_dereference(reuse->prog);
Is it under rcu_read_lock() here?

>  
>   if (sk->sk_protocol == IPPROTO_TCP) {
> + if (reuse->num_socks && !prog)
> + nsk = i == reuse->num_socks ? reuse->socks[i - 
> 1] : reuse->socks[i];
> +
>   reuse->num_closed_socks++;
>   reuse->socks[reuse->max_socks - 
> reuse->num_closed_socks] = sk;
>   } else {
> @@ -264,6 +270,8 @@ void reuseport_detach_sock(struct sock *sk)
>   call_rcu(&reuse->rcu, reuseport_free_rcu);
>  out:
>   spin_unlock_bh(&reuseport_lock);
> +
> + return nsk;
>  }
>  EXPORT_SYMBOL(reuseport_detach_sock);
>  
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index 1451aa9712b0..b27241ea96bd 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -992,6 +992,36 @@ struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
>  }
>  EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
>  
> +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk)
> +{
> + struct request_sock_queue *old_accept_queue, *new_accept_queue;
> +
> + old_accept_queue = &inet_csk(sk)->icsk_accept_queue;
> + new_accept_queue = &inet_csk(nsk)->icsk_accept_queue;
> +
> + spin_lock(&old_accept_queue->rskq_lock);
> + spin_lock(&new_accept_queue->rskq_lock);
I am also not very thrilled on this double spin_lock.
Can this be done in (or like) inet_csk_listen_stop() instead?

> +
> + if (old_accept_queue->rskq_accept_head) {
> + if (new_accept_queue->rskq_accept_head)
> + old_accept_queue->rskq_accept_tail->dl_next =
> + new_accept_queue->rskq_accept_head;
> + else
> + new_accept_queue->rskq_accept_tail = 
> old_accept_queue->rskq_accept_tail;
> +
> + new_accept_queue->rskq_accept_head = 
> old_accept_queue->rskq_accept_head;
> + old_accept_queue->rskq_accept_head = NULL;
> + old_accept_queue->rskq_accept_tail = NULL;
> +
> + WRITE_ONCE(nsk->sk_ack_backlog, nsk->sk_ack_backlog + 
> sk->sk_ack_backlog);
> + WRITE_ONCE(sk->sk_ack_backlog, 0);
> + }
> +
> + spin_unlock(&new_accept_queue->rskq_lock);
> + spin_unlock(&old_accept_queue->rskq_lock);
> +}
> +EXPORT_SYMBOL(inet_csk_reqsk_queue_migrate);
> +
>  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
>struct request_sock *req, bool own_req)
>  {
> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> index 45fb450b4522..545538a6bfac 100644
> --- a/net/ipv4/inet_hashtables.c
> +++ b/net/ipv4/inet_hashtables.c
> @@ -681,6 +681,7 @@ void inet_unhash(struct sock *sk)
>  {
>   struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
>   struct inet_listen_hashbucket *ilb = NULL;
> + struct sock *nsk;
>   spinlock_t *lock;
>  
>   if (sk_unhashed(sk))
> @@ -696,8 +697,12 @@ void inet_unhash(struct sock *sk)
>   if (sk_unhashed(sk))
>   goto unlock;
>  
> - if (rcu_access_pointer(sk->sk_reuseport_cb))
> - reuseport_detach_sock(sk);
> + if (rcu_access_pointer(sk->sk_reuseport_cb)) {
> + nsk = reuseport_detach_sock(sk);
> + if (nsk)
> + inet_csk_reqsk_queue_migrate(sk, nsk);
> + }
> +
>   if (ilb) {
>   inet_unhash2(hashinfo, sk);
>   ilb->count--;
> -- 
> 2.17.2 (Apple Git-113)
> 


Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-03 Thread Kuniyuki Iwashima
From:   Eric Dumazet 
Date:   Thu, 3 Dec 2020 15:31:53 +0100
> On Thu, Dec 3, 2020 at 3:14 PM Kuniyuki Iwashima  wrote:
> >
> > From:   Eric Dumazet 
> > Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > > This patch lets reuseport_detach_sock() return a pointer to struct sock,
> > > > which is used only by inet_unhash(). If it is not NULL,
> > > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > > sockets from the closing listener to the selected one.
> > > >
> > > > Listening sockets hold incoming connections as a linked list of struct
> > > > request_sock in the accept queue, and each request has a reference to a 
> > > > full
> > > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only 
> > > > unlink
> > > > the requests from the closing listener's queue and relink them to the 
> > > > head
> > > > of the new listener's queue. We do not process each request and its
> > > > reference to the listener, so the migration completes in O(1) time
> > > > complexity. However, in the case of TCP_SYN_RECV sockets, we take 
> > > > special
> > > > care in the next commit.
> > > >
> > > > By default, the kernel selects a new listener randomly. In order to pick
> > > > out a different socket every time, we select the last element of 
> > > > socks[] as
> > > > the new listener. This behaviour is based on how the kernel moves 
> > > > sockets
> > > > in socks[]. (See also [1])
> > > >
> > > > Basically, in order to redistribute sockets evenly, we have to use an 
> > > > eBPF
> > > > program called in the later commit, but as the side effect of such 
> > > > default
> > > > selection, the kernel can redistribute old requests evenly to new 
> > > > listeners
> > > > for a specific case where the application replaces listeners by
> > > > generations.
> > > >
> > > > For example, we call listen() for four sockets (A, B, C, D), and close 
> > > > the
> > > > first two by turns. The sockets move in socks[] like below.
> > > >
> > > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > > >   socks[2] : C   |  socks[2] : C --'
> > > >   socks[3] : D --'
> > > >
> > > > Then, if C and D have newer settings than A and B, and each socket has a
> > > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > > requests evenly to new listeners.
> > > >
> > > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + 
> > > > d)
> > > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + 
> > > > c)
> > > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > > >   socks[3] : D (d) --'
> > > >
> > > > Here, (A, D) or (B, C) can have different application settings, but they
> > > > MUST have the same settings at the socket API level; otherwise, 
> > > > unexpected
> > > > error may happen. For instance, if only the new listeners have
> > > > TCP_SAVE_SYN, old requests do not have SYN data, so the application will
> > > > face inconsistency and cause an error.
> > > >
> > > > Therefore, if there are different kinds of sockets, we must attach an 
> > > > eBPF
> > > > program described in later commits.
> > > >
> > > > Link: 
> > > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > > Reviewed-by: Benjamin Herrenschmidt 
> > > > Signed-off-by: Kuniyuki Iwashima 
> > > > ---
> > > >  include/net/inet_connection_sock.h |  1 +
> > > >  include/net/sock_reuseport.h   |  2 +-
> > > >  net/core/sock_reuseport.c  | 10 +-
> > > >  net/ipv4/inet_connection_sock.c| 30 ++
> > > >  net/ipv4/inet_hashtables.c |  9 +++--
> > > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/include/net/inet_connection_sock.h 
> > > > b/include/net/inet_connection_sock.h
> > > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > > --- a/include/net/inet_connection_sock.h
> > > > +++ b/include/net/inet_connection_sock.h
> > > > @@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const 
> > > > struct sock *sk,
> > > >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > > >   struct request_sock *req,
> > > >   struct sock *child);
> > > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
> > > >  void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct 
> > > > request_sock *req,
> > > >unsigned long timeout);
> > > >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock 
> > > > *child,
> > > > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > > > index 0e558ca7afbf..09a1b1539d4c 100644
> > > > --- a/include/net/sock_reuseport.h
> > > > +++ b/include/net/sock_reuseport.h
> > > > @@ -31,7 +31,7 @@ struct sock_reuseport {
> > 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-03 Thread Eric Dumazet
On Thu, Dec 3, 2020 at 3:14 PM Kuniyuki Iwashima  wrote:
>
> From:   Eric Dumazet 
> Date:   Tue, 1 Dec 2020 16:25:51 +0100
> > On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > > This patch lets reuseport_detach_sock() return a pointer to struct sock,
> > > which is used only by inet_unhash(). If it is not NULL,
> > > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > > sockets from the closing listener to the selected one.
> > >
> > > Listening sockets hold incoming connections as a linked list of struct
> > > request_sock in the accept queue, and each request has a reference to a full
> > > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only unlink
> > > the requests from the closing listener's queue and relink them to the head
> > > of the new listener's queue. We do not process each request and its
> > > reference to the listener, so the migration completes in O(1) time
> > > complexity. However, in the case of TCP_SYN_RECV sockets, we take special
> > > care in the next commit.
> > >
> > > By default, the kernel selects a new listener randomly. In order to pick
> > > out a different socket every time, we select the last element of socks[] 
> > > as
> > > the new listener. This behaviour is based on how the kernel moves sockets
> > > in socks[]. (See also [1])
> > >
> > > Basically, in order to redistribute sockets evenly, we have to use an eBPF
> > > program called in the later commit, but as the side effect of such default
> > > selection, the kernel can redistribute old requests evenly to new 
> > > listeners
> > > for a specific case where the application replaces listeners by
> > > generations.
> > >
> > > For example, we call listen() for four sockets (A, B, C, D), and close the
> > > first two by turns. The sockets move in socks[] like below.
> > >
> > >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> > >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> > >   socks[2] : C   |  socks[2] : C --'
> > >   socks[3] : D --'
> > >
> > > Then, if C and D have newer settings than A and B, and each socket has a
> > > request (a, b, c, d) in their accept queue, we can redistribute old
> > > requests evenly to new listeners.
> > >
> > >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> > >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> > >   socks[2] : C (c)   |  socks[2] : C (c) --'
> > >   socks[3] : D (d) --'
> > >
> > > Here, (A, D) or (B, C) can have different application settings, but they
> > > MUST have the same settings at the socket API level; otherwise, unexpected
> > > error may happen. For instance, if only the new listeners have
> > > TCP_SAVE_SYN, old requests do not have SYN data, so the application will
> > > face inconsistency and cause an error.
> > >
> > > Therefore, if there are different kinds of sockets, we must attach an eBPF
> > > program described in later commits.
> > >
> > > Link: 
> > > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > > Reviewed-by: Benjamin Herrenschmidt 
> > > Signed-off-by: Kuniyuki Iwashima 
> > > ---
> > >  include/net/inet_connection_sock.h |  1 +
> > >  include/net/sock_reuseport.h   |  2 +-
> > >  net/core/sock_reuseport.c  | 10 +-
> > >  net/ipv4/inet_connection_sock.c| 30 ++
> > >  net/ipv4/inet_hashtables.c |  9 +++--
> > >  5 files changed, 48 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/include/net/inet_connection_sock.h 
> > > b/include/net/inet_connection_sock.h
> > > index 7338b3865a2a..2ea2d743f8fc 100644
> > > --- a/include/net/inet_connection_sock.h
> > > +++ b/include/net/inet_connection_sock.h
> > > @@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const 
> > > struct sock *sk,
> > >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> > >   struct request_sock *req,
> > >   struct sock *child);
> > > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
> > >  void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock 
> > > *req,
> > >unsigned long timeout);
> > >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock 
> > > *child,
> > > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > > index 0e558ca7afbf..09a1b1539d4c 100644
> > > --- a/include/net/sock_reuseport.h
> > > +++ b/include/net/sock_reuseport.h
> > > @@ -31,7 +31,7 @@ struct sock_reuseport {
> > >  extern int reuseport_alloc(struct sock *sk, bool bind_inany);
> > >  extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
> > >   bool bind_inany);
> > > -extern void reuseport_detach_sock(struct sock *sk);
> > > +extern struct sock *reuseport_detach_sock(struct sock *sk);
> > >  extern struct sock 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-03 Thread Kuniyuki Iwashima
From:   Eric Dumazet 
Date:   Tue, 1 Dec 2020 16:25:51 +0100
> On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> > This patch lets reuseport_detach_sock() return a pointer to struct sock,
> > which is used only by inet_unhash(). If it is not NULL,
> > inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> > sockets from the closing listener to the selected one.
> > 
> > Listening sockets hold incoming connections as a linked list of struct
> > request_sock in the accept queue, and each request has a reference to a full
> > socket and its listener. In inet_csk_reqsk_queue_migrate(), we only unlink
> > the requests from the closing listener's queue and relink them to the head
> > of the new listener's queue. We do not process each request and its
> > reference to the listener, so the migration completes in O(1) time
> > complexity. However, in the case of TCP_SYN_RECV sockets, we take special
> > care in the next commit.
> > 
> > By default, the kernel selects a new listener randomly. In order to pick
> > out a different socket every time, we select the last element of socks[] as
> > the new listener. This behaviour is based on how the kernel moves sockets
> > in socks[]. (See also [1])
> > 
> > Basically, in order to redistribute sockets evenly, we have to use an eBPF
> > program called in the later commit, but as the side effect of such default
> > selection, the kernel can redistribute old requests evenly to new listeners
> > for a specific case where the application replaces listeners by
> > generations.
> > 
> > For example, we call listen() for four sockets (A, B, C, D), and close the
> > first two by turns. The sockets move in socks[] like below.
> > 
> >   socks[0] : A <-.  socks[0] : D  socks[0] : D
> >   socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
> >   socks[2] : C   |  socks[2] : C --'
> >   socks[3] : D --'
> > 
> > Then, if C and D have newer settings than A and B, and each socket has a
> > request (a, b, c, d) in their accept queue, we can redistribute old
> > requests evenly to new listeners.
> > 
> >   socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
> >   socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
> >   socks[2] : C (c)   |  socks[2] : C (c) --'
> >   socks[3] : D (d) --'
> > 
> > Here, (A, D) or (B, C) can have different application settings, but they
> > MUST have the same settings at the socket API level; otherwise, unexpected
> > error may happen. For instance, if only the new listeners have
> > TCP_SAVE_SYN, old requests do not have SYN data, so the application will
> > face inconsistency and cause an error.
> > 
> > Therefore, if there are different kinds of sockets, we must attach an eBPF
> > program described in later commits.
> > 
> > Link: 
> > https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
> > Reviewed-by: Benjamin Herrenschmidt 
> > Signed-off-by: Kuniyuki Iwashima 
> > ---
> >  include/net/inet_connection_sock.h |  1 +
> >  include/net/sock_reuseport.h   |  2 +-
> >  net/core/sock_reuseport.c  | 10 +-
> >  net/ipv4/inet_connection_sock.c| 30 ++
> >  net/ipv4/inet_hashtables.c |  9 +++--
> >  5 files changed, 48 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/net/inet_connection_sock.h 
> > b/include/net/inet_connection_sock.h
> > index 7338b3865a2a..2ea2d743f8fc 100644
> > --- a/include/net/inet_connection_sock.h
> > +++ b/include/net/inet_connection_sock.h
> > @@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const 
> > struct sock *sk,
> >  struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
> >   struct request_sock *req,
> >   struct sock *child);
> > +void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
> >  void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock 
> > *req,
> >unsigned long timeout);
> >  struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock 
> > *child,
> > diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
> > index 0e558ca7afbf..09a1b1539d4c 100644
> > --- a/include/net/sock_reuseport.h
> > +++ b/include/net/sock_reuseport.h
> > @@ -31,7 +31,7 @@ struct sock_reuseport {
> >  extern int reuseport_alloc(struct sock *sk, bool bind_inany);
> >  extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
> >   bool bind_inany);
> > -extern void reuseport_detach_sock(struct sock *sk);
> > +extern struct sock *reuseport_detach_sock(struct sock *sk);
> >  extern struct sock *reuseport_select_sock(struct sock *sk,
> >   u32 hash,
> >   struct sk_buff *skb,
> > diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
> > index 

Re: [PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-01 Thread Eric Dumazet



On 12/1/20 3:44 PM, Kuniyuki Iwashima wrote:
> This patch lets reuseport_detach_sock() return a pointer to struct sock,
> which is used only by inet_unhash(). If it is not NULL,
> inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
> sockets from the closing listener to the selected one.
> 
> [ ... ]

[PATCH v1 bpf-next 03/11] tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.

2020-12-01 Thread Kuniyuki Iwashima
This patch lets reuseport_detach_sock() return a pointer to struct sock,
which is used only by inet_unhash(). If it is not NULL,
inet_csk_reqsk_queue_migrate() migrates TCP_ESTABLISHED/TCP_SYN_RECV
sockets from the closing listener to the selected one.

Listening sockets hold incoming connections as a linked list of struct
request_sock in the accept queue, and each request has a reference to a
full socket and its listener. In inet_csk_reqsk_queue_migrate(), we only
unlink the requests from the closing listener's queue and relink them to
the head of the new listener's queue. We do not touch each request or its
reference to the listener, so the migration completes in O(1) time
complexity. TCP_SYN_RECV sockets, however, require special care, which the
next commit provides.
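The O(1) splice above can be shown with a small standalone model (an
illustration only, not the kernel code; names such as `req_queue` and
`queue_migrate` are made up here, loosely mirroring struct
request_sock_queue):

```c
#include <stddef.h>

/* Requests form a singly linked list with head/tail pointers. */
struct req {
	struct req *dl_next;
	int id;
};

struct req_queue {
	struct req *head;
	struct req *tail;
};

/* Relink the closing listener's whole queue to the head of the new
 * listener's queue; no per-request work, so the cost is O(1). */
static void queue_migrate(struct req_queue *old_q, struct req_queue *new_q)
{
	if (!old_q->head)
		return;			/* nothing to migrate */

	old_q->tail->dl_next = new_q->head;
	new_q->head = old_q->head;
	if (!new_q->tail)
		new_q->tail = old_q->tail;

	old_q->head = NULL;
	old_q->tail = NULL;
}
```

Only the two queue heads and one tail pointer are rewritten, regardless of
how many requests are queued.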

By default, the kernel selects a new listener randomly. In order to pick
out a different socket every time, we select the last element of socks[] as
the new listener. This behaviour is based on how the kernel moves sockets
in socks[]. (See also [1])

In general, redistributing requests evenly requires an eBPF program,
introduced in a later commit. As a side effect of the default selection
above, however, the kernel can already redistribute old requests evenly to
new listeners in the specific case where the application replaces its
listeners by generations.

For example, suppose we call listen() for four sockets (A, B, C, D) and
then close the first two in turn. The sockets move in socks[] as below.

  socks[0] : A <-.  socks[0] : D  socks[0] : D
  socks[1] : B   |  =>  socks[1] : B <-.  =>  socks[1] : C
  socks[2] : C   |  socks[2] : C --'
  socks[3] : D --'

Then, if C and D have newer settings than A and B, and each socket has one
request (a, b, c, d) in its accept queue, the old requests are
redistributed evenly to the new listeners.

  socks[0] : A (a) <-.  socks[0] : D (a + d)  socks[0] : D (a + d)
  socks[1] : B (b)   |  =>  socks[1] : B (b) <-.  =>  socks[1] : C (b + c)
  socks[2] : C (c)   |  socks[2] : C (c) --'
  socks[3] : D (d) --'
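The socks[] movement and default pick in the diagrams above can be
reproduced with a miniature model (an illustration, not the kernel code;
reuseport_detach_sock() does the real bookkeeping):

```c
#define MAX_SOCKS 4

/* Miniature model of a reuseport group's socks[] array. */
struct group {
	int num_socks;
	char socks[MAX_SOCKS];	/* listener labels: 'A'..'D' */
};

/* Detach the listener at index i. The default pick in this patchset is
 * the last element of socks[], and the kernel then moves that last
 * element into the vacated slot. Returns the chosen new listener.
 * (The real code must also avoid picking the closing socket itself.) */
static char detach(struct group *g, int i)
{
	char nsk = g->socks[g->num_socks - 1];

	g->socks[i] = g->socks[--g->num_socks];
	return nsk;
}
```

Closing A picks D and leaves {D, B, C}; closing B then picks C and leaves
{D, C}, matching the diagrams.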

Here, (A, D) or (B, C) can have different application settings, but they
MUST have the same settings at the socket API level; otherwise, unexpected
errors may happen. For instance, if only the new listeners have
TCP_SAVE_SYN enabled, the old requests carry no saved SYN data, so the
application will see inconsistent behaviour and fail.

Therefore, if there are different kinds of sockets, we must attach an eBPF
program described in later commits.
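For reference, today an SK_REUSEPORT program picks a socket explicitly via
the existing bpf_sk_select_reuseport() helper; the migration hook added
later in this series follows the same shape. A minimal sketch (assumes
libbpf conventions and a REUSEPORT_SOCKARRAY map populated from
userspace; the map and program names are made up):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Userspace inserts listener fds here with bpf_map_update_elem(). */
struct {
	__uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
	__uint(max_entries, 4);
	__type(key, __u32);
	__type(value, __u64);
} reuse_map SEC(".maps");

SEC("sk_reuseport")
int select_listener(struct sk_reuseport_md *md)
{
	__u32 key = 0;	/* always pick the socket stored at index 0 */

	if (bpf_sk_select_reuseport(md, &reuse_map, &key, 0))
		return SK_DROP;
	return SK_PASS;
}

char _license[] SEC("license") = "GPL";
```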

Link: https://lore.kernel.org/netdev/CAEfhGiyG8Y_amDZ2C8dQoQqjZJMHjTY76b=KBkTKcBtA=dh...@mail.gmail.com/
Reviewed-by: Benjamin Herrenschmidt 
Signed-off-by: Kuniyuki Iwashima 
---
 include/net/inet_connection_sock.h |  1 +
 include/net/sock_reuseport.h   |  2 +-
 net/core/sock_reuseport.c  | 10 +-
 net/ipv4/inet_connection_sock.c| 30 ++
 net/ipv4/inet_hashtables.c |  9 +++--
 5 files changed, 48 insertions(+), 4 deletions(-)

diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 7338b3865a2a..2ea2d743f8fc 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -260,6 +260,7 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk,
 struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
  struct request_sock *req,
  struct sock *child);
+void inet_csk_reqsk_queue_migrate(struct sock *sk, struct sock *nsk);
 void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
   unsigned long timeout);
 struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
diff --git a/include/net/sock_reuseport.h b/include/net/sock_reuseport.h
index 0e558ca7afbf..09a1b1539d4c 100644
--- a/include/net/sock_reuseport.h
+++ b/include/net/sock_reuseport.h
@@ -31,7 +31,7 @@ struct sock_reuseport {
 extern int reuseport_alloc(struct sock *sk, bool bind_inany);
 extern int reuseport_add_sock(struct sock *sk, struct sock *sk2,
  bool bind_inany);
-extern void reuseport_detach_sock(struct sock *sk);
+extern struct sock *reuseport_detach_sock(struct sock *sk);
 extern struct sock *reuseport_select_sock(struct sock *sk,
  u32 hash,
  struct sk_buff *skb,
diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
index fd133516ac0e..60d7c1f28809 100644
--- a/net/core/sock_reuseport.c
+++ b/net/core/sock_reuseport.c
@@ -216,9 +216,11 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2, bool bind_inany)
 }
 EXPORT_SYMBOL(reuseport_add_sock);
 
-void reuseport_detach_sock(struct sock *sk)
+struct sock *reuseport_detach_sock(struct sock *sk)
 {
struct sock_reuseport *reuse;
+   struct bpf_prog *prog;
+   struct sock *nsk = NULL;
int i;
 
	spin_lock_bh(&reuseport_lock);