Re: if_netisr(): trade NET_LOCK() for NET_RLOCK()

2019-06-05 Thread Alexandr Nedvedicky
Hello,

On Wed, Jun 05, 2019 at 10:50:18AM -0300, Martin Pieuchot wrote:
> On 04/06/19(Tue) 23:57, Alexandr Nedvedicky wrote:
> > Hello,
> > 
> > I see, I owe some clarification to share wider context of this change.
> > 
> > On Tue, Jun 04, 2019 at 10:32:57AM -0300, Martin Pieuchot wrote:
> > > On 04/06/19(Tue) 01:37, Alexandr Nedvedicky wrote:
> > > > Hello,
> > > > 
> > > diff below is just a cosmetic change, which has no impact on current
> > > functionality, because there is just a single network task to deliver
> > > packets to the IP stack. I just want to push this small change to the
> > > tree to minimize the delta between current and my experimental branch.
> > > 
> > > But why is it fine? 
> > 
> > currently all packets reaching the IP stack are processed by if_netisr(),
> > which runs as a task. The task pops a packet from its queue (a.k.a. taskq)
> > and passes it to the corresponding xintr() handler for further processing.
> 
> Only for packets that need to be delivered/processed locally.  In the
> case of forwarding, if_input_process() is the task that processes packets.
> 
> 

thanks for pointing that out. I keep forgetting this important detail.

regards
sashan



Re: if_netisr(): trade NET_LOCK() for NET_RLOCK()

2019-06-05 Thread Martin Pieuchot
On 04/06/19(Tue) 23:57, Alexandr Nedvedicky wrote:
> Hello,
> 
> I see, I owe some clarification to share wider context of this change.
> 
> On Tue, Jun 04, 2019 at 10:32:57AM -0300, Martin Pieuchot wrote:
> > On 04/06/19(Tue) 01:37, Alexandr Nedvedicky wrote:
> > > Hello,
> > > 
> > > diff below is just a cosmetic change, which has no impact on current
> > > functionality, because there is just a single network task to deliver
> > > packets to the IP stack. I just want to push this small change to the
> > > tree to minimize the delta between current and my experimental branch.
> > 
> > But why is it fine? 
> 
> currently all packets reaching the IP stack are processed by if_netisr(),
> which runs as a task. The task pops a packet from its queue (a.k.a. taskq)
> and passes it to the corresponding xintr() handler for further processing.

Only for packets that need to be delivered/processed locally.  In the
case of forwarding, if_input_process() is the task that processes packets.


> We still have a single taskq (see NET_TASKQ defined as 1 in net/if.c),
> hence there is just one instance of if_netisr() running. Therefore I say
> 'the change has no impact'.
> 
> We still need to make sure the network stack is ready to deal with
> multiple tasks for locally bound packets.

Nice plan!

> > Is it because the read version of the netlock is
> > enough to protect the packet processing path?  
> 
> from the network stack's point of view, packets acquire the netlock as
> readers, as they do not perform any changes to the network subsystem
> configuration.

Route entries might be of interest: cached routes might be updated.  But
theoretically that should be fine.  That's something to keep in mind
when adding more tasks.

> > Or is it because it
> > guarantees that no ioctl will be executed in the meantime?  
> 
> yes, the purpose of the netlock is to synchronize packets with
> ioctl() operations, which change the configuration of the network stack
> (e.g. adding/removing an interface).
> 
> > And why do
> > we need that?  To parallelise the packet processing path?
> 
> correct. the plan is to have more netisr tasks to process
> packets. those will run in parallel/concurrently as netlock readers.
> we still have to make sure the network stack is able to run on multiple
> tasks for locally bound packets.

I'm all for it (o:



Re: if_netisr(): trade NET_LOCK() for NET_RLOCK()

2019-06-04 Thread Alexandr Nedvedicky
Hello,

I see, I owe some clarification to share wider context of this change.

On Tue, Jun 04, 2019 at 10:32:57AM -0300, Martin Pieuchot wrote:
> On 04/06/19(Tue) 01:37, Alexandr Nedvedicky wrote:
> > Hello,
> > 
> > diff below is just a cosmetic change, which has no impact on current
> > functionality, because there is just a single network task to deliver
> > packets to the IP stack. I just want to push this small change to the
> > tree to minimize the delta between current and my experimental branch.
> 
> But why is it fine? 

currently all packets reaching the IP stack are processed by if_netisr(),
which runs as a task. The task pops a packet from its queue (a.k.a. taskq)
and passes it to the corresponding xintr() handler for further processing.

We still have a single taskq (see NET_TASKQ defined as 1 in net/if.c),
hence there is just one instance of if_netisr() running. Therefore I say
'the change has no impact'. 

We still need to make sure the network stack is ready to deal with multiple
tasks for locally bound packets.

> Is it because the read version of the netlock is
> enough to protect the packet processing path?  

from the network stack's point of view, packets acquire the netlock as
readers, as they do not perform any changes to the network subsystem
configuration.

> Or is it because it
> guarantees that no ioctl will be executed in the meantime?  

yes, the purpose of the netlock is to synchronize packets with
ioctl() operations, which change the configuration of the network stack
(e.g. adding/removing an interface).

> And why do
> we need that?  To parallelise the packet processing path?

correct. the plan is to have more netisr tasks to process
packets. those will run in parallel/concurrently as netlock readers.
we still have to make sure the network stack is able to run on multiple
tasks for locally bound packets.
> 
> ok mpi@ :)

thanks 

regards
sashan



Re: if_netisr(): trade NET_LOCK() for NET_RLOCK()

2019-06-04 Thread Martin Pieuchot
On 04/06/19(Tue) 01:37, Alexandr Nedvedicky wrote:
> Hello,
> 
> diff below is just a cosmetic change, which has no impact on current
> functionality, because there is just a single network task to deliver
> packets to the IP stack. I just want to push this small change to the
> tree to minimize the delta between current and my experimental branch.

But why is it fine?  Is it because the read version of the netlock is
enough to protect the packet processing path?  Or is it because it
guarantees that no ioctl will be executed in the meantime?  And why do
we need that?  To parallelise the packet processing path?

ok mpi@ :)

> 8<---8<---8<--8<
> diff --git a/sys/net/if.c b/sys/net/if.c
> index e355f44e80d..15b5f94f188 100644
> --- a/sys/net/if.c
> +++ b/sys/net/if.c
> @@ -976,14 +976,14 @@ if_netisr(void *unused)
>  {
>   int n, t = 0;
>  
> - NET_LOCK();
> + NET_RLOCK();
>  
>   while ((n = netisr) != 0) {
>   /* Like sched_pause() but with a rwlock dance. */
>   if (curcpu()->ci_schedstate.spc_schedflags & SPCF_SHOULDYIELD) {
> - NET_UNLOCK();
> + NET_RUNLOCK();
>   yield();
> - NET_LOCK();
> + NET_RLOCK();
>   }
>  
>   atomic_clearbits_int(&netisr, n);
> @@ -1044,7 +1044,7 @@ if_netisr(void *unused)
>   }
>  #endif
>  
> - NET_UNLOCK();
> + NET_RUNLOCK();
>  }
>  
>  void
> 



if_netisr(): trade NET_LOCK() for NET_RLOCK()

2019-06-03 Thread Alexandr Nedvedicky
Hello,

diff below is just a cosmetic change, which has no impact on current
functionality, because there is just a single network task to deliver
packets to the IP stack. I just want to push this small change to the
tree to minimize the delta between current and my experimental branch.

OK?

thanks and
regards
sashan

8<---8<---8<--8<
diff --git a/sys/net/if.c b/sys/net/if.c
index e355f44e80d..15b5f94f188 100644
--- a/sys/net/if.c
+++ b/sys/net/if.c
@@ -976,14 +976,14 @@ if_netisr(void *unused)
 {
int n, t = 0;
 
-   NET_LOCK();
+   NET_RLOCK();
 
while ((n = netisr) != 0) {
/* Like sched_pause() but with a rwlock dance. */
if (curcpu()->ci_schedstate.spc_schedflags & SPCF_SHOULDYIELD) {
-   NET_UNLOCK();
+   NET_RUNLOCK();
yield();
-   NET_LOCK();
+   NET_RLOCK();
}
 
atomic_clearbits_int(&netisr, n);
@@ -1044,7 +1044,7 @@ if_netisr(void *unused)
}
 #endif
 
-   NET_UNLOCK();
+   NET_RUNLOCK();
 }
 
 void