[vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-26 Thread Vamsi Krishna
Hi,

I have looked at the IPsec code in VPP and am trying to understand how it
works in a multi-threaded environment. I noticed that the data structures for
the SPD, SAD, and tunnel interfaces are pools, and that there are no locks to
prevent race conditions.

For instance, the ipsec-input node passes an SA index to the esp-encrypt node,
and the esp-encrypt node looks up the SA in the SAD pool. But during the time
in which the packet is passed from one node to another, the entry at that SA
index may be changed or deleted. The same seems to be true for dpdk-esp-encrypt
and dpdk-esp-decrypt. How are these cases handled? Can the implementation be
used in a multi-threaded environment?

Please help me understand the IPsec implementation.

Thanks
Krishna


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-27 Thread Klement Sekera via Lists.Fd.Io
Hi,

I agree that there is an unlikely corner case which could result in a vpp
assert. I don't think there is a real chance of hitting this in a
production image, since it would require you to successfully make two
API calls (drop the old entry, replace it with a new entry) while there are
packets in flight. Just deleting the old entry would cause re-use of the
existing entry (which is marked as freed, but the code does not clear
the memory or invalidate it in some other way) and the packet would be
handled the same way as if the delete came after it left vpp.

Based on this, I'm not sure whether fixing this is worth the cost.
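
A minimal sketch of the pool index re-use described above, using the vppinfra
pool macros (the my_sa_t element type and the demo function are made up for
illustration; only the pool_* macros are the real API):

  #include <vppinfra/pool.h>

  typedef struct { u32 spi; u32 seq; } my_sa_t;   /* hypothetical element type */

  static void
  pool_reuse_demo (void)
  {
    my_sa_t *sa_pool = 0, *sa;

    pool_get (sa_pool, sa);               /* allocate -> index 0 */
    u32 sa_index = sa - sa_pool;

    pool_put_index (sa_pool, sa_index);   /* "delete": the slot is only marked free */

    /* A node still holding sa_index can still read the stale slot: a debug
       image asserts inside pool_elt_at_index(), a production image simply
       returns the freed memory unchanged. */
    sa = pool_elt_at_index (sa_pool, sa_index);

    pool_get (sa_pool, sa);               /* the next allocation re-uses index 0 */
  }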

Thanks,
Klement

On Wed, 2018-06-27 at 11:15 +0530, Vamsi Krishna wrote:


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-27 Thread Damjan Marion
ipsec data structures are updated during barrier sync, so there are no packets
in-flight...
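
A minimal sketch of that mechanism (the update function is hypothetical; the
vlib_worker_thread_barrier_sync/release calls are the real vlib primitives the
binary API machinery uses):

  #include <vlib/vlib.h>

  /* Hypothetical control-plane update, pausing workers explicitly in the same
     way the binary API does for handlers that are not marked mp-safe. */
  static void
  my_ipsec_table_update (vlib_main_t * vm)
  {
    vlib_worker_thread_barrier_sync (vm);     /* workers park at the barrier */
    /* ... add/delete SPD, SAD or tunnel entries here: no packets are in
       flight on worker threads while the barrier is held ... */
    vlib_worker_thread_barrier_release (vm);
  }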


> On 27 Jun 2018, at 07:45, Vamsi Krishna  wrote:


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-27 Thread Dave Barach via Lists.Fd.Io
+1.



To amplify a bit: all binary API messages are processed with worker threads 
paused in a barrier sync, unless the API message has been explicitly marked 
thread-safe.



Here is the relevant code in 
.../src/vlibapi/api_shared.c:vl_api_msg_handler_with_vm_node(...)



  if (!am->is_mp_safe[id])
    {
      vl_msg_api_barrier_trace_context (am->msg_names[id]);
      vl_msg_api_barrier_sync ();
    }

  (*handler) (the_msg, vm, node);

  if (!am->is_mp_safe[id])
    vl_msg_api_barrier_release ();



Typical example of marking a message mp-safe:



  api_main_t *am = &api_main;
  ...

  am->is_mp_safe[VL_API_MEMCLNT_KEEPALIVE_REPLY] = 1;



The debug CLI uses the same scheme. Unless otherwise marked mp-safe, debug CLI 
commands are executed with worker threads paused in a barrier sync.
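
For a message or CLI handler that is marked mp-safe, no barrier is held, so the
handler has to protect shared state itself. A minimal sketch of one way to do
that (the table type and function are hypothetical; clib_spinlock is the real
vppinfra primitive):

  #include <vppinfra/vec.h>
  #include <vppinfra/lock.h>

  typedef struct
  {
    clib_spinlock_t lock;   /* clib_spinlock_init (&lock) once at startup */
    u32 *entries;           /* hypothetical shared table */
  } my_shared_table_t;

  static void
  my_mp_safe_add (my_shared_table_t * t, u32 e)
  {
    /* no barrier protects an mp-safe handler, so serialize explicitly */
    clib_spinlock_lock (&t->lock);
    vec_add1 (t->entries, e);
    clib_spinlock_unlock (&t->lock);
  }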



HTH... Dave



-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion
Sent: Wednesday, June 27, 2018 6:59 AM
To: Vamsi Krishna 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Is VPP IPSec implementation thread safe?



ipsec data structures are updated during barrier sync, so there are no packets
in-flight...







Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-27 Thread Vamsi Krishna
Hi Damjan, Dave,

Thanks for the quick reply.

It is really helpful. So the barrier ensures that the IPSec data structure
access is thread safe.

I have a few more questions on the IPSec implementation.
1. The inbound SA lookup (in ipsec-input) actually walks the inbound
policies for the given spd id linearly and matches a policy; the SA is
picked based on the matching policy.
   This could have been an SAD hash table keyed on (SPI, dst address,
proto (ESP or AH)), so that the SA can be looked up from the hash on
receiving an ESP packet.
   Is there a particular reason it is implemented as a linear policy
match?

2. There is also an IKEv2 responder implementation that adds/deletes IPSec
tunnel interfaces. How does this work? Is there any documentation that can
be referred to?

Thanks
Krishna
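
A minimal sketch of the hash-based SAD lookup described in question 1, keyed on
(SPI, dst address, proto) and using vppinfra's bihash; the table wrapper, key
packing and function names are made up for illustration, and depending on the
VPP version the 16_8 bihash template may need to be instantiated in one
compilation unit:

  #include <vppinfra/bihash_16_8.h>

  typedef struct
  {
    /* call clib_bihash_init_16_8 (&sad_by_spi_dst, "sad", nbuckets,
       memory_size) once before use */
    clib_bihash_16_8_t sad_by_spi_dst;
  } my_sad_lookup_t;

  static inline u64
  sad_key1 (u32 dst_addr, u8 proto)     /* pack dst address and protocol */
  {
    return ((u64) dst_addr << 8) | proto;
  }

  static void
  sad_add (my_sad_lookup_t * t, u32 spi, u32 dst_addr, u8 proto, u32 sa_index)
  {
    clib_bihash_kv_16_8_t kv;
    kv.key[0] = spi;
    kv.key[1] = sad_key1 (dst_addr, proto);
    kv.value = sa_index;
    clib_bihash_add_del_16_8 (&t->sad_by_spi_dst, &kv, 1 /* is_add */);
  }

  static int
  sad_lookup (my_sad_lookup_t * t, u32 spi, u32 dst_addr, u8 proto,
              u32 * sa_index)
  {
    clib_bihash_kv_16_8_t kv, result;
    kv.key[0] = spi;
    kv.key[1] = sad_key1 (dst_addr, proto);
    if (clib_bihash_search_16_8 (&t->sad_by_spi_dst, &kv, &result))
      return -1;                        /* no SA for this (SPI, dst, proto) */
    *sa_index = (u32) result.value;
    return 0;
  }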

On Wed, Jun 27, 2018 at 6:23 PM, Dave Barach (dbarach) 
wrote:



Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-28 Thread Jim Thompson
All,

I don't know if any of the previously-raised issues occur in real life.
Goodness knows we've run billions of IPsec packets in the test harnesses
(harnessi?) here without seeing them.

There are a couple issues with IPsec and multicore that haven't been
raised, however, so I'm gonna hijack the thread.

If multiple worker threads are configured in VPP, it seems like there’s the
potential for problems with IPsec where the sequence number or replay
window for an SA could get stomped on by two threads trying to update them
at the same time. We assume that this issue is well known, since the following
comment occurs at line 173 in src/vnet/ipsec/esp.h:

/* TODO seq increment should be atomic to be accessed by multiple workers */

See: https://github.com/FDio/vpp/blob/master/src/vnet/ipsec/esp.h#L173

We've asked if anyone is working on this, and are willing to try and fix
it, but would need some direction on what is the best way to accomplish
same.

We could try to use locking, which would be straightforward but would add
overhead.  Maybe that overhead could be offset some by requesting a block
of sequence numbers upfront for all of the packets being processed instead
of getting a sequence number and incrementing as each packet is processed.

There is also the clib_smp_atomic_add() call, which invokes
__sync_fetch_and_add(addr,increment).  This is a GCC built-in that uses a
memory barrier to avoid obtaining a lock.  We're not sure if there are
drawbacks to using this.

See: http://gcc.gnu.org/onlinedocs/gcc-4.4.3/gcc/Atomic-Builtins.html

GRE uses clib_smp_atomic_add() for sequence number processing, see
src/vnet/gre/gre.c#L409
and src/vnet/gre/gre.c#L421
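
A minimal sketch of what atomic sequence allocation could look like, both per
packet and as an up-front block for a whole frame (the helper names and the
bare u32 counter are illustrative; clib_smp_atomic_add expands to
__sync_fetch_and_add as noted above):

  #include <vppinfra/clib.h>

  /* clib_smp_atomic_add() expands to __sync_fetch_and_add(); define a local
     fallback so this sketch is self-contained. */
  #ifndef clib_smp_atomic_add
  #define clib_smp_atomic_add(addr,increment) __sync_fetch_and_add (addr, increment)
  #endif

  /* one sequence number per packet: fetch-and-add returns the old value */
  static inline u32
  esp_seq_next (u32 * seq)
  {
    return clib_smp_atomic_add (seq, 1) + 1;
  }

  /* a block of sequence numbers reserved up front for a whole frame; the
     caller then owns [first, first + n_packets) */
  static inline u32
  esp_seq_block (u32 * seq, u32 n_packets)
  {
    return clib_smp_atomic_add (seq, n_packets) + 1;
  }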

Finally, there seem to be issues around AES-GCM nonce processing when
operating multi-threaded.  If it is nonce processing, it can probably
(also) be addressed via clib_smp_atomic_add(), but we don't know yet.

We've raised these before, but haven't received much in the way of
response.  Again, we're willing to work on these, but would like a bit of
'guidance' from vpp-dev.

Thanks,

Jim (and the rest of Netgate)


On Thu, Jun 28, 2018 at 1:44 AM, Vamsi Krishna  wrote:


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-29 Thread Damjan Marion

Hi Jim,

Atomic add sounds like a reasonable solution to me...

-- 
Damjan

> On 28 Jun 2018, at 09:26, Jim Thompson  wrote:

Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-29 Thread Vamsi Krishna
Hi Damjan, Dave,

Can you please also answer the questions I had in the email just before Jim
hijacked the thread.

Thanks
Vamsi

On Fri, Jun 29, 2018 at 3:06 PM, Damjan Marion  wrote:


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-06-29 Thread Damjan Marion
Dear Vamsi,

It was a long, long time ago when I wrote the original ipsec code, so I don't
remember the details anymore.
For IKEv2, I would suggest using an external implementation, as the ikev2 stuff
is just PoC quality code.

--
Damjan

> On 29 Jun 2018, at 18:38, Vamsi Krishna  wrote:
>
> Hi Damjan, Dave,
>
> Can you please also answer the questions I had in the email just before Jim
> hijacked the thread.



Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-01 Thread Vamsi Krishna
Thanks Damjan.

Is the IPSec code using an spd attached to an interface ("set interface ipsec
spd <interface> <spd-id>") production quality?
How is the performance of this code in terms of throughput? Are there any
benchmarks that can be referred to?

Thanks
Vamsi

> On Sat, Jun 30, 2018 at 1:35 AM, Damjan Marion  wrote:


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-02 Thread Kingwel Xie
Hi Vamsi, Damjan,

I’d like to contribute my two cents about IPSEC. We have been working on
improvements for quite some time.

  1.  Great that VPP supports IPSEC, but the code is mainly PoC quality. It
lacks many features: buffer chains, AES-GCM/AES-CTR, UDP encap (seems to be
there already in the master branch?), many hardcoded values, broken packet
trace, SEQ handling, etc.
  2.  Performance is not good, because of wrong usage of openssl and buffer
copying. We can see a 100% improvement after fixing all these issues.
  3.  DPDK IPsec has better performance, but the quality of the code is not
good; many bugs.

If you are looking for a production IPSEC, vpp is a good start but you still
have a lot of things to do.

Regards,
Kingwel

From: vpp-dev@lists.fd.io  On Behalf Of Vamsi Krishna
Sent: Monday, July 02, 2018 12:05 PM
To: Damjan Marion 
Cc: Jim Thompson ; Dave Barach ; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Is VPP IPSec implementation thread safe?


Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-02 Thread Damjan Marion

-- 
Damjan

> On 2 Jul 2018, at 11:14, Kingwel Xie  wrote:
> 
> Hi Vamsi, Damjan,
>  
> I’d like to contribute my two cents about IPSEC. We have been working on
> improvements for quite some time.
>  
> Great that VPP supports IPSEC, but the code is mainly PoC quality. It lacks
> many features: buffer chains, AES-GCM/AES-CTR, UDP encap (seems to be there
> already in the master branch?), many hardcoded values, broken packet trace,
> SEQ handling, etc.
> Performance is not good, because of wrong usage of openssl and buffer copying.

Buffer copying is needed, otherwise you have a problem with cloned buffers,
i.e. you still want the original packet to be SPANed.

> We can see a 100% improvement after fixing all these issues.
> DPDK IPsec has better performance, but the quality of the code is not good;
> many bugs.
>  
> If you are looking for a production IPSEC, vpp is a good start but you still
> have a lot of things to do.

Contributions are welcome :)




Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-02 Thread Jim Thompson

> On Jul 1, 2018, at 11:05 PM, Vamsi Krishna  wrote:
> 
> How is the performance of this code in terms of throughput, are there any 
> benchmarks that can be referred to? 

Four-host setup (2 hosts for tunnel endpoints, 2 hosts outside the tunnel as
source & sink)
Source / sink: Xeon E3-1275 v3 w/ 40G xl710
IPsec tunnel endpoints: Intel i7-6950X (10C i7 @ 3.0GHz), xl710 40G NICs,
Coleto Creek QAT card (https://store.netgate.com/ADI/QuickAssist8955.aspx)

Context:
“Kernel” is linux kernel
“User” is VPP

Testing performed in April 2017
 
Context  Crypto processing           Crypto/AEAD algorithm  Integrity  # of SAs  Total # streams  Iperf3 TCP 1500  # of samples
Kernel   AES-NI                      AES-CBC-128            SHA1       1         1                2.09 Gbps        16
Kernel   AES-NI                      AES-CBC-128            SHA1       1         4                2.07 Gbps        16
Kernel   AES-NI                      AES-CBC-128            SHA1       8         8                10.85 Gbps       6
Kernel   AES-NI                      AES-GCM-128-16         -          1         1                5.06 Gbps        16
Kernel   AES-NI                      AES-GCM-128-16         -          1         4                5.06 Gbps        16
Kernel   AES-NI                      AES-GCM-128-16         -          8         8                25.25 Gbps       6
Kernel   QAT                         AES-CBC-128            SHA1       1         1                8.74 Gbps        16
Kernel   QAT                         AES-CBC-128            SHA1       1         4                8.74 Gbps        16
Kernel   QAT                         AES-CBC-128            SHA1       8         8                27.08 Gbps       6
User     VPP native (OpenSSL 1.0.1)  AES-CBC-128            SHA1       1         1                2.03 Gbps        16
User     VPP native (OpenSSL 1.0.1)  AES-CBC-128            SHA1       1         4                3.39 Gbps        16
User     VPP native (OpenSSL 1.0.1)  AES-CBC-128            SHA1       8         8                9.45 Gbps        5
User     VPP AESNI MB cryptodev      AES-CBC-128            SHA1       1         1                7.42 Gbps        6
User     VPP AESNI MB cryptodev      AES-CBC-128            SHA1       1         4                8.28 Gbps        6
User     VPP AESNI GCM cryptodev     AES-GCM-128-16         -          1         1                13.70 Gbps       6
User     VPP AESNI GCM cryptodev     AES-GCM-128-16         -          1         4                15.93 Gbps       6
User     VPP QAT cryptodev           AES-CBC-128            SHA1       1         1                32.68 Gbps       15
User     VPP QAT cryptodev           AES-CBC-128            SHA1       1         4                35.72 Gbps       16
User     VPP QAT cryptodev           AES-CBC-128            SHA1       8         8                36.32 Gbps       6
User     VPP QAT cryptodev           AES-GCM-128-16         -          1         1                32.73 Gbps       6
User     VPP QAT cryptodev           AES-GCM-128-16         -          1         4                32.98 Gbps       5

36.32Gbps is as close as you’re going to get to filling a 40Gbps NIC w/IPsec, 
due to framing overheads.
Tests using VPP w/ GCM not performed at 8 SAs / 8 streams due to issues I 
posted about last week.
We plan to repeat these with Skylake Xeon CPUs, a more modern VPP, 100Gbps NICs 
and a Lewisburg QAT device.


L3 forwarding (no IPsec, minimal routes, no ACLs) using the same setup:
Kernel, 1 stream, 64B: 804 kpps, 512B: 808 kpps, 1500B: 806 kpps
Kernel, 4 stream, 64B: 2.93 Mpps, 512B: 2.92 Mpps, 1500B: 2.91 Mpps
Kernel, 8 stream, 64B: 5.16 Mpps, 512B: 5.14 Mpps, 1500B: 3.28 Mpps

VPP, 1 stream, 64B: 14.05 Mpps, 512B: 8.84 Mpps, 1500B: 3.28 Mpps
VPP, 4 stream, 64B: 32.23 Mpps, 512B: 9.39 Mpps, 1500B: 3.28 Mpps
VPP, 8 stream, 64B: 42.60 Mpps, 512B: 9.39 Mpps, 1500B: 3.28 Mpps





Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-02 Thread Kingwel Xie
Hi Damjan,

Thanks for the heads-up. That hadn't occurred to me. I’m still thinking it is
acceptable if we are doing IPSec. Buffer copying is a significant overhead.

We are working on the code, will contribute when we think it is ready. There 
are so many corner cases of IPSec, hard to say we can cover all of them.

Regards,
Kingwel

From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion
Sent: Monday, July 02, 2018 7:43 PM
To: Kingwel Xie 
Cc: Vamsi Krishna ; Jim Thompson ; Dave 
Barach ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Is VPP IPSec implementation thread safe?




Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-03 Thread Damjan Marion via Lists.Fd.Io

> On 3 Jul 2018, at 02:36, Kingwel Xie  wrote:
> 
> Hi Damjan,
>  
> Thanks for the heads-up. That hadn't occurred to me. I’m still thinking it is
> acceptable if we are doing IPSec. Buffer copying is a significant overhead.

What I wanted to say by copying is writing the encrypted data into a new buffer
instead of overwriting the payload of the existing buffer. I would not call
that a significant overhead.

>  
> We are working on the code, will contribute when we think it is ready. There 
> are so many corner cases of IPSec, hard to say we can cover all of them.

I know that other people are also working on the code, so it will be good if
we are all in sync to avoid throwaway work.



Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-06 Thread Kingwel Xie
Well, there is a vector named recycle to remember all the old buffers, which
consequently means a lot of memory resizing and mem_cpy when the vector rate is
256 or so. Counting all of this overhead, I’d say I see around a 7~10% impact,
after fixing the openssl usage issue.

BTW, the openssl issue means we should fully initialize the cipher and hmac
contexts only once, instead of doing it every time we handle a packet.

Taking AES-CBC as an example, when encrypting a packet:

  EVP_CipherInit_ex (ctx, NULL, NULL, NULL, iv, -1);   // only load the iv; it changes per packet
  EVP_CipherUpdate (ctx, in, &out_len, in, in_len);

On the other hand, we do the full initialization when creating the contexts.
Note the keys should be specified here, but not the IV:

  HMAC_Init_ex (sa->context[thread_id].hmac_ctx, sa->integ_key, sa->integ_key_len, md, NULL);
  EVP_CipherInit_ex (sa->context[thread_id].cipher_ctx, cipher, NULL, sa->crypto_key, NULL, is_outbound > 0 ? 1 : 0);

Initialization with the keys takes quite a long time because openssl does a lot
of math. It is not necessary since, as we know, the keys stay unchanged in most
cases for an SA.

Regards,
Kingwel
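
A consolidated sketch of the pattern described above (the function names and
the per-SA, per-thread context placement are illustrative; the OpenSSL EVP
calls are the real API):

  #include <openssl/evp.h>

  /* At SA creation time, once per worker thread: do the expensive key
     schedule here, with the key but without an IV. */
  static void
  sa_cipher_ctx_init (EVP_CIPHER_CTX * ctx, const EVP_CIPHER * cipher,
                      const unsigned char * crypto_key, int is_outbound)
  {
    EVP_CipherInit_ex (ctx, cipher, NULL, crypto_key, NULL, is_outbound ? 1 : 0);
  }

  /* Per packet: only load the fresh IV (an enc/dec flag of -1 keeps the
     previous setting) and run the payload through in place. */
  static int
  sa_cipher_packet (EVP_CIPHER_CTX * ctx, unsigned char * data, int len,
                    const unsigned char * iv)
  {
    int out_len = 0;
    EVP_CipherInit_ex (ctx, NULL, NULL, NULL, iv, -1);
    return EVP_CipherUpdate (ctx, data, &out_len, data, len);
  }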


From: Damjan Marion 
Sent: Tuesday, July 03, 2018 5:14 PM
To: Kingwel Xie 
Cc: Vamsi Krishna ; Jim Thompson ; Dave 
Barach ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Is VPP IPSec implementation thread safe?




Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-06 Thread Damjan Marion via Lists.Fd.Io

We don't use recycle anymore (except at one place), mainly due to the issue of
how dpdk works.
-- 
Damjan

> On 6 Jul 2018, at 11:27, Kingwel Xie  wrote:
> 
> Well, there is a vector named recycle to remember all the old buffers, which
> consequently means a lot of memory resizing and mem_cpy when the vector rate
> is 256 or so. Counting all of this overhead, I’d say I see around a 7~10%
> impact, after fixing the openssl usage issue.



Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-06 Thread Kingwel Xie
Sorry, Damjan. Maybe I confused you.

This is what I am talking about:

In esp_encrypt_node_fn(), the logic is like this:

u32 *recycle = 0;
…
vec_add1 (recycle, i_bi0);
…
free_buffers_and_exit:
  if (recycle)
    vlib_buffer_free (vm, recycle, vec_len (recycle));
  vec_free (recycle);






From: Damjan Marion 
Sent: Friday, July 06, 2018 6:14 PM
To: Kingwel Xie 
Cc: Vamsi Krishna ; Jim Thompson ; Dave 
Barach ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Is VPP IPSec implementation thread safe?




Re: [vpp-dev] Is VPP IPSec implementation thread safe?

2018-07-06 Thread Damjan Marion via Lists.Fd.Io
OK, yes, that is needed as it will take care of buffers which are cloned.
A potential optimisation here would be to reuse the buffer if
vlib_buffer_t->n_add_refs != 0.

-- 
Damjan

> On 6 Jul 2018, at 12:27, Kingwel Xie  wrote: