Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-09 Thread Pravin Shelar
On Fri, Feb 7, 2020 at 1:18 AM Dincer Beken  wrote:
>
> Just repeating the last mail with fixed indentation (the mail was in the wrong format):
>
>
> Hello Ben, Pravin,
>
>
> Thank you for your consideration.
>
>
> >> It is also simpler to fix the stats issue using this approach.
>
>
> >There's no stats issue.  Userspace just counts the number of packets it
> >sent and adds them in.
>
>
>
> Regarding the stats issue: in the case of LTE Broadcast, I have tight
> synchronization periods (from 80 ms down to 10 ms in 5G) in which I need to
> set the elapsed number of octets and the packet number, as well as the
> timestamp, into the header. Since I do not want to make an upcall for each
> packet, I am using the stats of the kernel flows. Since
> OVS_PACKET_CMD_EXECUTE removes the flow directly after use, I am missing the
> stats of the first packet. Therefore I wanted to know if there is a specific
> reason why OVS_PACKET_CMD_EXECUTE always has to use temporary kernel flows.
>
>
> >> >
> >> > On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
> >> > > Another option would be to add a new command that installs and executes
> >> > > the packet in the same netlink msg. That would save us a netlink msg to
> >> > > handle a miss upcall.  What do you think about it?
> >> >
> >> > When I experimented with that in the past, I found that it didn't have a
> >> > noticeable impact on performance.
> >> >
> >> A reduced number of msgs sent here would help in certain situations.
>
>
> >That is plausible, but it didn't help when I measured it previously.
>
>
> If we add a distinct message for packet execution that checks for a flow,
> e.g. OVS_PACKET_CMD_EXECUTE_2, and no flow is found, should the packet be
> dropped? I assume that you would probably like userspace (although we are
> talking here about a dpif-netlink adapter) to be loosely coupled from the
> kernel datapath state (such that userspace does not always have to know 100%
> whether a kernel flow has been purged or not). So this would be an
> unnecessary risk. And if we add the temporary flow creation, would
> OVS_PACKET_CMD_EXECUTE not be redundant?
>
I am not in favor of checking for the flow in OVS_PACKET_CMD_EXECUTE,
for the same issue that you have mentioned: it involves a flow lookup,
and the flow might not exist in the datapath, so we would need to handle
that case. On the other hand, it is much more straightforward in
OVS_FLOW_CMD_NEW. As Ben has mentioned, an OVS_FLOW_ATTR_PACKET attr can
be used for passing the packet data. After executing the actions for the
packet, we can update the stats of the newly added flow.
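
For illustration, a rough sketch of how ovs_flow_cmd_new() could do this at
the end of a successful insert. OVS_FLOW_ATTR_PACKET does not exist today and
skb_from_packet_attr() is a hypothetical helper; only ovs_flow_stats_update()
and ovs_execute_actions() are existing kernel-datapath functions:

    /* Hedged sketch only: after new_flow has been inserted into the flow
     * table, run the attached packet through the flow's actions and credit
     * it to the new flow's stats, so the first packet is not lost. */
    if (a[OVS_FLOW_ATTR_PACKET]) {
        struct sk_buff *packet =
            skb_from_packet_attr(a[OVS_FLOW_ATTR_PACKET]);  /* hypothetical */

        ovs_flow_stats_update(new_flow, new_flow->key.tp.flags, packet);
        ovs_execute_actions(dp, packet, rcu_dereference(new_flow->sf_acts),
                            &new_flow->key);
    }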

>
> > If it did help, then I don't think we'd need a new command; we could
> > just add an OVS_FLOW_ATTR_PACKET to attach a packet to the existing
> > OVS_FLOW_CMD_NEW or OVS_FLOW_CMD_SET commands.
>
>
> I guess that we only need to check in handle_upcalls whether
> should_install_flow is positive; if so, install the flow and append the
> packet, and only if not, create a DPIF_OP_EXECUTE netlink message. This
> looks helpful. I have not worked with OVS_FLOW_CMD_SET yet, but I assume
> it would work likewise.
>
> Regards,
>
> Dincer


Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-07 Thread Dincer Beken
Hello,

Looking further at the code: when upcall_xlate is performed, a hardcoded stats
block is created to represent the first packet:

stats.n_packets = 1;
stats.n_bytes = dp_packet_size(upcall->packet);
stats.used = time_msec();
stats.tcp_flags = ntohs(upcall->flow->tcp_flags);



This information is pushed into the flow stats when an output or a goto_table
action is performed. Additionally, the revalidator, which pulls the stats
information from the kernel, also updates the flow information right after the
flow is established.

Assuming that we are interested in the successfully established kernel flows,
the stats seen by the revalidator should give us the conclusive flow
statistics. Are this stats block and the resubmission of the stats information
then a workaround for the fact that the first packet of a MISS_UPCALL is not
registered in the kernel stats?
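
For reference, a simplified sketch of what the revalidator does with the
kernel stats (condensed from the revalidate()/push path in
ofproto-dpif-upcall.c; ukey->stats exists there, while push_stats_up() is a
hypothetical name for the push step):

    /* The ukey remembers the kernel stats seen at the last poll; only
     * the delta since then is pushed up into the ofproto flow stats. */
    struct dpif_flow_stats delta;

    delta.n_packets = stats.n_packets - ukey->stats.n_packets;
    delta.n_bytes   = stats.n_bytes   - ukey->stats.n_bytes;
    delta.used      = MAX(stats.used, ukey->stats.used);
    delta.tcp_flags = stats.tcp_flags | ukey->stats.tcp_flags;

    ukey->stats = stats;            /* Remember for the next round. */
    push_stats_up(ukey, &delta);    /* hypothetical */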

Regards,
Dincer



Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-07 Thread Dincer Beken
Just repeating the last mail with fixed indentation (the mail was in the wrong format):


Hello Ben, Pravin,


Thank you for your consideration.


>> It is also simpler to fix the stats issue using this approach.


>There's no stats issue.  Userspace just counts the number of packets it
>sent and adds them in.



Regarding the stats issue: in the case of LTE Broadcast, I have tight
synchronization periods (from 80 ms down to 10 ms in 5G) in which I need to
set the elapsed number of octets and the packet number, as well as the
timestamp, into the header. Since I do not want to make an upcall for each
packet, I am using the stats of the kernel flows. Since OVS_PACKET_CMD_EXECUTE
removes the flow directly after use, I am missing the stats of the first
packet. Therefore I wanted to know if there is a specific reason why
OVS_PACKET_CMD_EXECUTE always has to use temporary kernel flows.


>> >
>> > On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
>> > > Another option would be to add a new command that installs and executes
>> > > the packet in the same netlink msg. That would save us a netlink msg to
>> > > handle a miss upcall.  What do you think about it?
>> >
>> > When I experimented with that in the past, I found that it didn't have a
>> > noticeable impact on performance.
>> >
>> A reduced number of msgs sent here would help in certain situations.


>That is plausible, but it didn't help when I measured it previously.


If we add a distinct message for packet execution that checks for a flow,
e.g. OVS_PACKET_CMD_EXECUTE_2, and no flow is found, should the packet be
dropped? I assume that you would probably like userspace (although we are
talking here about a dpif-netlink adapter) to be loosely coupled from the
kernel datapath state (such that userspace does not always have to know 100%
whether a kernel flow has been purged or not). So this would be an
unnecessary risk. And if we add the temporary flow creation, would
OVS_PACKET_CMD_EXECUTE not be redundant?


> If it did help, then I don't think we'd need a new command; we could
> just add an OVS_FLOW_ATTR_PACKET to attach a packet to the existing
> OVS_FLOW_CMD_NEW or OVS_FLOW_CMD_SET commands.


I guess that we only need to check in handle_upcalls whether
should_install_flow is positive; if so, install the flow and append the
packet, and only if not, create a DPIF_OP_EXECUTE netlink message. This
looks helpful. I have not worked with OVS_FLOW_CMD_SET yet, but I assume
it would work likewise.
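
A minimal sketch of that idea (should_install_flow() exists in
ofproto-dpif-upcall.c; dpif_flow_put_with_packet() and dpif_execute_packet()
are hypothetical names for the two paths):

    /* Hedged sketch, not the actual handle_upcalls() code. */
    static void
    handle_upcalls_sketch(struct udpif *udpif, struct upcall *upcalls,
                          size_t n_upcalls)
    {
        for (size_t i = 0; i < n_upcalls; i++) {
            struct upcall *upcall = &upcalls[i];

            if (should_install_flow(udpif, upcall)) {
                /* One netlink message: install the flow and carry the
                 * packet along (proposed OVS_FLOW_ATTR_PACKET), so the
                 * kernel executes it and credits the flow's stats. */
                dpif_flow_put_with_packet(udpif->dpif, upcall);
            } else {
                /* No flow wanted: execute the packet on its own, as
                 * today (DPIF_OP_EXECUTE / OVS_PACKET_CMD_EXECUTE). */
                dpif_execute_packet(udpif->dpif, upcall);
            }
        }
    }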

Regards,

Dincer









Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-06 Thread Ben Pfaff
On Thu, Feb 06, 2020 at 01:32:12PM -0800, Pravin Shelar wrote:
> On Thu, Feb 6, 2020 at 12:18 PM Ben Pfaff  wrote:
> >
> > On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
> > > Another option would be to add a new command that installs and executes
> > > the packet in the same netlink msg. That would save us a netlink msg to
> > > handle a miss upcall.  What do you think about it?
> >
> > When I experimented with that in the past, I found that it didn't have a
> > noticeable impact on performance.
> >
> A reduced number of msgs sent here would help in certain situations.

That is plausible, but it didn't help when I measured it previously.

> It is also simpler to fix the stats issue using this approach.

There's no stats issue.  Userspace just counts the number of packets it
sent and adds them in.
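
As a minimal, self-contained illustration of that accounting (names are
illustrative, not the actual ofproto-dpif code):

    #include <stdint.h>
    #include <stddef.h>

    struct flow_stats {
        uint64_t n_packets;
        uint64_t n_bytes;
        long long int used;       /* Last-used time, in ms. */
    };

    /* Userspace sent this packet itself via OVS_PACKET_CMD_EXECUTE, so no
     * kernel flow ever accounted for it; fold it into the flow's stats
     * before reporting them. */
    static void
    account_executed_packet(struct flow_stats *stats, size_t packet_len,
                            long long int now_ms)
    {
        stats->n_packets += 1;
        stats->n_bytes += packet_len;
        stats->used = now_ms;
    }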


Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-06 Thread Pravin Shelar
On Thu, Feb 6, 2020 at 12:18 PM Ben Pfaff  wrote:
>
> On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
> > Another option would be to add a new command that installs and executes
> > the packet in the same netlink msg. That would save us a netlink msg to
> > handle a miss upcall.  What do you think about it?
>
> When I experimented with that in the past, I found that it didn't have a
> noticeable impact on performance.
>
A reduced number of msgs sent here would help in certain situations. It
is also simpler to fix the stats issue using this approach.

> If it did help, then I don't think we'd need a new command; we could
> just add an OVS_FLOW_ATTR_PACKET to attach a packet to the existing
> OVS_FLOW_CMD_NEW or OVS_FLOW_CMD_SET commands.

Sounds good.

Thanks.


Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-06 Thread Ben Pfaff
On Thu, Feb 06, 2020 at 11:36:19AM -0800, Pravin Shelar wrote:
> Another option would be to add a new command that installs and executes
> the packet in the same netlink msg. That would save us a netlink msg to
> handle a miss upcall.  What do you think about it?

When I experimented with that in the past, I found that it didn't have a
noticeable impact on performance.

If it did help, then I don't think we'd need a new command; we could
just add an OVS_FLOW_ATTR_PACKET to attach a packet to the existing
OVS_FLOW_CMD_NEW or OVS_FLOW_CMD_SET commands.


Re: [ovs-dev] Question Regarding ovs_packet_cmd_execute in Kernel Datapath

2020-02-06 Thread Pravin Shelar
On Wed, Feb 5, 2020 at 10:11 AM Dincer Beken  wrote:
>
> Hello all,
>
> I am not entirely new to Open vSwitch; I am implementing new actions to
> realize the TS 25.446 SYNC algorithm from 3GPP, to create a BM-SC component
> for LTE Multicast (eMBMS). I lack the experience to judge the following
> issue and would kindly ask for your opinion. Excuse me in advance if this
> email does not follow the principles of the mailing list. Here we go:
>
> When a MISS_UPCALL is triggered by the kernel, this causes two netlink
> operations: first, the establishment of a new flow in the datapath via
> OVS_FLOW_CMD_NEW, and then the execution of the missed packet via
> OVS_PACKET_CMD_EXECUTE.
>
> From the code, it seems to me that both of these operations happen in this
> order. Considering this, why does ovs_packet_cmd_execute not check for the
> new flow that was established with OVS_FLOW_CMD_NEW in the first netlink
> message and, if it exists, use it directly, instead of creating a temporary
> new flow every time? One issue this causes is that the installed flow lacks
> the statistics of the first packet (packet count and elapsed octets).
>
> By adding the UFID from OVS_FLOW_CMD_NEW into the OVS_PACKET_CMD_EXECUTE
> message, it is possible to reuse the first flow and effectively update the
> flow statistics.
>
Another option would be to add a new command that installs and executes
the packet in the same netlink msg. That would save us a netlink msg to
handle a miss upcall.  What do you think about it?
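
A hedged sketch of what such a combined request could carry; every attribute
listed below except OVS_FLOW_ATTR_PACKET already exists in the uapi, while
OVS_FLOW_ATTR_PACKET itself is only the proposal discussed in this thread:

    /*
     * OVS_FLOW_CMD_NEW
     *   OVS_FLOW_ATTR_KEY      - flow key
     *   OVS_FLOW_ATTR_MASK     - flow mask
     *   OVS_FLOW_ATTR_ACTIONS  - actions to install
     *   OVS_FLOW_ATTR_UFID     - unique flow identifier
     *   OVS_FLOW_ATTR_PACKET   - (proposed) the packet that missed
     *
     * The kernel would install the flow, execute the attached packet
     * through the flow's actions, and account it in the flow's stats,
     * replacing today's separate OVS_PACKET_CMD_EXECUTE round trip.
     */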


> So, as an example, I added the following code, with which I could reuse the
> flow generated via OVS_FLOW_CMD_NEW. Nothing new here.
>
> if (a[OVS_FLOW_ATTR_UFID]) {
>     /* Extract flow identifier. */
>     struct sw_flow_id sw_flow_id;
>     struct sw_flow_key sw_flow_key;
>
>     int error = ovs_nla_get_identifier(&sw_flow_id, a[OVS_FLOW_ATTR_UFID],
>                                        &sw_flow_key, log);
>     if (error) {
>         goto err_kfree_skb;
>     }
>     rcu_read_lock();
>     dp = get_dp_rcu(net, ovs_header->dp_ifindex);
>     err = -ENODEV;
>     if (!dp)
>         goto err_unlock;
>     /* Check if this is a duplicate flow. */
>     flow = NULL;
>     if (ovs_identifier_is_ufid(&sw_flow_id)) {
>         flow = ovs_flow_tbl_lookup_ufid(&dp->table, &sw_flow_id);
>     }
>     if (flow) {
>         struct sw_flow_actions *sf_acts;
>         struct dp_stats_percpu *stats;
>         u64 *stats_counter;
>         u32 n_mask_hit = 0;    /* No mask lookup happens on the UFID path. */
>         struct sw_flow_key flow_key_dummy = {{0}};
>
>         err = ovs_flow_key_extract_userspace(net, a[OVS_PACKET_ATTR_KEY],
>                                              packet, &flow_key_dummy, log);
>         /* Credit this packet to the existing flow's stats. */
>         ovs_flow_stats_update(flow, flow->key.tp.flags, packet);
>         sf_acts = rcu_dereference(flow->sf_acts);
>         ovs_execute_actions(dp, packet, sf_acts, &flow->key);
>         /* Update datapath statistics, as ovs_dp_process_packet() does. */
>         stats = this_cpu_ptr(dp->stats_percpu);
>         stats_counter = &stats->n_hit;
>         u64_stats_update_begin(&stats->syncp);
>         (*stats_counter)++;
>         stats->n_mask_hit += n_mask_hit;
>         u64_stats_update_end(&stats->syncp);
>         pr_err("Successfully handled the packet via the existing flow of the UFID.\n");
>     }
>     rcu_read_unlock();
>     return error;
> }
>
> I think that ovs_packet_cmd_execute creates those temporary flows for a
> reason, because this is a rather obvious issue. So what I basically would
> like to know is whether there are performance-related or other aspects
> (like having to add the 128-bit UFID as a netlink attribute) to why we
> always create a new flow.
>
> Regards,
> Dincer Beken