Re: [ovs-discuss] Dumping flows with both cookies and offload statistics

2023-02-20 Thread Ilya Maximets via discuss
On 2/17/23 09:45, Viacheslav Galaktionov wrote:
> On 2/10/23 16:40, Ilya Maximets wrote:
>> On 2/10/23 10:14, Viacheslav Galaktionov wrote:
>>> Thank you!
>>>
>>> Small question, though: what do the n_offload_{packets,bytes} fields
>>> represent in the output of "bridge/dump-flows", if not the offload stats
>>> for traffic associated with a given OpenFlow rule? I understand it might
>>> be implemented using datapath flows under the hood, but that shouldn't
>>> matter for my purposes as long as the numbers mean what I expect them to.
>> Hmm, you're right.  I missed that part.
>>
>> Stats from datapath flows are aggregated into the stats of the OpenFlow
>> rules that contributed to the creation of those datapath flows.  The
>> n_offload_{packets,bytes} fields count the stats from offloaded datapath
>> flows separately.  So, yes, these might be what you need.
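
For illustration, an entry in the output of 'ovs-appctl bridge/dump-flows br0'
could look roughly like the hand-written sketch below (the bridge name and all
numbers are made up, and the exact set and ordering of fields depends on the
OVS version):

  table_id=1, duration=120s, n_packets=1000, n_bytes=64000,
      n_offload_packets=800, n_offload_bytes=51200,
      priority=100,ip,nw_dst=10.0.0.1,actions=output:2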
>>
>> Sorry, I don't normally use this appctl command, so I forgot that this
>> functionality had been implemented. :)
>>
>> Dumping datapath flows is typically more useful for troubleshooting,
>> but if you only need the statistics, then 'bridge/dump-flows' should
>> be sufficient.
> Glad to hear this, thank you!
> 
> By the way, I've noticed another small discrepancy in how ovs-ofctl and
> ovs-appctl dump bridge flows: they use different field names for table
> IDs: the former uses "table", while the latter uses "table_id". Do you
> think I'd break anything if I patched ovs-appctl to use "table" as well?

It's hard to tell.  You never know what people use to parse the output
in the wild.  I'd keep it as is.

>>
>> Note though that there are some tricky considerations regarding what is
>> considered 'offloaded' and what is not.  For example, with TC, these stats
>> will count packets matching chains that are marked in_hw.  AFAICT, they
>> will not count flows that are installed in the TC software datapath but
>> not in hardware.
>>
>> For DPDK, the stats will count both fully and partially offloaded flows,
>> even though the actions are executed in software for partially offloaded
>> ones.
> Yes, this is an interesting point, thanks. I wonder if it'd be possible
> to somehow differentiate between SW and HW offloads, but I suppose the

There are different kinds of HW offload.  Currently you can somewhat
differentiate at the datapath flow level, as we provide the dp_layer and
offloaded:[yes|no|partial] attributes for each datapath flow.
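
For illustration, entries in the output of 'ovs-appctl dpctl/dump-flows -m'
could look roughly like the hand-written sketch below (matches are heavily
abbreviated, the numbers are made up, and the exact set of attributes depends
on the OVS version and datapath):

  recirc_id(0),in_port(2),eth_type(0x0800),ipv4(frag=no),
      packets:1000, bytes:64000, used:0.5s, offloaded:yes, dp:tc, actions:3
  recirc_id(0),in_port(3),eth_type(0x0806),
      packets:4, bytes:168, used:2.1s, dp:ovs, actions:2

The first flow is handled by the TC layer and reported as offloaded, while
the second one stays in the regular OVS datapath.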

However, that is not always enough.  Even if TC, for example, reports the
in_hw flag for a certain chain, it doesn't mean that all the packets for
that chain are actually handled in hardware; it only means that the NIC
driver acknowledged the flow.  The actual traffic may or may not be
handled by the HW, depending on the case.

It's complicated. :)

Best regards, Ilya Maximets.

> current behaviour suits my goals well enough.
>>
>> Best regards, Ilya Maximets.
>>
>>> On 2/10/23 01:15, Ilya Maximets wrote:
 On 2/7/23 15:25, Viacheslav Galaktionov via discuss wrote:
> Hi!
>
> I'm trying to figure out if my OvS flows are actually offloaded to TC or
> DPDK.  To simplify flow management, the test application uses cookies to
> assign unique IDs to flows.
>
> As I understand, there are two ways to dump flows from OvS:
>
> 1. Use "ovs-ofctl dump-flows", which doesn't return offload stats.
> 2. Use "ovs-appctl bridge/dump-flows", which doesn't return the cookies.
>
> ovs-ofctl is supposed to work with any OpenFlow switch, and offload stats
> are not part of the OpenFlow protocol, so its behaviour is perfectly
> understandable.
 Both methods above dump OpenFlow rules, and OVS doesn't actually offload
 OpenFlow rules themselves.  While passing a packet through the OpenFlow
 tables, OVS creates a simpler datapath flow and sends it to the datapath,
 possibly offloading that datapath flow in the process.

 So you won't be able to get offload stats from OpenFlow rules, and neither
 of the commands above is suitable for you.

 You can dump datapath flows instead with 'ovs-appctl dpctl/dump-flows'.
 By using the '-m' option you can get additional flags like 'dp_layer'
 and 'offloaded:[yes|no|partial]' for each datapath flow.

> However, ovs-appctl is an OvS-specific tool and I don't see any reason why
> it would omit this flow-identifying information, especially since it
> doesn't seem to provide any alternative. I've done some quick testing and
> this patch appears to give me what I need:
>
> diff --git a/ofproto/ofproto.c b/ofproto/ofproto.c
> index 3a527683c..bdf1e7467 100644
> --- a/ofproto/ofproto.c
> +++ b/ofproto/ofproto.c
> @@ -4803,6 +4803,9 @@ flow_stats_ds(struct ofproto *ofproto, struct rule *rule, struct ds *results,
>      if (rule->table_id != 0) {
>          ds_put_format(results, "table_id=%"PRIu8", ", rule->table_id);
> 

Re: [ovs-discuss] Problem with ovn and neutron dynamic routing

2023-02-20 Thread Luis Tomas Bolivar via discuss
We hit this problem a while ago and reported it here:
https://bugzilla.redhat.com/show_bug.cgi?id=1906455

On Mon, Feb 20, 2023 at 9:56 AM Plato, Michael via discuss <
ovs-discuss@openvswitch.org> wrote:

> Hello,
>
>
>
> We have a problem with OVN in connection with Neutron dynamic routing
> (which is now supported with OVN). We can announce our internal networks
> via BGP, and the VMs in these networks can also be reached directly,
> without NAT.
>
> But if we attach a public floating IP to the internal self-service network
> IP, we see some strange effects. The VM can still be reached via ping on
> both IPs, but SSH, for example, only works via the floating IP. I did some
> network traces and found that the return traffic is being NATed even though
> no NAT was applied on the way in. From my point of view we need a conntrack
> marker that identifies traffic that was DNATed on the way in, so that only
> this traffic is SNATed on the way back. Is it possible to implement
> something like this to fully support OVN with BGP-announced networks that
> are directly reachable via routing?
>
>
>
> Thanks for your reply, and best regards!
>
>
>
> Michael
>


-- 
LUIS TOMÁS BOLÍVAR
Principal Software Engineer
Red Hat
Madrid, Spain
ltoma...@redhat.com


[ovs-discuss] Problem with ovn and neutron dynamic routing

2023-02-20 Thread Plato, Michael via discuss
Hello,

We have a problem with OVN in connection with Neutron dynamic routing (which
is now supported with OVN). We can announce our internal networks via BGP, and
the VMs in these networks can also be reached directly, without NAT.

But if we attach a public floating IP to the internal self-service network IP,
we see some strange effects. The VM can still be reached via ping on both IPs,
but SSH, for example, only works via the floating IP. I did some network traces
and found that the return traffic is being NATed even though no NAT was applied
on the way in. From my point of view we need a conntrack marker that identifies
traffic that was DNATed on the way in, so that only this traffic is SNATed on
the way back. Is it possible to implement something like this to fully support
OVN with BGP-announced networks that are directly reachable via routing?
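
To sketch the idea in plain OpenFlow terms (this is not OVN's actual logical
pipeline; the table numbers, addresses and ports below are made up and only
meant to show how a conntrack mark could gate the reverse NAT):

  # inbound: traffic to the floating IP is DNATed and the connection is
  # marked in conntrack
  table=10, ip,nw_dst=203.0.113.10 actions=ct(commit,nat(dst=10.0.0.5),exec(set_field:1->ct_mark),table=20)
  # inbound: traffic to the BGP-announced internal IP is committed without
  # NAT and without the mark
  table=10, ip,nw_dst=10.0.0.5 actions=ct(commit,table=20)

  # return path: restore conntrack state first
  table=30, ip,ct_state=-trk actions=ct(table=30)
  # only connections carrying the mark (i.e. DNATed on the way in) get the
  # reverse translation back to the floating IP
  table=30, ip,ct_state=+trk+est,ct_mark=1 actions=ct(nat),output:2
  # unmarked connections keep the internal source IP
  table=30, ip,ct_state=+trk+est,ct_mark=0 actions=output:2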

Thanks for your reply, and best regards!

Michael