On 7/10/24 18:30, Adrián Moreno wrote:
> On Wed, Jul 10, 2024 at 04:28:55PM GMT, Eelco Chaudron wrote:
>>
>>
>> On 10 Jul 2024, at 15:06, Ilya Maximets wrote:
>>
>>> On 7/7/24 22:09, Adrian Moreno wrote:
>>>> Add a command to dump statistics per exporter.
>>>>
>>>> Signed-off-by: Adrian Moreno <amore...@redhat.com>
>>>> ---
>>>>  NEWS                           |   2 +
>>>>  ofproto/ofproto-dpif-lsample.c | 111 +++++++++++++++++++++++++++++++++
>>>>  ofproto/ofproto-dpif-lsample.h |   1 +
>>>>  ofproto/ofproto-dpif.c         |   1 +
>>>>  4 files changed, 115 insertions(+)
>>>>
>>>> diff --git a/NEWS b/NEWS
>>>> index 15faf9fc3..1c53badea 100644
>>>> --- a/NEWS
>>>> +++ b/NEWS
>>>> @@ -20,6 +20,8 @@ Post-v3.3.0
>>>>       allows samples to be emitted locally (instead of via IPFIX) in a
>>>>       datapath-specific manner. The Linux kernel datapath is the first to
>>>>       support this feature by using the new datapath "psample" action.
>>>> +     A new unixctl command 'lsample/show' shows packet and bytes statistics
>>>> +     per local sample exporter.
>>>
>>> Maybe add this as a separate bullet point.
>>>
>>>>
>>>>
>>>>  v3.3.0 - 16 Feb 2024
>>>> diff --git a/ofproto/ofproto-dpif-lsample.c b/ofproto/ofproto-dpif-lsample.c
>>>> index 171129d5b..82a87c27d 100644
>>>> --- a/ofproto/ofproto-dpif-lsample.c
>>>> +++ b/ofproto/ofproto-dpif-lsample.c
>>>> @@ -21,7 +21,10 @@
>>>>  #include "dpif.h"
>>>>  #include "hash.h"
>>>>  #include "ofproto.h"
>>>> +#include "ofproto-dpif.h"
>>>> +#include "openvswitch/dynamic-string.h"
>>>>  #include "openvswitch/thread.h"
>>>> +#include "unixctl.h"
>>>>
>>>>  /* Dpif local sampling.
>>>>   *
>>>> @@ -219,3 +222,111 @@ dpif_lsample_unref(struct dpif_lsample *lsample)
>>>>          dpif_lsample_destroy(lsample);
>>>>      }
>>>>  }
>>>> +
>>>> +static int
>>>> +comp_exporter_collector_id(const void *a_, const void *b_)
>>>> +{
>>>> +    const struct lsample_exporter_node *a, *b;
>>>> +
>>>> +    a = *(struct lsample_exporter_node **) a_;
>>>> +    b = *(struct lsample_exporter_node **) b_;
>>>> +
>>>> +    if (a->exporter.options.collector_set_id >
>>>> +        b->exporter.options.collector_set_id) {
>>>> +        return 1;
>>>> +    }
>>>> +    if (a->exporter.options.collector_set_id <
>>>> +        b->exporter.options.collector_set_id) {
>>>> +        return -1;
>>>> +    }
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static void
>>>> +lsample_exporter_list(struct dpif_lsample *lsample,
>>>> +                      struct lsample_exporter_node ***list,
>>>> +                      size_t *num_exporters)
>>>> +{
>>>> +    struct lsample_exporter_node **exporter_list;
>>>> +    struct lsample_exporter_node *node;
>>>> +    size_t k = 0, n;
>>>> +
>>>> +    n = cmap_count(&lsample->exporters);
>>>> +
>>>> +    exporter_list = xcalloc(n, sizeof *exporter_list);
>>>> +
>>>> +    CMAP_FOR_EACH (node, node, &lsample->exporters) {
>>>> +        if (k >= n) {
>>>> +            break;
>>>> +        }
>>>> +        exporter_list[k++] = node;
>>>
>>> There is no guarantee that cmap didn't change between getting the
>>> number of exporters and iterating over them.  Need to either take
>>> a lock or grow the array on demand.
>>
>> I initially thought the same, but it’s covered by the break above.
>>
>> Thinking about it again, we might actually fail to show entries that would
>> otherwise be displayed, because a new entry earlier in the map can push
>> them past the limit. So I guess we should probably grow the exporter_list
>> (rather than hold the lock).
>>
> 
> Right. If the map has grown since we took the size, we will stop before
> iterating over all of the nodes: once k equals n, we break out of the loop.
> If the map has shrunk, CMAP_FOR_EACH will simply finish early.
> 
> AFAICS, in both cases it's safe to access the data (nodes will not be
> freed until the next quiescent period).
> 
> What can happen is that we don't show exactly the current state of the
> map, e.g.: we show a node that has just been deleted from the map, or
> we don't show a node that has just been added.
> 
> The only reason for having more than one exporter per bridge is if we
> want to have multiple "observability applications" using different
> group ids. In any case, I don't expect the number of exporters to be
> very high, nor their creation/deletion rate.
> 
> With all this, I considered that sporadic, slightly inaccurate output
> was an acceptable cost.
> 
> What do you think?

I actually missed the length check.  So, there will be no incorrect memory
accesses.  And for the inaccuracy of the unixctl output, that sounds fine
to me.

Best regards, Ilya Maximets.
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev