On 3/26/26 5:26 PM, Xavier Simonart wrote:
> Hi Dumitru
>

Hi Xavier,


> I also have one comment on one of your (not so) nits below.
> Thanks
> Xavier
> 

[...]

>>>> @@ -3271,22 +3308,44 @@ dump_statistics() {
>>>>      ch1_rep=$(grep -c "ICMP echo reply" ch1.tcpdump)
>>>>      ch2_req=$(grep -c "ICMP echo request" ch2.tcpdump)
>>>>      ch2_rep=$(grep -c "ICMP echo reply" ch2.tcpdump)
>>>> +    ch3_req=$(grep -c "ICMP echo request" ch3.tcpdump)
>>>> +    ch3_rep=$(grep -c "ICMP echo reply" ch3.tcpdump)
>>>>      gw1_req=$(grep -c "ICMP echo request" gw1.tcpdump)
>>>>      gw1_rep=$(grep -c "ICMP echo reply" gw1.tcpdump)
>>>>      gw2_req=$(grep -c "ICMP echo request" gw2.tcpdump)
>>>>      gw2_rep=$(grep -c "ICMP echo reply" gw2.tcpdump)
>>>>      gw3_req=$(grep -c "ICMP echo request" gw3.tcpdump)
>>>>      gw3_rep=$(grep -c "ICMP echo reply" gw3.tcpdump)
>>>> -    echo "$n1 claims in gw1, $n2 in gw2 and $n3 on gw3"
>>>> -    echo "ch2_request=$ch2_req gw1_request=$gw1_req gw2_request=$gw2_req gw3_request=$gw3_req ch1_request=$ch1_req ch1_reply=$ch1_rep gw1_reply=$gw1_rep gw2_reply=$gw2_rep gw3_reply=$gw3_rep ch2_reply=$ch2_rep"
>>>> +    echo "$n1 claims in gw1, $n2 in gw2 and $n3 on gw3" >&2
>>>> +    echo "ch3_req=$ch3_req gw_req=($gw1_req + $gw2_req +$gw3_req) ch1_req=$ch1_req ch1_rep=$ch1_rep gw_rep=($gw1_rep + $gw2_rep + $gw3_rep) ch3_rep=$ch3_rep ch2=($ch2_req+$ch2_rep)" >&2
>>>> +    echo "$((ch3_req - ch3_rep))"
>>>>  }
>>>>
>>>> -check_migration_between_gw1_and_gw2() {
>>>> -    action=$1
>>>> -    send_background_packets
>>>> +add_port() {
>>>> +    bridge=$1
>>>> +    interface=$2
>>>> +    address=$3
>>>> +    echo "Adding $bridge $interface $address"
>>>> +
>>>> +    pid=$(podman inspect -f '{{.State.Pid}}' ovn-gw-1)
>>>> +    ln -sf /proc/$pid/ns/net /var/run/netns/$pid
>>>> +    port=$(OVS_RUNDIR= ovs-vsctl --data=bare --no-heading --columns=name find interface \
>>>> +           external_ids:container_id=ovn-gw-1 external_ids:container_iface="$interface")
>>>> +    port="${port:0:13}"
>>>> +    ip link add "${port}_l" type veth peer name "${port}_c"
>>>> +    ip link set "${port}_l" up
>>>> +    ip link set "${port}_c" netns $pid
>>>> +    ip netns exec $pid ip link set dev "${port}_c" name "$interface"
>>>> +    ip netns exec $pid ip link set "$interface" up
>>>> +    if [[ -n "$address" ]]; then
>>>> +        ip netns exec $pid ip addr add "$address" dev "$interface"
>>>> +    fi
>>
>> I might be wrong but I think nobody cleans up any of these ports created
>> in the ovn-gw-1 container.  Do we need some on_exit() calls here?
>>
> I do not think so. We stop a container, and the ports on the host are
> deleted as a consequence. add_port just "repairs" the container after
> restarting it, adding back the ports which ovn-fake-multinode initially
> added.
> We could debate whether add_port should itself be registered in on_exit,
> right after we podman stop the container, but that looked like overkill
> to me: we call add_port right after podman stop/start, so on_exit would
> only help if podman stop/start failed (in which case the cluster is in a
> bad state anyway), or if e.g. we ctrl-c between podman stop and add_port.
> WDYT?
> 
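As a side note on the veth naming in the add_port() hunk above: the
`${port:0:13}` truncation is what keeps the generated names legal. Linux
limits interface names to 15 visible characters (IFNAMSIZ minus the
trailing NUL), and the function appends a two-character "_l"/"_c" suffix.
A minimal sketch (the long port name below is made up for illustration):

```shell
# Linux interface names hold at most 15 characters (IFNAMSIZ is 16,
# including the trailing NUL).  Truncating the base name to 13 characters
# leaves exactly enough room for the "_l"/"_c" suffixes.
port="lsp-very-long-logical-port-name"   # hypothetical example name
port="${port:0:13}"
host_side="${port}_l"
cont_side="${port}_c"
echo "$host_side (${#host_side} chars)"
echo "$cont_side (${#cont_side} chars)"
```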

Maybe I'm misunderstanding stuff but we don't stop/start containers
during one execution of the multinode.at testsuite.  At least not in
GitHub CI.  What we do is:

- start the fake-multinode cluster (this does podman run under the hood):
CHASSIS_COUNT=4 GW_COUNT=4 ./ovn_cluster.sh start

- run "make check-multinode": this only runs "podman exec" commands
inside the containers started above

- collect logs

- stop cluster:
./ovn_cluster.sh stop

So what I meant was: if you run the full test suite, when this specific
test that called "add_port()" ends, we still have the veth pair
configured on the system, at least until the full test suite ends.

That may cause issues as tests that happen to run after this specific
one will have to take into account that some veths might exist.

In general, we (should) try to clean up everything that was created by a
test (an instance of AT_SETUP) when that test ends, whether it succeeds
or fails.
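Concretely, the idea would be to register the teardown at the point where
the veth pair is created, so it runs whether the test passes or fails.
A rough sketch of the mechanism, using plain trap (on_exit_sketch and
del_port_sketch are hypothetical names for this email, not testsuite
helpers; the real testsuite has its own on_exit()):

```shell
# Rough sketch of trap-based cleanup (hypothetical helper names).
cleanup_cmds=":"

on_exit_sketch() {
    # Prepend, so cleanups run in reverse order of registration.
    cleanup_cmds="$1; $cleanup_cmds"
    trap 'eval "$cleanup_cmds"' EXIT
}

del_port_sketch() {
    # Deleting the host-side end of a veth pair removes both ends,
    # including the peer that was moved into the container's netns.
    ip link del "${1}_l" 2>/dev/null || true
}

# In add_port(), right after "ip link add ...", one would register:
#   on_exit_sketch "del_port_sketch $port"
```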

Regards,
Dumitru

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
