Re: [ovs-discuss] [ovs-dev] [OVN][RAFT]Why left server cannot be added back to
Hi Ben,

I agree that, under normal circumstances, there is no need to allow a left server back into the original cluster. But I have run into some strange situations. For example, sometimes one server ends up with two different UUIDs recorded by the leader. I think that if a left server could rejoin the original cluster, that would be an easy way to resolve many of these strange problems.

Thanks,
Yun

At 2019-11-06 05:31:25, "Ben Pfaff" wrote:
>On Tue, Nov 05, 2019 at 08:10:41PM +0800, taoyunupt wrote:
>> Hi Numan,
>> When I run an OVN/RAFT cluster, I found that a server (one that
>> voluntarily left or was kicked out) cannot be added back to the original
>> cluster. I found the code below; can you tell me the reason? Many thanks!
>>
>> case RAFT_REC_NOTE:
>> if (!strcmp(r->note, "left")) {
>> return ovsdb_error(NULL, "record %llu indicates server has left "
>>"the cluster; it cannot be added back (use "
>>"\"ovsdb-tool join-cluster\" to add a new "
>>"server)", rec_idx);
>
>The Raft dissertation doesn't contemplate the possibility of a server
>re-joining a cluster. Allowing it would add new corner cases that
>aren't worth dealing with.

___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
[ovs-discuss] can OVS conntrack support IP list like this: actions=ct(commit, table=0, zone=1, nat(dst=220.0.0.3, 220.0.0.7, 220.0.0.123))?
Hi, folks

We need to do SNAT for many internal IPs using just a few public IPs, and we also need to do DNAT with some other public IPs to expose a web service. The OpenFlow rules look like the below:

table=0,ip,nw_src=172.17.0.0/16,…,actions=ct(commit,table=0,zone=1,nat(src=220.0.0.3,220.0.0.7,220.0.0.123))
table=0,ip,nw_src=172.18.0.67,…,actions=ct(commit,table=0,zone=1,nat(src=220.0.0.3,220.0.0.7,220.0.0.123))
table=0,ip,tcp,nw_dst=220.0.0.11,tp_dst=80,…,actions=ct(commit,table=0,zone=2,nat(dst=172.16.0.100:80))
table=0,ip,tcp,nw_dst=220.0.0.11,tp_dst=443,…,actions=ct(commit,table=0,zone=2,nat(dst=172.16.0.100:443))

From the ct documentation, it seems nat cannot take an IP list. Does anybody know how we can handle such cases in some feasible way?

In addition, is it OK if multiple OpenFlow rules use the same NAT IP:PORT combination? I'm not sure whether it will result in conflicts for SNAT, because all of them need to do dynamic source port mapping; per my tests, it doesn't seem to be a problem.

Thank you all in advance, and your help is sincerely appreciated.
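One approach (not stated in the thread, but consistent with the ovs-actions documentation of ct()) is that nat() accepts a contiguous address range written as addr1-addr2, even though a comma-separated list is not supported. A sketch, collapsing the thread's first two addresses into a range purely for illustration:

```shell
# SNAT the internal prefix to a contiguous public range.  ct() nat takes a
# single range (addr1-addr2), not a comma-separated list; bridge name and
# addresses are illustrative.
ovs-ofctl add-flow br0 \
  'table=0,ip,nw_src=172.17.0.0/16,actions=ct(commit,table=0,zone=1,nat(src=220.0.0.3-220.0.0.7))'
```

A non-contiguous address such as 220.0.0.123 would need its own flow (for example, selected by splitting nw_src across several rules), since one nat() clause carries only one range.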
Re: [ovs-discuss] [ovs-dev] [OVN][RAFT]Why left server cannot be added back to
On Tue, Nov 05, 2019 at 08:10:41PM +0800, taoyunupt wrote:
> Hi Numan,
> When I run an OVN/RAFT cluster, I found that a server (one that
> voluntarily left or was kicked out) cannot be added back to the original
> cluster. I found the code below; can you tell me the reason? Many thanks!
>
> case RAFT_REC_NOTE:
> if (!strcmp(r->note, "left")) {
> return ovsdb_error(NULL, "record %llu indicates server has left "
>"the cluster; it cannot be added back (use "
>"\"ovsdb-tool join-cluster\" to add a new "
>"server)", rec_idx);

The Raft dissertation doesn't contemplate the possibility of a server re-joining a cluster. Allowing it would add new corner cases that aren't worth dealing with.
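The error message above points at `ovsdb-tool join-cluster` as the supported path: discard the departed server's old database and join it back as a brand-new member. A minimal sketch, assuming an OVN southbound cluster; the database path, schema name, and tcp addresses below are illustrative, not from the thread:

```shell
# On the server that left the cluster: remove its stale clustered database,
# then create a fresh one that joins the existing cluster as a NEW member.
# (Path, schema name, and addresses are hypothetical examples.)
rm /var/lib/ovn/ovnsb_db.db
ovsdb-tool join-cluster /var/lib/ovn/ovnsb_db.db OVN_Southbound \
    tcp:10.0.0.3:6644 \
    tcp:10.0.0.1:6644 tcp:10.0.0.2:6644   # local address first, then remotes
# Restarting ovsdb-server against this database completes the join.
```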
Re: [ovs-discuss] gso packet is failing with af_packet socket with packet_vnet_hdr
Hi Flavio,

As per your inputs, I modified the gso_size, and now skb_gso_validate_mtu(skb, mtu) returns true, and ip_finish_output2(sk, skb) and dst_neigh_output(dst, neigh, skb) are getting called. But I still see the large packets being dropped somewhere further down in the kernel, and retransmissions happening.

if (skb_gso_validate_mtu(skb, mtu))
    return ip_finish_output2(sk, skb);

[ 1854.905733] vxlan_xmit:2262 skb->len:2776 packet_length:2762
[ 1854.905744] skb_gso_size_check:4478 and seg_len:1500 and max_len:1500 and shinfo->gso_size:1398 and GSO_BY_FRAGS:65535

The gso_size of 1398 bytes is correct in my case (1398 + 50 (VXLAN header) + 20 (IP) + 32 (TCP) + 14 (ETH) = 1514 bytes).

The code is simple:

vnet = buf; /* buf is an array of 64k bytes */
len = 0;
if (csum) {
    vnet->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
    vnet->csum_start = ETH_HLEN + sizeof(*iph);
    vnet->csum_offset = __builtin_offsetof(struct tcphdr, check);
}
if (gso) {
    vnet->hdr_len = ETH_HLEN + sizeof(*iph) + sizeof(*tcph);
    vnet->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
    vnet->gso_size = ETH_DATA_LEN - 50 - sizeof(struct iphdr)
                     - sizeof(struct tcphdr); /* 50 is the vxlan header */
} else {
    vnet->gso_type = VIRTIO_NET_HDR_GSO_NONE;
    vnet->gso_size = 0;
}
len = sizeof(*vnet);
/* Now copy the entire L2 packet into buf starting at offset buf + len
   and send the packet. */

Did I miss something? And I am not sure how OVS behaves after receiving this packet and before transmitting it to vxlan. How does checksum offloading happen with af_packet in OVS? Does OVS have any role in this?

Please see the attached image for reference. The packet flow within the host is:

Ubuntu container (eth0 (1500 MTU)) --routing lookup--> Ubuntu container (veth0 (1450 MTU)) -> OVS (veth1 (1450 MTU)) -> vxlan (65K MTU) -> eth0 (physical interface (1500 MTU)) -> other machine.

Looking forward to your reply.

Regards,
Ramana

On Mon, Nov 4, 2019 at 10:41 PM Ramana Reddy wrote:
> Thanks, Flavio.
> I will check it out tomorrow and let you know how it goes.
>
> Regards,
> Ramana
>
> On Mon, Nov 4, 2019 at 10:15 PM Flavio Leitner wrote:
>
>> On Mon, 4 Nov 2019 21:32:28 +0530 Ramana Reddy wrote:
>>
>> > Hi Flavio Leitner,
>> > Thank you very much for your reply. Here is the code snippet. But the
>> > same code is working if I send the packet without ovs.
>>
>> Could you provide more details on the OvS environment and the test?
>>
>> The linux kernel propagates the header size dependencies when you stack
>> the devices in net_device->hard_header_len, so in the case of vxlan dev
>> it will be:
>>
>> needed_headroom = lowerdev->hard_header_len;
>> needed_headroom += VXLAN_HEADROOM;
>> dev->needed_headroom = needed_headroom;
>>
>> Sounds like that is helping when OvS is not being used.
>>
>> fbl
>>
>> > bool csum = true;
>> > bool gso = true;
>> > struct virtio_net_hdr *vnet = buf;
>> > if (csum) {
>> >     vnet->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
>> >     vnet->csum_start = ETH_HLEN + sizeof(*iph);
>> >     vnet->csum_offset = __builtin_offsetof(struct tcphdr, check);
>> > }
>> >
>> > if (gso) {
>> >     vnet->hdr_len = ETH_HLEN + sizeof(*iph) + sizeof(*tcph);
>> >     vnet->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
>> >     vnet->gso_size = ETH_DATA_LEN - sizeof(struct iphdr) -
>> >                      sizeof(struct tcphdr);
>> > } else {
>> >     vnet->gso_type = VIRTIO_NET_HDR_GSO_NONE;
>> > }
>> > Regards,
>> > Ramana
>> >
>> > On Mon, Nov 4, 2019 at 8:39 PM Flavio Leitner wrote:
>> >
>> > > Hi,
>> > >
>> > > What's the value you're passing on gso_size in struct
>> > > virtio_net_hdr? You need to leave room for the encapsulation
>> > > header, e.g.:
>> > >
>> > > gso_size = iface_mtu - virtio_net_hdr->hdr_len
>> > >
>> > > fbl
>> > >
>> > > On Mon, 4 Nov 2019 01:11:36 +0530 Ramana Reddy wrote:
>> > >
>> > > > Hi,
>> > > > I am wondering if anyone can help me with this. I am having
>> > > > trouble sending a tso/gso packet with an af_packet socket with
>> > > > packet_vnet_hdr (through virtio_net_hdr) over a vxlan tunnel in OVS.
>> > > >
>> > > > What I observed is that the following function is eventually hit
>> > > > and returns false (net/core/skbuff.c), hence the packet is
>> > > > dropped: static inline bool skb_gso_size_check(const struct
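Flavio's sizing advice quoted above (leave room for the encapsulation header) matches the numbers in the kernel log earlier in the thread: the working gso_size of 1398 reserves room for the inner IP and TCP headers plus the 50-byte VXLAN overhead. A quick arithmetic sketch of that MTU budget, using only the sizes quoted in the thread:

```shell
# MTU budget for TSO over VXLAN, using the sizes quoted in the thread.
IFACE_MTU=1500        # physical interface MTU (ETH_DATA_LEN)
VXLAN_OVERHEAD=50     # outer Ethernet(14) + IP(20) + UDP(8) + VXLAN(8)
IP_HDR=20             # inner IPv4 header
TCP_HDR=32            # inner TCP header: 20 bytes base + 12 bytes of options
GSO_SIZE=$((IFACE_MTU - VXLAN_OVERHEAD - IP_HDR - TCP_HDR))
echo "$GSO_SIZE"      # matches the shinfo->gso_size:1398 in the log
```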
Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev
On Tue, 5 Nov 2019 18:47:09 + "Tobias Hofmann \(tohofman\) via discuss" wrote:

> Hi Flavio,
>
> thanks for the insights! Unfortunately, I don't know about the pdump
> and its relation to the ring.

pdump dumps packets from dpdk ports into rings/mempools, so that you can inspect/use the traffic:
https://doc.dpdk.org/guides/howto/packet_capture_framework.html

But I looked at the dpdk sources now and I don't see it allocating any memory when the library is initialized, so this is likely a red herring.

> Can you please specify where I can see that the port is not ready
> yet? Is that these three lines:
>
> 2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged
> device (:08:0b.2)

The above shows the device is not ready/bound yet.

> 2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching
> device ':08:0b.2' to DPDK
> 2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set
> configuration (Invalid argument)
>
> As far as I know, the ring allocation failure that you mentioned
> isn't necessarily a bad thing, since it just indicates that DPDK
> reduces something internally (I can't remember what exactly it was)
> to support a high MTU with only 1 GB of memory.

True for the memory allocated for DPDK ports. However, there is a minimum, and if it's not there, the mempool allocation will fail.

> I'm wondering now if it might help to change the timing of when
> openvswitch is started after a system reboot to prevent this problem,
> as it only occurs after reboot. Do you think that this approach might
> fix the problem?

It will help to get the i40e port working, but that "ring error" will continue, as you see after restarting anyway. I don't know about the other interface types; maybe there is another interface failing which is not in the log. Do you see any error reported in 'ovs-vsctl show' after the restart?

fbl
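The workarounds discussed in this thread can be expressed as `ovs-vsctl` settings. The values below are the ones reported to work in the thread (2 GB of socket memory, or a 1500-byte MTU); this is a sketch of the two options, not a definitive memory sizing:

```shell
# Option 1: give DPDK more hugepage memory (the thread reports 2G is
# enough for MTU 9216 plus the ring allocation).
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=2048

# Option 2: lower the port MTU so its mempool needs far less memory.
ovs-vsctl set Interface dpdk-p0 mtu_request=1500
```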
Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev
Hi Flavio,

thanks for the insights! Unfortunately, I don't know about the pdump and its relation to the ring.

Can you please specify where I can see that the port is not ready yet? Is that these three lines:

2019-11-02T14:14:23.094Z|00070|dpdk|ERR|EAL: Cannot find unplugged device (:08:0b.2)
2019-11-02T14:14:23.094Z|00071|netdev_dpdk|WARN|Error attaching device ':08:0b.2' to DPDK
2019-11-02T14:14:23.094Z|00072|netdev|WARN|dpdk-p0: could not set configuration (Invalid argument)

As far as I know, the ring allocation failure that you mentioned isn't necessarily a bad thing, since it just indicates that DPDK reduces something internally (I can't remember what exactly it was) to support a high MTU with only 1 GB of memory.

I'm wondering now if it might help to change the timing of when openvswitch is started after a system reboot to prevent this problem, as it only occurs after reboot. Do you think that this approach might fix the problem?

Thanks for your help
Tobias

On 05.11.19, 14:08, "Flavio Leitner" wrote:

On Mon, 4 Nov 2019 19:12:36 + "Tobias Hofmann (tohofman)" wrote:
> Hi Flavio,
>
> thanks for reaching out.
>
> The DPDK options used in OvS are:
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
>
> For the dpdk port, we set:
> type=dpdk
> options:dpdk-devargs=:08:0b.2
> external_ids:unused-drv=i40evf
> mtu_request=9216

Looks good to me, though the CPU mask has changed compared to the log:

2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 1024 -c 0x0001

What I see from the logs is that OvS is trying to add a port, but the port is not ready yet, so it continues with other things which also consume memory. Unfortunately, by the time the i40e port is ready, there is no memory left. When you restart, the i40e port is ready and the memory can be allocated. However, the ring allocation fails due to lack of memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, then the minimum amount of memory required for the DPDK port drops drastically, which explains why it works. Increasing the total memory to 2G also helps, because then the minimum amount for a 9216 MTU plus the ring seems to be sufficient. The ring seems to be related to pdump, is that the case? I don't know off the top of my head.

In summary, it looks like 1G is not enough for a large MTU plus pdump.

HTH,
fbl

> Please let me know if this is what you asked for.
>
> Thanks
> Tobias
>
> On 04.11.19, 15:50, "Flavio Leitner" wrote:
>
> It would be nice if you share the DPDK options used in OvS.
>
> On Sat, 2 Nov 2019 15:43:18 + "Tobias Hofmann \(tohofman\) via discuss" wrote:
>
> > Hello community,
> >
> > My team and I observe a strange behavior on our system with the
> > creation of dpdk ports in OVS. We have a CentOS 7 system with
> > OpenvSwitch and only one single port of type ‘dpdk’ attached to
> > a bridge. The MTU size of the DPDK port is 9216, and the reserved
> > HugePages for OVS are 512 x 2MB HugePages, i.e. 1 GB of total
> > HugePage memory.
> >
> > Setting everything up works fine; however, after I reboot my
> > box, the dpdk port is in error state and I can observe this
> > line in the logs (full logs attached to the mail):
> > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
> > Invalid argument
> > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> > interface dpdk-p0 new configuration
> >
> > I figured out that by restarting the openvswitch process, the
> > issue with the port is resolved and it is back in a working
> > state. However, as soon as I reboot the system a second time,
> > the port comes up in error state again.
Now, we have also > > observed a couple of other workarounds that I can’t really > > explain why they help: > > > > * When there is also a VM deployed on the system that is > > using ports of type ‘dpdkvhostuserclient’, we never see any > > issues like that. (MTU size of the VM ports is 9216 by the way) > > * When we increase the HugePage memory for OVS to 2GB, we > > also don’t see any issues. > > * Lowering the MTU size of the ‘dpdk’ type port to 1500 also > > helps to prevent this issue. > > > > Can anyone explain t
Re: [ovs-discuss] Re:Re: [HELP] Question about icmp pkt marked Invalid by userspace conntrack
Hi Timo

On Mon, Nov 4, 2019 at 11:29 PM txfh2007 wrote:
> Hi Darrell:
> The meter rate limit is set as 1Gbps, but the actual rate is around
> 500Mbps. I have read the meter patch, but that patch is to prevent delta_t
> from changing to 0. In my case, the delta_t is around 35500ms.

It might be good to just include all known related fixes anyway, including this other one:
https://github.com/openvswitch/ovs/commit/acc5df0e3cb036524d49891fdb9ba89b609dd26a

> For my case, the meter action is on openflow table 46, the ct action
> is on table 44, and the output action is on table 65, so I guess the order is
> right?

Could you dump the 'relevant' datapath flows before adding the meter rule and after adding the meter rule?

ovs-appctl dpif/dump-flows

> Thanks
> Timo
>
> From: Darrell Ball
> Date: Tuesday, November 5, 2019, 06:56
> To: txfh2007
> Cc: Ben Pfaff; ovs-discuss
> Subject: Re: [ovs-discuss] Re:Re: [HELP] Question about icmp pkt marked Invalid by userspace conntrack
>
> Hi Timo
>
> On Sun, Nov 3, 2019 at 5:12 PM txfh2007 wrote:
>
> Hi Darrell:
> Sorry for my late reply. Yes, the two VMs under test are on the same
> compute node, and pkts are rx/tx via a vhost-user type port.
>
> Got it
>
> Firstly, if I don't configure the meter table, then the iperf TCP bandwidth result
> from VM1 to VM2 is around 5Gbps. Then I set the meter entry and constrained
> the rate, and the deviation is larger than I thought.
>
> IIUC, pre-meter you get 5 Gbps, then post-meter 0.5 Gbps, which is less
> than you expected?
> What did you expect the metered rate to be?
> Note Ben pointed you to a meter-related bug fix on the alias before.
>
> I guess the recalculation of the l4 checksum during conntrack would impact
> the actual rate?
>
> Are you applying the meter rule at the end of the complete pipeline?
>
> Thank you
> Timo
>
> From: txfh2007
> To: Ben Pfaff; ovs-discuss
> Subject: Re: [ovs-discuss] Re:Re: [HELP] Question about icmp pkt marked Invalid by userspace conntrack
>
> Hi Timo
>
> I read through this thread to get more context on what you are doing; you
> have a base OVS-DPDK use case and are measuring VM-to-VM performance across 2 compute nodes.
> You are probably using vhost-user-client ports? Please correct me if I am wrong.
> In this case, "per direction" you have one rx virtual interface to handle
> in OVS; there will be a tradeoff between checksum validation security and performance.
> Just to be clear, in terms of your measurements, how did you arrive at the 5Gbps -
> instrumented code or otherwise?
> (I can verify that later when I have a setup.)
>
> Darrell
>
> On Thu, Oct 31, 2019 at 9:23 AM Darrell Ball wrote:
>
> On Thu, Oct 31, 2019 at 3:04 AM txfh2007 via discuss <
> ovs-discuss@openvswitch.org> wrote:
>
> Hi Ben && Darrell:
> This patch works, but after merging this patch I have found the iperf
> throughput decrease from 5Gbps+ to 500Mbps.
>
> What is the 5Gbps number? Is that the number with marking all packets as
> invalid in initial sanity checks?
>
> Typically one wants to offload checksum checks. The code checks whether
> that has been done and skips doing it in software; can you verify that you
> have the capability and are using it?
>
> Skipping checksum checks reduces security, of course, but it can be added
> if there is a common case of not being able to offload checksumming.
>
> I guess maybe we should add a switch to turn off layer-4 checksum
> validation when doing userspace conntrack? I have found that for kernel
> conntrack, there is a related knob named "nf_conntrack_checksum".
>
> Any advice?
>
> Thank you!
>
> From: Ben Pfaff
> Cc: ovs-discuss
> Subject: Re:Re:[ovs-discuss] [HELP] Question about icmp pkt marked Invalid by userspace conntrack
>
> Hi Ben && Darrell:
> Thanks, this patch works! Now the issue seems fixed
>
> Timo
>
> Re: Re:[ovs-discuss] [HELP] Question about icmp pkt marked Invalid by userspace conntrack
>
> I see.
>
> It sounds like Darrell pointed out the solution, but please let me know
> if it did not help.
>
> On Fri, Oct 11, 2019 at 08:57:58AM +0800, txfh2007 wrote:
> > Hi Ben:
> >
> > I just found the GCC_UNALIGNED_ACCESSORS error during a gdb trace and am
> > not sure whether this is a misaligned-access error or something else. What I
> > can confirm is that during "extract_l4" of this icmp reply packet, when we
> > do "check_l4_icmp", the unaligned error is emitted and "extract_l4" returns
> > false. So this packet is marked as ct_state=invalid.
> >
> > Thank you for your help.
> >
> > Timo
>
> Topic: Re: [ovs-discuss] [HELP] Question about icmp pkt marked Invalid by userspace conntrack
>
> It's very surprising.
>
> Are you using a RISC architecture that insists on aligned accesses? On
> the other hand, if you are using x86-64 or some other architecture that
> ordinarily does not care, are you sure that this is about a m
Re: [ovs-discuss] Binding issue between OVN and openvswitch
On Tue, Nov 05, 2019 at 12:03:52PM +0100, Frédéric Guihéry wrote:
> Hello,
>
> I'm having an issue when playing with kvm + libvirt + ovn + ovs. I've
> successfully built a logical switch and logical ports with OVN. On the
> hypervisor side, I was able to create an openvswitch bridge named
> "br-int", and I've connected two KVM virtual machines to this bridge
> with libvirt. When starting the virtual machines, the OVN controller
> makes the correct binding between the logical configuration and the
> hypervisor bridge/interfaces. I can see it in the ovn-controller log:
>
> 2019-11-04T14:26:48.955Z|13347|binding|INFO|Claiming lport
> ----0002 for this chassis.
> 2019-11-04T14:26:48.956Z|13348|binding|INFO|----0002:
> Claiming 02:02:02:02:02:05 192.168.2.2
>
> Besides, the command "ovn-sbctl list Mac_Binding" shows me the right MAC
> binding, and "ovn-sbctl show" displays the correct port binding:
>
> Port_Binding "----0002"
> Port_Binding "----0003"
>
> Now, when I reset the OVN logical configuration and try to do exactly
> the same with another bridge name (let's say "br-int2"), this doesn't
> work. I'm also not able to create another bridge name with other MAC/IP
> configurations for my VM NICs. The OVN controller (or maybe vswitchd?)
> doesn't make the binding between the logical configuration and the
> hypervisor bridge/interfaces. It seems that I can't use another bridge
> name and I'm stuck with "br-int". I've also tried to reinstall the
> ovn/ovs packages and deleted the databases (/etc/openvswitch/*), but I'm
> still stuck with "br-int".

It's not clear to me why you want to have more than one integration bridge. OVN only needs and only uses a single integration bridge (per hypervisor). You can configure the particular name it uses (see external-ids:ovn-bridge in ovn-controller(8)), though, if you want to name it br-int2.
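Ben's pointer can be turned into a concrete command. A sketch based on the `external-ids:ovn-bridge` setting he references from ovn-controller(8), using the thread's example name br-int2:

```shell
# Tell ovn-controller to use a differently named integration bridge
# instead of the default "br-int"; it reads this setting from the local
# Open_vSwitch table on the hypervisor.
ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge=br-int2
```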
Re: [ovs-discuss] Is it possible to terminate ERSPAN in OVS?
On 11/1/2019 4:04 PM, William Tu wrote:

> On Thu, Oct 31, 2019 at 9:44 AM Ben Pfaff wrote:
>> On Thu, Oct 31, 2019 at 04:34:14PM +, Bryan T. Richardson wrote:
>>> On Thu, Oct 31, 2019 at 09:27:19AM -0700, Ben Pfaff wrote:
>>>> On Wed, Oct 30, 2019 at 11:23:37PM +, Bryan T. Richardson wrote:
>>>>> From the documentation located at
>>>>> http://docs.openvswitch.org/en/latest/faq/configuration/
>>>>> it's obvious, based on the example answering the question "Does Open
>>>>> vSwitch support ERSPAN?", that OVS can create an ERSPAN connection to
>>>>> another switch and mirror packets over the connection. However, what's
>>>>> not clear is whether or not OVS can act as the receiver switch, i.e.
>>>>> terminate the ERSPAN GRE tunnel, decapsulate the packets, and mirror
>>>>> them to a local OVS port.
>>>>
>>>> Yes, use a port of type "erspan". ovs-vswitchd.conf.db(5) describes the
>>>> available options, and ovs-fields(7) describes the fields. This support
>>>> was added in version 2.10.
>>>
>>> Great, thanks Ben! I'll start digging into the docs you suggested to see
>>> if I can figure it out. Any chance you could provide a simple
>>> configuration example here?
>>
>> I think a minimal configuration would just include the remote IP address,
>> like other OVS tunnels. You can find some examples in the TUNNEL FIELDS
>> section of ovs-fields(7).
>
> Hi Bryan,
> There is also a doc here, section 3.2:
> http://vger.kernel.org/lpc_net2018_talks/erspan-linux.pdf
>
> William

Here is a script I use for ERSPAN type I:

if [ "$#" -ne 4 ]; then
    echo "Usage: add-erspan-type1 <port> <remote_ip> <key> <erspan_idx>"
    exit 1
fi
ovs-vsctl add-port br0 $1 -- set int $1 \
    type=erspan options:remote_ip=$2 \
    options:key=$3 \
    options:erspan_ver=1 options:erspan_idx=$4

And here is a script I use for ERSPAN type II:

if [ "$#" -ne 5 ]; then
    echo "Usage: add-erspan-type2 <port> <remote_ip> <key> <erspan_dir> <erspan_hwid>"
    exit 1
fi
ovs-vsctl add-port br0 $1 -- set int $1 type=erspan \
    options:remote_ip=$2 options:key=$3 \
    options:erspan_dir=$4 options:erspan_hwid=$5 \
    options:erspan_ver=2 options:tos=inherit

Hope that helps.

- Greg
[ovs-discuss] Binding issue between OVN and openvswitch
Hello,

I'm having an issue when playing with kvm + libvirt + ovn + ovs. I've successfully built a logical switch and logical ports with OVN. On the hypervisor side, I was able to create an openvswitch bridge named "br-int", and I've connected two KVM virtual machines to this bridge with libvirt. When starting the virtual machines, the OVN controller makes the correct binding between the logical configuration and the hypervisor bridge/interfaces. I can see it in the ovn-controller log:

2019-11-04T14:26:48.955Z|13347|binding|INFO|Claiming lport ----0002 for this chassis.
2019-11-04T14:26:48.956Z|13348|binding|INFO|----0002: Claiming 02:02:02:02:02:05 192.168.2.2

Besides, the command "ovn-sbctl list Mac_Binding" shows me the right MAC binding, and "ovn-sbctl show" displays the correct port binding:

Port_Binding "----0002"
Port_Binding "----0003"

Now, when I reset the OVN logical configuration and try to do exactly the same with another bridge name (let's say "br-int2"), this doesn't work. I'm also not able to create another bridge name with other MAC/IP configurations for my VM NICs. The OVN controller (or maybe vswitchd?) doesn't make the binding between the logical configuration and the hypervisor bridge/interfaces. It seems that I can't use another bridge name and I'm stuck with "br-int". I've also tried to reinstall the ovn/ovs packages and deleted the databases (/etc/openvswitch/*), but I'm still stuck with "br-int".

The context:
- ubuntu 19.04
- libvirt 5.0.0-1
- ovn / openvswitch 2.11.0

Any idea where I could investigate?

Thanks,
Frédéric
Re: [ovs-discuss] OVS DPDK: Failed to create memory pool for netdev
On Mon, 4 Nov 2019 19:12:36 + "Tobias Hofmann (tohofman)" wrote:
> Hi Flavio,
>
> thanks for reaching out.
>
> The DPDK options used in OvS are:
> other_config:pmd-cpu-mask=0x202
> other_config:dpdk-socket-mem=1024
> other_config:dpdk-init=true
>
> For the dpdk port, we set:
> type=dpdk
> options:dpdk-devargs=:08:0b.2
> external_ids:unused-drv=i40evf
> mtu_request=9216

Looks good to me, though the CPU mask has changed compared to the log:

2019-11-02T14:51:26.940Z|00010|dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 1024 -c 0x0001

What I see from the logs is that OvS is trying to add a port, but the port is not ready yet, so it continues with other things which also consume memory. Unfortunately, by the time the i40e port is ready, there is no memory left. When you restart, the i40e port is ready and the memory can be allocated. However, the ring allocation fails due to lack of memory:

2019-11-02T14:51:27.808Z|00136|dpdk|ERR|RING: Cannot reserve memory
2019-11-02T14:51:27.974Z|00137|dpdk|ERR|RING: Cannot reserve memory

If you reduce the MTU, then the minimum amount of memory required for the DPDK port drops drastically, which explains why it works. Increasing the total memory to 2G also helps, because then the minimum amount for a 9216 MTU plus the ring seems to be sufficient. The ring seems to be related to pdump, is that the case? I don't know off the top of my head.

In summary, it looks like 1G is not enough for a large MTU plus pdump.

HTH,
fbl

> Please let me know if this is what you asked for.
>
> Thanks
> Tobias
>
> On 04.11.19, 15:50, "Flavio Leitner" wrote:
>
> It would be nice if you share the DPDK options used in OvS.
>
> On Sat, 2 Nov 2019 15:43:18 + "Tobias Hofmann \(tohofman\) via discuss" wrote:
>
> > Hello community,
> >
> > My team and I observe a strange behavior on our system with the
> > creation of dpdk ports in OVS. We have a CentOS 7 system with
> > OpenvSwitch and only one single port of type ‘dpdk’ attached to
> > a bridge. The MTU size of the DPDK port is 9216, and the reserved
> > HugePages for OVS are 512 x 2MB HugePages, i.e. 1 GB of total
> > HugePage memory.
> >
> > Setting everything up works fine; however, after I reboot my
> > box, the dpdk port is in error state and I can observe this
> > line in the logs (full logs attached to the mail):
> > 2019-11-02T14:46:16.914Z|00437|netdev_dpdk|ERR|Failed to create
> > memory pool for netdev dpdk-p0, with MTU 9216 on socket 0:
> > Invalid argument
> > 2019-11-02T14:46:16.914Z|00438|dpif_netdev|ERR|Failed to set
> > interface dpdk-p0 new configuration
> >
> > I figured out that by restarting the openvswitch process, the
> > issue with the port is resolved and it is back in a working
> > state. However, as soon as I reboot the system a second time,
> > the port comes up in error state again. Now, we have also
> > observed a couple of other workarounds that I can’t really
> > explain why they help:
> >
> > * When there is also a VM deployed on the system that is
> > using ports of type ‘dpdkvhostuserclient’, we never see any
> > issues like that. (The MTU size of the VM ports is 9216, by the way.)
> > * When we increase the HugePage memory for OVS to 2GB, we
> > also don’t see any issues.
> > * Lowering the MTU size of the ‘dpdk’ type port to 1500 also
> > helps to prevent this issue.
> >
> > Can anyone explain this?
> >
> > We’re using the following versions:
> > Openvswitch: 2.9.3
> > DPDK: 17.11.5
> >
> > Appreciate any help!
> > Tobias
[ovs-discuss] [OVN][RAFT]Why left server cannot be added back to
Hi Numan,

When I run an OVN/RAFT cluster, I found that a server (one that voluntarily left or was kicked out) cannot be added back to the original cluster. I found the code below; can you tell me the reason? Many thanks!

case RAFT_REC_NOTE:
if (!strcmp(r->note, "left")) {
    return ovsdb_error(NULL, "record %llu indicates server has left "
                       "the cluster; it cannot be added back (use "
                       "\"ovsdb-tool join-cluster\" to add a new "
                       "server)", rec_idx);

Thanks,
Yun
Re: [ovs-discuss] [OVN][RAFT]Why left server cannot be added back to
On Tue, Nov 5, 2019 at 5:40 PM taoyunupt wrote:
>
> Hi Numan,
> When I run an OVN/RAFT cluster, I found that a server (one that
> voluntarily left or was kicked out) cannot be added back to the original
> cluster. I found the code below; can you tell me the reason? Many thanks!
>
> case RAFT_REC_NOTE:
> if (!strcmp(r->note, "left")) {
> return ovsdb_error(NULL, "record %llu indicates server has left "
>"the cluster; it cannot be added back (use "
>"\"ovsdb-tool join-cluster\" to add a new "
>"server)", rec_idx);
> Thanks,
> Yun

Hi Yun,

Probably Ben or Han can answer this question. Did you look into the documentation about RAFT in the OVS man pages? Maybe you can find the reason for this there.

Thanks
Numan
Re: [ovs-discuss] the network performance is not normal when using openvswitch.ko built from the ovs tree
On Tue, Nov 5, 2019 at 6:14 PM shuangyang qian wrote:
>
> cc to ovs-discuss
> -- Forwarded message -
> From: shuangyang qian
> Date: Tue, Nov 5, 2019, 6:12 PM
> Subject: Re: [ovs-discuss] the network performance is not normal when using openvswitch.ko built from the ovs tree
> To: Tonghao Zhang
>
> Thank you for your reply. I just changed my kernel version to the same as
> yours and did the steps you provided, and got the same result that I
> mentioned at first. The process is like below.
> On node1:
> # ovs-vsctl show
> 4f4b936e-ddb9-4fc6-b0aa-6eb6034d4671
> Bridge br-int
> Port br-int
> Interface br-int
> type: internal
> Port "gnv0"
> Interface "gnv0"
> type: geneve
> options: {csum="true", key="100", remote_ip="10.18.124.2"}
> Port "veth-vm1"
> Interface "veth-vm1"
> ovs_version: "2.12.0"
> # ip netns exec vm1 ip a
> 1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: ovs-gretap0@NONE: mtu 1462 qdisc noop state DOWN group default qlen 1000
> link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff

Why is this netdev in your netns? Do you run OVS in your netns? OVS should be running on the host.

> 3: erspan0@NONE: mtu 1450 qdisc noop state DOWN group default qlen 1000
> link/ether 32:d9:4f:86:c3:58 brd ff:ff:ff:ff:ff:ff

?

> 4: ovs-ip6gre0@NONE: mtu 1448 qdisc noop state DOWN group default qlen 1000
> link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd
> 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> 5: ovs-ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN group default qlen 1000

?

> link/tunnel6 :: brd ::
> 19: vm1-eth0@if18: mtu 1500 qdisc noqueue state UP group default qlen 1000
> link/ether 32:4b:51:e2:2b:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 192.168.100.10/24 scope global vm1-eth0
>valid_lft forever preferred_lft forever
> inet6 fe80::304b:51ff:fee2:2bf4/64 scope link
>valid_lft forever preferred_lft forever

Please set vm1-eth0 mtu to 1450.
> on node2:
> # ovs-vsctl show
> 53df6c21-c210-4c2c-a7ab-b1edb0df4a31
>     Bridge br-int
>         Port "veth-vm2"
>             Interface "veth-vm2"
>         Port "gnv0"
>             Interface "gnv0"
>                 type: geneve
>                 options: {csum="true", key="100", remote_ip="10.18.124.1"}
>         Port br-int
>             Interface br-int
>                 type: internal
>     ovs_version: "2.12.0"
> # ip netns exec vm2 ip a
> 1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: ovs-gretap0@NONE: mtu 1462 qdisc noop state DOWN group default qlen 1000
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 3: erspan0@NONE: mtu 1450 qdisc noop state DOWN group default qlen 1000
>     link/ether 8e:90:3e:95:1b:dd brd ff:ff:ff:ff:ff:ff
> 4: ovs-ip6gre0@NONE: mtu 1448 qdisc noop state DOWN group default qlen 1000
>     link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> 5: ovs-ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN group default qlen 1000
>     link/tunnel6 :: brd ::
> 11: vm2-eth0@if10: mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether ee:e4:3e:16:6f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>     inet 192.168.100.20/24 scope global vm2-eth0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ece4:3eff:fe16:6f66/64 scope link
>        valid_lft forever preferred_lft forever
>
> In network namespace vm1 on node1 I start iperf3 as server:
> # ip netns exec vm1 iperf3 -s
>
> In network namespace vm2 on node2 I start iperf3 as client:
> # ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10
> Connecting to host 192.168.100.10, port 5201
> [  4] local 192.168.100.20 port 35258 connected to 192.168.100.10 port 5201
> [ ID] Interval         Transfer     Bandwidth       Retr  Cwnd
> [  4]   0.00-2.00  sec   494 MBytes  2.07 Gbits/sec  151    952 KBytes
> [  4]   2.00-4.00  sec   582 MBytes  2.44 Gbits/sec    3   1007 KBytes
> [  4]   4.00-6.00  sec   639 MBytes  2.68 Gbits/sec    0   1.36 MBytes
> [  4]   6.00-8.00  sec   618 MBytes  2.59 Gbits/sec    0   1.64 MBytes
> [  4]   8.00-10.00 sec   614 MBytes  2.57 Gbits/sec    0   1.88 MBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval         Transfer     Bandwidth       Retr
> [  4]   0.00-10.00 sec  2.88 GBytes  2.47 Gbits/sec  154  sender
> [  4]   0.00-10.00 sec  2.88 GBytes  2.47 Gbits/sec       receiver
>
> iperf Done.
>
> The openvswitch.ko on both nodes is:
> # modinfo openvswitch
> filename:       /lib/modules/3.10.0-957.el7.x86_64/extra/openvswitch/openvswitch.ko
> alias:          net-pf-16-proto-16-family-ovs_ct_limit
> alias:          net-pf-16-proto-16-
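The MTU advice in the reply above follows from Geneve encapsulation overhead: over IPv4, each tunneled frame carries an extra outer IPv4 header (20 bytes), a UDP header (8), the base Geneve header (8), and the inner Ethernet header (14), so an interface inside a namespace behind the tunnel should use 1500 - 50 = 1450. A minimal sketch of the arithmetic and the resulting commands (the interface names vm1-eth0/vm2-eth0 are the ones from this thread):

```shell
# Geneve-over-IPv4 per-packet overhead (base Geneve header, no options):
outer_ipv4=20; udp=8; geneve=8; inner_eth=14
overhead=$((outer_ipv4 + udp + geneve + inner_eth))
inner_mtu=$((1500 - overhead))
echo "$inner_mtu"    # 1450

# Apply it inside each namespace (requires root; names from the thread):
#   ip netns exec vm1 ip link set vm1-eth0 mtu "$inner_mtu"   # on node1
#   ip netns exec vm2 ip link set vm2-eth0 mtu "$inner_mtu"   # on node2
```

With the inner MTU left at 1500, every full-size TCP segment has to be fragmented or dropped at the tunnel, which is consistent with the retransmissions seen in the first iperf3 interval.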
[ovs-discuss] Fwd: the network performance is not normal when using openvswitch.ko built from ovs tree
cc to ovs-discuss
-- Forwarded message -
From: shuangyang qian
Date: Tue, Nov 5, 2019, 6:12 PM
Subject: Re: [ovs-discuss] the network performance is not normal when using openvswitch.ko built from ovs tree
To: Tonghao Zhang

Thank you for your reply. I just changed my kernel version to the same as yours, did the steps you provided, and got the same result I mentioned at first. The process is like below.

on node1:
# ovs-vsctl show
4f4b936e-ddb9-4fc6-b0aa-6eb6034d4671
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "gnv0"
            Interface "gnv0"
                type: geneve
                options: {csum="true", key="100", remote_ip="10.18.124.2"}
        Port "veth-vm1"
            Interface "veth-vm1"
    ovs_version: "2.12.0"
# ip netns exec vm1 ip a
1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ovs-gretap0@NONE: mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE: mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 32:d9:4f:86:c3:58 brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE: mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
19: vm1-eth0@if18: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:4b:51:e2:2b:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.100.10/24 scope global vm1-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::304b:51ff:fee2:2bf4/64 scope link
       valid_lft forever preferred_lft forever

on node2:
# ovs-vsctl show
53df6c21-c210-4c2c-a7ab-b1edb0df4a31
    Bridge br-int
        Port "veth-vm2"
            Interface "veth-vm2"
        Port "gnv0"
            Interface "gnv0"
                type: geneve
                options: {csum="true", key="100", remote_ip="10.18.124.1"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.12.0"
# ip netns exec vm2 ip a
1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ovs-gretap0@NONE: mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE: mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 8e:90:3e:95:1b:dd brd ff:ff:ff:ff:ff:ff
4: ovs-ip6gre0@NONE: mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
5: ovs-ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd ::
11: vm2-eth0@if10: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:e4:3e:16:6f:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.100.20/24 scope global vm2-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ece4:3eff:fe16:6f66/64 scope link
       valid_lft forever preferred_lft forever

In network namespace vm1 on node1 I start iperf3 as server:
# ip netns exec vm1 iperf3 -s

In network namespace vm2 on node2 I start iperf3 as client:
# ip netns exec vm2 iperf3 -c 192.168.100.10 -i 2 -t 10
Connecting to host 192.168.100.10, port 5201
[  4] local 192.168.100.20 port 35258 connected to 192.168.100.10 port 5201
[ ID] Interval         Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-2.00  sec   494 MBytes  2.07 Gbits/sec  151    952 KBytes
[  4]   2.00-4.00  sec   582 MBytes  2.44 Gbits/sec    3   1007 KBytes
[  4]   4.00-6.00  sec   639 MBytes  2.68 Gbits/sec    0   1.36 MBytes
[  4]   6.00-8.00  sec   618 MBytes  2.59 Gbits/sec    0   1.64 MBytes
[  4]   8.00-10.00 sec   614 MBytes  2.57 Gbits/sec    0   1.88 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval         Transfer     Bandwidth       Retr
[  4]   0.00-10.00 sec  2.88 GBytes  2.47 Gbits/sec  154  sender
[  4]   0.00-10.00 sec  2.88 GBytes  2.47 Gbits/sec       receiver

iperf Done.
The openvswitch.ko on both nodes is:
# modinfo openvswitch
filename:       /lib/modules/3.10.0-957.el7.x86_64/extra/openvswitch/openvswitch.ko
alias:          net-pf-16-proto-16-family-ovs_ct_limit
alias:          net-pf-16-proto-16-family-ovs_meter
alias:          net-pf-16-proto-16-family-ovs_packet
alias:          net-pf-16-proto-16-family-ovs_flow
alias:          net-pf-16-proto-16-family-ovs_vport
alias:          net-pf-16-proto-16-family-ovs_datapath
version:        2.12.0
license:        GPL
description:    Open vSwitch switching datapath
retpoline:      Y
rhelversion:    7.6
srcversion:     764C8BD051B3182DE71CF29
depends:        nf_conntrack,tunnel6,nf_nat,nf_defrag
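When debugging a case like this, it is worth confirming whether the loaded openvswitch.ko is really the out-of-tree build from the OVS source or the distro's bundled module. A rough heuristic (a sketch, not an official rule, though kmod packages built per the OVS install docs do install under .../extra/) is the module path reported by modinfo:

```shell
# Out-of-tree kmod packages install under /lib/modules/<kver>/extra/,
# while a distro's bundled module lives under .../kernel/net/openvswitch/.
# The path below is the one reported by modinfo in the message above.
path=/lib/modules/3.10.0-957.el7.x86_64/extra/openvswitch/openvswitch.ko
case "$path" in
  */extra/*)  origin="out-of-tree (built from the OVS source)" ;;
  */kernel/*) origin="distro in-kernel module" ;;
  *)          origin="unknown" ;;
esac
echo "$origin"    # out-of-tree (built from the OVS source)

# To check the module actually loaded right now (not just the one on disk),
# compare srcversion values:
#   cat /sys/module/openvswitch/srcversion
#   modinfo -F srcversion openvswitch
```

If the two srcversion values differ, the kernel is still running a different copy of the module than the one modinfo finds on disk, which can easily happen after installing a kmod package without reloading the module.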
Re: [ovs-discuss] the network performance is not normal when using openvswitch.ko built from ovs tree
On Mon, Nov 4, 2019 at 5:14 PM shuangyang qian wrote:
>
> Hi:
> I made rpm packages for ovs and ovn with this document: http://docs.openvswitch.org/en/latest/intro/install/fedora/ . To use the kernel module from the ovs tree, I configured with the command: ./configure --with-linux=/lib/modules/$(uname -r)/build .
> Then I installed the rpm packages.
> When finished, I checked the openvswitch.ko:
> # lsmod | grep openvswitch
> openvswitch           291276  0
> tunnel6                 3115  1 openvswitch
> nf_defrag_ipv6         25957  2 nf_conntrack_ipv6,openvswitch
> nf_nat_ipv6             6459  2 openvswitch,ip6table_nat
> nf_nat_ipv4             6187  2 openvswitch,iptable_nat
> nf_nat                 18080  5 xt_nat,openvswitch,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
> nf_conntrack          102766  10 ip_vs,nf_conntrack_ipv6,openvswitch,nf_conntrack_ipv4,nf_conntrack_netlink,nf_nat_ipv6,nf_nat_masquerade_ipv4,xt_conntrack,nf_nat_ipv4,nf_nat
> libcrc32c               1388  3 ip_vs,openvswitch,xfs
> ipv6                  400397  92 ip_vs,nf_conntrack_ipv6,openvswitch,nf_defrag_ipv6,nf_nat_ipv6,bridge
> # modinfo openvswitch
> filename:       /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
> alias:          net-pf-16-proto-16-family-ovs_ct_limit
> alias:          net-pf-16-proto-16-family-ovs_meter
> alias:          net-pf-16-proto-16-family-ovs_packet
> alias:          net-pf-16-proto-16-family-ovs_flow
> alias:          net-pf-16-proto-16-family-ovs_vport
> alias:          net-pf-16-proto-16-family-ovs_datapath
> version:        2.11.2
> license:        GPL
> description:    Open vSwitch switching datapath
> srcversion:     9DDA327F9DD46B9813628A4
> depends:        nf_conntrack,tunnel6,ipv6,nf_nat,nf_defrag_ipv6,libcrc32c,nf_nat_ipv6,nf_nat_ipv4
> vermagic:       4.9.18-19080201 SMP mod_unload modversions
> parm:           udp_port:Destination UDP port (ushort)
> # rpm -qf /lib/modules/4.9.18-19080201/extra/openvswitch/openvswitch.ko
> openvswitch-kmod-2.11.2-1.el7.x86_64
>
> Then I started to build my network structure. I have two nodes, with network namespace vm1 on node1 and network namespace vm2 on node2. vm1's veth pair veth-vm1 is on node1's br-int.
> vm2's veth pair veth-vm2 is on node2's br-int.
> In the logical layer, there is one logical switch test-subnet with two logical switch ports node1 and node2 on it, like this:
> # ovn-nbctl show
> switch 70585c0e-3cd9-459e-9448-3c13f3c0bfa3 (test-subnet)
>     port node2
>         addresses: ["00:00:00:00:00:02 192.168.100.20"]
>     port node1
>         addresses: ["00:00:00:00:00:01 192.168.100.10"]
> on node1:
> # ovs-vsctl show
> 5180f74a-1379-49af-b265-4403bd0d82d8
>     Bridge br-int
>         fail_mode: secure
>         Port "ovn-431b9e-0"
>             Interface "ovn-431b9e-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="10.18.124.2"}
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "veth-vm1"
>             Interface "veth-vm1"
>     ovs_version: "2.11.2"
> # ip netns exec vm1 ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 14: ovs-gretap0@NONE: mtu 1462 qdisc noop state DOWN group default qlen 1000
>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
> 15: erspan0@NONE: mtu 1450 qdisc noop state DOWN group default qlen 1000
>     link/ether 22:02:1b:08:ec:53 brd ff:ff:ff:ff:ff:ff
> 16: ovs-ip6gre0@NONE: mtu 1448 qdisc noop state DOWN group default qlen 1
>     link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
> 17: ovs-ip6tnl0@NONE: mtu 1452 qdisc noop state DOWN group default qlen 1
>     link/tunnel6 :: brd ::
> 18: vm1-eth0@if17: mtu 1400 qdisc noqueue state UP group default qlen 1000
>     link/ether 00:00:00:00:00:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>     inet 192.168.100.10/24 scope global vm1-eth0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::200:ff:fe00:1/64 scope link
>        valid_lft forever preferred_lft forever
>
> on node2:
> # ovs-vsctl show
> 011332d0-78bc-47f7-be3c-fab0beb08e28
>     Bridge br-int
>         fail_mode: secure
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "ovn-c655f8-0"
>             Interface "ovn-c655f8-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="10.18.124.1"}
>         Port "veth-vm2"
>             Interface "veth-vm2"
>     ovs_version: "2.11.2"
> # ip netns exec vm2 ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> i