>
>> >
>> >Thanks Mark.
>>
>> Hey again Kapil,
>>
>> So it looks like the issue that you're experiencing is the result of a bug
>> in DPDK v16.07, which was subsequently fixed in v16.11
>>
>> If you apply the following commit to your DPDK codebase, it will resolve
>> the issue.
>>
>>     commit f5e9ed5c4e35a4cc2db7c10cf855e701472af864
>>     Author: Nipun Gupta <nipun.gu...@nxp.com>
>>     Date:   Fri Nov 11 21:17:10 2016 +0530
>>
>>         mempool: fix leak if populate fails
>>
>>         This patch fixes the issue of the memzone not being freed in
>>         case rte_mempool_populate_phys fails in
>>         rte_mempool_populate_default.
>>
>>         This issue was identified when testing with OVS ~2.6
>>         - configure the system with low memory (e.g. < 500 MB)
>>         - add bridge and dpdk interfaces
>>         - delete bridge
>>         - keep on repeating the above sequence.
>>
>>         Fixes: d1d914ebbc25 ("mempool: allocate in several memory chunks by default")
>>
>>         Signed-off-by: Nipun Gupta <nipun.gu...@nxp.com>
>>         Acked-by: Olivier Matz <olivier.m...@6wind.com>
>>
>>     diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>     index e94e56f..aa513b9 100644
>>     --- a/lib/librte_mempool/rte_mempool.c
>>     +++ b/lib/librte_mempool/rte_mempool.c
>>     @@ -578,8 +578,10 @@ static unsigned optimize_object_size(unsigned obj_size)
>>                                     mz->len, pg_sz,
>>                                     rte_mempool_memchunk_mz_free,
>>                                     (void *)(uintptr_t)mz);
>>     -               if (ret < 0)
>>     +               if (ret < 0) {
>>     +                       rte_memzone_free(mz);
>>                             goto fail;
>>     +               }
>>             }
>>
>>             return mp->size;
>>
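>> If it helps, one way to pull the fix in (this assumes the commit is
>> reachable from the main dpdk.org tree, e.g. via the v16.11 tag - adjust
>> the remote/URL to however you track DPDK):
>>
>>     git fetch git://dpdk.org/dpdk v16.11
>>     git cherry-pick f5e9ed5c4e35a4cc2db7c10cf855e701472af864
>>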
>> Cheers,
>> Mark
>
>
>Good catch on this, Mark.

To be fair, Sergio pointed out a number of mempool bugs that had been fixed in 
later versions of DPDK - I just did the testing on them ;)

>
>Just as an aside, the same fix is included in the latest DPDK 16.07.2 stable 
>branch along
>with a number of other bug fixes that were backported from DPDK 16.11.
>
>Stable releases of DPDK are available at
>
>http://dpdk.org/browse/dpdk-stable/
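>
>For example, to pick up 16.07.2 (assuming the usual dpdk-stable git
>remote and vX.Y.Z tag naming):
>
>    git clone git://dpdk.org/dpdk-stable
>    cd dpdk-stable && git checkout v16.07.2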
>
>Ian
>
>>
>> >
>> >On Tue, Feb 28, 2017, 3:04 PM Kavanagh, Mark B
>> <mark.b.kavan...@intel.com> wrote:
>> >>
>> >>Hi Mark,
>> >>Is there any patch I can expect for this issue?
>> >
>> >Hi Kapil,
>> >
>> >I flagged the issue to our onsite DPDK memory expert (Sergio, cc'd).
>> >Sergio worked with us on the issue, and wrote a simple DPDK application
>> >in an attempt to reproduce the issue, and localize it to DPDK.
>> >
>> >Unfortunately, the issue was not observed/reproducible in that DPDK
>> >app. I intend to take another look at it today from an OvS-DPDK
>> >perspective - I'll provide an update as and when available.
>> >
>> >Thanks,
>> >Mark
>> >
>> >>
>> >>On Tue, Feb 21, 2017, 7:28 PM Kapil Adhikesavalu <kapil20...@gmail.com>
>> wrote:
>> >>Hi Mark,
>> >>
>> >>Thanks for the detailed analysis; I will wait for further updates.
>> >>
>> >>Btw, on the mailing list I seem to have clicked 'reply' instead of
>> >>'reply-all' :)
>> >>
>> >>Regards
>> >>Kapil.
>> >>
>> >>On Tue, Feb 21, 2017 at 3:26 PM, Kavanagh, Mark B
>> <mark.b.kavan...@intel.com> wrote:
>> >>+ dev - please keep the list included in any discussions going
>> >>forward, folks :)
>> >>
>> >>Hi Kapil,
>> >>
>> >>I've managed to reproduce the issue of removing and re-adding ports
>> >>with MTU > 1894, using the following configuration:
>> >>
>> >>        DPDK:            v16.07
>> >>        OVS:             f922f0f1c9
>> >>        dpdk-socket-mem: 1024
>> >>        hugepage-sz:     2M
>> >>        port MTU:        1920
>> >>
>> >>The same behavior is observed with both phy ports and vhost-user
>> >>ports; when I add a port with MTU 1920, subsequently remove it, and
>> >>then re-add it, I receive an error message stating that insufficient
>> >>memory is available to create a mempool for that port.
>> >>
>> >>A few notes on this:
>> >>        - This behavior is observed only when attempting to re-add the
>> >>          same port (more on this later); if I add a new port with MTU
>> >>          1920, the issue doesn't occur, indicating that lack of
>> >>          available memory is not the root cause.
>> >>        - The reason that you don't see the issue when you add a 'dummy'
>> >>          port is that the 'dummy' port is also of type dpdkvhostuser
>> >>          and shares the same MTU - and thus the same mempool - as the
>> >>          other dpdkvhostuser ports that you've added. A mempool can
>> >>          only be deleted when all ports that use it are deleted; since
>> >>          you never delete the 'dummy' port, the mempool is never
>> >>          destroyed, and so when you re-add the other dpdkvhostuser
>> >>          ports, they simply reuse the mempool they used previously
>> >>          (and which is still in use by 'dummy'). A simplified sketch
>> >>          of this sharing scheme follows below.
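>> >>
>> >>(To illustrate that sharing scheme, here is a simplified sketch of the
>> >>reference-counted mempool cache, paraphrased from netdev-dpdk.c of that
>> >>era - field and function names may differ slightly from the real code:)
>> >>
>> >>    /* One mempool per (socket, MTU) pair, shared by all matching ports. */
>> >>    struct dpdk_mp {
>> >>        struct rte_mempool *mp;
>> >>        int mtu;
>> >>        int socket_id;
>> >>        int refcount;            /* number of ports using this mempool */
>> >>        struct ovs_list list_node;
>> >>    };
>> >>
>> >>    static struct dpdk_mp *
>> >>    dpdk_mp_get(int socket_id, int mtu)
>> >>    {
>> >>        struct dpdk_mp *dmp;
>> >>
>> >>        /* Reuse an existing mempool when socket and MTU match. */
>> >>        LIST_FOR_EACH (dmp, list_node, &dpdk_mp_list) {
>> >>            if (dmp->socket_id == socket_id && dmp->mtu == mtu) {
>> >>                dmp->refcount++;
>> >>                return dmp;
>> >>            }
>> >>        }
>> >>        return dpdk_mp_create(socket_id, mtu);  /* otherwise create one */
>> >>    }
>> >>
>> >>    static void
>> >>    dpdk_mp_put(struct dpdk_mp *dmp)
>> >>    {
>> >>        /* The mempool is freed only when the last port using it goes. */
>> >>        if (--dmp->refcount == 0) {
>> >>            ovs_list_remove(&dmp->list_node);
>> >>            rte_mempool_free(dmp->mp);
>> >>            free(dmp);
>> >>        }
>> >>    }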
>> >>
>> >>In the course of debugging, I traced the issue to the DPDK function
>> >>rte_memzone_reserve_aligned_thread_unsafe, which is invoked in the
>> >>call hierarchy of rte_mempool_create (in turn, invoked by
>> >>dpdk_mp_get). By instrumenting the code, I observed that when I
>> >>attempt to re-add a port with MTU 1920, a check for the pre-existence
>> >>of one particular memzone fails in this function:
>> >>
>> >>        ovs-vswitchd[59273]: EAL: memzone_reserve_aligned_thread_unsafe(): reserving memzone for mempool: === MP_ovs_mp_3054_0_262144_130 ===
>> >>        ovs-vswitchd[59273]: EAL: memzone_reserve_aligned_thread_unsafe(): memzone <MP_ovs_mp_3054_0_262144_130> already exists
>> >>        ovs-vswitchd[59273]: ovs|00069|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdk0, with MTU 1920 on socket 0
>> >>
>> >>Since that function fails, it triggers the 'insufficient memory' log
>> >>in OvS-DPDK - this is certainly something that we can remedy, as it
>> >>is misleading.
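>> >>
>> >>(For context, the failure mode inside DPDK - this is a condensed,
>> >>approximate sketch of the check in DPDK's memzone reservation path,
>> >>not a verbatim excerpt:)
>> >>
>> >>    /* Inside memzone_reserve_aligned_thread_unsafe(): a memzone leaked
>> >>     * by an earlier failed mempool populate makes this lookup hit. */
>> >>    if (memzone_lookup_thread_unsafe(name) != NULL) {
>> >>            RTE_LOG(DEBUG, EAL, "memzone <%s> already exists\n", name);
>> >>            rte_errno = EEXIST;
>> >>            return NULL;  /* OvS then reports 'Insufficient memory' */
>> >>    }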
>> >>
>> >>Curiously, I don't observe this behavior when dpdk-socket-mem=2048,
>> >>in which case the port may be deleted and re-added without issue.
>> >>I'll consult my DPDK colleagues on this, and get back to you with an
>> >>answer in the coming days.
>> >>
>> >>Thanks,
>> >>Mark
>> >>
>> >>>
>> >>>Hi Ian,
>> >>>
>> >>>Thanks for looking into it.
>> >>>
>> >>>Please find the information below and let me know if anything else
>> >>>is needed.
>> >>>
>> >>>•  OVS version: 2.6.1
>> >>>•  DPDK version: 16.07
>> >>>•  Was OvS installed through a package manager or built from source? Built from source.
>> >>>•  If OVS is a release version, is it the latest version of that release? branch-2.6 with head commit f922f0f1c926bb7596ba4e0971960dc89e39f0e7 (Author: Thadeu Lima de Souza Cascardo <casca...@redhat.com>, Date: Wed Oct 19 13:32:57 2016 -0200)
>> >>>•  OS Version: a Yocto-based OS with a 4.1 Linux kernel (uname -r: 4.1.26-yocto-standard)
>> >>>•  HW Platform: HP ProLiant DL20 Gen9
>> >>>•  CPU version and frequency: Intel(R) Xeon(R) CPU E3-1220 v5 @ 3.00GHz, 4 cores
>> >>>•  Commands & parameters used to launch OVS (in particular any command related to memory setup):
>> >>>
>> >>>root@hp-dl20:/# cat /proc/cmdline
>> >>>BOOT_IMAGE=/dl20.bin root=/dev/ram0 ip=dhcp default_hugepagesz=2M hugepagesz=2M hugepages=1536 isolcpus=1-3
>> >>>
>> >>># Create ovs config
>> >>>mkdir -p /var/log/openvswitch
>> >>>ovsdb-tool create $ovsdir/etc/openvswitch/conf.db $ovsdir/usr/share/openvswitch/vswitch.ovsschema
>> >>>
>> >>># Bring up ovsdb-server daemon
>> >>>mkdir -p $ovsdir/var/run/openvswitch
>> >>>/usr/sbin/ovsdb-server --remote=punix:$ovsdir/var/run/openvswitch/db.sock \
>> >>>    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
>> >>>    --private-key=db:Open_vSwitch,SSL,private_key \
>> >>>    --certificate=db:Open_vSwitch,SSL,certificate \
>> >>>    --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
>> >>>    --pidfile --detach --verbose=err
>> >>>
>> >>># Initialize the ovs database
>> >>>/usr/bin/ovs-vsctl --no-wait init
>> >>>
>> >>>  dpdk_socket_mem="1024,0"
>> >>>  dpdk_lcore_mask=0x1
>> >>>
>> >>>  # Specify DPDK options for the very newest as-of-yet-unreleased rev of OVS
>> >>>  /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
>> >>>
>> >>>  # Number of memory channels on the targeted platform
>> >>>  /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-extra="-n 4"
>> >>>
>> >>>  # Set dpdk-socket-mem
>> >>>  /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=$dpdk_socket_mem
>> >>>
>> >>>  # Set dpdk-lcore-mask
>> >>>  /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=$dpdk_lcore_mask
>> >>>
>> >>>To collect the logs in the attachment, I enabled --verbose and ran
>> >>>the following commands to hit the issue:
>> >>>
>> >>>root@hp-dl20:/# ovs-vsctl add-br trail -- set bridge trail datapath_type=netdev
>> >>>root@hp-dl20:/# ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>root@hp-dl20:/# ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1920
>> >>>root@hp-dl20:/# ovs-vsctl del-port trail dpdkvhostuser0
>> >>>root@hp-dl20:/# ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>root@hp-dl20:/# ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1500
>> >>>
>> >>>Regards
>> >>>Kapil.
>> >>>
>> >>>On Mon, Feb 20, 2017 at 2:27 PM, Stokes, Ian <ian.sto...@intel.com>
>> wrote:
>> >>>Hi Kapil,
>> >>>
>> >>>My colleague Mark Kavanagh (the author of the jumbo frames
>> >>>implementation) and I are planning to take a look at this today.
>> >>>
>> >>>We were unable to reproduce the issue itself last week, as the port
>> >>>always reports an error when we attempt to set MTU > 1894, so in
>> >>>effect we never get to the stage where we can delete the port and
>> >>>re-request the MTU.
>> >>>
>> >>>To help with this can you confirm/provide the following data:
>> >>>
>> >>>•  OVS version: 2.6.1
>> >>>•  DPDK version: 16.07
>> >>>•  Was OvS installed through a package manager or built from source?
>> >>>•  If OVS is a release version, is it the latest version of that release?
>> >>>•  OS Version: ?
>> >>>•  Kernel Version: ?
>> >>>•  HW Platform: ?
>> >>>•  CPU version and frequency: ?
>> >>>•  Commands & parameters used to launch OVS (in particular any command related to memory setup)
>> >>>
>> >>>
>> >>>Could you also provide the vswitch logs? There might be something
>> >>>there that could help us root-cause this.
>> >>>
>> >>>Thanks
>> >>>Ian
>> >>>
>> >>>
>> >>>
>> >>>From: Kapil Adhikesavalu [mailto:kapil20...@gmail.com]
>> >>>Sent: Monday, February 20, 2017 7:08 AM
>> >>>To: Stokes, Ian <ian.sto...@intel.com>
>> >>>Subject: Re: [ovs-dev] Memory pool issue on delete and re-add of DPDK ports with jumbo MTU >1894
>> >>>
>> >>>Hi Ian,
>> >>>
>> >>>Any thoughts on my observation? It looks like a possible bug in the
>> >>>MTU implementation.
>> >>>
>> >>>Regards
>> >>>Kapil.
>> >>>
>> >>>On Thu, Feb 16, 2017 at 11:49 AM, Kapil Adhikesavalu
>> <kapil20...@gmail.com> wrote:
>> >>>Hi Ian,
>> >>>
>> >>>Thanks for the information.
>> >>>
>> >>>I already figured out that increasing dpdk-socket-mem from 1024M to
>> >>>2048M allows me to use 1920B as the MTU (1920B is all I need)
>> >>>consistently across port add/delete. But I have a memory constraint
>> >>>on my system, so all I can afford is 1024M for socket memory.
>> >>>
>> >>>Coming back to the issue;
>> >>>
>> >>>The very first time after starting OVS, I am able to create ports
>> >>>and assign the MTU as 1920 (with 1024M socket mem) without any
>> >>>issues: there are no failure logs, I can read back the MTU as 1920
>> >>>via 'get mtu', and I can send packets of 1920B. I have been using it
>> >>>this way for about 2 months without any issues.
>> >>>
>> >>>The problem is seen only when I clean up all the ports on the
>> >>>bridges and add back a new port with the same 1920B MTU.
>> >>>Considering that I am able to assign 1920B the very first time, as
>> >>>well as whenever I restart the OVS process, I suspect there is some
>> >>>issue with OVS/DPDK freeing up the allocated space on port deletion.
>> >>>
>> >>>Another strange observation I made: along with all the ports and
>> >>>bridges I need to use, if I add a dummy port with MTU 1920 and keep
>> >>>that dummy port in place, I am then able to remove and re-add all my
>> >>>intended ports without any issues.
>> >>>
>> >>>Logs from my trials:
>> >>>=============
>> >>>
>> >>>Trial 1: with the dummy port's MTU set to 1920, I can remove and add
>> >>>back my intended ports with the desired MTU
>> >>>=====================================================================
>> >>>
>> >>>Note: in my actual setup I have 10 ports in total and observed the
>> >>>same behavior; I am just picking 2 ports here for the trial.
>> >>>
>> >>>pkill ovs
>> >>>./lib/systemd/ovs-start.sh
>> >>>
>> >>>tail -f /var/log/openvswitch/ovs-vswitchd.log &
>> >>>
>> >>>ovs-vsctl add-br trail -- set bridge trail datapath_type=netdev
>> >>>ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1920
>> >>>ovs-vsctl add-port trail dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser -- set Interface dpdkvhostuser1 mtu_request=1920
>> >>>ovs-vsctl get interface dpdkvhostuser1 mtu
>> >>>1920
>> >>>
>> >>>ovs-vsctl add-port trail dummy -- set Interface dummy type=dpdkvhostuser -- set Interface dummy mtu_request=1920
>> >>>ovs-vsctl get interface dummy mtu
>> >>>1920
>> >>>
>> >>>ovs-vsctl del-port trail dpdkvhostuser0
>> >>>ovs-vsctl del-port trail dpdkvhostuser1
>> >>>
>> >>>ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1920
>> >>>ovs-vsctl add-port trail dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser -- set Interface dpdkvhostuser1 mtu_request=1920
>> >>>ovs-vsctl get interface dpdkvhostuser1 mtu
>> >>>1920
>> >>>
>> >>>ovs-vsctl del-port trail dpdkvhostuser0
>> >>>ovs-vsctl del-port trail dpdkvhostuser1
>> >>>ovs-vsctl del-port trail dummy
>> >>>
>> >>>After removing the last dummy port, the same issue occurs
>> >>>====================================
>> >>>
>> >>>ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser0, with MTU 1920 on socket 0
>> >>>ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1500
>> >>>ovs-vsctl add-port trail dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser -- set Interface dpdkvhostuser1 mtu_request=1920
>> >>>|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser1, with MTU 1920 on socket 0
>> >>>ovs-vsctl get interface dpdkvhostuser1 mtu
>> >>>1500
>> >>>
>> >>>Trial 2: with the dummy port's MTU set to 1500, the same issue
>> >>>occurs when I remove and add back my intended ports with the desired MTU
>> >>>=====================================================================
>> >>>
>> >>>pkill ovs
>> >>>./lib/systemd/scripts/bristol-ovs.sh
>> >>>
>> >>>tail -f /var/log/openvswitch/ovs-vswitchd.log &
>> >>>
>> >>>
>> >>>ovs-vsctl add-br trail -- set bridge trail datapath_type=netdev
>> >>>ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1920
>> >>>ovs-vsctl add-port trail dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser -- set Interface dpdkvhostuser1 mtu_request=1920
>> >>>ovs-vsctl get interface dpdkvhostuser1 mtu
>> >>>1920
>> >>>
>> >>>ovs-vsctl add-port trail dummy -- set Interface dummy type=dpdkvhostuser -- set Interface dummy mtu_request=1500
>> >>>ovs-vsctl get interface dummy mtu
>> >>>1500
>> >>>
>> >>>ovs-vsctl del-port trail dpdkvhostuser0
>> >>>ovs-vsctl del-port trail dpdkvhostuser1
>> >>>
>> >>>ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1920
>> >>>|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser0, with MTU 1920 on socket 0
>> >>>ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>1500
>> >>>ovs-vsctl add-port trail dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser -- set Interface dpdkvhostuser1 mtu_request=1920
>> >>>|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser1, with MTU 1920 on socket 0
>> >>>ovs-vsctl get interface dpdkvhostuser1 mtu
>> >>>1500
>> >>>
>> >>>
>> >>>Regards
>> >>>Kapil.
>> >>>
>> >>>On Thu, Feb 16, 2017 at 3:47 AM, Stokes, Ian <ian.sto...@intel.com>
>> wrote:
>> >>>> Hi,
>> >>>>
>> >>>> With OVS + DPDK, after I add a port with an MTU size greater than
>> >>>> 1894, if I delete the port and re-add it with the same MTU, it
>> >>>> leads to the following memory pool error and the MTU is set to
>> >>>> 1500. If the MTU is configured to less than 1894 while re-adding
>> >>>> the port, the issue is not seen. To recover, I need to kill the
>> >>>> OVS process and start it again.
>> >>>
>> >>>What I suspect is happening here is that you are allocating too
>> >>>little hugepage memory to OVS with DPDK. From the details below it
>> >>>looks like you are assigning 1024 MB - can you confirm this?
>> >>>
>> >>>I see the same error message you describe when requesting an MTU >=
>> >>>1895 with only 1024 MB of hugepage memory set (although I see the
>> >>>error the first time the MTU is requested).
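>> >>>
>> >>>As a rough back-of-envelope (using, for illustration, the figures in
>> >>>the mempool name MP_ovs_mp_3054_0_262144_130 that appears earlier in
>> >>>this thread; the per-mbuf size below is an approximation, not an
>> >>>exact figure from the code):
>> >>>
>> >>>    262144 mbufs x ~3 KB each (buffer + headroom + mbuf/mempool metadata)
>> >>>    ~= 768 MB of the 1024 MB socket memory, carved out of 2 MB hugepages
>> >>>    across many memzone chunks (hence the "_130" chunk suffix).
>> >>>
>> >>>So a jumbo-MTU pool can fit in 1024 MB, but with very little headroom.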
>> >>>
>> >>>I'm surprised you only see the issue after deleting the port and
>> >>>re-adding it - can you confirm from your logs that you don't see the
>> >>>error when the port is added the first time with the 1895 MTU
>> >>>request?
>> >>>
>> >>>Also, can you increase the hugepage memory being assigned to OVS
>> >>>with DPDK? It should allow you to request/set a larger MTU.
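>> >>>
>> >>>For example (this assumes the standard other_config keys shown in
>> >>>your setup script; adjust the per-NUMA-node values to your system):
>> >>>
>> >>>    # allocate 2048 MB of hugepage memory on NUMA node 0 to OVS-DPDK
>> >>>    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,0"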
>> >>>
>> >>>Below is a link to a very good article that explains the memory
>> >>>requirements when using jumbo frames with OVS with DPDK, if you are
>> >>>interested:
>> >>>
>> >>>https://software.intel.com/en-us/articles/jumbo-frames-in-open-vswitch-with-dpdk?language=ru
>> >>>
>> >>>Ian
>> >>>
>> >>>>
>> >>>> ovs-vswitchd[14264]: ovs|00002|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser0, with MTU 1895 on socket 0
>> >>>>
>> >>>> Issue is not seen when ports use MTU size <=1894. Let me know if
>> >>>> additional logs are needed.
>> >>>>
>> >>>> OVS version: 2.6.1 and DPDK: 16.07
>> >>>> ===========================
>> >>>>
>> >>>> root@hp-dl20:/# ovs-vsctl --version
>> >>>> ovs-vsctl (Open vSwitch) 2.6.1
>> >>>> DB Schema 7.14.0
>> >>>>
>> >>>> root@hp-dl20:/# ovs-vsctl show
>> >>>> 176cae66-f14b-436e-9cb2-a7caa054c481
>> >>>>
>> >>>> Issue case with MTU 1895 (when a port with MTU 1895 is re-added
>> >>>> the error is seen; MTU <= 1894 can still be configured):
>> >>>> ====================
>> >>>> ovs-vsctl add-br trail -- set bridge trail datapath_type=netdev
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1895
>> >>>> root@hp-dl20:/# ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1895
>> >>>>
>> >>>> root@hp-dl20:/# ovs-vsctl show
>> >>>> d5b44a7b-62c1-4096-82a4-722833d3d154
>> >>>>     Bridge trail
>> >>>>         Port "dpdkvhostuser0"
>> >>>>             Interface "dpdkvhostuser0"
>> >>>>                 type: dpdkvhostuser
>> >>>>         Port trail
>> >>>>             Interface trail
>> >>>>                 type: internal
>> >>>>
>> >>>> ovs-vsctl del-port trail dpdkvhostuser0
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1895
>> >>>>
>> >>>> 2017-02-15T15:39:23.830Z|00002|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser0, with MTU 1895 on socket 0
>> >>>>
>> >>>> ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1500
>> >>>>
>> >>>> root@hp-dl20:/# ovs-vsctl set Interface dpdkvhostuser0 mtu_request=1894
>> >>>> root@hp-dl20:/# ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1894
>> >>>>
>> >>>> Working case with MTU 1894:
>> >>>> ====================
>> >>>> pkill ovs and restart OVS
>> >>>>
>> >>>> root     15206  0.0  0.0  23180  4788 ?  Ss   15:42   0:00 /usr/sbin/ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach --verbose=err
>> >>>> root     15213 10.1  0.0 1836764 5960 ?  Ssl  15:42   0:01 /usr/sbin/ovs-vswitchd --pidfile --log-file --verbose=err --detach
>> >>>>
>> >>>> ovs-vsctl add-br trail -- set bridge trail datapath_type=netdev
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1894
>> >>>> root@hp-dl20:/# ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1894
>> >>>>
>> >>>> ovs-vsctl del-port trail dpdkvhostuser0
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1894
>> >>>> ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1894
>> >>>>
>> >>>> When moved from 1894 to 1895
>> >>>> ========================
>> >>>> pkill ovs and restart OVS
>> >>>>
>> >>>> ovs-vsctl add-br trail -- set bridge trail datapath_type=netdev
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1894
>> >>>> ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1894
>> >>>>
>> >>>> ovs-vsctl del-port trail dpdkvhostuser0
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1895
>> >>>> ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1895
>> >>>> ovs-vsctl del-port trail dpdkvhostuser0
>> >>>> ovs-vsctl add-port trail dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser -- set Interface dpdkvhostuser0 mtu_request=1895
>> >>>> 2017-02-15T15:43:28.066Z|00002|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdkvhostuser0, with MTU 1895 on socket 0
>> >>>> ovs-vsctl get interface dpdkvhostuser0 mtu
>> >>>> 1500
>> >>>>
>> >>>> root@hp-dl20:/# uname -r
>> >>>> 4.1.26-yocto-standard
>> >>>>
>> >>>> root@hp-dl20:/# cat /proc/meminfo
>> >>>> HugePages_Total:    1536
>> >>>> HugePages_Free:     1024
>> >>>> HugePages_Rsvd:        0
>> >>>> HugePages_Surp:        0
>> >>>> Hugepagesize:       2048 kB
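>> >>>>
>> >>>> (Reading that meminfo: 1536 total - 1024 free = 512 pages in use,
>> >>>> i.e. 512 x 2 MB = 1024 MB, which matches the dpdk-socket-mem=1024
>> >>>> setting shown earlier in this thread.)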
>> >>>>
>> >>>> Regards
>> >>>> Kapil.
>> >>>> _______________________________________________
>> >>>> dev mailing list
>> >>>> d...@openvswitch.org
>> >>>> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>> >>>
>> >>>
>> >>
>> >>--
>> >>Thanks
>> >>Kapil
>> >--
>> >Thanks
>> >Kapil
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
