This is good news. I was wondering if it was in the pipeline. Thank you

P Gyanesh Kumar Patra

On Thu, Nov 9, 2017 at 7:25 PM, Francois Ozog <francois.o...@linaro.org>
wrote:

> ODP2.0 should allow ODP to use libibverbs directly from a native ODP
> pktio, without the DPDK layer.
> Mellanox has created a userland framework based on libibverbs, while we
> are trying to promote an extension of Mediated Device (vfio-mdev).
>
> FF
>
> On 9 November 2017 at 14:18, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>
>> Nice to see it working. I think we have not yet tested it with Mellanox
>> drivers.
>>
>> For linux-generic, refer to .travis.yml or the ./scripts/build-pktio-dpdk
>> script. All of the required steps are also in the README.
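>>
>> A minimal sketch of those steps, assuming DPDK v17.08 as mentioned
>> elsewhere in this thread (the README has the authoritative sequence):
>>
>>   git clone http://dpdk.org/git/dpdk
>>   cd dpdk && git checkout v17.08
>>   make config T=x86_64-native-linuxapp-gcc
>>   make EXTRA_CFLAGS="-fPIC"   # -fPIC so ODP can link the DPDK libs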
>>
>> Maxim.
>>
>> On 11/09/17 14:47, Elo, Matias (Nokia - FI/Espoo) wrote:
>> > Hi Gyanesh,
>> >
>> > Pretty much the same steps should also work with odp linux-generic. The
>> > main difference is the configure script. With linux-generic you use the
>> > '--with-dpdk-path=<dpdk_path>' option and optionally the
>> > --enable-dpdk-zero-copy flag. The supported DPDK version is v17.08.
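>> >
>> > A minimal sketch, assuming the usual autotools flow and a DPDK tree
>> > built under ~/dpdk (the path is illustrative):
>> >
>> >   ./bootstrap
>> >   ./configure --with-dpdk-path=$HOME/dpdk --enable-dpdk-zero-copy
>> >   make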
>> >
>> > -Matias
>> >
>> >> On 9 Nov 2017, at 10:34, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>> >>
>> >> Hi Maxim,
>> >> Thanks for the help. I managed to figure out the configuration error,
>> >> and it works fine for "ODP-DPDK". The MLX5 PMD was not included properly.
>> >>
>> >> But regarding the "ODP" repo (not odp-dpdk), do I need to follow any
>> >> steps to be able to use the MLX NICs?
>> >>
>> >>
>> >> P Gyanesh Kumar Patra
>> >>
>> >> On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
>> >> wrote:
>> >>
>> >>> On 11/08/17 19:32, gyanesh patra wrote:
>> >>>> I am not sure what you mean. Can you please elaborate?
>> >>>>
>> >>>> As I mentioned before, I am able to run the DPDK examples, so the
>> >>>> drivers are available and working fine.
>> >>>> I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled them
>> >>>> to work with Mellanox. I followed the same approach while compiling
>> >>>> DPDK too.
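>> >>>>
>> >>>> For reference, the configure invocation was along these lines (the
>> >>>> DPDK path is illustrative):
>> >>>>
>> >>>>   ./configure --with-dpdk-path=$HOME/dpdk LDFLAGS=-libverbs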
>> >>>>
>> >>>> Is there anything I am missing?
>> >>>>
>> >>>> P Gyanesh Kumar Patra
>> >>>
>> >>>
>> >>> In general, if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified, then it has
>> >>> to work. I think we tested only with ixgbe, but in general it's common
>> >>> code.
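>> >>>
>> >>> A quick way to double-check the flag in the DPDK build config (the
>> >>> build/ path is illustrative; it depends on how you ran make config):
>> >>>
>> >>>   grep CONFIG_RTE_LIBRTE_MLX5_PMD build/.config
>> >>>   # expect CONFIG_RTE_LIBRTE_MLX5_PMD=y; flip it and rebuild if not:
>> >>>   sed -ri 's/(CONFIG_RTE_LIBRTE_MLX5_PMD=)n/\1y/' build/.config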
>> >>>
>> >>> "Unable to init any I/O type." means it it called all open for all
>> pktio
>> >>> in list here:
>> >>> ./platform/linux-generic/pktio/io_ops.c
>> >>>
>> >>> and setup_pkt_dpdk() failed for some reason.
>> >>>
>> >>> I do not like the allocation errors in your log.
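>> >>>
>> >>> The mmap "Cannot allocate memory" messages usually mean no huge pages
>> >>> are reserved. A minimal sketch of reserving them (the page count is
>> >>> illustrative):
>> >>>
>> >>>   echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
>> >>>   sudo mkdir -p /mnt/huge
>> >>>   sudo mount -t hugetlbfs nodev /mnt/huge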
>> >>>
>> >>> Try to compile ODP with --enable-debug-print --enable-debug; it will
>> >>> make the ODP_DBG() macro work, and it will be visible why it does not
>> >>> open the pktio.
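>> >>>
>> >>> That is, reconfigure and rebuild roughly like this, keeping whatever
>> >>> other options you already use:
>> >>>
>> >>>   ./configure --enable-debug --enable-debug-print <your other flags>
>> >>>   make clean && make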
>> >>>
>> >>> Maxim
>> >>>
>> >>>
>> >>>>
>> >>>> On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>> >>>>
>> >>>>    is Mellanox pmd compiled in?
>> >>>>
>> >>>>    Maxim.
>> >>>>
>> >>>>    On 11/08/17 17:58, gyanesh patra wrote:
>> >>>>> Hi,
>> >>>>> I am trying to run the ODP & ODP-DPDK examples on our server with
>> >>>>> Mellanox 100G NICs. I am using the odp_l2fwd example. While running
>> >>>>> the example, I am facing some issues.
>> >>>>> -> When I run the "ODP" example using the interface names given by
>> >>>>> the kernel as arguments, I am not getting enough throughput (the
>> >>>>> value is very low).
>> >>>>> -> And when I try the "ODP-DPDK" example using port IDs "0,1", it
>> >>>>> can't create the pktio, whereas I am able to run the examples from
>> >>>>> the "DPDK" repo with port IDs "0,1" for the same Mellanox NICs. I
>> >>>>> tried running with "81:00.0,81:00.1" and also with interface names,
>> >>>>> without any success. Adding the whitelist using ODP_PLATFORM_PARAMS
>> >>>>> (see the sketch below) doesn't help either.
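>> >>>>>
>> >>>>> The whitelist attempt looked roughly like this (the exact EAL options
>> >>>>> I used may differ):
>> >>>>>
>> >>>>>   ODP_PLATFORM_PARAMS="-n 4 -w 81:00.0 -w 81:00.1" ./odp_l2fwd -i 0,1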
>> >>>>>
>> >>>>> Am I missing any steps to use the Mellanox NICs? Or is there a
>> >>>>> different method of specifying the device details to create a pktio?
>> >>>>> I am providing the output of the "odp_l2fwd" example for both the ODP
>> >>>>> and ODP-DPDK repositories here.
>> >>>>>
>> >>>>> The NICs being used:
>> >>>>>
>> >>>>> 0000:81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0 drv=mlx5_core unused=
>> >>>>> 0000:81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1 drv=mlx5_core unused=
>> >>>>>
>> >>>>> ODP l2fwd example run details:
>> >>>>> ------------------------------
>> >>>>> root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i enp129s0f0,enp129s0f1
>> >>>>> HW time counter freq: 2399999886 hz
>> >>>>>
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal
>> >>>>> pages. check: /proc/sys/vm/nr_hugepages.
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> PKTIO: initialized loop interface.
>> >>>>> PKTIO: initialized pcap interface.
>> >>>>> PKTIO: initialized ipc interface.
>> >>>>> PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
>> >>>>> PKTIO: initialized socket mmsg, use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to disable.
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>>
>> >>>>> ODP system info
>> >>>>> ---------------
>> >>>>> ODP API version: 1.15.0
>> >>>>> ODP impl name:   "odp-linux"
>> >>>>> CPU model:       Intel(R) Xeon(R) CPU E5-2680 v4
>> >>>>> CPU freq (hz):   3300000000
>> >>>>> Cache line size: 64
>> >>>>> CPU count:       56
>> >>>>>
>> >>>>>
>> >>>>> CPU features supported:
>> >>>>> SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA CMPXCHG16B
>> >>>>> XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE AES XSAVE
>> >>>>> OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR
>> >>>>> PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE
>> >>>>> DIGTEMP TRBOBST ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE HLE
>> >>>>> AVX2 BMI2 ERMS INVPCID RTM LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T INVTSC
>> >>>>>
>> >>>>> CPU features NOT supported:
>> >>>>> CNXT_ID PSN ACNT2 BMI1 SMEP AVX512F LZCNT
>> >>>>>
>> >>>>> Running ODP appl: "odp_l2fwd"
>> >>>>> -----------------
>> >>>>> IF-count:        2
>> >>>>> Using IFs:       enp129s0f0 enp129s0f1
>> >>>>> Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>>
>> >>>>> num worker threads: 32
>> >>>>> first CPU:          24
>> >>>>> cpu mask:           0xFFFFFFFF000000
>> >>>>>
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>>
>> >>>>> Pool info
>> >>>>> ---------
>> >>>>>  pool            0
>> >>>>>  name            packet pool
>> >>>>>  pool type       packet
>> >>>>>  pool shm        11
>> >>>>>  user area shm   0
>> >>>>>  num             8192
>> >>>>>  align           64
>> >>>>>  headroom        128
>> >>>>>  seg len         8064
>> >>>>>  max data len    65536
>> >>>>>  tailroom        0
>> >>>>>  block size      8768
>> >>>>>  uarea size      0
>> >>>>>  shm size        72143104
>> >>>>>  base addr       0x7f5fc1234000
>> >>>>>  uarea shm size  0
>> >>>>>  uarea base addr (nil)
>> >>>>>
>> >>>>> pktio/socket_mmap.c:401:mmap_setup_ring():setsockopt(pkt mmap): Invalid argument
>> >>>>> pktio/socket_mmap.c:496:sock_mmap_close():mmap_unmap_sock() Invalid argument
>> >>>>> created pktio 1, dev: enp129s0f0, drv: socket
>> >>>>> Sharing 1 input queues between 16 workers
>> >>>>> Sharing 1 output queues between 16 workers
>> >>>>> created 1 input and 1 output queues on (enp129s0f0)
>> >>>>> pktio/socket_mmap.c:401:mmap_setup_ring():setsockopt(pkt mmap): Invalid argument
>> >>>>> pktio/socket_mmap.c:496:sock_mmap_close():mmap_unmap_sock() Invalid argument
>> >>>>> created pktio 2, dev: enp129s0f1, drv: socket
>> >>>>> Sharing 1 input queues between 16 workers
>> >>>>> Sharing 1 output queues between 16 workers
>> >>>>> created 1 input and 1 output queues on (enp129s0f1)
>> >>>>>
>> >>>>> Queue binding (indexes)
>> >>>>> -----------------------
>> >>>>> worker 0
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 1
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 2
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 3
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 4
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 5
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 6
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 7
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 8
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 9
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 10
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 11
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 12
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 13
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 14
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 15
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 16
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 17
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 18
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 19
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 20
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 21
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 22
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 23
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 24
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 25
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 26
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 27
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 28
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 29
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>> worker 30
>> >>>>>  rx: pktio 0, queue 0
>> >>>>>  tx: pktio 1, queue 0
>> >>>>> worker 31
>> >>>>>  rx: pktio 1, queue 0
>> >>>>>  tx: pktio 0, queue 0
>> >>>>>
>> >>>>>
>> >>>>> Port config
>> >>>>> --------------------
>> >>>>> Port 0 (enp129s0f0)
>> >>>>>  rx workers 16
>> >>>>>  tx workers 16
>> >>>>>  rx queues 1
>> >>>>>  tx queues 1
>> >>>>> Port 1 (enp129s0f1)
>> >>>>>  rx workers 16
>> >>>>>  tx workers 16
>> >>>>>  rx queues 1
>> >>>>>  tx queues 1
>> >>>>>
>> >>>>> [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [09] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [10] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [11] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [12] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [13] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [14] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [15] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [16] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [17] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [18] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [19] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [20] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [21] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [22] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [23] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [24] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [25] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [26] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [27] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [28] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [29] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [30] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [31] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>> [32] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>>
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>> >>>>> 96 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 64 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 32 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>> >>>>> 416 pps, 416 max pps,  0 rx drops, 0 tx drops
>> >>>>>
>> >>>>>
>> >>>>> ODP-DPDK example run details:
>> >>>>> -----------------------------
>> >>>>> root@ubuntu:/home/ubuntu/odp-dpdk/test/common_plat/performance# ./odp_l2fwd -i 0,1
>> >>>>> EAL: Detected 56 lcore(s)
>> >>>>> EAL: Probing VFIO support...
>> >>>>> EAL: PCI device 0000:05:00.0 on NUMA socket 0
>> >>>>> EAL:   probe driver: 8086:1528 net_ixgbe
>> >>>>> EAL: PCI device 0000:05:00.1 on NUMA socket 0
>> >>>>> EAL:   probe driver: 8086:1528 net_ixgbe
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> ../linux-generic/_ishm.c:866:_odp_ishm_reserve():No huge pages, fall
>> >>>>> back to normal pages. check: /proc/sys/vm/nr_hugepages.
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> PKTIO: initialized loop interface.
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> No crypto devices available
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot allocate memory
>> >>>>>
>> >>>>> ODP system info
>> >>>>> ---------------
>> >>>>> ODP API version: 1.15.0
>> >>>>> ODP impl name:   odp-dpdk
>> >>>>> CPU model:       Intel(R) Xeon(R) CPU E5-2680 v4
>> >>>>> CPU freq (hz):   2400000000
>> >>>>> Cache line size: 64
>> >>>>> CPU count:       56
>> >>>>>
>> >>>>> Running ODP appl: "odp_l2fwd"
>> >>>>> -----------------
>> >>>>> IF-count:        2
>> >>>>> Using IFs:       0 1
>> >>>>> Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
>> >>>>>
>> >>>>> num worker threads: 32
>> >>>>> first CPU:          24
>> >>>>> cpu mask:           0xFFFFFFFF000000
>> >>>>>
>> >>>>> mempool <packet pool>@0x7f1c7fe7de40
>> >>>>>  flags=10
>> >>>>>  pool=0x7f1c7e8ddcc0
>> >>>>>  phys_addr=0x17ffe7de40
>> >>>>>  nb_mem_chunks=1
>> >>>>>  size=8192
>> >>>>>  populated_size=8192
>> >>>>>  header_size=64
>> >>>>>  elt_size=2624
>> >>>>>  trailer_size=64
>> >>>>>  total_obj_size=2752
>> >>>>>  private_data_size=64
>> >>>>>  avg bytes/object=2752.000000
>> >>>>>  internal cache infos:
>> >>>>>    cache_size=512
>> >>>>>    cache_count[0]=0
>> >>>>>    cache_count[1]=0
>> >>>>>    cache_count[2]=0
>> >>>>>    cache_count[3]=0
>> >>>>>    cache_count[4]=0
>> >>>>>    cache_count[5]=0
>> >>>>>    cache_count[6]=0
>> >>>>>    cache_count[7]=0
>> >>>>>    cache_count[8]=0
>> >>>>>    cache_count[9]=0
>> >>>>>    cache_count[10]=0
>> >>>>>    cache_count[11]=0
>> >>>>>    cache_count[12]=0
>> >>>>>    cache_count[13]=0
>> >>>>>    cache_count[14]=0
>> >>>>>    cache_count[15]=0
>> >>>>>    cache_count[16]=0
>> >>>>>    cache_count[17]=0
>> >>>>>    cache_count[18]=0
>> >>>>>    cache_count[19]=0
>> >>>>>    cache_count[20]=0
>> >>>>>    cache_count[21]=0
>> >>>>>    cache_count[22]=0
>> >>>>>    cache_count[23]=0
>> >>>>>    cache_count[24]=0
>> >>>>>    cache_count[25]=0
>> >>>>>    cache_count[26]=0
>> >>>>>    cache_count[27]=0
>> >>>>>    cache_count[28]=0
>> >>>>>    cache_count[29]=0
>> >>>>>    cache_count[30]=0
>> >>>>>    cache_count[31]=0
>> >>>>>    cache_count[32]=0
>> >>>>>    cache_count[33]=0
>> >>>>>    cache_count[34]=0
>> >>>>>    cache_count[35]=0
>> >>>>>    cache_count[36]=0
>> >>>>>    cache_count[37]=0
>> >>>>>    cache_count[38]=0
>> >>>>>    cache_count[39]=0
>> >>>>>    cache_count[40]=0
>> >>>>>    cache_count[41]=0
>> >>>>>    cache_count[42]=0
>> >>>>>    cache_count[43]=0
>> >>>>>    cache_count[44]=0
>> >>>>>    cache_count[45]=0
>> >>>>>    cache_count[46]=0
>> >>>>>    cache_count[47]=0
>> >>>>>    cache_count[48]=0
>> >>>>>    cache_count[49]=0
>> >>>>>    cache_count[50]=0
>> >>>>>    cache_count[51]=0
>> >>>>>    cache_count[52]=0
>> >>>>>    cache_count[53]=0
>> >>>>>    cache_count[54]=0
>> >>>>>    cache_count[55]=0
>> >>>>>    cache_count[56]=0
>> >>>>>    cache_count[57]=0
>> >>>>>    cache_count[58]=0
>> >>>>>    cache_count[59]=0
>> >>>>>    cache_count[60]=0
>> >>>>>    cache_count[61]=0
>> >>>>>    cache_count[62]=0
>> >>>>>    cache_count[63]=0
>> >>>>>    cache_count[64]=0
>> >>>>>    cache_count[65]=0
>> >>>>>    cache_count[66]=0
>> >>>>>    cache_count[67]=0
>> >>>>>    cache_count[68]=0
>> >>>>>    cache_count[69]=0
>> >>>>>    cache_count[70]=0
>> >>>>>    cache_count[71]=0
>> >>>>>    cache_count[72]=0
>> >>>>>    cache_count[73]=0
>> >>>>>    cache_count[74]=0
>> >>>>>    cache_count[75]=0
>> >>>>>    cache_count[76]=0
>> >>>>>    cache_count[77]=0
>> >>>>>    cache_count[78]=0
>> >>>>>    cache_count[79]=0
>> >>>>>    cache_count[80]=0
>> >>>>>    cache_count[81]=0
>> >>>>>    cache_count[82]=0
>> >>>>>    cache_count[83]=0
>> >>>>>    cache_count[84]=0
>> >>>>>    cache_count[85]=0
>> >>>>>    cache_count[86]=0
>> >>>>>    cache_count[87]=0
>> >>>>>    cache_count[88]=0
>> >>>>>    cache_count[89]=0
>> >>>>>    cache_count[90]=0
>> >>>>>    cache_count[91]=0
>> >>>>>    cache_count[92]=0
>> >>>>>    cache_count[93]=0
>> >>>>>    cache_count[94]=0
>> >>>>>    cache_count[95]=0
>> >>>>>    cache_count[96]=0
>> >>>>>    cache_count[97]=0
>> >>>>>    cache_count[98]=0
>> >>>>>    cache_count[99]=0
>> >>>>>    cache_count[100]=0
>> >>>>>    cache_count[101]=0
>> >>>>>    cache_count[102]=0
>> >>>>>    cache_count[103]=0
>> >>>>>    cache_count[104]=0
>> >>>>>    cache_count[105]=0
>> >>>>>    cache_count[106]=0
>> >>>>>    cache_count[107]=0
>> >>>>>    cache_count[108]=0
>> >>>>>    cache_count[109]=0
>> >>>>>    cache_count[110]=0
>> >>>>>    cache_count[111]=0
>> >>>>>    cache_count[112]=0
>> >>>>>    cache_count[113]=0
>> >>>>>    cache_count[114]=0
>> >>>>>    cache_count[115]=0
>> >>>>>    cache_count[116]=0
>> >>>>>    cache_count[117]=0
>> >>>>>    cache_count[118]=0
>> >>>>>    cache_count[119]=0
>> >>>>>    cache_count[120]=0
>> >>>>>    cache_count[121]=0
>> >>>>>    cache_count[122]=0
>> >>>>>    cache_count[123]=0
>> >>>>>    cache_count[124]=0
>> >>>>>    cache_count[125]=0
>> >>>>>    cache_count[126]=0
>> >>>>>    cache_count[127]=0
>> >>>>>    total_cache_count=0
>> >>>>>  common_pool_count=8192
>> >>>>>  no statistics available
>> >>>>> ../linux-generic/odp_packet_io.c:226:setup_pktio_entry():Unable to init any I/O type.
>> >>>>> odp_l2fwd.c:642:create_pktio():Error: failed to open 0
>> >>>>>
>> >>>>>
>> >>>>> Thanks & Regards,
>> >>>>> P Gyanesh Kumar Patra
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >
>>
>>
>
>
> --
> François-Frédéric Ozog | *Director Linaro Networking Group*
> T: +33.67221.6485
> francois.o...@linaro.org | Skype: ffozog
>
>
