Hi All:
 I have hugetlb mounted:

root@node-1:/app# mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,nsroot=/kubepods/besteffort/pod57d8886a-701a-11e9-be26-08002733828a/3d36de8ece4e84a1ccfca2c28e9bec1a8b1b1efdec682995f7a6406808d0c8a2)
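Note that the grep above matches the hugetlb cgroup controller, not a hugetlbfs filesystem mount. A quick sketch of how to check both from inside the container (assuming a Linux /proc is visible) would be:

```shell
# Hugepage accounting as seen by the kernel; HugePages_Total > 0 means pages are reserved
grep -i '^HugePages' /proc/meminfo

# Look for an actual hugetlbfs mount; the hugetlb cgroup controller alone is not enough
mount | grep hugetlbfs || echo "no hugetlbfs mount found"
```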

vagrant@master:~$ kubectl version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", 
GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", 
BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", 
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", 
GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", 
BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", 
Platform:"linux/amd64"}

I was able to bring up VPP fine with the same configs using docker-compose. I am
still not clear why it doesn't work in k8s.
Thanks, Mohamed
    On Monday, May 6, 2019, 11:53:23 AM EDT, Benoit Ganne (bganne) via 
Lists.Fd.Io <bganne=cisco....@lists.fd.io> wrote:  
 
 Just a shot in the dark, but is hugetlbfs accessible somewhere in your
container?
It is not by default, and you probably need it, e.g.:
~# mount -t hugetlbfs /dev/null /mnt/huge

ben
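In the Kubernetes case, hugepages are normally granted through the pod spec rather than a manual mount inside the container. A minimal sketch (image name and sizes are illustrative, and the node must have hugepages pre-allocated) might look like:

```yaml
# Sketch of a pod spec granting hugepages to the container (names and sizes illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: vpp
spec:
  containers:
  - name: vpp
    image: vpp-image            # placeholder image name
    securityContext:
      privileged: true
    resources:
      requests:
        hugepages-2Mi: 1Gi      # requires hugepages pre-allocated on the node
        memory: 1Gi             # a memory request must accompany hugepages
      limits:
        hugepages-2Mi: 1Gi
        memory: 1Gi
    volumeMounts:
    - name: hugepage
      mountPath: /dev/hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages         # kubelet backs this volume with hugetlbfs
```

With `medium: HugePages`, the container sees an actual hugetlbfs mount at /dev/hugepages, which is what DPDK needs.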

> -----Original Message-----
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Peter Mikus
> via Lists.Fd.Io
> Sent: Friday, 3 May 2019 12:01
> To: msher...@yahoo.com
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] SIGABRT during dpdk_config
> 
> Hello,
> 
> 
> 
> It would be helpful if you could describe your environment in more detail:
> K8s version, YAML manifests, etc.
> 
> This seems to be a problem with mapping hugepages.
> 
> 
> 
> Peter Mikus
> Engineer – Software
> 
> Cisco Systems Limited
> 
> 
> 
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion
> via Lists.Fd.Io
> Sent: Friday, May 3, 2019 11:48 AM
> To: msher...@yahoo.com
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] SIGABRT during dpdk_config
> 
> 
> 
> 
> 
>     On 1 May 2019, at 18:43, Mohamed Mohamed via Lists.Fd.Io
> <msherif4=yahoo....@lists.fd.io <mailto:msherif4=yahoo....@lists.fd.io> >
> wrote:
> 
> 
> 
>     Hi Damjan
> 
> 
> 
>     I am running the container in privileged mode. Is there a way to
> narrow this down?
> 
> 
> 
> No idea, I have never tried to run VPP in a Docker container…
> 
> 
> 
> --
> Damjan
> 
> 
> 
> 
> 
>     Thanks
> 
>     Mohamed
> 
> 
> 
>     Sent from my iPhone
> 
> 
>     On May 1, 2019, at 12:37 PM, Damjan Marion (damarion)
> <damar...@cisco.com <mailto:damar...@cisco.com> > wrote:
> 
> 
> 
> 
> 
> 
> 
>             On 1 May 2019, at 16:11, Mohamed Mohamed via Lists.Fd.Io
> <msherif4=yahoo....@lists.fd.io <mailto:msherif4=yahoo....@lists.fd.io> >
> wrote:
> 
> 
> 
>             Hi Folks:
> 
> 
> 
>             I am getting a VPP crash during init with the following
> trace:
> 
> 
> 
>             Program terminated with signal SIGABRT, Aborted.
> 
>             #0  0x00007f0c7e3ea428 in __GI_raise (sig=sig@entry=6)
> at ../sysdeps/unix/sysv/linux/raise.c:54
> 
>             54 ../sysdeps/unix/sysv/linux/raise.c: No such file or
> directory.
> 
>             [Current thread is 1 (Thread 0x7f0c7ffef780 (LWP
> 21592))]
> 
>             (gdb) bt
> 
>             #0  0x00007f0c7e3ea428 in __GI_raise (sig=sig@entry=6)
> at ../sysdeps/unix/sysv/linux/raise.c:54
> 
>             #1  0x00007f0c7e3ec02a in __GI_abort () at abort.c:89
> 
>             #2  0x000000000040730e in os_exit (code=code@entry=1) at
> /build/vpp/src/vpp/vnet/main.c:357
> 
>             #3  0x00007f0c7ecaec09 in unix_signal_handler
> (signum=<optimized out>, si=<optimized out>,
> 
>                 uc=<optimized out>) at
> /build/vpp/src/vlib/unix/main.c:156
> 
>             #4  <signal handler called>
> 
>             #5  0x00007f0c3bcfa977 in alloc_seg () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #6  0x00007f0c3bcfb06d in alloc_seg_walk () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #7  0x00007f0c3bd022fb in
> rte_memseg_list_walk_thread_unsafe () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #8  0x00007f0c3bcfbbf1 in eal_memalloc_alloc_seg_bulk ()
> from /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #9  0x00007f0c3bd12034 in alloc_pages_on_heap () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #10 0x00007f0c3bd12362 in try_expand_heap () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #11 0x00007f0c3bd128b8 in alloc_more_mem_on_socket ()
> from /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #12 0x00007f0c3bd12d25 in malloc_heap_alloc () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #13 0x00007f0c3bd0d90e in rte_malloc_socket () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #14 0x00007f0c3bd166b1 in rte_service_init () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #15 0x00007f0c3bcf0487 in rte_eal_init () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 
>             #16 0x00007f0c3c38d0b3 in dpdk_config (vm=<optimized
> out>, input=<optimized out>)
> 
>                 at /build/vpp/src/plugins/dpdk/device/init.c:1446
> 
>             #17 0x00007f0c7ec710d7 in vlib_call_all_config_functions
> (vm=<optimized out>,
> 
>                 input=input@entry=0x7f0c3e3fffa0,
> is_early=is_early@entry=0) at /build/vpp/src/vlib/init.c:146
> 
>             #18 0x00007f0c7ec83c16 in vlib_main (vm=<optimized out>,
> vm@entry=0x7f0c7eec6340 <vlib_global_main>,
> 
>                 input=input@entry=0x7f0c3e3fffa0) at
> /build/vpp/src/vlib/main.c:2028
> 
>             #19 0x00007f0c7ecadc26 in thread0 (arg=139691645756224)
> at /build/vpp/src/vlib/unix/main.c:606
> 
>             #20 0x00007f0c7e7b0594 in clib_calljmp () from
> /usr/lib/x86_64-linux-gnu/libvppinfra.so.19.01
> 
>             #21 0x00007ffe62c32740 in ?? ()
> 
>             #22 0x00007f0c7ecaf172 in vlib_unix_main
> (argc=<optimized out>, argv=<optimized out>)
> 
>                 at /build/vpp/src/vlib/unix/main.c:675
> 
>             #23 0x0000000000000000 in ?? ()
> 
> 
> 
>             I am running VPP in a Docker container in a Vagrant VM. It
> comes up fine with docker-compose, but when I did the same from k8s it
> failed to come up with the above trace.
> 
> 
> 
>             Any suggestions?
> 
> 
> 
>         DPDK is not able to allocate memory. It might be missing
> permissions.
> 
> 
> 
>         --
>         Damjan
> 
> 
> 
>     -=-=-=-=-=-=-=-=-=-=-=-
>     Links: You receive all messages sent to this group.
> 
>     View/Reply Online (#12893): https://lists.fd.io/g/vpp-
> dev/message/12893 <https://lists.fd.io/g/vpp-dev/message/12893>
>     Mute This Topic: https://lists.fd.io/mt/31432757/675642
>     Group Owner: vpp-dev+ow...@lists.fd.io
>     Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [dmar...@me.com
> <mailto:dmar...@me.com> ]
>     -=-=-=-=-=-=-=-=-=-=-=-
> 
> 
