Re: [vpp-dev] xdot hangs when used to view vpp graphviz #vpp #vpp

2019-05-15 Thread Benoit Ganne (bganne) via Lists.Fd.Io
> Any suggestion on what might be causing?
> bbalaji@bbalaji-vm-1:~/Repos/vpp$ xdot /tmp/vpp.dot
[...]
>   File "/usr/lib/python3.6/selectors.py", line 376, in select
> fd_event_list = self._poll.poll(timeout)
> KeyboardInterrupt

Probably xdot is choking on the VPP graph complexity 😊
I have been using Gephi [1] to explore the VPP graph with great success.

Ben

[1] https://gephi.org/


Re: [vpp-dev] Weird Error Number

2019-05-15 Thread Ole Troan
> > Can anyone shed some light here?
> 
> I guess the purpose of a positive value is to indicate that you can try again 
> with the same arguments and get a different response,
> as opposed to a negative value.
> 
> I would be in favour of removing that error from api_errno.h.
> Especially for these corner cases I'd much rather want a documented separate 
> error number space for the module / message.
> 
> Best regards,
> Ole
> 
> Hi Ole,
> 
> I will submit a patch to remove the positive error number
> and the corresponding use of it.
> 
> And remove the entire dead function as well...?

Thanks! Merged.

Best regards,
Ole



[vpp-dev] vapi_dispatch_one blocking mode #vpp #vapi

2019-05-15 Thread Mahdi Varasteh
Hello,

I noticed that the vapi_recv function, called from vapi_dispatch_one, is
invoked like this:

vapi_recv (ctx, &msg, &size, SVM_Q_WAIT, 0);

One time (it happened just once and I couldn't reproduce it), the code froze in
pthread_cond_wait, called from svm_queue_wait_inline, and when I traced it back
it all started from vapi_recv.

Wouldn't it be better practice to call vapi_recv in vapi_dispatch_one with
SVM_Q_TIMEDWAIT?
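
For illustration, a minimal sketch of what the timed-wait variant could look like. Only vapi_recv() and vapi_msg_free() are taken from the VAPI headers; the helper name, the 5-second timeout and the error handling are assumptions, not existing VAPI code:

#include <vapi/vapi.h>

/* Hypothetical helper: poll the shared-memory queue with a timeout instead of
 * blocking forever in SVM_Q_WAIT. */
static vapi_error_e
recv_one_with_timeout (vapi_ctx_t ctx)
{
  void *msg = NULL;
  size_t size = 0;
  /* SVM_Q_TIMEDWAIT hands control back to the caller if nothing arrives within
   * the timeout, so a stuck peer cannot freeze the dispatch loop forever. */
  vapi_error_e rv = vapi_recv (ctx, &msg, &size, SVM_Q_TIMEDWAIT, 5);
  if (rv == VAPI_OK && msg)
    {
      /* ... dispatch/handle the message here ... */
      vapi_msg_free (ctx, msg);
    }
  /* On timeout or error the caller can retry, check for shutdown, or report. */
  return rv;
}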


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Eyle,

> I guess you are looking for this part:
[...]
> If necessary, I can provide the full log.

Yes please; it looks like I am missing the errno = 13 part in particular. Also,
could you share the dmesg output too? As it is an issue between userspace and
kernel space, the kernel logs will help.
The best would be to share the whole strace & dmesg output (e.g. through
pastebin or equivalent).
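
In case it helps, one way to capture both in one go (the binary and config paths below are just the usual defaults, adjust as needed):

~# strace -f -o /tmp/vpp.strace /usr/bin/vpp -c /etc/vpp/startup.conf
~# dmesg > /tmp/dmesg.txt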

Thx,
ben


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Hi Ben,

I guess you are looking for this part:

open("/home/centos/rdma.vpp", O_RDONLY) = 8
fstat(8, {st_mode=S_IFREG|0664, st_size=45, ...}) = 0
read(8, "create int rdma host-if enp1s0f1"..., 4096) = 45
readlink("/sys/class/net/enp1s0f1/device/driver/module", 
"../../../../module/mlx5_core", 63) = 28
readlink("/sys/class/net/enp1s0f1/device", "../../../:01:00.1", 63) = 21
getuid()= 0
geteuid()   = 0
open("/sys/class/infiniband_verbs/abi_version", O_RDONLY|O_CLOEXEC) = 9
read(9, "6\n", 8)   = 2
close(9)= 0
open("/sys/class/infiniband_verbs/abi_version", O_RDONLY|O_CLOEXEC) = 9
read(9, "6\n", 8)   = 2
close(9)= 0
geteuid()   = 0
openat(AT_FDCWD, "/sys/class/infiniband_verbs", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 9
getdents(9, /* 4 entries */, 32768) = 112
stat("/sys/class/infiniband_verbs/abi_version", {st_mode=S_IFREG|0444, 
st_size=4096, ...}) = 0
stat("/sys/class/infiniband_verbs/uverbs1", {st_mode=S_IFDIR|0755, st_size=0, 
...}) = 0
open("/sys/class/infiniband_verbs/uverbs1/ibdev", O_RDONLY|O_CLOEXEC) = 10
read(10, "mlx5_1\n", 64)= 7
close(10)   = 0
stat("/sys/class/infiniband/mlx5_1", {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
stat("/dev/infiniband/uverbs1", {st_mode=S_IFCHR|0777, st_rdev=makedev(231, 
193), ...}) = 0
open("/sys/class/infiniband_verbs/uverbs1/abi_version", O_RDONLY|O_CLOEXEC) = 10
read(10, "1\n", 8)  = 2
close(10)   = 0
open("/sys/class/infiniband_verbs/uverbs1/device/modalias", O_RDONLY|O_CLOEXEC) 
= 10
read(10, "pci:v15B3d1013sv15B3"..., 512) = 54
close(10)   = 0
getdents(9, /* 0 entries */, 32768) = 0
close(9)= 0
open("/sys/class/infiniband/mlx5_1/node_type", O_RDONLY|O_CLOEXEC) = 9
read(9, "1: CA\n", 16)  = 6
close(9)= 0
readlink("/sys/class/infiniband_verbs/uverbs1/device", "../../../:01:00.1", 
63) = 21
open("/dev/infiniband/uverbs1", O_RDWR|O_CLOEXEC) = 9
mmap(NULL, 204800, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f4de4935000
ioctl(9, _IOC(_IOC_READ|_IOC_WRITE, 0x1b, 0x01, 0x18), 0x7f4da149b240) = -1 
ENOTTY (Inappropriate ioctl for device)
uname({sysname="Linux", 
nodename="node4.nfv.surfnet.nl", ...}) = 0
write(9, 
"\0\0\0\0\f\0\24\0p\263I\241M\177\0\0\20\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0"..., 48) 
= 48
brk(NULL)   = 0x18b7000
brk(0x18d8000)  = 0x18d8000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0) = 0x7f4de4a81000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x1000) = 0x7f4de4a8
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x2000) = 0x7f4de4a7f000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x3000) = 0x7f4de4a7e000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x4000) = 0x7f4de4a7d000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x5000) = 0x7f4de4a7c000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x6000) = 0x7f4de4a7b000
mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, 9, 0x7000) = 0x7f4de4a7a000
mmap(NULL, 4096, PROT_READ, MAP_SHARED, 9, 0x50) = 0x7f4de4a79000
mmap(NULL, 4096, PROT_READ, MAP_SHARED, 9, 0x70) = 0x7f4de4a78000
open("/proc/cpuinfo", O_RDONLY) = 11
fstat(11, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 
0x7f4de4a77000
read(11, "processor\t: 0\nvendor_id\t: Genuin"..., 1024) = 1024
read(11, "hwp_epp spec_ctrl intel_stibp fl"..., 1024) = 1024
read(11, "sbase tsc_adjust bmi1 hle avx2 s"..., 1024) = 1024
read(11, " x2apic movbe popcnt tsc_deadlin"..., 1024) = 1024
read(11, "n pebs bts rep_good nopl xtopolo"..., 1024) = 1024
read(11, " pse tsc msr pae mce cx8 apic se"..., 1024) = 1024
read(11, "KB\nphysical id\t: 0\nsiblings\t: 8\n"..., 1024) = 1024
read(11, "d\t: GenuineIntel\ncpu family\t: 6\n"..., 1024) = 1024
read(11, "l_stibp flush_l1d\nbogomips\t: 720"..., 1024) = 1024
read(11, "hle avx2 smep bmi2 erms invpcid "..., 1024) = 312
read(11, "", 1024)  = 0
close(11)   = 0
munmap(0x7f4de4a77000, 4096)= 0
write(9, 
"\1\0\0\200\1\0&\0@\241I\241M\177\0\0\0\0\r\0\0\0\0\0\0\0\0\0\0\0\0\0", 32) = 32
ioctl(9, _IOC(_IOC_READ|_IOC_WRITE, 0x1b, 0x01, 0x18), 0x7f4da149a070) = -1 
ENOTTY (Inappropriate ioctl for device)
ioctl(9, _IOC(_IOC_READ|_IOC_WRITE, 0x1b, 0x01, 0x18), 0x7f4da149b260) = -1 
ENOTTY (Inappropriate ioctl for device)
write(9, "\2\0\0\0\6\0\n\0\340\261I\241M\177\0\0\1\0\0\0\0\0\0\0", 24) = 24
write(9, "\3\0\0\0\4\0\2\0\260\311I\241M\177\0\0", 16) = 16
write(9, 
"\22\0\0\0\20\0\4\0`\310I\241M\177\0\0\340\365\210\1\0\0\0\0\377\3\0\0\0\0\0\0"...,
 64) = 64
write(9, 
"4\0\0\200\5\0\3\\311I\241M\177\0\0\

Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Hi Ben,

Sure, by all means: https://pastebin.com/w6PAsUzN is the VPP log.
See https://pastebin.com/uqT6C9Td for the dmesg output.

Thanks again!

Regards,

Eyle

> On 15 May 2019, at 11:12, Benoit Ganne (bganne)  wrote:
> 
> Hi Eyle,
> 
>> I guess you are looking for this part:
> [...]
>> If necessary, I can provide the full log.
> 
> Yes please, looks like I am missing the errno = 13 part in particular. Also, 
> could you share dmesg output too? As it is an issue between userspace and 
> kernelspace, kernel logs will help.
> The best would be to share the whole strace & dmesg output (eg. through 
> pastebin or equivalent).
> 
> Thx,
> ben





Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Benoit Ganne (bganne) via Lists.Fd.Io
> Sure, by any means: https://pastebin.com/w6PAsUzN is the VPP log
> See https://pastebin.com/uqT6C9Td for the DMESG output.

Nothing stands out from a quick glance :/
Just to make sure, could you disable selinux and retry?
~# setenforce 0

Thx
ben


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Hi Ben,

There we go..
vpp# create int rdma host-if enp1s0f1 name rdma-0
vpp# sh int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
local0                            0     down          0/0/0/0
rdma-0                            1     down         9000/0/0/0
vpp#

I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
check on another node.

Cheers,

Eyle

On 15 May 2019, at 11:52, Benoit Ganne (bganne) <bga...@cisco.com> wrote:

Sure, by any means: https://pastebin.com/w6PAsUzN is the VPP log
See https://pastebin.com/uqT6C9Td for the DMESG output.

Nothing stand out from a quick glance :/
Just to make sure, could you disable selinux and retry?
~# setenforce 0

Thx
ben



Re: [vpp-dev] HQoS

2019-05-15 Thread Abeeha Aqeel
Hi Jasvinder,

Here’s the startup.conf:

unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
# gid vpp
}

api-trace {
  on
}

# api-segment {
# gid vpp
# }

# socksvr {
# default
# }

cpu {
main-core 2
corelist-workers 4, 20, 6, 22, 8, 24
# corelist-hqos-threads 18
}

dpdk {
dev 0000:05:00.0 {
num-rx-queues 2
# hqos
}

dev 0000:05:00.1 {
num-rx-queues 2
# hqos
}

# igb_uio
# uio_pci_generic

## Disable multi-segment buffers, improves performance but
## disables Jumbo MTU support
# no-multi-seg

## Change hugepages allocation per-socket, needed only if there is need 
for
## larger number of mbufs. Default is 256M on each detected CPU socket
socket-mem 16384, 16384

   ## Disables UDP / TCP TX checksum offload. Typically needed for use
## faster vector PMDs (together with no-multi-seg)
# no-tx-checksum-offload
}

plugins {
path /root/vpp/build-root/install-vpp_debug-native/vpp/lib/vpp_plugins/
plugin default { enable }
}
   

I tried both enabling and disabling hqos, but it doesn't work in either case.

Regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Singh, Jasvinder
Sent: Wednesday, May 15, 2019 2:29 PM
To: Abeeha Aqeel; Ni, Hongjun
Cc: Byrne, Stephen1; Dumitrescu, Cristian; vpp-dev@lists.fd.io; 
b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Abeeha,

Can you share startup.conf? 

Thanks,
Jasvinder


From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com] 
Sent: Wednesday, May 15, 2019 7:29 AM
To: Singh, Jasvinder ; Ni, Hongjun 

Cc: Byrne, Stephen1 ; Dumitrescu, Cristian 
; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Jasvinder, 

The CMakeLists.txt files before and after applying the patch are exactly the
same, and the hqos plugin is already enabled, but the CLI commands are still not
showing. Below is the screenshot of the CMakeLists.txt file located at
/root/vpp/src/plugins/dpdk:



Thank you and best regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Abeeha Aqeel
Sent: Wednesday, May 15, 2019 11:04 AM
To: hongjun...@intel.com
Cc: stephen1.by...@intel.com; cristian.dumitre...@intel.com; 
cristian.dumitre...@intel.com; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: FW: [vpp-dev] HQoS


Hi Jasvinder, 

The CMakeLists.txt files before and after applying the patch are exactly the
same, and the hqos plugin is already enabled, but the CLI commands are still not
showing. Below is the screenshot of the CMakeLists.txt file located at
/root/vpp/src/plugins/dpdk:





Thank you and best regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Singh, Jasvinder
Sent: Friday, May 10, 2019 2:58 PM
To: Ni, Hongjun; Abeeha Aqeel
Cc: Byrne, Stephen1; Dumitrescu, Cristian
Subject: RE: [vpp-dev] HQoS

+ Cristian

From: Singh, Jasvinder 
Sent: Friday, May 10, 2019 10:52 AM
To: Ni, Hongjun ; Abeeha Aqeel 

Cc: Byrne, Stephen1 
Subject: RE: [vpp-dev] HQoS

Hi Abeeha,

Looks like the HQoS module is disabled in the current VPP dpdk plugin (check
CMakeLists.txt). Please try enabling it first. You should then see the
HQoS-related CLIs on the console.

Thanks,
Jasvinder


From: Ni, Hongjun 
Sent: Friday, May 10, 2019 2:47 AM
To: Abeeha Aqeel ; Singh, Jasvinder 

Subject: RE: [vpp-dev] HQoS

Hi Jasvinder,

Could you help to look into this issue?   Thank you!

Thanks,
Hongjun

From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com] 
Sent: Thursday, May 9, 2019 6:02 PM
To: Ni, Hongjun ; Singh, Jasvinder 

Cc: b...@xflowresearch.com; vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] HQoS

Hi Hongjun, 

I applied the patch submitted to fix the API change on vpp 19.04 (using the
underlying dpdk 19.02), following these steps:


git clone https://gerrit.fd.io/r/vpp

cd vpp
make wipe
make install-dep
make install-ext-deps
wget 
https://gerrit.fd.io/r/changes/16839/revisions/a62db7f3796ef152d23475ed36ad5f3fbfcab2a8/archive?format=tar
mv archive\?format\=tar a62db7f.tar
tar -xvf a62db7f.tar
make build
./vpp -c /root/go/src/github.com/xFlowResearch/BNG/vpp-startup.conf &
./vppctl

I also tried, git fetch and cherry-pick:

cd vpp
make wipe
make install-dep
make install-ext-deps
git stash
git fetch https://gerrit.fd.io/r/vpp refs/changes/39/16839/3 && git cherry-pick 
FETCH_HEAD  
make build
./vpp -c /root/go/src/github.com/xFlowResearch/BNG/vpp-startup.conf &
./vppctl

In both cases, the setup builds and starts properly, but the hqos plugin doesn't
work, nor does the vpp debug CLI have the hqos commands, e.g. in this

Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Well.. that’s that..
vpp# show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
FiftySixGigabitEthernet1/0/0      1     down         9000/0/0/0
FiftySixGigabitEthernet1/0/1      2     down         9000/0/0/0
local0                            0     down          0/0/0/0

Regards,

Eyle

> On 15 May 2019, at 11:54, Eyle Brinkhuis  wrote:
> 
> Hi Ben,
> 
> There we go..
> vpp# create int rdma host-if enp1s0f1 name rdma-0
> vpp# sh int
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> local0                            0     down          0/0/0/0
> rdma-0                            1     down         9000/0/0/0
> vpp#
> 
> I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
> check on another node.
> 
> Cheers,
> 
> Eyle
> 
>> On 15 May 2019, at 11:52, Benoit Ganne (bganne) > > wrote:
>> 
>>> Sure, by any means: https://pastebin.com/w6PAsUzN 
>>>  is the VPP log
>>> See https://pastebin.com/uqT6C9Td  for the 
>>> DMESG output.
>> 
>> Nothing stand out from a quick glance :/
>> Just to make sure, could you disable selinux and retry?
>> ~# setenforce 0
>> 
>> Thx
>> ben
> 





Re: [vpp-dev] HQoS

2019-05-15 Thread Jasvinder Singh
Hi Abeeha,

In the config file, no hqos thread is allocated (cpu section) and hqos isn't 
enabled on the interfaces (dpdk section). In its current state, hqos needs a 
separate thread (other than the worker threads) for the scheduling function, 
and also needs to be enabled on the dpdk interfaces.
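
For reference, a minimal sketch of the two changes, reusing the core and device numbers from the config above (the exact values are illustrative):

cpu {
    main-core 2
    corelist-workers 4,20,6,22,8,24
    # dedicated HQoS thread(s), separate from the worker cores
    corelist-hqos-threads 18
}

dpdk {
    dev 0000:05:00.0 {
        num-rx-queues 2
        # enable the HQoS scheduler on this interface
        hqos
    }
}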

Thanks,
Jasvinder


From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com]
Sent: Wednesday, May 15, 2019 11:03 AM
To: Singh, Jasvinder ; Ni, Hongjun 

Cc: Byrne, Stephen1 ; Dumitrescu, Cristian 
; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Jasvinder,

Here’s the startup.conf:

unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
# gid vpp
}

api-trace {
  on
}

# api-segment {
# gid vpp
# }

# socksvr {
# default
# }

cpu {
main-core 2
corelist-workers 4, 20, 6, 22, 8, 24
# corelist-hqos-threads 18
}

dpdk {
dev 0000:05:00.0 {
num-rx-queues 2
# hqos
}

dev 0000:05:00.1 {
num-rx-queues 2
# hqos
}

# igb_uio
# uio_pci_generic

## Disable multi-segment buffers, improves performance but
## disables Jumbo MTU support
# no-multi-seg

## Change hugepages allocation per-socket, needed only if there is need 
for
## larger number of mbufs. Default is 256M on each detected CPU socket
socket-mem 16384, 16384

   ## Disables UDP / TCP TX checksum offload. Typically needed for use
## faster vector PMDs (together with no-multi-seg)
# no-tx-checksum-offload
}

plugins {
path /root/vpp/build-root/install-vpp_debug-native/vpp/lib/vpp_plugins/
plugin default { enable }
}


I tried both enabling and disabling hqos, but it doesn't work in either case.

Regards,

Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Singh, Jasvinder
Sent: Wednesday, May 15, 2019 2:29 PM
To: Abeeha Aqeel; Ni, Hongjun
Cc: Byrne, Stephen1; Dumitrescu, Cristian; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Abeeha,

Can you share startup.conf?

Thanks,
Jasvinder


From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com]
Sent: Wednesday, May 15, 2019 7:29 AM
To: Singh, Jasvinder <jasvinder.si...@intel.com>; Ni, Hongjun <hongjun...@intel.com>
Cc: Byrne, Stephen1 <stephen1.by...@intel.com>; Dumitrescu, Cristian <cristian.dumitre...@intel.com>; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Jasvinder,

The CMakeLists.txt files before and after applying the patch are exactly the
same, and the hqos plugin is already enabled, but the CLI commands are still not
showing. Below is the screenshot of the CMakeLists.txt file located at
/root/vpp/src/plugins/dpdk:

[screenshot of CMakeLists.txt not included in the archive]

Thank you and best regards,

Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Abeeha Aqeel
Sent: Wednesday, May 15, 2019 11:04 AM
To: hongjun...@intel.com
Cc: stephen1.by...@intel.com; cristian.dumitre...@intel.com; cristian.dumitre...@intel.com; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: FW: [vpp-dev] HQoS


Hi Jasvinder,

The CMakeLists.txt files before and after applying the patch are exactly the
same, and the hqos plugin is already enabled, but the CLI commands are still not
showing. Below is the screenshot of the CMakeLists.txt file located at
/root/vpp/src/plugins/dpdk:

[screenshot of CMakeLists.txt not included in the archive]



Thank you and best regards,

Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Singh, Jasvinder
Sent: Friday, May 10, 2019 2:58 PM
To: Ni, Hongjun; Abeeha Aqeel
Cc: Byrne, Stephen1; Dumitrescu, Cristian

Re: [vpp-dev] HQoS

2019-05-15 Thread Abeeha Aqeel
Hi Jasvinder,

Yes, I am aware of that. But after applying the patch, assigning hqos threads 
and enabling hqos on the interface, the CLI commands still don't show.

Another issue I have encountered while using vpp version 19.01 is that VPP 
doesn't start when I assign two corelist-hqos-threads, as shown in the config 
file:

unix {
    nodaemon
    log /var/log/vpp/vpp.log
    full-coredump
    cli-listen /run/vpp/cli.sock
    # gid vpp
}

api-trace {
  on
}

# api-segment {
    # gid vpp
# }

# socksvr {
    # default
# }

cpu {
    main-core 2
    corelist-workers 4, 20, 6, 22, 8, 24
    corelist-hqos-threads 12,18
}

dpdk {
    dev 0000:05:00.0 {
    num-rx-queues 2
    hqos
    }

    dev 0000:05:00.1 {
    num-rx-queues 2
    # hqos
    }

    # igb_uio
    # uio_pci_generic

    ## Disable multi-segment buffers, improves performance but
    ## disables Jumbo MTU support
    # no-multi-seg

    ## Change hugepages allocation per-socket, needed only if there is need 
for
    ## larger number of mbufs. Default is 256M on each detected CPU socket
    socket-mem 16384, 16384

   ## Disables UDP / TCP TX checksum offload. Typically needed for use
    ## faster vector PMDs (together with no-multi-seg)
    # no-tx-checksum-offload
}

plugins {
    path /root/vpp/build-root/install-vpp_debug-native/vpp/lib/vpp_plugins/
    plugin default { enable }
}
   
I have installed vpp version 19.04 on a bare-metal server with CentOS 7 as the 
operating system and an Intel 10G 2P X520 NIC. What could be the possible issue?


Thank you and regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Singh, Jasvinder
Sent: Wednesday, May 15, 2019 3:19 PM
To: Abeeha Aqeel; Ni, Hongjun
Cc: Byrne, Stephen1; Dumitrescu, Cristian; vpp-dev@lists.fd.io; 
b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Abeeha,

In the config file, no hqos thread is allocated (cpu section) and hqos isn't 
enabled on the interfaces (dpdk section). In its current state, hqos needs a 
separate thread (other than the worker threads) for the scheduling function, 
and also needs to be enabled on the dpdk interfaces.

Thanks,
Jasvinder
   

From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com] 
Sent: Wednesday, May 15, 2019 11:03 AM
To: Singh, Jasvinder ; Ni, Hongjun 

Cc: Byrne, Stephen1 ; Dumitrescu, Cristian 
; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Jasvinder,

Here’s the startup.conf:

unix {
    nodaemon
    log /var/log/vpp/vpp.log
    full-coredump
    cli-listen /run/vpp/cli.sock
    # gid vpp
}

api-trace {
  on
}

# api-segment {
    # gid vpp
# }

# socksvr {
    # default
# }

cpu {
    main-core 2
    corelist-workers 4, 20, 6, 22, 8, 24
    # corelist-hqos-threads 18
}

dpdk {
    dev 0000:05:00.0 {
    num-rx-queues 2
    # hqos
    }

    dev 0000:05:00.1 {
    num-rx-queues 2
    # hqos
    }

    # igb_uio
    # uio_pci_generic

    ## Disable multi-segment buffers, improves performance but
    ## disables Jumbo MTU support
    # no-multi-seg

    ## Change hugepages allocation per-socket, needed only if there is need 
for
    ## larger number of mbufs. Default is 256M on each detected CPU socket
    socket-mem 16384, 16384

   ## Disables UDP / TCP TX checksum offload. Typically needed for use
    ## faster vector PMDs (together with no-multi-seg)
    # no-tx-checksum-offload
}

plugins {
    path /root/vpp/build-root/install-vpp_debug-native/vpp/lib/vpp_plugins/
    plugin default { enable }
}
   

I tried both enabling and disabling hqos, but it doesn't work in either case.

Regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.aq...@xflowresearch.com
www.xflowresearch.com



From: Singh, Jasvinder
Sent: Wednesday, May 15, 2019 2:29 PM
To: Abeeha Aqeel; Ni, Hongjun
Cc: Byrne, Stephen1; Dumitrescu, Cristian; vpp-dev@lists.fd.io; 
b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Abeeha,

Can you share startup.conf? 

Thanks,
Jasvinder


From: Abeeha Aqeel [mailto:abeeha.aq...@xflowresearch.com] 
Sent: Wednesday, May 15, 2019 7:29 AM
To: Singh, Jasvinder ; Ni, Hongjun 

Cc: Byrne, Stephen1 ; Dumitrescu, Cristian 
; vpp-dev@lists.fd.io; b...@xflowresearch.com
Subject: RE: [vpp-dev] HQoS

Hi Jasvinder, 

The CMakeLists.txt files before and after applying the patch are exactly the
same, and the hqos plugin is already enabled, but the CLI commands are still not
showing. Below is the screenshot of the CMakeLists.txt file located at
/root/vpp/src/plugins/dpdk:



Thank you and best regards,
 
Abeeha Aqeel
Network Design Engineer
Xflow Research Inc.
+923245062309 (GMT+5)
abeeha.

Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Benoit Ganne (bganne) via Lists.Fd.Io
>> I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
>> check on another node.
> Well.. that’s that..

Ok good. No surprise: they are both based on rdma-core/libibverbs. Did you 
install vpp-selinux? If so maybe we are missing some rules in there, but I'll 
have to leave that to more knowledgeable people...
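
In the meantime, a couple of generic checks to confirm whether SELinux is denying the uverbs access (assuming auditd is running and the usual CentOS log paths):

~# getenforce
~# ausearch -m avc -ts recent
~# grep -i denied /var/log/audit/audit.log | tail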

ben


[vpp-dev] FD.io Gerrit Maintenance: 2019-05-28 @ 1900 UTC (12:00pm PDT) - 2100 UTC (2:00pm PDT)

2019-05-15 Thread Vanessa Valderrama
*What:*

FD.io Gerrit maintenance

*When:*

2019-05-28 @ 1900 UTC (12:00pm PDT) - 2100 UTC (2:00pm PDT)

*Impact:*

Jenkins will be placed in shutdown mode one hour prior to maintenance to
allow builds to complete and will remain in shutdown mode for the
duration of the Gerrit upgrade.

The following systems will be unavailable during the maintenance window

  * Jenkins production
  * Jenkins sandbox
  * Gerrit

*Why:*

FD.io Gerrit maintenance to upgrade from 2.14.6 to 2.16


[vpp-dev] VPP OOM crash in CLI

2019-05-15 Thread Andreas Schultz
Hi,

It seems VPP's CLI is not very good at dealing with large FIBs or lots
of interfaces. I know the CLI is a debug tool only, but IMHO it should
not crash VPP that easily.
On a FIB with 300k entries, the pager does not work and I get an OOM crash:

(gdb) bt
#0  clib_mov16 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:60
#1  clib_mov32 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:67
#2  clib_mov64 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:73
#3  clib_mov128 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:81
#4  clib_mov256 (src=0x7febd6d2ff30 "on184760 (p2p)\n[@0]: ipv4 via
0.0.0.0 upf_session184760: mtu:9000\npath:[184809] pl-index:184809 ip4
weight=1 pref=0 attached-nexthop:  oper-flags:resolved,
cfg-flags:attached,\n  10.43.104.28 upf_sessi"...,
dst=0x7febd97aff30 "on184760 (p2p)\n[@0]: ipv4 via 0.0.0.0
upf_session184760: mtu:9000\npath:[184809] pl-index:184809 ip4
weight=1 pref=0 attached-nexthop:  oper-flags:resolved,
cfg-flags:attached,\n  10.43.104.28 upf_sessi"...)
at /usr/src/vpp/src/vppinfra/memcpy_sse3.h:88
#5  clib_memcpy_fast (n=40232024, src=0x7febd6d2ff30,
dst=0x7febd97aff30) at /usr/src/vpp/src/vppinfra/memcpy_sse3.h:325
#6  vec_resize_allocate_memory (v=v@entry=0x7febd4b9c01c,
length_increment=length_increment@entry=201, data_bytes=40232076,
header_bytes=, header_bytes@entry=0,
data_align=data_align@entry=8) at /usr/src/vpp/src/vppinfra/vec.c:95
#7  0x76ae4233 in _vec_resize_inline (data_align=, header_bytes=, data_bytes=,
length_increment=, v=) at
/usr/src/vpp/src/vppinfra/vec.h:147
#8  unix_cli_add_pending_output (uf=0x7ff2704bfe2c,
buffer=0x7fffbabeb96c "path:[209906] pl-index:209906 ip4 weight=1
pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,\n
10.12.107.171 upf_session209858 (p2p)\n[@0]: ipv4 via 0.0.0.0
upf_session209858: mtu:9000"..., buffer_bytes=201, cf=)
at /usr/src/vpp/src/vlib/unix/cli.c:544
#9  0x76ae5cb7 in unix_vlib_cli_output_raw
(cf=cf@entry=0x7fffb93dc69c, uf=uf@entry=0x7ff2704bfe2c,
buffer=, buffer_bytes=) at
/usr/src/vpp/src/vlib/unix/cli.c:654
#10 0x76ae6475 in unix_vlib_cli_output_raw
(buffer_bytes=, buffer=,
uf=0x7ff2704bfe2c, cf=0x7fffb93dc69c) at
/usr/src/vpp/src/vlib/unix/cli.c:620
#11 unix_vlib_cli_output_cooked (cf=0x7fffb93dc69c, uf=0x7ff2704bfe2c,
buffer=0x7fffbabeb96c "path:[209906] pl-index:209906 ip4 weight=1
pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,\n
10.12.107.171 upf_session209858 (p2p)\n[@0]: ipv4 via 0.0.0.0
upf_session209858: mtu:9000"..., buffer_bytes=201)
at /usr/src/vpp/src/vlib/unix/cli.c:687
#12 0x76a8c79b in vlib_cli_output (vm=vm@entry=0x76d06700
, fmt=fmt@entry=0x77889987 "%U") at
/usr/src/vpp/src/vlib/cli.c:742
#13 0x777ffe23 in show_fib_path_command (vm=0x76d06700
, input=, cmd=) at
/usr/src/vpp/src/vnet/fib/fib_path.c:2737
#14 0x76a8caa6 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x76d06700 ,
cm=cm@entry=0x76d06900 ,
input=input@entry=0x7fffbac5bf60, parent_command_index=) at /usr/src/vpp/src/vlib/cli.c:607
#15 0x76a8d0e7 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x76d06700 ,
cm=cm@entry=0x76d06900 ,
input=input@entry=0x7fffbac5bf60, parent_command_index=) at /usr/src/vpp/src/vlib/cli.c:568
#16 0x76a8d0e7 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x76d06700 ,
cm=cm@entry=0x76d06900 ,
input=input@entry=0x7fffbac5bf60,
parent_command_index=parent_command_index@entry=0) at
/usr/src/vpp/src/vlib/cli.c:568
#17 0x76a8d3b4 in vlib_cli_input (vm=0x76d06700
, input=input@entry=0x7fffbac5bf60,
function=function@entry=0x76ae6900 ,
function_arg=function_arg@entry=0) at /usr/src/vpp/src/vlib/cli.c:707
#18 0x76ae84c6 in unix_cli_process_input (cm=0x76d07040
, cli_file_index=0) at
/usr/src/vpp/src/vlib/unix/cli.c:2420
#19 unix_cli_process (vm=0x76d06700 ,
rt=0x7fffbac4b000, f=) at
/usr/src/vpp/src/vlib/unix/cli.c:2536
#20 0x76aa4e06 in vlib_process_bootstrap (_a=)
at /usr/src/vpp/src/vlib/main.c:1469
#21 0x765a5864 in clib_calljmp () from
/usr/src/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.19.08
#22 0x7fffb95ffb00 in ?? ()
#23 0x76aaa971 in vlib_process_startup (f=0x0,
p=0x7fffbac4b000, vm=0x76d06700 ) at
/usr/src/vpp/src/vlib/main.c:1491
#24 dispatch_process (vm=0x76d06700 ,
p=0x7fffbac4b000, last_time_stamp=0, f=0x0) at
/usr/src/vpp/src/vlib/main.c:1536
#25 0x in ?? ()

Regards
Andreas
-- 
-- 
Dipl.-Inform. Andreas Schultz

--- enabling your networks --
Travelping GmbH Phone:  +49-391-81 90 99 0
Roentgenstr. 13 Fax:+49-391-81 90 99 299
39108 Magdeburg Email:  i...@travelping.com
GERMANY Web:http://www.travelping.com

Company Registration: Amtsgericht StendalReg No.:   HRB 10578
Geschaeftsfuehrer:

Re: [EXTERNAL] [vpp-dev] VPP OOM crash in CLI

2019-05-15 Thread Chris Luke
The pager in the CLI retains output up to a certain amount, but then gives up 
and switches to pass-through after a certain number of lines (the default is 
100000). If the output doesn't have newlines, or that default has been altered, 
then it will try to use more memory.

In this case it appears to die while trying to increase the buffer to ~40MB in 
size, which is quite a lot; are these long lines that it is trying to display?
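
For anyone who wants to tune that behaviour, the relevant knobs (option names quoted from memory, so treat them as assumptions to be checked against the source) are the pager limit in the unix section of startup.conf and the matching debug CLI command:

# startup.conf, unix section:
unix {
  cli-pager-buffer-limit 100000
}

# or at runtime from the debug CLI:
vpp# set terminal pager limit 100000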

Chris.

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Andreas Schultz
Sent: Wednesday, May 15, 2019 12:39 PM
To: vpp-dev@lists.fd.io
Subject: [EXTERNAL] [vpp-dev] VPP OOM crash in CLI

Hi,

It seems VPP's CLI is not very good at dealing with large FIBs or lots of 
interfaces. I know the CLI is a debug tool only, but IMHO it should not crash 
VPP that easily.
On a FIB with 300k entries, the pager does not work and I get an OOM crash:

(gdb) bt
#0  clib_mov16 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:60
#1  clib_mov32 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:67
#2  clib_mov64 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:73
#3  clib_mov128 (src=, dst=) at
/usr/src/vpp/src/vppinfra/memcpy_sse3.h:81
#4  clib_mov256 (src=0x7febd6d2ff30 "on184760 (p2p)\n[@0]: ipv4 via
0.0.0.0 upf_session184760: mtu:9000\npath:[184809] pl-index:184809 ip4
weight=1 pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,\n  
10.43.104.28 upf_sessi"...,
dst=0x7febd97aff30 "on184760 (p2p)\n[@0]: ipv4 via 0.0.0.0
upf_session184760: mtu:9000\npath:[184809] pl-index:184809 ip4
weight=1 pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,\n  
10.43.104.28 upf_sessi"...)
at /usr/src/vpp/src/vppinfra/memcpy_sse3.h:88
#5  clib_memcpy_fast (n=40232024, src=0x7febd6d2ff30,
dst=0x7febd97aff30) at /usr/src/vpp/src/vppinfra/memcpy_sse3.h:325
#6  vec_resize_allocate_memory (v=v@entry=0x7febd4b9c01c, 
length_increment=length_increment@entry=201, data_bytes=40232076, 
header_bytes=, header_bytes@entry=0,
data_align=data_align@entry=8) at /usr/src/vpp/src/vppinfra/vec.c:95
#7  0x76ae4233 in _vec_resize_inline (data_align=, header_bytes=, data_bytes=,
length_increment=, v=) at
/usr/src/vpp/src/vppinfra/vec.h:147
#8  unix_cli_add_pending_output (uf=0x7ff2704bfe2c,
buffer=0x7fffbabeb96c "path:[209906] pl-index:209906 ip4 weight=1
pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,\n
10.12.107.171 upf_session209858 (p2p)\n[@0]: ipv4 via 0.0.0.0
upf_session209858: mtu:9000"..., buffer_bytes=201, cf=)
at /usr/src/vpp/src/vlib/unix/cli.c:544
#9  0x76ae5cb7 in unix_vlib_cli_output_raw (cf=cf@entry=0x7fffb93dc69c, 
uf=uf@entry=0x7ff2704bfe2c, buffer=, buffer_bytes=) at
/usr/src/vpp/src/vlib/unix/cli.c:654
#10 0x76ae6475 in unix_vlib_cli_output_raw (buffer_bytes=, buffer=, uf=0x7ff2704bfe2c, cf=0x7fffb93dc69c) at
/usr/src/vpp/src/vlib/unix/cli.c:620
#11 unix_vlib_cli_output_cooked (cf=0x7fffb93dc69c, uf=0x7ff2704bfe2c,
buffer=0x7fffbabeb96c "path:[209906] pl-index:209906 ip4 weight=1
pref=0 attached-nexthop:  oper-flags:resolved, cfg-flags:attached,\n
10.12.107.171 upf_session209858 (p2p)\n[@0]: ipv4 via 0.0.0.0
upf_session209858: mtu:9000"..., buffer_bytes=201)
at /usr/src/vpp/src/vlib/unix/cli.c:687
#12 0x76a8c79b in vlib_cli_output (vm=vm@entry=0x76d06700 
, fmt=fmt@entry=0x77889987 "%U") at
/usr/src/vpp/src/vlib/cli.c:742
#13 0x777ffe23 in show_fib_path_command (vm=0x76d06700 
, input=, cmd=) at
/usr/src/vpp/src/vnet/fib/fib_path.c:2737
#14 0x76a8caa6 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x76d06700 ,
cm=cm@entry=0x76d06900 , 
input=input@entry=0x7fffbac5bf60, parent_command_index=) at /usr/src/vpp/src/vlib/cli.c:607
#15 0x76a8d0e7 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x76d06700 ,
cm=cm@entry=0x76d06900 , 
input=input@entry=0x7fffbac5bf60, parent_command_index=) at /usr/src/vpp/src/vlib/cli.c:568
#16 0x76a8d0e7 in vlib_cli_dispatch_sub_commands
(vm=vm@entry=0x76d06700 ,
cm=cm@entry=0x76d06900 , 
input=input@entry=0x7fffbac5bf60,
parent_command_index=parent_command_index@entry=0) at
/usr/src/vpp/src/vlib/cli.c:568
#17 0x76a8d3b4 in vlib_cli_input (vm=0x76d06700 , 
input=input@entry=0x7fffbac5bf60,
function=function@entry=0x76ae6900 ,
function_arg=function_arg@entry=0) at /usr/src/vpp/src/vlib/cli.c:707
#18 0x76ae84c6 in unix_cli_process_input (cm=0x76d07040 
, cli_file_index=0) at
/usr/src/vpp/src/vlib/unix/cli.c:2420
#19 unix_cli_process (vm=0x76d06700 , rt=0x7fffbac4b000, 
f=) at
/usr/src/vpp/src/vlib/unix/cli.c:2536
#20 0x76aa4e06 in vlib_process_bootstrap (_a=) at 
/usr/src/vpp/src/vlib/main.c:1469
#21 0x765a5864 in clib_calljmp () from
/usr/src/vpp/build-root/install-vpp-native/vpp/lib/libvppinfra.so.19.08
#22 0x7fffb95ffb00 in ?? ()
#23 0x76aaa971 in vlib_process_startup (f=0x0, p=0x7fffbac4b000, 
vm=0x76d06700 ) at
/usr/src/vpp

Re: [E] Re: [vpp-dev] vip in VPP

2019-05-15 Thread Kevin Yan via Lists.Fd.Io
Hi Shahid,
Actually, I have the same requirement: two IPs from the same subnet configured 
on one interface. I tried to configure it that way but it failed; VPP complains 
as shown below.
[error screenshot not included in the archive]
https://gerrit.fd.io/r/#/c/8057/
This patch added the check that disables overlapping subnets on any interface. 
Is this reasonable? I suppose it should be okay to configure multiple IP 
addresses within the same subnet on one interface. Linux supports this, but VPP doesn't.
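
For illustration, the rejected configuration looks roughly like this (the interface name and addresses are made up; VPP refuses the second command as an overlapping subnet):

vpp# set interface ip address GigabitEthernet0/8/0 192.168.10.1/24
vpp# set interface ip address GigabitEthernet0/8/0 192.168.10.2/24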

so how did you solve your problem?

BRs,
Kevin
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Shahid Khan
Sent: Friday, April 26, 2019 9:19 PM
To: Ole Troan 
Cc: Damjan Marion ; vpp-dev@lists.fd.io
Subject: [E] Re: [vpp-dev] vip in VPP

Just default gw ... i will check assigning two IPs from same subnet to one 
interface ...


-Shahid

On Apr 26, 2019 18:14, "Ole Troan" <otr...@employees.org> wrote:
> Can we configure one interface with two ips from same subnet ?

There's certainly nothing wrong with that, so if it for some reason doesn't 
work, that can be patched.
You can of course put the same IP on different VPP instances on e.g. a loopback 
interface. If you have some way of routing to them.

VRRP is not supported. Do you have some application requiring state 
synchronisation on top, or is this just for default gateway?

Cheers,
Ole

>
> -Shahid
>
> On Apr 26, 2019 17:56, "Damjan Marion" <dmar...@me.com> wrote:
>
>
> > On 26 Apr 2019, at 14:15, Shahid Khan <shahidnasimk...@gmail.com> wrote:
> >
> > Not on different interfaces same subnet on parent and its sub interface 
> > ... Parent interface will have real ip and sub interface will have VIP
>
> sub-interface is from fib perspective different interface and it is typically 
> tagged with some vlan tags.
>
> What’s wrong with having one interface with 2 IPs assigned. 2nd IP can be 
> added/removed based on control plane decision (i.e. change of vrrp state).
>
>

This e-mail message may contain confidential or proprietary information of 
Mavenir Systems, Inc. or its affiliates and is intended solely for the use of 
the intended recipient(s). If you are not the intended recipient of this 
message, you are hereby notified that any review, use or distribution of this 
information is absolutely prohibited and we request that you delete all copies 
in your control and contact us by e-mailing to secur...@mavenir.com. This 
message contains the views of its author and may not necessarily reflect the 
views of Mavenir Systems, Inc. or its affiliates, who employ systems to monitor 
email messages, but make no representation that such messages are authorized, 
secure, uncompromised, or free from computer viruses, malware, or other 
defects. Thank You


[vpp-dev] bonding: add support for numa awareness

2019-05-15 Thread Zhiyong Yang
Hi VPP experts,

I have submitted the patch below; your comments are welcome.
https://gerrit.fd.io/r/#/c/19603/

bonding: add support for numa awareness

This patch adds NUMA awareness to bonding on multi-socket
servers working in active-backup mode.
VPP gains the capability of automatically preferring the slave on the local
NUMA node in this mode, which reduces the load on the QPI bus and improves 
overall system performance in multi-socket use cases. The user doesn't need 
to perform any extra configuration.

Thanks
Zhiyong


Re: [vpp-dev] VPP and Non-VPP Communication #vpp

2019-05-15 Thread shaligram.prakash
Yes, it can definitely be done by mmap'ing the memory and marking the mutex
as process-shared.

fd = shm_open(...);
mmap(NULL, shmSize, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
...

pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
pthread_mutex_init(&metrics->lock, &attr);
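
Filled out, the whole pattern looks roughly like this. It is a self-contained sketch only: the segment name and the metrics struct are invented for illustration, and on older glibc you need to link with -pthread -lrt:

#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct
{
  pthread_mutex_t lock;   /* must be initialised as PTHREAD_PROCESS_SHARED */
  unsigned long counter;  /* example shared data */
} shared_metrics_t;

int
main (void)
{
  /* hypothetical segment name; the second process simply shm_open()s the same name */
  int fd = shm_open ("/demo_metrics", O_CREAT | O_RDWR, 0600);
  if (fd < 0)
    return 1;
  if (ftruncate (fd, sizeof (shared_metrics_t)) != 0)
    return 1;

  shared_metrics_t *m = mmap (NULL, sizeof (*m), PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
  if (m == MAP_FAILED)
    return 1;

  /* The creating process initialises the mutex once, marked process-shared so
   * that any other process mapping the same segment can take the same lock. */
  pthread_mutexattr_t attr;
  pthread_mutexattr_init (&attr);
  pthread_mutexattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init (&m->lock, &attr);

  pthread_mutex_lock (&m->lock);
  m->counter++;
  pthread_mutex_unlock (&m->lock);

  munmap (m, sizeof (*m));
  close (fd);
  return 0;
}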

Regards,
Shaligram Prakash


On Thu, 11 Apr 2019 at 17:13,  wrote:

> Can VPP and non-VPP Linux process share a single database (say shared
> memory)?
>


[vpp-dev] vpp_get_stats return all zeros

2019-05-15 Thread Gyan Ranjan
Is there a known issue with stats when multi-threading is enabled in VPP and we
get all-zero node errors?
Gyan


Re: [vpp-dev] VPP & Mellanox

2019-05-15 Thread Eyle Brinkhuis
Yes, I installed vpp-selinux. Let me know if I can help with anything regarding 
these problems; these are test machines, after all.

Regards,

Eyle

> On 15 May 2019, at 13:58, Benoit Ganne (bganne)  wrote:
> 
>>> I wonder if that is the problem with DPDK for the MLX cards as well. Let me 
>>> check on another node.
>> Well.. that’s that..
> 
> Ok good. No surprise: they both are based on rdma-core/libibverb. Did you 
> installed vpp-selinux? If so maybe we are missing some rules in there, but 
> I'll have to let that to more knowledgeable people...
> 
> ben
