[vpp-dev] [VCL] Memory access error for different size of mutex with different glibc versions in VPP and VCL app

2020-03-22 Thread wanghanlin

Hi All,

Currently VPP and VCL apps share some data structures, such as svm_queue_t. svm_queue_t contains mutex and condvar members whose size and layout depend on the specific glibc version. When VPP runs on the host and the VCL app runs in a Docker container, the two may be linked against different glibc versions, which results in memory access errors because the mutex and condvar sizes differ.

Has anyone noticed this?

Regards,
Hanlin


 










wanghanlin







wanghan...@corp.netease.com









 



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15839): https://lists.fd.io/g/vpp-dev/message/15839
Mute This Topic: https://lists.fd.io/mt/72485607/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Build broken: revert of "srv6-mobile: revert GTP4/6.DT and User Plane message mapping" pending

2020-03-22 Thread Dave Barach via Lists.Fd.Io
Merged https://gerrit.fd.io/r/c/vpp/+/26059 instead...

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Dave Barach via 
Lists.Fd.Io
Sent: Sunday, March 22, 2020 9:40 AM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] Build broken: revert of "srv6-mobile: revert GTP4/6.DT and 
User Plane message mapping" pending

See https://gerrit.fd.io/r/c/vpp/+/26061.

I can see why folks thought that the original patch was OK: Jenkins / Gerrit 
hid the original test failure.

test_srv6_mobile.TestSRv6EndMGTP6D.test_srv6_mobile failed during validation, 
and has been failing 100% of the time since the patch was merged.

Debug CLI error: "sr localsid: Error: SRv6 LocalSID address is mandatory."

I tried fixing the test code, "s/prefix/address/" in several places such as:

self.vapi.cli(
    "sr localsid prefix {}/64 behavior end.m.gtp6.e"
    .format(pkts[0]['IPv6'].dst))

That led the test to fail for other reasons. Hence the revert.

FWIW... Dave




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15838): https://lists.fd.io/g/vpp-dev/message/15838
Mute This Topic: https://lists.fd.io/mt/72466522/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2020-03-22 14:00:27 UTC

2020-03-22 Thread Noreply Jenkins
Coverity run failed today.

The current number of outstanding issues is 5.
Newly detected: 0
Eliminated: 0
More details can be found at https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15837): https://lists.fd.io/g/vpp-dev/message/15837
Mute This Topic: https://lists.fd.io/mt/72466984/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Build broken: revert of "srv6-mobile: revert GTP4/6.DT and User Plane message mapping" pending

2020-03-22 Thread Dave Barach via Lists.Fd.Io
See https://gerrit.fd.io/r/c/vpp/+/26061.

I can see why folks thought that the original patch was OK: Jenkins / Gerrit 
hid the original test failure.

test_srv6_mobile.TestSRv6EndMGTP6D.test_srv6_mobile failed during validation, 
and has been failing 100% of the time since the patch was merged.

Debug CLI error: "sr localsid: Error: SRv6 LocalSID address is mandatory."

I tried fixing the test code, "s/prefix/address/" in several places such as:

self.vapi.cli(
    "sr localsid prefix {}/64 behavior end.m.gtp6.e"
    .format(pkts[0]['IPv6'].dst))

That led the test to fail for other reasons. Hence the revert.

FWIW... Dave




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15836): https://lists.fd.io/g/vpp-dev/message/15836
Mute This Topic: https://lists.fd.io/mt/72466522/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] #lb vpp lb does not work with my configuration

2020-03-22 Thread Jinlei Li
Hi,

I am trying to test VPP load balancing for my scenario. I created two pairs of 
veth interfaces on the Linux host and used them to connect VPP to two Nginx 
containers. A physical NIC is used by VPP as eth0, and I created a loopback 
interface (loop0); finally, I added the two host-interfaces and the loopback 
interface to the same bridge domain (the network topology is shown in the 
attached picture). I can ping VPP's eth0 from within a container. I would like 
external traffic to be able to reach the Nginx containers.

Then I enabled the VPP load-balancer feature with the following configuration:

set interface state eth0 up

set interface mtu 1500 eth0

set interface ip address eth0 10.161.30.5/24

ip route add 0.0.0.0/0 via 10.161.30.1

create host-interface name vpp1host
create host-interface name vpp2host

set interface state host-vpp1host up
set interface state host-vpp2host up

create loopback interface
set interface state loop0 up

set interface mtu 1500  host-vpp1host
set interface mtu 1500  host-vpp2host
set interface mtu 1500  loop0

create bridge-domain 1
set interface l2 bridge host-vpp1host 1
set interface l2 bridge host-vpp2host 1
set interface l2 bridge loop0 1 bvi

set interface ip address loop0 2.2.2.1/24

lb conf ip4-src-address 2.2.2.1
lb vip 10.161.30.5/32 protocol tcp port 80 encap nat4 type clusterip target_port 80
lb as 10.161.30.5/32 protocol tcp port 80 2.2.2.10 2.2.2.20
lb set interface nat4 in loop0

(After I add this last command, the loop0 interface can no longer reach the containers.)

DBGvpp# show lb vips verbose

ip4-nat4 [1] 10.161.30.5/32
  new_size:1024
  protocol:6 port:80
  type:clusterip port:20480 target_port:80
  counters:
    packet from existing sessions: 0
    first session packet: 0
    untracked packet: 0
    no server configured: 0
  #as:2
    2.2.2.20 512 buckets   0 flows  dpo:18 used
    2.2.2.10 512 buckets   0 flows  dpo:17 used

---

By the way, I also tried a NAT44 static mapping, and it works:

nat44 add address 10.161.30.5
set interface nat44 in loop0 out eth0
nat44 add load-balancing static mapping protocol tcp external 10.161.30.5:80 local 2.2.2.10:80 probability 50 local 2.2.2.20:80 probability 50

So, can anyone help me figure out where the problem is?
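One way to see where the traffic is being dropped is VPP's packet tracer. A debugging sketch, assuming vppctl is available and that traffic from the veths enters VPP through the af-packet-input node (as it does for host-interfaces):

```shell
# clear any previous trace and arm the tracer on the veth input node
vppctl clear trace
vppctl trace add af-packet-input 50

# ...send a request to the VIP (e.g. curl http://10.161.30.5/)...

# inspect the per-packet path through the VPP graph node by node
vppctl show trace
```

The trace shows each node a packet traverses, so it should reveal whether packets reach the lb4-* nodes at all and where they are dropped after "lb set interface nat4 in loop0" is applied.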
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15835): https://lists.fd.io/g/vpp-dev/message/15835
Mute This Topic: https://lists.fd.io/mt/72464203/21656
Mute #lb: https://lists.fd.io/mk?hashtag=lb=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-