[vpp-dev] ikev2-ipsec-tunnel && NAT-T ?

2018-12-03 Thread wangchuan...@163.com
Hi all,
Can the IPsec tunnel generated by IKEv2 support UDP encapsulation (NAT-T)?
If so, how?
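For reference, manually configured IPsec SAs in recent VPP versions do take a udp-encap flag on the debug CLI; a minimal sketch, with SPI, key and addresses made up for illustration:

  vppctl ipsec sa add 10 spi 1000 esp \
      crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 \
      tunnel-src 192.168.1.1 tunnel-dst 192.168.1.2 udp-encap

Whether the SAs negotiated by the ikev2 plugin can be created with the same flag is exactly what is being asked here.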

Thanks!



wangchuan...@163.com


Re: [vpp-dev] Verify issues (GRE)

2018-12-03 Thread Florin Coras
Hi Juraj, 

Could you try exporting VCL_DEBUG=1 (or higher) before running the tests?
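For example (a sketch, assuming the in-tree make test entry point and that the variable is propagated to the spawned client/server processes):

  export VCL_DEBUG=2
  make test TEST=vcl V=1 2>&1 | tee vcl-test.log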

Florin

> On Dec 3, 2018, at 3:41 AM, Juraj Linkeš  wrote:
> 
> Hi Florin,
>  
> So the tests should work fine in parallel, thanks for the clarification.
>  
> I tried running the tests again and I could reproduce it with keyboard 
> interrupt or when the test produced a core (and was then killed by the parent 
> run_tests process), but the logs don't say anything - just that the server 
> and client were started and that's where the logs stop. I guess the child vcl 
> worker process is not handled in this case, though I wonder why 
> run_in_venv_with_cleanup.sh doesn't clean it up.
>  
> Juraj
>  
> From: Florin Coras [mailto:fcoras.li...@gmail.com] 
> Sent: Thursday, November 29, 2018 5:04 PM
> To: Juraj Linkeš 
> Cc: Ole Troan ; vpp-dev 
> Subject: Re: [vpp-dev] Verify issues (GRE)
>  
> Hi Juraj, 
>  
> Those tests exercise the stack in vpp, so they don’t use up linux stack 
> ports. Moreover, both cut-through and through-the-stack tests use 
> self.shm_prefix when connecting to vpp’s binary api. So, as long as that 
> variable is properly updated, VCL and implicitly LDP will attach and use 
> ports on the right vpp instance. 
>  
> As for sock_test_client/server not being properly killed, did you find 
> something in the logs that would indicate why it happened? 
>  
> Florin
> 
> 
> On Nov 29, 2018, at 3:18 AM, Juraj Linkeš wrote:
>  
> Hi Ole,
>  
> I've noticed a few things about the VCL testcases:
> -The VCL testcases are all using the same ports, which makes them 
> unsuitable for parallel test runs
> -Another thing about these testcases is that when they don't 
> finish properly the sock_test_server and client stay running as zombie 
> processes (and thus use up ports). It's easily reproducible locally by 
> interrupting the tests, but I'm not sure whether this could actually arise in 
> CI
> -Which means that if one testcase finishes improperly (e.g. is killed 
> because of a timeout) all of the other VCL testcases will likely also fail
>  
> Hope this helps if there's anyone looking into those tests,
> Juraj
>  
> From: Ole Troan [mailto:otr...@employees.org ] 
> Sent: Wednesday, November 28, 2018 7:56 PM
> To: vpp-dev <vpp-dev@lists.fd.io>
> Subject: [vpp-dev] Verify issues (GRE)
>  
> Guys,
> 
> The verify jobs have been unstable over the last few days.
> We see some instability in the Jenkins build system, in the test harness 
> itself, and in the tests.
> On my 18.04 machine I’m seeing intermittent failures in GRE, GBP, DHCP, VCL.
> 
> It looks like Jenkins is functioning correctly now.
> Ed and I are also testing a revert of all the changes made to the test 
> framework itself over the last couple of days. A bit harsh, but we think this 
> might be the quickest way back to some level of stability.
> 
> Then we need to fix the tests that are in themselves unstable.
> 
> Any volunteers to see if they can figure out why GRE fails?
> 
> Cheers,
> Ole
> 
> 
> GRE Test Case 
> ==
> GRE IPv4 tunnel Tests  OK
> GRE IPv6 tunnel Tests  OK
> GRE tunnel L2 Tests    OK
> 19:37:47,505 Unexpected packets captured:
> Packet #0:
> 0000  0201 0000 FF02 02FE 70A0 6AD3 0800 4500  ........p.j...E.
> 0010  002A 0001 0000 3F11 219F AC10 0101 AC10  .*....?.!.......
> 0020  0102 04D2 04D2 0016 72A9 3433 3639 2033  ........r.4369 3
> 0030  2033 202D 3120 2D31                       3 -1 -1
> 
> ###[ Ethernet ]### 
>   dst   = 02:01:00:00:ff:02
>   src   = 02:fe:70:a0:6a:d3
>   type  = IPv4
> ###[ IP ]### 
>  version   = 4
>  ihl   = 5
>  tos   = 0x0
>  len   = 42
>  id= 1
>  flags = 
>  frag  = 0
>  ttl   = 63
>  proto = udp
>  chksum= 0x219f
>  src   = 172.16.1.1
>  dst   = 172.16.1.2
>  \options   \
> ###[ UDP ]### 
> sport = 1234
> dport = 1234
> len   = 22
> chksum= 0x72a9
> ###[ Raw ]### 
>load  = '4369 3 3 -1 -1'
> 
> Ten more packets
> 
> 
> ###[ UDP ]### 
> sport = 1234
> dport = 1234
> len   = 22
> chksum= 0x72a9
> ###[ Raw ]### 
>load  = '4369 3 3 -1 -1'
> 
> ** Ten more packets
> 
> Print limit reached, 10 out of 257 packets printed
> 19:37:47,770 REG: Couldn't remove configuration for object(s):
> 19:37:47,770 
> GRE tunnel VRF Tests  ERROR [ temp dir used by test case: /tmp/vpp-unittest-TestGRE-hthaHC ]
> 
> ==
> ERROR: GRE tunnel VRF Tests
> 

Re: [vpp-dev] Issue with vpp on Cloud Ubuntu Xenial VM (openstack, KVM)

2018-12-03 Thread Amit Surana
Hi Marco,

I'm facing another issue; could you please take a look? While installing the
vpp-dpdk-dkms package I am getting the error below. I already have the
igb_uio.ko module running (built as part of dpdk) on my machine.
Can I ignore this error?

root@vnf:~# apt-get install -y vpp vpp-dpdk-dkms vpp-lib vpp-dbg
vpp-plugins vpp-dev

---
Setting up vpp-dpdk-dkms (18.02.1-vpp1) ...
Loading new vpp-dpdk-dkms-18.02.1-vpp1 DKMS files...
First Installation: checking all kernels...
Building only for 4.4.0-139-generic
Building initial module for 4.4.0-139-generic


ERROR: Cannot create report: [Errno 17] File exists: '/var/crash/vpp-dpdk-dkms.0.crash'
Error! Bad return status for module build on kernel: 4.4.0-139-generic (x86_64)
Consult /var/lib/dkms/vpp-dpdk-dkms/18.02.1-vpp1/build/make.log for more information.
Setting up vpp-plugins (18.04-rc2~26-gac2b736~b45) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for ureadahead (0.100.0-19) ...

As per the log entry, the igb_uio.c file is missing.

DKMS make.log for vpp-dpdk-dkms-18.02.1-vpp1 for kernel 4.4.0-139-generic
(x86_64)
Mon Dec  3 15:10:50 UTC 2018
make: Entering directory '/usr/src/linux-headers-4.4.0-139-generic'
  LD  /var/lib/dkms/vpp-dpdk-dkms/18.02.1-vpp1/build/built-in.o
make[1]: *** No rule to make target
'/var/lib/dkms/vpp-dpdk-dkms/18.02.1-vpp1/build/igb_uio.c', needed by '/$
Makefile:1439: recipe for target
'_module_/var/lib/dkms/vpp-dpdk-dkms/18.02.1-vpp1/build' failed
make: *** [_module_/var/lib/dkms/vpp-dpdk-dkms/18.02.1-vpp1/build] Error 2
make: Leaving directory '/usr/src/linux-headers-4.4.0-139-generic'
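If the igb_uio you built from DPDK is all you need, one way around the DKMS failure is to clear the stale crash report and install without the DKMS package (a sketch, assuming vpp-dpdk-dkms is not a hard dependency of the other packages):

  lsmod | grep igb_uio                 # confirm your own module is loaded
  rm /var/crash/vpp-dpdk-dkms.0.crash  # clear the stale report behind the Errno 17 message
  apt-get install -y vpp vpp-lib vpp-plugins vpp-dbg vpp-dev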

Regards
Amit Surana

On Mon, Dec 3, 2018 at 4:00 PM Marco Varlese  wrote:

> On Sat, 2018-12-01 at 22:51 +0530, Amit Surana wrote:
>
> Hi Marco,
>
> Many thanks for your help, I have got it working now.
>
> My pleasure; that's great to hear!!!
>
>
> Regards
> Amit Surana
>
> On Sat, 1 Dec 2018, 11:09 Amit Surana wrote:
> Thanks Marco, yes that seems to be the issue, as the vpp crash file also
> indicates.
> Please find the /proc/cpuinfo output for the guest CPU below; it does not have support for SSE4.
>
> processor   : 1
> vendor_id   : AuthenticAMD
> cpu family  : 6
> model   : 6
> model name  : QEMU Virtual CPU version 2.5+
> stepping: 3
> cpu MHz : 2299.923
> cache size  : 512 KB
> physical id : 1
> siblings: 1
> core id : 0
> cpu cores   : 1
> apicid  : 1
> initial apicid  : 1
> fpu : yes
> fpu_exception   : yes
> cpuid level : 13
> wp  : yes
> flags   : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl cpuid pni cx16
> hypervisor lahf_lm svm 3dnowprefetch vmmcall
> bugs: fxsave_leak sysret_ss_attrs spectre_v1 spectre_v2
> spec_store_bypass
> bogomips: 0.22
> TLB size: 1024 4K pages
> clflush size: 64
> cache_alignment : 64
> address sizes   : 40 bits physical, 48 bits virtual
>
> VPP crash output-
> Program terminated with signal SIGILL, Illegal instruction.
> #0  0x7f157f37215c in _mm_set_epi8 (__q00=,
> __q01=, __q02=, __q03=,
> __q04=, __q05=, __q06=,
> __q07=, __q08=, __q09=,
> __q10=, __q11=, __q12=,
> __q13=, __q14=, __q15=)
> at /usr/lib/gcc/x86_64-linux-gnu/7/include/emmintrin.h:620
>
> My host does have SSE4 support; it's not reflected in the guest due to a Nova
> setting.
> I'm using devstack, so I need to check the best way to change it.
>
> Regards
> Amit Surana
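To double-check what the guest exposes, and for the usual Nova-side fix (a sketch, assuming the libvirt/KVM driver; adjust for your deployment):

  # in the guest: list the SIMD extensions the vCPU exposes
  grep -ow -e sse4_1 -e sse4_2 -e avx -e avx2 /proc/cpuinfo | sort -u

  # on the compute node: pass the host CPU through to guests, e.g. in
  # /etc/nova/nova.conf
  #   [libvirt]
  #   cpu_mode = host-passthrough
  # then restart nova-compute and re-create the instance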
>
>
> On Fri, Nov 30, 2018 at 6:54 PM Marco Varlese  wrote:
>
> I think your CPU doesn't support SSE4/AVX instructions...
>
> Can you paste here the content of the "flags" parameter from the
> output of "cat /proc/cpuinfo"?
>
>
> - Marco
>
> On Fri, 2018-11-30 at 16:00 +0530, Amit Surana wrote:
>
> Hi Team,
>
> Could you please help me with the following issue?
>
> I'm trying to run VPP on Ubuntu Xenial VM (2 vCPU, 4 GB) in Openstack/KVM.
>
> After installation vppctl fails with following:
> $vppctl
> *ERROR:*
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or
> directory
>
> *Thread for Similar issue-*
> I checked following
>
> https://lists.fd.io/g/vpp-dev/topic/running_vpp_failed/16298572?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,20,16298572
>
> *But Resolution is not applicable for my settings.*
>
> *Journalctl has following :*
>
> Nov 29 11:52:14 ubuntuxenial vpp[26827]: vlib_plugin_early_init:359:
> plugin path /usr/lib/vpp_plugins
>
> Nov 29 11:52:14 ubuntuxenial kernel: traps: vpp[26827] trap invalid opcode
> ip:7f62ce450b9e sp:7ffe696c3c20 error:0 in
> libvppinfra.so.0.0.0[7f62ce3f+7c000]
>
> Nov 29 11:52:18 ubuntuxenial systemd[1]: vpp.service: Main process exited,
> code=dumped, status=4/ILL
>
> Nov 29 11:52:18 ubuntuxenial systemd[1]: vpp.service: Unit entered failed
> state.
>
> Nov 29 11:52:18 ubuntuxenial systemd[1]: vpp.service: Failed with result
> 'core-dump'.
>
>
> Please note I've used instruction 

Re: [vpp-dev] Verify issues (GRE)

2018-12-03 Thread Juraj Linkeš
Hi Florin,

So the tests should work fine in parallel, thanks for the clarification.

I tried running the tests again and I could reproduce it with keyboard 
interrupt or when the test produced a core (and was then killed by the parent 
run_tests process), but the logs don't say anything - just that the server and 
client were started and that's where the logs stop. I guess the child vcl 
worker process is not handled in this case, though I wonder why 
run_in_venv_with_cleanup.sh doesn't clean it up.
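In the meantime, a crude way to spot and clear the leftovers between runs (a sketch; the process names are taken from the description above):

  pgrep -af 'sock_test_(server|client)'   # list any orphaned test processes
  pkill -f 'sock_test_(server|client)'    # and remove them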

Juraj

From: Florin Coras [mailto:fcoras.li...@gmail.com]
Sent: Thursday, November 29, 2018 5:04 PM
To: Juraj Linkeš 
Cc: Ole Troan ; vpp-dev 
Subject: Re: [vpp-dev] Verify issues (GRE)

Hi Juraj,

Those tests exercise the stack in vpp, so they don’t use up linux stack ports. 
Moreover, both cut-through and through-the-stack tests use self.shm_prefix when 
connecting to vpp’s binary api. So, as long as that variable is properly 
updated, VCL and implicitly LDP will attach and use ports on the right vpp 
instance.

As for sock_test_client/server not being properly killed, did you find 
something in the logs that would indicate why it happened?

Florin


On Nov 29, 2018, at 3:18 AM, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:

Hi Ole,

I've noticed a few things about the VCL testcases:
-The VCL testcases are all using the same ports, which makes them 
unsuitable for parallel test runs
-Another thing about these testcases is that when they don't finish 
properly the sock_test_server and client stay running as zombie processes (and 
thus use up ports). It's easily reproducible locally by interrupting the tests, 
but I'm not sure whether this could actually arise in CI
-Which means that if one testcase finishes improperly (e.g. is killed 
because of a timeout) all of the other VCL testcases will likely also fail

Hope this helps if there's anyone looking into those tests,
Juraj

From: Ole Troan [mailto:otr...@employees.org]
Sent: Wednesday, November 28, 2018 7:56 PM
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Verify issues (GRE)

Guys,

The verify jobs have been unstable over the last few days.
We see some instability in the Jenkins build system, in the test harness 
itself, and in the tests.
On my 18.04 machine I’m seeing intermittent failures in GRE, GBP, DHCP, VCL.

It looks like Jenkins is functioning correctly now.
Ed and I are also testing a revert of all the changes made to the test 
framework itself over the last couple of days. A bit harsh, but we think this 
might be the quickest way back to some level of stability.

Then we need to fix the tests that are in themselves unstable.

Any volunteers to see if they can figure out why GRE fails?

Cheers,
Ole


GRE Test Case
==
GRE IPv4 tunnel Tests  OK
GRE IPv6 tunnel Tests  OK
GRE tunnel L2 Tests    OK
19:37:47,505 Unexpected packets captured:
Packet #0:
0000  0201 0000 FF02 02FE 70A0 6AD3 0800 4500  ........p.j...E.
0010  002A 0001 0000 3F11 219F AC10 0101 AC10  .*....?.!.......
0020  0102 04D2 04D2 0016 72A9 3433 3639 2033  ........r.4369 3
0030  2033 202D 3120 2D31                       3 -1 -1

###[ Ethernet ]###
  dst   = 02:01:00:00:ff:02
  src   = 02:fe:70:a0:6a:d3
  type  = IPv4
###[ IP ]###
 version   = 4
 ihl   = 5
 tos   = 0x0
 len   = 42
 id= 1
 flags =
 frag  = 0
 ttl   = 63
 proto = udp
 chksum= 0x219f
 src   = 172.16.1.1
 dst   = 172.16.1.2
 \options   \
###[ UDP ]###
sport = 1234
dport = 1234
len   = 22
chksum= 0x72a9
###[ Raw ]###
   load  = '4369 3 3 -1 -1'

Ten more packets


###[ UDP ]###
sport = 1234
dport = 1234
len   = 22
chksum= 0x72a9
###[ Raw ]###
   load  = '4369 3 3 -1 -1'

** Ten more packets

Print limit reached, 10 out of 257 packets printed
19:37:47,770 REG: Couldn't remove configuration for object(s):
19:37:47,770 
GRE tunnel VRF Tests  ERROR [ temp dir used by test case: /tmp/vpp-unittest-TestGRE-hthaHC ]

==
ERROR: GRE tunnel VRF Tests
--
Traceback (most recent call last):
  File "/vpp/16257/test/test_gre.py", line 61, in tearDown
super(TestGRE, self).tearDown()
  File "/vpp/16257/test/framework.py", line 546, in tearDown
self.registry.remove_vpp_config(self.logger)
  File "/vpp/16257/test/vpp_object.py", line 86, in remove_vpp_config
(", ".join(str(x) for x in failed)))
Exception: Couldn't remove configuration for object(s): 1:2.2.2.2/32


Re: [vpp-dev] Issue with vpp on Cloud Ubuntu Xenial VM (openstack, KVM)

2018-12-03 Thread Marco Varlese
On Sat, 2018-12-01 at 22:51 +0530, Amit Surana wrote:
> Hi Marco,
> Many thanks for your help, I have got it working now.
My pleasure; that's great to hear!!!
> Regards
> Amit Surana
> On Sat, 1 Dec 2018, 11:09 Amit Surana wrote:
> > Thanks Marco, yes that seems to be the issue, as the vpp crash file also indicates.
> > Please find the /proc/cpuinfo output for the guest CPU below; it does not have support for SSE4.
> > processor   : 1
> > vendor_id   : AuthenticAMD
> > cpu family  : 6
> > model   : 6
> > model name  : QEMU Virtual CPU version 2.5+
> > stepping: 3
> > cpu MHz : 2299.923
> > cache size  : 512 KB
> > physical id : 1
> > siblings: 1
> > core id : 0
> > cpu cores   : 1
> > apicid  : 1
> > initial apicid  : 1
> > fpu : yes
> > fpu_exception   : yes
> > cpuid level : 13
> > wp  : yes
> > flags   : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
> > pat pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl cpuid pni cx16
> > hypervisor lahf_lm svm 3dnowprefetch vmmcall
> > bugs: fxsave_leak sysret_ss_attrs spectre_v1 spectre_v2
> > spec_store_bypass
> > bogomips: 0.22
> > TLB size: 1024 4K pages
> > clflush size: 64
> > cache_alignment : 64
> > address sizes   : 40 bits physical, 48 bits virtual
> > VPP crash output:
> > Program terminated with signal SIGILL, Illegal instruction.
> > #0  0x7f157f37215c in _mm_set_epi8 (__q00=,
> > __q01=, __q02=, __q03=,
> > __q04=, __q05=, __q06=,
> > __q07=, __q08=, __q09=,
> > __q10=, __q11=, __q12=,
> > __q13=, __q14=, __q15=)
> > at /usr/lib/gcc/x86_64-linux-gnu/7/include/emmintrin.h:620
> > My host does have SSE4 support; it's not reflected in the guest due to a Nova
> > setting. I'm using devstack, so I need to check the best way to change it.
> > Regards
> > Amit Surana
> > 
> > 
> > On Fri, Nov 30, 2018 at 6:54 PM Marco Varlese  wrote:
> > > I think your CPU doesn't support SSE4/AVX instructions...
> > > Can you paste here the content of the "flags" parameter from the
> > > output of "cat /proc/cpuinfo"?
> > > 
> > > - Marco
> > > On Fri, 2018-11-30 at 16:00 +0530, Amit Surana wrote:
> > > > Hi Team,
> > > > Could you please help me with the following issue?
> > > > I'm trying to run VPP on Ubuntu Xenial VM (2 vCPU, 4 GB) in
> > > > Openstack/KVM.
> > > > After installation vppctl fails with the following:
> > > > $ vppctl
> > > > ERROR: clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): No such file or directory
> > > > 
> > > > Thread for a similar issue - I checked the following:
> > > > https://lists.fd.io/g/vpp-dev/topic/running_vpp_failed/16298572?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,20,16298572
> > > > 
> > > > But the resolution is not applicable to my settings.
> > > > 
> > > > 
> > > > 
> > > > Journalctl has following :
> > > > 
> > > > 
> > > > 
> > > > Nov 29 11:52:14 ubuntuxenial vpp[26827]:
> > > > vlib_plugin_early_init:359: plugin path /usr/lib/vpp_plugins
> > > > 
> > > > Nov 29 11:52:14 ubuntuxenial kernel: traps: vpp[26827] trap
> > > > invalid opcode ip:7f62ce450b9e sp:7ffe696c3c20 error:0 in
> > > > libvppinfra.so.0.0.0[7f62ce3f+7c000]
> > > > 
> > > > Nov 29 11:52:18 ubuntuxenial systemd[1]: vpp.service: Main
> > > > process exited, code=dumped, status=4/ILL
> > > > 
> > > > Nov 29 11:52:18 ubuntuxenial systemd[1]: vpp.service: Unit
> > > > entered failed state.
> > > > 
> > > > Nov 29 11:52:18 ubuntuxenial systemd[1]: vpp.service: Failed
> > > > with result 'core-dump'.
> > > > Please note I've used instruction as per 
> > > > 
> > > > 
> > > > https://github.com/cncf/cnfs/tree/master/comparison/box-by-box-kvm-docker
> > > > -
> > > > 
> > > > 
> > > > export UBUNTU="xenial"
> > > >   export RELEASE=".stable.1804"
> > > >   rm /etc/apt/sources.list.d/99fd.io.list
> > > >   echo "deb [trusted=yes] 
> > > > https://nexus.fd.io/content/repositories/fd.io.ubuntu.$UBUNTU.main/ ./"
> > > > | tee -a /etc/apt/sources.list.d/99fd.io.list
> > > >   sudo apt-get update
> > > >   apt-get install vpp vpp-lib vpp-plugins vpp-dbg vpp-dev vpp-api-java
> > > > vpp-api-python vpp-api-lua
> > > > 
> > > > 
> > > > 
> > > > I've tried to run it on an Ubuntu Bionic cloud image but am facing the same issue.
> > > > 
> > > > 
> > > > Regards
> > > > Amit Surana
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 

Re: [vpp-dev] how can i del the ipsec Node that generated by ikev2?

2018-12-03 Thread Klement Sekera via Lists.Fd.Io
Can you please post the CLI commands which result in such state?

Thanks,
Klement
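For reference, a minimal sketch of the kind of sequence meant here (profile name and key are made up; the exact commands used are what is being asked for):

  vppctl ikev2 profile add pr1
  vppctl ikev2 profile set pr1 auth shared-key-mic string Vpp123
  ... responder/initiator setup, tunnel negotiates ...
  vppctl ikev2 profile del pr1
  vppctl show ipsec     # SA/tunnel state left behind after the delete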

Quoting wangchuan...@163.com (2018-12-03 04:12:55)
>hi all,
>    The IPsec node generated by IKEv2 still exists after deleting the
>ikev2 profile. How can I delete it?
>Thanks!
> 
>--
> 
>wangchuan...@163.com


Re: [vpp-dev] question about multicast mpls

2018-12-03 Thread xyxue
Hi Neale,

I found the "multicast" in the command below:
VLIB_CLI_COMMAND (create_mpls_tunnel_command, static) = {
  .path = "mpls tunnel",
  .short_help =
  "mpls tunnel [multicast] [l2-only] via [next-hop-address] 
[next-hop-interface] [next-hop-table ] [weight ] [preference 
] [udp-encap-id ] [ip4-lookup-in-table ] 
[ip6-lookup-in-table ] [mpls-lookup-in-table ] [resolve-via-host] 
[resolve-via-connected] [rx-ip4 ] [out-labels ]",
  .function = vnet_create_mpls_tunnel_command_fn,
};

Is this enough for MPLS multicast?
Or is it the vnet_mpls_local_label() multicast plus the "mpls tunnel" multicast
together that make up MPLS multicast?
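Going only by the short_help above, a multicast tunnel would be created along these lines (address, interface and label are made up for illustration):

  vppctl mpls tunnel multicast via 10.0.0.2 GigabitEthernet0/8/0 out-labels 33

The local-label side of the multicast state is presumably what the test_mcast_*() unit tests mentioned below set up through the binary API, since the CLI in vnet_mpls_local_label() has no 'multicast' keyword.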

Thanks,
Xue



 
From: Neale Ranns (nranns)
Date: 2018-11-30 16:31
To: 薛欣颖
CC: vpp-dev
Subject: Re: [vpp-dev] question about multicast mpls
 
Hi Xue,
 
I don’t have any. And a quick look at the CLI implementation in 
vnet_mpls_local_label() shows it does not accept a ‘multicast’ keyword.
 
/neale
 
 
From:  on behalf of xyxue 
Date: Friday, November 30, 2018 at 01:21
To: "Neale Ranns (nranns)" 
Cc: vpp-dev 
Subject: Re: [vpp-dev] question about multicast mpls
 
Hi Neale,
 
Are there any CLI configuration examples for multicast MPLS?
 
Thanks,
Xue
 
From: Neale Ranns via Lists.Fd.Io
Date: 2018-11-28 20:59
To: 薛欣颖; vpp-dev
CC: vpp-dev
Subject: Re: [vpp-dev] question about multicast mpls
Hi Xue,
 
MPLS multicast has been supported for a while. Please see the unit tests for 
examples: test/test_mpls.py test_mcast_*()
 
Regards,
Neale
 
 
From:  on behalf of xyxue 
Date: Wednesday, November 28, 2018 at 13:04
To: vpp-dev 
Subject: [vpp-dev] question about multicast mpls
 
 
Hi guys,
 
I found "multicast" in the mpls cli. Is the vpp support multicast mpls now ?
Is there any example show about multicast mpls?
 
Thank you very much for your reply.
 
Thanks,
Xue

