[vpp-dev] Query regarding inter-connecting two Docker containers via memif

2021-11-14 Thread pragya nand
Hi All,

I have followed the steps in
https://s3-docs.fd.io/vpp/22.02/gettingstarted/progressivevpp/twovppinstances.html
 .
This works for two VPP instances on the same host.
I was trying to connect two Docker containers, each running VPP, via memif
and ping between them as shown in the link above, but I was unsuccessful.
Is there a way to connect two Docker containers using memif, or by using
any other utility?
Please point me to any reference material available on this.
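For reference, what I tried looks roughly like the sketch below. The container image, shared socket directory, addresses, and interface IDs are my own choices, so please treat it as an illustration of the approach rather than a verified recipe:

```shell
# Host: share one directory between both containers so the memif socket
# file is visible at the same path in each. (ligato/vpp-base is one
# publicly available VPP image; any VPP container should work the same.)
docker run -d --name vpp1 --privileged -v /tmp/memif:/run/vpp/memif ligato/vpp-base
docker run -d --name vpp2 --privileged -v /tmp/memif:/run/vpp/memif ligato/vpp-base

# Container 1: master side of the memif link.
docker exec vpp1 vppctl create memif socket id 1 filename /run/vpp/memif/memif1.sock
docker exec vpp1 vppctl create interface memif socket-id 1 id 0 master
docker exec vpp1 vppctl set interface state memif1/0 up
docker exec vpp1 vppctl set interface ip address memif1/0 10.10.1.1/24

# Container 2: slave side, pointing at the same socket file.
docker exec vpp2 vppctl create memif socket id 1 filename /run/vpp/memif/memif1.sock
docker exec vpp2 vppctl create interface memif socket-id 1 id 0 slave
docker exec vpp2 vppctl set interface state memif1/0 up
docker exec vpp2 vppctl set interface ip address memif1/0 10.10.1.2/24

# Then ping across the link:
docker exec vpp2 vppctl ping 10.10.1.1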

Thank you
Pragya Nand

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20491): https://lists.fd.io/g/vpp-dev/message/20491
Mute This Topic: https://lists.fd.io/mt/87063578/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] DS-Lite issue again

2021-11-14 Thread Ameen Al-Azzawi
Hello again,



As part of my Ph.D. research, I am building a DS-Lite topology, hopefully
with the help of VPP.



My DS-Lite topology in a nutshell (like any normal DS-Lite deployment)
consists of 4 machines:

- Sender: an IPv4-only machine; it sends traffic to the receiver, and the
traffic first passes through the B4 router.

- B4 router: receives the IPv4 packet, performs encapsulation, and sends
it on as an IPv4-in-IPv6 datagram.

- AFTR router: receives the encapsulated packets, decapsulates them, and
forwards the IPv4 packet to the internal NAT interface, where the NAT44
function is performed before the IPv4 packet is forwarded to the receiver.

- Receiver: a normal IPv4-only machine.



So, the idea is to be able to ping (ICMPv4) from the Sender to the Receiver
while having an IPv6 infrastructure in the middle.


I have attached a picture of my topology.



The VPP software is supposed to be installed on the B4 & AFTR routers, which I have done.

Note: normally the B4 & AFTR routers are not directly connected; this is just
for testing purposes.



All interfaces are configured through the "/etc/sysconfig/network-scripts/"
directory.


I have configured the tunnel endpoints on both sides (B4 and AFTR) with the
commands below.



In B4, I added the following:





[root@B4 ~]# vppctl
vpp# dslite set b4-tunnel-endpoint-address 2001:db8:0:1::2
vpp# show dslite b4-tunnel-endpoint-address
2001:db8:0:1::2





In AFTR, I added the following:



[root@AFTR ~]# vppctl
vpp# dslite set aftr-tunnel-endpoint-address 2001:db8:0:1::1
vpp# show dslite aftr-tunnel-endpoint-address
2001:db8:0:1::1



vpp# dslite add pool address 198.51.100.2 - 198.51.100.10
vpp# show dslite pool
DS-Lite pool:
198.51.100.2
198.51.100.3
198.51.100.4
198.51.100.5
198.51.100.6
198.51.100.7
198.51.100.8
198.51.100.9
198.51.100.10
vpp#



I am not sure about the “pool” configuration, but this is how I thought it
should be configured.





The thing is, I read the documentation here:

https://wiki.fd.io/view/VPP/NAT#DS-Lite

However, I am still missing something, because the command below shows no
output:



vpp# show dslite sessions
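As I understand it, sessions would only show up once IPv4 traffic actually enters VPP, which in turn requires the NICs to be owned by VPP rather than by the kernel's network-scripts. Below is a rough sketch of what I would expect the VPP-side interface configuration on the AFTR to look like; the interface names and addresses are placeholders, not my real ones:

```shell
# Placeholders only: the real interface names come from "show interface".
vpp# set interface state GigabitEthernet0/8/0 up
vpp# set interface ip address GigabitEthernet0/8/0 2001:db8:0:1::1/64  # IPv6 side, towards B4
vpp# set interface state GigabitEthernet0/9/0 up
vpp# set interface ip address GigabitEthernet0/9/0 192.0.2.1/24        # IPv4 side, towards receiver
vpp# show interface address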







The “startup.conf” file is also attached; it is the same for both B4 & AFTR
machines.







Do I need to add anything API-related to “startup.conf”?



To be honest, I looked at the API definition example below:

define dslite_add_del_pool_addr_range {
  u32 client_index;
  u32 context;
  u8 start_addr[4];
  u8 end_addr[4];
  u8 is_add;
};


I couldn’t make sense of it, since I haven’t dealt with the API before.
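That said, the two address fields at least appear to be plain 4-byte IPv4 addresses. The standalone Python sketch below is my own illustration, not VPP's actual bindings (in practice the vpp_papi client fills in client_index and context for you); it just packs the user-visible fields the way the struct declares them:

```python
import socket
import struct

def encode_pool_addr_range(start_ip: str, end_ip: str, is_add: bool = True) -> bytes:
    """Pack the user-visible fields of dslite_add_del_pool_addr_range:
    u8 start_addr[4], u8 end_addr[4], u8 is_add. The client_index and
    context fields are handled by the API client, so they are skipped here."""
    start = socket.inet_aton(start_ip)  # 4 raw bytes, network byte order
    end = socket.inet_aton(end_ip)      # 4 raw bytes, network byte order
    return start + end + struct.pack("B", 1 if is_add else 0)

body = encode_pool_addr_range("198.51.100.2", "198.51.100.10")
# 4 + 4 + 1 = 9 bytes in total
```
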



Note: all of my machines run CentOS 7.



Any input is highly appreciated.


Regards

Ameen

[attachment: startup.conf]

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}


api-trace {
  on
}

api-segment {
  gid vpp
}

socksvr {
  default
}

cpu {
}

plugins {
  ## Adjust the plugin path depending on where the VPP plugins are installed
  path /usr/lib/vpp_plugins

  ## Disable all plugins by default, then selectively enable specific plugins
  #plugin default { disable }
  plugin dslite_plugin.so { enable }
  plugin dpdk_plugin.so { enable }
  plugin acl_plugin.so { enable }

  ## Enable all plugins by default, then selectively disable specific plugins
  # plugin dpdk_plugin.so { disable }
  # plugin acl_plugin.so { disable }
}

nat { endpoint-dependent }
dslite { ce }
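One more doubt about the file above: my reading of the plugin documentation (which I have not verified) is that `dslite { ce }` puts VPP into CE/B4 mode, so reusing the identical startup.conf on the AFTR may itself be part of the problem. If that reading is right, the AFTR side would presumably end like this instead:

```shell
nat { endpoint-dependent }
# No "dslite { ce }" stanza here: if my reading is correct, the plugin
# defaults to AFTR mode when "ce" is not given.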

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20490): https://lists.fd.io/g/vpp-dev/message/20490
Mute This Topic: https://lists.fd.io/mt/87045923/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] [csit-dev] FD.io CSIT-2110 Release Report is published

2021-11-14 Thread Ameen Al-Azzawi
Dear Maciek,

I have just sent another email with more details regarding the issue.

Regards
Ameen

On Thu, Nov 11, 2021 at 4:39 PM Maciek Konstantynowicz (mkonstan) <
mkons...@cisco.com> wrote:

> Dear Ameen,
>
> What specific statement in the email do you disagree with?
>
> This is not a research paper, it is FD.io CSIT release report capturing
> benchmarking and functional test data for v21.10 release.
>
> Regarding your comment about "VPP and the system doesn't function”, I
> checked your statement about:
>
> "I have asked here a million times about configuring ds-lite, no one
> helped.”
>
> And found out that you have sent a few emails (between 26-Sep and 28-Oct) to
> vpp-dev with questions about running ds-lite on vpp, and the last email was
> not answered.
> I suggest you resend the email, and I am confident that FD.io dev
> community will provide you with support.
>
> Cheers,
> Maciek
>
>
> On 11 Nov 2021, at 14:22, Ameen Al-Azzawi  wrote:
>
> I would totally disagree.
> It is disappointing when you base your research paper on VPP and the
> system doesn't function.
> I have asked here a million times about configuring ds-lite, no one
> helped.
>
> On Thu, Nov 11, 2021 at 3:19 PM Maciek Konstantynowicz (mkonstan) via
> lists.fd.io  wrote:
>
>> Big thanks to FD.io community contributors and
>> supporters that enabled this data-rich publication.
>> Great job everyone!
>>
>> Cheers,
>> Maciek
>>
>> On 10 Nov 2021, at 18:12, Tibor Frank via lists.fd.io <
>> tifrank=cisco@lists.fd.io> wrote:
>>
>> Hi All,
>>
>> FD.io CSIT-2110 report is now available on the FD.io docs site:
>>
>> https://s3-docs.fd.io/csit/rls2110/report/
>>
>> Another successful release! Many thanks to all contributors in CSIT and
>> VPP communities for making it happen.
>>
>> See below for CSIT-2110 release summary and pointers to specific
>> sections in the report.
>>
>> All comments welcome, best by email to csit-...@lists.fd.io.
>>
>> Tibor
>>
>>
>> CSIT-2110 Release Summary
>> -
>>
>> BENCHMARK TESTS
>>
>> - Intel Xeon Ice Lake: Added test data for these platforms. Current
>> CSIT-2110
>>   report data for Intel Xeon Ice Lake comes from an external source (Intel
>>   labs running CSIT code on “8360Y D Stepping” and “6338N” processors).
>>
>> - MLRsearch improvements: Added support for multiple packet throughput rates
>>   in a single search, each rate associated with a distinct Packet Loss Ratio
>>   (PLR) criterion. Previously only Non Drop Rate (NDR) (PLR=0) and a single
>>   Partial Drop Rate (PDR) (PLR<0.5%) were supported. Implemented a number of
>>   optimizations improving rate discovery efficiency.
>>
>> - TRex performance tests: Added initial tests measuring latency between 2
>>   ports on a NIC on the TRex. Added tests: IP4Base, IP4scale2m, IP6Base,
>>   IP6scale2m, L2bscale1mmaclrn.
>>
>> TEST FRAMEWORK
>>
>> - CSIT test environment version has been updated to ver. 8: Intel NIC
>> 700/800
>>   series firmware upgrade based on DPDK compatibility matrix.
>>
>> - CSIT in AWS environment: Added CSIT support for AWS c5n instances
>>   environment.
>>
>> Pointers to CSIT-2110 Report sections
>> -
>>
>> 1. FD.io  CSIT test methodology  [1]
>> 2. VPP release notes[2]
>> 3. VPP 64B/IMIX throughput graphs   [3]
>> 4. VPP throughput speedup multi-core[4]
>> 5. VPP latency under load   [5]
>> 6. VPP comparisons v21.10 vs. v21.06[6]
>> 7. VPP performance all pkt sizes & NICs [7]
>> 8. DPDK 21.08 apps release notes[8]
>> 9. DPDK 64B throughput graphs   [9]
>> 10. DPDK latency under load [10]
>> 11. DPDK comparisons 21.08 vs. 21.02[11]
>> 12. TRex 2.88 apps release notes[12]
>> 13. TRex 64B throughput graphs  [13]
>> 14. TRex latency under load [14]
>>
>> Functional device tests (VPP_Device) are also included in the report.
>>
>> [1]
>> https://s3-docs.fd.io/csit/rls2110/report/introduction/methodology.html
>> [2]
>> https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/csit_release_notes.html
>> [3]
>> https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/packet_throughput_graphs/index.html
>> [4]
>> https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/throughput_speedup_multi_core/index.html
>> [5]
>> https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/packet_latency/index.html
>> [6]
>> https://s3-docs.fd.io/csit/rls2110/report/vpp_performance_tests/comparisons/current_vs_previous_release.html
>> [7]
>> https://s3-docs.fd.io/csit/rls2110/report/detailed_test_results/vpp_performance_results/index.html
>> [8]
>> https://s3-docs.fd.io/csit/rls2110/report/dpdk_performance_tests/csit_release_notes.html
>> [9]
>> https://s3-docs.fd.io/csit/rls2110/report/dpdk_performance_tests/packet_throughput_graphs/index.html
>> [10]
>>