[vpp-dev] VCL-LDPRELOAD with C++ gRPC
Hi,

We have recently tested the VPP TCP stack / VCL-LDPRELOAD library with C++ gRPC (https://grpc.io/) and reported two bugs:

https://jira.fd.io/browse/VPP-1089
https://jira.fd.io/browse/VPP-1101

I would like to ask whether the way the VCL-LDPRELOAD library is being developed remains the same, i.e., use-case driven, with a full POSIX replacement via LD_PRELOAD out of scope. If so, does that mean that for the VPP TCP stack to become functional with gRPC, we can only report bugs or, where appropriate, help to fix them? Do you plan to work on C++ gRPC support yourselves in the near future?

Regards,
Peter

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
Re: [vpp-dev] The feasibility of C++ gRPC with libvcl_ldpreload
Hi Keith,

Thanks for your mail. I have just encountered something and therefore created this report: https://jira.fd.io/browse/VPP-1089 (VCL-LDPRELOAD: setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, ...) does nothing; consequently, getsockopt(...) does not confirm the expected change).

Regards,
Peter

From: Keith Burns <alaga...@gmail.com>
Sent: Thursday, December 7, 2017 16:05:39
To: Peter Palmár
Cc: vpp-dev
Subject: Re: [vpp-dev] The feasibility of C++ gRPC with libvcl_ldpreload

Peter,

As you might be aware, we've been focused on showcasing VPP integrated with the Ligato project for KubeCon this week. Once folks are back next week, there's quite a bit of technical debt we need to address. There's also interest from other parties in using LIBVCL (not LDP per se), and I suspect after KubeCon sparks some interest there'll be more requests.

I'm thinking the smartest way forward is to manage this via JIRA. If you could raise a feature request and assign it to me, we can then put everything in one place, prioritise, and have a repository of tasks for folks who want to help.

On Dec 6, 2017 12:25 PM, "Peter Palmár" <peter.pal...@pantheon.tech> wrote:

Hi,

We are testing the VPP TCP stack using the following combination: a C++ application based on C++ gRPC with libvcl_ldpreload. We use greeter_server and greeter_client from grpc/examples/cpp/helloworld, taken from https://github.com/grpc/grpc.

The server and client use the eventfd()/eventfd2() system call, which is not implemented in libvcl_ldpreload; this seems to be the reason why the communication between the server and client does not work. Could you please let me know whether I am right and, if so, whether/when an implementation of eventfd is planned to be added to libvcl_ldpreload? The attached file contains the client output.
Regards,
Peter
[vpp-dev] The feasibility of C++ gRPC with libvcl_ldpreload
Hi,

We are testing the VPP TCP stack using the following combination: a C++ application based on C++ gRPC with libvcl_ldpreload. We use greeter_server and greeter_client from grpc/examples/cpp/helloworld, taken from https://github.com/grpc/grpc.

The server and client use the eventfd()/eventfd2() system call, which is not implemented in libvcl_ldpreload; this seems to be the reason why the communication between the server and client does not work. Could you please let me know whether I am right and, if so, whether/when an implementation of eventfd is planned to be added to libvcl_ldpreload? The attached file contains the client output.

Regards,
Peter

sudo -E bash -c 'export LD_PRELOAD=/home/palmar/dev/vpp/build-root/build-vpp_debug-native/vpp/.libs/libvcl_ldpreload.so.0.0.0; ./greeter_client'
[sudo] password for palmar:
vppcom_app_create:1964: [3889] getenv 'VCL_CONFIG' failed! open configuration file '/etc/vpp/vcl.conf' failed
vppcom_cfg_heapsize:1604: [3889] allocated VCL heap = 0x7fa1e8475000, size 268435456 (0x1000)
vppcom_cfg_read:1624: [3889] open configuration file '/etc/vpp/vcl.conf' failed!
Connecting to VPP api... connected!
vppcom_app_create:2063: [3889] sending session enable
vppcom_app_create:2074: [3889] sending app attach
vppcom_app_create:2086: [3889] app_name 'vcom-app-3889', my_client_index 0 (0x0)
vcom_socket_main_init
[3889] vcom_init...done!
[3889] vcom_constructor...done!
vppcom_epoll_create:3352: [3889] Created vep_idx 0 / sid 0!
[3889] epoll_create: '0006'='0008'
[3889] vcom_socket_epoll_ctl_i: libc_epoll_ctl() returned 0 epfd 6, vep_idx 0, fd 7 sid -1 op 1 count 1, vcl_cnt 0, libc_cnt 1
[3889] epoll_ctl: ''='0006', '0001', '0007'
epfd='0006', vep_idx='', type='EPOLL_TYPE_VPPCOM_BOUND ', flags='524288', count='1', close='0'
vppcom_session_create:2136: [3889] sid 1
[3889][14055050752 (0x7fa1e7c72700)] socket: '0008'= D='0010', T='0001', P=''
fd='0008', sid='0001', type='SOCKET_TYPE_VPPCOM_BOUND '
epfd='0006', vep_idx='', type='EPOLL_TYPE_VPPCOM_BOUND ', flags='524288', count='1', close='0'
vppcom_session_bind:2286: [3889] sid 1: binding to local IPv6 address 0.0.0.1 port 0, proto TCP
[3889] bind: ''='0008', '0x7fa1e7c71800', '0028'
[3889] close: fd 8
vppcom_session_close:2172: [3889] vpp handle 0x, sid 1: closing session...
vppcom_session_close:2248: [3889] vpp handle 0x, sid 1: session removed.
[3889] close: vcom_close() returned 0
epfd='0006', vep_idx='', type='EPOLL_TYPE_VPPCOM_BOUND ', flags='524288', count='1', close='0'
vppcom_session_create:2136: [3889] sid 1
[3889][14055050752 (0x7fa1e7c72700)] socket: '0008'= D='0010', T='0001', P=''
fd='0008', sid='0001', type='SOCKET_TYPE_VPPCOM_BOUND '
epfd='0006', vep_idx='', type='EPOLL_TYPE_VPPCOM_BOUND ', flags='524288', count='1', close='0'
[3889] setsockopt: ''='0008', '0041', '0026', '0x7fa1e7c718b4', '0004'
[3889] fcntl: '0002'='0008', '0003'
[3889] fcntl: ''='0008', '0004'
[3889] fcntl: ''='0008', '0001'
[3889] fcntl: ''='0008', '0002'
[3889] setsockopt: ''='0008', '0006', '0001', '0x7fa1e7c718b4', '0004'
[3889] connect: '-022'='0008', '0x7fa1e7c71930', '0028'
[3889] vcom_socket_epoll_ctl_i: vppcom_epoll_ctl() returned 0 epfd 6, vep_idx 0, fd 8 sid 1 op 1 count 2, vcl_cnt 1, libc_cnt 1
[3889] epoll_ctl: ''='0006', '0001', '0008'
[3889] shutdown: ''='0008', '0002'
[3889] close: fd 8
vppcom_session_close:2172: [3889] vpp handle 0x, sid 1: closing session...
vppcom_session_close:2248: [3889] vpp handle 0x, sid 1: session removed.
[3889] close: vcom_close() returned 0
14: Connect Failed
Greeter received: RPC failed
[3889] close: fd 6
[3889] close: vcom_close() returned 0
vcom_socket_main_destroy
vppcom_session_close:2169: [3889] vep_idx 0 / sid 0: closing epoll session...
vppcom_session_close:2245: [3889] vep_idx 0 / sid 0: epoll session removed.
vppcom_app_destroy:2102: [3889] detaching from VPP, my_client_index 0 (0x0)
[3889] vcom_destroy...done!
[3889] vcom_destructor...done!
[vpp-dev] When using nc (netcat), VPP doesn't seem to work
Hi vpp developers,

I would like to use VPP with nc (netcat), but after the nc server has accepted the first connection from the nc client, the server closes the session and both the server and the client exit normally. More precisely, they exit after pthread_mutex_unlock (&q->mutex) from unix_shared_memory_queue.c / unix_shared_memory_queue_add(...) has been executed by the client. Please have a look at the backtrace below.

Server:
sudo -E bash -c 'export LD_PRELOAD=/usr/local/lib/libvcl_ldpreload.so.0.0.0; ./nc -l -n -v '

Client:
sudo -E gdb --args ./nc -v -n 127.0.0.1
(gdb) set environment LD_PRELOAD=/usr/local/lib/libvcl_ldpreload.so.0.0.0
(...)
(gdb) bt
#0 unix_shared_memory_queue_add (q=0x30046080, elem=0x7fff9828 ""\a0", nowait=0) at /home/palmar/dev/vpp/build-data/../src/vlibmemory/unix_shared_memory_queue.c:232
#1 0x7684f610 in vl_msg_api_send_shmem (q=0x30046080, elem=0x7fff9828 ""\a0") at /home/palmar/dev/vpp/build-data/../src/vlibmemory/memory_shared.c:581
#2 0x76ebde6d in vppcom_send_connect_sock (session=0x7fffe1f1e52c, session_index=0) at /home/palmar/dev/vpp/build-data/../src/uri/vppcom.c:788
#3 0x76ec2ba0 in vppcom_session_connect (session_index=0, server_ep=0x7fff98b0) at /home/palmar/dev/vpp/build-data/../src/uri/vppcom.c:2096
#4 0x77bd1435 in vcom_socket_connect (__fd=__fd@entry=6, __addr=__addr@entry=0x688b40, __len=__len@entry=16) at libvcl-ldpreload/vcom_socket.c:990
#5 0x77bcdf70 in vcom_connect (__fd=__fd@entry=6, __addr=__addr@entry=0x688b40, __len=__len@entry=16) at libvcl-ldpreload/vcom.c:1842
#6 0x77bce08e in connect (__fd=6, __addr=0x688b40, __len=16) at libvcl-ldpreload/vcom.c:1862
#7 0x00403ce1 in connect_with_timeout (ctimeout=-1, salen=16, sa=0x688b40, fd=6) at netcat.c:961
#8 remote_connect (host=host@entry=0x7fffe14c "127.0.0.1", port=0x688af0 "", hints=...) at netcat.c:877
#9 0x0040219c in main (argc=, argv=) at netcat.c:641

Could you please let me know whether this is a known issue?
Regards,
Peter