Re: [vpp-dev] Python API fails to connect to vpp #vapi #vpp_papi #vpp

2021-08-11 Thread Ole Troan
Hyong,

> Thanks for the info; setting 'use_socket=True' solved the issue (my code was 
> already using 'VPPApiClient', as it was imported under the name 'VPP').  
> Out of curiosity, why is the Python shared memory transport deprecated?

1) there was no performance gain from shared memory over a Unix domain socket in 
Python
2) shared memory support required a VPP-version-specific C library
3) potential licensing issues with linking in the VPP C library together with the 
use of scapy in tests

Cheers,
Ole



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19949): https://lists.fd.io/g/vpp-dev/message/19949
Mute This Topic: https://lists.fd.io/mt/84779165/21656
Mute #vpp:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Mute #vapi:https://lists.fd.io/g/vpp-dev/mutehashtag/vapi
Mute #vpp_papi:https://lists.fd.io/g/vpp-dev/mutehashtag/vpp_papi
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Python API fails to connect to vpp #vapi #vpp_papi #vpp

2021-08-10 Thread hyongsop
Hi Ole,

Thanks for the info; setting 'use_socket=True' solved the issue (my code was 
already using 'VPPApiClient', as it was imported under the name 'VPP').  
Out of curiosity, why is the Python shared memory transport deprecated?

Thanks again,
--Hyong

View/Reply Online (#19942): https://lists.fd.io/g/vpp-dev/message/19942



Re: [vpp-dev] Python API fails to connect to vpp #vapi #vpp_papi #vpp

2021-08-10 Thread Ole Troan
> I have a python script that uses 'vpp_papi' to try to connect to the 'vpp' 
> running on the local host.  It's based on my reading of the source code in 
> 'src/vpp-api/python/vpp_papi'.  One problem I'm running into is that the 
> client fails to connect to the vpp with the error message:
> 
> python3.8: .../src/vpp-api/client/client.c:560: vac_set_error_handler: 
> Assertion `clib_mem_get_heap ()' failed

I have seen this error before. It might be that you need to run as root, or to set 
the chroot_prefix.
That said, we have deprecated the Python shared memory VPP API transport and now 
support only the Unix domain socket. You might as well connect with 
use_socket=True (the default from 21.08?).

Also, "VPP" has been renamed to "VPPApiClient", e.g.:
vpp = VPPApiClient(use_socket=True)
vpp.connect(name='foo')
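For context, a slightly fuller sketch of the socket-based connect, roughly as in 
21.08-era vpp_papi. The API-definition directory and client name here are 
illustrative assumptions, not fixed values:

```python
# Sketch of connecting to a local VPP over the API Unix domain socket
# with vpp_papi. The directory and client name below are assumptions
# for illustration; adjust them to your installation.
import glob

API_JSON_DIR = "/usr/share/vpp/api"  # common install location (assumed)

def load_api_files(api_dir=API_JSON_DIR):
    """Collect the *.api.json message definitions that VPPApiClient parses."""
    return glob.glob(f"{api_dir}/**/*.api.json", recursive=True)

def connect(name="example-client"):
    # Imported lazily so the helper above is readable without vpp_papi.
    from vpp_papi import VPPApiClient
    vpp = VPPApiClient(apifiles=load_api_files(),
                       use_socket=True,   # Unix domain socket transport
                       read_timeout=60)
    vpp.connect(name)
    return vpp
```

If the API socket lives somewhere other than the default (/run/vpp/api.sock), 
VPPApiClient also takes a server_address argument to point at it.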

cheers,
Ole

> 
> This error occurs as part of 'connect()':
> 
> vpp = VPP(self.api_json_files, read_timeout=60)
> ret = vpp.connect(app_name)
> 
> 
> FWIW, the vpp itself seems to have enough heap space (it's run with 2 
> workers):
> 
> 
> DBGvpp# show memory main-heap
> Thread 0 vpp_main
>   base 0x7f8d15fd5000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 26485, not-mapped 235659
>   numa 0: 24 pages, 96k bytes
>   numa 1: 26461 pages, 103.36m bytes
> total: 1023.99M, used: 99.86M, free: 924.14M, trimmable: 923.55M
> 
> Thread 1 vpp_wk_0
>   base 0x7f8d15fd5000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 27253, not-mapped 234891
>   numa 0: 24 pages, 96k bytes
>   numa 1: 27229 pages, 106.36m bytes
> total: 1023.99M, used: 102.86M, free: 921.14M, trimmable: 920.55M
> 
> Thread 2 vpp_wk_1
>   base 0x7f8d15fd5000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 28021, not-mapped 234123
>   numa 0: 24 pages, 96k bytes
>   numa 1: 27997 pages, 109.36m bytes
> total: 1023.99M, used: 105.86M, free: 918.14M, trimmable: 917.55M
> On the other hand, the system doesn't seem to have many "huge pages" as shown 
> by 'hugeadm --pool-list':
> 
> Size        Minimum  Current  Maximum  Default
> 2097152        1024     1024     1024        *
> 1073741824        0        0        0
> 
> However, I haven't been able to figure out whether the error is related to the 
> above, and I couldn't get much information on it from searching or from reading 
> the `clib_mem_get_heap()` implementation.
> 
>  Any info on the cause of this error and path forward would be greatly 
> appreciated.
> 
> The vpp is 'stable/2101' and is running on an Ubuntu system (20.04, 
> 5.4.0-26-generic).
> 
> Thanks,
> --Hyong



View/Reply Online (#19940): https://lists.fd.io/g/vpp-dev/message/19940



[vpp-dev] Python API fails to connect to vpp #vapi #vpp_papi #vpp

2021-08-09 Thread hyongsop
Hi,

I have a python script that uses 'vpp_papi' to try to connect to the 'vpp' 
running on the local host.  It's based on my reading of the source code in 
'src/vpp-api/python/vpp_papi'.  One problem I'm running into is that the client 
fails to connect to the vpp with the error message:

> python3.8: .../src/vpp-api/client/client.c:560: vac_set_error_handler:
> Assertion `clib_mem_get_heap ()' failed
This error occurs as part of 'connect()':

> vpp = VPP(self.api_json_files, read_timeout=60)
> ret = vpp.connect(app_name)
FWIW, the vpp itself seems to have enough heap space (it's run with 2 workers):

> 
> DBGvpp# show memory main-heap
> Thread 0 vpp_main
> base 0x7f8d15fd5000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 26485, not-mapped 235659
> numa 0: 24 pages, 96k bytes
> numa 1: 26461 pages, 103.36m bytes
> total: 1023.99M, used: 99.86M, free: 924.14M, trimmable: 923.55M
> 
> Thread 1 vpp_wk_0
> base 0x7f8d15fd5000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 27253, not-mapped 234891
> numa 0: 24 pages, 96k bytes
> numa 1: 27229 pages, 106.36m bytes
> total: 1023.99M, used: 102.86M, free: 921.14M, trimmable: 920.55M
> 
> Thread 2 vpp_wk_1
> base 0x7f8d15fd5000, size 1g, locked, unmap-on-destroy, name 'main heap'
> page stats: page-size 4K, total 262144, mapped 28021, not-mapped 234123
> numa 0: 24 pages, 96k bytes
> numa 1: 27997 pages, 109.36m bytes
> total: 1023.99M, used: 105.86M, free: 918.14M, trimmable: 917.55M
> 

On the other hand, the system doesn't seem to have many "huge pages" as shown 
by 'hugeadm --pool-list':

> Size        Minimum  Current  Maximum  Default
> 2097152        1024     1024     1024        *
> 1073741824        0        0        0
However, I haven't been able to figure out whether the error is related to the 
above, and I couldn't get much information on it from searching or from reading 
the `clib_mem_get_heap()` implementation.

Any info on the cause of this error and path forward would be greatly 
appreciated.

The vpp is 'stable/2101' and is running on an Ubuntu system (20.04, 
5.4.0-26-generic).

Thanks,
--Hyong

View/Reply Online (#19937): https://lists.fd.io/g/vpp-dev/message/19937