Hello vpp-dev & csit-dev,
I have a question about how to determine the data type mapping between *.api.json
files and the Python API.
Let me use "./src/vnet/ip/ip.api" as an example. As I understand it, vppapigen will
autogenerate "ip.api.json" from "ip.api",
and then vpp_papi can autogenerate its Python API from "i
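For context, a minimal vpp_papi sketch of that flow (assuming an installed VPP with
its *.api.json files in the default location; in older releases the client class was
vpp_papi.VPP rather than VPPApiClient):

# Sketch: vpp_papi loads the vppapigen-generated *.api.json files and
# turns every message definition into a Python method whose keyword
# arguments are typed from the JSON definitions.
from vpp_papi import VPPApiClient

vpp = VPPApiClient()           # scans the default api directories for *.api.json;
                               # pass apifiles=[...] to point at specific files
vpp.connect("example-client")  # attach to the running VPP instance

reply = vpp.api.show_version() # method generated from vpe.api.json
print(reply.version)

vpp.disconnect()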
This is a known issue with how the retry mechanic occasionally interacts (badly) with
gerrit. The odds of this happening were a bit higher over the last
couple of days, specifically for the centos retries. This is tied to the JNLP changes
made to csit and how the initial connection is made. While the c
Hi all,
I noticed a job getting a +1 even though some of the builds failed ...
https://gerrit.fd.io/r/#/c/18444/
Please note patch set 9.
fd.io JJB
Patch Set 9: Verified-1 Build Failed
https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1804/2459/ : FAILURE
No problems were identified. I
Hi Mohamed,
> I have hugetlb mounted
> root@node-1:/app# mount | grep huge
> cgroup on /sys/fs/cgroup/hugetlb type cgroup
> (rw,nosuid,nodev,noexec,relatime,hugetlb,nsroot=/kubepods/besteffort/pod57d8886a-701a-11e9-be26-08002733828a/3d36de8ece4e84a1ccfca2c28e9bec1a8b1b1efdec682995f7a6406808d0c8a2)
The original idea was to reply to an API message immediately with an error
status indicating that the operation was in progress. Hence, +10.
The scheme is, at best, barely used, as you wrote. Copying Ole for an
opinion on that single use case.
D.
From: vpp-dev@lists.fd.io On B
Hi All:
I have hugetlb mounted
root@node-1:/app# mount | grep huge
cgroup on /sys/fs/cgroup/hugetlb type cgroup
(rw,nosuid,nodev,noexec,relatime,hugetlb,nsroot=/kubepods/besteffort/pod57d8886a-701a-11e9-be26-08002733828a/3d36de8ece4e84a1ccfca2c28e9bec1a8b1b1efdec682995f7a6406808d0c8a2)
vagrant
Folks,
So I was reading src/vnet/api_errno.h, as you do, and I noticed
this weird line:
_(IN_PROGRESS, 10, "Operation in progress") \
That's right, an error number that is positive. It just doesn't sound
right...
And I don't think it is simply missing a negative sign, as there is also this
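For readers following along, a small illustrative sketch of how that positive code
surfaces on the API side: replies carry a signed retval, real errors are negative,
and +10 (per the scheme discussed above) means the operation is still in progress.
The Reply stand-in below is hypothetical; vpp_papi returns similar named tuples:

from collections import namedtuple

VNET_API_ERROR_IN_PROGRESS = 10  # from src/vnet/api_errno.h

def check_reply(reply):
    # Interpret the signed retval carried in a VPP API reply.
    if reply.retval == 0:
        return "success"
    if reply.retval == VNET_API_ERROR_IN_PROGRESS:
        return "operation still in progress"  # positive, by design
    return "error %d" % reply.retval          # real errors are negative

# Hypothetical stand-in for a reply object.
Reply = namedtuple("Reply", ["retval"])
assert check_reply(Reply(retval=10)) == "operation still in progress"
assert check_reply(Reply(retval=-2)) == "error -2"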
Just a shot in the dark, but is hugetlbfs accessible somewhere in your
container?
It is not accessible by default, and you probably need it, e.g.:
~# mount -t hugetlbfs /dev/null /mnt/huge
ben
> -----Original Message-----
> From: vpp-dev@lists.fd.io On Behalf Of Peter Mikus
> via Lists.Fd.I
Hi Jitendra,
I have not followed DPDK Cryptodev development over the last year, but
given my vague memory and the latest VPP code, I reckon your configuration is
not supported.
The cryptodev scheduler supports only the cipher/auth algorithms common to all
slaves, which in your case is none.
Which VPP versi
Hi Hongjun,
I have tested the HQoS plugin with iperf3. Below is a simple topology I have
implemented:
First, I tested the default profile (profile 0) provided by VPP and assigned it
to all 4096 pipes (users), which should give approximately 2.4 Mbps (10 Gbps / 4096)
to each user. It worked fine for 5 us
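As a quick sanity check on that arithmetic (the 10 Gbps port rate and the even split
across all 4096 pipes are assumptions taken from the description above, not an exact
HQoS computation):

# Rough per-pipe rate when a 10 Gbps port is shared evenly by 4096 pipes.
port_rate_bps = 10e9
pipes = 4096
print("~%.2f Mbps per pipe" % (port_rate_bps / pipes / 1e6))  # ~2.44 Mbps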
Hi everyone, I want to use a library in VPP. I tried to add the library path to
CMakeLists.txt, and I put the library in the vpp/src/ directory. How can I add
the API file of the library using find_path or something else?
Hi,
With a VPP multi-core (workers >= 2) and dpdk-bonding (mode-2) configuration, I
observe an intermittent VPP crash when traffic is sent at high rates (for example,
4 Mpps of 1518B packets). The same crash is seen even with different packet sizes
like 64B, 128B, etc.
I don't see the crash when the num
Hi Abeeha,
For downstream bandwidth limiting, we leveraged the HQoS plugin in the OpenBRAS
solution.
In our previous integration test, it could support 64K subscribers with HQoS.
Thanks,
Hongjun
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Abeeha Aqeel
Sent: Monday, May 6, 2019 3
Hi Hongjun,
I have been trying to implement downstream bandwidth limiting using the HQoS plugin
in VPP. It works fine for a certain number of clients (fewer than 5) but doesn't
assign the proper bandwidth for a larger number of clients.
Can you please elaborate on which method is being used in the OpenBRA