On 11/10 15:17:15, Bala Manoharan wrote:
> On 10 November 2016 at 13:26, Brian Brooks wrote:
> > On 11/07 16:46:12, Bala Manoharan wrote:
> >> Hi,
> >
> > Hiya
> >
> >> This mail thread discusses the design of classification queue group
> >> RFC. The same can be found in the google doc whose link
I hoped such a kernel module would already exist, but I am surprised
that DPDK would not rely on it. Maybe there is a limitation I cannot
see. I'll keep searching. Francois, maybe you know the answer?
On 10 November 2016 at 19:10, Mike Holmes wrote:
>
>
> On 10 November 2016 at 12:52, Christophe
From: Xuelin Shi
This patch made the following changes:
- receiving packets from multiple pktios rather than generating packets
- identifying the TM user by IP, with an individual class of service
- print the user service when starting the program
- print the packet count
On 10 November 2016 at 20:05, Maxim Uvarov wrote:
> On 11/10/16 20:21, Christophe Milard wrote:
>
>> The capability "can_getphy" is introduced and tells whether physical
>> address queries are available.
>> The function odpdrv_getphy() is added to query for physical address
>> (from virtual addre
On 11/10/16 20:21, Christophe Milard wrote:
The capability "can_getphy" is introduced and tells whether physical
address queries are available.
The function odpdrv_getphy() is added to query for physical address
(from virtual address)
Signed-off-by: Christophe Milard
---
include/odp/drv/spec/
On 10 November 2016 at 04:14, Bala Manoharan
wrote:
> Regards,
> Bala
>
>
> On 10 November 2016 at 07:34, Bill Fischofer
> wrote:
> >
> >
> > On Mon, Nov 7, 2016 at 5:16 AM, Bala Manoharan <
> bala.manoha...@linaro.org>
> > wrote:
> >>
> >> Hi,
> >>
> >> This mail thread discusses the design of
On 10 November 2016 at 12:52, Christophe Milard <
christophe.mil...@linaro.org> wrote:
> Hi,
>
> My hope was that packet segments would all be smaller than one page
> (either normal pages or huge pages) to guarantee physical memory
> continuity which is needed by some drivers (read non vfio driver
Hi,
My hope was that packet segments would all be smaller than one page
(either normal pages or huge pages) to guarantee physical memory
contiguity, which is needed by some drivers (read: non-VFIO drivers for
PCI).
Francois Ozog's experience (with DPDK) shows that this hope will fail
in some cases: n
The capability "can_getphy" is introduced and tells whether physical
address queries are available.
The function odpdrv_getphy() is added to query for physical address
(from virtual address)
Signed-off-by: Christophe Milard
---
platform/linux-generic/drv_shm.c| 12 +++
Signed-off-by: Christophe Milard
---
.../common_plat/validation/drv/drvshmem/drvshmem.c | 37 ++
.../common_plat/validation/drv/drvshmem/drvshmem.h | 1 +
2 files changed, 38 insertions(+)
diff --git a/test/common_plat/validation/drv/drvshmem/drvshmem.c
b/test/common_plat/v
The capability "can_getphy" is introduced and tells whether physical
address queries are available.
The function odpdrv_getphy() is added to query for physical address
(from virtual address)
Signed-off-by: Christophe Milard
---
include/odp/drv/spec/shm.h | 35 +++
NOTE: Must be applied on top of "getting physical addresses from _ishmphy"
Brings the physical address query functions on the driver interface.
Christophe Milard (3):
drv: shm: function to query for physical addresses
linux-gen: drv: functions to query for physical addresses
test: drv: shm:
https://bugs.linaro.org/show_bug.cgi?id=2437
Mike Holmes changed:
What|Removed |Added
Resolution|--- |NON REPRODUCIBLE
Status|IN_PROGRESS
https://bugs.linaro.org/show_bug.cgi?id=2438
Mike Holmes changed:
What|Removed |Added
Resolution|--- |NON REPRODUCIBLE
Status|UNCONFIRMED
https://bugs.linaro.org/show_bug.cgi?id=2558
Mike Holmes changed:
What|Removed |Added
Status|UNCONFIRMED |RESOLVED
Resolution|---
https://bugs.linaro.org/show_bug.cgi?id=2512
--- Comment #5 from Bala Manoharan ---
V2 Patch in mailing list. Needs review.
--
You are receiving this mail because:
You are on the CC list for the bug.
https://bugs.linaro.org/show_bug.cgi?id=2497
Bala Manoharan changed:
What|Removed |Added
Status|UNCONFIRMED |RESOLVED
Resolution|---
Fixes https://bugs.linaro.org/show_bug.cgi?id=2496
Signed-off-by: Balasubramanian Manoharan
---
v2: Incorporate review comments
test/common_plat/validation/api/pktio/pktio.c | 24 +---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/test/common_plat/validation/
This series fails bisectability with this patch:
Using patch: 0005-linux-gen-pool-reimplement-pool-with-ring.patch
Trying to apply patch
Patch applied
Building with patch
PASS: odp_crypto
PASS: odp_pktio_perf
SKIP: odp_l2fwd_run.sh
PASS: odp_sched_latency_run.sh
PASS: odp_scheduling_run.sh
=
https://bugs.linaro.org/show_bug.cgi?id=2494
--- Comment #7 from Bala Manoharan ---
Regards,
Bala
On 3 November 2016 at 19:36, wrote:
> https://bugs.linaro.org/show_bug.cgi?id=2494
>
> --- Comment #6 from Maciej Czekaj ---
> (In reply to Bala Manoharan from comment #5)
>> Application portabi
Regards,
Bala
On 3 November 2016 at 19:36, wrote:
> https://bugs.linaro.org/show_bug.cgi?id=2494
>
> --- Comment #6 from Maciej Czekaj ---
> (In reply to Bala Manoharan from comment #5)
>> Application portability across multiple platforms can not be done without
>> any modifications. However I
On 11/10/16 14:07, Petri Savolainen wrote:
There's no use case for an application to allocate zero-length
packets. An application should always have some knowledge about
the new packet data length before allocation. Implementations
are also more efficient when a check for zero length
is avoided.
Also
Hello Xiaowen,
For now we have two types of schedulers in the linux-generic implementation.
The strict priority scheduler can be compiled in by passing --enable-schedule-sp
to configure.
The SP scheduler is optimized for low latency processing of high priority events
(not for throughput).
Best regards,
Maxim.
Used the ring data structure to implement the pool. The buffer
structure was also simplified to enable a future driver
interface. Every buffer includes a packet header, so each
buffer can be used as a packet head or segment. Segmentation
was disabled and the segment size was fixed to a large number
(64kB) to lim
Added support for multi-segmented packets. The first segment
is the packet descriptor, which contains all metadata and
pointers to other segments.
Signed-off-by: Petri Savolainen
---
.../include/odp/api/plat/packet_types.h| 6 +-
.../linux-generic/include/odp_buffer_inlines.h
Removed odp_pool_to_entry(), which was a duplicate of
pool_entry_from_hdl(). Renamed odp_buf_to_hdr() to
buf_hdl_to_hdr(), which describes more accurately the internal
function. Inlined pool_entry(), pool_entry_from_hdl() and
buf_hdl_to_hdr(), which are used often and also outside of
pool.c. Rename
Moved scheduler ring code into a new header file, so that
it can also be used in other parts of the implementation.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/Makefile.am | 1 +
platform/linux-generic/include/odp_ring_internal.h | 111 +
plat
Round up global pool allocations to a burst size. Cache any
extra buffers for future use. Prefetch buffer headers that were
newly allocated from the global pool and will be returned to
the caller.
Signed-off-by: Petri Savolainen
---
.../linux-generic/include/odp_buffer_internal.h| 3 +-
platfo
Added checks for correct alignment. Also updated tests to call
odp_pool_param_init() for parameter initialization.
Signed-off-by: Petri Savolainen
---
test/common_plat/validation/api/buffer/buffer.c | 113 +---
1 file changed, 63 insertions(+), 50 deletions(-)
diff --git a/t
Improve performance by changing the first parameter of
buffer_alloc_multi() to a pool pointer (from a handle), to avoid
a double lookup of the pool pointer. The pointer is already
available in packet alloc calls.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/include/odp_buffer_internal.h |
Use multi enq and deq operations to optimize global pool
access performance. Temporary uint32_t arrays are needed
since handles are pointer-sized variables.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/odp_pool.c | 32
1 file changed, 20 insertions(+
Pool performance is optimized by using a ring as the global buffer storage.
The IPC build is disabled, since it needs large modifications due to its
dependency on pool internals. The old pool implementation was based on locks
and a linked list of buffer headers. The new implementation maintains a ring
of buffer handles
Use odp_pool_param_init() to initialize pool parameters. The
pktio test must also use capability to determine the maximum packet
segment length.
Signed-off-by: Petri Savolainen
---
example/generator/odp_generator.c | 2 +-
test/common_plat/validation/api/crypto/crypto.c | 2 +-
test/comm
Applications must use pool capability to check maximum values
for parameters. Used the maximum segment length since the application
seems to support only single-segment packets.
Signed-off-by: Petri Savolainen
---
test/common_plat/performance/odp_crypto.c | 47 +++
1 file ch
Remove support for zero length allocations which were never
required by the API specification or tested by the validation
suite.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/odp_packet.c | 28
1 file changed, 28 deletions(-)
diff --git a/platform/linux
There's no use case for an application to allocate zero-length
packets. An application should always have some knowledge about
the new packet data length before allocation. Implementations
are also more efficient when a check for zero length
is avoided.
Also added a pool parameter to specify the maximum
Added test cases to allocate and free multiple multi-segment
packets.
Signed-off-by: Petri Savolainen
---
test/common_plat/validation/api/packet/packet.c | 73 +++--
1 file changed, 68 insertions(+), 5 deletions(-)
diff --git a/test/common_plat/validation/api/packet/packet.c
The tailroom test did not call odp_packet_extend_tail() since it pushed
the tail by too few bytes. Corrected the test to extend the tail by 100
bytes.
The concat test passed the same packet as both the src and dst packet. There's
no valid use case to concatenate a packet into itself (it forms
a loop). Corrected the test
Added multi-data versions of ring enqueue and dequeue operations.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/include/odp_ring_internal.h | 65 ++
1 file changed, 65 insertions(+)
diff --git a/platform/linux-generic/include/odp_ring_internal.h
b/platform/linu
Added a macro to round up a value to the next power of two,
if it's not already a power of two. Also removed duplicated
code from the same file.
Signed-off-by: Petri Savolainen
---
.../linux-generic/include/odp_align_internal.h | 34 +-
1 file changed, 7 insertions(+), 27
In some error cases, the netmap and dpdk pktios were calling
odp_packet_free_multi() with zero packets. Moved the existing error
check to avoid a free call with zero packets.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/pktio/dpdk.c | 10 ++
platform/linux-generic/pktio/netmap.c |
Enable segmentation support with CONFIG_PACKET_MAX_SEGS
configuration option.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/include/odp_config_internal.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/platform/linux-generic/include/odp_config_internal.h
b/p
The IPC pktio implementation depends heavily on pool internals. Its
build is disabled due to the pool re-implementation. IPC should be
re-implemented with a cleaner internal interface towards pool and
shm.
Signed-off-by: Petri Savolainen
---
platform/linux-generic/pktio/ipc.c | 3 ++-
1 file changed, 2
On 10 November 2016 at 13:26, Brian Brooks wrote:
> On 11/07 16:46:12, Bala Manoharan wrote:
>> Hi,
>
> Hiya
>
>> This mail thread discusses the design of classification queue group
>> RFC. The same can be found in the google doc whose link is given
>> below.
>> Users can provide their comments ei
On 11/09 09:42:49, Francois Ozog wrote:
> To deal with drivers we need to have additional "standard" (helper or API?)
> objects to deal with MAC addresses.
>
> ODP could be made to handle SDH, SDLC, FrameRelay or ATM packets. A
> packet_io is pretty generic...
>
> But as soon as we talk with driv
Regards,
Bala
On 10 November 2016 at 07:34, Bill Fischofer wrote:
>
>
> On Mon, Nov 7, 2016 at 5:16 AM, Bala Manoharan
> wrote:
>>
>> Hi,
>>
>> This mail thread discusses the design of classification queue group
>> RFC. The same can be found in the google doc whose link is given
>> below.
>> Us
Good points Brian.
I can add a data point with real-life observations of a DPI box
analyzing two 10 Gbps links (four 10 Gbps ports): there are approximately
50M known flows (TCP, UDP...), though many may not be active (TCP CLOSE_WAIT).
FF
On 10 November 2016 at 08:56, Brian Brooks wrote:
> On 11/07 16:4