[lng-odp] [PATCH] Use different name to create pool and queue
The scalable scheduler needs to allocate shm memory for each queue. This patch avoids shm namespace collisions and allows one shm block per queue.

Signed-off-by: Kevin Wang
Reviewed-by: Ola Liljedahl
Reviewed-by: Brian Brooks
Reviewed-by: Honnappa Nagarahalli
---
 .../linux-generic/include/odp_config_internal.h   |  2 +-
 .../api/classification/odp_classification_basic.c |  8 ++---
 .../classification/odp_classification_test_pmr.c  | 42 +++---
 test/common_plat/validation/api/timer/timer.c     |  5 ++-
 4 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h
index e7d84c9..a262d2e 100644
--- a/platform/linux-generic/include/odp_config_internal.h
+++ b/platform/linux-generic/include/odp_config_internal.h
@@ -104,7 +104,7 @@ extern "C" {
  *
  * This the the number of separate SHM areas that can be reserved concurrently
  */
-#define ODP_CONFIG_SHM_BLOCKS (ODP_CONFIG_POOLS + 48)
+#define ODP_CONFIG_SHM_BLOCKS (ODP_CONFIG_POOLS + ODP_CONFIG_QUEUES + 48)
 
 /*
  * Maximum event burst size
diff --git a/test/common_plat/validation/api/classification/odp_classification_basic.c b/test/common_plat/validation/api/classification/odp_classification_basic.c
index 9817287..b5b9285 100644
--- a/test/common_plat/validation/api/classification/odp_classification_basic.c
+++ b/test/common_plat/validation/api/classification/odp_classification_basic.c
@@ -93,10 +93,10 @@ void classification_test_create_pmr_match(void)
 	configure_default_cos(pktio, &default_cos,
 			      &default_queue, &default_pool);
 
-	queue = queue_create("pmr_match", true);
+	queue = queue_create("pmr_match_queue", true);
 	CU_ASSERT(queue != ODP_QUEUE_INVALID);
 
-	pool = pool_create("pmr_match");
+	pool = pool_create("pmr_match_pool");
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
 	odp_cls_cos_param_init(&cls_param);
@@ -277,10 +277,10 @@ void classification_test_pmr_composite_create(void)
 	configure_default_cos(pktio, &default_cos,
 			      &default_queue, &default_pool);
 
-	queue = queue_create("pmr_match", true);
+	queue = queue_create("pmr_match_queue", true);
 	CU_ASSERT(queue != ODP_QUEUE_INVALID);
 
-	pool = pool_create("pmr_match");
+	pool = pool_create("pmr_match_pool");
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
 	odp_cls_cos_param_init(&cls_param);
diff --git a/test/common_plat/validation/api/classification/odp_classification_test_pmr.c b/test/common_plat/validation/api/classification/odp_classification_test_pmr.c
index d952420..c1da515 100644
--- a/test/common_plat/validation/api/classification/odp_classification_test_pmr.c
+++ b/test/common_plat/validation/api/classification/odp_classification_test_pmr.c
@@ -121,10 +121,10 @@ void classification_test_pmr_term_tcp_dport(void)
 	configure_default_cos(pktio, &default_cos,
 			      &default_queue, &default_pool);
 
-	queue = queue_create("tcp_dport1", true);
+	queue = queue_create("tcp_dport1_queue", true);
 	CU_ASSERT(queue != ODP_QUEUE_INVALID);
 
-	pool = pool_create("tcp_dport1");
+	pool = pool_create("tcp_dport1_pool");
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
 	sprintf(cosname, "tcp_dport");
@@ -235,10 +235,10 @@ void classification_test_pmr_term_tcp_sport(void)
 	configure_default_cos(pktio, &default_cos,
 			      &default_queue, &default_pool);
 
-	queue = queue_create("tcp_sport", true);
+	queue = queue_create("tcp_sport_queue", true);
 	CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
 
-	pool = pool_create("tcp_sport");
+	pool = pool_create("tcp_sport_pool");
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
 	sprintf(cosname, "tcp_sport");
@@ -348,10 +348,10 @@ void classification_test_pmr_term_udp_dport(void)
 	configure_default_cos(pktio, &default_cos,
 			      &default_queue, &default_pool);
 
-	queue = queue_create("udp_dport", true);
+	queue = queue_create("udp_dport_queue", true);
 	CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
 
-	pool = pool_create("udp_dport");
+	pool = pool_create("udp_dport_pool");
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
 	sprintf(cosname, "udp_dport");
@@ -464,10 +464,10 @@ void classification_test_pmr_term_udp_sport(void)
 	configure_default_cos(pktio, &default_cos,
 			      &default_queue, &default_pool);
 
-	queue = queue_create("udp_sport", true);
+	queue = queue_create("udp_sport_queue", true);
 	CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
 
-	pool = pool_create("udp_sport");
+	pool = pool_create("udp_sport_pool");
 	CU_ASSERT_FATAL(pool != ODP_POOL_INVALID);
 
 	sprintf(cosname, "ud
Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler
Hi, On Thu, Apr 6, 2017 at 1:46 PM, Bill Fischofer wrote: > On Thu, Apr 6, 2017 at 1:32 PM, Ola Liljedahl > wrote: > > On 6 April 2017 at 13:48, Jerin Jacob > > wrote: > >> We see ORDERED->ATOMIC as main use case for basic packet forwarding. Stage > >> 1 (ORDERED) to process on N cores and Stage 2 (ATOMIC) to maintain the ingress > >> order. > > Doesn't ORDERED scheduling maintain the ingress packet order all the > > way to the egress interface? At least that's my understanding of ODP > > ordered queues. > > From an ODP perspective, I fail to see how the ATOMIC stage is needed. For basic IP forwarding I also do not see why an atomic stage would be needed, but for stateful things like IPsec or some application-specific higher-layer processing the situation can be different. At the risk of stating the obvious: ordered scheduling maintains ingress order when packets are placed in the next queue (toward the next pipeline stage or to pktout), but it allows parallel processing of packets of the same flow between the points where order is maintained. To guarantee packet processing in the ingress order in some section of code, the code needs to be executed in an atomic context or protected using an ordered lock. > As pointed out earlier, ordered locks are another option to avoid a > separate processing stage simply to do in-sequence operations within > an ordered flow. I'd be curious to understand the use-case in a bit > more detail here. Ordered queues preserve the originating queue's > order, however to achieve end-to-end ordering involving multiple > processing stages requires that flows traverse only ordered or atomic > queues. If a parallel queue is used ordering is indeterminate from > that point on in the pipeline. Exactly.
In an IPsec GW (as a use case example) one might want to do all stateless processing (like ingress and egress IP processing and the crypto operations) in ordered contexts to get parallelism, but the stateful part (replay-check and sequence number generation) in an atomic context (or holding an ordered lock) so that not only the packets are output in ingress order but also their sequence numbers are in the same order. That said, some might argue that IPsec replay window can take care not only of packet reordering in the network but also of reordering inside an IPsec GW and therefore the atomic context (or ordered lock) is not necessarily needed in all implementations. Janne
[lng-odp] [PATCH] Avoid shm namespace collisions and allow shm block per queue
Kevin Wang (1):
  Use different name to create pool and queue

 .../linux-generic/include/odp_config_internal.h   |  2 +-
 .../api/classification/odp_classification_basic.c |  8 ++---
 .../classification/odp_classification_test_pmr.c  | 42 +++---
 test/common_plat/validation/api/timer/timer.c     |  5 ++-
 4 files changed, 30 insertions(+), 27 deletions(-)

-- 
2.7.4
[lng-odp] [PATCH] Miss an unlock operation before exit if error happens
linux-gen: pktio: Just set the return value and remove the return statement
in the failure branch.

https://bugs.linaro.org/show_bug.cgi?id=2933

Signed-off-by: Kevin Wang
Reviewed-by: Ola Liljedahl
Reviewed-by: Brian Brooks
---
 platform/linux-generic/pktio/loop.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/platform/linux-generic/pktio/loop.c b/platform/linux-generic/pktio/loop.c
index 7096283..61e98ad 100644
--- a/platform/linux-generic/pktio/loop.c
+++ b/platform/linux-generic/pktio/loop.c
@@ -176,7 +176,7 @@ static int loopback_send(pktio_entry_t *pktio_entry, int index ODP_UNUSED,
 		pktio_entry->s.stats.out_octets += bytes;
 	} else {
 		ODP_DBG("queue enqueue failed %i\n", ret);
-		return -1;
+		ret = -1;
 	}
 
 	odp_ticketlock_unlock(&pktio_entry->s.txl);
-- 
2.7.4
[lng-odp] [PATCH] Fix a locking issue
Kevin Wang (1):
  Miss an unlock operation before exit if error happens

 platform/linux-generic/pktio/loop.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.7.4
[lng-odp] [Bug 2933] New: Miss to call unlock if there are some errors happens in loopback_send() function.
https://bugs.linaro.org/show_bug.cgi?id=2933

            Bug ID: 2933
           Summary: Miss to call unlock if there are some errors happens
                    in loopback_send() function.
           Product: OpenDataPlane - linux-generic reference
           Version: master
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: ---
         Component: Packet IO
          Assignee: maxim.uva...@linaro.org
          Reporter: kevin.w...@linaro.org
                CC: lng-odp@lists.linaro.org
  Target Milestone: ---

-- 
You are receiving this mail because:
You are on the CC list for the bug.
[lng-odp] [Bug 2933] Miss to call unlock if there are some errors happen in loopback_send() function.
https://bugs.linaro.org/show_bug.cgi?id=2933

Kevin Wang changed:

           What    |Removed                     |Added
----------------------------------------------------------------------
           Summary |Miss to call unlock if      |Miss to call unlock if
                   |there are some errors       |there are some errors
                   |happens in loopback_send()  |happen in loopback_send()
                   |function.                   |function.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
[lng-odp] [PATCH 1/2] add queue size param and related capability
Added size parameter indicating the maximum number of events in the queue
and the corresponding queue capability changes.

Signed-off-by: Honnappa Nagarahalli
---
 include/odp/api/spec/queue.h       | 12 ++++++++++++
 platform/linux-generic/odp_queue.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/include/odp/api/spec/queue.h b/include/odp/api/spec/queue.h
index 7972fea..ccb6fb8 100644
--- a/include/odp/api/spec/queue.h
+++ b/include/odp/api/spec/queue.h
@@ -112,6 +112,12 @@ typedef struct odp_queue_capability_t {
 	/** Number of scheduling priorities */
 	unsigned sched_prios;
 
+	/** Maximum number of events in the queue.
+	 *
+	 * Value of zero indicates the size is limited only by the available
+	 * memory in the system. */
+	unsigned max_size;
+
 } odp_queue_capability_t;
 
 /**
@@ -124,6 +130,12 @@ typedef struct odp_queue_param_t {
 	 * the queue type. */
 	odp_queue_type_t type;
 
+	/** Queue size
+	 *
+	 * Maximum number of events in the queue. Value of 0 chooses the
+	 * default configuration of the implementation. */
+	uint32_t size;
+
 	/** Enqueue mode
 	 *
 	 * Default value for both queue types is ODP_QUEUE_OP_MT. Application
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index fcf4bf5..5a50a57 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -175,6 +175,7 @@ int odp_queue_capability(odp_queue_capability_t *capa)
 	capa->max_ordered_locks = sched_fn->max_ordered_locks();
 	capa->max_sched_groups = sched_fn->num_grps();
 	capa->sched_prios = odp_schedule_num_prio();
+	capa->max_size = 0;
 	return 0;
 }
-- 
2.7.4
[lng-odp] [PATCH 2/2] add queue size config to cuckoo table
Some queue implementations in ODP take a queue size input. The cuckoo table
is modified to provide the queue size input when creating the queue.

Signed-off-by: Honnappa Nagarahalli
---
 helper/cuckootable.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/helper/cuckootable.c b/helper/cuckootable.c
index 80ff498..0d46300 100644
--- a/helper/cuckootable.c
+++ b/helper/cuckootable.c
@@ -256,6 +256,7 @@ odph_cuckoo_table_create(
 	/* initialize free_slots queue */
 	odp_queue_param_init(&qparam);
 	qparam.type = ODP_QUEUE_TYPE_PLAIN;
+	qparam.size = capacity;
 
 	snprintf(queue_name, sizeof(queue_name), "fs_%s", name);
 	queue = odp_queue_create(queue_name, &qparam);
-- 
2.7.4
Re: [lng-odp] [API-NEXT PATCH v2 01/16] Fix native Clang build on ARMv8
On 6 April 2017 at 13:54, Maxim Uvarov wrote: > I'm ok with this patch. That can go to master, not need for api-next. > > Maxim. We need this to be in api-next as well so that we can drop this from V3. Thank you, Honnappa > > On 04/06/17 20:09, Brian Brooks wrote: >> On 04/05 01:24:44, Dmitry Eremin-Solenikov wrote: >>> On 04.04.2017 23:34, Brian Brooks wrote: On 04/04 23:27:51, Dmitry Eremin-Solenikov wrote: > On 04.04.2017 23:26, Brian Brooks wrote: >> On 04/04 23:04:10, Dmitry Eremin-Solenikov wrote: >>> On 04.04.2017 21:47, Brian Brooks wrote: Signed-off-by: Brian Brooks >>> >>> Brian, >>> >>> how does this fail with clang on ARMv8? Could you please include error >>> message in the commit message? >> >> Well, you can't pass -mcx16 to clang when compiling natively on ARM. >> >> This is what it will look like: >> >> clang: error: argument unused during compilation: '-mcx16' >> >> and Autotools will halt the rest of the build. > Does this happen during configure stage or when building? You will see that error during build. That is when clang is invoked. But, the bug is in configure.ac where -mcx16 is added to CFLAGS when it shouldn't be. >>> >>> Then why isn't it detected by configure script? >> >> The configure script is generated by Autoconf. Autoconf takes the >> configure.ac file as input. The autoconf tool is invoked when you >> run this project's ./bootstrap script. >> >> configure.ac is a combination of shell script and M4 (a macro language). >> If something in configure.ac doesn't look like shell, it is likely >> a macro.
>> >> Lets step through it: >> >> 0 if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then >> 1my_save_cflags="$CFLAGS" >> 2 >> 3CFLAGS=-mcx16 >> 4AC_MSG_CHECKING([whether CC supports -mcx16]) >> 5AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], >> 6 [AC_MSG_RESULT([yes])] >> 7 [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], >> 8 [AC_MSG_RESULT([no])] >> 9 ) >> 10CFLAGS="$my_save_cflags" >> 11 fi >> >> Line 0 is specifically checking for whether we are using GCC or >> whether whatever compiler we're using's version is greater than >> or equal to 5. >> >> Line 1 saves CFLAGS into a temporary variable, and line 10 restores >> it. This technique is useful for when you want to avoid altering >> CFLAGS until the very end of the configure.ac has been processed. >> The reason is that the compiler may be invoked as configure.ac is >> processed and you want a pristine set of CFLAGS when you do this. >> >> Line 3 adds -mcx16 to CFLAGS. The original author probably wants >> to compile a small program with -mcx16 at this point in the >> processing of the configure.ac file. If we know we're not running >> on an x86-based chip, we can skip this entirely. More on that later. >> >> Line 4 is a macro to print something to inform the user that we >> are checking "whether CC supports -mcx16". >> >> Lines 5-9 will invoke the compiler with a provided program >> (an empty program: AC_LANG_PROGRAM([])) and execute lines 6 and 7 >> on success or lines 7 and 8 on failure. The interesting part is >> line 7 which appends CFLAGS to ODP_CFLAGS on success. We know >> that CFLAGS contains -mcx16, but we must also be mindful that >> there is nothing else in CFLAGS otherwise multiple consecutive >> uses of this technique would end up appending redundant things >> to CFLAGS. >> >> So, what happens when the configure script is run on ARM with >> CC=clang? >> >> checking whether CC supports -mcx16... yes >> >> Wat... umm... let's try something else... 
>> >> $ touch poof.c >> $ clang -mcx16 poof.c >> clang: warning: argument unused during compilation: '-mcx16' >> (.text+0x30): undefined reference to `main' >> clang: error: linker command failed with exit code 1 (use -v to see >> invocation) >> >> OK. So, why does it fail when we do a 'make'? Because of -Werror. >> >> $ clang -Werror -mcx16 poof.c >> clang: error: argument unused during compilation: '-mcx16' >> >> -Werror is in ODP_CFLAGS, not CFLAGS used throughout the processing >> of the configure.ac file. >> >> An understandable argument is to add that flag to CFLAGS, but we can >> do better. Much better. We can skip this nonsense entirely because we >> already know the architecture that we're targeting (the ${host} >> variable). Skipping that saves time. >> >> So, we only check for the x86-based -mcx16 flag if we're targeting >> an x86-based processor: >> >> if "${host}" == i?86* -o "${host}" == x86*; then >>if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then >> >> [..] >> >>fi >> fi >> >> It is that simple! :) >> >>> -- >>> With best wishes >>> Dmitry >
Re: [lng-odp] [API-NEXT PATCH v2 07/16] test: odp_scheduling: Handle dequeueing from a concurrent queue
On Thu, Apr 6, 2017 at 1:51 PM, Maxim Uvarov wrote: > On 04/06/17 13:35, Ola Liljedahl wrote: >> On 5 April 2017 at 23:39, Maxim Uvarov wrote: >>> On 04/05/17 17:30, Ola Liljedahl wrote: On 5 April 2017 at 14:50, Maxim Uvarov wrote: > On 04/05/17 06:57, Honnappa Nagarahalli wrote: >> This can go into master/api-next as an independent patch. Agree? > > agree. If we accept implementation where events can be 'delayed' Probably all platforms with HW queues. > than it > looks like we missed some api to sync queues. When would those API's be used? >>> >>> might be in case like that. Might be it's not needed in real world >>> application. >> This was a test program. I don't see the same situation occurring in a >> real world application. I could be wrong. >> >>> >>> My point that if situation of postpone event is accepted that we need >>> document that in api doxygen comment. >> I think the asynchronous behaviour is the default. ODP is a hardware >> abstraction. HW is often asynchronous, writes are posted etc. Ensuring >> synchronous behaviour costs performance. >> >> Single-threaded software is "synchronous", writes are immediately >> visible to the thread. But as soon as you go multi-threaded and don't >> use locks to access shared resources, software also becomes >> "asynchronous" (don't know if it is the right word here). Only if you >> use locks to synchronise accesses to shared memory you return to some >> form of sequential consistency (all threads see updates in the same >> order). You don't want to use locks, that quickly creates scalability >> bottlenecks. >> >> Since the scalable scheduler does its best to avoid locks >> (non-scalable) and sequential consistency (slow), instead utilising >> lock-less and lock-free algorithms and weak memory ordering (e.g. >> acquire/release), it exposes the underlying hardware characteristics. >> > > Ola, I think you better understand how weak memory ordering works.
In > this case I understand that hardware can 'delay' events in queue and > make them not visible just after queueing for some reason. And it's not > possible to solve in implementation. If we speak totally about software > I would understand if one thread did queue and other dequeue. Or case if > you queued X and dequeued Y. But in that case if each thread queued 1 > and dequeued 1 in each thread. Which look like if you store in one > thread some variable then you need several loads to get value which was > stored. Is that right behaviour of weak ordering? There are some good online articles that explain the issues well. See, for example [1] that explains the types of barriers used to control memory orderings and [2] that explains how these relate to strong vs. weak memory models. -- [1] http://preshing.com/20120710/memory-barriers-are-like-source-control-operations/ [2] http://preshing.com/20120930/weak-vs-strong-memory-models/ > > Maxim. > > >>> >>> Maxim. >>> > > But I do not see why we need this patch. On the same cpu test queue 1 > event and after that dequeue 1 event: > > for (i = 0; i < QUEUE_ROUNDS; i++) { > ev = odp_buffer_to_event(buf); > > if (odp_queue_enq(queue, ev)) { > LOG_ERR(" [%i] Queue enqueue failed.\n", thr); > odp_buffer_free(buf); > return -1; > } > > ev = odp_queue_deq(queue); > > buf = odp_buffer_from_event(ev); > > if (!odp_buffer_is_valid(buf)) { > LOG_ERR(" [%i] Queue empty.\n", thr); > return -1; > } > } > > Where this exactly event can be delayed? In the memory system. > > If other threads do the same - then all do enqueue 1 event first and > then dequeue one event. I can understand problem with queueing on one > cpu and dequeuing on other cpu. But on the same cpu it has to always > work. Isn't it?  No.
> >> >> On 4 April 2017 at 21:22, Brian Brooks wrote: >>> On 04/04 17:26:12, Bill Fischofer wrote: On Tue, Apr 4, 2017 at 3:37 PM, Brian Brooks wrote: > On 04/04 21:59:15, Maxim Uvarov wrote: >> On 04/04/17 21:47, Brian Brooks wrote: >>> Signed-off-by: Ola Liljedahl >>> Reviewed-by: Brian Brooks >>> Reviewed-by: Honnappa Nagarahalli >>> Reviewed-by: Kevin Wang >>> --- >>> test/common_plat/performance/odp_scheduling.c | 12 ++-- >>> 1 file changed, 10 insertions(+), 2 deletions(-) >>> >>> diff --git a/test/common_plat/performance/odp_scheduling.c >>> b/test/common_plat/performance/odp_scheduling.c >>> index c74a0713..38e76257 100644 >>> --- a/test/common_plat/perform
Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler
On Thu, Apr 6, 2017 at 1:46 PM, Bill Fischofer wrote: > On Thu, Apr 6, 2017 at 1:32 PM, Ola Liljedahl > wrote: >> On 6 April 2017 at 13:48, Jerin Jacob wrote: >>> -Original Message- Date: Thu, 6 Apr 2017 12:54:10 +0200 From: Ola Liljedahl To: Brian Brooks Cc: Jerin Jacob , "lng-odp@lists.linaro.org" Subject: Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler On 5 April 2017 at 18:50, Brian Brooks wrote: > On 04/05 21:27:37, Jerin Jacob wrote: >> -Original Message- >> > Date: Tue, 4 Apr 2017 13:47:52 -0500 >> > From: Brian Brooks >> > To: lng-odp@lists.linaro.org >> > Subject: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software >> > scheduler >> > X-Mailer: git-send-email 2.12.2 >> > >> > This work derives from Ola Liljedahl's prototype [1] which introduced >> > a >> > scalable scheduler design based on primarily lock-free algorithms and >> > data structures designed to decrease contention. A thread searches >> > through a data structure containing only queues that are both >> > non-empty >> > and allowed to be scheduled to that thread. Strict priority >> > scheduling is >> > respected, and (W)RR scheduling may be used within queues of the same >> > priority. >> > Lastly, pre-scheduling or stashing is not employed since it is >> > optional >> > functionality that can be implemented in the application. >> > >> > In addition to scalable ring buffers, the algorithm also uses >> > unbounded >> > concurrent queues. LL/SC and CAS variants exist in cases where >> > absense of >> > ABA problem cannot be proved, and also in cases where the compiler's >> > atomic >> > built-ins may not be lowered to the desired instruction(s). Finally, >> > a version >> > of the algorithm that uses locks is also provided. >> > >> > See platform/linux-generic/include/odp_config_internal.h for further >> > build >> > time configuration. >> > >> > Use --enable-schedule-scalable to conditionally compile this scheduler >> > into the library. 
>> >> This is an interesting stuff. >> >> Do you have any performance/latency numbers in comparison to exiting >> scheduler >> for completing say two stage(ORDERED->ATOMIC) or N stage pipeline on >> any platform? It is still a SW implementation, there is overhead accessed with queue enqueue/dequeue and the scheduling itself. So for an N-stage pipeline, overhead will accumulate. If only a subset of threads are associated with each stage (this could be beneficial for I-cache hit rate), there will be less need for scalability. What is the recommended strategy here for OCTEON/ThunderX? >>> >>> In the view of portable event driven applications(Works on both >>> embedded and server capable chips), the SW schedule is an important piece. >>> All threads/cores share all work? >>> >>> That is the recommend one in HW as it supports nativity. But HW provides >>> means to partition the work load based on odp schedule groups >>> >>> > > To give an idea, the avg latency reported by odp_sched_latency is down > to half > that of other schedulers (pre-scheduling/stashing disabled) on 4c A53, > 16c A57, > and 12c broadwell. We are still preparing numbers, and I think it's > worth mentioning > that they are subject to change as this patch series changes over time. > > I am not aware of an existing benchmark that involves switching between > different > queue types. Perhaps this is happening in an example app? This could be useful in e.g. IPsec termination. Use an atomic stage for the replay protection check and update. Now ODP has ordered locks for that so the "atomic" (exclusive) section can be achieved from an ordered processing stage. Perhaps Jerin knows some other application that utilises two-stage ORDERED->ATOMIC processing. >>> >>> We see ORDERED->ATOMIC as main use case for basic packet forward.Stage >>> 1(ORDERED) to process on N cores and Stage2(ATOMIC) to maintain the ingress >>> order. 
>> Doesn't ORDERED scheduling maintain the ingress packet order all the >> way to the egress interface? At least that's my understanding of ODP >> ordered queues. >> From an ODP perspective, I fail to see how the ATOMIC stage is needed. > > As pointed out earlier, ordered locks are another option to avoid a > separate processing stage simply to do in-sequence operations within > an ordered flow. I'd be curious to understand the use-case in a bit > more detail here. Ordered queues preserve the originating queue's > order, however to achieve end-to-end ordering involving multiple > processing stages requires that flows traverse only ordered or atomic > queues. If a pa
Re: [lng-odp] [API-NEXT PATCH v2 01/16] Fix native Clang build on ARMv8
I'm ok with this patch. That can go to master, not need for api-next. Maxim. On 04/06/17 20:09, Brian Brooks wrote: > On 04/05 01:24:44, Dmitry Eremin-Solenikov wrote: >> On 04.04.2017 23:34, Brian Brooks wrote: >>> On 04/04 23:27:51, Dmitry Eremin-Solenikov wrote: On 04.04.2017 23:26, Brian Brooks wrote: > On 04/04 23:04:10, Dmitry Eremin-Solenikov wrote: >> On 04.04.2017 21:47, Brian Brooks wrote: >>> Signed-off-by: Brian Brooks >> >> Brian, >> >> how does this fail with clang on ARMv8? Could you please include error >> message in the commit message? > > Well, you can't pass -mcx16 to clang when compiling natively on ARM. > > This is what it will look like: > > clang: error: argument unused during compilation: '-mcx16' > > and Autotools will halt the rest of the build. Does this happen during configure stage or when building? >>> >>> You will see that error during build. That is when clang is invoked. >>> But, the bug is in configure.ac where -mcx16 is added to CFLAGS when >>> it shouldn't be. >> >> Then why isn't it detected by configure script? > > The configure script is generated by Autoconf. Autoconf takes the > configure.ac file as input. The autoconf tool is invoked when you > run this project's ./bootstrap script. > > configure.ac is a combination of shell script and M4 (a macro language). > If something in configure.ac doesn't look like shell, it is likely > a macro. > > Lets step through it: > > 0 if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then > 1my_save_cflags="$CFLAGS" > 2 > 3CFLAGS=-mcx16 > 4AC_MSG_CHECKING([whether CC supports -mcx16]) > 5AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], > 6 [AC_MSG_RESULT([yes])] > 7 [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], > 8 [AC_MSG_RESULT([no])] > 9 ) > 10CFLAGS="$my_save_cflags" > 11 fi > > Line 0 is specifically checking for whether we are using GCC or > whether whatever compiler we're using's version is greater than > or equal to 5. > > Line 1 saves CFLAGS into a temporary variable, and line 10 restores > it. 
This technique is useful for when you want to avoid altering > CFLAGS until the very end of the configure.ac has been processed. > The reason is that the compiler may be invoked as configure.ac is > processed and you want a pristine set of CFLAGS when you do this. > > Line 3 adds -mcx16 to CFLAGS. The original author probably wants > to compile a small program with -mcx16 at this point in the > processing of the configure.ac file. If we know we're not running > on an x86-based chip, we can skip this entirely. More on that later. > > Line 4 is a macro to print something to inform the user that we > are checking "whether CC supports -mcx16". > > Lines 5-9 will invoke the compiler with a provided program > (an empty program: AC_LANG_PROGRAM([])) and execute lines 6 and 7 > on success or lines 7 and 8 on failure. The interesting part is > line 7 which appends CFLAGS to ODP_CFLAGS on success. We know > that CFLAGS contains -mcx16, but we must also be mindful that > there is nothing else in CFLAGS otherwise multiple consecutive > uses of this technique would end up appending redundant things > to CFLAGS. > > So, what happens when the configure script is run on ARM with > CC=clang? > > checking whether CC supports -mcx16... yes > > Wat... umm... let's try something else... > > $ touch poof.c > $ clang -mcx16 poof.c > clang: warning: argument unused during compilation: '-mcx16' > (.text+0x30): undefined reference to `main' > clang: error: linker command failed with exit code 1 (use -v to see > invocation) > > OK. So, why does it fail when we do a 'make'? Because of -Werror. > > $ clang -Werror -mcx16 poof.c > clang: error: argument unused during compilation: '-mcx16' > > -Werror is in ODP_CFLAGS, not CFLAGS used throughout the processing > of the configure.ac file. > > An understandable argument is to add that flag to CFLAGS, but we can > do better. Much better. 
We can skip this nonsense entirely because we > already know the architecture that we're targeting (the ${host} > variable). Skipping that saves time. > > So, we only check for the x86-based -mcx16 flag if we're targeting > an x86-based processor: > > if "${host}" == i?86* -o "${host}" == x86*; then >if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then > > [..] > >fi > fi > > It is that simple! :) > >> -- >> With best wishes >> Dmitry
Re: [lng-odp] [API-NEXT PATCH v2 07/16] test: odp_scheduling: Handle dequeueing from a concurrent queue
On 04/06/17 13:35, Ola Liljedahl wrote: > On 5 April 2017 at 23:39, Maxim Uvarov wrote: >> On 04/05/17 17:30, Ola Liljedahl wrote: >>> On 5 April 2017 at 14:50, Maxim Uvarov wrote: On 04/05/17 06:57, Honnappa Nagarahalli wrote: > This can go into master/api-next as an independent patch. Agree? agree. If we accept implementation where events can be 'delayed' >>> Probably all platforms with HW queues. >>> than it looks like we missed some api to sync queues. >>> When would those API's be used? >>> >> >> might be in case like that. Might be it's not needed in real world >> application. > This was a test program. I don't see the same situation occurring in a > real world application. I could be wrong. > >> >> My point that if situation of postpone event is accepted that we need >> document that in api doxygen comment. > I think the asynchronous behaviour is the default. ODP is a hardware > abstraction. HW is often asynchronous, writes are posted etc. Ensuring > synchronous behaviour costs performance. > > Single-threaded software is "synchronous", writes are immediately > visible to the thread. But as soon as you go multi-threaded and don't > use locks to access shared resources, software also becomes > "asynchronous" (don't know if it is the right word here). Only if you > use locks to synchronise accesses to shared memory you return to some > form of sequential consistency (all threads see updates in the same > order). You don't want to use locks, that quickly creates scalability > bottlenecks. > > Since the scalable scheduler does its best to avoid locks > (non-scalable) and sequential consistency (slow), instead utilising > lock-less and lock-free algorithms and weak memory ordering (e.g. > acquire/release), it exposes the underlying hardware characteristics. > Ola, I think you better understand how week memory ordering works. In this case I understand that hardware can 'delay' events in queue and make them not visible just after queueing for some reason. 
And it's not possible to solve in implementation. If we speak totally
about software I would understand if one thread did queue and the other
dequeue. Or the case where you queued X and dequeued Y. But in this case
each thread queued 1 and dequeued 1. Which looks like if you store some
variable in one thread then you need several loads to get the value
which was stored. Is that the right behaviour of weak ordering?

Maxim.

>>
>> Maxim.
>> But I do not see why we need this patch. On the same cpu test queue 1
>> event and after that dequeue 1 event:
>>
>>     for (i = 0; i < QUEUE_ROUNDS; i++) {
>>         ev = odp_buffer_to_event(buf);
>>
>>         if (odp_queue_enq(queue, ev)) {
>>             LOG_ERR(" [%i] Queue enqueue failed.\n", thr);
>>             odp_buffer_free(buf);
>>             return -1;
>>         }
>>
>>         ev = odp_queue_deq(queue);
>>         buf = odp_buffer_from_event(ev);
>>
>>         if (!odp_buffer_is_valid(buf)) {
>>             LOG_ERR(" [%i] Queue empty.\n", thr);
>>             return -1;
>>         }
>>     }
>>
>> Where exactly can this event be delayed?
>>> In the memory system.
>> If other threads do the same - then all do enqueue 1 event first and
>> then dequeue one event. I can understand a problem with queueing on one
>> cpu and dequeuing on another cpu. But on the same cpu it has to always
>> work. Isn't it?
>>> No.
>>> Maxim.
> > On 4 April 2017 at 21:22, Brian Brooks wrote: >> On 04/04 17:26:12, Bill Fischofer wrote: >>> On Tue, Apr 4, 2017 at 3:37 PM, Brian Brooks >>> wrote: On 04/04 21:59:15, Maxim Uvarov wrote: > On 04/04/17 21:47, Brian Brooks wrote: >> Signed-off-by: Ola Liljedahl >> Reviewed-by: Brian Brooks >> Reviewed-by: Honnappa Nagarahalli >> Reviewed-by: Kevin Wang >> --- >> test/common_plat/performance/odp_scheduling.c | 12 ++-- >> 1 file changed, 10 insertions(+), 2 deletions(-) >> >> diff --git a/test/common_plat/performance/odp_scheduling.c >> b/test/common_plat/performance/odp_scheduling.c >> index c74a0713..38e76257 100644 >> --- a/test/common_plat/performance/odp_scheduling.c >> +++ b/test/common_plat/performance/odp_scheduling.c >> @@ -273,7 +273,7 @@ static int test_plain_queue(int thr, >> test_globals_t *globals) >> test_message_t *t_msg; >> odp_queue_t queue; >> uint64_t c1, c2, cycles; >> - int i; >> + int i, j; >> >> /* Alloc test message */ >> buf = odp_buffer_alloc(globals->pool); >> @@ -307,7 +307,15 @@ static int test_plain_queue(int thr, >> test_globals_t *globals
Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler
On Thu, Apr 6, 2017 at 1:32 PM, Ola Liljedahl wrote: > On 6 April 2017 at 13:48, Jerin Jacob wrote: >> -Original Message- >>> Date: Thu, 6 Apr 2017 12:54:10 +0200 >>> From: Ola Liljedahl >>> To: Brian Brooks >>> Cc: Jerin Jacob , >>> "lng-odp@lists.linaro.org" >>> Subject: Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software >>> scheduler >>> >>> On 5 April 2017 at 18:50, Brian Brooks wrote: >>> > On 04/05 21:27:37, Jerin Jacob wrote: >>> >> -Original Message- >>> >> > Date: Tue, 4 Apr 2017 13:47:52 -0500 >>> >> > From: Brian Brooks >>> >> > To: lng-odp@lists.linaro.org >>> >> > Subject: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software >>> >> > scheduler >>> >> > X-Mailer: git-send-email 2.12.2 >>> >> > >>> >> > This work derives from Ola Liljedahl's prototype [1] which introduced a >>> >> > scalable scheduler design based on primarily lock-free algorithms and >>> >> > data structures designed to decrease contention. A thread searches >>> >> > through a data structure containing only queues that are both non-empty >>> >> > and allowed to be scheduled to that thread. Strict priority scheduling >>> >> > is >>> >> > respected, and (W)RR scheduling may be used within queues of the same >>> >> > priority. >>> >> > Lastly, pre-scheduling or stashing is not employed since it is optional >>> >> > functionality that can be implemented in the application. >>> >> > >>> >> > In addition to scalable ring buffers, the algorithm also uses unbounded >>> >> > concurrent queues. LL/SC and CAS variants exist in cases where absense >>> >> > of >>> >> > ABA problem cannot be proved, and also in cases where the compiler's >>> >> > atomic >>> >> > built-ins may not be lowered to the desired instruction(s). Finally, a >>> >> > version >>> >> > of the algorithm that uses locks is also provided. >>> >> > >>> >> > See platform/linux-generic/include/odp_config_internal.h for further >>> >> > build >>> >> > time configuration. 
>>> >> > >>> >> > Use --enable-schedule-scalable to conditionally compile this scheduler >>> >> > into the library. >>> >> >>> >> This is an interesting stuff. >>> >> >>> >> Do you have any performance/latency numbers in comparison to exiting >>> >> scheduler >>> >> for completing say two stage(ORDERED->ATOMIC) or N stage pipeline on any >>> >> platform? >>> It is still a SW implementation, there is overhead accessed with queue >>> enqueue/dequeue and the scheduling itself. >>> So for an N-stage pipeline, overhead will accumulate. >>> If only a subset of threads are associated with each stage (this could >>> be beneficial for I-cache hit rate), there will be less need for >>> scalability. >>> What is the recommended strategy here for OCTEON/ThunderX? >> >> In the view of portable event driven applications(Works on both >> embedded and server capable chips), the SW schedule is an important piece. >> >>> All threads/cores share all work? >> >> That is the recommend one in HW as it supports nativity. But HW provides >> means to partition the work load based on odp schedule groups >> >> >>> >>> > >>> > To give an idea, the avg latency reported by odp_sched_latency is down to >>> > half >>> > that of other schedulers (pre-scheduling/stashing disabled) on 4c A53, >>> > 16c A57, >>> > and 12c broadwell. We are still preparing numbers, and I think it's worth >>> > mentioning >>> > that they are subject to change as this patch series changes over time. >>> > >>> > I am not aware of an existing benchmark that involves switching between >>> > different >>> > queue types. Perhaps this is happening in an example app? >>> This could be useful in e.g. IPsec termination. Use an atomic stage >>> for the replay protection check and update. Now ODP has ordered locks >>> for that so the "atomic" (exclusive) section can be achieved from an >>> ordered processing stage. Perhaps Jerin knows some other application >>> that utilises two-stage ORDERED->ATOMIC processing. 
>> >> We see ORDERED->ATOMIC as main use case for basic packet forward.Stage >> 1(ORDERED) to process on N cores and Stage2(ATOMIC) to maintain the ingress >> order. > Doesn't ORDERED scheduling maintain the ingress packet order all the > way to the egress interface? A least that's my understanding of ODP > ordered queues. > From an ODP perspective, I fail to see how the ATOMIC stage is needed. As pointed out earlier, ordered locks are another option to avoid a separate processing stage simply to do in-sequence operations within an ordered flow. I'd be curious to understand the use-case in a bit more detail here. Ordered queues preserve the originating queue's order, however to achieve end-to-end ordering involving multiple processing stages requires that flows traverse only ordered or atomic queues. If a parallel queue is used ordering is indeterminate from that point on in the pipeline. > >> >> >>> >>> > >>> >> When we say scalable scheduler, What application/means used to quantify >>> >> scalablity?? >>> It starts
Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler
On 6 April 2017 at 13:48, Jerin Jacob wrote: > -Original Message- >> Date: Thu, 6 Apr 2017 12:54:10 +0200 >> From: Ola Liljedahl >> To: Brian Brooks >> Cc: Jerin Jacob , >> "lng-odp@lists.linaro.org" >> Subject: Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software >> scheduler >> >> On 5 April 2017 at 18:50, Brian Brooks wrote: >> > On 04/05 21:27:37, Jerin Jacob wrote: >> >> -Original Message- >> >> > Date: Tue, 4 Apr 2017 13:47:52 -0500 >> >> > From: Brian Brooks >> >> > To: lng-odp@lists.linaro.org >> >> > Subject: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software >> >> > scheduler >> >> > X-Mailer: git-send-email 2.12.2 >> >> > >> >> > This work derives from Ola Liljedahl's prototype [1] which introduced a >> >> > scalable scheduler design based on primarily lock-free algorithms and >> >> > data structures designed to decrease contention. A thread searches >> >> > through a data structure containing only queues that are both non-empty >> >> > and allowed to be scheduled to that thread. Strict priority scheduling >> >> > is >> >> > respected, and (W)RR scheduling may be used within queues of the same >> >> > priority. >> >> > Lastly, pre-scheduling or stashing is not employed since it is optional >> >> > functionality that can be implemented in the application. >> >> > >> >> > In addition to scalable ring buffers, the algorithm also uses unbounded >> >> > concurrent queues. LL/SC and CAS variants exist in cases where absense >> >> > of >> >> > ABA problem cannot be proved, and also in cases where the compiler's >> >> > atomic >> >> > built-ins may not be lowered to the desired instruction(s). Finally, a >> >> > version >> >> > of the algorithm that uses locks is also provided. >> >> > >> >> > See platform/linux-generic/include/odp_config_internal.h for further >> >> > build >> >> > time configuration. >> >> > >> >> > Use --enable-schedule-scalable to conditionally compile this scheduler >> >> > into the library. 
>> >> >> >> This is an interesting stuff. >> >> >> >> Do you have any performance/latency numbers in comparison to exiting >> >> scheduler >> >> for completing say two stage(ORDERED->ATOMIC) or N stage pipeline on any >> >> platform? >> It is still a SW implementation, there is overhead accessed with queue >> enqueue/dequeue and the scheduling itself. >> So for an N-stage pipeline, overhead will accumulate. >> If only a subset of threads are associated with each stage (this could >> be beneficial for I-cache hit rate), there will be less need for >> scalability. >> What is the recommended strategy here for OCTEON/ThunderX? > > In the view of portable event driven applications(Works on both > embedded and server capable chips), the SW schedule is an important piece. > >> All threads/cores share all work? > > That is the recommend one in HW as it supports nativity. But HW provides > means to partition the work load based on odp schedule groups > > >> >> > >> > To give an idea, the avg latency reported by odp_sched_latency is down to >> > half >> > that of other schedulers (pre-scheduling/stashing disabled) on 4c A53, 16c >> > A57, >> > and 12c broadwell. We are still preparing numbers, and I think it's worth >> > mentioning >> > that they are subject to change as this patch series changes over time. >> > >> > I am not aware of an existing benchmark that involves switching between >> > different >> > queue types. Perhaps this is happening in an example app? >> This could be useful in e.g. IPsec termination. Use an atomic stage >> for the replay protection check and update. Now ODP has ordered locks >> for that so the "atomic" (exclusive) section can be achieved from an >> ordered processing stage. Perhaps Jerin knows some other application >> that utilises two-stage ORDERED->ATOMIC processing. > > We see ORDERED->ATOMIC as main use case for basic packet forward.Stage > 1(ORDERED) to process on N cores and Stage2(ATOMIC) to maintain the ingress > order. 
Doesn't ORDERED scheduling maintain the ingress packet order all the
way to the egress interface? At least that's my understanding of ODP
ordered queues.
From an ODP perspective, I fail to see how the ATOMIC stage is needed.

>> >> When we say scalable scheduler, What application/means used to
>> >> quantify scalability??
>> It starts with the design, use non-blocking data structures and try to
>> distribute data to threads so that they do not access shared data very
>> often. Some of this is a little detrimental to single-threaded
>> performance, you need to use more atomic operations. It seems to work
>> well on ARM (A53, A57) though, the penalty is higher on x86 (x86 is
>> very good with spin locks, cmpxchg seems to have more overhead
>> compared to ldxr/stxr on ARM which can have less memory ordering
>> constraints). We actually use different synchronisation strategies on
>> ARM and on x86 (compile time configuration).
> Another school of thought is to avoid all the lock using only si
Re: [lng-odp] [PATCH v2] fix native Clang build on ARMv8
On 04/06 21:02:54, Dmitry Eremin-Solenikov wrote: > On 06.04.2017 20:25, Brian Brooks wrote: > > See [1] for details. > > > > [1] https://lists.linaro.org/pipermail/lng-odp/2017-April/029684.html > > Brian, not that this is a good long description of the commit. Thank you. > I'd still suggest to just change the line setting CFLAGS from just > -mcx16 to -mcx16 -Werror. And even better to change that to append > mcx+Werror rather than just setting it. I understand your suggested code change, but I do not understand why you are suggesting it after having read what I wrote. Please explain. > > Signed-off-by: Brian Brooks > > --- > > configure.ac | 30 -- > > 1 file changed, 16 insertions(+), 14 deletions(-) > > > > diff --git a/configure.ac b/configure.ac > > index 9320f360..d364b8dd 100644 > > --- a/configure.ac > > +++ b/configure.ac > > @@ -303,20 +303,22 @@ ODP_CFLAGS="$ODP_CFLAGS -std=c99" > > # Extra flags for example to suppress certain warning types > > ODP_CFLAGS="$ODP_CFLAGS $ODP_CFLAGS_EXTRA" > > > > -# > > -# Check if compiler supports cmpxchng16 > > -## > > -if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then > > - my_save_cflags="$CFLAGS" > > - > > - CFLAGS=-mcx16 > > - AC_MSG_CHECKING([whether CC supports -mcx16]) > > - AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], > > - [AC_MSG_RESULT([yes])] > > - [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], > > - [AC_MSG_RESULT([no])] > > - ) > > - CFLAGS="$my_save_cflags" > > +## > > +# Check if compiler supports cmpxchng16 on x86-based architectures > > +## > > +if "${host}" == i?86* -o "${host}" == x86*; then > > + if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then > > + my_save_cflags="$CFLAGS" > > + > > + CFLAGS=-mcx16 > > + AC_MSG_CHECKING([whether CC supports -mcx16]) > > + AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], > > + [AC_MSG_RESULT([yes])] > > + [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], > > + [AC_MSG_RESULT([no])] > > + ) > > + CFLAGS="$my_save_cflags" > > + fi > > fi > > > > ## > > > > > -- > With best wishes > 
Dmitry
Re: [lng-odp] [PATCH v2] fix native Clang build on ARMv8
On 06.04.2017 20:25, Brian Brooks wrote: > See [1] for details. > > [1] https://lists.linaro.org/pipermail/lng-odp/2017-April/029684.html Brian, not that this is a good long description of the commit. I'd still suggest to just change the line setting CFLAGS from just -mcx16 to -mcx16 -Werror. And even better to change that to append mcx+Werror rather than just setting it. > > Signed-off-by: Brian Brooks > --- > configure.ac | 30 -- > 1 file changed, 16 insertions(+), 14 deletions(-) > > diff --git a/configure.ac b/configure.ac > index 9320f360..d364b8dd 100644 > --- a/configure.ac > +++ b/configure.ac > @@ -303,20 +303,22 @@ ODP_CFLAGS="$ODP_CFLAGS -std=c99" > # Extra flags for example to suppress certain warning types > ODP_CFLAGS="$ODP_CFLAGS $ODP_CFLAGS_EXTRA" > > -# > -# Check if compiler supports cmpxchng16 > -## > -if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then > - my_save_cflags="$CFLAGS" > - > - CFLAGS=-mcx16 > - AC_MSG_CHECKING([whether CC supports -mcx16]) > - AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], > - [AC_MSG_RESULT([yes])] > - [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], > - [AC_MSG_RESULT([no])] > - ) > - CFLAGS="$my_save_cflags" > +## > +# Check if compiler supports cmpxchng16 on x86-based architectures > +## > +if "${host}" == i?86* -o "${host}" == x86*; then > + if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then > + my_save_cflags="$CFLAGS" > + > + CFLAGS=-mcx16 > + AC_MSG_CHECKING([whether CC supports -mcx16]) > + AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], > + [AC_MSG_RESULT([yes])] > + [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], > + [AC_MSG_RESULT([no])] > + ) > + CFLAGS="$my_save_cflags" > + fi > fi > > ## > -- With best wishes Dmitry
[lng-odp] [PATCH v2] fix native Clang build on ARMv8
See [1] for details. [1] https://lists.linaro.org/pipermail/lng-odp/2017-April/029684.html Signed-off-by: Brian Brooks --- configure.ac | 30 -- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/configure.ac b/configure.ac index 9320f360..d364b8dd 100644 --- a/configure.ac +++ b/configure.ac @@ -303,20 +303,22 @@ ODP_CFLAGS="$ODP_CFLAGS -std=c99" # Extra flags for example to suppress certain warning types ODP_CFLAGS="$ODP_CFLAGS $ODP_CFLAGS_EXTRA" -# -# Check if compiler supports cmpxchng16 -## -if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then - my_save_cflags="$CFLAGS" - - CFLAGS=-mcx16 - AC_MSG_CHECKING([whether CC supports -mcx16]) - AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], - [AC_MSG_RESULT([yes])] - [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], - [AC_MSG_RESULT([no])] - ) - CFLAGS="$my_save_cflags" +## +# Check if compiler supports cmpxchng16 on x86-based architectures +## +if "${host}" == i?86* -o "${host}" == x86*; then + if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then + my_save_cflags="$CFLAGS" + + CFLAGS=-mcx16 + AC_MSG_CHECKING([whether CC supports -mcx16]) + AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])], + [AC_MSG_RESULT([yes])] + [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"], + [AC_MSG_RESULT([no])] + ) + CFLAGS="$my_save_cflags" + fi fi ## -- 2.12.2
Re: [lng-odp] [API-NEXT PATCH v2 01/16] Fix native Clang build on ARMv8
On 04/05 01:24:44, Dmitry Eremin-Solenikov wrote:
> On 04.04.2017 23:34, Brian Brooks wrote:
> > On 04/04 23:27:51, Dmitry Eremin-Solenikov wrote:
> >> On 04.04.2017 23:26, Brian Brooks wrote:
> >>> On 04/04 23:04:10, Dmitry Eremin-Solenikov wrote:
> On 04.04.2017 21:47, Brian Brooks wrote:
> > Signed-off-by: Brian Brooks
>
> Brian,
>
> how does this fail with clang on ARMv8? Could you please include error
> message in the commit message?
> >>>
> >>> Well, you can't pass -mcx16 to clang when compiling natively on ARM.
> >>>
> >>> This is what it will look like:
> >>>
> >>> clang: error: argument unused during compilation: '-mcx16'
> >>>
> >>> and Autotools will halt the rest of the build.
> >>
> >> Does this happen during configure stage or when building?
> >
> > You will see that error during build. That is when clang is invoked.
> > But, the bug is in configure.ac where -mcx16 is added to CFLAGS when
> > it shouldn't be.
>
> Then why isn't it detected by configure script?

The configure script is generated by Autoconf. Autoconf takes the
configure.ac file as input. The autoconf tool is invoked when you run
this project's ./bootstrap script. configure.ac is a combination of
shell script and M4 (a macro language). If something in configure.ac
doesn't look like shell, it is likely a macro. Let's step through it:

 0  if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then
 1      my_save_cflags="$CFLAGS"
 2
 3      CFLAGS=-mcx16
 4      AC_MSG_CHECKING([whether CC supports -mcx16])
 5      AC_COMPILE_IFELSE([AC_LANG_PROGRAM([])],
 6                        [AC_MSG_RESULT([yes])]
 7                        [ODP_CFLAGS="$ODP_CFLAGS $CFLAGS"],
 8                        [AC_MSG_RESULT([no])]
 9      )
10      CFLAGS="$my_save_cflags"
11  fi

Line 0 checks whether the compiler is something other than GCC, or is a
compiler whose major version is greater than or equal to 5.

Line 1 saves CFLAGS into a temporary variable, and line 10 restores it.
This technique is useful when you want to avoid altering CFLAGS until
the very end of configure.ac has been processed.
The reason is that the compiler may be invoked as configure.ac is
processed and you want a pristine set of CFLAGS when you do this.

Line 3 adds -mcx16 to CFLAGS. The original author probably wants to
compile a small program with -mcx16 at this point in the processing of
the configure.ac file. If we know we're not running on an x86-based
chip, we can skip this entirely. More on that later.

Line 4 is a macro to print something to inform the user that we are
checking "whether CC supports -mcx16".

Lines 5-9 will invoke the compiler with a provided program (an empty
program: AC_LANG_PROGRAM([])) and execute lines 6 and 7 on success or
line 8 on failure. The interesting part is line 7 which appends CFLAGS
to ODP_CFLAGS on success. We know that CFLAGS contains -mcx16, but we
must also be mindful that there is nothing else in CFLAGS, otherwise
multiple consecutive uses of this technique would end up appending
redundant things to ODP_CFLAGS.

So, what happens when the configure script is run on ARM with CC=clang?

checking whether CC supports -mcx16... yes

Wat... umm... let's try something else...

$ touch poof.c
$ clang -mcx16 poof.c
clang: warning: argument unused during compilation: '-mcx16'
(.text+0x30): undefined reference to `main'
clang: error: linker command failed with exit code 1 (use -v to see invocation)

OK. So, why does it fail when we do a 'make'? Because of -Werror.

$ clang -Werror -mcx16 poof.c
clang: error: argument unused during compilation: '-mcx16'

-Werror is in ODP_CFLAGS, not the CFLAGS used throughout the processing
of the configure.ac file. An understandable argument is to add that flag
to CFLAGS, but we can do better. Much better. We can skip this nonsense
entirely because we already know the architecture that we're targeting
(the ${host} variable). Skipping that saves time.
So, we only check for the x86-based -mcx16 flag if we're targeting an
x86-based processor:

if "${host}" == i?86* -o "${host}" == x86*; then
    if test "${CC}" != "gcc" -o ${CC_VERSION_MAJOR} -ge 5; then
        [..]
    fi
fi

It is that simple! :)

> --
> With best wishes
> Dmitry
[lng-odp] [API-NEXT PATCH] validation: pktio: remove CRCs from parser test packets
Remove precalculated CRCs from test packets. Some pktio devices may drop CRCs causing the tests to fail. Signed-off-by: Matias Elo --- test/common_plat/validation/api/pktio/parser.h | 24 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/test/common_plat/validation/api/pktio/parser.h b/test/common_plat/validation/api/pktio/parser.h index 57c6238..5cc2b98 100644 --- a/test/common_plat/validation/api/pktio/parser.h +++ b/test/common_plat/validation/api/pktio/parser.h @@ -28,6 +28,8 @@ int parser_suite_init(void); /* test arrays: */ extern odp_testinfo_t parser_suite[]; +/* Test packets without CRC */ + /** * ARP request */ @@ -39,7 +41,7 @@ static const uint8_t test_packet_arp[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xC0, 0xA8, 0x01, 0x02, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, - 0x0E, 0x0F, 0x10, 0x11, 0xA1, 0xA8, 0x27, 0x43, + 0x0E, 0x0F, 0x10, 0x11 }; /** @@ -53,7 +55,7 @@ static const uint8_t test_packet_ipv4_icmp[] = { 0x01, 0x02, 0x00, 0x00, 0xB7, 0xAB, 0x00, 0x01, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, - 0x0E, 0x0F, 0x10, 0x11, 0xD9, 0x7F, 0xE8, 0x02, + 0x0E, 0x0F, 0x10, 0x11 }; /** @@ -67,7 +69,7 @@ static const uint8_t test_packet_ipv4_tcp[] = { 0x01, 0x01, 0x04, 0xD2, 0x10, 0xE1, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x50, 0x00, 0x00, 0x00, 0x0C, 0xCC, 0x00, 0x00, 0x00, 0x01, - 0x02, 0x03, 0x04, 0x05, 0x2E, 0xDE, 0x5E, 0x48, + 0x02, 0x03, 0x04, 0x05 }; /** @@ -81,7 +83,7 @@ static const uint8_t test_packet_ipv4_udp[] = { 0x01, 0x01, 0x00, 0x3F, 0x00, 0x3F, 0x00, 0x1A, 0x2F, 0x97, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, - 0x0E, 0x0F, 0x10, 0x11, 0x64, 0xF4, 0xE4, 0xB6, + 0x0E, 0x0F, 0x10, 0x11 }; /** @@ -96,7 +98,7 @@ static const uint8_t test_packet_vlan_ipv4_udp[] = { 0x01, 0x02, 0xC4, 0xA8, 0x01, 0x01, 0x00, 0x3F, 0x00, 0x3F, 0x00, 0x16, 0x4D, 0xBF, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 
0x06, 0x07, 0x08, 0x09, - 0x0A, 0x0B, 0x0C, 0x0D, 0xCB, 0xBF, 0xD0, 0x29, + 0x0A, 0x0B, 0x0C, 0x0D }; /** @@ -112,7 +114,7 @@ static const uint8_t test_packet_vlan_qinq_ipv4_udp[] = { 0xF3, 0x73, 0xC0, 0xA8, 0x01, 0x02, 0xC4, 0xA8, 0x01, 0x01, 0x00, 0x3F, 0x00, 0x3F, 0x00, 0x12, 0x63, 0xDF, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, - 0x06, 0x07, 0x08, 0x09, 0x80, 0x98, 0xB8, 0x18, + 0x06, 0x07, 0x08, 0x09 }; /** @@ -126,8 +128,7 @@ static const uint8_t test_packet_ipv6_icmp[] = { 0x09, 0xFF, 0xFE, 0x00, 0x04, 0x00, 0x35, 0x55, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66, 0x77, 0x77, 0x77, 0x77, 0x88, 0x88, 0x88, 0x88, 0x80, 0x00, - 0x1B, 0xC2, 0x00, 0x01, 0x00, 0x02, 0xE0, 0x68, - 0x0E, 0xBA, + 0x1B, 0xC2, 0x00, 0x01, 0x00, 0x02 }; /** @@ -143,7 +144,7 @@ static const uint8_t test_packet_ipv6_tcp[] = { 0x77, 0x77, 0x88, 0x88, 0x88, 0x88, 0x04, 0xD2, 0x10, 0xE1, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x50, 0x00, 0x00, 0x00, 0x36, 0x37, - 0x00, 0x00, 0x28, 0x67, 0xD2, 0xAF, + 0x00, 0x00 }; /** @@ -157,8 +158,7 @@ static const uint8_t test_packet_ipv6_udp[] = { 0x09, 0xFF, 0xFE, 0x00, 0x04, 0x00, 0x35, 0x55, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66, 0x77, 0x77, 0x77, 0x77, 0x88, 0x88, 0x88, 0x88, 0x00, 0x3F, - 0x00, 0x3F, 0x00, 0x08, 0x9B, 0x68, 0x35, 0xD3, - 0x64, 0x49, + 0x00, 0x3F, 0x00, 0x08, 0x9B, 0x68 }; /** @@ -174,7 +174,7 @@ static const uint8_t test_packet_vlan_ipv6_udp[] = { 0x04, 0x00, 0x35, 0x55, 0x55, 0x55, 0x66, 0x66, 0x66, 0x66, 0x77, 0x77, 0x77, 0x77, 0x88, 0x88, 0x88, 0x88, 0x00, 0x3F, 0x00, 0x3F, 0x00, 0x08, - 0x9B, 0x68, 0xC5, 0xD8, 0x2F, 0x5C, + 0x9B, 0x68 }; #endif -- 2.7.4
Re: [lng-odp] [PATCH] test: tm: skip tm result under travis run
Please review this patch. TM very often fails in Travis CI; this patch should fix it. Maxim. On 31 March 2017 at 23:39, Maxim Uvarov wrote: > tm test fails time to time in Travis environment. Because > of we can not control that machine we can not do things like > taskset and core isolation there. So simply run the test and ignore > its result. Treat only segfault as an actual error. Linaro CI > will still do the full test. > https://bugs.linaro.org/show_bug.cgi?id=2881 > > Signed-off-by: Maxim Uvarov > --- > .../validation/api/traffic_mngr/Makefile.am| 12 +-- > .../validation/api/traffic_mngr/traffic_mngr.sh| 25 > ++ > 2 files changed, 35 insertions(+), 2 deletions(-) > create mode 100755 test/common_plat/validation/ > api/traffic_mngr/traffic_mngr.sh > > diff --git a/test/common_plat/validation/api/traffic_mngr/Makefile.am > b/test/common_plat/validation/api/traffic_mngr/Makefile.am > index 35e689a0..a012c1b3 100644 > --- a/test/common_plat/validation/api/traffic_mngr/Makefile.am > +++ b/test/common_plat/validation/api/traffic_mngr/Makefile.am > @@ -1,10 +1,18 @@ > include ../Makefile.inc > > +TESTS_ENVIRONMENT += TEST_DIR=${builddir} > + > +TESTSCRIPTS = traffic_mngr.sh > +TEST_EXTENSIONS = .sh > + > +TESTS = $(TESTSCRIPTS) > + > noinst_LTLIBRARIES = libtesttraffic_mngr.la > libtesttraffic_mngr_la_SOURCES = traffic_mngr.c > > -test_PROGRAMS = traffic_mngr_main$(EXEEXT) > +bin_PROGRAMS = traffic_mngr_main$(EXEEXT) > dist_traffic_mngr_main_SOURCES = traffic_mngr_main.c > traffic_mngr_main_LDADD = libtesttraffic_mngr.la -lm $(LIBCUNIT_COMMON) > $(LIBODP) > > -EXTRA_DIST = traffic_mngr.h > +EXTRA_DIST = traffic_mngr.h $(TESTSCRIPTS) > +dist_check_SCRIPTS = $(TESTSCRIPTS) > diff --git a/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh > b/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh > new file mode 100755 > index ..a7d54162 > --- /dev/null > +++ b/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh > @@ -0,0 +1,25 @@ > +#!/bin/sh > +# > +# 
Copyright (c) 2017, Linaro Limited > +# All rights reserved. > +# > +# SPDX-License-Identifier: BSD-3-Clause > +# > + > +# directory where test binaries have been built > +TEST_DIR="${TEST_DIR:-$(dirname $0)}" > + > +# exit codes expected by automake for skipped tests > +TEST_SKIPPED=77 > + > +${TEST_DIR}/traffic_mngr_main${EXEEXT} > +ret=$? > + > +SIGSEGV=139 > + > +if [ "${TRAVIS}" = "true" ] && [ $ret -ne 0 ] && [ $ret -ne ${SIGSEGV} ]; > then > + echo "SKIP: skip due to not isolated environment" > + exit ${TEST_SKIPPED} > +fi > + > +exit $ret > -- > 2.11.0.295.gd7dffce > >
[lng-odp] [Linaro/odp] 0955fb: linux-generic: decouple odp_errno define from odp-...
Branch: refs/heads/master Home: https://github.com/Linaro/odp Commit: 0955fbb395dc1651a8bcd473beae2154d39f4a69 https://github.com/Linaro/odp/commit/0955fbb395dc1651a8bcd473beae2154d39f4a69 Author: Balakrishna Garapati Date: 2017-04-06 (Thu, 06 Apr 2017) Changed paths: M platform/linux-generic/Makefile.am A platform/linux-generic/include/odp_errno_define.h M platform/linux-generic/include/odp_internal.h Log Message: --- linux-generic: decouple odp_errno define from odp-linux makes it easy to define odp_errno to dpdk rteerrno and fixes linking issues. Signed-off-by: Balakrishna Garapati Reviewed-and-tested-by: Bill Fischofer Signed-off-by: Maxim Uvarov
Re: [lng-odp] Sequence requirments for odp_schedule_order_lock()
On Thu, Apr 6, 2017 at 8:22 AM, Radosław Biernacki wrote: > Hi Bill, > > Thank you for reply and sorry for my long reply. > > IMHO the description which you give could be copied to > doc/users-guide/users-guide.adoc as there are not many information how this > should work across all implementations. > I don't fully understand following sentence "since each individual index > follows the same order, which is fixed for both Thread A and B" > > Could you please describe what should happen in below example? > Each line specify the time of event: > T1: WorkerA calls odp_schedule() -> gets packet 1 from queueA > T2: WorkerB calls odp_schedule() -> gets packet 2 from queueA (which came > into interface after packet1) > > T3: WorkerB calls odp_schedule_order_lock(1) WorkerB blocks waiting for sequence 2's turn to use lock(1) > T3: WorkerA calls odp_schedule_order_lock(0) Since WorkerA has the current sequence number (1) for lock(0), WorkerA enters the critical section protected by lock(0). > > than WorkerA calls odp_schedule_order_unlock(0) and calls This means that lock(0) is now available to sequence 2. Any callers with higher sequence numbers will wait. It is a programming error for WorkerA to attempt to re-acquire lock(0) under its current context since it's already used this lock. > odp_schedule_order_lock(1) (and unlock(1)) > while WorkerB only unlock the 1 > > Will WorkerB be suspended till WorkerA unlocks order lock 1? Yes. WorkerB has sequence 2 and lock 1 is not available to it until after sequence 1 (WorkerA) has used this lock (or released it's context). > We implementing this API now and our world seems totally different than > linux-generic so it would be best if you could specify the remaining part of > the sequence (as it would be a sequence requirement for us). > > Topic 2: I guess that nested order locks are forbidden? If that's the case, > than we should also mention that in docs. No, ordered locks may be nested without restriction. 
For example, suppose WorkerA acquires lock(0) and then tries to acquire lock(1) before it releases lock(0). This is fine since it holds the proper sequence for both locks. The difference between ordered locks and spinlocks is that they can only be acquired by a thread that's in sequence, so the thread holding the current sequence always gets the lock when it asks for it and all others wait. So lock inversions are impossible with ordered locks because inversions can only happen when it's unpredictable who gets which lock. In the case of ordered locks that sequence is fixed by the ordered context itself, not by the individual locks. > > 2017-03-29 19:38 GMT+02:00 Bill Fischofer : >> >> On Wed, Mar 29, 2017 at 7:18 AM, Radosław Biernacki >> wrote: >> > Hi all, >> > >> > The documentation for odp_schedule_order_lock(unsigned lock_index) does >> > not >> > specify the sequence in which the lock_index need to be given. >> >> That's because there is no such required sequence. Each lock index is >> a distinct ordered synchronization point and the only requirement is >> that each thread may only use each index at most once per ordered >> context. Threads may skip any or all indexes as there is no obligation >> for threads to use ordered locks when they are present in the context, >> and the index order doesn't matter to ODP. >> >> When threads enter an ordered context they hold a sequence that is >> determined by the source ordered queue. For a thread to be able to >> acquire ordered lock index i, all that is required is that all threads >> holding lower sequences have either acquired and released that index >> or else definitively passed on using it by exiting the ordered >> context. So if thread A tries to acquire index 1 first and then index >> 2 while Thread B tries to acquire index 2 first and then index 1 it >> doesn't matter since each individual index follows the same order, >> which is fixed for both Thread A and B. 
>>
>> Note that there may be a loss of parallelism if indexes are permuted,
>> however there is no possibility of deadlock. Best parallelism will be
>> achieved when all threads use indexes in the same sequence, but ODP
>> doesn't care what sequence of indexes the application chooses to use.
>>
>> >
>> > Shouldn't the following statements be included in description of this
>> > function?
>> > 1) All code paths calling this function (in the same synchronization
>> > context) need to use the same lock_index sequence, eg: 1, 3, 2, 4 or the
>> > results are undefined (like the sequence of events will not be preserved)
>> > if this rule is not followed
>> > 2) The doc should emphasize a bit more to what the synchronization
>> > context is bound to (source queue). For eg. it should say that lock_index
>> > sequence can be different for different source queues (synchronization
>> > contexts).
>> > 3) It is possible to skip some lock_index in sequence. But skipped
>> > lock_indexes cannot be used outside of the sequence (since this will
>> >
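The gating rule described in this thread can be captured in a few lines of plain C. This is a toy, single-threaded model (not the ODP implementation): each ordered lock tracks the next sequence number allowed in, a thread holding sequence `s` may enter lock `i` only when that counter equals `s`, and releasing the lock advances it to `s + 1`.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of ordered-lock gating.  The first context dequeued from
 * the ordered queue holds sequence 1, the next sequence 2, and so on. */
#define NUM_LOCKS 2

static unsigned next_seq[NUM_LOCKS] = { 1, 1 };

/* A thread may enter lock 'lock' only when its sequence is current.
 * Otherwise (in a real implementation) it spins/waits. */
static bool try_lock(unsigned lock, unsigned seq)
{
	return next_seq[lock] == seq;
}

/* Releasing (or definitively skipping) the lock hands it to the
 * next sequence number. */
static void order_unlock(unsigned lock, unsigned seq)
{
	assert(next_seq[lock] == seq);
	next_seq[lock] = seq + 1;
}
```

Replaying the example timeline: at T3, WorkerB (sequence 2) calling `try_lock(1, 2)` fails and must wait, while WorkerA (sequence 1) calling `try_lock(0, 1)` succeeds immediately; only after WorkerA has unlocked lock 1 (or exited its context) does WorkerB's `try_lock(1, 2)` succeed.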
Re: [lng-odp] Sequence requirements for odp_schedule_order_lock()
Hi Bill, Thank you for reply and sorry for my long reply. IMHO the description which you give could be copied to doc/users-guide/users-guide.adoc as there are not many information how this should work across all implementations. I don't fully understand following sentence "since each individual index follows the same order, which is fixed for both Thread A and B" Could you please describe what should happen in below example? Each line specify the time of event: T1: WorkerA calls odp_schedule() -> gets packet 1 from queueA T2: WorkerB calls odp_schedule() -> gets packet 2 from queueA (which came into interface after packet1) T3: WorkerB calls odp_schedule_order_lock(1) T3: WorkerA calls odp_schedule_order_lock(0) than WorkerA calls odp_schedule_order_unlock(0) and calls odp_schedule_order_lock(1) (and unlock(1)) while WorkerB only unlock the 1 Will WorkerB be suspended till WorkerA unlocks order lock 1? We implementing this API now and our world seems totally different than linux-generic so it would be best if you could specify the remaining part of the sequence (as it would be a sequence requirement for us). Topic 2: I guess that nested order locks are forbidden? If that's the case, than we should also mention that in docs. 2017-03-29 19:38 GMT+02:00 Bill Fischofer : > On Wed, Mar 29, 2017 at 7:18 AM, Radosław Biernacki > wrote: > > Hi all, > > > > The documentation for odp_schedule_order_lock(unsigned lock_index) does > not > > specify the sequence in which the lock_index need to be given. > > That's because there is no such required sequence. Each lock index is > a distinct ordered synchronization point and the only requirement is > that each thread may only use each index at most once per ordered > context. Threads may skip any or all indexes as there is no obligation > for threads to use ordered locks when they are present in the context, > and the index order doesn't matter to ODP. 
> > When threads enter an ordered context they hold a sequence that is > determined by the source ordered queue. For a thread to be able to > acquire ordered lock index i, all that is required is that all threads > holding lower sequences have either acquired and released that index > or else definitively passed on using it by exiting the ordered > context. So if thread A tries to acquire index 1 first and then index > 2 while Thread B tries to acquire index 2 first and then index 1 it > doesn't matter since each individual index follows the same order, > which is fixed for both Thread A and B. > > Note that there may be a loss of parallelism if indexes are permuted, > however there is no possibility of deadlock. Best parallelism will be > achieved when all threads use indexes in the same sequence, but ODP > doesn't care what sequence of indexes the application chooses to use. > > > > > Shouldn't the following statements be included in description of this > > function? > > 1) All code paths calling this function (in the same synchronization > > context) need to use the same lock_index sequence, eg: 1, 3, 2, 4 or the > > results are undefined (like the sequence of events will not be preserved) > > if this rule is not followed > > 2) The doc should emphasize a bit more to what the synchronization > context > > is bound to (source queue). For eg. it should say that lock_index > sequence > > can be different for different source queues (synchronization contexts). > > 3) It is possible to skip some lock_index in sequence. But skipped > > lock_indexes cannot be used outside of the sequence (since this will > alter > > the sequence which is violation of rule 1). >
Re: [lng-odp] [RFC/API-NEXT 1/1] api: classification: packet hashing per class of service
Hi Bill, I will post a patch implementing this proposal next week. Regards, Bala On 6 April 2017 at 18:07, Bill Fischofer wrote: > Bala, any idea when this can advance from the RFC stage to a real patch? > > On Thu, Apr 6, 2017 at 6:15 AM, Bala Manoharan > wrote: >> Regards, >> Bala >> >> >> On 6 April 2017 at 00:53, Brian Brooks wrote: >>> On Fri, Dec 9, 2016 at 5:54 AM, Balasubramanian Manoharan >>> wrote: Adds support to spread packet from a single CoS to multiple queues by configuring hashing at CoS level. Signed-off-by: Balasubramanian Manoharan --- include/odp/api/spec/classification.h | 45 --- 1 file changed, 42 insertions(+), 3 deletions(-) diff --git a/include/odp/api/spec/classification.h b/include/odp/api/spec/classification.h index 0e442c7..220b029 100644 --- a/include/odp/api/spec/classification.h +++ b/include/odp/api/spec/classification.h @@ -126,6 +126,12 @@ typedef struct odp_cls_capability_t { /** Maximum number of CoS supported */ unsigned max_cos; + /** Maximun number of queue supported per CoS */ + unsigned max_queue_supported; + + /** Protocol header combination supported for Hashing */ + odp_pktin_hash_proto_t hash_supported; >>> >>> I like this idea and think it is critical for supporting >>> implementations that can handle lots of queues. What I don't quite >>> understand is how it relates to the rest of the Classification >>> functionality. For example, I would like all packets coming from the >>> physical interface to be hashed/parsed according to "hash_supported" >>> and then assigned to their respective queues. I would then like to >>> schedule from those queues. That is all. Is this possible? What >>> priority and synchronization will those queues have? >> >> This could be done by adding this configuration to the default Cos in >> the pktio interface. 
>> Currently there is hashing support with pktio interface but that will >> distribute the packet to odp_pktin_queue_t >> and the packets have to be received using odp_pktin_recv() function. >> >> All the queues belonging to the single queue group will have the same >> priority and synchronisation. >> >> >>> /** A Boolean to denote support of PMR range */ odp_bool_t pmr_range_supported; } odp_cls_capability_t; @@ -164,9 +170,25 @@ typedef enum { * Used to communicate class of service creation options */ typedef struct odp_cls_cos_param { - odp_queue_t queue; /**< Queue associated with CoS */ - odp_pool_t pool;/**< Pool associated with CoS */ - odp_cls_drop_t drop_policy; /**< Drop policy associated with CoS */ + /* Minimum number of queues to be linked to this CoS. +* If the number is greater than 1 then hashing has to be +* enabled */ + uint32_t num_queue; + /* Denotes whether hashing is enabled for queues in this CoS +* When hashing is enabled the queues are created by the implementation +* and application need not configure any queue to the class of service +*/ + odp_bool_t enable_hashing; + /* Protocol header fields which are included in packet input +* hash calculation */ + odp_pktin_hash_proto_t hash; + /* If hashing is disabled, then application has to configure +* this queue and packets are delivered to this queue */ + odp_queue_t queue; + /* Pool associated with CoS */ + odp_pool_t pool; + /* Drop policy associated with CoS */ + odp_cls_drop_t drop_policy; } odp_cls_cos_param_t; /** @@ -209,6 +231,23 @@ int odp_cls_capability(odp_cls_capability_t *capability); odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param); /** + * Queue hash result + * Returns the queue within a CoS in which a particular packet will be enqueued + * based on the packet parameters and hash protocol field configured with the + * class of service. 
+ * + * @param cos class of service + * @param packet Packet handle + * + * @retval Returns the queue handle on which this packet will be + * enqueued. + * @retval ODP_QUEUE_INVALID for error case + * + * @note The packet has to be updated with valid header pointers L2, L3 and L4. + */ +odp_queue_t odp_queue_hash_result(odp_cos_t cos, odp_packet_t packet); + +/** * Discard a class-of-service along with all its associated resources * * @param[in] cos_id class-of-service instance. -- 1.9.1
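The queue-selection step that `odp_queue_hash_result()` exposes reduces to hashing the configured header fields and mapping the result onto the `num_queue` queues linked to the CoS. The sketch below is illustrative only: the field choice and hash function are assumptions, not something the RFC mandates.

```c
#include <assert.h>
#include <stdint.h>

/* Hash the packet's flow fields.  The 4-tuple and the mixing steps are
 * illustrative; a real implementation hashes whatever fields the
 * odp_pktin_hash_proto_t configuration selects. */
static uint32_t flow_hash(uint32_t src_ip, uint32_t dst_ip,
			  uint16_t src_port, uint16_t dst_port)
{
	uint32_t h = src_ip ^ dst_ip ^ (((uint32_t)src_port << 16) | dst_port);

	h ^= h >> 16;		/* fold the upper half into the lower */
	h *= 0x45d9f3bu;	/* cheap integer scrambler */
	h ^= h >> 16;
	return h;
}

/* Map the hash onto the queues owned by the CoS.  Packets of the same
 * flow always land in the same queue, which preserves per-flow order. */
static uint32_t cos_select_queue(uint32_t hash, uint32_t num_queue)
{
	return hash % num_queue;
}
```

The determinism is the point: a given flow always hashes to the same queue index, so spreading a CoS over many queues does not reorder any individual flow.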
Re: [lng-odp] [RFC][PATCH] added asymmetric crypto algorithm support.
We should also take into account Barry's earlier RFC for adding public key support. See http://patches.opendataplane.org/patch/4985/ as this is closely related to this RFC. On Thu, Apr 6, 2017 at 7:31 AM, Kartha, Umesh wrote: > Thanks for comments, please go through my replies. > > Regards, > Umesh > > From: Dmitry Eremin-Solenikov > Sent: Thursday, April 6, 2017 4:11 PM > To: Kartha, Umesh; lng-odp@lists.linaro.org > Cc: Manoharan, Balasubramanian; Murthy, Nidadavolu; Manapragada, Ram Kumar > Subject: Re: [lng-odp] [RFC][PATCH] added asymmetric crypto algorithm support. > > On 05.04.2017 13:02, Umesh Kartha wrote: >> Asymmetric crypto algorithms are essential in protocols such as SSL/TLS. >> As the current ODP crypto library lacks support for asymmetric crypto >> algorithms, this RFC is an attempt to address it and add support for the >> same. > > If you target TLS, you should probably also include FFDH support as a > first grade citizen. Is this API sufficient and efficient to produce > working TLS server? > [Umesh]>> Assuming FFDH is finite field Diffie Hellman, we were planning to > achieve the > DH operation through modexp operation. > > Did you consider having separate interfaces for symmetric and for public > key crypto? > [Umesh]>> We had an internal discussion about having a separate interface for > public key > crypto, the comment we received was if we were to implement new interfaces for > public key, we would also require a totally different session for asymmetric > algorithms. > This means we will need to create additional session and operation struct > then we need to > add similar functions for odp_crypto_session_create() and > odp_crypto_operation(). 
> >> The asymmetric algorithms featured in this version are >> >> 1 RSA >> - RSA Sign >> - RSA Verify >> - RSA Public Encrypt >> - RSA Private Decrypt >> >> Padding schemes supported for RSA operations are >> * RSA PKCS#1 BT1 >> * RSA PKCS#1 BT2 >> * RSA PKCS#1 OAEP >> * RSA PKCS#1 PSS >> >> 2 ECDSA >> - ECDSA Sign >> - ECDSA Verify >> >> Curves supported for ECDSA operations are >> * Prime192v1 >> * Secp224k1 >> * Prime256v1 >> * Secp384r1 >> * Secp521r1 > > What about EdDSA? > [Umesh]>> This is an intial draft and not the final list of EC curves. This > will be added > future draft. > >> >> 3 MODEXP >> >> 4 FUNDAMENTAL ECC >> - Point Addition >> - Point Multiplication >> - Point Doubling >> >>Curves supported for fundamental ECC operations are same as that of >>ECDSA operations. >> >> >> >> Signed-off-by: Umesh Kartha >> --- >> include/odp/api/spec/crypto.h | 570 >> +- >> 1 file changed, 569 insertions(+), 1 deletion(-) >> >> diff --git include/odp/api/spec/crypto.h include/odp/api/spec/crypto.h >> index d30f050..4cd5a3d 100644 >> --- include/odp/api/spec/crypto.h >> +++ include/odp/api/spec/crypto.h >> @@ -57,6 +57,14 @@ typedef enum { >>ODP_CRYPTO_OP_ENCODE, >>/** Decrypt and/or verify authentication ICV */ >>ODP_CRYPTO_OP_DECODE, >> + /** Perform asymmetric crypto RSA operation */ >> + ODP_CRYPTO_OP_RSA, >> + /** Perform asymmetric crypto modex operation */ >> + ODP_CRYPTO_OP_MODEX, >> + /** Perform asymmetric crypto ECDSA operation */ >> + ODP_CRYPTO_OP_ECDSA, >> + /** Perform asymmetric crypto ECC point operation */ >> + ODP_CRYPTO_OP_FECC, > > This looks like a bad abstraction for me. RSA and ECDSA also follow the > encode/decode and sign/verify scheme, so my first impression would be > that they should also fall into OP_ENCODE/OP_DECODE, exact operation > being selected by other parameters. > > Also I ain't sure why do you exactly need modexp and ecc point operation. > [Umesh]>> encode/decode modes are meant for symmetric operations. 
The > encodeing and > decoding required for RSA/ECDSA SIGN/VERIFY operations are handled internally > based > odp_crypto_rsa_op_t and odp_crypto_ecdsa_op_t respectively. > >> } odp_crypto_op_t; >> >> /** >> @@ -213,6 +221,202 @@ typedef union odp_crypto_auth_algos_t { >> } odp_crypto_auth_algos_t; >> >> /** >> + * Asymmetric crypto algorithms >> + */ >> +typedef enum { >> + /** RSA asymmetric key algorithm */ >> + ODP_ASYM_ALG_RSA, >> + >> + /** Modular exponentiation algorithm */ >> + ODP_ASYM_ALG_MODEXP, >> + >> + /** ECDSA authentication algorithm */ >> + ODP_ASYM_ALG_ECDSA, >> + >> + /** Fundamental ECC algorithm */ >> + ODP_ASYM_ALG_FECC > > What is Fundamental? > [Umesh]>> Basic EC curve operations used as part of ECDH and ECDSA, > that addition, doubling, point multiplication. > >> +} odp_asym_alg_t; >> + >> +/** >> + * Asymmetric algorithms in a bit field structure >> + */ >> +typedef union odp_crypto_asym_algos_t { >> + /** Asymmetric algorithms */ >> + struct { >> + /** ODP_ASYM_ALG_RSA_PKCS */ >> + uint16_t alg_rsa_pkcs :1; >> + >> +
Re: [lng-odp] [PATCHv2] linux-generic: decouple odp_errno define from odp-linux
It's OK to carry over my review for trivial changes like this. On Thu, Apr 6, 2017 at 1:48 AM, Balakrishna Garapati wrote: > makes it easy to define odp_errno to dpdk rteerrno and fixes > linking issues. > > Signed-off-by: Balakrishna Garapati Reviewed-and-tested-by: Bill Fischofer > --- > since v1: Fixed the extra blank line > platform/linux-generic/Makefile.am| 1 + > platform/linux-generic/include/odp_errno_define.h | 26 > +++ > platform/linux-generic/include/odp_internal.h | 3 +-- > 3 files changed, 28 insertions(+), 2 deletions(-) > create mode 100644 platform/linux-generic/include/odp_errno_define.h > > diff --git a/platform/linux-generic/Makefile.am > b/platform/linux-generic/Makefile.am > index 37835c3..5452915 100644 > --- a/platform/linux-generic/Makefile.am > +++ b/platform/linux-generic/Makefile.am > @@ -125,6 +125,7 @@ noinst_HEADERS = \ > ${srcdir}/include/odp_config_internal.h \ > ${srcdir}/include/odp_crypto_internal.h \ > ${srcdir}/include/odp_debug_internal.h \ > + ${srcdir}/include/odp_errno_define.h \ > ${srcdir}/include/odp_forward_typedefs_internal.h \ > ${srcdir}/include/odp_internal.h \ > ${srcdir}/include/odp_name_table_internal.h \ > diff --git a/platform/linux-generic/include/odp_errno_define.h > b/platform/linux-generic/include/odp_errno_define.h > new file mode 100644 > index 000..94c30e7 > --- /dev/null > +++ b/platform/linux-generic/include/odp_errno_define.h > @@ -0,0 +1,26 @@ > +/* Copyright (c) 2017, Linaro Limited > + * All rights reserved. 
> + * > + * SPDX-License-Identifier: BSD-3-Clause > + */ > + > +/** > + * @file > + * > + * ODP error number define > + */ > + > +#ifndef ODP_ERRNO_DEFINE_H_ > +#define ODP_ERRNO_DEFINE_H_ > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +extern __thread int __odp_errno; > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif > diff --git a/platform/linux-generic/include/odp_internal.h > b/platform/linux-generic/include/odp_internal.h > index b313b1f..e1267cf 100644 > --- a/platform/linux-generic/include/odp_internal.h > +++ b/platform/linux-generic/include/odp_internal.h > @@ -20,11 +20,10 @@ extern "C" { > #include > #include > #include > +#include > #include > #include > > -extern __thread int __odp_errno; > - > #define MAX_CPU_NUMBER 128 > > typedef struct { > -- > 1.9.1 >
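The value of moving the declaration into its own header is the indirection it buys: a platform can change what backs `odp_errno` without touching `odp_internal.h`. The sketch below shows the linux-generic flavor; the `rte_errno` redirection mentioned in the comment is a hypothetical illustration of what an odp-dpdk port might do, modeled on the commit message, not code from this patch.

```c
#include <assert.h>

/* What odp_errno_define.h provides in linux-generic: thread-local
 * storage for the ODP error number.  A DPDK-based platform could ship
 * its own odp_errno_define.h that instead does
 *     #include <rte_errno.h>
 *     #define __odp_errno rte_errno
 * (hypothetical), and no other internal header would need to change. */
__thread int __odp_errno;

static inline void odp_errno_set(int e) { __odp_errno = e; }
static inline int odp_errno_get(void) { return __odp_errno; }
```

Because each thread gets its own instance, an error set on one worker never leaks into another worker's `odp_errno()` result.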
Re: [lng-odp] [API-NEXT PATCH v2 2/4] linux-gen: packet: remove lazy parsing
I merged v2 of that to api-next. Initially I applied it to master, not to api-next. Please check that the patches are merged. Maxim.

On 5 April 2017 at 09:03, Elo, Matias (Nokia - FI/Espoo) < matias@nokia-bell-labs.com> wrote:
>
> > On 4 Apr 2017, at 18:30, Maxim Uvarov wrote:
> >
> > breaks build:
> > https://travis-ci.org/muvarov/odp/jobs/218496566
> >
>
> Hi Maxim,
>
> I'm unable to repeat this problem. Were the patches perhaps merged to the
> wrong branch?
>
> -Matias
Re: [lng-odp] [RFC/API-NEXT 1/1] api: classification: packet hashing per class of service
Bala, any idea when this can advance from the RFC stage to a real patch? On Thu, Apr 6, 2017 at 6:15 AM, Bala Manoharan wrote: > Regards, > Bala > > > On 6 April 2017 at 00:53, Brian Brooks wrote: >> On Fri, Dec 9, 2016 at 5:54 AM, Balasubramanian Manoharan >> wrote: >>> Adds support to spread packet from a single CoS to multiple queues by >>> configuring hashing at CoS level. >>> >>> Signed-off-by: Balasubramanian Manoharan >>> --- >>> include/odp/api/spec/classification.h | 45 >>> --- >>> 1 file changed, 42 insertions(+), 3 deletions(-) >>> >>> diff --git a/include/odp/api/spec/classification.h >>> b/include/odp/api/spec/classification.h >>> index 0e442c7..220b029 100644 >>> --- a/include/odp/api/spec/classification.h >>> +++ b/include/odp/api/spec/classification.h >>> @@ -126,6 +126,12 @@ typedef struct odp_cls_capability_t { >>> /** Maximum number of CoS supported */ >>> unsigned max_cos; >>> >>> + /** Maximun number of queue supported per CoS */ >>> + unsigned max_queue_supported; >>> + >>> + /** Protocol header combination supported for Hashing */ >>> + odp_pktin_hash_proto_t hash_supported; >> >> I like this idea and think it is critical for supporting >> implementations that can handle lots of queues. What I don't quite >> understand is how it relates to the rest of the Classification >> functionality. For example, I would like all packets coming from the >> physical interface to be hashed/parsed according to "hash_supported" >> and then assigned to their respective queues. I would then like to >> schedule from those queues. That is all. Is this possible? What >> priority and synchronization will those queues have? > > This could be done by adding this configuration to the default Cos in > the pktio interface. > Currently there is hashing support with pktio interface but that will > distribute the packet to odp_pktin_queue_t > and the packets have to be received using odp_pktin_recv() function. 
> > All the queues belonging to the single queue group will have the same > priority and synchronisation. > > >> >>> /** A Boolean to denote support of PMR range */ >>> odp_bool_t pmr_range_supported; >>> } odp_cls_capability_t; >>> @@ -164,9 +170,25 @@ typedef enum { >>> * Used to communicate class of service creation options >>> */ >>> typedef struct odp_cls_cos_param { >>> - odp_queue_t queue; /**< Queue associated with CoS */ >>> - odp_pool_t pool;/**< Pool associated with CoS */ >>> - odp_cls_drop_t drop_policy; /**< Drop policy associated with >>> CoS */ >>> + /* Minimum number of queues to be linked to this CoS. >>> +* If the number is greater than 1 then hashing has to be >>> +* enabled */ >>> + uint32_t num_queue; >>> + /* Denotes whether hashing is enabled for queues in this CoS >>> +* When hashing is enabled the queues are created by the >>> implementation >>> +* and application need not configure any queue to the class of >>> service >>> +*/ >>> + odp_bool_t enable_hashing; >>> + /* Protocol header fields which are included in packet input >>> +* hash calculation */ >>> + odp_pktin_hash_proto_t hash; >>> + /* If hashing is disabled, then application has to configure >>> +* this queue and packets are delivered to this queue */ >>> + odp_queue_t queue; >>> + /* Pool associated with CoS */ >>> + odp_pool_t pool; >>> + /* Drop policy associated with CoS */ >>> + odp_cls_drop_t drop_policy; >>> } odp_cls_cos_param_t; >>> >>> /** >>> @@ -209,6 +231,23 @@ int odp_cls_capability(odp_cls_capability_t >>> *capability); >>> odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param); >>> >>> /** >>> + * Queue hash result >>> + * Returns the queue within a CoS in which a particular packet will be >>> enqueued >>> + * based on the packet parameters and hash protocol field configured with >>> the >>> + * class of service. 
>>> + * >>> + * @param cos class of service >>> + * @param packet Packet handle >>> + * >>> + * @retval Returns the queue handle on which this packet will >>> be >>> + * enqueued. >>> + * @retval ODP_QUEUE_INVALID for error case >>> + * >>> + * @note The packet has to be updated with valid header pointers L2, L3 >>> and L4. >>> + */ >>> +odp_queue_t odp_queue_hash_result(odp_cos_t cos, odp_packet_t packet); >>> + >>> +/** >>> * Discard a class-of-service along with all its associated resources >>> * >>> * @param[in] cos_id class-of-service instance. >>> -- >>> 1.9.1 >>>
Re: [lng-odp] [RFC][PATCH] added asymmetric crypto algorithm support.
Thanks for comments, please go through my replies. Regards, Umesh From: Dmitry Eremin-Solenikov Sent: Thursday, April 6, 2017 4:11 PM To: Kartha, Umesh; lng-odp@lists.linaro.org Cc: Manoharan, Balasubramanian; Murthy, Nidadavolu; Manapragada, Ram Kumar Subject: Re: [lng-odp] [RFC][PATCH] added asymmetric crypto algorithm support. On 05.04.2017 13:02, Umesh Kartha wrote: > Asymmetric crypto algorithms are essential in protocols such as SSL/TLS. > As the current ODP crypto library lacks support for asymmetric crypto > algorithms, this RFC is an attempt to address it and add support for the > same. If you target TLS, you should probably also include FFDH support as a first grade citizen. Is this API sufficient and efficient to produce working TLS server? [Umesh]>> Assuming FFDH is finite field Diffie Hellman, we were planning to achieve the DH operation through modexp operation. Did you consider having separate interfaces for symmetric and for public key crypto? [Umesh]>> We had an internal discussion about having a separate interface for public key crypto, the comment we received was if we were to implement new interfaces for public key, we would also require a totally different session for asymmetric algorithms. This means we will need to create additional session and operation struct then we need to add similar functions for odp_crypto_session_create() and odp_crypto_operation(). > The asymmetric algorithms featured in this version are > > 1 RSA > - RSA Sign > - RSA Verify > - RSA Public Encrypt > - RSA Private Decrypt > > Padding schemes supported for RSA operations are > * RSA PKCS#1 BT1 > * RSA PKCS#1 BT2 > * RSA PKCS#1 OAEP > * RSA PKCS#1 PSS > > 2 ECDSA > - ECDSA Sign > - ECDSA Verify > > Curves supported for ECDSA operations are > * Prime192v1 > * Secp224k1 > * Prime256v1 > * Secp384r1 > * Secp521r1 What about EdDSA? [Umesh]>> This is an intial draft and not the final list of EC curves. This will be added future draft. 
> > 3 MODEXP > > 4 FUNDAMENTAL ECC > - Point Addition > - Point Multiplication > - Point Doubling > > Curves supported for fundamental ECC operations are same as that of > ECDSA operations. > > > > Signed-off-by: Umesh Kartha > --- > include/odp/api/spec/crypto.h | 570 >+- > 1 file changed, 569 insertions(+), 1 deletion(-) > > diff --git include/odp/api/spec/crypto.h include/odp/api/spec/crypto.h > index d30f050..4cd5a3d 100644 > --- include/odp/api/spec/crypto.h > +++ include/odp/api/spec/crypto.h > @@ -57,6 +57,14 @@ typedef enum { > ODP_CRYPTO_OP_ENCODE, > /** Decrypt and/or verify authentication ICV */ > ODP_CRYPTO_OP_DECODE, > + /** Perform asymmetric crypto RSA operation */ > + ODP_CRYPTO_OP_RSA, > + /** Perform asymmetric crypto modex operation */ > + ODP_CRYPTO_OP_MODEX, > + /** Perform asymmetric crypto ECDSA operation */ > + ODP_CRYPTO_OP_ECDSA, > + /** Perform asymmetric crypto ECC point operation */ > + ODP_CRYPTO_OP_FECC, This looks like a bad abstraction for me. RSA and ECDSA also follow the encode/decode and sign/verify scheme, so my first impression would be that they should also fall into OP_ENCODE/OP_DECODE, exact operation being selected by other parameters. Also I ain't sure why do you exactly need modexp and ecc point operation. [Umesh]>> encode/decode modes are meant for symmetric operations. The encodeing and decoding required for RSA/ECDSA SIGN/VERIFY operations are handled internally based odp_crypto_rsa_op_t and odp_crypto_ecdsa_op_t respectively. > } odp_crypto_op_t; > > /** > @@ -213,6 +221,202 @@ typedef union odp_crypto_auth_algos_t { > } odp_crypto_auth_algos_t; > > /** > + * Asymmetric crypto algorithms > + */ > +typedef enum { > + /** RSA asymmetric key algorithm */ > + ODP_ASYM_ALG_RSA, > + > + /** Modular exponentiation algorithm */ > + ODP_ASYM_ALG_MODEXP, > + > + /** ECDSA authentication algorithm */ > + ODP_ASYM_ALG_ECDSA, > + > + /** Fundamental ECC algorithm */ > + ODP_ASYM_ALG_FECC What is Fundamental? 
[Umesh]>> Basic EC curve operations used as part of ECDH and ECDSA, that is: addition, doubling, and point multiplication. > +} odp_asym_alg_t; > + > +/** > + * Asymmetric algorithms in a bit field structure > + */ > +typedef union odp_crypto_asym_algos_t { > + /** Asymmetric algorithms */ > + struct { > + /** ODP_ASYM_ALG_RSA_PKCS */ > + uint16_t alg_rsa_pkcs :1; > + > + /** ODP_ASYM_ALG_MODEXP */ > + uint16_t alg_modexp :1; > + > + /** ODP_ASYM_ALG_ECDSA */ > + uint16_t alg_ecdsa :1; > + > + /** ODP_ASYM_FECC */ > + uint16_t alg_fecc :1; > + } bit; > + /** All bits of the bit field structure > + * > + * This field can be used to set/clear all flags, o
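Since the thread plans to derive Diffie-Hellman from the MODEXP operation, it is worth being concrete about what that primitive computes. Below is a plain-C square-and-multiply sketch of modular exponentiation; real asymmetric crypto uses multi-precision integers and (in this RFC's setting) hardware offload, so the `uint64_t` version here is only a toy for small operands (safe roughly while `mod` fits in 32 bits, to avoid multiplication overflow).

```c
#include <assert.h>
#include <stdint.h>

/* Compute (base ^ exp) mod 'mod' by binary exponentiation: square the
 * base each round, multiply into the result when the current exponent
 * bit is set.  This is the operation ODP_CRYPTO_OP_MODEX would offload,
 * and the building block for both RSA and finite-field DH. */
static uint64_t mod_exp(uint64_t base, uint64_t exp, uint64_t mod)
{
	uint64_t result = 1;

	base %= mod;
	while (exp > 0) {
		if (exp & 1)				/* bit set: multiply */
			result = (result * base) % mod;
		base = (base * base) % mod;		/* square each round */
		exp >>= 1;
	}
	return result;
}
```

For DH, each side would compute `mod_exp(g, private_key, p)` to get its public value and `mod_exp(peer_public, private_key, p)` to get the shared secret, which is exactly why a bare MODEXP operation is enough to cover FFDH.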
[lng-odp] [PATCH 3/3] linux-gen: sched: optimize group scheduling
Use separate priority queues for different groups. Sharing the same priority queue over multiple groups caused multiple issues: * latency and ordering issues when threads push back events (from wrong groups) to the tail of the priority queue * unnecessary contention (scaling issues) when threads belong to different groups Lowered the maximum number of groups from 256 to 32 (in the default configuration) to limit memory usage of priority queues. This should be enough for the most users. Signed-off-by: Petri Savolainen --- platform/linux-generic/odp_schedule.c | 284 +++--- 1 file changed, 195 insertions(+), 89 deletions(-) diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index e7079b9..f366e7e 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -34,7 +34,7 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) && "normal_prio_is_not_between_highest_and_lowest"); /* Number of scheduling groups */ -#define NUM_SCHED_GRPS 256 +#define NUM_SCHED_GRPS 32 /* Priority queues per priority */ #define QUEUES_PER_PRIO 4 @@ -163,7 +163,11 @@ typedef struct { ordered_stash_t stash[MAX_ORDERED_STASH]; } ordered; + uint32_t grp_epoch; + int num_grp; + uint8_t grp[NUM_SCHED_GRPS]; uint8_t weight_tbl[WEIGHT_TBL_SIZE]; + uint8_t grp_weight[WEIGHT_TBL_SIZE]; } sched_local_t; @@ -199,7 +203,7 @@ typedef struct { pri_mask_t pri_mask[NUM_PRIO]; odp_spinlock_t mask_lock; - prio_queue_t prio_q[NUM_PRIO][QUEUES_PER_PRIO]; + prio_queue_t prio_q[NUM_SCHED_GRPS][NUM_PRIO][QUEUES_PER_PRIO]; odp_spinlock_t poll_cmd_lock; /* Number of commands in a command queue */ @@ -214,8 +218,10 @@ typedef struct { odp_shm_t shm; uint32_t pri_count[NUM_PRIO][QUEUES_PER_PRIO]; - odp_spinlock_t grp_lock; - odp_thrmask_t mask_all; + odp_thrmask_tmask_all; + odp_spinlock_t grp_lock; + odp_atomic_u32_t grp_epoch; + struct { char name[ODP_SCHED_GROUP_NAME_LEN]; odp_thrmask_t mask; @@ -223,6 +229,7 @@ typedef struct { } 
sched_grp[NUM_SCHED_GRPS]; struct { + int grp; int prio; int queue_per_prio; } queue[ODP_CONFIG_QUEUES]; @@ -273,7 +280,7 @@ static void sched_local_init(void) static int schedule_init_global(void) { odp_shm_t shm; - int i, j; + int i, j, grp; ODP_DBG("Schedule init ... "); @@ -293,15 +300,20 @@ static int schedule_init_global(void) sched->shm = shm; odp_spinlock_init(&sched->mask_lock); - for (i = 0; i < NUM_PRIO; i++) { - for (j = 0; j < QUEUES_PER_PRIO; j++) { - int k; + for (grp = 0; grp < NUM_SCHED_GRPS; grp++) { + for (i = 0; i < NUM_PRIO; i++) { + for (j = 0; j < QUEUES_PER_PRIO; j++) { + prio_queue_t *prio_q; + int k; - ring_init(&sched->prio_q[i][j].ring); + prio_q = &sched->prio_q[grp][i][j]; + ring_init(&prio_q->ring); - for (k = 0; k < PRIO_QUEUE_RING_SIZE; k++) - sched->prio_q[i][j].queue_index[k] = - PRIO_QUEUE_EMPTY; + for (k = 0; k < PRIO_QUEUE_RING_SIZE; k++) { + prio_q->queue_index[k] = + PRIO_QUEUE_EMPTY; + } + } } } @@ -317,12 +329,17 @@ static int schedule_init_global(void) sched->pktio_cmd[i].cmd_index = PKTIO_CMD_FREE; odp_spinlock_init(&sched->grp_lock); + odp_atomic_init_u32(&sched->grp_epoch, 0); for (i = 0; i < NUM_SCHED_GRPS; i++) { memset(sched->sched_grp[i].name, 0, ODP_SCHED_GROUP_NAME_LEN); odp_thrmask_zero(&sched->sched_grp[i].mask); } + sched->sched_grp[ODP_SCHED_GROUP_ALL].allocated = 1; + sched->sched_grp[ODP_SCHED_GROUP_WORKER].allocated = 1; + sched->sched_grp[ODP_SCHED_GROUP_CONTROL].allocated = 1; + odp_thrmask_setall(&sched->mask_all); ODP_DBG("done\n"); @@ -330,29 +347,38 @@ static int schedule_init_global(void) return 0; } +static inline void queue_destroy_finalize(uint32_t qi) +{ + sched_cb_queue_destroy_finalize(qi); +} + static int schedule_term_global(void) { int ret = 0; int rc = 0; - int i, j; + int i, j, grp; - for (i = 0; i < NUM_PRIO; i++) { - for (j = 0; j < QUEUES_PER_PRIO; j++) { -
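The memory trade-off behind dropping NUM_SCHED_GRPS from 256 to 32 is easy to quantify: the patch indexes `prio_q` per group, so the number of priority-queue rings scales linearly with the group count. The constants below mirror the patch except NUM_PRIO, which is an assumption (linux-generic's default of 8 scheduling priorities at the time).

```c
#include <assert.h>

#define NUM_PRIO        8	/* assumed linux-generic default */
#define QUEUES_PER_PRIO 4	/* from the patch */

/* Rings allocated for prio_q[num_grps][NUM_PRIO][QUEUES_PER_PRIO]. */
static int num_prio_rings(int num_grps)
{
	return num_grps * NUM_PRIO * QUEUES_PER_PRIO;
}
```

With 256 groups the per-group layout would need 8192 rings versus 32 for the old shared layout; capping groups at 32 keeps it to 1024 rings, which is the "limit memory usage" rationale in the commit message.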
[lng-odp] [PATCH 1/3] test: l2fwd: add group option
User may give number of scheduling groups to test scheduler performance with other that the default (all threads) group. Both pktios and threads are allocated into these groups with round robin. The number of groups may not exceed number of pktios or worker threads. Signed-off-by: Petri Savolainen --- test/common_plat/performance/odp_l2fwd.c | 148 --- 1 file changed, 116 insertions(+), 32 deletions(-) diff --git a/test/common_plat/performance/odp_l2fwd.c b/test/common_plat/performance/odp_l2fwd.c index 8f5c5e1..33efc02 100644 --- a/test/common_plat/performance/odp_l2fwd.c +++ b/test/common_plat/performance/odp_l2fwd.c @@ -104,6 +104,7 @@ typedef struct { int src_change; /**< Change source eth addresses */ int error_check;/**< Check packet errors */ int sched_mode; /**< Scheduler mode */ + int num_groups; /**< Number of scheduling groups */ } appl_args_t; static int exit_threads; /**< Break workers loop if set to 1 */ @@ -130,6 +131,7 @@ typedef union { typedef struct thread_args_t { int thr_idx; int num_pktio; + int num_groups; struct { odp_pktin_queue_t pktin; @@ -142,7 +144,12 @@ typedef struct thread_args_t { int tx_queue_idx; } pktio[MAX_PKTIOS]; - stats_t *stats; /**< Pointer to per thread stats */ + /* Groups to join */ + odp_schedule_group_t group[MAX_PKTIOS]; + + /* Pointer to per thread stats */ + stats_t *stats; + } thread_args_t; /** @@ -297,6 +304,22 @@ static int run_worker_sched_mode(void *arg) thr = odp_thread_id(); + if (gbl_args->appl.num_groups) { + odp_thrmask_t mask; + + odp_thrmask_zero(&mask); + odp_thrmask_set(&mask, thr); + + /* Join non-default groups */ + for (i = 0; i < thr_args->num_groups; i++) { + if (odp_schedule_group_join(thr_args->group[i], + &mask)) { + LOG_ERR("Join failed\n"); + return -1; + } + } + } + num_pktio = thr_args->num_pktio; if (num_pktio > MAX_PKTIOS) { @@ -590,7 +613,7 @@ static int run_worker_direct_mode(void *arg) * @retval -1 on failure */ static int create_pktio(const char *dev, int idx, int num_rx, int num_tx, 
- odp_pool_t pool) + odp_pool_t pool, odp_schedule_group_t group) { odp_pktio_t pktio; odp_pktio_param_t pktio_param; @@ -650,7 +673,7 @@ static int create_pktio(const char *dev, int idx, int num_rx, int num_tx, pktin_param.queue_param.sched.prio = ODP_SCHED_PRIO_DEFAULT; pktin_param.queue_param.sched.sync = sync_mode; - pktin_param.queue_param.sched.group = ODP_SCHED_GROUP_ALL; + pktin_param.queue_param.sched.group = group; } if (num_rx > (int)capa.max_input_queues) { @@ -1016,39 +1039,46 @@ static void usage(char *progname) printf("\n" "OpenDataPlane L2 forwarding application.\n" "\n" - "Usage: %s OPTIONS\n" + "Usage: %s [options]\n" + "\n" " E.g. %s -i eth0,eth1,eth2,eth3 -m 0 -t 1\n" - " In the above example,\n" - " eth0 will send pkts to eth1 and vice versa\n" - " eth2 will send pkts to eth3 and vice versa\n" + " In the above example,\n" + " eth0 will send pkts to eth1 and vice versa\n" + " eth2 will send pkts to eth3 and vice versa\n" "\n" "Mandatory OPTIONS:\n" - " -i, --interface Eth interfaces (comma-separated, no spaces)\n" - " Interface count min 1, max %i\n" + " -i, --interface Eth interfaces (comma-separated, no spaces)\n" + " Interface count min 1, max %i\n" "\n" "Optional OPTIONS:\n" - " -m, --mode Packet input mode\n" - " 0: Direct mode: PKTIN_MODE_DIRECT (default)\n" - " 1: Scheduler mode with parallel queues: PKTIN_MODE_SCHED + SCHED_SYNC_PARALLEL\n" - " 2: Scheduler mode with atomic queues: PKTIN_MODE_SCHED + SCHED_SYNC_ATOMIC\n" - " 3: Scheduler mode with ordered queues: PKTIN_MODE_SCHED + SCHED_SYNC_ORDERED\n" - " 4: Plain queue mode: ODP_PKTIN_MODE_QUEUE\n" - " -o, --out_mode Packet output mode\n" - " 0: Direct mode: PKTOUT_MODE_DIRECT (default)\n" - " 1: Queue mode: PKTOUT_MODE_QUEUE\n" - " -c, --count C
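The round-robin placement described in the commit message reduces to a modulo, sketched below with plain ints standing in for pktio indexes and group handles (the helper names are illustrative, not from the patch).

```c
#include <assert.h>

/* Pktio i and worker thread i are both assigned to group i modulo the
 * group count, so every group gets at least one pktio and one thread. */
static int group_of(int index, int num_groups)
{
	return index % num_groups;
}

/* The -g option is only valid when groups do not outnumber either the
 * pktios or the worker threads; otherwise some group would sit empty. */
static int groups_valid(int num_groups, int num_pktio, int num_workers)
{
	return num_groups <= num_pktio && num_groups <= num_workers;
}
```

For example, with 4 pktios and 2 groups, pktios 0 and 2 land in group 0 while pktios 1 and 3 land in group 1, and each worker joins only the groups it was dealt.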
[lng-odp] [PATCH 2/3] linux-gen: sched: use weight table for preferences
A precalculated table is more flexible for tuning weights than hard coding. As future development, the table may be updated with different weights at init or run time. Signed-off-by: Petri Savolainen --- platform/linux-generic/odp_schedule.c | 51 ++- 1 file changed, 39 insertions(+), 12 deletions(-) diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index cd5bf21..e7079b9 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -39,6 +39,13 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) && /* Priority queues per priority */ #define QUEUES_PER_PRIO 4 +/* A thread polls a non-preferred sched queue every this many polls + * of the preferred queue. */ +#define PREFER_RATIO 64 + +/* Size of poll weight table */ +#define WEIGHT_TBL_SIZE ((QUEUES_PER_PRIO - 1) * PREFER_RATIO) + /* Packet input poll cmd queues */ #define PKTIO_CMD_QUEUES 4 @@ -142,7 +149,6 @@ typedef struct { int index; int pause; uint16_t round; - uint16_t prefer_offset; uint16_t pktin_polls; uint32_t queue_index; odp_queue_t queue; @@ -157,6 +163,8 @@ typedef struct { ordered_stash_t stash[MAX_ORDERED_STASH]; } ordered; + uint8_t weight_tbl[WEIGHT_TBL_SIZE]; + } sched_local_t; /* Priority queue */ @@ -237,11 +245,29 @@ static inline void schedule_release_context(void); static void sched_local_init(void) { + int i; + uint8_t id; + uint8_t offset = 0; + memset(&sched_local, 0, sizeof(sched_local_t)); sched_local.thr = odp_thread_id(); sched_local.queue = ODP_QUEUE_INVALID; sched_local.queue_index = PRIO_QUEUE_EMPTY; + + id = sched_local.thr & (QUEUES_PER_PRIO - 1); + + for (i = 0; i < WEIGHT_TBL_SIZE; i++) { + sched_local.weight_tbl[i] = id; + + if (i % PREFER_RATIO == 0) { + offset++; + sched_local.weight_tbl[i] = (id + offset) & + (QUEUES_PER_PRIO - 1); + if (offset == QUEUES_PER_PRIO - 1) + offset = 0; + } + } } static int schedule_init_global(void) @@ -670,10 +696,10 @@ static int do_schedule(odp_queue_t *out_queue,
odp_event_t out_ev[], { int prio, i; int ret; - int id; - int offset = 0; + int id, first; unsigned int max_deq = MAX_DEQ; uint32_t qi; + uint16_t round; if (sched_local.num) { ret = copy_events(out_ev, max_num); @@ -689,15 +715,15 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[], if (odp_unlikely(sched_local.pause)) return 0; - /* Each thread prefers a priority queue. This offset avoids starvation -* of other priority queues on low thread counts. */ - if (odp_unlikely((sched_local.round & 0x3f) == 0)) { - offset = sched_local.prefer_offset; - sched_local.prefer_offset = (offset + 1) & - (QUEUES_PER_PRIO - 1); - } + /* Each thread prefers a priority queue. Poll weight table avoids +* starvation of other priority queues on low thread counts. */ + round = sched_local.round + 1; + + if (odp_unlikely(round == WEIGHT_TBL_SIZE)) + round = 0; - sched_local.round++; + sched_local.round = round; + first = sched_local.weight_tbl[round]; /* Schedule events */ for (prio = 0; prio < NUM_PRIO; prio++) { @@ -705,7 +731,8 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[], if (sched->pri_mask[prio] == 0) continue; - id = (sched_local.thr + offset) & (QUEUES_PER_PRIO - 1); + /* Select the first ring based on weights */ + id = first; for (i = 0; i < QUEUES_PER_PRIO;) { int num; -- 2.8.1
Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler
-----Original Message----- > Date: Thu, 6 Apr 2017 12:54:10 +0200 > From: Ola Liljedahl > To: Brian Brooks > Cc: Jerin Jacob , > "lng-odp@lists.linaro.org" > Subject: Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software > scheduler > > On 5 April 2017 at 18:50, Brian Brooks wrote: > > On 04/05 21:27:37, Jerin Jacob wrote: > >> -----Original Message----- > >> > Date: Tue, 4 Apr 2017 13:47:52 -0500 > >> > From: Brian Brooks > >> > To: lng-odp@lists.linaro.org > >> > Subject: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software > >> > scheduler > >> > X-Mailer: git-send-email 2.12.2 > >> > > >> > This work derives from Ola Liljedahl's prototype [1] which introduced a > >> > scalable scheduler design based on primarily lock-free algorithms and > >> > data structures designed to decrease contention. A thread searches > >> > through a data structure containing only queues that are both non-empty > >> > and allowed to be scheduled to that thread. Strict priority scheduling is > >> > respected, and (W)RR scheduling may be used within queues of the same > >> > priority. > >> > Lastly, pre-scheduling or stashing is not employed since it is optional > >> > functionality that can be implemented in the application. > >> > > >> > In addition to scalable ring buffers, the algorithm also uses unbounded > >> > concurrent queues. LL/SC and CAS variants exist in cases where absence of > >> > ABA problem cannot be proved, and also in cases where the compiler's > >> > atomic > >> > built-ins may not be lowered to the desired instruction(s). Finally, a > >> > version > >> > of the algorithm that uses locks is also provided. > >> > > >> > See platform/linux-generic/include/odp_config_internal.h for further > >> > build > >> > time configuration. > >> > > >> > Use --enable-schedule-scalable to conditionally compile this scheduler > >> > into the library. > >> > >> This is interesting stuff. 
> >> > >> Do you have any performance/latency numbers in comparison to existing > >> scheduler > >> for completing say two stage(ORDERED->ATOMIC) or N stage pipeline on any > >> platform? > It is still a SW implementation; there is overhead associated with queue > enqueue/dequeue and the scheduling itself. > So for an N-stage pipeline, overhead will accumulate. > If only a subset of threads are associated with each stage (this could > be beneficial for I-cache hit rate), there will be less need for > scalability. > What is the recommended strategy here for OCTEON/ThunderX? From the viewpoint of portable event-driven applications (which work on both embedded and server-capable chips), the SW scheduler is an important piece. > All threads/cores share all work? That is the recommended model, as HW supports it natively. But HW provides means to partition the workload based on odp schedule groups > > > > > To give an idea, the avg latency reported by odp_sched_latency is down to > half > that of other schedulers (pre-scheduling/stashing disabled) on 4c A53, 16c > A57, > and 12c broadwell. We are still preparing numbers, and I think it's worth > mentioning > that they are subject to change as this patch series changes over time. > > > > I am not aware of an existing benchmark that involves switching between > > different > > queue types. Perhaps this is happening in an example app? > This could be useful in e.g. IPsec termination. Use an atomic stage > for the replay protection check and update. Now ODP has ordered locks > for that so the "atomic" (exclusive) section can be achieved from an > ordered processing stage. Perhaps Jerin knows some other application > that utilises two-stage ORDERED->ATOMIC processing. We see ORDERED->ATOMIC as the main use case for basic packet forwarding: Stage 1 (ORDERED) to process on N cores and Stage 2 (ATOMIC) to maintain the ingress order. > > > > >> When we say scalable scheduler, what application/means are used to quantify > >> scalability? 
> It starts with the design, use non-blocking data structures and try to > distribute data to threads so that they do not access shared data very > often. Some of this is a little detrimental to single-threaded > performance, you need to use more atomic operations. It seems to work > well on ARM (A53, A57) though, the penalty is higher on x86 (x86 is > very good with spin locks, cmpxchg seems to have more overhead > compared to ldxr/stxr on ARM which can have less memory ordering > constraints). We actually use different synchronisation strategies on > ARM and on x86 (compile time configuration). Another school of thought is to avoid all the lock using only single producer and single consumer and create N such channels to avoid any sort of locking primitives for communication. > > You can read more here: > https://docs.google.com/presentation/d/1BqAdni4aP4aHOqO6fNO39-0MN9zOntI-2ZnVTUXBNSQ > I also did an internal presentation on the scheduler prototype back at > Las Vegas, that presentation might also be somewhere on the Linaro web > site. Thanks for
[lng-odp] [PATCH] example:ipsec_offload: Adding ipsec_offload example
From: Nikhil Agarwal Signed-off-by: Nikhil Agarwal --- /** Email created from pull request 8 (NikhilA-Linaro:master) ** https://github.com/Linaro/odp/pull/8 ** Patch: https://github.com/Linaro/odp/pull/8.patch ** Base sha: ff6c083358f97f7b5b261d8e75ca7a2eaaab5dea ** Merge commit sha: b6d92e493b348c164276b2e0bfba4bf7c0e478c9 **/ example/Makefile.am | 1 + example/ipsec_offload/.gitignore | 1 + example/ipsec_offload/Makefile.am| 19 + example/ipsec_offload/odp_ipsec_offload.c| 872 +++ example/ipsec_offload/odp_ipsec_offload_cache.c | 148 example/ipsec_offload/odp_ipsec_offload_cache.h | 78 ++ example/ipsec_offload/odp_ipsec_offload_fwd_db.c | 223 ++ example/ipsec_offload/odp_ipsec_offload_fwd_db.h | 198 + example/ipsec_offload/odp_ipsec_offload_misc.h | 384 ++ example/ipsec_offload/odp_ipsec_offload_sa_db.c | 361 ++ example/ipsec_offload/odp_ipsec_offload_sa_db.h | 126 example/ipsec_offload/odp_ipsec_offload_sp_db.c | 166 + example/ipsec_offload/odp_ipsec_offload_sp_db.h | 72 ++ example/ipsec_offload/run_left | 14 + example/ipsec_offload/run_right | 14 + example/m4/configure.m4 | 1 + 16 files changed, 2678 insertions(+) create mode 100644 example/ipsec_offload/.gitignore create mode 100644 example/ipsec_offload/Makefile.am create mode 100644 example/ipsec_offload/odp_ipsec_offload.c create mode 100644 example/ipsec_offload/odp_ipsec_offload_cache.c create mode 100644 example/ipsec_offload/odp_ipsec_offload_cache.h create mode 100644 example/ipsec_offload/odp_ipsec_offload_fwd_db.c create mode 100644 example/ipsec_offload/odp_ipsec_offload_fwd_db.h create mode 100644 example/ipsec_offload/odp_ipsec_offload_misc.h create mode 100644 example/ipsec_offload/odp_ipsec_offload_sa_db.c create mode 100644 example/ipsec_offload/odp_ipsec_offload_sa_db.h create mode 100644 example/ipsec_offload/odp_ipsec_offload_sp_db.c create mode 100644 example/ipsec_offload/odp_ipsec_offload_sp_db.h create mode 100644 example/ipsec_offload/run_left create mode 100644 
example/ipsec_offload/run_right diff --git a/example/Makefile.am b/example/Makefile.am index dfc07b6..24b9e52 100644 --- a/example/Makefile.am +++ b/example/Makefile.am @@ -2,6 +2,7 @@ SUBDIRS = classifier \ generator \ hello \ ipsec \ + ipsec_offload \ l2fwd_simple \ l3fwd \ packet \ diff --git a/example/ipsec_offload/.gitignore b/example/ipsec_offload/.gitignore new file mode 100644 index 000..2fc73aa --- /dev/null +++ b/example/ipsec_offload/.gitignore @@ -0,0 +1 @@ +odp_ipsec_offload diff --git a/example/ipsec_offload/Makefile.am b/example/ipsec_offload/Makefile.am new file mode 100644 index 000..a61b923 --- /dev/null +++ b/example/ipsec_offload/Makefile.am @@ -0,0 +1,19 @@ +include $(top_srcdir)/example/Makefile.inc + +bin_PROGRAMS = odp_ipsec_offload$(EXEEXT) +odp_ipsec_offload_LDFLAGS = $(AM_LDFLAGS) -static +odp_ipsec_offload_CFLAGS = $(AM_CFLAGS) -I${top_srcdir}/example + +noinst_HEADERS = \ + $(top_srcdir)/example/ipsec_offload/odp_ipsec_offload_cache.h \ + $(top_srcdir)/example/ipsec_offload/odp_ipsec_offload_fwd_db.h \ + $(top_srcdir)/example/ipsec_offload/odp_ipsec_offload_misc.h \ + $(top_srcdir)/example/ipsec_offload/odp_ipsec_offload_sa_db.h \ + $(top_srcdir)/example/ipsec_offload/odp_ipsec_offload_sp_db.h \ + $(top_srcdir)/example/example_debug.h + +dist_odp_ipsec_offload_SOURCES = odp_ipsec_offload.c \ +odp_ipsec_offload_sa_db.c \ +odp_ipsec_offload_sp_db.c \ +odp_ipsec_offload_fwd_db.c \ +odp_ipsec_offload_cache.c diff --git a/example/ipsec_offload/odp_ipsec_offload.c b/example/ipsec_offload/odp_ipsec_offload.c new file mode 100644 index 000..ebf70d7 --- /dev/null +++ b/example/ipsec_offload/odp_ipsec_offload.c @@ -0,0 +1,872 @@ +/* Copyright (c) 2017, Linaro Limited + * Copyright (C) 2017 NXP + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @file + * + * @example odp_ipsec_offload.c ODP basic packet IO cross connect with IPsec + * test application + */ + +#define _DEFAULT_SOURCE +/* enable strtok */ +#define _POSIX_C_SOURCE 200112L +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#define MAX_WORKERS 32 /**< maximum number of worker threads */ + +/** + * Parsed command line application arguments + */ +typedef struct { + int cpu_count; + int flows;
[lng-odp] [PATCH 1/8] validation: crypto: add tests for checking message digests
From: Dmitry Eremin-Solenikov Currently ODP testsuite only verifies generation of digests. Let's also verify that checking the digest actually works. Test that check function will accept valid digest and that it will reject wrong digests. Signed-off-by: Dmitry Eremin-Solenikov --- /** Email created from pull request 10 (lumag:crypto-update) ** https://github.com/Linaro/odp/pull/10 ** Patch: https://github.com/Linaro/odp/pull/10.patch ** Base sha: e91cf8bb39da24d2a7dbfbb328aa35d1c4cab4ea ** Merge commit sha: 2f5b8229a5006f9dffa717fb31aa784093b8ae76 **/ test/common_plat/validation/api/crypto/crypto.h| 12 +- .../validation/api/crypto/odp_crypto_test_inp.c| 172 +++-- 2 files changed, 169 insertions(+), 15 deletions(-) diff --git a/test/common_plat/validation/api/crypto/crypto.h b/test/common_plat/validation/api/crypto/crypto.h index c25cbb3..49ca512 100644 --- a/test/common_plat/validation/api/crypto/crypto.h +++ b/test/common_plat/validation/api/crypto/crypto.h @@ -22,10 +22,14 @@ void crypto_test_enc_alg_aes128_gcm(void); void crypto_test_enc_alg_aes128_gcm_ovr_iv(void); void crypto_test_dec_alg_aes128_gcm(void); void crypto_test_dec_alg_aes128_gcm_ovr_iv(void); -void crypto_test_alg_hmac_md5(void); -void crypto_test_alg_hmac_sha1(void); -void crypto_test_alg_hmac_sha256(void); -void crypto_test_alg_hmac_sha512(void); +void crypto_test_gen_alg_hmac_md5(void); +void crypto_test_check_alg_hmac_md5(void); +void crypto_test_gen_alg_hmac_sha1(void); +void crypto_test_check_alg_hmac_sha1(void); +void crypto_test_gen_alg_hmac_sha256(void); +void crypto_test_check_alg_hmac_sha256(void); +void crypto_test_gen_alg_hmac_sha512(void); +void crypto_test_check_alg_hmac_sha512(void); /* test arrays: */ extern odp_testinfo_t crypto_suite[]; diff --git a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c index 42149ac..9af7ba3 100644 --- a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c +++ 
b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c @@ -65,6 +65,7 @@ static const char *cipher_alg_name(odp_cipher_alg_t cipher) * buffer can be used. * */ static void alg_test(odp_crypto_op_t op, +odp_bool_t should_fail, odp_cipher_alg_t cipher_alg, odp_crypto_iv_t ses_iv, uint8_t *op_iv_ptr, @@ -239,6 +240,10 @@ static void alg_test(odp_crypto_op_t op, op_params.override_iv_ptr = op_iv_ptr; op_params.hash_result_offset = plaintext_len; + if (0 != digest_len) { + memcpy(data_addr + op_params.hash_result_offset, + digest, digest_len); + } rc = odp_crypto_operation(&op_params, &posted, &result); if (rc < 0) { @@ -259,8 +264,15 @@ static void alg_test(odp_crypto_op_t op, odp_crypto_compl_free(compl_event); } - CU_ASSERT(result.ok); CU_ASSERT(result.pkt == pkt); + CU_ASSERT(result.ctx == (void *)0xdeadbeef); + + if (should_fail) { + CU_ASSERT(!result.ok); + goto cleanup; + } + + CU_ASSERT(result.ok); if (cipher_alg != ODP_CIPHER_ALG_NULL) CU_ASSERT(!memcmp(data_addr, ciphertext, ciphertext_len)); @@ -268,8 +280,6 @@ static void alg_test(odp_crypto_op_t op, if (op == ODP_CRYPTO_OP_ENCODE && auth_alg != ODP_AUTH_ALG_NULL) CU_ASSERT(!memcmp(data_addr + op_params.hash_result_offset, digest, digest_len)); - - CU_ASSERT(result.ctx == (void *)0xdeadbeef); cleanup: rc = odp_crypto_session_destroy(session); CU_ASSERT(!rc); @@ -453,6 +463,7 @@ void crypto_test_enc_alg_3des_cbc(void) continue; alg_test(ODP_CRYPTO_OP_ENCODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, NULL, @@ -488,6 +499,7 @@ void crypto_test_enc_alg_3des_cbc_ovr_iv(void) continue; alg_test(ODP_CRYPTO_OP_ENCODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, tdes_cbc_reference_iv[i], @@ -527,6 +539,7 @@ void crypto_test_dec_alg_3des_cbc(void) continue; alg_test(ODP_CRYPTO_OP_DECODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, NULL, @@ -564,6 +577,7 @@ void crypto_test_dec_alg_3des_cbc_ovr_iv(void) continue; alg_test(ODP_CRYPTO_OP_DECODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, tdes_cbc_reference_iv[i], @@ -610,6 +624,7 @@ void 
crypto_test_enc_alg_aes128_gcm(void
Re: [lng-odp] [PATCH v5 1/3] validation: packet: increase test pool size
For this patch series, Reviewed-by: Balakrishna Garapati /Krishna On 31 March 2017 at 14:18, Matias Elo wrote: > Previously packet_test_concatsplit() could fail on some pool > implementations as the pool ran out of buffers. Increase the default pool size > and use capability to make sure the value is valid. > > Signed-off-by: Matias Elo > --- > test/common_plat/validation/api/packet/packet.c | 7 ++- > 1 file changed, 6 insertions(+), 1 deletion(-) > > diff --git a/test/common_plat/validation/api/packet/packet.c > b/test/common_plat/validation/api/packet/packet.c > index 669122a..1997139 100644 > --- a/test/common_plat/validation/api/packet/packet.c > +++ b/test/common_plat/validation/api/packet/packet.c > @@ -13,6 +13,8 @@ > #define PACKET_BUF_LEN ODP_CONFIG_PACKET_SEG_LEN_MIN > /* Reserve some tailroom for tests */ > #define PACKET_TAILROOM_RESERVE 4 > +/* Number of packets in the test packet pool */ > +#define PACKET_POOL_NUM 300 > > static odp_pool_t packet_pool, packet_pool_no_uarea, > packet_pool_double_uarea; > static uint32_t packet_len; > @@ -109,6 +111,7 @@ int packet_suite_init(void) > uint32_t udat_size; > uint8_t data = 0; > uint32_t i; > + uint32_t num = PACKET_POOL_NUM; > > if (odp_pool_capability(&capa) < 0) { > printf("pool_capability failed\n"); > @@ -130,13 +133,15 @@ int packet_suite_init(void) > segmented_packet_len = capa.pkt.min_seg_len * >capa.pkt.max_segs_per_pkt; > } > + if (capa.pkt.max_num != 0 && capa.pkt.max_num < num) > + num = capa.pkt.max_num; > > odp_pool_param_init(&params); > > params.type = ODP_POOL_PACKET; > params.pkt.seg_len= capa.pkt.min_seg_len; > params.pkt.len= capa.pkt.min_seg_len; > - params.pkt.num= 100; > + params.pkt.num= num; > params.pkt.uarea_size = sizeof(struct udata_struct); > > packet_pool = odp_pool_create("packet_pool", &params); > -- > 2.7.4 > >
[lng-odp] [PATCH] linux-generic: debug: enable helper use from c++ programs
From: Bill Fischofer The ODP_STATIC_ASSERT() macro expands to _Static_assert(), however when used in C++ programs this needs to expand to static_assert(). This resolves Bug https://bugs.linaro.org/show_bug.cgi?id=2852 Reported-by: Moshe Tubul Signed-off-by: Bill Fischofer --- /** Email created from pull request 9 (Bill-Fischofer-Linaro:master) ** https://github.com/Linaro/odp/pull/9 ** Patch: https://github.com/Linaro/odp/pull/9.patch ** Base sha: 503708078bf6ab9228d23ad65660b42248600c2d ** Merge commit sha: 439821b5943299fcdf399dd63afc3555609007cd **/ platform/linux-generic/include/odp/api/debug.h | 41 -- .../common_plat/miscellaneous/odp_api_from_cpp.cpp | 3 +- 2 files changed, 32 insertions(+), 12 deletions(-) diff --git a/platform/linux-generic/include/odp/api/debug.h b/platform/linux-generic/include/odp/api/debug.h index 7db1433..b0f91b1 100644 --- a/platform/linux-generic/include/odp/api/debug.h +++ b/platform/linux-generic/include/odp/api/debug.h @@ -19,17 +19,36 @@ extern "C" { #include -#if defined(__GNUC__) && !defined(__clang__) - -#if __GNUC__ < 4 || (__GNUC__ == 4 && (__GNUC_MINOR__ < 6)) - /** - * @internal _Static_assert was only added in GCC 4.6. Provide a weak replacement - * for previous versions. + * @internal _Static_assert was only added in GCC 4.6 and the C++ version + * static_assert for g++ 6 and above. Provide a weak replacement for previous + * versions. */ -#define _Static_assert(e, s) (extern int (*static_assert_checker(void)) \ - [sizeof(struct { unsigned int error_if_negative:(e) ? 
1 : -1; })]) +#define _odp_merge(a, b) a##b +#define _odp_label(a) _odp_merge(_ODP_SASSERT_, a) +#define _ODP_SASSERT _odp_label(__COUNTER__) +#define _ODP_SASSERT_ENUM(e) { _ODP_SASSERT = 1 / !!(e) } +#define _odp_static_assert(e, s) enum _ODP_SASSERT_ENUM(e) + +#if defined(__clang__) +#if defined(__cplusplus) +#if !__has_feature(cxx_static_assert) && !defined(static_assert) +#definestatic_assert(e, s) _odp_static_assert(e, s) +#endif +#elif !__has_feature(c_static_assert) && !defined(_Static_assert) +#define _Static_assert(e, s) _odp_static_assert(e, s) +#endif +#elif defined(__GNUC__) +#if __GNUC__ < 4 || (__GNUC__ == 4 && (__GNUC_MINOR__ < 6)) || \ + (__GNUC__ < 6 && defined(__cplusplus)) +#if defined(__cplusplus) +#if !defined(static_assert) +#definestatic_assert(e, s) _odp_static_assert(e, s) +#endif +#elif !defined(_Static_assert) +#define _Static_assert(e, s) _odp_static_assert(e, s) +#endif #endif #endif @@ -39,9 +58,11 @@ extern "C" { * if condition 'cond' is false. Macro definition is empty when compiler is not * supported or the compiler does not support static assertion. */ -#define ODP_STATIC_ASSERT(cond, msg) _Static_assert(cond, msg) +#ifndef __cplusplus +#define ODP_STATIC_ASSERT(cond, msg) _Static_assert(cond, msg) -#ifdef __cplusplus +#else +#define ODP_STATIC_ASSERT(cond, msg) static_assert(cond, msg) } #endif diff --git a/test/common_plat/miscellaneous/odp_api_from_cpp.cpp b/test/common_plat/miscellaneous/odp_api_from_cpp.cpp index 2b30786..4578ae4 100644 --- a/test/common_plat/miscellaneous/odp_api_from_cpp.cpp +++ b/test/common_plat/miscellaneous/odp_api_from_cpp.cpp @@ -1,10 +1,9 @@ #include #include -#include +#include int main(int argc ODP_UNUSED, const char *argv[] ODP_UNUSED) { - printf("\tODP API version: %s\n", odp_version_api_str()); printf("\tODP implementation version: %s\n", odp_version_impl_str());
[lng-odp] [PATCH 1/4] validation: crypto: add tests for checking message digests
From: Dmitry Eremin-Solenikov Currently ODP testsuite only verifies generation of digests. Let's also verify that checking the digest actually works. Test that check function will accept valid digest and that it will reject wrong digests. Signed-off-by: Dmitry Eremin-Solenikov --- /** Email created from pull request 11 (lumag:crypto-update-main) ** https://github.com/Linaro/odp/pull/11 ** Patch: https://github.com/Linaro/odp/pull/11.patch ** Base sha: c7014b4848c276c17dcdddab103ce88b3eb29235 ** Merge commit sha: fce85c94ae08a41d268938b7e8e0c6d1f67d24ea **/ test/common_plat/validation/api/crypto/crypto.h| 6 +- .../validation/api/crypto/odp_crypto_test_inp.c| 150 - 2 files changed, 147 insertions(+), 9 deletions(-) diff --git a/test/common_plat/validation/api/crypto/crypto.h b/test/common_plat/validation/api/crypto/crypto.h index 9b909aa..661fe5d 100644 --- a/test/common_plat/validation/api/crypto/crypto.h +++ b/test/common_plat/validation/api/crypto/crypto.h @@ -22,8 +22,10 @@ void crypto_test_enc_alg_aes128_gcm(void); void crypto_test_enc_alg_aes128_gcm_ovr_iv(void); void crypto_test_dec_alg_aes128_gcm(void); void crypto_test_dec_alg_aes128_gcm_ovr_iv(void); -void crypto_test_alg_hmac_md5(void); -void crypto_test_alg_hmac_sha256(void); +void crypto_test_gen_alg_hmac_md5(void); +void crypto_test_check_alg_hmac_md5(void); +void crypto_test_gen_alg_hmac_sha256(void); +void crypto_test_check_alg_hmac_sha256(void); /* test arrays: */ extern odp_testinfo_t crypto_suite[]; diff --git a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c index 43ddb2f..0909741 100644 --- a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c +++ b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c @@ -65,6 +65,7 @@ static const char *cipher_alg_name(odp_cipher_alg_t cipher) * buffer can be used. 
* */ static void alg_test(odp_crypto_op_t op, +odp_bool_t should_fail, odp_cipher_alg_t cipher_alg, odp_crypto_iv_t ses_iv, uint8_t *op_iv_ptr, @@ -239,6 +240,10 @@ static void alg_test(odp_crypto_op_t op, op_params.override_iv_ptr = op_iv_ptr; op_params.hash_result_offset = plaintext_len; + if (0 != digest_len) { + memcpy(data_addr + op_params.hash_result_offset, + digest, digest_len); + } rc = odp_crypto_operation(&op_params, &posted, &result); if (rc < 0) { @@ -259,8 +264,15 @@ static void alg_test(odp_crypto_op_t op, odp_crypto_compl_free(compl_event); } - CU_ASSERT(result.ok); CU_ASSERT(result.pkt == pkt); + CU_ASSERT(result.ctx == (void *)0xdeadbeef); + + if (should_fail) { + CU_ASSERT(!result.ok); + goto cleanup; + } + + CU_ASSERT(result.ok); if (cipher_alg != ODP_CIPHER_ALG_NULL) CU_ASSERT(!memcmp(data_addr, ciphertext, ciphertext_len)); @@ -268,8 +280,6 @@ static void alg_test(odp_crypto_op_t op, if (op == ODP_CRYPTO_OP_ENCODE && auth_alg != ODP_AUTH_ALG_NULL) CU_ASSERT(!memcmp(data_addr + op_params.hash_result_offset, digest, digest_len)); - - CU_ASSERT(result.ctx == (void *)0xdeadbeef); cleanup: rc = odp_crypto_session_destroy(session); CU_ASSERT(!rc); @@ -445,6 +455,7 @@ void crypto_test_enc_alg_3des_cbc(void) continue; alg_test(ODP_CRYPTO_OP_ENCODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, NULL, @@ -480,6 +491,7 @@ void crypto_test_enc_alg_3des_cbc_ovr_iv(void) continue; alg_test(ODP_CRYPTO_OP_ENCODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, tdes_cbc_reference_iv[i], @@ -519,6 +531,7 @@ void crypto_test_dec_alg_3des_cbc(void) continue; alg_test(ODP_CRYPTO_OP_DECODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, NULL, @@ -556,6 +569,7 @@ void crypto_test_dec_alg_3des_cbc_ovr_iv(void) continue; alg_test(ODP_CRYPTO_OP_DECODE, +0, ODP_CIPHER_ALG_3DES_CBC, iv, tdes_cbc_reference_iv[i], @@ -602,6 +616,7 @@ void crypto_test_enc_alg_aes128_gcm(void) continue; alg_test(ODP_CRYPTO_OP_ENCODE, +0, ODP_CIPHER_ALG_AES_GCM, iv, NULL, @@ -645,6 +660,7 @@ void crypto_tes
Re: [lng-odp] [RFC/API-NEXT 1/1] api: classification: packet hashing per class of service
Regards, Bala On 6 April 2017 at 00:53, Brian Brooks wrote: > On Fri, Dec 9, 2016 at 5:54 AM, Balasubramanian Manoharan > wrote: >> Adds support to spread packet from a single CoS to multiple queues by >> configuring hashing at CoS level. >> >> Signed-off-by: Balasubramanian Manoharan >> --- >> include/odp/api/spec/classification.h | 45 >> --- >> 1 file changed, 42 insertions(+), 3 deletions(-) >> >> diff --git a/include/odp/api/spec/classification.h >> b/include/odp/api/spec/classification.h >> index 0e442c7..220b029 100644 >> --- a/include/odp/api/spec/classification.h >> +++ b/include/odp/api/spec/classification.h >> @@ -126,6 +126,12 @@ typedef struct odp_cls_capability_t { >> /** Maximum number of CoS supported */ >> unsigned max_cos; >> >> + /** Maximun number of queue supported per CoS */ >> + unsigned max_queue_supported; >> + >> + /** Protocol header combination supported for Hashing */ >> + odp_pktin_hash_proto_t hash_supported; > > I like this idea and think it is critical for supporting > implementations that can handle lots of queues. What I don't quite > understand is how it relates to the rest of the Classification > functionality. For example, I would like all packets coming from the > physical interface to be hashed/parsed according to "hash_supported" > and then assigned to their respective queues. I would then like to > schedule from those queues. That is all. Is this possible? What > priority and synchronization will those queues have? This could be done by adding this configuration to the default Cos in the pktio interface. Currently there is hashing support with pktio interface but that will distribute the packet to odp_pktin_queue_t and the packets have to be received using odp_pktin_recv() function. All the queues belonging to the single queue group will have the same priority and synchronisation. 
> >> /** A Boolean to denote support of PMR range */ >> odp_bool_t pmr_range_supported; >> } odp_cls_capability_t; >> @@ -164,9 +170,25 @@ typedef enum { >> * Used to communicate class of service creation options >> */ >> typedef struct odp_cls_cos_param { >> - odp_queue_t queue; /**< Queue associated with CoS */ >> - odp_pool_t pool;/**< Pool associated with CoS */ >> - odp_cls_drop_t drop_policy; /**< Drop policy associated with CoS >> */ >> + /* Minimum number of queues to be linked to this CoS. >> +* If the number is greater than 1 then hashing has to be >> +* enabled */ >> + uint32_t num_queue; >> + /* Denotes whether hashing is enabled for queues in this CoS >> +* When hashing is enabled the queues are created by the >> implementation >> +* and application need not configure any queue to the class of >> service >> +*/ >> + odp_bool_t enable_hashing; >> + /* Protocol header fields which are included in packet input >> +* hash calculation */ >> + odp_pktin_hash_proto_t hash; >> + /* If hashing is disabled, then application has to configure >> +* this queue and packets are delivered to this queue */ >> + odp_queue_t queue; >> + /* Pool associated with CoS */ >> + odp_pool_t pool; >> + /* Drop policy associated with CoS */ >> + odp_cls_drop_t drop_policy; >> } odp_cls_cos_param_t; >> >> /** >> @@ -209,6 +231,23 @@ int odp_cls_capability(odp_cls_capability_t >> *capability); >> odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param); >> >> /** >> + * Queue hash result >> + * Returns the queue within a CoS in which a particular packet will be >> enqueued >> + * based on the packet parameters and hash protocol field configured with >> the >> + * class of service. >> + * >> + * @param cos class of service >> + * @param packet Packet handle >> + * >> + * @retval Returns the queue handle on which this packet will be >> + * enqueued. 
>> + * @retval ODP_QUEUE_INVALID for error case >> + * >> + * @note The packet has to be updated with valid header pointers L2, L3 and >> L4. >> + */ >> +odp_queue_t odp_queue_hash_result(odp_cos_t cos, odp_packet_t packet); >> + >> +/** >> * Discard a class-of-service along with all its associated resources >> * >> * @param[in] cos_id class-of-service instance. >> -- >> 1.9.1 >>
Re: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler
On 5 April 2017 at 18:50, Brian Brooks wrote: > On 04/05 21:27:37, Jerin Jacob wrote: >> -----Original Message----- >> > Date: Tue, 4 Apr 2017 13:47:52 -0500 >> > From: Brian Brooks >> > To: lng-odp@lists.linaro.org >> > Subject: [lng-odp] [API-NEXT PATCH v2 00/16] A scalable software scheduler >> > X-Mailer: git-send-email 2.12.2 >> > >> > This work derives from Ola Liljedahl's prototype [1] which introduced a >> > scalable scheduler design based on primarily lock-free algorithms and >> > data structures designed to decrease contention. A thread searches >> > through a data structure containing only queues that are both non-empty >> > and allowed to be scheduled to that thread. Strict priority scheduling is >> > respected, and (W)RR scheduling may be used within queues of the same >> > priority. >> > Lastly, pre-scheduling or stashing is not employed since it is optional >> > functionality that can be implemented in the application. >> > >> > In addition to scalable ring buffers, the algorithm also uses unbounded >> > concurrent queues. LL/SC and CAS variants exist in cases where absence of >> > ABA problem cannot be proved, and also in cases where the compiler's atomic >> > built-ins may not be lowered to the desired instruction(s). Finally, a >> > version >> > of the algorithm that uses locks is also provided. >> > >> > See platform/linux-generic/include/odp_config_internal.h for further build >> > time configuration. >> > >> > Use --enable-schedule-scalable to conditionally compile this scheduler >> > into the library. >> >> This is interesting stuff. >> >> Do you have any performance/latency numbers in comparison to existing >> scheduler >> for completing say two stage(ORDERED->ATOMIC) or N stage pipeline on any >> platform? It is still a SW implementation; there is overhead associated with queue enqueue/dequeue and the scheduling itself. So for an N-stage pipeline, overhead will accumulate. 
If only a subset of threads are associated with each stage (this could be
beneficial for I-cache hit rate), there will be less need for scalability.
What is the recommended strategy here for OCTEON/ThunderX? All threads/cores
share all work?

> To give an idea, the avg latency reported by odp_sched_latency is down to
> half that of other schedulers (pre-scheduling/stashing disabled) on 4c A53,
> 16c A57, and 12c Broadwell. We are still preparing numbers, and I think
> it's worth mentioning that they are subject to change as this patch series
> changes over time.
>
> I am not aware of an existing benchmark that involves switching between
> different queue types. Perhaps this is happening in an example app?

This could be useful in e.g. IPsec termination. Use an atomic stage for the
replay protection check and update. Now ODP has ordered locks for that, so
the "atomic" (exclusive) section can be achieved from an ordered processing
stage. Perhaps Jerin knows some other application that utilises two-stage
ORDERED->ATOMIC processing.

>> When we say scalable scheduler, what application/means are used to
>> quantify scalability?

It starts with the design: use non-blocking data structures and try to
distribute data to threads so that they do not access shared data very often.
Some of this is a little detrimental to single-threaded performance; you need
to use more atomic operations. It seems to work well on ARM (A53, A57)
though; the penalty is higher on x86 (x86 is very good with spin locks,
cmpxchg seems to have more overhead compared to ldxr/stxr on ARM, which can
have less memory ordering constraints). We actually use different
synchronisation strategies on ARM and on x86 (compile-time configuration).
You can read more here:
https://docs.google.com/presentation/d/1BqAdni4aP4aHOqO6fNO39-0MN9zOntI-2ZnVTUXBNSQ

I also did an internal presentation on the scheduler prototype back at Las
Vegas; that presentation might also be somewhere on the Linaro web site.
>> Do you have any numbers in comparison to the existing scheduler to show
>> the magnitude of the scalability on any platform?
Re: [lng-odp] [API-NEXT PATCH v2 15/16] Add llqueue, an unbounded concurrent queue
On 05.04.2017 21:36, Ola Liljedahl wrote:
> On 5 April 2017 at 17:33, Dmitry Eremin-Solenikov wrote:
>> On 05.04.2017 17:40, Ola Liljedahl wrote:
>>> On 5 April 2017 at 14:20, Maxim Uvarov wrote:
>>>> On 04/05/17 01:46, Ola Liljedahl wrote:
>>>>> On 4 April 2017 at 21:25, Maxim Uvarov wrote:
>>>>>> it's better to have 2 separate files for that. One for
>>>>>> ODP_CONFIG_LLDSCD
>>>>> "better"? In what way?
>>> Please respond to the question. If you claim something is "better",
>>> you must be able to explain *why* it is better.
>>>
>>> *We* have explained why we think it is better to keep both
>>> implementations in the same file, close to each other. I think Brian's
>>> explanation was very good.
>>
>> Because it allows one to overview a complete implementation at once
>> instead of switching between two different modes.
> That's a good argument as well. It doesn't mean that the
> implementations should live in separate files.
>
> We keep both implementations in the same file but avoid interleaving
> the different functions (as is done now). This is actually what someone
> in our team wanted.

Once you have those implementations de-interleaved, it should be easy to
split them into separate files. I won't insist on that though; for me,
de-interleaving them would be good enough.

--
With best wishes
Dmitry
Re: [lng-odp] [RFC][PATCH] added asymmetric crypto algorithm support.
On 05.04.2017 13:02, Umesh Kartha wrote:
> Asymmetric crypto algorithms are essential in protocols such as SSL/TLS.
> As the current ODP crypto library lacks support for asymmetric crypto
> algorithms, this RFC is an attempt to address it and add support for the
> same.

If you target TLS, you should probably also include FFDH support as a
first-class citizen. Is this API sufficient and efficient enough to produce a
working TLS server? Did you consider having separate interfaces for symmetric
and for public-key crypto?

> The asymmetric algorithms featured in this version are
>
> 1 RSA
>     - RSA Sign
>     - RSA Verify
>     - RSA Public Encrypt
>     - RSA Private Decrypt
>
> Padding schemes supported for RSA operations are
>     * RSA PKCS#1 BT1
>     * RSA PKCS#1 BT2
>     * RSA PKCS#1 OAEP
>     * RSA PKCS#1 PSS
>
> 2 ECDSA
>     - ECDSA Sign
>     - ECDSA Verify
>
> Curves supported for ECDSA operations are
>     * Prime192v1
>     * Secp224k1
>     * Prime256v1
>     * Secp384r1
>     * Secp521r1

What about EdDSA?

> 3 MODEXP
>
> 4 FUNDAMENTAL ECC
>     - Point Addition
>     - Point Multiplication
>     - Point Doubling
>
> Curves supported for fundamental ECC operations are the same as those for
> ECDSA operations.
>
> Signed-off-by: Umesh Kartha
> ---
>  include/odp/api/spec/crypto.h | 570 +-
>  1 file changed, 569 insertions(+), 1 deletion(-)
>
> diff --git include/odp/api/spec/crypto.h include/odp/api/spec/crypto.h
> index d30f050..4cd5a3d 100644
> --- include/odp/api/spec/crypto.h
> +++ include/odp/api/spec/crypto.h
> @@ -57,6 +57,14 @@ typedef enum {
>  	ODP_CRYPTO_OP_ENCODE,
>  	/** Decrypt and/or verify authentication ICV */
>  	ODP_CRYPTO_OP_DECODE,
> +	/** Perform asymmetric crypto RSA operation */
> +	ODP_CRYPTO_OP_RSA,
> +	/** Perform asymmetric crypto modex operation */
> +	ODP_CRYPTO_OP_MODEX,
> +	/** Perform asymmetric crypto ECDSA operation */
> +	ODP_CRYPTO_OP_ECDSA,
> +	/** Perform asymmetric crypto ECC point operation */
> +	ODP_CRYPTO_OP_FECC,

This looks like a bad abstraction to me.
RSA and ECDSA also follow the encode/decode and sign/verify scheme, so my
first impression would be that they should also fall into
OP_ENCODE/OP_DECODE, with the exact operation being selected by other
parameters. Also, I am not sure why exactly you need the modexp and ECC point
operations.

> } odp_crypto_op_t;
>
> /**
> @@ -213,6 +221,202 @@ typedef union odp_crypto_auth_algos_t {
> } odp_crypto_auth_algos_t;
>
> /**
> + * Asymmetric crypto algorithms
> + */
> +typedef enum {
> +	/** RSA asymmetric key algorithm */
> +	ODP_ASYM_ALG_RSA,
> +
> +	/** Modular exponentiation algorithm */
> +	ODP_ASYM_ALG_MODEXP,
> +
> +	/** ECDSA authentication algorithm */
> +	ODP_ASYM_ALG_ECDSA,
> +
> +	/** Fundamental ECC algorithm */
> +	ODP_ASYM_ALG_FECC

What is "Fundamental"?

> +} odp_asym_alg_t;
> +
> +/**
> + * Asymmetric algorithms in a bit field structure
> + */
> +typedef union odp_crypto_asym_algos_t {
> +	/** Asymmetric algorithms */
> +	struct {
> +		/** ODP_ASYM_ALG_RSA_PKCS */
> +		uint16_t alg_rsa_pkcs :1;
> +
> +		/** ODP_ASYM_ALG_MODEXP */
> +		uint16_t alg_modexp :1;
> +
> +		/** ODP_ASYM_ALG_ECDSA */
> +		uint16_t alg_ecdsa :1;
> +
> +		/** ODP_ASYM_FECC */
> +		uint16_t alg_fecc :1;
> +	} bit;
> +	/** All bits of the bit field structure
> +	 *
> +	 * This field can be used to set/clear all flags, or bitwise
> +	 * operations over the entire structure.
> +	 */
> +	uint16_t all_bits;
> +} odp_crypto_asym_algos_t;
> +
> +/**
> + * Asymmetric Crypto RSA PKCS operation type
> + */
> +typedef enum {
> +	/** Encrypt with PKCS RSA public key */
> +	ODP_CRYPTO_RSA_OP_PUBLIC_ENCRYPT,
> +
> +	/** Decrypt with PKCS RSA private key */
> +	ODP_CRYPTO_RSA_OP_PRIVATE_DECRYPT,
> +
> +	/** Sign with RSA private key */
> +	ODP_CRYPTO_RSA_OP_SIGN,
> +
> +	/** Verify with RSA public key */
> +	ODP_CRYPTO_RSA_OP_VERIFY,
> +
> +} odp_crypto_rsa_op_t;
> +
> +/**
> + * Asymmetric Crypto RSA PKCS padding type
> + */
> +typedef enum {
> +	/** RSA padding type none */
> +	ODP_CRYPTO_RSA_PADDING_NONE,
> +
> +	/** RSA padding type PKCS#1 BT1 */
> +	ODP_CRYPTO_RSA_PADDING_BT1,
> +
> +	/** RSA padding type PKCS#1 BT2 */
> +	ODP_CRYPTO_RSA_PADDING_BT2,
> +
> +	/** RSA padding type PKCS#1 OAEP */
> +	ODP_CRYPTO_RSA_PADDING_OAEP,
> +
> +	/** RSA padding type PKCS#1 PSS */
> +	ODP_CRYPTO_RSA_PADDING_PSS,
> +
> +} odp_crypto_rsa_padding_t;
> +
> +/**
> + * RSA padding types in a bitfield structure.
> + */
> +typedef union odp_crypto_rsa_pad_bits_t {
> +	/** RSA padding type */
> +	struct {
> +		/** ODP_CRYPTO_RSA_PADDING_NONE */
> +		uint16_t rsa_pad_none
Re: [lng-odp] [API-NEXT PATCH v2 07/16] test: odp_scheduling: Handle dequeueing from a concurrent queue
On 5 April 2017 at 23:39, Maxim Uvarov wrote:
> On 04/05/17 17:30, Ola Liljedahl wrote:
>> On 5 April 2017 at 14:50, Maxim Uvarov wrote:
>>> On 04/05/17 06:57, Honnappa Nagarahalli wrote:
>>>> This can go into master/api-next as an independent patch. Agree?
>>>
>>> agree. If we accept an implementation where events can be 'delayed'
>> Probably all platforms with HW queues.
>>
>>> then it looks like we missed some API to sync queues.
>> When would those APIs be used?
>>
> Might be in a case like that. Maybe it's not needed in a real-world
> application.

This was a test program. I don't see the same situation occurring in a real
world application. I could be wrong.

> My point is that if the situation of postponed events is accepted, then we
> need to document that in the API doxygen comments.

I think the asynchronous behaviour is the default. ODP is a hardware
abstraction. HW is often asynchronous; writes are posted, etc. Ensuring
synchronous behaviour costs performance.

Single-threaded software is "synchronous": writes are immediately visible to
the thread. But as soon as you go multi-threaded and don't use locks to
access shared resources, software also becomes "asynchronous" (don't know if
it is the right word here). Only if you use locks to synchronise accesses to
shared memory do you return to some form of sequential consistency (all
threads see updates in the same order). You don't want to use locks; that
quickly creates scalability bottlenecks.

Since the scalable scheduler does its best to avoid locks (non-scalable) and
sequential consistency (slow), instead utilising lock-less and lock-free
algorithms and weak memory ordering (e.g. acquire/release), it exposes the
underlying hardware characteristics.

> Maxim.
>
>>> But I do not see why we need this patch.
>>> On the same cpu, the test enqueues 1 event and after that dequeues 1
>>> event:
>>>
>>> for (i = 0; i < QUEUE_ROUNDS; i++) {
>>> 	ev = odp_buffer_to_event(buf);
>>>
>>> 	if (odp_queue_enq(queue, ev)) {
>>> 		LOG_ERR(" [%i] Queue enqueue failed.\n", thr);
>>> 		odp_buffer_free(buf);
>>> 		return -1;
>>> 	}
>>>
>>> 	ev = odp_queue_deq(queue);
>>>
>>> 	buf = odp_buffer_from_event(ev);
>>>
>>> 	if (!odp_buffer_is_valid(buf)) {
>>> 		LOG_ERR(" [%i] Queue empty.\n", thr);
>>> 		return -1;
>>> 	}
>>> }
>>>
>>> Where exactly can this event be delayed?
>> In the memory system.
>>
>>> If other threads do the same, then all do enqueue 1 event first and
>>> then dequeue one event. I can understand the problem with enqueueing on
>>> one cpu and dequeueing on another cpu. But on the same cpu it has to
>>> always work. Isn't it?
>> No.
>>
>>> Maxim.
>>>
>>>> On 4 April 2017 at 21:22, Brian Brooks wrote:
>>>>> On 04/04 17:26:12, Bill Fischofer wrote:
>>>>>> On Tue, Apr 4, 2017 at 3:37 PM, Brian Brooks wrote:
>>>>>>> On 04/04 21:59:15, Maxim Uvarov wrote:
>>>>>>>> On 04/04/17 21:47, Brian Brooks wrote:
>>>>>>>>> Signed-off-by: Ola Liljedahl
>>>>>>>>> Reviewed-by: Brian Brooks
>>>>>>>>> Reviewed-by: Honnappa Nagarahalli
>>>>>>>>> Reviewed-by: Kevin Wang
>>>>>>>>> ---
>>>>>>>>>  test/common_plat/performance/odp_scheduling.c | 12 ++--
>>>>>>>>>  1 file changed, 10 insertions(+), 2 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/test/common_plat/performance/odp_scheduling.c
>>>>>>>>> b/test/common_plat/performance/odp_scheduling.c
>>>>>>>>> index c74a0713..38e76257 100644
>>>>>>>>> --- a/test/common_plat/performance/odp_scheduling.c
>>>>>>>>> +++ b/test/common_plat/performance/odp_scheduling.c
>>>>>>>>> @@ -273,7 +273,7 @@ static int test_plain_queue(int thr, test_globals_t *globals)
>>>>>>>>>  	test_message_t *t_msg;
>>>>>>>>>  	odp_queue_t queue;
>>>>>>>>>  	uint64_t c1, c2, cycles;
>>>>>>>>> -	int i;
>>>>>>>>> +	int i, j;
>>>>>>>>>
>>>>>>>>>  	/* Alloc test message */
>>>>>>>>>  	buf = odp_buffer_alloc(globals->pool);
>>>>>>>>> @@ -307,7 +307,15 @@ static int test_plain_queue(int thr, test_globals_t *globals)
>>>>>>>>>  		return -1;
>>>>>>>>>  	}
>>>>>>>>>
>>>>>>>>> -	ev = odp_queue_deq(queue);
>>>>>>>>> +	/* When enqueue and dequeue are decoupled (e.g. not using a
>>>>>>>>> +	 * common lock), an enqueued event may not be immediately
>>>>>>>>> +	 * visible to dequeue. So we just try again for a while. */
>>>>>>>>> +	for (j = 0; j < 100; j++) {
>>>>>>>>
>>>>>>>> where 100 number comes from?
>>>>>>>
>>>>>>> It is the retry count. Perhaps it could be a bit lower, or a bit
>>>>>>> higher, but it works well.
>>>>>>
>>>>>> Actually, it's incorrect. What happens if all 100 retries fail? You'll
>>>>>> call odp_buffer_from_event() for ODP_EVENT_INVALID, which is
>>>>>> undefined.
>>>>>
>>>>> In