Merged,
Maxim.

On 11/19/2015 17:30, Bill Fischofer wrote:
That's a reasonable use case. The main issue is that the scheduler needs a rewrite to handle scheduler groups properly in cases like this. The scheduler should not be scanning queues that it knows are not eligible for the current core, so the organization of the scheduling queues themselves needs to be rethought. In fact, the scheduled queues should probably not be stored in internal queues (as linux-generic does today) but in lock-free structures that can be accessed and scanned very efficiently.
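To make the "don't scan ineligible queues" point concrete, here is a minimal sketch (plain C with hypothetical names, not the linux-generic implementation): each core keeps one bitmap of the groups it belongs to and one bitmap of groups that currently hold events, so the next group worth scanning is found in constant time and empty or ineligible groups are never visited:

```c
#include <stdint.h>

#define MAX_GRPS 64 /* one bit per scheduler group in this sketch */

/* Hypothetical per-core scheduler state. */
typedef struct {
	uint64_t member_mask;   /* groups this core belongs to */
	uint64_t nonempty_mask; /* groups with queued events, maintained
				 * on enqueue/dequeue */
} core_sched_state_t;

/* Return the lowest-numbered group worth scanning for this core,
 * or -1 if there is no work. Cost is O(1) no matter how many
 * empty or ineligible groups exist. */
static int next_eligible_group(const core_sched_state_t *s)
{
	uint64_t work = s->member_mask & s->nonempty_mask;

	if (work == 0)
		return -1;
	return __builtin_ctzll(work); /* index of lowest set bit */
}
```

A real implementation would update `nonempty_mask` atomically from enqueue/dequeue paths; the point of the sketch is only that the scan cost stops depending on the total queue count.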

I mentioned some time back that what's basically needed is to divide the scheduler into separate scheduler and dispatcher functions: the scheduler proper maintains the per-core dispatch lists, while odp_schedule() becomes just a dispatcher call that picks up the top of the local dispatch list very efficiently.
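As a rough illustration of that split (plain C, hypothetical names, not the ODP API): the scheduler proper appends work to a core-local dispatch list, and the step that odp_schedule() would reduce to is a constant-time pop from that list, with no queue scanning on the fast path:

```c
#include <stddef.h>

/* Hypothetical event carrying an intrusive list link. */
typedef struct event {
	struct event *next;
	int id;
} event_t;

/* Core-local dispatch list (singly linked, head/tail). */
typedef struct {
	event_t *head; /* next event to hand out */
	event_t *tail;
} dispatch_list_t;

/* "Scheduler proper": runs the expensive group/queue logic and
 * refills the core-local list. */
static void scheduler_fill(dispatch_list_t *dl, event_t *ev)
{
	ev->next = NULL;
	if (dl->tail)
		dl->tail->next = ev;
	else
		dl->head = ev;
	dl->tail = ev;
}

/* "Dispatcher": what odp_schedule() would reduce to - pop the
 * head of the local list, or NULL if it is empty. */
static event_t *dispatch(dispatch_list_t *dl)
{
	event_t *ev = dl->head;

	if (ev) {
		dl->head = ev->next;
		if (!dl->head)
			dl->tail = NULL;
	}
	return ev;
}
```

Since the list is strictly core-local, the dispatch path needs no locks; only the refill step has to coordinate with other cores.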

But in the meantime your patch looks reasonable, so:

Reviewed-by: Bill Fischofer <bill.fischo...@linaro.org>

On Thu, Nov 19, 2015 at 8:06 AM, Wallen, Carl (Nokia - FI/Espoo) <carl.wal...@nokia.com> wrote:

    Hi,

    We have a use case where we need to create a queue for each
    running core, i.e. we need to create as many sched-groups (each
    with one unique bit set in the thrmask) as there are cores in
    the system, and then create as many queues, tying one to each
    sched-group. This way we can create queues from which events are
    certain to be scheduled on a single core only.

    We use these "single core queues" to trigger e.g. control
    operations or functions that need to be run on each core.

    For this you would need as many sched-groups as you have cores,
    plus some additional ones for whatever partitioning the
    application wants to do.
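The setup described above, one sched-group per core with exactly one bit set in each thrmask, can be modeled with plain bitmasks. In real ODP code this would use odp_thrmask_zero(), odp_thrmask_set() and odp_schedule_group_create(); the types and helpers below are illustrative stand-ins only:

```c
#include <stdint.h>

/* Stand-in for odp_thrmask_t: one bit per core/thread. */
typedef uint64_t thrmask_model_t;

/* Mask for a sched-group containing exactly one core. */
static thrmask_model_t single_core_mask(int core)
{
	return (thrmask_model_t)1 << core;
}

/* An event in a group's queue may only be scheduled on a core
 * whose bit is set in the group's thrmask. */
static int schedulable_on(thrmask_model_t group_mask, int core)
{
	return (group_mask >> core) & 1;
}
```

With one such group (and one queue tied to it) per core, an event enqueued to core i's queue is schedulable on core i and on no other core, which is exactly the "single core queue" property.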

    Adding more sched-groups did not seem to impact performance, at
    least not while they were unused.
    I'm a bit worried about the case where you have a large number
    of cores running and use these "single-core queues" for internal
    management operations, meaning that events on them would be
    quite rare. Would the ODP scheduler (linux-generic at least)
    keep polling these queues for events all the time even if they
    are empty, or is there some logic to only dequeue from queues
    that actually contain events?

    /carl

    From: EXT Bill Fischofer [mailto:bill.fischo...@linaro.org]
    Sent: Thursday, November 19, 2015 3:44 PM
    To: Wallen, Carl (Nokia - FI/Espoo) <carl.wal...@nokia.com>
    Cc: LNG ODP Mailman List <lng-odp@lists.linaro.org>
    Subject: Re: [lng-odp] [PATCH 1/1] linux-generic: config: increase
    ODP_CONFIG_SCHED_GRPS to 256

    No problem with picking a "good" number, but curious as to what
    the use case is for this proposed increase.

    On Thu, Nov 19, 2015 at 5:58 AM, Carl Wallen
    <carl.wal...@nokia.com <mailto:carl.wal...@nokia.com>> wrote:
    Increase the ODP_CONFIG_SCHED_GRPS define from 16 to 256 to support a
    larger number of scheduling groups by default.

    Signed-off-by: Carl Wallen <carl.wal...@nokia.com>
    ---
     platform/linux-generic/include/odp/config.h | 2 +-
     1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/platform/linux-generic/include/odp/config.h
    b/platform/linux-generic/include/odp/config.h
    index ee23df3..da8856f 100644
    --- a/platform/linux-generic/include/odp/config.h
    +++ b/platform/linux-generic/include/odp/config.h
    @@ -61,7 +61,7 @@ static inline int odp_config_sched_prios(void)
     /**
      * Number of scheduling groups
      */
    -#define ODP_CONFIG_SCHED_GRPS 16
    +#define ODP_CONFIG_SCHED_GRPS 256
     static inline int odp_config_sched_grps(void)
     {
            return ODP_CONFIG_SCHED_GRPS;
    --
    2.1.4

    _______________________________________________
    lng-odp mailing list
    lng-odp@lists.linaro.org
    https://lists.linaro.org/mailman/listinfo/lng-odp



