On 23 May 2016 at 11:13, Yi He <yi...@linaro.org> wrote:

> Thanks very much Christophe
>
> This is great, I'll use this technique in the
> odph_odpthread_setaffinity()/getaffinity() APIs.
>

> Do you mean that keeping the underlying Linux pthread/process concepts
> out of the ODP thread concept is not a good idea?
>

Not sure. We are hitting this kind of problem a bit everywhere, and that
worries me. But it seems quite accepted that these concepts are not, and
should not be, part of ODP. Hopefully we can keep going with this approach.


> From many discussions I got the feeling that by "ODP thread" we are really
> trying to provide a higher-level execution abstraction above the Linux
> pthread/process units; the underlying Linux pthreads/processes are like
> containers or boxes for our componentized/re-usable code pieces?
>

Yes. I am trying to get a clear view on this, which I find hard to achieve.
The behaviour of the different ODP objects / pointers when accessed from
different virtual address spaces (i.e. processes) is not 100% clear to me.

Christophe.


> I'm not sure if this feeling is right :), we can talk in the ARCH call if
> we get time.
>
> Best Regards, Yi
>
> On 23 May 2016 at 16:48, Christophe Milard <christophe.mil...@linaro.org>
> wrote:
>
>> Hi,
>>
>> I guess (not 100% sure that it covers all cases) that if gettid() ==
>> getpid() you are on the main process thread.
>> Having said that, I do not like this solution very much: it really looks
>> to me like we have a larger problem trying to keep these things out of ODP.
>>
>> Something for ARCH call, maybe?
>>
>> Christophe.
>>
>> On 23 May 2016 at 10:16, Yi He <yi...@linaro.org> wrote:
>>
>>> Hi, Christophe
>>>
>>> Here I met a difficulty: if I unify the API into
>>> odph_odpthread_set_affinity(), how can I determine, inside the function,
>>> whether the current context is a pthread or a process? Without that I
>>> cannot decide whether to call sched_setaffinity() or
>>> pthread_setaffinity_np().
>>>
>>> thanks and best regards, Yi
>>>
>>>
>>>
>>> On 23 May 2016 at 14:53, Yi He <yi...@linaro.org> wrote:
>>>
>>>> Hi, Christophe
>>>>
>>>> Yes, I'll apply your series and send a new one later.
>>>>
>>>> Best Regards, Yi
>>>>
>>>> On 23 May 2016 at 14:33, Christophe Milard <
>>>> christophe.mil...@linaro.org> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 20 May 2016 at 10:48, Yi He <yi...@linaro.org> wrote:
>>>>>
>>>>>> Set affinity to 1st available control cpu for all odp
>>>>>> validation programs in odp_cunit_common library.
>>>>>>
>>>>>> Signed-off-by: Yi He <yi...@linaro.org>
>>>>>> ---
>>>>>>  helper/include/odp/helper/linux.h         | 47 +++++++++++++++++++
>>>>>>  helper/linux.c                            | 32 +++++++++++++
>>>>>>  helper/test/thread.c                      | 76 +++++++++++++++++++++++++++++--
>>>>>>  test/validation/common/odp_cunit_common.c | 15 ++++--
>>>>>>  4 files changed, 164 insertions(+), 6 deletions(-)
>>>>>>
>>>>>> diff --git a/helper/include/odp/helper/linux.h b/helper/include/odp/helper/linux.h
>>>>>> index e2dca35..fa815e1 100644
>>>>>> --- a/helper/include/odp/helper/linux.h
>>>>>> +++ b/helper/include/odp/helper/linux.h
>>>>>> @@ -84,6 +84,29 @@ int odph_linux_pthread_create(odph_linux_pthread_t *pthread_tbl,
>>>>>>   */
>>>>>>  void odph_linux_pthread_join(odph_linux_pthread_t *thread_tbl, int num);
>>>>>>
>>>>>> +/**
>>>>>> + * Set CPU affinity of the current thread
>>>>>> + *
>>>>>> + * CPU affinity determines the set of CPUs on which the thread is
>>>>>> + * eligible to run.
>>>>>> + *
>>>>>> + * @param cpuset        A bitmask listing the affinity CPU cores
>>>>>> + *
>>>>>> + * @return 0 on success, -1 on failure
>>>>>> + */
>>>>>> +int odph_linux_pthread_setaffinity(const odp_cpumask_t *cpuset);
>>>>>>
>>>>>
>>>>> odph_odpthread_set_affinity() is, I guess, better, at least as long as
>>>>> we try to keep threads and processes together. Dropping the linux prefix
>>>>> also makes the code more portable (on some other OS we can provide new
>>>>> helpers and the app hopefully does not need to change).
>>>>>
>>>>> I guess you can apply the "running things in process mode" patch
>>>>> series to see what I am after...
>>>>>
>>>>> This comment applies to all your function names, of course.
>>>>>
>>>>> Christophe
>>>>>
>>>>> +
>>>>>> +/**
>>>>>> + * Get CPU affinity of the current thread
>>>>>> + *
>>>>>> + * CPU affinity determines the set of CPUs on which the thread is
>>>>>> + * eligible to run.
>>>>>> + *
>>>>>> + * @param[out] cpuset   A bitmask listing the affinity CPU cores
>>>>>> + *
>>>>>> + * @return 0 on success, -1 on failure
>>>>>> + */
>>>>>> +int odph_linux_pthread_getaffinity(odp_cpumask_t *cpuset);
>>>>>>
>>>>>>  /**
>>>>>>   * Fork a process
>>>>>> @@ -134,6 +157,30 @@ int odph_linux_process_fork_n(odph_linux_process_t *proc_tbl,
>>>>>>  int odph_linux_process_wait_n(odph_linux_process_t *proc_tbl, int num);
>>>>>>
>>>>>>  /**
>>>>>> + * Set CPU affinity of the current process
>>>>>> + *
>>>>>> + * CPU affinity determines the set of CPUs on which the process is
>>>>>> + * eligible to run.
>>>>>> + *
>>>>>> + * @param cpuset        A bitmask listing the affinity CPU cores
>>>>>> + *
>>>>>> + * @return 0 on success, -1 on failure
>>>>>> + */
>>>>>> +int odph_linux_process_setaffinity(const odp_cpumask_t *cpuset);
>>>>>> +
>>>>>> +/**
>>>>>> + * Get CPU affinity of the current process
>>>>>> + *
>>>>>> + * CPU affinity determines the set of CPUs on which the process is
>>>>>> + * eligible to run.
>>>>>> + *
>>>>>> + * @param[out] cpuset   A bitmask listing the affinity CPU cores
>>>>>> + *
>>>>>> + * @return 0 on success, -1 on failure
>>>>>> + */
>>>>>> +int odph_linux_process_getaffinity(odp_cpumask_t *cpuset);
>>>>>> +
>>>>>> +/**
>>>>>>   * @}
>>>>>>   */
>>>>>>
>>>>>> diff --git a/helper/linux.c b/helper/linux.c
>>>>>> index 24e243b..6ce7e7d 100644
>>>>>> --- a/helper/linux.c
>>>>>> +++ b/helper/linux.c
>>>>>> @@ -114,6 +114,22 @@ void odph_linux_pthread_join(odph_linux_pthread_t *thread_tbl, int num)
>>>>>>         }
>>>>>>  }
>>>>>>
>>>>>> +int odph_linux_pthread_setaffinity(const odp_cpumask_t *cpuset)
>>>>>> +{
>>>>>> +       const cpu_set_t *_cpuset = &cpuset->set;
>>>>>> +
>>>>>> +       return (0 == pthread_setaffinity_np(pthread_self(),
>>>>>> +               sizeof(cpu_set_t), _cpuset)) ? 0 : -1;
>>>>>> +}
>>>>>> +
>>>>>> +int odph_linux_pthread_getaffinity(odp_cpumask_t *cpuset)
>>>>>> +{
>>>>>> +       cpu_set_t *_cpuset = &cpuset->set;
>>>>>> +
>>>>>> +       return (0 == pthread_getaffinity_np(pthread_self(),
>>>>>> +               sizeof(cpu_set_t), _cpuset)) ? 0 : -1;
>>>>>> +}
>>>>>> +
>>>>>>  int odph_linux_process_fork_n(odph_linux_process_t *proc_tbl,
>>>>>>                               const odp_cpumask_t *mask,
>>>>>>                               const odph_linux_thr_params_t *thr_params)
>>>>>> @@ -236,3 +252,19 @@ int odph_linux_process_wait_n(odph_linux_process_t *proc_tbl, int num)
>>>>>>
>>>>>>         return 0;
>>>>>>  }
>>>>>> +
>>>>>> +int odph_linux_process_setaffinity(const odp_cpumask_t *cpuset)
>>>>>> +{
>>>>>> +       const cpu_set_t *_cpuset = &cpuset->set;
>>>>>> +
>>>>>> +       return (0 == sched_setaffinity(0, /* pid zero means calling process */
>>>>>> +               sizeof(cpu_set_t), _cpuset)) ? 0 : -1;
>>>>>> +}
>>>>>> +
>>>>>> +int odph_linux_process_getaffinity(odp_cpumask_t *cpuset)
>>>>>> +{
>>>>>> +       cpu_set_t *_cpuset = &cpuset->set;
>>>>>> +
>>>>>> +       return (0 == sched_getaffinity(0, /* pid zero means calling process */
>>>>>> +               sizeof(cpu_set_t), _cpuset)) ? 0 : -1;
>>>>>> +}
>>>>>> diff --git a/helper/test/thread.c b/helper/test/thread.c
>>>>>> index b290753..97b4331 100644
>>>>>> --- a/helper/test/thread.c
>>>>>> +++ b/helper/test/thread.c
>>>>>> @@ -4,19 +4,51 @@
>>>>>>   * SPDX-License-Identifier:     BSD-3-Clause
>>>>>>   */
>>>>>>
>>>>>> +#include <string.h>
>>>>>> +
>>>>>>  #include <test_debug.h>
>>>>>>  #include <odp_api.h>
>>>>>>  #include <odp/helper/linux.h>
>>>>>>
>>>>>>  #define NUMBER_WORKERS 16
>>>>>> +
>>>>>> +/* delayed assertion after threads collection */
>>>>>> +static int worker_results[NUMBER_WORKERS];
>>>>>> +
>>>>>>  static void *worker_fn(void *arg TEST_UNUSED)
>>>>>>  {
>>>>>> -       /* depend on the odp helper to call odp_init_local */
>>>>>> +       odp_cpumask_t workers, affinity;
>>>>>> +       /* save the thread result for delayed assertion */
>>>>>> +       int *result = &worker_results[odp_cpu_id() % NUMBER_WORKERS];
>>>>>>
>>>>>> +       /* depend on the odp helper to call odp_init_local */
>>>>>>         printf("Worker thread on CPU %d\n", odp_cpu_id());
>>>>>>
>>>>>> -       /* depend on the odp helper to call odp_term_local */
>>>>>> +       odp_cpumask_zero(&workers);
>>>>>> +       odp_cpumask_zero(&affinity);
>>>>>> +
>>>>>> +       odp_cpumask_default_worker(&workers, NUMBER_WORKERS);
>>>>>> +
>>>>>> +       /* verify affinity works */
>>>>>> +       if (odph_linux_pthread_getaffinity(&affinity) != 0) {
>>>>>> +               printf("Read worker thread affinity failed %d.\n",
>>>>>> +                       odp_cpu_id());
>>>>>> +               *result = -1;
>>>>>> +       } else if (!odp_cpumask_isset(&workers,
>>>>>> +                       odp_cpumask_first(&affinity))) {
>>>>>> +               printf("Verify worker thread affinity failed %d.\n",
>>>>>> +                       odp_cpu_id());
>>>>>> +               *result = -1;
>>>>>> +       }
>>>>>> +
>>>>>> +       /* verify API is not broken */
>>>>>> +       if (odph_linux_pthread_setaffinity(&affinity) != 0) {
>>>>>> +               printf("Re-configure worker thread affinity failed %d.\n",
>>>>>> +                       odp_cpu_id());
>>>>>> +               *result = -1;
>>>>>> +       }
>>>>>>
>>>>>> +       /* depend on the odp helper to call odp_term_local */
>>>>>>         return 0;
>>>>>>  }
>>>>>>
>>>>>> @@ -24,7 +56,7 @@ static void *worker_fn(void *arg TEST_UNUSED)
>>>>>>  int main(int argc TEST_UNUSED, char *argv[] TEST_UNUSED)
>>>>>>  {
>>>>>>         odph_linux_pthread_t thread_tbl[NUMBER_WORKERS];
>>>>>> -       odp_cpumask_t cpu_mask;
>>>>>> +       odp_cpumask_t cpu_mask, cpuset;
>>>>>>         int num_workers;
>>>>>>         int cpu;
>>>>>>         char cpumaskstr[ODP_CPUMASK_STR_SIZE];
>>>>>> @@ -41,6 +73,36 @@ int main(int argc TEST_UNUSED, char *argv[] TEST_UNUSED)
>>>>>>                 exit(EXIT_FAILURE);
>>>>>>         }
>>>>>>
>>>>>> +       /* reset all worker thread results to success */
>>>>>> +       memset(worker_results, 0, sizeof(worker_results));
>>>>>> +       odp_cpumask_zero(&cpu_mask);
>>>>>> +       odp_cpumask_zero(&cpuset);
>>>>>> +
>>>>>> +       /* allocate the 1st available control cpu to main process */
>>>>>> +       if (odp_cpumask_default_control(&cpu_mask, 1) != 1) {
>>>>>> +               LOG_ERR("Allocate main process affinity failed.\n");
>>>>>> +               exit(EXIT_FAILURE);
>>>>>> +       }
>>>>>> +       if (odph_linux_process_setaffinity(&cpu_mask) != 0) {
>>>>>> +               LOG_ERR("Set main process affinity (%d) failed.\n",
>>>>>> +                       odp_cpumask_first(&cpu_mask));
>>>>>> +               exit(EXIT_FAILURE);
>>>>>> +       }
>>>>>> +       /* read back affinity to verify */
>>>>>> +       if ((odph_linux_process_getaffinity(&cpuset) != 0) ||
>>>>>> +               !odp_cpumask_equal(&cpu_mask, &cpuset)) {
>>>>>> +               odp_cpumask_to_str(&cpuset,
>>>>>> +                       cpumaskstr, sizeof(cpumaskstr));
>>>>>> +
>>>>>> +               LOG_ERR("Verify main process affinity failed: "
>>>>>> +                       "set(%d) read(%s).\n",
>>>>>> +                       odp_cpumask_first(&cpu_mask), cpumaskstr);
>>>>>> +               exit(EXIT_FAILURE);
>>>>>> +       }
>>>>>> +
>>>>>> +       odp_cpumask_zero(&cpuset);
>>>>>> +       odp_cpumask_zero(&cpu_mask);
>>>>>> +
>>>>>>         /* discover how many threads this system can support */
>>>>>>         num_workers = odp_cpumask_default_worker(&cpu_mask, NUMBER_WORKERS);
>>>>>>         if (num_workers < NUMBER_WORKERS) {
>>>>>> @@ -73,6 +135,14 @@ int main(int argc TEST_UNUSED, char *argv[] TEST_UNUSED)
>>>>>>         odph_linux_pthread_create(&thread_tbl[0], &cpu_mask, &thr_params);
>>>>>>         odph_linux_pthread_join(thread_tbl, num_workers);
>>>>>>
>>>>>> +       /* assert all worker thread results */
>>>>>> +       for (cpu = 0; cpu < num_workers; cpu++) {
>>>>>> +               if (worker_results[cpu] < 0) {
>>>>>> +                       LOG_ERR("Worker thread %d failed.\n", cpu);
>>>>>> +                       exit(EXIT_FAILURE);
>>>>>> +               }
>>>>>> +       }
>>>>>> +
>>>>>>         if (odp_term_local()) {
>>>>>>                 LOG_ERR("Error: ODP local term failed.\n");
>>>>>>                 exit(EXIT_FAILURE);
>>>>>> diff --git a/test/validation/common/odp_cunit_common.c b/test/validation/common/odp_cunit_common.c
>>>>>> index 2712abe..535a9f9 100644
>>>>>> --- a/test/validation/common/odp_cunit_common.c
>>>>>> +++ b/test/validation/common/odp_cunit_common.c
>>>>>> @@ -329,9 +329,18 @@ int odp_cunit_update(odp_suiteinfo_t testsuites[])
>>>>>>  int odp_cunit_register(odp_suiteinfo_t testsuites[])
>>>>>>  {
>>>>>>         /* call test executable init hook, if any */
>>>>>> -       if (global_init_term.global_init_ptr &&
>>>>>> -           ((*global_init_term.global_init_ptr)(&instance) != 0))
>>>>>> -               return -1;
>>>>>> +       if (global_init_term.global_init_ptr) {
>>>>>> +               if ((*global_init_term.global_init_ptr)(&instance) == 0) {
>>>>>> +                       odp_cpumask_t cpuset;
>>>>>> +
>>>>>> +                       odp_cpumask_zero(&cpuset);
>>>>>> +                       /* set main process affinity after ODP initialization */
>>>>>> +                       if (1 == odp_cpumask_default_control(&cpuset, 1))
>>>>>> +                               odph_linux_process_setaffinity(&cpuset);
>>>>>> +               } else {
>>>>>> +                       return -1;
>>>>>> +               }
>>>>>> +       }
>>>>>>
>>>>>>         CU_set_error_action(CUEA_ABORT);
>>>>>>
>>>>>> --
>>>>>> 1.9.1
>>>>>>
>>>>>> _______________________________________________
>>>>>> lng-odp mailing list
>>>>>> lng-odp@lists.linaro.org
>>>>>> https://lists.linaro.org/mailman/listinfo/lng-odp
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>