On 5/30/2014 8:33 AM, Sebastian Huber wrote:
> On 05/29/2014 09:28 PM, Joel Sherrill wrote:
>> Hi
>>
>> The priority affinity algorithm appears to be behaving as
>> we expect from a decision making standpoint. However,
>> Jennifer and I think that when a scheduled thread must
>> be migrated to another core, we have a case for a new
>> state in the Thread Life Cycle.
> I hope we can avoid this.  Which problem do you want to address with 
> this new state?
We have some tests that are not behaving as we expect on grsim.
I was wondering whether migrating the executing thread to another
core would result in some unexpected weirdness. Because of [1], we
were reduced to desk checking and reasoning to find the problem.

[1] grsim+gdb on multi-core configurations results in an assert
in RTEMS if you run with a breakpoint set.   You don't have to
continue -- just "tar remote :2222; load; b XXX; cont" and you
get an assert. Daniel H. is looking into an example for us.
But for now, we have no real visibility on leon3. We are in
the process of switching to the realview on qemu to debug.
>> I am thinking that the thread needs to have a blocking state
>> set, have its context saved and be taken out of the scheduled
>> set. Then a life cycle state change handler can run as an
>> extension to unblock it so it can be potentially scheduled to
>> execute on another processor.
> The scheduler is not responsible for the thread context.  This is 
> _Thread_Dispatch().  A post-switch handler can only do actions for the 
> executing thread.  It would be extremely difficult to perform actions on 
> behalf of another thread. 
I was thinking that. I couldn't see how it would even work reliably.
Plus we are already technically blocking and unblocking the thread,
so it should be moving from scheduled to ready and back.
>  We have to keep also fine grained locking 
> into account.  The _Thread_Dispatch() function is on the critical path 
> for average and worst-case performance, so we should keep it as simple 
> as possible.
I agree.
> As I wrote already in another thread you can use something like this to 
> allocate an exact processor for a thread:
I suppose I should have posted the patch for review by now but I was hoping
to have it completely working with tests when posted.  Anyway, it is
attached. The key is that I modified the priority SMP scheduler to
let get_lowest_scheduled() and get_highest_ready() both be passed in
from higher levels.

The selection of get_lowest_scheduled() and get_highest_ready()
includes the option of a filter thread. The filter thread is used
to check that the victim's CPU is in the potential heir's affinity
set.
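A minimal sketch of that filter check, using hypothetical simplified
types rather than the real RTEMS Thread_Control and cpu_set_t
structures: a scheduled thread is only a candidate victim if the filter
(potential heir) thread has affinity for the victim's current processor.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the real RTEMS types
 * (Thread_Control, cpu_set_t); for illustration only. */
typedef struct {
  int      cpu;       /* processor the thread currently occupies */
  uint32_t affinity;  /* bit N set => thread may run on processor N */
} Thread;

/* A scheduled thread is only a candidate victim if the filter
 * (potential heir) thread has affinity for the victim's processor. */
bool victim_is_eligible( const Thread *filter, const Thread *victim )
{
  return ( filter->affinity & ( UINT32_C( 1 ) << victim->cpu ) ) != 0;
}
```

The real scheduler does the same test with CPU_ISSET() against the
node's Affinity.set, as the patch below shows.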

I think the logic works out so that the normal calls to
_Scheduler_SMP_Allocate_processor() have selected the
correct thread/CPU.

I don't see the extra logic in _Scheduler_SMP_Allocate_processor()
having a negative impact. What do you see that I might be missing?
> static inline void _Scheduler_SMP_Allocate_processor_exact(
>    Scheduler_SMP_Context *self,
>    Thread_Control *scheduled,
>    Thread_Control *victim
> )
> {
>    Scheduler_SMP_Node *scheduled_node = _Scheduler_SMP_Node_get( 
> scheduled );
>    Per_CPU_Control *cpu_of_scheduled = _Thread_Get_CPU( scheduled );
>    Per_CPU_Control *cpu_of_victim = _Thread_Get_CPU( victim );
>    Per_CPU_Control *cpu_self = _Per_CPU_Get();
>
>    _Scheduler_SMP_Node_change_state(
>      scheduled_node,
>      SCHEDULER_SMP_NODE_SCHEDULED
>    );
>
>    _Thread_Set_CPU( scheduled, cpu_of_victim );
>    _Scheduler_SMP_Update_heir( cpu_self, cpu_of_victim, scheduled );
> }
>
> You can even use this function to do things like this:
>
> _Scheduler_SMP_Allocate_processor_exact(self, executing, other);
> _Scheduler_SMP_Allocate_processor_exact(self, other, executing);
>
> This works because the is executing indicator moved to the thread 
> context and is maintained at the lowest context switch level.  For 
> proper migration the scheduler must ensure that,
>
> 1. an heir thread other than the migrating thread exists on the source 
> processor, and
>
> 2. the migrating thread is the heir thread on the destination processor.
>
Hmmm... If I am using the normal _Scheduler_SMP_Allocate_processor()
aren't these ensured?
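For my own desk checking, Sebastian's two conditions can be stated as a
predicate. This is a sketch with hypothetical simplified types; the real
code works on the heir fields of Per_CPU_Control, not a helper like this.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical simplified thread type for illustration. */
typedef struct { int id; } Thread;

/* The two conditions for a proper migration, as a predicate:
 * 1. an heir other than the migrating thread exists on the source
 *    processor, and
 * 2. the migrating thread is the heir on the destination processor. */
bool migration_is_proper(
  const Thread *migrating,
  const Thread *heir_on_source,
  const Thread *heir_on_destination
)
{
  return heir_on_source != NULL
    && heir_on_source != migrating
    && heir_on_destination == migrating;
}
```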

-- 
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985

From d83641f11d016a2cd73e2661326a5dc873ba0c3c Mon Sep 17 00:00:00 2001
From: Joel Sherrill <joel.sherr...@oarcorp.com>
Date: Mon, 19 May 2014 15:26:55 -0500
Subject: [PATCH 4/4] Add SMP Priority Scheduler with Affinity

Added a Thread_Control * parameter to the Scheduler_SMP_Get_highest_ready
type so methods looking for the highest ready thread can filter by the
processor on which the blocking thread resides. This allows affinity to be
considered. Simple Priority SMP and Priority SMP ignore this parameter.

This scheduler attempts to account for needed thread migrations.

==Side Effects of Adding This Scheduler==

+ Added get_lowest_scheduled to _Scheduler_SMP_Enqueue_ordered().

+ schedulerprioritysmpimpl.h is a new file with prototypes for methods
  which were formerly static in schedulerprioritysmp.c but now need to
  be public to be shared with this scheduler.

NOTE:
  _Scheduler_SMP_Get_lowest_scheduled() appears to have a path which would
  allow it to return NULL.  Previously, _Scheduler_SMP_Enqueue_ordered()
  would have asserted on it. If it cannot return NULL,
  _Scheduler_SMP_Get_lowest_scheduled() should have an assertion.
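The caller-side pattern the NOTE describes can be sketched as follows;
the types and the helper here are hypothetical illustrations, not the
actual RTEMS code in the patch.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical simplified types for illustration. */
typedef struct { int priority; } Thread;
typedef Thread *( *Get_lowest_scheduled )( Thread *scheduled, size_t n );

/* A helper may legitimately return NULL, e.g. when affinity filtered
 * out every scheduled thread. */
Thread *get_lowest_or_null( Thread *scheduled, size_t n )
{
  return n == 0 ? NULL : &scheduled[ n - 1 ];
}

/* The caller tolerates NULL and does nothing in that case; returns
 * nonzero when a displacement would occur. */
int enqueue_would_displace(
  Thread *scheduled,
  size_t n,
  Get_lowest_scheduled get_lowest_scheduled
)
{
  Thread *lowest = ( *get_lowest_scheduled )( scheduled, n );
  return lowest != NULL;
}
```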
---
 cpukit/score/Makefile.am                           |    1 +
 .../rtems/score/schedulerpriorityaffinitysmp.h     |   68 +++-
 .../include/rtems/score/schedulerprioritysmpimpl.h |   84 ++++
 .../score/include/rtems/score/schedulersmpimpl.h   |   52 ++-
 cpukit/score/preinstall.am                         |    4 +
 cpukit/score/src/schedulerpriorityaffinitysmp.c    |  461 +++++++++++++++++++-
 cpukit/score/src/schedulerprioritysmp.c            |   28 +-
 cpukit/score/src/schedulersimplesmp.c              |    8 +-
 8 files changed, 656 insertions(+), 50 deletions(-)
 create mode 100644 cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h

diff --git a/cpukit/score/Makefile.am b/cpukit/score/Makefile.am
index 7c42602..09acc40 100644
--- a/cpukit/score/Makefile.am
+++ b/cpukit/score/Makefile.am
@@ -110,6 +110,7 @@ endif
 if HAS_SMP
 include_rtems_score_HEADERS += include/rtems/score/atomic.h
 include_rtems_score_HEADERS += include/rtems/score/cpustdatomic.h
+include_rtems_score_HEADERS += include/rtems/score/schedulerprioritysmpimpl.h
 include_rtems_score_HEADERS += include/rtems/score/schedulerpriorityaffinitysmp.h
 include_rtems_score_HEADERS += include/rtems/score/schedulersimplesmp.h
 endif
diff --git a/cpukit/score/include/rtems/score/schedulerpriorityaffinitysmp.h b/cpukit/score/include/rtems/score/schedulerpriorityaffinitysmp.h
index 55271dc..ab487ff 100644
--- a/cpukit/score/include/rtems/score/schedulerpriorityaffinitysmp.h
+++ b/cpukit/score/include/rtems/score/schedulerpriorityaffinitysmp.h
@@ -52,9 +52,9 @@ extern "C" {
     _Scheduler_priority_SMP_Initialize, \
     _Scheduler_default_Schedule, \
     _Scheduler_priority_SMP_Yield, \
-    _Scheduler_priority_SMP_Block, \
-    _Scheduler_priority_SMP_Unblock, \
-    _Scheduler_priority_SMP_Change_priority, \
+    _Scheduler_priority_affinity_SMP_Block, \
+    _Scheduler_priority_affinity_SMP_Unblock, \
+    _Scheduler_priority_affinity_SMP_Change_priority, \
     _Scheduler_priority_affinity_SMP_Allocate, \
     _Scheduler_default_Free, \
     _Scheduler_priority_SMP_Update, \
@@ -67,21 +67,47 @@ extern "C" {
   }
 
 /**
- *  @brief Allocates @a the_thread->scheduler.
+ *  @brief Allocates per thread scheduler information
  *
- *  This routine allocates @a the_thread->scheduler.
+ *  This routine allocates @a thread->scheduler.
  *
  *  @param[in] scheduler points to the scheduler specific information.
- *  @param[in] the_thread is the thread the scheduler is allocating
+ *  @param[in] thread is the thread the scheduler is allocating
  *             management memory for.
  */
 bool _Scheduler_priority_affinity_SMP_Allocate(
   const Scheduler_Control *scheduler,
-  Thread_Control          *the_thread
+  Thread_Control          *thread
 );
 
 /**
- * @brief Get affinity for the priority affinity smp scheduler.
+ * @brief SMP Priority Affinity Scheduler Block Operation
+ *
+ * This method is the block operation for this scheduler.
+ *
+ * @param[in] scheduler is the scheduler instance information
+ * @param[in] thread is the thread to block
+ */
+void _Scheduler_priority_affinity_SMP_Block(
+  const Scheduler_Control *scheduler,
+  Thread_Control          *thread
+);
+
+/**
+ * @brief SMP Priority Affinity Scheduler Unblock Operation
+ *
+ * This method is the unblock operation for this scheduler.
+ *
+ * @param[in] scheduler is the scheduler instance information
+ * @param[in] thread is the thread to unblock
+ */
+void _Scheduler_priority_affinity_SMP_Unblock(
+  const Scheduler_Control *scheduler,
+  Thread_Control          *thread
+);
+
+/**
+ * @brief Get affinity for the priority affinity SMP scheduler.
  *
  * @param[in] scheduler The scheduler of the thread.
  * @param[in] thread The associated thread.
@@ -98,26 +124,44 @@ bool _Scheduler_priority_affinity_SMP_Get_affinity(
   cpu_set_t               *cpuset
 );
 
+/**
+ * @brief Change priority for the priority affinity SMP scheduler.
+ *
+ * @param[in] scheduler The scheduler of the thread.
+ * @param[in] thread The associated thread.
+ * @param[in] new_priority The new priority for the thread.
+ * @param[in] prepend_it Append or prepend the thread to its priority FIFO.
+ */
+void _Scheduler_priority_affinity_SMP_Change_priority(
+  const Scheduler_Control *scheduler,
+  Thread_Control          *the_thread,
+  Priority_Control         new_priority,
+  bool                     prepend_it
+);
+
 /** 
- * @brief Set affinity for the priority affinity smp scheduler.
+ * @brief Set affinity for the priority affinity SMP scheduler.
  *
  * @param[in] scheduler The scheduler of the thread.
  * @param[in] thread The associated thread.
  * @param[in] cpusetsize The size of the cpuset.
  * @param[in] cpuset Affinity new affinity set.
  *
- * @retval 0 Successful
+ * @retval true if successful
+ * @retval false if unsuccessful
  */
 bool _Scheduler_priority_affinity_SMP_Set_affinity(
   const Scheduler_Control *scheduler,
   Thread_Control          *thread,
   size_t                   cpusetsize,
-  cpu_set_t               *cpuset
+  const cpu_set_t         *cpuset
 );
 
 /**
  * @brief Scheduler node specialization for Deterministic Priority Affinity SMP
  * schedulers.
+ *
+ * This is a per thread structure.
  */
 typedef struct {
   /**
@@ -137,4 +181,4 @@ typedef struct {
 }
 #endif /* __cplusplus */
 
-#endif /* _RTEMS_SCORE_SCHEDULERPRIORITYSMP_H */
+#endif /* _RTEMS_SCORE_SCHEDULERPRIORITYAFFINITYSMP_H */
diff --git a/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h b/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h
new file mode 100644
index 0000000..2d8d1a5
--- /dev/null
+++ b/cpukit/score/include/rtems/score/schedulerprioritysmpimpl.h
@@ -0,0 +1,84 @@
+/**
+ * @file
+ *
+ * @ingroup ScoreSchedulerPrioritySMP
+ *
+ * @brief Deterministic Priority SMP Scheduler API
+ */
+
+/*
+ * Copyright (c) 2013-2014 embedded brains GmbH.  All rights reserved.
+ *
+ *  embedded brains GmbH
+ *  Dornierstr. 4
+ *  82178 Puchheim
+ *  Germany
+ *  <rt...@embedded-brains.de>
+ *
+ * The license and distribution terms for this file may be
+ * found in the file LICENSE in this distribution or at
+ * http://www.rtems.org/license/LICENSE.
+ */
+
+#ifndef _RTEMS_SCORE_SCHEDULERPRIORITYSMPIMPL_H
+#define _RTEMS_SCORE_SCHEDULERPRIORITYSMPIMPL_H
+
+#include <rtems/score/scheduler.h>
+#include <rtems/score/schedulerpriority.h>
+#include <rtems/score/schedulersmp.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+/**
+ * @ingroup ScoreSchedulerPrioritySMP
+ * @{
+ */
+
+Scheduler_priority_SMP_Context *_Scheduler_priority_SMP_Get_self(
+  Scheduler_Context *context
+);
+
+Scheduler_priority_SMP_Node *_Scheduler_priority_SMP_Node_get(
+  Thread_Control *thread
+);
+
+void _Scheduler_priority_SMP_Insert_ready_fifo(
+  Scheduler_Context *context,
+  Thread_Control *thread
+);
+
+void _Scheduler_priority_SMP_Insert_ready_lifo(
+  Scheduler_Context *context,
+  Thread_Control *thread
+);
+
+void _Scheduler_priority_SMP_Move_from_scheduled_to_ready(
+  Scheduler_Context *context,
+  Thread_Control *scheduled_to_ready
+);
+
+void _Scheduler_priority_SMP_Move_from_ready_to_scheduled(
+  Scheduler_Context *context,
+  Thread_Control *ready_to_scheduled
+);
+
+void _Scheduler_priority_SMP_Extract_from_ready(
+  Scheduler_Context *context,
+  Thread_Control *thread
+);
+
+void _Scheduler_priority_SMP_Do_update(
+  Scheduler_Context *context,
+  Scheduler_Node *base_node,
+  Priority_Control new_priority
+);
+
+/** @} */
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* _RTEMS_SCORE_SCHEDULERPRIORITYSMPIMPL_H */
diff --git a/cpukit/score/include/rtems/score/schedulersmpimpl.h b/cpukit/score/include/rtems/score/schedulersmpimpl.h
index bb7c41e..e088b28 100644
--- a/cpukit/score/include/rtems/score/schedulersmpimpl.h
+++ b/cpukit/score/include/rtems/score/schedulersmpimpl.h
@@ -275,7 +275,14 @@ extern "C" {
  */
 
 typedef Thread_Control *( *Scheduler_SMP_Get_highest_ready )(
-  Scheduler_Context *context
+  Scheduler_Context *context,
+  Thread_Control    *blocking
+);
+
+typedef Thread_Control *( *Scheduler_SMP_Get_lowest_scheduled )(
+  Scheduler_Context *context,
+  Thread_Control    *thread,
+  Chain_Node_order   order
 );
 
 typedef void ( *Scheduler_SMP_Extract )(
@@ -420,10 +427,13 @@ static inline void _Scheduler_SMP_Allocate_processor(
   }
 }
 
-static inline Thread_Control *_Scheduler_SMP_Get_lowest_scheduled(
-  Scheduler_SMP_Context *self
+static Thread_Control *_Scheduler_SMP_Get_lowest_scheduled(
+  Scheduler_Context *context,
+  Thread_Control    *filter,
+  Chain_Node_order   order
 )
 {
+  Scheduler_SMP_Context *self = _Scheduler_SMP_Get_self( context );
   Thread_Control *lowest_ready = NULL;
   Chain_Control *scheduled = &self->Scheduled;
 
@@ -431,6 +441,12 @@ static inline Thread_Control *_Scheduler_SMP_Get_lowest_scheduled(
     lowest_ready = (Thread_Control *) _Chain_Last( scheduled );
   }
 
+  /*
+   * _Scheduler_SMP_Enqueue_ordered() assumes that get_lowest_scheduled
+   * helpers may return NULL. But this method never should.
+   */
+  _Assert( lowest_ready != NULL );
+
   return lowest_ready;
 }
 
@@ -448,6 +464,8 @@ static inline Thread_Control *_Scheduler_SMP_Get_lowest_scheduled(
  * scheduled nodes.
  * @param[in] move_from_scheduled_to_ready Function to move a node from the set
  * of scheduled nodes to the set of ready nodes.
+ * @param[in] get_lowest_scheduled Function to select the thread from the
+ * scheduled nodes to replace. It may not be possible to find one.
  */
 static inline void _Scheduler_SMP_Enqueue_ordered(
   Scheduler_Context *context,
@@ -455,16 +473,28 @@ static inline void _Scheduler_SMP_Enqueue_ordered(
   Chain_Node_order order,
   Scheduler_SMP_Insert insert_ready,
   Scheduler_SMP_Insert insert_scheduled,
-  Scheduler_SMP_Move move_from_scheduled_to_ready
+  Scheduler_SMP_Move move_from_scheduled_to_ready,
+  Scheduler_SMP_Get_lowest_scheduled get_lowest_scheduled
 )
 {
   Scheduler_SMP_Context *self = _Scheduler_SMP_Get_self( context );
   Thread_Control *lowest_scheduled =
-    _Scheduler_SMP_Get_lowest_scheduled( self );
-
-  _Assert( lowest_scheduled != NULL );
+    ( *get_lowest_scheduled )( context, thread, order ); 
 
-  if ( ( *order )( &thread->Object.Node, &lowest_scheduled->Object.Node ) ) {
+  /*
+   *  get_lowest_scheduled can return NULL if no scheduled thread
+   *  should be removed from its processor based on the selection
+   *  criteria. For example, this can occur when the affinity of the
+   *  thread being enqueued leaves it competing only against higher
+   *  priority threads. A low priority thread with affinity can only
+   *  consider the threads which are on the cores it has affinity for.
+   *
+   *  The get_lowest_scheduled helper should assert on not returning
+   *  NULL if that is not possible for that scheduler.
+   */
+
+  if ( lowest_scheduled &&
+       ( *order )( &thread->Object.Node, &lowest_scheduled->Object.Node ) ) {
     Scheduler_SMP_Node *lowest_scheduled_node =
       _Scheduler_SMP_Node_get( lowest_scheduled );
 
@@ -507,7 +537,8 @@ static inline void _Scheduler_SMP_Enqueue_scheduled_ordered(
 {
   Scheduler_SMP_Context *self = _Scheduler_SMP_Get_self( context );
   Scheduler_SMP_Node *node = _Scheduler_SMP_Node_get( thread );
-  Thread_Control *highest_ready = ( *get_highest_ready )( &self->Base );
+  Thread_Control *highest_ready =
+    ( *get_highest_ready )( &self->Base, thread );
 
   _Assert( highest_ready != NULL );
 
@@ -540,7 +571,8 @@ static inline void _Scheduler_SMP_Schedule_highest_ready(
 )
 {
   Scheduler_SMP_Context *self = _Scheduler_SMP_Get_self( context );
-  Thread_Control *highest_ready = ( *get_highest_ready )( &self->Base );
+  Thread_Control *highest_ready =
+    ( *get_highest_ready )( &self->Base, victim );
 
   _Scheduler_SMP_Allocate_processor( self, highest_ready, victim );
 
diff --git a/cpukit/score/preinstall.am b/cpukit/score/preinstall.am
index d00e137..a090f1f 100644
--- a/cpukit/score/preinstall.am
+++ b/cpukit/score/preinstall.am
@@ -390,6 +390,10 @@ $(PROJECT_INCLUDE)/rtems/score/cpustdatomic.h: include/rtems/score/cpustdatomic.
 	$(INSTALL_DATA) $< $(PROJECT_INCLUDE)/rtems/score/cpustdatomic.h
 PREINSTALL_FILES += $(PROJECT_INCLUDE)/rtems/score/cpustdatomic.h
 
+$(PROJECT_INCLUDE)/rtems/score/schedulerprioritysmpimpl.h: include/rtems/score/schedulerprioritysmpimpl.h $(PROJECT_INCLUDE)/rtems/score/$(dirstamp)
+	$(INSTALL_DATA) $< $(PROJECT_INCLUDE)/rtems/score/schedulerprioritysmpimpl.h
+PREINSTALL_FILES += $(PROJECT_INCLUDE)/rtems/score/schedulerprioritysmpimpl.h
+
 $(PROJECT_INCLUDE)/rtems/score/schedulerpriorityaffinitysmp.h: include/rtems/score/schedulerpriorityaffinitysmp.h $(PROJECT_INCLUDE)/rtems/score/$(dirstamp)
 	$(INSTALL_DATA) $< $(PROJECT_INCLUDE)/rtems/score/schedulerpriorityaffinitysmp.h
 PREINSTALL_FILES += $(PROJECT_INCLUDE)/rtems/score/schedulerpriorityaffinitysmp.h
diff --git a/cpukit/score/src/schedulerpriorityaffinitysmp.c b/cpukit/score/src/schedulerpriorityaffinitysmp.c
index 0d9525d..f84dadd 100644
--- a/cpukit/score/src/schedulerpriorityaffinitysmp.c
+++ b/cpukit/score/src/schedulerpriorityaffinitysmp.c
@@ -22,32 +22,451 @@
 #include <rtems/score/schedulerpriorityaffinitysmp.h>
 #include <rtems/score/schedulerpriorityimpl.h>
 #include <rtems/score/schedulersmpimpl.h>
+#include <rtems/score/schedulerprioritysmpimpl.h>
 #include <rtems/score/wkspace.h>
 #include <rtems/score/cpusetimpl.h>
 
+#include <rtems/score/priority.h>
+
+/*
+ * The following methods initially were static in schedulerprioritysmp.c.
+ * They are now public so they can be shared with this scheduler:
+ *
+ *  + _Scheduler_priority_SMP_Get_self
+ *  + _Scheduler_priority_SMP_Insert_ready_fifo
+ *  + _Scheduler_priority_SMP_Insert_ready_lifo
+ *  + _Scheduler_priority_SMP_Node_get
+ *  + _Scheduler_priority_SMP_Move_from_scheduled_to_ready
+ *  + _Scheduler_priority_SMP_Move_from_ready_to_scheduled
+ *  + _Scheduler_priority_SMP_Extract_from_ready
+ *  + _Scheduler_priority_SMP_Do_update
+ */
+
+/*
+ * This method returns the scheduler node for the specified thread
+ * as a scheduler specific type.
+ */
 static Scheduler_priority_affinity_SMP_Node *
-_Scheduler_priority_affinity_Node_get( Thread_Control *thread )
+_Scheduler_priority_affinity_SMP_Node_get(
+  Thread_Control *thread
+)
 {
-  return ( Scheduler_priority_affinity_SMP_Node * )
-    _Scheduler_Node_get( thread );
+  return (Scheduler_priority_affinity_SMP_Node *) _Scheduler_Node_get( thread );
 }
 
+/*
+ * This method allocates and initializes the per thread scheduler
+ * information for this scheduler instance.
+ */
 bool _Scheduler_priority_affinity_SMP_Allocate(
   const Scheduler_Control *scheduler,
-  Thread_Control          *the_thread
+  Thread_Control          *thread
 )
 {
+  Scheduler_SMP_Node *smp_node = _Scheduler_SMP_Node_get( thread );
+
   Scheduler_priority_affinity_SMP_Node *node =
-    _Scheduler_priority_affinity_Node_get( the_thread );
+    _Scheduler_priority_affinity_SMP_Node_get( thread );
+
+  (void) scheduler;
 
-  _Scheduler_SMP_Node_initialize( &node->Base.Base );
+  /*
+   *  All we add is affinity information to the basic SMP node.
+   */
+  _Scheduler_SMP_Node_initialize( smp_node );
 
-  node->Affinity = *_CPU_set_Default();
+  node->Affinity     = *_CPU_set_Default();
   node->Affinity.set = &node->Affinity.preallocated;
 
   return true;
 }
 
+/*
+ * This method is unique to this scheduler because it takes into
+ * account affinity as it determines the highest ready thread.
+ * Since this is used to pick a new thread to replace the victim,
+ * the highest ready thread must have affinity such that it can
+ * be executed on the victim's processor.
+ */
+static Thread_Control *_Scheduler_priority_affinity_SMP_Get_highest_ready(
+  Scheduler_Context *context,
+  Thread_Control    *victim
+)
+{
+  Scheduler_priority_SMP_Context *self =
+    _Scheduler_priority_SMP_Get_self( context );
+  Priority_Control                index;
+  Thread_Control                 *highest = NULL;
+  int                             victim_cpu;
+
+  /*
+   * A NULL victim means the caller wants the absolute highest ready thread.
+   */
+  if ( victim == NULL ) {
+    return _Scheduler_priority_Ready_queue_first(
+        &self->Bit_map,
+        &self->Ready[ 0 ]
+      );
+  }
+
+  victim_cpu = _Per_CPU_Get_index( _Thread_Get_CPU( victim ) );
+
+  /*
+   * The deterministic priority scheduler structure is optimized
+   * for insertion, extraction, and finding the highest priority
+   * thread. Scanning the list of ready threads is not a purpose
+   * for which it was optimized. There are optimizations to be
+   * made in this loop.
+   *
+   * + by checking the major bit, we could potentially skip entire
+   *   groups of 16.
+   */
+  for ( index = _Priority_bit_map_Get_highest( &self->Bit_map ) ;
+        index <= PRIORITY_MAXIMUM;
+        index++ ) {
+    Chain_Control   *chain =  &self->Ready[index];
+    Chain_Node      *chain_node;
+    for ( chain_node = _Chain_First( chain );
+          chain_node != _Chain_Immutable_tail( chain ) ;
+          chain_node = _Chain_Next( chain_node ) ) {
+      Thread_Control                       *thread;
+      Scheduler_priority_affinity_SMP_Node *node;
+
+
+      thread = (Thread_Control *) chain_node;
+      node = _Scheduler_priority_affinity_SMP_Node_get( thread );
+
+      /*
+       * Can this thread run on this CPU?
+       */
+      if ( CPU_ISSET( victim_cpu, node->Affinity.set ) ) {
+        highest = thread;
+        break;
+      }
+    }
+    if ( highest )
+      break;
+  }
+
+  _Assert( highest != NULL );
+
+  return highest;
+}
+
+/*
+ * This method is very similar to _Scheduler_priority_SMP_Block
+ * but has the difference that it invokes this scheduler's
+ * get_highest_ready() support method.
+ */
+void _Scheduler_priority_affinity_SMP_Block(
+  const Scheduler_Control *scheduler,
+  Thread_Control *thread
+)
+{
+  Scheduler_Context *context = _Scheduler_Get_context( scheduler );
+
+  _Scheduler_SMP_Block(
+    context,
+    thread,
+    _Scheduler_priority_SMP_Extract_from_ready,
+    _Scheduler_priority_affinity_SMP_Get_highest_ready,
+    _Scheduler_priority_SMP_Move_from_ready_to_scheduled
+  );
+
+  /*
+   * Since this removed a single thread from the scheduled set
+   * and selected the most appropriate thread from the ready
+   * set to replace it, there should be no need for thread
+   * migrations.
+   */
+}
+
+/*
+ * This method is unique to this scheduler because it must take into
+ * account affinity as it searches for the lowest priority scheduled
+ * thread. It ignores those which cannot be displaced because the
+ * filter thread does not have affinity for their processor.
+ */
+static Thread_Control *_Scheduler_priority_affinity_SMP_Get_lowest_scheduled(
+  Scheduler_Context *context,
+  Thread_Control    *filter,
+  Chain_Node_order   order
+)
+{
+  Scheduler_SMP_Context *self = _Scheduler_SMP_Get_self( context );
+  Thread_Control  *lowest_ready = NULL;
+  Thread_Control  *thread = NULL;
+  Chain_Control   *scheduled = &self->Scheduled;
+  int              cpu_index;
+  Scheduler_priority_affinity_SMP_Node *node =
+    _Scheduler_priority_affinity_SMP_Node_get( filter );
+
+  for ( thread =  (Thread_Control *) _Chain_Last( scheduled );
+        (Chain_Node *) thread != _Chain_Immutable_head( scheduled ) ;
+        thread = (Thread_Control *) _Chain_Previous( &thread->Object.Node ) ) {
+    /*
+     * If we didn't find a thread which is of equal or lower importance
+     * than the filter thread is, then we can't schedule the filter thread
+     * to execute.
+     */
+    if ( (*order)(&thread->Object.Node, &filter->Object.Node) )
+      break;
+
+    cpu_index = _Per_CPU_Get_index( _Thread_Get_CPU( thread ) );
+
+    if ( CPU_ISSET( cpu_index, node->Affinity.set ) ) {
+      lowest_ready = thread;
+      break;
+    }
+
+  }
+
+  return lowest_ready;
+}
+
+/*
+ * This method is unique to this scheduler because it must pass
+ * _Scheduler_priority_affinity_SMP_Get_lowest_scheduled into
+ * _Scheduler_SMP_Enqueue_ordered.
+ */
+static void _Scheduler_priority_affinity_SMP_Enqueue_fifo(
+  Scheduler_Context *context,
+  Thread_Control *thread
+)
+{
+  _Scheduler_SMP_Enqueue_ordered(
+    context,
+    thread,
+    _Scheduler_simple_Insert_priority_fifo_order,
+    _Scheduler_priority_SMP_Insert_ready_fifo,
+    _Scheduler_SMP_Insert_scheduled_fifo,
+    _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
+    _Scheduler_priority_affinity_SMP_Get_lowest_scheduled
+  );
+}
+
+/*
+ * This method is invoked at the end of certain scheduling operations
+ * to ensure that the highest priority ready thread can be scheduled
+ * to execute. When we schedule with affinity, there is the possibility
+ * that we need to migrate a thread to another core to ensure that the
+ * highest priority ready threads are in fact scheduled.
+ */
+static void _Scheduler_priority_affinity_SMP_Check_for_migrations(
+  Scheduler_Context *context
+)
+{
+  Thread_Control        *lowest_scheduled;
+  Thread_Control        *highest_ready;
+  Scheduler_SMP_Node    *lowest_scheduled_node;
+  Scheduler_SMP_Context *self = _Scheduler_SMP_Get_self( context );
+
+  while (1) {
+    highest_ready =
+      _Scheduler_priority_affinity_SMP_Get_highest_ready( context, NULL );
+    lowest_scheduled = _Scheduler_priority_affinity_SMP_Get_lowest_scheduled(
+      context,
+      highest_ready,
+      _Scheduler_simple_Insert_priority_lifo_order
+    );
+
+    /*
+     * If we can't find a thread to displace from the scheduled set,
+     * then we have placed all the highest priority threads possible
+     * in the scheduled set.
+     *
+     * We found the absolute highest priority thread without
+     * considering affinity. But now we have to consider that thread's
+     * affinity as we look to place it.
+     */
+    if ( lowest_scheduled == NULL )
+      break;
+
+    /*
+     * But if we found a thread which is lower priority than one
+     * in the ready set, then we need to swap them out.
+     */
+    lowest_scheduled_node = _Scheduler_SMP_Node_get( lowest_scheduled );
+
+    _Scheduler_SMP_Node_change_state(
+      lowest_scheduled_node,
+      SCHEDULER_SMP_NODE_READY
+    );
+
+    _Scheduler_SMP_Allocate_processor( self, highest_ready, lowest_scheduled );
+
+    _Scheduler_priority_SMP_Move_from_ready_to_scheduled(
+      context,
+      highest_ready
+    );
+
+    _Scheduler_priority_SMP_Move_from_scheduled_to_ready(
+      &self->Base,
+      lowest_scheduled
+    );
+  }
+}
+
+/*
+ * This is the public scheduler specific Unblock operation.
+ */
+void _Scheduler_priority_affinity_SMP_Unblock(
+  const Scheduler_Control *scheduler,
+  Thread_Control *thread
+)
+{
+  Scheduler_Context *context = _Scheduler_Get_context( scheduler );
+
+  _Scheduler_SMP_Unblock(
+    context,
+    thread,
+    _Scheduler_priority_affinity_SMP_Enqueue_fifo
+  );
+
+  /*
+   * Perform any thread migrations that are needed due to these changes.
+   */
+  _Scheduler_priority_affinity_SMP_Check_for_migrations( context );
+}
+
+/*
+ *  This is unique to this scheduler because it passes a scheduler specific
+ *  get_lowest_scheduled helper to _Scheduler_SMP_Enqueue_ordered.
+ */
+static void _Scheduler_priority_affinity_SMP_Enqueue_ordered(
+  Scheduler_Context *context,
+  Thread_Control *thread,
+  Chain_Node_order order,
+  Scheduler_SMP_Insert insert_ready,
+  Scheduler_SMP_Insert insert_scheduled
+)
+{
+  _Scheduler_SMP_Enqueue_ordered(
+    context,
+    thread,
+    order,
+    insert_ready,
+    insert_scheduled,
+    _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
+    _Scheduler_priority_affinity_SMP_Get_lowest_scheduled
+  );
+}
+
+/*
+ *  This is unique to this scheduler because it is on the path
+ *  to _Scheduler_priority_affinity_SMP_Enqueue_ordered() which
+ *  invokes a scheduler unique get_lowest_scheduled helper.
+ */
+static void _Scheduler_priority_affinity_SMP_Enqueue_lifo(
+  Scheduler_Context *context,
+  Thread_Control *thread
+)
+{
+  _Scheduler_priority_affinity_SMP_Enqueue_ordered(
+    context,
+    thread,
+    _Scheduler_simple_Insert_priority_lifo_order,
+    _Scheduler_priority_SMP_Insert_ready_lifo,
+    _Scheduler_SMP_Insert_scheduled_lifo
+  );
+}
+
+/*
+ * This method is unique to this scheduler because it must
+ * invoke _Scheduler_SMP_Enqueue_scheduled_ordered() with
+ * this scheduler's get_highest_ready() helper.
+ */
+static void _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered(
+  Scheduler_Context *context,
+  Thread_Control *thread,
+  Chain_Node_order order,
+  Scheduler_SMP_Insert insert_ready,
+  Scheduler_SMP_Insert insert_scheduled
+)
+{
+  _Scheduler_SMP_Enqueue_scheduled_ordered(
+    context,
+    thread,
+    order,
+    _Scheduler_priority_affinity_SMP_Get_highest_ready,
+    insert_ready,
+    insert_scheduled,
+    _Scheduler_priority_SMP_Move_from_ready_to_scheduled
+  );
+}
+
+/*
+ *  This is unique to this scheduler because it is on the path
+ *  to _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered() which
+ *  invokes a scheduler unique get_highest_ready helper.
+ */
+static void _Scheduler_priority_affinity_SMP_Enqueue_scheduled_lifo(
+  Scheduler_Context *context,
+  Thread_Control *thread
+)
+{
+  _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered(
+    context,
+    thread,
+    _Scheduler_simple_Insert_priority_lifo_order,
+    _Scheduler_priority_SMP_Insert_ready_lifo,
+    _Scheduler_SMP_Insert_scheduled_lifo
+  );
+}
+
+/*
+ *  This is unique to this scheduler because it is on the path
+ *  to _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered() which
+ *  invokes a scheduler unique get_highest_ready helper.
+ */
+static void _Scheduler_priority_affinity_SMP_Enqueue_scheduled_fifo(
+  Scheduler_Context *context,
+  Thread_Control *thread
+)
+{
+  _Scheduler_priority_affinity_SMP_Enqueue_scheduled_ordered(
+    context,
+    thread,
+    _Scheduler_simple_Insert_priority_fifo_order,
+    _Scheduler_priority_SMP_Insert_ready_fifo,
+    _Scheduler_SMP_Insert_scheduled_fifo
+  );
+}
+
+/*
+ * This is the public scheduler specific Change Priority operation.
+ */
+void _Scheduler_priority_affinity_SMP_Change_priority(
+  const Scheduler_Control *scheduler,
+  Thread_Control          *thread,
+  Priority_Control         new_priority,
+  bool                     prepend_it
+)
+{
+  Scheduler_Context *context = _Scheduler_Get_context( scheduler );
+
+  _Scheduler_SMP_Change_priority(
+    context,
+    thread,
+    new_priority,
+    prepend_it,
+    _Scheduler_priority_SMP_Extract_from_ready,
+    _Scheduler_priority_SMP_Do_update,
+    _Scheduler_priority_affinity_SMP_Enqueue_fifo,
+    _Scheduler_priority_affinity_SMP_Enqueue_lifo,
+    _Scheduler_priority_affinity_SMP_Enqueue_scheduled_fifo,
+    _Scheduler_priority_affinity_SMP_Enqueue_scheduled_lifo
+  );
+
+  /*
+   * Perform any thread migrations that are needed due to these changes.
+   */
+  _Scheduler_priority_affinity_SMP_Check_for_migrations( context );
+}
+
+/*
+ * This is the public scheduler specific Get Affinity operation.
+ */
 bool _Scheduler_priority_affinity_SMP_Get_affinity(
   const Scheduler_Control *scheduler,
   Thread_Control          *thread,
@@ -56,7 +475,7 @@ bool _Scheduler_priority_affinity_SMP_Get_affinity(
 )
 {
   Scheduler_priority_affinity_SMP_Node *node =
-    _Scheduler_priority_affinity_Node_get(thread);
+    _Scheduler_priority_affinity_SMP_Node_get(thread);
 
   (void) scheduler;
 
@@ -65,26 +484,38 @@ bool _Scheduler_priority_affinity_SMP_Get_affinity(
   }
 
   CPU_COPY( cpuset, node->Affinity.set );
-  return true; 
+  return true;
 }
 
 bool _Scheduler_priority_affinity_SMP_Set_affinity(
   const Scheduler_Control *scheduler,
   Thread_Control          *thread,
   size_t                   cpusetsize,
-  cpu_set_t               *cpuset
+  const cpu_set_t         *cpuset
 )
 {
   Scheduler_priority_affinity_SMP_Node *node =
-    _Scheduler_priority_affinity_Node_get(thread);
+    _Scheduler_priority_affinity_SMP_Node_get(thread);
 
   (void) scheduler;
-  
-  if ( ! _CPU_set_Is_valid( cpuset, cpusetsize ) ) {
+
+  /*
+   * Validate that the cpuset meets basic requirements.
+   */
+  if ( !_CPU_set_Is_valid( cpuset, cpusetsize ) ) {
     return false;
   }
 
-  CPU_COPY( node->Affinity.set, cpuset );
-  
+  /*
+   * If the old and new sets are the same, there is no point in
+   * doing anything.
+   */
+  if ( CPU_EQUAL_S( cpusetsize, cpuset, node->Affinity.set ) )
+    return true;
+
+  _Thread_Set_state( thread, STATES_MIGRATING );
+  CPU_COPY( node->Affinity.set, cpuset );
+  _Thread_Clear_state( thread, STATES_MIGRATING );
+
   return true;
 }
diff --git a/cpukit/score/src/schedulerprioritysmp.c b/cpukit/score/src/schedulerprioritysmp.c
index 56bb0ac..a225721 100644
--- a/cpukit/score/src/schedulerprioritysmp.c
+++ b/cpukit/score/src/schedulerprioritysmp.c
@@ -26,6 +26,7 @@
 
 #include <rtems/score/schedulerprioritysmp.h>
 #include <rtems/score/schedulerpriorityimpl.h>
+#include <rtems/score/schedulerprioritysmpimpl.h>
 #include <rtems/score/schedulersmpimpl.h>
 
 static Scheduler_priority_SMP_Context *
@@ -34,13 +35,14 @@ _Scheduler_priority_SMP_Get_context( const Scheduler_Control *scheduler )
   return (Scheduler_priority_SMP_Context *) _Scheduler_Get_context( scheduler );
 }
 
-static Scheduler_priority_SMP_Context *
-_Scheduler_priority_SMP_Get_self( Scheduler_Context *context )
+Scheduler_priority_SMP_Context *_Scheduler_priority_SMP_Get_self(
+  Scheduler_Context *context
+)
 {
   return (Scheduler_priority_SMP_Context *) context;
 }
 
-static Scheduler_priority_SMP_Node *_Scheduler_priority_SMP_Node_get(
+Scheduler_priority_SMP_Node *_Scheduler_priority_SMP_Node_get(
   Thread_Control *thread
 )
 {
@@ -76,7 +78,7 @@ bool _Scheduler_priority_SMP_Allocate(
   return true;
 }
 
-static void _Scheduler_priority_SMP_Do_update(
+void _Scheduler_priority_SMP_Do_update(
   Scheduler_Context *context,
   Scheduler_Node *base_node,
   Priority_Control new_priority
@@ -107,19 +109,22 @@ void _Scheduler_priority_SMP_Update(
 }
 
 static Thread_Control *_Scheduler_priority_SMP_Get_highest_ready(
-  Scheduler_Context *context
+  Scheduler_Context *context,
+  Thread_Control    *thread
 )
 {
   Scheduler_priority_SMP_Context *self =
     _Scheduler_priority_SMP_Get_self( context );
 
+  (void) thread;
+
   return _Scheduler_priority_Ready_queue_first(
     &self->Bit_map,
     &self->Ready[ 0 ]
   );
 }
 
-static void _Scheduler_priority_SMP_Move_from_scheduled_to_ready(
+void _Scheduler_priority_SMP_Move_from_scheduled_to_ready(
   Scheduler_Context *context,
   Thread_Control *scheduled_to_ready
 )
@@ -137,7 +142,7 @@ static void _Scheduler_priority_SMP_Move_from_scheduled_to_ready(
   );
 }
 
-static void _Scheduler_priority_SMP_Move_from_ready_to_scheduled(
+void _Scheduler_priority_SMP_Move_from_ready_to_scheduled(
   Scheduler_Context *context,
   Thread_Control *ready_to_scheduled
 )
@@ -158,7 +163,7 @@ static void _Scheduler_priority_SMP_Move_from_ready_to_scheduled(
   );
 }
 
-static void _Scheduler_priority_SMP_Insert_ready_lifo(
+void _Scheduler_priority_SMP_Insert_ready_lifo(
   Scheduler_Context *context,
   Thread_Control *thread
 )
@@ -175,7 +180,7 @@ static void _Scheduler_priority_SMP_Insert_ready_lifo(
   );
 }
 
-static void _Scheduler_priority_SMP_Insert_ready_fifo(
+void _Scheduler_priority_SMP_Insert_ready_fifo(
   Scheduler_Context *context,
   Thread_Control *thread
 )
@@ -192,7 +197,7 @@ static void _Scheduler_priority_SMP_Insert_ready_fifo(
   );
 }
 
-static void _Scheduler_priority_SMP_Extract_from_ready(
+void _Scheduler_priority_SMP_Extract_from_ready(
   Scheduler_Context *context,
   Thread_Control *thread
 )
@@ -239,7 +244,8 @@ static void _Scheduler_priority_SMP_Enqueue_ordered(
     order,
     insert_ready,
     insert_scheduled,
-    _Scheduler_priority_SMP_Move_from_scheduled_to_ready
+    _Scheduler_priority_SMP_Move_from_scheduled_to_ready,
+    _Scheduler_SMP_Get_lowest_scheduled
   );
 }
 
diff --git a/cpukit/score/src/schedulersimplesmp.c b/cpukit/score/src/schedulersimplesmp.c
index d5d3908..fb42cbd 100644
--- a/cpukit/score/src/schedulersimplesmp.c
+++ b/cpukit/score/src/schedulersimplesmp.c
@@ -66,12 +66,15 @@ static void _Scheduler_simple_SMP_Do_update(
 }
 
 static Thread_Control *_Scheduler_simple_SMP_Get_highest_ready(
-  Scheduler_Context *context
+  Scheduler_Context *context,
+  Thread_Control    *thread
 )
 {
   Scheduler_simple_SMP_Context *self =
     _Scheduler_simple_SMP_Get_self( context );
 
+  (void) thread;
+
   return (Thread_Control *) _Chain_First( &self->Ready );
 }
 
@@ -175,7 +178,8 @@ static void _Scheduler_simple_SMP_Enqueue_ordered(
     order,
     insert_ready,
     insert_scheduled,
-    _Scheduler_simple_SMP_Move_from_scheduled_to_ready
+    _Scheduler_simple_SMP_Move_from_scheduled_to_ready,
+    _Scheduler_SMP_Get_lowest_scheduled
   );
 }
 
-- 
1.7.1

_______________________________________________
rtems-devel mailing list
rtems-devel@rtems.org
http://www.rtems.org/mailman/listinfo/rtems-devel
