[PATCH 2/2] infiniband: Update xa_store_irq() callers after the removal of its gfp parameter
From: "xiaofeng.yan"

Function xa_store_irq() now takes three parameters, because its
"gfp_t gfp" parameter has been removed.

Signed-off-by: xiaofeng.yan
---
 drivers/infiniband/core/cm.c            | 2 +-
 drivers/infiniband/hw/hns/hns_roce_qp.c | 2 +-
 drivers/infiniband/hw/mlx5/srq_cmd.c    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 5740d1ba3568..afcb5711270b 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -879,7 +879,7 @@ static struct cm_id_private *cm_alloc_id_priv(struct ib_device *device,
 static void cm_finalize_id(struct cm_id_private *cm_id_priv)
 {
 	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
-		     cm_id_priv, GFP_KERNEL);
+		     cm_id_priv);
 }
 
 struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 6c081dd985fc..1876a51f9e08 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -237,7 +237,7 @@ static int hns_roce_qp_store(struct hns_roce_dev *hr_dev,
 	if (!hr_qp->qpn)
 		return -EINVAL;
 
-	ret = xa_err(xa_store_irq(xa, hr_qp->qpn, hr_qp, GFP_KERNEL));
+	ret = xa_err(xa_store_irq(xa, hr_qp->qpn, hr_qp));
 	if (ret)
 		dev_err(hr_dev->dev, "Failed to xa store for QPC\n");
 	else
diff --git a/drivers/infiniband/hw/mlx5/srq_cmd.c b/drivers/infiniband/hw/mlx5/srq_cmd.c
index db889ec3fd48..f277e264ceab 100644
--- a/drivers/infiniband/hw/mlx5/srq_cmd.c
+++ b/drivers/infiniband/hw/mlx5/srq_cmd.c
@@ -578,7 +578,7 @@ int mlx5_cmd_create_srq(struct mlx5_ib_dev *dev, struct mlx5_core_srq *srq,
 	refcount_set(&srq->common.refcount, 1);
 	init_completion(&srq->common.free);
 
-	err = xa_err(xa_store_irq(&table->array, srq->srqn, srq, GFP_KERNEL));
+	err = xa_err(xa_store_irq(&table->array, srq->srqn, srq));
 	if (err)
 		goto err_destroy_srq_split;
-- 
2.17.1
[PATCH 1/2] xarray: Fix GFP_KERNEL allocation inside a spinlock
From: "xiaofeng.yan"

Function xa_store_irq() takes a spinlock with interrupts disabled:

  xa_lock_irq() --> spin_lock_irq(&(xa)->xa_lock)

The GFP_KERNEL flag may sleep, which is not allowed in this context.
So change GFP_KERNEL to GFP_ATOMIC and remove the "gfp_t gfp" parameter
from:

  static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
                                   void *entry, gfp_t gfp)

Signed-off-by: xiaofeng.yan
---
 include/linux/xarray.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 92c0160b3352..aeaf97d5642f 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -595,7 +595,6 @@ static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
  * @xa: XArray.
  * @index: Index into array.
  * @entry: New entry.
- * @gfp: Memory allocation flags.
  *
  * This function is like calling xa_store() except it disables interrupts
  * while holding the array lock.
@@ -605,12 +604,12 @@ static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
  * Return: The old entry at this index or xa_err() if an error happened.
  */
 static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
-			void *entry, gfp_t gfp)
+			void *entry)
 {
 	void *curr;
 
 	xa_lock_irq(xa);
-	curr = __xa_store(xa, index, entry, gfp);
+	curr = __xa_store(xa, index, entry, GFP_ATOMIC);
 	xa_unlock_irq(xa);
 
 	return curr;
-- 
2.17.1
[tip:sched/core] sched/core: Remove a parameter in the migrate_task_rq() function
Commit-ID:  5a4fd0368517bc5b5399ef958f6d30cbff492918
Gitweb:     http://git.kernel.org/tip/5a4fd0368517bc5b5399ef958f6d30cbff492918
Author:     xiaofeng.yan <yanxiaof...@inspur.com>
AuthorDate: Wed, 23 Sep 2015 14:55:59 +0800
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 6 Oct 2015 17:08:23 +0200

sched/core: Remove a parameter in the migrate_task_rq() function

The parameter "int next_cpu" in the following function is unused:

  migrate_task_rq(struct task_struct *p, int next_cpu)

Remove it.

Signed-off-by: xiaofeng.yan <yanxiaof...@inspur.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Mike Galbraith <efa...@gmx.de>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1442991360-31945-1-git-send-email-yanxiaof...@inspur.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/core.c  | 2 +-
 kernel/sched/fair.c  | 2 +-
 kernel/sched/sched.h | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a395db1..1764a0f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1294,7 +1294,7 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 
 	if (task_cpu(p) != new_cpu) {
 		if (p->sched_class->migrate_task_rq)
-			p->sched_class->migrate_task_rq(p, new_cpu);
+			p->sched_class->migrate_task_rq(p);
 		p->se.nr_migrations++;
 		perf_event_task_migrate(p);
 	}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3bdc3da..700eb54 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5009,7 +5009,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
  * previous cpu. However, the caller only guarantees p->pi_lock is held; no
  * other assumptions, including the state of rq->lock, should be made.
  */
-static void migrate_task_rq_fair(struct task_struct *p, int next_cpu)
+static void migrate_task_rq_fair(struct task_struct *p)
 {
 	/*
 	 * We are supposed to update the task to "current" time, then its up to date
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e08cc4c..efd3bfc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1190,7 +1190,7 @@ struct sched_class {
 
 #ifdef CONFIG_SMP
 	int  (*select_task_rq)(struct task_struct *p, int task_cpu, int sd_flag, int flags);
-	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
+	void (*migrate_task_rq)(struct task_struct *p);
 
 	void (*task_waking) (struct task_struct *task);
 	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
-- 
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
[PATCH] ACPI: Fix an outdated variable name
The member "value" in struct acpi_pnp_device_id has been renamed to
"string".

Signed-off-by: xiaofeng.yan <yanxiaof...@inspur.com>
---
 drivers/acpi/acpica/nsdumpdv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/acpica/nsdumpdv.c b/drivers/acpi/acpica/nsdumpdv.c
index 7dc367e..1af1af7 100644
--- a/drivers/acpi/acpica/nsdumpdv.c
+++ b/drivers/acpi/acpica/nsdumpdv.c
@@ -89,7 +89,7 @@ acpi_ns_dump_one_device(acpi_handle obj_handle,
 	ACPI_DEBUG_PRINT_RAW((ACPI_DB_TABLES,
 			      "HID: %s, ADR: %8.8X%8.8X, Status: %X\n",
-			      info->hardware_id.value,
+			      info->hardware_id.string,
 			      ACPI_FORMAT_UINT64(info->address),
 			      info->current_status));
 
 	ACPI_FREE(info);
-- 
1.9.1
[PATCH v2] *** TEST ***
TEST
-- 
1.9.1
[PATCH v2] Intel_irq_remapping: fix a comment error
Change "tabke" to "take".

Signed-off-by: xiaofeng.yan <yanxiaof...@inspur.com>
Reviewed-by: Jiang Liu <jiang@linux.intel.com>
---
 drivers/iommu/intel_irq_remapping.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 5709ae9..85676d0 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -46,7 +46,7 @@ static struct hpet_scope ir_hpet[MAX_HPET_TBS];
  *	->iommu->register_lock
  * Note:
  * intel_irq_remap_ops.{supported,prepare,enable,disable,reenable} are called
- * in single-threaded environment with interrupt disabled, so no need to tabke
+ * in single-threaded environment with interrupt disabled, so no need to take
  * the dmar_global_lock.
  */
 static DEFINE_RAW_SPINLOCK(irq_2_ir_lock);
-- 
1.9.1
[PATCH] intel_irq_remapping: fix a comment error
Change "tabke" to "take".

Signed-off-by: xiaofeng.yan <yanxiaof...@inspur.com>
---
 drivers/iommu/intel_irq_remapping.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 5709ae9..d59f82d 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -46,7 +46,7 @@ static struct hpet_scope ir_hpet[MAX_HPET_TBS];
  *	->iommu->register_lock
  * Note:
  * intel_irq_remap_ops.{supported,prepare,enable,disable,reenable} are called
- * in single-threaded environment with interrupt disabled, so no need to tabke
+ * in single-threaded environment with interrupt disabled, so no need to take
  * the dmar_global_lock.
  */
 static DEFINE_RAW_SPINLOCK(irq_2_ir_lock);
@@ -185,6 +185,7 @@ static int modify_irte(int irq, struct irte *irte_modified)
 		return -1;
 
 	raw_spin_lock_irqsave(&irq_2_ir_lock, flags);
+	while(1):
 
 	iommu = irq_iommu->iommu;
-- 
1.9.1
test
test
[tip:sched/core] sched/deadline: Fix a precision problem in the microseconds range
Commit-ID:  177ef2a6315ea7bf173653182324e1dcd08ffeaa
Gitweb:     http://git.kernel.org/tip/177ef2a6315ea7bf173653182324e1dcd08ffeaa
Author:     xiaofeng.yan <xiaofeng@huawei.com>
AuthorDate: Tue, 26 Aug 2014 03:15:41 +0000
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Sun, 7 Sep 2014 11:09:59 +0200

sched/deadline: Fix a precision problem in the microseconds range

An overrun could happen in function start_hrtick_dl() when a task with
SCHED_DEADLINE runs in the microseconds range.

For example, if a task with SCHED_DEADLINE has the following parameters:

  Task  runtime  deadline  period
  P1    200us    500us     500us

The deadline and period from task P1 are less than 1ms.

In order to achieve microsecond precision, we need to enable the HRTICK
feature with the following commands:

  PC#echo "HRTICK" > /sys/kernel/debug/sched_features
  PC#trace-cmd record -e sched_switch &
  PC#./schedtool -E -t 200000:500000:500000 -e ./test

The binary test is in an endless while(1) loop here. Some pieces of
trace.dat are as follows:

  <idle>-0       157.603157: sched_switch: :R ==> 2481:4294967295: test
  test-2481      157.603203: sched_switch: 2481:R ==> 0:120: swapper/2
  <idle>-0       157.605657: sched_switch: :R ==> 2481:4294967295: test
  test-2481      157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd
  trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test

We can get the runtime of P1 from the information above:

  runtime = 157.608183 - 157.605657
  runtime = 0.002526 (2.526ms)

The correct runtime should be less than or equal to 200us at some point.

The problem is caused by the conditional judgment "delta > 10000" in
function start_hrtick_dl(): no hrtimer is started to control the rest of
the runtime when the remaining runtime is less than 10us, so the process
continues to run until the tick period arrives.

Move the code that enforces the minimum time slice from
hrtick_start_fair() to hrtick_start(), because the EDF scheduling class
also needs it in start_hrtick_dl().

To fix this problem, we call hrtimer_start() unconditionally in
start_hrtick_dl(), and make sure the scheduling slice won't be smaller
than 10us in hrtimer_start().

Signed-off-by: Xiaofeng Yan <xiaofeng@huawei.com>
Reviewed-by: Li Zefan <lize...@huawei.com>
Acked-by: Juri Lelli <juri.le...@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Link: http://lkml.kernel.org/r/1409022941-5880-1-git-send-email-xiaofeng@huawei.com
[ Massaged the changelog and the code. ]
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/core.c     | 10 +++++++++-
 kernel/sched/deadline.c |  5 +----
 kernel/sched/fair.c     |  8 --------
 3 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a773c91..8d00f4a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -455,7 +455,15 @@ static void __hrtick_start(void *arg)
 void hrtick_start(struct rq *rq, u64 delay)
 {
 	struct hrtimer *timer = &rq->hrtick_timer;
-	ktime_t time = ktime_add_ns(timer->base->get_time(), delay);
+	ktime_t time;
+	s64 delta;
+
+	/*
+	 * Don't schedule slices shorter than 10000ns, that just
+	 * doesn't make sense and can cause timer DoS.
+	 */
+	delta = max_t(s64, delay, 10000LL);
+	time = ktime_add_ns(timer->base->get_time(), delta);
 
 	hrtimer_set_expires(timer, time);
 
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d21a8e0..cc4eb89 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 #ifdef CONFIG_SCHED_HRTICK
 static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
 {
-	s64 delta = p->dl.dl_runtime - p->dl.runtime;
-
-	if (delta > 10000)
-		hrtick_start(rq, p->dl.runtime);
+	hrtick_start(rq, p->dl.runtime);
 }
 #endif
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 02fc949..50d2025 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3897,14 +3897,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 			resched_curr(rq);
 			return;
 		}
-
-		/*
-		 * Don't schedule slices shorter than 10000ns, that just
-		 * doesn't make sense. Rely on vruntime for fairness.
-		 */
-		if (rq->curr != p)
-			delta = max_t(s64, 10000LL, delta);
-
 		hrtick_start(rq, delta);
 	}
 }
[PATCH v4] sched/deadline: Fix the precision problem in the microsecond range
The overrun could happen in function start_hrtick_dl() when a task with
SCHED_DEADLINE runs in the microsecond range.

For example, a task with SCHED_DEADLINE has the following parameters:

  Task  runtime  deadline  period
  P1    200us    500us     500us

The deadline and period from task P1 are less than 1ms. In order to
achieve microsecond precision, we need to enable the HRTICK feature with
the following commands:

  PC#echo "HRTICK" > /sys/kernel/debug/sched_features
  PC#trace-cmd record -e sched_switch &
  PC#./schedtool -E -t 200000:500000:500000 -e ./test

The binary test is in an endless while(1) loop here. Some pieces of
trace.dat are as follows (irrelevant information removed):

  <idle>-0       157.603157: sched_switch: :R ==> 2481:4294967295: test
  test-2481      157.603203: sched_switch: 2481:R ==> 0:120: swapper/2
  <idle>-0       157.605657: sched_switch: :R ==> 2481:4294967295: test
  test-2481      157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd
  trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test

We can get the runtime of P1 from the information above:

  runtime = 157.608183 - 157.605657
  runtime = 0.002526 (2.526ms)

The correct runtime should be less than or equal to 200us at some point.

The problem is caused by the conditional judgment "delta > 10000" in
function start_hrtick_dl(): no hrtimer is started to control the rest of
the runtime when the remaining runtime is less than 10us, so the process
continues to run until the tick period arrives.

Move the code that enforces the minimum time slice from
hrtick_start_fair() to hrtick_start(), because the EDF scheduling class
also needs it in start_hrtick_dl().

To fix this problem, we call hrtimer_start() unconditionally in
start_hrtick_dl(), and make sure the scheduling slice won't be smaller
than 10us in hrtimer_start().
Signed-off-by: Xiaofeng Yan <xiaofeng@huawei.com>
Reviewed-by: Peter Zijlstra <pet...@infradead.org>
Reviewed-by: Li Zefan <lize...@huawei.com>
Acked-by: Juri Lelli <juri.le...@arm.com>
---
 kernel/sched/core.c     |    8 +++++++-
 kernel/sched/deadline.c |    5 +----
 kernel/sched/fair.c     |    8 --------
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ec1a286..da2c6f3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -449,8 +449,14 @@ static void __hrtick_start(void *arg)
 void hrtick_start(struct rq *rq, u64 delay)
 {
 	struct hrtimer *timer = &rq->hrtick_timer;
-	ktime_t time = ktime_add_ns(timer->base->get_time(), delay);
+	ktime_t time;
 
+	/*
+	 * Don't schedule slices shorter than 10000ns, that just
+	 * doesn't make sense and can cause timer DoS.
+	 */
+	s64 delta = max_t(s64, delay, 10000LL);
+	time = ktime_add_ns(timer->base->get_time(), delta);
 	hrtimer_set_expires(timer, time);
 
 	if (rq == this_rq()) {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 255ce13..ce52d07 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 #ifdef CONFIG_SCHED_HRTICK
 static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
 {
-	s64 delta = p->dl.dl_runtime - p->dl.runtime;
-
-	if (delta > 10000)
-		hrtick_start(rq, p->dl.runtime);
+	hrtick_start(rq, p->dl.runtime);
 }
 #endif
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86..0d6b3e6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3892,14 +3892,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 			resched_curr(rq);
 			return;
 		}
-
-		/*
-		 * Don't schedule slices shorter than 10000ns, that just
-		 * doesn't make sense. Rely on vruntime for fairness.
-		 */
-		if (rq->curr != p)
-			delta = max_t(s64, 10000LL, delta);
-
 		hrtick_start(rq, delta);
 	}
 }
-- 
1.7.9.5
Re: [PATCH v3] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/8/12 22:52, Ingo Molnar wrote:
* xiaofeng.yan wrote:

It could be wrong for the precision of runtime and deadline when the precision is within the microsecond level. For example:

Task  runtime  deadline  period
P1    200us    500us     500us

This case needs the HRTICK feature enabled, via the following commands:

PC#echo "HRTICK" > /sys/kernel/debug/sched_features
PC#trace-cmd record -e sched_switch &
PC#./schedtool -E -t 20:50 -e ./test

Some of the runtimes and deadlines run at millisecond level, as seen in kernelshark. Some pieces of trace.dat are as follows (irrelevant information removed):

<idle>-0 157.603157: sched_switch: :R ==> 2481:4294967295: test
test-2481 157.603203: sched_switch: 2481:R ==> 0:120: swapper/2
<idle>-0 157.605657: sched_switch: :R ==> 2481:4294967295: test
test-2481 157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd
trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test

We can compute the runtime from the information above:

runtime = 157.608183 - 157.605657 = 0.002526 (2.526ms)

The correct runtime should be less than or equal to 200us at some point. The problem is caused by the conditional judgment "delta > 1": no hrtimer is started to bound the runtime when the remaining runtime is less than 10us, so the process continues to run until the next tick period comes.

Move the code with the limit of the least time slice from hrtick_start_fair() to hrtick_start(), because the EDF scheduling class also needs this function in start_hrtick_dl(). To fix this problem, we call hrtick_start() unconditionally in start_hrtick_dl(), and make sure the scheduling slice won't fall below the minimum enforced in hrtick_start().

Signed-off-by: Xiaofeng Yan
Reviewed-by: Peter Zijlstra
Reviewed-by: Li Zefan

The whole changelog is very hard to read and isn't proper English, nor is it truly explanatory. Could you please fix the changelog, or bounce it to someone who will fix it for you?

Thanks,
Ingo

Thanks for your reply. I will fix my changelog with proper English.

Thanks,
Yan
[PATCH v3] sched/deadline: overrun could happen in start_hrtick_dl
It could be wrong for the precision of runtime and deadline when the precision is within the microsecond level. For example:

Task  runtime  deadline  period
P1    200us    500us     500us

This case needs the HRTICK feature enabled, via the following commands:

PC#echo "HRTICK" > /sys/kernel/debug/sched_features
PC#trace-cmd record -e sched_switch &
PC#./schedtool -E -t 20:50 -e ./test

Some of the runtimes and deadlines run at millisecond level, as seen in kernelshark. Some pieces of trace.dat are as follows (irrelevant information removed):

<idle>-0 157.603157: sched_switch: :R ==> 2481:4294967295: test
test-2481 157.603203: sched_switch: 2481:R ==> 0:120: swapper/2
<idle>-0 157.605657: sched_switch: :R ==> 2481:4294967295: test
test-2481 157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd
trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test

We can compute the runtime from the information above:

runtime = 157.608183 - 157.605657 = 0.002526 (2.526ms)

The correct runtime should be less than or equal to 200us at some point. The problem is caused by the conditional judgment "delta > 1": no hrtimer is started to bound the runtime when the remaining runtime is less than 10us, so the process continues to run until the next tick period comes.

Move the code with the limit of the least time slice from hrtick_start_fair() to hrtick_start(), because the EDF scheduling class also needs this function in start_hrtick_dl(). To fix this problem, we call hrtick_start() unconditionally in start_hrtick_dl(), and make sure the scheduling slice won't fall below the minimum enforced in hrtick_start().
Signed-off-by: Xiaofeng Yan
Reviewed-by: Peter Zijlstra
Reviewed-by: Li Zefan
---
 kernel/sched/core.c     |    8 +++++++-
 kernel/sched/deadline.c |    5 +----
 kernel/sched/fair.c     |    8 --------
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1211575..53514ba 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -449,8 +449,14 @@ static void __hrtick_start(void *arg)
 void hrtick_start(struct rq *rq, u64 delay)
 {
 	struct hrtimer *timer = &rq->hrtick_timer;
-	ktime_t time = ktime_add_ns(timer->base->get_time(), delay);
+	ktime_t time;
+	/*
+	 * Don't schedule slices shorter than 1ns, that just
+	 * doesn't make sense and can cause timer DoS.
+	 */
+	s64 delta = max_t(s64, delay, 1LL);
+	time = ktime_add_ns(timer->base->get_time(), delta);
 
 	hrtimer_set_expires(timer, time);
 
 	if (rq == this_rq()) {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 255ce13..ce52d07 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 #ifdef CONFIG_SCHED_HRTICK
 static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
 {
-	s64 delta = p->dl.dl_runtime - p->dl.runtime;
-
-	if (delta > 1)
-		hrtick_start(rq, p->dl.runtime);
+	hrtick_start(rq, p->dl.runtime);
 }
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86..0d6b3e6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3892,14 +3892,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 			resched_curr(rq);
 			return;
 		}
-
-		/*
-		 * Don't schedule slices shorter than 1ns, that just
-		 * doesn't make sense. Rely on vruntime for fairness.
-		 */
-		if (rq->curr != p)
-			delta = max_t(s64, 1LL, delta);
-
 		hrtick_start(rq, delta);
 	}
 }
-- 
1.7.9.5
[tip:sched/core] sched/rt: Fix replenish_dl_entity() comments to match the current upstream code
Commit-ID: 1b09d29bc00964d9032d80516f958044ac6b3805
Gitweb: http://git.kernel.org/tip/1b09d29bc00964d9032d80516f958044ac6b3805
Author: xiaofeng.yan
AuthorDate: Mon, 7 Jul 2014 05:59:04 +
Committer: Ingo Molnar
CommitDate: Wed, 16 Jul 2014 13:38:20 +0200

sched/rt: Fix replenish_dl_entity() comments to match the current upstream code

Signed-off-by: xiaofeng.yan
Signed-off-by: Peter Zijlstra
Link: http://lkml.kernel.org/r/1404712744-16986-1-git-send-email-xiaofeng@huawei.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/deadline.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index df0b77a..255ce13 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -306,7 +306,7 @@ static inline void setup_new_dl_entity(struct sched_dl_entity *dl_se,
  * the overrunning entity can't interfere with other entity in the system and
  * can't make them miss their deadlines. Reasons why this kind of overruns
  * could happen are, typically, a entity voluntarily trying to overcome its
- * runtime, or it just underestimated it during sched_setscheduler_ex().
+ * runtime, or it just underestimated it during sched_setattr().
  */
 static void replenish_dl_entity(struct sched_dl_entity *dl_se,
 				struct sched_dl_entity *pi_se)
[PATCH v2] sched/deadline: overrun could happen in start_hrtick_dl
It could be wrong for the precision of runtime and deadline when the precision is within the microsecond level. For example:

Task  runtime  deadline  period
P1    200us    500us     500us

This case needs the HRTICK feature enabled, via the following commands:

PC#echo "HRTICK" > /sys/kernel/debug/sched_features
PC#trace-cmd record -e sched_switch &
PC#./schedtool -E -t 20:50 -e ./test

Some of the runtimes and deadlines run at millisecond level, as seen in kernelshark. Some pieces of trace.dat are as follows (irrelevant information removed):

<idle>-0 157.603157: sched_switch: :R ==> 2481:4294967295: test
test-2481 157.603203: sched_switch: 2481:R ==> 0:120: swapper/2
<idle>-0 157.605657: sched_switch: :R ==> 2481:4294967295: test
test-2481 157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd
trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test

We can compute the runtime from the information above:

runtime = 157.608183 - 157.605657 = 0.002526 (2.526ms)

The correct runtime should be less than or equal to 200us at some point. The problem is caused by the conditional judgment "delta > 1": no hrtimer is started to bound the runtime when the remaining runtime is less than 10us, so the process continues to run until the next tick period comes.

Move the code with the limit of the least time slice from hrtick_start_fair() to hrtick_start(), because the EDF scheduling class also needs this function in start_hrtick_dl(). To fix this problem, we call hrtick_start() unconditionally in start_hrtick_dl(), and make sure the scheduling slice won't fall below the minimum enforced in hrtick_start().
Signed-off-by: Xiaofeng Yan
Reviewed-by: Li Zefan
---
 kernel/sched/core.c     |    8 +++++++-
 kernel/sched/deadline.c |    5 +----
 kernel/sched/fair.c     |    8 --------
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3bdf01b..7f066c2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -442,8 +442,14 @@ static void __hrtick_start(void *arg)
 void hrtick_start(struct rq *rq, u64 delay)
 {
 	struct hrtimer *timer = &rq->hrtick_timer;
-	ktime_t time = ktime_add_ns(timer->base->get_time(), delay);
+	ktime_t time;
+	/*
+	 * Don't schedule slices shorter than 1ns, that just
+	 * doesn't make sense and can cause timer DoS.
+	 */
+	s64 delta = max_t(s64, delay, 1LL);
+	time = ktime_add_ns(timer->base->get_time(), delta);
 
 	hrtimer_set_expires(timer, time);
 
 	if (rq == this_rq()) {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fc4f98b1..9135771 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 #ifdef CONFIG_SCHED_HRTICK
 static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
 {
-	s64 delta = p->dl.dl_runtime - p->dl.runtime;
-
-	if (delta > 1)
-		hrtick_start(rq, p->dl.runtime);
+	hrtick_start(rq, p->dl.runtime);
 }
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..e5cfd57 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3857,14 +3857,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 			resched_task(p);
 			return;
 		}
-
-		/*
-		 * Don't schedule slices shorter than 1ns, that just
-		 * doesn't make sense. Rely on vruntime for fairness.
-		 */
-		if (rq->curr != p)
-			delta = max_t(s64, 1LL, delta);
-
 		hrtick_start(rq, delta);
 	}
 }
-- 
1.7.9.5
[PATCH] sched/core: Limit the least time slice in hrtick_start()
Move the code that enforces the minimum time slice from hrtick_start_fair() to hrtick_start(), because the EDF scheduling class also needs this function in start_hrtick_dl(). EDF tasks with runtimes at the microsecond level otherwise lose precision, because the system cannot end the process when the remaining runtime is less than 10us. The goal is to fix this bug in start_hrtick_dl() and to reduce code redundancy.

Signed-off-by: Xiaofeng Yan
Reviewed-by: Li Zefan
---
 kernel/sched/core.c     |    8 +++++++-
 kernel/sched/deadline.c |    5 +----
 kernel/sched/fair.c     |    8 --------
 3 files changed, 8 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3bdf01b..7f066c2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -442,8 +442,14 @@ static void __hrtick_start(void *arg)
 void hrtick_start(struct rq *rq, u64 delay)
 {
 	struct hrtimer *timer = &rq->hrtick_timer;
-	ktime_t time = ktime_add_ns(timer->base->get_time(), delay);
+	ktime_t time;
+	/*
+	 * Don't schedule slices shorter than 1ns, that just
+	 * doesn't make sense and can cause timer DoS.
+	 */
+	s64 delta = max_t(s64, delay, 1LL);
+	time = ktime_add_ns(timer->base->get_time(), delta);
 
 	hrtimer_set_expires(timer, time);
 
 	if (rq == this_rq()) {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index fc4f98b1..9135771 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
 #ifdef CONFIG_SCHED_HRTICK
 static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
 {
-	s64 delta = p->dl.dl_runtime - p->dl.runtime;
-
-	if (delta > 1)
-		hrtick_start(rq, p->dl.runtime);
+	hrtick_start(rq, p->dl.runtime);
 }
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..e5cfd57 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3857,14 +3857,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 			resched_task(p);
 			return;
 		}
-
-		/*
-		 * Don't schedule slices shorter than 1ns, that just
-		 * doesn't make sense. Rely on vruntime for fairness.
-		 */
-		if (rq->curr != p)
-			delta = max_t(s64, 1LL, delta);
-
 		hrtick_start(rq, delta);
 	}
 }
-- 
1.7.9.5
[PATCH] sched/core: Limit the least time slice in hrtick_start()
Move this piece of code with the limit of the least time slice from hrtick_start_fair() to hrtick_start() because EDF schedule class also need this function in start_hrtick_dl(). EDF tasks with the runtime of microsecond level will lead to the wrong precision because system can't control the end of process when left runtime is less than 10us. The goal is to fix this bug from start_hrtick_dl() and reduce code redundancy. Signed-off-by: Xiaofeng Yan xiaofeng@huawei.com Reviewed-by: Li Zefan lize...@huawei.com --- kernel/sched/core.c |8 +++- kernel/sched/deadline.c |5 + kernel/sched/fair.c |8 3 files changed, 8 insertions(+), 13 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3bdf01b..7f066c2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -442,8 +442,14 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = rq-hrtick_timer; - ktime_t time = ktime_add_ns(timer-base-get_time(), delay); + ktime_t time; + /* +* Don't schedule slices shorter than 1ns, that just +* doesn't make sense and can cause timer DoS. 
+*/ + s64 delta = max_t(s64, delay, 1LL); + time = ktime_add_ns(timer-base-get_time(), delta); hrtimer_set_expires(timer, time); if (rq == this_rq()) { diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..9135771 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p-dl.dl_runtime - p-dl.runtime; - - if (delta 1) - hrtick_start(rq, p-dl.runtime); + hrtick_start(rq, p-dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fea7d33..e5cfd57 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3857,14 +3857,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) resched_task(p); return; } - - /* -* Don't schedule slices shorter than 1ns, that just -* doesn't make sense. Rely on vruntime for fairness. -*/ - if (rq-curr != p) - delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } -- 1.7.9.5 -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
[PATCH v2] sched/deadline: overrun could happen in start_hrtick_dl
It could be wrong for the precision of runtime and deadline when the precision is within microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case need enbale HRTICK feature by the next command PC#echo HRTICK /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch PC#./schedtool -E -t 20:50 -e ./test Some of runtime and deadline run with millisecond level by reading kernershark. Some pieces of trace.dat are as follows: (remove some irrelevant information) idle-0 157.603157: sched_switch: :R == 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R == 0:120: swapper/2 idle-0 157.605657: sched_switch: :R == 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R == 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch:2483:R==2481:4294967295: test We can get the runtime from the information at some point. runtime = 157.605657 - 157.608183 runtime = 0.002526(2.526ms) The correct runtime should be less than or equal to 200us at some point. The problem is caused by a conditional judgment delta 1. Because no hrtimer start up to control the runtime when runtime is less than 10us. So the process will continue to run until tick-period coming. Move the code with the limit of the least time slice from hrtick_start_fair() to hrtick_start() because EDF schedule class also need this function in start_hrtick_dl(). To fix this problem, we call hrtimer_start() unconditionally in start_hrtick_dl(), and make sure schedule slice won't be smaller than 10us in hrtimer_start(). 
Signed-off-by: Xiaofeng Yan xiaofeng@huawei.com Reviewed-by: Li Zefan lize...@huawei.com --- kernel/sched/core.c |8 +++- kernel/sched/deadline.c |5 + kernel/sched/fair.c |8 3 files changed, 8 insertions(+), 13 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3bdf01b..7f066c2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -442,8 +442,14 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = &rq->hrtick_timer; - ktime_t time = ktime_add_ns(timer->base->get_time(), delay); + ktime_t time; + /* +* Don't schedule slices shorter than 1ns, that just +* doesn't make sense and can cause timer DoS. +*/ + s64 delta = max_t(s64, delay, 1LL); + time = ktime_add_ns(timer->base->get_time(), delta); hrtimer_set_expires(timer, time); if (rq == this_rq()) { diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..9135771 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + hrtick_start(rq, p->dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fea7d33..e5cfd57 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3857,14 +3857,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) resched_task(p); return; } - - /* -* Don't schedule slices shorter than 1ns, that just -* doesn't make sense. Rely on vruntime for fairness.
-*/ - if (rq->curr != p) - delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } -- 1.7.9.5 -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 20:52, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 07:50:22PM +0800, xiaofeng.yan wrote: I have tested this solution; it works very well with the deadline schedule class. Great, please send it as a proper patch and I might just press 'A' ;-) Ok, I will send it later.
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 19:23, xiaofeng.yan wrote: On 2014/7/8 17:33, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 08:53:27AM +0000, xiaofeng.yan wrote: static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { -s64 delta = p->dl.dl_runtime - p->dl.runtime; - -if (delta > 1) -hrtick_start(rq, p->dl.runtime); +delta = max_t(s64, 1LL, delta); +hrtick_start(rq, delta); } no, no, no. I said to unify the test. I understand your idea after reading the next patch. This is a good solution. I will test it with your patch. I have tested this solution; it works very well with the deadline schedule class. kernel/sched/core.c |9 - kernel/sched/deadline.c |5 + kernel/sched/fair.c |8 3 files changed, 9 insertions(+), 13 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3bdf01b..cc9a058 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -442,8 +442,15 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = &rq->hrtick_timer; - ktime_t time = ktime_add_ns(timer->base->get_time(), delay); + ktime_t time; + + /* +* Don't schedule slices shorter than 1ns, that just +* doesn't make sense and can cause timer DoS.
+*/ + s64 delta = max_t(s64, delay, 1LL); + time = ktime_add_ns(timer->base->get_time(), delta); hrtimer_set_expires(timer, time); if (rq == this_rq()) { diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..9135771 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + hrtick_start(rq, p->dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fea7d33..e5cfd57 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3857,14 +3857,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) resched_task(p); return; } - - /* -* Don't schedule slices shorter than 1ns, that just -* doesn't make sense. Rely on vruntime for fairness. -*/ - if (rq->curr != p) - delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } -- --- kernel/sched/core.c | 9 - kernel/sched/deadline.c | 3 +-- kernel/sched/fair.c | 7 --- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e1a2f31bb0cb..c7b8a6fbac66 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -444,7 +444,14 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = &rq->hrtick_timer; -ktime_t time = ktime_add_ns(timer->base->get_time(), delay); +ktime_t time; + +/* + * Don't schedule slices shorter than 1ns, that just + * doesn't make sense and can cause timer DoS.
+ */ +delta = max_t(s64, delta, 1LL); transfer the argument delta to delay +time = ktime_add_ns(timer->base->get_time(), delay); hrtimer_set_expires(timer, time); diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1258f..e1e24eea8061 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -999,8 +999,7 @@ static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { s64 delta = p->dl.dl_runtime - p->dl.runtime; -if (delta > 1) -hrtick_start(rq, p->dl.runtime); +hrtick_start(rq, p->dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 923fe32db6b3..713c58d2a7b0 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3901,13 +3901,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) return; } -/* - * Don't schedule slices shorter than 1ns, that just - * doesn't make sense. Rely on vruntime for fairness. - */ -if (rq->curr != p) -delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } --
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 17:33, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 08:53:27AM +0000, xiaofeng.yan wrote: static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + delta = max_t(s64, 1LL, delta); + hrtick_start(rq, delta); } no, no, no. I said to unify the test. I understand your idea after reading the next patch. This is a good solution. I will test it with your patch. --- kernel/sched/core.c | 9 - kernel/sched/deadline.c | 3 +-- kernel/sched/fair.c | 7 --- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e1a2f31bb0cb..c7b8a6fbac66 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -444,7 +444,14 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = &rq->hrtick_timer; - ktime_t time = ktime_add_ns(timer->base->get_time(), delay); + ktime_t time; + + /* +* Don't schedule slices shorter than 1ns, that just +* doesn't make sense and can cause timer DoS. +*/ + delta = max_t(s64, delta, 1LL); transfer the argument delta to delay + time = ktime_add_ns(timer->base->get_time(), delay); hrtimer_set_expires(timer, time); diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1258f..e1e24eea8061 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -999,8 +999,7 @@ static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { s64 delta = p->dl.dl_runtime - p->dl.runtime; - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + hrtick_start(rq, p->dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 923fe32db6b3..713c58d2a7b0 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3901,13 +3901,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) return; } - /* -* Don't schedule slices shorter than 1ns, that just -* doesn't make sense.
Rely on vruntime for fairness. -*/ - if (rq->curr != p) - delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } }
[PATCH] sched/deadline: overrun could happen in start_hrtick_dl
The precision of runtime and deadline can be wrong when they are at microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch & PC#./schedtool -E -t 20:50 -e ./test Some runtimes and deadlines run at millisecond level, as seen in KernelShark. Some pieces of trace.dat are as follows (some irrelevant information removed): <idle>-0 157.603157: sched_switch: :R ==> 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R ==> 0:120: swapper/2 <idle>-0 157.605657: sched_switch: :R ==> 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test We can compute the runtime from this information: runtime = 157.608183 - 157.605657 = 0.002526s (2.526ms) The correct runtime should be less than or equal to 200us at any point. The problem is caused by the conditional judgment "delta > 1": no hrtimer is started to bound the runtime when the remaining runtime is less than 10us, so the process continues to run until the next tick period arrives. To fix this problem, set delta to 10us when it is less than 10us, so the hrtimer is started to bound the end of the process's slice.
Signed-off-by: Xiaofeng Yan --- kernel/sched/deadline.c |6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..51e6b0e 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + s64 delta = max_t(s64, 1LL, p->dl.runtime); + hrtick_start(rq, delta); } #endif -- 1.7.9.5
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 16:53, xiaofeng.yan wrote: Sorry, I sent an old patch; I will send a new one later. Thanks Yan The precision of runtime and deadline can be wrong when they are at microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch & PC#./schedtool -E -t 20:50 -e ./test Some runtimes and deadlines run at millisecond level, as seen in KernelShark. Some pieces of trace.dat are as follows (some irrelevant information removed): <idle>-0 157.603157: sched_switch: :R ==> 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R ==> 0:120: swapper/2 <idle>-0 157.605657: sched_switch: :R ==> 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test We can compute the runtime from this information: runtime = 157.608183 - 157.605657 = 0.002526s (2.526ms) The correct runtime should be less than or equal to 200us at any point. The problem is caused by the conditional judgment "delta > 1": no hrtimer is started to bound the runtime when the remaining runtime is less than 10us, so the process continues to run until the next tick period arrives. To fix this problem, set delta to 10us when it is less than 10us, so the hrtimer is started to bound the end of the process's slice.
Signed-off-by: Xiaofeng Yan --- kernel/sched/deadline.c |6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..b71c229 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + delta = max_t(s64, 1LL, delta); + hrtick_start(rq, delta); } #endif --
[PATCH] sched/deadline: overrun could happen in start_hrtick_dl
The precision of runtime and deadline can be wrong when they are at microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch & PC#./schedtool -E -t 20:50 -e ./test Some runtimes and deadlines run at millisecond level, as seen in KernelShark. Some pieces of trace.dat are as follows (some irrelevant information removed): <idle>-0 157.603157: sched_switch: :R ==> 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R ==> 0:120: swapper/2 <idle>-0 157.605657: sched_switch: :R ==> 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R ==> 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch: 2483:R ==> 2481:4294967295: test We can compute the runtime from this information: runtime = 157.608183 - 157.605657 = 0.002526s (2.526ms) The correct runtime should be less than or equal to 200us at any point. The problem is caused by the conditional judgment "delta > 1": no hrtimer is started to bound the runtime when the remaining runtime is less than 10us, so the process continues to run until the next tick period arrives. To fix this problem, set delta to 10us when it is less than 10us, so the hrtimer is started to bound the end of the process's slice.
Signed-off-by: Xiaofeng Yan --- kernel/sched/deadline.c |6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..b71c229 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + delta = max_t(s64, 1LL, delta); + hrtick_start(rq, delta); } #endif -- 1.7.9.5
Re: [PATCH] sched/rt: overrun could happen in start_hrtick_dl
On 2014/7/8 15:49, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 10:51:02AM +0800, xiaofeng.yan wrote: On 2014/7/8 10:40, Li Zefan wrote: On 2014/7/8 9:10, xiaofeng.yan wrote: On 2014/7/7 16:41, Peter Zijlstra wrote: On Fri, Jul 04, 2014 at 12:02:21PM +0000, xiaofeng.yan wrote: The precision of runtime and deadline can be wrong when they are at microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#./schedtool -E -t 20:50 -e ./test& PC#trace-cmd record -e sched_switch Are you actually using HRTICK ? Yes; if HRTICK is disabled, then all of the runtimes and deadlines will be wrong. I think what Peter meant is: do you use HRTICK in products, or just for testing/experiment? Thanks for your timely comments. In fact, we use the HRTICK feature in a product. We need microsecond-level precision. Ah, thanks. Be advised that currently HRTICK is rather expensive. The cost is twofold: 1) doing all the kernel-side hrtimer things and 2) programming clock hardware. Of course, if that's what you need, you're willing to pay the price. I'll see if I can put making it less expensive slightly higher on the (endless) todo list. Another fold: 3) frequent migration :) In fact, frequent migration leads to higher overhead. In our product we designed a new migration solution. A brief description: 1 Set affinity in the user-space program at the beginning. 2 Migration happens every 100ms. 3 A free task (runtime < dl_runtime) is migrated to a free cpu (rt task bandwidth < 65%). 4 A busy task will run more time on a CPU by dynamic quota. So the condition for migration depends on whether a task is busy or idle, instead of its deadline at some point.
This may not meet the EDF requirement, but it meets our product's needs :)
Re: [PATCH] sched/rt: overrun could happen in start_hrtick_dl
On 2014/7/8 15:49, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 10:51:02AM +0800, xiaofeng.yan wrote: On 2014/7/8 10:40, Li Zefan wrote: On 2014/7/8 9:10, xiaofeng.yan wrote: On 2014/7/7 16:41, Peter Zijlstra wrote: On Fri, Jul 04, 2014 at 12:02:21PM +, xiaofeng.yan wrote: It could be wrong for the precision of runtime and deadline when the precision is within microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case need enbale HRTICK feature by the next command PC#echo HRTICK /sys/kernel/debug/sched_features PC#./schedtool -E -t 20:50 -e ./test PC#trace-cmd record -e sched_switch Are you actually using HRTICK ? yes, If HRTICK is close , then all of runtime and deadline will be wrong. I think what peter meant is, do you use HRTICK in products or just use it for testing/experiment? Thanks for your timely comments. In fact, We use HRTICK feature in product. We need microsecond level precision. Ah, thanks. Be advised that currently HRTICK is rather expensive. The cost is twofold: 1) doing all the kernel side hrtimer things and 2) programming clock hardware. Of course, if that's what you need, you're willing to pay the price. I'll see if I can put making it less expensive slightly higher on the (endless) todo list. another fold: 3) Frequent migration :) In fact, frequent migration lead to higher overload. In our product we design new migration solution. The simple description is as follow: 1 Set affinity in user space program at the beginning 2 Migrate happen per 100ms 3 Free task (runtime dl_runtime)is migrated to free cpu (rt task bandwidth 65%) . 4 Busy task will run more time in a CPU by dynamic quota. So the condition of migration depends on whether task is busy or idle or not instead of deadline at some point. 
This could not meet EDF requirement but meet our product :) -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
[PATCH] sched/deadline: overrun could happen in start_hrtick_dl
It could be wrong for the precision of runtime and deadline when the precision is within microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case need enbale HRTICK feature by the next command PC#echo HRTICK /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch PC#./schedtool -E -t 20:50 -e ./test Some of runtime and deadline run with millisecond level by reading kernershark. Some pieces of trace.dat are as follows: (remove some irrelevant information) idle-0 157.603157: sched_switch: :R == 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R == 0:120: swapper/2 idle-0 157.605657: sched_switch: :R == 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R == 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch:2483:R==2481:4294967295: test We can get the runtime from the information at some point. runtime = 157.605657 - 157.608183 runtime = 0.002526(2.526ms) The correct runtime should be less than or equal to 200us at some point. The problem is caused by a conditional judgment delta 1. Because no hrtimer start up to control the runtime when runtime is less than 10us. So the process will continue to run until tick-period coming. For fixing this problem, Let delta is equal to 10us when it is less than 10us. So the hrtimer will start up to control the end of process. 
Signed-off-by: Xiaofeng Yan xiaofeng@huawei.com --- kernel/sched/deadline.c |6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..b71c229 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p-dl.dl_runtime - p-dl.runtime; - - if (delta 1) - hrtick_start(rq, p-dl.runtime); + delta = max_t(s64, 1LL, delta); + hrtick_start(rq, delta); } #endif -- 1.7.9.5 -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 16:53, xiaofeng.yan wrote: Sorry, I send a old patch and send a new one later. Thanks Yan It could be wrong for the precision of runtime and deadline when the precision is within microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case need enbale HRTICK feature by the next command PC#echo HRTICK /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch PC#./schedtool -E -t 20:50 -e ./test Some of runtime and deadline run with millisecond level by reading kernershark. Some pieces of trace.dat are as follows: (remove some irrelevant information) idle-0 157.603157: sched_switch: :R == 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R == 0:120: swapper/2 idle-0 157.605657: sched_switch: :R == 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R == 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch:2483:R==2481:4294967295: test We can get the runtime from the information at some point. runtime = 157.605657 - 157.608183 runtime = 0.002526(2.526ms) The correct runtime should be less than or equal to 200us at some point. The problem is caused by a conditional judgment delta 1. Because no hrtimer start up to control the runtime when runtime is less than 10us. So the process will continue to run until tick-period coming. For fixing this problem, Let delta is equal to 10us when it is less than 10us. So the hrtimer will start up to control the end of process. 
Signed-off-by: Xiaofeng Yan xiaofeng@huawei.com --- kernel/sched/deadline.c |6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..b71c229 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p-dl.dl_runtime - p-dl.runtime; - - if (delta 1) - hrtick_start(rq, p-dl.runtime); + delta = max_t(s64, 1LL, delta); + hrtick_start(rq, delta); } #endif -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
[PATCH] sched/deadline: overrun could happen in start_hrtick_dl
It could be wrong for the precision of runtime and deadline when the precision is within microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case need enbale HRTICK feature by the next command PC#echo HRTICK /sys/kernel/debug/sched_features PC#trace-cmd record -e sched_switch PC#./schedtool -E -t 20:50 -e ./test Some of runtime and deadline run with millisecond level by reading kernershark. Some pieces of trace.dat are as follows: (remove some irrelevant information) idle-0 157.603157: sched_switch: :R == 2481:4294967295: test test-2481 157.603203: sched_switch: 2481:R == 0:120: swapper/2 idle-0 157.605657: sched_switch: :R == 2481:4294967295: test test-2481 157.608183: sched_switch: 2481:R == 2483:120: trace-cmd trace-cmd-2483 157.609656: sched_switch:2483:R==2481:4294967295: test We can get the runtime from the information at some point. runtime = 157.605657 - 157.608183 runtime = 0.002526(2.526ms) The correct runtime should be less than or equal to 200us at some point. The problem is caused by a conditional judgment delta 1. Because no hrtimer start up to control the runtime when runtime is less than 10us. So the process will continue to run until tick-period coming. For fixing this problem, Let delta is equal to 10us when it is less than 10us. So the hrtimer will start up to control the end of process. 
Signed-off-by: Xiaofeng Yan xiaofeng@huawei.com --- kernel/sched/deadline.c |6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..51e6b0e 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p-dl.dl_runtime - p-dl.runtime; - - if (delta 1) - hrtick_start(rq, p-dl.runtime); + s64 delta = max_t(s64, 1LL, p-dl.runtime); + hrtick_start(rq, delta); } #endif -- 1.7.9.5 -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 17:33, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 08:53:27AM +, xiaofeng.yan wrote: static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p-dl.dl_runtime - p-dl.runtime; - - if (delta 1) - hrtick_start(rq, p-dl.runtime); + delta = max_t(s64, 1LL, delta); + hrtick_start(rq, delta); } no, no, no. I said to unify the test. I understand your idea after reading the next patch. This is good solution. I will test it with your patch. --- kernel/sched/core.c | 9 - kernel/sched/deadline.c | 3 +-- kernel/sched/fair.c | 7 --- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e1a2f31bb0cb..c7b8a6fbac66 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -444,7 +444,14 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = rq-hrtick_timer; - ktime_t time = ktime_add_ns(timer-base-get_time(), delay); + ktime_t time; + + /* +* Don't schedule slices shorter than 1ns, that just +* doesn't make sense and can cause timer DoS. +*/ + delta = max_t(s64, delta, 1LL); transfer the argument delta to delay + time = ktime_add_ns(timer-base-get_time(), delay); hrtimer_set_expires(timer, time); diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1258f..e1e24eea8061 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -999,8 +999,7 @@ static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { s64 delta = p-dl.dl_runtime - p-dl.runtime; - if (delta 1) - hrtick_start(rq, p-dl.runtime); + hrtick_start(rq, p-dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 923fe32db6b3..713c58d2a7b0 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3901,13 +3901,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) return; } - /* -* Don't schedule slices shorter than 1ns, that just -* doesn't make sense. Rely on vruntime for fairness. 
-*/ - if (rq-curr != p) - delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 19:23, xiaofeng.yan wrote: On 2014/7/8 17:33, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 08:53:27AM +, xiaofeng.yan wrote: static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { -s64 delta = p-dl.dl_runtime - p-dl.runtime; - -if (delta 1) -hrtick_start(rq, p-dl.runtime); +delta = max_t(s64, 1LL, delta); +hrtick_start(rq, delta); } no, no, no. I said to unify the test. I understand your idea after reading the next patch. This is good solution. I will test it with your patch. I have tested this solution, It can work very well with deadline schedule class. kernel/sched/core.c |9 - kernel/sched/deadline.c |5 + kernel/sched/fair.c |8 3 files changed, 9 insertions(+), 13 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3bdf01b..cc9a058 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -442,8 +442,15 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = rq-hrtick_timer; - ktime_t time = ktime_add_ns(timer-base-get_time(), delay); + ktime_t time; + + /* +* Don't schedule slices shorter than 1ns, that just +* doesn't make sense and can cause timer DoS. 
+*/ + s64 delta = max_t(s64, delay, 1LL); + time = ktime_add_ns(timer-base-get_time(), delta); hrtimer_set_expires(timer, time); if (rq == this_rq()) { diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..9135771 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p-dl.dl_runtime - p-dl.runtime; - - if (delta 1) - hrtick_start(rq, p-dl.runtime); + hrtick_start(rq, p-dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index fea7d33..e5cfd57 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3857,14 +3857,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) resched_task(p); return; } - - /* -* Don't schedule slices shorter than 1ns, that just -* doesn't make sense. Rely on vruntime for fairness. -*/ - if (rq-curr != p) - delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } -- --- kernel/sched/core.c | 9 - kernel/sched/deadline.c | 3 +-- kernel/sched/fair.c | 7 --- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e1a2f31bb0cb..c7b8a6fbac66 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -444,7 +444,14 @@ static void __hrtick_start(void *arg) void hrtick_start(struct rq *rq, u64 delay) { struct hrtimer *timer = rq-hrtick_timer; -ktime_t time = ktime_add_ns(timer-base-get_time(), delay); +ktime_t time; + +/* + * Don't schedule slices shorter than 1ns, that just + * doesn't make sense and can cause timer DoS. 
+ */ +delay = max_t(s64, delay, 1LL); +time = ktime_add_ns(timer->base->get_time(), delay); hrtimer_set_expires(timer, time); diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1258f..e1e24eea8061 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -999,8 +999,7 @@ static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { s64 delta = p->dl.dl_runtime - p->dl.runtime; -if (delta > 1) -hrtick_start(rq, p->dl.runtime); +hrtick_start(rq, p->dl.runtime); } #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 923fe32db6b3..713c58d2a7b0 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3901,13 +3901,6 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) return; } -/* - * Don't schedule slices shorter than 1ns, that just - * doesn't make sense. Rely on vruntime for fairness. - */ -if (rq->curr != p) -delta = max_t(s64, 1LL, delta); - hrtick_start(rq, delta); } } -- To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] sched/deadline: overrun could happen in start_hrtick_dl
On 2014/7/8 20:52, Peter Zijlstra wrote: On Tue, Jul 08, 2014 at 07:50:22PM +0800, xiaofeng.yan wrote: I have tested this solution; it works very well with the deadline scheduling class. Great, please send it as a proper patch and I might just press 'A' ;-) Ok, I will send it later.
Re: [PATCH] sched/rt: overrun could happen in start_hrtick_dl
On 2014/7/8 10:40, Li Zefan wrote: On 2014/7/8 9:10, xiaofeng.yan wrote: On 2014/7/7 16:41, Peter Zijlstra wrote: On Fri, Jul 04, 2014 at 12:02:21PM +, xiaofeng.yan wrote: The precision of runtime and deadline can be wrong when the precision is at the microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#./schedtool -E -t 20:50 -e ./test& PC#trace-cmd record -e sched_switch Are you actually using HRTICK ? Yes. If HRTICK is off, then all of the runtimes and deadlines will be wrong. I think what Peter meant is, do you use HRTICK in products or just use it for testing/experiment? Thanks for your timely comments. In fact, we use the HRTICK feature in a product. We need microsecond-level precision. Thanks Yan .
Re: [PATCH] sched/rt: overrun could happen in start_hrtick_dl
On 2014/7/7 16:41, Peter Zijlstra wrote: On Fri, Jul 04, 2014 at 12:02:21PM +, xiaofeng.yan wrote: The precision of runtime and deadline can be wrong when the precision is at the microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#./schedtool -E -t 20:50 -e ./test& PC#trace-cmd record -e sched_switch Are you actually using HRTICK ? Yes. If HRTICK is off, then all of the runtimes and deadlines will be wrong. Some runtimes and deadlines run at the millisecond level, as seen in KernelShark. The problem is caused by the conditional judgment "delta > 1", because no hrtimer is started to control the runtime when the runtime is less than 10us, so the process will continue to run until the tick period arrives. To fix this problem, let delta equal 10us when it is less than 10us, so the hrtimer will start up to control the end of the process. Signed-off-by: xiaofeng.yan Always when sending patches for deadline, also CC Juri. --- kernel/sched/deadline.c | 6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..dfefa82 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + s64 delta = p->dl.runtime > 1 ? p->dl.runtime : 1; + hrtick_start(rq, delta); Yeah, that looks funny. And seeing how the only other user does something similar: hrtick_start_fair() delta = max(1ULL, delta) hrtick_start(rq, delta) I will modify my code according to your suggestion. Does it make sense to move this max() into hrtick_start()? 
Also; and I don't think you mentioned that but did fix it: the argument to hrtick_start() is wrong, it should be the delta, not the absolute timeout. Perhaps, if the runtime is less than 10us, the context-switch overhead for the system could be close to 10us, so it could lose more than you gain. Thanks for your reply. Thanks Yan
[PATCH] sched/rt:Fix replenish_dl_entity() comments to match the current upstream code
Signed-off-by: xiaofeng.yan --- kernel/sched/deadline.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..6541565 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -306,7 +306,7 @@ static inline void setup_new_dl_entity(struct sched_dl_entity *dl_se, * the overrunning entity can't interfere with other entity in the system and * can't make them miss their deadlines. Reasons why this kind of overruns * could happen are, typically, a entity voluntarily trying to overcome its - * runtime, or it just underestimated it during sched_setscheduler_ex(). + * runtime, or it just underestimated it during sched_setattr(). */ static void replenish_dl_entity(struct sched_dl_entity *dl_se, struct sched_dl_entity *pi_se) -- 1.7.9.5
[PATCH] sched/rt: overrun could happen in start_hrtick_dl
The precision of runtime and deadline can be wrong when the precision is at the microsecond level. For example: Task runtime deadline period P1 200us 500us 500us This case needs the HRTICK feature enabled with the following commands: PC#echo "HRTICK" > /sys/kernel/debug/sched_features PC#./schedtool -E -t 20:50 -e ./test& PC#trace-cmd record -e sched_switch Some runtimes and deadlines run at the millisecond level, as seen in KernelShark. The problem is caused by the conditional judgment "delta > 1", because no hrtimer is started to control the runtime when the runtime is less than 10us, so the process will continue to run until the tick period arrives. To fix this problem, let delta equal 10us when it is less than 10us, so the hrtimer will start up to control the end of the process. Signed-off-by: xiaofeng.yan --- kernel/sched/deadline.c | 6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index fc4f98b1..dfefa82 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -997,10 +997,8 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p, #ifdef CONFIG_SCHED_HRTICK static void start_hrtick_dl(struct rq *rq, struct task_struct *p) { - s64 delta = p->dl.dl_runtime - p->dl.runtime; - - if (delta > 1) - hrtick_start(rq, p->dl.runtime); + s64 delta = p->dl.runtime > 1 ? p->dl.runtime : 1; + hrtick_start(rq, delta); } #endif -- 1.7.9.5
Re: [RFD] sched/deadline: EDF dynamic quota design
On 2014/6/19 17:13, Luca Abeni wrote: On 06/18/2014 09:01 AM, xiaofeng.yan wrote: [...] I also had an implementation of the GRUB algorithm (based on a modification of my old CBS scheduler for Linux), but the computational complexity of the algorithm was too high. That's why I never proposed merging it into SCHED_DEADLINE. But maybe there can be some trade-off between "exact compliance with the GRUB algorithm" and implementation efficiency that can make it acceptable... Has this code been released to some community or not? The old GRUB scheduler for Linux was used for some experiments published in a paper at RTLWS 2007, and of course the code was open-source (released under GPL). It required a patch for the Linux kernel (I used a 2.6.something kernel) which allowed the scheduler to be loaded as a kernel module (yes, I know this is the wrong way to go... But implementing it like this was simpler :). That is very old code... I probably still have it somewhere, but I have to search for it. If someone is interested, I can try to search (the story of the user-space daemon for adaptive reservations is similar: I released it as open-source years ago... If anyone is interested I can search for this code too) Luca I'm glad that you replied to this email. Yes, I'm very interested in your solution. In fact, there are such scenarios in our product. Could you send me a link if you have one? I can test your solution in our scenario if you like. Ok, so I found my old code for the CBS scheduler with GRUB modifications. 
You can get it from here: http://disi.unitn.it/~abeni/old-cbs-scheduler.tgz Please note that: 1) This is old code (for 2.6.x kernels), written before SCHED_DEADLINE development was started 2) The scheduler architecture is completely different with respect to the current one, but the basic scheduling algorithm implemented by my old scheduler is the same one implemented by SCHED_DEADLINE (but I did not implement multi-processor support :) 3) You can have a look at the modifications needed to implement GRUB by simply grepping for "GRUB" in the source code. Basically, the algorithm is implemented by: 1) Implementing a state machine to keep track of the current state of a task (is it using its reserved fraction of CPU time, did it already use such a fraction of CPU time, or is it not using any CPU time?). This is done by adding a "state" field in "cbs_struct", and properly updating it in cbs.c 2) Keeping track of the total fraction of CPU time used by the active tasks. See the "U" variable in cbs.c (in a modern scheduler, it should probably become a field in the runqueue structure) 3) Modifying the rule used to update the runtime. For a "standard" CBS without CPU reclaiming (the one implemented by SCHED_DEADLINE), if a task executes for an amount of time "delta" its runtime must be decreased by delta. For GRUB, it must be decreased by "delta" multiplied by U. See "account()" in cbs.c. The "trick" is in properly updating U (and this is done using the state machine mentioned above) Summing up, this code is not directly usable, but it shows you what needs to be done in order to implement the GRUB mechanism for CPU reclaiming in a CBS scheduler... Thanks for giving me your solution. I will take a look at it and modify it for our scenario later. Thanks Yan Luca . 
Re: [RFD] sched/deadline: EDF dynamic quota design
On 2014/6/17 16:01, Luca Abeni wrote: Hi, On 06/17/2014 04:43 AM, xiaofeng.yan wrote: [...] The basic ideas are (warning! This is an over-simplification of the algorithm! :) - You assign runtime and period to each SCHED_DEADLINE task as usual - Each task is guaranteed to receive its runtime every period - You can also define a maximum fraction Umax of the CPU time that the SCHED_DEADLINE tasks can use. Note that Umax _must_ be larger than or equal to sum_i runtime_i / period_i (note: in the original GRUB paper, only one CPU is considered, and Umax is set equal to 1) - If the tasks are consuming less than Umax, then the scheduling algorithm allows them to use more runtime (but not less than the guaranteed runtime_i) in order to use up to Umax. This is achieved by modifying the rule used to decrease the runtime: in SCHED_DEADLINE, if a task executes for a time delta, its runtime is decreased by delta; using GRUB, it would be decreased by a smaller amount of time (computed based on Umax, on the active SCHED_DEADLINE tasks, etc...). This requires implementing some kind of state machine (the details are in the GRUB paper) I also had an implementation of the GRUB algorithm (based on a modification of my old CBS scheduler for Linux), but the computational complexity of the algorithm was too high. That's why I never proposed merging it into SCHED_DEADLINE. But maybe there can be some trade-off between "exact compliance with the GRUB algorithm" and implementation efficiency that can make it acceptable... Has this code been released to some community or not? The old GRUB scheduler for Linux was used for some experiments published in a paper at RTLWS 2007, and of course the code was open-source (released under GPL). It required a patch for the Linux kernel (I used a 2.6.something kernel) which allowed the scheduler to be loaded as a kernel module (yes, I know this is the wrong way to go... But implementing it like this was simpler :). 
That is very old code... I probably still have it somewhere, but I have to search for it. If someone is interested, I can try to search (the story of the user-space daemon for adaptive reservations is similar: I released it as open-source years ago... If anyone is interested I can search for this code too) Luca I'm glad that you replied to this email. Yes, I'm very interested in your solution. In fact, there are such scenarios in our product. Could you send me a link if you have one? I can test your solution in our scenario if you like. Thanks Yan
Re: [RFD] sched/deadline: EDF dynamic quota design
On 2014/5/21 20:45, Luca Abeni wrote: Hi, first of all, sorry for the ultra-delayed reply: I've been busy, and I did not notice this email... Anyway, some comments are below On 05/16/2014 09:11 AM, Henrik Austad wrote: [...] This can also be implemented in user-space (without modifying the scheduler) by having a daemon that monitors the SCHED_DEADLINE tasks and changes their runtimes based on some kind of feedback (for example, difference between the scheduling deadline of a task and its actual deadline - if this information is made available by the scheduler). I developed a similar implementation in the past (not based on SCHED_DEADLINE, but on some previous implementation of the CBS algorithm). This sounds like a very slow approach. What if the extra BW given by T2 was for one period only? There's no way you could create a userspace daemon to handle that kind of budget-tweaking. Right. With "This can also be implemented in user-space..." I was referring to the feedback scheduling (adaptive reservation) approach, which is designed to "auto-tune" the reservation budget following slower variations (it is like a low-pass filter, which can set the budget to something between the average used budget and the largest one). Basically, it works on a larger time scale. If you want a "per-period" runtime donation, you need a reclaiming mechanism like GRUB, CASH, or similar, which needs to be implemented in the kernel. Also, it sounds like a *really* dangerous idea to let some random (tm) userspace daemon adjust the deadline-budget for other tasks in the system based on an observation of spent budget for the last X seconds. It's not military funding we're concerned with here. When you state your WCET, it is not because you need -exactly- that budget, it is because you should *never* exceed that kind of required computational time. Exact. But the idea of feedback scheduling was that sometimes you do not know the WCET... 
You can guess it, or measure it over a large number of runs (but Murphy's law ensures that you will miss the worst case anyway ;-). And there are situations in which you do not need to respect all of the deadlines... The daemon I was talking about just monitors the difference between the scheduling deadline and the "real job deadline" for some tasks, in order to understand if the runtime they have been assigned is enough or not... If some task is not receiving enough runtime (that is, if the difference between its scheduling deadline and the "real deadline" becomes too large), the daemon tries to increase its runtime. When the system is overloaded (that is, every one of the monitored tasks wants too much runtime, and the admission control fails), the daemon decreases the runtimes according to the weight of the tasks... Of course, the daemon does not have to monitor all of the SCHED_DEADLINE tasks in the system, but only the ones for which adaptive reservations are useful (tasks for which the WCET is not known for sure, and that can tolerate some missed deadlines). The other SCHED_DEADLINE tasks can stay with their fixed runtimes unchanged. Blindly reducing allocated runtime is defeating that whole purpose. Of course, there could be a minimum guaranteed runtime per task. Granted, if you use EDF for BW-control only, it could be done - but then the thread itself should do that. Real-time is not about being fair. Heck, it's not even about being optimal, it's about being predictable, and "dynamically adjusting" is not! Well, this could lead to a long discussion, in which everyone is right and everyone is wrong... Let's say that it depends on the application's requirements and constraints. [...] Will EDF have a dynamic quota in the future? Well, as Luca said, you can already use SCHED_DEADLINE as the backend for feedback scheduling (that pertains mostly to user-space). And yes, there are already thoughts to modify it a bit to go towards Lipari's et al. GRUB algorithm. 
That would probably be helpful in situations like yours. But I can't give you any timing for it. Need to read up on GRUB before involving myself in this part of the discussion, but I'm not sure how much I enjoy the idea of some part of userspace (more or less) blindly adjusting deadline-params for other tasks. No, GRUB does not involve the user-space adjusting any scheduling parameter. GRUB is a reclaiming algorithm, which works in a different way with respect to the feedback scheduling approach I described, and requires modifications in the scheduler. The basic ideas are (warning! This is an over-simplification of the algorithm! :)
- You assign runtime and period to each SCHED_DEADLINE task as usual
- Each task is guaranteed to receive its runtime every period
- You can also define a maximum fraction Umax of the CPU time that the SCHED_DEADLINE tasks can use. Note that Umax _must_ be larger than or equal to sum_i runtime_i / period_i (note: in the original GRUB paper, only one CPU is considered, and
[tip:sched/core] sched/rt: Fix 'struct sched_dl_entity' and dl_task_time() comments, to match the current upstream code
Commit-ID: 4027d080854d1be96ef134a1c3024d5276114db6 Gitweb: http://git.kernel.org/tip/4027d080854d1be96ef134a1c3024d5276114db6 Author: xiaofeng.yan AuthorDate: Fri, 9 May 2014 03:21:27 + Committer: Ingo Molnar CommitDate: Thu, 22 May 2014 11:16:37 +0200 sched/rt: Fix 'struct sched_dl_entity' and dl_task_time() comments, to match the current upstream code Signed-off-by: xiaofeng.yan Signed-off-by: Peter Zijlstra Link: http://lkml.kernel.org/r/1399605687-18094-1-git-send-email-xiaofeng@huawei.com Signed-off-by: Ingo Molnar --- include/linux/sched.h | 4 ++-- kernel/sched/deadline.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 725eef1..0f91d00 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1175,8 +1175,8 @@ struct sched_dl_entity { /* * Original scheduling parameters. Copied here from sched_attr -* during sched_setscheduler2(), they will remain the same until -* the next sched_setscheduler2(). +* during sched_setattr(), they will remain the same until +* the next sched_setattr(). */ u64 dl_runtime; /* maximum runtime for each instance*/ u64 dl_deadline;/* relative deadline of each instance */ diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index e0a04ae..f9ca7d1 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -520,7 +520,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer) * We need to take care of a possible races here. In fact, the * task might have changed its scheduling policy to something * different from SCHED_DEADLINE or changed its reservation -* parameters (through sched_setscheduler()). +* parameters (through sched_setattr()). 
*/ if (!dl_task(p) || dl_se->dl_new) goto unlock; -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
[tip:sched/core] sched/rt: Fix a comment in deadline.c
Commit-ID: 6e9a8b9d6a9257bc124a1609f25597064ef9c167 Gitweb: http://git.kernel.org/tip/6e9a8b9d6a9257bc124a1609f25597064ef9c167 Author: xiaofeng.yan AuthorDate: Mon, 12 May 2014 07:41:17 + Committer: Thomas Gleixner CommitDate: Mon, 19 May 2014 22:02:42 +0900 sched/rt: Fix a comment in deadline.c EDF uses sched_setattr() instead of sched_setscheduler(). Cc: mi...@redhat.com Signed-off-by: xiaofeng.yan Signed-off-by: Peter Zijlstra Link: http://lkml.kernel.org/r/1399880477-23376-1-git-send-email-xiaofeng@huawei.com Signed-off-by: Thomas Gleixner --- kernel/sched/deadline.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index e0a04ae..f9ca7d1 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -520,7 +520,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer) * We need to take care of a possible races here. In fact, the * task might have changed its scheduling policy to something * different from SCHED_DEADLINE or changed its reservation -* parameters (through sched_setscheduler()). +* parameters (through sched_setattr()). */ if (!dl_task(p) || dl_se->dl_new) goto unlock;
[tip:sched/core] sched/rt: Fix a comment in struct sched_dl_entity
Commit-ID: c07a16f784dfb8083c3b0157fbef18cb1292b9fc Gitweb: http://git.kernel.org/tip/c07a16f784dfb8083c3b0157fbef18cb1292b9fc Author: xiaofeng.yan AuthorDate: Fri, 9 May 2014 03:21:27 + Committer: Thomas Gleixner CommitDate: Mon, 19 May 2014 22:02:42 +0900 sched/rt: Fix a comment in struct sched_dl_entity Change sched_setscheduler2() to sched_setattr() in the comments. There is no sched_setscheduler2() function in mainline; the previous EDF version defined this function before being merged into mainline. Users should now use sched_setattr() instead of sched_setscheduler2(). Cc: mi...@redhat.com Signed-off-by: xiaofeng.yan Signed-off-by: Peter Zijlstra Link: http://lkml.kernel.org/r/1399605687-18094-1-git-send-email-xiaofeng@huawei.com Signed-off-by: Thomas Gleixner --- include/linux/sched.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 725eef1..0f91d00 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1175,8 +1175,8 @@ struct sched_dl_entity { /* * Original scheduling parameters. Copied here from sched_attr -* during sched_setscheduler2(), they will remain the same until -* the next sched_setscheduler2(). +* during sched_setattr(), they will remain the same until +* the next sched_setattr(). */ u64 dl_runtime; /* maximum runtime for each instance*/ u64 dl_deadline;/* relative deadline of each instance */
[PATCH] sched/rt: Fix a comment in deadline.c
EDF uses sched_setattr() instead of sched_setscheduler(). Signed-off-by: xiaofeng.yan --- kernel/sched/deadline.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index b080957..558e41a 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -520,7 +520,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer) * We need to take care of a possible races here. In fact, the * task might have changed its scheduling policy to something * different from SCHED_DEADLINE or changed its reservation -* parameters (through sched_setscheduler()). +* parameters (through sched_setattr()). */ if (!dl_task(p) || dl_se->dl_new) goto unlock; -- 1.7.9.5
[PATCH] sched/rt: Fix a comment in struct sched_dl_entity
Change sched_setscheduler2() to sched_setattr() in the comments. There is no sched_setscheduler2() function in mainline; the previous EDF version defined this function before being merged into mainline. Users should now use sched_setattr() instead of sched_setscheduler2(). Signed-off-by: xiaofeng.yan --- include/linux/sched.h |4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 25f54c7..ed64468 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1123,8 +1123,8 @@ struct sched_dl_entity { /* * Original scheduling parameters. Copied here from sched_attr -* during sched_setscheduler2(), they will remain the same until -* the next sched_setscheduler2(). +* during sched_setattr(), they will remain the same until +* the next sched_setattr(). */ u64 dl_runtime; /* maximum runtime for each instance*/ u64 dl_deadline;/* relative deadline of each instance */ -- 1.7.9.5
Re: [PATCH] sched/rt: Fix a comment in struct sched_dl_entity
On 2014/5/8 19:04, Peter Zijlstra wrote: On Thu, May 08, 2014 at 10:31:20AM +, xiaofeng.yan wrote: Change sched_setscheduler2() to sched_setscheduler() in the comments. There isn't function sched_setscheduler2() in the main line. The previous EDF version defines this function before being merged into the main line. User should use sched_setscheduler() instead of sched_setscheduler2() now. Nah, you fail.. the interface is now called sched_setattr() Thanks for your reply. I will push a new patch according to your suggestion. Thanks xiaofeng.yan
[PATCH] sched/rt: Fix a comment in struct sched_dl_entity
Change sched_setscheduler2() to sched_setscheduler() in the comments. There is no sched_setscheduler2() function in mainline; the previous EDF version defined this function before being merged into mainline. Users should now use sched_setscheduler() instead of sched_setscheduler2(). Signed-off-by: xiaofeng.yan --- include/linux/sched.h |4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 25f54c7..fe263e7 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1123,8 +1123,8 @@ struct sched_dl_entity { /* * Original scheduling parameters. Copied here from sched_attr -* during sched_setscheduler2(), they will remain the same until -* the next sched_setscheduler2(). +* during sched_setscheduler(), they will remain the same until -* the next sched_setscheduler(). */ u64 dl_runtime; /* maximum runtime for each instance*/ u64 dl_deadline;/* relative deadline of each instance */ -- 1.7.9.5