Use the plain msecs_to_jiffies() rather than the _timeout variant so that we do not pad our interval with the extra guard jiffy the _timeout variant adds to guarantee a minimum sleep. With timeslicing, for example, we do not want to err on the longer side, as fairness depends on promptly catching contexts that hog the GPU. Bring on CFS.
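To illustrate the difference, here is a minimal user-space sketch of the two conversions. The bodies are simplified from the kernel helpers: HZ=100 and MSEC_PER_SEC % HZ == 0 are assumed, and the real msecs_to_jiffies_timeout() also clamps against MAX_JIFFY_OFFSET.

    #include <stdio.h>

    #define HZ 100                    /* assumed tick rate: 1 jiffy = 10 ms */
    #define MSEC_PER_SEC 1000UL

    /* Plain variant: round up so the timer covers at least m ms. */
    static unsigned long msecs_to_jiffies(unsigned long m)
    {
            return (m + (MSEC_PER_SEC / HZ) - 1) / (MSEC_PER_SEC / HZ);
    }

    /*
     * _timeout variant: add one extra guard jiffy so the timer can
     * never expire early; the kernel version also clamps the result
     * to MAX_JIFFY_OFFSET.
     */
    static unsigned long msecs_to_jiffies_timeout(unsigned long m)
    {
            return msecs_to_jiffies(m) + 1;
    }

    int main(void)
    {
            /* With HZ=100: 25 ms -> 3 jiffies (plain) vs 4 (_timeout). */
            printf("plain=%lu timeout=%lu\n",
                   msecs_to_jiffies(25), msecs_to_jiffies_timeout(25));
            return 0;
    }

With HZ=100, a 25 ms interval becomes 3 jiffies with the plain conversion but 4 with the _timeout variant. The extra jiffy protects sleepers that must wait at least the requested time; a preemption timer wants the opposite.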
Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
---
 drivers/gpu/drm/i915/i915_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_utils.c b/drivers/gpu/drm/i915/i915_utils.c
index e28eae4a8f70..f42a9e9a0b4f 100644
--- a/drivers/gpu/drm/i915/i915_utils.c
+++ b/drivers/gpu/drm/i915/i915_utils.c
@@ -91,7 +91,7 @@ void set_timer_ms(struct timer_list *t, unsigned long timeout)
 		return;
 	}
 
-	timeout = msecs_to_jiffies_timeout(timeout);
+	timeout = msecs_to_jiffies(timeout);
 
 	/*
 	 * Paranoia to make sure the compiler computes the timeout before
-- 
2.20.1
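For context, the hunk above lands in set_timer_ms(). A sketch of the full function after this patch, reconstructed from drivers/gpu/drm/i915/i915_utils.c (exact comments and helpers may differ by kernel version):

    void set_timer_ms(struct timer_list *t, unsigned long timeout)
    {
            if (!timeout) {
                    cancel_timer(t);
                    return;
            }

            timeout = msecs_to_jiffies(timeout);

            /*
             * Paranoia to make sure the compiler computes the timeout
             * before loading 'jiffies', as jiffies is volatile and may
             * be updated in the background by a timer tick.
             */
            barrier();

            /* Keep t->expires = 0 reserved to indicate a canceled timer. */
            mod_timer(t, jiffies + timeout ?: 1);
    }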