Hello.

On 31-01-2014 3:40, Zoran Markovic wrote:

From: Shaibal Dutta <[email protected]>

Garbage collector work does not have to be bound to the CPU that scheduled
it. By moving this work to the power-efficient workqueue, the selection of
the CPU that executes the work is left to the scheduler. This extends idle
residency times and conserves power.

This functionality is enabled when CONFIG_WQ_POWER_EFFICIENT is selected.
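
For reference, the power-efficient queue itself is set up in
kernel/workqueue.c roughly as sketched below (the exact call site and
flags may differ between kernel versions):

	/*
	 * Sketch: with CONFIG_WQ_POWER_EFFICIENT (or the
	 * workqueue.power_efficient boot parameter) enabled,
	 * WQ_POWER_EFFICIENT is treated as WQ_UNBOUND, so queued work
	 * is not pinned to the CPU that queued it and the scheduler
	 * decides where it runs.
	 */
	system_power_efficient_wq = alloc_workqueue("events_power_efficient",
						     WQ_POWER_EFFICIENT, 0);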

Cc: "David S. Miller" <[email protected]>
Cc: Alexey Kuznetsov <[email protected]>
Cc: James Morris <[email protected]>
Cc: Hideaki YOSHIFUJI <[email protected]>
Cc: Patrick McHardy <[email protected]>
Signed-off-by: Shaibal Dutta <[email protected]>
[[email protected]: Rebased to latest kernel version. Added
commit message.]
Signed-off-by: Zoran Markovic <[email protected]>
---
  net/ipv4/inetpeer.c |    6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index 48f4244..87155aa 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -161,7 +161,8 @@ static void inetpeer_gc_worker(struct work_struct *work)
        list_splice(&list, &gc_list);
        spin_unlock_bh(&gc_lock);

-       schedule_delayed_work(&gc_work, gc_delay);
+       queue_delayed_work(system_power_efficient_wq,
+               &gc_work, gc_delay);

Please align the continuation line under the first character after the '(' on the line being broken up.
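
For illustration only (not part of the quoted patch), the aligned call
would look roughly like this:

        queue_delayed_work(system_power_efficient_wq,
                           &gc_work, gc_delay);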

@@ -576,7 +577,8 @@ static void inetpeer_inval_rcu(struct rcu_head *head)
        list_add_tail(&p->gc_list, &gc_list);
        spin_unlock_bh(&gc_lock);

-       schedule_delayed_work(&gc_work, gc_delay);
+       queue_delayed_work(system_power_efficient_wq,
+               &gc_work, gc_delay);

   Same here; this follows the networking coding style.

WBR, Sergei
