Once stop_machine() is gone from the CPU offline path, we won't be able to depend on disabling preemption to prevent CPUs from going offline from under us.
Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we access their per-CPU data from atomic context.

Cc: Robert Love <robert.w.l...@intel.com>
Cc: "James E.J. Bottomley" <jbottom...@parallels.com>
Cc: de...@open-fcoe.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.b...@linux.vnet.ibm.com>
---

 drivers/scsi/fcoe/fcoe.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 32ae6c6..eaa390e 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1484,6 +1484,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 	 * was originated, otherwise select cpu using rx exchange id
 	 * or fcoe_select_cpu().
 	 */
+	get_online_cpus_atomic();
 	if (ntoh24(fh->fh_f_ctl) & FC_FC_EX_CTX)
 		cpu = ntohs(fh->fh_ox_id) & fc_cpu_mask;
 	else {
@@ -1493,8 +1494,10 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 			cpu = ntohs(fh->fh_rx_id) & fc_cpu_mask;
 	}
 
-	if (cpu >= nr_cpu_ids)
+	if (cpu >= nr_cpu_ids) {
+		put_online_cpus_atomic();
 		goto err;
+	}
 
 	fps = &per_cpu(fcoe_percpu, cpu);
 	spin_lock(&fps->fcoe_rx_list.lock);
@@ -1514,6 +1517,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 		spin_lock(&fps->fcoe_rx_list.lock);
 		if (!fps->thread) {
 			spin_unlock(&fps->fcoe_rx_list.lock);
+			put_online_cpus_atomic();
 			goto err;
 		}
 	}
@@ -1535,6 +1539,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 	if (fps->thread->state == TASK_INTERRUPTIBLE)
 		wake_up_process(fps->thread);
 	spin_unlock(&fps->fcoe_rx_list.lock);
+	put_online_cpus_atomic();
 
 	return 0;
 err:
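
For reviewers unfamiliar with the new API, the reader-side pattern the
patch follows is sketched below. This is a minimal sketch, assuming the
get/put_online_cpus_atomic() primitives from the CPU hotplug rework
series this patch depends on; pick_target_cpu() is a hypothetical
stand-in for the ox_id/rx_id selection logic in fcoe_rcv():

	/*
	 * Sketch: pin CPU hotplug while touching another CPU's
	 * per-CPU data from atomic (receive path) context, where
	 * the sleeping get_online_cpus() cannot be used.
	 */
	get_online_cpus_atomic();	/* CPUs can't go offline past this */

	cpu = pick_target_cpu(fh);	/* hypothetical helper */
	if (cpu >= nr_cpu_ids) {
		put_online_cpus_atomic();	/* drop on every error path */
		goto err;
	}

	fps = &per_cpu(fcoe_percpu, cpu);	/* safe: 'cpu' stays online */
	/* ... queue the skb on fps->fcoe_rx_list and wake the thread ... */

	put_online_cpus_atomic();	/* re-allow CPU offline */

The key point is that the get/put calls must balance on every exit
path, which is why the patch adds put_online_cpus_atomic() before each
goto err as well as on the success return.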