It should never happen, but let's be prepared to receive the VMBus unload
confirmation on an offlined CPU. This is safe now that we allocate the
per-CPU structures for all present CPUs.
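
For readers without the full function at hand, here is a minimal sketch of
the scan this patch widens. The loop and the page/message fields match the
hunks below; the body that reacts to a pending message is paraphrased and
illustrative only, not the exact upstream code:

	int cpu;
	void *page_addr;
	struct hv_message *msg;

	/*
	 * A present-but-offline CPU is no longer in cpu_online_mask,
	 * yet the hypervisor may still post the unload confirmation to
	 * its SynIC message page.  Walking cpu_present_mask covers that
	 * case; it is safe because the message pages are now allocated
	 * for every present CPU.
	 */
	for_each_present_cpu(cpu) {
		page_addr = hv_context.synic_message_page[cpu];
		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
		if (msg->header.message_type != HVMSG_NONE)
			complete(&vmbus_connection.unload_event); /* paraphrased */
	}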

Signed-off-by: Vitaly Kuznetsov <vkuzn...@redhat.com>
---
 drivers/hv/channel_mgmt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 1bc1d479..5011c95 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -674,7 +674,7 @@ static void vmbus_wait_for_unload(void)
                if (completion_done(&vmbus_connection.unload_event))
                        break;
 
-               for_each_online_cpu(cpu) {
+               for_each_present_cpu(cpu) {
                        page_addr = hv_context.synic_message_page[cpu];
                        msg = (struct hv_message *)page_addr +
                                VMBUS_MESSAGE_SINT;
@@ -700,7 +700,7 @@ static void vmbus_wait_for_unload(void)
         * maybe-pending messages on all CPUs to be able to receive new
         * messages after we reconnect.
         */
-       for_each_online_cpu(cpu) {
+       for_each_present_cpu(cpu) {
                page_addr = hv_context.synic_message_page[cpu];
                msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
                msg->header.message_type = HVMSG_NONE;
-- 
2.7.4
