RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
> From: Michael Kelley (EOSG)
> Sent: Thursday, May 31, 2018 09:41
> >
> > IMO we can disable the per-channel tasklet to exclude the race:
> > This way, when we exit the loop, we're sure hv_pci_onchannelcallback() can
> > not run anymore. What do you think of this?
>
> I've stared at this and the tasklet code over a couple of days now. Adding the
> call to tasklet_disable() solves the immediate problem of preventing
> hv_pci_onchannelcallback() from calling complete() against a completion packet
> that has been popped off the stack. But in doing so, it simply pushes the core
> problem further down the road and leaves it unresolved.
>
> tasklet_disable() does not prevent the tasklet from being scheduled. So if there
> is a response from Hyper-V to the original message, the tasklet still gets
> scheduled. Because it is disabled, it will sit in the tasklet queue and be skipped
> over each time the queue is processed. Later, when the channel is eventually
> deleted in free_channel(), tasklet_kill() is called. Unfortunately, tasklet_kill()
> will get stuck in an infinite loop, waiting for the tasklet to run. There aren't
> any tasklet interfaces to dequeue an already scheduled tasklet.

I think you're correct.

> Please double-check my reasoning. To solve this problem, I think the VMbus
> driver code will need some additional synchronization between rescind
> notifications and a response, which may or may not have been sent, and
> which could be processed after the rescind. I haven't yet thought about
> what this synchronization might need to look like.
>
> Michael

Yes, it looks like the VMBus driver needs to provide an API to cope with this.
I'll try to further investigate the issue.

-- Dexuan
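
One possible shape for such an API, sketched only to make the idea concrete:
vmbus_drain_channel() below is hypothetical and is not an existing VMBus
interface. The intent is that, once the rescind flag is seen, any response
already queued for this channel is either delivered or discarded before the
caller's on-stack completion packet goes out of scope.

/*
 * Hypothetical sketch only: vmbus_drain_channel() does not exist today.
 * The idea is that after the rescind flag is observed, the VMBus core
 * flushes or discards any response already queued for this channel, so
 * that hv_pci_onchannelcallback() can no longer touch the caller's
 * on-stack completion packet after wait_for_response() returns.
 */
static int wait_for_response(struct hv_device *hdev,
                             struct completion *comp)
{
        while (true) {
                if (hdev->channel->rescind) {
                        vmbus_drain_channel(hdev->channel); /* hypothetical API */
                        dev_warn_once(&hdev->device, "The device is gone.\n");
                        return -ENODEV;
                }

                if (wait_for_completion_timeout(comp, HZ / 10))
                        break;
        }

        return 0;
}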
RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
> From: Dexuan Cui
> Sent: Tuesday, May 29, 2018 12:58 PM
> Subject: RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
>
> > From: Michael Kelley (EOSG)
> > Sent: Monday, May 28, 2018 17:19
> >
> > While this patch solves the immediate problem of getting hung waiting
> > for a response from Hyper-V that will never come, there's another scenario
> > to look at that I think introduces a race. Suppose the guest VM issues a
> > vmbus_sendpacket() request in one of the cases covered by this patch,
> > and suppose that Hyper-V queues a response to the request, and then
> > immediately follows with a rescind request. Processing the response will
> > get queued to a tasklet associated with the channel, while processing the
> > rescind will get queued to a tasklet associated with the top-level vmbus
> > connection. From what I can see, the code doesn't impose any ordering
> > on processing the two. If the rescind is processed first, the new
> > wait_for_response() function may wake up, notice the rescind flag, and
> > return an error. Its caller will return an error, and in doing so pop the
> > completion packet off the stack. When the response is processed later,
> > it will try to signal completion via a completion packet that no longer
> > exists, and memory corruption will likely occur.
> >
> > Am I missing anything that would prevent this scenario from happening?
> > It is admittedly low probability, and a solution seems non-trivial. I haven't
> > looked specifically, but a similar scenario is probably possible with the
> > drivers for other VMbus devices. We should work on a generic solution.
> >
> > Michael
>
> Thanks for spotting the race!
>
> IMO we can disable the per-channel tasklet to exclude the race:
>
> --- a/drivers/pci/host/pci-hyperv.c
> +++ b/drivers/pci/host/pci-hyperv.c
> @@ -565,6 +565,7 @@ static int wait_for_response(struct hv_device *hdev,
>  {
>         while (true) {
>                 if (hdev->channel->rescind) {
> +                       tasklet_disable(&hdev->channel->callback_event);
>                         dev_warn_once(&hdev->device, "The device is gone.\n");
>                         return -ENODEV;
>                 }
>
> This way, when we exit the loop, we're sure hv_pci_onchannelcallback() can not
> run anymore. What do you think of this?

I've stared at this and the tasklet code over a couple of days now. Adding the
call to tasklet_disable() solves the immediate problem of preventing
hv_pci_onchannelcallback() from calling complete() against a completion packet
that has been popped off the stack. But in doing so, it simply pushes the core
problem further down the road and leaves it unresolved.

tasklet_disable() does not prevent the tasklet from being scheduled. So if there
is a response from Hyper-V to the original message, the tasklet still gets
scheduled. Because it is disabled, it will sit in the tasklet queue and be skipped
over each time the queue is processed. Later, when the channel is eventually
deleted in free_channel(), tasklet_kill() is called. Unfortunately, tasklet_kill()
will get stuck in an infinite loop, waiting for the tasklet to run. There aren't
any tasklet interfaces to dequeue an already scheduled tasklet.

Please double-check my reasoning. To solve this problem, I think the VMbus
driver code will need some additional synchronization between rescind
notifications and a response, which may or may not have been sent, and
which could be processed after the rescind. I haven't yet thought about
what this synchronization might need to look like.

Michael

> It looks like the list of the other vmbus devices that can be hot-removed is:
>
> the hv_utils devices
> hv_sock devices
> storvsc device
> netvsc device
>
> As I checked, the first 3 types of devices don't have this "send a request to
> the host and wait for the response forever" pattern. NetVSC should be fixed
> as it has the same pattern.
>
> -- Dexuan
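
The hang described above follows from how the tasklet core treats a disabled
tasklet. A simplified paraphrase of tasklet_kill() from kernel/softirq.c (the
exact code varies by kernel version) shows where it would spin:

/*
 * Simplified paraphrase of kernel/softirq.c, for illustration only.
 * The tasklet softirq handler skips a tasklet whose disable count is
 * non-zero but puts it back on the per-CPU list with TASKLET_STATE_SCHED
 * still set, so a disabled tasklet that has been scheduled stays
 * "scheduled" indefinitely.
 */
void tasklet_kill(struct tasklet_struct *t)
{
        if (in_interrupt())
                pr_notice("Attempt to kill tasklet from interrupt\n");

        while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
                do {
                        yield();        /* loops here forever if the tasklet
                                         * is disabled yet still scheduled */
                } while (test_bit(TASKLET_STATE_SCHED, &t->state));
        }
        tasklet_unlock_wait(t);
        clear_bit(TASKLET_STATE_SCHED, &t->state);
}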
RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
> From: Andy Shevchenko
> Sent: Tuesday, May 29, 2018 14:21
>
> On Thu, May 24, 2018 at 12:12 AM, Dexuan Cui wrote:
> >
> > Before the guest finishes the device initialization, the device can be
> > removed anytime by the host, and after that the host won't respond to
> > the guest's request, so the guest should be prepared to handle this
> > case.
>
> > +       while (true) {
> > +               if (hdev->channel->rescind) {
> > +                       dev_warn_once(&hdev->device, "The device is gone.\n");
> > +                       return -ENODEV;
> > +               }
> > +
> > +               if (wait_for_completion_timeout(comp, HZ / 10))
> > +                       break;
> > +       }
>
> Infinite loops are usually a red flag.
>
> What's wrong with simple:
>
> do {
>     ...
> } while (wait_...(...) == 0);
>
> ?

Thanks for the suggestion, Andy! The coding style you suggested looks better
to me. :-)

> > +       if (!ret)
> > +               ret = wait_for_response(hdev, &comp);
>
> Better to use well established patterns, i.e.
>
> if (ret)
>     return ret;

Agreed.

> > +       if (!ret)
> > +               ret = wait_for_response(hdev, &comp_pkt.host_event);
>
> Here it looks okay on the first glance, but better to think about it
> again and refactor.
>
> With Best Regards,
> Andy Shevchenko

I'll try to send out a patch to improve the coding style, after I address
Michael Kelley's concern about the race.

Thanks,
-- Dexuan
Re: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
On Thu, May 24, 2018 at 12:12 AM, Dexuan Cui wrote:
>
> Before the guest finishes the device initialization, the device can be
> removed anytime by the host, and after that the host won't respond to
> the guest's request, so the guest should be prepared to handle this
> case.

> +       while (true) {
> +               if (hdev->channel->rescind) {
> +                       dev_warn_once(&hdev->device, "The device is gone.\n");
> +                       return -ENODEV;
> +               }
> +
> +               if (wait_for_completion_timeout(comp, HZ / 10))
> +                       break;
> +       }

Infinite loops are usually a red flag.

What's wrong with simple:

do {
    ...
} while (wait_...(...) == 0);

?

> +       if (!ret)
> +               ret = wait_for_response(hdev, &comp);

Better to use well established patterns, i.e.

if (ret)
    return ret;

> +       if (!ret)
> +               ret = wait_for_response(hdev, &comp_pkt.host_event);

Here it looks okay on the first glance, but better to think about it
again and refactor.

-- 
With Best Regards,
Andy Shevchenko
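
For reference, a sketch of wait_for_response() restructured into the do/while
form suggested above; the logic is unchanged from the patch (check the rescind
flag, then re-wait every 0.1 s until the host responds):

/*
 * Sketch only: the same logic as the patch's wait_for_response(),
 * restructured into the do/while form suggested above.
 */
static int wait_for_response(struct hv_device *hdev,
                             struct completion *comp)
{
        do {
                if (hdev->channel->rescind) {
                        dev_warn_once(&hdev->device, "The device is gone.\n");
                        return -ENODEV;
                }

                /* wait_for_completion_timeout() returns 0 on timeout */
        } while (!wait_for_completion_timeout(comp, HZ / 10));

        return 0;
}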
RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
> From: Michael Kelley (EOSG)
> Sent: Monday, May 28, 2018 17:19
>
> While this patch solves the immediate problem of getting hung waiting
> for a response from Hyper-V that will never come, there's another scenario
> to look at that I think introduces a race. Suppose the guest VM issues a
> vmbus_sendpacket() request in one of the cases covered by this patch,
> and suppose that Hyper-V queues a response to the request, and then
> immediately follows with a rescind request. Processing the response will
> get queued to a tasklet associated with the channel, while processing the
> rescind will get queued to a tasklet associated with the top-level vmbus
> connection. From what I can see, the code doesn't impose any ordering
> on processing the two. If the rescind is processed first, the new
> wait_for_response() function may wake up, notice the rescind flag, and
> return an error. Its caller will return an error, and in doing so pop the
> completion packet off the stack. When the response is processed later,
> it will try to signal completion via a completion packet that no longer
> exists, and memory corruption will likely occur.
>
> Am I missing anything that would prevent this scenario from happening?
> It is admittedly low probability, and a solution seems non-trivial. I haven't
> looked specifically, but a similar scenario is probably possible with the
> drivers for other VMbus devices. We should work on a generic solution.
>
> Michael

Thanks for spotting the race!

IMO we can disable the per-channel tasklet to exclude the race:

--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -565,6 +565,7 @@ static int wait_for_response(struct hv_device *hdev,
 {
        while (true) {
                if (hdev->channel->rescind) {
+                       tasklet_disable(&hdev->channel->callback_event);
                        dev_warn_once(&hdev->device, "The device is gone.\n");
                        return -ENODEV;
                }

This way, when we exit the loop, we're sure hv_pci_onchannelcallback() can not
run anymore. What do you think of this?

It looks like the list of the other vmbus devices that can be hot-removed is:

the hv_utils devices
hv_sock devices
storvsc device
netvsc device

As I checked, the first 3 types of devices don't have this "send a request to
the host and wait for the response forever" pattern. NetVSC should be fixed
as it has the same pattern.

-- Dexuan
RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
>
> Before the guest finishes the device initialization, the device can be
> removed anytime by the host, and after that the host won't respond to
> the guest's request, so the guest should be prepared to handle this
> case.
>
> Signed-off-by: Dexuan Cui
> Cc: Stephen Hemminger
> Cc: K. Y. Srinivasan
> ---
>  drivers/pci/host/pci-hyperv.c | 46 ---
>  1 file changed, 34 insertions(+), 12 deletions(-)
>

While this patch solves the immediate problem of getting hung waiting
for a response from Hyper-V that will never come, there's another scenario
to look at that I think introduces a race. Suppose the guest VM issues a
vmbus_sendpacket() request in one of the cases covered by this patch,
and suppose that Hyper-V queues a response to the request, and then
immediately follows with a rescind request. Processing the response will
get queued to a tasklet associated with the channel, while processing the
rescind will get queued to a tasklet associated with the top-level vmbus
connection. From what I can see, the code doesn't impose any ordering
on processing the two. If the rescind is processed first, the new
wait_for_response() function may wake up, notice the rescind flag, and
return an error. Its caller will return an error, and in doing so pop the
completion packet off the stack. When the response is processed later,
it will try to signal completion via a completion packet that no longer
exists, and memory corruption will likely occur.

Am I missing anything that would prevent this scenario from happening?
It is admittedly low probability, and a solution seems non-trivial. I haven't
looked specifically, but a similar scenario is probably possible with the
drivers for other VMbus devices. We should work on a generic solution.

Michael
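
The completion packet in question lives on the caller's stack. A condensed,
simplified illustration of the risky pattern follows; hv_pci_do_request() is a
made-up name, and the real driver passes the completion through a struct
pci_packet rather than directly as the request ID.

/*
 * Condensed illustration of the pattern the race endangers (simplified
 * from the call sites in the patch quoted below in this thread).
 */
static int hv_pci_do_request(struct hv_device *hdev, void *msg, u32 msg_size)
{
        struct hv_pci_compl comp_pkt;   /* lives on this stack frame */
        int ret;

        init_completion(&comp_pkt.host_event);

        ret = vmbus_sendpacket(hdev->channel, msg, msg_size,
                               (unsigned long)&comp_pkt, VM_PKT_DATA_INBAND,
                               VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
        if (!ret)
                ret = wait_for_response(hdev, &comp_pkt.host_event);
        if (ret)
                return ret;     /*
                                 * Rescind seen first: this frame is popped,
                                 * yet a response already queued on the
                                 * channel tasklet may still complete()
                                 * comp_pkt.host_event later; that is the
                                 * memory corruption described above.
                                 */

        return comp_pkt.completion_status < 0 ? -EPROTO : 0;
}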
Re: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
On Wed, May 23, 2018 at 09:12:01PM +, Dexuan Cui wrote:
>
> Before the guest finishes the device initialization, the device can be
> removed anytime by the host, and after that the host won't respond to
> the guest's request, so the guest should be prepared to handle this
> case.
>
> Signed-off-by: Dexuan Cui
> Cc: Stephen Hemminger
> Cc: K. Y. Srinivasan
> ---
>  drivers/pci/host/pci-hyperv.c | 46 ---
>  1 file changed, 34 insertions(+), 12 deletions(-)

Applied with a minor tweak to the commit log to pci/hv for v4.18.

Thanks,
Lorenzo

> diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
> index ad6a64d..248765f 100644
> --- a/drivers/pci/host/pci-hyperv.c
> +++ b/drivers/pci/host/pci-hyperv.c
> @@ -556,6 +556,26 @@ static void put_pcichild(struct hv_pci_dev *hv_pcidev,
>  static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
>  static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
>
> +/*
> + * There is no good way to get notified from vmbus_onoffer_rescind(),
> + * so let's use polling here, since this is not a hot path.
> + */
> +static int wait_for_response(struct hv_device *hdev,
> +                            struct completion *comp)
> +{
> +       while (true) {
> +               if (hdev->channel->rescind) {
> +                       dev_warn_once(&hdev->device, "The device is gone.\n");
> +                       return -ENODEV;
> +               }
> +
> +               if (wait_for_completion_timeout(comp, HZ / 10))
> +                       break;
> +       }
> +
> +       return 0;
> +}
> +
>  /**
>   * devfn_to_wslot() - Convert from Linux PCI slot to Windows
>   * @devfn: The Linux representation of PCI slot
> @@ -1569,7 +1589,8 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
>         if (ret)
>                 goto error;
>
> -       wait_for_completion(&comp_pkt.host_event);
> +       if (wait_for_response(hbus->hdev, &comp_pkt.host_event))
> +               goto error;
>
>         hpdev->desc = *desc;
>         refcount_set(&hpdev->refs, 1);
> @@ -2070,15 +2091,16 @@ static int hv_pci_protocol_negotiation(struct hv_device *hdev)
>                                sizeof(struct pci_version_request),
>                                (unsigned long)pkt, VM_PKT_DATA_INBAND,
>                                VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
> +               if (!ret)
> +                       ret = wait_for_response(hdev, &comp_pkt.host_event);
> +
>                 if (ret) {
>                         dev_err(&hdev->device,
> -                               "PCI Pass-through VSP failed sending version reqquest: %#x",
> +                               "PCI Pass-through VSP failed to request version: %d",
>                                 ret);
>                         goto exit;
>                 }
>
> -               wait_for_completion(&comp_pkt.host_event);
> -
>                 if (comp_pkt.completion_status >= 0) {
>                         pci_protocol_version = pci_protocol_versions[i];
>                         dev_info(&hdev->device,
> @@ -2287,11 +2309,12 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
>         ret = vmbus_sendpacket(hdev->channel, d0_entry, sizeof(*d0_entry),
>                                (unsigned long)pkt, VM_PKT_DATA_INBAND,
>                                VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
> +       if (!ret)
> +               ret = wait_for_response(hdev, &comp_pkt.host_event);
> +
>         if (ret)
>                 goto exit;
>
> -       wait_for_completion(&comp_pkt.host_event);
> -
>         if (comp_pkt.completion_status < 0) {
>                 dev_err(&hdev->device,
>                         "PCI Pass-through VSP failed D0 Entry with status %x\n",
> @@ -2331,11 +2354,10 @@ static int hv_pci_query_relations(struct hv_device *hdev)
>
>         ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message),
>                                0, VM_PKT_DATA_INBAND, 0);
> -       if (ret)
> -               return ret;
> +       if (!ret)
> +               ret = wait_for_response(hdev, &comp);
>
> -       wait_for_completion(&comp);
> -       return 0;
> +       return ret;
>  }
>
>  /**
> @@ -2405,11 +2427,11 @@ static int hv_send_resources_allocated(struct hv_device *hdev)
>                                 size_res, (unsigned long)pkt,
>                                 VM_PKT_DATA_INBAND,
>                                 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
> +               if (!ret)
> +                       ret = wait_for_response(hdev, &comp_pkt.host_event);
>                 if (ret)
>                         break;
>
> -               wait_for_completion(&comp_pkt.host_event);
> -
>                 if (comp_pkt.completion_status < 0) {
>                         ret = -EPROTO;
>                         dev_err(&hdev->device,
> --
> 2.7.4
RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
> -----Original Message-----
> From: Dexuan Cui
> Sent: Wednesday, May 23, 2018 5:12 PM
> To: 'Lorenzo Pieralisi'; 'Bjorn Helgaas'; 'linux-...@vger.kernel.org';
> KY Srinivasan; Stephen Hemminger; 'o...@aepfle.de'; 'a...@canonical.com';
> 'jasow...@redhat.com'
> Cc: 'linux-ker...@vger.kernel.org'; 'driverdev-de...@linuxdriverproject.org';
> Haiyang Zhang; 'vkuzn...@redhat.com'; 'marcelo.ce...@canonical.com'
> Subject: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
>
>
> Before the guest finishes the device initialization, the device can be removed
> anytime by the host, and after that the host won't respond to the guest's
> request, so the guest should be prepared to handle this case.
>
> Signed-off-by: Dexuan Cui
> Cc: Stephen Hemminger
> Cc: K. Y. Srinivasan
> ---
>  drivers/pci/host/pci-hyperv.c | 46 ---
>

Reviewed-by: Haiyang Zhang

Thank you!
Re: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
On Thu, May 24, 2018 at 11:55:35PM +, Dexuan Cui wrote:
> > From: Lorenzo Pieralisi
> > Sent: Thursday, May 24, 2018 05:41
> > On Wed, May 23, 2018 at 09:12:01PM +, Dexuan Cui wrote:
> > >
> > > Before the guest finishes the device initialization, the device can be
> > > removed anytime by the host, and after that the host won't respond to
> > > the guest's request, so the guest should be prepared to handle this
> > > case.
> > >
> > > --- a/drivers/pci/host/pci-hyperv.c
> > > +++ b/drivers/pci/host/pci-hyperv.c
> > > @@ -556,6 +556,26 @@ static void put_pcichild(struct hv_pci_dev *hv_pcidev,
> > >  static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
> > >  static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
> > >
> > > +/*
> > > + * There is no good way to get notified from vmbus_onoffer_rescind(),
> > > + * so let's use polling here, since this is not a hot path.
> > > + */
> > > +static int wait_for_response(struct hv_device *hdev,
> > > +                            struct completion *comp)
> > > +{
> > > +       while (true) {
> > > +               if (hdev->channel->rescind) {
> > > +                       dev_warn_once(&hdev->device, "The device is gone.\n");
> > > +                       return -ENODEV;
> > > +               }
> > > +
> > > +               if (wait_for_completion_timeout(comp, HZ / 10))
> > > +                       break;
> > > +       }
> > > +
> > > +       return 0;
> >
> > This is pretty racy, isn't it ? Also, I reckon you should consider the
> > timeout return value as an error condition unless I am completely
> > missing the point of what you are doing.
> >
> > Lorenzo
>
> Actually, this is not racy: we only exit the loop when
> 1) the channel is rescinded
> or
> 2) the channel is not rescinded, and the event is completed.
>
> wait_for_completion_timeout() returns 0 if timed out: in this case,
> we keep spinning in the loop every 0.1 second, testing the 2 conditions.

Yes, sorry, you are right: the exit condition is correct. I am waiting for the
maintainers' ACK to merge it; I need it as soon as possible if you want this
to make it for v4.18.

Thanks,
Lorenzo

> If the channel is not rescinded, here we should wait for the event
> forever, as the host is supposed to respond to us quickly, and the
> event will be completed accordingly. This is what the current code
> does. But, in case the channel is rescinded, we need to exit the loop
> immediately with an error return value: this is the only change
> made by the patch.
>
> Ideally, we should not use this ugly "polling" method, and the
> rescind-handler, i.e. vmbus_onoffer_rescind(), should notify
> wait_for_response(), but as I mentioned, there is no good way
> to get notified from vmbus_onoffer_rescind(), so I'm proposing
> this "polling" method: it's simple and it can work correctly,
> and this is not a hot path.
>
> Thanks,
> -- Dexuan
RE: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
> From: Lorenzo Pieralisi
> Sent: Thursday, May 24, 2018 05:41
> On Wed, May 23, 2018 at 09:12:01PM +, Dexuan Cui wrote:
> >
> > Before the guest finishes the device initialization, the device can be
> > removed anytime by the host, and after that the host won't respond to
> > the guest's request, so the guest should be prepared to handle this
> > case.
> >
> > --- a/drivers/pci/host/pci-hyperv.c
> > +++ b/drivers/pci/host/pci-hyperv.c
> > @@ -556,6 +556,26 @@ static void put_pcichild(struct hv_pci_dev *hv_pcidev,
> >  static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
> >  static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
> >
> > +/*
> > + * There is no good way to get notified from vmbus_onoffer_rescind(),
> > + * so let's use polling here, since this is not a hot path.
> > + */
> > +static int wait_for_response(struct hv_device *hdev,
> > +                            struct completion *comp)
> > +{
> > +       while (true) {
> > +               if (hdev->channel->rescind) {
> > +                       dev_warn_once(&hdev->device, "The device is gone.\n");
> > +                       return -ENODEV;
> > +               }
> > +
> > +               if (wait_for_completion_timeout(comp, HZ / 10))
> > +                       break;
> > +       }
> > +
> > +       return 0;
>
> This is pretty racy, isn't it ? Also, I reckon you should consider the
> timeout return value as an error condition unless I am completely
> missing the point of what you are doing.
>
> Lorenzo

Actually, this is not racy: we only exit the loop when
1) the channel is rescinded
or
2) the channel is not rescinded, and the event is completed.

wait_for_completion_timeout() returns 0 if timed out: in this case,
we keep spinning in the loop every 0.1 second, testing the 2 conditions.

If the channel is not rescinded, here we should wait for the event
forever, as the host is supposed to respond to us quickly, and the
event will be completed accordingly. This is what the current code
does. But, in case the channel is rescinded, we need to exit the loop
immediately with an error return value: this is the only change
made by the patch.

Ideally, we should not use this ugly "polling" method, and the
rescind-handler, i.e. vmbus_onoffer_rescind(), should notify
wait_for_response(), but as I mentioned, there is no good way
to get notified from vmbus_onoffer_rescind(), so I'm proposing
this "polling" method: it's simple and it can work correctly,
and this is not a hot path.

Thanks,
-- Dexuan
Re: [PATCH] PCI: hv: Do not wait forever on a device that has disappeared
On Wed, May 23, 2018 at 09:12:01PM +, Dexuan Cui wrote:
>
> Before the guest finishes the device initialization, the device can be
> removed anytime by the host, and after that the host won't respond to
> the guest's request, so the guest should be prepared to handle this
> case.
>
> Signed-off-by: Dexuan Cui
> Cc: Stephen Hemminger
> Cc: K. Y. Srinivasan
> ---
>  drivers/pci/host/pci-hyperv.c | 46 ---
>  1 file changed, 34 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
> index ad6a64d..248765f 100644
> --- a/drivers/pci/host/pci-hyperv.c
> +++ b/drivers/pci/host/pci-hyperv.c
> @@ -556,6 +556,26 @@ static void put_pcichild(struct hv_pci_dev *hv_pcidev,
>  static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
>  static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
>
> +/*
> + * There is no good way to get notified from vmbus_onoffer_rescind(),
> + * so let's use polling here, since this is not a hot path.
> + */
> +static int wait_for_response(struct hv_device *hdev,
> +                            struct completion *comp)
> +{
> +       while (true) {
> +               if (hdev->channel->rescind) {
> +                       dev_warn_once(&hdev->device, "The device is gone.\n");
> +                       return -ENODEV;
> +               }
> +
> +               if (wait_for_completion_timeout(comp, HZ / 10))
> +                       break;
> +       }
> +
> +       return 0;

This is pretty racy, isn't it ? Also, I reckon you should consider the
timeout return value as an error condition unless I am completely
missing the point of what you are doing.

Lorenzo

> +}
> +
>  /**
>   * devfn_to_wslot() - Convert from Linux PCI slot to Windows
>   * @devfn: The Linux representation of PCI slot
> @@ -1569,7 +1589,8 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
>         if (ret)
>                 goto error;
>
> -       wait_for_completion(&comp_pkt.host_event);
> +       if (wait_for_response(hbus->hdev, &comp_pkt.host_event))
> +               goto error;
>
>         hpdev->desc = *desc;
>         refcount_set(&hpdev->refs, 1);
> @@ -2070,15 +2091,16 @@ static int hv_pci_protocol_negotiation(struct hv_device *hdev)
>                                sizeof(struct pci_version_request),
>                                (unsigned long)pkt, VM_PKT_DATA_INBAND,
>                                VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
> +               if (!ret)
> +                       ret = wait_for_response(hdev, &comp_pkt.host_event);
> +
>                 if (ret) {
>                         dev_err(&hdev->device,
> -                               "PCI Pass-through VSP failed sending version reqquest: %#x",
> +                               "PCI Pass-through VSP failed to request version: %d",
>                                 ret);
>                         goto exit;
>                 }
>
> -               wait_for_completion(&comp_pkt.host_event);
> -
>                 if (comp_pkt.completion_status >= 0) {
>                         pci_protocol_version = pci_protocol_versions[i];
>                         dev_info(&hdev->device,
> @@ -2287,11 +2309,12 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
>         ret = vmbus_sendpacket(hdev->channel, d0_entry, sizeof(*d0_entry),
>                                (unsigned long)pkt, VM_PKT_DATA_INBAND,
>                                VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
> +       if (!ret)
> +               ret = wait_for_response(hdev, &comp_pkt.host_event);
> +
>         if (ret)
>                 goto exit;
>
> -       wait_for_completion(&comp_pkt.host_event);
> -
>         if (comp_pkt.completion_status < 0) {
>                 dev_err(&hdev->device,
>                         "PCI Pass-through VSP failed D0 Entry with status %x\n",
> @@ -2331,11 +2354,10 @@ static int hv_pci_query_relations(struct hv_device *hdev)
>
>         ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message),
>                                0, VM_PKT_DATA_INBAND, 0);
> -       if (ret)
> -               return ret;
> +       if (!ret)
> +               ret = wait_for_response(hdev, &comp);
>
> -       wait_for_completion(&comp);
> -       return 0;
> +       return ret;
>  }
>
>  /**
> @@ -2405,11 +2427,11 @@ static int hv_send_resources_allocated(struct hv_device *hdev)
>                                 size_res, (unsigned long)pkt,
>                                 VM_PKT_DATA_INBAND,
>                                 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
> +               if (!ret)
> +                       ret = wait_for_response(hdev, &comp_pkt.host_event);
>                 if (ret)
>                         break;
>
> -               wait_for_completion(&comp_pkt.host_event);
> -
>                 if (comp_pkt.completion_status < 0) {
>                         ret = -EPROTO;
>                         dev_err(&hdev->device,
> --
> 2.7.4
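
For contrast, a sketch of the alternative hinted at in the review comment
above (treating a timeout as an error condition). This is not what the patch
does, since the patch deliberately keeps waiting as long as the channel is
alive, and the 100-iteration bound below is an arbitrary illustration:

/*
 * Sketch only: give up after an overall bound instead of waiting
 * indefinitely while the channel is alive.
 */
static int wait_for_response_bounded(struct hv_device *hdev,
                                     struct completion *comp)
{
        int i;

        for (i = 0; i < 100; i++) {     /* roughly 10 seconds total, arbitrary */
                if (hdev->channel->rescind) {
                        dev_warn_once(&hdev->device, "The device is gone.\n");
                        return -ENODEV;
                }
                if (wait_for_completion_timeout(comp, HZ / 10))
                        return 0;
        }

        return -ETIMEDOUT;      /* treat a silent host as an error */
}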