RE: [PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem

2018-03-16 Thread Dexuan Cui
> From: Lorenzo Pieralisi 
> Sent: Friday, March 16, 2018 11:32
> ...
> 
> OK, the patch series is reworked and queued in my pci/hv branch; please have
> a look and let me know if it looks OK to you. I won't ask Bjorn
> to move it into -next till you give me the go-ahead.
> 
> Lorenzo

Yes, it looks good!
Thanks for updating the commit logs!
I'll try to follow the same writing style in the future. :-)

BTW, I hope these two still have a chance to go into v4.16:
PCI: hv: Fix 2 hang issues in hv_compose_msi_msg()
PCI: hv: Serialize the present and eject work items

Thanks,
-- Dexuan


Re: [PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem

2018-03-16 Thread Lorenzo Pieralisi
On Fri, Mar 16, 2018 at 05:41:27PM +, Dexuan Cui wrote:
> > From: Lorenzo Pieralisi 
> > Sent: Friday, March 16, 2018 03:54
> > ...
> > Dexuan,
> > while applying/updating these patches I noticed this one may be squashed
> > into: https://patchwork.ozlabs.org/patch/886266/
> > 
> > since they logically belong in the same patch. Are you OK with me doing
> > that? Is my reading correct?
> > Lorenzo
> 
> I'm OK with that.
> I used two patches:
> [PATCH v4 1/2] PCI: hv: Serialize the present and eject work items
> [PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem
> only because the first fixed a real issue and hence IMO should go into
> stable kernels, and the second is only a cleanup patch, which doesn't
> need to go into stable kernels.
> 
> Either way is OK with me.
> Please feel free to do whatever you think is better. :-)

OK, the patch series is reworked and queued in my pci/hv branch; please have
a look and let me know if it looks OK to you. I won't ask Bjorn
to move it into -next till you give me the go-ahead.

Thanks,
Lorenzo


RE: [PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem

2018-03-16 Thread Dexuan Cui
> From: Lorenzo Pieralisi 
> Sent: Friday, March 16, 2018 03:54
> ...
> Dexuan,
> while applying/updating these patches I noticed this one may be squashed
> into: https://patchwork.ozlabs.org/patch/886266/
> 
> since they logically belong in the same patch. Are you OK with me doing
> that? Is my reading correct?
> Lorenzo

I'm OK with that.
I used two patches:
[PATCH v4 1/2] PCI: hv: Serialize the present and eject work items
[PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem
only because the first fixed a real issue and hence IMO should go into
stable kernels, and the second is only a cleanup patch, which doesn't
need to go into stable kernels.

Either way is OK with me.
Please feel free to do whatever you think is better. :-)

Thanks,
-- Dexuan



Re: [PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem

2018-03-16 Thread Lorenzo Pieralisi
On Thu, Mar 15, 2018 at 02:21:51PM +, Dexuan Cui wrote:
> Since we serialize the present/eject work items now, we don't need the
> semaphore any more.
> 
> Signed-off-by: Dexuan Cui 
> Reviewed-by: Michael Kelley 
> Acked-by: Haiyang Zhang 
> Cc: Vitaly Kuznetsov 
> Cc: Jack Morgenstein 
> Cc: Stephen Hemminger 
> Cc: K. Y. Srinivasan 
> ---
>  drivers/pci/host/pci-hyperv.c | 17 ++---
>  1 file changed, 2 insertions(+), 15 deletions(-)

Dexuan,

while applying/updating these patches I noticed this one may be squashed
into:

https://patchwork.ozlabs.org/patch/886266/

since they logically belong in the same patch. Are you OK with me doing
that? Is my reading correct?

Thanks,
Lorenzo

> diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
> index 1d2e1cb617f4..0dc2ecdbbe45 100644
> --- a/drivers/pci/host/pci-hyperv.c
> +++ b/drivers/pci/host/pci-hyperv.c
> @@ -447,7 +447,6 @@ struct hv_pcibus_device {
>   spinlock_t device_list_lock;/* Protect lists below */
>   void __iomem *cfg_addr;
>  
> - struct semaphore enum_sem;
>   struct list_head resources_for_children;
>  
>   struct list_head children;
> @@ -1648,12 +1647,8 @@ static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus,
>   * It must also treat the omission of a previously observed device as
>   * notification that the device no longer exists.
>   *
> - * Note that this function is a work item, and it may not be
> - * invoked in the order that it was queued.  Back to back
> - * updates of the list of present devices may involve queuing
> - * multiple work items, and this one may run before ones that
> - * were sent later. As such, this function only does something
> - * if is the last one in the queue.
> + * Note that this function is serialized with hv_eject_device_work(),
> + * because both are pushed to the ordered workqueue hbus->wq.
>   */
>  static void pci_devices_present_work(struct work_struct *work)
>  {
> @@ -1674,11 +1669,6 @@ static void pci_devices_present_work(struct work_struct *work)
>  
>   INIT_LIST_HEAD(&removed);
>  
> - if (down_interruptible(&hbus->enum_sem)) {
> - put_hvpcibus(hbus);
> - return;
> - }
> -
>   /* Pull this off the queue and process it if it was the last one. */
>   spin_lock_irqsave(&hbus->device_list_lock, flags);
>   while (!list_empty(&hbus->dr_list)) {
> @@ -1695,7 +1685,6 @@ static void pci_devices_present_work(struct work_struct *work)
>   spin_unlock_irqrestore(&hbus->device_list_lock, flags);
>  
>   if (!dr) {
> - up(&hbus->enum_sem);
>   put_hvpcibus(hbus);
>   return;
>   }
> @@ -1782,7 +1771,6 @@ static void pci_devices_present_work(struct work_struct *work)
>   break;
>   }
>  
> - up(&hbus->enum_sem);
>   put_hvpcibus(hbus);
>   kfree(dr);
>  }
> @@ -2516,7 +2504,6 @@ static int hv_pci_probe(struct hv_device *hdev,
>   spin_lock_init(&hbus->config_lock);
>   spin_lock_init(&hbus->device_list_lock);
>   spin_lock_init(&hbus->retarget_msi_interrupt_lock);
> - sema_init(&hbus->enum_sem, 1);
>   init_completion(&hbus->remove_event);
>   hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
>  hbus->sysdata.domain);
> -- 
> 2.7.4
> 


[PATCH v4 3/4] PCI: hv: Remove hbus->enum_sem

2018-03-15 Thread Dexuan Cui
Since we serialize the present/eject work items now, we don't need the
semaphore any more.

Signed-off-by: Dexuan Cui 
Reviewed-by: Michael Kelley 
Acked-by: Haiyang Zhang 
Cc: Vitaly Kuznetsov 
Cc: Jack Morgenstein 
Cc: Stephen Hemminger 
Cc: K. Y. Srinivasan 
---
 drivers/pci/host/pci-hyperv.c | 17 ++---
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 1d2e1cb617f4..0dc2ecdbbe45 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -447,7 +447,6 @@ struct hv_pcibus_device {
spinlock_t device_list_lock;/* Protect lists below */
void __iomem *cfg_addr;
 
-   struct semaphore enum_sem;
struct list_head resources_for_children;
 
struct list_head children;
@@ -1648,12 +1647,8 @@ static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus,
  * It must also treat the omission of a previously observed device as
  * notification that the device no longer exists.
  *
- * Note that this function is a work item, and it may not be
- * invoked in the order that it was queued.  Back to back
- * updates of the list of present devices may involve queuing
- * multiple work items, and this one may run before ones that
- * were sent later. As such, this function only does something
- * if is the last one in the queue.
+ * Note that this function is serialized with hv_eject_device_work(),
+ * because both are pushed to the ordered workqueue hbus->wq.
  */
 static void pci_devices_present_work(struct work_struct *work)
 {
@@ -1674,11 +1669,6 @@ static void pci_devices_present_work(struct work_struct *work)
 
INIT_LIST_HEAD(&removed);
 
-   if (down_interruptible(&hbus->enum_sem)) {
-   put_hvpcibus(hbus);
-   return;
-   }
-
/* Pull this off the queue and process it if it was the last one. */
spin_lock_irqsave(&hbus->device_list_lock, flags);
while (!list_empty(&hbus->dr_list)) {
@@ -1695,7 +1685,6 @@ static void pci_devices_present_work(struct work_struct *work)
spin_unlock_irqrestore(&hbus->device_list_lock, flags);
 
if (!dr) {
-   up(&hbus->enum_sem);
put_hvpcibus(hbus);
return;
}
@@ -1782,7 +1771,6 @@ static void pci_devices_present_work(struct work_struct *work)
break;
}
 
-   up(&hbus->enum_sem);
put_hvpcibus(hbus);
kfree(dr);
 }
@@ -2516,7 +2504,6 @@ static int hv_pci_probe(struct hv_device *hdev,
spin_lock_init(&hbus->config_lock);
spin_lock_init(&hbus->device_list_lock);
spin_lock_init(&hbus->retarget_msi_interrupt_lock);
-   sema_init(&hbus->enum_sem, 1);
init_completion(&hbus->remove_event);
hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
   hbus->sysdata.domain);
-- 
2.7.4
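
For reference, a minimal, self-contained sketch (not the driver's code; the demo_* names below are made up for illustration) of the workqueue guarantee this patch relies on: a workqueue created with alloc_ordered_workqueue() executes at most one work item at a time, in queueing order, so pci_devices_present_work() and hv_eject_device_work() queued on hbus->wq can never run concurrently and the enum_sem semaphore becomes redundant.

#include <linux/module.h>
#include <linux/printk.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_wq;

static void demo_present_work(struct work_struct *work)
{
	/* Runs alone: no other item queued on demo_wq can execute concurrently. */
	pr_info("present work\n");
}

static void demo_eject_work(struct work_struct *work)
{
	/* Guaranteed to start only after demo_present_work() has returned. */
	pr_info("eject work\n");
}

static DECLARE_WORK(present_work, demo_present_work);
static DECLARE_WORK(eject_work, demo_eject_work);

static int __init demo_init(void)
{
	/* An ordered workqueue runs queued items one at a time, in FIFO order. */
	demo_wq = alloc_ordered_workqueue("demo_wq", 0);
	if (!demo_wq)
		return -ENOMEM;

	queue_work(demo_wq, &present_work);
	queue_work(demo_wq, &eject_work);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

This is the same property hv_pci_probe() sets up with alloc_ordered_workqueue("hv_pci_%x", 0, hbus->sysdata.domain): with both handlers funneled through one ordered queue, the mutual exclusion that enum_sem used to provide comes for free.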
