Currently pciehp_resume() always enables the slot if it is occupied.  But
often the slot was already occupied before the suspend, so we complain like
this:

    pciehp 0000:00:1c.1:pcie04: Device 0000:03:00.0 already exists at 0000:03:00, cannot hot-add
    pciehp 0000:00:1c.1:pcie04: Cannot add device at 0000:03:00

This patch only enables the slot if it was empty before the suspend and is
now occupied, i.e., a card was inserted while suspended.

Similarly, we only disable the slot if a card was removed while suspended.
If it was already empty before the suspend, we don't need to do anything.

[bhelgaas: changelog]
Tested-by: Paul Bolle <pebo...@tiscali.nl>
Signed-off-by: Yijing Wang <wangyij...@huawei.com>
Signed-off-by: Bjorn Helgaas <bhelg...@google.com>
Cc: "Rafael J. Wysocki" <r...@sisk.pl>
Cc: Oliver Neukum <oneu...@suse.de>
Cc: Gu Zheng <guz.f...@cn.fujitsu.com>
---
 drivers/pci/hotplug/pciehp_core.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
index 53b58de..551137f 100644
--- a/drivers/pci/hotplug/pciehp_core.c
+++ b/drivers/pci/hotplug/pciehp_core.c
@@ -317,6 +317,7 @@ static int pciehp_resume (struct pcie_device *dev)
 {
        struct controller *ctrl;
        struct slot *slot;
+       struct pci_bus *pbus = dev->port->subordinate;
        u8 status;
 
        ctrl = get_service_data(dev);
@@ -328,10 +329,13 @@ static int pciehp_resume (struct pcie_device *dev)
 
        /* Check if slot is occupied */
        pciehp_get_adapter_status(slot, &status);
-       if (status)
-               pciehp_enable_slot(slot);
-       else
+       if (status) {
+               if (list_empty(&pbus->devices))
+                       pciehp_enable_slot(slot);
+       } else if (!list_empty(&pbus->devices)) {
                pciehp_disable_slot(slot);
+       }
+
        return 0;
 }
 #endif /* PM */
-- 
1.7.1

