Re: [PATCH v7 22/22] iommu/dma: Avoid PCI host bridge windows

2016-09-15 Thread Marek Szyprowski

Hi Robin,


On 2016-09-14 15:25, Robin Murphy wrote:

On 14/09/16 13:35, Marek Szyprowski wrote:

On 2016-09-14 13:10, Robin Murphy wrote:

On 14/09/16 11:55, Marek Szyprowski wrote:

On 2016-09-12 18:14, Robin Murphy wrote:

With our DMA ops enabled for PCI devices, we should avoid allocating
IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
lead to faults, corruption or other badness. To be safe, punch out holes
for all of the relevant host bridge's windows when initialising a DMA
domain for a PCI device.

CC: Marek Szyprowski 
CC: Inki Dae 
Reported-by: Lorenzo Pieralisi 
Signed-off-by: Robin Murphy 

I don't know much about PCI and their IOMMU integration, but can't we use
the direct mapping region feature of iommu core for it? There are already
iommu_get_dm_regions(), iommu_put_dm_regions() and iommu_request_dm_for_dev()
functions for handling them...

It's rather the opposite problem - in the direct-mapping case, we're
making sure the iommu_domain has translations installed for the given
IOVAs (which are also the corresponding physical address) before it goes
live, whereas what we need to do here is make sure these addresses
never get used as IOVAs at all, because any attempt to use them will
likely go wrong. Thus we carve them out of the iova_domain such that
they will never get near an actual IOMMU API call.

This is a slightly generalised equivalent of e.g. amd_iommu.c's
init_reserved_iova_ranges().

Hmmm. Each dm_region has a protection parameter. Can't we reuse that to
create prohibited/reserved regions by setting it to 0 (no read / no write)
and mapping them to physical address 0? That's just a quick idea, because
the dm_regions and the proposed code for PCI look a bit similar to me...

It might look similar, but at different levels (iommu_domain vs.
iova_domain) and with the opposite intent. The dm_region prot flag is
just the standard flag as passed to iommu_map() - trying to map a region
with no access in an empty pagetable isn't going to achieve anything
anyway (it's effectively unmapping addresses that are already unmapped).

But for this case, even if you _did_ map something in the pagetable
(i.e. the iommu_domain), it wouldn't make any difference, because the
thing we're mitigating against is handing out addresses which are going
to cause a device's accesses to blow up inside the PCI root complex
without ever even reaching the IOMMU. In short, dm_regions are about
"these addresses are already being used for DMA, so make sure the IOMMU
API doesn't block them", whereas reserved ranges are about "these
addresses are unusable for DMA, so make sure the DMA API can't allocate
them".


Okay, thanks for the explanation.

Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH v7 22/22] iommu/dma: Avoid PCI host bridge windows

2016-09-14 Thread Robin Murphy
On 14/09/16 13:35, Marek Szyprowski wrote:
> Hi Robin,
> 
> 
> On 2016-09-14 13:10, Robin Murphy wrote:
>> On 14/09/16 11:55, Marek Szyprowski wrote:
>>> On 2016-09-12 18:14, Robin Murphy wrote:
 With our DMA ops enabled for PCI devices, we should avoid allocating
 IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
 lead to faults, corruption or other badness. To be safe, punch out holes
 for all of the relevant host bridge's windows when initialising a DMA
 domain for a PCI device.

 CC: Marek Szyprowski 
 CC: Inki Dae 
 Reported-by: Lorenzo Pieralisi 
 Signed-off-by: Robin Murphy 
>>> I don't know much about PCI and their IOMMU integration, but can't we use
>>> the direct mapping region feature of iommu core for it? There are already
>>> iommu_get_dm_regions(), iommu_put_dm_regions() and iommu_request_dm_for_dev()
>>> functions for handling them...
>> It's rather the opposite problem - in the direct-mapping case, we're
>> making sure the iommu_domain has translations installed for the given
>> IOVAs (which are also the corresponding physical address) before it goes
>> live, whereas what we need to do here is make sure these addresses
>> never get used as IOVAs at all, because any attempt to use them will
>> likely go wrong. Thus we carve them out of the iova_domain such that
>> they will never get near an actual IOMMU API call.
>>
>> This is a slightly generalised equivalent of e.g. amd_iommu.c's
>> init_reserved_iova_ranges().
> 
> Hmmm. Each dm_region has a protection parameter. Can't we reuse that to
> create prohibited/reserved regions by setting it to 0 (no read / no write)
> and mapping them to physical address 0? That's just a quick idea, because
> the dm_regions and the proposed code for PCI look a bit similar to me...

It might look similar, but at different levels (iommu_domain vs.
iova_domain) and with the opposite intent. The dm_region prot flag is
just the standard flag as passed to iommu_map() - trying to map a region
with no access in an empty pagetable isn't going to achieve anything
anyway (it's effectively unmapping addresses that are already unmapped).

But for this case, even if you _did_ map something in the pagetable
(i.e. the iommu_domain), it wouldn't make any difference, because the
thing we're mitigating against is handing out addresses which are going
to cause a device's accesses to blow up inside the PCI root complex
without ever even reaching the IOMMU. In short, dm_regions are about
"these addresses are already being used for DMA, so make sure the IOMMU
API doesn't block them", whereas reserved ranges are about "these
addresses are unusable for DMA, so make sure the DMA API can't allocate
them".

Robin.

> 
> [...]
> 
> Best regards



Re: [PATCH v7 22/22] iommu/dma: Avoid PCI host bridge windows

2016-09-14 Thread Marek Szyprowski

Hi Robin,


On 2016-09-14 13:10, Robin Murphy wrote:

On 14/09/16 11:55, Marek Szyprowski wrote:

On 2016-09-12 18:14, Robin Murphy wrote:

With our DMA ops enabled for PCI devices, we should avoid allocating
IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
lead to faults, corruption or other badness. To be safe, punch out holes
for all of the relevant host bridge's windows when initialising a DMA
domain for a PCI device.

CC: Marek Szyprowski 
CC: Inki Dae 
Reported-by: Lorenzo Pieralisi 
Signed-off-by: Robin Murphy 

I don't know much about PCI and their IOMMU integration, but can't we use
the direct mapping region feature of iommu core for it? There are already
iommu_get_dm_regions(), iommu_put_dm_regions() and
iommu_request_dm_for_dev()
functions for handling them...

It's rather the opposite problem - in the direct-mapping case, we're
making sure the iommu_domain has translations installed for the given
IOVAs (which are also the corresponding physical address) before it goes
live, whereas what we need to do here is make sure these addresses
never get used as IOVAs at all, because any attempt to use them will
likely go wrong. Thus we carve them out of the iova_domain such that
they will never get near an actual IOMMU API call.

This is a slightly generalised equivalent of e.g. amd_iommu.c's
init_reserved_iova_ranges().


Hmmm. Each dm_region has a protection parameter. Can't we reuse that to
create prohibited/reserved regions by setting it to 0 (no read / no write)
and mapping them to physical address 0? That's just a quick idea, because
the dm_regions and the proposed code for PCI look a bit similar to me...

[...]

Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland



Re: [PATCH v7 22/22] iommu/dma: Avoid PCI host bridge windows

2016-09-14 Thread Robin Murphy
Hi Marek,

On 14/09/16 11:55, Marek Szyprowski wrote:
> Hi Robin,
> 
> 
> On 2016-09-12 18:14, Robin Murphy wrote:
>> With our DMA ops enabled for PCI devices, we should avoid allocating
>> IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
>> lead to faults, corruption or other badness. To be safe, punch out holes
>> for all of the relevant host bridge's windows when initialising a DMA
>> domain for a PCI device.
>>
>> CC: Marek Szyprowski 
>> CC: Inki Dae 
>> Reported-by: Lorenzo Pieralisi 
>> Signed-off-by: Robin Murphy 
> 
> I don't know much about PCI and their IOMMU integration, but can't we use
> the direct mapping region feature of iommu core for it? There are already
> iommu_get_dm_regions(), iommu_put_dm_regions() and
> iommu_request_dm_for_dev()
> functions for handling them...

It's rather the opposite problem - in the direct-mapping case, we're
making sure the iommu_domain has translations installed for the given
IOVAs (which are also the corresponding physical address) before it goes
live, whereas what we need to do here is make sure these addresses
never get used as IOVAs at all, because any attempt to use them will
likely go wrong. Thus we carve them out of the iova_domain such that
they will never get near an actual IOMMU API call.

This is a slightly generalised equivalent of e.g. amd_iommu.c's
init_reserved_iova_ranges().

Robin.

> 
>> ---
>>
>> - Squash in the previous drm/exynos fixup
>> - If need be, this one can probably wait
>> ---
>>   arch/arm64/mm/dma-mapping.c   |  2 +-
>>   drivers/gpu/drm/exynos/exynos_drm_iommu.h |  2 +-
>>   drivers/iommu/dma-iommu.c | 25
>> -
>>   include/linux/dma-iommu.h |  3 ++-
>>   4 files changed, 28 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
>> index c4284c432ae8..610d8e53011e 100644
>> --- a/arch/arm64/mm/dma-mapping.c
>> +++ b/arch/arm64/mm/dma-mapping.c
>> @@ -827,7 +827,7 @@ static bool do_iommu_attach(struct device *dev,
>> const struct iommu_ops *ops,
>>* then the IOMMU core will have already configured a group for
>> this
>>* device, and allocated the default domain for that group.
>>*/
>> -if (!domain || iommu_dma_init_domain(domain, dma_base, size)) {
>> +if (!domain || iommu_dma_init_domain(domain, dma_base, size, dev)) {
>>   pr_warn("Failed to set up IOMMU for device %s; retaining
>> platform DMA ops\n",
>>   dev_name(dev));
>>   return false;
>> diff --git a/drivers/gpu/drm/exynos/exynos_drm_iommu.h
>> b/drivers/gpu/drm/exynos/exynos_drm_iommu.h
>> index c8de4913fdbe..87f6b5672e11 100644
>> --- a/drivers/gpu/drm/exynos/exynos_drm_iommu.h
>> +++ b/drivers/gpu/drm/exynos/exynos_drm_iommu.h
>> @@ -66,7 +66,7 @@ static inline int
>> __exynos_iommu_create_mapping(struct exynos_drm_private *priv,
>>   if (ret)
>>   goto free_domain;
>>   -ret = iommu_dma_init_domain(domain, start, size);
>> +ret = iommu_dma_init_domain(domain, start, size, NULL);
>>   if (ret)
>>   goto put_cookie;
>>   diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 4329d18080cf..c5ab8667e6f2 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -27,6 +27,7 @@
>>   #include 
>>   #include 
>>   #include 
>> +#include 
>>   #include 
>>   #include 
>>   @@ -103,18 +104,38 @@ void iommu_put_dma_cookie(struct iommu_domain
>> *domain)
>>   }
>>   EXPORT_SYMBOL(iommu_put_dma_cookie);
>>   +static void iova_reserve_pci_windows(struct pci_dev *dev,
>> +struct iova_domain *iovad)
>> +{
>> +struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
>> +struct resource_entry *window;
>> +unsigned long lo, hi;
>> +
>> +resource_list_for_each_entry(window, &bridge->windows) {
>> +if (resource_type(window->res) != IORESOURCE_MEM &&
>> +resource_type(window->res) != IORESOURCE_IO)
>> +continue;
>> +
>> +lo = iova_pfn(iovad, window->res->start - window->offset);
>> +hi = iova_pfn(iovad, window->res->end - window->offset);
>> +reserve_iova(iovad, lo, hi);
>> +}
>> +}
>> +
>>   /**
>>* iommu_dma_init_domain - Initialise a DMA mapping domain
>>* @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
>>* @base: IOVA at which the mappable address space starts
>>* @size: Size of IOVA space
>> + * @dev: Device the domain is being initialised for
>>*
>>* @base and @size should be exact multiples of IOMMU page
>> granularity to
>>* avoid rounding surprises. If necessary, we reserve the page at
>> address 0
>>* to ensure it is an invalid IOVA. It is safe to reinitialise a
>> domain, but
>>* any change which could make prior IOVAs invalid will fail.
>>*/
>> -int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t
>> base, u64 size)
>> +int iommu_dma_i

Re: [PATCH v7 22/22] iommu/dma: Avoid PCI host bridge windows

2016-09-14 Thread Marek Szyprowski

Hi Robin,


On 2016-09-12 18:14, Robin Murphy wrote:

With our DMA ops enabled for PCI devices, we should avoid allocating
IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
lead to faults, corruption or other badness. To be safe, punch out holes
for all of the relevant host bridge's windows when initialising a DMA
domain for a PCI device.

CC: Marek Szyprowski 
CC: Inki Dae 
Reported-by: Lorenzo Pieralisi 
Signed-off-by: Robin Murphy 


I don't know much about PCI and their IOMMU integration, but can't we use
the direct mapping region feature of iommu core for it? There are already
iommu_get_dm_regions(), iommu_put_dm_regions() and iommu_request_dm_for_dev()
functions for handling them...


---

- Squash in the previous drm/exynos fixup
- If need be, this one can probably wait
---
  arch/arm64/mm/dma-mapping.c   |  2 +-
  drivers/gpu/drm/exynos/exynos_drm_iommu.h |  2 +-
  drivers/iommu/dma-iommu.c | 25 -
  include/linux/dma-iommu.h |  3 ++-
  4 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index c4284c432ae8..610d8e53011e 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -827,7 +827,7 @@ static bool do_iommu_attach(struct device *dev, const 
struct iommu_ops *ops,
 * then the IOMMU core will have already configured a group for this
 * device, and allocated the default domain for that group.
 */
-   if (!domain || iommu_dma_init_domain(domain, dma_base, size)) {
+   if (!domain || iommu_dma_init_domain(domain, dma_base, size, dev)) {
pr_warn("Failed to set up IOMMU for device %s; retaining platform 
DMA ops\n",
dev_name(dev));
return false;
diff --git a/drivers/gpu/drm/exynos/exynos_drm_iommu.h 
b/drivers/gpu/drm/exynos/exynos_drm_iommu.h
index c8de4913fdbe..87f6b5672e11 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_iommu.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_iommu.h
@@ -66,7 +66,7 @@ static inline int __exynos_iommu_create_mapping(struct 
exynos_drm_private *priv,
if (ret)
goto free_domain;
  
-	ret = iommu_dma_init_domain(domain, start, size);
+	ret = iommu_dma_init_domain(domain, start, size, NULL);
if (ret)
goto put_cookie;
  
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4329d18080cf..c5ab8667e6f2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -27,6 +27,7 @@
  #include 
  #include 
  #include 
+#include 
  #include 
  #include 
  
@@ -103,18 +104,38 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)

  }
  EXPORT_SYMBOL(iommu_put_dma_cookie);
  
+static void iova_reserve_pci_windows(struct pci_dev *dev,

+   struct iova_domain *iovad)
+{
+   struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
+   struct resource_entry *window;
+   unsigned long lo, hi;
+
+   resource_list_for_each_entry(window, &bridge->windows) {
+   if (resource_type(window->res) != IORESOURCE_MEM &&
+   resource_type(window->res) != IORESOURCE_IO)
+   continue;
+
+   lo = iova_pfn(iovad, window->res->start - window->offset);
+   hi = iova_pfn(iovad, window->res->end - window->offset);
+   reserve_iova(iovad, lo, hi);
+   }
+}
+
  /**
   * iommu_dma_init_domain - Initialise a DMA mapping domain
   * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
   * @base: IOVA at which the mappable address space starts
   * @size: Size of IOVA space
+ * @dev: Device the domain is being initialised for
   *
   * @base and @size should be exact multiples of IOMMU page granularity to
   * avoid rounding surprises. If necessary, we reserve the page at address 0
   * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
   * any change which could make prior IOVAs invalid will fail.
   */
-int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 
size)
+int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
+   u64 size, struct device *dev)
  {
struct iova_domain *iovad = cookie_iovad(domain);
unsigned long order, base_pfn, end_pfn;
@@ -152,6 +173,8 @@ int iommu_dma_init_domain(struct iommu_domain *domain, 
dma_addr_t base, u64 size
iovad->dma_32bit_pfn = end_pfn;
} else {
init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn);
+   if (dev && dev_is_pci(dev))
+   iova_reserve_pci_windows(to_pci_dev(dev), iovad);
}
return 0;
  }
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 5ee806e41b5c..32c589062bd9 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -30,7 +30,8 @@ int iommu_get_dma_cookie(st

[PATCH v7 22/22] iommu/dma: Avoid PCI host bridge windows

2016-09-12 Thread Robin Murphy
With our DMA ops enabled for PCI devices, we should avoid allocating
IOVAs which a host bridge might misinterpret as peer-to-peer DMA and
lead to faults, corruption or other badness. To be safe, punch out holes
for all of the relevant host bridge's windows when initialising a DMA
domain for a PCI device.

CC: Marek Szyprowski 
CC: Inki Dae 
Reported-by: Lorenzo Pieralisi 
Signed-off-by: Robin Murphy 

---

- Squash in the previous drm/exynos fixup
- If need be, this one can probably wait
---
 arch/arm64/mm/dma-mapping.c   |  2 +-
 drivers/gpu/drm/exynos/exynos_drm_iommu.h |  2 +-
 drivers/iommu/dma-iommu.c | 25 -
 include/linux/dma-iommu.h |  3 ++-
 4 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index c4284c432ae8..610d8e53011e 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -827,7 +827,7 @@ static bool do_iommu_attach(struct device *dev, const struct iommu_ops *ops,
 * then the IOMMU core will have already configured a group for this
 * device, and allocated the default domain for that group.
 */
-   if (!domain || iommu_dma_init_domain(domain, dma_base, size)) {
+   if (!domain || iommu_dma_init_domain(domain, dma_base, size, dev)) {
		pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
dev_name(dev));
return false;
diff --git a/drivers/gpu/drm/exynos/exynos_drm_iommu.h b/drivers/gpu/drm/exynos/exynos_drm_iommu.h
index c8de4913fdbe..87f6b5672e11 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_iommu.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_iommu.h
@@ -66,7 +66,7 @@ static inline int __exynos_iommu_create_mapping(struct exynos_drm_private *priv,
if (ret)
goto free_domain;
 
-   ret = iommu_dma_init_domain(domain, start, size);
+   ret = iommu_dma_init_domain(domain, start, size, NULL);
if (ret)
goto put_cookie;
 
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4329d18080cf..c5ab8667e6f2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -103,18 +104,38 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
 }
 EXPORT_SYMBOL(iommu_put_dma_cookie);
 
+static void iova_reserve_pci_windows(struct pci_dev *dev,
+   struct iova_domain *iovad)
+{
+   struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
+   struct resource_entry *window;
+   unsigned long lo, hi;
+
+   resource_list_for_each_entry(window, &bridge->windows) {
+   if (resource_type(window->res) != IORESOURCE_MEM &&
+   resource_type(window->res) != IORESOURCE_IO)
+   continue;
+
+   lo = iova_pfn(iovad, window->res->start - window->offset);
+   hi = iova_pfn(iovad, window->res->end - window->offset);
+   reserve_iova(iovad, lo, hi);
+   }
+}
+
 /**
  * iommu_dma_init_domain - Initialise a DMA mapping domain
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
  * @base: IOVA at which the mappable address space starts
  * @size: Size of IOVA space
+ * @dev: Device the domain is being initialised for
  *
  * @base and @size should be exact multiples of IOMMU page granularity to
  * avoid rounding surprises. If necessary, we reserve the page at address 0
  * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
  * any change which could make prior IOVAs invalid will fail.
  */
-int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size)
+int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
+   u64 size, struct device *dev)
 {
struct iova_domain *iovad = cookie_iovad(domain);
unsigned long order, base_pfn, end_pfn;
@@ -152,6 +173,8 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size
iovad->dma_32bit_pfn = end_pfn;
} else {
init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn);
+   if (dev && dev_is_pci(dev))
+   iova_reserve_pci_windows(to_pci_dev(dev), iovad);
}
return 0;
 }
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 5ee806e41b5c..32c589062bd9 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -30,7 +30,8 @@ int iommu_get_dma_cookie(struct iommu_domain *domain);
 void iommu_put_dma_cookie(struct iommu_domain *domain);
 
 /* Setup call for arch DMA mapping code */
-int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size);
+int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
+   u64 size, struct device *dev);
 
 /* Ge