Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Halil Pasic
On Mon, 3 Jun 2019 16:04:28 +0200
Halil Pasic  wrote:

> On Mon, 3 Jun 2019 14:09:02 +0200
> Michael Mueller  wrote:
> 
> > >> @@ -1059,16 +1168,19 @@ static int __init css_bus_init(void)
> > >>  if (ret)
> > >>  goto out_unregister;
> > >>  ret = register_pm_notifier(&css_power_notifier);
> > >> -if (ret) {
> > >> -unregister_reboot_notifier(&css_reboot_notifier);
> > >> -goto out_unregister;
> > >> -}
> > >> +if (ret)
> > >> +goto out_unregister_rn;
> > >> +ret = cio_dma_pool_init();
> > >> +if (ret)
> > >> +goto out_unregister_rn;  
> > > 
> > > Don't you also need to unregister the pm notifier on failure here?  
> > 
> > Mmh, that was the original intention. Thanks!
> 
> I suppose we could also move cio_dma_pool_init() right before the
> register_reboot_notifier() call and goto out_unregister on error.
> 

Forget it, then we would have to roll back the pool creation if the
registration fails... Sorry for the noise.

Regards,
Halil
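
[A minimal sketch of the error path being discussed, for reference; the
out_unregister_pn label is hypothetical, the posted patch only introduces
out_unregister_rn:

	ret = register_pm_notifier(&css_power_notifier);
	if (ret)
		goto out_unregister_rn;
	ret = cio_dma_pool_init();
	if (ret)
		goto out_unregister_pn;	/* roll back the pm notifier too */
	...
	return 0;
out_unregister_pn:
	unregister_pm_notifier(&css_power_notifier);
out_unregister_rn:
	unregister_reboot_notifier(&css_reboot_notifier);
out_unregister:
	...
]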



Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Halil Pasic
On Mon, 3 Jun 2019 14:09:02 +0200
Michael Mueller  wrote:

> >> @@ -1059,16 +1168,19 @@ static int __init css_bus_init(void)
> >>if (ret)
> >>goto out_unregister;
> >>ret = register_pm_notifier(&css_power_notifier);
> >> -  if (ret) {
> >> -  unregister_reboot_notifier(&css_reboot_notifier);
> >> -  goto out_unregister;
> >> -  }
> >> +  if (ret)
> >> +  goto out_unregister_rn;
> >> +  ret = cio_dma_pool_init();
> >> +  if (ret)
> >> +  goto out_unregister_rn;  
> > 
> > Don't you also need to unregister the pm notifier on failure here?  
> 
> Mmh, that was the original intention. Thanks!

I suppose we could also move cio_dma_pool_init() right before the
register_reboot_notifier() call and goto out_unregister on error.

Regards,
Halil



Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Michael Mueller

On 03.06.19 15:34, Cornelia Huck wrote:
> On Mon, 3 Jun 2019 14:57:30 +0200
> Halil Pasic  wrote:
>
>> On Mon, 3 Jun 2019 14:09:02 +0200
>> Michael Mueller  wrote:
>>
>>>>> @@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
>>>>>  	INIT_WORK(&sch->todo_work, css_sch_todo);
>>>>>  	sch->dev.release = &css_subchannel_release;
>>>>>  	device_initialize(&sch->dev);
>>>>
>>>> It might be helpful to add a comment why you use 31 bit here...
>>>
>>> @Halil, please let me know what comment you prefer here...
>>
>> How about?
>>
>> /*
>>  * The physical addresses of some of the dma structures that
>>  * can belong to a subchannel need to fit 31 bit width (examples ccw,).
>>  */
>
> "e.g. ccw"?
>
>>>>> +	sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
>>>>> +	sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
>>>>>  	return sch;
>>>>>
>>>>>  err:
>>>>> @@ -899,6 +903,8 @@ static int __init setup_css(int nr)
>>>>>  	dev_set_name(&css->device, "css%x", nr);
>>>>>  	css->device.groups = cssdev_attr_groups;
>>>>>  	css->device.release = channel_subsystem_release;
>>>>
>>>> ...and 64 bit here.
>>>
>>> and here.
>>
>> /*
>>  * We currently allocate notifier bits with this (using css->device
>>  * as the device argument with the DMA API), and are fine with 64 bit
>>  * addresses.
>>  */
>
> Thanks, that makes things hopefully clearer if we look at it some time
> in the future ;)
Applied both with the requested change.

Michael



Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Cornelia Huck
On Mon, 3 Jun 2019 14:47:06 +0200
Halil Pasic  wrote:

> On Mon, 3 Jun 2019 13:37:45 +0200
> Cornelia Huck  wrote:
> 
> > On Wed, 29 May 2019 14:26:51 +0200
> > Michael Mueller  wrote:

> > > diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
> > > index 1727180e8ca1..43c007d2775a 100644
> > > --- a/arch/s390/include/asm/cio.h
> > > +++ b/arch/s390/include/asm/cio.h
> > > @@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
> > >  void channel_subsystem_reinit(void);
> > >  extern void css_schedule_reprobe(void);
> > >  
> > > +extern void *cio_dma_zalloc(size_t size);
> > > +extern void cio_dma_free(void *cpu_addr, size_t size);
> > > +extern struct device *cio_get_dma_css_dev(void);
> > > +
> > > +struct gen_pool;  
> > 
> > That forward declaration is a bit ugly...   
> 
> Can you explain to me what is ugly about it so I can avoid similar
> mistakes in the future?
> 
> >I guess the alternative was
> > include hell?
> >   
> 
> What do you mean by include hell?
> 
> I decided to use a forward declaration because the guys that include
> "cio.h" are not expected to require the interfaces defined in
> linux/genalloc.h. My motivation to do it like this was the principle of
> encapsulation.

My personal rule-of-thumb is to include the header if it is
straightforward enough (e.g. if adding a basic header is enough). If
you need to include a header together with all of its friends and
family, a forward declaration is probably nicer. And of course,
sometimes it is simply needed.
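
[A toy illustration of that trade-off; the header and names below are
hypothetical, not from the patch:

	/* some_api.h -- hypothetical example */
	struct device;
	struct gen_pool;	/* forward declaration: opaque to users */

	struct gen_pool *some_pool_create(struct device *dev, int nr_pages);
	void *some_pool_zalloc(struct gen_pool *pool, size_t size);

Code that merely passes the struct gen_pool pointer around compiles against
this header alone; only callers that use the gen_pool interfaces themselves
need to include linux/genalloc.h.]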


Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Cornelia Huck
On Mon, 3 Jun 2019 14:57:30 +0200
Halil Pasic  wrote:

> On Mon, 3 Jun 2019 14:09:02 +0200
> Michael Mueller  wrote:
> 
> > >> @@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
> > >>  INIT_WORK(&sch->todo_work, css_sch_todo);
> > >>  sch->dev.release = &css_subchannel_release;
> > >>  device_initialize(&sch->dev);
> > > 
> > > It might be helpful to add a comment why you use 31 bit here...
> > 
> > @Halil, please let me know what comment you prefer here...
> >   
> 
> How about?
> 
> /*
>  * The physical addresses of some of the dma structures that
>  * can belong to a subchannel need to fit 31 bit width (examples ccw,).
>  */

"e.g. ccw"?

> 
> 
> > > 
> > >> +sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
> > >> +sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
> > >>  return sch;
> > >>   
> > >>   err:
> > >> @@ -899,6 +903,8 @@ static int __init setup_css(int nr)
> > >>  dev_set_name(&css->device, "css%x", nr);
> > >>  css->device.groups = cssdev_attr_groups;
> > >>  css->device.release = channel_subsystem_release;
> > > 
> > > ...and 64 bit here.
> > 
> > and here.  
> 
> /*
>  * We currently allocate notifier bits with this (using css->device
>  * as the device argument with the DMA API), and are fine with 64 bit
>  * addresses.
>  */

Thanks, that makes things hopefully clearer if we look at it some time
in the future ;)
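
[For reference, the two hunks with the suggested comments folded in,
incorporating the "e.g. ccw" wording above, might read:

	/*
	 * The physical addresses of some of the dma structures that can
	 * belong to a subchannel (e.g. ccw) need to fit 31 bit width.
	 */
	sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
	sch->dev.dma_mask = &sch->dev.coherent_dma_mask;

and:

	/*
	 * We currently allocate notifier bits with this (using css->device
	 * as the device argument with the DMA API), and are fine with 64 bit
	 * addresses.
	 */
	css->device.coherent_dma_mask = DMA_BIT_MASK(64);
	css->device.dma_mask = &css->device.coherent_dma_mask;
]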


Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Halil Pasic
On Mon, 3 Jun 2019 14:09:02 +0200
Michael Mueller  wrote:

> >> @@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
> >>INIT_WORK(&sch->todo_work, css_sch_todo);
> >>sch->dev.release = &css_subchannel_release;
> >>device_initialize(&sch->dev);  
> > 
> > It might be helpful to add a comment why you use 31 bit here...  
> 
> @Halil, please let me know what comment you prefer here...
> 

How about?

/*
 * The physical addresses of some of the dma structures that
 * can belong to a subchannel need to fit 31 bit width (examples ccw,).
 */


> >   
> >> +  sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
> >> +  sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
> >>return sch;
> >>   
> >>   err:
> >> @@ -899,6 +903,8 @@ static int __init setup_css(int nr)
> >>dev_set_name(&css->device, "css%x", nr);
> >>css->device.groups = cssdev_attr_groups;
> >>css->device.release = channel_subsystem_release;  
> > 
> > ...and 64 bit here.  
> 
> and here.

/*
 * We currently allocate notifier bits with this (using css->device
 * as the device argument with the DMA API), and are fine with 64 bit
 * addresses.
 */

Regards,
Halil



Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Halil Pasic
On Mon, 3 Jun 2019 13:37:45 +0200
Cornelia Huck  wrote:

> On Wed, 29 May 2019 14:26:51 +0200
> Michael Mueller  wrote:
> 
> > From: Halil Pasic 
> > 
> > To support protected virtualization cio will need to make sure the
> > memory used for communication with the hypervisor is DMA memory.
> > 
> > Let us introduce one global pool for cio.
> > 
> > Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> > idea is to avoid each allocation effectively wasting a page, as we
> > typically allocate much less than PAGE_SIZE.
> > 
> > Signed-off-by: Halil Pasic 
> > Reviewed-by: Sebastian Ott 
> > Signed-off-by: Michael Mueller 
> > ---
> >  arch/s390/Kconfig   |   1 +
> >  arch/s390/include/asm/cio.h |  11 
> >  drivers/s390/cio/css.c  | 120 ++--
> >  3 files changed, 128 insertions(+), 4 deletions(-)
> 
> (...)
> 
> > diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
> > index 1727180e8ca1..43c007d2775a 100644
> > --- a/arch/s390/include/asm/cio.h
> > +++ b/arch/s390/include/asm/cio.h
> > @@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
> >  void channel_subsystem_reinit(void);
> >  extern void css_schedule_reprobe(void);
> >  
> > +extern void *cio_dma_zalloc(size_t size);
> > +extern void cio_dma_free(void *cpu_addr, size_t size);
> > +extern struct device *cio_get_dma_css_dev(void);
> > +
> > +struct gen_pool;
> 
> That forward declaration is a bit ugly... 

Can you explain to me what is ugly about it so I can avoid similar
mistakes in the future?

>I guess the alternative was
> include hell?
> 

What do you mean by include hell?

I decided to use a forward declaration because the guys that include
"cio.h" are not expected to require the interfaces defined in
linux/genalloc.h. My motivation to do it like this was the principle of
encapsulation.

Regards,
Halil



Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Michael Mueller

On 03.06.19 13:37, Cornelia Huck wrote:
> On Wed, 29 May 2019 14:26:51 +0200
> Michael Mueller  wrote:
>
>> From: Halil Pasic 
>>
>> To support protected virtualization cio will need to make sure the
>> memory used for communication with the hypervisor is DMA memory.
>>
>> Let us introduce one global pool for cio.
>>
>> Our DMA pools are implemented as a gen_pool backed with DMA pages. The
>> idea is to avoid each allocation effectively wasting a page, as we
>> typically allocate much less than PAGE_SIZE.
>>
>> Signed-off-by: Halil Pasic 
>> Reviewed-by: Sebastian Ott 
>> Signed-off-by: Michael Mueller 
>> ---
>>  arch/s390/Kconfig           |   1 +
>>  arch/s390/include/asm/cio.h |  11 
>>  drivers/s390/cio/css.c      | 120 ++--
>>  3 files changed, 128 insertions(+), 4 deletions(-)
>
> (...)
>
>> diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
>> index 1727180e8ca1..43c007d2775a 100644
>> --- a/arch/s390/include/asm/cio.h
>> +++ b/arch/s390/include/asm/cio.h
>> @@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
>>  void channel_subsystem_reinit(void);
>>  extern void css_schedule_reprobe(void);
>>
>> +extern void *cio_dma_zalloc(size_t size);
>> +extern void cio_dma_free(void *cpu_addr, size_t size);
>> +extern struct device *cio_get_dma_css_dev(void);
>> +
>> +struct gen_pool;
>
> That forward declaration is a bit ugly... I guess the alternative was
> include hell?

That's an easy one.

 #include 
+#include 
 #include 

>> +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
>> +			size_t size);
>> +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size);
>> +void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
>> +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
>> +
>>  /* Function from drivers/s390/cio/chsc.c */
>>  int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
>>  int chsc_sstpi(void *page, void *result, size_t size);
>> diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
>> index aea502922646..b97618497848 100644
>> --- a/drivers/s390/cio/css.c
>> +++ b/drivers/s390/cio/css.c
>> @@ -20,6 +20,8 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include 
>> +#include 
>>  #include 
>>  #include 
>>
>> @@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
>>  	INIT_WORK(&sch->todo_work, css_sch_todo);
>>  	sch->dev.release = &css_subchannel_release;
>>  	device_initialize(&sch->dev);
>
> It might be helpful to add a comment why you use 31 bit here...

@Halil, please let me know what comment you prefer here...

>> +	sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
>> +	sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
>>  	return sch;
>>
>>  err:
>> @@ -899,6 +903,8 @@ static int __init setup_css(int nr)
>>  	dev_set_name(&css->device, "css%x", nr);
>>  	css->device.groups = cssdev_attr_groups;
>>  	css->device.release = channel_subsystem_release;
>
> ...and 64 bit here.

and here.

>> +	css->device.coherent_dma_mask = DMA_BIT_MASK(64);
>> +	css->device.dma_mask = &css->device.coherent_dma_mask;
>>
>>  	mutex_init(&css->mutex);
>>  	css->cssid = chsc_get_cssid(nr);
>
> (...)
>
>> @@ -1059,16 +1168,19 @@ static int __init css_bus_init(void)
>>  	if (ret)
>>  		goto out_unregister;
>>  	ret = register_pm_notifier(&css_power_notifier);
>> -	if (ret) {
>> -		unregister_reboot_notifier(&css_reboot_notifier);
>> -		goto out_unregister;
>> -	}
>> +	if (ret)
>> +		goto out_unregister_rn;
>> +	ret = cio_dma_pool_init();
>> +	if (ret)
>> +		goto out_unregister_rn;
>
> Don't you also need to unregister the pm notifier on failure here?

Mmh, that was the original intention. Thanks!

> Other than that, I noticed only cosmetic issues; seems reasonable to me.
>
>>  	css_init_done = 1;
>>
>>  	/* Enable default isc for I/O subchannels. */
>>  	isc_register(IO_SCH_ISC);
>>
>>  	return 0;
>> +out_unregister_rn:
>> +	unregister_reboot_notifier(&css_reboot_notifier);
>>  out_unregister:
>>  	while (i-- > 0) {
>>  		struct channel_subsystem *css = channel_subsystems[i];

Thanks,
Michael



Re: [PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-06-03 Thread Cornelia Huck
On Wed, 29 May 2019 14:26:51 +0200
Michael Mueller  wrote:

> From: Halil Pasic 
> 
> To support protected virtualization cio will need to make sure the
> memory used for communication with the hypervisor is DMA memory.
> 
> Let us introduce one global pool for cio.
> 
> Our DMA pools are implemented as a gen_pool backed with DMA pages. The
> idea is to avoid each allocation effectively wasting a page, as we
> typically allocate much less than PAGE_SIZE.
> 
> Signed-off-by: Halil Pasic 
> Reviewed-by: Sebastian Ott 
> Signed-off-by: Michael Mueller 
> ---
>  arch/s390/Kconfig   |   1 +
>  arch/s390/include/asm/cio.h |  11 
>  drivers/s390/cio/css.c  | 120 ++--
>  3 files changed, 128 insertions(+), 4 deletions(-)

(...)

> diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
> index 1727180e8ca1..43c007d2775a 100644
> --- a/arch/s390/include/asm/cio.h
> +++ b/arch/s390/include/asm/cio.h
> @@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
>  void channel_subsystem_reinit(void);
>  extern void css_schedule_reprobe(void);
>  
> +extern void *cio_dma_zalloc(size_t size);
> +extern void cio_dma_free(void *cpu_addr, size_t size);
> +extern struct device *cio_get_dma_css_dev(void);
> +
> +struct gen_pool;

That forward declaration is a bit ugly... I guess the alternative was
include hell?

> +void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
> + size_t size);
> +void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size);
> +void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
> +struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
> +
>  /* Function from drivers/s390/cio/chsc.c */
>  int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
>  int chsc_sstpi(void *page, void *result, size_t size);
> diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
> index aea502922646..b97618497848 100644
> --- a/drivers/s390/cio/css.c
> +++ b/drivers/s390/cio/css.c
> @@ -20,6 +20,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>  #include 
>  #include 
>  
> @@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct 
> subchannel_id schid,
>   INIT_WORK(&sch->todo_work, css_sch_todo);
>   sch->dev.release = &css_subchannel_release;
>   device_initialize(&sch->dev);

It might be helpful to add a comment why you use 31 bit here...

> + sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
> + sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
>   return sch;
>  
>  err:
> @@ -899,6 +903,8 @@ static int __init setup_css(int nr)
>   dev_set_name(&css->device, "css%x", nr);
>   css->device.groups = cssdev_attr_groups;
>   css->device.release = channel_subsystem_release;

...and 64 bit here.

> + css->device.coherent_dma_mask = DMA_BIT_MASK(64);
> + css->device.dma_mask = &css->device.coherent_dma_mask;
>  
>   mutex_init(&css->mutex);
>   css->cssid = chsc_get_cssid(nr);

(...)

> @@ -1059,16 +1168,19 @@ static int __init css_bus_init(void)
>   if (ret)
>   goto out_unregister;
>   ret = register_pm_notifier(&css_power_notifier);
> - if (ret) {
> - unregister_reboot_notifier(&css_reboot_notifier);
> - goto out_unregister;
> - }
> + if (ret)
> + goto out_unregister_rn;
> + ret = cio_dma_pool_init();
> + if (ret)
> + goto out_unregister_rn;

Don't you also need to unregister the pm notifier on failure here?

Other than that, I noticed only cosmetic issues; seems reasonable to me.

>   css_init_done = 1;
>  
>   /* Enable default isc for I/O subchannels. */
>   isc_register(IO_SCH_ISC);
>  
>   return 0;
> +out_unregister_rn:
> + unregister_reboot_notifier(&css_reboot_notifier);
>  out_unregister:
>   while (i-- > 0) {
>   struct channel_subsystem *css = channel_subsystems[i];



[PATCH v3 2/8] s390/cio: introduce DMA pools to cio

2019-05-29 Thread Michael Mueller
From: Halil Pasic 

To support protected virtualization cio will need to make sure the
memory used for communication with the hypervisor is DMA memory.

Let us introduce one global pool for cio.

Our DMA pools are implemented as a gen_pool backed with DMA pages. The
idea is to avoid each allocation effectively wasting a page, as we
typically allocate much less than PAGE_SIZE.

Signed-off-by: Halil Pasic 
Reviewed-by: Sebastian Ott 
Signed-off-by: Michael Mueller 
---
 arch/s390/Kconfig   |   1 +
 arch/s390/include/asm/cio.h |  11 
 drivers/s390/cio/css.c  | 120 ++--
 3 files changed, 128 insertions(+), 4 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 88d8355b7bf7..2a245b56db8b 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -191,6 +191,7 @@ config S390
select ARCH_HAS_SCALED_CPUTIME
select HAVE_NMI
select SWIOTLB
+   select GENERIC_ALLOCATOR
 
 
 config SCHED_OMIT_FRAME_POINTER
diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
index 1727180e8ca1..43c007d2775a 100644
--- a/arch/s390/include/asm/cio.h
+++ b/arch/s390/include/asm/cio.h
@@ -328,6 +328,17 @@ static inline u8 pathmask_to_pos(u8 mask)
 void channel_subsystem_reinit(void);
 extern void css_schedule_reprobe(void);
 
+extern void *cio_dma_zalloc(size_t size);
+extern void cio_dma_free(void *cpu_addr, size_t size);
+extern struct device *cio_get_dma_css_dev(void);
+
+struct gen_pool;
+void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
+   size_t size);
+void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size);
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
+struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
+
 /* Function from drivers/s390/cio/chsc.c */
 int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
 int chsc_sstpi(void *page, void *result, size_t size);
diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
index aea502922646..b97618497848 100644
--- a/drivers/s390/cio/css.c
+++ b/drivers/s390/cio/css.c
@@ -20,6 +20,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 #include 
 #include 
 
@@ -224,6 +226,8 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
INIT_WORK(&sch->todo_work, css_sch_todo);
sch->dev.release = &css_subchannel_release;
device_initialize(&sch->dev);
+   sch->dev.coherent_dma_mask = DMA_BIT_MASK(31);
+   sch->dev.dma_mask = &sch->dev.coherent_dma_mask;
return sch;
 
 err:
@@ -899,6 +903,8 @@ static int __init setup_css(int nr)
dev_set_name(&css->device, "css%x", nr);
css->device.groups = cssdev_attr_groups;
css->device.release = channel_subsystem_release;
+   css->device.coherent_dma_mask = DMA_BIT_MASK(64);
+   css->device.dma_mask = &css->device.coherent_dma_mask;
 
mutex_init(&css->mutex);
css->cssid = chsc_get_cssid(nr);
@@ -1018,6 +1024,109 @@ static struct notifier_block css_power_notifier = {
.notifier_call = css_power_event,
 };
 
+#define  CIO_DMA_GFP (GFP_KERNEL | __GFP_ZERO)
+static struct gen_pool *cio_dma_pool;
+
+/* Currently cio supports only a single css */
+struct device *cio_get_dma_css_dev(void)
+{
+   return &channel_subsystems[0]->device;
+}
+
+struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages)
+{
+   struct gen_pool *gp_dma;
+   void *cpu_addr;
+   dma_addr_t dma_addr;
+   int i;
+
+   gp_dma = gen_pool_create(3, -1);
+   if (!gp_dma)
+   return NULL;
+   for (i = 0; i < nr_pages; ++i) {
+   cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr,
+ CIO_DMA_GFP);
+   if (!cpu_addr)
+   return gp_dma;
+   gen_pool_add_virt(gp_dma, (unsigned long) cpu_addr,
+ dma_addr, PAGE_SIZE, -1);
+   }
+   return gp_dma;
+}
+
+static void __gp_dma_free_dma(struct gen_pool *pool,
+ struct gen_pool_chunk *chunk, void *data)
+{
+   size_t chunk_size = chunk->end_addr - chunk->start_addr + 1;
+
+   dma_free_coherent((struct device *) data, chunk_size,
+(void *) chunk->start_addr,
+(dma_addr_t) chunk->phys_addr);
+}
+
+void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev)
+{
+   if (!gp_dma)
+   return;
+   /* this is quite ugly but no better idea */
+   gen_pool_for_each_chunk(gp_dma, __gp_dma_free_dma, dma_dev);
+   gen_pool_destroy(gp_dma);
+}
+
+static int cio_dma_pool_init(void)
+{
+   /* No need to free up the resources: compiled in */
+   cio_dma_pool = cio_gp_dma_create(cio_get_dma_css_dev(), 1);
+   if (!cio_dma_pool)
+   return -ENOMEM;
+   return 0;
+}
+