RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-17 Thread Bounine, Alexandre
On Mon, Oct 17, 2011 at 1:01 PM, Vinod Koul  wrote:
> 
> On Mon, 2011-10-17 at 21:22 +0530, Jassi Brar wrote:
> > On 15 October 2011 23:05, Vinod Koul  wrote:
> >
> > > Another alternate approach could be to add one more argument to the
> > > prep_slave_sg API which allows us to pass additional runtime-specific
> > > parameters. This can be NULL and unused for existing drivers and used in
> > > RIO and any future subsystems which want to use dmaengine.
> > > Thoughts...?
> > >
> > That doesn't sound much different than passing the data via
> > dma_chan.private during prep_slave_sg. Only now we add to
> > the number of arguments.
> Yes, agreed, but we already decided and marked it as deprecated.
> 
> > And then either this argument would be RapidIO-specific (unfair
> > to other users) or generic. In the latter case, what would it look like?
> My initial thoughts were to add an argument which only the specific dmac
> knows how to decode and which is filled by its client. As I said, existing
> users and people who don't require dynamic information wouldn't bother.
> The pros:
>  - allows us to support RIO-like subsystems where one needs to pass
> subsystem-specific information for programming the dmac
>  - doesn't require us to add subsystem-specific stuff in dmaengine;
> today it's RIO, tomorrow some other folks may want to add more. We want to
> maintain dmaengine as a generic framework, while also trying to support
> multiple audiences.
> Cons:
>  - there is no guarantee; the dmac expects foo and clients pass bar
> 
This was my concern, which I mentioned at the beginning of this thread:
how to differentiate between different types of slave channels. At that
time we decided that channel filtering routines should be sufficient
to properly identify a suitable channel. And this is true - if the RIO
filter detects a registered RapidIO-capable DMA channel, it should be safe
to pass a parameter specific to the RIO client implementation.

If we want to protect pure slave channels from mixing with some specific
implementations, we may consider adding new values for dma_transaction_type
as was done for RapidIO (DMA_RAPIDIO, DMA_FOO, DMA_BAR, etc.).
This way we may keep a single API with the extra argument and differentiate
between "flavors" of DMA_SLAVE to make decisions about use of that
extra parameter. E.g. channels registered as DMA_SLAVE will ignore
the new parameter (pure slave mode), while DMA_RAPIDIO and DMA_FOO use it
according to their "flavor".
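For illustration, such a "flavored" extra argument could look roughly like
this (a sketch only - the extended prototype and the cap-mask check are
assumptions, not part of any posted patch):

    /*
     * Hypothetical extended prototype - a sketch, not the posted patch.
     * A pure DMA_SLAVE channel ignores @context; a DMA_RAPIDIO-flavored
     * channel decodes it as struct rio_dma_ext.
     */
    struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
            struct dma_chan *chan, struct scatterlist *sgl,
            unsigned int sg_len, enum dma_data_direction direction,
            unsigned long flags, void *context);

    /* Inside a RapidIO-capable dmac's prep callback: */
    if (dma_has_cap(DMA_RAPIDIO, chan->device->cap_mask) && context) {
            struct rio_dma_ext *rext = context;
            /* program rext->destid, rext->rio_addr, rext->wr_type ... */
    }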

Yes, I am planning to drop that flag from the current RIO implementation,
but it may take on a new meaning if prep_slave_sg() gets the extra argument.

> I am okay if we have an alternate way to support this with the above goal :)
> 
> --
> ~Vinod



RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-17 Thread Bounine, Alexandre
On Mon, Oct 17, 2011 at 11:53 AM, Jassi Brar wrote:
> 
> On 15 October 2011 23:05, Vinod Koul  wrote:
> 
> > Another alternate approach could be to add one more argument to the
> > prep_slave_sg API which allows us to pass additional runtime-specific
> > parameters. This can be NULL and unused for existing drivers and used in
> > RIO and any future subsystems which want to use dmaengine.
> > Thoughts...?
> >
> That doesn't sound much different than passing the data via
> dma_chan.private during prep_slave_sg. Only now we add to
> the number of arguments.
One dma_chan may be used by multiple drivers requesting data transfers.
In this case we will need a lock to keep the dma_chan and its private
member coupled together.
If we consider this coupling a valid approach, we may think about adding
a lock member to the dma_chan structure, as sketched below. This would
make locking more efficient for configurations with multiple DMA channels.
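As a rough sketch of that idea (the lock member is an assumption made for
illustration - dma_chan has no such member today):

    /* Hypothetical addition to struct dma_chan - illustration only */
    struct dma_chan {
            ...
            void *private;
            struct mutex private_lock;  /* couples private with the prep call */
    };

    /* A client would then do: */
    mutex_lock(&dchan->private_lock);
    dchan->private = &rio_ext;
    txd = dchan->device->device_prep_slave_sg(dchan, sgl, sg_len,
                                              direction, flags);
    mutex_unlock(&dchan->private_lock);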

> And then either this argument would be RapidIO-specific (unfair
> to other users) or generic. In the latter case, what would it look like?
It should not be RapidIO-specific - just a transfer-specific context that
will be interpreted by the participants. Given that we have a channel
filtering mechanism, there is little chance of wrongful usage of
that parameter.

Alex.


Re: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-17 Thread Vinod Koul
On Mon, 2011-10-17 at 21:22 +0530, Jassi Brar wrote:
> On 15 October 2011 23:05, Vinod Koul  wrote:
> 
> > Another alternate approach could be to add one more argument to the
> > prep_slave_sg API which allows us to pass additional runtime-specific
> > parameters. This can be NULL and unused for existing drivers and used in
> > RIO and any future subsystems which want to use dmaengine.
> > Thoughts...?
> >
> That doesn't sound much different than passing the data via
> dma_chan.private during prep_slave_sg. Only now we add to
> the number of arguments.
Yes, agreed, but we already decided and marked it as deprecated.

> And then either this argument would be RapidIO-specific (unfair
> to other users) or generic. In the latter case, what would it look like?
My initial thoughts were to add an argument which only the specific dmac
knows how to decode and which is filled by its client. As I said, existing
users and people who don't require dynamic information wouldn't bother.
The pros:
 - allows us to support RIO-like subsystems where one needs to pass
subsystem-specific information for programming the dmac
 - doesn't require us to add subsystem-specific stuff in dmaengine;
today it's RIO, tomorrow some other folks may want to add more. We want to
maintain dmaengine as a generic framework, while also trying to support
multiple audiences.
Cons:
 - there is no guarantee; the dmac expects foo and clients pass bar

I am okay if we have an alternate way to support this with the above goal :)

-- 
~Vinod



Re: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-17 Thread Jassi Brar
On 15 October 2011 23:05, Vinod Koul  wrote:

> Another alternate approach could be to add one more argument to the
> prep_slave_sg API which allows us to pass additional runtime-specific
> parameters. This can be NULL and unused for existing drivers and used in
> RIO and any future subsystems which want to use dmaengine.
> Thoughts...?
>
That doesn't sound much different than passing the data via
dma_chan.private during prep_slave_sg. Only now we add to
the number of arguments.
And then either this argument would be RapidIO-specific (unfair
to other users) or generic. In the latter case, what would it look like?


RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-17 Thread Bounine, Alexandre
On Sat, Oct 15, 2011 at 1:36 PM, Vinod Koul  wrote:
> 
> > ... skip ...
> > > >
> > > > Second, having the ability to pass private target information allows
> > > > me to pass information about the remote target device on a
> > > > per-transfer basis.
> > > Okay, then why not pass the dma address and make your dma driver
> > > transparent (I saw you passed the RIO address, IIRC 64+2 bits)?
> > > Currently using dma_slave_config we pass channel-specific information;
> > > things like peripheral address and config typically don't change
> > > between transfers, and if you have some controller-specific properties
> > > you can pass them by embedding dma_slave_config in your specific
> > > structure. Worst case, you can configure the slave before every prepare.
> >
> > In addition to the address on the target RIO device I need to pass the
> > corresponding device destination ID. With a single channel capable of
> > transferring data between local memory and different RapidIO devices I
> > have to pass device-specific information on a per-transfer basis
> > (destID + 66-bit address + type of write ops, etc.).
> >
> > Even having 8 channels (each set for a specific target) will not help me
> > with full support of a RapidIO network where I have 8- or 16-bit destIDs
> > (256 or 64K devices respectively).
> >
> > A RapidIO controller device (and its DMA component) may provide services
> > to multiple device drivers which service individual devices on the
> > RapidIO network (similar to PCIe having multiple peripherals, but not
> > using the memory-mapped model - destID defines the route to a device).
> >
> > A generic RapidIO controller may have only one DMA channel which services
> > all target devices forming the network. We may have multiple concurrent
> > data transfer requests for different devices.
> >
> > The parallel discussion with Dan touches the same post-config approach
> > and another option. I like Dan's idea of having a RIO-specific version of
> > prep_sg(). At the same time my current implementation of
> > rio_dma_prep_slave_sg() with appropriate locking added may do the job as
> > well and keeps the DMA part within the existing API (DMA_RAPIDIO removed).
> Thanks, this helps me understand your model.
> 
> Per the above you need destID, a 66-bit address and the type of write to
> be passed for each transfer. Looking at your rio_dma_data, I can't see the
> destID.
> 
The destID is extracted by rio_dma_prep_slave_sg() from the rio_dev
parameter and passed down to prep_slave_sg() as part of the rio_dma_ext
structure (as dchan->private).

> Nevertheless we have a few ways to solve this: pass this using
> dma_slave_config _every-time_ before doing a prep_slave_sg, or as Dan
> pointed out create a RIO-specific version. On principle I am not for
> adding subsystem-specific stuff, even more so when we are talking about
> converging and cutting down on the dmaengine API :)
> 
Agreed. This is why I am trying to find my way within the boundaries of the
DMA_SLAVE API first.

> Another alternate approach could be to add one more argument to the
> prep_slave_sg API which allows us to pass additional runtime-specific
> parameters. This can be NULL and unused for existing drivers and used in
> RIO and any future subsystems which want to use dmaengine.
> Thoughts...?
> 
This is exactly what I need, and it may be useful for future systems.
It makes DMA_SLAVE prepared for variations in device addressing.

Plus, this will make porting RIO DMA to previous kernel versions
relatively easy for those who prefer/require that.

Based on the fact that you have started the transition to the new DMA
transfer direction flags, it may be a good time to add that extra argument
to prep_slave_sg().

Alex.



RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-15 Thread Vinod Koul
On Fri, 2011-10-07 at 12:08 -0700, Bounine, Alexandre wrote:
> Vinod Koul wrote:
> > 
> > On Mon, 2011-10-03 at 09:52 -0700, Bounine, Alexandre wrote:
Adding Jassi to this, and sorry for the late reply...

> ... skip ...
> > >
> > > Second, having the ability to pass private target information allows
> > > me to pass information about the remote target device on a per-transfer
> > > basis.
> > Okay, then why not pass the dma address and make your dma driver
> > transparent (I saw you passed the RIO address, IIRC 64+2 bits)?
> > Currently using dma_slave_config we pass channel-specific information;
> > things like peripheral address and config typically don't change between
> > transfers, and if you have some controller-specific properties you can
> > pass them by embedding dma_slave_config in your specific structure.
> > Worst case, you can configure the slave before every prepare.
> 
> In addition to the address on the target RIO device I need to pass the
> corresponding device destination ID. With a single channel capable of
> transferring data between local memory and different RapidIO devices I
> have to pass device-specific information on a per-transfer basis (destID +
> 66-bit address + type of write ops, etc.).
> 
> Even having 8 channels (each set for a specific target) will not help me
> with full support of a RapidIO network where I have 8- or 16-bit destIDs
> (256 or 64K devices respectively).
> 
> A RapidIO controller device (and its DMA component) may provide services
> to multiple device drivers which service individual devices on the RapidIO
> network (similar to PCIe having multiple peripherals, but not using the
> memory-mapped model - destID defines the route to a device).
> 
> A generic RapidIO controller may have only one DMA channel which services
> all target devices forming the network. We may have multiple concurrent
> data transfer requests for different devices.
> 
> The parallel discussion with Dan touches the same post-config approach and
> another option. I like Dan's idea of having a RIO-specific version of
> prep_sg(). At the same time my current implementation of
> rio_dma_prep_slave_sg() with appropriate locking added may do the job as
> well and keeps the DMA part within the existing API (DMA_RAPIDIO removed).
Thanks, this helps me understand your model.

Per the above you need destID, a 66-bit address and the type of write to be
passed for each transfer. Looking at your rio_dma_data, I can't see the
destID.

Nevertheless we have a few ways to solve this: pass this using
dma_slave_config _every-time_ before doing a prep_slave_sg, or as Dan
pointed out create a RIO-specific version. On principle I am not for adding
subsystem-specific stuff, even more so when we are talking about converging
and cutting down on the dmaengine API :)

Another alternate approach could be to add one more argument to the
prep_slave_sg API which allows us to pass additional runtime-specific
parameters. This can be NULL and unused for existing drivers and used in
RIO and any future subsystems which want to use dmaengine.
Thoughts...?

-- 
~Vinod



RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-07 Thread Bounine, Alexandre
Vinod Koul wrote:
> 
> On Mon, 2011-10-03 at 09:52 -0700, Bounine, Alexandre wrote:
> >
> > My concern here is that other subsystems may use/request DMA_SLAVE
> > channel(s) as well and wrongfully acquire one that belongs to RapidIO.
> > In this case separation with another flag may make sense - it is
> > possible to have a system that uses RapidIO and another "traditional"
> > DMA slave channel.
> Nope, that will never happen in its current form.
> Every controller driver today "magically" ensures that it doesn't get
> any other DMA controller's channel. We use a filter function for that.
> Although it is not clean yet and we are working to fix that, that's
> another discussion.
> Even specifying plain DMA_SLAVE should work if you code your filter
> function properly :)

The RIO filter checks for a DMA device associated with the RapidIO mport
object. This should work reliably from the RapidIO side. It also verifies
that the returned DMA channel is capable of servicing the corresponding RIO
device (in a system that has more than one RIO controller).

... skip ...
> >
> > Second, having the ability to pass private target information allows me
> > to pass information about the remote target device on a per-transfer
> > basis.
> Okay, then why not pass the dma address and make your dma driver
> transparent (I saw you passed the RIO address, IIRC 64+2 bits)?
> Currently using dma_slave_config we pass channel-specific information;
> things like peripheral address and config typically don't change between
> transfers, and if you have some controller-specific properties you can
> pass them by embedding dma_slave_config in your specific structure.
> Worst case, you can configure the slave before every prepare.

In addition to the address on the target RIO device I need to pass the
corresponding device destination ID. With a single channel capable of
transferring data between local memory and different RapidIO devices I have
to pass device-specific information on a per-transfer basis (destID +
66-bit address + type of write ops, etc.).

Even having 8 channels (each set for a specific target) will not help me
with full support of a RapidIO network where I have 8- or 16-bit destIDs
(256 or 64K devices respectively).

A RapidIO controller device (and its DMA component) may provide services to
multiple device drivers which service individual devices on the RapidIO
network (similar to PCIe having multiple peripherals, but not using the
memory-mapped model - destID defines the route to a device).

A generic RapidIO controller may have only one DMA channel which services
all target devices forming the network. We may have multiple concurrent
data transfer requests for different devices.

The parallel discussion with Dan touches the same post-config approach and
another option. I like Dan's idea of having a RIO-specific version of
prep_sg(). At the same time my current implementation of
rio_dma_prep_slave_sg() with appropriate locking added may do the job as
well and keeps the DMA part within the existing API (DMA_RAPIDIO removed).

Alex.



RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-07 Thread Bounine, Alexandre
Dan J Williams wrote:
> 
> On Mon, Oct 3, 2011 at 9:52 AM, Bounine, Alexandre wrote:
> >
> > My concern here is that other subsystems may use/request DMA_SLAVE
> > channel(s) as well and wrongfully acquire one that belongs to RapidIO.
> > In this case separation with another flag may make sense - it is
> > possible to have a system that uses RapidIO and another "traditional"
> > DMA slave channel.
> >
> > This is why I put that proposed interface up for discussion instead of
> > keeping everything inside of RapidIO.
> > If you think that the situation above will not happen I will be happy to
> > remove that subsystem knowledge from the dmaengine files.
> 
> I don't think that situation will happen, even on the same arch I
> don't think DMA_SLAVE is ready to be enabled generically.  So you're
> probably safe here.

Thank you for the confirmation. I will rely on DMA_SLAVE only, as in my
most recent version.
 
> >
> > I agree, this is not a case of "pure" slave transfers but the existing
> > DMA_SLAVE interface fits well into the RapidIO operations.
> >
> > First, we have only one memory-mapped location on the host side. We
> > transfer data to/from a location that is not mapped into memory on the
> > same side.
> >
> > Second, having the ability to pass private target information allows me
> > to pass information about the remote target device on a per-transfer
> > basis.
> 
> ...but there is no expectation that these engines will be generically
> useful to other subsystems.  To be clear you are just using dmaengine
> as a match making service for your dma providers to clients, right?

Not only that. As an example, I am offering the other defined DMA slave
mode callbacks in the tsi721_dma driver. I think that RapidIO-specific DMA
channels should follow the unified DMA engine interface. I expect the other
DMA-defined functions to be used directly, without RapidIO specifics. E.g.
tx_submit, tx_status and issue_pending are implemented in the tsi721_dma
driver. A similar approach may be applied to the fsl_dma driver, which is
capable of RapidIO data transfers on platforms with a RapidIO interface.

> > ... skip ...
> > A RapidIO network usually has more than one device attached to it, and a
> > single DMA channel may service data transfers to/from several devices.
> > In this case device information should be passed on a per-transfer basis.
> >
> 
> You could maybe do what async_tx does and just apply the extra context
> after the ->prep(), but before ->submit(), but it looks like that
> context is critical to setting up the operation.

Yes, it is possible to do, but it does not look as safe and effective as
passing all related parameters in a single call. This would require RIO DMA
drivers to hold a "half-cooked" descriptor chain until the next portion of
information arrives, and it would require the prep and post-configure calls
to be coupled together by locks. In this situation, prep with private data
looks safer and more effective.

> This looks pretty dangerous without knowing the other details.  What
> prevents another thread from changing dchan->private before the
> prep routine reads it?

Yes, locking is needed in rio_dma_prep_slave_sg() around the call to the
prep routine. After all internal descriptors are set, the dchan can be
submitted with new private content.
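For illustration, rio_dma_prep_slave_sg() with that locking added could look
roughly as follows (the mutex itself is an assumption - it is not in the
posted patch, and a per-mport lock would likely be used in practice):

    static DEFINE_MUTEX(rio_prep_lock);     /* hypothetical */

    struct dma_async_tx_descriptor *rio_dma_prep_slave_sg(struct rio_dev *rdev,
            struct dma_chan *dchan, struct rio_dma_data *data,
            enum dma_data_direction direction, unsigned long flags)
    {
            struct dma_async_tx_descriptor *txd;
            struct rio_dma_ext rio_ext;

            rio_ext.destid = rdev->destid;
            rio_ext.rio_addr_u = data->rio_addr_u;
            rio_ext.rio_addr = data->rio_addr;
            rio_ext.wr_type = data->wr_type;

            /* Hold the lock across private and prep so no other thread can
             * change dchan->private before the prep routine reads it. */
            mutex_lock(&rio_prep_lock);
            dchan->private = &rio_ext;
            txd = dchan->device->device_prep_slave_sg(dchan, data->sg,
                                                      data->sg_len,
                                                      direction, flags);
            mutex_unlock(&rio_prep_lock);

            return txd;
    }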

> 
> DMA_SLAVE assumes a static relationship between dma device and
> slave-device, instead this rapid-io case is a per-operation slave
> context.  It sounds like you really do want a new dma operation type
> that is just an extra-parameter version of the current
> ->device_prep_slave_sg.  But now we're getting into
> dma_transaction_type proliferation again.  This is probably more fuel
> for the fire of creating a structure transfer template that defines
> multiple possible operation types and clients just fill in the fields
> that they need, rather than adding new operation types for every
> possible permutation of copy operation (DMA_SLAVE, DMA_MEMCPY,
> DMA_CYCLIC, DMA_SG, DMA_INTERLEAVE, DMA_RAPIDIO), it's getting to be a
> bit much.

Exactly. I need the ability to pass a private parameter to the prep
function.
I chose to adopt the DMA engine interface instead of adding one completely
internal to RapidIO because having one common API makes much more sense.
Passing a user-defined data structure when building an individual request
would not only make my life easier but would probably help some other
drivers as well.
 
> As a starting point, since this is the first driver proposal to have
> per-operation slave context and there are other rapid-io specific
> considerations, maybe it's ok to have a rio_dma_prep_slave_sg() that
> does something like:
> 
> struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);
> 
> bdma_chan->prep_sg(rext, sgl, sg_len, direction, flags);
> 
> Thoughts?

This brings HW dependence into the generic RapidIO layer.
I would like to see it as a generic interface that is available
to other RIO-capable devices (e.g. Freescale's 85xx and QorIQ).

I would probably make the ->prep_sg callback part of the rio_mport
structure.
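Roughly along these lines (the callback member and its signature are
illustrative only, not an existing interface):

    /* Hypothetical mport-level callback - a sketch of the idea */
    struct rio_mport {
            ...
            struct dma_device dma;
            struct dma_async_tx_descriptor *(*prep_sg)(struct dma_chan *dchan,
                            struct rio_dma_ext *rext, struct scatterlist *sgl,
                            unsigned int sg_len, enum dma_data_direction dir,
                            unsigned long flags);
    };

    /* The generic layer then stays HW-independent: */
    txd = rdev->net->hport->prep_sg(dchan, &rio_ext, data->sg, data->sg_len,
                                    direction, flags);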

RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-06 Thread Vinod Koul
On Mon, 2011-10-03 at 09:52 -0700, Bounine, Alexandre wrote:
> Vinod Koul wrote:
> > 
> > On Fri, 2011-09-30 at 17:38 -0400, Alexandre Bounine wrote:
> > Please CC *maintainers* on your patches, get_maintainers.pl will tell
> > you who. Adding Dan here
> 
> Based on https://lkml.org/lkml/2011/2/14/67 and the use of DMA_SLAVE in
> this patch I decided that you are the best match of the two and there is
> no reason to disturb Dan ;)
I don't think he minds :) and we can all benefit from his wisdom

> 
> > > Adds DMA Engine framework support into the RapidIO subsystem.
> > > Uses the DMA Engine DMA_SLAVE interface to generate data transfers
> > > to/from remote RapidIO target devices. Uses a scatterlist to describe
> > > the local data buffer and the dma_chan.private member to pass
> > > target-specific information. Supports only a flat data buffer on the
> > > remote side.
> > The way dmaengine works today is that it doesn't know anything about the
> > client subsystem. But this brings subsystem details into dmaengine,
> > which I don't agree with yet.
> > Why can't we abstract this out??
> 
> The only thing that brings subsystem knowledge into dmaengine is DMA_RAPIDIO 
> flag.
> I am actually on the fence about this. From RapidIO side point of view I do 
> not
> need that flag at all.
> RapidIO uses a filter routine that is sufficient to identify dmaengine 
> channels
> associated with specific RapidIO mport. Use of DMA_SLAVE flag is safe here.
> Use of private member of dma_chan is "private" business of RapidIO and does
> not break anything. 
> 
> My concern here is that other subsystems may use/request DMA_SLAVE
> channel(s) as well and wrongfully acquire one that belongs to RapidIO. In
> this case separation with another flag may make sense - it is possible to
> have a system that uses RapidIO and another "traditional" DMA slave
> channel.
Nope, that will never happen in its current form.
Every controller driver today "magically" ensures that it doesn't get
any other DMA controller's channel. We use a filter function for that.
Although it is not clean yet and we are working to fix that, that's
another discussion.
Even specifying plain DMA_SLAVE should work if you code your filter
function properly :)

> 
> This is why I put that proposed interface up for discussion instead of
> keeping everything inside of RapidIO.
> If you think that the situation above will not happen I will be happy to
> remove that subsystem knowledge from the dmaengine files.
> 
> My most recent test implementation runs without the DMA_RAPIDIO flag,
> though.
> 
> > 
> > After going thru the patch, I do not believe that this is a case of
> > SLAVE transfers, Dan can you please take a look at this patch
> 
> I agree, this is not a case of "pure" slave transfers but the existing
> DMA_SLAVE interface fits well into the RapidIO operations.
> 
> First, we have only one memory-mapped location on the host side. We
> transfer data to/from a location that is not mapped into memory on the
> same side.
> 
> Second, having the ability to pass private target information allows me to
> pass information about the remote target device on a per-transfer basis.
Okay, then why not pass the dma address and make your dma driver
transparent (I saw you passed the RIO address, IIRC 64+2 bits)?
Currently using dma_slave_config we pass channel-specific information;
things like peripheral address and config typically don't change between
transfers, and if you have some controller-specific properties you can
pass them by embedding dma_slave_config in your specific structure.
Worst case, you can configure the slave before every prepare.
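For reference, the embedding suggestion would look something like this
sketch (the rio_slave_config structure and its fields are illustrative,
not an existing API):

    /* Controller-specific wrapper around the generic slave config */
    struct rio_slave_config {
            struct dma_slave_config cfg;    /* generic part */
            u16 destid;                     /* RIO-specific extras */
            u64 rio_addr;
            u8  rio_addr_u;
            enum rio_write_type wr_type;
    };

    struct rio_slave_config rcfg = { /* filled per transfer */ };

    /* Worst case, the client reconfigures before every prepare; the dmac
     * recovers the wrapper with container_of() on the cfg member. */
    dmaengine_slave_config(dchan, &rcfg.cfg);
    txd = dchan->device->device_prep_slave_sg(dchan, sgl, sg_len,
                                              direction, flags);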


-- 
~Vinod



Re: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-05 Thread Williams, Dan J
On Mon, Oct 3, 2011 at 9:52 AM, Bounine, Alexandre wrote:
> Vinod Koul wrote:
>>
>> On Fri, 2011-09-30 at 17:38 -0400, Alexandre Bounine wrote:
>> Please CC *maintainers* on your patches, get_maintainers.pl will tell
>> you who. Adding Dan here
>
> Based on https://lkml.org/lkml/2011/2/14/67 and the use of DMA_SLAVE in
> this patch I decided that you are the best match of the two and there is
> no reason to disturb Dan ;)
>
>> > Adds DMA Engine framework support into the RapidIO subsystem.
>> > Uses the DMA Engine DMA_SLAVE interface to generate data transfers
>> > to/from remote RapidIO target devices. Uses a scatterlist to describe
>> > the local data buffer and the dma_chan.private member to pass
>> > target-specific information. Supports only a flat data buffer on the
>> > remote side.
>> The way dmaengine works today is that it doesn't know anything about the
>> client subsystem. But this brings subsystem details into dmaengine,
>> which I don't agree with yet.
>> Why can't we abstract this out??
>
> The only thing that brings subsystem knowledge into dmaengine is the
> DMA_RAPIDIO flag.
> I am actually on the fence about this. From the RapidIO point of view I do
> not need that flag at all.
> RapidIO uses a filter routine that is sufficient to identify dmaengine
> channels associated with a specific RapidIO mport. Use of the DMA_SLAVE
> flag is safe here.
> Use of the private member of dma_chan is the "private" business of RapidIO
> and does not break anything.
>
> My concern here is that other subsystems may use/request DMA_SLAVE
> channel(s) as well and wrongfully acquire one that belongs to RapidIO. In
> this case separation with another flag may make sense - it is possible to
> have a system that uses RapidIO and another "traditional" DMA slave
> channel.
>
> This is why I put that proposed interface up for discussion instead of
> keeping everything inside of RapidIO.
> If you think that the situation above will not happen I will be happy to
> remove that subsystem knowledge from the dmaengine files.

I don't think that situation will happen, even on the same arch I
don't think DMA_SLAVE is ready to be enabled generically.  So you're
probably safe here.

> My most recent test implementation runs without the DMA_RAPIDIO flag, though.
>
>>
>> After going thru the patch, I do not believe that this is a case of
>> SLAVE transfers, Dan can you please take a look at this patch
>
> I agree, this is not a case of "pure" slave transfers but the existing
> DMA_SLAVE interface fits well into the RapidIO operations.
>
> First, we have only one memory-mapped location on the host side. We
> transfer data to/from a location that is not mapped into memory on the
> same side.
>
> Second, having the ability to pass private target information allows me to
> pass information about the remote target device on a per-transfer basis.

...but there is no expectation that these engines will be generically
useful to other subsystems.  To be clear you are just using dmaengine
as a match making service for your dma providers to clients, right?

>
>>
>>
>> > Signed-off-by: Alexandre Bounine 
>> > Cc: Vinod Koul 
>> > Cc: Kumar Gala 
>> > Cc: Matt Porter 
>> > Cc: Li Yang 
>> > ---
>> >  drivers/dma/dmaengine.c   |    4 ++
> ... skip ...
>> > +#ifdef CONFIG_RAPIDIO_DMA_ENGINE
>> > +
>> > +#include 
>> > +
> ... skip ...
>> > + */
>> > +struct dma_async_tx_descriptor *rio_dma_prep_slave_sg(struct rio_dev *rdev,
>> > +   struct dma_chan *dchan, struct rio_dma_data *data,
>> > +   enum dma_data_direction direction, unsigned long flags)
>> > +{
>> > +   struct dma_async_tx_descriptor *txd = NULL;
>> > +   struct rio_dma_ext rio_ext;
>> > +
>> > +   rio_ext.destid = rdev->destid;
>> > +   rio_ext.rio_addr_u = data->rio_addr_u;
>> > +   rio_ext.rio_addr = data->rio_addr;
>> > +   rio_ext.wr_type = data->wr_type;
>> > +   dchan->private = &rio_ext;
>> > +
>> > +   txd = dchan->device->device_prep_slave_sg(dchan, data->sg, data->sg_len,
>> > +                                             direction, flags);
>> > +
>> > +   return txd;
>> > +}
>> > +EXPORT_SYMBOL_GPL(rio_dma_prep_slave_sg);
>> You should move the rdev and data to dma_slave_config, that way you
>> should be able to use the existing _prep_slave_sg function.
>
> A RapidIO network usually has more than one device attached to it, and a
> single DMA channel may service data transfers to/from several devices.
> In this case device information should be passed on a per-transfer basis.
>

You could maybe do what async_tx does and just apply the extra context
after the ->prep(), but before ->submit(), but it looks like that
context is critical to setting up the operation.

This looks pretty dangerous without knowing the other details.  What
prevents another thread from changing dchan->private before the
prep routine reads it?

DMA_SLAVE assumes a static relationship between dma device and
slave-device, instead this rapid-io case is a per-operation slave
context.  It sounds like you really do want a new dma operation type
that is just an extra-parameter version of the current
->device_prep_slave_sg.  But now we're getting into
dma_transaction_type proliferation again.  This is probably more fuel
for the fire of creating a structure transfer template that defines
multiple possible operation types and clients just fill in the fields
that they need, rather than adding new operation types for every
possible permutation of copy operation (DMA_SLAVE, DMA_MEMCPY,
DMA_CYCLIC, DMA_SG, DMA_INTERLEAVE, DMA_RAPIDIO), it's getting to be a
bit much.

As a starting point, since this is the first driver proposal to have
per-operation slave context and there are other rapid-io specific
considerations, maybe it's ok to have a rio_dma_prep_slave_sg() that
does something like:

struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);

bdma_chan->prep_sg(rext, sgl, sg_len, direction, flags);

Thoughts?

RE: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-03 Thread Bounine, Alexandre
Vinod Koul wrote:
> 
> On Fri, 2011-09-30 at 17:38 -0400, Alexandre Bounine wrote:
> Please CC *maintainers* on your patches, get_maintainers.pl will tell
> you who. Adding Dan here

Based on https://lkml.org/lkml/2011/2/14/67 and the use of DMA_SLAVE in this
patch I decided that you are the best match of the two and there is no
reason to disturb Dan ;)

> > Adds DMA Engine framework support into the RapidIO subsystem.
> > Uses the DMA Engine DMA_SLAVE interface to generate data transfers
> > to/from remote RapidIO target devices. Uses a scatterlist to describe the
> > local data buffer and the dma_chan.private member to pass target-specific
> > information. Supports only a flat data buffer on the remote side.
> The way dmaengine works today is that it doesn't know anything about the
> client subsystem. But this brings subsystem details into dmaengine,
> which I don't agree with yet.
> Why can't we abstract this out??

The only thing that brings subsystem knowledge into dmaengine is the
DMA_RAPIDIO flag.
I am actually on the fence about this. From the RapidIO point of view I do
not need that flag at all.
RapidIO uses a filter routine that is sufficient to identify dmaengine
channels associated with a specific RapidIO mport. Use of the DMA_SLAVE flag
is safe here.
Use of the private member of dma_chan is the "private" business of RapidIO
and does not break anything.

My concern here is that other subsystems may use/request DMA_SLAVE
channel(s) as well and wrongfully acquire one that belongs to RapidIO. In
this case separation with another flag may make sense - it is possible to
have a system that uses RapidIO and another "traditional" DMA slave channel.

This is why I put that proposed interface up for discussion instead of
keeping everything inside of RapidIO.
If you think that the situation above will not happen I will be happy to
remove that subsystem knowledge from the dmaengine files.

My most recent test implementation runs without the DMA_RAPIDIO flag,
though.

> 
> After going thru the patch, I do not believe that this is a case of
> SLAVE transfers, Dan can you please take a look at this patch

I agree, this is not a case of "pure" slave transfers but the existing
DMA_SLAVE interface fits well into the RapidIO operations.

First, we have only one memory-mapped location on the host side. We transfer
data to/from a location that is not mapped into memory on the same side.

Second, having the ability to pass private target information allows me to
pass information about the remote target device on a per-transfer basis.

> 
> 
> > Signed-off-by: Alexandre Bounine 
> > Cc: Vinod Koul 
> > Cc: Kumar Gala 
> > Cc: Matt Porter 
> > Cc: Li Yang 
> > ---
> >  drivers/dma/dmaengine.c   |4 ++
... skip ...
> > +#ifdef CONFIG_RAPIDIO_DMA_ENGINE
> > +
> > +#include 
> > +
... skip ...
> > + */
> > +struct dma_async_tx_descriptor *rio_dma_prep_slave_sg(struct rio_dev *rdev,
> > +   struct dma_chan *dchan, struct rio_dma_data *data,
> > +   enum dma_data_direction direction, unsigned long flags)
> > +{
> > +   struct dma_async_tx_descriptor *txd = NULL;
> > +   struct rio_dma_ext rio_ext;
> > +
> > +   rio_ext.destid = rdev->destid;
> > +   rio_ext.rio_addr_u = data->rio_addr_u;
> > +   rio_ext.rio_addr = data->rio_addr;
> > +   rio_ext.wr_type = data->wr_type;
> > +   dchan->private = &rio_ext;
> > +
> > +   txd = dchan->device->device_prep_slave_sg(dchan, data->sg, data->sg_len,
> > +                                             direction, flags);
> > +
> > +   return txd;
> > +}
> > +EXPORT_SYMBOL_GPL(rio_dma_prep_slave_sg);
> You should move the rdev and data to dma_slave_config, that way you
> should be able to use the existing _prep_slave_sg function.
 
A RapidIO network usually has more than one device attached to it, and a
single DMA channel may service data transfers to/from several devices.
In this case device information should be passed on a per-transfer basis.

> > +
> > +#endif /* CONFIG_RAPIDIO_DMA_ENGINE */
> > +
... skip ...
> > + *
> > + * Note: RapidIO specification defines write (NWRITE) and
> > + * write-with-response (NWRITE_R) data transfer operations.
> > + * Existing DMA controllers that service RapidIO may use one of these
> > + * operations for the entire data transfer or a combination where only
> > + * the last data packet requires a response.
> > + */
> > +enum rio_write_type {
> > +   RDW_DEFAULT,        /* default method used by DMA driver */
> > +   RDW_ALL_NWRITE,     /* all packets use NWRITE */
> > +   RDW_ALL_NWRITE_R,   /* all packets use NWRITE_R */
> > +   RDW_LAST_NWRITE_R,  /* last packet uses NWRITE_R, all others - NWRITE */
> > +};
> Why not use the current mechanism of specifying callback or ACK flags
> if you want a response or not.

The response is handled by the RapidIO hardware and ensures reliable
packet delivery when a response is used. The user may not need a callback
or ACK for the operation (in dmaengine terms), but error handling will be
initiated if there is no response from the target device.
 
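As a usage example, a client that wants hardware acknowledgement only on the
final packet could fill the descriptor like this (a sketch based on the
structures in this patch; scatterlist setup is elided):

    struct rio_dma_data data;

    data.sg = sgl;                      /* local scatterlist, already mapped */
    data.sg_len = sg_len;
    data.rio_addr = rio_addr;           /* lower part of the 66-bit address */
    data.rio_addr_u = rio_addr_u;       /* upper bits of the 66-bit address */
    data.wr_type = RDW_LAST_NWRITE_R;   /* NWRITE stream, NWRITE_R at the end */

    txd = rio_dma_prep_slave_sg(rdev, dchan, &data, DMA_TO_DEVICE, flags);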

Re: [RFC PATCH 1/2] RapidIO: Add DMA Engine support for RIO data transfers

2011-10-01 Thread Vinod Koul
On Fri, 2011-09-30 at 17:38 -0400, Alexandre Bounine wrote:
Please CC *maintainers* on your patches, get_maintainers.pl will tell
you who. Adding Dan here
> Adds DMA Engine framework support into the RapidIO subsystem.
> Uses the DMA Engine DMA_SLAVE interface to generate data transfers to/from
> remote RapidIO target devices. Uses a scatterlist to describe the local
> data buffer and the dma_chan.private member to pass target-specific
> information. Supports only a flat data buffer on the remote side.
The way dmaengine works today is that it doesn't know anything about the
client subsystem. But this brings subsystem details into dmaengine, which I
don't agree with yet.
Why can't we abstract this out??

After going thru the patch, I do not believe that this is a case of
SLAVE transfers, Dan can you please take a look at this patch


> Signed-off-by: Alexandre Bounine 
> Cc: Vinod Koul 
> Cc: Kumar Gala 
> Cc: Matt Porter 
> Cc: Li Yang 
> ---
>  drivers/dma/dmaengine.c   |4 ++
>  drivers/rapidio/Kconfig   |6 +++
>  drivers/rapidio/rio.c |   79 +
>  include/linux/dmaengine.h |1 +
>  include/linux/rio.h   |   47 ++
>  include/linux/rio_drv.h   |9 +
>  6 files changed, 146 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index b48967b..11fdc2c 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -699,6 +699,10 @@ int dma_async_device_register(struct dma_device *device)
>   !device->device_prep_dma_cyclic);
>   BUG_ON(dma_has_cap(DMA_SLAVE, device->cap_mask) &&
>   !device->device_control);
> + BUG_ON(dma_has_cap(DMA_RAPIDIO, device->cap_mask) &&
> + !device->device_prep_slave_sg);
> + BUG_ON(dma_has_cap(DMA_RAPIDIO, device->cap_mask) &&
> + !device->device_control);
>  
>   BUG_ON(!device->device_alloc_chan_resources);
>   BUG_ON(!device->device_free_chan_resources);
> diff --git a/drivers/rapidio/Kconfig b/drivers/rapidio/Kconfig
> index bc87192..c4aa279 100644
> --- a/drivers/rapidio/Kconfig
> +++ b/drivers/rapidio/Kconfig
> @@ -34,3 +34,9 @@ config RAPIDIO_DEBUG
> If you are unsure about this, say N here.
>  
>  source "drivers/rapidio/switches/Kconfig"
> +
> +# This option to be turned on by a device selection
> +config RAPIDIO_DMA_ENGINE
> + bool
> + select DMADEVICES
> + select DMA_ENGINE
> diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
> index 86c9a09..e5905fc 100644
> --- a/drivers/rapidio/rio.c
> +++ b/drivers/rapidio/rio.c
> @@ -1121,6 +1121,85 @@ int rio_std_route_clr_table(struct rio_mport *mport, u16 destid, u8 hopcount,
>   return 0;
>  }
>  
> +#ifdef CONFIG_RAPIDIO_DMA_ENGINE
> +
> +#include 
> +
> +static bool rio_chan_filter(struct dma_chan *chan, void *arg)
> +{
> + struct rio_dev *rdev = arg;
> +
> + /* Check that DMA device belongs to the right MPORT */
> + return (rdev->net->hport ==
> + container_of(chan->device, struct rio_mport, dma));
> +}
> +
> +/**
> + * rio_request_dma - request RapidIO capable DMA channel that supports
> + *   specified target RapidIO device.
> + * @rdev: RIO device control structure
> + *
> + * Returns pointer to allocated DMA channel or NULL if failed.
> + */
> +struct dma_chan *rio_request_dma(struct rio_dev *rdev)
> +{
> + dma_cap_mask_t mask;
> + struct dma_chan *dchan;
> +
> + dma_cap_zero(mask);
> + dma_cap_set(DMA_RAPIDIO, mask);
> + dchan = dma_request_channel(mask, rio_chan_filter, rdev);
> +
> + return dchan;
> +}
> +EXPORT_SYMBOL_GPL(rio_request_dma);
> +
> +/**
> + * rio_release_dma - release specified DMA channel
> + * @dchan: DMA channel to release
> + */
> +void rio_release_dma(struct dma_chan *dchan)
> +{
> + dma_release_channel(dchan);
> +}
> +EXPORT_SYMBOL_GPL(rio_release_dma);
> +
> +/**
> + * rio_dma_prep_slave_sg - RapidIO specific wrapper
> + *   for device_prep_slave_sg callback defined by DMAENGINE.
> + * @rdev: RIO device control structure
> + * @dchan: DMA channel to configure
> + * @data: RIO specific data descriptor
> + * @direction: DMA data transfer direction (TO or FROM the device)
> + * @flags: dmaengine defined flags
> + *
> + * Initializes RapidIO capable DMA channel for the specified data transfer.
> + * Uses DMA channel private extension to pass information related to remote
> + * target RIO device.
> + * Returns pointer to DMA transaction descriptor or NULL if failed.
> + */
> +struct dma_async_tx_descriptor *rio_dma_prep_slave_sg(struct rio_dev *rdev,
> + struct dma_chan *dchan, struct rio_dma_data *data,
> + enum dma_data_direction direction, unsigned long flags)
> +{
> + struct dma_async_tx_descriptor *txd = NULL;
> + struct rio_dma_ext rio_ext;
> +
> + rio_ext.destid = rdev->destid;
> + rio_ext.rio_addr_u = data->rio_addr_u;
> + rio_ext.rio_addr = data->rio_addr;
> + rio_ext.wr_type = data->wr_type;
> + dchan->private = &rio_ext;
> +
> + txd = dchan->device->device_prep_slave_sg(dchan, data->sg, data->sg_len,
> +   direction, flags);
> +
> + return txd;
> +}
> +EXPORT_SYMBOL_GPL(rio_dma_prep_slave_sg);