Re: [PATCH v4 00/25] dmaengine/ARM: Merge the edma drivers into one

2015-10-06 Thread Koul, Vinod
On Tue, 2015-10-06 at 09:15 +0300, Peter Ujfalusi wrote:
> Hi
> 
> On 09/24/2015 01:01 PM, Peter Ujfalusi wrote:
> > Hi,
> > 
> > Changes since v3:
> > - Separated the two (patch 10/11 in v2, patch 10 in v3) patches which got
> >   squashed by accident for v3
> > - Added Tony's Acked-by to patch 11 (for the mach-omap2 part)
> 
> Gentle ping on this series ;)

It's a long one :)

I hope to get this done next week after I am back from Dublin

> 
-- 
~Vinod


Re: [PATCH] DMAEngine: Define generic transfer request api

2011-08-19 Thread Koul, Vinod
On Fri, 2011-08-19 at 21:16 +0530, Jassi Brar wrote:
> On 19 August 2011 19:49, Linus Walleij  wrote:
> > 2011/8/19 Koul, Vinod :
> >> On Tue, 2011-08-16 at 15:06 +0200, Linus Walleij wrote:
> >>> On Tue, Aug 16, 2011 at 2:56 PM, Koul, Vinod  wrote:
> >
> >>> I think Sundaram is in the position of doing some heavy work on
> >>> using one or the other of the API:s, and I think he is better
> >>> suited than anyone else of us to select what scheme to use,
> >>> in the end he's going to write the first code using the API.
> >
> >> And unfortunately the TI folks don't seem to care about this discussion :(
> >> Haven't seen anything on this from them, or on the previous RFC by Jassi
> >
> > Well, if there is no code using the API then there is no rush
> > in merging it either, I guess. Whenever someone (TI or
> > Samsung) cooks some driver patches they can choose their
> > approach.
> >
> No, it's not a matter of "choice".
> If that were the case, Sundaram already proposed a TI specific
> flag. Why wait for him to tell his choice again?
> 
> You might, but I can't bring myself to believe that a vendor-specific
> flag could be better than a generic solution.
> Not here at least, where the overhead due to generality is not much.
> (though I can trim some 'futuristic' members from 'struct xfer_template')
Who said anything about adding a vendor-flag solution? Since TI are
potential users of the API, it would be good to know if this fits their
needs or not. If they don't care, we can't help it...

> 
> Maintainers might wait as long as they want, but there should never
> be an option to have vendor specific hacks.
To me the API looks decent after reading some specs of DMACs which support
this mode. Please send an updated patch along with one driver which uses it.
Should be good to go...


-- 
~Vinod

--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] DMAEngine: Define generic transfer request api

2011-08-19 Thread Koul, Vinod
On Tue, 2011-08-16 at 15:06 +0200, Linus Walleij wrote:
> On Tue, Aug 16, 2011 at 2:56 PM, Koul, Vinod  wrote:
> 
> > Currently we have two approaches to solve this problem, the first being
> > the DMA_STRIDE_CONFIG proposed by Linus W. I feel this one is the better
> > approach as it can give the client the ability to configure each transfer
> > rather than set it for the channel. Linus W, do you agree?
> 
> I think Sundaram is in the position of doing some heavy work on
> using one or the other of the API:s, and I think he is better
> suited than anyone else of us to select what scheme to use,
> in the end he's going to write the first code using the API.
And unfortunately the TI folks don't seem to care about this discussion :(
Haven't seen anything on this from them, or on the previous RFC by Jassi

-- 
~Vinod



Re: [PATCH] DMAEngine: Define generic transfer request api

2011-08-16 Thread Koul, Vinod
On Fri, 2011-08-12 at 16:44 +0530, Jassi Brar wrote:
> Define a new api that could be used for doing fancy data transfers
> like interleaved to contiguous copy and vice-versa.
> Traditional SG_list based transfers tend to be very inefficient in
> cases where the interleave and chunk are only a few bytes, which
> calls for a very condensed api to convey the pattern of the transfer.
> 
> This api supports all 4 variants of scatter-gather and contiguous transfer.
> Besides, it could also represent common operations like
> device_prep_dma_{cyclic, memset, memcpy}
> and maybe some more that I am not sure of.
> 
> Of course, neither can this api help transfers that don't lend
> themselves to DMA by nature, i.e., scattered tiny read/writes with no
> periodic pattern.
> 
> Signed-off-by: Jassi Brar 
> ---
>  include/linux/dmaengine.h |   73 +
>  1 files changed, 73 insertions(+), 0 deletions(-)
> 
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 8fbf40e..74f3ae0 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -76,6 +76,76 @@ enum dma_transaction_type {
>  /* last transaction type for creation of the capabilities mask */
>  #define DMA_TX_TYPE_END (DMA_CYCLIC + 1)
>  
> +/**
> + * Generic Transfer Request
> + * 
> + * A chunk is a collection of contiguous bytes to be transferred.
> + * The gap (in bytes) between two chunks is called the inter-chunk-gap (ICG).
> + * ICGs may or may not change between chunks.
> + * A FRAME is the smallest series of contiguous {chunk,icg} pairs that,
> + *  when repeated an integral number of times, specifies the transfer.
> + * A transfer template is a specification of a Frame, the number of times
> + *  it is to be repeated and other per-transfer attributes.
> + *
> + * Practically, a client driver would have ready a template for each
> + *  type of transfer it is going to need during its lifetime and
> + *  set only 'src_start' and 'dst_start' before submitting the requests.
> + *
> + *
> + *  |  Frame-1|   Frame-2   | ~ |   Frame-'numf'  |
> + *  |==.===...=...|==.===...=...| ~ |==.===...=...|
> + *
> + *==  Chunk size
> + *... ICG
> + */
> +
> +/**
> + * struct data_chunk - Element of scatter-gather list that makes a frame.
> + * @size: Number of bytes to read from source.
> + * size_dst := fn(op, size_src), so doesn't mean much for destination.
> + * @icg: Number of bytes to jump after last src/dst address of this
> + *chunk and before first src/dst address for next chunk.
> + *Ignored for dst(assumed 0), if dst_inc is true and dst_sgl is false.
> + *Ignored for src(assumed 0), if src_inc is true and src_sgl is false.
> + */
> +struct data_chunk {
> + size_t size;
> + size_t icg;
> +};
> +
> +/**
> + * struct xfer_template - Template to convey DMAC the transfer pattern
> + *and attributes.
> + * @op: The operation to perform on source data before writing it on
> + *to destination address.
> + * @src_start: Bus address of source for the first chunk.
> + * @dst_start: Bus address of destination for the first chunk.
> + * @src_inc: If the source address increments after reading from it.
> + * @dst_inc: If the destination address increments after writing to it.
> + * @src_sgl: If the 'icg' of sgl[] applies to Source (scattered read).
> + *   Otherwise, source is read contiguously (icg ignored).
> + *   Ignored if src_inc is false.
> + * @dst_sgl: If the 'icg' of sgl[] applies to Destination (scattered write).
> + *   Otherwise, destination is filled contiguously (icg ignored).
> + *   Ignored if dst_inc is false.
> + * @frm_irq: If the client expects DMAC driver to do callback after each frame.
> + * @numf: Number of frames in this template.
> + * @frame_size: Number of chunks in a frame i.e, size of sgl[].
> + * @sgl: Array of {chunk,icg} pairs that make up a frame.
> + */
> +struct xfer_template {
> + enum dma_transaction_type op;
> + dma_addr_t src_start;
> + dma_addr_t dst_start;
> + bool src_inc;
> + bool dst_inc;
> + bool src_sgl;
> + bool dst_sgl;
> + bool frm_irq;
> + size_t numf;
> + size_t frame_size;
> + struct data_chunk sgl[0];
> +};
>  
>  /**
>   * enum dma_ctrl_flags - DMA flags to augment operation preparation,
> @@ -432,6 +502,7 @@ struct dma_tx_state {
>   * @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio.
>   *   The function takes a buffer of size buf_len. The callback function will
>   *   be called after period_len bytes have been transferred.
> + * @device_prep_dma_genxfer: Transfer expressed in a generic way.
>   * @device_control: manipulate all pending operations on a channel, returns
>   *   zero or error code
>   * @device_tx_status: poll for transaction completion, the optional
> @@ -496,6 +567,8 @@ struct dma_device {
>   struct dma_async_t

RE: [RFC] dmaengine: add new api for preparing simple slave transfer

2011-06-15 Thread Koul, Vinod
On Tue, 2011-06-14 at 12:12 +0530, Raju, Sundaram wrote:
> 
> > 
> > The overall conclusion which I'm coming to is that we already support
> > what you're asking for, but the problem is that we're using different
> > (and I'd argue standard) terminology to describe what we have.
> > 
> > The only issue which I see that we don't cover is the case where you want
> > to describe a single buffer which is organised as N bytes to be transferred,
> > M following bytes to be skipped, N bytes to be transferred, M bytes to be
> > skipped.  I doubt there are many controllers which can be programmed with
> > both 'N' and 'M' parameters directly.
> >
> 
> Yes this is what I wanted to communicate and discuss in the list.
> Thanks for describing it in the standard terminology, and pointing me in the 
> right direction.
> 
> Also please note that if the gap between the buffers in a scatterlist is
> uniform and that can again be skipped by the DMAC automatically
> by programming that gap size, then that feature also is not available 
> in the current framework.
> 
> I understand it can be done with the existing scatterlist, but just writing
> a value in a DMAC register is much better if available, than preparing 
> a scatterlist and passing to the API.
> 
> > 
> > Please, don't generate special cases.  Design proper APIs from the
> > outset rather than abusing what's already there.  So no, don't abuse
> > the address width stuff.
> > 
> > In any case, the address width stuff must still be used to describe the
> > peripherals FIFO register.
> >
> 
> I did not intend to abuse the existing address width. It might look 
> like that because of how I described it here. 
> I agree that the dma_slave_config is for peripheral related 
> properties to be stored. I was pointing out that the chunk size
> variable in the dma_buffer_config I proposed will be in most cases 
> equal to FIFO register width, to describe what I actually meant by
> chunk size.
> 
> > IIUC, you have a buffer with gaps in between (given by the above params).
> > Is your DMA h/w capable of parsing this buffer and directly doing a
> > transfer based on the above parameters (without any work for SW), or are
> > you doing this in your dma driver and then programming a list of buffers?
> > In the former case (although I haven't seen such a dma controller yet),
> > can you share the datasheet? It would make very little sense to change
> > the current API for your extra special case. This special case needs to
> > be handled differently rather than making a rule out of it!!
> > 
> 
> Yes, Vinod. This is directly handled in the DMAC h/w.
> 
> This capability is present in 2 TI DMACs.
> EDMA (Enhanced DMA) in all TI DaVinci SoCs and the
> SDMA (System DMA) in all TI OMAP SoCs. The drivers of these
> controllers are present in the respective DaVinci tree and OMAP tree
> under the SoC folders.
> 
> 
> > SDMA and EDMA are TI SoC specific DMA controllers. Their drivers have
> > been maintained in the respective SoC folders till now.
> > arch/arm/plat-omap/dma.c
> > arch/arm/mach-davinci/dma.c
> 
> 
> The Manual of the EDMA controller in TI DaVinci SoCs is available at
> http://www.ti.com/litv/pdf/sprue23d
> Section 2.2 in the page 23 explains how transfers are made based
> on the gaps programmed. It also explains how the 3D buffer is 
> internally split in EDMA based on the gaps programmed.
> 
> The complete Reference manual of TI OMAP SoCs is available at
> http://www.ti.com/litv/pdf/spruf98s
> Chapter 9 in this document describes the SDMA controller.
> Section 9.4.3 in page 981 explains the various address modes,
> how the address is incremented by the DMAC and about the gaps 
> in between buffers and frames and how the DMAC handles them.
> 
> Linus,
> > 
> > Is it really so bad? It is a custom configuration after all. Even if
> > there were many DMACs out there supporting it, we'd probably
> > model it like this, just pass something like DMA_STRIDE_CONFIG
> > instead of something named after a specific slave controller.
> > 
> > This way DMACs that didn't support striding could NACK a
> > transfer for device drivers requesting it and then it could figure
> > out what to do.
> > 
> > If we can get some indication as to whether more DMACs can
> > do this kind of stuff, we'd maybe like to introduce
> > DMA_STRIDE_CONFIG already now.
> >
> 
> I wanted to discuss this feature in the list and get this feature 
> added to the current dmaengine framework. If the current APIs
> can handle this feature, then its very good 
> and I will follow that only.
> 
> If what you suggest is the right way to go then I am okay.
> IMHO the framework should always handle the complex case
> and individual implementations should implement a subset of
> the capability and hence I suggest the changes I posted to the list.
Okay, now things are clearer on what you are trying to do...

1) The changes you need are to describe your buffer and convey it to the
dmac, so please don't talk about slave here as that is specific to dm

RE: [RFC] dmaengine: add new api for preparing simple slave transfer

2011-06-10 Thread Koul, Vinod
On Fri, 2011-06-10 at 16:43 +0530, Raju, Sundaram wrote:
> Vinod,

...
> > 
> > > Now coming to the buffer related attributes, sg list is a nice way to
> > > represent a disjoint buffer; all the offload engines in drivers/dma
> > > create a descriptor for each of the contiguous chunk in the sg list
> > > buffer and pass it to the controller.
> > >
> > > But many a times a client may want to transfer from a single buffer to
> > > the peripheral and most of the DMA controllers have the capability to
> > > transfer data in chunks/frames directly at a stretch.
> > > All the attributes of a buffer, which are required for the transfer
> > > can be programmed in single descriptor and submitted to the
> > > DMA controller.
> > >
> > > So I think the slave transfer API must also have a provision to pass
> > > the buffer configuration. The buffer attributes have to be passed
> > > directly as an argument in the prepare API, unlike dma_slave_config
> > > as they will be different for each buffer that is going to be
> > > transferred.
> > Can you articulate what attributes you are talking about? The
> > dma_slave_config parameters don't represent buffer attributes. They
> > represent the dma attributes of how you want to transfer. Things
> > like bus widths and burst lengths are usually constant for slave
> > transfers; not sure why they should change per buffer?
> > 
> 
> I have tried to explain the attributes in the previous mail 
> I posted in this thread.
> 
> Yes, buffer attributes should not be represented in 
> the dma_slave_config. It is for slave related data.
> That is why had mentioned that buffer configuration should be
> a separate structure and passed in the prepare API.
> See quoted below:
> 
> 
> > struct dma_buffer_config {
> > u32 chunk_size; /* number of bytes in a chunk */
> > u32 frame_size; /* number of chunks in a frame */
> > /* u32 n_frames; number of frames in the buffer */
> > u32 inter_chunk_gap; /* number of bytes between end of a chunk 
> > and the start of the next chunk */ 
> > u32 inter_frame_gap; /* number of bytes between end of a frame 
> > and the start of the next frame */ 
> > bool sync_rate; /* 0 - a sync event is required from the 
> > peripheral to transfer a chunk 
> > 1 - a sync event is required from the 
> > peripheral to transfer a frame */ 
> > };
IIUC, you have a buffer with gaps in between (given by the above params).
Is your DMA h/w capable of parsing this buffer and directly doing a
transfer based on the above parameters (without any work for SW), or are
you doing this in your dma driver and then programming a list of buffers?
In the former case (although I haven't seen such a dma controller yet),
can you share the datasheet? It would make very little sense to change
the current API for your extra special case. This special case needs to
be handled differently rather than making a rule out of it!!

And in the latter case you are really going down the wrong path, as
Russell pointed out, and trying to abuse the APIs


-- 
~Vinod



Re: [RFC] dmaengine: add new api for preparing simple slave transfer

2011-06-09 Thread Koul, Vinod
On Thu, 2011-06-09 at 17:32 +0100, Russell King - ARM Linux wrote:
> On Thu, Jun 09, 2011 at 09:31:56PM +0530, Raju, Sundaram wrote:
> > Here it is, with proper line wrapping;
> 
> Thanks.  This is much easier to reply to.
> 
> > I believe that even though the dmaengine framework addresses and 
> > supports most of the required use cases of a client driver to a DMA 
> > controller, some extensions are required in it to make it still more 
> > generic.
> > 
> > Current framework contains two APIs to prepare for slave transfers: 
> > 
> > struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
> > struct dma_chan *chan, struct scatterlist *sgl,
> > unsigned int sg_len, enum dma_data_direction direction,
> > unsigned long flags);
> > 
> > struct dma_async_tx_descriptor *(*device_prep_dma_cyclic)(
> > struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
> > size_t period_len, enum dma_data_direction direction);
> > 
> > and one control API. 
> > int (*device_control)(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
> > unsigned long arg);
> > 
> > A simple single buffer transfer (i.e. non sg transfer) can be done only
> > as a trivial case of the device_prep_slave_sg API. The client driver is
> > expected to prepare a sg list and provide to the dmaengine API for that
> > single buffer also.
> 
> We can avoid preparing a sg list in every driver which wants this by
> having dmaengine.h provide a helper for this case:
> 
> static inline dma_async_tx_descriptor *dmaengine_prep_slave_single(
>   struct dma_chan *chan, void *buf, size_t len,
>   enum dma_data_direction dir, unsigned long flags)
> {
>   struct scatterlist sg;
> 
>   sg_init_one(&sg, buf, len);
> 
>   return chan->device->device_prep_slave_sg(chan, &sg, 1, dir, flags);
> }
That sounds good...

> 
> I think also providing dmaengine_prep_slave_sg() and
> dmaengine_prep_dma_cyclic() as wrappers to hide the chan->device->blah
> bit would also be beneficial (and helps keep that detail out of the
> users of DMA engine.)
> 
> > In a slave transfer, the client has to define all the buffer related 
> > attributes and the peripheral related attributes. 
> 
> I'd argue that it is incorrect to have drivers specify the buffer
> related attributes - that makes the API unnecessarily complicated
> to use, requiring two calls (one to configure the channel, and one
> to prepare the transfer) each time it needs to be used.
> 
> Not only that but the memory side of the transfer should be no business
> of the driver - the driver itself should only specify the attributes
> for the device being driven.  The attributes for the memory side of the
> transfer should be a property of the DMA engine itself.
> 
> I would like to see in the long term the dma_slave_config structure
> lose its 'direction' argument, and the rest of the parameters used to
> define the device side parameters only.
I am not sure why the direction flag is required; it can be done away
with. Both the sg and cyclic APIs have a direction parameter and that
should be used. A channel can be used in any direction the client wishes.
> 
> This will allow the channel to be configured once when its requested,
> and then the prepare call can configure the direction as required.
> 
> > Now coming to the buffer related attributes, sg list is a nice way to 
> > represent a disjoint buffer; all the offload engines in drivers/dma 
> > create a descriptor for each of the contiguous chunk in the sg list 
> > buffer and pass it to the controller. 
> 
> The sg list is the standard Linux way to describe scattered buffers.
> 
> > But many a times a client may want to transfer from a single buffer to
> > the peripheral and most of the DMA controllers have the capability to 
> > transfer data in chunks/frames directly at a stretch. 
> 
> So far, I've only seen DMA controllers which operate on a linked list of
> source, destination, length, attributes, and next entry pointer.
> 
> > All the attributes of a buffer, which are required for the transfer 
> > can be programmed in single descriptor and submitted to the 
> > DMA controller. 
> 
> I'm not sure that this is useful - in order to make use of the data, the
> data would have to be copied in between the descriptors - and doesn't that
> rather negate the point of DMA if you have to memcpy() the data around?
> 
> Isn't it far more efficient to have DMA place the data exactly where it's
> required in memory right from the start without any memcpy() operations?
> 
> Can you explain where and how you would use something like this:
> 
> > ---
> >  | Chunk 0 |ICG| Chunk 1 |ICG| ... |ICG| Chunk n | Frame 0 
> > ---
> >  |  Inter Frame Gap  | 
> > ---
> >  | Chunk 0 |ICG| Chunk 1 |ICG

RE: [RFC] dmaengine: add new api for preparing simple slave transfer

2011-06-09 Thread Koul, Vinod
On Thu, 2011-06-09 at 21:31 +0530, Raju, Sundaram wrote:
> Here it is, with proper line wrapping;
Please cc the respective MAINTAINERS, added Dan...

> 
> SDMA and EDMA are TI SoC specific DMA controllers. Their drivers have 
> been maintained in the respective SoC folders till now.
> arch/arm/plat-omap/dma.c
> arch/arm/mach-davinci/dma.c
> 
> I have gone through the existing offload engine (DMA) drivers in 
> drivers/dma which do slave transfers.
> I would like to move SDMA and EDMA also to dmaengine framework.
> 
> I believe that even though the dmaengine framework addresses and 
> supports most of the required use cases of a client driver to a DMA 
> controller, some extensions are required in it to make it still more 
> generic.
> 
> Current framework contains two APIs to prepare for slave transfers: 
> 
> struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
>   struct dma_chan *chan, struct scatterlist *sgl,
>   unsigned int sg_len, enum dma_data_direction direction,
>   unsigned long flags);
> 
> struct dma_async_tx_descriptor *(*device_prep_dma_cyclic)(
>   struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
>   size_t period_len, enum dma_data_direction direction);
> 
> and one control API. 
> int (*device_control)(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
>   unsigned long arg);
> 
> A simple single buffer transfer (i.e. non sg transfer) can be done only
> as a trivial case of the device_prep_slave_sg API. The client driver is
> expected to prepare a sg list and provide to the dmaengine API for that
> single buffer also.
> 
> In a slave transfer, the client has to define all the buffer related 
> attributes and the peripheral related attributes. 
> 
> The above 2 APIs in the dmaengine framework expect all the 
> peripheral/slave related attributes to be filled in the 
> dma_slave_config data structure. 
> struct dma_slave_config {
> enum dma_data_direction direction;
> dma_addr_t src_addr;
> dma_addr_t dst_addr;
> enum dma_slave_buswidth src_addr_width;
> enum dma_slave_buswidth dst_addr_width;
> u32 src_maxburst;
> u32 dst_maxburst;
> };
> 
> This data structure is passed to the offload engine via the dma_chan 
> data structure in its private pointer.
No, this is passed through the control API you described above. Please
read Documentation/dmaengine.txt

> Now coming to the buffer related attributes, sg list is a nice way to 
> represent a disjoint buffer; all the offload engines in drivers/dma 
> create a descriptor for each of the contiguous chunk in the sg list 
> buffer and pass it to the controller. 
> 
> But many a times a client may want to transfer from a single buffer to
> the peripheral and most of the DMA controllers have the capability to 
> transfer data in chunks/frames directly at a stretch. 
> All the attributes of a buffer, which are required for the transfer 
> can be programmed in single descriptor and submitted to the 
> DMA controller. 
> 
> So I think the slave transfer API must also have a provision to pass 
> the buffer configuration. The buffer attributes have to be passed 
> directly as an argument in the prepare API, unlike dma_slave_config 
> as they will be different for each buffer that is going to be 
> transferred. 
Can you articulate what attributes you are talking about? The
dma_slave_config parameters don't represent buffer attributes. They
represent the dma attributes of how you want to transfer. Things
like bus widths and burst lengths are usually constant for slave
transfers; not sure why they should change per buffer?

> 
> It is a stretch and impractical to use a highly segmented buffer (like
> the one described below) in a sglist. This is because sg list itself 
> is a representation of a disjoint buffer collection in terms of 
> smaller buffers. Now then each of these smaller buffers can have 
> different buffer configurations (like described below) and we are not 
> going to go down that road now. 
Well, that's the Linux idea: you use an sg-list to describe this
segmentation...

> 
> Hence it makes sense to pass these buffer attributes for only a single
> buffer transfer and not a sg list. 
> This can be done by OPTION #1 
> 1. Adding a pointer of the dma_buffer_config data structure in the 
> device_prep_slave_sg API.
> 2. Ensuring that it will be ignored if a sg list passed. 
> Only when a single buffer is passed (in the sg list) then this buffer 
> configuration will be used.
> 3. Any client that wants to do a sg transfer can simply ignore this 
> buffer configuration and pass NULL.
> The main disadvantage of this option is that all the existing drivers 
> need to be updated since the API signature is changed.
> 
> It might even be better to have a separate API for non sg transfers.
> This is OPTION #2 
> Advantages of this option are 
> 1. No change required in the existing drivers that use 
> device_prep_slave_sg