Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-31 Thread Mikael Pettersson
Mark Lord writes:
  Tejun Heo wrote:
  ..
   I'm skeptical about the benefit of IRQ coalescing on storage
   controllers.  Coalescing improves performance when there are many small
   requests to complete, and if you put a lot of small non-consecutive
   requests to a disk, it gets really, really, really slow and IRQ coalescing
   just doesn't matter at all.  The only way to achieve a high number of
   completions is to issue small commands to consecutive addresses, which is
   just silly.  In storage, high volume transfer is achieved through
   request coalescing, not completion coalescing, and this is true even for SSDs.
  ..
  
  One cool thing with the Marvell cores is that they actually implement
  transaction-based IRQ coalescing, whereby a number of related I/O commands
  (say, all the RAID5 member commands generated by a single R/W request)
  can be tagged together, generating an interrupt only when they all complete
  (or after a timeout if something goes wrong).
  
  We don't have anything resembling an appropriate abstraction for that yet,
  so I doubt that we could really take advantage of it.

Promise SATA controllers have this feature too,
though sata_promise doesn't make use of it.


Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Jeff Garzik

Mark Lord wrote:

Jeff Garzik wrote:

Mark Lord wrote:
Create host-owned DMA memory pools, for use in allocating/freeing per-port
command/response queues and SG tables.  This gives us a way to guarantee we
meet the hardware address alignment requirements, and also reduces memory
that might otherwise be wasted on alignment gaps.

Signed-off-by: Mark Lord [EMAIL PROTECTED]
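
For readers skimming the thread, the pool setup described above amounts to
one managed DMA pool per structure type, with the alignment argument at
least as strict as the hardware requires, so every allocation comes back
correctly aligned and nothing is wasted on padding.  The helper below is a
sketch, not a quote of the submitted patch; only the pool pointers and size
constants come from the patch itself.

static int mv_create_dma_pools(struct mv_host_priv *hpriv, struct device *dev)
{
	/* 1KB-aligned CRQB ring, 256B-aligned CRPB ring, SG (ePRD) tables */
	hpriv->crqb_pool   = dmam_pool_create("crqb_q", dev, MV_CRQB_Q_SZ,
					      MV_CRQB_Q_SZ, 0);
	if (!hpriv->crqb_pool)
		return -ENOMEM;

	hpriv->crpb_pool   = dmam_pool_create("crpb_q", dev, MV_CRPB_Q_SZ,
					      MV_CRPB_Q_SZ, 0);
	if (!hpriv->crpb_pool)
		return -ENOMEM;

	hpriv->sg_tbl_pool = dmam_pool_create("sg_tbl", dev, MV_SG_TBL_SZ,
					      MV_SG_TBL_SZ, 0);
	if (!hpriv->sg_tbl_pool)
		return -ENOMEM;

	return 0;
}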


ACK patches 1-13

applied patches 1-9 to #upstream.  patch #10 failed, with git-am 
reporting it as a corrupt patch.

..

That's weird.  I can save the email from linux-ide here,
and apply as a patch (after 01-09) with no issues at all.

Jeff got mail reader problems?

Here it is again, in case it got corrupted in transit to you.


Nope, not corrupted in transit or on this side.  It falls into a 
familiar pattern:


* git-am(1) fails
* patch(1) succeeds
* when applying patch, patch(1) drops a .orig turd

So while patch(1) succeeds because patch(1) is highly forgiving and 
git-am(1) is more strict, something was definitely strange on that 
incoming email.  patch(1) lets you know by giving you a .orig file, 
something not normally created if the patch operation was 100% sound.


ISTR Linus or Junio explaining why git-am(1) was more strict and why it 
was a good thing... As I did in this case, I usually just run patch(1), 
look carefully at the result using 'git diff HEAD', and then 
commit/resolve the changes.


Jeff








Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Mark Lord

Jeff Garzik wrote:


Nope, not corrupted in transit or on this side.  It falls into a 
familiar pattern:


* git-am(1) fails
* patch(1) succeeds
* when applying patch, patch(1) drops a .orig turd

..

Okay, I figured it out.  There's a 1-line offset difference
between what is used in patch 10, and where stuff actually
is/was in sata_mv.c.

I probably changed an unrelated line and rediff'd an earlier patch
without rediff'ing everything that followed.

Because patch is smart and can handle it.

But since git-am plays dumb (both good and bad), I'll be more careful
about rediffing the entire series next time round.

Meanwhile, no further action required here.

Cheers


Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Jeff Garzik

Mark Lord wrote:

Meanwhile, no further action required here.


ACK :)

And thanks for rounding out the NCQ work.  sata_mv has needed love and 
attention for a while (well, really, its entire life).


Jeff




Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Mark Lord

Jeff Garzik wrote:

Mark Lord wrote:

Meanwhile, no further action required here.


ACK :)

And thanks for rounding out the NCQ work.  sata_mv has needed love and 
attention for a while (well, really, its entire life).

..

Well, it's going to be getting plenty of TLC over the next few months.

In the short term, my plan is to submit further small patches to fix
the IRQ and error-handling in sata_mv, as bug fixes for 2.6.25.

Note that hot plug/unplug will begin to work correctly once the IRQ/EH
code gets fixed (it sort of works already, but sometimes kills the machine).

There are also some errata that need to be addressed in the 2.6.25 timeframe.

In particular, there's an NCQ EH errata for the 60x1 chips,
and a tricky issue about HOB access not working correctly on
most versions of the chips.

Bigger stuff that I'm deferring for 2.6.26:

 -- Port multiplier support (though this does look rather simple..)
 -- power management support
 -- ATAPI
 -- IRQ Coalescing
 -- Target Mode support (interfaces yet to be defined)
 -- TCQ support: would be good in general for libata on smart hosts,
 but I'm not sure of the impact on libata EH processing.

How's that all sound to you guys?

Cheers


Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Jeff Garzik

Mark Lord wrote:

Jeff Garzik wrote:

Mark Lord wrote:

Meanwhile, no further action required here.


ACK :)

And thanks for rounding out the NCQ work.  sata_mv has needed love and 
attention for a while (well, really, its entire life).

..

Well, it's going to be getting plenty of TLC over the next few months.

In the short term, my plan is to submit further small patches to fix
the IRQ and error-handling in sata_mv, as bug fixes for 2.6.25.

Note that hot plug/unplug will begin to work correctly once the IRQ/EH
code gets fixed (it sort of works already, but sometimes kills the 
machine).


There are also some errata that need to be addressed in the 2.6.25 
timeframe.


In particular, there's an NCQ EH errata for the 60x1 chips,
and a tricky issue about HOB access not working correctly on
most versions of the chips.

Bigger stuff that I'm deferring for 2.6.26:

 -- Port multiplier support (though this does look rather simple..)
 -- power management support
 -- ATAPI


I'm interested to see this :)


 -- IRQ Coalescing


Most modern SATA has some form of this, but I've yet to see any 
benefit.  I've dealt with interrupt (packet) rates of well over 500k/sec 
in network land, and IMO the overhead in storage, even with tiny 
operations, is really small in comparison.


So, I'm not sure it's worth the latency penalty...  at least as turned on 
by default.




 -- Target Mode support (interfaces yet to be defined)


I would assume this would be along the lines of the SCSI target mode stuff.



 -- TCQ support: would be good in general for libata on smart hosts,
 but I'm not sure of the impact on libata EH processing.


Agreed, it would be nice to support host queueing controllers.

However, specifically for TCQ, it was rather poorly conceived.  For most 
controllers (mv, broadcom/svw, others) an error will stop the DMA 
engine, and you perform recovery in software.  All well and good, but 
figuring out all the states possible during recovery is non-trivial (I 
looked into it years ago).  It's just messy.


Jeff




Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Tejun Heo
Mark Lord wrote:
 So, I'm not sure it's worth the latency penalty...  at least as turned
 on by default.
 ..
 
 I agree.  It should default to off, and perhaps have some /sys/ attributes
 to enable/tune it.  Or something like that.

Eeeek.. :-)

 The theoretical value is apparently for situations like writing to RAID1.
 The write isn't really complete until all drives ACK it,
 so with IRQ coalescing it may behave more like NAPI than one I/O per IRQ.
 And NAPI is apparently a win under heavy load for network, so.. we'll see.
 
 The vendor wants it in the driver, and I think it will be good to have it
 there so we can play with and tune it -- to find out for real whether it
 has worthwhile value or not.  But yes, the default should be off, I think.

I'm skeptical about the benefit of IRQ coalescing on storage
controllers.  Coalescing improves performance when there are many small
requests to complete, and if you put a lot of small non-consecutive
requests to a disk, it gets really, really, really slow and IRQ coalescing
just doesn't matter at all.  The only way to achieve a high number of
completions is to issue small commands to consecutive addresses, which is
just silly.  In storage, high volume transfer is achieved through
request coalescing, not completion coalescing, and this is true even for SSDs.

  -- Target Mode support (interfaces yet to be defined)

 I would assume this would be along the lines of the SCSI target mode
 stuff.
 ..
 
 Ah, now there's a starting point.  Thanks.

It would be great if we can make a cheap SATA analyzer out of it.

  -- TCQ support: would be good in general for libata on smart hosts,
  but I'm not sure of the impact on libata EH processing.

 Agreed, it would be nice to support host queueing controllers.

 However, specifically for TCQ, it was rather poorly conceived.  For
 most controllers (mv, broadcom/svw, others) an error will stop the DMA
 engine, and you perform recovery in software.  All well and good, but
 figuring out all the states possible during recovery is non-trivial (I
 looked into it years ago).  It's just messy.
 ..
 
 So is NCQ EH, but we manage it.  I wonder how similar (or not) the two are?

How many devices with working TCQ support are out there?  Early Raptors?
 If the controller handles all that releasing and state transitions, I
think it shouldn't be too difficult.  You'll probably need to add TCQ
support to ata_eh_autopsy or roll your own autopsy function, but that
should be about it for EH.  Heck, just freezing on any sign of a problem
and letting EH reset the drive and retry should work for most cases,
although things like media errors won't be reported properly.

 I've done host-based TCQ several times now, and EH can be as simple as:
 
 when something goes wrong, just reset everything, and then re-issue the
 commands one at a time, doing per-command EH normally.  Then resume TCQ.
 
 That's dumb, but works extremely reliably.

Oh, you were thinking the same thing. :-) It can be made more reliable
by simply falling back to non-TCQ if an error repeats itself.  e.g. Media
error -> TCQ freeze -> reset -> media error -> TCQ freeze -> reset and
turn off TCQ -> media error gets handled and reported.  It isn't pretty
but should work for an initial implementation.
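
Spelled out, that recovery flow looks roughly like the sketch below; every
name in it is hypothetical pseudo-driver code (not libata, not sata_mv), and
it is only meant to show the control flow:

struct cmd { int tag; };

struct tcq_port {
	int		ntags;
	struct cmd	*outstanding[32];	/* in flight when the error hit */
	int		tcq_enabled;
	int		error_repeats;
};

/* Stand-ins for the driver's real reset, submit and per-command EH paths. */
extern void hw_reset_port(struct tcq_port *port);
extern int  issue_nonqueued(struct tcq_port *port, struct cmd *cmd);
extern void run_single_cmd_eh(struct tcq_port *port, struct cmd *cmd);

static void tcq_simple_recovery(struct tcq_port *port)
{
	int tag;

	hw_reset_port(port);			/* reset everything */

	for (tag = 0; tag < port->ntags; tag++) {
		struct cmd *cmd = port->outstanding[tag];

		if (!cmd)
			continue;
		/* re-issue one at a time, non-queued; a failure here goes
		 * through normal per-command error handling */
		if (issue_nonqueued(port, cmd))
			run_single_cmd_eh(port, cmd);
		port->outstanding[tag] = NULL;
	}

	/* if the same error keeps repeating, give up on TCQ so that e.g.
	 * a media error gets handled and reported the normal way */
	if (++port->error_repeats > 1)
		port->tcq_enabled = 0;
}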

Thanks.

-- 
tejun


Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Tejun Heo
Tejun Heo wrote:
 I'm skeptical about the benefit of IRQ coalescing on storage
 controllers.  Coalescing improves performance when there are many small
 requests to complete, and if you put a lot of small non-consecutive
 requests to a disk, it gets really, really, really slow and IRQ coalescing
 just doesn't matter at all.  The only way to achieve a high number of
 completions is to issue small commands to consecutive addresses, which is
 just silly.  In storage, high volume transfer is achieved through
 request coalescing, not completion coalescing, and this is true even for SSDs.

To add a bit, I was thinking about writes regarding SSDs.  Completion
coalescing makes more sense for reads from SSDs if a log-based filesystem
is used on them, because then read operations are scattered all over the
disk and the device doesn't have to seek.  The limited queue size compared
to network interfaces will make completion coalescing less effective, but
yeah, I think it will be worth playing with it on SSD + lfs.

Thanks.

-- 
tejun


Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-30 Thread Mark Lord

Tejun Heo wrote:

..
I'm skeptical about the benefit of IRQ coalescing on storage
controllers.  Coalescing improves performance when there are many small
requests to complete, and if you put a lot of small non-consecutive
requests to a disk, it gets really, really, really slow and IRQ coalescing
just doesn't matter at all.  The only way to achieve a high number of
completions is to issue small commands to consecutive addresses, which is
just silly.  In storage, high volume transfer is achieved through
request coalescing, not completion coalescing, and this is true even for SSDs.

..

One cool thing with the Marvell cores is that they actually implement
transaction-based IRQ coalescing, whereby a number of related I/O commands
(say, all the RAID5 member commands generated by a single R/W request)
can be tagged together, generating an interrupt only when they all complete
(or after a timeout if something goes wrong).

We don't have anything resembling an appropriate abstraction for that yet,
so I doubt that we could really take advantage of it.

One thought in general for IRQ coalescing is that with largish
storage arrays it may cut down the number of IRQ entry/exit cycles
considerably under heavy load, and perhaps slightly improve L1 cache
occupancy of the IRQ handler paths as well.  Dunno, but it could be fun
to wire it in there so we can experiment and (in)validate such theories.
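
A minimal sketch of the kind of default-off, runtime-tunable knob such
experiments would need; the parameter names below are made up for
illustration and are not taken from any posted patch:

/* 0 == coalescing disabled, which remains the default */
static unsigned int irq_coalescing_io_count;
static unsigned int irq_coalescing_usecs;

module_param(irq_coalescing_io_count, uint, 0644);
MODULE_PARM_DESC(irq_coalescing_io_count,
		 "Number of completed I/Os to batch per interrupt (0=off)");

module_param(irq_coalescing_usecs, uint, 0644);
MODULE_PARM_DESC(irq_coalescing_usecs,
		 "Maximum delay in usecs before raising a batched interrupt (0=off)");

With 0644 permissions these show up under /sys/module/sata_mv/parameters/,
which covers the enable/tune-from-sysfs idea mentioned earlier in the thread.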


 -- Target Mode support (interfaces yet to be defined)

I would assume this would be along the lines of the SCSI target mode
stuff.

..

Ah, now there's a starting point.  Thanks.


It would be great if we can make a cheap SATA analyzer out of it.

..

Yeah.  It'll have software latency added in, but the chip really looks
like it can do the analyzer job just fine.  I wonder if any of the existing
analyzer products already use these chips..  


..

How many devices with working TCQ support are out there?  Early raptors?

..

Raptors, everything ever made by Hitachi, some Maxtor, some Seagate (I think),
and even the odd Western Digital drive.  And in the PATA space, there were
WD and IBM/Hitachi drives with it.  The PDC ADMA and QStor chips were designed
for TCQ (and NCQ for QStor).

Still though, I don't think it's a huge priority, since all modern drives
now implement NCQ when it matters.

Cheers


Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-29 Thread Jeff Garzik

Mark Lord wrote:

Create host-owned DMA memory pools, for use in allocating/freeing per-port
command/response queues and SG tables.  This gives us a way to guarantee we
meet the hardware address alignment requirements, and also reduces memory
that might otherwise be wasted on alignment gaps.

Signed-off-by: Mark Lord [EMAIL PROTECTED]


ACK patches 1-13

applied patches 1-9 to #upstream.  patch #10 failed, with git-am 
reporting it as a corrupt patch.





Re: [PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-29 Thread Mark Lord

Jeff Garzik wrote:

Mark Lord wrote:
Create host-owned DMA memory pools, for use in allocating/freeing per-port
command/response queues and SG tables.  This gives us a way to guarantee we
meet the hardware address alignment requirements, and also reduces memory
that might otherwise be wasted on alignment gaps.

Signed-off-by: Mark Lord [EMAIL PROTECTED]


ACK patches 1-13

applied patches 1-9 to #upstream.  patch #10 failed, with git-am 
reporting it as a corrupt patch.

..

That's weird.  I can save the email from linux-ide here,
and apply as a patch (after 01-09) with no issues at all.

Jeff got mail reader problems?

Here it is again, in case it got corrupted in transit to you.

* * * *

In preparation for supporting NCQ, we must allocate separate SG tables
for each command tag, rather than just a single table per port as before.

Gen-I hardware cannot do NCQ, though, so we still allocate just a single
table for that, but populate it in all 32 slots to avoid special-cases
elsewhere in hotter paths of the code.

Signed-off-by: Mark Lord [EMAIL PROTECTED]

--- old/drivers/ata/sata_mv.c   2008-01-26 14:18:05.0 -0500
+++ new/drivers/ata/sata_mv.c   2008-01-26 14:19:12.0 -0500
@@ -398,8 +398,8 @@
dma_addr_t  crqb_dma;
struct mv_crpb  *crpb;
dma_addr_t  crpb_dma;
-   struct mv_sg*sg_tbl;
-   dma_addr_t  sg_tbl_dma;
+   struct mv_sg*sg_tbl[MV_MAX_Q_DEPTH];
+   dma_addr_t  sg_tbl_dma[MV_MAX_Q_DEPTH];

unsigned intreq_idx;
unsigned intresp_idx;
@@ -483,6 +483,10 @@
void __iomem *port_mmio, int want_ncq);
static int __mv_stop_dma(struct ata_port *ap);

+/* .sg_tablesize is (MV_MAX_SG_CT / 2) in the structures below
+ * because we have to allow room for worst case splitting of
+ * PRDs for 64K boundaries in mv_fill_sg().
+ */
static struct scsi_host_template mv5_sht = {
.module = THIS_MODULE,
.name   = DRV_NAME,
@@ -1107,6 +1111,7 @@
{
struct mv_host_priv *hpriv = ap->host->private_data;
struct mv_port_priv *pp = ap->private_data;
+   int tag;

if (pp->crqb) {
dma_pool_free(hpriv->crqb_pool, pp->crqb, pp->crqb_dma);
@@ -1116,9 +1121,18 @@
dma_pool_free(hpriv->crpb_pool, pp->crpb, pp->crpb_dma);
pp->crpb = NULL;
}
-   if (pp->sg_tbl) {
-   dma_pool_free(hpriv->sg_tbl_pool, pp->sg_tbl, pp->sg_tbl_dma);
-   pp->sg_tbl = NULL;
+   /*
+* For GEN_I, there's no NCQ, so we have only a single sg_tbl.
+* For later hardware, we have one unique sg_tbl per NCQ tag.
+*/
+   for (tag = 0; tag < MV_MAX_Q_DEPTH; ++tag) {
+   if (pp->sg_tbl[tag]) {
+   if (tag == 0 || !IS_GEN_I(hpriv))
+   dma_pool_free(hpriv->sg_tbl_pool,
+ pp->sg_tbl[tag],
+ pp->sg_tbl_dma[tag]);
+   pp->sg_tbl[tag] = NULL;
+   }
}
}

@@ -1139,7 +1153,7 @@
struct mv_port_priv *pp;
void __iomem *port_mmio = mv_ap_base(ap);
unsigned long flags;
-   int rc;
+   int tag, rc;

pp = devm_kzalloc(dev, sizeof(*pp), GFP_KERNEL);
if (!pp)
@@ -1160,10 +1174,21 @@
goto out_port_free_dma_mem;
memset(pp->crpb, 0, MV_CRPB_Q_SZ);

-   pp->sg_tbl = dma_pool_alloc(hpriv->sg_tbl_pool, GFP_KERNEL,
- &pp->sg_tbl_dma);
-   if (!pp->sg_tbl)
-   goto out_port_free_dma_mem;
+   /*
+* For GEN_I, there's no NCQ, so we only allocate a single sg_tbl.
+* For later hardware, we need one unique sg_tbl per NCQ tag.
+*/
+   for (tag = 0; tag < MV_MAX_Q_DEPTH; ++tag) {
+   if (tag == 0 || !IS_GEN_I(hpriv)) {
+   pp->sg_tbl[tag] = dma_pool_alloc(hpriv->sg_tbl_pool,
+ GFP_KERNEL, &pp->sg_tbl_dma[tag]);
+   if (!pp->sg_tbl[tag])
+   goto out_port_free_dma_mem;
+   } else {
+   pp->sg_tbl[tag] = pp->sg_tbl[0];
+   pp->sg_tbl_dma[tag] = pp->sg_tbl_dma[0];
+   }
+   }

spin_lock_irqsave(&ap->host->lock, flags);

@@ -1213,7 +1238,7 @@
struct mv_sg *mv_sg, *last_sg = NULL;
unsigned int si;

-   mv_sg = pp->sg_tbl;
+   mv_sg = pp->sg_tbl[qc->tag];
for_each_sg(qc->sg, sg, qc->n_elem, si) {
dma_addr_t addr = sg_dma_address(sg);
u32 sg_len = sg_dma_len(sg);
@@ -1283,9 +1308,9 @@
in_index = pp->req_idx & MV_MAX_Q_DEPTH_MASK;

pp->crqb[in_index].sg_addr =
-   cpu_to_le32(pp->sg_tbl_dma & 0xffff);
+   

[PATCH 09/13] sata_mv ncq Use DMA memory pools for hardware memory tables

2008-01-26 Thread Mark Lord

Create host-owned DMA memory pools, for use in allocating/freeing per-port
command/response queues and SG tables.  This gives us a way to guarantee we
meet the hardware address alignment requirements, and also reduces memory that
might otherwise be wasted on alignment gaps.

Signed-off-by: Mark Lord [EMAIL PROTECTED]

--- old/drivers/ata/sata_mv.c   2008-01-26 12:50:54.0 -0500
+++ new/drivers/ata/sata_mv.c   2008-01-26 13:01:56.0 -0500
@@ -107,14 +107,12 @@

/* CRQB needs alignment on a 1KB boundary. Size == 1KB
 * CRPB needs alignment on a 256B boundary. Size == 256B
-* SG count of 176 leads to MV_PORT_PRIV_DMA_SZ == 4KB
 * ePRD (SG) entries need alignment on a 16B boundary. Size == 16B
 */
MV_CRQB_Q_SZ= (32 * MV_MAX_Q_DEPTH),
MV_CRPB_Q_SZ= (8 * MV_MAX_Q_DEPTH),
-   MV_MAX_SG_CT= 176,
+   MV_MAX_SG_CT= 256,
MV_SG_TBL_SZ= (16 * MV_MAX_SG_CT),
-   MV_PORT_PRIV_DMA_SZ = (MV_CRQB_Q_SZ + MV_CRPB_Q_SZ + MV_SG_TBL_SZ),

MV_PORTS_PER_HC = 4,
/* == (port / MV_PORTS_PER_HC) to determine HC from 0-7 port */
@@ -421,6 +419,14 @@
u32 irq_cause_ofs;
u32 irq_mask_ofs;
u32 unmask_all_irqs;
+   /*
+* These consistent DMA memory pools give us guaranteed
+* alignment for hardware-accessed data structures,
+* and less memory waste in accomplishing the alignment.
+*/
+   struct dma_pool *crqb_pool;
+   struct dma_pool *crpb_pool;
+   struct dma_pool *sg_tbl_pool;
};

struct mv_hw_ops {
@@ -1097,6 +1103,25 @@
writelfl(cfg, port_mmio + EDMA_CFG_OFS);
}

+static void mv_port_free_dma_mem(struct ata_port *ap)
+{
+   struct mv_host_priv *hpriv = ap->host->private_data;
+   struct mv_port_priv *pp = ap->private_data;
+
+   if (pp->crqb) {
+   dma_pool_free(hpriv->crqb_pool, pp->crqb, pp->crqb_dma);
+   pp->crqb = NULL;
+   }
+   if (pp->crpb) {
+   dma_pool_free(hpriv->crpb_pool, pp->crpb, pp->crpb_dma);
+   pp->crpb = NULL;
+   }
+   if (pp->sg_tbl) {
+   dma_pool_free(hpriv->sg_tbl_pool, pp->sg_tbl, pp->sg_tbl_dma);
+   pp->sg_tbl = NULL;
+   }
+}
+
/**
 *  mv_port_start - Port specific init/start routine.
 *  @ap: ATA channel to manipulate
@@ -1113,51 +1138,36 @@
struct mv_host_priv *hpriv = ap->host->private_data;
struct mv_port_priv *pp;
void __iomem *port_mmio = mv_ap_base(ap);
-   void *mem;
-   dma_addr_t mem_dma;
unsigned long flags;
int rc;

pp = devm_kzalloc(dev, sizeof(*pp), GFP_KERNEL);
if (!pp)
return -ENOMEM;
-
-   mem = dmam_alloc_coherent(dev, MV_PORT_PRIV_DMA_SZ, &mem_dma,
- GFP_KERNEL);
-   if (!mem)
-   return -ENOMEM;
-   memset(mem, 0, MV_PORT_PRIV_DMA_SZ);
+   ap->private_data = pp;

rc = ata_pad_alloc(ap, dev);
if (rc)
return rc;

-   /* First item in chunk of DMA memory:
-* 32-slot command request table (CRQB), 32 bytes each in size
-*/
-   pp->crqb = mem;
-   pp->crqb_dma = mem_dma;
-   mem += MV_CRQB_Q_SZ;
-   mem_dma += MV_CRQB_Q_SZ;
-
-   /* Second item:
-* 32-slot command response table (CRPB), 8 bytes each in size
-*/
-   pp->crpb = mem;
-   pp->crpb_dma = mem_dma;
-   mem += MV_CRPB_Q_SZ;
-   mem_dma += MV_CRPB_Q_SZ;
+   pp->crqb = dma_pool_alloc(hpriv->crqb_pool, GFP_KERNEL, &pp->crqb_dma);
+   if (!pp->crqb)
+   return -ENOMEM;
+   memset(pp->crqb, 0, MV_CRQB_Q_SZ);

-   /* Third item:
-* Table of scatter-gather descriptors (ePRD), 16 bytes each
-*/
-   pp->sg_tbl = mem;
-   pp->sg_tbl_dma = mem_dma;
+   pp->crpb = dma_pool_alloc(hpriv->crpb_pool, GFP_KERNEL, &pp->crpb_dma);
+   if (!pp->crpb)
+   goto out_port_free_dma_mem;
+   memset(pp->crpb, 0, MV_CRPB_Q_SZ);
+
+   pp->sg_tbl = dma_pool_alloc(hpriv->sg_tbl_pool, GFP_KERNEL,
+ &pp->sg_tbl_dma);
+   if (!pp->sg_tbl)
+   goto out_port_free_dma_mem;

spin_lock_irqsave(&ap->host->lock, flags);

mv_edma_cfg(pp, hpriv, port_mmio, 0);
-
mv_set_edma_ptrs(port_mmio, hpriv, pp);

spin_unlock_irqrestore(ap-host-lock, flags);
@@ -1166,8 +1176,11 @@
 * we'll be unable to send non-data, PIO, etc due to restricted access
 * to shadow regs.
 */
-   ap->private_data = pp;
return 0;
+
+out_port_free_dma_mem:
+   mv_port_free_dma_mem(ap);
+   return -ENOMEM;
}

/**
@@ -1182,6 +1195,7 @@
static void mv_port_stop(struct ata_port *ap)
{
mv_stop_dma(ap);
+   mv_port_free_dma_mem(ap);
}