Re: [PATCH] [PPC32] ADMA support for PPC 440SPe processors.

2007-03-22 Thread Christoph Hellwig
On Wed, Mar 21, 2007 at 09:03:27PM +0100, Segher Boessenkool wrote: BTW folks. Would it be hard to change your spe_ prefixes to something else ? There's already enough confusion between the freescale SPE unit and the cell SPEs :-) Will you change _your_ prefixes too? :-) :-)

Re: [PATCH] [PPC32] ADMA support for PPC 440SPe processors.

2007-03-22 Thread Segher Boessenkool
BTW folks. Would it be hard to change your spe_ prefixes to something else ? There's already enough confusion between the freescale SPE unit and the cell SPEs :-) Will you change _your_ prefixes too? :-) :-) Which ones ? I'm not in charge of the fsl spe thingy nor the spe scheduler code :-)

Re: [PATCH] [PPC32] ADMA support for PPC 440SPe processors.

2007-03-22 Thread Geert Uytterhoeven
On Thu, 22 Mar 2007, Segher Boessenkool wrote: BTW folks. Would it be hard to change your spe_ prefixes to something else ? There's already enough confusion between the freescale SPE unit and the cell SPEs :-) Will you change _your_ prefixes too? :-) :-) Which ones ? I'm not in

Re: [PATCH] [PPC32] ADMA support for PPC 440SPe processors.

2007-03-22 Thread Segher Boessenkool
That sounds nice. There are slightly fewer things called SPU than there are called SPE I imagine? Or is it just a historical misnomer. AFAIK SPE is the preferred name, as the SPU is only a part of the SPE. That's what I was told. And you were told right. I was trying to be sarcastic here,

Re: Another report of a raid6 array being maintained by _raid5 in ps.

2007-03-22 Thread Bill Davidsen
Neil Brown wrote: On Wednesday March 21, [EMAIL PROTECTED] wrote: Hello Neil , Someone else reported this before . But I'd thought it was under a older kernel than 2.6.21-rc4 . Hth , JimL root 2936 0.0 0.0 2948 1760 tts/0Ss 04:30 0:00 -bash root 2965 0.3 0.0

Re: BLK_DEV_MD with CONFIG_NET

2007-03-22 Thread Adrian Bunk
On Wed, Mar 21, 2007 at 11:30:24PM +0100, Arnd Bergmann wrote: On Wednesday 21 March 2007 13:02:46 Sam Ravnborg wrote: Anything which is every exported to modules, which ought to be the situation in this case, should be obj-y not lib-y right? That is also my understanding of lib-y -

Re: 2.6.20.3 AMD64 oops in CFQ code

2007-03-22 Thread linux
3 (I think) separate instances of this, each involving raid5. Is your array degraded or fully operational? Ding! A drive fell out the other day, which is why the problems only appeared recently. md5 : active raid5 sdf4[5] sdd4[3] sdc4[2] sdb4[1] sda4[0] 1719155200 blocks level 5, 64k

Re: 2.6.20.3 AMD64 oops in CFQ code

2007-03-22 Thread Jens Axboe
On Thu, Mar 22 2007, [EMAIL PROTECTED] wrote: 3 (I think) separate instances of this, each involving raid5. Is your array degraded or fully operational? Ding! A drive fell out the other day, which is why the problems only appeared recently. md5 : active raid5 sdf4[5] sdd4[3] sdc4[2]

Re: 2.6.20.3 AMD64 oops in CFQ code

2007-03-22 Thread Neil Brown
On Thursday March 22, [EMAIL PROTECTED] wrote: On Thu, Mar 22 2007, [EMAIL PROTECTED] wrote: 3 (I think) separate instances of this, each involving raid5. Is your array degraded or fully operational? Ding! A drive fell out the other day, which is why the problems only appeared

Re: 2.6.20.3 AMD64 oops in CFQ code

2007-03-22 Thread Dan Williams
On 3/22/07, Neil Brown [EMAIL PROTECTED] wrote: On Thursday March 22, [EMAIL PROTECTED] wrote: On Thu, Mar 22 2007, [EMAIL PROTECTED] wrote: 3 (I think) separate instances of this, each involving raid5. Is your array degraded or fully operational? Ding! A drive fell out the other day,

Re: 2.6.20.3 AMD64 oops in CFQ code

2007-03-22 Thread Neil Brown
On Thursday March 22, [EMAIL PROTECTED] wrote: Not a cfq failure, but I have been able to reproduce a different oops at array stop time while i/o's were pending. I have not dug into it enough to suggest a patch, but I wonder if it is somehow related to the cfq failure since it involves

[PATCH 000 of 3] md: bug fixes for md for 2.6.21

2007-03-22 Thread NeilBrown
A minor new feature and 2 bug fixes for md suitable for 2.6.21 The minor feature is to make reshape (adding a drive to an array and restriping it) work for raid4. The code is all ready, it just wasn't used. Thanks, NeilBrown [PATCH 001 of 3] md: Allow raid4 arrays to be reshaped. [PATCH 002

[PATCH 001 of 3] md: Allow raid4 arrays to be reshaped.

2007-03-22 Thread NeilBrown
All that is missing is the function pointers in raid4_pers. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid5.c | 4 1 file changed, 4 insertions(+) diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c --- .prev/drivers/md/raid5.c 2007-03-23

[PATCH 002 of 3] md: Clear the congested_fn when stopping a raid5

2007-03-22 Thread NeilBrown
If this mddev and queue got reused for another array that doesn't register a congested_fn, this function would get called incorrectly. Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c | 1 + ./drivers/md/raid5.c | 3 ++- 2 files changed, 3

[PATCH 003 of 3] md: Convert compile time warnings into runtime warnings.

2007-03-22 Thread NeilBrown
... still not sure why we need this Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/md.c | 41 +++-- ./drivers/md/raid5.c | 12 ++-- 2 files changed, 41 insertions(+), 12 deletions(-) diff

Re: XFS sunit/swidth for raid10

2007-03-22 Thread dean gaudet
On Thu, 22 Mar 2007, Peter Rabbitson wrote: dean gaudet wrote: On Thu, 22 Mar 2007, Peter Rabbitson wrote: Hi, How does one determine the XFS sunit and swidth sizes for a software raid10 with 3 copies? mkfs.xfs uses the GET_ARRAY_INFO ioctl to get the data it needs from

[PATCH 2.6.21-rc4 00/15] md raid5 acceleration and async_tx

2007-03-22 Thread Dan Williams
The following patch set implements the async_tx api and modifies md-raid5 to issue memory copies and xor calculations asynchronously. Async_tx is an extension of the existing dmaengine interface in the kernel. Async_tx allows kernel code to utilize application specific acceleration engines when

[PATCH 2.6.21-rc4 01/15] dmaengine: add base support for the async_tx api

2007-03-22 Thread Dan Williams
The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is

[PATCH 2.6.21-rc4 06/15] md: move write operations to raid5_run_ops

2007-03-22 Thread Dan Williams
handle_stripe sets STRIPE_OP_PREXOR, STRIPE_OP_BIODRAIN, STRIPE_OP_POSTXOR to request a write to the stripe cache. raid5_run_ops is triggered to run and executes the request outside the stripe lock. Signed-off-by: Dan Williams [EMAIL PROTECTED] --- drivers/md/raid5.c | 152

[PATCH 2.6.21-rc4 11/15] md: move raid5 io requests to raid5_run_ops

2007-03-22 Thread Dan Williams
handle_stripe now only updates the state of stripes. All execution of operations is moved to raid5_run_ops. Signed-off-by: Dan Williams [EMAIL PROTECTED] --- drivers/md/raid5.c | 68 1 files changed, 10 insertions(+), 58 deletions(-) diff

[PATCH 2.6.21-rc4 12/15] md: remove raid5 compute_block and compute_parity5

2007-03-22 Thread Dan Williams
replaced by raid5_run_ops Signed-off-by: Dan Williams [EMAIL PROTECTED] --- drivers/md/raid5.c | 124 1 files changed, 0 insertions(+), 124 deletions(-) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 0be26c2..062df02 100644 ---

[PATCH 2.6.21-rc4 15/15] iop3xx: Surface the iop3xx DMA and AAU units to the iop-adma driver

2007-03-22 Thread Dan Williams
Adds the platform device definitions and the architecture specific support routines (i.e. register initialization and descriptor formats) for the iop-adma driver. Changelog: * add support for 1k zero sum buffer sizes * added dma/aau platform devices to iq80321 and iq80332 setup * fixed the

[PATCH 2.6.21-rc4 13/15] dmaengine: driver for the iop32x, iop33x, and iop13xx raid engines

2007-03-22 Thread Dan Williams
This is a driver for the iop DMA/AAU/ADMA units which are capable of pq_xor, pq_update, pq_zero_sum, xor, dual_xor, xor_zero_sum, fill, copy+crc, and copy operations. Changelog: * fixed a slot allocation bug in do_iop13xx_adma_xor that caused too few slots to be requested eventually leading to

[PATCH 2.6.21-rc4 10/15] md: use async_tx and raid5_run_ops for raid5 expansion operations

2007-03-22 Thread Dan Williams
The parity calculation for an expansion operation is the same as the calculation performed at the end of a write with the caveat that all blocks in the stripe are scheduled to be written. An expansion operation is identified as a stripe with the POSTXOR flag set and the BIODRAIN flag not set.

[PATCH 2.6.21-rc4 14/15] iop13xx: Surface the iop13xx adma units to the iop-adma driver

2007-03-22 Thread Dan Williams
Adds the platform device definitions and the architecture specific support routines (i.e. register initialization and descriptor formats) for the iop-adma driver. Changelog: * added 'descriptor pool size' to the platform data * add base support for buffer sizes larger than 16MB (hw max) * build

[PATCH 2.6.21-rc4 03/15] dmaengine: add the async_tx api

2007-03-22 Thread Dan Williams
async_tx is an api to describe a series of bulk memory transfers/transforms. When possible these transactions are carried out by asynchronous dma engines. The api handles inter-transaction dependencies and hides dma channel management from the client. When a dma engine is not present the

[PATCH 2.6.21-rc4 09/15] md: satisfy raid5 read requests via raid5_run_ops

2007-03-22 Thread Dan Williams
Use raid5_run_ops to carry out the memory copies for a raid5 read request. Signed-off-by: Dan Williams [EMAIL PROTECTED] --- drivers/md/raid5.c | 40 +++- 1 files changed, 15 insertions(+), 25 deletions(-) diff --git a/drivers/md/raid5.c

[PATCH 2.6.21-rc4 04/15] md: add raid5_run_ops and support routines

2007-03-22 Thread Dan Williams
Prepare the raid5 implementation to use async_tx for running stripe operations: * biofill (copy data into request buffers to satisfy a read request) * compute block (generate a missing block in the cache from the other blocks) * prexor (subtract existing data as part of the read-modify-write

[PATCH 2.6.21-rc4 08/15] md: move raid5 parity checks to raid5_run_ops

2007-03-22 Thread Dan Williams
handle_stripe sets STRIPE_OP_CHECK to request a check operation in raid5_run_ops. If raid5_run_ops is able to perform the check with a dma engine the parity will be preserved in memory removing the need to re-read it from disk, as is necessary in the synchronous case. 'Repair' operations re-use

[PATCH 2.6.21-rc4 05/15] md: use raid5_run_ops for stripe cache operations

2007-03-22 Thread Dan Williams
Each stripe has three flag variables to reflect the state of operations (pending, ack, and complete). -pending: set to request servicing in raid5_run_ops -ack: set to reflect that raid5_run_ops has seen this request -complete: set when the operation is complete and it is ok for handle_stripe5 to