Steve Fairbairn wrote:
Hi All,
I've got a degraded RAID5 to which I'm trying to add the replacement
disk. Trouble is, every time the recovery starts, it flies along at
70MB/s or so. Then after doing about 1%, it starts dropping rapidly,
until eventually a device is marked failed.
When I look
Bruce Miller wrote:
The beginning of Section 4 of the Linux Software-RAID HOWTO
states emphatically that "you should only have one device per
IDE bus. Running disks as master/slave is horrible for
performance. IDE is really bad at accessing more than one drive
per bus".
Do the same cautions appl
Promise just gave permission to post the docs for their PDC20621 (i.e.
SX4) hardware:
http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-1.2.pdf.bz2
joining the existing PDC20621 DIMM and PLL docs:
http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-dimm-1.6.pdf.bz2
http://g
Neil Brown wrote:
On Tuesday October 23, [EMAIL PROTECTED] wrote:
As for where the metadata "should" be placed, it is interesting to
observe that the SNIA's "DDFv1.2" puts it at the end of the device.
And as DDF is an industry standard sponsored by multiple companies it
must be ..
Sorry. I h
Theodore Tso wrote:
Can someone with knowledge of current disk drive behavior confirm that
for all drives that support bad block sparing, if an attempt to write
to a particular spot on disk results in an error due to bad media at
that spot, the disk drive will automatically rewrite the sector to
Colin Simpson wrote:
SATA isn't supported on RH 4's SMART.
False. Works just fine.
Jeff
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Lars Ellenberg wrote:
md raidX make_request functions strip off the BIO_RW_SYNC flag,
thus introducing additional latency.
Below is a suggested patch for raid1.c.
Another suggested solution would be to let bio_clone do its work,
and not reassign the flags afterwards, which strips them all off.
at most s
Rafael J. Wysocki wrote:
On Friday, 15 December 2006 22:39, Andrew Morton wrote:
On Fri, 15 Dec 2006 13:05:52 -0800
Andrew Morton <[EMAIL PROTECTED]> wrote:
Jeff, I shall send all the sata patches which I have at you one single time
and I shall then drop the lot. So please don't flub them.
I
Alan wrote:
On Fri, 15 Dec 2006 13:39:27 -0800
Andrew Morton <[EMAIL PROTECTED]> wrote:
On Fri, 15 Dec 2006 13:05:52 -0800
Andrew Morton <[EMAIL PROTECTED]> wrote:
Jeff, I shall send all the sata patches which I have at you one single time
and I shall then drop the lot. So please don't flub
Rafael J. Wysocki wrote:
The other box is mine and it works just fine with 2.6.20-rc1.
I think something bad happened in sata land just recently.
Yup. Please see, for example:
http://marc.theaimsgroup.com/?l=linux-kernel&m=116621656432500&w=2
It looks like the breakage is in sata, in the p
The "Re: Linux 2.6.20-rc1" sub-thread that had Jens and Alistair John
Strachan replying seemed to implicate some core block layer badness.
Jeff
Molle Bestefich wrote:
Jeff Garzik wrote:
Molle Bestefich wrote:
> I've just found this:
>
http://home-tj.org/wiki/index.php/Sil_m15w#Message:_Re:_SiI_3112_.26_Seagate_drivers
>
> Which in particular mentions that Silicon Image controllers and
> Seagate drives don'
Molle Bestefich wrote:
Hi everyone;
Thanks for the information so far!
Greatly appreciated.
I've just found this:
http://home-tj.org/wiki/index.php/Sil_m15w#Message:_Re:_SiI_3112_.26_Seagate_drivers
Which in particular mentions that Silicon Image controllers and
Seagate drives don't work to
Dan Williams wrote:
On 9/11/06, Jeff Garzik <[EMAIL PROTECTED]> wrote:
Dan Williams wrote:
> This is a frequently asked question, Alan Cox had the same one at OLS.
> The answer is "probably." The only complication I currently see is
> where/how the stripe cache is m
Dan Williams wrote:
On 9/11/06, Roland Dreier <[EMAIL PROTECTED]> wrote:
Jeff> Are we really going to add a set of hooks for each DMA
Jeff> engine whizbang feature?
...ok, but at some level we are going to need a file that has:
EXPORT_SYMBOL_GPL(dma_whizbang_op1)
. . .
EXPORT_SYMBOL_GPL
Dan Williams wrote:
This is a frequently asked question, Alan Cox had the same one at OLS.
The answer is "probably." The only complication I currently see is
where/how the stripe cache is maintained. With the IOPs it's easy
because the DMA engines operate directly on kernel memory. With the
Pro
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
dma_sync_wait is a common routine to live wait for a dma operation to
complete.
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
---
include/linux/dmaengine.h | 12
1 files changed, 12 insertions(+), 0 deletions(-)
diff
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
Adds a dmaengine client that is the hardware accelerated version of
raid5_do_soft_block_ops. It utilizes the raid5 workqueue implementation to
operate on multiple stripes simultaneously. See the iop-adma.c driver for
an example of a dr
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
Currently the iop3xx platform support code assumes that RedBoot is the
bootloader and has already initialized the ATU. Linux should handle this
initialization for three reasons:
1/ The memory map that RedBoot sets up is not optimal (pa
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
Also brings the iop3xx registers in line with the format of the iop13xx
register definitions.
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
---
include/asm-arm/arch-iop32x/entry-macro.S |2
include/asm-arm/arch-iop32x/iop32x.h
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
Default virtual function that returns an error if the user attempts a
memcpy operation. An XOR engine is an example of a DMA engine that does
not support memcpy.
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
---
drivers/dma/dmaengi
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
Changelog:
* make the dmaengine api EXPORT_SYMBOL_GPL
* zero sum support should be standalone, not integrated into xor
Signed-off-by: Dan Williams <[EMAIL PROTECTED]>
---
drivers/dma/dmaengine.c | 15 ++
drivers/dma/ioatdm
Dan Williams wrote:
@@ -759,8 +755,10 @@ #endif
device->common.device_memcpy_buf_to_buf = ioat_dma_memcpy_buf_to_buf;
device->common.device_memcpy_buf_to_pg = ioat_dma_memcpy_buf_to_pg;
device->common.device_memcpy_pg_to_pg = ioat_dma_memcpy_pg_to_pg;
- device->commo
Dan Williams wrote:
Neil,
The following patches implement hardware accelerated raid5 for the Intel
Xscale® series of I/O Processors. The MD changes allow stripe
operations to run outside the spin lock in a work queue. Hardware
acceleration is achieved by using a dma-engine-aware work queue rou
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
Enable handle_stripe5 to pass off write operations to
raid5_do_soft_blocks_ops (which can be run as a workqueue). The operations
moved are reconstruct-writes and read-modify-writes formerly handled by
compute_parity5.
Changelog:
* move
Dan Williams wrote:
From: Dan Williams <[EMAIL PROTECTED]>
raid5_do_soft_block_ops consolidates all the stripe cache maintenance
operations into a single routine. The stripe operations are:
* copying data between the stripe cache and user application buffers
* computing blocks to save a disk ac
Richard Scobie wrote:
Jeff Garzik wrote:
Mark Perkel wrote:
Running Linux on an AMD AM2 nVidia chipset that supports RAID 0
striping on the motherboard. Just wondering if hardware raid (SATA2) is
going to be faster than software raid and why?
Jeff, on a slightly related note, is the
Mark Perkel wrote:
Running Linux on an AMD AM2 nVidia chipset that supports RAID 0
striping on the motherboard. Just wondering if hardware raid (SATA2) is
going to be faster than software raid and why?
First, it sounds like you are confusing motherboard "RAID" with real
RAID. There's a FAQ
Jens Axboe wrote:
On Mon, Aug 14 2006, Arjan van de Ven wrote:
On Mon, 2006-08-14 at 14:39 -0400, Jeff Garzik wrote:
So... has anybody given any thought to enabling fsync(2), fdatasync(2),
and sync_file_range(2) issuing a [FLUSH|SYNCHRONIZE] CACHE command?
This has bugged me for _years_
So... has anybody given any thought to enabling fsync(2), fdatasync(2),
and sync_file_range(2) issuing a [FLUSH|SYNCHRONIZE] CACHE command?
This has bugged me for _years_, that Linux does not do this. Looking at
forums on the web, it bugs a lot of other people too.
My suggestion would be to
Justin Piszcz wrote:
In the source:
enum {
	uli_5289	= 0,
	uli_5287	= 1,
	uli_5281	= 2,
	uli_max_ports	= 4,

	/* PCI configuration registers */
	ULI5287_BASE	= 0x90,	/* sata0 phy SCR reg
Molle Bestefich wrote:
Does anyone know of a way to disable libata's 5-time retry when a read fails?
It has the effect of causing every failed sector read to take 6
seconds before it fails, causing raid5 rebuilds to go awfully slow.
It's generally undesirable too, when you've got RAID on top th
Jim Klimov wrote:
Hello linux-raid,
I have tried several cheap RAID controllers recently (namely,
VIA VT6421, Intel 6300ESB and Adaptec/Marvell 885X6081).
The VIA one is a PCI card; the other two are built into a Supermicro
motherboard (E7520/X6DHT-G).
The intent was to let the BIOS of
Louis-David Mitterrand wrote:
On Sun, Mar 05, 2006 at 02:29:15AM -0500, Jeff Garzik wrote:
Raz Ben-Jehuda(caro) wrote:
Is NCQ supported when setting the controller to JBOD instead of using HW
raid?
1) The two have nothing to do with each other
2) It sounds like you haven't yet read
Raz Ben-Jehuda(caro) wrote:
Is NCQ supported when setting the controller to JBOD instead of using HW raid?
1) The two have nothing to do with each other
2) It sounds like you haven't yet read
http://linux-ata.org/faq-sata-raid.html
Jeff
Steve Byan wrote:
On Mar 3, 2006, at 5:19 PM, Jeff Garzik wrote:
Steve Byan wrote:
it. It works OK for reads. TCQ was really invented as a way to
allow CD-ROM drives to play nice on the same ATA bus as disks.
Disagree, you are probably thinking about bus disconnect associated
with
Steve Byan wrote:
On Mar 1, 2006, at 8:55 AM, Jens Axboe wrote:
The problem with TCQ is that the host can't disconnect on writes after
sending the data to the drive but before receiving the status. The host
can only disconnect between sending the command and moving the data.
That, but als
Raz Ben-Jehuda(caro) wrote:
Thank you Mr Garzik.
Is there a list of all drivers and the features they provide?
Yes: http://linux-ata.org/sata-status.html
Jeff
Jens Axboe wrote:
(don't top post)
On Thu, Mar 02 2006, Raz Ben-Jehuda(caro) wrote:
I can see that NCQ really bothers people.
I am using a Promise SATA TX4 150 card.
Does any of you have a patch for the driver
so it would support NCQ?
I don't know of any documentation for the promise cards (o
Jens Axboe wrote:
On Wed, Mar 01 2006, Gentoopower wrote:
P.S. Just waiting to see NCQ support for my nforce system in libata:-)
Don't hold your breath, it's unlikely to get supported as nvidia won't
open the specs. ahci is a really really nice controller, if you want ncq
I suggest going with
Pierre Ossman wrote:
Dan Williams wrote:
The ADMA (Asynchronous / Application Specific DMA) interface is proposed
as a cross platform mechanism for supporting system CPU offload engines.
The goal is to provide a unified asynchronous interface to support
memory copies, block xor, block pattern s
Dan Williams wrote:
This patch set was originally posted to linux-raid, Neil suggested that
I send to linux-kernel as well:
Per the discussion in this thread (http://marc.theaimsgroup.com/?
t=11260312014&r=1&w=2) these patches implement the first phase of MD
acceleration, pre-emptible xor.
On Tue, Jan 24, 2006 at 10:54:19AM +0100, PFC wrote:
> SATA disks accessed via libata are not currently supported by
> smartmontools. When libata is given an ATA pass-thru ioctl() then an
> additional '-d libata' device type will be added to smartmontools.
It already has support. Pass "-d ata" to
On Mon, Jan 23, 2006 at 02:44:04PM -0600, Shawn Usry wrote:
> The drives physically support SMART, but apparently the roadblock
> is lacking support in the libata drivers.
Support is now present in libata. Most likely the user has forgotten to
add "-d ata" to smartd and smartctl.
Jeff
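The "-d ata" device type Jeff mentions goes on the smartctl command line or in a smartd.conf entry. A minimal example, assuming the disk in question is /dev/sda:

```
# /etc/smartd.conf: monitor a SATA disk behind libata using the ATA
# pass-thru device type (-d ata), checking all SMART attributes (-a)
/dev/sda -d ata -a
```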
Andre Majorel wrote:
Hello all.
I have a motherboard with two SATA chips (Nvidia nForce 4 and
Silicon Image SiI 3114) running Linux 2.6.14.3. The drivers are
SCSI_SATA_NV and SCSI_SATA_SIL, both compiled-in.
On Mon, Nov 21, 2005 at 10:15:11AM -0800, Raz Ben-Jehuda(caro) wrote:
> Well , i have tested the disk with a new tester i have written. it seems that
> the ata driver causes the high cpu and not raid.
Which drivers are you using? lspci and kernel .config?
Jeff
Tim Kent wrote:
Hello,
I've compiled the Linux 2.6.8 kernel (Debian Sarge version) with the
updated aic79xx drivers from Justin Gibbs' web site which has allowed me to
now see disks while in HostRAID mode. What is the next step to see the
logical RAID device instead of the disks? I understand th
y
guarantee only exists when a single IRQ is assigned to a single
handler...
Jeff
--
Jeff Garzik   | Dinner is ready when
Building 1024 | the smoke alarm goes off.
MandrakeSoft  | -/usr/games/fortune