Re: [RFC] Introduce HAVE_IDE to support flexible IDE per arch configuration

2008-02-07 Thread Russell King - ARM Linux
On Thu, Feb 07, 2008 at 10:42:56PM +0100, Sam Ravnborg wrote:
 The following patch introduces HAVE_IDE to support flexible per-arch
 or even per-sub-arch configuration of IDE support.
 This patch is needed to allow arm to use the generic
 drivers/Kconfig file.
 
 Introducing HAVE_IDE, so that each arch explicitly selects HAVE_IDE
 if supported, allows us to get rid of HAS_IOMEM, which
 is overloaded anyway.
 And doing it this way is a much better way to document which
 architectures support IDE.
 Furthermore, the decision whether IDE is supported is
 distributed.
 Consider seeing this all over:
 
 -if PCMCIA || ARCH_CLPS7500 || ARCH_IOP32X || ARCH_IOP33X || ARCH_IXP4XX \
 -   || ARCH_L7200 || ARCH_LH7A40X || ARCH_PXA || ARCH_RPC \
 -   || ARCH_S3C2410 || ARCH_SA1100 || ARCH_SHARK || FOOTBRIDGE \
 -   || ARCH_IXP23XX
  source drivers/ide/Kconfig
 -endif
 
 Only s390 and um do not support IDE, from my quick
 investigation; if there are others, let me know.
 [Added linux-arch to catch all arch maintainers.]
 
 Comments?
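For reference, the scheme being proposed can be sketched in Kconfig roughly like this (the symbols are from the patch description, but their placement and the select condition are illustrative, not the exact merged patch):

```kconfig
# drivers/ide/Kconfig: define the opt-in symbol and gate the menu on it,
# replacing the long hand-maintained list of architectures.
config HAVE_IDE
	bool

menuconfig IDE
	depends on HAVE_IDE

# arch/arm/Kconfig (similarly for each arch that supports IDE):
config ARM
	select HAVE_IDE if PCMCIA || ARCH_RPC
```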

Acked-by: Russell King [EMAIL PROTECTED]
-
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 2/3 2.6.24-git] ARM/RPC: Use HAVE_PATA_PLATFORM to select pata platform driver

2008-02-06 Thread Russell King - ARM Linux
On Wed, Feb 06, 2008 at 06:58:17AM -0500, Jeff Garzik wrote:
 ACK patch series...  would it be ok to send via the ARM maintainer?
 
 I would prefer to add this at the same time as its user...

I've only seen the one patch, and I suspect that it depends on patches
to other architectures (to convert other architectures to use the new
variable.)


Re: [PATCH 2/3 2.6.24-git] ARM/RPC: Use HAVE_PATA_PLATFORM to select pata platform driver

2008-02-05 Thread Russell King - ARM Linux
On Sat, Feb 02, 2008 at 04:21:35PM +, Ben Dooks wrote:
 Use HAVE_PATA_PLATFORM for ARCH_RPC 
 
 Cc: Linux ARM Kernel [EMAIL PROTECTED]
 Cc: Russell King [EMAIL PROTECTED]
 Signed-off-by: Ben Dooks [EMAIL PROTECTED]

Patch is fine.

Acked-by: Russell King [EMAIL PROTECTED]

Thanks.


Re: [PATCH] pata_platform: move ARCH_RPC to use MACH_HAS_PATA_PLATFORM

2007-11-19 Thread Russell King - ARM Linux
On Mon, Nov 19, 2007 at 12:43:40PM +, Ben Dooks wrote:
 Move ARCH_RPC to using MACH_HAS_PATA_PLATFORM as an example
 of using this new configuration.

I thought that it had been agreed (on linux-arch) that the name of these
options should be HAVE_foo, to enable a driver configuration symbol named
foo.

IOW, this should be HAVE_PATA_PLATFORM not MACH_HAS_PATA_PLATFORM.

(Not sure who introduced MACH_HAS_xxx but added Sam and Mathieu
since they were involved in the discussion over HAVE_xxx.)


2.6.22-rc4 BUG, old IDE driver

2007-06-18 Thread linux
(I sent this to linux-RAID, then actually read it and noticed that the
crash was in the IDE code.  Reposting here.)

This is 2.6.22-rc4 + linuxpps, on a venerable and stable 32-bit system
(P3 processor, 400BX motherboard, ECC RAM).  That drive has been giving me
hassles from time to time, but is working fine after a reboot...

(Errors start at 09:06:56)
hdk: dma_timer_expiry: dma status == 0x20
hdk: DMA timeout retry
hdk: timeout waiting for DMA
hdk: dma_timer_expiry: dma status == 0x20
hdk: DMA timeout retry
hdk: timeout waiting for DMA
hdk: task_out_intr: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
pdc202xx_new: Secondary channel reset.
ide5: reset: success
hdk: task_out_intr: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
pdc202xx_new: Secondary channel reset.
ide5: reset: success
hdk: task_out_intr: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
pdc202xx_new: Secondary channel reset.
ide5: reset: success

(repeat many times)

(Time is now 10:45:44)
ide5: reset: success
hdk: task_out_intr: status=0x50 { DriveReady SeekComplete }
ide: failed opcode was: unknown
hdk: task_out_intr: status=0x50 { DriveReady SeekComplete }
ide: failed opcode was: unknown
hdk: task_out_intr: status=0x58 { DriveReady SeekComplete DataRequest }
ide: failed opcode was: unknown
pdc202xx_new: Secondary channel reset.
ide5: reset: success
hdk: task_out_intr: status=0x50 { DriveReady SeekComplete }
ide: failed opcode was: unknown
BUG: unable to handle kernel paging request at virtual address 3000
 printing eip:
b02554b1
*pde = 
Oops:  [#1]
CPU:0
EIP:0060:[b02554b1]Not tainted VLI
EFLAGS: 00010246   (2.6.22-rc4 #27)
EIP is at ide_outsl+0x5/0x9
eax: 9400   ebx: b0457624   ecx: 0080   edx: 9400
esi: 3000   edi: b0457624   ebp: 0080   esp: efc7dda8
ds: 007b   es: 007b   fs:   gs:   ss: 0068
Process md7_raid10 (pid: 360, ti=efc7d000 task=eff1e500 task.ti=efc7d000)
Stack: b04576b8 b025605d 3000 b0457624 3000 b0457624 b04576b8 b025875c 
   0001 b1985000 0004 b04576b8 0001 b0850370 b025910f b0850370 
   b04576b8 06e94ed8 b025933a  0019 efc7de64 b03e6520 b04576b8 
Call Trace:
 [b025605d] ata_output_data+0x4d/0x64
 [b025875c] ide_pio_sector+0xea/0x121
 [b025910f] ide_pio_datablock+0x46/0x5c
 [b025933a] pre_task_out_intr+0x9a/0xa5
 [b0254a3b] ide_do_request+0x6e7/0x89a
 [b01d4505] blk_remove_plug+0x4e/0x5a
 [b01d452e] __generic_unplug_device+0x1d/0x1f
 [b01d51a8] __make_request+0x386/0x489
 [b01d3901] generic_make_request+0x186/0x1b3
 [b0290f63] md_wakeup_thread+0x25/0x27
 [b029640c] md_check_recovery+0x3ff/0x407
 [b01d535c] generic_unplug_device+0x3e/0x44
 [b01d4505] blk_remove_plug+0x4e/0x5a
 [b028eab6] raid10d+0xaa/0x8a5
 [b010245b] common_interrupt+0x23/0x28
 [b033e722] schedule_timeout+0x13/0x95
 [b029584b] md_thread+0xc1/0xd7
 [b0121405] autoremove_wake_function+0x0/0x35
 [b029578a] md_thread+0x0/0xd7
 [b01212b0] kthread+0x36/0x5a
 [b012127a] kthread+0x0/0x5a
 [b01025db] kernel_thread_helper+0x7/0x10
 ===
Code: 89 c2 f3 66 6d 5f c3 57 89 d7 89 c2 f3 6d 5f c3 89 d0 89 ca ee c3 0f b7 
c0 66 ef c3 56 89 d6 89 c2 f3 66 6f 5e c3 56 89 d6 89 c2 f3 6f 5e c3 c7 80 08 
05 00 00 a3 64 25 b0 c7 80 0c 05 00 00 96 
EIP: [b02554b1] ide_outsl+0x5/0x9 SS:ESP 0068:efc7dda8
note: md7_raid10[360] exited with preempt_count 1

The system seemed to still be running, but I rebooted as a precaution.
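As an aside for anyone decoding these messages: the hex value after status= is the ATA status register, and the names in braces are its set bits. A small sketch (bit layout from the ATA specification; the helper name is mine, not from the driver):

```python
# Decode an ATA status register byte into the flag names the old IDE
# driver prints, e.g. 0x58 -> { DriveReady SeekComplete DataRequest }.
ATA_STATUS_BITS = [
    (0x80, "Busy"),
    (0x40, "DriveReady"),
    (0x20, "DeviceFault"),
    (0x10, "SeekComplete"),
    (0x08, "DataRequest"),
    (0x04, "CorrectedError"),
    (0x02, "Index"),
    (0x01, "Error"),
]

def decode_status(status: int) -> str:
    names = [name for bit, name in ATA_STATUS_BITS if status & bit]
    return "{ " + " ".join(names) + " }"

print(decode_status(0x58))  # { DriveReady SeekComplete DataRequest }
print(decode_status(0x50))  # { DriveReady SeekComplete }
```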


Re: 2.6.20.3 AMD64 oops in CFQ code

2007-04-03 Thread linux
:00/40 tag 2 cdb 0x0 data 
188416 out
14:56:13:  res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
14:56:13: ata5.00: cmd 61/00:18:d2:31:ba/01:00:1c:00:00/40 tag 3 cdb 0x0 data 
131072 out
14:56:13:  res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
14:56:13: ata5.00: cmd 61/00:20:d2:32:ba/01:00:1c:00:00/40 tag 4 cdb 0x0 data 
131072 out
14:56:13:  res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
14:56:13: ata5.00: cmd 61/00:28:d2:33:ba/01:00:1c:00:00/40 tag 5 cdb 0x0 data 
131072 out
14:56:13:  res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
14:56:13: ata5.00: cmd 61/00:30:d2:34:ba/01:00:1c:00:00/40 tag 6 cdb 0x0 data 
131072 out
14:56:13:  res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
14:56:13: ata5.00: cmd 61/00:38:d2:35:ba/01:00:1c:00:00/40 tag 7 cdb 0x0 data 
131072 out
14:56:13:  res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
14:56:13: ata5: soft resetting port
14:56:43: ata5: softreset failed (timeout)
14:56:43: ata5: softreset failed, retrying in 5 secs
14:56:48: ata5: hard resetting port
14:57:20: ata5: softreset failed (timeout)
14:57:20: ata5: follow-up softreset failed, retrying in 5 secs
14:57:25: ata5: hard resetting port
14:57:58: ata5: softreset failed (timeout)
14:57:58: ata5: reset failed, giving up
14:57:58: ata5.00: disabled
14:57:58: ata5: EH complete
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481965522
14:57:58: raid5: Disk failure on sde4, disabling device. Operation continuing 
on 5 devices
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481965266
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481965010
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481964754
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481964498
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481964130
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481963986
14:57:58: sd 4:0:0:0: SCSI error: return code = 0x0004
14:57:58: end_request: I/O error, dev sde, sector 481941210
14:57:58: md: md5: recovery done.
14:57:58: RAID5 conf printout:
14:57:58:  --- rd:6 wd:5
14:57:58:  disk 0, o:1, dev:sda4
14:57:58:  disk 1, o:1, dev:sdb4
14:57:58:  disk 2, o:1, dev:sdc4
14:57:58:  disk 3, o:1, dev:sdd4
14:57:58:  disk 4, o:0, dev:sde4
14:57:58:  disk 5, o:1, dev:sdf4
14:57:58: RAID5 conf printout:
14:57:58:  --- rd:6 wd:5
14:57:58:  disk 0, o:1, dev:sda4
14:57:58:  disk 1, o:1, dev:sdb4
14:57:58:  disk 2, o:1, dev:sdc4
14:57:58:  disk 3, o:1, dev:sdd4
14:57:58:  disk 5, o:1, dev:sdf4


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
Here's some more data.

6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel

Tested able to sustain reads at 60 MB/sec/drive simultaneously.

RAID-10 is across all 6 drives, on the first part of each drive.
RAID-5 covers most of the drive, so depending on allocation policies,
it may be a bit slower.

The test sequence actually was:
1) raid5ncq
2) raid5noncq
3) raid10noncq
4) raid10ncq
5) raid5ncq
6) raid5noncq
but I rearranged things to make it easier to compare.

Note that NCQ makes writes faster (oh... I have write caching turned off;
perhaps I should turn it on and do another round), but no-NCQ seems to have
a read advantage.  The @#$%ing bonnie++ overflows and won't print file
read times; I haven't bothered to fix that yet.

NCQ seems to have a pretty significant effect on the file operations,
especially deletes.

Update: added
7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
8) wcache5ncq - RAID 5 with NCQ and write cache enabled


RAID=5, NCQ
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5ncq  7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
raid5ncq  7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
raid5noncq7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
raid5noncq7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
wcache5ncq7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
raid10ncq 7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0

--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5
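The comma-separated lines above are bonnie++'s machine-readable output. As a small illustration (field positions inferred from the human-readable table above, so treat the indices as assumptions), the sequential block write/read rates can be pulled out like this:

```python
# Extract block-write and block-read K/sec from bonnie++ CSV lines.
# Layout assumed from the table: label, size, then alternating
# rate (K/sec) and CPU (%CP) columns for each test.
csv_lines = [
    "raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0",
    "raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0",
]

def block_rates(line: str) -> tuple[str, int, int]:
    fields = line.split(",")
    label = fields[0]
    block_write = int(fields[4])    # sequential output, block (K/sec)
    block_read = int(fields[10])    # sequential input, block (K/sec)
    return label, block_write, block_read

for line in csv_lines:
    label, wr, rd = block_rates(line)
    print(f"{label}: write {wr} K/sec, read {rd} K/sec")
```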


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
From [EMAIL PROTECTED] Tue Mar 27 16:25:58 2007
Date: Tue, 27 Mar 2007 12:25:52 -0400 (EDT)
From: Justin Piszcz [EMAIL PROTECTED]
X-X-Sender: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED], linux-ide@vger.kernel.org, 
linux-kernel@vger.kernel.org
Subject: Re: Why is NCQ enabled by default by libata? (2.6.20)
In-Reply-To: [EMAIL PROTECTED]
References: [EMAIL PROTECTED]
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:

 Here's some more data.

 6x ST3400832AS (Seagate 7200.8) 400 GB drives.
 3x SiI3232 PCIe SATA controllers
 2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
 Linux 2.6.20.4, 64-bit kernel

 Tested able to sustain reads at 60 MB/sec/drive simultaneously.

 RAID-10 is across 6 drives, first part of drive.
 RAID-5 most of the drive, so depending on allocation policies,
 may be a bit slower.

 The test sequence actually was:
 1) raid5ncq
 2) raid5noncq
 3) raid10noncq
 4) raid10ncq
 5) raid5ncq
 6) raid5noncq
 but I rearranged things to make it easier to compare.

 Note that NCQ makes writes faster (oh... I have write cacheing turned off;
 perhaps I should turn it on and do another round), but no-NCQ seems to have
 a read advantage.  [EMAIL PROTECTED]@#ing bonnie++ overflows and won't print 
 file
 read times; I haven't bothered to fix that yet.

 NCQ seems to have a pretty significant effect on the file operations,
 especially deletes.

 Update: added
 7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
 8) wcache5ncq - RAID 5 with NCQ and write cache enabled


 RAID=5, NCQ
 Version  1.03   --Sequential Output-- --Sequential Input- 
 --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
 MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec 
 %CP
 raid5ncq  7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.20
 raid5ncq  7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.60
 raid5noncq7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.60
 raid5noncq7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.40
 wcache5ncq7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.60
 wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.80
 raid10ncq 7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.80
 raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.20

--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec 
 %CP
16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

 raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
 raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
 raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
 raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
 wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
 wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
 raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
 raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5


 I would try with write-caching enabled.

I did.  See the wcache5 lines?

 Also, the RAID5/RAID10 you mention seems like each volume is on part of
 the platter, a strange setup you got there :)

I don't quite understand.  Each volume is on part of the platter -
yes, it's called partitioning, and it's pretty common.

Basically, the first 50G of each drive is assembled with RAID-10 to make
a 150G system file system, where I appreciate the speed and greater
redundancy of RAID-10, and the last 250G

Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
 I meant you do not allocate the entire disk per raidset, which may alter 
 performance numbers.

No, that would be silly.  It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so

 04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II 
 Controller (rev 01)
 I assume you mean 3132 right?

Yes; did I mistype?

02:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)
03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)

 I also have 6 seagates, I'd need to run one 
 of these tests on them as well, also you took the micro jumper off the 
 Seagate 400s in the back as well right?

Um... no, I don't remember doing anything like that.  What micro jumper?
It's been a while, but I just double-checked the drive manual and
it doesn't mention any jumpers.


Re: 2.6.20.3 AMD64 oops in CFQ code

2007-03-23 Thread linux
As an additional data point, here's a libata problem I'm having trying to
rebuild the array.

I have six identical 400 GB drives (ST3400832AS), and one is giving
me hassles.  I've run SMART short and long diagnostics, badblocks, and
Seagate's seatools diagnostic software, and none of these find problems.
It is the only one of the six with a non-zero reallocated sector count
(it's 26).

Anyway, the drive is partitioned into a 45G RAID-10 part and a 350G RAID-5
part.  The RAID-10 part integrated successfully, but the RAID-5 got to
about 60% and then puked:

ata5.00: exception Emask 0x0 SAct 0x1ef SErr 0x0 action 0x2 frozen
ata5.00: cmd 61/c0:00:d2:d0:b9/00:00:1c:00:00/40 tag 0 cdb 0x0 data 98304 out
 res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/40:08:92:d1:b9/00:00:1c:00:00/40 tag 1 cdb 0x0 data 32768 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/00:10:d2:d1:b9/01:00:1c:00:00/40 tag 2 cdb 0x0 data 131072 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/00:18:d2:d2:b9/01:00:1c:00:00/40 tag 3 cdb 0x0 data 131072 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/00:28:d2:d3:b9/01:00:1c:00:00/40 tag 5 cdb 0x0 data 131072 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/00:30:d2:d4:b9/01:00:1c:00:00/40 tag 6 cdb 0x0 data 131072 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/00:38:d2:d5:b9/01:00:1c:00:00/40 tag 7 cdb 0x0 data 131072 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5.00: cmd 61/00:40:d2:d6:b9/01:00:1c:00:00/40 tag 8 cdb 0x0 data 131072 out
 res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5: soft resetting port
ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata5.00: configured for UDMA/100
ata5: EH complete
SCSI device sde: 781422768 512-byte hdwr sectors (400088 MB)
sde: Write Protect is off
SCSI device sde: write cache: enabled, read cache: enabled, doesn't support DPO 
or FUA
ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2 frozen
ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 cdb 0x0 data 0 
 res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
ata5: soft resetting port
ata5: softreset failed (timeout)
ata5: softreset failed, retrying in 5 secs
ata5: hard resetting port
ata5: softreset failed (timeout)
ata5: follow-up softreset failed, retrying in 5 secs
ata5: hard resetting port
ata5: softreset failed (timeout)
ata5: reset failed, giving up
ata5.00: disabled
ata5: EH complete
sd 4:0:0:0: SCSI error: return code = 0x0004
end_request: I/O error, dev sde, sector 91795259
md: super_written gets error=-5, uptodate=0
raid10: Disk failure on sde3, disabling device. 
Operation continuing on 5 devices
sd 4:0:0:0: SCSI error: return code = 0x0004
end_request: I/O error, dev sde, sector 481942994
raid5: Disk failure on sde4, disabling device. Operation continuing on 5 devices
sd 4:0:0:0: SCSI error: return code = 0x0004
end_request: I/O error, dev sde, sector 481944018
md: md5: recovery done.
RAID10 conf printout:
 --- wd:5 rd:6
 disk 0, wo:0, o:1, dev:sdb3
 disk 1, wo:0, o:1, dev:sdc3
 disk 2, wo:0, o:1, dev:sdd3
 disk 3, wo:1, o:0, dev:sde3
 disk 4, wo:0, o:1, dev:sdf3
 disk 5, wo:0, o:1, dev:sda3
RAID10 conf printout:
 --- wd:5 rd:6
 disk 0, wo:0, o:1, dev:sdb3
 disk 1, wo:0, o:1, dev:sdc3
 disk 2, wo:0, o:1, dev:sdd3
 disk 4, wo:0, o:1, dev:sdf3
 disk 5, wo:0, o:1, dev:sda3
RAID5 conf printout:
 --- rd:6 wd:5
 disk 0, o:1, dev:sda4
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
 disk 4, o:0, dev:sde4
 disk 5, o:1, dev:sdf4
RAID5 conf printout:
 --- rd:6 wd:5
 disk 0, o:1, dev:sda4
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
 disk 5, o:1, dev:sdf4

The first error address is just barely inside the RAID-10 part (which ends at
sector 91,795,410), while the second and third errors (at 481,942,994)
look like where the reconstruction was working.


Anyway, what's annoying is that I can't figure out how to bring the
drive back on line without resetting the box.  It's in a hot-swap enclosure,
but power cycling the drive doesn't seem to help.  I thought libata hotplug
was working?  (SiI3132 card, using the sil24 driver.)

(H'm... after rebooting, reallocated sectors jumped from 26 to 39.
Something is up with that drive.)


2.6.20.3 AMD64 oops in CFQ code

2007-03-22 Thread linux
] generic_unplug_device+0xa/0xe
 [80407ced] unplug_slaves+0x5b/0x94
 [80223d65] sync_page+0x0/0x40
 [80223d9b] sync_page+0x36/0x40
 [80256d45] __wait_on_bit_lock+0x36/0x65
 [80237496] __lock_page+0x5e/0x64
 [8028061d] wake_bit_function+0x0/0x23
 [802074de] find_get_page+0xe/0x2d
 [8020b38e] do_generic_mapping_read+0x1c2/0x40d
 [8020bd80] file_read_actor+0x0/0x118
 [8021422e] generic_file_aio_read+0x15c/0x19e
 [8020bafa] do_sync_read+0xc9/0x10c
 [80210342] may_open+0x5b/0x1c6
 [802805ef] autoremove_wake_function+0x0/0x2e
 [8020a857] vfs_read+0xaa/0x152
 [8020faf3] sys_read+0x45/0x6e
 [8025041e] system_call+0x7e/0x83


Code: 4c 8b ae 98 00 00 00 4c 8b 70 08 e8 63 fe ff ff 8b 43 28 4c 
RIP  [8031504a] cfq_dispatch_insert+0x18/0x68
 RSP 8100789b5af8
CR2: 0098
 


Success report: Silicon Image 3132

2005-09-06 Thread linux
I'm currently setting up a new x86_64 server with 6x400G 7200.8
drives using 3x Sil3132 and the sata_sil24 driver.

Kernel 2.6.13 + 2.6.13-rc7-libata1.patch.bz2

So far, it kicks ass.  A simple RAID-0 across 4 drives on 2 controllers
lets me hit 270 MB/sec on zcav, and I'm having trouble getting bonnie++
to produce numbers instead of .  (It failed with 256K files;
I'm trying again with 1024K).

The one problem is that attempting to run hddtemp on one of the
drives kills the machine hard.  (For now, I'll just avoid doing that.)


Anyway, since this is fairly new and I was nervous about SATA support (I
avoided using the Nforce4 on-board SATA), I thought I'd send a success
report.  I couldn't get an AHCI controller on an AMD processor (except
for 2 ports in a ULi chipset), so after reviewing the alternatives,
it looked like Silicon Image was among the friendliest to Linux
driver development.


Which controllers can support port multipliers?

2005-08-17 Thread linux
I realize the software support isn't there now, but I'm having trouble
getting a lot of SATA drives into a cheap AMD64 computer.

Most everything has 4 ports on the motherboard, but all the cheap SATA
controllers are PCI-X, and all of the Socket 939 or 940 PCI-X motherboards
are expensive.  Yes, I can plug a PCI-X controller into an ordinary PCI
slot, but that's a big bandwidth hit.

So I'm thinking of starting my RAID system with 4x400 GB drives, but
getting a case that can hold more, and hoping that port multipliers will
appear by the time I need to expand.

But that means that I need to pick a motherboard that is hardware-capable
of port multiplier support, even if it isn't supported yet.

Does anyone know which controllers are capable of driving a port multiplier,
and which are definitely not?

Thanks!

(P.S. If anyone is searching, a cheap peripheral company named Addonics
makes Sil3124 PCI-X 4-port SATA cards.)


Re: Which controllers can support port multipliers?

2005-08-17 Thread linux
 All controllers support port multipliers, but libata does not support 
 them yet.

So even a Promise SX4 can use a port multiplier?  Great, thanks!

Yes, I'm fully aware there is no Linux support at the moment, and it's
toward the bottom of the to-do list, but I have some hope that it will
appear in a year or two.


Re: NCQ support NVidia NForce4 (CK804) SATAII

2005-08-10 Thread linux
To save Jeff the trouble of replying

 If NVidia followed the SATA-IO spec, then it should be possible to make
 them work with NCQ, or am I wrong about that?
 Or isn't it possible?

The SATA spec defines the interface between the SATA controller and the
hard drive.  It does not define in any way the interface between
the host processor and the SATA controller.  There is no mention of
controller registers and what the bits in them mean.

Given that a good controller involves not only the registers themselves,
but also a number of data structures pointed to by those registers,
there's quite a bit of complexity there.

Jeff has, quite sensibly, decided to focus his efforts on hardware whose
manufacturers haven't made special effort to keep useful documentation
away from him.  (By declaring them trade secrets and threatening to
punish any employees who might otherwise send him a copy.)

 I found a Product Brief/Specification and a Blockdiagramm.

That sort of thing is quite devoid of programming detail.  It's like
trying to navigate the New Jersey Turnpike using an early Dutch map of
New Amsterdam.

In fact, it was probably created before the programming interface was
even designed.  Somebody said we want these features, drew up the spec,
and handed that wishlist to the silicon hackers to fill in the details
and implement.

 Could it be possible to do reverse engineering?

Yes, but it's far more time-consuming.  In particular, early silicon
always has bugs, and finding the bugs and developing workarounds
is a PITA when you have the specs; without them, it can be a nightmare.

Jeff has plenty to do without making his life more difficult.
Reverse-engineering NVIDIA is at the bottom of the list.  He
may never get to it in person.

Of course, if you'd like to make an attempt...


Marvell 88SX[56]0[48]1 libata progress?

2005-01-26 Thread linux
I was just wondering if there's been any progress.
I'm about to invest in an Abit SU-2S with an 88sx6081 on board, and was
wondering how things were going.

I can use the Marvell binary driver as a stopgap, but I'm hoping that
an open-source driver (and eventually NCQ support) will appear before
too long.

I'm afraid I can't afford to sponsor development personally, so I don't
have the right to complain too loudly, but could I politely inquire?

Thanks!