Hi Tejun,
I'm having some trouble with my e-SATA ports being reset. I'm
testing 2.6.22.1 with the 20070808 patch tarball on a Norco DS-1220
flashed to Silicon Image BIOS version 6.4.09 (the latest).
I'm testing with 6 500 GB SATA drives, shown as:
WD5000AAKS-22TMA0, 12.01C01, max UDMA/133
Hello, Rusty.
Rusty Conover wrote:
> I'm having some trouble with my e-SATA ports being reset. I'm testing
> 2.6.22.1 with the 20070808 patch tarball on a Norco DS-1220 flashed to
> Silicon Image BIOS version 6.4.09 (the latest).
>
> I'm testing with 6 500 GB SATA drives, shown as:
>
> WD5000AAKS-22TMA0, 12.01C01, max UDMA/133
Rusty Conover wrote:
> Hello Tejun,
>
> Thanks for your reply.
>
>
>> * Please post kernel log including boot messages and errors.
>>
>
> I've included the full log at the bottom of this message.
>
>> * Please post the result of 'hdparm -I /dev/sdX' where sdX is the
>> offending device.
>>
>>
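(For anyone reproducing this, collecting both things Tejun asked for
looks roughly like the following; /dev/sdb is only a stand-in for the
offending device:)

  dmesg > boot-messages.txt           # kernel log, incl. boot and errors
  hdparm -I /dev/sdb > identify.txt   # full IDENTIFY data for the drive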
Hi Tejun,
I've taken your advice, reseated and re-cabled everything. I did
find one bad drive that I've removed, but sadly I'm still having
problems.
I've done some more testing that may be able to help you out.
I've tested all 5 WDC drives; they all work. The problem is I get
this ex
Hi Tejun,
Just as some further testing and poking, I added the drives to the
list of disks to disable NCQ for; it didn't resolve the issue.
I increased the PMP timeout to 1000 rather than 250, and that didn't
resolve the problem either.
The interface still has timeout errors writing the ext
Rusty Conover wrote:
> Hi Tejun,
>
> Just as some further testing and poking, I added the drives to the list
> of disks to disable NCQ for; it didn't resolve the issue.
>
> I increased the PMP timeout to 1000 rather than 250, and that didn't
> resolve the problem either.
>
> The interface still has timeout errors writing the ext
On Aug 21, 2007, at 9:03 PM, Tejun Heo wrote:
Rusty Conover wrote:
Hi Tejun,
Just as some further testing and poking, I added the drives to the
list of disks to disable NCQ for; it didn't resolve the issue.
I increased the PMP timeout to 1000 rather than 250, and that didn't
resolve the problem either.
Rusty Conover wrote:
> Putting just one hard disk into the PMP slots works great all by
> itself. I created an ext3 fs on it, used dd to dump lots of data to it,
> no problems in all of my testing.
Hmm...
> One hard disk in a PMP slot and another hard disk in a native slot on a
> different SATA
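(A sketch of the stress test being described; the device name, mount
point and size are only examples:)

  mkfs.ext3 /dev/sdb1
  mount /dev/sdb1 /mnt/test
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192
  sync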
On Aug 21, 2007, at 11:43 PM, Tejun Heo wrote:
Rusty Conover wrote:
Putting just one hard disk into the PMP slots works great all by
itself. I created an ext3 fs on it, used dd to dump lots of data to
it, no problems in all of my testing.
Hmm...
One hard disk in a PMP slot and another
Does the attached patch make any difference?
--
tejun
---
drivers/ata/libata-core.c |    1 +
1 file changed, 1 insertion(+)
Index: work1/drivers/ata/libata-core.c
===================================================================
--- work1.orig/drivers/ata/libata-core.c
+++ work1/drivers/ata/
On Aug 22, 2007, at 12:39 AM, Tejun Heo wrote:
Does the attached patch make any difference?
--
tejun
---
drivers/ata/libata-core.c |    1 +
1 file changed, 1 insertion(+)
Index: work1/drivers/ata/libata-core.c
===================================================================
--- work1.
Rusty Conover wrote:
> After adding a semicolon to the added line and recompiling, there are
> still timeouts like before on the PMP ports.
Oops.
> It did have the effect of setting the SATA speed to 1.5 rather than 3.0
> on boot though.
Yeah, that was the intention. I'm running out of ideas.
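(One quick way to confirm what speed a link actually came up at is to
grep the kernel log; the message format below is from memory of the
libata output and may differ slightly:)

  dmesg | grep -i 'SATA link up'
  # e.g.  ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)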
On Aug 22, 2007, at 1:02 AM, Tejun Heo wrote:
Rusty Conover wrote:
After adding a semicolon to the added line and recompiling, there are
still timeouts like before on the PMP ports.
Oops.
It did have the effect of setting the SATA speed to 1.5 rather than
3.0 on boot though.
Yeah, that was the intention. I'm running out of ideas.
Hello, Rusty.
Rusty Conover wrote:
> I have some interesting results.
>
> I had a pair of Seagate 250 GB SATA disks (models below) and tried those
> out rather than the WDs. At 1.5 Gbps they appear to work just fine,
> both being on the same PMP; at 3.0 Gbps they time out just like
>
On Aug 24, 2007, at 8:19 PM, Tejun Heo wrote:
Hello, Rusty.
Rusty Conover wrote:
I have some interesting results.
I had a pair of Seagate 250 GB SATA disks (models below) and tried
those out rather than the WDs. At 1.5 Gbps they appear to work just
fine, both being on the same PMP; at 3.0 Gbps they time out just like
Hello, Rusty.
Rusty Conover wrote:
> After all this, I decided to send back the piece of hardware and just
> switch to a solution that has the SATA ports on the main board. Thanks
> for your help trying to get all of this working; I probably just had a
> bad adaptor card or drive enclosure.
3726/4726 work very well under high load with most drives. I guess
you had some problem with the cage.
> 3726/4726 work very well under high load with most drives. I guess
> you had some problem with the cage.
Does anyone have any performance figures to share with these PMP
interfaces?
Regards,
Richard
Richard Scobie wrote:
> 3726/4726 work very well under high load with most drives. I guess
> you had some problem with the cage.
Does anyone have any performance figures to share with these PMP
interfaces?
Hello,
What exactly are you looking for? For me it behaves exactly as
intended,
Hi Petr,
> Though I do not run any RAID on them (I would say that with the PMP
> bottleneck it would be a bad idea), so maybe I'm not stressing them
> sufficiently to trip over bugs.
Thanks for your reply.
Sorry, I should have asked "Does anyone have any performance figures to
share, using md RAID
On Mon, Aug 27, 2007 at 08:08:08PM +1200, Richard Scobie wrote:
> I was just interested to see if anyone had tested a similar md RAID 5 using
> these chips on Linux.
/dev/md2 is a 5-disk RAID5 on a Sonnet Fusion 500 enclosure (3726 based):
http://www.sonnettech.com/product/fusiond500p-eseries.ht
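(For context, a 5-disk RAID5 like that is typically assembled along
these lines; the member device names here are placeholders:)

  mdadm --create /dev/md2 --level=5 --raid-devices=5 /dev/sd[b-f]
  mdadm --detail /dev/md2    # check level, chunk size and state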
Tejun Heo wrote:
> Hello, Rusty.
>
> Rusty Conover wrote:
> > After all this, I decided to send back the piece of hardware and just
> > switch to a solution that has the SATA ports on the main board. Thanks
> > for your help trying to get all of this working; I probably just had a
> > bad adaptor card or drive enclosure.
Petr Vandrovec wrote:
> For comparison, 1TB Hitachi behind 3726 PMP (again MS4UM) with sata_sil
> patch I sent last week (no NCQ, 1.5Gbps link between 3512 and PMP, and
> 3.0Gbps link between PMP and drive... why is it faster?):
If you turn off NCQ by echoing 1 to /sys/block/sdd/device/queue_depth
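(For reference, the runtime toggle being referred to, using the same
example device:)

  echo 1 > /sys/block/sdd/device/queue_depth   # depth 1 = NCQ off
  cat /sys/block/sdd/device/queue_depth        # should now read 1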
Tejun Heo wrote:
Petr Vandrovec wrote:
For comparison, 1TB Hitachi behind 3726 PMP (again MS4UM) with sata_sil
patch I sent last week (no NCQ, 1.5Gbps link between 3512 and PMP, and
3.0Gbps link between PMP and drive... why is it faster?):
If you turn off NCQ by echoing 1 to /sys/block/sdd/device/queue_depth
Hello,
Petr Vandrovec wrote:
> I have recompiled the kernel with all debugging disabled, and it brought
> me 1.5MBps, so it is still consistently 1MBps slower than on sil.
> Disabling NCQ seems to improve concurrent access a bit (for which I have
> no explanation), while it slows down the single drive scenar
Hi,
Tejun said:
> Yeah, that seems to be the hardware limit and is consistent with what
> I hear from non-linux people too.
My comment earlier regarding "broken silicon" was based on comments here
and reports from Mac users and only pertained to the 3132.
For some reasonably impressive num
On Tue, Sep 04, 2007 at 07:39:16AM +1200, Richard Scobie wrote:
> > Yeah, that seems to be the hardware limit and is consistent with what
> > I hear from non-linux people too.
>
> My comment earlier regarding "broken silicon" was based on comments here
> and reports from Mac users and only pertained to the 3132.
Robin H. Johnson wrote:
The single PMP numbers they have (under "Addonics ADSA3GPX8-4EM Striped
RAID Set Performance Comparison") are Write=211MB/sec, Read=231MB/sec.
True. I wonder if the bus spec of 3Gb/s is somewhat optimistic in the
real world - a bit like saying one can get 132MB/s from a
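(A rough sanity check on those numbers: SATA's 3.0 Gbps is a line rate
before 8b/10b encoding, so the payload ceiling is 3.0 Gbps x 8/10 / 8 =
300 MB/s, and the 132 MB/s figure is classic 32-bit/33 MHz PCI: 33 MHz
x 4 bytes = 132 MB/s. The 211-231 MB/sec above is thus roughly 70-77%
of the SATA ceiling.)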
Tejun Heo wrote:
Hello,
Petr Vandrovec wrote:
I have recompiled the kernel with all debugging disabled, and it
brought me 1.5MBps, so it is still consistently 1MBps slower than on
sil. Disabling NCQ seems to improve concurrent access a bit (for which
I have no explanation), while it slows down the single
Petr Vandrovec wrote:
Tejun Heo wrote:
Hello,
Petr Vandrovec wrote:
I have recompiled the kernel with all debugging disabled, and it
brought me 1.5MBps, so it is still consistently 1MBps slower than on
sil. Disabling NCQ seems to improve concurrent access a bit (for which
I have no explanation)
On Wed, Sep 05, 2007 at 05:08:00AM -0700, Petr Vandrovec wrote:
> > 3124-2 (norco 4618):
> > NCQ:
> > 1TB alone: 82.30, 82.43
> > 1TB+1TB: 68.36+68.25
> > noNCQ:
> > 1TB alone: 82.39, 82.29
> > 1TB+1TB: 70.33+70.32, 69.47+70.01
> > Unfortunately that enclosure has only two slots used.
Robin H. Johnson wrote:
On Wed, Sep 05, 2007 at 05:08:00AM -0700, Petr Vandrovec wrote:
3124-2 (norco 4618):
NCQ:
1TB alone: 82.30, 82.43
1TB+1TB: 68.36+68.25
noNCQ:
1TB alone: 82.39, 82.29
1TB+1TB: 70.33+70.32, 69.47+70.01
Unfortunately that enclosure has only two slots used. I'll
Petr Vandrovec wrote:
>>> Hmm... Weird. Is the difference still there if you take the PMP out
>>> of the picture?
>>
>> Will do tomorrow. I need physical access to the box to do that.
>
> Yes, no difference. 3512 is consistently about 1MBps faster than 3132
> when talking to a single Hitachi 1TB drive
Petr Vandrovec wrote:
> Concurrent hdparm -t, like
>
> hdparm -t /dev/sdd & hdparm -t /dev/sde & hdparm -t /dev/sdf & hdparm -t
> /dev/sdg & sleep 20
>
> (and from hdparm output & visually confirmed that all activity LEDs go
> on & off simultaneously)
Not sure whether it matters but 'hdparm' tes
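(A way to cross-check hdparm's figures with plain concurrent sequential
reads, reusing Petr's device names; dd prints the achieved throughput
as each copy finishes:)

  for d in sdd sde sdf sdg; do
      dd if=/dev/$d of=/dev/null bs=1M count=2048 &
  done
  wait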
Robin H. Johnson wrote:
/dev/md2:
Timing cached reads: 2334 MB in 2.00 seconds = 1167.20 MB/sec
Timing buffered disk reads: 350 MB in 3.01 seconds = 116.32 MB/sec
It should exceed that speed - if I run hdparm -tT on 3 or more separate drives
in the array at the same time, their combined
On Tue, Aug 28, 2007 at 11:31:02AM +1200, Richard Scobie wrote:
> http://www.barefeats.com/quick.html
> and scroll down to the December 23rd 2006 entry.
> "This is due to the fact that all current ExpressCard products use the
> Silicon Image 3132 chip set and, for some reason, that's as fast a