dd if=/dev/rdsk/cXtYd0p0 of=/dev/null
and then try pulling out the disk. The dd should return with an I/O
error virtually immediately. If it doesn't, then
ZFS is probably not the issue. You can also issue the command cfgadm
and see what it lists as the state(s) of the
various disks.
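Lida's pull-the-disk test can be sketched as a small script. This is a sketch under assumptions: the device path is a placeholder for the cXtYdZ name of the disk you intend to pull, and dd/cfgadm behave as on Solaris 10/Nevada.

```shell
#!/bin/sh
# Sketch of the pull-the-disk test described above. DISK is a placeholder;
# substitute the raw device path of the disk you plan to pull.
DISK=${DISK:-/dev/rdsk/c1t4d0p0}

if [ ! -e "$DISK" ]; then
    RESULT="no such device: $DISK (set DISK to a real raw disk path)"
    echo "$RESULT"
else
    # Stream reads from the raw device, bypassing ZFS. If the driver's error
    # path is healthy, this dd should die with an I/O error within seconds
    # of the disk being pulled.
    dd if="$DISK" of=/dev/null bs=128k &
    # cfgadm shows the configuration state of each SATA port/disk.
    cfgadm -a
    RESULT="started"
fi
```

If the dd keeps hanging after the pull, the problem is below ZFS (driver or hardware), which is the point of doing the read against the raw device rather than through the filesystem.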
Hope that helps,
Lida Horn
Christophe Dupre wrote:
Tim,
the system is a Silicon Mechanics A266; the motherboard is a
SuperMicro H8DM8E-2
I tried plugging the Marvell card in both 133MHz PCI-X slots. In one I
get a lockup during install, in the other I get a reset on first boot.
Just a shot in the dark, but is it
Tim wrote:
On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn
[EMAIL PROTECTED]
wrote:
I see that the configuration tested in this X4500 writeup only uses
the four built-in gigabit ethernet interfaces. This places a natural
limit on the amount of
Andrius wrote:
Bob Friesenhahn wrote:
On Mon, 16 Jun 2008, Andrius wrote:
Thanks! It works. Volume management is perhaps the thing that does not
exist in ZFS, which makes disk management easier. Thanks to
everybody for the advice.
Volume Manager should be off before creating pools in
Carson Gaspar wrote:
[ Sending this here, as I've publicly complained about this bug on the
ZFS list previously, and there have been prior threads related to the
fix hitting OpenSolaris ]
For those of you who have been suffering marvell device resets and hung
I/Os on Sol 10 U4 with NCQ
Paul Raines wrote:
Well, I ran updatemanager and started applying about 64 updates. After
the progress meter got about half way, it seemed to hang, not moving for
hours. I finally gave up and did a reboot. But the machine would not
reboot. I went in the ILOM and tried 'stop /SYS' but after a
Marc Bevand wrote:
Paul Raines raines at nmr.mgh.harvard.edu writes:
Mar 9 03:22:16 raidsrv03 sata: NOTICE:
/pci@0,0/pci1022,7458@1/pci11ab,11ab@1:
Mar 9 03:22:16 raidsrv03 port 6: device reset
[...]
The above repeated a few times but now seems to have stopped.
Running 'hd
Jason J. W. Williams wrote:
X4500 problems seconded. Still having issues with port resets due to
the Marvell driver, though they seem considerably more transient and
less likely to lock up the entire system in the most recent (b72)
OpenSolaris builds.
Build 72 is pretty old. The build
Carson Gaspar wrote:
Tim wrote:
A much cheaper (and probably the BEST supported) card is the SuperMicro
based on the Marvell chipset. This is the same chipset that is used in
the Thumper X4500, so you know that the folks at Sun are doing their due
diligence to make sure the
Will Murnane wrote:
On Feb 12, 2008 4:45 AM, Lida Horn [EMAIL PROTECTED] wrote:
The latest changes to the sata and marvell88sx modules
have been put back to Solaris Nevada and should be
available in the next build (build 84). Hopefully,
those of you who use it will find the changes helpful.
This message posted from opensolaris.org
I now have improved sata and marvell88sx driver modules that
deal with various error conditions in a much more solid way.
Changes include reducing the number of required device resets,
properly reporting media errors (rather than no additional sense), and
clearing aborted packets more rapidly so
The 'reset: no matching NCQ I/O found' issue appears to be related to the
error recovery for bad blocks on the disk. In general it should be harmless,
but
I have looked into this. If there is someone out there who:
1) Is hitting this issue, and
2) Is running recent Solaris Nevada bits (not
There is no way (short of patching the text of the driver) to alter the
allowed SATA communication speeds for the marvell88sx driver. If you wish,
you can request an RFE (Request For Enhancement), but I don't think it
will be given high priority.
Sorry,
Lida Horn
Thanks again,
Eric
Stuart Anderson wrote:
After applying 125205-07 on two X4500 machines running Sol10U4 and
removing 'set sata:sata_func_enable = 0x5' from /etc/system to
re-enable NCQ, I am again observing drive disconnect error messages.
This is in spite of the patch description, which claims multiple fixes
in
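For reference, the /etc/system tunable Stuart mentions looks like the fragment below. The bitmask interpretation is my assumption (worth verifying against your sata module's documentation): sata_func_enable is understood to be a feature bitmask, and 0x5 is the value that leaves NCQ's bit cleared.

```
* /etc/system fragment (Solaris). sata_func_enable appears to be a bitmask
* of SATA framework features; 0x5 is commonly cited as "queuing and event
* processing on, NCQ off". Removing or commenting out this line restores
* the default and re-enables NCQ. A reboot is required for /etc/system
* changes to take effect.
set sata:sata_func_enable = 0x5
```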
Jeff Creek wrote:
I posted this in ZFS-Discuss. Eric Schrock suggested I ask in this forum.
I am trying to test a new setup of NV74. I have set up the system with ZFS
boot. Everything works fine until I pull a drive. The system locks up when I
try to run any command, e.g. zpool status.
?bug_id=6564677
This bug and its related fix have nothing to do with the ZFS scrub issue.
Regardless, I would like to know if this is happening with the
marvell88sx driver (and if so, what hardware) or with some other driver
and hardware.
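One way to answer Lida's question on your own box is to check whether marvell88sx is the module in play. A sketch assuming Solaris's modinfo (which lists loaded modules when given no arguments) and prtconf; on other systems it simply reports that the module is absent.

```shell
#!/bin/sh
# Sketch: check whether the marvell88sx module is loaded. Solaris-only
# commands; elsewhere this just reports that the module is not present.
if modinfo 2>/dev/null | grep -q marvell88sx; then
    DRIVER_STATE="marvell88sx loaded"
else
    DRIVER_STATE="marvell88sx not loaded (or not a Solaris host)"
fi
echo "$DRIVER_STATE"

# prtconf -D prints the driver bound to each device node, so grepping it
# shows which disk controllers marvell88sx is actually attached to.
prtconf -D 2>/dev/null | grep -i marvell
exit 0
```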
Regards,
Lida Horn
James C. McPherson
--
Solaris kernel
Point one, the comments that Eric made do not give the complete picture.
All the tests that Eric is referring to were done through the ZFS filesystem.
When sequential I/O is done to the disk directly there is no performance
degradation at all. Second point, it does not take any additional
time in
Roch Bourbonnais wrote:
On 29 May 07 at 22:59, [EMAIL PROTECTED] wrote:
When sequential I/O is done to the disk directly there is no
performance
degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk
And yes, I would feel better if this driver were open
sourced, but that is Sun's decision to make.
Well, no. That is Marvell's decision to make. Marvell is
the one who made the determination that the driver
could not be open sourced, not Sun. Since Sun
needed information received under
We just put together a new system for ZFS use at a company, and twice
in one week we've had the system wedge. You can log on, but the zpools
are hosed, and a reboot never occurs if requested since it can't
unmount the zfs volumes. So, only a power cycle works.
I've tried to reproduce