OK, problem solved.
I had incorrectly assumed that the server wasn't booting; the longest I had
left it was overnight, and there was still no logon prompt in the morning! The
reality is that it was just taking a very long time due to an excessive number
of automatically created snapshots by
I can confirm that on an X4240 with the LSI (mpt) controller:
X25-M G1 with 8820 still returns invalid selftest data
X25-E G1 with 8850 now returns correct selftest data
(I haven't got any X25-M G2)
Going to replace an X25-E with the old firmware in one of our X4500s
soon and we'll see if things
Now tested a firmware 8850 X25-E in one of our X4500s, and things look better:
# /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0
smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
No self-tests have been logged.
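For anyone wanting to repeat the check, the usual smartctl cycle looks roughly
like this (device path and -d option are simply the ones from the transcript
above; a short test typically takes a minute or two):

# /ifm/bin/smartctl -d scsi -t short /dev/rdsk/c5t7d0s0
(wait for the test to complete)
# /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0

With the 8850 firmware the second command should report a valid self-test log
rather than the invalid data seen with 8820.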
On Sat, 12 Sep 2009, Jeremy Kister wrote:
scrub: resilver in progress, 0.12% done, 108h42m to go
[...]
  raidz1    DEGRADED     0     0     0
    c3t8d0  ONLINE       0     0     0
    c5t8d0  ONLINE       0     0     0
    c3t9d0  ONLINE       0     0
[Originally posted to indiana-discuss]
On certain X86 machines there's a hardware/software glitch
that causes odd transient checksum failures that always seem
to affect the same files even if you replace them. This has
been submitted as a bug:
Bug 11201 - Checksum failures on mirrored drives -
On Sun, 2009-09-13 at 11:01 -0700, Stefan Parvu wrote:
5. Disconnecting the other disk. Problems occur:
# zpool status zones
pool: zones
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error.
Thanks for the reply, but this seems to be a bit different.
A couple of things I failed to mention:
1) This is a secondary pool, not the root pool.
2) The snapshots are trimmed to keep only 80 or so.
The system boots and runs fine. It's just an issue for this secondary pool
and
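For what it's worth, the trimming mentioned in 2) can be done with a one-liner
along these lines (the filesystem name tank/zones and the keep count are
illustrative placeholders; untested sketch, and note that some xargs
implementations run the command even on empty input):

# zfs list -H -t snapshot -o name -s creation -r tank/zones | \
    awk -v keep=80 '{ l[NR] = $0 } END { for (i = 1; i <= NR - keep; i++) print l[i] }' | \
    xargs -n1 zfs destroy

Since -s creation sorts oldest-first, the awk filter prints everything except
the newest 80 snapshots, and those are destroyed.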
Hello all,
I have a situation where zpool status shows no known data errors, but all
processes on a specific filesystem are hung. This has happened twice before
since we installed OpenSolaris 2009.06 snv_111b. For instance, there are two
file systems in this pool; 'zfs get all' on one
Is it possible to create a flar image of a ZFS root filesystem to install it to
other machines?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
RB wrote:
I have zfs on my base T5210 box installed with LDOMS (v.1.0.3). Every time I try to jumpstart my Guest machine, I get the following error.
ERROR: One or more disks are found, but one of the following problems exists:
- Hardware failure
- The disk(s) available on
RB wrote:
Is it possible to create a flar image of a ZFS root filesystem to install it to
other machines?
Yes, but it needs Solaris update 7 or later to install a ZFS flar. See
http://www.opensolaris.org/os/community/zfs/boot/flash/
isn't supported on
Hi RB,
We have a draft of the ZFS/flar image support here:
http://opensolaris.org/os/community/zfs/boot/flash/
Make sure you review the Solaris OS requirements.
Thanks,
Cindy
On 09/14/09 11:45, RB wrote:
Is it possible to create flar image of ZFS root filesystem to install it to
other
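To make the thread a bit more self-contained: per the document linked above,
creating and deploying a ZFS root flar goes roughly like this (archive name,
NFS path, and disk are placeholders; check the linked OS requirements first):

# flarcreate -n zfsBE -c /net/server/export/flash/zfsBE.flar

then, in the JumpStart profile on the install server:

install_type flash_install
archive_location nfs server:/export/flash/zfsBE.flar
partitioning explicit
pool rpool auto auto auto c0t0d0s0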
Absent any replies to the list, submitted as a bug:
http://defect.opensolaris.org/bz/show_bug.cgi?id=11358
Cheers -- Frank
All,
IHAC who is asking the following. Reviewing this document:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6nc
it appears it may not, since the parent setting will be inherited downwards by
the child.
Can anyone confirm whether this is correct?
Thanks
Peter
the question is