I would get a new 1.5 TB and make sure it has the new firmware and replace
c6t3d0 right away - even if someone here comes up with a magic solution, you
don't want to wait for another drive to fail.
http://hardware.slashdot.org/article.pl?sid=09/01/17/0115207
Hi Jeffrey,
jeffrey huang wrote:
Hi, Jan,
After successfully installing AI on SPARC (zpool/zfs created), I want to
try the installation again without rebooting, so I want to destroy the rpool.
# dumpadm -d swap -- ok
# zfs destroy rpool/dump -- ok
# swap -l
# swap -d /dev/zvol/dsk/rpool/swap --
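If `swap -d` is the step that fails, a plausible cause is that `dumpadm -d swap` has made the swap zvol the active dump device, so it can no longer be detached. A hedged sketch of a diagnosis and an order that avoids the circularity (the slice used as an alternative dump device is purely an example):

```shell
# Hypothetical diagnosis: check whether dump now targets the swap zvol.
dumpadm                                  # show the current dump device
swap -l                                  # show active swap devices
# If dump points at /dev/zvol/dsk/rpool/swap, repoint it at another
# device (example slice below) before detaching swap:
dumpadm -d /dev/dsk/c0t0d0s1
swap -d /dev/zvol/dsk/rpool/swap
zfs destroy rpool/swap
```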
What does this mean? Does that mean that ZFS + HW raid with raid-5 is not able
to heal corrupted blocks? Then this is evidence against ZFS + HW raid, and you
should only use ZFS?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS works well with storage based
I would in this case also immediately export the pool (to prevent any
write attempts) and see about a firmware update for the failed drive
(probably need windows for this).
Sent from my iPhone
On Jan 20, 2009, at 3:22 AM, zfs user zf...@itsbeen.sent.com wrote:
I would get a new 1.5 TB and
Hi,
I'm completely new to Solaris, but have managed to bumble through installing it
to a single disk, creating an additional 3 disk RAIDZ array and then copying
over data from a separate NTFS formatted disk onto the array using NTFS-3G.
However, the single disk that was used for the OS
On Mon, Jan 19, 2009 at 5:39 PM, Adam Leventhal a...@eng.sun.com wrote:
And again, I say take a look at the market today, figure out a percentage,
and call it done. I don't think you'll find a lot of users crying foul over
losing 1% of their drive space when they don't already cry foul
Luke,
You're looking for a `zpool list`, followed by a `zpool import poolname`
after Solaris has correctly recognised the attachment of the three original
disks (ie. they appear in `format` and/or `cfgadm -al`).
Complete docs here, now you know what you are looking for ...
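A sketch of the sequence (the pool name `poolname` follows the example above; `zpool import` with no arguments is what scans attached disks for importable pools):

```shell
zpool import               # list pools visible on the attached disks
zpool import poolname      # import the RAIDZ pool by name
zpool status poolname      # confirm all three disks show ONLINE
```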
Luke Scammell wrote:
Hi,
I'm completely new to Solaris, but have managed to bumble through installing
it to a single disk, creating an additional 3 disk RAIDZ array and then
copying over data from a separate NTFS formatted disk onto the array using
NTFS-3G.
However, the single disk
I think maybe it means that if ZFS can't 'see' the block (the
controller does that in HW RAID), it can't checksum said block.
cheers,
Blake
On Tue, Jan 20, 2009 at 6:34 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
What does this mean? Does that mean that ZFS + HW raid with raid-5 is
Nobody can comment on this?
-Brian
Brian H. Nelson wrote:
I noticed this issue yesterday when I first started playing around with
zfs send/recv. This is on Solaris 10U6.
It seems that a zfs send of a zvol issues 'volblocksize' reads to the
physical devices. This doesn't make any sense to
Good observations, Eric, more below...
Eric D. Mudama wrote:
On Mon, Jan 19 at 23:14, Greg Mason wrote:
So, what we're looking for is a way to improve performance, without
disabling the ZIL, as it's my understanding that disabling the ZIL
isn't exactly a safe thing to do.
We're looking
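One frequently suggested middle ground (a sketch; `tank` and the device name are placeholders, not from this thread) is a separate intent-log device on fast storage, which speeds up synchronous writes without disabling the ZIL:

```shell
# Add a dedicated slog device; synchronous writes then commit to the
# SSD instead of the main pool disks.
zpool add tank log c4t0d0
zpool status tank          # the SSD now appears under a "logs" section
```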
Ross wrote:
The problem is they might publish these numbers, but we really have no way
of controlling what number manufacturers will choose to use in the future.
If for some reason future 500GB drives all turn out to be slightly
smaller than the current ones you're going to
Brian H. Nelson wrote:
Nobody can comment on this?
-Brian
Brian H. Nelson wrote:
I noticed this issue yesterday when I first started playing around with
zfs send/recv. This is on Solaris 10U6.
It seems that a zfs send of a zvol issues 'volblocksize' reads to the
physical devices.
I see http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
as a pretty outdated (3 years old) document. Is there any plan to update
it? Maybe somebody could update it every time a new ZFS pool version is
available?
--
Jesus Cea
Orvar Korvar wrote:
What does this mean? Does that mean that ZFS + HW raid with raid-5 is not
able to heal corrupted blocks? Then this is evidence against ZFS + HW raid,
and you should only use ZFS?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
ZFS works well
Any recommendations for an SSD to work with an X4500 server? Will the SSDs
used in the 7000 series servers work with X4500s or X4540s?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 1/20/2009 1:14 PM, Richard Elling wrote:
Orvar Korvar wrote:
What does this mean? Does that mean that ZFS + HW raid with raid-5 is not
able to heal corrupted blocks? Then this is evidence against ZFS + HW raid,
and you should only use ZFS?
mj == Moore, Joe joe.mo...@siemens.com writes:
mj For a ZFS pool, (until block pointer rewrite capability) this
mj would have to be a pool-create-time parameter.
naw. You can just make ZFS do it all the time, like the other storage
vendors do. no parameters.
You can invent
Nicolas Williams wrote:
I'd recommend waiting for ZFS crypto rather than using lofi with ZFS.
Wait... for how long? Any schedule?
I am very interested in ZFS Crypto, although I have lost hope of seeing
it in Solaris 10.
--
Jesus Cea Avion
So ZFS is not hindered at all, if you use it in conjunction with HW raid? ZFS
can utilize all functionality and heal corrupted blocks without problems -
with HW raid?
Probably Richard Elling's blog:
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
Miles Nordin wrote:
mj == Moore, Joe joe.mo...@siemens.com writes:
mj For a ZFS pool, (until block pointer rewrite capability) this
mj would have to be a pool-create-time parameter.
naw. You can just make ZFS do it all the time, like the other storage
vendors do. no
d...@yahoo.com said:
Any recommendations for an SSD to work with an X4500 server? Will the SSDs
used in the 7000 series servers work with X4500s or X4540s?
The Sun System Handbook (sunsolve.sun.com) for the 7210 appliance (an
X4540-based system) lists the logzilla device with this fine print:
[I hate to keep dragging this thread forward, but...]
Moore, Joe wrote:
And there is no way to change this after the pool has been created,
since after that time, the disk size can't be changed. So whatever
policy is used by default, it is very important to get it right.
Today, vdev size can
I have been testing the 32 GB X25-E last week.
When I connect it to one of the onboard (Tyan 2925) SATA ports, it's not
detected by OpenSolaris 2008.11.
When I connect it to a PCIe LSI 3081, the disk is found, but I run into
trouble when I run performance tests via filebench.
Filebench
jm == Moore, Joe joe.mo...@siemens.com writes:
jm Sysadmins should not be required to RTFS.
I never said they were. The comparison was between hardware RAID and
ZFS, not between two ZFS alternatives. The point: other systems'
behavior is entirely secret. Therefore, secret opaque
I have configured a test system with a mirrored rpool and one hot spare. I
powered the system off and pulled one of the disks from rpool to simulate a
hardware failure.
The hot spare is not activating automatically. Is there something more I
should have done to make this work?
pool: rpool
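For reference, a hedged sketch of how the spare could be brought in by hand if it never activates on its own (pool and device names here are hypothetical):

```shell
zpool status rpool                   # the spare shows INUSE once active
# If it still shows AVAIL, swap the failed disk for the spare manually:
zpool replace rpool c0t1d0 c0t2d0    # <failed disk> <hot spare>
```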
On Tue, 20 Jan 2009 12:13:00 PST, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
So ZFS is not hindered at all, if you use it in conjunction
with HW raid? ZFS can utilize all functionality
and heal corrupted blocks without problems - with HW raid?
Only if you build the zpool from a mirror
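That is, ZFS can only self-heal when it holds its own redundant copy, for example a pool built as a mirror of two hardware-RAID LUNs (the LUN names below are placeholders):

```shell
# Mirror two HW-RAID LUNs so ZFS has a second copy to repair from
# when a checksum fails.
zpool create tank mirror c2t0d0 c3t0d0
```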
An interesting interpretation of using hot spares.
Could it be that the hot-spare code only fires if the disk goes down
whilst the pool is active?
hm.
Nathan.
Scot Ballard wrote:
I have configured a test system with a mirrored rpool and one hot spare.
I powered the systems off, pulled one
What software are you running? There was a bug where offline device
failure did not trigger hot spares, but that should be fixed now (at
least in OpenSolaris, not sure about s10u6).
- Eric
On Wed, Jan 21, 2009 at 09:57:42AM +1100, Nathan Kroenert wrote:
An interesting interpretation of using
On Tue, Jan 20, 2009 at 2:26 PM, Moore, Joe joe.mo...@siemens.com wrote:
Other storage vendors have specific compatibility requirements for the
disks you are allowed to install in their chassis.
And again, the reason for those requirements is 99% about making money, not
a technical one. If
The user DEFINITELY isn't expecting 5 x 10^11 bytes, or what you meant to say,
500 x 10^9 bytes; they're expecting 500GB. You know, 536,870,912,000 bytes.
But even if the drive mfg's calculated it correctly, they wouldn't even be
getting that due to filesystem overhead.
I doubt there are
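The decimal/binary gap is easy to check with plain shell arithmetic (nothing vendor-specific here):

```shell
decimal=$((500 * 1000 * 1000 * 1000))   # 500 GB as manufacturers count it
binary=$((500 * 1024 * 1024 * 1024))    # 500 GB as many users expect it
echo "decimal: $decimal bytes"          # decimal: 500000000000 bytes
echo "binary:  $binary bytes"           # binary:  536870912000 bytes
echo "shortfall: $((binary - decimal)) bytes"   # about 6.9% of the binary figure
```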
On Tue, Jan 20 at 9:04, Richard Elling wrote:
Yes. And I think there are many more use cases which are not
yet characterized. What we do know is that using an SSD for
the separate ZIL log works very well for a large number of cases.
It is not clear to me that the efforts to characterize a
Sigh. Richard points out in private email that automatic savecore functionality
is disabled in OpenSolaris; you need to manually set up a dump device and save
core files if you want them. However, the stack may be sufficient to ID the bug.
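A sketch of the manual setup (the zvol path, size, and crash directory are conventional choices assumed here, not taken from the thread):

```shell
# Create a dump zvol, point crash dumps at it, and tell savecore
# where to write saved core files.
zfs create -V 1g rpool/dump                 # size is only an example
dumpadm -d /dev/zvol/dsk/rpool/dump         # dedicated dump device
dumpadm -s /var/crash/`hostname`            # savecore output directory
savecore -L                                 # capture a live dump on demand
```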
On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
On Tue, Jan 20 at 9:04, Richard Elling wrote:
Yes. And I think there are many more use cases which are not
yet characterized. What we do know is that using an SSD for
the separate ZIL log works very well for a large number of cases.
It is not
so you're suggesting I buy 750s to replace the 500s. then if a 750 fails buy
another bigger drive again?
the drives are RMA replacements for the other disks that faulted in the array
before. they are the same brand, model and model number, apparently not so
under the label though, but no way I