Richard Elling richard.ell...@gmail.com writes:
In my experience, this looks like a set of devices sitting behind an
expander. I have seen one bad disk take out all disks sitting behind
an expander. I have also seen bad disk firmware take out all disks
behind an expander. I once saw a bad
I have been leading the charge in my IT department to evaluate the Sun
Fire X45x0 as a commodity storage platform, in order to leverage
capacity and cost against our current NAS solution which is backed by
EMC Fibre Channel SAN. For our corporate environments, it would seem
like a single machine
their BlueArc SAN and spent $100K for 15TB
(raw)... I spent $50K for 33TB (usable)...
David
David Glaser
Systems Administrator
LSA Information Technology
University of Michigan
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Solaris
Sent: Thursday
Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data
Greetings,
I have a Sun Fire 4600 running Solaris 10, running Sun Cluster 3.2.
SunOS hubdb004 5.10 Generic_120012-14 i86pc i386 i86pc
We ran into some space issues in /usr today, so as a quick fix, I created a
slice (c5t0d0s12) with about 25GB of disk in order to create some zfs
filesystems
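A minimal sketch of that quick fix, assuming the slice named above; the pool and filesystem names here are hypothetical, not from the original message:

```shell
# Create a pool on the ~25GB slice (single-slice pool: no ZFS redundancy)
zpool create usrpool c5t0d0s12

# Carve ZFS filesystems out of the pool and mount one where space is tight
zfs create usrpool/local
zfs set mountpoint=/usr/local usrpool/local
```

Note that a pool backed by a single slice has no self-healing redundancy; ZFS can detect corruption there but not repair it.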
Richard,
Having read your blog regarding the copies feature, do you have an
opinion on whether mirroring or copies are better for a SAN situation?
It strikes me that since we're discussing SAN and not local physical
disk, that for a system needing 100GB of usable storage (size chosen
for
and
to an even number of disks.
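For reference, the copies feature under discussion is a per-filesystem property; a sketch assuming a pool named `tank` (name is illustrative):

```shell
# Keep two copies of every block in this filesystem. This protects
# against localized block corruption, but not against losing the
# whole LUN -- mirroring across two SAN LUNs covers that case.
zfs set copies=2 tank/data
zfs get copies tank/data
```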
I still have yet to purchase the system due to my issues with finding
the right board with the right SATA controller. My desktop system at
home runs an nVidia 590a chipset on a Foxconn motherboard and Solaris
U3 will only recognize the DVD drive during installation
I considered this as well, but that's the beauty of marrying ZFS with
a hotplug SATA backplane :)
I chose to use the 5-in-3 hot-swap chassis to give me a
way to upgrade capacity in place, though the 4-in-3 would be just as
easy, albeit with higher risk.
1. hot-plug a new 500GB SATA disk
You don't have to do it all at once... ZFS will function fine with 1
large disk and 1 small disk in a mirror; it just means you will only
have as much space as the smaller disk.
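The upgrade-in-place path sketched above replaces each side of the mirror with a larger disk, one at a time; device names below are hypothetical:

```shell
# Swap in the new 500GB disk on one side of the mirror
zpool replace tank c1t2d0 c1t4d0
zpool status tank            # wait for the resilver to complete

# Then do the other side
zpool replace tank c1t3d0 c1t5d0
```

Once both sides are the larger size, the pool can grow to the new capacity; depending on the Solaris release, an export/import may be needed before the extra space appears.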
As things stand now, if you have multiple vdevs of diverse capacities
in a pool, the striping becomes
Greetings.
I applied the Recommended Patch Cluster including 120012-14 to a U3
system today. I upgraded my zpool and it seems like we have some very
strange information coming from zpool list and zfs list...
[EMAIL PROTECTED]:/]# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zpool02
Try exporting the pool then import it. I have seen this after moving disks
between systems, and on a couple of occasions just rebooting.
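The suggested export/import cycle looks like this, using the pool name from the earlier `zfs list` output:

```shell
# Export the pool, then re-import it; -f forces the import if the
# pool still looks in use by another host
zpool export zpool02
zpool import zpool02       # or: zpool import -f zpool02
```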
On 9/13/07, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Date: Thu, 13 Sep 2007 15:19:02 +0100
From: Peter Tribble [EMAIL PROTECTED]
Subject: [zfs-discuss]
suggestion would not be applicable to your situation.
On 9/13/07, Peter Tribble [EMAIL PROTECTED] wrote:
On 9/13/07, Solaris [EMAIL PROTECTED] wrote:
Try exporting the pool then import it. I have seen this after moving
disks
between systems, and on a couple of occasions just rebooting
Is it possible to force ZFS to nicely re-organize data inside a zpool
after a new root level vdev has been introduced?
e.g. Take a pool with 1 vdev consisting of a 2 disk mirror. Populate some
arbitrary files using about 50% of the capacity. Then add another 2
mirrored disks to the pool.
It
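ZFS does not rebalance existing data when a vdev is added; new allocations are simply biased toward the emptier vdev. The only way to redistribute old data is to rewrite it. A sketch, with hypothetical pool, device, and filesystem names:

```shell
# Add a second mirror vdev to the pool
zpool add tank mirror c3t0d0 c3t1d0

# Existing blocks stay where they are; to spread them across both
# vdevs, rewrite the data, e.g. via a local send/receive:
zfs snapshot tank/data@move
zfs send tank/data@move | zfs receive tank/data2
```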
Hi Jim,
The handout referenced is in fact the second of the two PDF documents
posted on the LOSUG website.
Cheers,
Joy
Jim Mauro wrote:
Is the referenced Laminated Handout on slide 3 available anywhere in
any form electronically?
If not, I'd be happy to create an electronic copy and
Hi Thomas,
The man page for zpool has:
zpool scrub [-s] pool ...
Begins a scrub. The scrub examines all data in the
specified pools to verify that it checksums correctly.
For replicated (mirror or raidz) devices, ZFS automatically repairs
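In practice, per the man page text quoted above (pool name `tank` is illustrative):

```shell
# Start a scrub of the pool and watch its progress
zpool scrub tank
zpool status -v tank

# -s stops a scrub that is in progress
zpool scrub -s tank
```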