Hi list,
From the ZFS documentation it is unclear to me whether a zpool
scrub will blacklist any bad blocks it finds so that they won't be used
anymore.
I know NetApp's WAFL scrub reallocates bad blocks and marks them as
unusable. Does ZFS have this kind of strategy?
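For reference, here is how I've been kicking off scrubs and checking
the results (the pool name "tank" is just an example):

    # start a scrub, then inspect per-device error counters
    zpool scrub tank
    zpool status -v tank   # READ/WRITE/CKSUM columns, plus any permanent errors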
Thanks.
--
Didier
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Didier Rebeix
From the ZFS documentation it is unclear to me whether a zpool
scrub will blacklist any bad blocks it finds so that they won't be used
anymore.
If there are any physically bad
ZFS detects far more errors, errors that traditional filesystems will simply miss.
This means that many of the possible causes for those errors will be
something other than a real bad block on the disk. As Edward said, the disk
firmware should automatically remap real bad blocks, so if ZFS did that
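On most drives you can actually watch that remapping happen via SMART;
with smartmontools, something like this (the device path is illustrative):

    # the reallocated-sector count grows as the firmware remaps bad blocks
    smartctl -A /dev/rdsk/c0t0d0
    # look at Reallocated_Sector_Ct and Current_Pending_Sector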
Very interesting... I didn't know disk firmware was responsible for
automagically relocating bad blocks. Knowing this, it makes no sense for
a filesystem to try to deal with this kind of error.
For now, any disk with read/write errors detected will be discarded
from my filers and replaced...
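In case it is useful to anyone doing the same, the swap itself is a
single command (pool and device names below are hypothetical):

    # replace the failing disk with a spare; ZFS resilvers automatically
    zpool replace tank c0t1d0 c0t2d0
    zpool status tank   # watch the resilver progress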
Hi all,
I'm trying to evaluate the risks of running an NFS share of a ZFS
dataset with the sync=disabled property. The clients are VMware hosts in our
environment and the server is a SunFire X4540 Thor system. Though the general
recommendation is not to do this, after testing performance with
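(For clarity, the setting under discussion is just this; the dataset
name is an example:)

    # disable synchronous write semantics on the exported dataset
    zfs set sync=disabled tank/vmware
    zfs get sync tank/vmware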
On Nov 8, 2011, at 6:38 AM, Evaldas Auryla wrote:
Hi all,
I'm trying to evaluate the risks of running an NFS share of a ZFS dataset
with the sync=disabled property. The clients are VMware hosts in our environment
and the server is a SunFire X4540 Thor system. Though the general recommendation
On Tue, November 8, 2011 09:38, Evaldas Auryla wrote:
I'm trying to evaluate the risks of running an NFS share of a ZFS
dataset with the sync=disabled property. The clients are VMware hosts in our
environment and the server is a SunFire X4540 Thor system. Though the general
recommendation is not to
On Tue, Nov 8, 2011 at 9:14 AM, Didier Rebeix
didier.reb...@u-bourgogne.fr wrote:
Very interesting... I didn't know disk firmware was responsible for
automagically relocating bad blocks. Knowing this, it makes no sense for
a filesystem to try to deal with this kind of error.
In the dark
Hello all,
I am thinking about a new laptop. I see that there are
a number of higher-performance models (incidentally, they
are also marketed as gamer ones) which offer two 2.5"
SATA bays and an SD flash card slot. Vendors usually
position the two-HDD-bay option as either getting lots of
capacity
Hello all,
I have an oi_148a PC with a single root disk, and
recently it began failing to boot - it hangs after the
copyright message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB that I have had around since
installation, I ran some zdb traversals over the rpool
and zpool import
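For anyone who wants to follow along, the checks I mean are roughly
these (device path is as on my box; flags limited to the ones I know):

    # print the vdev labels on the root disk's slice
    zdb -l /dev/rdsk/c0t0d0s0
    # force-import the pool under an alternate root from the LiveUSB
    zpool import -f -R /mnt rpool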
On Tue, 8 Nov 2011, Jim Klimov wrote:
Second question regards single-HDD reliability: I can
do ZFS mirroring over two partitions/slices, or I can
configure copies=2 for the datasets. Either way I
think I can get protection from bad blocks of whatever
nature, as long as the spindle spins. Can
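The two layouts I am weighing would be set up roughly like this
(slice names are just examples; the two commands are alternatives,
not a sequence):

    # option A: mirror two slices of the same disk
    zpool create tank mirror c0t0d0s3 c0t0d0s4
    # option B: a single slice, with every block stored twice
    zpool create tank c0t0d0s3
    zfs set copies=2 tank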
2011-11-08 22:30, Jim Klimov wrote:
Hello all,
I have an oi_148a PC with a single root disk, and
recently it began failing to boot - it hangs after the
copyright message whenever I use any of my GRUB menu options.
Thanks to my wife's sister, who is my hands and eyes near
the problematic PC, here's
2011-11-08 23:36, Bob Friesenhahn wrote:
On Tue, 8 Nov 2011, Jim Klimov wrote:
Second question regards single-HDD reliability: I can
do ZFS mirroring over two partitions/slices, or I can
configure copies=2 for the datasets. Either way I
think I can get protection from bad blocks of whatever
On Wed, 9 Nov 2011, Jim Klimov wrote:
Thanks, Bob, I figured so...
And would copies=2 save me from problems of data loss and/or
inefficient resilvering? Does all required data and metadata
get duplicated this way, so any broken sector can be amended?
I read on this list recently that some
Hi All,
On Wed, Nov 2, 2011 at 5:24 PM, Lachlan Mulcahy
lmulc...@marinsoftware.com wrote:
Now trying another suggestion sent to me by a direct poster:
* Recommendation from Sun (Oracle) to work around a bug:
* 6958068 - Nehalem deeper C-states cause erratic scheduling
behavior
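If memory serves, the workaround circulated for that bug is a pair of
/etc/system tunables, roughly as below (from memory; please verify
before relying on it):

    * keep Nehalem CPUs out of deep C-states (workaround for bug 6958068)
    set idle_cpu_prefer_mwait = 0
    set idle_cpu_no_deep_c = 1
    * a reboot is needed for /etc/system changes to take effect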
Hello all,
A couple of months ago I wrote up some ideas about clustered
ZFS with shared storage, but the idea was generally disregarded
as not something to be done in the near term due to technological
difficulties.
Recently I stumbled upon a Nexenta+Supermicro report [1] about
cluster-in-a-box
On Wed, Nov 09, 2011 at 03:52:49AM +0400, Jim Klimov wrote:
Recently I stumbled upon a Nexenta+Supermicro report [1] about
cluster-in-a-box with shared storage boasting an active-active
cluster with transparent failover. Now, I am not certain how
these two phrases fit in the same sentence,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Evaldas Auryla
I'm trying to evaluate the risks of running an NFS share of a ZFS
dataset with the sync=disabled property. The clients are VMware hosts in our
environment and the server is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
1) Use a ZFS mirror of two SSDs
- seems too pricey
2) Use an HDD with redundant data (copies=2 or mirroring
over two partitions), and an SSD for L2ARC (+maybe ZIL;
see the sketch below)
-
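For what it is worth, the SSD part of option 2 would be attached
roughly like this (pool and device names are examples; root pools may
not accept cache/log vdevs on all releases):

    # dedicate the SSD (or a slice of it) to the read cache (L2ARC)
    zpool add tank cache c1t0d0
    # and/or to a separate intent log (ZIL)
    zpool add tank log c1t1d0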
This is accomplished with the Nexenta HA cluster plugin. The plugin is
written by RSF, and you can read more about it here :
http://www.high-availability.com/
You can do either option 1 or two that you put forth. There is some
failover time, but in the latest version of Nexenta (3.1.1) there
Hi list,
My ZFS write performance is poor and I need your help.
I created a zpool with two raidz1 vdevs. When the space was about to
run out, I added another two raidz1 vdevs to extend the zpool.
After some days the zpool was almost full, so I removed some old data.
But now, as shown below, the first two raidz1 vdevs' usage is
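(The per-vdev usage I refer to comes from output like this; the pool
name is an example:)

    # show allocated/free space and I/O for each vdev
    zpool iostat -v tank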
On Wed, Nov 09, 2011 at 11:09:45AM +1100, Daniel Carosone wrote:
On Wed, Nov 09, 2011 at 03:52:49AM +0400, Jim Klimov wrote:
Recently I stumbled upon a Nexenta+Supermicro report [1] about
cluster-in-a-box with shared storage boasting an active-active
cluster with transparent failover.
To some people active-active means all cluster members serve the
same filesystems.
To others active-active means all cluster members serve some
filesystems and can serve all filesystems ultimately by taking over
failed cluster members.
Nico
--