From: Arne Jansen [mailto:sensi...@gmx.net]
> Edward Ned Harvey wrote:
>> Due to recent experiences, and discussion on this list, my colleague and I performed some tests:
>> Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log device removal that was introduced
On Wed, Jun 30, 2010 at 09:47:15AM -0700, Edward Ned Harvey wrote:
> From: Arne Jansen [mailto:sensi...@gmx.net]
>> Edward Ned Harvey wrote:
>>> Due to recent experiences, and discussion on this list, my colleague and I performed some tests:
>>> Using solaris 10, fully upgraded. (zpool 15
On 12 apr 2010, at 22.32, Carson Gaspar wrote:
> Carson Gaspar wrote:
>> Miles Nordin wrote:
>>> "re" == Richard Elling <richard.ell...@gmail.com> writes:
>>> How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do
On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote:
> To be safe, the protocol needs to be able to discover that the devices (host or disk) have been disconnected and reconnected, or have been reset, and that either party's assumptions about the state of the other have to be invalidated.
I
On 30 jun 2010, at 22.46, Garrett D'Amore wrote:
> On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote:
>> To be safe, the protocol needs to be able to discover that the devices (host or disk) have been disconnected and reconnected, or have been reset, and that either party's assumptions about
Ragnar Sundblad wrote:
> I was referring to the case where zfs has written data to the drive but still hasn't issued a cache flush, and before the cache flush the drive is reset. If zfs finally issues a cache flush and then isn't informed that the drive has been reset, data is lost.
I hope this
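The failure mode Ragnar describes can be sketched with a toy model (this is not ZFS or driver code; every name here is illustrative): the drive acknowledges writes into a volatile cache, a reset silently empties that cache, and a later flush from a host that was never told about the reset "succeeds" even though the acknowledged data is already gone.

```python
class Drive:
    """Toy model of a disk with a volatile write cache."""

    def __init__(self):
        self.media = {}   # blocks that are durable on the platter
        self.cache = {}   # blocks acknowledged but held only in DRAM

    def write(self, lba, data):
        self.cache[lba] = data         # acked immediately, not yet durable

    def flush(self):
        self.media.update(self.cache)  # SYNCHRONIZE CACHE: commit the cache
        self.cache.clear()

    def reset(self):
        self.cache.clear()             # power loss/reset drops cached writes

drive = Drive()
drive.write(0, b"txg data")   # zfs writes, drive acks the write
drive.reset()                 # drive is reset before any cache flush
drive.flush()                 # host, unaware of the reset, flushes: no error
assert 0 not in drive.media   # ...but the acknowledged write is lost
```

This is exactly why the protocol has to surface the reset to the host: without that, the flush returning success tells ZFS nothing useful.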
Edward Ned Harvey wrote:
> Due to recent experiences, and discussion on this list, my colleague and I performed some tests:
> Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log device removal that was introduced in zpool 19.) In any way possible, you lose an
Miles Nordin wrote:
> "re" == Richard Elling <richard.ell...@gmail.com> writes:
> How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do they go down the ZFS hotplug write hole?
If zfs never got a positive
Carson Gaspar wrote:
> Miles Nordin wrote:
>> "re" == Richard Elling <richard.ell...@gmail.com> writes:
>> How do you handle the case when a hotplug SATA drive is powered off unexpectedly with data in its write cache? Do you replay the writes, or do they go down the ZFS hotplug write hole?
If zfs
Carson Gaspar wrote:
> Does anyone who understands the internals better than I do care to take a stab at what happens if:
> - ZFS writes data to /dev/foo
> - /dev/foo loses power, taking with it the data from the above write, not yet flushed to rust (say a field tech pulls the wrong drive...)
> - /dev/foo
From: Tim Cook [mailto:t...@cook.ms]
> Awesome! Thanks for letting us know the results of your tests, Ed; that's extremely helpful. I was actually interested in grabbing some of the cheaper Intel SSDs for home use, but didn't want to waste my money if it wasn't going to handle the various
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> Thanks for the testing. So FINALLY with version 19 does ZFS demonstrate production-ready status in my book. How long is it going to take Solaris to catch up?
Oh, it's been production-worthy for some time - just don't use
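For reference, the pool version 19 feature this exchange keeps circling back to is slog (log device) removal. On a new enough system, the shape of it is roughly this (pool and device names are illustrative, not from the thread):

```shell
# List what each pool version adds; version 19 lists "Log device removal"
zpool upgrade -v

# Add an unmirrored log device, then take it out again --
# the remove only succeeds on pool version 19 or later
zpool add tank log c4t0d0
zpool remove tank c4t0d0
```

On a version 15 pool, as in Ed's tests, there is no supported way to get that device back out.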
On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
> In the event a pool is faulted, I wish you didn't have to power cycle the machine. Let all the zfs filesystems that are in that pool simply disappear, and when somebody does zpool status you can see why.
In general, I agree. How would
From: Richard Elling [mailto:richard.ell...@gmail.com]
> On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
>> In the event a pool is faulted, I wish you didn't have to power cycle the machine. Let all the zfs filesystems that are in that pool simply disappear, and when somebody does
On Sun, Apr 11, 2010 at 07:03:29PM -0400, Edward Ned Harvey wrote:
> Heck, even if the faulted pool spontaneously sent the server into an ungraceful reboot, even *that* would be an improvement.
Please look at the pool property failmode. Both of the preferences you have expressed are available,
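The failmode pool property Daniel is pointing at takes three values covering the behaviors discussed above (pool name illustrative):

```shell
# Default: block I/O to the faulted pool and wait for recovery
zpool set failmode=wait tank

# Keep running; new writes to the faulted pool return EIO
# (the "filesystems just disappear" preference, roughly)
zpool set failmode=continue tank

# Panic the host -- the "ungraceful reboot" Ed half-jokingly asked for
zpool set failmode=panic tank

# Inspect the current setting
zpool get failmode tank
```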
From: Daniel Carosone [mailto:d...@geek.com.au]
> Please look at the pool property failmode. Both of the preferences you have expressed are available, as well as the default you seem so unhappy with.
I ... did not know that. :-)
Thank you.
Due to recent experiences, and discussion on this list, my colleague and I performed some tests:
Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log device removal that was introduced in zpool 19.) In any way possible, you lose an unmirrored log device, and the OS
On Sat, 10 Apr 2010, Edward Ned Harvey wrote:
> Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log device removal that was introduced in zpool 19.) In any way possible, you lose an unmirrored log device, and the OS will crash, and the whole zpool is permanently
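On a pool that cannot yet remove a log device, the standard defense against exactly the failure Ed tested is to never run the slog unmirrored in the first place. A sketch, with illustrative pool and device names:

```shell
# Attach the ZIL as a two-way mirror, so one dead SSD
# cannot take the pool down with it
zpool add tank log mirror c4t0d0 c4t1d0

# The log mirror shows up as its own vdev under "logs"
zpool status tank
```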
On Sat, Apr 10, 2010 at 10:08 AM, Edward Ned Harvey <solar...@nedharvey.com> wrote:
> Due to recent experiences, and discussion on this list, my colleague and I performed some tests:
> Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have log device removal that was
Thanks for the testing. So FINALLY with version 19 does ZFS demonstrate production-ready status in my book. How long is it going to take Solaris to catch up?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org