Ragnar Sundblad wrote:
I was referring to the case where zfs has written data to the drive but
still hasn't issued a cache flush, and before the cache flush the drive
is reset. If zfs finally issues a cache flush and then isn't informed
that the drive has been reset, data is lost.
I hope this
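A blunt workaround for drives whose flush/reset behaviour you don't trust
is to run with the volatile write cache disabled, so that an acknowledged
write is already on stable media before any reset. On Solaris this can
usually be toggled interactively for sd-driven disks, roughly:

    format -e          # expert mode, then select the disk
    > cache
    > write_cache
    > display          # show the current setting
    > disable          # turn the volatile write cache off

The exact menu entries vary with driver and firmware, and disabling the
cache costs throughput, so treat this as a sketch rather than a recipe.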
On 30 jun 2010, at 22.46, Garrett D'Amore wrote:
> On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote:
>
>> To be safe, the protocol needs to be able to discover that the devices
>> (host or disk) have been disconnected and reconnected or have been reset
>> and that either party's assumptions about the state of the other have to
>> be invalidated.
On Wed, 2010-06-30 at 22:28 +0200, Ragnar Sundblad wrote:
> To be safe, the protocol needs to be able to discover that the devices
> (host or disk) have been disconnected and reconnected or have been reset
> and that either party's assumptions about the state of the other have to
> be invalidated.
>
>
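Whether the stack actually notices such a reset is hard to observe from
userland, but on a Solaris host the after-the-fact evidence usually shows
up in the error counters and FMA telemetry, for example (device names are
placeholders):

    iostat -En         # per-device soft/hard/transport error counts
    cfgadm -al         # attachment-point state for hot-pluggable devices
    fmdump -eV         # raw error reports logged by FMA

None of that replaces in-band detection by the protocol itself; it only
tells you afterwards that a disconnect or reset happened.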
On 12 apr 2010, at 22.32, Carson Gaspar wrote:
> Carson Gaspar wrote:
>> Miles Nordin wrote:
"re" == Richard Elling writes:
>>> How do you handle the case when a hotplug SATA drive is powered off
>>> unexpectedly with data in its write cache? Do you replay the writes, or do
>>> they go down the ZFS hotplug write hole?
On Wed, Jun 30, 2010 at 09:47:15AM -0700, Edward Ned Harvey wrote:
> > From: Arne Jansen [mailto:sensi...@gmx.net]
> >
> > Edward Ned Harvey wrote:
> > > Due to recent experiences, and discussion on this list, my colleague and
> > > I performed some tests:
> > >
> > > Using solaris 10, fully upgraded.
> From: Arne Jansen [mailto:sensi...@gmx.net]
>
> Edward Ned Harvey wrote:
> > Due to recent experiences, and discussion on this list, my colleague and
> > I performed some tests:
> >
> > Using solaris 10, fully upgraded. (zpool 15 is latest, which does not
> > have log device removal that was introduced in zpool 19)
Edward Ned Harvey wrote:
> Due to recent experiences, and discussion on this list, my colleague and
> I performed some tests:
>
> Using solaris 10, fully upgraded. (zpool 15 is latest, which does not
> have log device removal that was introduced in zpool 19) In any way
> possible, you lose an unmirrored log device, and the OS will crash, and
> the whole zpool is permanently gone,
> Carson Gaspar wrote:
>
> Does anyone who understands the internals better than I care to take a
> stab at what happens if:
>
> - ZFS writes data to /dev/foo
> - /dev/foo loses power and the data from the above write, not yet
> flushed to rust (say a field tech pulls the wrong drive...)
> - /dev/
Carson Gaspar wrote:
Miles Nordin wrote:
"re" == Richard Elling writes:
How do you handle the case when a hotplug SATA drive is powered off
unexpectedly with data in its write cache? Do you replay the writes,
or do they go down the ZFS hotplug write hole?
If zfs never got a positive response to a cache flush,
Miles Nordin wrote:
"re" == Richard Elling writes:
How do you handle the case when a hotplug SATA drive is powered off
unexpectedly with data in its write cache? Do you replay the writes,
or do they go down the ZFS hotplug write hole?
If zfs never got a positive response to a cache flush,
> "re" == Richard Elling writes:
> "dc" == Daniel Carosone writes:
re> In general, I agree. How would you propose handling nested
re> mounts?
force-unmount them. (so that they can be manually mounted elsewhere,
if desired, or even in the same place with the middle filesystem
mi
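As a rough sketch of what that could look like by hand (pool and dataset
names here are made up), the nested filesystem comes off first and can
then be remounted wherever you like:

    zfs unmount -f otherpool/home     # nested under the faulted pool's mountpoint
    zfs set mountpoint=/mnt/home otherpool/home
    zfs mount otherpool/home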
> From: Daniel Carosone [mailto:d...@geek.com.au]
>
> Please look at the pool property "failmode". Both of the preferences
> you have expressed are available, as well as the default you seem so
> unhappy with.
I ... did not know that. :-)
Thank you.
On Sun, Apr 11, 2010 at 07:03:29PM -0400, Edward Ned Harvey wrote:
> Heck, even if the faulted pool spontaneously sent the server into an
> ungraceful reboot, even *that* would be an improvement.
Please look at the pool property "failmode". Both of the preferences
you have expressed are available, as well as the default you seem so
unhappy with.
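For reference, the property is set per pool and takes three values;
something like this (pool name is a placeholder):

    zpool get failmode tank
    zpool set failmode=continue tank    # wait (default) | continue | panic

"wait" blocks I/O until the devices come back, "continue" returns EIO to
new writes while still serving what it can read, and "panic" takes the
whole host down - which covers the "ungraceful reboot" preference above.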
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
> >
> > In the event a pool is faulted, I wish you didn't have to power cycle the
> > machine. Let all the zfs filesystems that are in that pool simply
> > disappear, and when somebody does "zpool status" you can see why.
On Apr 11, 2010, at 5:36 AM, Edward Ned Harvey wrote:
>
> In the event a pool is faulted, I wish you didn't have to power cycle the
> machine. Let all the zfs filesystems that are in that pool simply
> disappear, and when somebody does "zpool status" you can see why.
In general, I agree. How would you propose handling nested mounts?
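One way to see what would be affected (pool name is a placeholder) is to
list everything mounted at or below the faulted pool's mountpoints:

    zfs list -r -o name,mounted,mountpoint tank   # datasets in the pool itself
    zfs mount | grep ' /tank'                     # anything else mounted under /tank

Datasets from other pools that happen to be mounted underneath are the
awkward case, which is presumably the point of the question.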
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> Thanks for the testing. So FINALLY with version > 19 does ZFS
> demonstrate production-ready status in my book. How long is it going to
> take Solaris to catch up?
Oh, it's been production worthy for some time - Just don't use u
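For anyone wanting to check where a given system stands, something along
these lines (pool and device names are placeholders):

    zpool upgrade -v            # list the pool versions this build supports
    zpool upgrade tank          # bring the pool up to the current version
    zpool remove tank c4t2d0    # version >= 19: a log device can be removed

On a pool still at version 15 the "zpool remove" of a log device simply
isn't available, which is the gap being complained about here.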
> From: Tim Cook [mailto:t...@cook.ms]
>
> Awesome! Thanks for letting us know the results of your tests Ed,
> that's extremely helpful. I was actually interested in grabbing some
> of the cheaper intel SSD's for home use, but didn't want to waste my
> money if it wasn't going to handle the vari
Thanks for the testing. So FINALLY with version > 19 does ZFS demonstrate
production-ready status in my book. How long is it going to take Solaris to
catch up?
On Sat, Apr 10, 2010 at 10:08 AM, Edward Ned Harvey
wrote:
> Due to recent experiences, and discussion on this list, my colleague and
> I performed some tests:
>
>
>
> Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have
> log device removal that was introduced in zpool 19)
On Sat, 10 Apr 2010, Edward Ned Harvey wrote:
Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have
log device removal that was introduced in zpool 19) In any way possible,
you lose an unmirrored log device, and the OS will crash, and the whole
zpool is permanently gone,
Due to recent experiences, and discussion on this list, my colleague and I
performed some tests:
Using solaris 10, fully upgraded. (zpool 15 is latest, which does not have
log device removal that was introduced in zpool 19) In any way possible,
you lose an unmirrored log device, and the OS will crash, and the whole
zpool is permanently gone,
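Given that an unmirrored log device is a single point of failure on these
pool versions, the usual mitigation is to mirror the slog, roughly (pool
and device names are placeholders):

    zpool add tank log mirror c4t2d0 c4t3d0    # mirrored separate log
    zpool status tank                          # shows the mirror under "logs"

That way losing a single log device degrades the mirror instead of taking
the pool with it.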