On Fri, May 05, 2006 at 10:19:56AM +0200, Constantin Gonzalez wrote:
(apologies if this was discussed before, I _did_ some research, but this
one may have slipped past me...)
I'm in the process of writing a blog on this one. Give me another day
or so.
Looking through the current Sun ZFS
Maybe there could be a flag for certain snaps so they could be made read
only?!? But I don't know how this could be implemented, and I don't think it
would be possible... Anyway, I still think that if I had a production system with
those snaps I would rather remove that golden image and
I really do like the way NetApp handles snaps :) that would be an excellent
thing in ZFS :)
On Fri, 5 May 2006, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to
On Fri, May 05, 2006 at 09:43:05AM -0700, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to compare ZFS to NetApp? I hope not
It's a public list, you can do the
On 5/5/06, Constantin Gonzalez [EMAIL PROTECTED] wrote:
Hi,
(apologies if this has been discussed before, I hope not)
while setting up a script at home to do automatic snapshots, a number of
wishes popped into my mind:
The basic problem with regular snapshotting is that you end up managing
so
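As an illustration of one way to handle that, a minimal rotating-snapshot script
could look roughly like the following. The dataset name tank/home and the
keep-count of 7 are just assumptions for the example, not anything from the
original post:

#!/bin/sh
# Take a dated snapshot of one filesystem and prune old "auto-" snapshots,
# keeping only the newest $KEEP of them.
FS=tank/home          # dataset to snapshot (example value)
KEEP=7                # number of automatic snapshots to retain

zfs snapshot $FS@auto-`date +%Y%m%d-%H%M`

# The timestamp in the snapshot name makes a plain sort oldest-first.
SNAPS=`zfs list -H -o name -t snapshot | grep "^$FS@auto-" | sort`
COUNT=`echo "$SNAPS" | wc -l`
EXTRA=`expr $COUNT - $KEEP`

if [ $EXTRA -gt 0 ]; then
    echo "$SNAPS" | head -$EXTRA | while read SNAP; do
        zfs destroy "$SNAP"
    done
fi

Whether the pruning policy is count-based or age-based is exactly the kind of
management decision that piles up once there are many filesystems, which is
the problem being described above.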
On Fri, 5 May 2006, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to compare ZFS to NetApp? I hope not
Of course not. And if Thumper is similar to the rumoured
I have a raidz pool which looks like this after a disk failure:
# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
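For reference, recovery from this state usually amounts to replacing the bad
device and letting the pool resilver. A rough sketch follows; the device name
c1t3d0 is an assumption, since the per-device part of the status output isn't
quoted above:

# see which device is UNAVAIL/FAULTED in the full listing
zpool status -v tank

# resilver onto the new disk; if the replacement shows up under the same
# device name as the failed one, a single argument is enough
zpool replace tank c1t3d0

# watch the resilver progress
zpool status tank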
Thanks for the tip. In the local case, I could send to the
iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
of 50 seconds (17 seconds better than UFS). However, I didn't even bother
finishing the NFS client test, since it was taking a few seconds
between individual 27K files. So,
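For context, the kind of small-file test being described might be reproduced
along these lines; the file count, the mkfile size, and the mount points are
assumptions, not the poster's exact benchmark:

# write 1000 files of 27K each into the dataset and time it, locally first
cd /tank/test
time sh -c 'i=0; while [ $i -lt 1000 ]; do mkfile 27k f$i; i=`expr $i + 1`; done'

# then run the same loop from an NFS client against the same filesystem
cd /net/server/tank/test
time sh -c 'i=0; while [ $i -lt 1000 ]; do mkfile 27k f$i; i=`expr $i + 1`; done'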
Thanks. I'm playing with it now, trying to get the most succinct test.
This is one thing that bothers me: regardless of the backend, it
appears that a delete of a large tree (say the Linux kernel) over NFS
takes forever, but it's immediate when doing so locally. Does delete over
NFS really take such
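A rough way to reproduce that comparison (the paths, the tarball location, and
the use of gtar are assumptions, not taken from the original posts):

# unpack a large source tree into the dataset on the server
cd /tank/test ; gtar xzf /var/tmp/linux-2.6.16.tar.gz

# time the delete locally on the server
time rm -rf /tank/test/linux-2.6.16

# unpack again, then time the same delete from an NFS client
cd /net/server/tank/test ; gtar xzf /var/tmp/linux-2.6.16.tar.gz
time rm -rf /net/server/tank/test/linux-2.6.16

Some gap is expected regardless of the filesystem: over NFS every unlink is a
separate synchronous RPC that the server has to commit to stable storage
before replying, so removing tens of thousands of small files pays that
per-file round trip, while the local rm does not.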