Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Markus Grundmann

On 02/21/2013 02:59 AM, Tim Cook wrote:


I hear you, but in his scenario of using scripts for management, there 
isn't necessarily human interaction to confirm the operation 
(appropriately or not).  Having a pool property seems like an easy way 
to prevent a mis-parsed or outright incorrect script from causing 
havoc on the system.


--Tim


ACK. That's what I meant ...



Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Tim Cook
On Wed, Feb 20, 2013 at 6:47 PM, Richard Elling wrote:

> On Feb 20, 2013, at 3:27 PM, Tim Cook  wrote:
>
> On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling 
> wrote:
>
>> On Feb 20, 2013, at 2:49 PM, Markus Grundmann 
>> wrote:
>>
>> Hi!
>>
>> My name is Markus and I live in Germany. I'm new to this list and I
>> have a simple question related to zfs. My favorite operating system
>> is FreeBSD and I'm very happy to use zfs on it.
>>
>> Is it possible to enhance the properties in the current source tree
>> with an entry like "protected"?
>> It seems not too difficult, but I'm not a professional C programmer.
>> For more information, please take a little time and read my short post at
>>
>> http://forums.freebsd.org/showthread.php?t=37895
>>
>> I have reviewed some pieces of the source code in FreeBSD 9.1 to find
>> out how difficult it would be to add a pool/filesystem property as an
>> additional security layer for administrators.
>>
>>
>> Whenever I modify zfs pools or filesystems it's possible to destroy
>> [on a bad day :-)] my data. A new property "protected=on|off" on the
>> pool and/or filesystem can help protect the administrator against
>> data loss (e.g. a "zpool destroy tank" or "zfs destroy " command
>> will be rejected when the "protected=on" property is set).
>>
>>
>> Look at the delegable properties (zfs allow). For example, you can
>> delegate a user to have
>> specific privileges and then not allow them to destroy.
>>
>> Note: I'm only 99% sure this is implemented in FreeBSD, hopefully someone
>> can verify.
>>  -- richard
>>
>>
>
> With the version of allow I'm looking at, unless I'm missing a setting, it
> looks like it'd be a complete nightmare.  I see no concept of "deny", so
> that means you either have to give *everyone* all permissions besides
> delete, or you have to go through every user/group on the box and give
> specific permissions on top of not allowing destroy.  And then if you
> change your mind later you have to go back through and give everyone you
> want to have that feature access to it.  That seems like a complete PITA to
> me.
>
>
> :-) they don't call it "idiot-proofing" for nothing! :-)
>
> But seriously, one of the first great zfs-discuss wars was over the
> request for a "-f" flag for "destroy." The result of the research showed
> that if you typed "destroy" then you meant it, and adding a "-f" flag
> just teaches you to type "destroy -f" instead.
> See also "kill -9"
>  -- richard
>
>
I hear you, but in his scenario of using scripts for management, there
isn't necessarily human interaction to confirm the operation (appropriately
or not).  Having a pool property seems like an easy way to prevent a
mis-parsed or outright incorrect script from causing havoc on the system.

--Tim


Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Richard Elling
On Feb 20, 2013, at 3:27 PM, Tim Cook  wrote:
> On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling  
> wrote:
> On Feb 20, 2013, at 2:49 PM, Markus Grundmann  wrote:
> 
>> Hi!
>> 
>> My name is Markus and I live in Germany. I'm new to this list and I have
>> a simple question related to zfs. My favorite operating system is FreeBSD
>> and I'm very happy to use zfs on it.
>>
>> Is it possible to enhance the properties in the current source tree with
>> an entry like "protected"?
>> It seems not too difficult, but I'm not a professional C programmer.
>> For more information, please take a little time and read my short post at
>>
>> http://forums.freebsd.org/showthread.php?t=37895
>>
>> I have reviewed some pieces of the source code in FreeBSD 9.1 to find out
>> how difficult it would be to add a pool/filesystem property as an
>> additional security layer for administrators.
>>
>>
>> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
>> bad day :-)] my data. A new property "protected=on|off" on the pool
>> and/or filesystem can help protect the administrator against data loss
>> (e.g. a "zpool destroy tank" or "zfs destroy " command will be rejected
>> when the "protected=on" property is set).
> 
> Look at the delegable properties (zfs allow). For example, you can delegate a 
> user to have
> specific privileges and then not allow them to destroy. 
> 
> Note: I'm only 99% sure this is implemented in FreeBSD, hopefully someone can 
> verify.
>  -- richard
> 
> 
> 
> With the version of allow I'm looking at, unless I'm missing a setting, it 
> looks like it'd be a complete nightmare.  I see no concept of "deny", so that 
> means you either have to give *everyone* all permissions besides delete, or 
> you have to go through every user/group on the box and give specific 
> permissions on top of not allowing destroy.  And then if you change your
> mind later you have to go back through and give everyone you want to have 
> that feature access to it.  That seems like a complete PITA to me.  

:-) they don't call it "idiot-proofing" for nothing! :-)

But seriously, one of the first great zfs-discuss wars was over the request
for a "-f" flag for "destroy." The result of the research showed that if you
typed "destroy" then you meant it, and adding a "-f" flag just teaches you
to type "destroy -f" instead.
See also "kill -9"
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422


Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Ian Collins

Peter Wood wrote:

Currently the pool is about 20% full:
# zpool list pool01
NAME      SIZE  ALLOC   FREE  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
pool01   65.2T  15.4T  49.9T         -  23%  1.00x  ONLINE  -
#



So you will be about 15% full after adding a new vdev.

Unless you are likely to get too close to filling the enlarged pool, you
will probably be OK performance-wise.  Access times for the old data will
be no worse, and for the new data, better.


If you can spread some of your old data around after adding the new vdev,
do so.


--
Ian.



Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Peter Wood
Currently the pool is about 20% full:
# zpool list pool01
NAME      SIZE  ALLOC   FREE  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
pool01   65.2T  15.4T  49.9T         -  23%  1.00x  ONLINE  -
#

The old data and new data will be equally used after adding the vdev.

The FS holds tens of thousands of small images (~500KB) that are read,
written, and added to depending on what customers are doing. It's a pretty
heavy load on the file system: about 800 IOPS, going up to 1500 IOPS at
times.

Performance is important.



On Wed, Feb 20, 2013 at 3:48 PM, Tim Cook  wrote:

>
>
>
> On Wed, Feb 20, 2013 at 5:46 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
>> On Thu, 21 Feb 2013, Sašo Kiselkov wrote:
>>
>>  On 02/21/2013 12:27 AM, Peter Wood wrote:
>>>
 Will adding another vdev hurt the performance?

>>>
>>> In general, the answer is: no. ZFS will try to balance writes to
>>> top-level vdevs in a fashion that assures even data distribution. If
>>> your data is equally likely to be hit in all places, then you will not
>>> incur any performance penalties. If, OTOH, newer data is more likely to
>>> be hit than old data, then yes, newer data will be served from fewer
>>> spindles. In that case it is possible to do a send/receive of the
>>> affected datasets into new locations and then rename them.
>>>
>>
>> You have this reversed.  The older data is served from fewer spindles
>> than data written after the new vdev is added. Performance with the newer
>> data should be improved.
>>
>> Bob
>>
>
>
> That depends entirely on how full the pool is when the new vdev is added,
> and how frequently the older data changes, snapshots, etc.
>
> --Tim
>
>


Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Ian Collins

Bob Friesenhahn wrote:

On Thu, 21 Feb 2013, Sašo Kiselkov wrote:


On 02/21/2013 12:27 AM, Peter Wood wrote:

Will adding another vdev hurt the performance?

In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your data is equally likely to be hit in all places, then you will not
incur any performance penalties. If, OTOH, newer data is more likely to
be hit than old data, then yes, newer data will be served from fewer
spindles. In that case it is possible to do a send/receive of the affected
datasets into new locations and then rename them.

You have this reversed.  The older data is served from fewer spindles
than data written after the new vdev is added. Performance with the
newer data should be improved.


Not if the pool is close to full, when new data will end up on fewer 
spindles (the new or extended vdev).


--
Ian.



Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Tim Cook
On Wed, Feb 20, 2013 at 5:46 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Thu, 21 Feb 2013, Sašo Kiselkov wrote:
>
>  On 02/21/2013 12:27 AM, Peter Wood wrote:
>>
>>> Will adding another vdev hurt the performance?
>>>
>>
>> In general, the answer is: no. ZFS will try to balance writes to
>> top-level vdevs in a fashion that assures even data distribution. If
>> your data is equally likely to be hit in all places, then you will not
>> incur any performance penalties. If, OTOH, newer data is more likely to
>> be hit than old data, then yes, newer data will be served from fewer
>> spindles. In that case it is possible to do a send/receive of the
>> affected datasets into new locations and then rename them.
>>
>
> You have this reversed.  The older data is served from fewer spindles than
> data written after the new vdev is added. Performance with the newer data
> should be improved.
>
> Bob
>


That depends entirely on how full the pool is when the new vdev is added,
and how frequently the older data changes, snapshots, etc.

--Tim


Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Bob Friesenhahn

On Thu, 21 Feb 2013, Sašo Kiselkov wrote:


On 02/21/2013 12:27 AM, Peter Wood wrote:

Will adding another vdev hurt the performance?


In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your data is equally likely to be hit in all places, then you will not
incur any performance penalties. If, OTOH, newer data is more likely to
be hit than old data, then yes, newer data will be served from fewer
spindles. In that case it is possible to do a send/receive of the affected
datasets into new locations and then rename them.


You have this reversed.  The older data is served from fewer spindles 
than data written after the new vdev is added. Performance with the 
newer data should be improved.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Sašo Kiselkov
On 02/21/2013 12:27 AM, Peter Wood wrote:
> Will adding another vdev hurt the performance?

In general, the answer is: no. ZFS will try to balance writes to
top-level vdevs in a fashion that assures even data distribution. If
your data is equally likely to be hit in all places, then you will not
incur any performance penalties. If, OTOH, newer data is more likely to
be hit than old data, then yes, newer data will be served from fewer
spindles. In that case it is possible to do a send/receive of the affected
datasets into new locations and then rename them.
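
A minimal sketch of that shuffle (hypothetical dataset name tank/data; it
needs enough free space for a second copy, and writes made after the
snapshot are lost, so quiesce the dataset first):

# zfs snapshot -r tank/data@migrate
# zfs send -R tank/data@migrate | zfs receive tank/data.new
# zfs destroy -r tank/data
# zfs rename tank/data.new tank/data
# zfs destroy -r tank/data@migrate    (drop the migration snapshot)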

Cheers,
--
Saso



Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Ian Collins

Peter Wood wrote:

I'm using OpenIndiana 151a7, zpool v28, zfs v5.

When I bought my storage servers I intentionally left hdd slots 
available so I can add another vdev when needed and delay immediate 
expenses.


After reading some posts on the mailing list I'm getting concerned 
about degrading performance due to unequal distribution of data among 
the vdevs. I still have a chance to migrate the data away, add all 
drives and rebuild the pools and start fresh.


Before going that road I was hoping to hear your opinion on what will 
be the best way to handle this.


System: Supermicro with 36 hdd bays. 28 bays filled with 3TB SAS 7.2K 
enterprise drives. 8 bays available to add another vdev to the pool.


Pool configuration:



#

Will adding another vdev hurt the performance?


How full is the pool?

When I've added (or grown an existing) vdev, I used zfs send to make a
copy of a suitably large filesystem, then deleted the original and
renamed the copy.  I had to do this a couple of times to redistribute
data, but it saved a lot of downtime.

--
Ian.



Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Tim Cook
On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling wrote:

> On Feb 20, 2013, at 2:49 PM, Markus Grundmann 
> wrote:
>
> Hi!
>
> My name is Markus and I live in Germany. I'm new to this list and I have
> a simple question related to zfs. My favorite operating system is FreeBSD
> and I'm very happy to use zfs on it.
>
> Is it possible to enhance the properties in the current source tree with
> an entry like "protected"?
> It seems not too difficult, but I'm not a professional C programmer.
> For more information, please take a little time and read my short post at
>
> http://forums.freebsd.org/showthread.php?t=37895
>
> I have reviewed some pieces of the source code in FreeBSD 9.1 to find out
> how difficult it would be to add a pool/filesystem property as an
> additional security layer for administrators.
>
>
> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
> bad day :-)] my data. A new property "protected=on|off" on the pool
> and/or filesystem can help protect the administrator against data loss
> (e.g. a "zpool destroy tank" or "zfs destroy " command will be rejected
> when the "protected=on" property is set).
>
>
> Look at the delegable properties (zfs allow). For example, you can
> delegate a user to have
> specific privileges and then not allow them to destroy.
>
> Note: I'm only 99% sure this is implemented in FreeBSD, hopefully someone
> can verify.
>  -- richard
>
>

With the version of allow I'm looking at, unless I'm missing a setting, it
looks like it'd be a complete nightmare.  I see no concept of "deny", so
that means you either have to give *everyone* all permissions besides
delete, or you have to go through every user/group on the box and give
specific permissions on top of not allowing destroy.  And then if you
change your mind later you have to go back through and give everyone you
want to have that feature access to it.  That seems like a complete PITA to
me.


--Tim


[zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Peter Wood
I'm using OpenIndiana 151a7, zpool v28, zfs v5.

When I bought my storage servers I intentionally left hdd slots available
so I can add another vdev when needed and delay immediate expenses.

After reading some posts on the mailing list I'm getting concerned about
degrading performance due to unequal distribution of data among the vdevs.
I still have a chance to migrate the data away, add all drives and rebuild
the pools and start fresh.

Before going that road I was hoping to hear your opinion on what will be
the best way to handle this.

System: Supermicro with 36 hdd bays. 28 bays filled with 3TB SAS 7.2K
enterprise drives. 8 bays available to add another vdev to the pool.

Pool configuration:
# zpool status pool01
  pool: pool01
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Nov 21 17:41:52 2012
config:

NAME   STATE READ WRITE CKSUM
pool01 ONLINE   0 0 0
  raidz2-0 ONLINE   0 0 0
c8t5000CCA01AA8E3C0d0  ONLINE   0 0 0
c8t5000CCA01AA8E3F0d0  ONLINE   0 0 0
c8t5000CCA01AA8E394d0  ONLINE   0 0 0
c8t5000CCA01AA8E434d0  ONLINE   0 0 0
c8t5000CCA01AA793A0d0  ONLINE   0 0 0
c8t5000CCA01AA79380d0  ONLINE   0 0 0
c8t5000CCA01AA79398d0  ONLINE   0 0 0
c8t5000CCA01AB56B10d0  ONLINE   0 0 0
  raidz2-1 ONLINE   0 0 0
c8t5000CCA01AB56B28d0  ONLINE   0 0 0
c8t5000CCA01AB56B64d0  ONLINE   0 0 0
c8t5000CCA01AB56B80d0  ONLINE   0 0 0
c8t5000CCA01AB56BB0d0  ONLINE   0 0 0
c8t5000CCA01AB56EA4d0  ONLINE   0 0 0
c8t5000CCA01ABDAEBCd0  ONLINE   0 0 0
c8t5000CCA01ABDAED0d0  ONLINE   0 0 0
c8t5000CCA01ABDAF1Cd0  ONLINE   0 0 0
  raidz2-2 ONLINE   0 0 0
c8t5000CCA01ABDAF7Cd0  ONLINE   0 0 0
c8t5000CCA01ABDAF10d0  ONLINE   0 0 0
c8t5000CCA01ABDAF40d0  ONLINE   0 0 0
c8t5000CCA01ABDAF60d0  ONLINE   0 0 0
c8t5000CCA01ABDAF74d0  ONLINE   0 0 0
c8t5000CCA01ABDAF80d0  ONLINE   0 0 0
c8t5000CCA01ABDB04Cd0  ONLINE   0 0 0
c8t5000CCA01ABDB09Cd0  ONLINE   0 0 0
logs
  mirror-3 ONLINE   0 0 0
c6t0d0 ONLINE   0 0 0
c6t1d0 ONLINE   0 0 0
cache
  c6t2d0   ONLINE   0 0 0
  c6t3d0   ONLINE   0 0 0
spares
  c8t5000CCA01ABDB020d0AVAIL
  c8t5000CCA01ABDB060d0AVAIL

errors: No known data errors
#

Will adding another vdev hurt the performance?
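
For reference, the expansion would be a single command along these lines
(disk1..disk8 are hypothetical device names; an added raidz2 vdev cannot
be removed again, so the -n dry run is worth doing first):

# zpool add -n pool01 raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8
# zpool add pool01 raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8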

Thank you,

-- Peter


Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Markus Grundmann


Look at the delegable properties (zfs allow). For example, you can 
delegate a user to have

specific privileges and then not allow them to destroy.

Note: I'm only 99% sure this is implemented in FreeBSD, hopefully 
someone can verify.

 -- richard




Hi Richard!

I think it's implemented but I have never used it.
I hope this feature can "protect zfs from markus aka root" :-)))



Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Markus Grundmann

On 21.02.2013 00:08, Mike Gerdts wrote:

On Wed, Feb 20, 2013 at 4:49 PM, Markus Grundmann  wrote:

Whenever I modify zfs pools or filesystems it's possible to destroy [on a
bad day :-)] my data. A new property "protected=on|off" on the pool and/or
filesystem can help protect the administrator against data loss (e.g. a
"zpool destroy tank" or "zfs destroy " command will be rejected when the
"protected=on" property is set).

Is there anywhere on this list where this feature request can be discussed
or forwarded? I hope you understood my post ;-)

I like the idea and it is likely not very hard to implement.  This is
very similar to how snapshot holds work.

# zpool upgrade -v | grep -i hold
  18  Snapshot user holds

So long as you aren't using a really ancient zpool version, you could
use this feature to protect your file systems.

# zfs create a/b
# zfs snapshot a/b@snap
# zfs hold protectme a/b@snap
# zfs destroy a/b
cannot destroy 'a/b': filesystem has children
use '-r' to destroy the following datasets:
a/b@snap
# zfs destroy -r a/b
cannot destroy 'a/b@snap': snapshot is busy

Of course, snapshots aren't free if you write to the file system.  A
way around that is to create an empty file system within the one that
you are trying to protect.

# zfs create a/1
# zfs create a/1/hold
# zfs snapshot a/1/hold@hold
# zfs hold 'saveme!' a/1/hold@hold
# zfs holds a/1/hold@hold
NAME   TAG  TIMESTAMP
a/1/hold@hold  saveme!  Wed Feb 20 15:06:29 2013
# zfs destroy -r a/1
cannot destroy 'a/1/hold@hold': snapshot is busy

Extending the hold mechanism to filesystems and volumes would be quite nice.

Mike
Hi Mike! Yes, that much I understand: zfs filesystems can be protected this
way. But with a new "protected" property, the pool and the vdevs themselves
would sit under an additional security layer. We are all human (that's
good), but we are full of errors *lol*


The protection property helps to lock modifications to the zfs
infrastructure: the pools. With a simple "zpool set protected=off " all
modifications become available again. The difference is that, as
administrator, you must type an additional command to unlock the pool
before your next action. An example: in many Linux distributions you will
be asked "Sure?" when you type "rm *". That's fine, isn't it? The zpool and
zfs commands work without any warnings. Yes, I was "root", but root is not
god ;-)
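
As a transcript, the proposed workflow would look roughly like this
(hypothetical throughout; neither the property nor the error message
exists today):

# zpool set protected=on tank
# zpool destroy tank
cannot destroy 'tank': pool is protected
# zpool set protected=off tank
# zpool destroy tank
#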


-Markus



Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Markus Grundmann

On 21.02.2013 00:02, Tim Cook wrote:

I think you're underestimating your English, it's quite good :)


Thank you Tim :-)

In any case, I think the proposal is a good one.  With the default 
behavior being off, it won't break anything for existing datasets, and 
it can absolutely help prevent a fat finger or a lack of caffeine 
ruining someone's day.


If the feature is already there somewhere, I'm sure someone will chime 
in.


--Tim


Yes! After a few minutes I found the places in the source to patch, but
it's a long way to get there *g*
It would be better if this property were available for all operating
systems and not only in my sandbox.


Regards,
Markus


Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Richard Elling
On Feb 20, 2013, at 2:49 PM, Markus Grundmann  wrote:

> Hi!
> 
> My name is Markus and I live in Germany. I'm new to this list and I have
> a simple question related to zfs. My favorite operating system is FreeBSD
> and I'm very happy to use zfs on it.
> 
> Is it possible to enhance the properties in the current source tree with
> an entry like "protected"?
> It seems not too difficult, but I'm not a professional C programmer.
> For more information, please take a little time and read my short post at
> 
> http://forums.freebsd.org/showthread.php?t=37895
> 
> I have reviewed some pieces of the source code in FreeBSD 9.1 to find out
> how difficult it would be to add a pool/filesystem property as an
> additional security layer for administrators.
> 
> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
> bad day :-)] my data. A new property "protected=on|off" on the pool and/or
> filesystem can help protect the administrator against data loss (e.g. a
> "zpool destroy tank" or "zfs destroy " command will be rejected when the
> "protected=on" property is set).

Look at the delegable properties (zfs allow). For example, you can delegate a 
user to have
specific privileges and then not allow them to destroy. 

Note: I'm only 99% sure this is implemented in FreeBSD, hopefully someone can 
verify.
 -- richard
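
A minimal sketch of the delegation approach (assuming a user "markus" and
a dataset "tank/data"; delegation constrains ordinary users, not root):

# zfs allow markus create,mount,snapshot,send,receive tank/data
# zfs allow tank/data    (lists the delegations; "destroy" is simply absent)

Since there is no "everything except destroy" shorthand, each wanted
permission has to be named explicitly, which is the pain point Tim raises
elsewhere in the thread.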

> 
> Is there anywhere on this list where this feature request can be
> discussed or forwarded? I hope you understood my post ;-)
> 
> Thanks and best regards,
> Markus
> 

--

richard.ell...@richardelling.com
+1-760-896-4422


Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Mike Gerdts
On Wed, Feb 20, 2013 at 4:49 PM, Markus Grundmann  wrote:
> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
> bad day :-)] my data. A new property "protected=on|off" on the pool and/or
> filesystem can help protect the administrator against data loss (e.g. a
> "zpool destroy tank" or "zfs destroy " command will be rejected when the
> "protected=on" property is set).
>
> Is there anywhere on this list where this feature request can be
> discussed or forwarded? I hope you understood my post ;-)

I like the idea and it is likely not very hard to implement.  This is
very similar to how snapshot holds work.

# zpool upgrade -v | grep -i hold
 18  Snapshot user holds

So long as you aren't using a really ancient zpool version, you could
use this feature to protect your file systems.

# zfs create a/b
# zfs snapshot a/b@snap
# zfs hold protectme a/b@snap
# zfs destroy a/b
cannot destroy 'a/b': filesystem has children
use '-r' to destroy the following datasets:
a/b@snap
# zfs destroy -r a/b
cannot destroy 'a/b@snap': snapshot is busy

Of course, snapshots aren't free if you write to the file system.  A
way around that is to create an empty file system within the one that
you are trying to protect.

# zfs create a/1
# zfs create a/1/hold
# zfs snapshot a/1/hold@hold
# zfs hold 'saveme!' a/1/hold@hold
# zfs holds a/1/hold@hold
NAME   TAG  TIMESTAMP
a/1/hold@hold  saveme!  Wed Feb 20 15:06:29 2013
# zfs destroy -r a/1
cannot destroy 'a/1/hold@hold': snapshot is busy
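
When the protection is no longer wanted, the hold is released and the
destroy goes through (a sketch continuing the example above):

# zfs release 'saveme!' a/1/hold@hold
# zfs destroy -r a/1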

Extending the hold mechanism to filesystems and volumes would be quite nice.

Mike


Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Jan Owoc
Hi Markus,

On Wed, Feb 20, 2013 at 3:49 PM, Markus Grundmann  wrote:
> Is it possible to enhance the properties in the current source tree with
> an entry like "protected"?
> It seems not too difficult, but I'm not a professional C programmer.
> For more information, please take a little time and read my short post at
>
> http://forums.freebsd.org/showthread.php?t=37895

Zfs already allows for custom properties. You could create your own
property, like "protected", and set it to anything you wanted to.
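
For example (a sketch; user property names must contain a colon, and the
"local:" prefix here is an arbitrary namespace):

# zfs set local:protected=on tank/data
# zfs get local:protected tank/data
NAME       PROPERTY         VALUE  SOURCE
tank/data  local:protected  on     local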

> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
> bad day :-)] my data. A new property "protected=on|off" on the pool and/or
> filesystem can help protect the administrator against data loss (e.g. a
> "zpool destroy tank" or "zfs destroy " command will be rejected when the
> "protected=on" property is set).

"zpool destroy tank" can be undone as long as you didn't overwrite the
partitions with something (the data is still there). The more
dangerous one is "zfs destroy". I suggest putting in a snapshot, which
counts as a child filesystem, so you would have to do "zfs destroy -r
tank/filesystem" to recursively destroy all the children.
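
For the record, the undo path for a destroyed pool is zpool import's -D
flag, roughly:

# zpool import -D         (scan for destroyed but still-recoverable pools)
# zpool import -D tank    (re-import the destroyed pool 'tank')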

I would imagine you could write some sort of wrapper for the "zfs"
command that checks if the command includes "destroy" and then check
for the existence of your custom property.
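
A rough sketch of such a wrapper (assuming the custom property is named
"local:protected" and the real binary lives in /sbin; untested):

#!/bin/sh
# zfs wrapper: refuse "zfs destroy" on datasets tagged local:protected=on
ZFS=/sbin/zfs
if [ "$1" = "destroy" ]; then
    for arg; do ds="$arg"; done    # the dataset is the last argument
    if [ "$("$ZFS" get -H -o value local:protected "$ds" 2>/dev/null)" = "on" ]; then
        echo "refusing to destroy '$ds': local:protected=on" >&2
        exit 1
    fi
fi
exec "$ZFS" "$@"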

Jan


Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Tim Cook
On Wed, Feb 20, 2013 at 4:49 PM, Markus Grundmann wrote:

> Hi!
>
> My name is Markus and I live in Germany. I'm new to this list and I have
> a simple question related to zfs. My favorite operating system is FreeBSD
> and I'm very happy to use zfs on it.
>
> Is it possible to enhance the properties in the current source tree with
> an entry like "protected"?
> It seems not too difficult, but I'm not a professional C programmer.
> For more information, please take a little time and read my short post at
>
> http://forums.freebsd.org/showthread.php?t=37895
>
> I have reviewed some pieces of the source code in FreeBSD 9.1 to find out
> how difficult it would be to add a pool/filesystem property as an
> additional security layer for administrators.
>
> Whenever I modify zfs pools or filesystems it's possible to destroy [on a
> bad day :-)] my data. A new property "protected=on|off" on the pool and/or
> filesystem can help protect the administrator against data loss (e.g. a
> "zpool destroy tank" or "zfs destroy " command will be rejected when the
> "protected=on" property is set).
>
> Is there anywhere on this list where this feature request can be
> discussed or forwarded? I hope you understood my post ;-)
>
> Thanks and best regards,
> Markus

I think you're underestimating your English, it's quite good :)  In any
case, I think the proposal is a good one.  With the default behavior being
off, it won't break anything for existing datasets, and it can absolutely
help prevent a fat finger or a lack of caffeine ruining someone's day.

If the feature is already there somewhere, I'm sure someone will chime in.

--Tim


[zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Markus Grundmann

Hi!

My name is Markus and I live in Germany. I'm new to this list and I have
a simple question related to zfs. My favorite operating system is FreeBSD
and I'm very happy to use zfs on it.

Is it possible to enhance the properties in the current source tree with
an entry like "protected"?
It seems not too difficult, but I'm not a professional C programmer.
For more information, please take a little time and read my short post at

http://forums.freebsd.org/showthread.php?t=37895

I have reviewed some pieces of the source code in FreeBSD 9.1 to find out
how difficult it would be to add a pool/filesystem property as an
additional security layer for administrators.

Whenever I modify zfs pools or filesystems it's possible to destroy [on a
bad day :-)] my data. A new property "protected=on|off" on the pool and/or
filesystem can help protect the administrator against data loss (e.g. a
"zpool destroy tank" or "zfs destroy " command will be rejected when the
"protected=on" property is set).

Is there anywhere on this list where this feature request can be discussed
or forwarded? I hope you understood my post ;-)

Thanks and best regards,
Markus



Re: [zfs-discuss] [zfs] Re: how to know available disk space, 22% free space missing

2013-02-20 Thread Pasi Kärkkäinen
Hello,

Any comments/suggestions about this would be very welcome.

Thanks!

-- Pasi

On Fri, Feb 08, 2013 at 05:09:56PM +0200, Pasi Kärkkäinen wrote:
> 
> I'm seeing weird output aswell:
> 
> # zpool list foo
> NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> foo   5.44T  4.44T  1023G  81%  14.49x  ONLINE  -
> 
> # zfs list | grep foo
> foo  62.9T  0   250G  /volumes/foo
> foo/.nza-reserve   31K   100M   31K  none
> foo/foo  62.6T  0  62.6T  /volumes/foo/foo
> 
> # zfs list -o space foo
> NAME AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> foo  0  62.9T 0250G  0  62.7T
> 
> # zfs list -o space foo/foo
> NAME AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> foo/foo  0  62.6T 0   62.6T  0  0
> 
> 
> What's the correct way of finding out what actually uses/reserves that 1023G 
> of FREE in the zpool? 
> 
> At this point the filesystems are full, and it's not possible to write to 
> them anymore.
> Also creating new filesystems to the pool fail:
> 
> "Operation completed with error: cannot create 'foo/Test': out of space"
> 
> So the zpool is full for real.
> 
> I'd like to better understand what actually uses that 1023G of FREE space 
> reported by zpool..
> 1023G out of 4.32T is around 22% overhead..
> zpool "foo" consists of 3x mirror vdevs, so there's no raidz involved.
> 
> 62.6T / 14.49x dedup-ratio = 4.32T 
> Which is pretty close to the ALLOC value reported by zpool.. 
> 
> Data on the filesystem is VM images written over NFS.
> 
> 
> Thanks,
> 
> -- Pasi
> 
> 
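
For anyone digging into a gap like this, two read-only commands help
account for the numbers (a sketch; zdb output varies by build and can
take a while to walk a large dedup table):

# zdb -DD foo                (print the DDT histogram and its on-disk size)
# zfs list -o space -r foo   (split USED into snapshot, dataset,
                              refreservation and child usage per dataset)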
