Re: [zfs-discuss] Locking snapshots when using zfs send

2010-04-07 Thread Chris Kirby
On Apr 7, 2010, at 5:06 PM, Will Murnane wrote:

> This is on b134:
> $ pfexec pkg image-update
> No updates available for this image.
>
> There is a zfs hold command available, but checking for holds on the
> snapshot I'm trying to send (I started it again, to see if disabling
> automatic snapshots helped) doesn't show anything:
> $ zfs holds -r h...@next
> $ echo $?
> 0
> and applying a recursive hold to that snapshot doesn't seem to hold
> all its children:
> $ pfexec zfs hold -r keep h...@next

Hmm, I made a number of fixes in build 132 related to destroying
snapshots while sending replication streams.  I'm unable to reproduce
the 'zfs holds -r' issue on build 133.  I'll try build 134, but I'm
not aware of any changes in that area.
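
For reference, a recursive hold taken at the top of a snapshot tree should
show up on every descendant snapshot of the same name.  A minimal sketch of
the expected behavior, assuming a hypothetical pool tank with one child
filesystem tank/fs:

# zfs snapshot -r tank@next
# zfs hold -r keep tank@next
# zfs holds -r tank@next
NAME          TAG   TIMESTAMP
tank@next     keep  Wed Apr  7 17:30 2010
tank/fs@next  keep  Wed Apr  7 17:30 2010

If 'zfs holds -r' comes back empty after a hold like that, something is wrong.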

-Chris



Re: [zfs-discuss] can't destroy snapshot

2010-04-01 Thread Chris Kirby
On Mar 31, 2010, at 7:51 AM, Charles Hedrick wrote:

> We're getting the notorious "cannot destroy ... dataset already exists". I've
> seen a number of reports of this, but none of the reports seem to get any
> response. Fortunately this is a backup system, so I can recreate the pool,
> but it's going to take me several days to get all the data back. Is there any
> known workaround?

Charles,
   Can you 'zpool export' and 'zpool import' the pool, and then
try destroying the snapshot again?
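
A sketch of that sequence, with a hypothetical pool named backup and a
stuck snapshot backup/data@old:

# zpool export backup
# zpool import backup
# zfs destroy backup/data@old

The export/import cycle discards stale in-kernel state (for example, the
temporary clone left behind by an interrupted zfs receive), which is one
suspected cause of the spurious "dataset already exists" error.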

-Chris



Re: [zfs-discuss] ZFS snapshots rsync --delete

2009-10-18 Thread Chris Kirby

On Oct 18, 2009, at 11:37 AM, Sander Smeenk wrote:


> I tried to indicate that it's strange that rmdir works on the snapshot
> directory while files inside snapshots are immutable.
>
> This, to me, is a bug.


If you have a snapshot named p...@snap, this:

# rmdir /pool/.zfs/snapshot/snap

is equivalent to this:

# zfs destroy p...@snap

Similarly, this:

# mkdir /pool/.zfs/snapshot/snap

is equivalent to this:

# zfs snapshot p...@snap

This can be very handy if you want to create or destroy
a snapshot from an NFS client, for example.
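
For instance, assuming the filesystem is shared over NFS and mounted on the
client at /mnt/pool (hypothetical paths), a suitably privileged client can
manage snapshots without any zfs(1M) access on the server:

client# mkdir /mnt/pool/.zfs/snapshot/today
client# rmdir /mnt/pool/.zfs/snapshot/today

The first command creates the snapshot named "today" on the server; the
second destroys it.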

-Chris



Re: [zfs-discuss] refreservation not transferred by zfs send when sending a volume?

2009-09-28 Thread Chris Kirby

On Sep 28, 2009, at 6:58 PM, Albert Chin wrote:


> Any reason the refreservation and usedbyrefreservation properties are
> not sent?


I believe this was CR 6853862, fixed in snv_121.
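
On snv_121 or later, you can confirm the properties came across by checking
them on the received volume; a sketch with hypothetical dataset names:

# zfs get refreservation,usedbyrefreservation tank/vol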

-Chris



Re: [zfs-discuss] cannot hold 'xxx': pool must be upgraded

2009-09-25 Thread Chris Kirby

On Sep 25, 2009, at 2:43 PM, Robert Milkowski wrote:


> Chris Kirby wrote:
>
>> On Sep 25, 2009, at 11:54 AM, Robert Milkowski wrote:
>>
>> That's useful information indeed.  I've filed this CR:
>>
>> 6885860 zfs send shouldn't require support for snapshot holds
>>
>> Sorry for the trouble, please look for this to be fixed soon.
>
> Thank you.
> btw: how do you want to fix it? Do you want to acquire a snapshot
> hold but continue anyway if it is not possible (only in case the
> error is ENOTSUP, I think)? Or do you want to get rid of it entirely?



In this particular case, we should make sure the pool version supports
snapshot holds before trying to request (or release) any.

We still want to acquire the temporary holds if we can, since that
prevents a race with zfs destroy.  That case is becoming more common
with automated snapshots and their associated retention policies.
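
As a quick check on whether a given pool supports holds at all, you can look
at the pool version; snapshot user holds arrived with pool version 18 (worth
confirming against 'zpool upgrade -v' on your build).  A sketch with a
hypothetical pool tank:

# zpool get version tank
NAME  PROPERTY  VALUE  SOURCE
tank  version   18     default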

-Chris



Re: [zfs-discuss] ZFS Crash

2009-09-10 Thread Chris Kirby

On Sep 10, 2009, at 7:07 AM, Brandon Mercer wrote:


> On Thu, Sep 10, 2009 at 5:11 AM, casper@sun.com wrote:
>
>>> Hello all, I'm running 2009.06 and I've got a random kernel panic
>>> that keeps killing my system under high IO loads.  It happens almost
>>> every time I start loading up the writes on a pool.  Memory has been
>>> tested extensively and I'm relatively certain this is not a hardware
>>> related issue.  Here is the panic:
>>> Sep  9 22:09:45 eon genunix: [ID 683410 kern.notice] BAD TRAP: type=d
>>> (#gp General protection) rp=ff0010362770 addr=ff7fff02fe41cc78
>>> Sep  9 22:09:45 eon unix: [ID 10 kern.notice]
>>> Sep  9 22:09:45 eon unix: [ID 839527 kern.notice] sched:
>>> Sep  9 22:09:45 eon unix: [ID 753105 kern.notice] #gp General protection
>>> Sep  9 22:09:45 eon unix: [ID 358286 kern.notice] addr=0xff7fff02fe41cc78
>>> Sep  9 22:09:45 eon unix: [ID 243837 kern.notice] pid=0,
>>
>> Random panics are, unfortunately, mostly caused by bad hardware.
>>
>> Do you have ECC memory in the system?  Did you run memtest86 on your
>> system?
>
> Casper,
> I have run memtest86 on the machine for about 4 hours, which was enough
> time to complete two passes.  It is not ECC memory in this machine.
> Perhaps if I said this isn't a random panic but more of an easily
> reproducible panic... :)  If I do dd if=/dev/zero of=/pool/blah
> bs=1024k count=1 it will always panic and reboot.  In this type of
> a scenario it seems less like hardware to me and more like a bug.
> What do you think?


Brandon,
   It looks like you have some bad RAM.  The bad address (ff7fff02fe41cc78)
appears to have a single-bit error (the leading ff7 should probably be fff).
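
As a quick sanity check on the single-bit claim, XORing the reported address
with the corrected one leaves exactly one bit set (bit 51):

$ printf '%x\n' $(( 0xff7fff02fe41cc78 ^ 0xffffff02fe41cc78 ))
8000000000000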


-Chris



Re: [zfs-discuss] RFE: creating multiple clones in one zfs(1) call and one txg

2009-03-27 Thread Chris Kirby

On Mar 27, 2009, at 10:33 AM, Darren J Moffat wrote:

> a) that is probably what is wanted most of the time anyway
> b) it is easy to pass from userland to kernel - you pass the
>    rules (after some userland sanity checking first) as is.



But doesn't that also exclude the possibility of creating non-pattern-based
clones in a single txg?

While I think that allowing multiple clones to be created in a single txg
is perfectly reasonable, we shouldn't need to artificially restrict the
clone namespace in order to achieve that.

-Chris



Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Chris Kirby

On Mar 17, 2009, at 4:45 PM, Grant Lowe wrote:


> bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1
>
> I'm trying to set a mountpoint.  But trying to mount it doesn't work.
>
> bash-3.00# zfs list
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> oracle              44.0G   653G  25.5K  /oracle
> oracle/prd_data     44.0G   653G  24.5K  /oracle/prd_data
> oracle/prd_data/db1 22.5K   697G  22.5K  -
> bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
> cannot set property for 'oracle/prd_data/db1': 'mountpoint' does
> not apply to datasets of this type


The issue is that you can't set a mountpoint on a zvol; there's no
filesystem on it yet.  Once you've created some (non-ZFS) filesystem on
that zvol, you can either mount it manually or set up an entry in
/etc/vfstab.
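
For example, to put a UFS filesystem on that zvol and mount it at the
desired path (a sketch; the device paths follow the standard /dev/zvol
layout):

# newfs /dev/zvol/rdsk/oracle/prd_data/db1
# mkdir -p /opt/mis/oracle/data/db1
# mount /dev/zvol/dsk/oracle/prd_data/db1 /opt/mis/oracle/data/db1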

-Chris



Re: [zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()

2009-01-28 Thread Chris Kirby
On Jan 28, 2009, at 11:49 AM, Will Murnane wrote:


> (on the client workstation)
> wil...@chasca:~$ dd if=/dev/urandom of=bigfile
> dd: closing output file `bigfile': Disk quota exceeded
> wil...@chasca:~$ rm bigfile
> rm: cannot remove `bigfile': Disk quota exceeded

Will,

   I filed a CR on this (6798878); a fix is on the way for OpenSolaris.
Can you continue using regular quotas (instead of refquotas)?  Those
don't suffer from the same issue, although of course you'll lose the
refquota functionality.
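
A sketch of that workaround, assuming a hypothetical dataset
tank/home/willm currently limited by a 10G refquota:

# zfs set refquota=none tank/home/willm
# zfs set quota=10G tank/home/willm

With a plain quota, unlink() is still allowed when the filesystem is at its
limit, so the rm above would succeed.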

-Chris



Re: [zfs-discuss] Verbose Information from zfs send -v snapshot

2009-01-16 Thread Chris Kirby
On Jan 16, 2009, at 4:47 AM, Nick Smith wrote:

> When I use the command 'zfs send -v snapshot-name' I expect to see,
> as the manpage states, some verbose information printed to stderr
> (probably), but I don't see anything on either Solaris 10u6 or
> OpenSolaris 2008.11.  Am I doing something wrong here?
>
> Also, what should be the contents of this verbose information anyway?

Nick,

Specifying -v to zfs send doesn't result in much extra information, and
only in certain cases.  For example, if you do an incremental send,
you'll get this piece of extra output:

# zfs send -v -I t...@snap1 t...@snap2 > /tmp/x
sending from @snap1 to t...@snap2
#

zfs recv -v is a bit more chatty, but again, only in certain cases.

Output from zfs send -v goes to stderr; output from zfs recv -v goes
to stdout.
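
For comparison, a verbose receive looks roughly like this (a sketch with
hypothetical dataset names; the exact wording varies by build):

# zfs send tank@snap1 | zfs recv -v tank2/copy
receiving full stream of tank@snap1 into tank2/copy@snap1
received 1.03MB stream in 1 seconds (1.03MB/sec)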

-Chris



Re: [zfs-discuss] zvol snapshot at size 100G

2008-11-13 Thread Chris Kirby

On Nov 13, 2008, at 12:37 PM, Matthew Ahrens wrote:

> Are you sure that you don't have any refreservations?

Oh, right, on pools with version >= SPA_VERSION_REFRESERVATION
we add a refreservation for zvols instead of a regular reservation.

So a 100G zvol will have a 100G refreservation set at creation
time.
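
You can see this right after creating a zvol; a sketch with a hypothetical
pool tank:

# zfs create -V 100G tank/vol
# zfs get refreservation tank/vol
NAME      PROPERTY        VALUE  SOURCE
tank/vol  refreservation  100G   local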

-Chris




Re: [zfs-discuss] zvol snapshot at size 100G

2008-11-13 Thread Chris Kirby
On Nov 13, 2008, at 1:45 PM, Chris Kirby wrote:
> Oh, right, on pools with version >= SPA_VERSION_REFRESERVATION
> we add a refreservation for zvols instead of a regular reservation.
>
> So a 100G zvol will have a 100G refreservation set at creation
> time.

Just to clarify this a bit: the reason we do this is so that snapshots
don't steal space from the zvol.

One consequence is that in order to take a snapshot of a zvol (or any
dataset with a refreservation), there must be enough free space in the
pool to cover the possibility that every block not already part of a
snapshot (the referenced, or REFER, bytes) might become dirty, up to
the size of the refreservation.
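
A worked sketch, assuming a hypothetical 100G zvol tank/vol that currently
refers to 40G of unique data:

# zfs get -H -o value referenced tank/vol
40G

Since all 40G of referenced blocks could be overwritten while a snapshot
pins the old copies, the pool needs roughly 40G free before
'zfs snapshot tank/vol@snap' will succeed.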

-Chris



Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-14 Thread Chris Kirby
Andy Lubel wrote:
> On May 14, 2008, at 10:39 AM, Chris Siebenmann wrote:
>
>> | Think what you are looking for would be a combination of a snapshot
>> | and zfs send/receive, that would give you an archive that you can use
>> | to recreate your zfs filesystems on your zpool at will at a later time.
>>
>> Talking of using zfs send/receive for backups and archives: the
>> Solaris 10U4 zfs manpage contains some blood-curdling warnings about
>> there being no cross-version compatibility promises for the output
>> of 'zfs send'.  Can this be ignored in practice, or is it a real issue?
>
> It's real!  You can't send and receive between versions of ZFS.

The warning is a little scary, but in practice it's not such a big deal.

The man page says this:

   The format of the stream is evolving. No backwards compatibility
   is guaranteed. You may not be able to receive your streams on
   future versions of ZFS.

To date, the only incompatibility is with send streams created prior
to Nevada build 36 (there probably aren't very many of those; ZFS was
introduced in Nevada build 27), which cannot be received by zfs receive
on Nevada build 89 and later.

Note that this incompatibility doesn't affect Solaris 10 at all.  All
s10 releases use the new stream format.

More details (and instructions on how to resurrect any pre build 36 streams)
can be found here:

http://opensolaris.org/os/community/on/flag-days/pages/2008042301

-Chris



Re: [zfs-discuss] ZFS and disk usage management?

2008-05-07 Thread Chris Kirby
Kyle McDonald wrote:
> [EMAIL PROTECTED] wrote:
>> I assume that ZFS quotas are enforced even if the current
>> size and space free is not included in the user-visible 'df'.
>> Is that not true?
>>
>> Presumably applications get some unexpected error when the
>> quota limit is hit since the client OS does not know the real
>> amount of space free.
>>
>> In my experience, I simply couldn't implement Solaris-level quotas at
>> all for ZFS filesystems.
>
> That's my understanding also.  I'm not clear (but I think I can guess) on
> the exact difference between reservations and quotas, but from what I
> understand ZFS implements its own 'tree quotas', which limit the space
> consumed by a directory and everything below it.  It does not
> (currently?) support traditional Unix user/group quotas, where the
> space consumed by files owned by a user or group is limited no matter
> where in the filesystem they are located.

Here's a good description of ZFS quotas and reservations:

http://blogs.sun.com/markm/category/ZFS

We have since added refquota and refreservation, which do not include
the space consumed by snapshots.
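
For example (hypothetical dataset names), the two limits are set the same
way and differ only in what counts against them:

# zfs set quota=10G tank/home/alice
# zfs set refquota=10G tank/home/alice

With quota, space used by snapshots counts against the 10G; with refquota,
only space referenced by the active filesystem does.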

-Chris


Re: [zfs-discuss] Disabling zfs xattr in S10u4

2008-03-14 Thread Chris Kirby
Balaji Kutty wrote:
> Chris Kirby wrote:
>
>> Balaji Kutty wrote:
>>
>>> Hi,
>>>
>>> I want to disable extended attributes in my zfs on s10u4. I found out
>>> that the command to do this is 'zfs set xattr=off poolname'. But I do
>>> not see this option in s10u4.
>>
>> This RFE (6351954) appears to have been integrated into s10u4.
>> What error message are you seeing when you try to turn off the
>> xattr property?
>>
>> -Chris
>
> Sorry for the confusion. My apologies. I'm using s10u3 only.
>
> The question is: is it possible to set the attributes off in s10u3?

In that case, no, there's no way to disable extended attributes
in s10u3.  The xattr property and support for legacy mounts
with '-o noxattr' weren't added until s10u4.

-Chris


Re: [zfs-discuss] Disabling zfs xattr in S10u4

2008-03-13 Thread Chris Kirby
Balaji Kutty wrote:
> Hi,
>
> I want to disable extended attributes in my zfs on s10u4. I found out
> that the command to do this is 'zfs set xattr=off poolname'. But I do
> not see this option in s10u4.

Hmm, I thought that had made it back to s10u4, but I guess not.

 
> How can I disable zfs extended attributes on s10u4?

You can use a legacy-style mount:

zfs set mountpoint=legacy tank/fs1
mount -F zfs -o noxattr tank/fs1 /mnt

If you want the fs to be mounted at boot time, you'll need to add an
entry to /etc/vfstab.
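
A sketch of the corresponding /etc/vfstab line (fields: device to mount,
device to fsck, mount point, FS type, fsck pass, mount at boot, options):

tank/fs1  -  /mnt  zfs  -  yes  noxattr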

-Chris


Re: [zfs-discuss] Disabling zfs xattr in S10u4

2008-03-13 Thread Chris Kirby
Balaji Kutty wrote:
> Hi,
>
> I want to disable extended attributes in my zfs on s10u4. I found out
> that the command to do this is 'zfs set xattr=off poolname'. But I do
> not see this option in s10u4.

This RFE (6351954) appears to have been integrated into s10u4.
What error message are you seeing when you try to turn off the
xattr property?

-Chris


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Chris Kirby
Nicolas Williams wrote:
> On Wed, Feb 27, 2008 at 01:13:06PM -0500, Kyle McDonald wrote:
>
>> Nicolas Williams wrote:
>>
>>> man runat
>>
>> Oh! Cool!
>>
>> Is that the only way to access those attributes? or just the one that's
>> most likely to work?
>
> man fsattr
>
> :)
>
>> I can see how for running commands it'd be useful, but for interactive
>> use it's too bad 'cd' can't work. or can it? I wasn't able to get it to.
>
> Er, good question!  I think the shells would have to support it.  A good
> question for Roland :)

The shells don't actually have to care:

$ cd /tmp
$ touch f1
$ runat f1 sh

Now my shell is running in file f1's extended attribute space.

$ ls
SUNWattr_ro  SUNWattr_rw
$



Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Chris Kirby
Nicolas Williams wrote:
> On Wed, Feb 27, 2008 at 12:31:09PM -0600, Chris Kirby wrote:
>
>>> Er, good question!  I think the shells would have to support it.  A good
>>> question for Roland :)
>>
>> The shells don't actually have to care:
>>
>> $ cd /tmp
>> $ touch f1
>> $ runat f1 sh
>
> I know that works.  But why start a new process when the shell could
> have a built-in (or mod to the cd built-in) that can do this?

Yep, that certainly could be done with just a few lines
of code.  I was just demonstrating that it could be done
now, in an interactive session.


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Chris Kirby
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, James F. Hranicky wrote:
>
>> and due to the fact that snapshots counted toward ZFS quota, I decided
>
> Yes, that does seem to remove a bit of their value for backup purposes. I
> think they're planning to rectify that at some point in the future.

We're adding a style of quota that only includes the bytes
referenced by the active fs.  Also, there will be a matching
style for reservations.

"Some point in the future" is very soon (weeks).  :-)

-Chris


Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-07 Thread Chris Kirby
Mike Gerdts wrote:
> It appears as though the author has not yet tried out snapshots.  The
> fact that space used by a snapshot for the sysadmin's convenience
> counts against the user's quota is the real killer.

Very soon there will be another way to specify quotas (and
reservations) such that they only apply to the space used by
the active dataset.

This should make the effect of quotas more obvious to end users
while allowing them to remain blissfully unaware of any snapshot
activity by the sysadmin.

-Chris





Re: [zfs-discuss] zfs legacy filesystem remounted rw: atime temporary off?

2007-02-05 Thread Chris Kirby

Jürgen Keil wrote:

> I have my /usr filesystem configured as a zfs filesystem,
> using a legacy mountpoint.  I noticed that the system boots
> with atime updates temporarily turned off (and doesn't record
> file accesses in the /usr filesystem):
>
> # df -h /usr
> Filesystem      size   used  avail capacity  Mounted on
> files/usr-b57    98G   2.1G    18G    11%    /usr
>
> # zfs get atime files/usr-b57
> NAME           PROPERTY  VALUE  SOURCE
> files/usr-b57  atime     off    temporary
>
> That is, when a zfs legacy filesystem is mounted in
> read-only mode, and then remounted read/write,
> atime updates are off:
>
> # zfs create -o mountpoint=legacy files/foobar
>
> # mount -F zfs -o ro files/foobar /mnt
>
> # zfs get atime files/foobar
> NAME          PROPERTY  VALUE  SOURCE
> files/foobar  atime     on     default
>
> # mount -F zfs -o remount,rw files/foobar /mnt
>
> # zfs get atime files/foobar
> NAME          PROPERTY  VALUE  SOURCE
> files/foobar  atime     off    temporary
>
> Is this expected behaviour?



I suspect it's related to this bug:

http://bugs.opensolaris.org/view_bug.do?bug_id=6498096

which is "zfs noatime broken with legacy mount".

I started fixing this a while back but never finished
it.  Is this causing you pain, or is it just something
you noticed?
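
In the meantime, one thing worth trying (a guess on my part, not a verified
workaround) is to remount with atime requested explicitly:

# mount -F zfs -o remount,rw,atime files/foobar /mnt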

-Chris
