Re: [zfs-discuss] Re: Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Richard Elling

Jason Austin wrote:

A bit off the subject, but what would be the advantage, for virtualization, of
using a pool of files versus just creating another zfs on an existing pool?  My
purpose for using the file pools was to experiment and learn about any quirks
before I go to production.  It let me do things like set up a large raidz and
fail parts out without having a ton of disks on my test system.


You can do this sort of testing with ramdisks, too.
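For example (a sketch; the names and sizes are made up):

ramdiskadm -a rd1 64m
ramdiskadm -a rd2 64m
ramdiskadm -a rd3 64m
zpool create ztest raidz /dev/ramdisk/rd1 /dev/ramdisk/rd2 /dev/ramdisk/rd3

When done, destroy the pool and delete the ramdisks (zpool destroy ztest;
ramdiskadm -d rd1; and so on).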
 -- richard


[zfs-discuss] Differences between ZFS and UFS.

2006-12-29 Thread Pawel Jakub Dawidek
Hi.

Here are some things my file system test suite discovered on Solaris ZFS
and UFS.

Basically, ZFS passes all my tests (about 3000). I see one problem with UFS
and two differences:

1. The link(2) manual page states that privileged processes can make
   multiple links to a directory. This reads like a general statement, but
   it's only true for UFS.

2. link(2) on UFS allows directories to be removed, but ZFS does not
   allow this.

3. An unsuccessful link(2) can update the file's ctime:

# fstest mkdir foo 0755
# fstest create foo/bar 0644
# fstest chown foo/bar 65534 -1
# ctime1=`fstest stat foo/bar ctime`
# sleep 1
# fstest -u 65534 link foo/bar foo/baz   <--- this unsuccessful operation updates ctime
EACCES
# ctime2=`fstest stat foo/bar ctime`
# echo $ctime1 $ctime2
1167440797 1167440798

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!




Re: [zfs-discuss] zfs internal working - compression question

2006-12-29 Thread Tomas Ögren
On 29 December, 2006 - roland sent me these 1,0K bytes:

> Hello!
> 
> I have come across some weirdness I would like to understand.
> 
> It's not an issue, I'm just wondering about it.
> 
> I created two zfs filesystems based on image files used as devices,
> i.e. I created them on top of two empty files of exactly the same size.
> 
> Then I enabled compression on one of them (zfs set compression=on
> compressedzfs).
> 
> After copying a large file to both filesystems, I unmounted them,
> exported them and ran gzip on the zfs image files.
> 
> After gzipping, the image file with compression=on is nearly twice as
> big as the image file with compression=off.
> 
> This is something I wouldn't have expected.
> 
> OK, I didn't expect the same size, but I never would have expected such
> a BIG difference, since we are basically re-compressing data which is
> already compressed.
> 
> What's causing this effect? Can someone explain it?

The compression used in ZFS isn't as strong as gzip's, because that would
take too much CPU (and I've heard the code was borrowed from the kernel
crash dump path, which isn't allowed to allocate memory, for instance).
It's a simple Lempel-Ziv variant (lzjb), a "quick and kinda good"
algorithm rather than a thorough one. Before compression, data may be
quite compressible (with a choice between fast with a larger result and
slow with a smaller result), but after compression, even by a non-ideal
algorithm, the data is close to random, and random data is very hard to
compress further. So gzip shrinks the uncompressed image a lot, but finds
little left to squeeze in the already-compressed one.

Try the difference between zfs -> zfs+gzip vs. gzip -> gzip+gzip.
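The same effect is easy to see with gzip alone (a sketch; file names made up):

gzip -c /var/tmp/bigfile > /var/tmp/once.gz     # big gain
gzip -c /var/tmp/once.gz > /var/tmp/twice.gz    # almost no gain
ls -l /var/tmp/bigfile /var/tmp/once.gz /var/tmp/twice.gz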

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


[zfs-discuss] Re: Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Jason Austin
A bit off the subject, but what would be the advantage, for virtualization, of
using a pool of files versus just creating another zfs on an existing pool?  My
purpose for using the file pools was to experiment and learn about any quirks
before I go to production.  It let me do things like set up a large raidz and
fail parts out without having a ton of disks on my test system.
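Something like this, say (paths and sizes made up):

mkfile 64m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
zpool create ztest raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
zpool offline ztest /var/tmp/d4    # simulate failing a "disk" out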
 
 


Re: [zfs-discuss] Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Holger Berger

On 12/29/06, Eric Schrock <[EMAIL PROTECTED]> wrote:

> On Fri, Dec 29, 2006 at 11:23:30PM +0100, Holger Berger wrote:
> >
> > So the goal is to allow infinite nesting?
> >
>
> That would be my guess, based on the fact that disallowing the opposite
> is effectively impossible.


I guess it may be possible by adding enough "return ERRDONTDOTHAT;" lines
in the code, but IMO that would greatly limit the usefulness of zfs in
virtualization scenarios.


> However, no serious investigation into this
> problem has been done.


;(

Holger


Re: [zfs-discuss] Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Eric Schrock
On Fri, Dec 29, 2006 at 11:23:30PM +0100, Holger Berger wrote:
> 
> So the goal is to allow infinite nesting?
>

That would be my guess, based on the fact that disallowing the opposite
is effectively impossible.  However, no serious investigation into this
problem has been done.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Holger Berger

On 12/29/06, Eric Schrock <[EMAIL PROTECTED]> wrote:

> On Fri, Dec 29, 2006 at 01:48:17PM -0800, Jason Austin wrote:
> > Which part is the bug?  The crash or allowing pools of files that are
> > on a zfs?
>
> The crash.  Disallowing files from a ZFS filesystem would solve part of
> the problem, but one could always create a lofi device on top of a ZFS
> file, or a file on a UFS filesystem on top of a zvol, which would result
> in the same behavior.  Since one needs PRIV_SYS_CONFIG to create a pool,
> and the solution is non-trivial, it hasn't been high priority.


So the goal is to allow infinite nesting?

Holger


Re: [zfs-discuss] Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Eric Schrock
On Fri, Dec 29, 2006 at 01:48:17PM -0800, Jason Austin wrote:
> Which part is the bug?  The crash or allowing pools of files that are on a 
> zfs?

The crash.  Disallowing files from a ZFS filesystem would solve part of
the problem, but one could always create a lofi device on top of a ZFS
file, or a file on a UFS filesystem on top of a zvol, which would result
in the same behavior.  Since one needs PRIV_SYS_CONFIG to create a pool,
and the solution is non-trivial, it hasn't been high priority.
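For example, even with file vdevs on ZFS disallowed, something like this
would still get there (a sketch; paths made up):

mkfile 128m /tank/fs/backing
lofiadm -a /tank/fs/backing      # prints the new device, say /dev/lofi/1
zpool create nested /dev/lofi/1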

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Deadlock with a pool using files on another zfs?

2006-12-29 Thread Holger Berger

On 12/29/06, Eric Schrock <[EMAIL PROTECTED]> wrote:

> On Thu, Dec 28, 2006 at 04:07:57PM -0800, Jason Austin wrote:
> > When messing around with zfs trying to break it, I created a new pool
> > using files on an existing zfs filesystem.  It seemed to work fine
> > until I created a snapshot of the original filesystem and then tried
> > to destroy the pool using the files.  The system appeared to deadlock
> > and had to be rebooted.  When it came back up the file pool was in an
> > error state and could be destroyed.
> >
> > I don't see much value in being able to do that, but it might be a
> > good idea to have zpool error out instead of creating a pool that
> > could crash the system.
>
> Yes, this is a known issue.  There is a bug filed, but I don't have it
> offhand.


I hope the "fix" keeps the option to create pools on zfs file systems
recursively. It would be a shame if there were a restriction, as it would
limit zfs's usefulness for virtualization (like creating a "mobile" zone
where all data is stored in a file instead of on a physical disk).

Holger


[zfs-discuss] Re: Deadlock with a pool using files on another zfs?

2006-12-29 Thread Jason Austin
Which part is the bug?  The crash or allowing pools of files that are on a zfs?
 
 


Re: [zfs-discuss] Deadlock with a pool using files on another zfs?

2006-12-29 Thread Eric Schrock
On Thu, Dec 28, 2006 at 04:07:57PM -0800, Jason Austin wrote:
> When messing around with zfs trying to break it, I created a new pool
> using files on an existing zfs filesystem.  It seemed to work fine until
> I created a snapshot of the original filesystem and then tried to
> destroy the pool using the files.  The system appeared to deadlock and
> had to be rebooted.  When it came back up the file pool was in an
> error state and could be destroyed.
> 
> I don't see much value in being able to do that, but it might be a good
> idea to have zpool error out instead of creating a pool that could
> crash the system.

Yes, this is a known issue.  There is a bug filed, but I don't have it
offhand.
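
For reference, roughly the sequence reported (pool and path names made up):

zfs create tank/fs
mkfile 64m /tank/fs/f1 /tank/fs/f2
zpool create inner mirror /tank/fs/f1 /tank/fs/f2
zfs snapshot tank/fs@snap
zpool destroy inner      # reported to deadlock here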

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


[zfs-discuss] zfs internal working - compression question

2006-12-29 Thread roland
Hello!

I have come across some weirdness I would like to understand.

It's not an issue, I'm just wondering about it.

I created two zfs filesystems based on image files used as devices, i.e. I
created them on top of two empty files of exactly the same size.

Then I enabled compression on one of them (zfs set compression=on
compressedzfs).

After copying a large file to both filesystems, I unmounted them, exported
them and ran gzip on the zfs image files.

After gzipping, the image file with compression=on is nearly twice as big
as the image file with compression=off.

This is something I wouldn't have expected.

OK, I didn't expect the same size, but I never would have expected such a
BIG difference, since we are basically re-compressing data which is
already compressed.

What's causing this effect? Can someone explain it?
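
Roughly what I did (paths and sizes made up):

mkfile 512m /var/tmp/img1 /var/tmp/img2
zpool create plainpool /var/tmp/img1
zpool create comppool /var/tmp/img2
zfs set compression=on comppool
cp /path/to/largefile /plainpool ; cp /path/to/largefile /comppool
zpool export plainpool ; zpool export comppool
gzip -c /var/tmp/img1 > img1.gz ; gzip -c /var/tmp/img2 > img2.gz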

regards
roland
 
 


[zfs-discuss] Re: Fix dot-dot permissions without unmount?

2006-12-29 Thread Chris Gerhard
You have to mount the file system using NFS v3 or v2 for this trick to work.

See http://blogs.sun.com/chrisg/entry/fixing_a_covered_mount_point
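
That is, force the version on the mount, something like (a sketch):

mount -F nfs -o vers=3 localhost:/ /mnt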

--chris
 
 


Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-29 Thread Roch Bourbonnais


It seems, though, that the critical feature we need was optional in the
SBC-2 spec.

So we still need some development to happen on the storage end.

But we'll get there...


On 19 Dec 06, at 20:59, Jason J. W. Williams wrote:


Hi Roch,

That sounds like a most excellent resolution to me. :-) I believe
Engenio devices support SBC-2. It seems to me making intelligent
decisions for end-users is generally a good policy.

Best Regards,
Jason

On 12/19/06, Roch - PAE <[EMAIL PROTECTED]> wrote:



Jason J. W. Williams writes:
 > Hi Jeremy,
 >
 > It would be nice if you could tell ZFS to turn off fsync() for ZIL
 > writes on a per-zpool basis. That being said, I'm not sure there's a
 > consensus on that...and I'm sure not smart enough to be a ZFS
 > contributor. :-)
 >
 > The behavior is a reality we had to deal with and work around, so I
 > posted the instructions to hopefully help others in a similar boat.
 >
 > I think this is a valuable discussion point though...at least for
 > us. :-)
 >
 > Best Regards,
 > Jason
 >

To summarize:

Today, ZFS sends an ioctl to the storage that says "flush the write
cache", while what it really wants is "make sure this data is on stable
storage".  The storage should then flush the cache or not, depending on
whether the cache is considered stable (only the storage knows that).

Soon ZFS (more precisely SD) will be sending a 'qualified' ioctl to
clarify the requested behavior.

In parallel, storage vendors shall be implementing that qualified
ioctl.  ZFS customers of third-party storage probably have more
influence to get those vendors to support the qualified behavior.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690

With SD fixed and storage vendor support, there will be no more need to
tune anything.

-r



 > On 12/15/06, Jeremy Teo <[EMAIL PROTECTED]> wrote:
 > > > The instructions will tell you how to configure the array to
 > > > ignore SCSI cache flushes/syncs on Engenio arrays. If anyone has
 > > > additional instructions for other arrays, please let me know and
 > > > I'll be happy to add them!
 > >
 > > Wouldn't it be more appropriate to allow the administrator to
 > > disable ZFS from issuing the write cache enable command during a
 > > commit?  (assuming expensive high-end battery-backed cache etc etc)
 > > --
 > > Regards,
 > > Jeremy
 > >





Re: [zfs-discuss] Fix dot-dot permissions without unmount?

2006-12-29 Thread Casper . Dik

> After importing some pools after a re-install of the OS, I hit that
> "..: Permission denied" problem.  I figured out I could unmount, chmod,
> and mount to fix it, but that wouldn't be a good situation on a
> production box.  Is there any way to fix this problem without
> unmounting?


NFS-share the containing directory with root access.

While loopback NFS is generally frowned upon, you can use it for this
purpose:

share -o anon=0,rw=localhost /     # share / rw to localhost; anon=0 so root isn't squashed
mount -F nfs localhost:/ /mnt      # loopback-mount it over NFS
chmod 755 /mnt/zfs/mount           # fix the covered mount point's permissions
umount /mnt
unshare /

Casper