Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Michael Schuster

perhaps this helps:

http://www.eweek.com/c/a/Linux-and-Open-Source/Oracle-Explains-Unclear-Message-About-OpenSolaris-444787/

Michael

On 02/24/10 20:02, Troy Campbell wrote:

http://www.oracle.com/technology/community/sun-oracle-community-continuity.html


Half way down it says:
Will Oracle support Java and OpenSolaris User Groups, as Sun has?


...

--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] upgrading ZFS tools in opensolaris.com

2010-02-26 Thread Michael Schuster

On 02/26/10 09:36, Laurence wrote:

I'm probably getting this all wrong, but basically OpenSolaris 2009.06 (which is 
the latest ISO available, iirc) ships with snv_111b.
My problem is that I have a borked zpool and could really use PSARC 2009/479 to fix 
it; unfortunately, PSARC 2009/479 was only built recently and was subsequently 
released for solaris_nevada (snv_128).

Is there a safe way of bringing snv_128 to OpenSolaris?


set your publisher to the /devel branch and 'pkg image-update' - this will 
get you b133 (of course, as long as the pool you borked isn't the root pool ;-)
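
For illustration, the steps would be roughly as follows (the publisher name and
repository URL are assumptions - 'pkg publisher' shows what your image actually uses):

   # pkg publisher                                          # note the current origin
   # pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
   # pkg image-update                                       # creates and activates a new BE
   # init 6                                                 # reboot into it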


HTH
Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread Michael Schuster

On 20.04.10 07:52, Ken Gunderson wrote:


On Tue, 2010-04-20 at 12:27 +0700, "C. Bergström" wrote:

Ken Gunderson wrote:

Greetings All:

Granted there has been much fear, uncertainty, and doubt following
Oracle's takeover of Sun, but I ran across this on a FreeBSD mailing
list post dated 4/20/2010:

"...Seems that Oracle won't offer support for ZFS on opensolaris"


This guy probably
1) Doesn't know the difference between OpenSolaris and Solaris
2) Doesn't know anything
3) Doesn't cite a source

Stop wasting everyone's time with speculation and FUD


I think from the context of my post it was pretty clear that I viewed
the OP's thread as suspect.  Not being omnipotent, however, the
possibility exists that they may know something I do not, particularly
as the time stamp was very recent.  As I am sincerely interested in
either dispelling or confirming this as the case may be, I posted to the
place I thought most likely to offer a definitive answer.


if you'd been watching this place since the acquisition, you'd know that 
that is not the case - this is primarily an engineering "place", whereas 
answers regarding questions like the one you're floating come from the 
marketing/management side of the house.


The best chance for you to find out about this is to talk to your Oracle 
sales rep.


Michael
--
michael.schus...@oracle.com
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mapping inode numbers to file names

2010-04-28 Thread Michael Schuster

On 28.04.10 14:06, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey

Look up the inode number of README.  (for example, ls -i README)
 (suppose it’s inode 12345)
find /tank/.zfs/snapshot -inum 12345

Problem is, the find command will run for a long time.

Is there any faster way to find the file name(s) when all you know is
the inode number?  (Actually, all you know is all the info that’s in
the present directory, which is not limited to inode number; but, inode
number is the only information that I personally know could be useful.)


Due to lack of response, and based on my personal knowledge, and lack of any
useful response anywhere else I've asked this question, I'm drawing the
conclusion it's not possible to quickly lookup the name(s) of an inode.


no - consider hard links. (and sorry for not answering sooner, this obvious 
one didn't occur to me earlier).
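
A quick illustration of why the mapping can't be unique (paths and inode number are 
made up):

   $ ln /tank/docs/README /tank/docs/README.hardlink
   $ ls -i /tank/docs/README /tank/docs/README.hardlink
   12345 /tank/docs/README
   12345 /tank/docs/README.hardlink

both names refer to the same inode, so the inode number alone can't tell you which 
name (or how many names) to report.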


Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Michael Schuster

On 10.05.10 08:57, Roy Sigurd Karlsbakk wrote:

Hi all

It seems that if using zfs, the usual tools like vmstat, sar, top etc are quite 
worthless, since zfs i/o load is not reported as iowait etc. Are there any 
plans to rewrite the old performance monitoring tools or the zfs parts to allow 
for standard monitoring tools? If not, what other tools exist that can do the 
same?


"zpool iostat" for one.

Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Michael Schuster

On 19.05.10 17:53, John Andrunas wrote:

Not to my knowledge, how would I go about getting one?  (CC'ing discuss)


man savecore and dumpadm.
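
Roughly (defaults shown - adjust the device and directory to your configuration):

   # dumpadm                      # shows the dump device and the savecore directory
   # dumpadm -y                   # make sure savecore runs automatically after a panic
   # savecore -v                  # or extract the crash dump by hand after the reboot

'mdb unix.N vmcore.N' and ::status should then show the panic string ($C the panic stack).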

Michael



On Wed, May 19, 2010 at 8:46 AM, Mark J Musante  wrote:


Do you have a coredump?  Or a stack trace of the panic?

On Wed, 19 May 2010, John Andrunas wrote:


Running ZFS on a Nexenta box, I had a mirror get broken and apparently
the metadata is corrupt now.  If I try and mount vol2 it works but if
I try and mount -a or mount vol2/vm2 it instantly kernel panics and
reboots.  Is it possible to recover from this?  I don't care if I lose
the file listed below, but the other data in the volume would be
really nice to get back.  I have scrubbed the volume to no avail.  Any
other thoughts?


zpool status -xv vol2
  pool: vol2
state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-8A
scrub: none requested
config:

   NAMESTATE READ WRITE CKSUM
   vol2ONLINE   0 0 0
 mirror-0  ONLINE   0 0 0
   c3t3d0  ONLINE   0 0 0
   c3t2d0  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

   vol2/v...@snap-daily-1-2010-05-06-:/as5/as5-flat.vmdk

--
John
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Regards,
markm








--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread Michael Schuster

On 27.07.10 14:21, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of devsk

I have many core files stuck in snapshots eating up gigs of my disk
space. Most of these are BE's which I don't really want to delete right
now.


Ok, you don't want to delete them ...



Is there a way to get rid of them? I know snapshots are RO but can I do
some magic with clones and reclaim my space?


You don't want to delete them, but you don't want them to take up space
either?  Um ... Sorry, can't be done.  Move them to a different disk ...

Or clarify what it is that you want.

If you're saying you have core files in your present filesystem that you
don't want to delete ... And you also have core files in snapshots that you
*do* want to delete ...  As long as the file hasn't been changing, it's not
consuming space beyond what's in the current filesystem.  (See the output of
zfs list, looking at sizes and you'll see that.)  If it has been changing
... the cores in snapshot are in fact different from the cores in present
filesystem ... then the only way to delete them is to destroy snapshots.

Or have I still misunderstood the question?


yes, I think so.

Here's how I read it: the snapshots contain lots more than the core files, 
and OP wants to remove only the core files (I'm assuming they weren't 
discovered before the snapshot was taken) but retain the rest.
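
Put differently, the copies inside the snapshots are immutable (paths made up):

   $ rm /tank/home/.zfs/snapshot/snap-daily-1/core.1234
   rm: core.1234 not removed: Read-only file system

so reclaiming that space means destroying the snapshots that still reference those 
blocks (or some clone-and-promote magic, as the OP hinted) - which is exactly what 
he'd like to avoid.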


does that explain it better?

HTH
Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance drop with new Xeon 55xx and 56xx cpus

2010-08-11 Thread michael schuster

On 08/12/10 04:16, Steve Gonczi wrote:

Greetings,

I am seeing some unexplained performance drop using the above cpus,
using a fairly up-to-date build (late 145).
Basically, the system seems to be 98% idle, spending most of its time in this 
stack:

   unix`i86_mwait+0xd
   unix`cpu_idle_mwait+0xf1
   unix`idle+0x114
   unix`thread_start+0x8
455645

Most cpus seem to be idling most of the time, sitting on the mwait instruction.
No lock contention, not waiting on io, I am finding myself at a loss explaining 
what this system is doing.
(I am monitoring the system w. lockstat, mpstat, prstat).  Despite the 
predominantly idle system,
I see some latency reported by prstat microstate accounting on the zfs threads.

This is a fairly beefy box, 24G memory,  16 cpus.
Doing a local zfs send | receive, should be getting at least 100MB+,
and I am only getting  5-10MB.
I see some Intel errata on the 55xx series xeons, a problem with the
monitor/mwait instructions, that could conceivably cause missed wake-up or 
mis-reported  mwait status.


I'd suggest you supply a bit more information (to the list, not to me, I 
don't know very much about zfs internals):


- zpool/zfs configuration
- history of this issue: has it been like this since you installed the 
machine?

  - if no: what changes were introduced around the time you saw this first?
- does this happen on a busy machine too?
- describe your test in more detail
- provide measurements (lockstat, iostat, maybe some DTrace) before and 
during the test, and add some timestamps so people can correlate data to 
events (a rough sketch follows below).

- anything else you can think of that might be relevant.
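
Just as a rough sketch (commands, intervals, and the pool/dataset names are 
assumptions - adapt to your setup):

   # zpool status -v tank > config.out ; zfs get all tank >> config.out
   # iostat -xnzT d 5 > iostat.out &                  # disk stats with timestamps
   # lockstat -kIW -D 20 sleep 60 > lockstat.out &    # kernel profile while the test runs
   # date; zfs send tank/fs@snap | zfs receive tank/copy; date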

HTH
Michael
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-29 Thread Michael Schuster

On 29.07.09 07:56, Andre van Eyssen wrote:

On Wed, 29 Jul 2009, Mark J Musante wrote:


Yes, if it's local. Just use df -n $path and it'll spit out the 
filesystem type.  If it's mounted over NFS, it'll just say something 
like nfs or autofs, though.


$ df -n /opt
Filesystem            kbytes      used      avail  capacity  Mounted on
/dev/md/dsk/d24     33563061  11252547   21974884       34%  /opt
$ df -n /sata750
Filesystem            kbytes      used      avail  capacity  Mounted on
sata750           2873622528        77  322671575        1%  /sata750

Not giving the filesystem type. It's easy to spot the zfs with the lack 
of recognisable device path, though.




which df are you using?
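
(the native Solaris /usr/bin/df -n prints only the mount point and the filesystem 
type, something like

   $ /usr/bin/df -n /opt
   /opt               : ufs
   $ /usr/bin/df -n /sata750
   /sata750           : zfs

so the output above looks like it came from a different df.)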

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool UNAVAIL even though disk is online: another label issue?

2009-09-18 Thread michael schuster

All,

this morning, I did "pkg image-update" from 118 to 123 (internal repo), and 
upon reboot all I got was the grub prompt - no menu, nothing.


I found a 2009.06 CD, and when I boot that and run "zpool import", I
get told

localtank   UNAVAIL  insufficient replicas
  c8t1d0ONLINE

some research showed that disklabel changes sometimes cause this, so I ran 
format:


AVAILABLE DISK SELECTIONS:
   0. c8t0d0 
  /p...@0,0/pci108e,5...@7/d...@0,0
   1. c8t1d0 
  /p...@0,0/pci108e,5...@7/d...@1,0
Specify disk (enter its number): 1
selecting c8t1d0
[disk formatted]
Note: capacity in disk label is smaller than the real disk capacity.
Select   to adjust the label capacity.

[..]
partition> print
Current partition table (original):
Total disk sectors available: 781401310 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256    372.60GB          781401310
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         781401311      8.00MB          781417694


Format already tells me that the label doesn't align with the disk size ... 
 should I just do "expand", or should I change the first sector of 
partition 0 to be 0?

 I'd appreciate advice on the above, and on how to avoid this in the future.
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] addendum: zpool UNAVAIL even though disk is online: another label issue?

2009-09-18 Thread michael schuster

michael schuster wrote:

All,

this morning, I did "pkg image-update" from 118 to 123 (internal repo), 
and upon reboot all I got was the grub prompt - no menu, nothing.


I found a 2009.06 CD, and when I boot that and run "zpool import", I
get told

localtank   UNAVAIL  insufficient replicas
  c8t1d0ONLINE

some research showed that disklabel changes sometimes cause this, so I 
ran format:


AVAILABLE DISK SELECTIONS:
   0. c8t0d0 
  /p...@0,0/pci108e,5...@7/d...@0,0
   1. c8t1d0 
  /p...@0,0/pci108e,5...@7/d...@1,0
Specify disk (enter its number): 1
selecting c8t1d0
[disk formatted]
Note: capacity in disk label is smaller than the real disk capacity.
Select   to adjust the label capacity.

[..]
partition> print
Current partition table (original):
Total disk sectors available: 781401310 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256    372.60GB          781401310
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         781401311      8.00MB          781417694


Format already tells me that the label doesn't align with the disk size 
...  should I just do "expand", or should I change the first sector of 
partition 0 to be 0?
 I'd appreciate advice on the above, and on how to avoid this in the 
future.


I just found out that this disk has been EFI-labelled, which I understand 
isn't what zfs likes/expects.


what to do now?

TIA
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] addendum: zpool UNAVAIL even though disk is online: another label issue?

2009-09-18 Thread michael schuster

Cindy Swearingen wrote:

Michael,

ZFS handles EFI labels just fine, but you need an SMI label on the disk 
that you are booting from.


Are you saying that localtank is your root pool?


no... (I was on the plane yesterday, I'm still jet-lagged), I should have 
realised that that's strange.


I believe the OSOL install creates a root pool called rpool. I don't 
remember if its configurable.


I didn't do anything to change that. This leads me to the assumption that 
the disk I should be looking at is actually c8t0d0, the "other" disk in the 
format output.



Can you describe the changes other than the pkg-image-update that lead 
up to this problem?



0) pkg refresh; pkg install SUNWipkg
1) pkg image-update (creates opensolaris-119)
2) pkg mount opensolaris-119 /mnt
3) cat /mnt/etc/release (to verify I'd indeed installed b123)
4) pkg umount opensolaris-119
5) pkg rename opensolaris-119 opensolaris-123 # this failed, because it's 
active

6) pkg activate opensolaris-118   # so I can rename the new one
7) pkg rename ...
8) pkg activate opensolaris-123

9) reboot

thx
Michael


Cindy

On 09/18/09 11:05, michael schuster wrote:

michael schuster wrote:

All,

this morning, I did "pkg image-update" from 118 to 123 (internal 
repo), and upon reboot all I got was the grub prompt - no menu, nothing.


I found a 2009.06 CD, and when I boot that and run "zpool import", I
get told

localtank   UNAVAIL  insufficient replicas
  c8t1d0ONLINE

some research showed that disklabel changes sometimes cause this, so 
I ran format:


AVAILABLE DISK SELECTIONS:
   0. c8t0d0 
  /p...@0,0/pci108e,5...@7/d...@0,0
   1. c8t1d0 
  /p...@0,0/pci108e,5...@7/d...@1,0
Specify disk (enter its number): 1
selecting c8t1d0
[disk formatted]
Note: capacity in disk label is smaller than the real disk capacity.
Select   to adjust the label capacity.

[..]
partition> print
Current partition table (original):
Total disk sectors available: 781401310 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256    372.60GB          781401310
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         781401311      8.00MB          781417694


Format already tells me that the label doesn't align with the disk 
size ...  should I just do "expand", or should I change the first 
sector of partition 0 to be 0?
 I'd appreciate advice on the above, and on how to avoid this in the 
future.


I just found out that this disk has been EFI-labelled, which I 
understand isn't what zfs likes/expects.


what to do now?

TIA
Michael





--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] addendum: zpool UNAVAIL even though disk is online: another label issue?

2009-09-18 Thread michael schuster

Cindy Swearingen wrote:

Michael,

Get some rest. :-)

Then see if you can import your root pool while booted from the LiveCD.


that's what I tried - I'm never even shown "rpool", I probably wouldn't 
have mentioned localpool at all if I had ;-)


After you get to that point, you might search the indiana-discuss 
archive for tips on

resolving the pkg-image-update no grub menu problem.


if I don't see rpool, that's not going to be the next step for me, right?

thx
Michael


Cindy

On 09/18/09 12:08, michael schuster wrote:

Cindy Swearingen wrote:

Michael,

ZFS handles EFI labels just fine, but you need an SMI label on the 
disk that you are booting from.


Are you saying that localtank is your root pool?


no... (I was on the plane yesterday, I'm still jet-lagged), I should 
have realised that that's strange.


I believe the OSOL install creates a root pool called rpool. I don't 
remember if its configurable.


I didn't do anything to change that. This leads me to the assumption 
that the disk I should be looking at is actually c8t0d0, the "other" 
disk in the format output.



Can you describe the changes other than the pkg-image-update that 
lead up to this problem?



0) pkg refresh; pkg install SUNWipkg
1) pkg image-update (creates opensolaris-119)
2) pkg mount opensolaris-119 /mnt
3) cat /mnt/etc/release (to verify I'd indeed installed b123)
4) pkg umount opensolaris-119
5) pkg rename opensolaris-119 opensolaris-123 # this failed, because 
it's active

6) pkg activate opensolaris-118   # so I can rename the new one
7) pkg rename ...
8) pkg activate opensolaris-123

9) reboot

thx
Michael


Cindy

On 09/18/09 11:05, michael schuster wrote:

michael schuster wrote:

All,

this morning, I did "pkg image-update" from 118 to 123 (internal 
repo), and upon reboot all I got was the grub prompt - no menu, 
nothing.


I found a 2009.06 CD, and when I boot that and run "zpool import", I
get told

localtank   UNAVAIL  insufficient replicas
  c8t1d0ONLINE

some research showed that disklabel changes sometimes cause this, 
so I ran format:


AVAILABLE DISK SELECTIONS:
   0. c8t0d0 
  /p...@0,0/pci108e,5...@7/d...@0,0
   1. c8t1d0 
  /p...@0,0/pci108e,5...@7/d...@1,0
Specify disk (enter its number): 1
selecting c8t1d0
[disk formatted]
Note: capacity in disk label is smaller than the real disk capacity.
Select   to adjust the label capacity.

[..]
partition> print
Current partition table (original):
Total disk sectors available: 781401310 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256    372.60GB          781401310
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         781401311      8.00MB          781417694



Format already tells me that the label doesn't align with the disk 
size ...  should I just do "expand", or should I change the first 
sector of partition 0 to be 0?
 I'd appreciate advice on the above, and on how to avoid this in 
the future.


I just found out that this disk has been EFI-labelled, which I 
understand isn't what zfs likes/expects.


what to do now?

TIA
Michael










--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] addendum: zpool UNAVAIL even though disk is online: another label issue?

2009-09-19 Thread michael schuster

Victor Latushkin wrote:


I think you need to get a closer look at your another disk.

Is it possible to get result of (change controller/target numbers as 
appropriate if needed)


dd if=/dev/rdsk/c8t0d0p0 bs=1024k count=4 | bzip2 -9 > c8t0d0p0.front.bz2

while booted off OpenSolaris CD?


not anymore - I realised I had no relevant data on the box, so I 
re-installed to get going again.


thx

Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Michael Schuster

On 01.10.09 07:20, camps support wrote:

I have a system that is having issues with the pam.conf.

I have booted to cd but am stuck at how to mount the rootpool in single-user.  I need to make some changes to the pam.conf but am not sure how to do this. 


I think "zpool import" should be the first step for you.

HTH
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Michael Schuster

On 01.10.09 08:25, camps support wrote:

I did zpool import -R /tmp/z rootpool

It only mounted /export and /rootpool only had /boot and /platform.

I need to be able to get /etc and /var?


zfs set mountpoint ...
zfs mount
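
i.e. something like this (the dataset name is a guess - 'zfs list -r rootpool' shows 
the real one; with the pool imported -R /tmp/z, the mount lands under /tmp/z):

   # zfs list -r rootpool
   # zfs set mountpoint=/ rootpool/ROOT/s10be     # only if it isn't / already
   # zfs mount rootpool/ROOT/s10be                # /etc and /var should now be under /tmp/z

(and set the mountpoint back afterwards if you had to change it).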

--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] million files in single directory

2009-10-03 Thread michael schuster

Jeff Haferman wrote:

A user has 5 directories, each has tens of thousands of files, the
largest directory has over a million files.  The files themselves are
not very large, here is an "ls -lh" on the directories:
[these are all ZFS-based]

[r...@cluster]# ls -lh
total 341M
drwxr-xr-x+ 2 someone cluster  13K Sep 14 19:09 0/
drwxr-xr-x+ 2 someone cluster  50K Sep 14 19:09 1/
drwxr-xr-x+ 2 someone cluster 197K Sep 14 19:09 2/
drwxr-xr-x+ 2 someone cluster 785K Sep 14 19:09 3/
drwxr-xr-x+ 2 someone cluster 3.1M Sep 14 19:09 4/

When I go into directory "0", it takes about a minute for an "ls -1 |
grep wc" to return (it has about 12,000 files).  Directory "1" takes
between 5-10 minutes for the same command to return (it has about 50,000
files).


"ls" sorts its output before printing, unless you use the option to turn 
this off (-f, IIRC, but check the man-page).


"echo * | wc" is also a way to find out what's in a directory, but you'll 
miss "."files, and the shell you're using may have an influence ..


HTH
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent

2009-10-22 Thread michael schuster

Stathis Kamperis wrote:

Salute.

I have a filesystem where I store various source repositories (cvs +
git). I have compression enabled on and zfs get compressratio reports
1.46x. When I copy all the stuff to another filesystem without
compression, the data take up _less_ space (3.5GB vs 2.5GB). How's
that possible ?


just a few thoughts:
- how do you measure how much space your data consumes?
- how do you copy?
- is the other FS also ZFS?

Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compressed fs taking up more space than uncompressed equivalent

2009-10-22 Thread michael schuster

Stathis Kamperis wrote:

2009/10/23 michael schuster :

Stathis Kamperis wrote:

Salute.

I have a filesystem where I store various source repositories (cvs +
git). I have compression enabled, and zfs get compressratio reports
1.46x. When I copy all the stuff to another filesystem without
compression, the data take up _less_ space (3.5GB vs 2.5GB). How's
that possible ?

just a few thoughts:
- how do you measure how much space your data consumes?

With zfs list, under the 'USED' column. du(1) gives the same results
as well (the individual fs sizes aren't entirely identical with those
that zfs list reports, but the difference still exists).

tank/sources   3.73G   620G  3.73G  /export/sources
  <--- compressed
tank/test  2.32G   620G  2.32G  /tank/test
  <--- uncompressed


obvious, but still: you did make sure that the compressed one doesn't have 
any other data lying around, right?





- how do you copy?

With cp(1). Should I be using zfs send | zfs receive ?


I don't know :-) I was just (still am) thinking out loud.


- is the other FS also ZFS?

Yes. And they both live under the same pool.

If it matters, I don't have any snapshots on neither of the filesystems.


"zfs list -t all" might still be revealing ...

Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedup question

2009-11-23 Thread Michael Schuster

Colin Raven wrote:

Folks,
I've been reading Jeff Bonwick's fascinating dedup post. This is going 
to sound like either the dumbest or the most obvious question ever 
asked, but, if you don't know and can't produce meaningful RTFM 
results, ask... so here goes:


Assuming you have a dataset in a zfs pool that's been deduplicated, with 
pointers all nicely in place and so on.


Doesn't this mean that you're now always and forever tied to ZFS (and 
why not? I'm certainly not saying that's a Bad Thing) because no other 
"wannabe file system" will be able to read those ZFS pointers?


no other filesystem (unless it's ZFS-compatible ;-) will be able to read 
any "zfs pointers" (or much of any zfs internal data) - and it is 
completely independent of whether you use deduplication or not.


If you want to have your data on a different FS, you'll have to copy it off 
of zfs and onto your other FS with something like cpio or tar or maybe a 
backup tool that understands both - ZFS and OFS (other ...).
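
A minimal sketch of such a copy (paths made up; cpio, rsync or a real backup tool 
would do just as well):

   # cd /tank/data && tar cf - . | (cd /mnt/otherfs/data && tar xpf -)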



Or am I horribly misunderstanding the concept in some way?


maybe - OTOH, maybe I misread your question: is this about a different FS 
*on top of* zpools/zvols? If so, I'll have to defer to Team ZFS.


HTH
Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup clarification

2009-11-27 Thread Michael Schuster

Thomas Maier-Komor wrote:


Script started on Wed Oct 28 09:38:38 2009
# zfs get dedup rpool/export/home
NAME   PROPERTY  VALUE SOURCE
rpool/export/home  dedup onlocal
# for i in 1 2 3 4 5 ; do mkdir /export/home/d${i} && df -k
/export/home/d${i} && zfs get used rpool/export/home && cp /testfile
/export/home/d${i}; done 





as far as I understood it, the dedup works during writing, and won't
deduplicate already written data (this is planned for a later release).


isn't he doing just that (writing, that is)?

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Michael Schuster

Per Baatrup wrote:

"dedup" operates on the block level leveraging the existing FFS
checksums. Read "What to dedup: Files, blocks, or bytes" here
http://blogs.sun.com/bonwick/entry/zfs_dedup

The trick should be that the zcat userland app already knows that it
will generate duplicate files so data read and writes could be avoided
all together.


you'd probably be better off avoiding "zcat" - it's been in use since 
almost forever, from the man-page:


     zcat
          The zcat utility writes to standard output the uncompressed
          form of files that have been compressed using compress. It
          is the equivalent of uncompress -c. Input files are not
          affected.

:-)

cheers
Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread michael schuster

Roland Rambau wrote:

gang,

actually a simpler version of that idea would be a "zcp":

if I just cp a file, I know that all blocks of the new file
will be duplicates; so the cp could take full advantage for
the dedup without a need to check/read/write any actual data


I think they call it 'ln' ;-) and that even works on ufs.

Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread michael schuster

Per Baatrup wrote:

Actually 'ln -s source target' would not be the same as "zcp source target",
as writing to the source file after the operation would change the
target file as well, whereas for "zcp" this would only change the source
file, due to the copy-on-write semantics of ZFS.


I actually was thinking of creating a hard link (without the -s option), 
but your point is valid for hard and soft links.


cheers
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Michael Schuster

Nicolas Williams wrote:

On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:

if any of f2..f5 have different block sizes from f1

This restriction does not sound so bad to me if this only refers to
changes to the blocksize of a particular ZFS filesystem or copying
between different ZFSes in the same pool. This can properly be managed
with a "-f" switch on the userlan app to force the copy when it would
fail.


Why expose such details?

If you have dedup on and if the file blocks and sizes align then

cat f1 f2 f3 f4 f5 > f6

will do the right thing and consume only space for new metadata.


I think Per's concern was not only with the space consumed but also with the effort 
involved in the process (think large files); if I read his emails 
correctly, he'd like what amounts to a manipulation of metadata only, so that 
the data blocks of what were originally 5 files end up in one file; the 
traditional concat operation will cause all the data to be read and written 
back, at which point dedup kicks in, so most of the processing effort has 
already been spent by then. (Per, please correct/comment)


Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Deduplication - deleting the original

2009-12-08 Thread Michael Schuster

Colin Raven wrote:

What happens if, once dedup is on, I (or someone else with delete 
rights) open a photo management app containing that collection, and 
start deleting dupes - AND - happen to delete the original that all 
other references are pointing to. I know, I know, it doesn't matter - 
snapshots save the day - but in this instance that's not the point 
because I'm trying to properly understand the underlying dedup concept.


Logically, if you delete what everything is pointing at, all the 
pointers are now null values, they are - in effect - pointing at 
nothing...an empty hole.


I have the feeling the answer to this is: "no they don't, there is no 
spoon ("original") you're still OK". I suspect that, only because the 
people who thought this up couldn't possibly have missed such an 
"obvious" point. The problem I have is in trying to mentally frame this 
in such a way that I can subsequently explain it, if asked to do so 
(which I see coming for sure).


Help in understanding this would be hugely helpful - anyone?


I mentally compare deduplication to links to files (hard, not soft) - as I 
understand it, there is no "original" and "copy"; rather, every directory 
entry points to "the data" (the inode, in ufs-speak), and if one directory 
entry of several is deleted, only the reference count changes.
It's probably a little more complicated with dedup, but I think the 
parallel is valid.
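
In shell terms (file names made up):

   $ ln photos/img001.jpg photos/dup.jpg     # a second name for the same data
   $ ls -l photos/img001.jpg                 # link count is now 2
   $ rm photos/img001.jpg                    # one name gone; the data stays until the count reaches 0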


HTH
Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster

Mike Gerdts wrote:

On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi  wrote:

Hello,

As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunatelly now that
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.

Traditional approaches like "find ./ -exec rm {} \;" seem to take forever
- after running several days, the directory size still stays the same. The
only way how I've been able to remove something has been by giving "rm
-rf" to problematic directory from parent level. Running this command
shows directory size decreasing by 10,000 files/hour, but this would still
mean close to ten months (over 250 days) to delete everything!

I also tried to use "unlink" command to directory as a root, as a user who
created the directory, by changing directory's owner to root and so forth,
but all attempts gave "Not owner" error.

Any commands like "ls -f" or "find" will run for hours (or days) without
actually listing anything from the directory, so I'm beginning to suspect
that maybe the directory's data structure is somewhat damaged. Is there
some diagnostics that I can run with e.g "zdb" to investigate and
hopefully fix for a single directory within zfs dataset?


In situations like this, ls will be exceptionally slow partially
because it will sort the output. 


that's what '-f' was supposed to avoid, I'd guess.

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster

David Magda wrote:

On Tue, January 5, 2010 10:12, casper@sun.com wrote:


How about creating a new data set, moving the directory into it, and then
destroying it?

Assuming the directory in question is /opt/MYapp/data:
 1. zfs create rpool/junk
 2. mv /opt/MYapp/data /rpool/junk/
 3. zfs destroy rpool/junk

The "move" will create and remove the files; the "remove" by mv will be as
inefficient removing them one by one.

"rm -rf" would be at least as quick.


Normally when you do a move with-in a 'regular' file system all that's
usually done is the directory pointer is shuffled around. This is not the
case with ZFS data sets, even though they're on the same pool?


no - mv doesn't know about zpools, only about posix filesystems.

--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Michael Schuster

Paul Gress wrote:

On 01/ 5/10 05:34 AM, Mikko Lammi wrote:

Hello,

As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunatelly now that
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.
  


I've been following this thread.  Would it be faster to do the reverse?  
Copy the 20% of the disk that's in use, then reformat, then move the 20% back.


I'm not sure the OS installation would survive that.

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Michael Schuster

Joerg Schilling wrote:

Julian Regel  wrote:

If you would like to have a backup that allows you to access files, you need a file-based 
backup, and I am sure that even a filesystem-level scan for recently changed 
files will not be much faster than what you may achieve with e.g. star.


Note that ufsdump directly accesses the raw disk device and thus _is_ at 
filesystem level, but still is slower than star on UFS.

While I am sure that star is technically a fine utility, the problem is that it 
is effectively an unsupported product.


From this viewpoint, you may call most of Solaris "unsupported".


what is that supposed to mean?

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /tmp on ZFS?

2007-03-22 Thread Michael Schuster

Matt B wrote:

Is this something that should work? The assumption is that there is a dedicated 
raw SWAP slice and after install /tmp (which will be on /) will be unmounted 
and mounted to zpool/tmp (just like zpool/home)

Thoughts on this?


you are aware that /tmp by default resides in memory these days? putting 
/tmp on disk can have quite severe impact on performance.
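
(the stock /etc/vfstab entry that makes /tmp memory-backed looks like this:

   swap    -    /tmp    tmpfs    -    yes    -

the mount-options field at the end can carry e.g. size= to cap it.)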


Michael
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)

2007-04-17 Thread Michael Schuster

Erblichs wrote:


Whose job is it to "clean" or declare for removal kernel
sources that "do not work"?


not the people on *this* list, IMO.

Michael
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] vxfs and zfs

2007-06-01 Thread Michael Schuster

benita ulisano wrote:

Hi,

I have been given the task to research converting our vxfs/vm file
systems and volumes to zfs. The volumes are attached to an EMC Clariion
running raid-5, and raid 1_0. I have no test machine, just a migration
machine that currently hosts other things. Is it possible to set up a zfs
file system while vxfs/vm are still running and controlling other file
systems and volumes, or is it all or nothing? I searched many blogs and
web docs and cannot find the answer to this question.



I'm not quite sure what you're asking here: do you want to set up 
zpools/zfs on the same disks as vxvm/vxfs is running on *while* 
vxvm/vxfs is still running on them? that won't work.
If you're asking "can I set up zfs on free disks while vxvm is still set 
up on others" I don't see why not. As long as there's no contention 
around actual disks, there shouldn't be an issue here.


If you expand a bit on this, I'm sure our zfs experts can give you a 
more precise answer than this :-)


HTH
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Michael Schuster

People,

indeed, even though interesting and a problem, this is OT. I suggest that 
everyone who has trouble with SDM address it to the people who actually 
work on it - especially if you're a (potential) customer.


cheers
Michael

Richard Elling wrote:

more background below...

Richard Elling wrote:

Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what 
should be a simple download routine.


Sun Download Manager leaves a great deal to be desired.

In the Online Help for Sun Download Manager there's a section on 
troubleshooting, but if it causes *anyone* this much trouble
<http://fuzzy.wordpress.com/2007/06/14/sundownloadmanagercrap/> then 
it should, surely, be fixed.


Sun Download Manager -- a FORCED step in an introduction to 
downloadable software from Sun -- should be PROBLEM FREE in all 
circumstances. It gives an extraordinarily poor first impression.


Though it is written in Java, and JavaIO has no concept of file
size limits, the downloads are limited to 2 GBytes.  This makes SDM
totally useless for me.  I filed the bug about 4 years ago, and it
is still not being fixed (yes, they know about it)  I recommend you
use something else, there are many good alternatives.


The important feature is the ability to restart a download in the middle.
Downloads cost real money, so if you have to restart from the beginning,
then it costs even more money.  Again, there are several good downloaders
out there which offer this feature.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Michael Schuster

John Martinez wrote:


On Jun 14, 2007, at 8:58 AM, Michael Schuster wrote:


People,

indeed, even though interesting and a problem, this is OT. I suggest 
that everyone who has trouble with SDM address it to the people who 
actually work on it - especially if you're a (potential) customer.


Michael, for the sake of others who aren't familiar with Sun's 
practices, how does one do that? Go to Sunsolve?


I was afraid someone would ask :-)

I'm sorry I have no idea. In general, there should be a feedback mechanism 
attached to SDM. I've never used it myself, so I just had a look at 
http://www.sun.com/download/sdm/index.xml, and would suggest you use the 
'contact' link (bottom left). I realise it's probably very un-SDM-specific ...


HTH
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Michael Schuster

replying to myself ...

Michael Schuster wrote:
look at http://www.sun.com/download/sdm/index.xml, and would suggest you 
use the 'contact' link (bottom left). I realise it's probably very 
un-SDM-specific ...


HTH


http://www.sun.com/download/sdm/sdm_help.xml has a feedback form at the 
very bottom (under "Customer Support").


HTH
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Install new Solaris - how to see old ZFS disk

2007-06-20 Thread Michael Schuster

Joubert Nel wrote:

Hi,

Stupid question I'm sure - I've just upgraded to Solaris Express Dev Edition 
(05/07) by installing over my previous Solaris 10 installation (intentionally, 
so as to get a clean setup).
The install is on Disk #1.

I also have a Disk #2, which was the sole disk in a ZFS pool under Solaris 10.
How can I now mount/incorporate/import this Disk #2 into a ZFS pool on my new 
Solaris so that I can see the data stored on that disk?


zpool import
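
i.e. roughly (the pool name is whatever you gave it under Solaris 10):

   # zpool import               # lists pools found on the attached disks
   # zpool import -f tank       # -f, since the pool was last in use by the old install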

HTH
--
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread michael schuster

Joubert Nel wrote:



What I meant is that when I do "zpool create" on a disk, the entire
contents of the disk doesn't seem to be overwritten/destroyed. I.e. I
suspect that if I didn't copy any data to this disk, a large portion of
what was on it is potentially recoverable.

If so, is there a tool that can help with such recovery?


I can't answer this in detail, but, to borrow from Tim O'Reilly, think 
of it as the text of a book where you've lost the table of contents and 
the first few chapters, and thrown all the remaining pages on the floor...



--
Michael Schuster        Sun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs receive

2007-06-23 Thread michael schuster

Russell Aspinwall wrote:
Hi, 
 
As part of a disk subsystem upgrade I am thinking of using ZFS but there are two issues at present 
 
1) The current filesystems are mounted as  /hostname/mountpoint

except for one directory where the mount point is /. Is it possible to mount a ZFS
filesystem as /hostname// so that
/hostname/ contains only directory . Storage dir is empty apart from the  directory which contains all the files?


I hope I understand you correctly - if so, I see no reason why this 
shouldn't work:


# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
bigpool           4.11G  92.4G    21K  /extra
bigpool/home       819M  92.4G   819M  /export/home
bigpool/store     1.94G  92.4G  1.94G  /extra/store
# ls -als /extra/
total 13
   3 drwxr-xr-x   4 root  sys      4 May 15 21:44 .
   4 drwxr-xr-x  55 root  root  1536 Jun 22 03:31 ..
   3 drwxr-xr-x   2 root  root     2 May 15 21:44 home
   3 drwxr-xr-x   5 root  sys      8 May 15 21:54 store
# zfs set mountpoint=/extra/some/more/dirs/store bigpool/store
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
bigpool           4.11G  92.4G    25K  /extra
bigpool/home       819M  92.4G   819M  /export/home
bigpool/store     1.94G  92.4G  1.94G  /extra/some/more/dirs/store
bigpool/zones     1.37G  92.4G    20K  /zones
bigpool/zones/lx  1.37G  92.4G  1.37G  /zones/lx
# ls -als /extra/some/
total 9
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 .
   3 drwxr-xr-x   5 root  sys      5 Jun 23 10:19 ..
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 more
# ls -als /extra/some/more/
total 9
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 .
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 ..
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 dirs
# ls -als /extra/some/more/dirs/
total 9
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 .
   3 drwxr-xr-x   3 root  root     3 Jun 23 10:19 ..
   3 drwxr-xr-x   5 root  sys      8 May 15 21:54 store
#

HTH
michael
--
Michael Schuster        Sun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] left over mount points [was: 'mv' and zfs filesystems]

2007-07-25 Thread Michael Schuster
Eric Schrock wrote:
> Yes, you can rename mountpoints, and always have been able to.  It just
> didn't happen much before the arrival of ZFS.  When you reboot the
> machine, it would have tried to mount the filesystem in the original
> location.  Under ZFS, this would have created a new mountpoint for you.

this reminded me of something I've been wanting to ask for some time ... 
sorry for highjacking a thread ;-).

in the past, I've sometimes done things like:
- have some stuff in /path/to/storage (ufs)
- decided that that stuff might just as well live on/in zfs
- "zfs creat"ed /path/to/storage.copy (with implicit creation of the 
mountpoint), copied data from storage to storage.copy
- mv /path/to/storage to /path/to/storage.old
- zfs set mountpoint=/path/to/storage 

when this whole dance is done, I'm left with an empty directory 
/path/to/storage.copy; since zfs created this directory in the first place, 
is it an unreasonable expectation that zfs remove it as well?

Michael
-- 
Michael Schuster        Sun Microsystems, Inc.
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] controller number mismatch

2007-07-31 Thread michael schuster
Hi,

I just noticed something interesting ... don't know whether it's 
relevant or not (two commands run in succession during a 'nightly' run):

$ iostat -xnz 6
[...]
 extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.3    0.0    0.8  0.0  0.0    0.2    0.2   0   0 c2t0d0
    2.2   79.8  128.7  216.1  0.0  0.1    0.1    1.4   1   6 c1t0d0
    2.0   76.8  118.1  208.6  0.0  0.1    0.1    1.2   1   4 c1t1d0
    1.7   79.0  106.7  216.1  0.0  0.1    0.1    1.1   1   5 c1t2d0
    2.2   78.2  128.7  209.9  0.0  0.1    0.1    1.1   1   4 c1t3d0
    1.8   81.2  107.4  217.4  0.0  0.1    0.1    1.1   1   5 c1t4d0
    1.8   78.2  107.4  209.2  0.0  0.1    0.1    1.1   1   5 c1t5d0
    0.5   53.7   32.0  106.7  0.0  0.1    0.1    1.6   0   3 c1t8d0
    0.5   51.8   32.0  106.7  0.0  0.1    0.1    1.7   0   3 c1t9d0
    0.2   52.5   10.7  106.9  0.0  0.1    0.1    1.3   0   3 c1t10d0
    0.3   51.3   21.3  107.5  0.0  0.1    0.1    1.5   0   3 c1t11d0
    0.3   52.3   21.3  107.6  0.0  0.1    0.1    1.7   0   3 c1t12d0

$ zpool iostat -v 6

                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
zpool         930G   566G     16    114  41.5K   915K
  raidz1      631G   185G     11     71  30.9K   573K
    c3t0d0       -      -      0     38  38.8K   120K
    c3t1d0       -      -      0     42  38.8K   115K
    c3t2d0       -      -      0     44  49.9K   120K
    c3t3d0       -      -      0     43  33.3K   115K
    c3t4d0       -      -      0     41  44.4K   120K
    c3t5d0       -      -      0     43  38.8K   114K
  raidz1      299G   381G      5     42  10.5K   341K
    c3t8d0       -      -      0     25      0  87.5K
    c3t9d0       -      -      0     24      0  87.1K
    c3t10d0      -      -      0     25  5.54K  87.0K
    c3t11d0      -      -      0     25      0  86.8K
    c3t12d0      -      -      0     25      0  87.2K
----------   -----  -----  -----  -----  -----  -----


I find it remarkable that what is c1* in iostat obviously turns into c3* 
in zpool iostat.

comments?
Michael
-- 
Michael Schuster        Sun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Michael Schuster
Roger Fujii wrote:
> I managed to create a link in a ZFS directory that I can't remove.  Session 
> as follows:
> 
> # ls
> bayes.lock.router.3981  bayes_journal   user_prefs
> # ls -li bayes.lock.router.3981
> bayes.lock.router.3981: No such file or directory
> # ls
> bayes.lock.router.3981  bayes_journal   user_prefs
> # /usr/sbin/unlink bayes.lock.router.3981
> unlink: No such file or directory
> # find . -print
> .
> ./bayes_journal
> find: stat() error ./bayes.lock.router.3981: No such file or directory
> ./user_prefs
> #

make sure you have no unprintable characters in the file name (eg. with a 
command like
    ls -las | od -c
or some such)

HTH
Michael
-- 
Michael Schuster        Sun Microsystems, Inc.
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Extremely long creat64 latencies on highly utilized zpools

2007-08-15 Thread michael schuster
Yaniv,

I'm adding dtrace-discuss to this email for reasons that will be obvious 
immediately :-) - see below

Yaniv Aknin wrote:

> When volumes approach 90% usage, and under medium/light load (zpool
> iostat reports 50mb/s and 750iops reads), some creat64 system calls take
> over 50 seconds to complete (observed with 'truss -D touch'). When doing
> manual tests, I've seen similar times on unlink() calls (truss -D rm).
> 
> I'd like to stress this happens on /some/ of the calls, maybe every
> 100th manual call (I scripted the test), which (along with normal system
> operations) would probably be every 10,000th or 100,000th call.

I'd suggest you do something like this (not tested, so syntax errors etc 
may be lurking; I'd also suggest you get the DTrace guide off of 
opensolaris.org and read the chapter about speculations):

#!/usr/sbin/dtrace -Fs

inline int limit = 1000000000;   /* one second, in nanoseconds */

syscall::creat64:entry
{
self->spec = speculation();
speculate(self->spec);
self->ts = timestamp;
self->duration = 0;
}

fbt:::entry,
fbt:::return
/self->spec/
{
speculate(self->spec);
}

syscall::creat64:return
/self->spec/
{
speculate(self->spec);
self->duration = timestamp - self->ts;
}

syscall::creat64:return
/self->duration > limit/
{
commit(self->spec);
self->spec = 0;
}

syscall::creat64:return
/self->spec/
{
discard(self->spec);
self->spec = 0;
}


you may need to use a different timestamp (walltimestamp?); and perhaps 
you'll want to somehow reduce the number of fbt probes, but that's up to 
you. I hope you can take it from here.

cheers
Michael
-- 
Michael Schuster        Sun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Extremely long creat64 latencies on highly

2007-08-21 Thread Michael Schuster
Yaniv Aknin wrote:

> It looks to me a lot like a conditional program flow (first we calculate
>  the duration, then we commit() the speculation if duration is > limit and
> discard() it otherwise) rather than discrete probes that fire
> independently. I read the manual as saying that conditional flow isn't
> possible in D, but I could have been wrong. What guarantees that the last
> free probes will fire in order, producing an IF-THEN-ELSE logic? If nothing
> guarantees that - why does the script work?

that's exactly the point. Otherwise identical probes with different 
predicates fire in the order encountered in the script.

is there a reason you took dtrace-discuss off the distribution list?

cheers
Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-26 Thread michael schuster
Rainer J.H. Brandt wrote:
> Ronald,
> 
> thanks for your comments.
> 
> I was thinking about this scenario:
> 
> Host w continuously has a UFS mounted with read/write access.
> Host w writes to the file f/ff/fff.
> Host w ceases to touch anything under f.
> Three hours later, host r mounts the file system read-only,
> reads f/ff/fff, and unmounts the file system.
> 
> My assumption was:
> 
> a1) This scenario won't hurt w,
> a2) this scenario won't damage the data on the file system,
> a3) this scenario won't hurt r, and
> a4) the read operation will succeed,
> 
> even if w continues with arbitrary I/O, except that it doesn't
> touch anything under f until after r has unmounted the file system.
> 
> Of course everything that you and Tim and Casper said is true,
> but I'm still inclined to try that scenario.

you might get lucky once (note: I said "might"), but there's no 
guarantee, and sooner or later this approach *will* cause data corruption.

wouldn't it be much simpler to use NFS & automounter for this scenario 
(I didn't follow the whole thread, so this may have been discussed 
already)?
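
(rough sketch, untested, paths made up: on w "share -F nfs -o ro /export/f",
on r either "mount -F nfs w:/export/f /mnt" or an auto_direct entry like
"/f  w:/export/f" - the NFS server then handles the cache consistency you'd
otherwise be gambling with.)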

Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-30 Thread Michael Schuster
Peter L. Thomas wrote:

> That said, is there a "HOWTO" anywhere on installing QFS on Solaris 9 
> (Sparc64) machines?  Is that even possible?

We've been selling SAMFS (which qfs is a part of) for ages, long before S10 
ever 
saw the light, so I'd be *very* surprised if it wasn't documented with the 
whole 
qfs wad you get when you acquire (read: buy) the stuff.

Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] find on ZFS much slower than on xfs

2007-09-05 Thread Michael Schuster
Joerg Moellenkamp wrote:
> Hello,
> 
> in a different benchmark run on the same system, the gfind took 15 
> minutes whereas the standard find took 18 minutes. With find and 
> noatime=off the benchmark took 14 minutes. But even this is slow 
> compared to 2-3 minutes of the xfs system.

just asking the obvious:
- is this the same HW?
- are zfs/zpool and xfs set up similarly?

Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs chattiness at boot time

2007-09-24 Thread Michael Schuster
Hi all,

I recently started seeing zfs chattiness at boot time: "reading zfs config" 
and something like "mounting zfs filesystems (n/n)".

Is this really necessary? I thought that with SMF, the days when every script 
announced its existence were gone (and a good thing, too).

Can't we print something only if it goes wrong?

Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs chattiness at boot time

2007-09-24 Thread Michael Schuster
Mark J Musante wrote:
> On Mon, 24 Sep 2007, Michael Schuster wrote:
> 
>> I recently started seeing zfs chattiness at boot time: "reading zfs config"
>> and something like "mounting zfs filesystems (n/n)".
> 
> This was added recently because ZFS can take a while to mount large
> configs.  Consoles would appear to freeze after the initial boot-up
> messages.  10k filesystems could easily take several minutes to mount.
> 
> Eric Taylor is working on parallel mounting for ZFS which will speed up
> things considerably, although his changes currently do not remove the
> messages.  Perhaps the reading/mounting messages should be only displayed
> if, say, a minute has passed and we're not done?

That was my suspicion, and I'd vote for a change along the outline you 
suggest here. A minute may be a bit much, though (YMMV ;-).
I'm also quite prepared to see a running tally(?) after an initial timeout 
(your minute) has gone by and we haven't finished ... but I guess we'd also 
have to make sure that the output generated isn't messed up by other output 
to the console that's independent of ZFS.
I'd completely do away with the first message ("reading zfs config") for 
the OK case.

> I agree things should not be needlessly chatty, but I also believe that
> processes which run a long time (especially when affecting boot time)
> should provide feedback to users to let them know the box isn't dead.

indeed - see above.

thx
Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] UC Davis Cyrus Incident September 2007

2007-10-18 Thread michael schuster
Gary Mills wrote:

> What's the command to show cross calls?

mpstat
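
(it's the 'xcal' column; something like

$ mpstat 5

and ignore the first block of output - that's the average since boot.)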

-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hierarchal zfs mounts

2007-10-22 Thread Michael Schuster
Mike DeMarco wrote:
> Looking for a way to mount a zfs filesystem ontop of another zfs
> filesystem without resorting to legacy mode.

doesn't simply 'zfs set mountpoint=...' work for you?
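
e.g. (untested, dataset names are just examples):

# zfs set mountpoint=/apps local/apps
# zfs set mountpoint=/apps/bin local/apps-bin

zfs should then mount them in the right order (parent mountpoints before their
children) without any need for legacy mounts.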

-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hierarchal zfs mounts

2007-10-22 Thread Michael Schuster
M D wrote:
> No boot problems. The zfs filesystems are in the same pool. What would
> be nice is something like
 > zfs set mountorder=1   local/apps
 > zfs set mountorder=2   local/apps-bin
> 
> or something along that line. So one zfs filesystem can be reliably
> mounted to a point inside another zfs filesystem.

I may be missing the obvious here: what are you trying to solve that 
Eric's explanation didn't cover?

Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3rd posting: ZFS question (case 65730249)

2007-11-08 Thread michael schuster
Dave Bevans wrote:
> Does anyone have any thoughts on this?
> 
> Hi,
> 
> I have a customer with the following questions...
> 
> 
> 
> *Describe the problem:*
> A ZFS Question -  I have one ZFS pool which is made from 2 storage 
> arrays (vdevs).  I have to delete  the zfs filesystems with the names of 
> /orbits/araid/* and remove one of the arrays from the system.  After I 
> delete this data the remaining data easily fits on one array.  The 
> question's are:
> 
> Can I remove one of the vdev's  from the orbits pool without having to 
> unload/rebuild the remaining data in the orbits/myear filesystem? 

as far as I can tell, you're asking about device-remove functionality in zfs 
(please correct me if I'm wrong). The last I heard, this was in the works. Search the 
archives on opensolaris.org etc. to find references to the bug ID and 
several discussions about this.

HTH
Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pls discontinue troll bait was: Yager on ZFS and

2007-11-18 Thread michael schuster
Anton B. Rang wrote:
> Hint: Bill was already writing file system code when I was in elementary
> school.  ;-)
> 
> Seriously...it's rather sad to see serious and useful discussions
> derailed by thin skins and zealotry. Bill's kind enough to share some of
> his real-world experience and observations in the old tradition of
> senior engineers helping out those who are entering the field. I for one
> find his contributions (here and on USENET) very useful and always
> thought-provoking, whether I agree or disagree with a particular point.
> Thought-provoking often means controversial, but that's not a bad thing
> -- if you've been involved in the software engineering process, you'll
> know that most engineers have spent many afternoons yelling at each
> other over opposing points of view before going out for a drink
> together, and usually the end product is better for it.  ;-)

may be.

OTOH, when someone whom I don't know comes across as a pushover, he loses 
credibility.
I'd expect a senior engineer to show not only technical expertise but also 
the ability to handle difficult situations, *not* adding to the 
difficulties by his comments.

(and remember: email is not the same as "yelling at one another" (in the 
hallway/at the conference table))

Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv-76 panics on installation

2007-11-20 Thread Michael Schuster
Bill Moloney wrote:
> I have an Intel based server running dual P3 Xeons (Intel A46044-609,
> 1.26GHz) with a BIOS from American Megatrends Inc (AMIBIOS, SCB2
> production BIOS rev 2.0, BIOS build 0039) with 2GB of RAM
> 
> when I attempt to install snv-76 the system panics during the initial
> boot from CD

please post the panic stack (to the list, not to me alone), if possible, 
and as much other information as you have (ie. what step does the panic 
happen at, etc.)

where did you get the media from (is it really a CD, or a DVD?)?
Can you read/mount the CD when running an older build? if no, are there 
errors in the messages file? ...

HTH
Michael
-- 
Michael Schuster
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Michael Schuster
all,

I've gotten myself into a fix I don't know how to resolve (and I can't 
reboot the machine, it's a build server we share):

$ zfs list -r tank/schuster
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank/schuster17.6G   655G  5.83G  /exportz/schuster
tank/schuster/ilb5.72G   655G  5.72G  /exportz/schuster/ilb
tank/schuster/ip_wput_local  6.06G   655G  6.06G  /exportz/schuster/ip_wput_local

note the 2nd one.

$ ls -las /exportz/schuster/
total 20
4 drwxr-xr-x   5 schuster staff  5 Nov  6 09:38 .
4 drwxrwxrwt  15 ml37995  staff 15 Sep 28 05:21 ..
4 drwxr-xr-x   9 schuster staff 12 Nov  6 08:46 ilb_hg
4 drwxr-xr-x   8 schuster staff 12 Oct 31 10:07 ip_wput_local
4 drwxr-xr-x   9 schuster staff 11 Sep 11 13:06 old_ilb
$

oops, no "ilb/" subdirectory.

$ zfs mount | grep schuster
tank/schuster   /exportz/schuster
tank/schuster/ilb   /exportz/schuster/ilb
tank/schuster/ip_wput_local /exportz/schuster/ip_wput_local
$  mount | grep schuster
/exportz/schuster on tank/schuster ...
/exportz/schuster/ilb on tank/schuster/ilb ...
/exportz/schuster/ip_wput_local on tank/schuster/ip_wput_local ...
$

I've tried creating an ilb subdir, as well as "set mountpoint"ing, all to 
no avail, so far; "zfs unmount" also fails, even with -f. I've unshared the 
FS, still no luck, as with "zfs rename".

I don't want to "zfs destroy" tank/schuster/ilb before I've had a chance to 
check what's inside ...

this is snv_89, btw. zfs and zpool are at current revisions (3 and 10, resp.).

does anyone have any hints what I could do to solve this?

TIA
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Michael Schuster
Mark J Musante wrote:
> 
> Hi Michael,
> 
> Did you try doing an export/import of tank?

no - that would make it unavailable for use right? I don't think I can 
(easily) do that during production hours.

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs (u)mount conundrum with non-existent mountpoint

2008-11-06 Thread Michael Schuster
Johan Hartzenberg wrote:
> 
> 
> On Thu, Nov 6, 2008 at 8:22 PM, Michael Schuster 
> <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
> 
> Mark J Musante wrote:
>  >
>  > Hi Michael,
>  >
>  > Did you try doing an export/import of tank?
> 
> no - that would make it unavailable for use right? I don't think I can
> (easily) do that during production hours.
> 
> 
> Can you please post the output from:
> zfs get all tank/schuster/ilb

NAME   PROPERTY VALUE  SOURCE
tank/schuster/ilb  type filesystem -
tank/schuster/ilb  creation Tue Sep  2 15:57 2008  -
tank/schuster/ilb  used 5.72G  -
tank/schuster/ilb  available662G   -
tank/schuster/ilb  referenced   5.72G  -
tank/schuster/ilb  compressratio2.10x  -
tank/schuster/ilb  mounted  yes-
tank/schuster/ilb  quotanone   default
tank/schuster/ilb  reservation  none   default
tank/schuster/ilb  recordsize   128K   default
tank/schuster/ilb  mountpoint   /exportz/schuster/ilb  inherited from tank
tank/schuster/ilb  sharenfs offlocal
tank/schuster/ilb  checksum on default
tank/schuster/ilb  compression  on inherited from tank
tank/schuster/ilb  atimeon default
tank/schuster/ilb  devices  on default
tank/schuster/ilb  exec on default
tank/schuster/ilb  setuid   on default
tank/schuster/ilb  readonly offdefault
tank/schuster/ilb  zonedoffdefault
tank/schuster/ilb  snapdir  hidden default
tank/schuster/ilb  aclmode  groupmask  default
tank/schuster/ilb  aclinherit   restricted default
tank/schuster/ilb  canmount on default
tank/schuster/ilb  shareiscsi   offdefault
tank/schuster/ilb  xattron default
tank/schuster/ilb  copies   1  default
tank/schuster/ilb  version  3  -
tank/schuster/ilb  utf8only off-
tank/schuster/ilb  normalizationnone   -
tank/schuster/ilb  casesensitivity  sensitive  -
tank/schuster/ilb  vscanoffdefault
tank/schuster/ilb  nbmand   offdefault
tank/schuster/ilb  sharesmb offdefault
tank/schuster/ilb  refquota none   default
tank/schuster/ilb  refreservation   none   default
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to "mirror" an entire zfs pool to another pool

2009-07-28 Thread michael schuster

Thomas Walker wrote:

We are upgrading to new storage hardware.  We currently have a zfs pool
with the old storage volumes.  I would like to create a new zfs pool,
completely separate, with the new storage volumes.  I do not want to
just replace the old volumes with new volumes in the pool we are
currently using.  I don't see a way to create a mirror of a pool.  Note,
I'm not talking about a mirrored-pool, meaning mirrored drives inside
the pool.  I want to mirror pool1 to pool2.  Snapshots and clones do not
seem to be what I want as they only work inside a given pool.  I have
looked at Sun Network Data Replicator (SNDR) but that doesn't seem to be
what I want either as the physical volumes in the new pool may be a
different size than in the old pool.

Does anyone know how to do this?  My only idea at the moment is to
create the new pool, create new filesystems and then use rsync from the
old filesystems to the new filesystems, but it seems like there should
be a way to mirror or replicate the pool itself rather than doing it at
the filesystem level.


have you looked at what 'zfs send' can do?
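
something along these lines, untested, pool/snapshot names made up:

# zfs snapshot -r pool1@migrate
# zfs send -R pool1@migrate | zfs receive -F -d pool2

'send -R' replicates the whole hierarchy incl. properties and snapshots;
depending on your build and layout you may have to adjust the receive flags or
script it per filesystem, but that's the general idea.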

Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-16 Thread Michael Schuster

On 17.08.10 04:17, Will Murnane wrote:

On Mon, Aug 16, 2010 at 21:58, Kishore Kumar Pusukuri
  wrote:

Hi,
I am surprised with the performances of some 64-bit multi-threaded
applications on my AMD Opteron machine. For most of the applications, the
performance of 32-bit version is almost same as the performance of 64-bit
version. However, for a couple of applications, 32-bit versions provide
better performance (running-time is around 76 secs) than 64-bit (running
time is around 96 secs). Could anyone help me to find the reason behind
this, please?

[...]

This list discusses the ZFS filesystem.  Perhaps you'd be better off
posting to perf-discuss or tools-gcc?

That said, you need to provide more information.  What compiler and
flags did you use?  What does your program (broadly speaking) do?
What did you measure to conclude that it's slower in 64-bit mode?


add to that: what OS are you using?

Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] "zfs unmount" versus "umount"?

2010-09-30 Thread Michael Schuster

On 30.09.10 15:42, Mark J Musante wrote:

On Thu, 30 Sep 2010, Linder, Doug wrote:


Is there any technical difference between using "zfs unmount" to unmount
a ZFS filesystem versus the standard unix "umount" command? I always use
"zfs unmount" but some of my colleagues still just use umount. Is there
any reason to use one over the other?


No, they're identical. If you use 'zfs umount' the code automatically maps
it to 'unmount'. It also maps 'recv' to 'receive' and '-?' to call into the
usage function. Here's the relevant code from main():


Mark, I think that wasn't the question, rather, "what's the difference 
between 'zfs u[n]mount' and '/usr/bin/umount'?"


HTH
Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] A couple of quick questions

2010-12-22 Thread Michael Schuster
I can't answer any of these authoritatively(?), but have a comment:

On Wed, Dec 22, 2010 at 10:55, Per Hojmark  wrote:
> 1) What's the maximum number of disk devices that can be used to construct 
> filesystems?

lots.

> 2) Is there a practical limit on #1? I've seen messages where folks suggested 
> 40 physical devices is the practical maximum. That would seem to imply a 
> maximum single volume size of 80TB...

how does that follow? In other words, why do you believe zfs can
only handle 2 TB per physical disk? (hint: look up GPT/EFI labels
;-)

HTH
-- 
regards/mit freundlichen Grüssen
Michael Schuster
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] A few questions

2011-01-05 Thread Michael Schuster
On Wed, Jan 5, 2011 at 15:34, Edward Ned Harvey
 wrote:
>> From: Deano [mailto:de...@rattie.demon.co.uk]
>> Sent: Wednesday, January 05, 2011 9:16 AM
>>
>> So honestly do we want to innovate ZFS (I do) or do we just want to follow
>> Oracle?
>
> Well, you can't follow Oracle.  Unless you wait till they release something,
> reverse engineer it, and attempt to reimplement it.

that's not my understanding - while we will have to wait, oracle is
supposed to release *some* source code afterwards to satisfy some
claim or other. I agree, some would argue that that should have
already happened with S11 express... I don't know it has, but that's
not *the* release of S11, is it? And once the code is released, even
if after the fact, it's not reverse-engineering anymore, is it?

Michael
PS: just in case: even while at Oracle, I had no insight into any of
these plans, much less do I have now.
-- 
regards/mit freundlichen Grüssen
Michael Schuster
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Troubleshooting help on ZFS

2011-01-20 Thread Michael Schuster
On Thu, Jan 20, 2011 at 01:47, Steve Kellam
 wrote:
> I have a home media server set up using OpenSolaris.   All my experience with 
> OpenSolaris has been through setting up and maintaining this server so it is 
> rather limited.   I have run in to some problems recently and I am not sure 
> how the best way to troubleshoot this.  I was hoping to get some feedback on 
> possible fixes for this.
>
> I am running SunOS 5.11 snv_134.  It is running on a tower with 6 HDD 
> configured in as raidz2 array.  Motherboard: ECS 945GCD-M(1.0) Intel Atom 330 
> Intel 945GC Micro ATX Motherboard/CPU Combo.  Memory: 4GB.
>
> I set this up about a year ago and have had very few problems.  I was 
> streaming a movie off the server a few days ago and it all of a sudden lost 
> connectivity with the server.  When I checked the server, there was no output 
> on the display from the server but the power supply seemed to be running and 
> the fans were going.
> The next day it started working again and I was able to log in.  The SMB and 
> NFS file server was connecting without problems.
>
> Now I am able to connect remotely via SSH.  I am able to bring up a zpool 
> status screen that shows no problems.  It reports no known data errors.  I am 
> able to go to the top level data directories but when I cd into the 
> sub-directories the SSH connection freezes.
>
> I have tried to do a ZFS scrub on the pool and it only gets to 0.02% and 
> never gets beyond that but does not report any errors.  Now, also, I am 
> unable to stop the scrub.  I use the zpool scrub -s command but this freezes 
> the SSH connection.
> When I reboot, it is still trying to scrub but not making progress.
>
> I have the system set up to a battery back up with surge protection and I'm 
> not aware of any spikes in electricity recently.  I have not made any 
> modifications to the system.  All the drives have been run through SpinRite 
> less than a couple months ago without any data errors.
>
> I can't figure out how this happened all of the sudden and how best to 
> troubleshoot it.
>
> If you have any help or technical wisdom to offer, I'd appreciate it as this 
> has been frustrating.

look in /var/adm/messages (.*) to see whether there's anything
interesting around the time you saw the loss of connectivity, and also
since, then take it from there.

HTH
Michael
-- 
regards/mit freundlichen Grüssen
Michael Schuster
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about COW and snapshots

2011-06-15 Thread Michael Schuster

On 15.06.2011 14:30, Simon Walter wrote:


Another one is that snapshots are per-filesystem, while the intention
here is to capture a document in one user session. Taking a snapshot
will of course say nothing about the state of other user sessions. Any
document in the process of being saved by another user, for example,
will be corrupt.


Would it be? I think that's pretty lame for ZFS to corrupt data.


I think "corrupt" is not the right word to use here - "inconsistent" is 
probably better. ZFS has no idea when a document is "OK", so if your 
snapshot happens between two writes (even from a single user), it will 
be consistent from the POV of the FS, but may not be from the POV of the 
application.


HTH
Michael
--
Michael Schuster
http://recursiveramblings.wordpress.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resizing ZFS partition, shrinking NTFS?

2011-06-16 Thread Michael Schuster

On 17.06.2011 01:44, John D Groenveld wrote:

In message<444915109.61308252125289.JavaMail.Twebapp@sf-app1>, Clive Meredith
writes:

I currently run a dual-boot machine with a 45Gb partition for Win7 Ultimate and
a 25Gb partition for OpenSolaris 10 (134).  I need to shrink NTFS to 20Gb and
increase the ZFS partition to 45Gb.  Is this possible please?  I have looked at
using the partition tool in OpenSolaris but both partitions are locked, even
under admin.  Win7 won't allow me to shrink the dynamic volume, as the Finish
button is always greyed out, so no luck in that direction.


Shrink the NTFS filesystem first.
I've used the Knoppix LiveCD against a defragmented NTFS.

Then use beadm(1M) to duplicate your OpenSolaris BE to
a USB drive and also send snapshots of any other rpool ZFS
there.


I'd suggest a somewhat different approach:
1) boot a live cd and use something like parted to shrink the NTFS partition
2) create a new partition without FS in the space now freed from NTFS
3) boot OpenSolaris, add the partition from 2) as vdev to your zpool.

HTH
Michael
--
Michael Schuster
http://recursiveramblings.wordpress.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove corrupt files from snapshot

2011-11-03 Thread Michael Schuster
Hi,

snapshots are read-only by design; you can clone them and manipulate
the clone, but the snapshot itself remains r/o.
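
e.g. (untested; the clone name is made up, the snapshot name is from your
zpool status output):

# zfs clone backups/memory_card@20110218230726 backups/memory_card_fix
# rm /backups/memory_card_fix/Backup/Backup.arc

note that this doesn't make the corrupt blocks go away - for that you'd have
to destroy the snapshot itself (zfs destroy backups/memory_card@20110218230726).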

HTH
Michael

On Thu, Nov 3, 2011 at 13:35,   wrote:
>
> Hello,
>
> I have got a bunch of corrupted files in various snapshots on my ZFS file 
> backing store. I was not able to recover them so decided to remove all, 
> otherwise the continuously make trouble for my incremental backup (rsync, 
> diff etc. fails).
>
> However, snapshots seem to be read-only:
>
> # zpool status -v
>  pool: backups
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>        corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>        entire pool from backup.
>   see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: none requested
> config:
>        NAME        STATE     READ WRITE CKSUM
>        backups     ONLINE       0     0    13
>          md0       ONLINE       0     0    13
> errors: Permanent errors have been detected in the following files:
>        /backups/memory_card/.zfs/snapshot/20110218230726/Backup/Backup.arc
> ...
>
> # rm /backups/memory_card/.zfs/snapshot/20110218230726/Backup/Backup.arc
> rm: /backups/memory_card/.zfs/snapshot/20110218230726/Backup/Backup.arc: 
> Read-only file system
>
>
> Is there any way to force the file removal?
>
>
> Cheers,
> B.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Michael Schuster
http://recursiveramblings.wordpress.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] about btrfs and zfs

2011-11-14 Thread Michael Schuster
On Mon, Nov 14, 2011 at 14:40, Paul Kraus  wrote:
> On Fri, Nov 11, 2011 at 9:25 PM, Edward Ned Harvey
>  wrote:
>
>> LOL.  Well, for what it's worth, there are three common pronunciations for
>> btrfs.  Butterfs, Betterfs, and B-Tree FS (because it's based on b-trees.)
>> Check wikipedia.  (This isn't really true, but I like to joke, after saying
>> something like that, I wrote the wikipedia page just now.)   ;-)
>
> Is it really B-Tree based? Apple's HFS+ is B-Tree based and falls
> apart (in terms of performance) when you get too many objects in one
> FS, which is specifically what drove us to ZFS. We had 4.5 TB of data
> in about 60 million files/directories on an Apple X-Serve and X-RAID
> and the overall response was terrible. We moved the data to ZFS and
> the performance was limited by the Windows client at that point.
>
>> Speaking of which. zettabyte filesystem.   ;-)  Is it just a dumb filesystem
>> with a lot of address bits?  Or is it something that offers functionality
>> that other filesystems don't have?      ;-)
>
> The stories I have heard indicate that the name came after the TLA.
> "zfs" came first and "zettabyte" later.

as Jeff told it (IIRC), the expansion of the "zfs" acronym underwent
several changes during development, until it was decided one
day to attach none of them to "zfs" and just have it be "the last word
in filesystems". (perhaps he even replied to a similar message on this
list ... check the archives :-)

regards
-- 
Michael Schuster
http://recursiveramblings.wordpress.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] missing files on copy

2008-01-24 Thread michael schuster
Christopher Gorski wrote:
> Hi, I'm running snv_78 on a dual-core 64-bit x86 system with 2 500GB usb
> drives mirrored into one pool.
> 
> I did this (intending to set the rdonly flag after I copy my data):
> 
> zfs create pond/read-only
> mkdir /pond/read-only/copytest
> cp -rp /pond/photos/* /pond/read-only/copytest/
> 
> After the copy is complete, a comparison of the original and copied
> trees revealed that /pond/read-only/copytest/photos has missing files.
> I tried this twice, and the missing files are different every time.  I'm
> copying 35GB, and about 1GB is missing.
> 
> cp gives me no errors, and zpool status says everything is fine.
> 
> A du -k of both trees shows the discrepancy.

are you missing disk space, or actual files?

Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] missing files on copy

2008-01-24 Thread michael schuster
Christopher Gorski wrote:
> FWIW, I just finished performing a copy again, to the same filesystem:
> 
> mkdir /pond/copytestsame
> cd /pond/photos
> cp -rp * /pond/copytestsame
> 
> Same files are missing throughout the new tree...on the order of a
> thousand files.  There are about 27k files in /pond/photos and 25k files
> in /pond/copytestsame
> 
> The original samba copy from another PC to /pond/photos copied
> everything correctly.

I assume you've made sure that there's enough space in /pond ...

can you try

(cd /pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)



Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] missing files on copy

2008-01-24 Thread michael schuster
Nicolas Williams wrote:
> On Thu, Jan 24, 2008 at 11:06:13PM -0500, Christopher Gorski wrote:
>> I'm missing actual files.
>>
>>> Christopher Gorski wrote:
>>>> zfs create pond/read-only
>>>> mkdir /pond/read-only/copytest
>>>> cp -rp /pond/photos/* /pond/read-only/copytest/
> 
> Might the missing files' names start with '.' by any chance?
> 
> If so, know that the glob pattern "*" does not match names that start
> with '.'.

Valid point, but I think more precisely you need to ask whether any 
files/directories in /pond/photos/ start with a "."; beneath there, that 
should be irrelevant.

Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is the ZFS GUI in open Solaris ?

2008-02-14 Thread Michael Schuster
Tim Thomas wrote:
> Thanks Chris
> 
> someone else has suggested that to me but it still does not work.
> 
> I also tried...
> 
> # svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
> # svcadm refresh svc:/system/webconsole
> 
> Still no luck..then I tried..
> 
> #/usr/sbin/netservices open
> 
> Still not working..
> 
> I am running snv_82, a fresh install: is there anything else that I 
> should enable/disable ?

are you sure the service is actually running? does "svcs -a | grep 
webconsole" say "online"?

HTH
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Preferred backup s/w

2008-02-24 Thread michael schuster
Rich Teer wrote:
> On Sat, 23 Feb 2008, Joerg Schilling wrote:
> 
>> Star is the only portable and non fs-dependent archiver that supports 
>> incremental dumps, so I see no cometition
> 
> Incremental backups aren't what I'm talking about.  I'm talking about
> the ability to retrieve one or more distinct files from an archive,
> without having to restore the whole archive, like one can do with
> ufsrestore.

that's been in tar since I can remember; from the man-page of tar(1):

 x

  Extract or restore. The named files are  extracted  from
  the  tarfile  and  written to the directory specified in
  the tarfile, relative to the current directory.
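
e.g. (file names made up):

$ tar xf /backup/home-incr.tar home/rich/file1 home/rich/file2

extracts just those two files from the archive, relative to the current
directory, without touching the rest.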

HTH
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Preferred backup s/w

2008-02-24 Thread michael schuster
Joerg Schilling wrote:
> Rich Teer <[EMAIL PROTECTED]> wrote:
> 
>>> People who like to backup usually also like to do incremental backups.
>>> Why don't you?
>> I do like incremental backups.  But the ability to do incremental backups
>> and restore arbitrary files from an archive are two different things.  An
>> incremental backup backs up files that have changed since the most recent
>> backup, so suppose my home directory contains 1000 files, 100 of which have
>> changed since my last backup.  I perform an incremental backup of my home
>> directory, and the resulting archive contains those 100 files.
>>
>> Now suppose that I accidentally delete a couple of those files; it is very
>> desirable to be able to restore just a certain named subset of the files
>> in an archive rather than having to restore the whole archive.  I'm looking
>> for a tool that can do that.
> 
> Why do you believe that an incremental backup disallows to extract single 
> files

Rich never said so. He said "the ability to do incremental backups and 
restore arbitrary files from an archive are two different things." You were 
addressing an issue he never brought up.

Michael
-- 
Michael SchusterSun Microsystems, Inc.
recursion, n: see 'recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Michael Schuster
[EMAIL PROTECTED] wrote:
> 
> (Again, I disliked the "file;X"
> notation and the fact that a manual purge was required).

You could set the number of revisions to keep; VMS would delete older ones.

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10 x86 + ZFS / NFS server "cp" problem with AIX

2008-03-18 Thread Michael Schuster
Sachin Palav wrote:
> Friends,
> I have recently built a file server on x2200 with solaris x86 having zfs 
> (version4) and running NFS version2 & samba.
> 
> the AIX 5.2 & AIX 5.2 client give error while running command "cp -R 
>   as below:
> cp: 0653-440 directory/1: name too long.
> cp: 0653-438 cannot read directory directory/1.
>  and the cp core dumps in AIX.

I think someone from the AIX camp is probably better suited to answering 
this, as they hopefully understand under which circumstances AIX's cp 
would spit out this kind of error message.

HTH
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] not able to create file greate than 2gb on zfs file system over NFS

2008-03-25 Thread Michael Schuster
Tim wrote:
> What are you using to create the files?  Is this x86/32bit solaris 9, or 
> 64bit sparc?

to add to that: how *precisely* are you creating the files, and what is 
the error?

Michael
> 
> 
> 
> On Tue, Mar 25, 2008 at 11:13 AM, Sachin Palav 
> <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
> 
> Hello Team,
> I have a file server running solaris 10 (X86), I have ZFS on the
> file server and the file systems are exported using NFS.
> 
> But my solaris 9 client (using automounter) is not able to create a
> file of size more than 2gb.
> 
> please help urgently
> 
> thanks
> Sachin Palav
> 
> 
> This message posted from opensolaris.org <http://opensolaris.org>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org <mailto:zfs-discuss@opensolaris.org>
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raidz write performance:what to expect from SATA drives on ICH9R

2008-04-19 Thread michael schuster
Bob Friesenhahn wrote:
> On Sun, 20 Apr 2008, A Darren Dunham wrote:
>> I think these paragraphs are referring to two different concepts with
>> "swap".  Swapfiles or backing store in the first, and virtual memory
>> space in the second.
> 
> The "swap" area is mis-named since Solaris never "swaps".  Some older 
> operating systems would put an entire program in the swap area when 
> the system ran short on memory and would have to "swap" between 
> programs.  Solaris just "pages" (a virtual memory function) and it is 
> very smart about how and when it does it.  Only dirty pages which are 
> not write-mapped to a file in the filesystem need to go in the swap 
> area, and only when the system runs short on RAM.

that's true most of the time ... unless free memory gets *really* low, then 
Solaris *does* start to swap (ie page out pages by process). IIRC, the 
threshold for swapping is minfree (measured in pages), and the value that 
needs to fall below this threshold is freemem.

HTH
Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-30 Thread michael schuster
dh wrote:
> Hello eschrock,
> 
> I'm a newbe on solaris, would you tell me how I can get/install build 89 of 
> nevada?
> 
> Fabrice.

Hi Fabrice,

I think a good place to start is http://www.opensolaris.org/os/newbies/ - I 
don't know whether they give you access to build 89 yet, but you can 
certainly get some practice so that when you get b89, you know what to do.

(and no, eschrock is not a pseudonym of mine ;-)

HTH
Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-07 Thread michael schuster
Hans wrote:
> hello,
> can i create a image from ZFS with the DD command?

You're probably looking for "zfs send" - have a go at the man-page and see 
whether that serves the purpose.
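
e.g. (untested, names made up):

# zfs snapshot tank/fs@image
# zfs send tank/fs@image > /backup/tank_fs.zfs

which you can later 'zfs receive' into any pool. dd on the underlying device
would copy the raw slice, free space and all, and is only safe while the pool
is exported.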

HTH
Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] vanished ZFS pool

2008-05-20 Thread michael schuster
Brian Nelson wrote:
> Although not OpenSolaris, I had a raidz pool on a SCSI A1000 using Solaris 10 
> just disappear. zpool 
> import says no pool exists.

have you checked the state / health of the A1000?

Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to identify zpool version

2008-06-13 Thread Michael Schuster
Brian H. Nelson wrote:
> S10 U4 and U5 both use ZFS v4 (you specified your U4 machine as using v3).
> 
> If you have access to both machines, you can do 'zpool upgrade -v' to 
> confirm which versions are being used.

careful - there's zpool version and zfs version, and they're not the same:

$ uname -a
SunOS erdinger 5.11 snv_89 sun4u sparc SUNW,A70
$ zpool upgrade -v
This system is currently running ZFS pool version 10.

The following versions are supported:

VER  DESCRIPTION
---  
  1   Initial ZFS version
  2   Ditto blocks (replicated metadata)
  3   Hot spares and double parity RAID-Z
  4   zpool history
  5   Compression using the gzip algorithm
  6   bootfs pool property
  7   Separate intent log devices
  8   Delegated administration
  9   refquota and refreservation properties
  10  Cache devices
[...]
$ zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  
  1   Initial ZFS filesystem version
  2   Enhanced directory entries
  3   Case insensitive and File system unique identifer (FUID)

[...]
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] uncorrectable I/O error ... how to address?

2008-06-15 Thread Michael Schuster
Hi all,

I have a situation I don't know how to get out of:

I'm trying to 'zfs send' an FS off of my laptop, but in the middle of the 
send process, it hangs, and I see an message:

"WARNING: Pool 'p' has encountered an uncorrectable I/O error. Manual 
intervention is required."

that's all very nice, apart from the fact that I don't see any indication 
what the manual intervention is supposed to be .. and worse, when I try to 
find out more using "zpool status -v", it hangs (or appears to) after:

# zpool status -v
   pool: p
  state: ONLINE
status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
 entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
  scrub: none requested
config:

 NAMESTATE READ WRITE CKSUM
 p   ONLINE   0 0 2
   c1t0d0s7  ONLINE   0 0 2

errors: Permanent errors have been detected in the following files:

[ hangs ]

both the "zfs send" and "zpool status" seem uninterruptible.

I saw this once before and rebooted, thereafter "zpool status" showed nothing.

so: how do I find out more about what's going on and what's broken, and how 
do I fix it without just deleting the FS?

thx
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] uncorrectable I/O error ... how to address?

2008-06-15 Thread Michael Schuster
Michael Schuster wrote:
> Hi all,
> 
> I have a situation I don't know how to get out of:

I forgot the technical data:

$ uname -a
SunOS paddy 5.11 snv_86 i86pc i386 i86pc

$ zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  
  1   Initial ZFS filesystem version
  2   Enhanced directory entries
  3   Case insensitive and File system unique identifer (FUID)
[..]

(btw: is the current version not printed on purpose, or is it understood 
that zfs is always at the latest possible version?)

~$ zpool upgrade -v
This system is currently running ZFS pool version 10.

The following versions are supported:

VER  DESCRIPTION
---  
  1   Initial ZFS version
  2   Ditto blocks (replicated metadata)
  3   Hot spares and double parity RAID-Z
  4   zpool history
  5   Compression using the gzip algorithm
  6   bootfs pool property
  7   Separate intent log devices
  8   Delegated administration
  9   refquota and refreservation properties
  10  Cache devices

> I'm trying to 'zfs send' an FS off of my laptop, but in the middle of the 
> send process, it hangs, and I see an message:
> 
> "WARNING: Pool 'p' has encountered an uncorrectable I/O error. Manual 
> intervention is required."
> 
> that's all very nice, apart from the fact that I don't see any indication 
> what the manual intervention is supposed to be .. and worse, when I try to 
> find out more using "zpool status -v", it hangs (or appears to) after:
> 
> # zpool status -v
>pool: p
>   state: ONLINE
> status: One or more devices has experienced an error resulting in data
>  corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>  entire pool from backup.
> see: http://www.sun.com/msg/ZFS-8000-8A
>   scrub: none requested
> config:
> 
>  NAMESTATE READ WRITE CKSUM
>  p   ONLINE   0 0 2
>c1t0d0s7  ONLINE   0 0 2
> 
> errors: Permanent errors have been detected in the following files:
> 
> [ hangs ]
> 
> both the "zfs send" and "zpool status" seem uninterruptible.
> 
> I saw this once before and rebooted, thereafter "zpool status" showed nothing.
> 
> so: how do I find out more about what's going on and what's broken, and how 
> do I fix it without just deleting the FS?
> 
> thx
> Michael


-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] uncorrectable I/O error ... how to address?

2008-06-15 Thread Michael Schuster
Michael Schuster wrote:

> (btw: is the current version not printed on purpose, or is it understood 
> that zfs is always at the latest possible version?)

ah ... I just found the answer to that myself:

# zpool upgrade
This system is currently running ZFS pool version 10.

The following pools are out of date, and can be upgraded.  After being
upgraded, these pools will no longer be accessible by older software versions.

VER  POOL
---  
  8   p

# zfs upgrade
This system is currently running ZFS filesystem version 3.

internal error: unable to get version property
The following filesystems are out of date, and can be upgraded.  After being
upgraded, these filesystems (and any 'zfs send' streams generated from
subsequent snapshots) will no longer be accessible by older software versions.


VER  FILESYSTEM
---  
  2   p/csw
  2   p/export
  2   p/home
  2   p/store

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-28 Thread michael schuster
Charles Soto wrote:
> On 6/27/08 8:55 AM, "Mark J Musante" <[EMAIL PROTECTED]> wrote:
> 
>> On Fri, 27 Jun 2008, wan_jm wrote:
>>
>>> the procedure is follows:
>>> 1. mkdir /tank
>>> 2. touch /tank/a
>>> 3. zpool create tank c0d0p3
>>> this command give the following error message:
>>> cannot mount '/tank': directory is not empty;
>>> 4. reboot.
>>> then the os can only be login in from console. does it a bug?
>> No, I would not consider that a bug.
> 
> Why?

well ... why would it be a bug?

zfs is just making sure that it's not accidentally "hiding" anything by 
mounting something on a non-empty mountpoint; as you probably know, 
anything that is in a directory is invisible if that directory is used as a 
mountpoint for another filesystem.

zfs cannot know whether the mountpoint contains rubbish or whether the 
mountpoint property is incorrect, therefore the only sensible thing to do 
is to not mount an FS if the mountpoint is non-empty.
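
the usual way out is something like (untested, names from the example above):

# mv /tank/a /var/tmp/		# get the old content out of the way
# zfs mount -a			# or 'svcadm clear filesystem/local' if the
				# service went into maintenance at boot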

to quote Renaud:

> This is an expected behavior. filesystem/local is supposed to mount all
> ZFS filesystems. If it fails then filesystem/local goes into maintenance
> and network/inetd cannot start.

HTH
Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-29 Thread michael schuster
Richard Elling wrote:

>> I consider it a bug if my machine doesn't boot up because one single, 
>> non-system and non-mandatory, FS has an issue and doesn't mount. The 
>> rest of the machine should still boot and function fine.
>>   
> 
> I think Kyle might be onto something here. 

I tend to agree.

would it be possible to create a zfs property, eg. "mandatory", that, when 
true, causes the behaviour we're discussing, and when false, doesn't stop 
the rest of the boot process?

Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] send/receive

2008-07-25 Thread Michael Schuster
Justin Vassallo wrote:
> I created snapshot for my whole zpool (zfs version 3):
> 
>  
> 
> zfs snapshot -r [EMAIL PROTECTED] +%F_%T`
> 
>  
> 
> then trid to send it to the remote host:
> 
> zfs send [EMAIL PROTECTED]:31:03 | ssh [EMAIL PROTECTED] 
> <mailto:[EMAIL PROTECTED]> -i identitykey ‘zfs receive tank/tankbackup’
> 
>  
> 
> but got the error “zfs: command not found” since /user/ is not 
> superuser, even though it is in the root group.

on my machine (sparc, nv. b92):

$ which zfs
/sbin/zfs

so ... you need to (at least) change your "receive" command to

'/sbin/zfs receive ...'

HTH
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] send/receive

2008-07-25 Thread Michael Schuster
Justin Vassallo wrote:
> Thanks Michale,
> 
> that got me through to second round :) I eventually added /sbin to my
> /etc/profile to avoid the mistake in future.
> 
> So the issue is now with the USER rights on the zfs. How can I grant USER
> rights on this zfs? Is upgrading to a zfs which supports 'zfs allow' my only
> option?

I would suspect as much, though I'll defer to the ZFS experts to give you a 
definite answer.

Is 'zpool upgrade' / 'zfs upgrade' difficult for you?
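
if not, something like this should get you there (untested; pool, fs and user
names made up):

# zpool upgrade tank		# delegated administration needs pool version 8
# zfs upgrade -r tank
# zfs allow someuser send,snapshot,mount tank/somefs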

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ButterFS

2008-08-01 Thread Michael Schuster
dick hoogendijk wrote:
> I read this just now in the Unix Guardian:
> 
> 
> BTRFS, pronounced ButterFS:
> BTRFS was launched in June 2007, and is a POSIX-compliant file system
> that will support very large files and volumes (16 exabytes) and a
> ridiculous number of files (two to the power of 64 files, to be
> precise). The file system has object-level mirroring and striping,
> checksums on data and metadata, online file system check, incremental
> backup and file system mirroring, subvolumes with their own file system
> roots, writable snapshots, and index and file packing to conserve
> space, among many other features. BTRFS is not anywhere near primetime,
> and Garbee figures it will take at least three years to get it out the
> door.
> 
> 
> I thought that ZFS was/is the way to the future, but reading this it
> seems there are compatitors out there ;-)

I don't see any contradiction here - even if ZFS is the way to go, there's 
no objecting to other people trying their own path, right? ;-)

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' mount after installing B95

2008-08-04 Thread Michael Schuster
Lori Alt wrote:
> 
> 
> Darren J Moffat wrote:
>> Lori Alt wrote:
>>   
>>> Alan Burlison wrote:
>>> 
>>>> NAME   USED  AVAIL  REFER  MOUNTPOINT
>>>> pool/ROOT 5.58G  53.4G18K  legacy
>>>>
>>>> What's the legacy mount for?  Is it related to zones?
>>>>
>>>>
>>>>   
>>>>   
>>> Basically, it means that we don't want it mounted at all
>>> because it's a placeholder dataset.  It's just a container for
>>> all the boot environments on the system.
>>> Though, now that I think about it, we should have
>>> made it "none".
>>> 
>>
>> Why none as the mountpoint rather than canmount=off ?
> 
> canmount=off would have been a good option too.  I still would
> have wanted to set the  mount point to either legacy or none since
> a real mountpoint would have been meaningless and would
> have still been inheritable.

I personally find 'legacy' a little misleading, and - if possible - would 
suggest changing that to 'none'

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-29 Thread Michael Schuster
On 08/29/08 04:09, Tomas Ögren wrote:
> On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
> 
>> On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
>>
>>> This problem is becoming a real pain to us again and I was wondering
>>> if there has been in the past few month any known fix or workaround.
>> Sun is sending me an IDR this/next week regarding this bug..
> 
> It seems to work, but I am unfortunately not allowed to pass this IDR

IDR are "point patches", built against specific kernel builds (IIRC) and as 
such not intended for a wider distribution. Therefore they need to be 
tracked so they can be replaced with the proper patch once that is available.
If you believe you need the IDR, you need to get in touch with your local 
services organisation and ask them to get it to you - they know the proper 
procedures to make sure you get one that works on your machine(s) and that 
you also get the patch once it's available.

HTH
Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: First use recommendations

2008-09-12 Thread Michael Schuster
Hi,

I'm by no means a ZFS expert, but I do have one comment:

gm_sjo wrote:

> - To provide a large slice of storage (~4TB) to a Windows 2003/8 file 
> server guest on the vmware host, to be accessed by Windows clients over 
> CIFS.

Solaris provides CIFS support natively too - maybe you can save yourself the 
hassle of going through the vmware + windows combo.
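
rough sketch of the native route (untested; needs a build with the in-kernel
CIFS server, i.e. the SUNWsmbs/SUNWsmbskr packages):

# svcadm enable -r smb/server
# smbadm join -w WORKGROUP
# zfs set sharesmb=name=data tank/data

plus a password reset per user (after adding pam_smb_passwd to /etc/pam.conf)
so the CIFS server gets credentials it can use.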

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow zpool import with b98

2008-09-22 Thread Michael Schuster
On 09/22/08 06:59, Detlef [EMAIL PROTECTED] wrote:
> With Nevada Build 98 I realize a slow zpool import of my pool which 
> holds my user and archive data on my laptop.
> 
> The first time it was realized during the boot if Solaris tells me to 
> mount zfs filesystems (1/9) and then works for 1-2 minutes until it goes 
> ahead. I hear the disk working but have no clue what happens here.
> So I checked to zpool export and import, and with this import it is also 
> slow (takes around 90 seconds to import and with b97 it took 5 seconds). 
> Has anyone an idea what the reason could be ?
> 
> I also had created 2 ZVOL's under one filesysystem. Now I removed the 
> upper filesystem (and expected that zfs will also remove the both 
> zvols). But now on zpool exports it complains about these two unknown 
> datasets as: "dataset does not exist"
> 
> Any comments and ideas how to "really" remove the zvols and what's the 
> issue with slow zpool import ?

maybe "zpool status" and "fmdump" can shed some light ...

Michael
-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to share

2008-09-22 Thread Michael Schuster
On 09/22/08 16:11, Srinivas Chadalavada wrote:
> Hi All,
> 
>I am trying to share zfs file system, I did enable sharnfs using this 
> command.

what OS/build are you using?

> sudo zfs set sharenfs=on export/home
> 
> when I do share –a
> 
> I get this error
> 
> ech3-mes01.prod:schadala[511] ~ $ sudo zfs share -a
> 
> cannot share 'export': /export: Unknown error
> 
> cannot share 'export/home': /export/home: Unknown error
> 
> I am not able to start nfs server also.
> 
> ech3-mes01.prod:schadala[512] ~ $ svcs -a |grep 
> nfs 


> disabled   18:04:58  svc:/network/nfs/server:default

so what happens when you do "svcadm enable svc:/network/nfs/server:default"?

what's the output of "svcs -x svc:/network/nfs/server:default", and what do 
the log files you find there say?

-- 
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to share

2008-09-22 Thread michael schuster
Srinivas Chadalavada wrote:
> Hi Mike,

That's not my name.

also, please answer *all* my questions, you're only providing half the 
information: we're still missing the OS & revision, as well as some 
information about what's in the log files svcs -x tells us about.

Michael
>Here is the output.
> Sep 22 18:46:01 Executing start method ("/lib/svc/method/nfs-server
> start") ]
> cannot share 'export': /export: Unknown error
> cannot share 'export/home': /export/home: Unknown error
> [ Sep 22 18:46:01 Method "start" exited with status 0 ]
> [ Sep 22 18:46:01 Stopping because all processes in service exited. ]
> [ Sep 22 18:46:01 Executing stop method ("/lib/svc/method/nfs-server
> stop 472")
> ]
> [ Sep 22 18:46:01 Method "stop" exited with status 0 ]
> [ Sep 22 18:46:01 Disabled. ]
> 
> ech3-mes01.prod:schadala[561] ~ $ svcs -x
> svc:/network/nfs/server:default
> svc:/network/nfs/server:default (NFS server)
>  State: disabled since September 22, 2008  6:04:58 PM CDT
> Reason: Disabled by an administrator.
>See: http://sun.com/msg/SMF-8000-05
>See: nfsd(1M)
>See: /var/svc/log/network-nfs-server:default.log
> Impact: This service is not running.
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
> Sent: Monday, September 22, 2008 4:26 PM
> To: Srinivas Chadalavada
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] unable to share
> 
> On 09/22/08 16:11, Srinivas Chadalavada wrote:
>> Hi All,
>>
>>I am trying to share zfs file system, I did enable sharnfs using
> this 
>> command.
> 
> what OS/build are you using?
> 
>> sudo zfs set sharenfs=on export/home
>>
>> when I do share -a
>>
>> I get this error
>>
>> ech3-mes01.prod:schadala[511] ~ $ sudo zfs share -a
>>
>> cannot share 'export': /export: Unknown error
>>
>> cannot share 'export/home': /export/home: Unknown error
>>
>> I am not able to start nfs server also.
>>
>> ech3-mes01.prod:schadala[512] ~ $ svcs -a |grep 
>> nfs 
> 
> 
>> disabled   18:04:58  svc:/network/nfs/server:default
> 
> so what happens when you do "svcadm enable
> svc:/network/nfs/server:default"?
> 
> what's the output of "svcs -x svc:/network/nfs/server:default", and what
> do 
> the log files you find there say?
> 


-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] am I "screwed"?

2008-10-12 Thread michael schuster
dick hoogendijk wrote:
> After an error I had to press the reset button on my ZFS-root-based 
> sxce b99 system
> 
> The system did not come up again!

please elaborate - what does the system do precisely?
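
In the meantime, one way to get at your data from failsafe (a sketch only - 
I'm assuming the pool itself imports cleanly, that your home directories 
live in rpool/export/home, and the BE name 'snv_99' is just a placeholder):

  zpool import -f -R /a rpool         # skip if failsafe already imported the pool
  zfs list -r rpool                   # find the boot environment and the data datasets
  mount -F zfs rpool/ROOT/snv_99 /a   # the root BE uses a legacy mountpoint, hence mount -F zfs
  zfs mount rpool/export/home         # usual home of the home dirs; ends up under /a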

Michael
> I tried a failsafe reboot; it works but I cannot mount rpool/ROOT on /a
> I did a zpool scrub rpool and it has no known data errors.
> 
> How can I access the data on this ZFS disk?
> I wouldn't mind a reinstall, but I really want to save the data in my home
> directories. I can't understand why the grub boot menu does not find the
> rpool/ROOT anymore. It should be mounted on (legacy) .alt.tmp.b-yh.mnt/
> 
> Is this problem solvable? I -do- hope so!
> 


-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance with tens of thousands of zfs filesystems

2008-10-30 Thread Michael Schuster
Bob Friesenhahn wrote:
> On Thu, 30 Oct 2008, Phillip Wagstrom -- Area SSE wrote:
>>  OpenSolaris (as a distribution) is ABSOLUTELY supported by Sun. Take a
>> look at the datasheet from opensolaris.com and sun.com
> 
> Does Sun now offer patches for OpenSolaris? 

since when does "support" equate to "patches" (only)? There's much more to 
support than just supplying (or even creating!) patches.

(oh, btw: wasn't IPS created in part to get away from the whole patch ... 
ermm ... issue? ;-)

Michael
-- 
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Managing low free space and snapshots

2008-10-30 Thread Michael Schuster
Jose Luis Lopez Campoy wrote:
> Good evening!
> 
> Here at work we are considering switching our linux-based NAS systems to
> OpenSolaris because of ZFS, but I have some doubts.
> 
> Imagine we have a 500GB hard disk full of data. We take a snapshot of the
> data, for backup or whatever, then we delete those files and try to save
> another 500 gigs of info there.

that won't work - the snapshot will cause the data to be retained *as it 
was at the time the snapshot was taken*.

> How does the copy-on-write manage the free space?

the space isn't free as long as there's a snapshot referencing it.

> Would the snapshot be overwritten or the system would warn there is no
> free space?

I'd expect you'd get "no free space" or something like that.
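
To make that concrete with a toy example (the dataset name is made up):

  zfs snapshot tank/data@before-cleanup
  rm -rf /tank/data/*                       # files vanish from the live filesystem ...
  zfs list -o name,used,refer tank/data     # ... but 'used' barely drops - the snapshot still holds the blocks
  zfs destroy tank/data@before-cleanup      # only now does the space go back to the pool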

Michael
-- 
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Proposal: delegated administration

2006-07-18 Thread michael schuster

Jeff Bonwick wrote:

PERMISSION GRANTING

zfs allow [-l] [-d] <"everyone"|user|group> [,...] \
...
zfs unallow  [-r] [-l] [-d]
 

If we're going to use English words, it should be "allow" and "disallow".


The problem with 'disallow' is that it implies precluding a behavior
that would normally be allowed -- similar to allow/deny in ACLs.

How about allow/revoke, or grant/revoke, or delegate/revoke?


delegate/revoke gets my vote
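
fwiw, whichever verbs win, I'd expect usage to look roughly like this 
(names are made up, and I'm following the proposal's allow/unallow spelling):

  zfs allow webteam create,mount,snapshot tank/www   # let the webteam manage its own tree
  zfs allow -l bob destroy tank/www/staging          # -l: this dataset only, not its descendants
  zfs unallow webteam snapshot tank/www              # take one permission back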

Michael
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

