Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-12 Thread Ross
Yeah, it's a *very* old bug.  The main reason we put our ZFS rollout on hold 
was concern over reliability with such an old (and IMO critical) bug still 
present in the system.
 
 


[zfs-discuss] question regarding gzip compression in S10

2008-05-12 Thread Krzys
I just upgraded to Sol 10 U5 and was hoping that gzip compression would be 
there, but 'zpool upgrade' still only shows version 4:

[10:05:36] [EMAIL PROTECTED]: /export/home > zpool upgrade
This system is currently running ZFS version 4.

Do you know when version 5 will be included in Solaris 10? Are there any plans 
for it, or will it be in Solaris 11 only?

Regards,

Chris
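
For reference, gzip support corresponds to ZFS pool version 5, so it depends on 
running a release that reports at least that version. A sketch of checking what 
is available and enabling gzip once it is there (the pool and dataset names 
below are placeholders, not from the original post):

# Show the ZFS versions this system supports and what each one adds
zpool upgrade -v

# Once version 5 (or later) is available, gzip can be enabled per dataset;
# gzip-1 through gzip-9 select the compression level
zfs set compression=gzip tank/export/home
zfs get compression tank/export/home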



Re: [zfs-discuss] ZFS cli for REMOTE Administration

2008-05-12 Thread Mark Shellenbaum
Andy Lubel wrote:
>> Paul B. Henson wrote:
>>> On Thu, 8 May 2008, Mark Shellenbaum wrote:
>>>
>>>> we already have the ability to allow users to create/destroy snapshots
>>>> over NFS.  Look at the ZFS delegated administration model.  If all you
>>>> want is snapshot creation/destruction then you will need to grant
>>>> snapshot,mount,destroy permissions.
>>>>
>>>> then on the NFS client mount, go into .zfs/snapshot and do mkdir
>>>> snapname.  Provided the user has the appropriate permission, the
>>>> snapshot will be created.
>>>>
>>>> rmdir can be used to remove the snapshot.
>>>
>>> Now that is just uber-cool.
>>>
>>> Can you do that through the in-kernel CIFS server too?
>>
>> Yes, it works over CIFS too.
>>
>>    -Mark
>
> Great stuff!
>
> I confirmed that it does work, but it's strange that I don't see the snapshot
> in 'zfs list' on the ZFS box.  Is that a bug or a feature?  I'm using XP.
> Another thing is that if you right-click in the .zfs/snapshot directory and
> do New > Folder, you will be stuck with a snapshot called "New Folder".  I
> couldn't rename it, and the only way to delete it was to log into the machine
> and do a little 'rm -Rf'.  The good news is that it is snapshotting :)
>
> I have a simple backup script where I use robocopy and then at the end I want
> to do a 'mkdir .zfs/snapshot/xxx', but I would eventually want to delete the
> oldest snapshot, similar to the zsnap.pl script floating around.
>
> Can't wait to try this on NFS; the whole reason we objected to snapshots in
> the first place in our org was that our admins didn't want to be involved
> with the users in the routine of working with snapshots.
>
> -Andy


If you want to be able to rename the snapshots then you will need to 
also hand out rename,create permission.  Then after Windows creates 
the "New Folder" you can rename it to something else.

You can also rename it from the server by running 'mv' in 
.zfs/snapshot.
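
A sketch of the full delegation, assuming a build with ZFS delegated 
administration and using a hypothetical user 'fred' and dataset 
'tank/home/fred' (only the permission names come from this thread):

# Grant the permissions needed to create/destroy snapshots via mkdir/rmdir,
# plus rename,create so a "New Folder" made by Windows can be renamed later.
zfs allow fred snapshot,mount,destroy,rename,create tank/home/fred

# From an NFS client with the filesystem mounted at /mnt/fred, user fred can:
mkdir /mnt/fred/.zfs/snapshot/nightly-1   # creates tank/home/fred@nightly-1
rmdir /mnt/fred/.zfs/snapshot/nightly-1   # destroys that snapshot again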

I created this bug to address the issue with the space character:

6700649 zfs_ctldir snapshot creation issue with CIFS clients


   -Mark



Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-12 Thread Ralf Bertling
Hi all,
until the scrub problem (http://bugs.opensolaris.org/view_bug.do?bug_id=6343667)
is fixed, you should be able to simulate a scrub on the latest data
by using

zfs send snapshot > /dev/null

Since the primary purpose is to detect latent errors and to have ZFS
auto-correct them, simply reading all data would be sufficient to
achieve the same purpose.
Problems:
1. This does not verify data from older snapshots and has to be issued
for each FS in the pool.
2. It might be hard to schedule this task as comfortably as a scrub.
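
A rough sketch of scripting this workaround across every filesystem in a pool 
(the pool name 'tank' is a placeholder, and this inherits the limitations 
listed above):

#!/bin/sh
# Read back the newest snapshot of each filesystem, forcing ZFS to verify
# checksums on every block the send touches.
for fs in `zfs list -H -o name -t filesystem -r tank`; do
    snap=`zfs list -H -o name -t snapshot -s creation -r "$fs" | grep "^$fs@" | tail -1`
    if [ -n "$snap" ]; then
        zfs send "$snap" > /dev/null
    fi
done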

Resilvering should pose less of a problem, as it only has to rewrite the
data of a single disk, i.e. you do not have to stop snapshots for a
very long time. If a device was only temporarily unavailable,
resilvering is actually much faster, as only the affected blocks will be
rewritten.


Re: [zfs-discuss] sharing UFS root and ZFS pool

2008-05-12 Thread sean walmsley
Some additional information: I should have noted that the client could not see 
the thumper1 shares via the automounter.

I've played around with this setup a bit more and it appears that I can 
manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the ZFS 
and UFS volumes are being shared properly, it's just the automounter that 
doesn't want to deal with both at once.

Does the automounter have issues with picking up UFS and ZFS volumes at the 
same time?
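
A sketch of the checks involved, using the server name from the earlier post 
('thumper1'); everything else is generic:

# Confirm the server is exporting both the UFS and the ZFS filesystem
showmount -e thumper1

# Re-read the automounter maps on the client and report what changed
automount -v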
 
 


Re: [zfs-discuss] sharing UFS root and ZFS pool

2008-05-12 Thread Richard Elling
sean walmsley wrote:
> Some additional information: I should have noted that the client could not
> see the thumper1 shares via the automounter.
>
> I've played around with this setup a bit more and it appears that I can
> manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the
> ZFS and UFS volumes are being shared properly, it's just the automounter that
> doesn't want to deal with both at once.
>
> Does the automounter have issues with picking up UFS and ZFS volumes at the
> same time?

The automounter has no knowledge of UFS or ZFS file systems.
You are seeing something in the way the client works, and you
should perhaps take this to nfs-discuss rather than zfs-discuss.
 -- richard



Re: [zfs-discuss] Where is zfs attributes kept?

2008-05-12 Thread Richard Elling
Christine Tran wrote:
> Hi,
>
> If I delegate a dataset to a zone, and inside the zone the zone admin
> sets an attribute on that dataset, where is that data kept?  More to the
> point, at what level is that data kept?  In the zone?  Or on the pool,
> with the zone having the privilege to modify that info at the pool?
>
> I'm looking into a case where a claim is made that a zone reboot wipes
> out the recordsize setting.  I looked with a simple DTrace script and have
> not found this to be the case.  But I want to know if it's possible;
> perhaps I'm missing something.

Attributes are stored on disk. For more information, see the
on-disk format specification, chapter 5 (ZAP - ZFS Attribute Processor):
http://www.opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
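
One way to see where a property such as recordsize comes from is the SOURCE 
column of 'zfs get'; a sketch, with 'tank/zones/web' standing in for the 
delegated dataset:

zfs get recordsize tank/zones/web
# SOURCE reads 'local' if the value was set on this dataset (and is therefore
# stored persistently on disk), 'inherited from ...' if it comes from a
# parent, or 'default'.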

-- richard




[zfs-discuss] Deletion of file from ZFS Disk and Snapshots

2008-05-12 Thread Aaron Epps
This is a common problem that we run into and perhaps there's a good 
explanation of why it can't be done. Often, there will be a large set of data, 
say 200GB or so, that gets written to a ZFS share, snapshotted, and then deleted 
a few days later. As I'm sure you know, none of the space is returned to the 
pool since the bits on disk are still being referenced by the snapshot. Is 
there any way to delete a large set of data both from the disk and from any 
snapshots that may reference this large data set?
 
 


Re: [zfs-discuss] Deletion of file from ZFS Disk and Snapshots

2008-05-12 Thread Simon Breden
From my understanding, when you delete all the snapshots that reference the 
files that have already been deleted from the file system(s), then all the 
space will be returned to the pool.

So try deleting the snapshots that you no longer need. Obviously, be sure that 
you don't need any files referenced by the snapshots first ;-)
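
A sketch of reclaiming the space, with 'tank/share' as a placeholder dataset 
and a made-up snapshot name:

# USED on a snapshot line is the space that destroying that snapshot alone
# would return to the pool
zfs list -t snapshot -o name,used,referenced -r tank/share

# Destroy the snapshots that still reference the deleted data
zfs destroy tank/share@2008-05-01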
 
 


[zfs-discuss] ZFS Problems under vmware

2008-05-12 Thread Paul B. Henson

I have a test bed S10U5 system running under vmware ESX that has a weird
problem.

I have a single virtual disk, with some slices allocated as UFS filesystem
for the operating system, and s7 as a ZFS pool.

Whenever I reboot, the pool fails to open:

May  8 17:32:30 niblet fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-CS, 
TYPE: Fault, VER: 1, SEVERITY: Major
May  8 17:32:30 niblet EVENT-TIME: Thu May  8 17:32:30 PDT 2008
May  8 17:32:30 niblet PLATFORM: VMware Virtual Platform, CSN: VMware-50 35 75 
0b a3 b3 e5 d4-38 3f 00 7a 10 c0 e2 d7, HOSTNAME: niblet
May  8 17:32:30 niblet SOURCE: zfs-diagnosis, REV: 1.0
May  8 17:32:30 niblet EVENT-ID: f163d843-694d-4659-81e8-aa15bb72e2e0
May  8 17:32:30 niblet DESC: A ZFS pool failed to open.  Refer to 
http://sun.com/msg/ZFS-8000-CS for more information.
May  8 17:32:30 niblet AUTO-RESPONSE: No automated response will occur.
May  8 17:32:30 niblet IMPACT: The pool data is unavailable
May  8 17:32:30 niblet REC-ACTION: Run 'zpool status -x' and either attach the 
missing device or
May  8 17:32:30 niblet  restore from backup.


According to 'zpool status', the device could not be opened:

[EMAIL PROTECTED] ~ # zpool status
  pool: ospool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ospool      UNAVAIL      0     0     0  insufficient replicas
          c1t0d0s7  UNAVAIL      0     0     0  cannot open


However, according to format, the device is perfectly accessible, and
format even indicates that slice 7 is an active pool:

[EMAIL PROTECTED] ~ # format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 4092 alt 2 hd 128 sec 32>
  /[EMAIL PROTECTED],0/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number): 0
selecting c1t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c1t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c1t0d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c1t0d0s3 is currently mounted on /usr. Please see umount(1M).
/dev/dsk/c1t0d0s4 is currently mounted on /var. Please see umount(1M).
/dev/dsk/c1t0d0s5 is currently mounted on /opt. Please see umount(1M).
/dev/dsk/c1t0d0s6 is currently mounted on /home. Please see umount(1M).
/dev/dsk/c1t0d0s7 is part of active ZFS pool ospool. Please see zpool(1M).


Trying to import it does not find it:

[EMAIL PROTECTED] ~ # zpool import
no pools available to import


Exporting it works fine:

[EMAIL PROTECTED] ~ # zpool export ospool


But then the import indicates that the pool may still be in use:

[EMAIL PROTECTED] ~ # zpool import ospool
cannot import 'ospool': pool may be in use from other system


Adding the -f flag imports successfully:

[EMAIL PROTECTED] ~ # zpool import -f ospool

[EMAIL PROTECTED] ~ # zpool status
  pool: ospool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ospool      ONLINE       0     0     0
          c1t0d0s7  ONLINE       0     0     0

errors: No known data errors


And then everything works perfectly fine, until I reboot again, at which
point the cycle repeats.

I have a similar test bed running on actual x4100 hardware that doesn't
exhibit this problem.

Any idea what's going on here?
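
One diagnostic sketch that might narrow it down: dump the vdev labels on the 
slice and compare the hostid and device path recorded there with what the 
running VM reports, since a mismatch would be consistent with both the failed 
open at boot and the "pool may be in use from other system" message.

# Print the ZFS labels stored on the slice; look at the hostid, hostname and
# path/devid fields
zdb -l /dev/rdsk/c1t0d0s7

# Compare with the hostid of the running system
hostid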


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-12 Thread A Darren Dunham
On Mon, May 12, 2008 at 06:44:39PM +0200, Ralf Bertling wrote:
> ...you should be able to simulate a scrub on the latest data by
> using
>
> zfs send snapshot > /dev/null
>
> Since the primary purpose is to detect latent errors and to have ZFS
> auto-correct them, simply reading all data would be sufficient to
> achieve the same purpose.
> Problems:
> 1. This does not verify data from older snapshots and has to be issued
> for each FS in the pool.
> 2. It might be hard to schedule this task as comfortably as a scrub.

It also won't check redundant copies or parity data.  Does a scrub do
that?

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 