Re: [zfs-discuss] ZFS extended ACL
Hit this myself. I could be wrong, but from memory I think the PATH is fine if you're a normal user; it's just root's that is messed up.
Re: [zfs-discuss] ZFS extended ACL
On Wed, Jan 28, 2009 at 11:07 PM, Christine Tran wrote:
> What is wrong with this?
>
> # chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
> chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow'
> Try `chmod --help' for more information.
>

Never mind. The culprit is /usr/gnu/bin/chmod. Can we lose GNU? Gee Louise, it is OpenSolaris 2008.11, isn't it. ls [-v|-V] is messed up as well. Blarhghgh!
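For reference, the workaround implied here is simply to call the native Solaris utilities by full path rather than the GNU versions that sit first in the default PATH on OpenSolaris 2008.11; the paths below assume the stock layout, where the Solaris chmod and ls live in /usr/bin:

# /usr/bin/chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
# /usr/bin/ls -V /var/apache        (native ls shows the ACL entries)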
[zfs-discuss] ZFS extended ACL
What is wrong with this?

# chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow'
Try `chmod --help' for more information.

This works in a zone, works on S10u5, does not work on OpenSolaris 2008.11.

CT
Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting
Yes. I have disconnected the bad disk and booted with nothing in the slot, and also with a known-good replacement disk on the same SATA port. Doesn't change anything. Running 2008.11 on the box and 2008.11 snv_101b_rc2 on the LiveCD. I'll give it a shot booting from the latest build and see if that makes any kind of difference. Thanks for the suggestions.

Brad

> Just a thought, but have you physically disconnected the bad disk? It's not
> unheard of for a bad disk to cause problems with others.
>
> Failing that, it's the "corrupted data" bit that's worrying me; it sounds
> like you may have other corruption on the pool (always a risk with single-
> parity raid), but I'm worried that it's not giving you any more details as
> to what's wrong.
>
> Also, what version of OpenSolaris are you running? Could you maybe try
> booting off a CD of the latest build? There are often improvements in the
> way ZFS copes with errors, so it's worth a try. I don't think it's likely
> to help, but I wouldn't discount it.
Re: [zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
On Wed, Jan 28, 2009 at 19:04, Chris Kirby wrote:
> On Jan 28, 2009, at 11:49 AM, Will Murnane wrote:
>> (on the client workstation)
>> wil...@chasca:~$ dd if=/dev/urandom of=bigfile
>> dd: closing output file `bigfile': Disk quota exceeded
>> wil...@chasca:~$ rm bigfile
>> rm: cannot remove `bigfile': Disk quota exceeded
>
> Will,
>
> I filed a CR on this (6798878), fix on the way for OpenSolaris.
> Can you continue using regular quotas (instead of refquotas)? Those
> don't suffer from the same issue

Weird, you're right. I was under the impression that quotas suffered the same issue and that refquota was the *fix* for it (cf. [1]). Well, live and learn; I'll switch them back. Thanks for the quick fix. I didn't see anything about this in the 10u6 release notes [2]; where could I have found the notification that quotas were behaving nicely WRT unlink()?

> although of course you'll lose the
> refquota functionality.

That's okay, at least for the time being. I'll track the CR for further details.

Will

[1]: https://kerneltrap.org/mailarchive/freebsd-bugs/2008/3/11/1134694
[2]: http://docs.sun.com/app/docs/doc/817-0547/ghgdx
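For anyone following along, switching a dataset from a refquota back to a regular quota is a pair of one-liners; the dataset name and size below are taken from the listing earlier in this thread and would differ on another system:

# zfs set quota=3.60G home1/willm1
# zfs set refquota=none home1/willm1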
Re: [zfs-discuss] destroy means destroy, right?
On Wed, Jan 28, 2009 at 5:18 PM, Nathan Kroenert wrote:
> As a side note, I had a look for anything that looked like a CR for zfs
> destroy / undestroy and could not find one.
>
> Anyone interested in me submitting an RFE to have something like a
>
> zfs undestroy pool/fs

Heh, this question came up a long, long time ago regarding pools, and I think the most succinct answer was "ZFS was designed to be easy; to err the other way would make it too hard." I think this problem would not need a fix if the admin realized the magnitude of what he was about to do and didn't just zip through and hit [RETURN]. I mean, what would be the time window in which pool/fs could be preserved? 5 minutes? 24 hours? Forever?

OK, so everybody has fat-fingered something catastrophic. Take a good backup, and watch what you're typing. Everybody respects rm -f *.

CT
Re: [zfs-discuss] firewire card?
On Jan 28, 2009, at 16:39, Miles Nordin wrote:
> Oxford 911 seems to describe a brand of chips, not a specific chip,
> but it's been a good brand, and it's a very old brand for firewire.

As an added bonus, this chipset allows "multiple logins", so it can be used to experiment with things like Oracle RAC. There's a list of products that have the 911 (as well as the 912 and 922) in this article:

http://www.oracle.com/technology/pub/articles/hunter_rac10gr2.html#5

(Scroll down to "Miscellaneous Components".)
Re: [zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
On Jan 28, 2009, at 11:49 AM, Will Murnane wrote:
> (on the client workstation)
> wil...@chasca:~$ dd if=/dev/urandom of=bigfile
> dd: closing output file `bigfile': Disk quota exceeded
> wil...@chasca:~$ rm bigfile
> rm: cannot remove `bigfile': Disk quota exceeded

Will,

I filed a CR on this (6798878), fix on the way for OpenSolaris. Can you continue using regular quotas (instead of refquotas)? Those don't suffer from the same issue, although of course you'll lose the refquota functionality.

-Chris
Re: [zfs-discuss] mount race condition?
On 1/28/2009 12:16 PM, Nicolas Williams wrote:
> On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
>> On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn wrote:
>>> On Tue, 27 Jan 2009, Frank Cusack wrote:
>>>> i was wondering if you have a zfs filesystem that mounts in a subdir
>>>> in another zfs filesystem, is there any problem with zfs finding
>>>> them in the wrong order and then failing to mount correctly?
>>>
>>> I have not encountered that problem here and I do have a multilevel
>>> mount hierarchy so I assume that ZFS orders the mounting intelligently.
>>
>> well, the thing is, if the two filesystems are in different pools (let
>> me repeat the example):
>
> Then weird things happen I think. You run into the same problems if you
> want to mix ZFS and non-ZFS filesystems in a mount hierarchy. You end up
> having to set the mountpoint property so the mounts don't happen at boot
> and then write a service to mount all the relevant things in order.

Or set them all to legacy, and put them in /etc/vfstab. That's what I do. I have a directory on ZFS that holds ISO images, and a peer directory that contains mountpoints for loopback mounts of all those ISOs. I set the ZFS filesystem to legacy, and then in /etc/vfstab I list the FS containing the ISO files before I list all the ISOs to be mounted.

-Kyle

> Nico
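To make the legacy-mount approach concrete, here is a sketch using the pool and dataset names from Frank's original example (the vfstab lines are illustrative; at boot, vfstab entries are mounted in the order listed, which removes the cross-pool ordering race):

# zfs set mountpoint=legacy pool1/data
# zfs set mountpoint=legacy pool2/foo

and then in /etc/vfstab:

pool1/data  -  /data             zfs  -  yes  -
pool2/foo   -  /data/subdir/foo  zfs  -  yes  -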
Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?
Orvar,

In an existing RAIDZ configuration, you would add the cache device like this:

# zpool add pool-name cache device-name

Currently, cache devices are only supported in the OpenSolaris and SXCE releases. The important thing is determining whether the cache device would improve your workload's performance, following Richard's advice. Can you try and buy an SSD? :-)

Cindy

Mark J Musante wrote:
> On Wed, 28 Jan 2009, Richard Elling wrote:
>
>> Orvar Korvar wrote:
>>
>>> I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a
>>> similar vein? Would it be easy to do?
>>
>> Yes.
>
> To be specific, you use the 'cache' argument to zpool, as in:
>
> zpool create <...> cache
>
> Regards,
> markm
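As a concrete illustration of Cindy's command, assuming a pool named tank and an SSD that shows up as c4t2d0 (both names are made up):

# zpool add tank cache c4t2d0
# zpool status tank        (the SSD now appears under a separate "cache" section)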
Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?
On Wed, 28 Jan 2009, Richard Elling wrote:
> Orvar Korvar wrote:
>
>> I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a
>> similar vein? Would it be easy to do?
>
> Yes.

To be specific, you use the 'cache' argument to zpool, as in:

zpool create <...> cache

Regards,
markm
Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication
> The means to specify this is "sndradm -nE ...",
> where 'E' means equal enabled.

Got it. Nothing on the disk, nothing to replicate (yet).

> The manner in which SNDR can guarantee that
> two or more volumes are write-order consistent, as they are
> replicated, is to place them in the same I/O consistency group.

OK, so my "sndradm -nE" command with "g [same name as first data drive group]" simply ADDs a set of drives to the group; it doesn't stop or replace the replication on the first set of drives, and in fact by keeping the same group name I even keep the two sets of drives in each server in sync. THEN I run my "zpool attach" command on the non-bitmap slice to grow my existing pool.

Do I have that all right?

Thanks!
Re: [zfs-discuss] destroy means destroy, right?
On Wed, Jan 28, 2009 at 02:11:54PM -0800, bdebel...@intelesyscorp.com wrote:
> Recovering Destroyed ZFS Storage Pools.
> You can use the zpool import -D command to recover a storage pool that has
> been destroyed.
> http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view

But the OP destroyed a dataset, not a pool. I don't think there's a way to undo dataset destruction.
Re: [zfs-discuss] destroy means destroy, right?
He's not trying to recover a pool - just a filesystem... :)

bdebel...@intelesyscorp.com wrote:
> Recovering Destroyed ZFS Storage Pools.
> You can use the zpool import -D command to recover a storage pool that has
> been destroyed.
> http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view

--
Nathan Kroenert, Systems Engineer, Sun Microsystems
nathan.kroen...@sun.com | Phone: +61 3 9869-6255
Re: [zfs-discuss] Is Disabling ARC on SolarisU4 possible?
Also - my experience with a very small ARC is that your performance will stink. ZFS is an advanced filesystem that IMO makes some assumptions about the capability and capacity of current hardware. If you don't give it what it's expecting, your results may be equally unexpected.

If you are keen to test the *actual* disk performance, you should just use the underlying disk device, like /dev/rdsk/c0t0d0s0. Beware, however, that any writes to these devices will indeed result in the loss of the data on those devices, zpools or otherwise.

Cheers.

Nathan.

Richard Elling wrote:
> Rob Brown wrote:
>> Afternoon,
>>
>> In order to test my storage I want to stop the caching effect of the
>> ARC on a ZFS filesystem. I can do similar on UFS by mounting it with
>> the directio flag.
>
> No, not really the same concept, which is why Roch wrote
> http://blogs.sun.com/roch/entry/zfs_and_directio
>
>> I saw the following two options on a nevada box which presumably
>> control it:
>>
>> primarycache
>> secondarycache
>
> Yes, to some degree this offers some capability. But I don't believe
> they are in any release of Solaris 10.
> -- richard
>
>> But I'm running Solaris 10U4 which doesn't have them - can I disable it?
>>
>> Many thanks
>>
>> Rob

--
Nathan Kroenert, Systems Engineer, Sun Microsystems
nathan.kroen...@sun.com | Phone: +61 3 9869-6255
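If you do take the raw-device route Nathan describes, a read-only test against the raw device is enough to take ZFS and the ARC out of the picture and is safe since nothing is written; the device name here is only an example:

# dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=1024k count=1024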
Re: [zfs-discuss] destroy means destroy, right?
I'm no authority, but I believe it's gone. Some of the others on the list might have some funky thoughts, but I would suggest that if you have already done any other I/Os to the disk, you have likely rolled past the point of no return.

Anyone else care to comment?

As a side note, I had a look for anything that looked like a CR for zfs destroy / undestroy and could not find one.

Anyone interested in me submitting an RFE to have something like a

zfs undestroy pool/fs

capability? Clearly, there would be limitations in how long you would have to get the command to work, but it would have its merits...

Cheers!

Nathan.

Jacob Ritorto wrote:
> Hi,
> I just said zfs destroy pool/fs, but meant to say zfs destroy
> pool/junk. Is 'fs' really gone?
>
> thx
> jake

--
Nathan Kroenert, Systems Engineer, Sun Microsystems
nathan.kroen...@sun.com | Phone: +61 3 9869-6255
Re: [zfs-discuss] destroy means destroy, right?
Recovering Destroyed ZFS Storage Pools:

You can use the zpool import -D command to recover a storage pool that has been destroyed.

http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view
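For the pool case that document covers (which, as noted elsewhere in this thread, does not help with a destroyed dataset), the recovery looks roughly like this; the pool name is only an example:

# zpool import -D          (lists destroyed pools that are still recoverable)
# zpool import -D tank     (re-imports the destroyed pool named tank)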
[zfs-discuss] Issue with drive replacement
In the process of replacing a raidz1 of four 500GB drives with four 1.5TB drives, I ran into an interesting issue on the third one. The process was to remove the old drive, put the new drive in, and let it rebuild. The problem was that the third drive I put in had a hardware fault. That caused both drives (c4t2d0) to show as FAULTED. I couldn't put a new 1.5TB drive in as a replacement - it'd still show as a faulted drive. I couldn't remove the faulted one, since you can't remove a drive without enough replicas. You also can't do anything to a pool that is in the process of replacing.

The remedy was to put the original drive back in and let it resilver. Once that completed, a new 1.5TB drive was put in and the process was able to complete. If I didn't have the original drive (or if it were broken), I think I would have been in a tough spot.

Has anyone else experienced this - and if so, is there a way to force the replacement of a drive that failed during resilvering?
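For context, the per-disk swap being described is the usual one-at-a-time flow sketched below (pool and device names are illustrative); the open question above is what to do when the *new* disk faults partway through this:

# zpool replace tank c4t2d0        (after physically swapping the disk in that slot)
# zpool status tank                (wait for the resilver to finish before the next disk)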
Re: [zfs-discuss] mount race condition?
> "fc" == Frank Cusack writes: fc> say you have pool1/data which mounts on /data and pool2/foo fc> which mounts on /data/subdir/foo, From the rest of the thread I guess the mounts aren't reordered across pool boundarires, but I have this problem even for mount-ordering within the same pool if the iSCSI devices that make up a pool in zpool.cache are UNAVAIL at boot (iscsiadm remove discovery-address), then they come online after boot (iscsiadm add discovery-address). Once the devices appear, the pool auto-imports, but it doesn't always mount filesystems in the right order and never NFS-exports them properly. pgp2KeuCbhB8T.pgp Description: PGP signature ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] need to add space to zfs pool that's part of SNDR replication
BJ Quinn wrote:
> I have two servers set up, with two drives each. The OS is stored
> on one drive, and the data on the second drive. I have SNDR
> replication set up between the two servers for the data drive only.
>
> I'm running out of space on my data drive, and I'd like to do a
> simple "zpool attach" command to add a second data drive. Of
> course, this will break my replication unless I can also get the
> second drive replicating.
>
> What can I do? Do I simply add a second data drive to both servers
> and format them as I did the first drive (space for bitmap
> partitions, etc.) and then do a command like the following --
>
> sndradm -ne server1 /dev/rdsk/[2nd data drive s0] /dev/rdsk/[2nd
> data drive s0] server2 /dev/rdsk/[2nd data drive s1] /dev/rdsk/[2nd
> data drive s1] ip sync g [some name other than my first synced
> drive's group name]

If you were to enable the SNDR replica before giving the new disk to ZFS, then there is no data to be synchronized, as both disks are uninitialized. Then, when the disk is given to ZFS, only the ZFS metadata write I/Os need to be replicated. The means to specify this is "sndradm -nE ...", where 'E' means equal enabled (the two volumes are treated as already identical, so no initial full synchronization is done).

The "g [some name other than my first synced drive's group name]" needs to be "g [same name as first synced drive's group name]". The concept here is that all vdevs in a single ZFS storage pool must be write-order consistent. The manner in which SNDR can guarantee that two or more volumes are write-order consistent as they are replicated is to place them in the same I/O consistency group.

> Is that all there is to it? In other words, zfs will be happy as
> long as both drives are being synced? And is this the way to sync
> them, independently, with a "sndradm -ne" command set up and running
> for each drive to be replicated, or is there a better way to do it?
>
> Thanks!

Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.
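Putting that advice together, the sequence would look roughly like the sketch below; the hostnames, device names, and group name are placeholders, with s0 as the data slice and s1 as the bitmap slice as in BJ's layout:

sndradm -nE server1 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
        server2 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 ip sync g datagrp

(equal-enable the new volume into the *existing* consistency group first)

zpool add datapool c1t1d0s0

(or 'zpool attach' if the intent is to mirror an existing vdev rather than add space; only after SNDR is enabled is the slice handed to ZFS)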
Re: [zfs-discuss] firewire card?
> "ap" == Alan Perry writes: ap> the firewire drive that you want to use or, more precisely, ap> the 1394-to-ATA (or SATA) bridge for me Oxford 911 worked well, and PL-3507 crashed daily and needed a reboot of the case to come back. Prolific released new firmware, but it didn't help. I think there are probably bugs in some USB cases, too. Oxford 911 seems to describe a brand of chips, not a specific chip, but it's been a good brand, and it's a very old brand for firewire. pgpmFYHQwLKZB.pgp Description: PGP signature ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Unable to destory a pool
bash-3.00# uname -a
SunOS opf-01 5.10 Generic_13-01 sun4v sparc SUNW,T5140

It has a dual-port SAS HBA connected to a dual-controller ST2530. The storage is connected to two T5140s. Tried exporting the pool to the other node and destroying it from there, without any luck.

thanks
ramesh
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
I have two servers set up, with two drives each. The OS is stored on one drive, and the data on the second drive. I have SNDR replication set up between the two servers for the data drive only.

I'm running out of space on my data drive, and I'd like to do a simple "zpool attach" command to add a second data drive. Of course, this will break my replication unless I can also get the second drive replicating.

What can I do? Do I simply add a second data drive to both servers and format them as I did the first drive (space for bitmap partitions, etc.) and then do a command like the following --

sndradm -ne server1 /dev/rdsk/[2nd data drive s0] /dev/rdsk/[2nd data drive s0] server2 /dev/rdsk/[2nd data drive s1] /dev/rdsk/[2nd data drive s1] ip sync g [some name other than my first synced drive's group name]

Is that all there is to it? In other words, zfs will be happy as long as both drives are being synced? And is this the way to sync them, independently, with a "sndradm -ne" command set up and running for each drive to be replicated, or is there a better way to do it?

Thanks!
Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?
Orvar Korvar wrote:
> I understand Fishworks has a L2ARC cache, which as I have understood it,
> is a SSD drive as a cache?

Fishworks is an engineering team; I hear they have many L2ARCs in their lab :-) Yes, the Sun Storage 7000 series systems can have read-optimized SSDs for use as L2ARC devices.

> I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a
> similar vein?

Yes. You should look for a read-optimized SSD for the best performance gains.

> Would it be easy to do?

Yes.

> What would be the impact?

It depends on the workload. In general, it helps workloads which do random reads, perhaps by 10x or more.

> Has anyone tried this?

Yes.
-- richard
[zfs-discuss] usb 2.0 card (was: firewire card?)
OK, how about a 4-port PCIe USB 2.0 card that works?
[zfs-discuss] destroy means destroy, right?
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy pool/junk. Is 'fs' really gone?

thx
jake
Re: [zfs-discuss] zfs send -R slow
On 28 Jan 2009, at 19:40, BJ Quinn wrote:
>>> What about when I pop in the drive to be resilvered, but right
>>> before I add it back to the mirror, will Solaris get upset that I
>>> have two drives both with the same pool name?
>>
>> No, you have to do a manual import.
>
> What you mean is that if Solaris/ZFS detects a drive with an
> identical pool name to a currently mounted pool, that it will safely
> not disrupt the mounted pool and simply not mount the same-named
> pool on the newly inserted drive?
>
> Can I mount a pool "as" another pool name?

Yes: "zpool import oldname newname"

Cheers,

Chris
Re: [zfs-discuss] zfs send -R slow
>> What about when I pop in the drive to be resilvered, but right before I
>> add it back to the mirror, will Solaris get upset that I have two drives
>> both with the same pool name?
>
> No, you have to do a manual import.

What you mean is that if Solaris/ZFS detects a drive with an identical pool name to a currently mounted pool, that it will safely not disrupt the mounted pool and simply not mount the same-named pool on the newly inserted drive?

Can I mount a pool "as" another pool name?
Re: [zfs-discuss] mounting disks
Thanks a lot Ethan - that helped!

-Garima

Ethan Quach wrote:
> You've got to import the pool first:
>
> # zpool import        (to see the names of pools available to import)
>
> The name of the pool is likely "rpool", so
>
> # zpool import -f rpool
>
> Then you mount your root dataset via zfs, or use the
> beadm(1M) command to mount it:
>
> # beadm list        (to see the boot environment name(s))
>
> The name of your boot environment is likely "opensolaris"
>
> # beadm mount opensolaris /mnt
>
> Whichever you do, make sure you unmount it before you reboot:
>
> # beadm unmount opensolaris
>
> With OpenSolaris being on ZFS, it's much easier to make a
> backup "clone" boot environment of your system before making
> changes that could mess up your system. Rather than having to
> boot from media to fix such issues, you can just boot a backup
> boot environment. See beadm(1M) to see how to create boot
> environments.
>
> -ethan
>
> Garima Tripathi wrote:
>> Can anyone help me figure this out:
>>
>> I am a new user of ZFS, and recently installed 2008.11 with ZFS.
>> Unfortunately I messed up the system and had to boot using the LiveCD.
>>
>> In legacy systems, it was possible to get to the boot prompt, mount the
>> disk containing "/" on /mnt, and then fix the issue from there.
>>
>> How do I do the same using ZFS? I tried several zfs commands -
>> zfs list returns that there are no pools available,
>> zfs list /dev/dsk/cXdYsZ returns that it is not a zfs filesystem,
>> zpool online returns that there are no such pools
>>
>> Is there some way I can get to my file using zfs, or do I have to
>> re-install?
>>
>> Thanks,
>> -Garima
Re: [zfs-discuss] mounting disks
You've got to import the pool first:

# zpool import        (to see the names of pools available to import)

The name of the pool is likely "rpool", so

# zpool import -f rpool

Then you mount your root dataset via zfs, or use the beadm(1M) command to mount it:

# beadm list        (to see the boot environment name(s))

The name of your boot environment is likely "opensolaris"

# beadm mount opensolaris /mnt

Whichever you do, make sure you unmount it before you reboot:

# beadm unmount opensolaris

With OpenSolaris being on ZFS, it's much easier to make a backup "clone" boot environment of your system before making changes that could mess up your system. Rather than having to boot from media to fix such issues, you can just boot a backup boot environment. See beadm(1M) to see how to create boot environments.

-ethan

Garima Tripathi wrote:
> Can anyone help me figure this out:
>
> I am a new user of ZFS, and recently installed 2008.11 with ZFS.
> Unfortunately I messed up the system and had to boot using the LiveCD.
>
> In legacy systems, it was possible to get to the boot prompt, mount the
> disk containing "/" on /mnt, and then fix the issue from there.
>
> How do I do the same using ZFS? I tried several zfs commands -
> zfs list returns that there are no pools available,
> zfs list /dev/dsk/cXdYsZ returns that it is not a zfs filesystem,
> zpool online returns that there are no such pools
>
> Is there some way I can get to my file using zfs, or do I have to
> re-install?
>
> Thanks,
> -Garima
[zfs-discuss] mounting disks
Can anyone help me figure this out:

I am a new user of ZFS, and recently installed 2008.11 with ZFS. Unfortunately I messed up the system and had to boot using the LiveCD.

In legacy systems, it was possible to get to the boot prompt, mount the disk containing "/" on /mnt, and then fix the issue from there.

How do I do the same using ZFS? I tried several zfs commands -
zfs list returns that there are no pools available,
zfs list /dev/dsk/cXdYsZ returns that it is not a zfs filesystem,
zpool online returns that there are no such pools

Is there some way I can get to my file using zfs, or do I have to re-install?

Thanks,
-Garima
[zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?
I understand Fishworks has an L2ARC cache, which, as I have understood it, is an SSD drive used as a cache?

I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar vein? Would it be easy to do? What would be the impact? Has anyone tried this?
Re: [zfs-discuss] mount race condition?
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
> On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn wrote:
>> On Tue, 27 Jan 2009, Frank Cusack wrote:
>>> i was wondering if you have a zfs filesystem that mounts in a subdir
>>> in another zfs filesystem, is there any problem with zfs finding
>>> them in the wrong order and then failing to mount correctly?
>>
>> I have not encountered that problem here and I do have a multilevel mount
>> hierarchy so I assume that ZFS orders the mounting intelligently.
>
> well, the thing is, if the two filesystems are in different pools (let
> me repeat the example):

Then weird things happen I think. You run into the same problems if you want to mix ZFS and non-ZFS filesystems in a mount hierarchy. You end up having to set the mountpoint property so the mounts don't happen at boot and then write a service to mount all the relevant things in order.

Nico
Re: [zfs-discuss] mount race condition?
On Wed, Jan 28, 2009 at 09:32:23AM -0800, Frank Cusack wrote:
> On January 28, 2009 9:24:21 AM -0800 Richard Elling wrote:
>> Frank Cusack wrote:
>>> i was wondering if you have a zfs filesystem that mounts in a subdir
>>> in another zfs filesystem, is there any problem with zfs finding
>>> them in the wrong order and then failing to mount correctly?
>>>
>>> say you have pool1/data which mounts on /data and pool2/foo which
>>> mounts on /data/subdir/foo, what if at boot time, pool2 is imported
>>> first, what happens? /data would exist but /data/subdir wouldn't
>>> exist since pool1/data hasn't been mounted yet.
>>
>> It is a race condition and the mount may fail. Don't do this,
>> unless you also use legacy mounts. Mounts of file systems
>> inside a pool work fine because the order is discernible.
>
> i guess it's ok for the root pool since it's always available and
> always first.

For the datasets making up a BE, yes. For datasets in the rootpool that don't make up a BE (e.g., /export/home), maybe not; I'm not sure. But yes, that makes sense.
[zfs-discuss] ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around on mailing lists, and concluded that the 'refquota' property would solve this. With some trepidation, we decided at work that we would ignore the problem and wait for 10u6, at which point we would put the value of the quota property in the refquota property, and set quota=none.

We did this a week or so ago, and we're still having the problem. Here's an example:

(on the client workstation)
wil...@chasca:~$ dd if=/dev/urandom of=bigfile
dd: closing output file `bigfile': Disk quota exceeded
wil...@chasca:~$ rm bigfile
rm: cannot remove `bigfile': Disk quota exceeded
wil...@chasca:~$ strace rm bigfile
execve("/bin/rm", ["rm", "bigfile"], [/* 57 vars */]) = 0
(...)
access("bigfile", W_OK) = 0
unlink("bigfile") = -1 EDQUOT (Disk quota exceeded)

(on the NFS server)
wil...@c64:~$ rm bigfile
(no error)

This is a big problem. We don't want to allow (or force) users to log in to the NFS server to delete files. Why is the behavior different over NFSv3/4 (I tried both, same problem both times) versus locally?

In case it matters, here are the properties of the filesystem above:

# zfs get all home1/willm1
NAME          PROPERTY         VALUE                  SOURCE
home1/willm1  type             filesystem             -
home1/willm1  creation         Mon Jun  2 14:37 2008  -
home1/willm1  used             3.47G                  -
home1/willm1  available        136M                   -
home1/willm1  referenced       3.47G                  -
home1/willm1  compressratio    1.00x                  -
home1/willm1  mounted          yes                    -
home1/willm1  quota            none                   default
home1/willm1  reservation      none                   default
home1/willm1  recordsize       128K                   default
home1/willm1  mountpoint       /export/home1/willm1   inherited from home1
home1/willm1  sharenfs         rw=cmscnet             inherited from home1
home1/willm1  checksum         on                     default
home1/willm1  compression      off                    default
home1/willm1  atime            on                     default
home1/willm1  devices          on                     default
home1/willm1  exec             on                     default
home1/willm1  setuid           on                     default
home1/willm1  readonly         off                    default
home1/willm1  zoned            off                    default
home1/willm1  snapdir          hidden                 default
home1/willm1  aclmode          groupmask              default
home1/willm1  aclinherit       restricted             default
home1/willm1  canmount         on                     default
home1/willm1  shareiscsi       off                    default
home1/willm1  xattr            on                     default
home1/willm1  copies           1                      default
home1/willm1  version          1                      -
home1/willm1  utf8only         off                    -
home1/willm1  normalization    none                   -
home1/willm1  casesensitivity  sensitive              -
home1/willm1  vscan            off                    default
home1/willm1  nbmand           off                    default
home1/willm1  sharesmb         off                    default
home1/willm1  refquota         3.60G                  local
home1/willm1  refreservation   none                   default

Any suggestions are welcome. If we can't resolve this we'll have to investigate other options for our home directories; going without quotas is unacceptable for administrative reasons, and other options don't have this problem.

Thanks,
Will
Re: [zfs-discuss] Is Disabling ARC on SolarisU4 possible?
Rob Brown wrote:
> Afternoon,
>
> In order to test my storage I want to stop the caching effect of the
> ARC on a ZFS filesystem. I can do similar on UFS by mounting it with
> the directio flag.

No, not really the same concept, which is why Roch wrote
http://blogs.sun.com/roch/entry/zfs_and_directio

> I saw the following two options on a nevada box which presumably
> control it:
>
> primarycache
> secondarycache

Yes, to some degree this offers some capability. But I don't believe they are in any release of Solaris 10.
-- richard

> But I'm running Solaris 10U4 which doesn't have them - can I disable it?
>
> Many thanks
>
> Rob
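For completeness, on a Nevada/OpenSolaris build that does have those properties (not Solaris 10U4, as noted above), turning off ARC and L2ARC caching for a test filesystem looks like this; the dataset name is made up:

# zfs set primarycache=none tank/testfs
# zfs set secondarycache=none tank/testfs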
Re: [zfs-discuss] mount race condition?
On January 28, 2009 9:24:21 AM -0800 Richard Elling wrote:
> Frank Cusack wrote:
>> i was wondering if you have a zfs filesystem that mounts in a subdir
>> in another zfs filesystem, is there any problem with zfs finding
>> them in the wrong order and then failing to mount correctly?
>>
>> say you have pool1/data which mounts on /data and pool2/foo which
>> mounts on /data/subdir/foo, what if at boot time, pool2 is imported
>> first, what happens? /data would exist but /data/subdir wouldn't
>> exist since pool1/data hasn't been mounted yet.
>
> It is a race condition and the mount may fail. Don't do this,
> unless you also use legacy mounts. Mounts of file systems
> inside a pool work fine because the order is discernible.

i guess it's ok for the root pool since it's always available and always first.

-frank
Re: [zfs-discuss] ZFS concat pool
Peter van Gemert wrote:
> I have a need to create a pool that only concatenates the LUNs assigned
> to it. The default for a pool is stripe and other possibilities are
> mirror, raidz and raidz2.
>
> Is there any way I can create concat pools? The main reason is that the
> underlying LUNs are already striping and we do not want to stripe in ZFS
> *and* in the storage cabinet (internal politics).

Nobody has provided a convincing use case to justify concats, so they don't exist. In the bad old days, logical volume managers did concats because they could not dynamically stripe. ZFS does dynamic striping (!= RAID-0), so it doesn't have the capacity issues caused by RAID-0.
-- richard

> To my knowledge the only other possibility (if ZFS can't do it) is using
> SVM.
>
> Greetings,
> Peter
Re: [zfs-discuss] mount race condition?
Frank Cusack wrote:
> i was wondering if you have a zfs filesystem that mounts in a subdir
> in another zfs filesystem, is there any problem with zfs finding
> them in the wrong order and then failing to mount correctly?
>
> say you have pool1/data which mounts on /data and pool2/foo which
> mounts on /data/subdir/foo, what if at boot time, pool2 is imported
> first, what happens? /data would exist but /data/subdir wouldn't
> exist since pool1/data hasn't been mounted yet.

It is a race condition and the mount may fail. Don't do this, unless you also use legacy mounts. Mounts of file systems inside a pool work fine because the order is discernible.
-- richard
Re: [zfs-discuss] mount race condition?
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn wrote:
> On Tue, 27 Jan 2009, Frank Cusack wrote:
>
>> i was wondering if you have a zfs filesystem that mounts in a subdir
>> in another zfs filesystem, is there any problem with zfs finding
>> them in the wrong order and then failing to mount correctly?
>
> I have not encountered that problem here and I do have a multilevel mount
> hierarchy so I assume that ZFS orders the mounting intelligently.

well, the thing is, if the two filesystems are in different pools (let me repeat the example):

On January 27, 2009 10:53:18 PM -0800 Frank Cusack wrote:
> say you have pool1/data which mounts on /data and pool2/foo which
> mounts on /data/subdir/foo, what if at boot time, pool2 is imported
> first, what happens? /data would exist but /data/subdir wouldn't
> exist since pool1/data hasn't been mounted yet.

i would not expect zfs to wait for pool1 to show up; that might never happen. so /data/subdir/foo would be created for pool2/foo to be mounted on, at which point there would seem to be a race condition where /data/subdir is on the root filesystem, not the pool1/data filesystem. any data written into /data/subdir at this time will be masked when pool1 is imported.

using root zfs pools made me think of this case. also, my mail gets delivered into ~/Maildir, where each homedir is a zfs filesystem, but now i've decided to also create separate zfs filesystems for each mail spool. i still want them visible in each home directory though. previously, i've only mounted filesystems from different pools in different hierarchies.

-frank
[zfs-discuss] Zpool export failure or interrupt messes up mount ordering?
Hi,

I have the following setup that worked fine for a couple of months:

(root disk)
- zfs rootpool (build 100)

(on 2 mirrored data disks:)
- datapool/export
- datapool/export/home
- datapool/export/fotos
- datapool/export/fotos/2008

When I tried to live upgrade from build 100 to 106, things got messy, so I decided to do a clean install of build 106.

What I'd like to know is why the datapool lost its mount ordering. When I tried to import the zpool, the mount ordering was messed up: datapool/export/fotos/2008 got mounted before datapool/export/fotos, and so the latter failed to mount. I fixed it by unmounting, removing the mountpoints and mounting them in the right order, so it's fixed now. But I'd like to know what could cause the mount order to get messed up?

I have a theory: my zpool export failed/hung/was interrupted due to the automounter hogging datapool/export/home. Can a failed/interrupted/hung zpool export corrupt the mount ordering on the next zpool import?

Thanks,
..Remco
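For reference, the cleanup described above amounts to something like the following; the mountpoint paths assume the datasets mount under /export as their names suggest, and the exact directories to remove depend on which stale mountpoints were left behind:

# zfs unmount datapool/export/fotos/2008
# rmdir /export/fotos/2008 /export/fotos     (remove the stale directories that blocked the parent)
# zfs mount -a                               (remount; parents now mount before children)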
Re: [zfs-discuss] ZFS concat pool
On Wed, 28 Jan 2009, Peter van Gemert wrote:
> I have a need to create a pool that only concatenates the LUNs
> assigned to it. The default for a pool is stripe and other
> possibilities are mirror, raidz and raidz2.

Zfs does "concatenate" vdevs, and load-shares the writes across vdevs. If each vdev is one disk or one LUN, then you have concatenation and not "striping".

Bob

--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
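In other words, each LUN can simply be added as its own top-level vdev, and ZFS load-shares new writes across them rather than doing fixed-width RAID-0 striping; a sketch with made-up pool and device names:

# zpool create tank c2t0d0
# zpool add tank c2t1d0        (the second LUN becomes another top-level vdev)
# zpool status tank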
Re: [zfs-discuss] ZFS concat pool
On Wed, Jan 28, 2009 at 07:37, Peter van Gemert wrote:
> Is there any way I can create concat pools?

Not that I'm aware of. However, pools that are not redundant at the zpool level (i.e., mirror or raidz{,2}) are prone to becoming irrevocably faulted; creating non-redundant pools, even on "intelligent" storage arrays, is thus not recommended.

> The main reason is that the underlying LUNs are already striping and we do
> not want to stripe in ZFS *and* in the storage cabinet (internal politics).

What reasons are there for not doing so? Perhaps if we know more about the situation we can suggest alternative configurations.

Will
Re: [zfs-discuss] mount race condition?
On Tue, 27 Jan 2009, Frank Cusack wrote:
> i was wondering if you have a zfs filesystem that mounts in a subdir
> in another zfs filesystem, is there any problem with zfs finding
> them in the wrong order and then failing to mount correctly?

I have not encountered that problem here, and I do have a multilevel mount hierarchy, so I assume that ZFS orders the mounting intelligently.

Bob

--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Re: [zfs-discuss] zpool status -x strangeness
# zpool status -xv
all pools are healthy

Ben

> What does 'zpool status -xv' show?
>
> On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller wrote:
>> I forgot the pool that's having problems was recreated recently, so it's
>> already at zfs version 3. I just did a 'zfs upgrade -a' for another pool,
>> but some of those filesystems failed since they are busy and couldn't be
>> unmounted.
>>
>> # zfs upgrade -a
>> cannot unmount '/var/mysql': Device busy
>> cannot unmount '/var/postfix': Device busy
>>
>> 6 filesystems upgraded
>> 821 filesystems already at this version
>>
>> Ben
[zfs-discuss] Is Disabling ARC on SolarisU4 possible?
Afternoon,

In order to test my storage I want to stop the caching effect of the ARC on a ZFS filesystem. I can do similar on UFS by mounting it with the directio flag.

I saw the following two options on a nevada box which presumably control it:

primarycache
secondarycache

But I'm running Solaris 10U4, which doesn't have them - can I disable it?

Many thanks

Rob

--
Robert Brown - ioko Professional Services
Mobile: +44 (0)7769 711 885
[zfs-discuss] ZFS concat pool
I have a need to create a pool that only concatenates the LUNs assigned to it. The default for a pool is stripe, and the other possibilities are mirror, raidz and raidz2.

Is there any way I can create concat pools? The main reason is that the underlying LUNs are already striping and we do not want to stripe in ZFS *and* in the storage cabinet (internal politics).

To my knowledge the only other possibility (if ZFS can't do it) is using SVM.

Greetings,
Peter
Re: [zfs-discuss] firewire card?
Which firewire card? Any firewire card that is OHCI compliant, which is almost any add-on firewire card that you would buy new these days.

The bigger question is the firewire drive that you want to use or, more precisely, the 1394-to-ATA (or SATA) bridge used by the drive. Some work better than others with OpenSolaris.

Also, there is a bug in the OpenSolaris sbp2/scsa1394 modules that manifests itself in ZFS. Juergen Keil provided a fix some time ago. The fix seems to work for a lot of people, so there have been many requests to integrate it. However, there also seem to be instances where the fix may make things worse, so it has not yet been integrated into OpenSolaris.