Re: [zfs-discuss] Files from the future are not accessible on ZFS
> I see, thanks.
> And as Jörg said, I only need a 64 bit binary. I didn't know, but there
> is one for ls, and it does work as expected:
>
> $ /usr/bin/amd64/ls -l .gtk-bookmarks
> -rw-r--r-- 1 user opc0 oct. 16 2057 .gtk-bookmarks
>
> This is a bit absurd. I thought Solaris was fully 64 bit. I hope those
> tools will be integrated soon.

I am not sure this is expected; I thought ls was actually a hard link to
isaexec, so that the system picks the applicable ISA transparently (a
quick way to check this is sketched at the bottom of this mail). Indeed
weird.

Ivan.

> Thanks for the pointers!
>
> Laurent
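P.S. A rough way to check whether a utility is an isaexec wrapper and
which ISA the kernel prefers; these are stock commands, nothing
build-specific assumed:

  # if the inode numbers match, ls is just the isaexec stub and the
  # per-ISA copy should be picked up transparently
  $ ls -li /usr/lib/isaexec /usr/bin/ls

  # list the instruction sets this kernel supports, best first
  $ isainfo -v

  # compare the generic binary with the explicit 64-bit one
  $ file /usr/bin/ls /usr/bin/amd64/ls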
Re: [zfs-discuss] Restore a ZFS Root Mirror
My hunch is that you need to boot from a live CD (in the VirtualBox case,
an ISO) and run

  installgrub -m /boot/grub/stage1 /boot/grub/stage2 $diskB_root_slice

GRUB is currently not installed automatically on a disk that is attached
to a root mirror after installation. A rough sketch of the whole recovery
sequence is at the bottom of this mail.

Ivan.

> Hi all,
>
> after installing OpenSolaris 2008.05 in VirtualBox I've created a ZFS
> Root Mirror by:
>
> "zfs attach rpool Disk B"
>
> and it works like a charm. Now I tried to restore the rpool from the
> worst Case Scenario: The Disk the System was installed to (Disk A)
> fails. I replaced the Disk A with another virtual Disk C and tried to
> restore the rpool, but my Problem is that I can't boot.
>
> Does anybody know how to restore the rpool in this Scenario? Is it
> possible? If not, I think a ZFS Root Mirror doesn't make sense!?
>
> Thanks for Help,
>
> Guido
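P.S. Untested sketch of what I would try; device names are placeholders,
adjust to your layout:

  # boot the 2008.05 live CD/ISO, then import the root pool from the
  # surviving mirror half under an alternate root
  # zpool import -f -R /a rpool

  # put the boot blocks on the surviving disk's root slice
  # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

  # zpool export rpool, reboot from disk B, then zpool replace the failed
  # disk with disk C and run installgrub on C once resilvering completes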
Re: [zfs-discuss] GUI support for ZFS root?
> On Thu, Aug 14, 2008 at 09:00:12AM -0700, Rich Teer wrote:
> > Summary: Solaris Express Community Edition (SXCE) is like the
> > OpenSolaris of old; OpenSolaris .xx is apparently Sun's intended
> > future direction for Solaris. Based on what I've heard, I've not
> > tried the latter. If I wanted Linux I'd use Linux. But for the
> > foreseeable future, I'm sticking to SXCE.
>
> Does that mean SXCE is going to disappear and be replaced by .xx?

This has, at least to my knowledge, never been *officially* answered, but
the consensus is yes (meaning it will happen eventually, with no specific
date). Time to practice Linux crossdressing. :)

Ivan.

> -aW
Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?
> Richard Elling wrote:
> > For ZFS, there are some features which conflict with the notion of
> > user quotas: compression, copies, and snapshots come immediately to
> > mind. UFS (and perhaps VxFS?) do not have these features, so
> > accounting space to users is much simpler. Indeed, if it was easy to
> > add to ZFS, then CR 6557894 would have been closed long ago. Surely
> > we can describe the business problems previously solved by user
> > quotas and then proceed to solve them? Mail is already solved.
>
> I just find it ironic that before ZFS I kept hearing of people wanting
> group quotas rather than user quotas. Now that we have ZFS group quotas
> are easy - quota the filesystem and ensure only that group can write to
> it - but now the focus is back on user quotas again ;-)

I hate to say it, but most of us didn't expect ZFS to take the
possibility of user quotas away, and I am not sure "trading" one
capability for the other is desirable. As for ZFS's "filesystems are
cheap" paradigm (a rough sketch of that approach is at the bottom of this
mail): there would be far fewer complaints if the other facilities the OS
provides (the automounter, for example) scaled equally well with ZFS.
Making the other filesystem-related facilities fit the new paradigm would
help a lot, at least I think.

Ivan.

> --
> Darren J Moffat
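P.S. The "quota the filesystem" approach Darren describes, roughly, so we
are all talking about the same thing (pool, names and sizes made up):

  # one filesystem per user, quota'd, owned by that user
  # zfs create tank/home/alice
  # zfs set quota=10g tank/home/alice
  # chown alice:staff /tank/home/alice && chmod 0700 /tank/home/alice

  # or one quota'd filesystem per group, writable only by that group
  # zfs create -o quota=100g tank/proj
  # chgrp proj /tank/proj && chmod 2770 /tank/proj

It works, but with 10,000 users that is 10,000 filesystems to create,
mount, share and automount, which is exactly the scaling question above.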
Re: [zfs-discuss] Moving zfs pool to new machine?
> Steve,
>
> > Can someone tell me or point me to links that describe how to do the
> > following.
> >
> > I had a machine that crashed and I want to move to a newer machine
> > anyway. The boot disk on the old machine is fried. The two disks I
> > was using for a zfs pool on that machine need to be moved to a newer
> > machine now running 2008.05 OpenSolaris.
> >
> > What is the procedure for getting back the pool on the new machine
> > and not losing any of the files I had in that pool? I searched the
> > docs, but did not find a clear answer to this and experimenting with
> > various zfs and zpool commands did not see the two disks or their
> > contents.
>
> To see all available pools to import:
>
> zpool import
>
> From this list, it should include your prior storage pool name
>
> zpool import <poolname>
>
> - Jim

How about migrating a root zpool? Aside from rebuilding /devices, is
there anything to watch for when migrating a root zpool between two
similarly configured systems? (For the data-pool case, a short recap is
at the bottom of this mail.)

Ivan

> > The new disks are c6t0d0s0 and c6t1d0s0. They are identical disks
> > that were set up in a mirrored pool on the old machine.
> >
> > Thanks,
> >
> > Steve Christensen
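For the plain data-pool move Jim describes, the whole thing is basically
(pool name assumed to be "tank"):

  # with the two old disks attached to the new 2008.05 box
  # zpool import             # scans the disks, lists importable pools
  # zpool import tank        # import by name (or by the numeric pool id)
  # zpool import -f tank     # only if it complains the pool was in use
                             # and was never exported from the dead box

  # sanity check
  # zpool status tank && zfs list -r tank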
Re: [zfs-discuss] zfs write cache enable on boot disks ?
> Hello andrew,
>
> Thursday, April 24, 2008, 11:03:48 AM, you wrote:
>
> a> What is the reasoning behind ZFS not enabling the write cache for
> a> the root pool? Is there a way of forcing ZFS to enable the write
> a> cache?
>
> The reason is that EFI labels are not supported for booting.

IIRC that is a BIOS limitation, right? I thought the original problem was
that some x86 BIOSes don't understand EFI and so cannot boot from it. Is
there anything else preventing ZFS from using EFI-labeled disks as boot
devices?

And for the coming ZFS root support in the new installer: will the
installer still create the pool in a slice rather than on the whole
device? It is somewhat disheartening if we are still stuck with SMI
labels and slices.

OK, I know the write cache can be forced on with format (sketched at the
bottom of this mail), and I know that in practice EFI brings no real
advantage over an SMI label on small systems. But still...

> So from ZFS perspective you put root pool on a slice on SMI labeled
> disk - the way currently ZFS works it assumes in such a case that there
> could be other slices used by other programs and because you can
> enable/disable write cache per disk and not per slice it's just safer
> to not automatically enable it.
>
> If you however enable it yourself then it should stay that way (see
> format -e -> cache)
>
> --
> Best regards,
> Robert Milkowski
> mailto:[EMAIL PROTECTED]
> http://milek.blogspot.com
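P.S. For the archives, forcing the cache on by hand looks roughly like
this (interactive, so the exact menu wording may differ between builds):

  # format -e
  (select the root disk from the list)
  format> cache
  cache> write_cache
  write_cache> display     # show the current state
  write_cache> enable      # turn it on
  write_cache> quit

Remember this is per disk, not per slice, so only do it if nothing
outside the root pool lives on that disk (which is Robert's point above).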
Re: [zfs-discuss] zfs boot, status anyone
> On Mon, 10 Mar 2008, Ivan Wang wrote:
>
> > Hi Lori,
> >
> > Do you happen to know any update on Live Update zfs support?
>
> You mean Live Upgrade? It's being worked on as we speak to make it
> zfs-aware. It should be available at the same time the zfs-aware
> installer is made available.

Yeah, it's Live Upgrade, thanks for the correction. It is really great
news that the feature should land along with the ZFS-aware installer;
that is one major thing SXCE currently lags behind Indiana on.

Thanks for the great news.

Ivan.

> Regards,
> markm
Re: [zfs-discuss] zfs boot, status anyone
Hi Lori,

Do you happen to know any update on Live Update zfs support?

Thanks,
Ivan.
[zfs-discuss] /usr/gnu/bin compatibility [Was: Re: ZFS and acl questions in Indiana]
Moving to indiana-discuss...

Please do not start battling each other; bringing this issue to
indiana-discuss is only meant to show the potential gotcha of assuming a
specific PATH setting in utilities/scripts (a small illustration follows
below).

Cheers,
Ivan
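Just to make the gotcha concrete, the defensive pattern I have in mind
for scripts; nothing Indiana-specific, and the file names are only
examples:

  #!/bin/ksh
  # pin the system utilities first so a GNU lookalike in /usr/gnu/bin
  # (or anything else on the caller's PATH) is not picked up by accident
  PATH=/usr/bin:/usr/sbin
  export PATH

  # or simply call the exact binary the script was tested against
  /usr/bin/chmod A+user:webservd:read_data:allow /export/web/index.html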
Re: [zfs-discuss] ZFS and acl questions in Indiana
> What does 'which chmod' show? I think that Indiana chose to have
> /usr/gnu/bin at the head of the path, so you're probably picking up the
> GNU chmod, which doesn't handle NFSv4 ACLs. Manually running
> /usr/bin/chmod should solve your problem.

Would it be better to bring this issue to indiana-discuss? Not to start a
fresh round of the PATH war, but the point is that either the
/usr/gnu/bin utilities should be made compatible with the /usr/bin ones,
or no script in Indiana can safely *assume* everyone has /usr/gnu/bin
before /usr/bin (unless every script exports its own PATH). A working
example with the Sun chmod is at the bottom of this mail.

Ivan.

> - Eric
>
> On Wed, Dec 19, 2007 at 09:13:25AM -0800, ludo wrote:
> > Hi,
> >
> > I am struggling with ZFS/ACL on indiana preview. (ps: I am new to
> > ZFS, new to indiana, and generally incompetent on Solaris admin
> > commands :-)
> >
> > First off, I am a bit surprised the 'old' setfacl command does not
> > work on ZFS:
> >
> > setfacl -m user:ludo:rw- /etc/apache2/2.2/httpd.conf
> > File system doesn't support aclent_t style ACL's.
> > See acl(5) for more information on ACL styles support by Solaris.
> >
> > So I try the chmod (based on google search
> > http://www.cims.nyu.edu/cgi-comment/man.cgi?section=1&topic=chmod
> > or http://blogs.sun.com/lisaweek )
> >
> > I do:
> > chmod A+user:ludo:read_data:rwx php.ini
> > chmod: invalid mode: `A+user:ludo:read_data:rwx'
> > Try `chmod --help' for more information.
> >
> > or
> > chmod A+user:ludo:read_data:allow php.ini
> > chmod: invalid mode: `A+user:ludo:read_data:allow'
> > Try `chmod --help' for more information.
> >
> > or:
> > ls -v php.ini
> > php.ini
> >
> > (note the lack of ACL info displayed there)
> >
> > Then
> >
> > man chmod
> >
> > Miscellaneous                missing(x)
> > DESCRIPTION
> >   Unfortunately, this OpenSolaris Developer Preview does not include
> >   the manual page you are looking for. We're sorry and hope to
> >   improve upon this situation in future releases. Online versions of
> >   many manual pages are available at
> >   http://docs.sun.com/app/docs/coll/40.17.
> > SunOS 5.11          Last change: 07/10/25          1
> >
> > [...]erstand how people would react to this incompatible setfacl
> > command on an indiana system with zfs:
> >
> > How would you write a script to change acl for a user for both zfs
> > and non zfs system (i.e SXDE default installation or Indiana default
> > installation):
> > https://www.phillconrad.org/cisc474/Wiki.jsp?page=AccessControlLists)
> > What is the good way for doing this?
> >
> > So how can I write a portable script (with or without zfs) that would
> > take a user name as a parameter and would add rwx rights to the file
> > /foo?
> >
> > Why setfacl could not be adapted to work on ZFS, as if I am guessing
> > correctly, there should be a simple mapping from the limited setfacl
> > options to the mega-extended chmod options for ZFS?
> >
> > Thanks for some pointers or some help,
> > Ludo
>
> --
> Eric Schrock, FishWorks
> http://blogs.sun.com/eschrock
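P.S. For the archives, the invocation that should work on a ZFS file with
the Sun utilities (user name copied from the original mail; note that the
first form Ludo tried, "read_data:rwx", mixes the verbose and compact ACL
syntaxes and would be rejected by either chmod):

  # use the full path so the GNU chmod in /usr/gnu/bin is not picked up
  /usr/bin/chmod A+user:ludo:read_data/write_data:allow php.ini

  # verify with the Sun ls; the GNU ls does not show NFSv4 ACL entries
  /usr/bin/ls -v php.ini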
Re: [zfs-discuss] enlarge a mirrored pool
> Would the bootloader have issues here? On x86 I would imagine that you
> would have to reload grub, would a similar thing need to be done on
> SPARC?

Yeah, that's also what I'm thinking; apparently a ZFS mirror doesn't take
care of the boot sector. So as of now, estimating the size of a ZFS root
pool up front is still required; better not to go in with a carefree
grow-as-needed mindset.

Ivan.

> Ivan Wang wrote:
> >>> Erik Trimble wrote:
> >>> After both drives are replaced, you will automatically see the
> >>> additional space.
> >>
> >> I believe currently after the last replace an import/export sequence
> >> is needed to force zfs to see the increased size.
> >
> > What if root fs is also in this pool? will there be any limitation
> > for a pool containing /?
> >
> > Thanks,
> > Ivan.
> >
> >> Neil.
Re: [zfs-discuss] enlarge a mirrored pool
> Erik Trimble wrote:
> > After both drives are replaced, you will automatically see the
> > additional space.
>
> I believe currently after the last replace an import/export sequence is
> needed to force zfs to see the increased size.

What if the root filesystem is also in this pool? Will there be any
limitation for a pool containing /? (A recap of the whole sequence, as I
understand it, is at the bottom of this mail.)

Thanks,
Ivan.

> Neil.
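P.S. So, if I read this right, the grow-by-replacing dance for an
ordinary pool would be (device names made up):

  # zpool replace tank c1t0d0 c2t0d0    # swap the first small disk
  # zpool status tank                   # wait for the resilver to finish
  # zpool replace tank c1t1d0 c2t1d0    # then the second one
  # zpool export tank && zpool import tank   # currently needed for the
                                              # extra space to show up

and the open question is what else applies when / lives in that pool
(boot blocks, no export/import possible on a live root, and so on).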
[zfs-discuss] enlarge a mirrored pool
Hi all,

Forgive me if this is a dumb question. Is it possible for a two-disk
mirrored zpool to be seamlessly enlarged by gradually replacing the
existing disks with larger ones?

Say, in a constrained desktop with room for only two internal disks,
could I begin with two 160G disks, then at some point replace one of the
160G disks with a 250G one, let it resilver, then replace the other 160G
disk, and finally end up with a two-disk 250G mirrored pool?

Cheers,
Ivan.
Re: [zfs-discuss] About bug 6486493 (ZFS boot incompatible with
> This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We
> now store physical device paths with the vnodes, so even though the
> SATA framework doesn't correctly support open by devid in early boot,
> we can fall back to the device path just fine. ZFS root works great on
> thumper, which uses the marvell SATA driver.

But if I read it right, there is still a problem in the SATA framework
(ldi_open_by_devid failing), right? If this problem is framework-wide, it
might just bite back some time in the future.

Ivan.

> - Eric
>
> On Wed, Oct 03, 2007 at 08:10:16AM +0000, Marc Bevand wrote:
> > I would like to test ZFS boot on my home server, but according to bug
> > 6486493 ZFS boot cannot be used if the disks are attached to a SATA
> > controller handled by a driver using the new SATA framework (which is
> > my case: driver si3124). I have never heard of someone having
> > successfully used ZFS boot with the SATA framework, so I assume this
> > bug is real and everybody out there playing with ZFS boot is doing so
> > with PATA controllers, or SATA controllers operating in compatibility
> > mode, or SCSI controllers, right ?
> >
> > -marc
>
> --
> Eric Schrock, Solaris Kernel Development
> http://blogs.sun.com/eschrock
[zfs-discuss] Re: Re: ZFS consistency guarantee
> > If i put the database in hotbackup mode, then i will have to ensure
> > that the filesystem is consistent as well. So, you are saying that
> > taking a ZFS snapshot is the only method to guarantee consistency in
> > the filesystem since it flushes all the buffers to the filesystem, so
> > its consistent.
>
> The ZFS filesystem is always consistent on disk. By taking a snapshot,
> you simply make a consistent copy of the filesystem available.
>
> Flushing buffers would be a way of making sure all the writes have made
> it to storage. That's a different statement than consistency.
>
> > Just curious, is there any manual way of telling ZFS to flush the
> > buffers after quiescing the db other than taking a ZFS snapshot?
>
> The usual methods of doing this on a filesystem are to run sync, or
> call fsync(), but that's not anything specific to ZFS.
>
> If you're not taking a snapshot, why would you want ZFS to flush the
> buffers?

Maybe just as a way to confirm that some really important data/change has
made it to storage? (A couple of concrete options are at the bottom of
this mail.)

Ivan.

> --
> Darren Dunham
> [EMAIL PROTECTED]
> Senior Technical Consultant, TAOS        http://www.taos.com/
> San Francisco, CA bay area
> < This line left intentionally blank to confuse you. >
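P.S. What I had in mind, concretely; as Darren says, nothing here is
ZFS-specific:

  # push all dirty buffers toward storage, system-wide
  sync

  # for one particular file, the application should call fsync(3C) on
  # its descriptor (or open the file with O_DSYNC) before declaring the
  # change safe; fsync should only return once the data has reached
  # stable storage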
[zfs-discuss] Re: Re: ZFS improvements
> On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B. Rang wrote:
> >
> > That's only one cause of panics.
> >
> > At least two of gino's panics appear due to corrupted space maps, for
> > instance. I think there may also still be a case where a failure to
> > read metadata during a transaction commit leads to a panic, too.
> > Maybe that one's been fixed, or maybe it will be handled by the above
> > bug.
>
> The space map bugs should have been fixed as part of:
>
> 6458218 assertion failed: ss == NULL
>
> Which went into Nevada build 60. There are several different
> pathologies that can result from this bug, and I don't know if the
> panics are from before or after this fix. I hope folks from the ZFS
> team are investigating, but I can't speak for everyone.
>
> > Maybe someone needs to file a bug/RFE to remove all panics from ZFS,
> > at least in non-debug builds? The QFS approach is to panic when
> > inconsistency is found on debug builds, but return an appropriate
> > error code on release builds, which seems reasonable.
>
> In order to do this we need to fix 6322646 first, which addresses the
> issue of 'backing out' of a transaction once we're down in the ZIO
> layer discovering these problems. It doesn't matter if it's due to an
> I/O error or space map inconsistency or I/O error if we can't propagate
> the error.

Now this is scary. Judging from the descriptions, is it possible that we
could lose data in ZFS, and/or end up with a corrupted zpool that panics
the kernel, if ZFS loses its connection to the underlying hardware during
a write (for example, on a power failure)? I have rarely seen anything
like that since we got UFS with logging in Solaris 7 or so. And even with
UFS there is always fsck, which lets us bring the system back to a
consistent state, or at least to a point where we can recover from a
previous backup.

Is ZFS really supposed to be more reliable than UFS with logging in, say,
the single-disk root filesystem scenario?

> - Eric
>
> --
> Eric Schrock, Solaris Kernel Development
> http://blogs.sun.com/eschrock
[zfs-discuss] Re: ZFS boot: a new heads-up
So, is there any date yet for when the install utility will support a
fresh install onto a ZFS root? I almost can't wait for that.

Cheers,
Ivan.
[zfs-discuss] Live Upgrade with zfs root?
Hi all,

ZFS boot is scheduled to be delivered in b62, so is there any word on
when and how we will be able to use Live Upgrade with a ZFS root? Since I
only run SXCR these days, and sometimes you just need to boot back into
an older BE after a not-so-good build, Live Upgrade is very handy for me
there.

Cheers,
Ivan.
[zfs-discuss] Re: Re: update on zfs boot support
> Ivan Wang wrote:
> >
> > Hi,
> >
> > However, this raises another concern: during recent discussions on
> > the disk layout of a zfs system
> > (http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0)
> > it was said that currently we'd better give zfs the whole device
> > (rather than slices) and keep swap off zfs devices for better
> > performance.
> >
> > If the above recommendation still holds, we still have to have a swap
> > device out there other than the devices managed by zfs. Is this
> > limited by the design or implementation of zfs?
> >
> > Ivan.
>
> ZFS supports swap to /dev/zvol, however, I do not have data related to
> performance.
> Also note that ZFS does not support dump yet, see RFE 5008936.

Got it, thanks (a short swap-on-zvol recipe is at the bottom of this mail
for anyone else trying it). And a more general question: in a single-disk
root pool scenario, what advantage will ZFS provide over UFS with
logging? And when ZFS boot is integrated into Nevada, will Live Upgrade
work with a ZFS root?

Cheers,
Ivan.

> Lin
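P.S. The swap-on-zvol recipe, as far as I understand it (size and pool
name made up; remember dump is still not supported on ZFS, per the RFE
above):

  # create a 2 GB volume and add it as swap
  # zfs create -V 2g rpool/swap
  # swap -a /dev/zvol/dsk/rpool/swap

  # check that it is in use
  # swap -l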
[zfs-discuss] Re: update on zfs boot support
> Ian Collins wrote:
> > Thanks for the heads up.
> >
> > I'm building a new file server at the moment and I'd like to make
> > sure I can migrate to ZFS boot when it arrives.
> >
> > My current plan is to create a pool on 4 500GB drives and throw in a
> > small boot drive.
> >
> > Will I be able to drop the boot drive and move / over to the pool
> > when ZFS boot ships?
>
> Yes, should be able to, given that you have already had an UFS boot
> drive running root.

Hi,

However, this raises another concern: during recent discussions on the
disk layout of a ZFS system
(http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0) it
was said that, currently, we are better off giving ZFS whole devices
(rather than slices) and keeping swap off ZFS devices for better
performance (the whole-device point is illustrated at the bottom of this
mail).

If that recommendation still holds, we still have to keep a swap device
somewhere other than the devices managed by ZFS. Is this limited by the
design of ZFS, or by the implementation?

Ivan.

> Lin
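P.S. To be clear about the whole-device point (device names made up): as
far as I understand it, when ZFS is given the whole disk it manages the
label itself and can safely turn on the disk's write cache, which it will
not do for a slice it shares with other consumers:

  # whole disk: ZFS owns the label and the cache policy
  # zpool create tank c1t1d0

  # a slice of an SMI-labeled disk: works, but the write cache is left
  # alone by default
  # zpool create tank c1t1d0s7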