Re: [zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Sanjeev Bagewadi

Jim,

That is good news! Let us know how it goes.


Regards,
Sanjeev.
PS: I am out of the office for a couple of days.

Jim Hranicky wrote:


OK, spun down the drives again. Here's that output:

 http://www.cise.ufl.edu/~jfh/zfs/threads
   



I just realized that I changed the configuration, so that doesn't reflect 
a system with spares, sorry. 

However, I reinitialized the pool and spun down one of the drives and 
everything is working as it should:


  pool: zmir
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Wed Nov 29 16:29:53 2006
config:

        NAME          STATE     READ WRITE CKSUM
        zmir          DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0    ONLINE       0     0     0
            spare     DEGRADED     0     0     0
              c3t1d0  UNAVAIL     10 28.88     0  cannot open
              c3t3d0  ONLINE       0     0     0
        spares
          c3t3d0      INUSE     currently in use
          c3t4d0      AVAIL

errors: No known data errors

I'm just not sure if it will always work. 


I'll try a few different configs and see what happens.


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 




--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-29 Thread Jason J. W. Williams

Hi Richard,

I've been watching the stats on the array and the cache hits are < 3% on
these volumes. We're very write-heavy, and rarely write similar enough
data twice. With random-oriented database data and sequential-oriented
database log data on the same volume groups, it seems to me this was
causing a lot of head repositioning.

By shutting down the slave database servers we cut the latency
tremendously, which would seem to indicate a lot of contention.
But I'm still coming up to speed on this, so I may be wrong.

"iostat -xtcnz 5" showed the latency dropped from 200 to 20 once we
cut the replication. Since the masters and slaves were using the same
the volume groups and RAID-Z was striping across all of them on both
the masters and slaves, I think this was a big problem.
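(For reference, a sketch of that measurement; the latency figures presumably
come from the service-time columns of the extended iostat(1M) output:)

  # extended device stats, 5-second samples, skip idle devices
  iostat -xtcnz 5
  # wsvc_t = ms a request spends queued, asvc_t = ms active on the device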

Any comments?

Best Regards,
Jason

On 11/29/06, Richard Elling <[EMAIL PROTECTED]> wrote:

Jason J. W. Williams wrote:
> Hi Richard,
>
> Originally, my thinking was I'd like drop one member out of a 3 member
> RAID-Z and turn it into a RAID-1 zpool.

You would need to destroy the pool to do this -- requiring the data to
be copied twice.

> Although, at the moment I'm not sure.

So many options, so little time... :-)

> Currently, I have 3 volume groups in my array with 4 disks each (total
> 12 disks). These VGs are sliced into 3 volumes each. I then have two
> database servers using one LUN from each of the 3 VGs RAID-Z'd
> together. For redundancy it's great, for performance it's pretty bad.
>
> One of the major issues is the disk seek contention between the
> servers since they're all using the same disks, and RAID-Z tries to
> utilize all the devices it has access to on every write.

This is difficult to pin down.  The disks cache and the RAID controller
caches.  So while it is true that you would have contention, it is difficult
to predict what effect, if any, the hosts would see.

> What I thought I'd move to was 6 RAID-1 VGs on the array, and assign
> the VGs to each server via a one-device striped zpool. However, given
> the fact that ZFS will kernel panic in the event of bad data, I'm
> reconsidering how to lay it out.

NB: all other file systems will similarly panic.  We get spoiled to
some extent because there are errors where ZFS won't panic.  In the
future, there will be more errors that ZFS can handle without panicking.

> Essentially I've got 12 disks to work with.
>
> Anyway, long form of trying to convert from RAID-Z to RAID-1. Any help
> is much appreciated.

send/receive = copy/copy = backup/restore
It may be possible to do this as a rolling reconfiguration.
  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Jim Hranicky
> 
> OK, spun down the drives again. Here's that output:
> 
>   http://www.cise.ufl.edu/~jfh/zfs/threads

I just realized that I changed the configuration, so that doesn't reflect 
a system with spares, sorry. 

However, I reinitialized the pool and spun down one of the drives and 
everything is working as it should:

  pool: zmir
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Wed Nov 29 16:29:53 2006
config:

        NAME          STATE     READ WRITE CKSUM
        zmir          DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0    ONLINE       0     0     0
            spare     DEGRADED     0     0     0
              c3t1d0  UNAVAIL     10 28.88     0  cannot open
              c3t3d0  ONLINE       0     0     0
        spares
          c3t3d0      INUSE     currently in use
          c3t4d0      AVAIL

errors: No known data errors

I'm just not sure if it will always work. 

I'll try a few different configs and see what happens.
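For the archives, a minimal sketch of the setup and recovery steps behind the
output above (device names match the status listing; the exact resilver/detach
sequence may vary):

  # create a mirrored pool with two hot spares
  zpool create zmir mirror c0t0d0 c3t1d0 spare c3t3d0 c3t4d0

  # once the failed disk is available again, resilver it and release the spare
  zpool replace zmir c3t1d0        # re-attach/resilver the original device
  zpool detach zmir c3t3d0         # return c3t3d0 to the spare list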
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Jim Hranicky
I know this isn't necessarily ZFS-specific, but after I reboot I spin the
drives back up, and nothing I do (devfsadm, disks, etc.) can get them seen
again until the next reboot.

I've got some older SCSI drives in an old Andataco Gigaraid enclosure which
I thought supported hot-swap, but I seem unable to hot-swap them in. The PC
has an Adaptec 39160 card in it and I'm running Nevada b51. Is this not a
setup that can support hot swap? Or is there something I have to do other
than devfsadm to get the SCSI bus rescanned?
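(For context, a sketch of the usual rescan sequence; the controller number is
illustrative:)

  cfgadm -al                  # list attachment points / SCSI buses
  cfgadm -c configure c3      # ask cfgadm to (re)configure that controller
  devfsadm -Cv                # rebuild /dev links; -C removes stale ones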
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Jim Hranicky
> >>Do you have a threadlist from the node when it was
> hung ? That would
> >>reveal some info.
> >
> >Unfortunately I don't. Do you mean the output of
> >
> > ::threadlist -v
> >
> Yes. That would be useful.

OK, spun down the drives again. Here's that output:

  http://www.cise.ufl.edu/~jfh/zfs/threads

here's the output after boot:

  http://www.cise.ufl.edu/~jfh/zfs/threads-after-boot

> Also, check the zpool
> status output.

This hangs and is unkillable. The node also has to be power-cycled, as it hangs
on reboot. Until the reboot it seems to work OK, though it spits out a ton of
SCSI errors.
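(For reference, a threadlist like the one above is typically gathered along
these lines on the live kernel:)

  # dump all kernel thread stacks from the running system
  echo "::threadlist -v" | mdb -k > /var/tmp/threads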
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: system wont boot after zfs

2006-11-29 Thread Richard Elling

David Elefante wrote:

I had this happen on three different motherboards.  So it seems there should
be a procedure in the documentation stating that if your BIOS doesn't support
EFI labels, you need to give ZFS a partition (slice) rather than the whole-disk
overlay, which otherwise causes the BIOS to hang reading the drive at boot.
Most PC BIOSes do not support EFI at this point, so this can impact the larger
community.

Having that documentation would have saved me 30 hours at least, and I only 
hope that you take this as positive feedback and integrate it into the doc set. 
 I have ZFS working on my Ultra 20 just fine, and that is what confused me when 
I was working with my x86 box.  It says that EFI is not supported on IDE disks 
(SATA drive), but I'm assuming that this has changed.


From the sol9 doc set:


You shouldn't be using the Solaris *9* doc set.  Use the Solaris 10 docs,
specifically:
  Solaris 10 System Administrator Collection >>
System Administration Guide: Devices and File Systems >>
  11.  Managing Disks (Overview)  >>
What's New in Disk Management in the Solaris 10 Release?
  http://docs.sun.com/app/docs/doc/817-5093

and
  Solaris 10 System Administrator Collection >>
Solaris ZFS Administration Guide >>
  4.  Managing ZFS Storage Pools  >>
Components of a ZFS Storage Pool
  http://docs.sun.com/app/docs/doc/819-5461

Also, please pester your mobo vendor to get with the times... :-)
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread Casper . Dik

>I suspect a lack of an MBR could cause some BIOS implementations to  
>barf ..

Why?

Zeroed disks don't have that issue either.

What appears to be happening is rather that RAID controllers attempt
to interpret the data in the EFI label as their proprietary
"hardware RAID" labels.  At least, it seems to be a problem
with internal RAID controllers only.

In my experience, removing the disks from the boot sequence was
not enough; you need to disable the disks in the BIOS.

The SCSI disks with EFI labels in the same system caused no
issues at all; but the disks connected to the on-board RAID
did have issues.

So what you need to do is:

- remove the controllers from the probe sequence
- disable the disks

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread Jonathan Edwards

On Nov 29, 2006, at 10:41, [EMAIL PROTECTED] wrote:

This is a problem since how can anyone use ZFS on a PC???  My  
motherboard is a newly minted AM2 w/ all the latest firmware.  I  
disabled boot detection on the sata channels and it still refuses  
to boot.  I had to purchase an external SATA enclosure to fix the  
drives.  This seems to me to be a serious problem.  I put build 47  
and 50 on there with the same issue.


A serious problem *IN YOUR BIOS*.

You will need to format the disks with ordinary PC (fdisk) labels,
create Solaris partitions on them, and give those to ZFS.


take a look at the EFI/GPT discussion here (apple):
http://www.roughlydrafted.com/RD/Home/7CC25766-EF64-4D85-AD37-BCC39FBD2A4F.html


I suspect a lack of an MBR could cause some BIOS implementations to  
barf ..


Does our fdisk put an MBR on the disk?  If so, does the EFI vdev
labeling invalidate the MBR?


Jonathan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-29 Thread Richard Elling

Jason J. W. Williams wrote:

Hi Richard,

Originally, my thinking was I'd like drop one member out of a 3 member
RAID-Z and turn it into a RAID-1 zpool.


You would need to destroy the pool to do this -- requiring the data to
be copied twice.


Although, at the moment I'm not sure.


So many options, so little time... :-)


Currently, I have 3 volume groups in my array with 4 disks each (total
12 disks). These VGs are sliced into 3 volumes each. I then have two
database servers using one LUN from each of the 3 VGs RAID-Z'd
together. For redundancy it's great, for performance it's pretty bad.

One of the major issues is the disk seek contention between the
servers since they're all using the same disks, and RAID-Z tries to
utilize all the devices it has access to on every write.


This is difficult to pin down.  The disks cache and the RAID controller
caches.  So while it is true that you would have contention, it is difficult
to predict what effect, if any, the hosts would see.


What I thought I'd move to was 6 RAID-1 VGs on the array, and assign
the VGs to each server via a one-device striped zpool. However, given
the fact that ZFS will kernel panic in the event of bad data, I'm
reconsidering how to lay it out.


NB: all other file systems will similarly panic.  We get spoiled to
some extent because there are errors where ZFS won't panic.  In the
future, there will be more errors that ZFS can handle without panicking.


Essentially I've got 12 disks to work with.

Anyway, long form of trying to convert from RAID-Z to RAID-1. Any help
is much appreciated.


send/receive = copy/copy = backup/restore
It may be possible to do this as a rolling reconfiguration.
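A minimal sketch of the send/receive leg, assuming a dataset tank/db and a
destination pool newpool (names are illustrative):

  zfs snapshot tank/db@migrate
  zfs send tank/db@migrate | zfs receive newpool/db
  # verify the copy, then destroy and rebuild the old pool in the
  # new layout and migrate back the same way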
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: system wont boot after zfs

2006-11-29 Thread David Elefante
I had this happen on three different motherboards.  So it seems there should
be a procedure in the documentation stating that if your BIOS doesn't support
EFI labels, you need to give ZFS a partition (slice) rather than the whole-disk
overlay, which otherwise causes the BIOS to hang reading the drive at boot.
Most PC BIOSes do not support EFI at this point, so this can impact the larger
community.

Having that documentation would have saved me 30 hours at least, and I only 
hope that you take this as positive feedback and integrate it into the doc set. 
 I have ZFS working on my Ultra 20 just fine, and that is what confused me when 
I was working with my x86 box.  It says that EFI is not supported on IDE disks 
(SATA drive), but I'm assuming that this has changed.

From the sol9 doc set:

Restrictions of the EFI Disk Label

Keep the following restrictions in mind when determining whether using disks
greater than 1 terabyte is appropriate for your environment:

  * The SCSI driver, ssd, currently only supports up to 2 terabytes. If you
    need greater disk capacity than 2 terabytes, use a volume management
    product like Solaris Volume Manager to create a larger device.

  * Layered software products intended for systems with EFI-labeled disks
    might be incapable of accessing a disk with an EFI disk label.

  * A disk with an EFI disk label is not recognized on systems running
    previous Solaris releases.

  * The EFI disk label is not supported on IDE disks.

  * You cannot boot from a disk with an EFI disk label.

  * You cannot use the Solaris Management Console's Disk Manager Tool to
    manage disks with EFI labels. Use the format utility or the Solaris
    Management Console's Enhanced Storage Tool to manage disks with EFI
    labels, after you use the format utility to partition the disk.

  * The EFI specification prohibits overlapping slices. The whole disk is
    represented by cxtydz.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-29 Thread Dick Davies

On 29/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:

On 28/11/06, Terence Patrick Donoghue <[EMAIL PROTECTED]> wrote:
> Is there a difference - Yep,
>
> 'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
> options
> whereas
> 'none' tells ZFS not to mount the ZFS filesystem at all. Then you would
> need to manually mount the ZFS using 'zfs set mountpoint=/mountpoint
> poolname/fsname' to get it mounted.

Thanks Terence - now you've explained it, re-reading the manpage
makes more sense :)

This is plain wrong though:

"  Zones
 A ZFS file system can be added to a non-global zone by using
 zonecfg's  "add  fs"  subcommand.  A ZFS file system that is
 added to a non-global zone must have its mountpoint property
 set to legacy."

It has to be 'none' or it can't be delegated. Could someone change that?


I've had one last go at understanding what the hell is going on,
and what's *really* being complained about is that the mountpoint
property is inherited (regardless of whether the value is 'none' or 'legacy').

Explicitly setting the mountpoint lets the zone boot.
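i.e. something along these lines, using the dataset names from the transcript
elsewhere in the thread:

  # give the delegated dataset an explicit (non-inherited) mountpoint
  zfs set mountpoint=none tank/delegated/ganesh
  zoneadm -z ganesh boot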


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-29 Thread Cindy Swearingen

Hi Betsy,

Yes, part of this is a documentation problem.

I recently documented the find -inum scenario in the community version
of the admin guide. Please see page 156 (well, for next time), here:


http://opensolaris.org/os/community/zfs/docs/

We're working on the larger issue as well.

Cindy




Elizabeth Schwartz wrote:
Well, I fixed the HW but I had one bad file, and the problem was that
ZFS was saying "delete the pool and restore from tape" when, it turns
out, the answer is just to find the file with the bad inode, delete it,
clear the device, and scrub.  Maybe more of a documentation problem, but
it sure is disconcerting to have a file system threatening to give up
the game over one bad file (and the real irony: it was a file in
someone's TRASH!)


Anyway I'm back in business without a restore (and with a rebuilt RAID) 
but yeesh, it sure took a lot of escalating to get to the point where 
someone knew to tell me to do a find -inum.
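(For the record, the recovery being described boils down to something like
this; the pool name and inode number are illustrative:)

  zpool status -v tank            # identifies the object/inode of the bad file
  find /tank -inum 12345 -print   # locate the file by inode number
  rm /tank/path/to/the/bad/file   # illustrative path
  zpool clear tank                # clear the error state on the device
  zpool scrub tank                # confirm the rest of the pool is clean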





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread Casper . Dik
>This is a problem since how can anyone use ZFS on a PC???  My motherboard is
>a newly minted AM2 w/ all the latest firmware.  I disabled boot detection on
>the sata channels and it still refuses to boot.  I had to purchase an external
>SATA enclosure to fix the drives.  This seems to me to be a serious problem.
>I put build 47 and 50 on there with the same issue.

A serious problem *IN YOUR BIOS*.

You will need to format the disks with ordinary PC (fdisk) labels,
create Solaris partitions on them, and give those to ZFS.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread James McPherson

On 11/30/06, David Elefante <[EMAIL PROTECTED]> wrote:

I had the same thing happen to me twice on my x86 box.  I
installed ZFS (RaidZ) on my enclosure with four drives and
upon reboot the bios hangs upon detection of the newly EFI'd
drives.  I've already RMA'd 4 drives to seagate and the new
batch was frozen as well.  I was suspecting my enclosure,
but I was suspicious when it only went bye bye after installing ZFS.

This is a problem since how can anyone use ZFS on a PC???
My motherboard is a newly minted AM2 w/ all the latest
firmware.  I disabled boot detection on the sata channels and
it still refuses to boot.  I had to purchase an external SATA
enclosure to fix the drives.  This seems to me to be a serious
problem.  I put build 47 and 50 on there with the same issue.




Yes, this is a serious problem. It's a problem with your motherboard
bios, which is clearly not up to date. The Sun Ultra-20 bios was
updated with a fix for this issue back in May.

Until you have updated your bios, you will need to destroy the
EFI labels, write SMI labels to the disks, and create slices on those
disks which are the size that you want to devote to ZFS. Then you
can specify the slice name when you run your zpool create operation.
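A rough sketch of that procedure on x86 (the disk name is illustrative, and
format's label menu is interactive):

  fdisk -B /dev/rdsk/c2t0d0p0     # lay down a default Solaris fdisk partition
  format -e c2t0d0                # 'label' -> choose SMI, then size slice 0
  zpool create tank c2t0d0s0      # hand ZFS the slice, not the whole disk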

This has been covered in the ZFS discussion lists several times, and
a quick google search should have found the answer for you.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread Toby Thain


On 29-Nov-06, at 9:30 AM, David Elefante wrote:

I had the same thing happen to me twice on my x86 box.  I installed  
ZFS (RaidZ) on my enclosure with four drives and upon reboot the  
bios hangs upon detection of the newly EFI'd drives.  ...  This  
seems to me to be a serious problem.


Indeed. Yay for PC BIOS.

--Toby

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-29 Thread Toby Thain


On 29-Nov-06, at 8:53 AM, Brian Hechinger wrote:


On Tue, Nov 28, 2006 at 10:48:46PM -0500, Toby Thain wrote:


Her original configuration wasn't redundant, so she should expect
this kind of manual recovery from time to time. Seems a logical
conclusion to me? Or is this one of those once-in-a-lifetime strikes?


That's not an entirely true statement.  Her configuration is redundant
from a traditional disk subsystem point of view.  I think the problem
here is that the old disk subsystem mindsets no longer apply with the
way something like ZFS works.


That is very true from what I've seen. ZFS definitely has a problem  
cracking the old-think, but then any generational shift does,  
historically! (I won't bore with other examples.)




This is going to be the largest stumbling
block of all of them I believe, not anything technical.

If I had the money and time, I'd build a hardware RAID controller that
could do ZFS natively.


We already have one: Thumper. :)

But in terms of replacing the traditional RAID subsystem: I don't see  
how such a design could address faults between the isolated  
controller and the host (in the way that software ZFS does). Am I  
missing something in your idea?


The "old" think is that it is sufficient to have a very complex and  
expensive RAID controller which claims to be reliable storage. But of  
course it's not: No matter how excellent your subsystem is, it's  
still isolated by unreliable components (and non-checksummed RAID is  
inherently at risk anyway).


--Toby


It would be dead simple (*I* think anyway) to make
it transparent to the ZFS layer.  ;)

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: system wont boot after zfs

2006-11-29 Thread David Elefante
I had the same thing happen to me twice on my x86 box.  I installed ZFS (RAID-Z)
on my enclosure with four drives, and upon reboot the BIOS hangs on detection
of the newly EFI'd drives.  I've already RMA'd 4 drives to Seagate and the new
batch was frozen as well.  I was suspecting my enclosure, but became suspicious
when it only went bye-bye after installing ZFS.

This is a problem since how can anyone use ZFS on a PC???  My motherboard is a 
newly minted AM2 w/ all the latest firmware.  I disabled boot detection on the 
sata channels and it still refuses to boot.  I had to purchase an external SATA 
enclosure to fix the drives.  This seems to me to be a serious problem.  I put 
build 47 and 50 on there with the same issue.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-29 Thread Brian Hechinger
On Tue, Nov 28, 2006 at 10:48:46PM -0500, Toby Thain wrote:
> 
> Her original configuration wasn't redundant, so she should expect  
> this kind of manual recovery from time to time. Seems a logical  
> conclusion to me? Or is this one of those once-in-a-lifetime strikes?

That's not an entirely true statement.  Her configuration is redundant
from a traditional disk subsystem point of view.  I think the problem
here is that the old disk subsystem mindsets no longer apply with the
way something like ZFS works.  This, I believe, is going to be the biggest
stumbling block of all, not anything technical.

If I had the money and time, I'd build a hardware RAID controller that
could do ZFS natively.  It would be dead simple (*I* think anyway) to make
it transparent to the ZFS layer.  ;)

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: 'legacy' vs 'none'

2006-11-29 Thread Ceri Davies
On Wed, Nov 29, 2006 at 10:25:18AM +, Ceri Davies wrote:
> On Tue, Nov 28, 2006 at 04:48:19PM +, Dick Davies wrote:
> > Just spotted one - is this intentional?
> > 
> > You can't delegate a dataset to a zone if mountpoint=legacy.
> > Changing it to 'none' works fine.
> > 
> > 
> >   vera / # zfs create tank/delegated
> >   vera / # zfs get mountpoint tank/delegated
> >   NAMEPROPERTYVALUE   SOURCE
> >   tank/delegated  mountpoint  legacy  inherited from tank
> >   vera / # zfs create tank/delegated/ganesh
> >   vera / # zfs get mountpoint tank/delegated/ganesh
> >   NAME   PROPERTYVALUE  SOURCE
> >   tank/delegated/ganesh  mountpoint  legacy inherited from 
> >   tank
> >   vera / # zonecfg -z ganesh
> >   zonecfg:ganesh> add dataset
> >   zonecfg:ganesh:dataset> set name=tank/delegated/ganesh
> >   zonecfg:ganesh:dataset> end
> >   zonecfg:ganesh> commit
> >   zonecfg:ganesh> exit
> >   vera / # zoneadm -z ganesh boot
> >   could not verify zfs dataset tank/delegated/ganesh: mountpoint cannot be 
> > inherited
> >   zoneadm: zone ganesh failed to verify
> >   vera / # zfs set mountpoint=none tank/delegated/ganesh
> >   vera / # zoneadm -z ganesh boot
> >   vera / #
> 
> Does it actually boot then?  Eric is saying that the filesystem cannot
> be mounted in the 'none' case, so presumably it doesn't.

Not to worry, I see what you're doing now.

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: 'legacy' vs 'none'

2006-11-29 Thread Ceri Davies
On Tue, Nov 28, 2006 at 04:48:19PM +, Dick Davies wrote:
> Just spotted one - is this intentional?
> 
> You can't delegate a dataset to a zone if mountpoint=legacy.
> Changing it to 'none' works fine.
> 
> 
>   vera / # zfs create tank/delegated
>   vera / # zfs get mountpoint tank/delegated
>   NAMEPROPERTYVALUE   SOURCE
>   tank/delegated  mountpoint  legacy  inherited from tank
>   vera / # zfs create tank/delegated/ganesh
>   vera / # zfs get mountpoint tank/delegated/ganesh
>   NAME   PROPERTYVALUE  SOURCE
>   tank/delegated/ganesh  mountpoint  legacy inherited from 
>   tank
>   vera / # zonecfg -z ganesh
>   zonecfg:ganesh> add dataset
>   zonecfg:ganesh:dataset> set name=tank/delegated/ganesh
>   zonecfg:ganesh:dataset> end
>   zonecfg:ganesh> commit
>   zonecfg:ganesh> exit
>   vera / # zoneadm -z ganesh boot
>   could not verify zfs dataset tank/delegated/ganesh: mountpoint cannot be 
> inherited
>   zoneadm: zone ganesh failed to verify
>   vera / # zfs set mountpoint=none tank/delegated/ganesh
>   vera / # zoneadm -z ganesh boot
>   vera / #

Does it actually boot then?  Eric is saying that the filesystem cannot
be mounted in the 'none' case, so presumably it doesn't.

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-29 Thread Dick Davies

On 28/11/06, Terence Patrick Donoghue <[EMAIL PROTECTED]> wrote:

Is there a difference - Yep,

'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
options
whereas
'none' tells ZFS not to mount the ZFS filesystem at all. Then you would
need to manually mount the ZFS using 'zfs set mountpoint=/mountpoint
poolname/fsname' to get it mounted.


Thanks Terence - now you've explained it, re-reading the manpage
makes more sense :)
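
(For anyone else reading the manpage cold: with mountpoint=legacy the mount is
driven by an /etc/vfstab entry, e.g. something like this for a hypothetical
dataset tank/fs:)

  #device to mount   device to fsck   mount point   FS type  fsck pass  mount at boot  mount options
  tank/fs            -                /export/fs    zfs      -          yes            -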

This is plain wrong though:

"  Zones
A ZFS file system can be added to a non-global zone by using
zonecfg's  "add  fs"  subcommand.  A ZFS file system that is
added to a non-global zone must have its mountpoint property
set to legacy."

It has to be 'none' or it can't be delegated. Could someone change that?



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-29 Thread Ceri Davies
On Tue, Nov 28, 2006 at 11:13:02AM -0800, Eric Schrock wrote:
> On Tue, Nov 28, 2006 at 06:06:24PM +, Ceri Davies wrote:
> > 
> > But you could presumably get that exact effect by not listing a
> > filesystem in /etc/vfstab.
> > 
> 
> Yes, but someone could still manually mount the filesystem using 'mount
> -F zfs ...'.  If you set the mountpoint to 'none', then it cannot be
> mounted, period.

Aha, that's the key then, thanks.
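
(i.e., a quick illustration of the difference, with a hypothetical dataset
tank/fs that is currently unmounted:)

  zfs set mountpoint=legacy tank/fs
  mount -F zfs tank/fs /mnt       # allowed: legacy means you manage the mount yourself
  umount /mnt
  zfs set mountpoint=none tank/fs
  mount -F zfs tank/fs /mnt       # refused: 'none' cannot be mounted at all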

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss