Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-29 Thread Dick Davies

On 28/11/06, Terence Patrick Donoghue [EMAIL PROTECTED] wrote:

Is there a difference - Yep,

'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
options
whereas
'none' tells ZFS not to mount the ZFS filesystem at all. Then you would
need to give it a mountpoint again, with 'zfs set mountpoint=/mountpoint
poolname/fsname', to get it mounted.
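
For illustration, a minimal sketch of the two settings (the pool, filesystem
and mountpoint names below are made up, not from this thread):

  # mountpoint=legacy: ZFS stops mounting the filesystem itself; you mount
  # it like any legacy filesystem, via /etc/vfstab or mount(1M):
  zfs set mountpoint=legacy tank/home
  mount -F zfs tank/home /export/home

  # mountpoint=none: ZFS does not mount the filesystem at all; setting a
  # real mountpoint again puts it back under ZFS's automatic mounting:
  zfs set mountpoint=none tank/home
  zfs set mountpoint=/export/home tank/home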


Thanks Terence - now you've explained it, re-reading the manpage
makes more sense :)

This is plain wrong though:

  Zones
A ZFS file system can be added to a non-global zone by using
zonecfg's  add  fs  subcommand.  A ZFS file system that is
added to a non-global zone must have its mountpoint property
set to legacy.

It has to be 'none' or it can't be delegated. Could someone change that?



--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: 'legacy' vs 'none'

2006-11-29 Thread Ceri Davies
On Tue, Nov 28, 2006 at 04:48:19PM +, Dick Davies wrote:
 Just spotted one - is this intentional?
 
 You can't delegate a dataset to a zone if mountpoint=legacy.
 Changing it to 'none' works fine.
 
 
  vera / # zfs create tank/delegated
  vera / # zfs get mountpoint tank/delegated
  NAME            PROPERTY    VALUE   SOURCE
  tank/delegated  mountpoint  legacy  inherited from tank
  vera / # zfs create tank/delegated/ganesh
  vera / # zfs get mountpoint tank/delegated/ganesh
  NAME                   PROPERTY    VALUE   SOURCE
  tank/delegated/ganesh  mountpoint  legacy  inherited from tank
  vera / # zonecfg -z ganesh
  zonecfg:ganesh> add dataset
  zonecfg:ganesh:dataset> set name=tank/delegated/ganesh
  zonecfg:ganesh:dataset> end
  zonecfg:ganesh> commit
  zonecfg:ganesh> exit
  vera / # zoneadm -z ganesh boot
  could not verify zfs dataset tank/delegated/ganesh: mountpoint cannot be inherited
  zoneadm: zone ganesh failed to verify
  vera / # zfs set mountpoint=none tank/delegated/ganesh
  vera / # zoneadm -z ganesh boot
  vera / #

Does it actually boot then?  Eric is saying that the filesystem cannot
be mounted in the 'none' case, so presumably it doesn't.

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-29 Thread Brian Hechinger
On Tue, Nov 28, 2006 at 10:48:46PM -0500, Toby Thain wrote:
 
 Her original configuration wasn't redundant, so she should expect  
 this kind of manual recovery from time to time. Seems a logical  
 conclusion to me? Or is this one of those once-in-a-lifetime strikes?

That's not an entirely true statement.  Her configuration is redundant
from a traditional disk-subsystem point of view.  I think the problem
here is that the old disk-subsystem mindsets no longer apply to the
way something like ZFS works.  I believe this is going to be the largest
stumbling block of all, not anything technical.

If I had the money and time, I'd build a hardware RAID controller that
could do ZFS natively.  It would be dead simple (*I* think anyway) to make
it transparent to the ZFS layer.  ;)

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: system won't boot after zfs

2006-11-29 Thread David Elefante
I had the same thing happen to me twice on my x86 box.  I installed ZFS (RAID-Z)
on my enclosure with four drives, and upon reboot the BIOS hangs upon detection
of the newly EFI-labelled drives.  I've already RMA'd four drives to Seagate, and
the new batch was frozen as well.  I suspected my enclosure, but became suspicious
when it only went bye-bye after installing ZFS.

This is a problem, since how can anyone use ZFS on a PC???  My motherboard is a
newly minted AM2 board with all the latest firmware.  I disabled boot detection on
the SATA channels and it still refuses to boot.  I had to purchase an external SATA
enclosure to fix the drives.  This seems to me to be a serious problem.  I put
builds 47 and 50 on there with the same issue.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-29 Thread Toby Thain


On 29-Nov-06, at 8:53 AM, Brian Hechinger wrote:


On Tue, Nov 28, 2006 at 10:48:46PM -0500, Toby Thain wrote:


Her original configuration wasn't redundant, so she should expect
this kind of manual recovery from time to time. Seems a logical
conclusion to me? Or is this one of those once-in-a-lifetime strikes?


That's not an entirely true statement.  Her configuration is redundant
from a traditional disk subsystem point of view.  I think the problem
here is that the old disk subsystem mindsets no longer apply with the
way something like ZFS works.


That is very true from what I've seen. ZFS definitely has a problem  
cracking the old-think, but then any generational shift does,  
historically! (I won't bore with other examples.)




This is going to be the largest stumbling
block of all of them I believe, not anything technical.

If I had the money and time, I'd build a hardware RAID controller that
could do ZFS natively.


We already have one: Thumper. :)

But in terms of replacing the traditional RAID subsystem: I don't see  
how such a design could address faults between the isolated  
controller and the host (in the way that software ZFS does). Am I  
missing something in your idea?


The old-think is that it is sufficient to have a very complex and
expensive RAID controller which claims to be reliable storage. But of
course it's not: no matter how excellent your subsystem is, it's
still separated from the host by unreliable components (and
non-checksummed RAID is inherently at risk anyway).


--Toby


It would be dead simple (*I* think anyway) to make
it transparent to the ZFS layer.  ;)

-brian


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system won't boot after zfs

2006-11-29 Thread Toby Thain


On 29-Nov-06, at 9:30 AM, David Elefante wrote:

I had the same thing happen to me twice on my x86 box.  I installed  
ZFS (RaidZ) on my enclosure with four drives and upon reboot the  
bios hangs upon detection of the newly EFI'd drives.  ...  This  
seems to me to be a serious problem.


Indeed. Yay for PC BIOS.

--Toby

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system won't boot after zfs

2006-11-29 Thread James McPherson

On 11/30/06, David Elefante [EMAIL PROTECTED] wrote:

I had the same thing happen to me twice on my x86 box.  I
installed ZFS (RaidZ) on my enclosure with four drives and
upon reboot the bios hangs upon detection of the newly EFI'd
drives.  I've already RMA'd 4 drives to seagate and the new
batch was frozen as well.  I was suspecting my enclosure,
but I was suspicious when it only went bye bye after installing ZFS.

This is a problem since how can anyone use ZFS on a PC???
My motherboard is a newly minted AM2 w/ all the latest
firmware.  I disabled boot detection on the sata channels and
it still refuses to boot.  I had to purchase an external SATA
enclosure to fix the drives.  This seems to me to be a serious
problem.  I put build 47 and 50 on there with the same issue.




Yes, this is a serious problem. It's a problem with your motherboard
BIOS, which is clearly not up to date. The Sun Ultra 20 BIOS was
updated with a fix for this issue back in May.

Until you have updated your BIOS, you will need to destroy the
EFI labels, write SMI labels to the disks, and create slices on those
disks of the size that you want to devote to ZFS. Then you
can specify the slice name when you run your zpool create operation.
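
For illustration, a rough sketch of those steps (the device names and slice
layout below are invented, not from this thread):

  # Destroy the EFI label and write an SMI (VTOC) label instead: in
  # format's expert mode, select the disk, run 'label' and choose SMI.
  format -e c2t0d0
  # Still in format, use the 'partition' menu to create a slice (say s0)
  # of the size you want to devote to ZFS, then label and quit.
  # Finally, build the pool on slices rather than whole disks, so that
  # ZFS does not write a new EFI label:
  zpool create tank raidz c2t0d0s0 c2t1d0s0 c2t2d0s0 c2t3d0s0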

This has been covered on the ZFS discussion lists several times, and
a quick Google search should have found the answer for you.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system won't boot after zfs

2006-11-29 Thread Casper . Dik
This is a problem since how can anyone use ZFS on a PC???  My motherboard is a
newly minted AM2 w/ all the latest firmware.  I disabled boot detection on the
sata channels and it still refuses to boot.  I had to purchase an external SATA
enclosure to fix the drives.  This seems to me to be a serious problem.  I put
build 47 and 50 on there with the same issue.

A serious problem *IN YOUR BIOS*.

You will need to format the disks, put ordinary PC (fdisk) labels on them,
create Solaris partitions on those, and give those to ZFS.
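
A minimal sketch of that route (the device names are illustrative only):

  # Write an ordinary PC (fdisk) label with a single Solaris partition:
  fdisk -B /dev/rdsk/c2t0d0p0
  # Use format(1M) to lay out slices inside the Solaris partition, then
  # hand a slice, not the whole disk, to the pool:
  zpool create tank c2t0d0s0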

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-29 Thread Cindy Swearingen

Hi Betsy,

Yes, part of this is a documentation problem.

I recently documented the find -inum scenario in the community version
of the admin guide. Please see page 156 (well, for next time), here:


http://opensolaris.org/os/community/zfs/docs/

We're working on the larger issue as well.

Cindy




Elizabeth Schwartz wrote:
Well, I fixed the HW, but I had one bad file, and the problem was that
ZFS was saying to delete the pool and restore from tape when, it turns
out, the answer is just to find the file with the bad inode, delete it,
clear the device and scrub.  Maybe more of a documentation problem, but
it sure is disconcerting to have a file system threatening to give up
the game over one bad file (and the real irony: it was a file in
someone's TRASH!)


Anyway, I'm back in business without a restore (and with a rebuilt RAID),
but yeesh, it sure took a lot of escalating to get to the point where
someone knew to tell me to do a find -inum.
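
For reference, a rough sketch of that recovery sequence (the pool name, path
and inode number below are invented for illustration):

  zpool status -v tank         # reports the damaged object, e.g. tank/home:0x3e81
  find /tank/home -inum 16001  # 0x3e81 = 16001 decimal; locate the file by inode
  rm /tank/home/user/.Trash/badfile   # delete the corrupted file
  zpool clear tank             # clear the error counts on the device(s)
  zpool scrub tank             # verify the rest of the pool is healthy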





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: system won't boot after zfs

2006-11-29 Thread Casper . Dik

I suspect a lack of an MBR could cause some BIOS implementations to  
barf ..

Why?

Zeroed disks don't have that issue either.

What appears to be happening is more that RAID controllers attempt
to interpret the data in the EFI label as their proprietary
hardware RAID labels.  At least, it seems to be a problem
with internal RAID controllers only.

In my experience, removing the disks from the boot sequence was
not enough; you need to disable the disks in the BIOS.

The SCSI disks with EFI labels in the same system caused no
issues at all; but the disks connected to the on-board RAID
did have issues.

So what you need to do is:

- remove the controllers from the probe sequence
- disable the disks

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: system won't boot after zfs

2006-11-29 Thread Richard Elling

David Elefante wrote:

I had this happen on three different motherboards.  So it seems that there
should be a procedure in the documentation stating that if your BIOS doesn't
support EFI labels, then you need to write ZFS to a partition (slice), not the
overlay (whole-disk) slice, which otherwise causes the BIOS to hang reading the
drive on boot-up.  Most PC BIOSes do not support EFI at this point, so this can
impact the larger community.

Having that documentation would have saved me 30 hours at least, and I only
hope that you take this as positive feedback and integrate it into the doc set.
I have ZFS working on my Ultra 20 just fine, and that is what confused me when
I was working with my x86 box.  The docs say that EFI is not supported on IDE
disks (mine is a SATA drive), but I'm assuming that this has changed.


From the sol9 doc set:


You shouldn't be using the Solaris *9* doc set.  Use the Solaris 10 docs,
specifically:
  Solaris 10 System Administrator Collection 
System Administration Guide: Devices and File Systems 
  11.  Managing Disks (Overview)  
What's New in Disk Management in the Solaris 10 Release?
  http://docs.sun.com/app/docs/doc/817-5093

and
  Solaris 10 System Administrator Collection 
Solaris ZFS Administration Guide 
  4.  Managing ZFS Storage Pools  
Components of a ZFS Storage Pool
  http://docs.sun.com/app/docs/doc/819-5461

Also, please pester your mobo vendor to get with the times... :-)
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Jim Hranicky
I know this isn't necessarily ZFS-specific, but after I reboot I spin the
drives back up, and nothing I do (devfsadm, disks, etc.) can get them seen
again until the next reboot.

I've got some older SCSI drives in an old Andataco Gigaraid enclosure which
I thought supported hot-swap, but I seem unable to hot-swap them in. The PC
has an Adaptec 39160 card in it and I'm running Nevada b51. Is this not a
setup that can support hot-swap? Or is there something I have to do other
than devfsadm to get the SCSI bus rescanned?
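
For what it's worth, a sketch of the usual rescan attempts on Solaris (the
controller name c3 is made up; none of this is from the thread):

  devfsadm -Cv               # clean up stale /dev links and create new ones
  cfgadm -al                 # list attachment points; find the SCSI controller
  cfgadm -c configure c3     # configure all occupants of that controller
  cfgadm -c configure c3::dsk/c3t1d0   # or just the one target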
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-29 Thread Sanjeev Bagewadi

Jim,

That is good news!  Let us know how it goes.


Regards,
Sanjeev.
PS: I am out of the office for a couple of days.

Jim Hranicky wrote:


OK, spun down the drives again. Here's that output:

 http://www.cise.ufl.edu/~jfh/zfs/threads
   



I just realized that I changed the configuration, so that doesn't reflect 
a system with spares, sorry. 

However, I reinitialized the pool and spun down one of the drives and 
everything is working as it should:


  pool: zmir
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Wed Nov 29 16:29:53 2006
config:

        NAME          STATE     READ WRITE CKSUM
        zmir          DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c0t0d0    ONLINE       0     0     0
            spare     DEGRADED     0     0     0
              c3t1d0  UNAVAIL     10 28.88     0  cannot open
              c3t3d0  ONLINE       0     0     0
        spares
          c3t3d0      INUSE     currently in use
          c3t4d0      AVAIL

errors: No known data errors

I'm just not sure if it will always work. 


I'll try a few different configs and see what happens.


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 




--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss