Re: [zfs-discuss] Proposal: user-defined properties

2006-08-18 Thread Constantin Gonzalez Schmitz
Hi Eric,

this is a great proposal and I'm sure this is going to help administrators
a lot.

One small question below:

> Any property which contains a colon (':') is defined as a 'user
> property'.  The name can contain alphanumeric characters, plus the
> following special characters: ':', '-', '.', '_'.  User properties are
> always strings, and are always inherited.  No additional validation is
> done on the contents.  Properties are set and retrieved through the
> standard mechanisms: 'zfs set', 'zfs get', and 'zfs inherit'.

>   # zfs list -o name,local:department
>   NAME  LOCAL:DEPARTMENT
>   test  12345
>   test/foo  12345
>   # zfs set local:department=67890 test/foo
>   # zfs inherit local:department test
>   # zfs get -s local -r all test 
>   NAME  PROPERTY  VALUE  SOURCE
>   test/foo  local:department  12345  local
>   # zfs list -o name,local:department
>   NAME  LOCAL:DEPARTMENT
>   test  -
>   test/foo  12345

The example suggests that properties may be case-insensitive. Is that the case
(sorry for the pun)? If so, it should be noted in the user-defined property
definition, just for clarity.

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91                   http://blogs.sun.com/constantin/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Dick Davies

On 17/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:


Dick Davies wrote:



> That's excellent news Lori, thanks to everyone who's working
> on this. Are you planning to use a single pool,
> or an 'os pool/application pool' split?



Thus I think of the most important split as the "os pool/data pool"
split.  Maybe that's what you meant.


That's it, yes :)
I should probably have said service rather than application.


.. limitations
in the boot PROMs cause us to place restrictions on the devices
you can place in a root pool.  (root mirroring WILL be supported,
however).


Does boot prom support mean this will be SPARC only? That's
interesting (last time I tried Tabriz' hack, it was x86 only).

Or is x86 zfs root going to need a grub /boot partition on one
of the disks?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Want to try ZFS, but "format" got message

2006-08-18 Thread Xinfeng Liu
Hi, 

I'm trying ZFS on my laptop (Solaris 10 6/06 installed) and I want to assign two
slices for ZFS. But when I type "format", I get an error
message.

Although "prtvtoc" can give the drive type info, but I couldn't set the drive 
type because the root filesystem is mounted on the drive.

Any suggestions to fix this problem?

Thanks in advance,
Xinfeng
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Want to try ZFS, but "format" got message

2006-08-18 Thread Sean McGrath - Sun Microsystems Ireland
Xinfeng Liu stated:
< Hi, 
< 
< I'm trying ZFS on my laptop (Solaris 10 6/06 installed), I want to assign 2 
slices for using ZFS. But when I type "format", I got   
message. 


  You've probably allocated the whole disk to Solaris and have things mounted.
  To play around with zfs on a laptop, try using files instead, eg:

mkfile 256m file1
mkfile 256m file2

zpool create <poolname> `pwd`/file1 `pwd`/file2

 etc...
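
  Once the pool exists you can create filesystems in it and, when you're done
  experimenting, tear everything down again (same <poolname> as above):

zfs create <poolname>/scratch
zpool destroy <poolname>
rm file1 file2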

Regards,


< 
< Although "prtvtoc" can give the drive type info, but I couldn't set the drive 
type because the root filesystem is mounted on the drive.
< 
< Any suggestions to fix this problem?
< 
< Thanks in advance,
< Xinfeng
<  
<  
< This message posted from opensolaris.org
< ___
< zfs-discuss mailing list
< zfs-discuss@opensolaris.org
< http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Sean.
.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] delete acl not working on zfs.v3?

2006-08-18 Thread ethan gunword
We gave user foo the right to add folders (by default this user cannot delete
anything). After that we gave foo the right to create files, and then foo gained
the ability to delete everything. How is that possible?
Even though we add another rule like "0:user:foo:delete_child/delete:deny",
it still does not work. Could somebody please explain this strange situation?

The result we need is: the user can create files and folders, but cannot delete
them. That's it.

PS: we tried the same setup on NTFS (Windows 2003) and HFS+ (Apple Mac OS X) as well.

thanks

bash-3.00# zpool create tank c0d0s7
bash-3.00# zfs create tank/fs

bash-3.00# cd /tank/fs
bash-3.00# mkdir test

useradd foo
passwd foo

bash-3.00# chmod A+user:foo:add_file/add_subdirectory:allow test
bash-3.00# chmod A+user:foo:delete_child/delete:deny test

bash-3.00# ls -v
total 3
drwxr-xr-x+  3 root root   4 Aug 18 15:30 test
 0:user:foo:delete_child/delete:deny
 1:user:foo:add_file/write_data/add_subdirectory/append_data:allow
 2:owner@::deny
 3:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
 /append_data/write_xattr/execute/write_attributes/write_acl
 /write_owner:allow
 4:group@:add_file/write_data/add_subdirectory/append_data:deny
 5:group@:list_directory/read_data/execute:allow
 6:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
 /write_attributes/write_acl/write_owner:deny
 7:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
 /read_acl/synchronize:allow
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Proposal: user-defined properties

2006-08-18 Thread Eric Schrock
On Fri, Aug 18, 2006 at 09:36:20AM +0200, Constantin Gonzalez Schmitz wrote:
> 
> the example suggests that properties may be case-insensitive. Is that the case
> (sorry for the pun)? If so, that should be noted in the user defined property
> definition just for clarity.
> 

Good point.  The property names are case-insensitive (and internally are
always converted to lower-case), but the property values can be
anything.
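
For example (dataset name here is hypothetical), if the conversion works as
described, setting a property with mixed case and asking for it in lower case
should refer to the same thing:

  # zfs set Local:Department=12345 tank/home
  # zfs get local:department tank/home
  NAME       PROPERTY          VALUE  SOURCE
  tank/home  local:department  12345  local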

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Home Server with ZFS

2006-08-18 Thread Ben Short
Hi, 

I'm planning to build a home server that will host my svn repository, fileserver,
mailserver and webserver.
This is my plan:

I have an old Dell Precision 420 with dual 933MHz PIII CPUs. Inside this I have one
SCSI 9.1G HDD and two 80G IDE HDDs. I am going to install Solaris 10 on the SCSI
drive and have it as the boot disk. I will then create a ZFS mirror on the two
IDE drives. Since I don't want to mix internet-facing services (mailserver,
webservers) with my internal services (svn server, fileserver), I am going to
use zones to isolate them. Not sure how many zones just yet.

In this configuration I hope to have gained the protection of having the
services mirrored (I will perform backups also).

What I don't know is what happens if the boot disk dies. Can I replace it,
install Solaris again and get it to see the ZFS mirror?
Also, what happens if one of the IDE drives fails? Can I plug another one in and
run some ZFS commands to make it part of the mirror?

Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: commercial backup software and zfs

2006-08-18 Thread Phil Coleman
Here are some comments received recently regarding Legato and ZFS:

>From the 7.3.1 release supplement manual
Restrictions associated with the ZFS file system (LGTpa88264)
The following are restrictions associated with the ZFS file system:
- Only an Administrator with full access to ZFS directories may recover files.
- ZFS files can be restored to a UFS file system. When restoring ZFS files to a
  UFS file system, only the permission information is retained; the access
  control entries are not retained.
- ZFS snapshots and the files in ZFS directories are not backed up or restored
  when restoring the original files.
- ZFS file systems must be explicitly specified in the client's save set
  attribute; they will not be recognized if you use the ALL keyword.

Additionally, according to LGTpa83909, the ZFS filesystem is supported in 
NetWorker 7.3.2 (the release date is currently unknown but expected to be 
somewhere around Sept/Oct).

Also
Thank you for contacting Legato/EMC support. ZFS support is planned for NW 
7.3.2, which is at the moment in beta testing (see extract from the 7.3.2 
Release Notes below). Release is planned for end of September, but this has not 
been finalised:

Sun ZFS Qualification
The NetWorker software supports ZFS/NFSv4 ACLs. The NetWorker software
supports a migration to a ZFS environment.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Server with ZFS

2006-08-18 Thread David Dyer-Bennet

On 8/18/06, Ben Short <[EMAIL PROTECTED]> wrote:


I'm planning to build a home server that will host my svn repository, fileserver,
mailserver and webserver.
This is my plan:

I have an old Dell Precision 420 with dual 933MHz PIII CPUs. Inside this I have one
SCSI 9.1G HDD and two 80G IDE HDDs. I am going to install Solaris 10 on the SCSI
drive and have it as the boot disk. I will then create a ZFS mirror on the two
IDE drives. Since I don't want to mix internet-facing services (mailserver,
webservers) with my internal services (svn server, fileserver), I am going to
use zones to isolate them. Not sure how many zones just yet.

In this configuration I hope to have gained the protection of having the
services mirrored (I will perform backups also).

What I don't know is what happens if the boot disk dies. Can I replace it,
install Solaris again and get it to see the ZFS mirror?


As I understand it, this should be possible, but I haven't tried it and I'm
not an expert Solaris admin.  Some ZFS info is stored in a persistent
file on your system disk, and you may have to do a little dance to get
around that.  It's worth researching and practicing in advance :-).


Also, what happens if one of the IDE drives fails? Can I plug another one in and
run some ZFS commands to make it part of the mirror?


Yes.  This one I've tried -- in simulation rather than on real
hardware, but that's close enough to make me believe I actually know
the answer :-).

There's a command to replace a disk in a pool; it is "zpool replace
<pool> <old-device> <new-device>".

To answer this and many other questions you will no doubt have, you
want to download the Solaris 10 ZFS Administrators guide from Sun,
Part No: 819–5461–10.

Also continue to hang around this list, people who really know ZFS are
here and will help.  I try to pick off the really easy questions I'm
sure I can get right :-).
--
David Dyer-Bennet, , 
RKBA: 
Pics: 
Dragaera/Steven Brust: 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Home Server with ZFS

2006-08-18 Thread Rainer Heilke
I can't be specific with my reply to the second question, as I've never done 
it, but do a search for "resilvering". It is functionality that is supposed 
to be there.

As to the first question, absolutely! I have upgraded my internal server twice, 
and both times, I was able to see the old ZFS mirror. The first time I 
upgraded, I forgot to do the ZFS export, so I essentially simulated a server 
going belly-up. In this case, you need to force the ZFS import (if you've 
exported first, as in the case of a server life-cycle upgrade, you don't need 
to force the import). If I remember correctly (I'm at work right now--bear 
with me), the commands would be:

# zpool import <poolname>
or
# zpool import -f <poolname>

<poolname> is the name of the ZFS pool as it was on the server before the 
upgrade/failure. Both times, the zpool import created the appropriate 
mountpoint automagically.

IHTH

Rainer
PS I Love ZFS! :-)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Equivalent of command quot

2006-08-18 Thread David OLIVIER - Sun France - Sun Support Services




Hello,

Is there an equivalent of the quot command (summarize file system ownership;
/usr/sbin/quot -> ../lib/fs/ufs/quot) that works with ZFS?

It seems that this command only works on UFS file systems.

Thanks in advance for your answers,

Regards,
David



-- 
David OLIVIER
Support Engineer
Sun Microsystems, Inc.
8-10, avenue Morane Saulnier
78140 VELIZY
Phone x31744 / +33 1 34031744
Fax +33 1 34030100
Email [EMAIL PROTECTED]



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] delete acl not working on zfs.v3?

2006-08-18 Thread Mark Shellenbaum

ethan gunword wrote:

We gave user foo the right to add folders (by default this user cannot delete
anything). After that we gave foo the right to create files, and then foo gained
the ability to delete everything. How is that possible?
Even though we add another rule like "0:user:foo:delete_child/delete:deny",
it still does not work. Could somebody please explain this strange situation?

The result we need is: the user can create files and folders, but cannot delete
them. That's it.

PS: we tried the same setup on NTFS (Windows 2003) and HFS+ (Apple Mac OS X) as well.

thanks

bash-3.00# zpool create tank c0d0s7
bash-3.00# zfs create tank/fs

bash-3.00# cd /tank/fs
bash-3.00# mkdir test

useradd foo
passwd foo

bash-3.00# chmod A+user:foo:add_file/add_subdirectory:allow test
bash-3.00# chmod A+user:foo:delete_child/delete:deny test

bash-3.00# ls -v
total 3
drwxr-xr-x+  3 root root   4 Aug 18 15:30 test
 0:user:foo:delete_child/delete:deny
 1:user:foo:add_file/write_data/add_subdirectory/append_data:allow
 2:owner@::deny
 3:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
 /append_data/write_xattr/execute/write_attributes/write_acl
 /write_owner:allow
 4:group@:add_file/write_data/add_subdirectory/append_data:deny
 5:group@:list_directory/read_data/execute:allow
 6:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
 /write_attributes/write_acl/write_owner:deny
 7:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
 /read_acl/synchronize:allow
 



Delete permissions are kind of complicated.  The recommended NFSv4
enforcement for the ability to delete  an object is based on the
following chart:

---------------------------------------------------------------------
| Parent Dir          |        Target Object Permissions            |
| Permissions         |---------------------------------------------|
|                     | ACL Allows  | ACL Denies  | Delete          |
|                     | Delete      | Delete      | unspecified     |
---------------------------------------------------------------------
| ACL Allows          | Permit      | Permit      | Permit          |
| DELETE_CHILD        |             |             |                 |
---------------------------------------------------------------------
| ACL Denies          | Permit      | Deny        | Deny            |
| DELETE_CHILD        |             |             |                 |
---------------------------------------------------------------------
| ACL specifies only  | Permit      | Permit      | Permit          |
| allow write and     |             |             |                 |
| execute             |             |             |                 |
---------------------------------------------------------------------
| ACL denies write    | Permit      | Deny        | Deny            |
| and execute         |             |             |                 |
---------------------------------------------------------------------

This should mean that you are denied delete permission based on row two 
of the chart.  Unfortunately, the code proceeds on and then finds 
write/execute on the directory.  You picked up write when you added 
add_file to the ACL.  Once we find write/execute on the directory we are 
then on row 3 and access is granted.



I have opened bug 6461609 to address this problem.  Thanks for finding it.



  -Mark



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Server with ZFS

2006-08-18 Thread Richard Elling - PAE

Ben Short wrote:
Hi, 

I'm planning to build a home server that will host my svn repository, fileserver, mailserver and webserver.
This is my plan:


I have an old Dell Precision 420 with dual 933MHz PIII CPUs. Inside this I have one
SCSI 9.1G HDD and two 80G IDE HDDs. I am going to install Solaris 10 on the SCSI
drive and have it as the boot disk. I will then create a ZFS mirror on the two
IDE drives. Since I don't want to mix internet-facing services (mailserver,
webservers) with my internal services (svn server, fileserver), I am going to
use zones to isolate them. Not sure how many zones just yet.


Sounds reasonable to me.


In this configuration I hope to have gained the protection of having the
services mirrored (I will perform backups also).

What I don't know is what happens if the boot disk dies. Can I replace it,
install Solaris again and get it to see the ZFS mirror?


Yes.  You can "zpool import" the drives into the new Solaris environment.
I do this regularly, as I tend to upgrade regularly.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Server with ZFS

2006-08-18 Thread Constantin Gonzalez Schmitz
Hi,

>> What I don't know is what happens if the boot disk dies? Can I replace
>> it, install Solaris again and get it to see the ZFS mirror?
> 
> As I understand it, this should be possible, but I haven't tried it and I'm
> not an expert Solaris admin.  Some ZFS info is stored in a persistent
> file on your system disk, and you may have to do a little dance to get
> around that.  It's worth researching and practicing in advance :-).

IIRC, ZFS has all relevant information stored inside the pool, so you
should be able to install a new OS onto the replacement disk, then say
"zpool import" (possibly with -d and the devices where the mirror lives)
to re-import the pool.
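
A rough sketch of what that might look like (the pool name is hypothetical):

  zpool import -d /dev/dsk          # lists importable pools found on the devices
  zpool import -d /dev/dsk tank     # imports the pool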

But I haven't really tried it myself :).

All in all, ZFS is an excellent choice for a home server. I use ZFS as a video
storage for a digital set top box (quotas are really handy here), as a storage
for my music collection, as a backup storage for important data (including
photos), etc.

I'm currently juggling around four differently sized disks into a new config
with the goal of getting as much storage as possible out of them at a minimum
level of redundancy. An interesting, Tetris-like calculation exercise that I'd be
happy to blog about when I'm done.

Feel free to visit my blog for how to set up your home server as a ZFS iTunes
streaming server :).

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91                   http://blogs.sun.com/constantin/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Issue with zfs snapshot replication from version2 to version3 pool.

2006-08-18 Thread Shane Milton
I did a little bit of digging, and didn't turn up any known issues.  Any insight 
would be appreciated.

Basically I replicated a zfs snapshot from a version2 storage pool into a 
version3 pool and it seems to have corrupted the version3 pool.  At the time of 
the error both pools were running on the same system (amd64 build44)

The command used was something similar to the following:
"zfs send [EMAIL PROTECTED] | zfs recv [EMAIL PROTECTED]"

zfs list, zfs list -r <pool>, and zpool destroy <pool> all 
end with a core dump.

After a little digging with mdb and truss, it seems to be dying around the 
ZFS_IOC_SNAPSHOT_LIST_NEXT ioctl.

I'm away from the system at the moment, but do have some of the core files and 
truss output for those interested.

# truss zfs list
execve("/sbin/zfs", 0x08047E90, 0x08047E9C)  argc = 2
resolvepath("/usr/lib/ld.so.1", "/lib/ld.so.1", 1023) = 12
resolvepath("/sbin/zfs", "/sbin/zfs", 1023) = 9
sysconfig(_CONFIG_PAGESIZE) = 4096
xstat(2, "/sbin/zfs", 0x08047C48)   = 0
open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
xstat(2, "/lib/libzfs.so.1", 0x08047448)= 0
resolvepath("/lib/libzfs.so.1", "/lib/libzfs.so.1", 1023) = 16
open("/lib/libzfs.so.1", O_RDONLY)  = 3
..
...
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08045FBC)  = 0
ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x08046DFC) = 0
ioctl(3, ZFS_IOC_OBJSET_STATS, 0x080450BC)  = 0
ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x08045EFC) Err#3 ESRCH
ioctl(3, ZFS_IOC_SNAPSHOT_LIST_NEXT, 0x08045EFC) Err#22 EINVAL
fstat64(2, 0x08044EE0)  = 0
internal error: write(2, " i n t e r n a l   e r r".., 16)  = 16
Invalid argumentwrite(2, " I n v a l i d   a r g u".., 16)  = 16

write(2, "\n", 1)   = 1
sigaction(SIGABRT, 0x, 0x08045E30)  = 0
sigaction(SIGABRT, 0x08045D70, 0x08045DF0)  = 0
schedctl()  = 0xFEBEC000
lwp_sigmask(SIG_SETMASK, 0x, 0x) = 0xFFBFFEFF [0x]
lwp_kill(1, SIGABRT)= 0
Received signal #6, SIGABRT [default]
  siginfo: SIGABRT pid=1444 uid=0 code=-1


Thanks
-Shane
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Issue with zfs snapshot replication from version2 to version3 pool.

2006-08-18 Thread Eric Schrock
Can you send the output of this D script while running 'zfs list'?

#!/sbin/dtrace -s

zfs_ioc_snapshot_list_next:entry
{
trace(stringof(args[0]->zc_name));
}

zfs_ioc_snapshot_list_next:return
{
trace(arg1);
}
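
(Assuming the script above is saved as snaplist.d, one way to use it, as root, is:

  chmod +x snaplist.d
  ./snaplist.d &
  zfs list        # the probe output appears in the dtrace session

The entry probe prints the zc_name being passed in; the return probe prints the
ioctl's return value.)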


- Eric

On Fri, Aug 18, 2006 at 09:27:36AM -0700, Shane Milton wrote:
> I did a little bit of digging, and didn't turn up any known issues.  Any 
> insight would be appreciated.
> 
> Basically I replicated a zfs snapshot from a version2 storage pool into a 
> version3 pool and it seems to have corrupted the version3 pool.  At the time 
> of the error both pools were running on the same system (amd64 build44)
> 
> The command used was something similar to the following:
> "zfs send [EMAIL PROTECTED] | zfs recv [EMAIL PROTECTED]"
> 
> zfs list, zfs list -r <pool>, and zpool destroy <pool>
> all end with a core dump.
> 
> After a little digging with mdb and truss, it seems to be dying around the 
> ZFS_IOC_SNAPSHOT_LIST_NEXT ioctl.
> 
> I'm away from the system at the moment, but do have some of the core files 
> and truss output for those interested.
> 
> # truss zfs list
> execve("/sbin/zfs", 0x08047E90, 0x08047E9C)  argc = 2
> resolvepath("/usr/lib/ld.so.1", "/lib/ld.so.1", 1023) = 12
> resolvepath("/sbin/zfs", "/sbin/zfs", 1023) = 9
> sysconfig(_CONFIG_PAGESIZE) = 4096
> xstat(2, "/sbin/zfs", 0x08047C48)   = 0
> open("/var/ld/ld.config", O_RDONLY) Err#2 ENOENT
> xstat(2, "/lib/libzfs.so.1", 0x08047448)= 0
> resolvepath("/lib/libzfs.so.1", "/lib/libzfs.so.1", 1023) = 16
> open("/lib/libzfs.so.1", O_RDONLY)  = 3
> ..
> ...
> ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08045FBC)  = 0
> ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x08046DFC) = 0
> ioctl(3, ZFS_IOC_OBJSET_STATS, 0x080450BC)  = 0
> ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x08045EFC) Err#3 ESRCH
> ioctl(3, ZFS_IOC_SNAPSHOT_LIST_NEXT, 0x08045EFC) Err#22 EINVAL
> fstat64(2, 0x08044EE0)  = 0
> internal error: write(2, " i n t e r n a l   e r r".., 16)  = 16
> Invalid argumentwrite(2, " I n v a l i d   a r g u".., 16)  = 16
> 
> write(2, "\n", 1)   = 1
> sigaction(SIGABRT, 0x, 0x08045E30)  = 0
> sigaction(SIGABRT, 0x08045D70, 0x08045DF0)  = 0
> schedctl()  = 0xFEBEC000
> lwp_sigmask(SIG_SETMASK, 0x, 0x) = 0xFFBFFEFF [0x]
> lwp_kill(1, SIGABRT)= 0
> Received signal #6, SIGABRT [default]
>   siginfo: SIGABRT pid=1444 uid=0 code=-1
> 
> 
> Thanks
> -Shane
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Lori Alt

Dick Davies wrote:

On 17/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:


Dick Davies wrote:




> That's excellent news Lori, thanks to everyone who's working
> on this. Are you planning to use a single pool,
> or an 'os pool/application pool' split?




Thus I think of the most important split as the "os pool/data pool"
split.  Maybe that's what you meant.



That's it, yes :)
I should probably have said service rather than application.


.. limitations
in the boot PROMs cause us to place restrictions on the devices
you can place in a root pool.  (root mirroring WILL be supported,
however).



Does boot prom support mean this will be SPARC only? That's
interesting (last time I tried Tabriz' hack, it was x86 only).


No, zfs boot will be supported on both x86 and sparc.  Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.




Or is x86 zfs root going to need a grub /boot partition on one
of the disks?


On x86, each disk capable of booting the system (which means each
disk in a root pool) will have grub installed on it in a disk
slice which occupies the first few blocks of the disk.  It's not
the same as the old /boot partition, because all the slice
contains is grub.  It doesn't contain a file system.
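
(For reference, installgrub(1M) is how grub gets written onto a slice today; a
made-up example with a hypothetical device name:

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
)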

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Dick Davies

On 18/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:


No, zfs boot will be supported on both x86 and sparc.  Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.


Gotcha. I wasn't sure if you were proposing requiring a custom
BIOS on x86, but I take it (from your next point)
you're just chainloading a ZFS-aware grub.


> Or is x86 zfs root going to need a grub /boot partition on one
> of the disks?

On x86, each disk capable of booting the system (which means each
disk in a root pool) will have grub installed on it in a disk
slice which occupies the first few blocks of the disk.  It's not
the same as the old /boot partition, because all the slice
contains is grub.  It doesn't contain a file system.


I think that was really what I was getting at. So long as one
of the disks is still alive, and the BIOS can boot off it, then you'd
be alright? That sounds perfect - the implementation is really
not that important to me, so long as there's no single point of
failure.

Thanks for your time, and have a good weekend.


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Torrey McMahon

Lori Alt wrote:


No, zfs boot will be supported on both x86 and sparc.  Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.


Hi Lori.

Can you expand a bit on the above? What sort of limitations are you 
referring to? (Boot time? Topology?)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Lori Alt

Torrey McMahon wrote:

Lori Alt wrote:



No, zfs boot will be supported on both x86 and sparc.  Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.



Hi Lori.

Can you expand a bit on the above? What sort of limitations are you 
referring to? (Boot time? Topology?)


The limitation is mainly about the *number* of disks
that can be accessed at one time.  If we were going to
support booting off a set of disks in a RAID-Z
configuration, the early boot code would have to
read some blocks from one disk, and then some blocks
from another disk, and so on.  There are difficulties
doing that when using the capabilities of OBP
or the BIOS to do I/O.  (and if you want me to be more
specific about what THOSE difficulties are, I'd
have to get someone who knows more about BIOS and
OBP to answer the question.)  But with straight
mirroring, there's no such problem because any disk
in the mirror can supply all of the disk blocks needed
to boot.

lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Tabriz Leman


Torrey McMahon wrote:


Lori Alt wrote:



No, zfs boot will be supported on both x86 and sparc.  Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.



Hi Lori.

Can you expand a bit on the above? What sort of limitations are you 
referring to? (Boot time? Topology?)


I think what Lori is referring to here is that we need to limit the 
rootpool to BIOS/OBP visible devices; not all devices are visible from 
the BIOS/OBP (fibre channel devices, for example).


Tabriz


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




RE: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Bennett, Steve
Lori said:
> The limitation is mainly about the *number* of disks
> that can be accessed at one time.
> ...
> But with straight mirroring, there's no such problem
> because any disk in the mirror can supply all of the
> disk blocks needed to boot.

Does that mean that these restrictions will go away once replication can
be varied on a per dataset (or per file) basis? You could have all your
'essential to boot' files mirrored across all disks, then raidz2 the
rest...

Steve.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Disk

2006-08-18 Thread Lori Alt

Bennett, Steve wrote:

Lori said:


The limitation is mainly about the *number* of disks
that can be accessed at one time.
...
But with straight mirroring, there's no such problem
because any disk in the mirror can supply all of the
disk blocks needed to boot.



Does that mean that these restrictions will go away once replication can
be varied on a per dataset (or per file) basis? You could have all your
'essential to boot' files mirrored across all disks, then raidz2 the
rest...



Maybe.  It depends on how per-file replication is implemented.
I don't think we've made any design decisions at this time that
would prevent that from working in the future.

Lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Home Server with ZFS

2006-08-18 Thread Robert Milkowski
Hello David,

Friday, August 18, 2006, 5:39:31 PM, you wrote:

DDB> On 8/18/06, Ben Short <[EMAIL PROTECTED]> wrote:

>> I'm planning to build a home server that will host my svn repository, fileserver,
>> mailserver and webserver.
>> This is my plan:
>>
>> I have an old Dell Precision 420 with dual 933MHz PIII CPUs. Inside this I have
>> one SCSI 9.1G HDD and two 80G IDE HDDs. I am going to install Solaris 10 on
>> the SCSI drive and have it as the boot disk. I will then create a ZFS mirror
>> on the two IDE drives. Since I don't want to mix internet-facing services
>> (mailserver, webservers) with my internal services (svn server, fileserver),
>> I am going to use zones to isolate them. Not sure how many zones just yet.
>>
>> In this configuration I hope to have gained the protection of having the
>> services mirrored (I will perform backups also).
>>
>> What I don't know is what happens if the boot disk dies. Can I replace it,
>> install Solaris again and get it to see the ZFS mirror?

DDB> As I understand it, this should be possible, but I haven't tried it and I'm
DDB> not an expert Solaris admin.  Some ZFS info is stored in a persistent
DDB> file on your system disk, and you may have to do a little dance to get
DDB> around that.  It's worth researching and practicing in advance :-).

Unless you use legacy mountpoints, all ZFS info is stored inside the
pool. So you can just import those disks into another Solaris system
without any "dancing" at all.



-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Server with ZFS

2006-08-18 Thread Robert Milkowski
Hello Ben,

Friday, August 18, 2006, 4:36:45 PM, you wrote:


BS> What I don't know is what happens if the boot disk dies. Can I
BS> replace it, install Solaris again and get it to see the ZFS mirror?
BS> Also what happens if one of the IDE drives fails? Can I plug
BS> another one in and run some ZFS commands to make it part of the mirror?


1. boot disk dies

   Importing the ZFS pool will be just one command after the Solaris
   re-install and that's it. So when it comes to data on ZFS it will
   just work.

   However, Zones can be an issue - they won't just work, and even
   after some manual tweaking there can still be problems (e.g. if
   patches were applied before the disk crashed). Worst case
   would be to "re-install" the zones. I would consider creating a separate
   ZFS file system on those two mirrored disks just for the Zones and
   their data. But even then it won't just work.
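
   A rough sketch of the recovery plus a dedicated zones filesystem (pool and
   dataset names are made up):

   zpool import -f tank                    # force-import, since the dead system never exported it
   zfs create tank/zones                   # separate filesystem for the zone roots and their data
   zfs set mountpoint=/zones tank/zones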

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Filesystem Corruption

2006-08-18 Thread Richard Elling - PAE

Srivastava, Sanjaya wrote:
   I have been seeing data corruption on the ZFS filesystem. Here are 
some details. The machine is running s10 on X86 platform with a single 
160Gb SATA disk.  (root on s0  and zfs on s7)


I'd wager that it is a hardware problem.  Personally, I've had less than
satisfactory reliability experiences with 160 GByte disks from a variety
of vendors.  Try mirroring.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Filesystem Corruption

2006-08-18 Thread Robert Milkowski




Hello Sanjaya,

Friday, August 18, 2006, 7:50:21 PM, you wrote:






Hi,
 
   I have been seeing data corruption on the ZFS filesystem. Here are some details. The machine is running s10 on X86 platform with a single 160Gb SATA disk.  (root on s0  and zfs on s7)
 





Well, you have ZFS without any protection (except ditto blocks for metadata).
Unless you overwrote the underlying disk/slice, it's possible you have a problem with your disk
or other hardware.

Try 'fmdump -eV'


BTW: your system produced a crash dump - I understand that the server actually restarted, right?

Also interesting is:

Aug 15 18:31:14 sfo-dk2-s62 unix: [ID 557827 kern.info] cpu3 initialization complete - online
Aug 15 18:31:14 sfo-dk2-s62 unix: [ID 999285 kern.warning] WARNING: BIOS microcode patch for AMD Athlon(tm) 64/Opteron(tm) processor
Aug 15 18:31:14 sfo-dk2-s62 erratum 131 was not detected; updating your system's BIOS to a version
Aug 15 18:31:14 sfo-dk2-s62 containing this microcode patch is HIGHLY recommended or erroneous system
Aug 15 18:31:14 sfo-dk2-s62 operation may occur. 


However, I do not believe it's related to this problem.


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-18 Thread Frank Cusack

On August 10, 2006 6:04:38 PM -0700 eric kustarz <[EMAIL PROTECTED]> wrote:

If you're doing HA-ZFS (which is SunCluster 3.2 - only available in beta right 
now),


Is the 3.2 beta publicly available?  I can only locate 3.1.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-18 Thread George Wilson

Frank,

The SC 3.2 beta may be closed, but I'm forwarding your request to Eric 
Redmond.


Thanks,
George

Frank Cusack wrote:
On August 10, 2006 6:04:38 PM -0700 eric kustarz <[EMAIL PROTECTED]> 
wrote:
If you're doing HA-ZFS (which is SunCluster 3.2 - only available in 
beta right now),


Is the 3.2 beta publicly available?  I can only locate 3.1.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


RE: [zfs-discuss] ZFS Filesystem Corruption

2006-08-18 Thread Srivastava, Sanjaya



Thanks for the reply. 
 If I put in a SATA RAID card (ARC-1110, http://www.areca.us/products/html/pcix-sata.htm) the 
problem disappears.
 
 
 
...Sanjaya
 
 


From: Robert Milkowski [mailto:[EMAIL PROTECTED]]
Sent: Friday, August 18, 2006 11:59 AM
To: Srivastava, Sanjaya
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS Filesystem Corruption

Hello Sanjaya,

Friday, August 18, 2006, 7:50:21 PM, you wrote:



  
  


  Hi,

  I have been seeing data corruption on the ZFS filesystem. Here are some
  details. The machine is running s10 on X86 platform with a single 160Gb
  SATA disk. (root on s0 and zfs on s7)

Well you have a ZFS without any protection (except ditto blocks for meta 
data).
Unless you overwrite underlying disk/slice it's possible you have a problem 
with your disk
or other hardware.

Try 'fmdump -eV'


btw: your system produced crash dump - I understand that server restarted 
actually, right?

Also interesting is:

Aug 15 18:31:14 sfo-dk2-s62 unix: [ID 557827 kern.info] cpu3 initialization 
complete - online
Aug 15 18:31:14 sfo-dk2-s62 unix: [ID 999285 kern.warning] WARNING: BIOS 
microcode patch for AMD Athlon(tm) 64/Opteron(tm) processor
Aug 15 18:31:14 sfo-dk2-s62 erratum 131 was not detected; updating your 
system's BIOS to a version
Aug 15 18:31:14 sfo-dk2-s62 containing this microcode patch is HIGHLY 
recommended or erroneous system
Aug 15 18:31:14 sfo-dk2-s62 operation may occur. 


However, I do not believe it's related to this problem.


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Want to try ZFS, but "format" got message

2006-08-18 Thread Xinfeng Liu

Thank you, Sean. Using files would work nicely.

-Xinfeng

Sean McGrath - Sun Microsystems Ireland wrote:

Xinfeng Liu stated:
< Hi, 
< 
< I'm trying ZFS on my laptop (Solaris 10 6/06 installed) and I want to assign two slices for ZFS. But when I type "format", I get an error message. 



  You've probably allocated the whole disk to Solaris and have things mounted.
  To play around with zfs on a laptop, try using files instead, eg:

mkfile 256m file1
mkfile 256m file2

zpool create <poolname> `pwd`/file1 `pwd`/file2

 etc...

Regards,


< 
< Although "prtvtoc" can give the drive type info, but I couldn't set the drive type because the root filesystem is mounted on the drive.
< 
< Any suggestions to fix this problem?
< 
< Thanks in advance,

< Xinfeng
<  
<  
< This message posted from opensolaris.org

< ___
< zfs-discuss mailing list
< zfs-discuss@opensolaris.org
< http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss