[zfs-discuss] ZFS and Veritas Cluster Server

2007-11-03 Thread Nathan Dietsch
Hello All,

I am working with a customer on a solution where ZFS looks very 
promising. The solution requires disaster recovery and the chosen 
technology for providing DR of services in this organisation is Veritas 
Cluster Server.

Has anyone implemented ZFS with Veritas Cluster Server to provide 
high-availability for ZFS pools and datasets? I understand that Sun 
Cluster is a better product for use with ZFS, but it is not supported 
within the organisation and is not available for use within the proposed 
solution.

I am specifically looking for information on implementation experiences 
and failover testing with ZFS and VCS.

Furthermore, if anyone has implemented ZFS on SRDF, I would also be 
interested in hearing about those implementation experiences.

Any and all input would be most appreciated.

Kind Regards,

Nathan Dietsch


Re: [zfs-discuss] Status of Samba/ZFS integration

2007-11-03 Thread Razvan Corneliu VILT
This sounds like the right solution to my problem, in that it solves several 
issues, but I am rather curious how it would integrate with a Samba server 
running on the same system (in case someone needs a domain controller as well 
as a fileserver).

1 - Samba can store the DOS attributes of a file in an xattr. Can sharesmb do 
that? If so, is it compatible with Samba?
2 - Related to that, are Resource_Forks/xattrs/Alternate_data_streams supported?
3 - How do I set share ACLs (allowed users, and their rights)?
4 - How do I set the share name?
5 - Will it support the smb2 protocol?
5b - Will it work over IPv6?
6 - Is Shadow Copy supported (using ZFS snapshots)?
7 - How will it map nss users to domain users? Will it be able to connect to 
Winbind?
8 - Kerberos authentication support?
9 - Will it support the NT privileges? On my network I could pick a normal 
user and, with a simple net rpc rights grant of SeBackupPrivilege and 
SeRestorePrivilege, let that user override ACLs in a Windows environment. A 
user of the sharesmb service might expect the same (a rough example follows 
this list).
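
To illustrate (9), roughly the kind of command I have in mind; the domain,
user and admin names here are made up:

net rpc rights grant 'EXAMPLE\jsmith' SeBackupPrivilege SeRestorePrivilege -U Administrator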

In my personal case, I need 1, 2, 3, 4, 6, 7, 8 and 9. And I am sure that more 
will come up, as these are just the ones that came to mind right now.

Anyway, congratulations on the sharesmb work. If it has a flexible, 
configurable implementation (for those with complex rules in their 
environment) but sane defaults (for normal users), it will be a hit.

Cheers,
Razvan


Re: [zfs-discuss] Status of Samba/ZFS integration

2007-11-03 Thread MC
ZFS has an SMB server on the way, but no real public information about it has 
been released. Here is evidence of its existence: 
http://www.opensolaris.org/os/community/arc/caselog/2007/560/;jsessionid=F4061C9308088852992B7DE83CD9C1A3


Re: [zfs-discuss] Force SATA1 on AOC-SAT2-MV8

2007-11-03 Thread Eric Haycraft
The drives (6 in total) are external (eSATA) ones, so they have their own 
enclosure that I can't open without voiding the warranty. I destroyed one 
enclosure trying out ways to get it to work and learned that there was no way 
to open them up without wrecking the case :(

I have 2-meter SATA-to-eSATA cables.

The drives are 750GB FreeAgent Pro USB/eSATA drives from Seagate. 

Thanks for your help.


[zfs-discuss] Status of Samba/ZFS integration

2007-11-03 Thread Razvan Corneliu VILT
I've tried to set up a Samba file server that behaves identically to a 
Microsoft Windows 2000 or 2003 one. First of all, the problem with the ACI 
ordering is simple: the Microsoft ACI specification requires that the DENY 
ACIs be placed on top. It can be solved with a simple chmod.
Problem no. 2: the Samba NFSv4 ACL module doesn't interpret owner@, group@ or 
everyone@. While the first two are not surprising, because they have no 
direct mapping in the Windows well-known SIDs list, everyone@ is a very 
well-known Windows SID.
These problems can be easily solved by initially setting the ACLs manually 
using chmod.
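
For instance, a minimal sketch (user, permissions and path are made up) that
rewrites the ACL with the DENY entry first and then verifies the order:

chmod A=user:guest:rwx:deny,owner@:rwxp:allow,everyone@:rx:allow /tank/share/file
ls -V /tank/share/file
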
Problem no. 3: there is no umask(1) support for the NFSv4 ACL model, so 
creating a new file from the UNIX shell or a UNIX program (say, FTP) on that 
ZFS share will completely mess up your ACLs from a Windows perspective.
Furthermore, I expected that once I set some ACIs with the inheritance flags 
on, I would get those ACIs, period. While I do get inheritance of the ACIs, I 
also get some default ACIs added that roughly represent the traditional UNIX 
rights (which is very far from what I'm looking for). I also expect to be 
able to ignore the UNIX rights entirely, as mixing the two is both confusing 
and difficult.
I think that mixing the two models (the NFSv4 one and the Windows one) is 
impractical; it really requires choosing to favour either the Windows model 
or the NFSv4 one. Right now I've concluded that the Samba NFSv4 ACL support 
is completely useless: it allows me to view ACLs set using chmod on an 
existing file, or to change them to other _VALID_ Windows ACLs, but 
unfortunately, as soon as I try to create a new file or directory, all of the 
benefits go to /dev/null. I get a new file with default ACLs that have 
nothing to do with the inheritance flags I've set and that are completely 
invalid on a Windows system.
I am sure that we need a new ZFS attribute that changes the relationship 
between the UNIX permission bits and the NFSv4 ACIs (eventually ignoring the 
UNIX ones completely), as well as specifying that the inherited ACIs are the 
only ones applied to a newly created file or directory. We also need the 
Samba config file to support new-file and new-directory creation masks that 
are a little more complex than three octal digits (or to take the inheritance 
flags more seriously into consideration). We also need to add support to the 
nfs4acl module for interpreting owner@, group@ and everyone@.
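
If I read the current properties correctly, ZFS's existing aclinherit and
aclmode knobs already point in this direction; a rough sketch of what I would
want (the dataset name is made up):

zfs set aclinherit=passthrough tank/share
zfs set aclmode=passthrough tank/share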

The ACIs that I needed and that miserably failed me are rather simple (except 
for a few folders in which I had more complex ones):
Domain Admins:rwxdDpaARWc--s:fd---:allow
Domain Users:rwxdDpaARWc--s:fd---:allow
Administrator:rwxdDpaARWcCos:fd---:allow
As you can probably see, I didn't even need deny ACLs.
Obviously, I initially set the ACLs with something like:
chmod -R A=group:Domain\ Admins:rwxdDpaARWc--s:fd---:allow,group:Domain\ Users:rwxdDpaARWc--s:fd---:allow,user:Administrator:rwxdDpaARWcCos:fd---:allow
and it worked until I started creating files and folders.

I started this thread in the hope that we can make sure that in the future 
Samba will be able to perfectly emulate a Windows File Server in coordination 
with ZFS, especially considering Sun's offering in the storage area.

I can also come up with technical details about the differences in behavior 
between a Windows Server and a Samba server on the problematic operations.

Cheers,
Razvan


Re: [zfs-discuss] zpool.cache

2007-11-03 Thread Denis
I am not seeing this behavior. But I forgot to mention that I am using 
FreeBSD; maybe Pawel missed something.

Denis


Re: [zfs-discuss] zpool.cache

2007-11-03 Thread Eric Schrock
On Sat, Nov 03, 2007 at 04:59:18PM -0700, Denis wrote:
> Hi
> 
> What is the correct way to recreate the zpool.cache file? I deleted it
> because the device names of the vdevs changed.

You shouldn't need to delete it.  ZFS will automatically update the
device paths based on the devid when you issue a zpool(1M) command.  If
you're using files or devices without devids, you can just export the
pool and import it again.

> The pool is still intact but I need to import it manually after every
> reboot.

Once you import it, it will be placed in the zpool.cache file (unless
you use '-R').  If you're on x86, you will need to update your boot
archive, which should happen automatically on a clean reboot.  Are you
not seeing this behavior?
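
For example (the pool name is just a placeholder):

zpool export tank
zpool import tank        # re-adds the pool to /etc/zfs/zpool.cache
bootadm update-archive   # x86 only, if the boot archive is stale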

- Eric

--
Eric Schrock, FishWorks                    http://blogs.sun.com/eschrock


[zfs-discuss] zpool.cache

2007-11-03 Thread Denis
Hi

What is the correct way to recreate the zpool.cache file? I deleted it because
the device names of the vdevs changed.

The pool is still intact but I need to import it manually after every reboot.

Denis


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-03 Thread Erblichs
Martin,

This is a shot in the dark, but this seems to be an I/O scheduling
issue.

Since I am late to this thread, what are the characteristics of the
I/O: mostly reads, appending writes, read-modify-write, sequential,
random, a single large file, multiple files?

And if we are talking about writes, have you tracked whether any I/O
is aged much beyond 30 seconds?

If we were talking about Xen by itself, I am sure there is some type
of scheduler involvement that COULD slow down your I/O due to
fairness or some specified weight against other
processes/threads/tasks.

Can you boost the scheduling of the I/O task, by making it realtime
or giving it a niceness or the like, in an experimental environment,
and compare stats?
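
A rough sketch of what I mean on Solaris (the PID is made up):

priocntl -s -c RT -i pid 12345   # move the process into the realtime class
renice -n -10 -p 12345           # or simply raise its priority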

Whether this is the bottleneck in your case would require a closer
examination of the various metrics of the system.

Mitchell Erblich
-





Martin wrote:
> 
> > The behaviour of ZFS might vary between invocations, but I don't think that
> > is related to xVM. Can you get the results to vary when just booting under
> > "bare metal"?
> 
> It pretty consistently displays good I/O (approx 60Mb/s - 80Mb/s) for about 
> 10-20 seconds, then always drops to approx 2.5 Mb/s for virtually all of the 
> rest of the run. It always behaves this way when running under xVM/Xen with 
> Dom0, and never on bare metal when xVM/Xen isn't booted.


Re: [zfs-discuss] Backport of vfs_zfsacl.c to samba 3.0.26a

2007-11-03 Thread Tomasz Torcz
On 11/2/07, Carson Gaspar <[EMAIL PROTECTED]> wrote:
> As 3.2.0 isn't released yet, and I didn't want to wait, I've backported
> vfs_zfsacl.c from SAMBA_3_2.

 What about licenses? (L)GPLv2/v3 compatibility?

--
Tomasz Torcz
[EMAIL PROTECTED]


[zfs-discuss] CIFS/SMB in (Open)Solaris kernel

2007-11-03 Thread David Magda
Hello all,

In case you missed it:

> We already had the basic CIFS service building on Solaris but it  
> took another 8 months, 22 more ARC cases, a lot of helping hands  
> and many late nights to deliver the project. On October 25th, 2007,  
> the CIFS service project putback over 800 files, approximately  
> 370,000 lines of code (including 180,000 lines of new code) to the  
> Solaris operating system.
[...]
> In addition to the CIFS/SMB and MSRPC protocols and services:
>
>   We added support for SIDs to Solaris credentials. This solved the  
> centralized access control problem: CIFS can specify users in terms  
> of SIDs and ZFS can perform native file system access control using  
> that information.
>
>   There are various VFS updates and enhancements to support new  
> attributes, share reservations and mandatory locking. As with the  
> credential change, this was also a significant effort, which  
> affected the interface to every file system in Solaris.
>
>
> ZFS enhancements include:
> * Support for DOS attributes (archive, hidden, read-only  
> and system)
> * Case-insensitive file name operations. There are three  
> modes: case-sensitive, case-insensitive and mixed.
> * Support for ubiquitous cross-protocol file sharing  
> through an option to ensure UTF8-only name encoding.
> * Atomic ACL-on-create semantics.
> * Enhanced ACL support for compatibility with Windows.
> * sharesmb, which is similar to sharenfs.

http://blogs.sun.com/amw/entry/cifs_in_solaris
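
If sharesmb really does follow the sharenfs model, usage would presumably
look something like this (the dataset and resource names are my guesses):

zfs set sharesmb=on tank/home
zfs set sharesmb=name=home tank/home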

Regards,
David


Re: [zfs-discuss] ZFS Jumpstart integration and the amazing invisible zpool.cache

2007-11-03 Thread Tomas Ögren
On 02 November, 2007 - Dave Pratt sent me these 2,0K bytes:

>  I've been wrestling with implementing some ZFS mounts for /var and 
> /usr in a Jumpstart setup. I know that Jumpstart doesn't "know" anything 
> about ZFS, as in you can't define ZFS volumes or pools in the profile. 
> I've gone ahead and let the JS do a base install into a single UFS slice 
> and then attempted to create the zpool and ZFS volumes in the finish 
> script and ufsdump|ufsrestore the data from the /usr and /var partitions 
> into the new ZFS volumes. The problem is there doesn't seem to be a way 
> to ensure that the zpool is imported into the freshly built system on the 
> first reboot.

Here's an ugly hack I've been doing to create ZFS thingies under
jumpstart/sparc, but it works:

---8<---  profile entry ---8<---
filesys c1t1d0s7 free /makezfs logging

or

filesys c1t1d0s7 free /makezfsmirror1 logging
filesys c1t2d0s7 free /makezfsmirror2 logging


---8<---  run first in client_end_script ---8<---
#!/bin/sh

echo ZFS-stuff
dozfs=0
dozfsmirror=0
if [ -d /a/makezfs ]; then
        dozfs=1
fi
if [ -d /a/makezfsmirror1 ]; then
        dozfs=1
        dozfsmirror=1
fi

test $dozfs = 1 || exit 0

if [ $dozfsmirror = 1 ]; then
        umount /a/makezfsmirror1
        umount /a/makezfsmirror2
        disk1=`grep /makezfsmirror1 /a/etc/vfstab | awk '{print $1}'`
        disk2=`grep /makezfsmirror2 /a/etc/vfstab | awk '{print $1}'`
else
        umount /a/makezfs
        disk1=`grep /makezfs /a/etc/vfstab | awk '{print $1}'`
fi
# comment out the placeholder slices in the installed vfstab
perl -p -i.bak -e 's,.*/makezfs.*,#,' /a/etc/vfstab
# create the pool; do it twice due to bug, see
# http://bugs.opensolaris.org/view_bug.do?bug_id=6566433
zpool create -f -R /a -m /data data $disk1 || \
        zpool create -f -R /a -m /data data $disk1
# attach the second disk to turn the pool into a mirror
if [ "x$disk2" != "x" ]; then
        zpool attach data $disk1 $disk2
fi
zfs set compression=on data
zfs set mountpoint=none data
zfs create data/lap
zfs create data/scratch
zfs create data/postfixspool
zfs set mountpoint=/lap data/lap
zfs set mountpoint=/scratch data/scratch
mkdir -p /a/var/spool/postfix
zfs set mountpoint=/var/spool/postfix data/postfixspool
zfs set reservation=256M data/postfixspool

echo ZFS-stuff done



---8<---  run last in client_end_script ---8<---

#!/bin/sh

# nothing to do if the pool was never created
zpool list | grep -w data > /dev/null || exit 0

echo /sbin/zpool export data
/sbin/zpool export data
echo /sbin/mount -F lofs /devices /a/devices
/sbin/mount -F lofs /devices /a/devices
# import from inside the installed root so the pool ends up in
# /a/etc/zfs/zpool.cache and is imported automatically on first boot
echo chroot /a /sbin/zpool import data
chroot /a /sbin/zpool import data




The final step is the trick ;)


/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] ZFS very slow under xVM

2007-11-03 Thread Martin
> The behaviour of ZFS might vary between invocations, but I don't think that
> is related to xVM. Can you get the results to vary when just booting under
> "bare metal"?

It pretty consistently displays good I/O (approx 60Mb/s - 80Mb/s) for about 
10-20 seconds, then always drops to approx 2.5 Mb/s for virtually all of the 
rest of the run. It always behaves this way when running under xVM/Xen with 
Dom0, and never on bare metal when xVM/Xen isn't booted.