Re: [zfs-discuss] NFS async and ZFS zil_disable

2008-04-22 Thread msl
> 
> On Apr 22, 2008, at 12:16 PM, msl wrote:
> 
> > Hello all,
> >  I think the two options are very similar from the client-side
> > view, but I want to hear from the experts... So, can somebody
> > talk a little about the two options?
> >  We have two different layers here, I think:
> >  1) The "async" from the protocol stack, and the other...
> >  2) From the filesystem point of view.
> >
> >  That makes me think that the "first" option could be quicker
> > for the client, because the "ack" happens at a higher level
> > (the NFS protocol).
> 
> The NFS client has control over WRITE requests in that it
> may ask to have them done "async" and then follow that with
> a COMMIT request to ensure the data is in stable storage/on disk.
 Great information... so the "sync" option on the server (export) side just describes what the 
server does with the client's requests? I mean, is sync/async something the client asks for in an 
NFS write request? When I asked the question I was talking about the server side; I did not know 
the client could request sync/async behavior.
> 
> However, the NFS client has no control over namespace
> operations (file/directory create/remove/rename).  These must
> be done synchronously -- no way for the client to direct the
> operational behavior of the server in these cases.
 If I understand correctly, this is where "zil_disable" becomes a problem for the NFS 
semantics... I mean, the service guarantees would be compromised, because the NFS client 
can't control the namespace operations. That is a big difference from my initial question.
> 
> Spencer
> 
 Thanks a lot for your comments! Anybody else?
 P.S.: How can I enable async on an NFS server on Solaris? Do I just add "async" to the 
export options?
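 What I have in mind is roughly this (the Linux lines come from exports(5); on the Solaris 
side I am only guessing that the ZIL tunable is the nearest equivalent, since I don't see an 
"async" option in share(1M) -- please correct me if that guess is wrong):

  # Linux server: per-export sync/async behavior in /etc/exports
  /export/data  clientA(rw,sync)    # reply only after data reaches stable storage
  /export/data  clientB(rw,async)   # reply before data reaches stable storage

  # Solaris/ZFS: no such share option; the (unsupported) global knob would be
  # the ZIL tunable, e.g. in /etc/system:
  #   set zfs:zil_disable = 1
  # or on a live system (takes effect for datasets mounted afterwards):
  #   echo zil_disable/W0t1 | mdb -kw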

 Leal.


[zfs-discuss] NFS async and ZFS zil_disable

2008-04-22 Thread msl
Hello all,
 I think the two options are very similar from the client-side view, but I want to hear from 
the experts... So, can somebody talk a little about the two options?
 We have two different layers here, I think: 
 1) The "async" from the protocol stack, and the other...
 2) From the filesystem point of view.
 
 That makes me think that the "first" option could be quicker for the client, because the 
"ack" happens at a higher level (the NFS protocol).

 Please, comment!

 Leal.
 
 


[zfs-discuss] ZFS mountpoints

2008-03-23 Thread msl
Hello all,
 Sorry if this is a stupid question, but I need to ask...
 I have some ZFS filesystems whose mountpoints begin with two slashes, like 
"//dir1/dir2/dir3", and some other "correct" filesystems with just one "/" (/dir1/dir2/).
 The question is: can I set the mountpoint correctly? I mean, rewrite the mountpoint 
property to /dir1/dir2/dir3?
 Was that just a bug in ZFS, or is there some reason I should not touch it?
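 What I intend to run, assuming the dataset is called something like mypool/dir1/dir2/dir3 
(a hypothetical name), is simply:

  zfs get mountpoint mypool/dir1/dir2/dir3
  zfs set mountpoint=/dir1/dir2/dir3 mypool/dir1/dir2/dir3

 As far as I know the set command remounts the filesystem at the new path, but I'd like a 
confirmation before touching it.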

 Thanks!
 
 


[zfs-discuss] Telemetry and ZFS HAStoragePlus

2008-03-13 Thread msl
Hello all,
 I want to configure "telemetry" in SC 3.2, but I could not see how...
 The two ways I found to configure it are:
 1) "clsetup", or
 2) the "sctelemetry" resource utility.
 The problem is that both need a "FilesystemMountPoints" property configured, and that 
property is not configured by default when using ZFS-HA. I think that with a UFS filesystem 
all the mountpoints handled by HAStoragePlus "must" be configured in that property... but 
with ZFS they are not (the Zpools property is used instead).
 So, how can I configure it? Can I create a ZFS filesystem in my pool and configure it in 
that property? What should I configure in the "vfstab" file?
 I'm afraid of configuring something wrong in that property and messing up my cluster 
environment.
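 For the record, what I was thinking of trying is something like this (just a sketch with 
made-up names -- resource "hasp-rs", pool "mypool", mountpoint /global/telemetry -- and I'm 
not sure it is the supported way):

  # dedicated dataset with a legacy mountpoint, so it can appear in vfstab
  zfs create mypool/telemetry
  zfs set mountpoint=legacy mypool/telemetry

  # /etc/vfstab entry on both nodes (mount at boot = no)
  mypool/telemetry  -  /global/telemetry  zfs  -  no  -

  # then point the HAStoragePlus resource at that mountpoint
  clresource set -p FilesystemMountPoints=/global/telemetry hasp-rs

 Would that work, or does it conflict with the Zpools property already set on the resource?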

 Thanks!

 Leal.
 
 


[zfs-discuss] Round-robin NFS protocol with ZFS

2008-03-13 Thread msl
Hello all,
 I was wondering whether such a scenario would be possible:
1 - Export/import a ZFS filesystem on two Solaris servers.
2 - Export that filesystem over NFS.
3 - Mount that filesystem on the clients through two different mount points (just to 
authenticate against both servers/UDP).
4a - Use some kind of "man in the middle" to balance the connections automatically (the 
same IP on both servers),
 or
4b - Use different IPs and balance through DNS.
After the "little" problems with this initial (mount) setup, the NFS conversation should 
work without problems, and without stale file handle issues, right?

  Thinking about such a configuration I'm assuming a few things, and I'm asking for your 
corrections if I'm wrong:
 1) Using ZFS send/receive I will have the SAME filesystem across the machines.
 2) With ZFS, the NFS file handles are not built from disk LUNs, major/minor numbers, etc.
 I'm not planning to implement such a solution; what I really want to know is IF that 
configuration is possible, and IF I'm wrong about my assumptions.
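 The replication step behind assumption (1) would be something like this, assuming hosts 
"serverA"/"serverB" and a dataset "mypool/data" (hypothetical names):

  # initial full copy from serverA to serverB
  zfs snapshot mypool/data@base
  zfs send mypool/data@base | ssh serverB zfs receive mypool/data

  # periodic incremental updates
  zfs snapshot mypool/data@t1
  zfs send -i mypool/data@base mypool/data@t1 | ssh serverB zfs receive -F mypool/data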

 Thanks a lot for your time!

 Leal.
 
 


Re: [zfs-discuss] The old problem with tar, zfs, nfs and zil

2008-02-26 Thread msl
Actually, I have some corrections to make. When I saw the numbers I was stunned, and that 
blocked me from thinking...
Here you can see the right numbers: http://www.posix.brte.com.br/blog/?p=104
The problem was the disks on which I had run the tests.
 Thanks for your time.
 
 


Re: [zfs-discuss] The old problem with tar, zfs, nfs and zil

2008-02-26 Thread msl
> For Linux NFS service, it's an option in
> /etc/exports.
> 
> The default for "modern" (post-1.0.1) NFS utilities
> is "sync", which means that data and metadata will be
> written to the disk whenever NFS requires it
> (generally upon an NFS COMMIT operation).  This is
> the same as Solaris with UFS, or with ZFS+ZIL. This
> works with XFS, EXT3, and any other file system with
> a working fsync().
 OK, I knew that; I forgot to mention in my question that my doubt was whether Linux would 
*really* honour the sync option. Do you understand? I had read that Linux does not (even 
with sync in the exports). In NFSv2, for example, it does not matter whether you put sync or 
async: the server will ACK as soon as it receives the request (a no-op). But if you are 
telling me that *now* Linux really syncs the discs before ACKing the client, well... then 
there is a huge difference between zfs/nfs and xfs/nfs, because the numbers I posted were 
taken with "sync" on Linux.
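 Just so we are talking about the same thing, my exports on the Linux server look roughly 
like this (the path is made up), and exportfs -v shows which behavior is actually in effect:

  # /etc/exports on the Linux server
  /export/test  *(rw,sync,no_subtree_check)

  # verify what the kernel actually exported
  exportfs -v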
> 
> It's possible to switch this off on Linux, but not
> recommended, as there is a chance that data could be
> lost if the server crashed. (For the same reason, the
> ZIL should not be disabled on a Solaris NFS server.)
 I understand that, so I have not even tried disabling the ZIL until now. All the tests I 
made respected a semantically correct NFS service. If only the ZIL could be configured per 
filesystem, or per pool...
 The difference is 7.5s versus 1.0s, and in theory ZFS is more efficient than XFS.
 
 


[zfs-discuss] The old problem with tar, zfs, nfs and zil

2008-02-25 Thread msl
Hello all,
 I just wrote this post about the problem: http://www.posix.brte.com.br/blog/?p=103
 I just want to know if somebody who knows the Linux implementation of XFS, EXT3, or another 
filesystem can confirm that the file server ACKs without logging the transaction (the way 
the ZIL does), i.e. without committing to stable storage.
 I mean, can you confirm that a Solaris NFS service on ZFS with zil_disable is similar to a 
standard XFS or EXT3 Linux/NFS solution (taking into account the NFS guarantees provided)?
 
 


[zfs-discuss] Creating ZFS home filesystems from Linux

2008-02-22 Thread msl
Some time ago I posted this 
(http://mail.opensolaris.org/pipermail/zfs-discuss/2006-October/035351.html) on zfs-discuss, 
and Darren J Moffat gave me the idea of using SSH to create the home directories on the 
Solaris server.
 So, I implemented that solution and posted the results on my blog: 
http://www.posix.brte.com.br/blog/?p=102
 To make things simpler for you :), the post describes the solution I implemented to:
 - Create (automatically) the user home directories (ZFS filesystems) from Linux clients. In 
a standard scenario "pam_mkhomedir" does the job, but if we are using ZFS filesystems for 
quotas, snapshots, etc., we need to create them from Linux too.
 P.S.: I think in the future I will try to improve the solution to let the Linux users take 
snapshots, roll back, and so on...
 But I did find some issues that I'd like to discuss with you:
 - I could not find the permission/privilege I need to give to the user so it can "chown" a 
ZFS filesystem. I tried the two ZFS profiles, file_owner and file_chown, without luck.
 The creation sequence is:
 1) The user logs in on the Linux client.
 2) The PAM stack runs an SSH session to the Solaris server to create the user home 
directory if it does not exist yet (using a specific user).
 3) The shell of that specific user does the filesystem creation task.
 4) PROBLEM: that specific user cannot chown the new ZFS filesystem to the final user, so 
the user cannot write anything to the home directory (a sketch of what I'm doing, and of a 
possible fix, is below).
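 The server-side piece is roughly this (only a sketch: the "homes" pool, the "mkhome" user 
and the privilege/delegation lines are my guesses, not a tested recipe):

  #!/bin/sh
  # Shell/forced command for the "mkhome" user on the Solaris server.
  # Creates the dataset for the user passed as $1 and hands it over.
  USER_NAME="$1"
  zfs create "homes/${USER_NAME}"
  chown "${USER_NAME}" "/homes/${USER_NAME}"

  # The chown above is exactly what fails for me today. My guess is that
  # "mkhome" needs the file_chown privilege, e.g. (run once, as root):
  #   usermod -K defaultpriv=basic,file_chown mkhome
  # and the create itself could probably be delegated with:
  #   zfs allow mkhome create,mount homes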
  Thanks a lot for your time.
 
 


Re: [zfs-discuss] iSCSI initiator BUG

2007-12-28 Thread msl
> I have a difficulty in understanding:
> 
> you say that the device gets lost whenever the I/O
> error occurs.
 Yes.
> 
> you say that you cannot use ext3 or xfs, but
> reiser.
> 
 Yes, with ReiserFS the test did work. Between the unmounts/mounts there were error messages 
in "dmesg", but the test seemed to work just fine.

> with reiser, the device doesn't get lost on I/O
> error?
 Yes.
> 
> that's very weird.
 Really? :)
> 
> what's your distro/kernel version?
 Gentoo (2.6.17).
 
 


Re: [zfs-discuss] zpool kernel panics.

2007-12-20 Thread msl
Hello Mr. Irvine,
 Did you fix that?
 Do you have formal Solaris support? I mean, will they fix the problem on your Solaris 10 
production server, or will you need to upgrade to an "OpenSolaris" version?
 I'm deploying a ZFS environment, but when I think about terabytes and about how mature ZFS 
is... I don't know.
 My concern is exactly the problem you had: have a multi-TB ZFS pool and hit a bug like 
this. Would I then need to restore the entire pool from backup? No way...
 Take a look here: 

 http://prefetch.net/blog/index.php/2007/11/28/is-zfs-ready-for-primetime/
 
 The features of ZFS are really important and great, but the situation above and the problem 
you faced are a concern.

 Leal.
 
 


[zfs-discuss] iSCSI initiator BUG

2007-12-20 Thread msl
Hello all,
 I'm running some tests with iozone on a Linux initiator, writing to a Solaris target (a 
ZVOL). I think there is a BUG in the Linux initiator software (open-iscsi), but I want your 
opinion, to see whether the target could be the problem. It looks to me like corruption of 
the filesystem metadata (is the client not syncing writes?). The ReiserFS filesystem seems 
to be more robust and can survive the failures (I'm unmounting the FS between tests).
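 For reference, the target side is just a ZVOL shared over iSCSI, roughly like this (pool 
name and size are made up; as far as I remember I used the shareiscsi property):

  # on the Solaris target
  zfs create -V 20g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol
  iscsitadm list target -v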
 Trying to use the XFS filesystem, in the middle of the tests I got:
...
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
sd 9:0:0:0: rejecting I/O to offline device
xfs_force_shutdown(sda1,0x1) called from line 338 of file fs/xfs/xfs_rw.c.  
Return address = 0xc026e66e
Filesystem "sda1": I/O Error Detected.  Shutting down filesystem: sda1
Please umount the filesystem, and rectify the problem(s)
xfs_force_shutdown(sda1,0x1) called from line 338 of file fs/xfs/xfs_rw.c.  
Return address = 0xc026e66e
...

 I tried many times with no luck... so I tried the EXT3 filesystem (in case it was an XFS 
bug) and got the same problem (I do not have the EXT3 error output, sorry). And after the 
error, Linux can't mount the filesystem anymore, because the device (/dev/sda) is lost. 
After that I need to recreate the filesystem... Now I'm running the tests with ReiserFS, and 
it seems to be working, but with these log messages:

...
ReiserFS: sda1: journal-1037: journal_read_transaction, offset 120259087495, 
len 437 mount_id -201419776
ReiserFS: sda1: journal-1039: journal_read_trans skipping because 3207 is too 
old
ReiserFS: sda1: journal-1299: Setting newest_mount_id to 474
ReiserFS: sda1: Using r5 hash to sort names
ReiserFS: sda1: found reiserfs format "3.6" with standard journal
ReiserFS: sda1: warning: CONFIG_REISERFS_CHECK is set ON
ReiserFS: sda1: warning: - it is slow mode for debugging.
ReiserFS: sda1: using ordered data mode
ReiserFS: sda1: journal params: device sda1, size 8192, journal first block 18, 
max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: sda1: checking transaction log (sda1)
ReiserFS: sda1: journal-1153: found in header: first_unflushed_offset 3235, 
last_flushed_trans_id 6639
ReiserFS: sda1: journal-1206: Starting replay from offset 28518582848675, 
trans_id 0
ReiserFS: sda1: journal-1299: Setting newest_mount_id to 475
ReiserFS: sda1: Using r5 hash to sort names
ReiserFS: sda1: found reiserfs format "3.6" with standard journal
ReiserFS: sda1: warning: CONFIG_REISERFS_CHECK is set ON
ReiserFS: sda1: warning: - it is slow mode for debugging.
ReiserFS: sda1: using ordered data mode
ReiserFS: sda1: journal params: device sda1, size 8192, journal first block 18, 
max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: sda1: checking transaction log (sda1)
ReiserFS: sda1: journal-1153: found in header: first_unflushed_offset 3253, 
last_flushed_trans_id 6645
ReiserFS: sda1: journal-1206: Starting replay from offset 28544352652469, 
trans_id 0
ReiserFS: sda1: journal-1299: Setting newest_mount_id to 476
ReiserFS: sda1: Using r5 hash to sort names
ReiserFS: sda1: found reiserfs format "3.6" with standard journal
ReiserFS: sda1: warning: CONFIG_REISERFS_CHECK is set ON
ReiserFS: sda1: warning: - it is slow mode for debugging.
ReiserFS: sda1: using ordered data mode
ReiserFS: sda1: journal params: device sda1, size 8192, journal first block 18, 
max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: sda1: checking transaction log (sda1)
ReiserFS: sda1: journal-1153: found in header: first_unflushed_offset 4507, 
last_flushed_trans_id 6667
ReiserFS: sda1: journal-1206: Starting replay from offset 28638841934235, 
trans_id 0
ReiserFS: sda1: journal-1299: Setting newest_mount_id to 477
ReiserFS: sda1: Using r5 hash to sort names
ReiserFS: sda1: found reiserfs format "3.6" with standard journal
ReiserFS: sda1: warning: CONFIG_REISERFS_CHECK is set ON
ReiserFS: sda1: warning: - it is slow mode for debugging.
ReiserFS: sda1: using ordered data mode
ReiserFS: sda1: journal params: device sda1, size 8192, journal first block 18, 
max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: sda1: checking transaction log (sda1)
ReiserFS: sda1: journal-1153: found in header: first_unflushed_offset 4744, 
last_flushed_trans_id 6746
ReiserFS: sda1: journal-1206: Starting replay from offset 28978144350856, 
trans_id 0
ReiserFS: sda1: journal-1299: Setting newest_mount_id to 478
ReiserFS: sda1: Using r5 hash to sort names
ReiserFS: sda1: found reiserfs format "3.6" with standard journal
ReiserFS: sda1: warning: CONFIG_REISERFS_CHECK is set ON
ReiserFS: sda1: warning: - it is slow

Re: [zfs-discuss] NFS performance considerations (Linux vs Solaris)

2007-12-10 Thread msl
OK, I proposed it, so I'm trying to implement it. :)
 I hope you can (at least) criticize it. :))
 The document is here: http://www.posix.brte.com.br/blog/?p=89
 It is not complete; I'm still running some tests and analyzing the results. But I think you 
can already take a look and contribute some thoughts.
 It was nice to see the write performance of the iSCSI protocol versus NFSv3. Why was iSCSI 
so "much" better? Why was the read performance the "same"? Do I have with iSCSI all the 
guarantees that I have with NFS?
 Please, comment on it!

 Thanks a lot for your time!
 
 


[zfs-discuss] NFS performance considerations (Linux vs Solaris)

2007-11-20 Thread msl
Hello all...
 I think all of you agree that performance is a big topic for NFS. 
 So, when we talk about NFS and ZFS we imagine a great combination/solution. But one does 
not depend on the other; they are actually two quite distinct technologies. ZFS has a lot of 
features that we all know about, and "maybe" all of us want in an NFS share (maybe not). The 
point is: two technologies with different priorities.
 So, what I think is important is a "document" (here on the NFS/ZFS discussion lists) that 
lists and explains the ZFS features that have a "real" performance impact. I know there is 
the solarisinternals wiki about ZFS/NFS integration, but what I think is really important is 
a comparison between Linux and Solaris/ZFS on the server side.
 That would be very useful to see, for example, what "consistency" I get with Linux and XFS, 
ext3, etc. at "that" performance, and "how" I can configure a similar NFS service on 
Solaris/ZFS. 
 We have some information about it here: 
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
 but there is no comparison with Linux, which I think is important.
 What I mean is that the people who know a lot about the NFS protocol and about the 
filesystem features should make such a comparison (to facilitate adoption and users' 
comparisons). I think there are many users comparing oranges with apples.
 Another example (correct me if I am wrong): until kernel 2.4.20 (at least), the default 
export option on Linux was "async" (on Solaris I think it was always "sync"). Another point 
is the "commit" operation, which was not implemented in NFSv2; the server just replied with 
an "OK", but the data was not in stable storage yet (here the ZIL and Roch's blog entry are 
excellent).
 That's it: I'm proposing the creation of a matrix/table of features and their performance 
impact, as well as a comparison with other implementations and their implications.
 Thanks very much for your time, and sorry for the long post.

 Leal.
 
 


[zfs-discuss] read/write NFS block size and ZFS

2007-11-15 Thread msl
Hello all...
 I'm migrating an NFS server from Linux to Solaris, and all the clients (Linux) are using 
read/write block sizes of 8192. That gave the best performance, and it is working pretty 
well (NFSv3). I want to use all of ZFS's advantages, and I know I can have a performance 
loss, so I want to know whether there is a "recommendation" for the NFS block size with ZFS, 
or what you think about it.
Should I just test, or is there no need for such tuning with ZFS?
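 Concretely, what I have today on the clients and what I would test on the ZFS side is 
something like this (server name, path and the 8k recordsize are just examples, not a 
recommendation):

  # Linux client today (NFSv3)
  mount -t nfs -o vers=3,rsize=8192,wsize=8192 server:/export/data /mnt/data

  # possible server-side experiment on Solaris/ZFS
  zfs set recordsize=8k mypool/export/data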
Thanks very much for your time!
Leal.
 
 


Re: [zfs-discuss] zfs mount points (all-or-nothing)

2007-09-19 Thread msl
Any ideas on this?
Or will I need to update the vfstab file?
 
 


[zfs-discuss] zfs mount points (all-or-nothing)

2007-09-10 Thread msl
Hello all,
 Is there a way to configure the zpool for legacy mounting and still have all the 
filesystems in that pool mounted automatically?
 I will try to explain better:
 - Imagine that I have a ZFS pool with 1000 filesystems. 
 - I want to control the mount/unmount of that pool, so I configured the zpool's mountpoint 
as legacy. 
 - But I don't want to have to mount the other 1000 filesystems by hand... so, when I issue 
a "mount -F zfs mypool", all the filesystems would be mounted too (I think the mountpoint 
property is per-filesystem).
 Sorry if this is a dummy question, but the all-or-nothing configuration that I "think" is 
the solution is not what I really need.
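 The closest I have got is to leave the filesystems on their normal (inherited) mountpoints 
and script the mass mount/unmount myself, something like this ("mypool" is just an example, 
and I'm not sure it is the intended way):

  # mount every dataset in the pool when I decide to
  zfs list -rH -o name mypool | xargs -n 1 zfs mount 2>/dev/null

  # and unmount them again (note: -a touches every ZFS filesystem, not only mypool)
  zfs unmount -a

 But I would still prefer a single mount command on the pool that brings the children with 
it.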
 Thanks for your time!
 
 


[zfs-discuss] ZFS/NFS - SC 3.2 and AVS - HOWTO [SOLVED]

2007-08-21 Thread msl
Hello,
I will try to concentrate in this post the information about the configuration
that I'm deploying, hoping it will be useful for somebody else.

The objective of my tests is:

High-availability ZFS/NFS services on Solaris 10 using a two-node Sun Cluster.

 There are two scenarios:
1) Using "shared discs" (global devices).
2) Using "non-shared discs" (local devices).

- Solution (1) is simpler, and we can use the HAStoragePlus resource type that
comes with the SC 3.2 installation. And since the data is "unique", I mean, in a
failover/switchback scenario the "same" disc will be used by one host or the
other, the practical purpose is clear.

The HOWTO for ZFS/NFS HA using shared discs (global devices) can be found here:
 Sun Cluster 3.2 installation:
 http://www.posix.brte.com.br/blog/?p=71

 HA procedure for shared discs:
 http://www.posix.brte.com.br/blog/?p=68

- With solution (2) the purpose may be more difficult to see... but I think
there are many uses:
 a) We can use it as a share for binaries, so the fact that the discs are not
the "same" is not a real problem.
 b) We can use it for applications that can handle "loss of data", I mean, the
app knows when its data is corrupt and can restart its task. So the application
just needs a "share" (like a "tmp" directory).
 c) We will sync/replicate the data using AVS (a sketch of the replication
setup follows the links below).

 So, I think ZFS/NFS/SC and AVS can let us use those local SATA discs of 300GB
or 500GB in a consistent way. Don't you think?

The HOWTO for ZFS/NFS HA using non-shared discs (local devices) can be found here:
 AVS installation on Solaris 10 u3:

 http://www.posix.brte.com.br/blog/?p=74

 HA procedure for non-shared discs:
 Part I:  http://www.posix.brte.com.br/blog/?p=73

 Part II: http://www.posix.brte.com.br/blog/?p=75
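
 For scenario (2), the AVS remote-mirror piece boils down to one sndradm set per replicated 
device, roughly like this (hostnames, devices and bitmap slices are made up -- please check 
the AVS documentation for the exact syntax):

  # enable replication of the data slice from nodeA to nodeB
  sndradm -n -e nodeA /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
              nodeB /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 ip sync

  # check the replication state
  sndradm -P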

 I hope that information helps somebody else with the same (crazy) ideas as me.
 I will appreciate your comments!

 Thanks for your time.

 Leal.
 
 


[zfs-discuss] ZFS snapshots and NFS

2007-05-30 Thread msl
Hello all,
 Sorry if you think this question is stupid, but I need to ask...
 Imagine a normal situation on an NFS server with "N" client nodes. The object of the shares 
is software (/usr, for instance), and the admin wants to make new versions of a few packages 
available.
 So, wouldn't it be nice if the admin could associate an NFS share with a ZFS snapshot? 
 I mean, the admin would have the option to take a snapshot of that ZFS filesystem, update 
the binaries, and only a few machines would see the changes.
 I know there are a lot of ways to do that... but I think this would be nicer (better). It 
would save space, and the administration task would be very easy (ZFS is meant to be easy). 
I think ZFS has solved the "stale NFS file handle" problem at the mount point, and all that 
would be necessary is a respawn of the processes already in memory (on the migrated 
clients). So... 
 What do you think about a feature like that? Useful? Crazy?
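 Today the closest I can do by hand is something like this (names are made up), which is 
roughly what I would like the share/snapshot association to do for me:

  # freeze the current contents and expose them side by side with the updated tree
  zfs snapshot mypool/usr@v1
  zfs clone mypool/usr@v1 mypool/usr-v1
  zfs set sharenfs=ro mypool/usr-v1

  # clients that should not see the update mount server:/mypool/usr-v1,
  # while the migrated clients keep mounting server:/mypool/usr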
 Thanks very much for your time!

byLeal
[www.posix.brte.com.br/blog]
 
 


[zfs-discuss] ZFS vs UFS

2006-11-06 Thread msl
Hello all,
 As you know, I'm making some "screencasts" about a few Solaris features. Those screencasts 
are one part of many tests I'm doing with Solaris 10. Now, in some tests with DTrace, I 
noticed an interesting point: the creat64 and unlink system call times are uniform on ZFS, 
but show some strange times (a large standard deviation) on UFS.
 I have two simple questions:
 a) Can the filesystem utilization be the problem? I mean, the UFS filesystem is 86% full, 
and the ZFS one is almost empty.
 b) Can the difference be structural? I mean, does it have an explanation in the design of 
the two filesystems? If so, can you quickly explain it to me (links to the OpenSolaris 
code)? Then I can put it in the blog (screencasts).
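 For reference, the kind of measurement I'm doing is essentially a DTrace one-liner along 
these lines (a simplified sketch, not the exact script from the screencast):

  dtrace -n '
  syscall::creat64:entry, syscall::unlink:entry { self->ts = timestamp; }
  syscall::creat64:return, syscall::unlink:return /self->ts/ {
      @[probefunc] = quantize(timestamp - self->ts);
      self->ts = 0;
  }'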
 Thanks very much for your time.

 The url: http://www.posix.brte.com.br/blog/?page_id=30

Leal.
 
 


[zfs-discuss] Re: ZFS, home and Linux

2006-10-23 Thread msl
Perfect, thanks for all the answers. The solution that Darren suggested to me can be 
implemented even between Linux -> Linux. No more no_root_squash on home directories, which 
is a bad thing.
 Thanks again.
 
 


[zfs-discuss] Re: ZFS, home and Linux

2006-10-19 Thread msl
OK, thanks very much for your answer. I will look at the automounter. But how would the PAM 
module work? Running on the Linux machine and creating a ZFS filesystem on a Solaris server 
(via NFS)? 
 Thanks again.
 
 


[zfs-discuss] ZFS, home and Linux

2006-10-17 Thread msl
Hello,
I'm trying to implement a NAS server with Solaris/NFS and, of course, ZFS. But for that we 
have a little problem... what about the /home filesystem? I mean, I have a lot of Linux 
clients, and the "/home" directory is on an NFS server (today, Linux). I want to use ZFS and 
change the home "directories" like /home/leal into "filesystems" like /home/leal (just as 
the documentation recommends). Today a PAM module (pam_mkhomedir) solves that problem, 
creating the user home directory on demand. Do you have a solution for that? Like a Linux 
PAM module that creates a ZFS filesystem under /home instead of an ordinary directory? So I 
can have a per-user filesystem under "/home/"... Of course, with a corresponding service on 
the Solaris NFS server... 
 I'm asking because, if you agree that it is a "problem", we could create a project on 
opensolaris.org to work on it. Or maybe you have a trivial solution that I'm not seeing.
Thanks very much for your time!
 
 