Re: [zfs-discuss] Snapshots, txgs and performance

2010-06-11 Thread Marcelo Leal
Hello there, I think you should share it with the list if you can; it seems like interesting work. ZFS has some issues with snapshots and spa_sync performance for snapshot deletion. Thanks, Leal [ http://www.eall.com.br/blog ]

Re: [zfs-discuss] SSD (SLC) for cache...

2009-08-30 Thread Marcelo Leal
Thanks, Adam. So, if I understand correctly, MLC SSDs being more appropriate for read cache is more theory than practice right now. Right? I mean, Sun is only using SLC SSDs? That would explain the SLC-only support on Sun hardware (the x42xx series). Thanks again. Leal [ http://www.eall.com.br/blog ]

Re: [zfs-discuss] SSD (SLC) for cache...

2009-08-11 Thread Marcelo Leal
Hello David... Thanks for your answer, but I was not talking about buying disks... I think you misunderstood my email (or my bad English); I know the performance improvements from using a cache device. My question is about SSDs, and the difference between using SLC for readzillas instead of MLC. Thanks...

[zfs-discuss] SSD (SLC) for cache...

2009-08-11 Thread Marcelo Leal
Hello there... Many companies (including Sun) only ship hardware with support for SLC... As I need both, I just want to hear about your experiences using SLC SSDs for ZFS cache. One point is cost, but I want to know if the performance is much different, because the two are created specifically to p...

Re: [zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-28 Thread Marcelo Leal
OK Bob, but I think that is the "picket fencing" problem... and so we are talking about committing the sync operations to disk. What I'm seeing is no read activity from the disks while the slog is being written. The disks are at "zero" (no reads, no writes). Thanks a lot for your reply. Leal [ http://www.eall.com.br/blog ]

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Marcelo Leal
> That's only one element of it Bob. ZFS also needs devices to fail
> quickly and in a predictable manner.
>
> A consumer grade hard disk could lock up your entire pool as it
> fails. The kit Sun supply is more likely to fail in a manner ZFS can
> cope with.

I agree 100%. Hardware, firmware, ...

Re: [zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-27 Thread Marcelo Leal
Hello, Well, I'm trying to understand this workload, but all I have to do to reproduce it is flood the SSD with writes, and the disks show no activity. I'm testing with a link aggregation (two links), and for one or two seconds there is no read activity (output from the server). Right now I'm suspecting...

[zfs-discuss] Fishworks iSCSI cache enabled...

2009-07-25 Thread Marcelo Leal
Hello all, Is anybody using the iSCSI "cache enabled" option on the 7000 series? I'm talking about OpenSolaris (ZFS) as an iSCSI initiator, because I don't know of another filesystem that handles disk caches. So, was that option created for ZFS ;-)? Any suggestions on this? Thanks, Leal [ http://www.eall.com.br/blog ]

[zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-24 Thread Marcelo Leal
Hello all... I'm seeing this behaviour on an old build (89), and I just want to hear from you whether there is a known bug about it. I'm aware of the "picket fencing" problem, and that ZFS is not choosing correctly whether writing to the slog is better or not (i.e., whether we would get better throughput from the disks)...
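
When it happens, watching the physical devices and the pool at one-second intervals makes the stall easy to see. A minimal sketch (pool and device names are placeholders for your setup):

  # iostat -xnz 1              # per-device view: slog busy, data disks idle
  # zpool iostat -v mypool 1   # same picture, broken down per vdev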

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-01 Thread Marcelo Leal
> Note that this issue does not apply at all to NFS service, database
> service, or any other usage which does synchronous writes.
>
> Bob

Hello Bob, There is an impact for *all* workloads. Whether the write is sync or not only decides whether it goes to the slog (SSD) or not. But the...

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-24 Thread Marcelo Leal
I think that is the purpose of the current implementation: http://blogs.sun.com/roch/entry/the_new_zfs_write_throttle But it seems it is not that easy... As I understood what Roch said, the cause is not always a "hardy" writer. Leal [ http://www.eall.com.br/blog ]

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-24 Thread Marcelo Leal
Hello Bob, I think that is related to my post about "zio_taskq_threads and TXG sync": ( http://www.opensolaris.org/jive/thread.jspa?threadID=105703&tstart=0 ) Roch did say that this is on top of the performance problems, and in the same email I talked about the change from 5s to 30s, which I...

[zfs-discuss] zio_taskq_threads and TXG sync

2009-06-16 Thread Marcelo Leal
Hello all, I'm trying to understand the ZFS IO scheduler ( http://www.eall.com.br/blog/?p=1170 ), and why the system sometimes seems to be "stalled" for a few seconds, during which every application that needs some IO (mostly reads, I think) has serious problems. That can be a big problem for iSCSI or NFS...
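
On builds that expose it, the interval between txg syncs can be read (and, carefully, written) with mdb; the tunable name and its default changed between builds, so treat this as a sketch to verify against your own kernel:

  # echo zfs_txg_timeout/D | mdb -k       # seconds between txg syncs (30 on newer builds)
  # echo zfs_txg_timeout/W 0t5 | mdb -kw  # experiment: back to the old 5s interval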

[zfs-discuss] ZFS and Amanda

2009-01-27 Thread Marcelo Leal
Hello all, Is there some project to integrate Amanda on OpenSolaris, or some howto for integration with ZFS? Any use cases (using the open source version)? The Amanda site has a few instructions, but I think here we can create something more specific to OpenSolaris. Thanks.

[zfs-discuss] E2BIG

2009-01-26 Thread Marcelo Leal
Hello all... We are getting this error: "E2BIG - Arg list too long" when trying to send incremental backups (b89 -> b101). Do you know of any bugs related to that? I had a look at the archives and Google but could not find anything. What I did find was something related to wrong timestamps...

Re: [zfs-discuss] NFS Block Monitor

2009-01-24 Thread Marcelo Leal
FYI (version 0.3): http://www.eall.com.br/blog/?p=970 Leal [ http://www.eall.com.br/blog ]

> Hello all..
> I did some tests to understand the behaviour of ZFS and the slog
> (SSD), and to understand the workload I implemented a simple piece
> of software to visualize the data blocks (read/write).
> I'm...

[zfs-discuss] NFS Block Monitor

2009-01-12 Thread Marcelo Leal
Hello all.. I did some tests to understand the behaviour of ZFS and the slog (SSD), and to understand the workload I implemented a simple piece of software to visualize the data blocks (read/write). I'm posting the link here in case somebody wants to try it. http://www.eall.com.br/blog/?p=906 Thanks...

Re: [zfs-discuss] How to find out the zpool of an uberblock printed with the fbt:zfs:uberblock_update: probes?

2009-01-07 Thread Marcelo Leal
> ...ls, which means that the corresponding uberblocks on disk will be
> skipped for writing (if I did not overlook anything), and the device
> will likely be worn out later.

I need to know what uberblock_update is... it seems not related to txg, sync of disks, labels, n...
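
One way to answer the question in the subject, i.e. tying each uberblock_update to its pool and txg, is to walk from the vdev argument back to the spa. A sketch, with struct member names taken from the OpenSolaris source of that era (verify them against your build):

  # dtrace -qn 'fbt:zfs:uberblock_update:entry
  {
      /* args[1] is the root vdev; its spa carries the pool name */
      printf("%s txg %d\n", stringof(args[1]->vdev_spa->spa_name), args[2]);
  }'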

Re: [zfs-discuss] How to find out the zpool of an uberblock printed with the fbt:zfs:uberblock_update: probes?

2009-01-07 Thread Marcelo Leal
Hello Bernd, Now I see your point... ;-) Well, doing some "very simple" math:
- One txg every 5 seconds = 17,280 txgs/day;
- Each txg writing 1MB (L0-L3) = ~17GB/day.
In the paper the math was: 10 years = (2.7 * the size of the USB drive) in writes per day, right? So, on a 4GB drive, that would be ~10GB...
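
Checking the same numbers with bc (assuming, as above, ~1MB of uberblock/label writes per txg):

  $ echo "86400 / 5" | bc        # txgs per day, one every 5 seconds
  17280
  $ echo "17280 / 1024" | bc -l  # GB/day at 1MB per txg
  16.87500000000000000000
  $ echo "2.7 * 4" | bc -l       # the paper's figure applied to a 4GB drive
  10.8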

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Marcelo Leal
Hello, - One way is virtualization: if you use a virtualization technology that uses NFS, for example, you could put your virtual images on a ZFS filesystem. NFS can be used without virtualization too, but as you said the machines are Windows, and I don't think the NFS client for Windows is product...

Re: [zfs-discuss] How to find out the zpool of an uberblock printed with the fbt:zfs:uberblock_update: probes?

2009-01-06 Thread Marcelo Leal
> Hi,

Hello Bernd,

> After I published a blog entry about installing OpenSolaris 2008.11
> on a USB stick, I read a comment about a possible issue with wearing
> out blocks on the USB stick after some time because ZFS overwrites
> its uberblocks in place.

I did not understand well w...

Re: [zfs-discuss] How ZFS decides if write to the slog or directly to the POOL

2009-01-05 Thread Marcelo Leal
> Marcelo Leal writes:
> > Hello all,
> > Some days ago I was looking at the code and saw some variable that
> > seems to make a correlation between the size of the data and
> > whether the data is written to the slog or directly to the pool.
> B...

Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-31 Thread Marcelo Leal
Thanks a lot, Sanjeev! If you look at my first message you will see that discrepancy in zdb... Leal. [http://www.eall.com.br/blog]

Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-30 Thread Marcelo Leal
execve("/usr/bin/ls", 0x08047DA8, 0x08047DB4) argc = 2 mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFF resolvepath("/usr/lib/ld.so.1", "/lib/ld.so.1", 1023) = 12 resolvepath("/usr/bin/ls", "/usr/bin/ls", 1023) = 11 xstat(2, "/usr/bin/ls", 0x08047A5

Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-30 Thread Marcelo Leal
execve("/usr/bin/rm", 0x08047DBC, 0x08047DC8) argc = 2 mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFF resolvepath("/usr/lib/ld.so.1", "/lib/ld.so.1", 1023) = 12 resolvepath("/usr/bin/rm", "/usr/bin/rm", 1023) = 11 sysconfig(_CONFIG_PAGESIZE)

Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-30 Thread Marcelo Leal
Hello all,

# zpool status
  pool: mypool
 state: ONLINE
 scrub: scrub completed after 0h2m with 0 errors on Fri Dec 19 09:32:42 2008
config:

        NAME       STATE     READ WRITE CKSUM
        storage    ONLINE       0     0     0
          mirror   ONLINE       0     0     0
...

[zfs-discuss] How ZFS decides if write to the slog or directly to the POOL

2008-12-29 Thread Marcelo Leal
Hello all, Some days ago I was looking at the code and saw some variable that seems to make a correlation between the size of the data and whether the data is written to the slog or directly to the pool. But I could not find it anymore, and I think it is way more complex than that. For example, if...
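
(A likely candidate is the zfs_immediate_write_sz tunable in zfs_log.c: roughly, writes below it are copied into the log record itself, while larger ones are written out to the pool and only referenced from the log. The name, default, and exact semantics here are from the OpenSolaris source of that era, so verify against your own build:)

  # echo zfs_immediate_write_sz/D | mdb -k
  zfs_immediate_write_sz:         32768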

Re: [zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-29 Thread Marcelo Leal
Hello all... Can that be caused by some cache on the LSI controller? Some flush that the controller or disk did not honour?

[zfs-discuss] OpenSolaris panic while ZFS receiving (SXDE 89)

2008-12-19 Thread Marcelo Leal
Hello all, I'm getting many OpenSolaris kernel panics while sending/receiving data. I tried creating another pool and using another host to test, and got the same error. And the send side can be any server (I tested with four different servers, all build 89). The panic message: --- cut here ---

[zfs-discuss] Cannot remove a file on a GOOD ZFS filesystem

2008-12-17 Thread Marcelo Leal
Hello all, First off, I'm talking about SXDE build 89. Sorry if this was discussed here before, but I did not find anything related in the archives, and I think it is a "weird" issue... If I try to remove a specific file, I get:
# rm file1
rm: file1: No such file or directory
# rm -rf dir2
rm: ...

Re: [zfs-discuss] Lost Disk Space

2008-11-06 Thread Marcelo Leal
> A percentage of the total space is reserved for pool overhead and is
> not allocatable, but shows up as available in "zpool list".

Something to change/show in the future? -- Leal [http://www.posix.brte.com.br/blog]

Re: [zfs-discuss] Enabling load balance with zfs

2008-10-31 Thread Marcelo Leal
I think the better solution is to have two pools and write a script to change the recording destination from time to time, or move the files afterwards. Like the prototype by Hartz. P.S.: Is this a reality show? ;-)

Re: [zfs-discuss] Managing low free space and snapshots

2008-10-30 Thread Marcelo Leal
Hello, In the situation you described, if I understood correctly, you would not have any space. When you take a snapshot, the snapshot references the blocks that are older than it... E.g.: you have a 500GB disk and create a 5GB file; you have 495GB of free space. Then you delete the file, and you have...
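
A minimal sequence that shows the effect (pool, dataset, and file names are placeholders):

  # zfs snapshot mypool/data@before
  # rm /mypool/data/bigfile
  # zfs list -o name,used,refer mypool/data mypool/data@before

After the rm, the 5GB is not freed: it is still referenced by mypool/data@before, and it only comes back when that snapshot is destroyed.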

Re: [zfs-discuss] DNLC and ARC

2008-10-30 Thread Marcelo Leal
...and use what remains in memory to cache data. Maybe that kind of tuning would be useful for just a few workloads, but it could be a *huge* enhancement for those workloads. Leal -- posix rules -- [http://www.posix.brte.com.br/blog]

> On 10/30/08 04:50, Marcelo Leal wrote: ...

[zfs-discuss] DNLC and ARC

2008-10-30 Thread Marcelo Leal
Hello, Is the DNLC concept gone in ZFS, or is it in the ARC too? I mean, all the caching in ZFS is the ARC, right? I was wondering if we can tune the DNLC in ZFS like in UFS.. If we have too *many* files and directories, I guess we can get better performance by having all the metadata cached, and that is ev...
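
On builds that already have the primarycache property (check with "zfs get primarycache"; it is not on every 2008 build), something close to this tuning exists per filesystem:

  # zfs set primarycache=metadata mypool/manyfiles   # keep only metadata in the ARC
  # zfs get primarycache mypool/manyfiles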

Re: [zfs-discuss] COW & updates [C1]

2008-10-28 Thread Marcelo Leal
> Because of one change to just one file, the MOS is a brand new one

Yes, "all writes in ZFS are done in transaction groups".. So every time there is a commit, something is really written to disk, there is a new txg, and all the blocks written are related to that txg (even the uberblock). I don't...

Re: [zfs-discuss] Disabling COMMIT at NFS level,

2008-10-22 Thread Marcelo Leal
> On 10/22/08 13:56, Marcelo Leal wrote:
>>> But the slog is the ZIL, formally a *separate* intent log.
>>
>> No, the slog is not the ZIL!
>
> Ok, when you wrote this:
> "I've been slogging for a while on support for ...

Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-22 Thread Marcelo Leal
>> But the slog is the ZIL, formally a *separate* intent log.
>
> No, the slog is not the ZIL!

Ok, when you wrote this: "I've been slogging for a while on support for separate intent logs (slogs) for ZFS. Without slogs, the ZIL is allocated dynamically from the main pool", you were talki...

Re: [zfs-discuss] Building a 2nd pool, can I do it in stages?

2008-10-22 Thread Marcelo Leal
Hello there, It's not a wiki, but it has many considerations about your question: http://www.opensolaris.org/jive/thread.jspa?threadID=78841&tstart=60 Leal.

Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-22 Thread Marcelo Leal
> Bah, I've done it again. I meant use it as a slog device, not as the
> ZIL...

But the slog is the ZIL, formally a *separate* intent log. What's the matter? I think everyone understood. I think you made a confusion a few threads back between the ZIL and the L2ARC. That is a different thing.. ;-)

Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-22 Thread Marcelo Leal
I agree with you, Constantin, that the sync is a performance problem; in the same way, I think that in an NFS environment it is simply *required*. If the sync can be relaxed in a "specific NFS environment", my first opinion is that NFS is not necessary in that environment in the first place. IMHO a proto...

Re: [zfs-discuss] Booting 0811 from USB Stick

2008-10-21 Thread Marcelo Leal
Hello all, Did you do an install on the USB stick, or did you use the Distribution Constructor (DC)? Leal.

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-10-21 Thread Marcelo Leal
Hello Roch!

> Leave the default recordsize. With 128K recordsize, files smaller
> than 128K are stored as a single record tightly fitted to the
> smallest possible # of disk sectors. Reads and writes are then
> managed with fewer ops.

For writes ZFS is dynamic, but what about reads? If I...

Re: [zfs-discuss] Lost Disk Space

2008-10-20 Thread Marcelo Leal
Hello there... I have seen that already, and talked with some guys without getting an answer too... ;-) Actually, this week I did not see a discrepancy between the tools, but the pool information was wrong (space used). Exporting/importing, scrubbing, etc. did not solve it. I know that ZFS is "async" in its status report...

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-17 Thread Marcelo Leal
Hello all, I think he has a point here... Maybe that would be an interesting feature for that kind of workload. Caching all the metadata would make the rsync task faster (for many files). Trying to cache the data is really a waste of time, because the data will not be read again, and will jus...

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-15 Thread Marcelo Leal
So, there is no raid10 in a Solaris/ZFS setup? I'm talking about "no redundancy"...
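
For the record, in ZFS the raid10 equivalent is a pool of several mirror vdevs (ZFS stripes across the vdevs), and a pool of bare disks is the raid0 / no-redundancy case. Device names below are placeholders:

  # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0   # "raid10"
  # zpool create tank c0t0d0 c0t1d0 c0t2d0 c0t3d0                 # "raid0", no redundancy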

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-15 Thread Marcelo Leal
Are you talking about what he had in the "logic of the configuration at the top level", or are you saying his top-level pool is a raidz? I would think his top-level zpool is a raid0...

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Marcelo Leal
Hello all, I think in SS 11 it should be -xarch=amd64. Leal.

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-10 Thread Marcelo Leal
> On Fri, Oct 10, 2008 at 06:15:16AM -0700, Marcelo Leal wrote:
> > - "ZFS does not need fsck".
> > Ok, that's a great statement, but I think ZFS needs one. It really
> > does. And in my opinion an enhanced zdb would be the solution.
> > Flexibility.

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-10 Thread Marcelo Leal
Hello all, I think the problem here is ZFS's capacity to recover from a failure. Forgive me, but in trying to create code "without failures", maybe the hackers forgot that other people can make mistakes (even if they can't). - "ZFS does not need fsck". Ok, that's a great statement,...

Re: [zfs-discuss] ZSF Solaris

2008-09-30 Thread Marcelo Leal
ZFS has no limit on snapshots and filesystems either, but try to create "a lot" of snapshots and filesystems and you will also have to wait "a lot" for your pool to import... ;-) I think you should not think about the "limits", but about performance. Any filesystem with *too many* entries per directory will...

Re: [zfs-discuss] ZFS on Hitachi SAN, pool recovery

2008-09-24 Thread Marcelo Leal
Just out of curiosity, why not use SC? Leal.

Re: [zfs-discuss] Problem: ZFS export drive

2008-09-23 Thread Marcelo Leal
What was the configuration of that pool? Was it a mirror, raidz, or just a stripe? If it was just a stripe and you lose one disk, you have problems... Leal.

Re: [zfs-discuss] A question about recordsize...

2008-09-09 Thread Marcelo Leal
Hello milek, Is that information still true?

"ZFS algorithm for selecting block sizes: The initial block size is the smallest supported block size larger than the first write to the file. Grow to the next largest block size for the entire file when the total file length increases beyond the curr..."

Re: [zfs-discuss] ZFS send/receive filehandle issue

2008-09-09 Thread Marcelo Leal
Hello Adrian, Thanks, I was using send/receive (that's why I put it in the subject ;), and I would like to know if ZFS could have some solution for that, as I said before. The send/receive copy is not an "exact" copy of the filesystem (creation time, fsid, etc. are different). So, the FH using that for...

[zfs-discuss] ZFS send/receive filehandle issue

2008-09-08 Thread Marcelo Leal
Hello all, Is there some way to work around the filehandle issue with a send/receive ZFS procedure? In the ZFS beginning, I had a conversation with some of the devel guys and asked how ZFS would treat the NFS filehandle.. IIRC, the answer was: "No problem, the NFS filehandle will not depend on t...

Re: [zfs-discuss] A question about recordsize...

2008-09-08 Thread Marcelo Leal
> On Fri, 5 Sep 2008, Marcelo Leal wrote:
> > 4 - The last one... ;-) For the FSB allocation, how does zfs know
> > the file size, to know if the file is smaller than the FSB?
> > Something related to the txg? When the write goes to the disk,
> > the ...

[zfs-discuss] A question about recordsize...

2008-09-05 Thread Marcelo Leal
Hello! Assuming the default recordsize (FSB) in zfs is 128k:
1 - If I have a 10k file, zfs will allocate an FSB of 10k, right? As zfs is not static like other filesystems, I don't have that old internal fragmentation...
2 - If the above is right, I don't need to adjust the re...
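
Point 1 can be checked on disk; a sketch assuming a filesystem mounted at /mypool/fs (names are placeholders):

  # zfs get recordsize mypool/fs       # 128K by default
  # mkfile 10k /mypool/fs/small
  # sync; du -k /mypool/fs/small       # roughly 10K allocated, not 128K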

Re: [zfs-discuss] send/receive statistics

2008-09-05 Thread Marcelo Leal
Thanks a lot for the answers! Relling said something about checksums, and I asked him for a more "detailed" explanation of it, because I did not understand "what" checksum the receive side has to check, as the send can be redirected to a file on a disk or tape... In the end, I think I...

Re: [zfs-discuss] Terabyte scrub

2008-09-05 Thread Marcelo Leal
You are right! Looking at the numbers, I could not think very well ;-) What matters is the "used" size, not the storage capacity! My fault... Thanks a lot for the answers. Leal.

[zfs-discuss] Terabyte scrub

2008-09-04 Thread Marcelo Leal
Hello all, I was used to mirrors and Solaris 10, on which the scrub process for 500GB took about two hours... and in tests with Solaris Express (snv_79a), terabytes in minutes. I searched for release changes in the scrub process and could not find anything about enhancements of this magnitude...

[zfs-discuss] send/receive statistics

2008-09-04 Thread Marcelo Leal
Hello all, Are there any plans for (or is there already) a send/receive way to get transfer backup statistics? I mean, "how much" was transferred, the time and/or bytes/sec? And the last question... I saw in many threads the question about "the consistency of send/receive through ssh"... but no...
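
Until something is built in, a crude workaround is a dd in the pipeline plus time(1): dd prints the record counts on stderr when the stream ends, and with a known block size that gives bytes and bytes/sec. Host and dataset names below are placeholders:

  # time zfs send mypool/fs@today | dd bs=1048576 | ssh backuphost zfs receive backup/fs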

Re: [zfs-discuss] CIFS HA service with solaris 10 and SC 3.2

2008-06-23 Thread Marcelo Leal
Thanks all for the answers! It seems the way to get an OpenSolaris storage solution is the CIFS project. And there is no agent to provide HA, so that seems like a good project too. Thanks, Leal.

Re: [zfs-discuss] CIFS HA service with solaris 10 and SC 3.2

2008-06-22 Thread Marcelo Leal
Hello all, I would like to continue with this topic. After doing some "research" on it, I have some (many) doubts, and maybe we could use this thread to give some answers to me and other users who may have the same questions... First, sorry for "CC"ing so many forums, but I think I...

[zfs-discuss] ZFS data recovery command

2008-06-17 Thread Marcelo Leal
Hello all, In a "traditional" filesystem, we have a few filesystems, but with ZFS, we can have thousands.. The question is: "There is a command or procedure to remake the filesystems, in a recovery from backup scenario"? I mean, imagine that i have a ZFS pool with 1,000 filesystems, and for "s
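
The layout can at least be captured beforehand and replayed with a small loop; a rough sketch (pool and path names are placeholders; the pool's own line in the list will fail harmlessly because the pool already exists):

  # zfs list -H -o name -t filesystem -r mypool > /var/tmp/fslist
  (rebuild the pool, then:)
  # while read fs; do zfs create -p "$fs"; done < /var/tmp/fslist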

Re: [zfs-discuss] ZFS with raidz

2008-05-29 Thread Marcelo Leal
Hello... If I have understood correctly, you will have a host with EMC RAID5 disks. Is that right? You pay a lot of money to have EMC disks, and I think it is not a good idea to have another layer of *any* RAID on top of it. If you have EMC RAID5 (e.g. Symmetrix), you don't need a software RAID...

Re: [zfs-discuss] cp -p gives errors on Linux w/ NFS-mounted ZFS

2008-05-16 Thread Marcelo Leal
Hello all, I'm having the same problem here; any news? I need to use ACLs on the GNU/Linux clients. I'm using NFSv3, and on the GNU/Linux servers that feature was working; I think we need a solution for Solaris/OpenSolaris. Now, with the "dmm" project, how can we start a migration process, if...

Re: [zfs-discuss] ZFS cli for REMOTE Administration

2008-05-08 Thread Marcelo Leal
No answer... Well, do you not have this problem, or is there another option to delegate such administration? I was wondering if we can delegate administration of a "single" filesystem to some user through the ZFS administration web console (port 6789). Can I create a user and give him administration rights...
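
Besides the web console, recent builds have "zfs allow" for exactly this kind of per-filesystem delegation (check that your build has it); user and dataset names below are placeholders:

  # zfs allow joe create,destroy,mount,snapshot mypool/joe
  # zfs allow mypool/joe        # show what has been delegated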

Re: [zfs-discuss] ZFS still crashing after patch

2008-05-05 Thread Marcelo Leal
Hello, If you believe the problem may be related to the ZIL code, you can try disabling it to debug (isolate) the problem. If it is not a file server (NFS), disabling the ZIL should not impact consistency. Leal.
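
On those builds the ZIL was disabled globally through the zil_disable tunable (a debug aid only, later removed; it only takes effect for filesystems mounted after the change):

  # echo zil_disable/W 1 | mdb -kw

or persistently in /etc/system:

  set zfs:zil_disable = 1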

[zfs-discuss] ZFS cli for REMOTE Administration

2008-05-02 Thread Marcelo Leal
Hello all, Some time ago I wrote a simple script to handle "on the fly" filesystem (zfs) creation for Linux clients (http://www.posix.brte.com.br/blog/?p=102). I was thinking of improving that script to handle more generic "remote" actions... but I think we could start a project on this: "A...