Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Mike Gerdts
On Sat, Jul 5, 2008 at 9:48 PM, Brian Hechinger <[EMAIL PROTECTED]> wrote: > On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote: >> $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes} >> unix:0:vopstats_zfs:nread 418787 >> unix:0:vopstats_zfs:

Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Mike Gerdts
iostat. While iostat shows physical reads and writes only, "zpool iostat" and fsstat show reads that are satisfied by a cache and never result in physical I/O activity. As such, a workload that looks write-intensive on UFS monitored via iostat may seem to have shifted to being very
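A quick way to see the two views side by side is to run fsstat next to iostat (the interval below is arbitrary):
$ fsstat zfs 5        # VFS-level ops, including reads satisfied from cache
$ iostat -xn 5        # physical I/O actually issued to the devices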

Re: [zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Mike Gerdts
elling/entry/zfs_raid_recommendations_space_performance http://blogs.sun.com/relling/entry/a_story_of_two_mttdl http://opensolaris.org/jive/thread.jspa?threadID=65564#255257 -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] evil tuning guide updates

2008-07-02 Thread Mike Gerdts
Good explanation at http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/046937.html -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-02 Thread Mike Gerdts
suggest it is fixed) proper dependencies do not exist to prevent paging activity after boot from trashing the crash dump in a shared swap+dump device - even when savecore is enabled. It is only by luck that you get anything out of it. Arguably this should be fixed by proper SMF depend
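For reference, a dedicated dump zvol avoids sharing the device with swap at all; a minimal sketch, assuming a ZFS-root layout where an rpool/dump zvol exists (names are illustrative):
# dumpadm -d /dev/zvol/dsk/rpool/dump    # dump device no longer shared with swap
# dumpadm                                # verify "Savecore enabled: yes" and the savecore directory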

Re: [zfs-discuss] Some basic questions about getting the best performance for database usage

2008-07-01 Thread Mike Gerdts
ly as physically large as the combined size of your fridge, your mom's fridge, and those of your three best friends that are out of college and have fridges significantly larger than a keg. 2. "Shared" as in one server's behavior can and may be somewhat likely t

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-01 Thread Mike Gerdts
On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote: > Mike Gerdts wrote: >> >> On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat <[EMAIL PROTECTED]> >> wrote: >>> >>> Instead we should take it completely out of their hands an

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-01 Thread Mike Gerdts
on "why does my machine suck?" can say that it has been excessively short on memory X times in recent history. Any of these approaches is miles above the Linux approach of finding a memory hog to kill. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays

2008-06-30 Thread Mike Gerdts
problems with I/O errors when doing a stat() of a file. Repeated tries fail, but a reboot seems to clear it. zpool scrub reports no errors and the pool consists of a single mirror vdev. I haven't filed a bug on this yet. -- Mike Gerdts http:

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-30 Thread Mike Gerdts
On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky <[EMAIL PROTECTED]> wrote: > Hi Mike, > > > Mike Gerdts wrote: >> >> On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]> >> wrote: >>> >>> Thank you very much all for this v

Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-27 Thread Mike Gerdts
The fact that the system is not resilient to any misstep is not a bug. If you removed /sbin/init the system would be hosed worse, but you would have gotten no error message before the reboot. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-25 Thread Mike Gerdts
On Wed, Jun 25, 2008 at 3:36 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > On Wed, Jun 25, 2008 at 3:09 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote: >> Well, I've seen core dumps bigger than 10GB (even without ZFS)... :) > > Was that the size in the dump device o

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-25 Thread Mike Gerdts
eported on the console after the dump completed. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-25 Thread Mike Gerdts
to enable (the yet non-existent) svc:/system/savecore:default. -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] resilver running for 35 trillion years

2008-06-24 Thread Mike Gerdts
According to the timestamps in my prompt, I'm thinking that virtualbox reset the time to zero while the command was running. This seems to happen from time to time, but this is the most entertaining result I have seen. -- Mike Gerdts http://mgerd

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-24 Thread Mike Gerdts
ument can be made for VMware, LDoms, Xen, etc., but those are much more likely to use jumpstart for installations than laptop-based VM's. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-24 Thread Mike Gerdts
On Tue, Jun 24, 2008 at 7:24 AM, Gary Mills <[EMAIL PROTECTED]> wrote: > On Mon, Jun 23, 2008 at 10:25:09PM -0500, Mike Gerdts wrote: >> >> Really it boils down to lots of file systems to hold the OS adds >> administrative complexity and rarely saves more work than it c

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-23 Thread Mike Gerdts
ems then patch. Of course today's development work will make the 3 hour outage for patching a thing of ancient history as well. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-23 Thread Mike Gerdts
th /. For example, /var/sadm has lots of information about which packages and patches are installed. There is a lot of other stuff that shouldn't be snapshotted with it. I have proposed /var/share to cope with this. http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-

Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Mike Gerdts
y supported. 3. There were numerous complaints of repeated timeouts when the snv_90 packages were released, resulting in having to restart the upgrade from the start. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Mike Gerdts
s should be easier and safer than patching is today. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zfs promote and ENOSPC (+panic with dtrace)

2008-06-11 Thread Mike Gerdts
On Wed, Jun 11, 2008 at 12:58 AM, Robin Guo <[EMAIL PROTECTED]> wrote: > Hi, Mike, > > It's like 6452872, it need enough space for 'zfs promote' Not really - in 6452872 a file system is at its quota before the promote is issued. I expect that a promote may cause

[zfs-discuss] zfs promote and ENOSPC

2008-06-09 Thread Mike Gerdts
Is it a bug in the documentation or zfs? -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mike Mackovitch
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote: > > On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote: > > > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: > >> > clients do not. Without per-filesystem mounts, 'df' on the client > will not report correct da

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mike Mackovitch
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote: > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: > > > > >> clients do not. Without per-filesystem mounts, 'df' on the client > > >> will not report correct data though. > > > > > > I expect that mirror mounts will be
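The server side of the one-filesystem-per-home approach is just a loop over zfs create; the pool, user names, and quota below are placeholders:
#!/bin/sh
# sketch: one ZFS filesystem with its own quota per user
for user in alice bob carol; do
    zfs create -o quota=2g -o sharenfs=on homepool/home/$user
done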

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-05 Thread Mike Gerdts
ments and suggestions and wrote a blog entry. http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-environment.html -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] LiveUpgrade Bug? -- ZFS root finally here in SNV90

2008-06-05 Thread Ellis, Mike
Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Thursday, June 05, 2008 1:56 PM To: Ellis, Mike Cc: ZFS discuss Subject: Re: [zfs-discuss] ZFS root finally here in SNV90 Mike, As we discussed, you can't currently break out other datasets besides /var. I'll add thi

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Ellis, Mike
In addition to the standard "containing the carnage" arguments used to justify splitting /var/tmp, /var/mail, /var/adm (process accounting etc), is there an interesting use-case where one would split out /var for "compression reasons" (as in, turn on compression for /var so that process accounting,

Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Ellis, Mike
The FAQ document ( http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a jumpstart profile example: install_type initial_install pool newpool auto auto auto mirror c0t0d0 c0t1d0 bootenv installbe bename sxce_xx The B90 jumpstart "check" program (SPARC) flags th
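Laid out one keyword per line, the profile quoted from that FAQ reads (the pool name, devices, and bename are the FAQ's own examples):
install_type  initial_install
pool          newpool auto auto auto mirror c0t0d0 c0t1d0
bootenv       installbe bename sxce_xx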

Re: [zfs-discuss] panic: avl_find() succeeded inside avl_add()

2008-06-01 Thread Mike Gerdts
On Sat, May 31, 2008 at 9:38 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > $ find /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix > /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix > /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/.make.state.lock > /ws/mount/onn

Re: [zfs-discuss] /var/sadm on zfs?

2008-06-01 Thread Mike Gerdts
mittedly, it is sloppy to just get rid of the undo.z file - the existence of the other related directories (save/) may trip something up. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] panic: avl_find() succeeded inside avl_add()

2008-05-31 Thread Mike Gerdts
On Sat, May 31, 2008 at 8:48 PM, Mike Gerdts <[EMAIL PROTECTED]> wrote: > I just experienced a zfs-related crash. I have filed a bug (don't > know number - grumble). I have a crash dump but little free space. If > someone would like some more info from the core, please let m

[zfs-discuss] panic: avl_find() succeeded inside avl_add()

2008-05-31 Thread Mike Gerdts
default
pool0  bootfs       -    default
pool0  delegation   on   default
pool0  autoreplace  off  default
pool0  temporary    off  default
-- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] /var/sadm on zfs?

2008-05-31 Thread Mike Gerdts
ll/copyright var/sadm/pkg/SPROcc/save/pspool/SPROcc/install/depend var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkginfo var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkgmap Notice the lack of undo.Z files (and associated patch directories), but the rest looks the same. -- Mike Ge

Re: [zfs-discuss] /var/sadm on zfs?

2008-05-31 Thread Mike Gerdts
says that this is a good idea - but I haven't seen any better method for getting rid of the cruft that builds up in /var/sadm either. I suspect that further discussion on this topic would be best directed to [EMAIL PROTECTED] or sun-managers mailing list (see http://www.sunmanagers.

Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-31 Thread Mike Gerdts
dac_read and file_dac_write. A backup program that has those privileges has everything it needs to gain full root access. I wish that there was a flag to open(2) to say not to update the atime and that there was a privilege that could be granted to allow this flag without granting file_dac_write

[zfs-discuss] Create ZFS now, add mirror later

2008-05-28 Thread E. Mike Durbin
Is there a way to create a zfs file system (e.g. zpool create boot /dev/dsk/c0t0d0s1) Then, (after vacating the old boot disk) add another device and make the zpool a mirror? (as in: zpool create boot mirror /dev/dsk/c0t0d0s1 /dev/dsk/c1t0d0s1) Thanks! emike
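The usual ZFS answer is zpool attach, which turns the existing single-disk vdev into a mirror; a sketch using the device names from the question:
# zpool create boot c0t0d0s1
# ...later, once the old boot disk has been vacated...
# zpool attach boot c0t0d0s1 c1t0d0s1    # resilvers and becomes a two-way mirror
# zpool status boot                      # watch the resilver finish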

Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-27 Thread Mike Gerdts
GB. However, it is *a lot* less than feeding system-board DIMM slots to workloads that use a lot of RAM but are fairly inactive. As such, a $10k PCIe card may be able to allow a $42k 64 GB T5240 handle 5+ times the number of not-too-busy J2EE instances. If anyone's done any modelling or test

Re: [zfs-discuss] ZFS: A general question

2008-05-24 Thread Ellis, Mike
I like the link you sent along... They did a nice job with that. (but it does show that mixing and matching vastly different drive-sizes is not exactly optimal...) http://www.drobo.com/drobolator/index.html Doing something like this for ZFS allowing people to create pools by mixing/match

Re: [zfs-discuss] zfs iostat

2008-05-18 Thread Mike Gerdts
"zfs iostat", or how can I get the stats with general > systemtools of a particular directory? > > any idea would be appreciated > karsten Have you tried fsstat? I think it will do what you are looking for whether it is zfs, ufs, tmpf

Re: [zfs-discuss] zfs! mirror and break

2008-05-09 Thread Mike DeMarco
> Mike DeMarco wrote: > > I currently have a zpool with two 8Gbyte disks in > it. I need to replace them with a single 56Gbyte > disk. > > > > with veritas I would just add the disk in as a > mirror and break off the other plex then destroy it. > > > >

[zfs-discuss] zfs! mirror and break

2008-05-08 Thread Mike DeMarco
I currently have a zpool with two 8Gbyte disks in it. I need to replace them with a single 56Gbyte disk. with veritas I would just add the disk in as a mirror and break off the other plex then destroy it. I see no way of being able to do this with zfs. Being able to migrate data without having
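There is no plex split/break equivalent, so the move generally goes through snapshots; a sketch assuming a build with recursive send (-R) and placeholder pool/device names:
# zpool create newpool c2t0d0                            # the 56 GB disk
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F -d newpool
# zpool destroy oldpool                                  # only after verifying the copy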

Re: [zfs-discuss] How many ZFS pools is it sensible to use on a single server?

2008-04-15 Thread Mike Gerdts
es over between systems independently either I need to have a zpool per zone or I need to have per-dataset replication. Considering that with some workloads 20+ zones on a T2000 is quite feasible, a T5240 could be pushing 80+ zones and as such a relatively large number of zpools. -- Mike Gerdts http

Re: [zfs-discuss] ZVOL access permissions?

2008-04-12 Thread Ellis, Mike
Could someone kindly provide some details on using a zvol in sparse-mode? Wouldn't the COW nature of zfs (assuming COW still applies on ZVOLS) quickly erode the sparse nature of the zvol? Would sparse data-presentation only work by delegating a part of a zpool to a zone, but that's at the file-

Re: [zfs-discuss] [storage-discuss] Preventing zpool imports on boot

2008-02-15 Thread Mike Gerdts
mport -R" mentioned the temporary attribute. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] [storage-discuss] Preventing zpool imports on boot

2008-02-15 Thread Mike Gerdts
e system reboots. This property can also be referred to by its shortened column name, "temp". > (I am trying to move this thread over to zfs-discuss, since I originally > posted to the wrong alias) storage-discuss trimmed in my reply. -- Mike Gerdt
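A minimal sketch of the behavior the man page excerpt describes: importing under an alternate root is treated as temporary, so the pool is not re-imported automatically at the next boot (pool name and path are placeholders):
# zpool import -R /mnt datapool    # temporary import under an alternate root
# zpool list datapool              # usable now, but left alone by the next boot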

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-02-02 Thread Mike Gerdts
Activating the updated OS should take only a few seconds longer than a standard "init 6". Failback is similarly easy. I can't remember the last time I swapped physical drives to minimize the outage during an upgrade. -- Mike Gerdts http://mgerdts.blogspot.com/ _

Re: [zfs-discuss] Resizing a mirror

2008-01-29 Thread Mike Gerdts
ol and then import it to get the additional space to be seen. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Moving zfs to an iscsci equallogic LUN

2008-01-15 Thread Ellis, Mike
Use zpool replace to swap one side of the mirror with the iscsi lun. -- mikee - Original Message - From: [EMAIL PROTECTED] <[EMAIL PROTECTED]> To: zfs-discuss@opensolaris.org Sent: Tue Jan 15 08:46:40 2008 Subject: Re: [zfs-discuss] Moving zfs to an iscsci equallogic LUN What would be
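A sketch of that sequence with the Solaris iSCSI initiator; the target address and device names are placeholders:
# iscsiadm add discovery-address 192.168.10.5:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi                             # create device nodes for the new LUN
# zpool replace tank c1t0d0 c3t600A0B80...d0    # swap one side of the mirror onto the LUN
# zpool status tank                             # wait for the resilver to complete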

Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread mike
except in my experience it is piss poor slow... but yes it is another option that is -basically- built on standards (i say that only because it's not really a traditional filesystem concept) On 1/14/08, David Magda <[EMAIL PROTECTED]> wrote: > > On Jan 14, 2008, at 17:15, mike

Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread mike
On 1/14/08, eric kustarz <[EMAIL PROTECTED]> wrote: > > On Jan 14, 2008, at 11:08 AM, Tim Cook wrote: > > > www.mozy.com appears to have unlimited backups for 4.95 a month. > > Hard to beat that. And they're owned by EMC now so you know they > > aren't going anywhere anytime soon. mozy's been oka

Re: [zfs-discuss] Can't access my data

2008-01-04 Thread Mike Gerdts
usy? # fuser /homes If you still can't resolve it # zfs set mountpoint=/somewhere_else homespool/homes # zfs mount -a (not sure this needed) # cd /somewhere_else -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Mike Gerdts
modes (missing license key especially major system changes, on-disk corruption) - Opportunities to do things previously not possible ZFS doesn't win on many of those, but with the improvements that I have seen throughout the storage stack it is somewhat likely that the require

[zfs-discuss] Deadlock with snv_80+

2007-12-27 Thread Mike Gerdts
mirror       ONLINE  0  0  0
  c0t1d0s7   ONLINE  0  0  0
  c0t0d0s7   ONLINE  0  0  0
errors: No known data errors
I'll keep the crash dump around for a while in the event that someone has interest in digging into it more. -- Mike Gerdts ht

Re: [zfs-discuss] fclose failing at 2G on a ZFS filesystem

2007-12-25 Thread Mike Gerdts
s failing... What command line is used to compile the code? I would guess that you don't have large file support. A variant of the following would probably be good: cc -c $CFLAGS `getconf LFS_CFLAGS` myprog.c cc -o myprog $LDFLAGS `getconf LFS_LDFLAG
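Written out in full, the large-file-aware build being suggested looks roughly like this (myprog is the placeholder from the snippet):
cc -c $CFLAGS `getconf LFS_CFLAGS` myprog.c
cc -o myprog $LDFLAGS `getconf LFS_LDFLAGS` myprog.o `getconf LFS_LIBS`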

Re: [zfs-discuss] Does Oracle support ZFS as a file system with Oracle RAC?

2007-12-18 Thread Mike Gerdts
re hours of runtime and likely more space in production use than ZFS. I think that ZFS holds a lot of promise for shared-nothing database clusters, such as is being done by Greenplum with their extended variant of Postgres. -- Mike Gerdts http://m

Re: [zfs-discuss] /usr/bin and /usr/xpg4/bin differences

2007-12-16 Thread Mike Gerdts
the usage error, man page, etc. would be appropriate too. You can see a few other "#ifdef XPG4" blocks that show the quite small differences between the two variants. Also... since there is nothing zfs-specific here, opensolaris-code may be a more appropriate forum. -- Mike Gerdts http:/

Re: [zfs-discuss] /usr/bin and /usr/xpg4/bin differences

2007-12-15 Thread Mike Gerdts
wing: $ ls df* df df.c df.o df.po df.po.xpg4 df.xcl df.xpg4 df.xpg4.o It looks to me as though df becomes /usr/bin/df and df.xpg4 becomes /usr/xpg4/bin/df. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Error in zpool man page?

2007-12-07 Thread Mike Dotson
19246. > > Is there a patch that was not included with 10_Recommended?

Re: [zfs-discuss] Error in zpool man page?

2007-12-07 Thread Mike Dotson
-- Mike Dotson

Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-29 Thread Mike Gerdts
ation to me. Am I wrong? -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Kernel panic receiving incremental snapshots

2007-11-23 Thread Mike Gerdts
you can likely get the same IDR (errr, an IDR with the same fix - mine was SPARC) to see if it addresses your problem. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread mike
I actually have a related motherboard, chassis, dual power-supplies and 12x400 gig drives already up on ebay too. If I recall Areca cards are supported in OpenSolaris... http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=300172982498 On 11/22/07, Jason P. Warr <[EMAIL PROTECTED]> wrote: > If you

Re: [zfs-discuss] zpool question

2007-11-15 Thread Mike Dotson
ter/failover, etc), could be either or. > > Thanks for any help and advice. > > Brian. -- Mike Dotson

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-15 Thread Mike Dotson
On Thu, 2007-11-15 at 05:25 -0800, Boris Derzhavets wrote: > Thank you very much Mike for your feedback. > Just one more question. > I noticed five device under /dev/rdsk:- > c1t0d0p0 > c1t0d0p1 > c1t0d0p2 > c1t0d0p3 > c1t0d0p4 > been created by system immediately after

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-14 Thread Mike Dotson
none requested
config:
        NAME      STATE   READ WRITE CKSUM
        lpool     ONLINE     0     0     0
          c0d0p4  ONLINE     0     0     0
errors: No known data errors
So to create the pool in my case I would run: zpool create lpool c0d0p4 -- Mike Dotson

Re: [zfs-discuss] Status of Samba/ZFS integration

2007-11-04 Thread Mike Gerdts
(CIFS, NFS, iSCSI, FC) file and block serving with remote replication seem intuitive. Kinda makes you understand why Netapp no longer feels that they can compete on features + ease of use. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Hierarchal zfs mounts

2007-10-22 Thread Mike DeMarco
> Mike DeMarco wrote: > > Looking for a way to mount a zfs filesystem ontop > of another zfs > > filesystem without resorting to legacy mode. > > doesn't simply 'zfs set mountpoint=...' work for you? > > -- >

[zfs-discuss] Hierarchal zfs mounts

2007-10-22 Thread Mike DeMarco
Looking for a way to mount a zfs filesystem on top of another zfs filesystem without resorting to legacy mode.

Re: [zfs-discuss] Mounting ZFS Pool to a different server

2007-10-19 Thread Mike Gerdts
In the ideal situation it would go like: host1# zpool export pool host2# zpool import pool If you know (really know) that it is offline on the other server (e.g. you can verify the host is dead), you can use: # zpool import -f Mike On 10/19/07, Mertol Ozyoney <[EMAIL PROTECTED]> wrote: &

Re: [zfs-discuss] UC Davis Cyrus Incident September 2007

2007-10-18 Thread Mike Gerdts
On 10/18/07, Gary Mills <[EMAIL PROTECTED]> wrote: > What's the command to show cross calls? mpstat will show it on a system basis. xcallsbypid.d from the DTraceToolkit (ask google) will tell you which PID is responsible. -- Mike Gerdts http://mgerdt
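A rough DTrace equivalent of xcallsbypid.d, counting cross-calls by PID and process name for ten seconds:
# dtrace -n 'sysinfo:::xcalls { @[pid, execname] = count(); } tick-10s { exit(0); }'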

Re: [zfs-discuss] UC Davis Cyrus Incident September 2007

2007-10-18 Thread Mike Gerdts
nd thread migrations, and ...) are much cheaper on systems with lower latency between CPUs. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] HAMMER

2007-10-18 Thread Mike Gerdts
fact-checking, the code rarely finds its way in front of you. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] HAMMER

2007-10-18 Thread Mike Gerdts
ed was that any application that linked against the included version of OpenSSL automatically gets to take advantage of the N2 crypto engine, so long as it is using one of the algorithms supported by the N2 engine. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] HAMMER

2007-10-18 Thread Mike Gerdts
y changes the importance of 2 a bit. -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] meet import error after reinstall the OS

2007-10-18 Thread Shuai Mike Cheng
LABEL 3
    version=3
    name='tank'
    state=0
    txg=43
    pool_guid=8219303556773256880
    top_guid=4844356610838567439
    guid=4844356610838567439
    vdev_tree
        type='disk'
        id=0

Re: [zfs-discuss] df command in ZFS?

2007-10-17 Thread Mike Gerdts
df now turns into 40+ screens[1] on the default sized terminal window. 1. If you are in this situation, there is a good chance that the formatting of df causes line folding or wrapping that doubles the number of lines to 80+ screens of df output. -- Mike Gerdts http://

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-24 Thread Mike Gerdts
On 9/24/07, Paul B. Henson <[EMAIL PROTECTED]> wrote: > but checking the actual release notes shows no ZFS mention. 3.0.26 to > 3.2.0? That seems an odd version bump... 3.0.x and before are GPLv2. 3.2.0 and later are GPLv3. http://news.samba.org/announcements/samba_gplv3/ -- Mike

Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in

2007-09-21 Thread Mike Gerdts
le system cloning capabilities play in coordination with iSCSI. Oh, wait! What if the NAS device runs out of space while I'm patching? Better rule out the thin provisioning capabilities of the HDS storage that Sun sells as well. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in

2007-09-21 Thread Mike Gerdts
On 9/20/07, Matthew Flanagan <[EMAIL PROTECTED]> wrote: > Mike, > > I followed your procedure for cloning zones and it worked > well up until yesterday when I tried applying the S10U4 > kernel patch 12001-14 and it wouldn't apply because I had > my zones on zfs :( Th

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-21 Thread Mike Gerdts
It worked quite well for giving one place to administer the location mapping while providing transparency to the end-users. Mike -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in s10u4?

2007-09-20 Thread Mike Gerdts
ne zoneadm -z master attach
zonecfg -z newzone create -t master
# change IPs et al.
zoneadm -z newzone attach
zoneadm -z newzone boot -s
zlogin newzone sys-unconfig
zoneadm -z newzone boot
zlogin -C newzone
-- Mike Gerdts http://mgerdts.blogspot.com/
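The ZFS side of that clone generally amounts to snapshotting the detached master's zonepath dataset and cloning it for the new zone; a sketch with placeholder dataset names:
# zfs snapshot zones/master@gold
# zfs clone zones/master@gold zones/newzone    # becomes newzone's zonepath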

Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in s10u4?

2007-09-19 Thread Mike Gerdts
roject in the works (Snap Upgrade) that is very much targeted at environments that use zfs, I would be surprised to see zfs support come into live upgrade. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Would a device list output be a reasonable feature for zpool(1)?

2007-09-17 Thread Ellis, Mike
Yup... With Leadville/MPXIO targets in the 32-digit range, identifying the "new storage/LUNs" is not a trivial operation. -- MikeE -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Russ Petruzzelli Sent: Monday, September 17, 2007 1:51 PM To: zfs-discus

Re: [zfs-discuss] ZFS and Live Upgrade

2007-09-15 Thread Mike Gerdts
bet that Live Upgrade never does, but Snap Upgrade does. http://opensolaris.org/os/project/caiman/Snap_Upgrade/ It is likely worth considering more of the roadmap when reading that page. http://opensolaris.org/os/project/caiman/Roadmap/ -- Mike Gerdts http:/

Re: [zfs-discuss] space allocation vs. thin provisioning

2007-09-14 Thread Mike Gerdts
f various failure modes. Of course, I can see how writes could be batched, coalesced, and applied in a journaled manner such that each batch fully applies or is rolled back on the target. I haven't heard of this being done. Mike -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] space allocation vs. thin provisioning

2007-09-14 Thread Mike Gerdts
" so that the array can reclaim the space? I could see this as useful when doing re-writes of data (e.g. crypto rekey) to concentrate data that had become scattered into contiguous space. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] How do I get my pool back?

2007-09-13 Thread Mike Lee
have you tried zpool clear? Peter Tribble wrote: On 9/13/07, Solaris <[EMAIL PROTECTED]> wrote: Try exporting the pool then import it. I have seen this after moving disks between systems, and on a couple of occasions just rebooting. Doesn't work. (How can you export something that is

Re: [zfs-discuss] compression=on and zpool attach

2007-09-12 Thread Mike DeMarco
> On 9/12/07, Mike DeMarco <[EMAIL PROTECTED]> wrote: > > > Striping several disks together with a stripe width > that is tuned for your data > > model is how you could get your performance up. > Striping has been left out > > of the ZFS model for some reason

Re: [zfs-discuss] compression=on and zpool attach

2007-09-12 Thread Mike DeMarco
> On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]> > wrote: > > > I've got 12Gb or so of db+web in a zone on a ZFS > > > filesystem on a mirrored zpool. > > > Noticed during some performance testing today > that > > > its i/o bound but &

Re: [zfs-discuss] compression=on and zpool attach

2007-09-11 Thread Mike DeMarco
> I've got 12Gb or so of db+web in a zone on a ZFS > filesystem on a mirrored zpool. > Noticed during some performance testing today that > its i/o bound but > using hardly > any CPU, so I thought turning on compression would be > a quick win. If it is io bound won't compression make it worse? >

Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-08 Thread Mike Gerdts
ave reliable backups, etc. Pushing that out to desktop or laptop machines is not really a good idea. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-07 Thread Mike Gerdts
"cp ~course/hugefile ~" become not so expensive - you would be charging quota to each user but only storing one copy. Depending on the balance of CPU power vs. I/O bandwidth, compressed zvols could be a real win, more than paying back the space required to have a few

Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-07 Thread mike
On 9/7/07, Mike Gerdts <[EMAIL PROTECTED]> wrote: > For me, quotas are likely to be a pain point that prevents me from > making good use of snapshots. Getting changes in application teams' > understanding and behavior is just too much trouble. Others are: not to mention the

Re: [zfs-discuss] An Academic Sysadmin's Lament for ZFS ?

2007-09-07 Thread Mike Gerdts
at paging (file system and pager) will begin soon. This may be fine on a file server, but it really messes with me if it is a J2EE server and I'm trying to figure out how many more app servers I can add. I have a lot of hopes for ZFS and have used it with success (and

Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-06 Thread mike
On 9/6/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > This is my personal opinion and all, but even knowing that Sun > encourages open conversations on these mailing lists and blogs it seems to > falter common sense for people from @sun.com to be commenting on this > topic. It seems like

Re: [zfs-discuss] (politics) Sharks in the waters

2007-09-05 Thread mike
On 9/5/07, Joerg Schilling <[EMAIL PROTECTED]> wrote: > As I wrote before, my wofs (designed and implemented 1989-1990 for SunOS 4.0, > published May 23th 1991) is copy on write based, does not need fsck and always > offers a stable view on the media because it is COW. Side question: If COW is su

Re: [zfs-discuss] The Dangling DBuf Strikes Back

2007-09-03 Thread Mike Gerdts
From the primary LDOM, there is no corruption. An unexpected reset (panic, I believe) of the primary LDOM seems to have caused the corruption in the guest LDOM. What was that about having the redundancy as close to the consumer as possible? :) -- Mike Gerdts http://mgerdts.blogspot.com/ _

Re: [zfs-discuss] ZFS, XFS, and EXT4 compared

2007-08-29 Thread mike
On 8/29/07, Jeffrey W. Baker <[EMAIL PROTECTED]> wrote: > I have a lot of people whispering "zfs" in my virtual ear these days, > and at the same time I have an irrational attachment to xfs based > entirely on its lack of the 32000 subdirectory limit. I'm not afraid of > ext4's newness, since real

Re: [zfs-discuss] do zfs filesystems isolate corruption?

2007-08-11 Thread Mike Gerdts
"zpool is corrupt" "restore from backups" S10u4 Beta, snv69 and I think snv59: panic - S10u4 backtrace is very different from snv* -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
