Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-15 Thread Mike Gerdts
On Tue, Dec 15, 2009 at 2:31 AM, Craig S. Bell wrote: > Mike, I believe that ZFS treats runs of zeros as holes in a sparse file, > rather than as regular data.  So they aren't really present to be counted for > compressratio. > > http://blogs.sun.com/bonwick/entry/seek_hole_
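A quick way to observe this (illustrative only; pool and file names are made up): on a compressed dataset, a file of zeros occupies essentially no space and leaves compressratio untouched, because the zero blocks become holes rather than compressed data.
  # zfs create -o compression=on tank/ctest
  # dd if=/dev/zero of=/tank/ctest/zeros bs=1024k count=100
  # du -h /tank/ctest/zeros            # on-disk size near zero: the blocks are holes
  # zfs get compressratio tank/ctest   # stays ~1.00x despite the all-zero file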

Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-14 Thread Mike Gerdts
s, but that would seem to contribute to a higher compressratio rather than a lower compressratio. If I disable compression and enable dedup, does it count deduplicated blocks of zeros toward the dedupratio? -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-di
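One way to answer this empirically (a sketch; pool and dataset names are hypothetical, and dedup requires build 128 or later): write a file of zeros to a dataset with dedup on and compression off, so the zero blocks are stored as regular data, then check the pool's dedupratio.
  # zfs create -o dedup=on -o compression=off tank/dtest
  # dd if=/dev/zero of=/tank/dtest/zeros bs=128k count=1000
  # sync
  # zpool get dedupratio tank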

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Mike Gerdts
d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0 Model: Hitachi HTS5425 Revision: Serial No: 080804BB6300HCG Size: 160.04GB <160039305216 bytes> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 Illegal Request: 0 ... That /should/ be printed on the di

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-10 Thread Mike Johnston
od to know that this can be done. On Wed, Dec 9, 2009 at 5:16 AM, Alexander J. Maidak wrote: > On Tue, 2009-12-08 at 09:15 -0800, Mike wrote: > > I had a system that I was testing zfs on using EMC Luns to create a > striped zpool without using the multi-pathing software PowerPath. Of c

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-09 Thread Mike
Alex, thanks for the info. You made my heart stop a little when reading about your problem with PowerPath, but MPxIO seems like it might be a good option for me. I'll try that as well, although I have not used it before. Thank you! -- This message posted from opensolaris.org ___

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-08 Thread Mike
Thanks Cindys for your input... I love your fear example too, but lucky for me I have 10 years before I have to worry about that and hopefully we'll all be in hovering bumper cars by then. It looks like I'm going to have to create another test system and try the recommendations given here...and hop

[zfs-discuss] Changing ZFS drive pathing

2009-12-08 Thread Mike
I had a system that I was testing zfs on, using EMC LUNs to create a striped zpool without using the multi-pathing software PowerPath. Of course a storage emergency came up, so I lent this storage out for temp storage and we're still using it. I'd like to add PowerPath to take advantage of the multi

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-05 Thread Mike Gerdts
used as a starting point. http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Any way to remove a vdev

2009-12-02 Thread Mike Freeman
I'm sure it's been asked a thousand times, but is there any prospect of being able to remove a vdev from a pool anytime soon? Thanks! -- Mike ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinf

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-26 Thread Mike Gerdts
On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain wrote: > > On 26-Nov-09, at 8:57 PM, Richard Elling wrote: > >> On Nov 26, 2009, at 1:20 PM, Toby Thain wrote: >>> >>> On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote: >>> >>>> On 2009-Nov-24 14:07:06

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2009-11-25 Thread Mike Gerdts
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html Mike On Thu, Jul 10, 2008 at 4:42 AM, Darren J Moffat wrote: > I regularly create new zfs filesystems or snapshots and I find it > annoying that I have to type the full d

Re: [zfs-discuss] ZFS Random Read Performance

2009-11-25 Thread Mike Gerdts
t is small enough that it is somewhat likely that many of those random reads will be served from cache. A dtrace analysis of just how random the reads are would be interesting. I think that hotspot.d from the DTrace Toolkit would be a good starting place. -- Mike Gerdts http://mgerdts.blogspo

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling wrote: > On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote: > >> On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling >> wrote: >>> >>> Good question!  Additional thoughts below... >>> >>> On Nov 24, 2

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling wrote: > Good question!  Additional thoughts below... > > On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote: > >> Suppose I have a storage server that runs ZFS, presumably providing >> file (NFS) and/or block (iSCSI, FC) service

[zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
characteristics in this area? Is there less to be concerned about from a performance standpoint if the workload is primarily read? To maximize the efficacy of dedup, would it be best to pick a fixed block size and match it between the layers of zfs? -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] CIFS shares being lost

2009-11-20 Thread Mike Gerdts
pub" from agent > Server refused to allocate pty > Sun Microsystems Inc.   SunOS 5.11  snv_127 November 2008 This looks like... http://defect.opensolaris.org/bz/show_bug.cgi?id=12380 But that was supposed to be fixed in snv_126. Can you check /etc/minor_perm for this entry:

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Ellis, Mike
Maybe to create snapshots "after the fact" as a part of some larger disaster recovery effort. (What did my pool/file-system look like at 10am?... Say 30 minutes before the database barfed on itself...) With some enhancements might this functionality be extendable into a "poor man's CDP" offeri

Re: [zfs-discuss] dedup question

2009-11-02 Thread Mike Gerdts
;s. It becomes quite significant if you have 5000 (e.g. on a ZFS-based file server). Assuming that the deduped blocks stay deduped in the ARC, it means that it is feasible for every block that is accessed with any frequency to be in memory. Oh yeah, and you save a lot of disk space. -- Mike Gerdts ht

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
ording to page 35 of http://www.slideshare.net/ramesh_r_nagappan/wirespeed-cryptographic-acceleration-for-soa-and-java-ee-security, a T2 CPU can do 41 Gb/s of SHA256. The implication here is that this keeps the MAU's busy but the rest of the core is still idle for things like compression, TCP,

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
hms implemented in software and sha256 implemented in hardware? I've been waiting very patiently to see this code go in. Thank you for all your hard work (and the work of those that helped too!). -- Mike Gerdts http://mgerdts.blogspot.com/ _

[zfs-discuss] ZFS near-synchronous replication...

2009-10-26 Thread Mike Watkins
Anyone have any creative solutions for near-synchronous replication between 2 ZFS hosts? Near-synchronous, meaning RPO X--->0 I realize performance will take a hit. Thanks, Mike ___ zfs-discuss mailing list zfs-discuss@opensolaris.org h
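A minimal sketch of one common approach (hypothetical pool and host names; error handling and target-side cleanup omitted): a tight incremental send/receive loop, where the achievable RPO is roughly the loop interval plus the transfer time.
  #!/bin/sh
  # Sketch only: seed the target with a full stream, then ship increments.
  FS=tank/data
  HOST=standby
  PREV=seed
  zfs snapshot $FS@$PREV
  zfs send $FS@$PREV | ssh $HOST zfs receive -F $FS
  while true; do
      NOW=`date +%s`
      zfs snapshot $FS@$NOW
      zfs send -i $FS@$PREV $FS@$NOW | ssh $HOST zfs receive -F $FS
      zfs destroy $FS@$PREV   # keep only the latest common snapshot
      PREV=$NOW
      sleep 10                # RPO ~ this interval plus transfer time
  done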

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-10-20 Thread Mike Bo
Once data resides within a pool, there should be an efficient method of moving it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove. Here's my scenario... When I originally created a 3TB pool, I didn't know the best way to carve up the space, so I used a single, flat ZFS file s

[zfs-discuss] zfs on FDE

2009-10-13 Thread Mike DeMarco
Any reason why ZFS would not work on a FDE (Full Data Encryption) Hard drive? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] zfs disk encryption

2009-10-13 Thread Mike DeMarco
Does anyone know when this will be available? Project says Q4 2009 but does not give a build. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Mike Gerdts
>         Current Size:             4206 MB (arcsize) >         Target Size (Adaptive):   4207 MB (c) That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system that you have booted from a 32-bit kernel? -- Mike Gerdts http://mgerdts.blogspot.com/ __
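If so, a quick check (output shown for a 64-bit x86 kernel; a 32-bit kernel reports "32-bit i386 kernel modules" instead):
  $ isainfo -kv
  64-bit amd64 kernel modules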

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
host1# zoneadm -z zone1 detach host1# zfs snapshot zonepool/zone1@migrate host1# zfs send -r zonepool/zone1@migrate \ | ssh host2 zfs receive zones/zone1@migrate host2# zonecfg -z zone1 create -a /zones/zone1 host2# zoneadm -z zone1 attach host2# zoneadm -z zone1 boot -- Mike Gerdts http://m

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda wrote: > Thanks for the info Mike. > > Just so I'm clear.  You suggest 1)create a single zpool from my LUN 2) create > a single ZFS filesystem 3) create 2 zone in the ZFS filesystem. Sound right? Correct --

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
to it, so I will give each thing X/Y space. This is because it is quite likely that someone will do the operation Y++ and there are very few storage technologies that allow you to shrink the amount of space allocated to each item. -- Mike Gerdts h

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-13 Thread Mike Gerdts
g/pipermail/fm-discuss/2009-June/000436.html from June 10 suggests you are running firmware release (045C)8626. On August 11 they released firmware revisions 8820, 8850, and 02G9, depending on the drive model. http://downloadcenter.intel.com/Detail_Desc.aspx?agr

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
On Wed, Sep 2, 2009 at 4:46 PM, Richard Elling wrote: > Thanks Cindy! > > Mike, et.al., > I think the confusion is surrounding replacing an enterprise backup > scheme with send-to-file. There is nothing wrong with send-to-file, > it functions as designed.  But it isn'

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
On Wed, Sep 2, 2009 at 4:06 PM, wrote: > Hi Mike, > > I reviewed this doc and the only issue I have with it now is that it uses > /var/tmp as an example of storing snapshots in "long-term storage" > elsewhere. One other point comes from zfs(1M): The format of t

[zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
to do things that will lead them to unsympathetic ears if things go poorly. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Ellis, Mike
Try a: zfs get -pH -o value creation -- MikeE -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker Sent: Friday, August 28, 2009 10:52 AM To: zfs-discuss@opensolaris.org Subject: [zfs-discuss] Snapshot creati
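For example (dataset name hypothetical): -p prints the creation time as seconds since the epoch and -H suppresses headers, which makes the output script-friendly.
  $ zfs get -pH -o value creation tank/home@monday
  1251475200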

Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-24 Thread Mike Gerdts
ry for a project that we are working on together. Unfortunately, his umask was messed up and I can't modify the files in ~alice/proj1. Can you do a 'chmod -fR a+rw /home/alice/proj1' for me? Thanks!" | mailx -s "permissions fix" Helpdesk$ pfexec chmod -fR a+r

[zfs-discuss] Snapshot access from non-global zones

2009-08-20 Thread Mike Futerko
cessible. But if the snapshots were created after the mount, they are not accessible from inside of a zone. Is this correct behavior or is it a bug? Are there any workarounds? Thanks in advance for all comments. Regards, Mike ___ zfs-discuss mailing list zf

Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Mike Gerdts
//opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#404589 http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405835 http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405308 -- Mike Gerdts http://mgerdts.blogspot.com/ ___

Re: [zfs-discuss] file change long - was zfs fragmentation

2009-08-12 Thread Mike Gerdts
anpages/vxfs/man1m/fcladm.html This functionality would come in very handy. It would seem that it isn't too big of a deal to identify the files that changed, as this type of data is already presented via "zpool status -v" when corruption is detected. http://docs.sun.com/app/

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
in the parallelism gaps as the longer-running ones finish. 3. That is, there is sometimes benefit in having many more jobs to run than you have concurrent streams. This avoids having one save set that finishes long after all the others because of poorly balanced save sets. -- Mike Gerdts http

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
"sequential" I mean that one doesn't start until the other finishes. There is certainly a better word, but it escapes me at the moment. At an average file size of 45 KB, that translates to about 3 MB/sec. As you run two data streams, you are seeing throughput that looks kinda like the 2 * 3 MB/sec. With 4 backup streams do you get something that looks like 4 * 3 MB/s? How does that effect iostat output? -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] pathnames in zfs(1M) arguments

2009-08-09 Thread Mike Gerdts
ase of creating snapshots, there is also this: # mkdir .zfs/snapshot/foo # zfs list | grep foo rpool/ROOT/s10u7_...@foo 0 - 9.76G - # rmdir .zfs/snapshot/foo # zfs list | grep foo I don't know of a similar shortcut for the create or clone subcommands. -- Mike Gerdts http://mg

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
On Sat, Aug 8, 2009 at 3:25 PM, Ed Spencer wrote: > > On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote: > >> The DBA's that I know use files that are at least hundreds of >> megabytes in size.  Your problem is very different. > Yes, definitely. > > I'm relat

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
peed with SSD's than there is in read speeds. However, the NVRAM on the NetApp that is backing your iSCSI LUNs is probably already giving you most of this benefit (assuming low latency on network connections). -- Mike Gerdts http://mgerdts.blogspot.com/ ___

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
increase the performance of a zfs > filesystem without causing any downtime to an Enterprise email system > used by 30,000 intolerant people, when you don't really know what is > causing the performance issues in the first place? (Yeah, it sucks to be > me!) Hopefully I've helped

Re: [zfs-discuss] How Virtual Box handles the IO

2009-07-31 Thread Mike Gerdts
s/2007-September/013233.html Quite likely related to: http://bugs.opensolaris.org/view_bug.do?bug_id=6684721 In other words, it was a buggy Sun component that didn't do the right thing with cache flushes. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Mike Gerdts
lly? It appears as though there is an upgrade path. http://www.c0t0d0s0.org/archives/5750-Upgrade-of-a-X4500-to-a-X4540.html However, the troll that you have to pay to follow that path demands a hefty sum ($7995 list). Oh, and a reboot is required. :) -- Mike Gerdts http://m

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Mike Gerdts
roducts (eg VMware, Parallels, Virtual PC) have the > same default behaviour as VirtualBox? I've lost a pool due to LDoms doing the same. This bug seems to be related. http://bugs.opensolaris.org/view_bug.do?bug_id=6684721 -- Mike Gerdts http://mgerdts.blogspot.com/ _

Re: [zfs-discuss] An amusing scrub

2009-07-15 Thread Mike Gerdts
u would have enough to pay this credit card bill. http://www.cnn.com/2009/US/07/15/quadrillion.dollar.glitch/index.html > - Rich > > (Footnote: I ran ntpdate between starting the scrub and it finishing, > and time rolled backwards. Nothing more exciting.) And Visa is willing to waive

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
Use is subject to license terms. Assembled 07 May 2009 # uname -srvp SunOS 5.11 snv_111b sparc -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
ied via truss that each read(2) was returning 128K. I thought I had seen excessive reads there too, but now I can't reproduce that. Creating another fs with recordsize=8k seems to make this behavior go away - things seem to be working as designed. I'll go upd
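For reference, a matching record size is set at filesystem creation (names illustrative; recordsize only affects files written after it is set):
  # zfs create -o recordsize=8k testpool/fs8k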

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
On Mon, Jul 13, 2009 at 3:16 PM, Joerg Schilling wrote: > Bob Friesenhahn wrote: > >> On Mon, 13 Jul 2009, Mike Gerdts wrote: >> > >> > FWIW, I hit another bug if I turn off primarycache. >> > >> > http://defect.opensolaris.org/bz/show_bug.c

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
4m21.57s user 0m9.72s sys 0m36.30s Doing second 'cpio -o > /dev/null' 4800025 blocks real 4m21.56s user 0m9.72s sys 0m36.19s Feel free to clean up with 'zfs destroy testpool/zfscachetest'. This bug report contains more detail of the configuration. O

Re: [zfs-discuss] deduplication

2009-07-11 Thread Mike Gerdts
r trouble in the long term without deduplication to handle ongoing operation. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Mike Gerdts
his for smallish (8KB) directories. > > > BTW: If you like to fix the software, you should know that Linux has at least > one filesystem that returns the entries for "." and ".." out of order. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Mike Gerdts
/lib/libc/port/gen/readdir.c http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libbc/libc/gen/common/readdir.c The libbc version hasn't changed since the code became public. You can get to an older libc variant of it by clicking on the history link or using th

Re: [zfs-discuss] zfs select

2009-06-23 Thread Mike Forey
Thanks Darren, I might request that it gets added. That is, if anyone else thinks it might be a useful feature? Regards, Mike. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] zfs select

2009-06-23 Thread Mike Forey
very tidy, thanks! :) -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] zfs select

2009-06-23 Thread Mike Forey
Hi, I'd like to be able to select zfs filesystems, based on the value of properties. Something like this: zfs select mounted=yes Is anyone aware if this feature might be available in the future? If not, is there a clean way of achieving the same result? Thanks, Mike. -- This message p
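Until something like that exists, a rough equivalent of "zfs select mounted=yes" can be had with standard tools (illustrative only):
  $ zfs list -H -o name,mounted | awk '$2 == "yes" { print $1 }'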

Re: [zfs-discuss] Why Oracle process open(2)/ioctl(2) /dev/dtrace/helper?

2009-06-22 Thread Mike Gerdts
009 09:06:09 KST >  open(/dev/dtrace/helper) > >              libc.so.1`open >              libCrun.so.1`0x7a50aed8 >              libCrun.so.1`0x7a50b0f4 >              ld.so.1`call_fini+0xd0 >              ld.so.1`atexit_fini+0x80 >              libc.so.1`_exithandle+0x48 >              libc.so.1`exit+0x4 >              oracle`_start+0x184 > > *** > ___ > zfs-discuss mailing list > zfs-discuss@opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-22 Thread Mike Gerdts
4682e64#l8.80> */ For some reason, the CR's listed above are not available through bugs.opensolaris.org. However, at least 6833605 is available through sunsolve if you have a support contract. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-21 Thread Mike Gerdts
sible? > > Thanks... I stumbled across this just now while performing a search for something else. http://opensolaris.org/jive/thread.jspa?messageID=377018 I have no idea of the quality or correctness of this solution. -- Mike Gerdts http://mgerdts.blogspot.com/ _

Re: [zfs-discuss] Areca 1160 & ZFS

2009-05-07 Thread Mike Gerdts
16 16  WDC WD4000YS-01MPB1              400.1GB  Pass Through > === > GuiErrMsg<0x00>: Success. > r...@nfs0009:~# Perhaps you have changed the configuration of the array since the last reconfiguration boot. I

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Mike Gerdts
On Wed, May 6, 2009 at 2:54 AM, wrote: > >>On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote: >>> PS: At one point the old JumpStart code was encumbered, and the >>> community wasn't able to assist. I haven't looked at the next-gen >>> jumpsta

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Mike Gerdts
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote: > PS: At one point the old JumpStart code was encumbered, and the > community wasn't able to assist. I haven't looked at the next-gen > jumpstart framework that was delivered as part of the OpenSolaris SPARC > preview.

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Ellis, Mike
How about a generic "zfs options" field in the JumpStart profile? (essentially an area where options can be specified that are all applied to the boot pool, with provisions to deal with a broken-out /var) That should future-proof things to some extent, allowing for compression=x, copies=x, blocksiz

Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-27 Thread Mike Gerdts
quota locally to support ZFS. > >        - river. For the benefit of those finding this conversation in the archives, this looks like it will be fixed in snv_114. http://bugs.opensolaris.org/view_bug.do?bug_id=6824968 http://hg.genunix.org/onnv-gate.hg/rev/4f68f041ddcd -- Mike Gerdts http://

Re: [zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-27 Thread Mike Watkins
Create the zpool with: zpool create log - for the ZIL zpool create cache - for the L2ARC On Sat, Apr 25, 2009 at 11:13 PM, Richard Elling wrote: > Gary Mills wrote: > >> On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote: >> >> >>> Gary Mills wrote: >>> >>> Does anyone k
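For reference, a sketch of the actual syntax (device names hypothetical): log and cache vdevs can be specified at pool creation or added to an existing pool.
  # zpool create tank mirror c0t1d0 c0t2d0 log c0t3d0 cache c0t4d0
  # zpool add tank log c0t5d0
  # zpool add tank cache c0t6d0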

Re: [zfs-discuss] Add WORM to OpenSolaris

2009-04-26 Thread Ellis, Mike
Wow... that's seriously cool! Throw in some of this... http://www.nexenta.com/demos/auto-cdp.html and now we're really getting somewhere... Nice to see this level of innovation here. Anyone try to employ these types of techniques on s10? I haven't used nexenta in the past, and I'm not clear in

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Mike Gerdts
On Sun, Apr 19, 2009 at 10:58 AM, Gary Mills wrote: > On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote: >> Also, you may want to consider doing backups from the NetApp rather >> than from the Solaris box. > > I've certainly recommended finding a differ

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Mike Gerdts
or due to some research project that happens to be on the same spindles? What does the network look like from the NetApp side? Are the mail server and the NetApp attached to the same switch, or are they at opposite ends of the campus? Is there something between them th

Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Mike Gerdts
nd of it is not overly complicated. Is now too early to file the RFE? For some reason it feels like the person on the other end of bugs.opensolaris.org will get confused by the request to enhance a feature that doesn't yet exist. -- Mike Gerdts http://mgerdts.blogspot.com/ _

Re: [zfs-discuss] [Fwd: ZFS user/group quotas & space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Mike Gerdts
the global zone and the dataset is delegated to a non-global zone, display the UID rather than a possibly mistaken username. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Permission problems with nfs-mounted zfs user directories

2009-03-31 Thread Pacey, Mike
ions are correct? Thanks, Mike. - Dr Mike Pacey, Email: m.pa...@lancaster.ac.uk High Performance Systems Support, Phone: 01524 593543 Information Systems Services,Fax: 01524 594459 Lancaster University, Lancaster LA1 4YW __

Re: [zfs-discuss] j4200 drive carriers

2009-03-30 Thread Mike Futerko
Hello > 1) Dual IO module option > 2) Multipath support > 3) Zone support [multi host connecting to same JBOD or same set of JBOD's > connected in series. ] This sounds interesting - where can I read more about connecting two hosts to the same J4200 e

Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
me. also making the tools simpler - absolutely no UI for instance. does it really need one to dump out things? :) On Wed, Mar 11, 2009 at 7:15 PM, David Magda wrote: > > On Mar 11, 2009, at 21:59, mike wrote: > >> On Wed, Mar 11, 2009 at 6:53 PM, David Magda wrote: >>> >

Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
doesn't it require java and x11? On Wed, Mar 11, 2009 at 6:53 PM, David Magda wrote: > > On Mar 11, 2009, at 20:14, mike wrote: > >> >> http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm >> http://www.intel.com/support/motherboards

Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
2007) would be forward compatible... On Wed, Mar 11, 2009 at 5:14 PM, mike wrote: > http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm > http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm > > It's hard to use the HAL sometimes. >

[zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm It's hard to use the HAL sometimes. I am trying to locate chipset info but having a hard time... _

Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread mike
Brad Stone about rolling up daily snapshots into monthly snapshots, which would roll up into yearly snapshots... On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling wrote: > mike wrote: >> >> Well, I could just use the same script to create my daily snapshot to >> remove a snapshot

Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread mike
unted.  Changes have > been made to speed this up by reducing the number of mnttab lookups. > > And zfs list has been changed to no longer show snapshots by default. > But it still might make sense to limit the number of snapshots saved: > http://blogs.sun.com/timf/entry/zfs_automatic_s

[zfs-discuss] Is there a limit to snapshotting?

2009-03-08 Thread mike
I do a daily snapshot of two filesystems, and over the past few months it's obviously grown to a bunch. "zfs list" shows me all of those. I can change it to use the "-t" flag to not show them, so that's good. However, I'm worried about boot times and other things. Will it get to a point with 100
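A couple of illustrative commands for gauging and pruning the pile (the snapshot naming pattern here is hypothetical, and zfs destroy is irreversible, so inspect the list before piping it to destroy):
  $ zfs list -H -t snapshot -o name | wc -l
  $ zfs list -H -t snapshot -o name | grep '@daily-2008' | xargs -n1 zfs destroy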

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams wrote: > On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote: >> On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams >> wrote: >> > On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote: >> >> &

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
each database may be constrained to a set of spindles so that each database can be replicated or copied independently of the others. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
pp "snapmirror to tape" - Even having a zfs(1M) option that could list the files that change between snapshots could be very helpful to prevent file system crawls and to avoid being fooled by bogus mtimes. -- Mike Gerdts http://mgerdts.blogspot.com/ __

Re: [zfs-discuss] Details on raidz boot + zfs patents?

2009-02-28 Thread Mike Gerdts
/os/about/faq/licensing_faq/#patents. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Is zfs snapshot -r atomic?

2009-02-22 Thread Mike Gerdts
atomic operation. The snapshots are created together (all at once) or not created at all. The benefit of atomic snapshots operations is that the snapshot data is always taken at one consistent time, even across descendent file systems. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Mike Gerdts
ng as the list of zfs mount points does not overflow the maximum command line length. $ fsstat $(zfs list -H -o mountpoint | nawk '$1 !~ /^(\/|-|legacy)$/') 5 -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@o

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-18 Thread Ellis, Mike
Does this all go away when BP-rewrite gets fully resolved/implemented? Short of the pool being 100% full, it should allow a rebalancing operation and possible LUN/device-size-shrink to match the new device that is being inserted? Thanks, -- MikeE -Original Message- From: zfs-discuss-bo

Re: [zfs-discuss] zfs send / zfs receive hanging

2009-01-12 Thread Mike Futerko
Hi > It would be also nice to be able to specify the zpool version during pool > creation. E.g. If I have a newer machine and I want to move data to an older > one, I should be able to specify the pool version, otherwise it's a one-way > street. zpool create -o vers
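For example (the version number is illustrative; pick one the older machine's build supports):
  # zpool upgrade -v                        # lists the versions this build supports
  # zpool create -o version=10 tank c0t0d0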

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-01-12 Thread mike
i'm not sure how many via chips support 64-bit, which seems to be highly recommended. atoms seem to be more suitable. On Mon, Jan 12, 2009 at 1:14 PM, Joe S wrote: > In the last few weeks, I've seen a number of new NAS devices released > from companies like HP, QNAP, VIA, Lacie, Buffalo, Iomega,

Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Mike Futerko
t performance... even if you want to get the list of snapshots with no other properties "zfs list -oname -t snapshot -r file/system", it still takes quite a long time if there are hundreds of snapshots, while "ls /file/system/.zfs/snapshot" returns immediately. Can this also be im
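A crude way to quantify the gap, using the thread's example dataset:
  $ time zfs list -H -oname -t snapshot -r file/system > /dev/null
  $ time ls /file/system/.zfs/snapshot > /dev/null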

Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-08 Thread Mike Futerko
ise it's almost useless in practice. Regards Mike ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Problem with time-slider

2008-12-29 Thread Mike Gerdts
.png Try running svcs -v zfs/auto-snapshot The last few lines of the log files mentioned in the output from the above command may provide helpful hints. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discu
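For example (instance names vary by build; the daily instance shown here is an assumption):
  $ svcs -v zfs/auto-snapshot
  $ tail -20 `svcs -L svc:/system/filesystem/zfs/auto-snapshot:daily`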

Re: [zfs-discuss] How to compile mbuffer

2008-12-06 Thread Mike Futerko
AKE=gmake If you are on 64bit system you may want to compile 64bit version: ./configure --prefix=/usr/local --disable-debug CFLAGS="-O -m64" MAKE=gmake 5) gmake && gmake install 6) /usr/local/bin/mbuffer -V Regards Mike ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-05 Thread Mike Brancato
Well, I knew it wasn't available. I meant to ask: what is the status of the feature's development? Not started, I presume. Is there no timeline? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] redundancy in non-redundant stripes

2008-12-05 Thread Mike Brancato
In theory, with 2 80GB drives, you would always have a copy somewhere else. But a single drive, no. I guess I'm thinking in the optimal situation. With multiple drives, copies are spread through the vdevs. I guess it would work better if we could define that if copies=2 or more, that at leas

[zfs-discuss] redundancy in non-redundant stripes

2008-12-05 Thread Mike Brancato
With ZFS, we can enable copies=[1,2,3] to configure how many copies of data there are. With copies of 2 or more, in theory, an entire disk can have read errors, and the zfs volume still works. The unfortunate part here is that the redundancy lies in the volume, not the pool vdev like with ra

[zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-05 Thread Mike Brancato
I've seen discussions as far back as 2006 that say development is underway to allow the addition and removal of disks in a raidz vdev to grow/shrink the group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do 'zpool remove tank c0t3d0' and data residing on c0t3d0 would be migra

Re: [zfs-discuss] Separate /var

2008-12-02 Thread Mike Gerdts
On Tue, Dec 2, 2008 at 6:13 PM, Lori Alt <[EMAIL PROTECTED]> wrote: > On 12/02/08 10:24, Mike Gerdts wrote: > I follow you up to here. But why do the next steps? > > > zonecfg -z $zone > > remove fs dir=/var > > > > zfs set mountpoint=/zones/$zone/root/var r

Re: [zfs-discuss] Separate /var

2008-12-02 Thread Mike Gerdts
$zone remove fs dir=/var zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike DeMarco
> Boot from the other root drive, mount up the "bad" > one at /mnt. Then: > > # mv /mnt/etc/zfs/zpool.cache > /mnt/etc/zpool.cache.bad > > > > On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco > <[EMAIL PROTECTED]> wrote: > > My root dri
