Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Mike Gerdts
TAG TIMESTAMP a/1/hold@hold saveme! Wed Feb 20 15:06:29 2013 # zfs destroy -r a/1 cannot destroy 'a/1/hold@hold': snapshot is busy Extending the hold mechanism to filesystems and volumes would be quite nice. Mike ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Traffanstead, Mike
nce I reran the test with a Crucial M4 SSD and the results for 16G/64k were 35 MB/s (a 5x improvement). I'll rerun that part of the test with zpool iostat and see what it says. Mike On Thu, Jul 19, 2012 at 7:27 PM, Jim Klimov wrote: >> This is normal. The problem is that with zfs 128k block

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-19 Thread Traffanstead, Mike
vfs.zfs.txg.synctime_ms: 1000 vfs.zfs.txg.timeout: 5 On Thu, Jul 19, 2012 at 8:47 PM, John Martin wrote: > On 07/19/12 19:27, Jim Klimov wrote: > >> However, if the test file was written in 128K blocks and then >> is rewritten with 64K blocks, then Bob's answer is probably >> valid - the block wo

Re: [zfs-discuss] Benefits of enabling compression in ZFS for the zones

2012-07-10 Thread Mike Gerdts
ion ./ COMPRESS on $ dd if=/dev/zero of=1gig count=1024 bs=1024k 1024+0 records in 1024+0 records out $ ls -l 1gig -rw-r--r-- 1 mgerdts staff 1073741824 Jul 10 07:52 1gig $ du -k 1gig 0 1gig -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Mike Gerdts
er I can see the > following input stream bandwidth (the stream is constant bitrate, so > this shouldn't happen): If processing in interrupt context (use intrstat) is dominating cpu usage, you may be able to use pcitool to cause the device generating a

Re: [zfs-discuss] Strange hang during snapshot receive

2012-05-10 Thread Mike Gerdts
ing https://forums.oracle.com/forums/thread.jspa?threadID=2380689&tstart=15 before updating to SRU 6 (SRU 5 is fine, however). The fix for the problem mentioned in that forums thread should show up in an upcoming SRU via CR 7157313. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] test for holes in a file?

2012-03-26 Thread Mike Gerdts
r 26 18:25:25 CDT 2012 [ 1332804325.889143166 ] ct = Mar 26 18:25:25 CDT 2012 [ 1332804325.889143166 ] bsz=131072 blks=32 fs=zfs Notice that it says it has 32 512-byte blocks. The mechanism you suggest does work for every other file system that I've tried it on. -- Mike Gerdts http://mge
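The heuristic under discussion — comparing allocated blocks against logical size — can be sketched in Python. This is an illustrative stand-in, not code from the thread:

```python
import os

def looks_sparse(path):
    # Classic portability heuristic: a file is "probably sparse" when the
    # bytes actually allocated (st_blocks is in 512-byte units) fall short
    # of its logical length. As the message above notes, ZFS can report a
    # handful of allocated blocks even for a hole-only file, so this test
    # is not reliable there.
    st = os.stat(path)
    return st.st_blocks * 512 < st.st_size
```

On most filesystems a file created by truncation alone has zero allocated blocks, so the heuristic flags it as sparse; ZFS is the notable exception the thread is about.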

Re: [zfs-discuss] test for holes in a file?

2012-03-26 Thread Mike Gerdts
2012/3/26 ольга крыжановская : > How can I test if a file on ZFS has holes, i.e. is a sparse file, > using the C api? See SEEK_HOLE in lseek(2). -- Mike Gerdts http://mgerdts.blogspot.com/
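The lseek(2) SEEK_HOLE approach carries over to any language that exposes it. A minimal Python sketch, illustrative rather than from the thread:

```python
import os

def has_hole(path):
    # SEEK_HOLE returns the offset of the first hole at or after the given
    # position. Every file ends in a virtual hole at EOF, so a file is
    # sparse only when the first hole lies strictly before the end.
    size = os.stat(path).st_size
    fd = os.open(path, os.O_RDONLY)
    try:
        first_hole = os.lseek(fd, 0, os.SEEK_HOLE)
    finally:
        os.close(fd)
    return first_hole < size
```

On filesystems that do not track holes, lseek reports the whole file as data, so the function degrades to returning False rather than failing.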

Re: [zfs-discuss] Any rhyme or reason to disk dev names?

2011-12-21 Thread Mike Gerdts
- /dev/chassis//SYS/SASBP/HDD0/disk disk c0t5000CCA012B66E90d0 /dev/chassis//SYS/SASBP/HDD1/disk disk c0t5000CCA012B68AC8d0 The text in the left column represents text that should be printed on the corresponding disk slots. -- Mike Gerdts http

Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Mike Gerdts
> its thing. > > chicken / egg situation? I miss the old fail safe boot menu... You can mount it pretty much anywhere: mkdir /tmp/foo zfs mount -o mountpoint=/tmp/foo ... I'm not sure when the temporary mountpoint option (-o mountpoint=...) came in. If it's not valid synt

Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Mike Gerdts
as not updated from Solaris 11 Express), it will have a separate /var dataset. zfs mount -o mountpoint=/mnt/rpool/var rpool/ROOT/solaris/var -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-21 Thread Mike Gerdts
impact if an errant command were issued. I'd never do that in production without some form of I/O fencing in place. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-05 Thread Mike Gerdts
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote: > # zpool import -f tank > > http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/ I encourage you to open a support case and ask for an escalation on CR 7056738. -- Mike Gerdts http://mgerdts.blo

Re: [zfs-discuss] zfs rename query

2011-07-27 Thread Mike Gerdts
I suspect that it doesn't give you exactly the output you are looking for. FWIW, the best way to achieve what you are after without breaking the zones is going to be along the lines of: zlogin z1c1 init 0 zoneadm -z z1c1 detach zfs rename rpool/zones/z1c1 rpool/new/z1c1 zoneadm -

Re: [zfs-discuss] What is ".$EXTEND/$QUOTA" ?

2011-07-19 Thread Mike Gerdts
reated in 757 * a special directory, $EXTEND, at the root of the shared file 758 * system. To hide this directory prepend a '.' (dot). 759 */ -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-15 Thread Mike Gerdts
dding a good enterprise SSD would double the > server cost - not only on those big good systems with > tens of GB of RAM), and hopefully simplifying the system > configuration and maintenance - that is indeed the point > in question. > > //Jim -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Mike Gerdts
/zonecfg.export zoneadm -z attach [-u|-U] Any follow-ups should probably go to Oracle Support or zones-discuss. Your problems are not related to zfs. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] FW: Solaris panic

2011-03-17 Thread Mike Gerdts
enunix: [ID 877030 kern.notice] Copyright (c) 1983, > 2010, Oracle and/or its affiliates. All rights reserved. > > Can anyone help? > > Regards > Karl -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] zfs-nfs-sun 7000 series

2011-03-10 Thread Mike MacNeil
underlying file system zfs vs ufs. Any thoughts to speed up the backup of the Sun 7000 nfs mount? Thank you. Mike MacNeil Global IT Infrastructure 4281 Harvester Rd. Burlington, ON L7L 5M4 Canada Phone: 905 632 2999 ext. 2920 Fax: 905 632 2055 Email

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-25 Thread Mike Tancsa
from http://www.addonics.com/ (They ship to Canada as well without issue) Why use USB ? You wll get much better performance/throughput on eSata (if you have good drivers of course). I use their sil3124 eSata controller on FreeBSD as well as a number of PM units and they work great. ---Mike

Re: [zfs-discuss] multiple disk failure (solved?)

2011-02-01 Thread Mike Tancsa
On 1/31/2011 4:19 PM, Mike Tancsa wrote: > On 1/31/2011 3:14 PM, Cindy Swearingen wrote: >> Hi Mike, >> >> Yes, this is looking much better. >> >> Some combination of removing corrupted files indicated in the zpool >> status -v output, running zpool scrub and

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Mike Tancsa
On 1/31/2011 3:14 PM, Cindy Swearingen wrote: > Hi Mike, > > Yes, this is looking much better. > > Some combination of removing corrupted files indicated in the zpool > status -v output, running zpool scrub and then zpool clear should > resolve the corruption, but its d

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Mike Tancsa
On 1/29/2011 6:18 PM, Richard Elling wrote: > > On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote: > >> On 1/29/2011 12:57 PM, Richard Elling wrote: >>>> 0(offsite)# zpool status >>>> pool: tank1 >>>> state: UNAVAIL >>>> status: O

Re: [zfs-discuss] multiple disk failure

2011-01-30 Thread Mike Tancsa
On 1/30/2011 12:39 AM, Richard Elling wrote: >> Hmmm, doesnt look good on any of the drives. > > I'm not sure of the way BSD enumerates devices. Some clever person thought > that hiding the partition or slice would be useful. I don't find it useful. > On a Solaris > system, ZFS can show a disk

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Mike Tancsa
On 1/29/2011 6:18 PM, Richard Elling wrote: >> 0(offsite)# > > The next step is to run "zdb -l" and look for all 4 labels. Something like: > zdb -l /dev/ada2 > > If all 4 labels exist for each drive and appear intact, then look more closely > at how the OS locates the vdevs. If you can't so

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Mike Tancsa
age like this ? It's just for backups of backups in a DR site ---Mike

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Mike Tancsa
But they seem OK now. Same order as well. # camcontrol devlist at scbus0 target 0 lun 0 (pass0,ada0) at scbus0 target 1 lun 0 (pass1,ada1) at scbus0 target 2 lun 0 (pass2,ada2) at scbus0 target 3 lun 0 (pass3,ada3) # dd if=/dev/ada2 of=/dev/null count=20 bs=1024 20+0 records in 20

[zfs-discuss] multiple disk failure

2011-01-28 Thread Mike Tancsa
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20min and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( The new drive cage started to fail, it hung the server and the

[zfs-discuss] zpool import crashes system

2010-11-11 Thread Mike DeMarco
I am trying to bring in my zpool from build 121 into build 134 and every time I do a zpool import the system crashes. I have read other posts for this and have tried setting zfs_recover = 1 and aok = 1 in /etc/system I have used mdb to verify that they are in the kernel but the system still cr

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Mike Gerdts
ms. Perhaps this belongs somewhere other than zfs-discuss - it has nothing to do with zfs. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Moving the 17 zones from one LUN to another LUN

2010-10-27 Thread Mike Gerdts
On Wed, Oct 27, 2010 at 9:27 AM, bhanu prakash wrote: > Hi Mike, > > > Thanks for the information... > > Actually the requirement is like this. Please let me know whether it matches > for the below requirement or not. > > Question: > > The SAN team will assign the

Re: [zfs-discuss] Moving the 17 zones from one LUN to another LUN

2010-10-26 Thread Mike Gerdts
me that you are comfortable that the zone data moved over ok... zfs destroy -r oldpool/zones Again, verify the procedure works on a test/lab/whatever box before trying it for real. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] making sense of arcstat.pl output

2010-10-01 Thread Mike Harsch
przemol, Thanks for the feedback. I had incorrectly assumed that any machine running the script would have L2ARC implemented (which is not the case with Solaris 10). I've added a check for this that allows the script to work on non-L2ARC machines as long as you don't specify L2ARC stats on th

Re: [zfs-discuss] making sense of arcstat.pl output

2010-10-01 Thread Mike Harsch
Hello Christian, Thanks for bringing this to my attention. I believe I've fixed the rounding error in the latest version. http://github.com/mharsch/arcstat -- This message posted from opensolaris.org

Re: [zfs-discuss] making sense of arcstat.pl output

2010-09-30 Thread Mike Harsch
For posterity, I'd like to point out the following: neel's original arcstat.pl uses a crude scaling routine that results in a large loss of precision as numbers cross from Kilobytes to Megabytes to Gigabytes. The 1G reported arc size case described here, could actually be anywhere between 1,00
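A scaling routine that keeps a fractional digit avoids the precision cliff described above. This is a hypothetical sketch — the function name and exact formatting are my own, not neel's script:

```python
def scale(n, divisor=1024):
    # Keep one fractional digit once a unit prefix kicks in, so e.g.
    # 1,610,612,736 bytes reads as "1.5G" instead of collapsing to "1G"
    # the way a crude integer-scaling routine would.
    for unit in ("", "K", "M", "G", "T"):
        if abs(n) < divisor:
            return f"{n:.1f}{unit}" if unit else f"{n:.0f}"
        n /= divisor
    return f"{n:.1f}P"
```

With this, a reported 1G arc size can no longer hide anything between 1.0 and 2.0 gigabytes.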

Re: [zfs-discuss] file level clones

2010-09-27 Thread Mike Gerdts
On Mon, Sep 27, 2010 at 6:23 AM, Robert Milkowski wrote: > Also see http://www.symantec.com/connect/virtualstoreserver And http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/ -- Mike Gerdts http://mgerdts.blogspot.

Re: [zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Mike.
On 9/23/2010 at 12:38 PM Erik Trimble wrote: | [snip] |If you don't really care about ultra-low-power, then there's absolutely |no excuse not to buy a USED server-class machine which is 1- or 2- |generations back. They're dirt cheap, readily available, | [snip] = Anyone have

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-16 Thread Mike Mackovitch
On Thu, Sep 16, 2010 at 08:15:53AM -0700, Rich Teer wrote: > On Thu, 16 Sep 2010, Erik Ableson wrote: > > > OpenSolaris snv129 > > Hmm, SXCE snv_130 here. Did you have to do any server-side tuning > (e.g., allowing remote connections), or did it just work out of the > box? I know that Sendmail

[zfs-discuss] recordsize

2010-09-16 Thread Mike DeMarco
effect that data? zpool consists of 8 SAN LUNs. Thanks mike

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-15 Thread Mike Mackovitch
On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote: > any resolution to this issue? I'm experiencing the same annoying > lockd thing with mac osx 10.6 clients. I am at pool ver 14, fs ver > 3. Would somehow going back to the earlier 8/2 setup make things > better? As noted in the earlier thr

Re: [zfs-discuss] How to migrate to 4KB sector drives?

2010-09-12 Thread Mike Gerdts
around the b137 timeframe. OpenIndiana, to be released on Tuesday, is based on b146 or later. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-28 Thread Mike Gerdts
s) Presumably this problem is being worked... http://hg.genunix.org/onnv-gate.hg/rev/d560524b6bb6 Notice that it implements: 866610 Add SATA TRIM support With this in place, I would imagine a next step is for zfs to issue TRIM commands as zil entries have been committed to the data disks. -- M

Re: [zfs-discuss] Halcyon ZFS and system monitoring software for OpenSolaris (beta)

2010-08-25 Thread Mike Kirk
Update: version 3.2.5 out now, with changes to better support snv_134: http://forums.halcyoninc.com/showthread.php?t=368 If you've downloaded v3.2.4 and are on 09/06, there is no reason to upgrade. Regards, mike.k...@halcyoninc.com

[zfs-discuss] shrink zpool

2010-08-25 Thread Mike DeMarco
Is it currently, or in the near future, possible to shrink a zpool ("remove a disk")?

Re: [zfs-discuss] Halcyon ZFS and system monitoring software for OpenSolaris (beta)

2010-08-20 Thread Mike Kirk
Hi zfs user, > Is the beta free? for how long? if not how much for 5 machines? Everything on our web site (including the beta) runs for 30 days with the baked-in license. After 30 days it will stop collecting fresh numbers, unless you add a license key, or a demo extension file from the sales t

Re: [zfs-discuss] Halcyon ZFS and system monitoring software for OpenSolaris (beta)

2010-08-20 Thread Mike Kirk
Hi wonslung, Thanks for posting to our forum: I'll respond there and take things off-list. Sounds like it's the same bug that appeared with the Sol10 July EIS: (which snv_134 obviously got the changes for first, and that wasn't in 09/06). Fixing it now... mike.k...@halcyoninc.com

[zfs-discuss] Halcyon ZFS and system monitoring software for OpenSolaris (beta)

2010-08-19 Thread Mike Kirk
laris: we're keeping our eyes on Solaris 11 Express / Illumos and aim to support the more advanced features of Solaris 11 the day it's pushed out the door. Thanks for your time! Regards, Mike dot Kirk at HalcyonInc dot com * previous build: http://opensolaris.org/jive/thread.jspa?thre

Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-08-16 Thread Mike DeMarco
What I would really like to know is why PCI-e RAID controller cards cost more than an entire motherboard with processor. Some cards can cost over $1,000 - for what?

Re: [zfs-discuss] EMC migration and zfs

2010-08-16 Thread Mike DeMarco
Bump this up. Anyone?

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-13 Thread Mike M
On 8/13/2010 at 8:56 PM Eric D. Mudama wrote: |On Fri, Aug 13 at 19:06, Frank Cusack wrote: |>Interesting POV, and I agree. Most of the many "distributions" of |>OpenSolaris had very little value-add. Nexenta was the most interesting |>and why should Oracle enable them to build a business at t

Re: [zfs-discuss] Moving /export to another zpool

2010-08-13 Thread Mike Gerdts
hen I boot on using LiveCD, how can I mount my first drive that has > opensolaris installed ? To list the zpools it can see: zpool import To import one called rpool at an alternate root: zpool import -R /mnt rpool -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] EMC migration and zfs

2010-08-12 Thread Mike DeMarco
We are going to be migrating to a new EMC frame using Open Replicator. ZFS is sitting on volumes that are running MPXIO. So the controller number/disk number is going to change when we reboot the server. I would like to know if anyone has done this and will the zfs filesystems "just work" and fi

Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
That looks like that will work. Won't be able to test until late tonight. Thanks mike

Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
Thanks, adding mount did allow me to create it, but does not allow me to create the mountpoint.

[zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
I am trying to give a general user permissions to create zfs filesystems in the rpool. zpool set=delegation=on rpool zfs allow create rpool both run without any issues. zfs allow rpool reports the user does have create permissions. zfs create rpool/test cannot create rpool/test : permission d

Re: [zfs-discuss] NFS performance?

2010-07-26 Thread Mike Gerdts
On Mon, Jul 26, 2010 at 2:56 PM, Miles Nordin wrote: >>>>>> "mg" == Mike Gerdts writes: >    mg> it is rather common to have multiple 1 Gb links to >    mg> servers going to disparate switches so as to provide >    mg> resilience in the face of switc

Re: [zfs-discuss] NFS performance?

2010-07-26 Thread Mike Gerdts
On Mon, Jul 26, 2010 at 1:27 AM, Garrett D'Amore wrote: > On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote: >> On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore wrote: >> > On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote: >> >> >> >> I

Re: [zfs-discuss] NFS performance?

2010-07-25 Thread Mike Gerdts
ration choices, and and a bit of luck. Note that with Sun Trunking there was an option to load balance using a round robin hashing algorithm. When pushing high network loads this may cause performance problems with reassembly. -- Mike Gerdts http://mgerdts.blogspot.com/ ___

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-07-07 Thread Mike Gerdts
it looks as though znode_t's z_seq may be useful. While it isn't a checksum, it seems to be incremented on every file change. -- Mike Gerdts http://mgerdts.blogspot.com/
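In the absence of a kernel-exposed change counter like z_seq, a userland approximation is to cache digests keyed by a stat signature and rehash only when it changes. A hypothetical sketch, not from the thread:

```python
import hashlib
import os

_cache = {}  # path -> ((mtime_ns, size), digest)

def file_hash(path):
    # Rehash only when the stat signature changes -- a coarse stand-in for
    # the znode sequence-number idea: mtime/size can miss same-second
    # rewrites, where z_seq would not.
    st = os.stat(path)
    sig = (st.st_mtime_ns, st.st_size)
    hit = _cache.get(path)
    if hit and hit[0] == sig:
        return hit[1]
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            h.update(block)
    digest = h.hexdigest()
    _cache[path] = (sig, digest)
    return digest
```

The second call for an unchanged file is served from the cache, which is what makes rapid rehashing of large trees feasible.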

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
etting data 32 KB at a time. How does 32 KB compare to the database block size? How does 32 KB compare to the block size on the relevant zfs filesystem or zvol? Are blocks aligned at the various layers? http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified -- Mike Gerdts h

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
ut 32 KB I/O's. I think you can perform a test that involves mainly the network if you use netperf with options like: netperf -H $host -t TCP_RR -r 32768 -l 30 That is speculation based on reading http://www.netperf.org/netperf/training/Netperf.html. Someone else (perhaps

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
y good point. You can use a combination of "zpool iostat" and fsstat to see the effect of reads that didn't turn into physical I/Os. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance

2010-06-30 Thread Mike La Spina
2 'kstat stmf' outputs on a 5 min interval and a 'zpool iostat -v 30 5' which would help visualize the I/O behavior. Regards, Mike http://blog.laspina.ca/

Re: [zfs-discuss] COMSTAR ISCSI - configuration export/import

2010-06-28 Thread Mike Devlin
I haven't tried it yet, but supposedly this will backup/restore the COMSTAR config: $ svccfg export -a stmf > comstar.bak.${DATE} If you ever need to restore the configuration, you can attach the storage and run an import: $ svccfg import comstar.bak.${DATE} - Mike On 6/28/10,

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Mike Gerdts
engineering where group projects were common and CAD, EDA, and simulation tools could generate big files very quickly. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mai

Re: [zfs-discuss] Dedup... still in beta status

2010-06-15 Thread Mike Gerdts
ent mail system should already dedup. Or at least that is how I would have written it for the last decade or so... -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Mike Gerdts
s=513 count=204401 # repeatedly feed that file to dd while true ; do cat /tmp/randomdataa ; done | dd of=/my/test/file bs=... count=... The above should make it so that it will take a while before there are two blocks that are identical, thus confounding deduplication as well. -- Mike Gerdts http://mgerdts.blogspot.com/
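The same trick — random data in records whose length is coprime to any power-of-two recordsize — can be sketched without dd. The path, chunk size, and repeat count below are illustrative placeholders, not values from the message:

```python
import os

RECORD = 513         # deliberately not a power of two
CHUNK_RECORDS = 401  # one reusable chunk of ~200 KB
REPEATS = 8

def write_confounder(path):
    # Repeating a chunk whose length (513 * 401 bytes, an odd number)
    # shares no factor with a power-of-two recordsize means successive
    # copies never line up on the same block boundaries, so few on-disk
    # blocks are identical and dedup finds little to merge.
    chunk = os.urandom(RECORD * CHUNK_RECORDS)
    with open(path, "wb") as f:
        for _ in range(REPEATS):
            f.write(chunk)
    return os.stat(path).st_size
```

Generating the random chunk once and replaying it keeps the write fast while still defeating block-level deduplication, which is exactly the point of the dd pipeline above.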

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Mike Gerdts
Sorry, turned on html mode to avoid gmail's line wrapping. On Mon, May 31, 2010 at 4:58 PM, Sandon Van Ness wrote: > On 05/31/2010 02:52 PM, Mike Gerdts wrote: > > On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness > wrote: > > > >> On 05/31/2010 01:51 PM, Bob Fri

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Mike Gerdts
then a few tenths of a percent, you are probably short on CPU. It could also be that interrupts are stealing cycles from rsync. Placing it in a processor set with interrupts disabled in that processor set may help. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Mike Gerdts
rs, ARC, etc. If the processes never page in the pages that have been paged out (or the processes that have been swapped out are never scheduled) then those pages will not consume RAM. The best thing to do with processes that can be swapped out forever is to not run them. -- Mike Gerdts http://

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Mike Mackovitch
On Thu, Apr 22, 2010 at 02:53:37PM -0700, Rich Teer wrote: > On Thu, 22 Apr 2010, Mike Mackovitch wrote: > > > I would also check /var/log/system.log and /var/log/kernel.log on the Mac to > > see if any other useful messages are getting logged. > > Ah, we're gett

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Mike Mackovitch
On Thu, Apr 22, 2010 at 01:54:26PM -0700, Rich Teer wrote: > On Thu, 22 Apr 2010, Mike Mackovitch wrote: > > Hi Mike, > > > So, it looks like you need to investigate why the client isn't > > getting responses from the server's "lockd". > > >

Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Mike Mackovitch
On Thu, Apr 22, 2010 at 12:40:37PM -0700, Rich Teer wrote: > On Thu, 22 Apr 2010, Tomas Ögren wrote: > > > Copying via terminal (and cp) works. > > Interesting: if I copy a file *which has no extended attributes* using cp in > a terminal, it works fine. If I try to cp a file that has EA (to the

[zfs-discuss] Why does ARC grow above hard limit?

2010-04-05 Thread Mike Z
I would appreciate if somebody can clarify a few points. I am doing some random WRITES (100% writes, 100% random) testing and observe that ARC grows way beyond the "hard" limit during the test. The hard limit is set 512 MB via /etc/system and I see the size going up to 1 GB - how come is it

[zfs-discuss] ZFS behavior under limited resources

2010-04-02 Thread Mike Z
I am trying to see how ZFS behaves under resource starvation - corner cases in embedded environments. I see some very strange behavior. Any help/explanation would really be appreciated. My current setup is : OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Mike Gerdts
llions of files with relatively few changes. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Mike Gerdts
ific butype_name strings accessible via the NDMP_CONFIG_GET_BUTYPE_INFO request. http://www.ndmp.org/download/sdk_v4/draft-skardal-ndmp4-04.txt It seems pretty clear from this that an NDMP data stream can contain most anything and is dependent on the devi

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Mike Gerdts
hat a similar argument could be made for storing the zfs send data streams on a zfs file system. However, it is not clear why you would do this instead of just zfs send | zfs receive. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [OT] excess zfs-discuss mailman digests

2010-02-08 Thread Mike Gerdts
On Mon, Feb 8, 2010 at 9:04 PM, grarpamp wrote: > PS: Is there any way to get a copy of the list since inception > for local client perusal, not via some online web interface? You can get monthly .gz archives in mbox format from http://mail.opensolaris.org/pipermail/zfs-discuss/. --

Re: [zfs-discuss] zero out block / sectors

2010-01-25 Thread Mike Gerdts
On Mon, Jan 25, 2010 at 2:32 AM, Kjetil Torgrim Homme wrote: > Mike Gerdts writes: > >> John Hoogerdijk wrote: >>> Is there a way to zero out unused blocks in a pool?  I'm looking for >>> ways to shrink the size of an opensolaris virtualbox VM and using the

Re: [zfs-discuss] zero out block / sectors

2010-01-23 Thread Mike Gerdts
On Sat, Jan 23, 2010 at 11:55 AM, John Hoogerdijk wrote: > Mike Gerdts wrote: >> >> On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk >> wrote: >> >>> >>> Is there a way to zero out unused blocks in a pool?  I'm looking for ways >>>

Re: [zfs-discuss] zero out block / sectors

2010-01-22 Thread Mike Gerdts
at you should be able to just use mkfile or "dd if=/dev/zero ..." to create a file that consumes most of the free space then delete that file. Certainly it is not an ideal solution, but seems quite likely to be effective. -- Mike Gerdts http://mgerdts.blogspot.com/ _
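The mkfile / dd approach can be scripted portably. A hedged sketch under the same idea — the path and byte limit are placeholders, and on a real pool you would let it run until ENOSPC:

```python
import errno
import os

def zero_fill(path, limit_bytes, chunk=1 << 20):
    # Write zeros until limit_bytes is reached (or the filesystem fills),
    # then delete the file. Inside a VM this leaves the freed blocks
    # zeroed so the host disk image can be compacted.
    written = 0
    try:
        with open(path, "wb") as f:
            buf = b"\0" * chunk
            while written < limit_bytes:
                try:
                    f.write(buf)
                except OSError as e:
                    if e.errno == errno.ENOSPC:
                        break
                    raise
                written += chunk
    finally:
        os.unlink(path)
    return written
```

As the message says, this is not an ideal solution, but it is simple and tends to be effective for shrinking virtual disk images.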

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread Mike Gerdts
se gnu tar to extract data. This seems to be most useful when you need to recover master and/or media servers and to be able to extract your data after you no longer use netbackup. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Mike Gerdts
56 -rw-r--r-- 1 428411 Jan 22 04:14 sha256.Z -rw-r--r-- 1 321846 Jan 22 04:14 sha256.bz2 -rw-r--r-- 1 320068 Jan 22 04:14 sha256.gz -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mai

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Mike La Spina
I use zfs send/recv in the enterprise and in smaller environments all the time and it is excellent. Have a look at how awesome the functionality is in this example. http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs Regards, Mike

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Mike Gerdts
data stream > compared to other archive formats. In general it is strongly discouraged for > these purposes. > Yet it is used in ZFS flash archives on Solaris 10 and are slated for use in the successor to flash archives. This initial proposal seems to imply using the same

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 12:28 PM, Torrey McMahon wrote: > On 1/8/2010 10:04 AM, James Carlson wrote: >> >> Mike Gerdts wrote: >> >>> >>> This unsupported feature is supported with the use of Sun Ops Center >>> 2.5 when a zone is put on a "NAS St

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts wrote: > I've seen similar errors on Solaris 10 in the primary domain and on a > M4000.  Unfortunately Solaris 10 doesn't show the checksums in the > ereport.  There I noticed a mixture between read errors and checksum > errors -

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 5:28 AM, Frank Batschulat (Home) wrote: [snip] > Hey Mike, you're not the only victim of these strange CHKSUM errors, I hit > the same during my slightly different testing, where I'm NFS mounting an > entire, pre-existing remote file living in the zpoo

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
not a good idea in any sort > of production environment?" > > It sounds like a bug, sure, but the fix might be to remove the option. This unsupported feature is supported with the use of Sun Ops Center 2.5 when a zone is put on a "NAS Storage Library". -- Mike Gerd

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
errors from "zoneadm install", which under the covers does a pkg image create followed by *multiple* pkg install invocations. No checksum errors pop up there. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Zones on shared storage - a warning

2010-01-07 Thread Mike Gerdts
[removed zones-discuss after sending heads-up that the conversation will continue at zfs-discuss] On Mon, Jan 4, 2010 at 5:16 PM, Cindy Swearingen wrote: > Hi Mike, > > It is difficult to comment on the root cause of this failure since > the several interactions of these features

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mike Gerdts
e appreciated. > > Thanks, > Mikko > > -- > Mikko Lammi | l...@lmmz.net | http://www.lmmz.net -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Zpool creation best practices

2010-01-03 Thread Mike
Thanks for the response Marion. I'm glad that I'm not the only one. :)

[zfs-discuss] Zpool creation best practices

2009-12-30 Thread Mike
I'm just wondering what some of you might do with your systems. We have an EMC Clariion unit that I connect several Sun machines to. I allow the EMC to do its hardware raid5 for several luns and then I stripe them together. I considered using raidz and just configuring the EMC as a JBOD, bu
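The two layouts being weighed here can be sketched as follows (the cXtYdZ device names are hypothetical; real Clariion LUN names will differ):

```shell
# Option 1: EMC does raid5 in hardware, ZFS just stripes the LUNs.
# Fast, but ZFS can detect corruption without being able to repair it.
zpool create tank c2t0d0 c2t0d1 c2t0d2

# Option 2: present the LUNs as JBOD and let ZFS provide the redundancy,
# so checksummed self-healing works end to end.
zpool create tank raidz c2t0d0 c2t0d1 c2t0d2 c2t0d3
```

The trade-off is the usual one: with option 1 a checksum error leaves ZFS nothing to reconstruct from, while option 2 pays the raidz write overhead in exchange for self-healing.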

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-30 Thread Mike
Just thought I would let you all know that I followed what Alex suggested along with what many of you pointed out and it worked! Here are the steps I followed: 1. Break root drive mirror 2. zpool export filesystem 3. run the command to enable MPxIO and reboot the machine 4. zpool import filesystem
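On Solaris, steps 2 through 4 map roughly onto the following commands (a sketch, assuming a hypothetical data pool named tank; `stmsboot -e` is the standard way to enable MPxIO, and the root-mirror handling in step 1 is omitted here):

```shell
# Export the pool so no vdevs are open while device paths change.
zpool export tank

# Enable MPxIO; stmsboot updates the boot configuration and asks for a reboot,
# after which disks appear under their multipathed scsi_vhci names.
stmsboot -e

# ...after the reboot...
# Import scans the new device paths and finds the pool by its labels.
zpool import tank
```

This works because ZFS identifies vdevs by the labels written on the disks, not by device path, so the rename under MPxIO is harmless as long as the pool is exported across the change.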

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread Mike Gerdts
ndancy choices then there is no need for any rocket scientists. :) -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread Mike Gerdts
> could reclaim those blocks. This is just a variant of the same problem faced with expensive SAN devices that have thin provisioning allocation units measured in the tens of megabytes instead of hundreds to thousands of kilobytes. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Zones on shared storage - a warning

2009-12-22 Thread Mike Gerdts
On Tue, Dec 22, 2009 at 8:02 PM, Mike Gerdts wrote: > I've been playing around with zones on NFS a bit and have run into > what looks to be a pretty bad snag - ZFS keeps seeing read and/or > checksum errors.  This exists with S10u8 and OpenSolaris dev build > snv_129.  This is

[zfs-discuss] Zones on shared storage - a warning

2009-12-22 Thread Mike Gerdts
0 /mnt/osolzone/root DEGRADED 0 0 117 too many errors errors: No known data errors r...@soltrain19# zlogin osol uptime 5:31pm up 1 min(s), 0 users, load average: 0.69, 0.38, 0.52 -- Mike Gerdts
