Re: [zfs-discuss] LUN expansion choices

2012-11-14 Thread Peter Tribble
On Tue, Nov 13, 2012 at 6:16 PM, Karl Wagner wrote:
> On 2012-11-13 17:42, Peter Tribble wrote:
> >
> > Given storage provisioned off a SAN (I know, but sometimes that's
> > what you have to work with), what's the best way to expand a pool?
> >
> > Specific

Re: [zfs-discuss] IOzone benchmarking

2012-05-04 Thread Peter Tribble
ch vdev in a raidz configuration. In practice we're finding that our raidz systems actually perform pretty well when compared with dynamic stripes, mirrors, and hardware raid LUNs.)

Re: [zfs-discuss] Does raidzN actually protect against bitrot? If yes - how?

2012-01-15 Thread Peter Tribble
are therefore a good thing.)
> How *do* some things get fixed then - can only dittoed data
> or metadata be salvaged from second good copies on raidZ?
You can recover anything you have enough redundancy for. Which means everything, up to the redundancy of the vdev. B

[zfs-discuss] Reversing fdisk changes

2011-11-10 Thread Peter Tribble
the system is out of service and I can reconstruct the data if necessary. Although knowing how to fix this would be generally useful in the future... Thanks,

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Peter Tribble
On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
>
> On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble wrote:
>>
>> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
>> >
>> > Every scrub I've ever done that has found an error required manual

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Peter Tribble
as a result of a scrub, and I've never had to intervene manually.

Re: [zfs-discuss] aclmode gone in S10u10?

2011-09-13 Thread Peter Tribble
On Tue, Sep 13, 2011 at 8:34 PM, Paul B. Henson wrote:
> On 9/13/2011 5:21 AM, Peter Tribble wrote:
>
>> Update 10 has been out for about 3 weeks.
>
> Where was any announcement posted? I haven't heard anything about it. As far
> as I can tell, the Oracle site still o

Re: [zfs-discuss] aclmode gone in S10u10?

2011-09-13 Thread Peter Tribble
(This doesn't affect me all that much, as ACLs on ZFS have never really worked right, so anything where the ACL is critical gets stored on ufs [yuck].) Also, aclmode is no longer listed in the usage message you see if you do 'zfs get'.

Re: [zfs-discuss] ZFS

2011-09-13 Thread Peter Tribble
have the ability to slot that copy of the data instantly into service if the primary copy fails. For tar, you can substitute a free or commercial backup solution. It works the same way.

Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Peter Tribble
ent to disk in the background. Second, use a proper benchmark suite, and one that isn't itself a bottleneck. Something like vdbench, although there are others.

Re: [zfs-discuss] arcstat updates

2011-04-26 Thread Peter Tribble
s (maybe one showing the sizes, one showing the ARC efficiency, another one for L2ARC).
> 5. Who wants to help with this little project?
I'm definitely interested in emulating arcstat in jkstat. OK, I have an old version, but it's pretty much

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Peter Tribble
BA (and one slot in the server) for each MD1200, which chews up slots pretty quick.

Re: [zfs-discuss] How many files & directories in a ZFS filesystem?

2010-12-09 Thread Peter Tribble
ifree %iused Mounted on
/images/fred 140738056 36000718887 0% /images/fred
average 11k

I've never seen ZFS run out of inodes, though.
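
Those figures look like the output of Solaris df's inode option; a minimal way to run the same check (the mount point is of course specific to that system):

    df -o i /images/fred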

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-17 Thread Peter Tribble
sted and supported", and it's reasonably clear that the way to get support is via the existing Premier Support offering. And it's just the same deal as with S10 - you want to use it in production, you need to have a support contract. It's not hard to find this out, just a few seconds

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-18 Thread Peter Tribble
ng. (And you can do this just on the datasets you really want to keep safe, you don't have to do it on the whole pool.)

Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Peter Tribble
.) Tiny changes in block alignment completely ruin the possibility of significant benefit. Using ZFS dedup is logically the wrong place to do this; you want a decent backup system that doesn't generate significant amounts of duplicate data in the first place.

Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Peter Tribble
d anyway; I'm playing with replacements for sar. Top is still pretty useful. For zfs, zpool iostat has some utility, but I find fsstat to be pretty useful.

Re: [zfs-discuss] why both dedup and compression?

2010-05-06 Thread Peter Tribble
whatsoever in the log files, which are pretty big, but compress really well. So having both enabled works really well.

Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestore

2010-05-01 Thread Peter Tribble
esn't even matter if you make the same selections
>> you
>
> With the new Oracle policies, it seems unlikely that you will be able to
> reinstall the OS and achieve what you had before.

And what policies have Oracle introduced that mean you can't reinstall your system?

Re: [zfs-discuss] Identifying drives

2010-04-26 Thread Peter Tribble
cing those with the serial numbers from the OS (eg from iostat -En) would be a good idea. (You are, I presume, using regular scrubs to catch latent errors.)
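
A minimal sketch of pulling those serial numbers, assuming a Solaris box where iostat -En reports a 'Serial No' field for each device:

    iostat -En | grep -i 'serial'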

Re: [zfs-discuss] Find out which of many FS from a zpool is busy?

2010-04-22 Thread Peter Tribble
> , when one had old style file systems and exported these as a
> whole iostat -x came in handy, however, with zpools, this is not the case
> anymore, right?

fsstat? Typically along the lines of

    fsstat /tank/* 1

Re: [zfs-discuss] Simultaneous failure recovery

2010-03-31 Thread Peter Tribble
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock wrote:
>
> On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
>
>> I have a pool (on an X4540 running S10U8) in which a disk failed, and the
>> hot spare kicked in. That's perfect. I'm happy.
>>
>> Then a

[zfs-discuss] Simultaneous failure recovery

2010-03-30 Thread Peter Tribble
spare to cover the other failed drive? And can I hotspare it manually? I could do a straight replace, but that isn't quite the same thing.

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Peter Tribble
lease happened.) Whether Oracle make changes in the future remains to be seen. I would expect them to (you can't turn around a loss-making acquisition into a profitable subsidiary without making changes). In terms of OpenSolaris, the word is that a position statement is due shortly.

Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Peter Tribble
> Maybe anyone in the know could provide a short blurb on what
> the state is, and what the options are.

Of course they can't. If they're in the know, then they're almost certainly not in a position to talk about it in public. Asking here does not help, as I doubt if anyone fr

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Peter Tribble
ven an admittedly sub-optimal configuration ought to have delivered.)

Re: [zfs-discuss] unionfs help

2010-02-04 Thread Peter Tribble
symlink in the global zone and other zones, but that's relatively harmless.

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Peter Tribble
> l Solaris, and use this
> pool for other apps?
> Also, what happens if a drive fails?

Swap it for a new one ;-) (somewhat more complex with the dual layout as I described it).

Re: [zfs-discuss] ZFS Boot Recovery after Motherboard Death

2009-12-13 Thread Peter Tribble
detailed system configuration baked into an installed image. Yes, you can get rid of it, but the idea that you could pull drives from a failed system and put them into any old system they might happen to fit in and expect it to just work has always been optimistic. The advantage of zfs is that it

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Peter Tribble
cks that match the blocks of f1 f2 f3 f4 f5. Is that likely to happen? dedup is at the block level, so the blocks in f2 will only match the same data in f15 if they're aligned, which is only going to happen if f1 ends on a block boundary. Besides, you still have to read all the da
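
A small sketch of the alignment point being made here, assuming a 128K recordsize and the hypothetical file names f1 and f2 from the discussion:

    # f2's data in the concatenated copy starts at f1's size; its
    # blocks can only line up (and so dedup against f2's own blocks)
    # if that offset is a multiple of the 128K recordsize
    off=`wc -c < f1`
    if [ `expr $off % 131072` -eq 0 ]; then
        echo "aligned: f2's blocks can dedup against the copy"
    else
        echo "misaligned: no block-level dedup"
    fi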

Re: [zfs-discuss] Dumb idea?

2009-10-29 Thread Peter Tribble
how do you keep the metadata in sync with the real data in the face of modifications by applications that aren't aware of your scheme?

Re: [zfs-discuss] libzfs.h versioning

2009-09-10 Thread Peter Tribble
s that library?

Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Peter Tribble
> t that feels clumsy. "zfs get creation" will only give me to the nearest
> minute.

'zfs get -p creation' gives you seconds since the epoch, which you can convert using a utility of your choice.
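
A minimal sketch of that, with a hypothetical dataset name and perl as the conversion utility:

    t=`zfs get -H -o value -p creation tank/fs`
    perl -le "print scalar localtime $t"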

Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-24 Thread Peter Tribble
ost folks won't know which one they are currently using :-( It's not *just* a social engineering attack. It's relying on the fact that (unlike chown -h) the chmod command follows symlinks and there's no way to disable that behaviour.

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Peter Tribble
On Thu, Jul 2, 2009 at 2:22 PM, Mike Gerdts wrote:
> On Thu, Jul 2, 2009 at 8:07 AM, Peter Tribble wrote:
>> We've just stumbled across an interesting problem in one of our
>> applications that fails when run on a ZFS filesystem.
>>
>> I don't have the code,

[zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Peter Tribble
rked for many years.) If not, I was looking at interposing my own readdir() (that's assuming the application is using readdir()) that actually returns the entries in the desired order. However, I'm having a bit of trouble hacking this together (the current source doesn't compile in i

Re: [zfs-discuss] Things I Like About ZFS

2009-06-21 Thread Peter Tribble
me, is the fact that it rapidly became essentially invisible. It just does its job and you soon forget that it's there (until you have to deal with one of the alternatives, which throws it into sharp relief).

Re: [zfs-discuss] Sun Flash Modules

2009-04-18 Thread Peter Tribble
would be impossible to do with spinning media.
>
> 3. The (common) requirement for mirrored boot disks should prove
> obsolete.

Why? Is the possibility of component or path failure and data corruption so close to zero?

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Peter Tribble
On Sat, Mar 28, 2009 at 11:06 AM, Michael Shadle wrote:
> On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble wrote:
>
>> zpool add tank raidz1 disk_1 disk_2 disk_3 ...
>>
>> (The syntax is just like creating a pool, only with add instead of create.)
>
> so

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Peter Tribble
nce. Generally, unless you want different behaviour from different pools, it's easier to combine them.

Re: [zfs-discuss] zfs and raid 51

2009-02-20 Thread Peter Tribble
hardware to manage disk failures, but have data redundancy provided by zfs which is where you want it. If you want random I/O performance, raidz isn't a good choice. For most things, hardware raid ought to give you more IOPS. You mentioned mail and file serving, which isn't an obvio

Re: [zfs-discuss] Strange performance loss

2009-02-15 Thread Peter Tribble
ne set of data that's slow - I've not noticed this performance falling off a cliff with all the other data that has been moved. (OK, there could be other datasets that have issues. But most of them don't and this one is obviously stuck in molasses.)

[zfs-discuss] Strange performance loss

2009-02-13 Thread Peter Tribble
the directory

    real    0.610
    user    0.058
    sys     0.551

I don't know whether that explains all the problem, but it's clear that having ACLs on files and directories has a definite cost.

Re: [zfs-discuss] A question on "non-consecutive disk failures"

2009-02-07 Thread Peter Tribble
> understanding correct ?
> - if disks a and c fail, then I will be able to read from disks b
> and d. Is this understanding correct ?

No. That quote is part of the discussion of ditto blocks. See the following:

http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-18 Thread Peter Tribble
On Sun, Jan 18, 2009 at 8:25 PM, Richard Elling wrote:
> Peter Tribble wrote:
>> See fsstat, which is based upon kstats. One of the things I want to do with
>> JKstat is correlate filesystem operations with underlying disk operations.
>> The hard part is actually con

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-18 Thread Peter Tribble
In the case above, you need to match 4480002, which on my machine is the following line in /etc/mnttab:

    swap    /tmp    tmpfs   xattr,dev=4480002       1232289278

so that's /tmp (not a zfs filesystem, but you should get the idea).
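
A quick way to do that match, given the dev id from the kstat:

    grep 'dev=4480002' /etc/mnttab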

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-18 Thread Peter Tribble
ption.) I would like to see the pool statistics exposed as kstats, though, which would make it easier to analyse them with existing tools.

Re: [zfs-discuss] Aggregate Pool I/O

2009-01-18 Thread Peter Tribble
ate here: http://www.petertribble.co.uk/Solaris/jkstat.html

Re: [zfs-discuss] zpool export+import doesn't maintain snapshot

2009-01-14 Thread Peter Tribble
> single-user and ran
> $ zpool import disco
>
> The disc was mounted, but none of the hundreds of snapshots was there.
>
> Did I miss something?

How do you know the snapshots are gone? Note that the zfs list command no longer shows snapshots by default. You need 'zfs lis
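
The truncated command is presumably the explicit snapshot listing (the '-t all' variant only exists on newer builds, as the next message notes):

    zfs list -t snapshot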

Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Peter Tribble
e whole disk then zfs will do it all for you; you just need to define partitions/slices if you're going to use slices.

Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Peter Tribble
> ormat shows that the
> partition exists:

The output you gave shows that there is an fdisk partition. If you're going to use it then you'll need to at the very least put a label on it. format -> partition should offer to label it. You can then set the size of s0 (to be the same as
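
A minimal sketch of that sequence, with a hypothetical device name:

    format c1t2d0        # partition -> label, then size s0
    zpool create tank c1t2d0s0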

Re: [zfs-discuss] `zfs list` doesn't show my snapshot

2008-11-22 Thread Peter Tribble
ed to type different commands to get the same output depending on which machine you're on, as '-t all' doesn't work on older systems.

Re: [zfs-discuss] Seeking thoughts on using SXCE rather than Solar 10 on production servers.

2008-11-18 Thread Peter Tribble
ris 10 has years of support left in it, but what happens once SXCE is scrapped and you can't update any further?

Re: [zfs-discuss] recommendations on adding vdev to raidz zpool

2008-10-26 Thread Peter Tribble
> sk of creating a pool consisting of two raidz vdevs that
> don't have the same number of disks?

One risk is that you mistyped the command, when you actually meant to specify a balanced configuration.

Re: [zfs-discuss] ZFS, NFS and Auto Mounting

2008-10-01 Thread Peter Tribble
ilesystem a child. So instead of:

    /mnt/zfs1/GroupWS
    /mnt/zfs1/GroupWS/Integration

create

    /mnt/zfs1/GroupWS
    /mnt/zfs1/Integration

and use that for the Integration mountpoint. Then in GroupWS, 'ln -s ../Integration .'. That way, if you look at Integration in /ws/com you get to something t

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread Peter Tribble
On Wed, Sep 17, 2008 at 10:11 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> 2008/9/17 Peter Tribble:
>> On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
>>> Am I right in thinking though that for every raidz1/2 vdev, you're
>>> effectively

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread Peter Tribble
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> Am I right in thinking though that for every raidz1/2 vdev, you're
> effectively losing the storage of one/two disks in that vdev?

Well yeah - you've got to have some allowance for redundancy.

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-16 Thread Peter Tribble
t's a side-effect rather than a cause. For what it's worth, we put all the disks on our thumpers into a single pool - mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS and would happily go much bigger.

Re: [zfs-discuss] Max vol size and number of files in production

2008-09-14 Thread Peter Tribble
with 10-20 million files in them. Backing that up would be a problem, but I can't see zfs having issues.

Re: [zfs-discuss] ARCSTAT Kstat Definitions

2008-08-28 Thread Peter Tribble
ot sure of the interpretation, but I've basically taken Ben's code and lifted it more or less as is.

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread Peter Tribble
s how it works. Say with 16 disks:

    zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 \
        raidz1 disk6 disk7 disk8 disk9 disk10 \
        raidz1 disk11 disk12 disk13 disk14 disk15 \
        spare disk16

Gives you a single pool containing 3 raidz vdevs (each 4 data + 1 parity) and a hot spare.

[zfs-discuss] zfs_nocacheflush

2008-07-30 Thread Peter Tribble
drives? What I have is a local zfs pool from the free space on the internal drives, so I'm only using a partition and the drive's write cache should be off, so my theory here is that zfs_nocacheflush shouldn't have any effect because there's no drive cache in use...
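
For reference, the tunable being discussed is normally set in /etc/system (followed by a reboot):

    set zfs:zfs_nocacheflush = 1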

Re: [zfs-discuss] Largest (in number of files) ZFS instance tested

2008-07-12 Thread Peter Tribble
On Sat, Jul 12, 2008 at 12:23 AM, Ian Collins <[EMAIL PROTECTED]> wrote:
> Peter Tribble wrote:
>>
>> (The backup problem is the real stumbling block. And backup is an area ripe
>> for disruptive innovation.)
>>
> Is down to volume of data, or man

Re: [zfs-discuss] Largest (in number of files) ZFS instance tested

2008-07-11 Thread Peter Tribble
adequate for our needs, although backup performance isn't. (The backup problem is the real stumbling block. And backup is an area ripe for disruptive innovation.)

Re: [zfs-discuss] previously mentioned J4000 released

2008-07-10 Thread Peter Tribble
're lucky the array will update the firmware for you. I've also seen the intelligent controllers in some of Sun's JBOD units (the S1, and the 3000 series) fail to recognize drives that work perfectly well elsewhere. I'm slightly disappointed that there wasn't a model for 2.5 in

Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Peter Tribble
came soon after in a patch.) You can restrict stmsboot to only enable mpxio on the mpt or fibre interfaces using 'stmsboot -D mpt' or 'stmsboot -D fp'.
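
A sketch of the restricted form (the -e flag enables multipathing; driver names as in the message):

    stmsboot -D mpt -e    # SAS (mpt) paths only
    stmsboot -D fp -e     # fibre channel paths only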

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correc

2008-07-06 Thread Peter Tribble
's what mirroring does - you have redundant data. The extra performance is just a side-effect.

Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Peter Tribble
idz for the lot, or just use mirroring:

    zpool create temparray mirror c1t2d0 c1t4d0 \
        mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-06-24 Thread Peter Tribble
ded sizing would encompass that. So remind me again - what is our recommended sizing? (Especially in the light of this discussion.)

Re: [zfs-discuss] "zpool create" behaviour

2008-06-22 Thread Peter Tribble
y a 'zpool export' followed by 'zpool import' - do you get your pool back?) For this I've had to get rid of powerpath and use mpxio instead. The problem seems to be that the clariion arrays are active/passive and zfs trips up if it tries to use one of the passive links. Usi
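
The round-trip test mentioned above, with 'tank' as a stand-in pool name:

    zpool export tank
    zpool import tank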

Re: [zfs-discuss] memory hog

2008-06-21 Thread Peter Tribble
are commonly used. (And there you tend to fit the OS onto existing hardware, rather than servers where you are more likely to buy to fit a workload.)

Re: [zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-21 Thread Peter Tribble
ry good for small random read access. That said, it's a difficult workload. My limited experience of (the rather more expensive) Veritas on (rather more expensive) big arrays is that they don't handle it particularly well either.

Re: [zfs-discuss] memory hog

2008-06-16 Thread Peter Tribble
On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk <[EMAIL PROTECTED]> wrote:
> On Mon, 16 Jun 2008 16:21:26 +0100
> "Peter Tribble" <[EMAIL PROTECTED]> wrote:
>
>> The *real* common thread is that you need ridiculous amounts
>> of memory to get decent p

Re: [zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-16 Thread Peter Tribble
or just by directory hierarchy - into digestible chunks. For us that's at about the 1Tbyte/10 million file point at the most - we're looking at restructuring the directory hierarchy for the filesystems that are beyond this so we can back them up in pieces.
> How about NFS access?
Seem

Re: [zfs-discuss] memory hog

2008-06-16 Thread Peter Tribble
ch smaller systems. On my servers where 16G minimum is reasonable, ZFS is fine. But the bulk of the installed base of machines accessed by users is still in the 512M-1G range - and Sun are still selling 512M machines.

Re: [zfs-discuss] memory hog

2008-06-14 Thread Peter Tribble
ufs, the other zfs. Everything else is the same. (SunBlade 150 with 1G of RAM, if you want specifics.) The zfs root box is significantly slower all around. Not only is initial I/O slower, but it seems much less able to cache data.

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Peter Tribble
sers, much less work for the helpdesk, and - paradoxically - largely eliminated systems running out of space.)

Re: [zfs-discuss] 3510 JBOD with multipath

2008-05-21 Thread Peter Tribble
6. Are you already multipathed?

Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Peter Tribble
lability or performance issues with that? My only concern here would be how hard it would be to delete the snapshots. With that cycle, you're deleting 6000 snapshots a day, and while snapshot creation is "free", my experience is that snapshot deletion is
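
A sketch of what that daily cleanup amounts to (dataset name hypothetical):

    for snap in `zfs list -H -t snapshot -o name | grep '^tank/home'`
    do
        zfs destroy $snap
    done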

Re: [zfs-discuss] backup for x4500?

2008-04-20 Thread Peter Tribble
On Sun, Apr 20, 2008 at 4:39 PM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Sun, 20 Apr 2008, Peter Tribble wrote:
> >
> > My experience so far is that anything past a terabyte and 10 million files,
> > and any backup software struggles.
> >
> > Wh

Re: [zfs-discuss] Solaris 10U5 ZFS features?

2008-04-20 Thread Peter Tribble
e.
>
> Fix was in 127728 (x86) and 127729 (Sparc).

I think you have sparc and x86 swapped over. Looking at an S10U5 box I have here, 127728-06 is integrated.

Re: [zfs-discuss] backup for x4500?

2008-04-20 Thread Peter Tribble
m into some sort of hierarchy so that it has top-level directories that break it up into smaller chunks. (Some sort of hashing scheme appears to be indicated. Unfortunately our applications fall into two classes: everything in one huge directory, or a hashing scheme that results in many thousands of
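
A hypothetical sketch of such a hashing scheme, bucketing a flat directory into 256 subdirectories by the first two hex digits of an md5 of each name:

    for f in *
    do
        [ -f "$f" ] || continue       # skip the bucket directories
        d=`echo "$f" | digest -a md5 | cut -c1-2`
        mkdir -p "$d" && mv "$f" "$d/"
    done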

Re: [zfs-discuss] Per filesystem scrub

2008-04-07 Thread Peter Tribble
ails and all that...
> Sounds like a nice tidy project for a summer intern!
>
> Jeff
>
> On Sat, Mar 29, 2008 at 05:14:20PM +, Peter Tribble wrote:
> > A brief search didn't show anything relevant, so here
> > goes:
> >
> > Would it be

[zfs-discuss] Per filesystem scrub

2008-03-29 Thread Peter Tribble
p, and the data regularly read anyway; for the quiet ones they're neither read nor backed up, so it would be nice to be able to validate those.

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-16 Thread Peter Tribble
this is higher than the network bandwidth into the server, and more bandwidth than the users can make use of at the moment.

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Peter Tribble
On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
> On Fri, 15 Feb 2008, Peter Tribble wrote:
> >
> > May not be relevant, but still worth checking - I have a 2530 (which ought
> > to be the same only SAS instead of FC), and got fairly poo

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-15 Thread Peter Tribble
> of drives used has not had much effect on write rate.

May not be relevant, but still worth checking - I have a 2530 (which ought to be the same only SAS instead of FC), and got fairly poor performance at first. Things improved significantly when I got the LUNs properly balanced acr

Re: [zfs-discuss] Filesystem Benchmark

2007-11-14 Thread Peter Tribble
> Did you use LPe11000-E (Single Channel) or LPe11002-E (dual channel) HBA's?
>
> Did you encounter any problems with configuring this.

My experience in this area is that powerpath doesn't get along with zfs (I couldn't import the pool); using MPxIO worked fine.

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-13 Thread Peter Tribble
of the patch on when we can and I too would like confirmation that it's helping and hasn't introduced any other regressions..)

Re: [zfs-discuss] HAMMER

2007-11-04 Thread Peter Tribble
ndom read (and this isn't helped by raidz which gives you a single disk's worth of random read I/O per vdev). I would love to see better ways of backing up huge numbers of files.

[zfs-discuss] Survivability of zfs root

2007-09-27 Thread Peter Tribble
is problem appears to be that you export the pool and import it again. Now, what if that system had been using ZFS root? I have a hardware failure, I replace the raid card, the devid of the boot device changes. Will the system still boot properly?

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-25 Thread Peter Tribble
On 9/24/07, Paul B. Henson <[EMAIL PROTECTED]> wrote:
> On Sat, 22 Sep 2007, Peter Tribble wrote:
> >
> > filesystem per user on the server, just to see how it would work. While
> > managing 20,000 filesystems with the automounter was trivial, the attempt
> > to manage

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-22 Thread Peter Tribble
riggers SMF activity, and can drive SMF up the wall. We saw one of the svc daemons hog a whole cpu on our mailserver (constantly checking for .forward files in user home directories). This has been fixed, I believe, but only very recently in S10.]

Re: [zfs-discuss] How do I get my pool back?

2007-09-13 Thread Peter Tribble
On 9/13/07, Eric Schrock <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 07:54:12PM +0100, Peter Tribble wrote:
> >
> > There must be a better way of handling this. It should have just
> > brought it online first time around, without all the fiddling around

Re: [zfs-discuss] How do I get my pool back?

2007-09-13 Thread Peter Tribble
On 9/13/07, Eric Schrock <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 06:36:33PM +0100, Peter Tribble wrote:
> >
> > Doesn't work. (How can you export something that isn't imported
> > anyway?)
>
> The pool is imported, or else

Re: [zfs-discuss] How do I get my pool back?

2007-09-13 Thread Peter Tribble
On 9/13/07, Mike Lee <[EMAIL PROTECTED]> wrote:
>
> have you tried zpool clear?

Not yet. Let me give it a try:

    # zpool clear storage
    cannot open 'storage': pool is unavailable

Bother... Thanks anyway!

Re: [zfs-discuss] How do I get my pool back?

2007-09-13 Thread Peter Tribble
On 9/13/07, Solaris <[EMAIL PROTECTED]> wrote:
> Try exporting the pool then import it. I have seen this after moving disks
> between systems, and on a couple of occasions just rebooting.

Doesn't work. (How can you export something that isn't imported anyway?)

[zfs-discuss] How do I get my pool back?

2007-09-13 Thread Peter Tribble
    d=0
    guid=12723054067535078074
    path='/dev/dsk/c1t0d0s7'
    devid='id1,[EMAIL PROTECTED]/h'
    whole_disk=0
    metaslab_array=13
    metaslab_shift=32
    ashift=9
    asize=448412

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-12 Thread Peter Tribble
ill eliminate the raid-5 write hole (albeit at some loss in performance because you have to compute and write extra checksums) but you allow multiple independent reads.

Re: [zfs-discuss] ZFS on X4500 and Legato considerations (?)

2007-08-19 Thread Peter Tribble
> (with various
> sizes ranging from 0.3TB to 1.2TB).

Why multiple pools rather than a single large pool?
