On Tue, Nov 13, 2012 at 6:16 PM, Karl Wagner wrote:
> On 2012-11-13 17:42, Peter Tribble wrote:
>
> > Given storage provisioned off a SAN (I know, but sometimes that's
> > what you have to work with), what's the best way to expand a pool?
> >
> > Specific
ch vdev in a raidz configuration. In
practice we're finding that our raidz systems actually perform
pretty well when compared with dynamic stripes, mirrors, and
hardware raid LUNs.)
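(For reference, the mechanics of growing the pool are a one-liner: you add
another vdev, with the same syntax as at pool creation. Device names below
are made up:
  zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
Whether the new vdev should be raidz or a mirror is the performance
trade-off above.)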
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
are therefore a good thing.)
> How *do* some things get fixed then - can only dittoed data
> or metadata be salvaged from second good copies on raidZ?
You can recover anything you have enough redundancy for. Which
means everything, up to the redundancy of the vdev. B
the system is out of service
and I can reconstruct the data if necessary. Although knowing
how to fix this would be generally useful in the future...
Thanks,
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Tue, Oct 18, 2011 at 9:12 PM, Tim Cook wrote:
>
>
> On Tue, Oct 18, 2011 at 3:06 PM, Peter Tribble
> wrote:
>>
>> On Tue, Oct 18, 2011 at 8:52 PM, Tim Cook wrote:
>> >
>> > Every scrub I've ever done that has found an error required manual
>
as a
result of a scrub, and I've never had to intervene manually.
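If you want to see what a scrub found and whether it was repaired, the
status output has it all; no manual digging needed (pool name made up):
  zpool scrub tank
  zpool status -v tank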
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Tue, Sep 13, 2011 at 8:34 PM, Paul B. Henson wrote:
> On 9/13/2011 5:21 AM, Peter Tribble wrote:
>
>> Update 10 has been out for about 3 weeks.
>
> Where was any announcement posted? I haven't heard anything about it. As far
> as I can tell, the Oracle site still o
(This doesn't affect me all that much, as ACLs on ZFS have never
really worked right, so anything where the ACL is critical gets stored
on ufs [yuck].)
Also, aclmode is no longer listed in the usage message you see
if you do 'zfs get'.
--
-Peter Tribble
http://www.petertribb
have the ability to slot that copy of the data
instantly into service if the primary copy fails.
For tar, you can substitute a free or commercial backup solution.
It works the same way.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ent to disk in the
background.
Second, use a proper benchmark suite, and one that isn't itself
a bottleneck. Something like vdbench, although there are others.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
s (maybe one showing the sizes, one showing the ARC
efficiency, another one for L2ARC).
> 5. Who wants to help with this little project?
I'm definitely interested in emulating arcstat in jkstat. OK, I have
an old version,
but it's pretty much
BA (and one slot
in the server) for each MD1200, which chews up slots pretty quick.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ifree %iused Mounted on
/images/fred 140738056 36000718887 0% /images/fred
average 11k
I've never seen ZFS run out of inodes, though.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
sted and supported", and it's reasonably clear that the way to
get support is via the existing Premier Support offering. And it's just the
same deal as with S10 - you want to use it in production, you need to
have a support contract. It's not hard to find this out, just a few seconds
ng.
(And you can do this just on the datasets you really want to keep safe,
you don't have to do it on the whole pool.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
.) Tiny changes in block alignment completely ruin the
possibility of significant benefit.
Using ZFS dedup is logically the wrong place to do this; you want a decent
backup system that doesn't generate significant amounts of duplicate data
in the first place.
--
-P
d anyway; I'm playing with replacements
for sar. Top is still pretty useful.
For zfs, zpool iostat has some utility, but I find fsstat to be pretty useful.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
whatsoever in the log files, which
are pretty big,
but compress really well. So having both enabled works really well.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
esn't even matter if you make the same selections
>> you
>
> With the new Oracle policies, it seems unlikely that you will be able to
> reinstall the OS and achieve what you had before.
And what policies have Oracle introduced that mean you can't reinstall
your system?
cing those
with the serial numbers from the OS (eg from iostat -En) would be a good idea.
(You are, I presume, using regular scrubs to catch latent errors.)
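One way to pull the serial numbers on the OS side, for what it's worth
(the exact output layout varies a little between releases):
  iostat -En | egrep 'Soft Errors|Serial No'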
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
, when one had old style file systems and exported these as a
> whole iostat -x came in handy, however, with zpools, this is not the case
> anymore, right?
fsstat?
Typically along the lines of
fsstat /tank/* 1
--
-Peter Tribble
http://www.petertribble.co
On Tue, Mar 30, 2010 at 10:42 PM, Eric Schrock wrote:
>
> On Mar 30, 2010, at 5:39 PM, Peter Tribble wrote:
>
>> I have a pool (on an X4540 running S10U8) in which a disk failed, and the
>> hot spare kicked in. That's perfect. I'm happy.
>>
>> Then a
spare to cover the other failed drive? And
can I hotspare it manually? I could do a straight replace, but that
isn't quite the same thing.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
lease happened.)
Whether Oracle make changes in the future remains to be seen. I would expect
them to (you can't turn a loss-making acquisition into a profitable
subsidiary without making changes).
In terms of OpenSolaris, the word is that a position statement is due shortly.
--
> Maybe anyone in the know could provide a short blurb on what
> the state is, and what the options are.
Of course they can't. If they're in the know, then they're almost certainly
not in a position to talk about it in public. Asking here does not help,
as I doubt if anyone fr
ven an admittedly sub-optimal configuration
ought to have delivered.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
symlink in the global zone and
other zones, but that's relatively harmless.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
l Solaris, and use this
> pool for other apps?
> Also, what happens if a drive fails?
Swap it for a new one ;-)
(somewhat more complex with the dual layout as I described it).
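For the simple whole-disk case the zfs side of the swap is a one-liner
once the new drive is physically in place (pool and device names made up):
  zpool replace tank c1t5d0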
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
detailed system configuration baked into an installed
image. Yes, you can get rid of it, but the idea that you could pull
drives from a failed system and put them into any old system they
might happen to fit in and expect it to just work has always been
optimistic. The advantage of zfs is that it
cks that match the blocks of f1 f2 f3 f4 f5.
Is that likely to happen? dedup is at the block level, so the blocks
in f2 will only
match the same data in f15 if they're aligned, which is only going to happen if
f1 ends on a block boundary.
Besides, you still have to read all the da
how do you keep the metadata
in sync with the real data in the face of modifications by applications
that aren't aware of your scheme?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
s that library?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
t that feels clumsy. "zfs get creation" will only give me to the nearest
> minute.
'zfs get -p creation' gives you seconds since the epoch, which you can convert
using a utility of your choice.
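For example (dataset name made up; perl is just one converter that's
present on a stock install):
  # raw seconds since the epoch
  zfs get -Hp -o value creation tank/fs
  # the same, wrapped to give a readable date
  perl -e 'print scalar localtime(shift), "\n"' \
      `zfs get -Hp -o value creation tank/fs`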
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ost folks won't know which one they are currently using :-(
It's not *just* a social engineering attack. It's relying on the fact that
(unlike chown -h) the chmod command follows symlinks and there's
no way to disable that behaviour.
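To illustrate the asymmetry (paths and user are made up):
  ln -s /some/target /tmp/link
  chown -h fred /tmp/link  # -h acts on the link itself; the target is untouched
  chmod 600 /tmp/link      # no -h equivalent, so the mode change lands on /some/target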
--
-Peter Tribble
http://www.petertribble.c
On Thu, Jul 2, 2009 at 2:22 PM, Mike Gerdts wrote:
> On Thu, Jul 2, 2009 at 8:07 AM, Peter Tribble wrote:
>> We've just stumbled across an interesting problem in one of our
>> applications that fails when run on a ZFS filesystem.
>>
>> I don't have the code,
rked for many years.)
If not, I was looking at interposing my own readdir() (that's assuming
the application is using readdir()) that actually returns the entries in
the desired order. However, I'm having a bit of trouble hacking this
together (the current source doesn't compile in i
me, is the fact that it rapidly became essentially
invisible. It just does its job and you soon forget that it's there
(until you have to
deal with one of the alternatives, which throws it into sharp relief).
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogsp
would be impossible to do with spinning media.
>
> 3. The (common) requirement for mirrored boot disks should prove
> obsolete.
Why? Is the possibility of component or path failure and data corruption
so close to zero?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogsp
On Sat, Mar 28, 2009 at 11:06 AM, Michael Shadle wrote:
> On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble
> wrote:
>
>> zpool add tank raidz1 disk_1 disk_2 disk_3 ...
>>
>> (The syntax is just like creating a pool, only with add instead of create.)
>
> so
nce.
Generally, unless you want different behaviour from different pools, it's easier
to combine them.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
hardware to manage disk failures, but have data redundancy provided by
zfs which is where you want it.
If you want random I/O performance, raidz isn't a good choice. For most
things, hardware raid ought to give you more IOPS. You mentioned mail
and file serving, which isn't an obvio
ne set of data that's slow - I've not noticed this
performance falling off a cliff with all the other data that has been moved.
(OK, there could be other datasets that have issues. But most of them don't
and this one is obviously stuck in molasses.)
--
-Peter Tribble
http://www.pe
the directory
real    0.610
user    0.058
sys     0.551
I don't know whether that explains all the problem, but it's clear
that having ACLs
on files and directories has a definite cost.
--
-Peter Tribble
http://www.petertribble.co.uk/
understanding correct ?
> - if disks a and c fail, then I will be be able to read from disks b
> and d. Is this understanding correct ?
No. That quote is part of the discussion of ditto blocks.
See the following:
http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
--
-Peter Tribb
On Sun, Jan 18, 2009 at 8:25 PM, Richard Elling wrote:
> Peter Tribble wrote:
>> See fsstat, which is based upon kstats. One of the thing I want to do with
>> JKstat is correlate filesystem operations with underlying disk operations.
>> The
>> hard part is actually con
In the case above, you need to match 4480002, which on
my machine is the following line in /etc/mnttab:
swap    /tmp    tmpfs   xattr,dev=4480002       1232289278
so that's /tmp (not a zfs filesystem, but you should get the idea).
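If you want to script that lookup, something along these lines should do:
  # print the mount point for a given dev= value
  awk '/dev=4480002/ { print $2 }' /etc/mnttab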
--
-Peter Tribble
http://www.petertribble.c
ption.)
I would like to see the pool statistics exposed as kstats, though, which would
make it easier to analyse them with existing tools.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ate here:
http://www.petertribble.co.uk/Solaris/jkstat.html
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
single-user
> and ran
> $ zpool import disco
>
> The disc was mounted, but none of the hundreds of snapshots was there.
>
> Did I miss something?
How do you know the snapshots are gone?
Note that the zfs list command no longer shows snapshots by default.
You need 'zfs list -t snapshot' (or '-t all') to see them.
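For instance, using the pool name from your message:
  zfs list -t snapshot -r disco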
e whole disk then zfs will do it all for you; you
just need to define partitions/slices if you're going to use slices.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ormat shows that the
> partition exists:
The output you gave shows that there is an fdisk partition.
If you're going to use it then you'll need to at the very least put a
label on it.
format -> partition should offer to label it.
You can then set the size of s0 (to be the same as
ed to type different commands to get the
same output depending on which machine you're on, as '-t all' doesn't
work on older systems.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ris 10 has years of support left in it, but what
happens once SXCE is scrapped and you can't update any further?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
sk of creating a pool consisting of two raidz vdevs that
> don't have the same number of disks?
One risk is that you mistyped the command, when you actually meant
to specify a balanced configuration.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ilesystem a child.
So instead of:
/mnt/zfs1/GroupWS
/mnt/zfs1/GroupWS/Integration
create
/mnt/zfs1/GroupWS
/mnt/zfs1/Integration
and use that for the Integration mountpoint. Then in GroupWS, run
'ln -s ../Integration .'.
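A concrete sketch of those steps, assuming the pool itself is called zfs1
(adjust names to suit):
  zfs create -o mountpoint=/mnt/zfs1/Integration zfs1/Integration
  cd /mnt/zfs1/GroupWS
  ln -s ../Integration .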
That way, if you look at Integration in /ws/com you get to something
t
On Wed, Sep 17, 2008 at 10:11 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> 2008/9/17 Peter Tribble:
>> On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
>>> Am I right in thinking though that for every raidz1/2 vdev, you're
>>> effectively
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo <[EMAIL PROTECTED]> wrote:
> Am I right in thinking though that for every raidz1/2 vdev, you're
> effectively losing the storage of one/two disks in that vdev?
Well yeah - you've got to have some allowance for redundancy.
-
t's a side-effect rather than a cause.
For what it's worth, we put all the disks on our thumpers into a single pool -
mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS and
would happily go much bigger.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
with 10-20 million files in them.
Backing that up would be a problem, but I can't see zfs having issues.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ot sure of the interpretation, but I've basically taken Ben's code and
lifted it more or less as is.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
s how it works. Say with 16 disks:
zpool create tank raidz1 disk1 disk2 disk3 disk4 disk5 \
raidz1 disk6 disk7 disk8 disk9 disk10 \
raidz1 disk11 disk12 disk13 disk14 disk15 \
spare disk16
Gives you a single pool containing 3 raidz vdevs (each 4 data + 1 parity)
and a hot spare.
--
-Pet
drives?
What I have is a local zfs pool from the free space on the internal
drives, so I'm only using a partition and the drive's write cache
should be off, so my theory here is that zfs_nocacheflush shouldn't
have any effect because there's no drive cache in use...
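(For reference, that tunable would normally be set in /etc/system as
  set zfs:zfs_nocacheflush = 1
but the point here is that it shouldn't make any difference in this setup.)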
--
On Sat, Jul 12, 2008 at 12:23 AM, Ian Collins <[EMAIL PROTECTED]> wrote:
> Peter Tribble wrote:
>>
>> (The backup problem is the real stumbling block. And backup is an area ripe
>> for disruptive innovation.)
>>
>>
> Is down to volume of data, or man
adequate for our needs, although backup performance isn't.
(The backup problem is the real stumbling block. And backup is an area ripe
for disruptive innovation.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
're lucky the
array will update the firmware for you. I've also seen the intelligent
controllers in
some of Sun's JBOD units (the S1, and the 3000 series) fail to recognize drives
that work perfectly well elsewhere.
I'm slightly disappointed that there wasn't a model for 2.5 in
came soon after in a patch.)
You can restrict stmsboot to only enable mpxio on the mpt or fibre
interfaces using
'stmsboot -D mpt' or 'stmsboot -D fp'.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
's what mirroring does - you have redundant data. The extra performance is
just a side-effect.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
idz for the lot, or just use mirroring:
zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0
mirror c1t6d0 c1t8d0
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ded sizing would encompass that.
So remind me again - what is our recommended sizing? (Especially
in the light of this discussion.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
y a 'zpool export' followed by 'zpool import' - do you get your pool back?)
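That is, literally (pool name made up):
  zpool export tank
  zpool import tank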
For this I've had to get rid of powerpath and use mpxio instead.
The problem seems to be that the clariion arrays are active/passive and
zfs trips up if it tries to use one of the passive links. Usi
are
commonly used. (And
there you tend to fit the OS onto existing hardware, rather than
servers where you are
more likely to buy to fit a workload.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ry good
for small random read access.
That said, it's a difficult workload. My limited experience of (the rather more
expensive) Veritas on (rather more expensive) big arrays is that they don't
handle it particularly well either.
--
-Peter Tribble
http://www.petertribble.co.uk/ - h
On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk <[EMAIL PROTECTED]> wrote:
> On Mon, 16 Jun 2008 16:21:26 +0100
> "Peter Tribble" <[EMAIL PROTECTED]> wrote:
>
>> The *real* common thread is that you need ridiculous amounts
>> of memory to get decent p
or just by directory hierarchy - into digestible chunks. For us that's
at about the
1Tbyte/10 million file point at the most - we're looking at restructuring the
directory hierarchy for the filesystems that are beyond this so we can back them
up in pieces.
> How about NFS access?
Seem
ch smaller systems. On my
servers where 16G minimum is reasonable, ZFS is fine. But the
bulk of the installed base of machines accessed by users is still
in the 512M-1G range - and Sun are still selling 512M machines.
--
-Peter Tribble
http://www.petertribble.co.uk/ - htt
ufs, the other zfs. Everything else is the same. (SunBlade
150 with 1G of RAM, if you want specifics.)
The zfs root box is significantly slower all around. Not only is
initial I/O slower, but it seems much less able to cache data.
--
-Peter Tribble
http://www.petertri
sers, much less work for the helpdesk, and -
paradoxically - largely eliminated systems running out of space.)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
6. Are you already
multipathed?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
lability or performance issues with that?
My only concern here would be how hard it would be to delete the
snapshots. With that cycle, you're deleting 6000 snapshots a day,
and while snapshot creation is "free", my experience is that snapshot
deletion is
On Sun, Apr 20, 2008 at 4:39 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Sun, 20 Apr 2008, Peter Tribble wrote:
> >
> > My experience so far is that anything past a terabyte and 10 million
> files,
> > and any backup software struggles.
> >
>
> Wh
e.
>
> Fix was in 127728 (x86) and 127729 (Sparc).
I think you have sparc and x86 swapped over.
Looking at an S10U5 box I have here, 127728-06 is integrated.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
m into some sort of hierarchy so that it has top-level
directories that break it up into smaller chunks.
(Some sort of hashing scheme appears to be indicated. Unfortunately our
applications fall into two classes: everything in one huge directory,
or a hashing
scheme that results in many thousands of
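As a very rough sketch of the kind of hashing scheme I mean (purely
illustrative - it buckets a flat directory by the first two characters of
each file name, and is run from inside the big directory):
  for f in *; do
      [ -f "$f" ] || continue        # skip the bucket directories themselves
      d=`echo "$f" | cut -c1-2`
      mkdir -p "$d" && mv "$f" "$d/"
  done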
ails and all that...
> Sounds like a nice tidy project for a summer intern!
>
> Jeff
>
>
>
> On Sat, Mar 29, 2008 at 05:14:20PM +, Peter Tribble wrote:
> > A brief search didn't show anything relevant, so here
> > goes:
> >
> > Would it be
p, and the data regularly read anyway; for the
quiet ones they're neither read nor backed up, so it
would be nice to be able to validate those.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
this is higher than the network bandwidth
into the server, and more bandwidth than the users can make use of
at the moment.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
On Fri, Feb 15, 2008 at 8:50 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Fri, 15 Feb 2008, Peter Tribble wrote:
> >
> > May not be relevant, but still worth checking - I have a 2530 (which ought
> > to be that same only SAS instead of FC), and got fairly poo
> of drives used has not had much effect on write rate.
May not be relevant, but still worth checking - I have a 2530 (which ought
to be that same only SAS instead of FC), and got fairly poor performance
at first. Things improved significantly when I got the LUNs properly
balanced acr
> Did you use LPe11000-E (Single Channel) or LPe11002-E (dual channel) HBA's?
>
> Did you encounter any problems with configuring this.
My experience in this area is that powerpath doesn't get along with zfs
(I couldn't import the pool); using MPxIO worked fine.
--
-Pete
of the
patch on when we can and I too would like confirmation that it's helping and
hasn't introduced any other regressions..)
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
ndom read (and this isn't helped by raidz
which gives you a single disk's worth of random read I/O per vdev). I
would love to see better ways of backing up huge numbers of files.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
is
problem appears to be that you export the pool and import
it again.
Now, what if that system had been using ZFS root? I have a
hardware failure, I replace the raid card, the devid of the boot
device changes.
Will the system still boot properly?
--
-Peter Tribble
http://www.petertribble.co.
On 9/24/07, Paul B. Henson <[EMAIL PROTECTED]> wrote:
> On Sat, 22 Sep 2007, Peter Tribble wrote:
>
> > filesystem per user on the server, just to see how it would work. While
> > managing 20,00 filesystems with the automounter was trivial, the attempt
> > to manage
riggers SMF activity, and
can drive SMF up the wall. We saw one of the svc daemons hog a whole
cpu on our mailserver (constantly checking for .forward files in user home
directories). This has been fixed, I believe, but only very recently in S10.]
--
-Peter Tribble
http://www.petertribble.co.uk/
On 9/13/07, Eric Schrock <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 07:54:12PM +0100, Peter Tribble wrote:
> >
> > There must be a better way of handling this. It should have just
> > brought it online first time around, without all the fiddling around
> &
On 9/13/07, Eric Schrock <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 13, 2007 at 06:36:33PM +0100, Peter Tribble wrote:
> >
> > Doesn't work. (How can you export something that isn't imported
> > anyway?)
> >
>
> The pool is imported, or else
On 9/13/07, Mike Lee <[EMAIL PROTECTED]> wrote:
>
> have you tried zpool clear?
>
Not yet. Let me give it a try:
# zpool clear storage
cannot open 'storage': pool is unavailable
Bother...
Thanks anyway!
--
-Peter Tribble
http://www.petertribble.co.uk/ - htt
On 9/13/07, Solaris <[EMAIL PROTECTED]> wrote:
> Try exporting the pool then import it. I have seen this after moving disks
> between systems, and on a couple of occasions just rebooting.
Doesn't work. (How can you export something that isn't imported
anyway?)
-
d=0
guid=12723054067535078074
path='/dev/dsk/c1t0d0s7'
devid='id1,[EMAIL PROTECTED]/h'
whole_disk=0
metaslab_array=13
metaslab_shift=32
ashift=9
asize=448412
ill eliminate the raid-5 write hole
(albeit at some loss in performance because you have to compute
and write extra checksums) but you allow multiple independent reads.
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
(with various
> sizes ranging from 0.3TB to 1.2TB) .
Why multiple pools rather than a single large pool?
--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/