property, so the SMF service doesn't constantly
> scan all the filesystems and volumes for their zfs properties. It just checks
> the conf file and knows instantly which ones need to be chown'd.
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
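The conf-file idea quoted above can be sketched in a few lines of shell. The file format (one "mountpoint owner" pair per line) and the temp-file stand-ins are my assumptions, not details from the original post:

```shell
# Sketch of the conf-file approach. A temp file and directory stand in
# for the real conf file and a real ZFS dataset mountpoint (assumed
# format: "mountpoint owner" per line).
conf=$(mktemp)
demo=$(mktemp -d)
printf '%s %s\n' "$demo" "$(id -un)" > "$conf"

# Instead of scanning every filesystem's ZFS properties, the service
# just walks the file and chowns what it lists:
while read -r mountpoint owner; do
  if [ -d "$mountpoint" ]; then
    chown -R "$owner" "$mountpoint"
  fi
done < "$conf"
```

The win is exactly what the poster describes: startup cost is one small file read, independent of how many filesystems and volumes exist on the box.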
--
Brian Wilson, Solaris SE, UW-Madison DoIT
Room 3114 CS&S 608
First, I'd like to note that contrary to the nomenclature, there isn't any one
"SAN" product that all works the same way. There are a number of different
vendor-provided solutions that use an FC SAN to deliver LUNs to hosts, and they
each have their own limitations. Forgive my pedantry, please.
On 07/ 9/12 04:36 PM, Ian Collins wrote:
On 07/10/12 05:26 AM, Brian Wilson wrote:
Yep, thanks. To answer Ian with more detail on what TruCopy does:
TruCopy mirrors between the two storage arrays, with software running on
the arrays, and keeps a list of dirty/changed 'tracks'
On 07/06/12, Richard Elling wrote:
First things first, the panic is a bug. Please file one with your OS
supplier. More below...
Thanks! It helps that it recurred a second night in a row.
On Jul 6, 2012, at 4:55 PM, Ian Collins wrote:
> On 07/ 7/12 11:29 AM, Brian Wilson wrote:
On 07/ 6/12 04:17 PM, Ian Collins wrote:
On 07/ 7/12 08:34 AM, Brian Wilson wrote:
Hello,
I'd like a sanity check from people more knowledgeable than myself.
I'm managing backups on a production system. Previously I was using
another volume manager and filesystem on Solaris, and
uns go read-only, but I could be wrong.
Anyway, am I off my rocker? This should work with ZFS, right?
Thanks!
Brian
n straight
sequential IO, where on something more random I would bet they won't
perform as well as they do in this test. The tool I've seen used for
that sort of testing is iozone. I'm sure there are others as well, and
I can't attest to which is better or worse.
cheers,
B
to fix it with. I was still happy to be using ZFS, as a
filesystem without a scrub/scan of some sort wouldn't have even noticed
in my experience. I suspect btrfs would have, if its scan works similarly.
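The scrub being credited here is the ordinary pool-wide one. A minimal sketch, assuming a pool named "tank" (a placeholder) and root privileges, so it is illustrative only and won't run outside a real system:

```shell
# Read every allocated block in the pool and verify its checksum,
# repairing from redundancy (mirror/raidz copies) where possible:
zpool scrub tank

# Watch progress and see any errors found so far:
zpool status -v tank
```

Scheduling this periodically (e.g. from cron) is what surfaces silent corruption that a non-checksumming filesystem would never notice.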
cheers,
Brian
On 10/18/11 11:46 AM, Mark Sandrock wrote:
On Oct 18, 2011, at 11:09 AM, Nico Williams wrote:
On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
I just wanted to add something on fsck on ZFS - because for me that used to
make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime en
'I try to save a life a day. Usually it's my own.' - John Crichton
e all my drives
available. I cannot move these drives to any other box because they are
consumer drives and my servers all have ultras.
Most modern boards will boot from a live USB stick.
On Jun 1, 2010, at 2:43 PM, Steve D. Jost wrote:
Definitely not a silly question. And no, we create the pool on
node1, then set up the cluster resources. Once set up, Sun Cluster
manages importing/exporting the pool into only the active cluster
node. Sorry for the lack of clarity.. not
Silly question - you're not trying to have the ZFS pool imported on
both hosts at the same time, are you? Maybe I misread; I had a hard
time following the full description of which exact configuration caused
the SCSI resets.
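The import/export handling described above, which Sun Cluster automates, looks roughly like this done by hand. The pool name "apppool" is made up, and the commands need a real pool and privileges, so this is illustrative only:

```shell
# ZFS is not a cluster filesystem: a pool must be imported on only
# one node at a time, or it will be corrupted.

# On the node giving up the pool (cleanly flushes and releases it):
zpool export apppool

# On the node taking over:
zpool import apppool

# If the old node died without exporting, force the import
# (only once you're certain the other node is really down):
zpool import -f apppool
```

The cluster framework's job is essentially to serialize these two steps and to fence the dead node before ever forcing an import.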
On Jun 1, 2010, at 2:22 PM, Steve Jost wrote:
Hello All,
We are
It's clear from some threads on this list that it IS possible to roll
back a zpool to a previous state, and I seem to even remember reading
someone was working on a tool or tools in that direction.
Is that correct, is it possible to manually roll back a zpool for crash
recovery purposes, if yo
Does creating ZFS pools on multiple partitions on the same physical drive still
run into the performance and other issues that putting pools in slices does?
- Original Message -
From: Lori Alt <[EMAIL PROTECTED]>
Date: Tuesday, December 2, 2008 11:19 am
Subject: Re: [zfs-discuss] Separate /var
To: Gary Mills <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
> On 12/02/08 09:00, Gary Mills wrote:
> > On Mon, Dec 01, 2008 at 04:45:16PM -0700
- Original Message -
From: Robert Milkowski <[EMAIL PROTECTED]>
Date: Thursday, August 21, 2008 5:47 am
Subject: Re: [zfs-discuss] ZFS with Traditional SAN
To: Aaron Blew <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
> Hello Aaron,
>
>
> Wednesday, August 20, 2008, 7:11:01 PM, y
- Original Message -
From: Brian Wilson <[EMAIL PROTECTED]>
Date: Saturday, June 14, 2008 12:12 pm
Subject: Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays
To: Bob Friesenhahn <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
> On Sat, 14 Jun 2008, zfsmonk wrote:
>
> > Mentioned on
> > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> > is the following: "ZFS works well with storage based protected LUNs
> > (RAID-5 or mirrored LUNs from intelligent storage arrays). However,
On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote:
Darren Dunham wrote:
If it helps at all, we're having a similar problem. Any LUNs
configured with their default owner set to SP B don't get along with
ZFS. We're running on a T2000, with Emulex cards and the ssd driver.
MPxIO seems
my perspective being mostly a SAN noob
it's all
hearsay.
--
Sean M. Alderman
513.204.2704
-Original Message-
From: Brian Wilson [mailto:[EMAIL PROTECTED]
Sent: Friday, July 13, 2007 1:58 PM
To: Alderman, Sean
Cc: Peter Tribble; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS
Hmm. Odd. I've got PowerPath working fine with ZFS with both
Symmetrix and Clariion back ends.
PowerPath Version is 4.5.0, running on leadville qlogic drivers.
Sparc hardware. (if it matters)
I ran one of our test databases on ZFS on the DMX via PowerPath for a
couple of months until we switc