Erik Trimble wrote:
I also think I re-started this thread. Mea culpa.
The original comment from me was that I wasn't certain that the bug I
tripped over last year this time (a single-LUN zpool is declared corrupt
if the underlying LUN goes away, usually due to SAN issues) was fixed. I
did see that the host reset cycl
Bob Friesenhahn wrote:
On Fri, 8 May 2009, Miles Nordin wrote:
It's frustrating to keep going in circles. Also I think advising
people they no longer need to avoid single-LUN SAN pools is a bad
idea. And blaming the SAN problems on silent bit-flips when it looks
pretty clearly that they actually lie elsewhere is dis
> "re" == Richard Elling writes:
>> Remember when I said the SAN corruption issue was not
>> root-caused?
re> If your SAN corrupts data, how can you blame ZFS?
(a) the fault has not been isolated to the SAN.
Reading some pretty-printed message from ZFS saying ``it's not my
Miles Nordin wrote:
"re" == Richard Elling writes:
re> PSARC 2007/567
oh, failmode? We were not talking about panics. We're talking about
corrupted pools. Many of the systems in bugs related to this PSARC
are not even using a SAN and are not reporting problems simliar to t
> "re" == Richard Elling writes:
re> PSARC 2007/567
oh, failmode? We were not talking about panics. We're talking about
corrupted pools. Many of the systems in bugs related to this PSARC
are not even using a SAN and are not reporting problems similar to the
one I described.
Remember
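For readers tracking the reference: PSARC 2007/567 introduced the per-pool failmode property, which controls how ZFS behaves when a pool loses all paths to a device; as Miles notes, it governs hangs and panics, not on-disk corruption. A minimal sketch, assuming a pool named tank:

```shell
# failmode decides what happens when a pool's devices become unreachable.
# It addresses the hang-vs-panic behavior, not corrupted/unimportable pools.
zpool get failmode tank            # default: wait (block I/O until devices return)
zpool set failmode=continue tank   # return EIO to new writes instead of blocking
zpool set failmode=panic tank      # panic the host on catastrophic device loss
```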
Miles Nordin wrote:
On Thu, 7 May 2009, Robert Milkowski wrote:
On Wed, 6 May 2009, Miles Nordin wrote:
> "re" == Richard Elling writes:
re> We forget because it is no longer a problem ;-)
bug number?
re> I think it is disingenuous to compare an enterprise-class RAID
re> array with the random collection of hardware on which Solaris
re> runs.
compare with a Sun-integrated Sola
Miles Nordin wrote:
"djm" == Darren J Moffat writes:
djm> If you only present a single lun to ZFS it may not be able to
djm> repair any detected errors.
And also the problems with pools becoming corrupt and unimportable,
especially when the SAN reboots or loses connectivity and the host
does not, that p
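An aside on Darren's single-LUN point: ZFS already keeps redundant copies of metadata even on one LUN, and the copies property extends that to user data, giving self-healing a second source to repair from. It costs the extra space and, as this sub-thread illustrates, does nothing if the whole LUN disappears. A sketch with a hypothetical dataset name:

```shell
# Store two copies of every data block on a single-LUN pool; ZFS can then
# repair a block whose checksum fails by reading the surviving copy.
# This does NOT protect against losing the LUN itself.
zfs set copies=2 tank/data   # applies only to data written after the change
zfs get copies tank/data
```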
Has the issue with "disappearing" single-LUN zpools causing corruption
been fixed?
I'd have to look up the bug, but I got bitten by this last year about
this time:
Config:
single LUN export from array to host, attached via FC.
Scenario:
(1) array is turned off while host is alive, but whil
On 5/1/2009 2:01 PM, Miles Nordin wrote:
I've never heard of using multiple-LUN stripes for storage QoS before.
Have you actually measured some improvement in this configuration over
a single LUN? If so that's interesting.
Because of the way queuing works in the OS and in most array controllers
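Bob's queuing point can be made concrete: the Solaris ZFS of this era capped outstanding I/Os per leaf vdev with the zfs_vdev_max_pending tunable, so a pool built from several LUNs gets several independent queues, and the array scheduler sees several targets instead of one. Roughly, on a live system:

```shell
# Each leaf vdev (LUN) gets its own queue of at most zfs_vdev_max_pending
# outstanding I/Os, so N LUNs can keep roughly N times as many requests
# in flight as one big LUN. Inspect the current cap with mdb:
echo "zfs_vdev_max_pending/D" | mdb -k
```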
> "sl" == Scott Lawson writes:
> "wa" == Wilkinson, Alex writes:
> "dg" == Dale Ghent writes:
> "djm" == Darren J Moffat writes:
sl> Specifically I am talking of ZFS snapshots, rollbacks,
sl> cloning, clone promotion,
[...]
sl> Of course to take maximum advantage
Wilkinson, Alex wrote:
So, shall I forget ZFS and use UFS
I think the writing is on the wall, right next to "Romani ite domum" :-)
Today, laptops have 500 GByte drives, desktops have 1.5 TByte drives.
UFS really does not work well with the SMI label and its 1 TByte limitation.
-- richard
On Fri, May 01, 2009 at 09:52:54AM -0400, Dale Ghent wrote:
>
> EMC. It's where data lives.
I thought it was, "EMC. It's where data goes to die." :-D
-brian
--
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard
On May 1, 2009, at 4:01 AM, Ian Collins wrote:
Dale Ghent wrote:
On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:
So, shall I forget ZFS and use UFS ?
Not at all. Just export lots of LUNs from your EMC to get the IO
scheduling win, not one giant one, and configure the zpool as a
stripe.
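Dale's suggestion, sketched with hypothetical device names (a real EMC LUN would show up as a long c#t<WWN>d# path). Note that a plain stripe leaves ZFS nothing to repair detected errors from:

```shell
# Four small LUNs as a plain stripe: maximum space and queue parallelism,
# but no ZFS-level redundancy (device names are placeholders):
zpool create tank c2t0d0 c2t1d0 c2t2d0 c2t3d0

# The same LUNs as two mirrored pairs, if self-healing matters more
# than capacity:
# zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
```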
Dale Ghent wrote:
What, no redundancy?
--
Ian.
On May 1, 2009, at 2:09 AM, Wilkinson, Alex wrote:
On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:
On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
I currently have a single 17TB MetaLUN that i am about to present
to an
OpenSolaris initiator and it will obviously be ZFS. H
On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
I currently have a single 17TB MetaLUN that I am about to present to an
OpenSolaris initiator and it will obviously be ZFS. However, I am constantly
reading that presenting a JBOD and using ZFS to manage the RAID is best
practice? I'm not really sure w
Wilkinson, Alex wrote:
Hi all,
In terms of best practices and high performance would it be better to present a
JBOD to an OpenSolaris initiator or a single MetaLUN ?
The scenario is:
I currently have a single 17TB MetaLUN that I am about to present to an
OpenSolaris initiator and it will obviously be ZFS. However, I a