Re: [zfs-discuss] ZFS & array NVRAM cache

2007-10-12 Thread Vincent Fox
So what are the failure modes to worry about?

I'm not exactly sure what the implications of this nocache option are for my 
configuration.

Say, from a recent example: I have an overtemp, and first one array shuts down, 
then the other.

I come in after the A/C is restored, then shut down and repower everything.  Bring up the 
zpool and scrub it, and I would think I should be good.
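That recovery sequence, sketched as a shell session (the pool name `tank` is an assumption; substitute your own):

```shell
# After both arrays are powered back up and the host is rebooted:
zpool import tank        # only needed if the pool wasn't imported at boot
zpool scrub tank         # re-verify every block against its checksum
zpool status -v tank     # watch scrub progress; any errors are listed per device
```

If the scrub completes with no errors, the mirror absorbed the outage cleanly.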

Any other scenarios I should play out?

I really like mirrored dual-array setups with clustered frontends in failover 
mode.  I want performance but don't want to risk my data, so if there are 
reasons to remove this option from /etc/system I will do that.

I still see little or no usage of the cache according to the status page on the 
3310.  I really would expect more activity, so I'm wondering if it's still not 
being used.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS & array NVRAM cache

2007-10-08 Thread Louwtjie Burger
Battery-backed cache...

Interestingly enough, I've seen this configuration in production
(V880/SAP on Oracle) running Solaris 8 + Veritas Storage Foundation
(for the RAID-1 part).

Speed is good ... redundancy is good ... price is not (2/3).

Uptime 499 days :)

On 10/9/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
> On 10/6/07, Vincent Fox <[EMAIL PROTECTED]> wrote:
> > So I went ahead and loaded 10u4 on a pair of V210 units.
> >
> > I am going to set this nocacheflush option and cross my fingers and see how 
> > it goes.
> >
> > I have my ZPool mirroring LUNs off 2 different arrays.  I have 
> > single-controllers in each 3310.  My belief is it's OK for me to do this 
> > even without dual controllers for NVRAM security.
> >
> > Worst case I lose an array and/or it's NVRAM contents but the other array 
> > should be doing it's job as part of the mirroring and I should be all good.
>
> Does the 3310 have NVRAM or battery backed cache?  The latter might be
> slightly more dangerous if both arrays lose power together.
>
>
> --
> Just me,
> Wire ...
> Blog: 


Re: [zfs-discuss] ZFS & array NVRAM cache

2007-10-08 Thread Wee Yeh Tan
On 10/6/07, Vincent Fox <[EMAIL PROTECTED]> wrote:
> So I went ahead and loaded 10u4 on a pair of V210 units.
>
> I am going to set this nocacheflush option and cross my fingers and see how 
> it goes.
>
> I have my ZPool mirroring LUNs off 2 different arrays.  I have 
> single-controllers in each 3310.  My belief is it's OK for me to do this even 
> without dual controllers for NVRAM security.
>
> Worst case I lose an array and/or it's NVRAM contents but the other array 
> should be doing it's job as part of the mirroring and I should be all good.

Does the 3310 have NVRAM or battery backed cache?  The latter might be
slightly more dangerous if both arrays lose power together.


-- 
Just me,
Wire ...
Blog: 


Re: [zfs-discuss] ZFS & array NVRAM cache

2007-10-06 Thread Selim Daoud
Provided the 3310 cache does not induce silent block corruption when
writing to the disks.

s.

On 10/5/07, Vincent Fox <[EMAIL PROTECTED]> wrote:
> So I went ahead and loaded 10u4 on a pair of V210 units.
>
> I am going to set this nocacheflush option and cross my fingers and see how 
> it goes.
>
> I have my ZPool mirroring LUNs off 2 different arrays.  I have 
> single-controllers in each 3310.  My belief is it's OK for me to do this even 
> without dual controllers for NVRAM security.
>
> Worst case I lose an array and/or it's NVRAM contents but the other array 
> should be doing it's job as part of the mirroring and I should be all good.


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-10-05 Thread Vincent Fox
So I went ahead and loaded 10u4 on a pair of V210 units.

I am going to set this nocacheflush option and cross my fingers and see how it 
goes.

I have my ZPool mirroring LUNs off 2 different arrays.  I have 
single-controllers in each 3310.  My belief is it's OK for me to do this even 
without dual controllers for NVRAM security. 

Worst case I lose an array and/or its NVRAM contents, but the other array 
should be doing its job as part of the mirroring and I should be all good.
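For anyone following along, a minimal sketch of setting the tunable in question (the pool and the decision to disable flushes are the poster's; the verification step via mdb is my assumption about how you'd check it):

```shell
# Append the tunable to /etc/system (takes effect at next boot).
# Only safe when every device backing the pool has a non-volatile write cache.
cat >> /etc/system <<'EOF'
* Tell ZFS not to issue SYNCHRONIZE CACHE to the array controllers
set zfs:zfs_nocacheflush = 1
EOF

# After rebooting, confirm the live value in the running kernel:
echo 'zfs_nocacheflush/D' | mdb -k
```

A value of 1 from mdb means the kernel honored the /etc/system entry.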
 
 


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-28 Thread Brian H. Nelson
Dale Ghent wrote:
> Yes, it's in there:
>
> [EMAIL PROTECTED]/$ cat /etc/release
>  Solaris 10 8/07 s10x_u4wos_12b X86
>   
It's also available in U3 (and probably earlier releases as well) after 
installing kernel patch 120011-14 or 120012-14. I checked this last night.

Prior releases have the zil_noflush tunable, but it seems that it only 
turned off some of the flushing. That one was present in U3 (and maybe 
U2) as released.

IMO the better option is to configure the array to ignore the syncs, if 
that's possible. I'm not sure if it is in the case of the arrays you listed.

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---



Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-27 Thread Richard Elling
Vincent Fox wrote:
>> Solaris Cluster 3.2 supports Solaris 10 8/07 (aka
>> u4).  Where did you hear that
>> it didn't?
> 
> Took an Advanced Clustering class a few weeks before U4 came out. At that
> time I think the instructor said U3 was the "supported" configuration and
> he wasn't sure when U4 would be a verified and supported configuration on 
> which to run Cluster 3.2.

Ah, ok that explains it.  Prior to an OS release, Solaris Cluster will not be
"supported" on it.  While this seems intuitively obvious, the Office of Sales
Prevention and Customer Dissatisfaction insists upon promoting this fact :-(

Solaris Cluster engineering tracks new Solaris releases very closely and will
have support ready at release.  Internal to Sun (and Sun partners) is the
authoritative tome, "Sun Cluster 3 Configuration Guide," which contains the
list of what is "supported."  This is updated about once per month.  But I'm
not sure how that information is communicated externally... probably not well.
It seems as though many docs say something like "ask your local Sun 
representative."
In the long term, the Open HA Cluster community should be more responsive.
http://opensolaris.org/os/community/ha-clusters/

> We have a Support Contract for production systems, and endeavour to stay 
> within the realm of what they will answer questions on when we have problems.
> 
> So will this "nocache" option work in U4?

It should be there.
  -- richard


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-27 Thread Dale Ghent
On Sep 27, 2007, at 3:19 PM, Vincent Fox wrote:

> So will this "nocache" option work in U4?

Yes, it's in there:

[EMAIL PROTECTED]/$ cat /etc/release
 Solaris 10 8/07 s10x_u4wos_12b X86
Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
 Assembled 16 August 2007

[EMAIL PROTECTED]/$ nm /kernel/fs/zfs | grep zfs_nocacheflush
[1659]  |  1892|   4|OBJT |GLOB |0|4  |zfs_nocacheflush


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-27 Thread Vincent Fox
>
> Solaris Cluster 3.2 supports Solaris 10 8/07 (aka
> u4).  Where did you hear that
> it didn't?


Took an Advanced Clustering class a few weeks before U4 came out. At that time 
I think the instructor said U3 was the "supported" configuration and he wasn't 
sure when U4 would be a verified and supported configuration on which to run 
Cluster 3.2.

We have a Support Contract for production systems, and endeavour to stay within 
the realm of what they will answer questions on when we have problems.

So will this "nocache" option work in U4?
 
 


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Richard Elling
Vincent Fox wrote:
>> Vincent Fox wrote:
>>
>> Is this what you're referring to?
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Evi
>> l_Tuning_Guide#Cache_Flushes
> 
> As I wrote several times in this thread, this kernel variable does not work 
> in Sol 10u3.
> 
> Probably not in u4 although I haven't tried it.
> 
> I would like to run Sun Cluster 3.2, hence I must stick with u3 and cannot 
> roll with the latest Nevada build or whatever.

Solaris Cluster 3.2 supports Solaris 10 8/07 (aka u4).  Where did you hear that
it didn't?
  -- richard


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Vincent Fox
> Vincent Fox wrote:
> 
> Is this what you're referring to?
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evi
> l_Tuning_Guide#Cache_Flushes

As I wrote several times in this thread, this kernel variable does not work in 
Sol 10u3.

Probably not in u4 although I haven't tried it.

I would like to run Sun Cluster 3.2, hence I must stick with u3 and cannot roll 
with the latest Nevada build or whatever.

Options boil down to:
1) Live with NVRAM being un-utilized
2) Strip array controllers, go JBOD.
 
 


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards

On Sep 26, 2007, at 14:10, Torrey McMahon wrote:

> You probably don't have to create a LUN the size of the NVRAM  
> either. As
> long as its dedicated to one LUN then it should be pretty quick. The
> 3510 cache, last I checked, doesn't do any per LUN segmentation or
> sizing. Its a simple front end for any LUN that is using cache.

yep - the policy gets set on the controller for everything served by  
it .. you could put the ZIL LUN on one controller and change the  
other controller from write back to write through, but then you  
essentially waste a controller just for the log device and controller  
failover would be a mess .. we might as well just redo the fcode for  
these arrays to be a minimized optimized zfs build, but then again -  
i don't know what that does to our OEM relationships for the controllers  
or if it's even worth it in the long run .. seems like it might be  
easier to just roll our own or release a spec for the hardware  
vendors to implement.

---
.je


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Bryan Cantrill
On Wed, Sep 26, 2007 at 02:10:39PM -0400, Torrey McMahon wrote:
> Albert Chin wrote:
> > On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
> >   
> >> I don't understand.  How do you
> >>
> >> "setup one LUN that has all of the NVRAM on the array dedicated to it"
> >>
> >> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
> >> thick here, but can you be more specific for the n00b?
> >> 
> >
> > If you're using CAM, disable NVRAM on all of your LUNs. Then, create
> > another LUN equivalent to the size of your NVRAM. Assign the ZIL to
> > this LUN. You'll then have an NVRAM-backed ZIL.
> 
> You probably don't have to create a LUN the size of the NVRAM either. As 
> long as its dedicated to one LUN then it should be pretty quick. The 
> 3510 cache, last I checked, doesn't do any per LUN segmentation or 
> sizing. Its a simple front end for any LUN that is using cache.
> 
> Do we have any log sizing guidelines yet? Max size for example?

That's a really good question -- and the answer essentially depends on the
bandwidth of the underlying storage and the rate of activity to the ZIL.
Both of these questions can be tricky to answer -- and the final answer
also depends on how much headroom you desire.  (That is, what drop
in bandwidth and/or surge in ZIL activity does one want to be able to
absorb without sacrificing latency?)  For the time being, the easiest
way to answer this question is to try some sizes (with 1-2 GB being a
good starting point), throw some workloads at it, and monitor both your
delivered performance and the utilization reported by tools like iostat...
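The monitoring step suggested above might look something like this (pool name and interval are placeholders, not from the original post):

```shell
# While a synchronous-write workload runs against the pool:
iostat -xnz 5            # asvc_t = avg service time (ms), %b = percent busy
zpool iostat -v tank 5   # per-vdev ops and bandwidth, including the log device
```

If the log LUN stays near 100% busy or its asvc_t climbs, the slog is the bottleneck and a larger or faster device is worth trying.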

- Bryan

--
Bryan Cantrill, Solaris Kernel Development.   http://blogs.sun.com/bmc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Roch Bourbonnais

The theory I am going by is that 10 seconds' worth of your synchronous
writes is sufficient for the slog. That breaks down if the main pool is the bottleneck.

-r


Le 26 sept. 07 à 20:10, Torrey McMahon a écrit :

> Albert Chin wrote:
>> On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
>>
>>> I don't understand.  How do you
>>>
>>> "setup one LUN that has all of the NVRAM on the array dedicated  
>>> to it"
>>>
>>> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
>>> thick here, but can you be more specific for the n00b?
>>>
>>
>> If you're using CAM, disable NVRAM on all of your LUNs. Then, create
>> another LUN equivalent to the size of your NVRAM. Assign the ZIL to
>> this LUN. You'll then have an NVRAM-backed ZIL.
>
> You probably don't have to create a LUN the size of the NVRAM  
> either. As
> long as its dedicated to one LUN then it should be pretty quick. The
> 3510 cache, last I checked, doesn't do any per LUN segmentation or
> sizing. Its a simple front end for any LUN that is using cache.
>
> Do we have any log sizing guidelines yet? Max size for example?



Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Torrey McMahon
Albert Chin wrote:
> On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
>   
>> I don't understand.  How do you
>>
>> "setup one LUN that has all of the NVRAM on the array dedicated to it"
>>
>> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
>> thick here, but can you be more specific for the n00b?
>> 
>
> If you're using CAM, disable NVRAM on all of your LUNs. Then, create
> another LUN equivalent to the size of your NVRAM. Assign the ZIL to
> this LUN. You'll then have an NVRAM-backed ZIL.

You probably don't have to create a LUN the size of the NVRAM either. As 
long as it's dedicated to one LUN then it should be pretty quick. The 
3510 cache, last I checked, doesn't do any per-LUN segmentation or 
sizing. It's a simple front end for any LUN that is using cache.

Do we have any log sizing guidelines yet? Max size for example?


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Bill Sommerfeld
On Wed, 2007-09-26 at 07:22 -0400, Jonathan Edwards wrote:
> the bottom line is that there's 2 competing cache  
> strategies that aren't very complimentary.

To put it differently, technologies like ZFS change the optimal way to
build systems.  

The ARC exists to speed up reads, and needs to be large, low latency,
and can be (and usually is) stored in volatile memory.

A separate intent log exists to speed up synchronous writes, must be
nonvolatile, doesn't need to be all that large, and provides some
performance benefit if it's faster than the typical disk in the pool.

I wouldn't buy something like a 3510-raid new specifically to use as a
dedicated intent log device, but I had some surplus equipment fall in my
lap (including a 3510 jbod chassis with no disks in it, and 3510-raid
and 3510-jbods with varying sizes of disks) and looked at the optimal
way to use the pile of parts I had on hand.

With the particular pile of parts I have on hand, dedicating a
partly-populated 3510 to intent log storage "wastes" no more than about
7% of capacity in return for what looks like a 30-40% reduction in the
wall-clock elapsed time of some NFS-write intensive jobs.  

Your mileage will vary.

- Bill







Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Brian H. Nelson
Vincent Fox wrote:
> It seems like ZIL is a separate issue.
>
> I have read that putting ZIL on a separate device helps, but what about the 
> cache?
>
> OpenSolaris has some flag to disable it.  Solaris 10u3/4 do not.  I have 
> dual-controllers with NVRAM and battery backup, why can't I make use of it?   
> Would I be wasting my time to mess with this on 3310 and 3510 class 
> equipment?  I would think it would help but perhaps not.
>  
>  
>
>   

I'm probably being really daft in thinking that everyone is overlooking 
the obvious, but...

Is this what you're referring to?
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---



Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Jonathan Edwards

On Sep 25, 2007, at 19:57, Bryan Cantrill wrote:

>
> On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote:
>> It seems like ZIL is a separate issue.
>
> It is very much the issue:  the seperate log device work was done  
> exactly
> to make better use of this kind of non-volatile memory.  To use  
> this, setup
> one LUN that has all of the NVRAM on the array dedicated to it, and  
> then
> use that device as a separate log device.  Works like a champ...
>

on the 3310/3510 you can't really do this in the same way that you  
can't create a zfs filesystem or zvol and disable the ARC for this ..  
i mean we can dance around the issue and create a really big log  
device on a 3310/3510 and use JBOD for the data, but i don't think  
that's the point - the bottom line is that there's 2 competing cache  
strategies that aren't very complementary.

---
.je


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Roch - PAE

Vincent Fox writes:
 > I don't understand.  How do you
 > 
 > "setup one LUN that has all of the NVRAM on the array dedicated to it"
 > 
 > I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
 > thick here, but can you be more specific for the n00b?
 > 
 > Do you mean from firmware side or OS side?  Or since the LUNs used
 > for the ZIL are separated out from the other disks in the pool they DO
 > get to make use of the NVRAM, is that it? 
 > 
 > I have a pair of 3310 with 12 36-gig disks for testing.  I have a
 > V240 with PCI dual-SCSI controller so I can drive one array from each
 > port is what I am tinkering with right now.  Looking for maximum
 > reliability/redundancy of course so I would ZFS mirror the arrays.
 > 
 > Can you suggest a setup here?  A single-disk from each array
 > exported as a LUN, then ZFS mirrored together for the ZIL log?
 > An example would be helpful.  Could I then just lump all the
 > remaining disks into a 10-disk RAID-5 LUN, mirror them together
 > and achieve a significant performance improvement?  Still have
 > to have a global spare of course in the HW RAID.   What about
 > sparing for the ZIL?
 >  

With 

PSARC 2007/171 ZFS Separate Intent Log

now in Nevada, you can set up the ZIL on its own set of
(possibly very fast) LUNs. The LUNs can be mirrored if you
have more than one NVRAM card.

http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on

This will work great to accelerate JBOD using just a small
amount of NVRAM for the ZIL. When a storage is fronted 100%
by NVRAM the benefits of the slog won't be as large.
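A minimal sketch of attaching such a mirrored slog to an existing pool (pool and device names are assumptions for illustration):

```shell
# Add a mirrored log device to an existing pool; each side of the
# mirror would sit on a separate NVRAM-backed LUN.
zpool add tank log mirror c2t0d0 c3t0d0

# Verify: a "logs" section listing the mirror should appear.
zpool status tank
```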

Last week we had this putback :

PSARC 2007/053 Per-Disk-Device support of non-volatile cache
6462690 sd driver should set SYNC_NV bit when issuing SYNCHRONIZE CACHE 
to SBC-2 devices

which will prevent some recognised arrays from doing
unnecessary cache flushes, and allow tuning of others using
sd.conf. Otherwise, arrays will be queried for support of the
SYNC_NV capability.  IMO, the best is to bug storage vendors
into supporting SYNC_NV.

For earlier releases, to get the full benefit of the NVRAM on 
ZIL operations, you are stuck with a raw tuning proposition:

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide


http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH

-r

See also :

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide




Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Albert Chin
On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
> I don't understand.  How do you
> 
> "setup one LUN that has all of the NVRAM on the array dedicated to it"
> 
> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
> thick here, but can you be more specific for the n00b?

If you're using CAM, disable NVRAM on all of your LUNs. Then, create
another LUN equivalent to the size of your NVRAM. Assign the ZIL to
this LUN. You'll then have an NVRAM-backed ZIL.
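The final step of that recipe, sketched as a command (device names are hypothetical: the large LUNs hold the data with cache disabled, the small LUN is the one the array NVRAM is dedicated to):

```shell
# Create the pool with mirrored data LUNs and the small
# NVRAM-backed LUN as a separate intent log device.
zpool create tank mirror c2t0d0 c3t0d0 log c2t1d0
```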

I posted a question along these lines to storage-discuss:
  http://mail.opensolaris.org/pipermail/storage-discuss/2007-July/003080.html

You'll need to determine the performance impact of removing NVRAM from
your data LUNs. Don't blindly do it.

-- 
albert chin ([EMAIL PROTECTED])


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Vincent Fox
I don't understand.  How do you

"setup one LUN that has all of the NVRAM on the array dedicated to it"

I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
thick here, but can you be more specific for the n00b?

Do you mean from firmware side or OS side?  Or since the LUNs used
for the ZIL are separated out from the other disks in the pool they DO
get to make use of the NVRAM, is that it? 

I have a pair of 3310 with 12 36-gig disks for testing.  I have a
V240 with a PCI dual-SCSI controller, so driving one array from each
port is what I am tinkering with right now.  Looking for maximum
reliability/redundancy of course, so I would ZFS mirror the arrays.

Can you suggest a setup here?  A single-disk from each array
exported as a LUN, then ZFS mirrored together for the ZIL log?
An example would be helpful.  Could I then just lump all the
remaining disks into a 10-disk RAID-5 LUN, mirror them together
and achieve a significant performance improvement?  Still have
to have a global spare of course in the HW RAID.   What about
sparing for the ZIL?
 
 


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Bryan Cantrill

On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote:
> It seems like ZIL is a separate issue.

It is very much the issue:  the separate log device work was done exactly
to make better use of this kind of non-volatile memory.  To use this, setup
one LUN that has all of the NVRAM on the array dedicated to it, and then
use that device as a separate log device.  Works like a champ...

- Bryan

--
Bryan Cantrill, Solaris Kernel Development.   http://blogs.sun.com/bmc


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Vincent Fox
It seems like ZIL is a separate issue.

I have read that putting ZIL on a separate device helps, but what about the 
cache?

OpenSolaris has some flag to disable it.  Solaris 10u3/4 do not.  I have 
dual-controllers with NVRAM and battery backup, why can't I make use of it?   
Would I be wasting my time to mess with this on 3310 and 3510 class equipment?  
I would think it would help but perhaps not.
 
 


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Bill Sommerfeld
On Tue, 2007-09-25 at 10:14 -0700, Vincent Fox wrote:
> Where is ZFS with regards to the NVRAM cache present on arrays?
> 
> I have a pile of 3310 with 512 megs cache, and even some 3510FC with
> 1-gig cache.  It seems silly that it's going to waste.  These are
> dual-controller units so I have no worry about loss of cache
> information.

I've done a few experiments with using small LUNs from a surplus 3510
raid unit for a separate intent log while putting the main body of the
pool in directly connected 3510 JBOD arrays.  Seems to work well; writes
to the intent log device show a much lower asvc_t (average service time)
value in iostat than writes to the main pool disks, and NFS performance
in a few completely unscientific and uncontrolled tests that are vaguely
representative of our workload seems to be better.

The behavior I'm seeing is consistent with the 3510 raid controller
ignoring the "synchronize cache" command.

I haven't put this into production just yet.

- Bill




Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-25 Thread Jens Elkner
On Tue, Sep 25, 2007 at 10:14:57AM -0700, Vincent Fox wrote:
> Where is ZFS with regards to the NVRAM cache present on arrays?
> 
> I have a pile of 3310 with 512 megs cache, and even some 3510FC with 1-gig 
> cache.  It seems silly that it's going to waste.  These are dual-controller 
> units so I have no worry about loss of cache information.
> 
> It looks like OpenSolaris has a way to force arguably "correct" behavior, but 
> Solaris 10u3/4 do not.  I see some threads from early this year about it, and 
> nothing since.

Made some simple tests wrt. continuous sequential writes/reads for a 3510 (single
controller), single host (V490) with 2 FC-HBAs - so, yes - I'm running
ZFS single-disk over HW RAID-10 (10 disks) now...
Haven't had the time to test all combinations or mixed-load cases; 
however, in case you wanna check, here's what I got:
http://iws.cs.uni-magdeburg.de/~elkner/3510.txt

Have fun,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768