Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-17 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wolfraider
> 
> target mode, using both ports. We have 1 zvol connected to 1 windows
> server and the other zvol connected to another windows server with both
> windows servers having a qlogic 2462 fibrechannel adapter, using both
> ports and MPIO enabled. The windows servers are running Windows 2008
> R2. The zvols are formatted NTFS and used as a staging area and D2D2T
> system for both Commvault and Microsoft Data Protection Manager backup
> solutions. The SAN system sees mostly writes since it is used for
> backups.

Block writes initiated by Windows over the FC target, NTFS, mostly write.  Yup, I bet 
you're seeing a ton of zil activity.

zilstat is the preferred way to see how much zil you're using, but if you can't 
get that to work, and if you are willing to export & import your zpool, you can 
temporarily disable the zil.
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Disabling_the_ZIL_.28Don.27t.29

If you disable the zil, and there's no performance gain, then you know you 
weren't using the zil.
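On a pre-b140 build such as the snv_134 mentioned elsewhere in this thread, there is no per-dataset 'sync' property yet, so the Evil Tuning Guide route is the kernel tunable. A sketch only, for a test window, with "tank" as a placeholder pool name:

```shell
# Disable the ZIL live (pre-b140 builds); the tunable takes effect when
# datasets are (re)mounted, hence the export/import cycle:
echo zil_disable/W0t1 | mdb -kw
zpool export tank && zpool import tank

# ... run the backup workload and compare throughput ...

# Re-enable and cycle the pool again before returning to production:
echo zil_disable/W0t0 | mdb -kw
zpool export tank && zpool import tank
```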

If you are indeed bottlenecked by the zil, then disabling the zil will 
dramatically improve performance.  You would, however, be susceptible to data 
loss in the event of a system crash.  So you shouldn't leave it disabled on an 
ongoing basis.

When you install dedicated log devices, your goal is to make performance more 
similar to the "disabled zil" performance, without running the risk of disabled 
zil.
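In zpool terms, that goal means attaching the SSDs as log vdevs. A minimal sketch, with hypothetical pool and device names:

```shell
# Add a mirrored slog pair; sync writes now land on the SSDs instead of
# the ZIL blocks inside the main pool:
zpool add tank log mirror c1t0d0 c1t1d0

# Verify: the output should now show a 'logs' section listing the mirror.
zpool status tank
```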

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-16 Thread Wolfraider
We have the following setup configured.  The drives are running on a couple of PAC 
PS-5404s. Since these units do not support JBOD, we have configured each 
individual drive as a RAID0 and shared out all 48 RAID0s per box. This is 
connected to the Solaris box through a dual-port 4G Emulex fibrechannel card 
with MPIO enabled (round-robin).  This is configured as the 18 raidz2 vdevs 
in 1 big pool. We currently have 2 zvols created, around 40TB sparse (30T in use).

This in turn is shared out using a fibrechannel Qlogic QLA2462 in target mode, 
using both ports. We have 1 zvol connected to 1 Windows server and the other 
zvol connected to another Windows server, with both Windows servers having a 
Qlogic 2462 fibrechannel adapter, using both ports and MPIO enabled. The 
Windows servers are running Windows 2008 R2. The zvols are formatted NTFS and 
used as a staging area and D2D2T system for both Commvault and Microsoft Data 
Protection Manager backup solutions. The SAN system sees mostly writes since 
it is used for backups.

We are using Cisco 9124 fibrechannel switches, and we have recently upgraded to 
Cisco 10G Nexus switches on our Ethernet side. Fibrechannel support on the 
Nexus will have to wait a few years due to the cost. We are just trying to 
fine-tune our SAN for the best performance possible; we don't really have any 
firm expectations right now. We are always looking to improve something. :)
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-16 Thread Wolfraider
We downloaded zilstat from 
http://www.richardelling.com/Home/scripts-and-programs-1 but we never could get 
the script to run. We are not really sure how to debug. :(

./zilstat.ksh 
dtrace: invalid probe specifier 
#pragma D option quiet
 inline int OPT_time = 0;
 inline int OPT_txg = 0;
 inline int OPT_pool = 0;
 inline int OPT_mega = 0;
 inline int INTERVAL = 1;
 inline int LINES = -1;
 inline int COUNTER = -1;
 inline int FILTER = 0;
 inline string POOL = "";
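One way to narrow down an "invalid probe specifier" failure (a sketch; needs root or DTrace privileges on the OpenSolaris host) is to confirm that DTrace works at all and that the fbt ZIL probes the script compiles against are present:

```shell
# Sanity-check dtrace itself:
dtrace -n 'BEGIN { trace("ok"); exit(0); }'

# List the ZIL-related fbt probes; if this comes back empty,
# zilstat's D clauses cannot compile:
dtrace -l -n 'fbt::zil_commit:entry'
```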


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-15 Thread Chris Mosetick
We have two Intel X25-E 32GB SSD drives in one of our servers.  I'm using
one for ZIL and one for L2ARC, and we are having great results so far.

Cheers,

-Chris

On Wed, Sep 15, 2010 at 9:43 AM, Richard Elling  wrote:

> On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:
>
> > We are looking into the possibility of adding a dedicated ZIL and/or
> L2ARC devices to our pool. We are looking into getting 4 – 32GB  Intel X25-E
> SSD drives. Would this be a good solution to slow write speeds?
>
> Maybe, maybe not.  Use zilstat to check whether the ZIL is actually in use
> before spending money or raising expectations.
>
> > We are currently sharing out different slices of the pool to windows
> servers using comstar and fibrechannel. We are currently getting around
> 300MB/sec performance with 70-100% disk busy.
>
> This seems high, is the blocksize/recordsize matched?
>  -- richard
>
> --
> OpenStorage Summit, October 25-27, Palo Alto, CA
> http://nexenta-summit2010.eventbrite.com
>
> Richard Elling
> rich...@nexenta.com   +1-760-896-4422
> Enterprise class storage for everyone
> www.nexenta.com


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-15 Thread Richard Elling
On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:

> We are looking into the possibility of adding a dedicated ZIL and/or L2ARC 
> devices to our pool. We are looking into getting 4 – 32GB  Intel X25-E SSD 
> drives. Would this be a good solution to slow write speeds?

Maybe, maybe not.  Use zilstat to check whether the ZIL is actually in use
before spending money or raising expectations.
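Once the script does run, a typical invocation looks like the following (a sketch; the interval/count arguments follow the script's own usage text):

```shell
# Sample ZIL traffic in 10-second intervals, six samples; needs root
# or dtrace privileges:
./zilstat.ksh 10 6
```

Nonzero byte and ops columns confirm the workload really is generating ZIL traffic.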

> We are currently sharing out different slices of the pool to windows servers 
> using comstar and fibrechannel. We are currently getting around 300MB/sec 
> performance with 70-100% disk busy.

This seems high, is the blocksize/recordsize matched?
 -- richard
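The blocksize question can be checked concretely. A sketch with a hypothetical zvol name (tank/backup1); note that volblocksize is fixed at zvol creation time, so matching it to the NTFS allocation unit has to happen up front:

```shell
# Check the current value on an existing zvol:
zfs get volblocksize tank/backup1

# A new sparse 40T zvol created with a 64k block, to match an NTFS
# volume formatted with a 64 KB allocation unit:
zfs create -s -V 40T -o volblocksize=64k tank/backup1_new
```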

-- 
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com

Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com







Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wolfraider
> 
> We are looking into the possibility of adding a dedicated ZIL and/or
> L2ARC devices to our pool. We are looking into getting 4 – 32GB  Intel
> X25-E SSD drives. Would this be a good solution to slow write speeds?

If you have slow write speeds, a dedicated log device might help.  (log devices 
are for writes, not for reads.)

It sounds like your machine is a COMSTAR block-storage target.  In which case, you're 
certainly doing a lot of sync writes, and therefore hitting your ZIL hard.  So 
it's all but certain adding dedicated log devices will help.

One thing to be aware of:  Once you add a dedicated log, *all* of your sync 
writes will hit that log device.  While a single SSD or pair of SSDs will have 
fast IOPS, they can easily become a new bottleneck with worse performance than 
what you had before ... If you've got ~90 spindle disks now, and by any chance 
you perform sequential sync writes, then a single pair of SSDs won't compete.  
I'd suggest adding several SSDs as log devices, with no mirroring.  Perhaps 
one SSD for every raidz2 vdev, or every other, or every third, depending on 
what you can afford.
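That suggestion translates to adding the log devices unmirrored, so ZFS can spread sync writes across all of them (hypothetical device names):

```shell
zpool add tank log c2t0d0 c2t1d0 c2t2d0 c2t3d0
```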

If you have slow reads, an L2ARC cache might help.  (Cache devices are for reads, 
not writes.)
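Cache devices are likewise added unmirrored; L2ARC contents are checksummed and simply refetched from the pool on error, so redundancy buys nothing there. A sketch with a hypothetical device name:

```shell
zpool add tank cache c3t0d0

# Watch per-device load on the logs and cache while the backups run:
zpool iostat -v tank 5
```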


> We are currently sharing out different slices of the pool to windows
> servers using comstar and fibrechannel. We are currently getting around
> 300MB/sec performance with 70-100% disk busy.

You may be facing some other problem, aside from just lacking cache/log devices. 
 I suggest giving us some more detail here.  Such as ...  

Large sequential operations are good on raidz2.  But random IO performs pretty 
poorly on raidz2.

What sort of network are you using?  I know you said "comstar and 
fibrechannel," and "sharing slices to windows" ... I assume this means you're 
exporting block targets over FC, right?  Dual 4Gbit links per server?  You're 
getting 2.4 Gbit and you expect what?

You have a pool made up of 18 raidz2 vdevs with 5 drives each (capacity of 3 
disks each) ... Is each vdev on its own bus?  What type of bus is it?  
(Generally speaking, it is preferable to spread vdevs across buses, instead of 
putting 1 vdev on 1 bus, for reliability purposes.) ...  How many disks, of what 
type, on each bus?  What type of bus, at what speed?

What are the usage characteristics, how are you making your measurement?




Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Pasi Kärkkäinen
On Tue, Sep 14, 2010 at 08:08:42AM -0700, Ray Van Dolson wrote:
> On Tue, Sep 14, 2010 at 06:59:07AM -0700, Wolfraider wrote:
> > We are looking into the possibility of adding a dedicated ZIL and/or
> > L2ARC devices to our pool. We are looking into getting 4 – 32GB
> > Intel X25-E SSD drives. Would this be a good solution to slow write
> > speeds? We are currently sharing out different slices of the pool to
> > windows servers using comstar and fibrechannel. We are currently
> > getting around 300MB/sec performance with 70-100% disk busy.
> > 
> > Opensolaris snv_134
> > Dual 3.2GHz quadcores with hyperthreading
> > 16GB ram
> > Pool_1 – 18 raidz2 groups with 5 drives a piece and 2 hot spares
> > Disks are around 30% full
> > No dedup
> 
> It'll probably help.
> 
> I'd get two X25-E's for the ZIL (and mirror them) and one or two of Intel's
> lower-end X25-M's for L2ARC.
> 
> There are some SSD devices out there with a super-capacitor and
> significantly higher IOPS ratings than the X25-E that might be a better
> choice for a ZIL device, but the X25-E is a solid drive and we have
> many of them deployed as ZIL devices here.
> 

I thought Intel SSDs didn't respect the CACHE FLUSH command, and thus
are subject to ZIL corruption if the server crashes or loses power?

-- Pasi



Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Wolfraider
Cool, we can get the Intel X25-E's for around $300 apiece from HP with the 
sled. I don't see the X25-M available, so we will look at 4 of the X25-E's.

Thanks :)


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-14 Thread Ray Van Dolson
On Tue, Sep 14, 2010 at 06:59:07AM -0700, Wolfraider wrote:
> We are looking into the possibility of adding a dedicated ZIL and/or
> L2ARC devices to our pool. We are looking into getting 4 – 32GB
> Intel X25-E SSD drives. Would this be a good solution to slow write
> speeds? We are currently sharing out different slices of the pool to
> windows servers using comstar and fibrechannel. We are currently
> getting around 300MB/sec performance with 70-100% disk busy.
> 
> Opensolaris snv_134
> Dual 3.2GHz quadcores with hyperthreading
> 16GB ram
> Pool_1 – 18 raidz2 groups with 5 drives a piece and 2 hot spares
> Disks are around 30% full
> No dedup

It'll probably help.

I'd get two X25-E's for the ZIL (and mirror them) and one or two of Intel's
lower-end X25-M's for L2ARC.

There are some SSD devices out there with a super-capacitor and
significantly higher IOPS ratings than the X25-E that might be a better
choice for a ZIL device, but the X25-E is a solid drive and we have
many of them deployed as ZIL devices here.

Ray