Re: [zfs-discuss] ZFS on EMC Symmetrix

2007-10-15 Thread JS
Sun has seen all of this during various problems over the past year and a half, but:
- CX600 FLARE code 02.07.600.5.027
- CX500 FLARE code 02.19.500.5.044
- Brocade fabric; relevant switch models are 4140 (core), 200e (edge), 3800 (edge)
- Sun-branded Emulex HBAs in the following models: SG-XPCI1FC-

Re: [zfs-discuss] ZFS on EMC Symmetrix

2007-10-13 Thread JS
I've been running ZFS against EMC Clariion CX-600s and CX-500s in various configurations, mostly exported-disk setups, and have hit a number of kernel flatlines. Most of these incidents include Page83 data errors in /var/adm/messages during the kernel crashes. As we're outgrowing the speed

[zfs-discuss] ZFS on EMC Symmetrix

2007-10-11 Thread JS
If anyone is running this configuration, I have some questions for you about Page83 data errors.

Re: [zfs-discuss] more love for databases

2007-07-22 Thread JS
Is there a way to take advantage of this in Sol10/u03? "sorry, variable 'zfs_vdev_cache_max' is not defined in the 'zfs' module"
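
A quick way to confirm whether a given kernel exposes the tunable at all - a sketch using standard mdb; whether the symbol exists in your build is exactly the question above:

# prints the current value if the symbol is present, errors out otherwise
echo 'zfs_vdev_cache_max/D' | mdb -k

If it does print a value, the corresponding /etc/system line would look like the following (the value is purely illustrative):

* illustrative only - accepted only on kernels whose zfs module defines the variable
set zfs:zfs_vdev_cache_max=16384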

Re: [zfs-discuss] ZFS and powerpath

2007-07-15 Thread JS
> Shows up as lpfc (is that Emulex?)
lpfc (or fibre-channel) is an Emulex-branded Emulex card device - Sun-branded Emulex uses the emlxs driver. I run zfs (v2 and v3) on Emulex and Sun-branded Emulex on SPARC with PowerPath 4.5.0 (and MPxIO in other cases) and Clariion arrays and have never se
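
For anyone trying to tell which stack a given card is on, a minimal sketch using stock Solaris commands (instance numbers and output will differ per host):

# lpfc means the Emulex-supplied driver, emlxs means the Sun-branded stack
egrep 'lpfc|emlxs' /etc/path_to_inst
# fcinfo shows model/manufacturer for ports bound to the Sun (emlxs/Leadville) stack
fcinfo hba-port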

[zfs-discuss] Problem in v3 on Solaris 10 and large volume mirroring

2007-06-30 Thread JS
Solaris 10 u3, zfs v3, kernel patch 118833-36. Running into a weird problem where I attach a mirror to a large, existing filesystem. The attach occurs, and then the system starts swallowing all the available memory and system performance chokes while the filesystems sync. In some cases all memory
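
For context, the attach itself is just the standard command; the pool and device names below are placeholders:

# attach a second device to an existing single-device vdev, turning it into a mirror;
# the resilver that follows is where the memory pressure described above shows up
zpool attach mypool c2t0d0 c3t0d0
# watch resilver progress
zpool status mypool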

[zfs-discuss] Re: Very Large Filesystems

2007-05-05 Thread JS
> What's the maximum filesystem size you've used in a production environment? How did the experience come out?
I have a 26TB pool that will be upgraded to 39TB in the next couple of months. This is the backend for backup images. The ease of managing this sort of expanding storage is a little b

[zfs-discuss] Re: ZFS and Oracle db production deployment

2007-05-05 Thread JS
Why did you choose to deploy the database on ZFS?
- On-disk consistency was big - one of our datacenters was having power problems and the systems would sometimes drop live. I had a couple of instances of data errors with VxVM/VxFS and we had to restore from tape.
- zfs snapshot saves us many hour
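
The snapshot workflow in question is nothing exotic - roughly the following, with pool and dataset names hypothetical (for a live database you would still quiesce or put tablespaces into backup mode before snapshotting):

# consistent point-in-time snapshot before risky work
zfs snapshot dbpool/oradata@pre-maintenance
# if things go wrong, roll back instead of restoring from tape
zfs rollback dbpool/oradata@pre-maintenance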

[zfs-discuss] Re: Re: error-message from a nexsan-storage

2007-03-28 Thread JS
For the particular HDS array you're working on, or also on NexSAN storage?

[zfs-discuss] Re: today panic ...

2007-03-28 Thread JS
Any chance these fixes will make it into the normal Solaris R&S patches?

[zfs-discuss] Re: error-message from a nexsan-storage

2007-03-28 Thread JS
The thought is to start throttling and possibly tune up or down, depending on errors or lack of errors. I don't know of a specific NexSAN throttle preference (we use SATABoy, and go with 20).

[zfs-discuss] Re: error-message from a nexsan-storage

2007-03-28 Thread JS
Try throttling back the max number of I/Os. I saw a number of errors similar to this on Pillar and EMC. In /etc/system, set:

set sd:sd_max_throttle=20

and reboot.
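
If it helps, you can read the value back on the running kernel before and after the change (standard mdb usage; 20 is the value used in this thread, not a universal recommendation):

# current queue-depth limit on the live kernel
echo 'sd_max_throttle/D' | mdb -k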

[zfs-discuss] Re: Re: ZFS performance with Oracle

2007-03-28 Thread JS
> We are currently recommending separate (ZFS) file systems for redo logs. Did you try that? Or did you go straight to a separate UFS file system for redo logs?
I'd answered this directly in email originally. The answer was that yes, I tested using zfs for log pools among a number of disk layo

[zfs-discuss] Re: ZFS performance with Oracle

2007-03-28 Thread JS
I didn't see an advantage in this scenario, though I use zfs/compression happily on my NFS user directory.
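
For the home-directory case, the whole setup is just the following (dataset name hypothetical):

# enable compression; only blocks written after this are compressed
zfs set compression=on tank/home
# see how much it's actually buying you
zfs get compressratio tank/home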

[zfs-discuss] Re: Re: ZFS performance with Oracle

2007-03-21 Thread JS
I'd definitely prefer owning a sort of SAN solution that would basically just be trays of JBODs exported through redundant controllers, with enterprise level service. The world is still playing catch up to integrate with all the possibilities of zfs.

[zfs-discuss] Re: ZFS performance with Oracle

2007-03-20 Thread JS
The big problem is that if you don't do your redundancy in the zpool, then the loss of a single device flatlines the system. This occurs in single device pools or stripes or concats. Sun support has said in support calls and Sunsolve docs that this is by design, but I've never seen the loss of a
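
In other words, even on array-protected LUNs, giving the pool its own redundancy - something like the sketch below, with placeholder device names - keeps a single LUN failure from flatlining the box:

# mirror two SAN LUNs so ZFS itself has a redundant copy to fail over to
zpool create dbpool mirror c6t0d0 c7t0d0
# or at least raidz across several LUNs rather than a plain stripe/concat
zpool create dbpool raidz c6t0d0 c7t0d0 c8t0d0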

[zfs-discuss] Re: ZFS memory and swap usage

2007-03-19 Thread JS
I currently run 6 Oracle 9i and 10g dbs using 8GB SGA apiece in containers on a v890 and find no difficulties starting Oracle (though we don't start all the dbs truly simultaneously). The ARC cache doesn't ramp up until a lot of IO has passed through after a reboot (typically a steady rise over

[zfs-discuss] Re: C'mon ARC, stay small...

2007-03-18 Thread JS
My biggest concern has been more about making sure that Oracle doesn't have to fight to get memory, which it does now. There's a definite performance uptick during the process of releasing ARC cache memory to let Oracle get what it's asking for, and this is passed on to the application. The problem
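
The usual blunt instrument here, where the kernel honors it, is capping the ARC in /etc/system so the SGA never has to wait for the ARC to shrink. The 8GB figure below is only an example sized against the SGAs mentioned in this thread, and the tunable is not recognized on every Solaris 10 update - which is part of the problem being discussed:

* cap the ZFS ARC at 8GB (value in bytes) - illustrative value only
set zfs:zfs_arc_max=0x200000000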

[zfs-discuss] Re: ZFS performance with Oracle

2007-03-18 Thread JS
General Oracle zpool/zfs tuning, from my tests with Oracle 9i, the APS Memory Based Planner, and filebench. All tests completed using Solaris 10 update 2 and update 3 (a rough sketch of the layout follows below):
- use zpools with 8k blocksize for data
- don't use zfs for redo logs - use ufs with directio and noatime. Building redo log
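
A minimal sketch of that layout, with pool, dataset, device, and mount names all hypothetical:

# data pool; 8k recordsize on the data filesystem to match the Oracle block size
zpool create dbpool mirror c2t0d0 c3t0d0
zfs create dbpool/data
zfs set recordsize=8k dbpool/data

# redo logs on UFS with directio and noatime, per the note above
newfs /dev/rdsk/c4t0d0s0
mkdir -p /oradata/redo
mount -F ufs -o forcedirectio,noatime /dev/dsk/c4t0d0s0 /oradata/redo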

[zfs-discuss] ZFS performance with Oracle

2007-03-16 Thread JS
I thought I'd share some lessons learned testing Oracle APS on Solaris 10 using ZFS as backend storage. I just got done running two months' worth of performance tests on a v490 (32GB, 4x1.8GHz dual-core procs, 2x Sun 2G HBAs on separate fabrics) and varying how I managed storage. Storage us

[zfs-discuss] Re: C'mon ARC, stay small...

2007-03-16 Thread JS
I've been seeing this failure to cap on a number of Solaris 10 update 2 and 3 machines since the script came out (ARC hogging is a huge problem for me, especially on Oracle). This is probably a red herring, but my v490 testbed seemed to actually cap on 3 separate tests, while my t2000 testbed doesn't e
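
For anyone comparing testbeds, the size the ARC has actually grabbed (versus its targets) is visible on the live kernel with the standard mdb dcmd - nothing specific to the capping script:

# show current ARC size and the c/c_max targets it's working against
echo '::arc' | mdb -k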

[zfs-discuss] Re: Re: Re: How much do we really want zpool remove?

2007-02-23 Thread JS
Actually, I'm using ZFS in a SAN environment, often importing LUNs to save management overhead and make snapshots easily available, among other things. I would love zfs remove because it allows me, in conjunction with containers, to build up a single manageable pool for a number of local host syst

[zfs-discuss] Re: Re: ZFS with SAN Disks and mutipathing

2007-02-23 Thread JS
For a quick overview of setting up MPxIO and the other configs:

[EMAIL PROTECTED]:~]# fcinfo hba-port
HBA Port WWN: 1000c952776f
  OS Device Name: /dev/cfg/c8
  Manufacturer: Sun Microsystems, Inc.
  Model: LP1-S
  Type: N-port
  State: online
  Support
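
The MPxIO half of that setup is mostly just enabling the multipath driver and then verifying the paths - a rough outline for Solaris 10 (stmsboot rewrites vfstab and will ask for the reboot it needs):

# enable MPxIO for the FC HBAs; requires a reboot
stmsboot -e
# after the reboot, confirm the paths collapsed under the single scsi_vhci device
mpathadm list lu
fcinfo hba-port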

[zfs-discuss] Re: Re: ZFS with SAN Disks and mutipathing

2007-02-23 Thread JS
I'm using only sd_max_throttle and disabling transient errors. Without the max_throttle, the system on Pillar becomes unusable and goes into a lifetime of bus reset syncs. From /etc/system:

* set max number of commands to 20 - max 256
set sd:sd_max_throttle=20
* prevent warning messages of non-disruptive operations

[zfs-discuss] Re: ZFS with SAN Disks and mutipathing

2007-02-23 Thread JS
Yes. Works fine, though it's an interim solution until I can get rid of PowerPath.

[zfs-discuss] Re: ZFS with SAN Disks and mutipathing

2007-02-17 Thread JS
I'm using ZFS on both EMC and Pillar arrays with PowerPath and MPxIO, respectively. Both work fine - the only caveat is to drop your sd queue depth (sd_max_throttle) to around 20 or so, otherwise you can run into an ugly display of bus resets.