Hi Will,

Thanks for your reply. The customer has an EMC SAN solution and will not change their current layout, so asking them to present raw disks to ZFS is a non-starter; hence the RAID-Z configuration as opposed to RAID-5. I have given some stats below. I know it is difficult to troubleshoot with only this kind of data, but any input would be much appreciated.
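As a side note on what compression is buying here: the `zfs get all datapool1` output below reports 615G used with a compressratio of 2.08x. A quick back-of-envelope check (plain arithmetic on those two reported figures, not anything ZFS computes for you):

```python
# Back-of-envelope: logical (uncompressed) data implied by the
# figures ZFS reports for datapool1 below.
used_gb = 615          # physical space consumed after compression ("used")
compress_ratio = 2.08  # logical bytes / physical bytes ("compressratio")

logical_gb = used_gb * compress_ratio
print(round(logical_gb))            # ~1279 GB of logical data
print(round(logical_gb / 1024, 2))  # ~1.25 TB
```

So each pool is holding roughly twice the data it physically consumes, which is worth keeping in mind when comparing the `zpool list` USED column against the database's own size.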
zpool list
NAME        SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
datapool1  2.12T   707G  1.43T    32%  ONLINE  -
datapool2  2.12T   706G  1.44T    32%  ONLINE  -
datapool3  2.12T   702G  1.44T    32%  ONLINE  -
datapool4  2.12T   701G  1.44T    32%  ONLINE  -
dumppool    272G   171G   101G    62%  ONLINE  -
localpool    68G  12.5G  55.5G    18%  ONLINE  -
logpool     272G   157G   115G    57%  ONLINE  -

zfs get all datapool1
NAME       PROPERTY       VALUE                  SOURCE
datapool1  type           filesystem             -
datapool1  creation       Fri Jun  8 18:46 2007  -
datapool1  used           615G                   -
datapool1  available      1.22T                  -
datapool1  referenced     42.6K                  -
datapool1  compressratio  2.08x                  -
datapool1  mounted        no                     -
datapool1  quota          none                   default
datapool1  reservation    none                   default
datapool1  recordsize     128K                   default
datapool1  mountpoint     none                   local
datapool1  sharenfs       off                    default
datapool1  checksum       on                     default
datapool1  compression    on                     local
datapool1  atime          on                     default
datapool1  devices        on                     default
datapool1  exec           on                     default
datapool1  setuid         on                     default
datapool1  readonly       off                    default
datapool1  zoned          off                    default
datapool1  snapdir        hidden                 default
datapool1  aclmode        groupmask              default
datapool1  aclinherit     secure                 default

[su621dwdb/root] zpool status -v
  pool: datapool1
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        datapool1        ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            emcpower8h   ONLINE       0     0     0
            emcpower9h   ONLINE       0     0     0
            emcpower10h  ONLINE       0     0     0
            emcpower11h  ONLINE       0     0     0
            emcpower12h  ONLINE       0     0     0
            emcpower13h  ONLINE       0     0     0
            emcpower14h  ONLINE       0     0     0
            emcpower15h  ONLINE       0     0     0

errors: No known data errors

  pool: datapool2
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        datapool2        ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            emcpower16h  ONLINE       0     0     0
            emcpower17h  ONLINE       0     0     0
            emcpower18h  ONLINE       0     0     0
            emcpower19h  ONLINE       0     0     0
            emcpower20h  ONLINE       0     0     0
            emcpower21h  ONLINE       0     0     0
            emcpower22h  ONLINE       0     0     0
            emcpower23h  ONLINE       0     0     0

errors: No known data errors

  pool: datapool3
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        datapool3        ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            emcpower24h  ONLINE       0     0     0
            emcpower25h  ONLINE       0     0     0
            emcpower26h  ONLINE       0     0     0
            emcpower27h  ONLINE       0     0     0
            emcpower28h  ONLINE       0     0     0
            emcpower29h  ONLINE       0     0     0
            emcpower30h  ONLINE       0     0     0
            emcpower31h  ONLINE       0     0     0

errors: No known data errors

  pool: datapool4
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        datapool4        ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            emcpower32h  ONLINE       0     0     0
            emcpower33h  ONLINE       0     0     0
            emcpower34h  ONLINE       0     0     0
            emcpower35h  ONLINE       0     0     0
            emcpower36h  ONLINE       0     0     0
            emcpower37h  ONLINE       0     0     0
            emcpower38h  ONLINE       0     0     0
            emcpower39h  ONLINE       0     0     0

errors: No known data errors

  pool: dumppool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dumppool    ONLINE       0     0     0
          c5t10d0   ONLINE       0     0     0
          c5t11d0   ONLINE       0     0     0
          c6t10d0   ONLINE       0     0     0
          c6t11d0   ONLINE       0     0     0

errors: No known data errors

  pool: localpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        localpool   ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t9d0  ONLINE       0     0     0
            c3t9d0  ONLINE       0     0     0

errors: No known data errors

  pool: logpool
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        logpool         ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            emcpower0h  ONLINE       0     0     0
            emcpower1h  ONLINE       0     0     0
            emcpower2h  ONLINE       0     0     0
            emcpower3h  ONLINE       0     0     0
            emcpower4h  ONLINE       0     0     0
            emcpower5h  ONLINE       0     0     0
            emcpower6h  ONLINE       0     0     0
            emcpower7h  ONLINE       0     0     0

errors: No known data errors
[su621dwdb/root]

----- Original Message -----
From: Will Murnane <[EMAIL PROTECTED]>
Date: Tuesday, June 26, 2007 2:00 pm
Subject: Re: [zfs-discuss] ZFS - DB2 Performance
To: Roshan Perera <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org

> On 6/26/07, Roshan Perera <[EMAIL PROTECTED]> wrote:
> > 25K 12 CPU dual core x 1800Mhz with ZFS 8TB storage SAN
> > storage (compressed & RaidZ) Solaris 10.
> RaidZ is a poor choice for database apps in my opinion; due to the way
> it handles checksums on raidz stripes, it must read every disk in
> order to satisfy small reads that traditional raid-5 would only have
> to read a single disk for.
> Raid-Z doesn't have the terrible write
> performance of raid 5, because you can stick small writes together and
> then do full-stripe writes, but by the same token you must do
> full-stripe reads, all the time. That's how I understand it, anyways.
> Thus, raidz is a poor choice for a database application which tends
> to do a lot of small reads.
>
> Using mirrors (at the zfs level, not the SAN level) would probably
> help with this. Mirrors each get their own copy of the data, each
> with its own checksum, so you can read a small block by touching only
> one disk.
>
> What is your vdev setup like right now? 'zpool list', in other words.
> How wide are your stripes? Is the SAN doing raid-1ish things with
> the disks, or something else?
>
> > 2. Unfortunately we are using twice RAID (San level Raid and
> > RaidZ) to overcome the panic problem my previous blog (for which I
> > had good response).
> Can you convince the customer to give ZFS a chance to do things its
> way? Let the SAN export raw disks, and make two- or three-way
> mirrored vdevs out of them.
>
> > 3. Any way of monitoring ZFS performance other than iostat?
> In a word, yes. What are you interested in? DTrace or 'zpool iostat'
> (which reports activity of individual disks within the pool) may prove
> interesting.

Thanks...

> Will
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
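Will's full-stripe-read point can be made concrete with rough arithmetic. The model below is a deliberate simplification (the per-disk IOPS figure is invented, and it ignores caching, queuing, and the SAN's own striping underneath the PowerPath devices): it just assumes a raidz vdev must touch every disk per small read, while mirrored vdevs can serve independent small reads from single disks in parallel.

```python
# Rough model of small-random-read concurrency for the two layouts
# discussed above. Simplifying assumptions: every small read on a
# raidz vdev occupies the whole vdev, while mirrored storage can
# spread independent reads across all disks.

DISK_IOPS = 150  # hypothetical small random reads/sec per spindle

def raidz_small_read_iops(n_disks: int) -> int:
    # Full-stripe reads: all disks work on the same request,
    # so for this workload the vdev behaves like one disk.
    return DISK_IOPS

def mirrored_small_read_iops(n_disks: int) -> int:
    # Each read needs only one copy of the block, so independent
    # reads can be serviced by every disk at once.
    return n_disks * DISK_IOPS

# The pools above use 8-wide raidz1 vdevs:
print(raidz_small_read_iops(8))     # 150
print(mirrored_small_read_iops(8))  # 1200
```

The absolute numbers are made up; the point is the 8x gap in read concurrency per vdev, which is why mirrored vdevs are the usual recommendation for a small-read-heavy database workload like this one.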