Re: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread Louwtjie Burger


Roshan Perera writes:
 > Hi all,
 >
 > I am after some help/feedback to the subject issue explained below.
 >
 > We are in the process of migrating a big DB2 database from a
 >
 > 6900 24 x 200MHz CPU's with Veritas FS 8TB of storage Solaris 8 to
 > 25K   12 CPU dual core x 1800Mhz with ZFS 8TB storage SAN storage
(compressed & RaidZ) Solaris 10.
 >


200MHz!? You mean 1200MHz ;) The slowest CPUs in a 6900 were 900MHz III Cu.

You mention Veritas FS ... as in the Veritas filesystem, vxfs? I suppose
you also include vmsa or the whole Storage Foundation? (It could still be
vxva on Solaris 8! Oh, those were the days...)

First impressions of the system ... well, it's fair to say that you
have some extra CPU power (and then some). The old III 1.2GHz was nice
but by no means a screamer (years ago).


 > Unfortunately, we are having massive perfomance problems with the new
solution. It all points towards IO and ZFS.
 >


Yep... CPU it isn't. Keep in mind that you have now completely moved
the goal posts when it comes to performance or comparing performance
with the previous installation. Not only do you have a large increase
in CPU performance, Solaris 10 will blitz 8 on a bad day by miles.
With all of the CPU/OS bottlenecks removed I sure hope you have decent
I/O at the back...


 > Couple of questions relating to ZFS.
 > 1. What is the impace on using ZFS compression ? Percentage of system
 > resources required, how much of a overhead is this as suppose to
 > non-compression. In our case DB2 do similar amount of read's and
 > writes.


I'm unsure why a person who buys a 24-core 25K would activate
compression on an OLTP database. Surely when you fork out that kind of
cash you want every bang for your buck (and then some!). I don't
think compression was created with high-performance OLTP DBs in mind.

I would hope that the 25K (which in this case is light years faster
than the 6900) wasn't spec'ed with the idea of running compression
with the extra CPU cycles... oooh... *crash* *burn*.


 > 2. Unfortunately we are using twice RAID (San level Raid and RaidZ)
to
 > overcome the panic problem my previous blog (for which I had good
 > response).


I've yet to deploy a DB on ZFS in production, so I cannot comment on
real-world performance... what I can comment on are some basic
things.

RAID on top of RAID seems silly. Especially RAID-Z. It's just not as
fast as a mirror or stripe when it comes to a decent db workout.

Are you sure that you want to go with ZFS ... any real reason to go
that way now? I would wait for U4 ... and give the machine/storage a
good workout with SVM and UFS/DirectIO.

Yep... it's a bastard to manage but very little can touch it when it
comes to pure performance. With so many $$$ standing on the datacentre
floor, I'd forget about technology for now and let common sense and
good business practice prevail.
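If you do give SVM and UFS/DirectIO that workout, a minimal sketch might look like this (the metadevice, LUN and mount point names are hypothetical, not taken from this system):

```
# stripe two SAN LUNs with SVM, then UFS mounted with forced directio
metainit d10 1 2 c5t10d0s0 c5t11d0s0 -i 512k
newfs /dev/md/rdsk/d10
mount -F ufs -o forcedirectio /dev/md/dsk/d10 /db2data
```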



 > 3. Any way of monitoring ZFS performance other than iostat ?


DTrace gurus can comment... however, iostat should suffice.
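For what it's worth, a couple of sketches beyond plain iostat (zpool iostat is standard; the pool name and the DTrace one-liner are rough examples, not a tuned script):

```
# per-vdev activity for a pool, sampled every 5 seconds
zpool iostat -v logpool 5

# count physical I/Os by the process issuing them
dtrace -n 'io:::start { @[execname] = count(); }'
```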


 > 4. Any help on ZFS tuning in this kind of environment like caching
etc ?
 >


As was posted, read the blog on ZFS and db's.


 > Would appreciate for any feedback/help wher to go next.
 > If this cannot be resolved we may have to go back to VXFS which would
be a shame.


By the way ... if the client has already purchased vmsa/vxfs (oh my
word, how much was that!) then I'm unsure as to what ZFS will bring to
the party... apart from saving the yearly $$$ for updates and
patches/support. Is that the idea? It's not like SF is bad...

Nope, 8TB on a decently configured storage unit is not too big to
give SVM a go, especially if you want to save money on Storage
Foundation.

I'm sure I'm preaching to the converted here, but DB performance
problems will usually reside in the storage architecture... I've
seldom found a system wanting in the CPU department if the architect
wasn't a moron. With the upgrade that I see here, all the pressure
will move to the back (bar a bad configuration).

If you want to speed up a regular OLTP DB... fiddle with the I/O :)

2c
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


RE: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread Ellis, Mike
At what Solaris 10 level (patch/update) was the "single-threaded
compression" situation resolved?
Could you be hitting that one?

 -- MikeE 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Roch - PAE
Sent: Tuesday, June 26, 2007 12:26 PM
To: Roshan Perera
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS - DB2 Performance



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread Roch - PAE

Possibly the storage is flushing the write caches when it
should not. Until we get a fix, cache flushing could be
disabled in the storage (ask the vendor for the magic
incantation). If that's not forthcoming, and if all pools are
attached to NVRAM-protected devices, then these evil /etc/system
tunables might help:

In older solaris releases we have

set zfs:zil_noflush = 1

On newer releases

set zfs:zfs_nocacheflush = 1


If you implement this, do place a comment noting that it is a
temporary workaround pending a fix for bug 6462690.
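For example, the workaround entry might be recorded like this in /etc/system (a sketch assuming a release that takes zfs_nocacheflush, and NVRAM-protected storage only):

```
* Temporary workaround: do not send cache-flush requests to the
* NVRAM-protected array; remove once bug 6462690 is fixed.
set zfs:zfs_nocacheflush = 1
```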

About compression: I don't have the numbers, but a reasonable
guess would be that it consumes roughly 1GHz of CPU to
compress 100MB/sec. This will of course depend on the type
of data being compressed.
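That rule of thumb can be turned into a quick sizing check; the 400MB/sec workload and the 1.8GHz core speed below are assumptions for illustration, not measurements:

```shell
# CPU cost of compression, assuming ~1 GHz per 100 MB/s (rough guess above)
awk 'BEGIN {
    mbs = 400               # assumed sustained compressed I/O, MB/s
    ghz = mbs / 100 * 1.0   # GHz of CPU spent compressing
    cores = ghz / 1.8       # expressed in 1.8 GHz cores
    printf "~%.1f GHz of CPU, about %.1f cores\n", ghz, cores
}'
```

With those assumptions, roughly two of the 25K's cores would do nothing but compress.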

-r


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread eric kustarz


On Jun 26, 2007, at 4:26 AM, Roshan Perera wrote:


Hi all,

I am after some help/feedback to the subject issue explained below.

We are in the process of migrating a big DB2 database from a

6900 24 x 200MHz CPU's with Veritas FS 8TB of storage Solaris 8 to
25K   12 CPU dual core x 1800Mhz with ZFS 8TB storage SAN storage  
(compressed & RaidZ) Solaris 10.


Unfortunately, we are having massive perfomance problems with the  
new solution. It all points towards IO and ZFS.


Couple of questions relating to ZFS.
1. What is the impace on using ZFS compression ? Percentage of  
system resources required, how much of a overhead is this as  
suppose to non-compression. In our case DB2 do similar amount of  
read's and writes.
2. Unfortunately we are using twice RAID (San level Raid and RaidZ)  
to overcome the panic problem my previous blog (for which I had  
good response).

3. Any way of monitoring ZFS performance other than iostat ?
4. Any help on ZFS tuning in this kind of environment like caching  
etc ?


Have you looked at:
http://blogs.sun.com/realneel/entry/zfs_and_databases
http://blogs.sun.com/realneel/entry/zfs_and_databases_time_for
?

eric



Would appreciate for any feedback/help wher to go next.
If this cannot be resolved we may have to go back to VXFS which  
would be a shame.



Thanks in advance.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread Roshan Perera
 0 0
  raidz1 ONLINE   0 0 0
emcpower32h  ONLINE   0 0 0
emcpower33h  ONLINE   0 0 0
emcpower34h  ONLINE   0 0 0
emcpower35h  ONLINE   0 0 0
emcpower36h  ONLINE   0 0 0
emcpower37h  ONLINE   0 0 0
emcpower38h  ONLINE   0 0 0
emcpower39h  ONLINE   0 0 0
 
errors: No known data errors
 
  pool: dumppool
 state: ONLINE
 scrub: none requested
config:
 
NAMESTATE READ WRITE CKSUM
dumppoolONLINE   0 0 0
  c5t10d0   ONLINE   0 0 0
  c5t11d0   ONLINE   0 0 0
  c6t10d0   ONLINE   0 0 0
  c6t11d0   ONLINE   0 0 0
 
errors: No known data errors
 
  pool: localpool
 state: ONLINE
 scrub: none requested
config:
 
NAMESTATE READ WRITE CKSUM
localpool   ONLINE   0 0 0
  mirrorONLINE   0 0 0
c2t9d0  ONLINE   0 0 0
c3t9d0  ONLINE   0 0 0
 
errors: No known data errors
 
  pool: logpool
 state: ONLINE
 scrub: none requested
config:
 
NAMESTATE READ WRITE CKSUM
logpool ONLINE   0 0 0
  raidz1ONLINE   0 0 0
emcpower0h  ONLINE   0 0 0
emcpower1h  ONLINE   0 0 0
emcpower2h  ONLINE   0 0 0
emcpower3h  ONLINE   0 0 0
emcpower4h  ONLINE   0 0 0
emcpower5h  ONLINE   0 0 0
emcpower6h  ONLINE   0 0 0
emcpower7h  ONLINE   0 0 0
 
errors: No known data errors
[su621dwdb/root] 



- Original Message -
From: Will Murnane <[EMAIL PROTECTED]>
Date: Tuesday, June 26, 2007 2:00 pm
Subject: Re: [zfs-discuss] ZFS - DB2 Performance
To: Roshan Perera <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org

> On 6/26/07, Roshan Perera <[EMAIL PROTECTED]> wrote:
> > 25K   12 CPU dual core x 1800Mhz with ZFS 8TB storage SAN 
> storage (compressed & RaidZ) Solaris 10.
> RaidZ is a poor choice for database apps in my opinion; due to the way
> it handles checksums on raidz stripes, it must read every disk in
> order to satisfy small reads that traditional raid-5 would only have
> to read a single disk for.  Raid-Z doesn't have the terrible write
> performance of raid 5, because you can stick small writes together and
> then do full-stripe writes, but by the same token you must do
> full-stripe reads, all the time.  That's how I understand it, anyways.
> Thus, raidz is a poor choice for a database application which tends
> to do a lot of small reads.
> 
> Using mirrors (at the zfs level, not the SAN level) would probably
> help with this.  Mirrors each get their own copy of the data, each
> with its own checksum, so you can read a small block by touching only
> one disk.
> 
> What is your vdev setup like right now?  'zpool list', in other words.
> How wide are your stripes?  Is the SAN doing raid-1ish things with
> the disks, or something else?


 
> > 2. Unfortunately we are using twice RAID (San level Raid and 
> RaidZ) to overcome the panic problem my previous blog (for which I 
> had good response).
> Can you convince the customer to give ZFS a chance to do things its
> way?  Let the SAN export raw disks, and make two- or three-way
> mirrored vdevs out of them.
> 
> > 3. Any way of monitoring ZFS performance other than iostat ?
> In a word, yes.  What are you interested in?  DTrace or 'zpool iostat'
> (which reports activity of individual disks within the pool) may prove
> interesting.

Thanks... 


> 
> Will
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - DB2 Performance

2007-06-26 Thread Will Murnane

On 6/26/07, Roshan Perera <[EMAIL PROTECTED]> wrote:

25K   12 CPU dual core x 1800Mhz with ZFS 8TB storage SAN storage (compressed & 
RaidZ) Solaris 10.

RaidZ is a poor choice for database apps in my opinion; due to the way
it handles checksums on raidz stripes, it must read every disk in
order to satisfy small reads that traditional raid-5 would only have
to read a single disk for.  Raid-Z doesn't have the terrible write
performance of raid 5, because you can stick small writes together and
then do full-stripe writes, but by the same token you must do
full-stripe reads, all the time.  That's how I understand it, anyways.
Thus, raidz is a poor choice for a database application which tends
to do a lot of small reads.

Using mirrors (at the zfs level, not the SAN level) would probably
help with this.  Mirrors each get their own copy of the data, each
with its own checksum, so you can read a small block by touching only
one disk.
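Rough numbers make the point; the disk count and per-spindle IOPS below are made-up illustrative figures, not measurements from this system:

```shell
# Small random reads: an 8-wide raidz serves about one disk's worth of IOPS
# (every read touches the whole stripe), while the same 8 disks as 2-way
# mirrors serve about eight disks' worth, since either side can be read.
awk 'BEGIN {
    disks = 8; per_disk = 150         # assumed spindle count and IOPS/spindle
    raidz   = per_disk                # whole stripe per small read
    mirrors = disks * per_disk        # 4 x 2-way mirrors, both sides read
    printf "raidz: ~%d small-read IOPS, mirrors: ~%d\n", raidz, mirrors
}'
```

The exact figures don't matter; the ratio (one disk's IOPS versus all of them) does.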

What is your vdev setup like right now?  'zpool list', in other words.
How wide are your stripes?  Is the SAN doing raid-1ish things with
the disks, or something else?


2. Unfortunately we are using twice RAID (San level Raid and RaidZ) to overcome 
the panic problem my previous blog (for which I had good response).

Can you convince the customer to give ZFS a chance to do things its
way?  Let the SAN export raw disks, and make two- or three-way
mirrored vdevs out of them.
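A sketch of that layout, with the SAN presenting plain LUNs (the emcpower names are borrowed from Roshan's pool listing purely as placeholders):

```
# four 2-way mirrored pairs instead of one 8-wide raidz
zpool create dbpool \
    mirror emcpower32h emcpower33h \
    mirror emcpower34h emcpower35h \
    mirror emcpower36h emcpower37h \
    mirror emcpower38h emcpower39h
```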


3. Any way of monitoring ZFS performance other than iostat ?

In a word, yes.  What are you interested in?  DTrace or 'zpool iostat'
(which reports activity of individual disks within the pool) may prove
interesting.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss