Ok,

You may want to consider these then. These are for 5.0 on Solaris (but
should be easily transferable to most OSes and even previous versions).
Note that these are generic guidelines: depending on memory pressure,
application IO profile and a host of other things, these may or may not
be good tunables, but I'd consider them. For databases there is nothing
that beats the ODM interface (or the QuickIO interface for non-Oracle
workloads); this is because ODM/QIO also alters the filesystem locking
behaviour besides the cache behaviour. There are some other tricks we
employ as well to accelerate the ODM/QIO IO path.

Anyway, here is a generic write-up on the first set of tunables/mount
options I'd consider for VxFS/VxVM for three workloads: DSS, OLTP and
file serving. I would hold off on doing anything with the vxtunefs
read_pref, nstreams etc. until I'd analyzed the impact of these
tunables, since the vxtunefs parameters are autotuned (and autotuned
really well).

For DMP: use the balanced-path algorithm for A/A type arrays, and
consider using a failover group for A/P arrays. Balanced path gives the
best performance for almost all workloads.
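As a sketch, setting the balanced I/O policy on an A/A enclosure might
look like this (the enclosure name "emc0" is just a placeholder; check
the listenclosure output for the name DMP actually uses for your array):

```shell
# List the enclosures DMP knows about, to find your array's name
vxdmpadm listenclosure all

# Set the balanced-path I/O policy on an A/A enclosure (name is an example)
vxdmpadm setattr enclosure emc0 iopolicy=balanced

# Verify the active policy
vxdmpadm getattr enclosure emc0 iopolicy
```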


DSS/HPC
==========================
VxFS mount options:
Log mode = delaylog (good performance while still preserving data integrity)
Logsize = medium (doesn't need to be much larger than the default, or
just keep the default)
Mincache = direct (turn off in-kernel buffering)
Convosync = direct (perform synchronous writes as direct I/O)

vxtunefs:
read_ahead = 1 (traditional VxFS read-ahead enabled)
fcl_winterval = 6000 (100-minute update interval for the FCL)

VxVM kernel tuning (/etc/system):
set vxio:vol_maxio=32768 (max IO size = 16 MB; the value is in 512-byte sectors)
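Put together, a DSS tuning pass might look like this sketch (the disk
group, volume and mount-point names are placeholders, not a
recommendation):

```shell
# Mount the DSS filesystem with direct I/O and delayed logging
# (device and mount-point names are examples)
mount -F vxfs -o delaylog,mincache=direct,convosync=direct \
    /dev/vx/dsk/dssdg/dssvol /dss

# Enable traditional read-ahead and relax the FCL write interval
vxtunefs -o read_ahead=1 /dss
vxtunefs -o fcl_winterval=6000 /dss

# /etc/system entry (takes effect after a reboot):
#   set vxio:vol_maxio=32768
```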


OLTP
=======================
VxFS mount options:
Log mode = delaylog (good performance while still preserving data integrity)
Logsize = medium (doesn't need to be much larger than the default, or
just keep the default)
Mincache = direct (turn off in-kernel buffering)
Convosync = direct (perform synchronous writes as direct I/O)

vxtunefs:
read_ahead = 0 (turn off read-ahead)
fcl_winterval = 6000 (100-minute update interval for the FCL)

VxVM kernel tuning (/etc/system):
set vxio:vol_maxio=32768 (max IO size = 16 MB; the value is in 512-byte sectors)
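The same idea for OLTP, with read-ahead disabled for the random-I/O
profile; the names below are placeholders, and the tunefstab line shows
one way to make the vxtunefs settings survive a remount:

```shell
# Mount the Oracle data filesystem with direct I/O
# (device and mount-point names are examples)
mount -F vxfs -o delaylog,mincache=direct,convosync=direct \
    /dev/vx/dsk/oradg/oravol /oradata

# Turn off read-ahead; OLTP reads are random, so read-ahead is wasted work
vxtunefs -o read_ahead=0 /oradata
vxtunefs -o fcl_winterval=6000 /oradata

# To make the vxtunefs settings persistent across remounts, add a line
# like this to /etc/vx/tunefstab:
#   /dev/vx/dsk/oradg/oravol  read_ahead=0,fcl_winterval=6000
```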


Fileserving, lots of smallish files (like an NFS home directory server)
========================================================================
VxFS mount options:
Log mode = delaylog (good performance while still preserving data integrity)
Logsize = large (make the log big and put it on a separate device using
volume sets)

vxtunefs:
read_ahead = 0 (turn off read-ahead)
fcl_winterval = 6000 (100-minute update interval for the FCL)

Kernel tuning (/etc/system):
set vxfs:vxfs_ninode= (each cached inode costs roughly 1 KB, so view the
value in 1 KB units; set it to no more than 50% of memory, but make it big)
set ncsize= (this should be about 80% of vxfs_ninode; the default is
derived from maxusers)
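As a sketch, on a machine with 16 GB of RAM the /etc/system entries
might look like the fragment below. The sizes are purely illustrative
(roughly 2 GB for the inode cache, well under the 50%-of-memory
ceiling), not a recommendation:

```shell
# /etc/system fragment for a small-file NFS server (sizes are examples)

# Each vxfs_ninode entry costs roughly 1 KB, so 2000000 entries ~= 2 GB
set vxfs:vxfs_ninode=2000000

# DNLC sized at ~80% of the inode cache, per the guideline above
set ncsize=1600000
```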


Other
===========================
On some platforms (AIX) you can reduce vx_bc_bufhwm and vx_vmm_buf_count
to reduce the size of the FS cache in the kernel; on Solaris this is
done dynamically and not directly tuned. You may also want to consider
tuning the fsflush daemon if PID 3 uses a lot of CPU (but that's a Sun
tunable and not really a VxFS/VxVM tunable).
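On Solaris the usual knobs for this are autoup and tune_t_fsflushr in
/etc/system; the values below are examples of slowing fsflush down, not
a recommendation:

```shell
# /etc/system fragment to reduce fsflush CPU usage (values are examples)

# Age a dirty page for up to 240 s before fsflush writes it (default 30)
set autoup=240

# Wake fsflush every 5 s instead of every second (default 1)
set tune_t_fsflushr=5
```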


Now, the caveats... These tunables try to optimize the performance by
either changing the way memory is consumed or altering IO behavior. 

Mincache, vxfs_ninode, vol_maxio and read_ahead all change memory
consumption. vol_maxio will use some more memory in order to issue
larger IOs. Mincache tries to use less IO by doing less caching in the
file system. vxfs_ninode tries to use more memory for inode caching to
get a larger ncsize cache hit ratio.

Log mode, logsize and fcl_winterval all alter some aspect of write
behaviour. Delaylog delays logging for any non-data updates which
strictly don't need to be logged in order to conform to POSIX
standards. Logsize should be set so that the log never overflows and
forces writes to pause; this is hard to monitor, btw. Using checkpoints
will increase your log device usage, so you should consider making the
log larger when using checkpoints (btw, everyone should use checkpoints
- it's really good stuff). fcl_winterval is all about the new file
change log. If you update the log at a higher resolution than needed,
you put unneeded write pressure on the underlying device (and the
filesystem at large). Set it to a write interval high enough to still
be useful for your monitoring requirements. 100 minutes or so gives a
fairly coarse resolution which is useful for day-to-day monitoring but
not for intra-day monitoring (at least in my mind).

Best regards,
Par




-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Thursday, October 26, 2006 2:59 AM
To: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Recommended mount options for VxFS

OLTP (Oracle) on Solaris 10

On Tue, Oct 24, 2006 at 06:20:03AM -0700, Par Botes wrote:
> What's the workload? Transactional or DSS style queries in the
> database?
> It doesn't change the recommended mount options but it changes some of
> the VxVM recommendations.
> 
> Platform? There are some kernel tunables which could be modified as
> well.
> 
> -Par
> 
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> [EMAIL PROTECTED]
> Sent: Monday, October 23, 2006 11:21 PM
> To: veritas-vx@mailman.eng.auburn.edu
> Subject: Re: [Veritas-vx] Recommended mount options for VxFS
> 
> Thanks Par for the advice but, as far as I know, to have ODM we need 
> to buy it.
> I am asking for mount options in typical VxFS+VxVM environment.
> 
> przemol
> 
> On Mon, Oct 23, 2006 at 10:16:02AM -0700, Par Botes wrote:
> > No mount options needed if you use the ODM option of VxFS (or if you
> > use the older QuickIO interface). I strongly recommend anyone who
> > has oracle to use ODM (or QIO in the past), it makes a difference.
> > 
> > If you do a lot of checkpoints you may want to increase the log size
> > and put it on a different device as well.
> > 
> > -Par
> >  
> > 
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of 
> > [EMAIL PROTECTED]
> > Sent: Monday, October 23, 2006 3:56 AM
> > To: veritas-vx@mailman.eng.auburn.edu
> > Subject: [Veritas-vx] Recommended mount options for VxFS
> > 
> > Hi all,
> > 
> > can you share some information what options (convosync,mincache,
> > other) you use mounting VxFS for Oracle database ? Do you treat the 
> > same redo logs, database files and archive logs or mount it with 
> > different options ?
> > 
> > Regards
> > przemol
> > 
> > 
> > _______________________________________________
> > Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu 
> > http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
> 
