Re: [Veritas-vx] cluster performance

2006-08-23 Thread Tharindu Rukshan Bamunuarachchi
Dear Ronald/All,

Can you let me know what the "intent log" file is used for in SFCFS?
What is the purpose of the intent log, how can we find its current size,
and what would be the effect on the system if we changed the intent log size?
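
For reference, the intent log is VxFS's journal of metadata updates, and it is
replayed during recovery, which is why its size affects failover time. Below is
a minimal sketch of how the size is usually set and inspected; the device name
is a placeholder and the syntax should be verified against mkfs_vxfs(1M) on 4.0:

# The intent log size is fixed when the file system is created; logsize is
# given in file system blocks (so 1024 x 1 KB blocks is roughly 1 MB).
mkfs -F vxfs -o bsize=1024,logsize=1024 /dev/vx/rdsk/sharedg/vol01

# To see the size an existing file system was created with, Solaris mkfs -m
# echoes the original creation parameters, including logsize (assumption;
# please verify on your release):
mkfs -F vxfs -m /dev/vx/rdsk/sharedg/vol01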


> What kind of I/O pattern are we talking about (how many I/Os at one time
> from each system, are they random or not, is the file size always the
> same, or are both systems competing to grow the file),

In the first instance, we used a simple disk-writing scenario.
e.g.
while true; do time date +%s >> test.dat; sleep 0.5; done

Then we wrote our own small program, which writes 800 bytes of data at a
time to a file on the shared disk. The writes are sequential, with a 250 us
delay between each write, and we wrote up to a maximum of around 20 MB.
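
A rough shell re-creation of that test would look something like the sketch
below; the record size, pacing and 20 MB cap come from the description above,
the mount point is a placeholder, and it assumes GNU date (for %N) and a sleep
that accepts fractional seconds:

# Append an 800-byte record roughly every 250 us to a file on the shared
# mount and log how long each write takes (shell overhead makes the pacing
# approximate).
REC=$(head -c 800 /dev/zero | tr '\0' 'x')
LIMIT=$((20 * 1024 * 1024))    # stop at ~20 MB, as in the test above
: > /mnt/cfs/test.dat
while [ "$(wc -c < /mnt/cfs/test.dat)" -lt "$LIMIT" ]; do
    t0=$(date +%s%N)
    printf '%s' "$REC" >> /mnt/cfs/test.dat
    t1=$(date +%s%N)
    echo "write_us=$(( (t1 - t0) / 1000 ))"
    sleep 0.00025
done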

We have also used bonnie++.
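
For example, a run like the following on each node at the same time against
the shared mount; the size, user and directory are placeholders rather than
our exact invocation:

# -s should be about twice the node's RAM so the page cache cannot hide the
# disk latency; -n 0 skips the small-file tests; bonnie++ refuses to run as
# root, hence -u.
mkdir -p /mnt/cfs/$(hostname)
bonnie++ -d /mnt/cfs/$(hostname) -s 2g -n 0 -u nobody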

> you using for file access (memory mapped files, regular read and write,
> can you use direct I/O or quick I/O, does your application rely on VxFS
> for locking)?

No direct I/O and no mmap I/O; just a plain Unix open() without any special
flags. We used O_CREAT.


We are actually using SFCFS 4.0, not 5.0; sorry for the earlier wrong info.


Re: [Veritas-vx] cluster performance

2006-08-21 Thread Ronald S. Karr
What kind of I/O pattern are we talking about (how many I/Os at one time
from each system, are they random or not, is the file size always the
same, or are both systems competing to grow the file), and what method are
you using for file access (memory mapped files, regular read and write,
can you use direct I/O or quick I/O, does your application rely on VxFS
for locking)?
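
For reference, direct I/O can usually be forced for a whole VxFS mount through
cache-advisory options rather than application changes; a hedged example
(support for these options on a CFS 4.0 cluster mount, and all names, should
be verified against mount_vxfs(1M)):

# mincache=direct and convosync=direct make ordinary read()/write() calls
# behave like direct, synchronous I/O; -o cluster requests a shared (CFS)
# mount.  Device and mount point are placeholders.
mount -F vxfs -o cluster,mincache=direct,convosync=direct \
    /dev/vx/dsk/sharedg/vol01 /mnt/cfs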
-- 
Ronald S. Karr
tron |-<=>-|[EMAIL PROTECTED]


Re: [Veritas-vx] cluster performance

2006-08-21 Thread Par Botes
There are some other tunables, specific to VCS, that may help with the 40 s
failover time. However, these tunables are normally only given out by support,
as they want to know the environment before suggesting them. These tunables
can cause system instabilities in rare but known environments, which is why
we prefer to control them via support.

Open a case and ask for recommendations.

-Par
 


Re: [Veritas-vx] cluster performance

2006-08-19 Thread Tharindu Rukshan Bamunuarachchi
Dear All/Mark,

Unfortunately, SFCFS is installed at a client location, and they say that it
is okay without CFS.

But the most critical issue is the failover latency. It takes about 40 s for
the slave node to become master when the master goes down due to a failure.
They have recommended the following changes to the system:

1. Minimize the intent log size (the recommendation is 1 MB)
2. Mount with the noatime option
3. Use mincache=closesync

Do you have any comments, or do you know of any effects of the above options
on system performance?
Please help.
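
For concreteness, the three suggestions would typically be applied along these
lines; the 1 MB figure comes from the recommendation above, while the block
size, device and mount point are placeholders, and the exact syntax should be
checked against mkfs_vxfs(1M) and mount_vxfs(1M) for this release:

# 1. A ~1 MB intent log, set at mkfs time (logsize is in file system blocks,
#    so 1024 x 1 KB blocks is about 1 MB).  A smaller log means less to
#    replay on failover, at the possible cost of throughput under heavy
#    metadata activity.
mkfs -F vxfs -o bsize=1024,logsize=1024 /dev/vx/rdsk/sharedg/vol01

# 2 and 3. Remount with noatime (skip access-time updates) and
#    mincache=closesync (flush a file's dirty data when it is closed).
mount -F vxfs -o cluster,noatime,mincache=closesync \
    /dev/vx/dsk/sharedg/vol01 /mnt/cfs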

Tharindu






-- 
Tharindu Rukshan Bamunuarachchi
all fabrications are subject to decay
___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] cluster performance

2006-08-19 Thread Tharindu Rukshan Bamunuarachchi
We are writing to the same file from two nodes, using one single shared disk.

We are using CFS 5.0; I hope that is okay.

Thank you!
Tharindu



-- 
Tharindu Rukshan Bamunuarachchi
all fabrications are subject to decay
___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] cluster performance

2006-08-18 Thread Ronald S. Karr
The test you show isn't good enough because it doesn't compare
multi-node access performance with and without CFS.

In the past (far enough in the past that I don't think it should still
be an issue), we frequently saw disk arrays that performed just
terribly if two hosts did I/O to them in the wrong ways, but fine if
one host issued the same pattern of I/Os that the two nodes issued
together.

I am not saying that the same thing is happening, but it is important
to compare shared access to a LUN with and without CFS.  To do that,
you could try creating two file systems on the same LUN.  For example,
you could use CVM to create two volumes on the same LUN, and then just
use VxFS on those two volumes, one mounted on one node, one on the
other.  Then, see what happens.
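
A minimal sketch of that comparison, with made-up disk group, disk and volume
names (the control volumes should of course sit on the same LUN as the CFS
volume):

# Carve two ordinary volumes out of the same LUN with CVM/VxVM, put a local
# (non-cluster) VxFS on each, and mount one per node.
vxassist -g sharedg make vol_a 5g lun01
vxassist -g sharedg make vol_b 5g lun01
mkfs -F vxfs /dev/vx/rdsk/sharedg/vol_a
mkfs -F vxfs /dev/vx/rdsk/sharedg/vol_b

# node 1:
mount -F vxfs /dev/vx/dsk/sharedg/vol_a /mnt/test_a
# node 2:
mount -F vxfs /dev/vx/dsk/sharedg/vol_b /mnt/test_b

# Then rerun the same write test on both nodes at once and compare the
# latencies with the CFS numbers.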

So, in what way are these two processes writing to the same file?
Random updates?  Can the writers be changed to use quick I/O, which
removes a read/write exclusion?

Do the two processes read and write the exact same blocks frequently?
That can just kill performance, since CFS reacts to that by flushing
blocks to disk from one node before re-reading them on the other node.
Even then, 30 milliseconds seems like a long time.  What kind of array
is this?
-- 
Ronald S. Karr
tron |-<=>-|[EMAIL PROTECTED]


Re: [Veritas-vx] cluster performance

2006-08-18 Thread Scott Kaiser
If it is the same file:

Versions 4.0 and higher hand off regions of a file to different hosts to
reduce lock contention. Whether this helps depends on the file size and on
whether the hosts are accessing the same region of the file (it also depends
on what version you're running; if you are on 4.0 or later, you've already
got it).
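
One quick way to check whether that range handoff is helping is to point the
two writers at disjoint regions of the shared file; a hedged illustration
(offsets, sizes and the mount point are arbitrary, and oflag=dsync assumes
GNU dd):

# node 1 rewrites the first 10 MB of the file:
dd if=/dev/zero of=/mnt/cfs/test.dat bs=8k count=1280 seek=0 \
    conv=notrunc oflag=dsync
# node 2 rewrites the next 10 MB (seek is in bs-sized blocks):
dd if=/dev/zero of=/mnt/cfs/test.dat bs=8k count=1280 seek=1280 \
    conv=notrunc oflag=dsync
# If latency drops sharply compared with both nodes hitting the same region,
# the cost is more likely lock/ownership ping-pong than the array itself.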

Regards,
Scott
 


___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] cluster performance

2006-08-17 Thread Schipper, Mark

While I have quite a bit of experience with VCS, VxVM and VxFS, I have no
experience with CFS, so I probably can't help too much with that. But I can
take a shot at the storage side of the equation (lots of experience there too).

For starters, 40 ms is pretty high, but not *that* unusual if both nodes are
trying to access the same file frequently. Each has to wait its turn, and file
locking (that is robust enough to guard against corruption) takes some time.
Your I/O is still sequential as I understand it. If you want to determine
whether your problem is CFS-specific or a storage problem, here is what I
would do:

- Create a device with the exact same layout as the one that CFS is using.
- Put VxFS on that device.
- Run I/O tests on that device and the CFS device at the same time.
- If you see latency on both of them, your problem is not CFS.
- If you see latency on the CFS device only, it's probably a CFS-related problem.

Try that. If you end up having latency on both devices I can probably help
you figure out why. If the latency is on the CFS device only, I probably
won't be able to help much.
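
A hedged sketch of that A/B test; every name below is a placeholder, and the
vxassist attributes should simply mirror whatever vxprint reports for the
volume CFS is using:

# Inspect the layout of the CFS volume so the control volume can be built
# the same way:
vxprint -g sharedg -ht cfsvol

# Build a second volume with the same size and layout, put plain VxFS on it,
# and mount it locally (attributes shown are examples only):
vxassist -g sharedg make ctlvol 10g layout=stripe ncol=4 stripeunit=64k
mkfs -F vxfs /dev/vx/rdsk/sharedg/ctlvol
mount -F vxfs /dev/vx/dsk/sharedg/ctlvol /mnt/ctl

# Drive both file systems at the same time and compare latencies:
bonnie++ -d /mnt/cfs -s 2g -n 0 -u nobody &
bonnie++ -d /mnt/ctl -s 2g -n 0 -u nobody &
wait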


 

___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] cluster performance

2006-08-16 Thread Tharindu Rukshan Bamunuarachchi
It is CFS. Actually, this is a very, very basic installation.

We have two nodes connected to a disk array. Both nodes have VxCFS installed.

There are two processes running independently on the two nodes. They must
write to the same file, and we need CFS to ensure data consistency.

We have two issues with the cluster installation:

1. Sometimes the maximum latency of a write to disk may rise to 50 ms.
2. The failover time is about 40 s in some instances.

Is this normal? What kind of configuration info should I ask them for?

They have suggested the following options to decrease the failover latency:

1. Reducing the intent log size
2. Mounting with the noatime option
3. Mounting with the "mincache=closesync" option

How will these impact performance and I/O bandwidth?
Could you please guide us in resolving this?
I can ask them for more config details.

Thank You!

Tharindu


On 8/17/06, Schipper, Mark <[EMAIL PROTECTED]> wrote:
>
>
> To clarify, are you using Veritas Cluster Filesystem (CFS)
>
> or
>
> Veritas Cluster Server and Veritas File System (VXFS) ??


-- 
Tharindu Rukshan Bamunuarachchi
all fabrications are subject to decay
___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] cluster performance

2006-08-16 Thread Brown, Rodrick R
This could be any number of issues. I would start by analyzing your storage
subsystem configuration, basically looking for any hot spots or contention
with another system. We use the cluster volume manager and file system here
with no noticeable overhead.
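
A hedged example of that kind of first-pass check (the disk group name is a
placeholder and the iostat flags are Solaris-style):

# Per-volume and per-disk I/O statistics every 5 seconds while the test
# runs; look for one disk or volume with outsized service times.
vxstat -g sharedg -i 5 -c 12
iostat -xn 5 12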

___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


[Veritas-vx] cluster performance

2006-08-15 Thread Tharindu Rukshan Bamunuarachchi
We have a Veritas cluster in one of our production environments.

Recently, while running performance tests, we found that there is a disk
write latency of about 50 ms when writing to a volume in the cluster.

Do you know of any pre-tested figures for Veritas cluster file system
performance, so that I can verify the configuration of our system?

Cheers!
Tharindu



-- 
Tharindu Rukshan Bamunuarachchi
all fabrications are subject to decay
___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx