Re: Tuning CVS performance.

2005-02-14 Thread Todd Denniston
John Carter wrote:
> 
> On Fri, 11 Feb 2005, Michael Schiestl wrote:
> 
> > Where the hell is the bottleneck? Is there a way to make CVS faster?
> 
> While I wouldn't call CVS slow, I would appreciate some hints on
> optimizing / tweaking performance.
> 
> We're using version 1.11.1p1 via pserver on Red Hat Linux (kernel
> 2.4.17+acl) on an ext3 file system, and things are starting to slow
> down painfully for some of our larger modules.
> 

Two things that might speed it up some:
1) Update to something past 1.11.1p1; they started using mmap on the files
in 1.11.2, which CAN make things faster. And as an added bonus you get
security fixes.

2) See if you can get a newer kernel, self-compiled.  A kernel optimized for
the hardware you have (instead of generic 386 code) can be a little faster,
thus speeding up the whole system.

-- 
Todd Denniston
Crane Division, Naval Surface Warfare Center (NSWC Crane) 
Harnessing the Power of Technology for the Warfighter


___
Info-cvs mailing list
Info-cvs@gnu.org
http://lists.gnu.org/mailman/listinfo/info-cvs


Tuning CVS performance.

2005-02-13 Thread John Carter
On Fri, 11 Feb 2005, Michael Schiestl wrote:
Where the hell is the bottleneck? Is there a way to make CVS faster?
While I wouldn't call CVS slow, I would appreciate some hints on
optimizing / tweaking performance.
We're using version 1.11.1p1 via pserver on Red Hat Linux (kernel
2.4.17+acl) on an ext3 file system, and things are starting to slow
down painfully for some of our larger modules.
Partly to blame is our heavy usage of tagging operations that have
left a legacy of about 4000 tags per file.
We also have a considerable number of "dead" directory
trees, i.e. directories that have effectively been renamed.
("cvs co -P" creates 1025 directories (excluding the /CVS meta
directories); "cvs co" creates 2015.)
Any pointers on what would be the most effective tweaks / actions we
could do to regain some speed?
For example, I would consider leaving behind our pre-version 1.0
development trail and pulling our post-version 1.0 trail into a new module.
Is there any tool to do that? And would it speed things up?
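No such tool ships with CVS, but one common approach is to copy the module's ,v files into a new module directory; the RCS history travels with the files, and pre-1.0 revisions could then be pruned in the copy with `cvs admin -o`. A minimal sketch using a throwaway directory and made-up names (`REPO`, `proj`), not anything from this thread:

```shell
# Hypothetical sketch: split off a new module by copying the ,v files.
# REPO and "proj" are stand-ins for a real $CVSROOT and module name.
REPO=$(mktemp -d)                                   # stand-in repository
mkdir -p "$REPO/proj/src"
printf 'head\t1.1;\n' > "$REPO/proj/src/main.c,v"   # fake RCS file
# Copying the ,v files creates a new module with full history intact;
# old revisions could then be pruned in the copy with "cvs admin -o".
cp -a "$REPO/proj" "$REPO/proj-post-1.0"
ls "$REPO/proj-post-1.0/src"
```

Whether it speeds things up depends on how much of the per-file tag and revision baggage you actually prune afterwards.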
John Carter Phone : (64)(3) 358 6639
Tait ElectronicsFax   : (64)(3) 359 4632
PO Box 1645 ChristchurchEmail : [EMAIL PROTECTED]
New Zealand
No code is faster than no code.


question about helping cvs performance.

2004-09-10 Thread Lynch, Harold
I'm getting complaints from the users about the amount of time it takes to
do a tag or a branch (especially the branch).

Are there any "standard" things that can be done, or to watch out for, that
would make these functions run as well as possible?

Harold Lynch


Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Richard Pfeiffer

> How is that increasing NFS traffic? You're still doing exactly the same
> operations, just in a different directory

I'm going to go look for my brain and I'll be back in 5 minutes.
Not sure what I was thinking here.  Of course I knew that ;)  - I'm just
typing a bit too fast!
 
 
Larry Jones <[EMAIL PROTECTED]> wrote:

> > 3) Or, would it be feasible to have LockDir point to a file across NFS,
> > but outside of the repository?  Of course then, we'd be increasing
> > NFS traffic.
>
> How is that increasing NFS traffic?  You're still doing exactly the same
> operations, just in a different directory.  In any event, that would
> only help if the problem were disk contention on the NFS server, which
> it isn't.
>
> -Larry Jones


Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Larry Jones
Richard Pfeiffer writes:
>
> In our case, as a refresher, our repository is prod. level NFS mounted
> on a dedicated NetApp disk and we have proven that in our case, this is
> much faster.  So: 
> 1) Would it still be advisable to try writing lock files to the local
> /tmp? 

If your /tmp is on a memory-based filesystem like mfs or tmpfs, then
yes.  If it's just a regular disk-based filesystem, probably not.
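A quick way to check which kind of filesystem backs /tmp (a sketch; `df -T` is GNU-specific, and on Solaris `df -n /tmp` or `mount` is the rough analogue):

```shell
# Report the filesystem type backing /tmp; "tmpfs" (or "mfs") means it is
# memory-based, so putting LockDir under /tmp avoids physical disk I/O.
df -T /tmp
```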

> 2) Could switching LockDir so that lock files are written to the local
> /tmp increase load on the local machine and increase our problem, since
> that's where our problems lie?

That's unlikely; writing to a memory-based filesystem shouldn't cause
any more load than writing to an NFS filesystem.

> 3) Or, would it be feasible to have LockDir point to a file across NFS,
> but outside of the repository?  Of course then, we'd be increasing
> NFS traffic.

How is that increasing NFS traffic?  You're still doing exactly the same
operations, just in a different directory.  In any event, that would
only help if the problem were disk contention on the NFS server, which
it isn't.

-Larry Jones

Let's just sit here a moment... and savor the impending terror. -- Calvin




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Mark D. Baushke

Richard Pfeiffer <[EMAIL PROTECTED]> writes:

> Great info from everyone!  Thank you -
> Have one quick question right now, more comments later - and Mark, I'll
> get the problem with the current version of Eclipse and any cvs version of
> 1.11.7 and higher out as soon as I can.

Thanks. Note that it might have to go to the Eclipse folks too.

> As far as setting this LockDir to something like /tmp/CVSLockDir:
>  
> I know the manual references LockDir to point to an in-memory file system.
> In our case, as a refresher, our repository is prod. level NFS mounted on
> a dedicated NetApp disk and we have proven that in our case, this is much
> faster.  So:
> 1) Would it still be advisable to try writing lock files to the local
> /tmp? 

Be careful that you do not have any symbolic links anywhere in your
repository, and that you do not have any processes that 'clean up' files
and/or directories out of /tmp; with those caveats, you may indeed find it helps.

> 2) Could switching LockDir so that lock files are written to the local
> /tmp increase load on the local machine and increase our problem, since
> that's where our problems lie?

It will increase the space used out of /tmp, but should not increase the
load on the machine.

> 3) Or, would it be feasible to have LockDir point to a file across NFS, but
> outside of the repository?  Of course then, we'd be increasing NFS
> traffic.

Feasible? Yes. However, I don't see the point in this change unless you
want to alter the permissions of the LockDir or something. Using LockDir
runs through a slightly more complicated code path than not using it, so,
all other things being equal, using it will slow you down.

-- Mark




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Richard Pfeiffer
Great info from everyone!  Thank you -
Have one quick question right now, more comments later - and Mark, I'll get the problem with the current version of Eclipse and any cvs version of 1.11.7 and higher out as soon as I can.
 
As far as setting this LockDir to something like /tmp/CVSLockDir:
 
I know the manual references LockDir to point to an in-memory file system.
In our case, as a refresher, our repository is prod. level NFS mounted on a dedicated NetApp disk and we have proven that in our case, this is much faster.  So: 
1) Would it still be advisable to try writing lock files to the local /tmp? 
2) Could switching LockDir so that lock files are written to the local /tmp increase load on the local machine and increase our problem, since that's where our problems lie?
3) Or, would it be feasible to have LockDir point to a file across NFS, but outside of the repository?  Of course then, we'd be increasing NFS traffic.
 
Thx,
-R


Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Tom Copeland
On Tue, 2003-10-28 at 23:15, Mark D. Baushke wrote:
> No, there are still multiple directories using a LockDir and a traversal
> is still done. The difference is that the operations are typically
> handled much faster.



Thanks much for the informative post.  This is good info for those of us
who are running busy servers (http://rubyforge.org/ - 100+ repositories,
300+ developers).

Yours,

Tom






Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Derek Robert Price
Mark D. Baushke wrote:

| The creation of cvs locks is a multi-step process that ends with a
| #cvs.lock directory being created for the duration of the lock and then
| being removed. For some operations, the creation of the read file, the
Just picking a nit.  The #cvs.lock directory exists for the duration of
a write lock or the time it takes to create a new read lock file.  Read
lock files exist for the duration of a read-only operation.
| >I then read the link: 10.5-Several developers simultaneously attempting
| >to run CVS, that goes along with LockDir.
| >
| >The beginning states that cvs will try every 30 seconds  to see if it
| >still needs to wait for lock.
|
| The backoff when two processes try to lock the same directory at the same
| time can be expensive in delay time, but those processes are just doing
| a sleep, so it should not horribly impact the load on your machine.
|
| >e) Any chance this is a parameter that can be decreased - or would
| >its checking more often just create more overhead and slow things down?
|
| I have not played much with it. The value you want to muck with is in
| src/cvs.h
|
| #define CVSLCKSLEEP 30  /* wait 30 seconds before retrying */
|
| it is called from lock_wait().
If your problem is excessive swapping, then decreasing this would cause
processes to wake and be swapped in more frequently.
| >2401  stream  tcp  nowait  root  /usr/local/bin/cvs
| >
| >cvs -f --allow-root=/usr/cvsroot/PROJ1 pserver
| >
| >2402  stream  tcp  nowait  root  /usr/local/bin/cvs
| >
| >cvs -f --allow-root=/usr/cvsroot/PROJ2 pserver
| >
| >Or, switching that around, would there be any benefit to having two
| >repositories and connecting both of them to one pserver?
|
| Many folks have multiple --allow-root options on one pserver invokation
| to allow multiple disjoint cvs repositories to be served by one server.
|
| I do not believe it to cause any difference with regard to performance.
And you'd have to hack CVS to handle only a single init as you propose,
since CVS expects each root to have a CVSROOT directory of its own.
Derek
--
~*8^)
Email: [EMAIL PROTECTED]

Get CVS support at !
--
If vegetarians eat vegetables, what do humanitarians eat?




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-29 Thread Derek Robert Price
Larry Jones wrote:

|>a) Just what is an in-memory file system?
|
|
|Just what it says -- a filesystem where the data only exists in memory
|(rather than being written to a disk); they are commonly used for /tmp.
|If you're already using such a filesystem for /tmp, you can just put
|LockDir on /tmp (e.g., /tmp/CVSLockDir).  I believe the Solaris variety
|is called tmpfs.
Although you might have to worry about any /tmp sweepers you run if you
put LockDir in /tmp.  Some programs which clean /tmp periodically will
remove the LockDir if configured improperly.
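For example, a find(1)-based sweeper can skip the lock directory with `-prune` (tmpwatch-style tools usually have an equivalent exclude option). All names below are illustrative, built in a throwaway directory standing in for /tmp:

```shell
# Illustrative: a /tmp sweeper that skips the lock directory with -prune,
# so live CVS lock files are never deleted.  A real sweeper would add an
# age test (e.g. -mtime +7) before actually removing anything.
T=$(mktemp -d)                                   # stand-in for /tmp
mkdir -p "$T/CVSLockDir" "$T/scratch"
touch "$T/CVSLockDir/#cvs.rfl.host.123" "$T/scratch/old-file"
# Print sweep candidates; CVSLockDir and its contents are never visited:
find "$T" -mindepth 1 -path "$T/CVSLockDir" -prune -o -print
```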
Derek

--
~*8^)
Email: [EMAIL PROTECTED]

Get CVS support at !
--
It is error alone which needs the support of government.  Truth can
stand by itself.
   - Thomas Jefferson




Re: cvs performance questions

2003-10-28 Thread Larry Jones
Derek Robert Price writes:
> 
> _commit_ (not checkout) is the notable exception, with not-so-notable
> exceptions being `cvs admin' and all of the watch commands.

Yes, that's what I meant.  I can't imagine why my fingers typed
something different.  :-)

-Larry Jones

It doesn't have a moral, does it?  I hate being told how to live my life.
-- Calvin




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Larry Jones
Mark D. Baushke writes [about locks on in-memory filesystems]:
> 
> If you are able to write to memory faster than to your repository, then
> the difference in speed between those two mediums is how much faster you
> will be able to create your lock.  I would guess that in most cases of
> a repository over NFS, or slow local disks, the use of a memory filesystem
> would be faster.  The use of a swap system that is always being paged out
> to disk may actually be slower if the page disk is slow.

Most filesystems have to worry about the on-disk filesystem being
consistent in the event of a system crash or power failure.  In many
cases, that means that operations that modify the filesystem metadata
(such as creating or deleting a file or directory) are synchronous --
the system actually waits for the data to get written to the disk before
continuing.  Memory filesystems aren't persistent, so they can avoid
that.  Since locking is just creating and deleting lots of little files
and directories, that can make a big difference, even if the memory
filesystem might get paged out to backing store eventually.

And anyone with a "slow" paging disk deserves what they get, the paging
disk should be the fastest disk in the system.

-Larry Jones

These findings suggest a logical course of action. -- Calvin




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Larry Jones
Richard Pfeiffer writes:
>
> MIME-Version: 1.0
> Content-Type: text/html; charset=us-ascii

Please do not send MIME and/or HTML encrypted messages to the list.
Plain text only, PLEASE!

> We are running cvs-1.11.  I did migrate us to 1.11.9, but it turned out
> it does not mesh with Eclipse, which is what our developers use.  The
> latest upgrade Eclipse can use is 1.11.6.  From what I read, that has
> its own problems, so 1.11.5 would be the latest we could use.

What was the problem with 1.11.9?  I can't think of any incompatibilites
and you're missing a lot of bug fixes using 1.11.

> We now can have as many as 77 concurrent cvs processes going.

Wow.  That is one busy repository.  Are they all running, or are some of
them sleeping?

> Should cvs even be able to handle this kind of load?  To some of us,
> it's amazing and a credit to cvs that this thing hasn't crashed already.

There isn't any inherent reason that CVS can't handle the load.

> a) should we be splitting up our repository and giving each project
> their own?

That wouldn't help unless you gave each repository its own server
machine.

> b) is there a way to limit the number of pserver calls made at any
> one time?

Since CVS is invoked by inetd, that depends on your particular inetd
implementation.  I'm pretty sure that xinetd does allow you to limit the
number of concurrent servers for a particular service, so if your
implementation doesn't, you may want to consider switching (see
www.xinetd.org).
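With xinetd, the cap is the `instances` attribute; a hedged sketch of an /etc/xinetd.d/cvspserver entry (paths and the limit of 20 are illustrative, not from this thread):

```
service cvspserver
{
        port        = 2401
        socket_type = stream
        wait        = no
        user        = root
        server      = /usr/local/bin/cvs
        server_args = -f --allow-root=/usr/cvsroot pserver
        instances   = 20    # at most 20 concurrent cvs servers
}
```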
 
> c) Should we be going to a 4x4 machine rather than our current 2x2?

It sounds like you should consider it, but you should probably ask
someone familiar with Solaris system performance tuning.  It's possible
that your problem is more with memory or I/O than it is with CPU.  Also,
the system may not be tuned appropriately for your work load.

> Context switching seems to be excessive, especially when we have more
> than 2 or 3 cvs ops running together. In the mornings, it's hitting as
> much as 12K per second, which is definitely a killer on a 2-processor
> system.
> 
> a) Is this normal?

Probably.  CVS is typically I/O intensive, which generally means lots of
context switches.

> b) Is cvs setup with a ping parameter or some kind of "am I alive"
> setting that hits every 1, 2 or 5 seconds?  If so, can it be reset?

No, CVS doesn't do any kind of pinging.

> Is there any kind of performance bug where just a few processes take up
> a lot of CPU - especially branch commands?  We were getting CPU time
> readings of 41 on one sub-branch process.

> In the doc, I read about setting the LockDir=directory in CVSROOT, where
> I assume I create my own dir in the repository (LockDir=TempLockFiles).

No, you create your own dir somewhere other than in the repository (and
you need to give LockDir an absolute path, not a relative path).  At the
very least, that allows you to offload the lock I/O to a different disk
than the regular I/O.
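Concretely, the setting lives in the CVSROOT/config administrative file (the directory name below is illustrative):

```
# CVSROOT/config -- LockDir must be an absolute path, outside the repository
LockDir=/tmp/CVSLockDir
```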

> a) Just what is an in-memory file system?

Just what it says -- a filesystem where the data only exists in memory
(rather than being written to a disk); they are commonly used for /tmp.
If you're already using such a filesystem for /tmp, you can just put
LockDir on /tmp (e.g., /tmp/CVSLockDir).  I believe the Solaris variety
is called tmpfs.

> b) Is speed garnered because all the lock files are in one directory
> and cvs does not need to traverse the project repository?

No, the speed is gained by not waiting for physical I/O to a disk drive.
Because the data doesn't survive a reboot, the system may be able to
take other shortcuts, too.

> c) Is the speed increase significant?

It can be.

> d) Will there be any problems with having lock files from multiple
> different projects  in the repository flooding this same directory?

No.

> In this LockDir case, we are going to have lock files from multiple
> different projects all in one dir. It appears by the statement:  "You
> need to create directory, but CVS will create subdirectories of
> directory as it needs them" that the full path is still used, correct?

Correct.  CVS will mirror the repository directory structure under the
LockDir directory.

> The beginning states that cvs will try every 30 seconds to see if it
> still needs to wait for lock.
> 
> e) Any chance this is a parameter that can be decreased - or would
> its checking more often just create more overhead and slow things down?

As of CVS 1.11.6, it's actually a bit more sophisticated -- if your
system allows sub-second sleeping, CVS will first try sleeping for 2, 4,
8, ..., 512 microseconds before giving up and sleeping for 30 seconds.
In a busy repository like yours, there can be a lot of contention for
the master locks, but they're only held for a very short time, so the
short sleep avoids a long wait in that case.  You get the "waiting for
x's lock" message with every 30-second sleep, so you can get a feel for
how often you're running into lock contention problems.
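The schedule described above can be sketched as a simple loop (timing only, not CVS's actual C code; real CVS sleeps with sub-second precision where the system supports it):

```shell
# Sketch of the doubling back-off: 2, 4, ..., 512 microseconds,
# then fall back to 30-second retries with the "waiting for lock" message.
us=2
while [ "$us" -le 512 ]; do
    echo "re-check lock after ${us} microseconds"
    us=$((us * 2))
done
echo "still locked: retry every 30 seconds, printing the waiting message"
```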

Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Mark D. Baushke

Richard Pfeiffer <[EMAIL PROTECTED]> writes:

> BASICS:
>
> We are running cvs-1.11.  I did migrate us to 1.11.9, but it turned out it
> does not mesh with Eclipse, which is what our developers use.  The latest
> upgrade Eclipse can use is 1.11.6.  From what I read, that has its own
> problems, so 1.11.5 would be the latest we could use.

Have you reported the problems to the Eclipse folks? A separate e-mail
on what problems you ran into may be useful to help other folks some
day.

> 1)
>
> Should cvs even be able to handle this kind of load?  To some of us, it's
> amazing and a credit to cvs that this thing hasn't crashed already.  But,
> to avoid a crash, when we did the metrics and saw what our percentages on
> cpu, switching, kernel, etc., and especially load (46) were, we shut down
> inetd.conf, waited for some cvs processes to complete and the load drop to
> 10 before starting inetd.conf back up.
>
> a) should we be splitting up our repository and giving each project
> their own?

Well, that might help you scale a bit more.

> b) is there a way to limit the number of pserver calls made at any one
> time?

I am not aware of any on the Solaris inetd. However, you could probably
borrow code from the public tcp_wrapper software and have it check the
load on your system and then refuse connections to the pserver for a
time.

> c) Should we be going to a 4x4 machine rather than our current 2x2?

How fast are you growing? Will it give you enough room? If not, then you
might end up needing to shed some of the projects to another machine as
well as moving to more processors on your current hardware.

To be honest, I think you might be better to drop another 14GB of memory
on the system to see if that improves your performance.

>
> 2)
>
> Context switching seems to be excessive, especially when we have more than
> 2 or 3 cvs ops running together. In the mornings, it's hitting as much as
> 12K per second, which is definitely a killer on a 2-processor system.
>
> a) Is this normal?

To be honest, I have not benchmarked cvs in this situation.

>
> b) Is cvs setup with a ping parameter or some kind of "am I alive"
> setting that hits every 1, 2 or 5 seconds?  If so, can it be reset?

No. The :pserver: client/server protocol assumes a tcp connection, but
to the best of my understanding does not send any kind of a keep-alive
over the link.

>
> 3)
>
> Is there any kind of performance bug where just a few processes take up a
> lot of CPU - especially branch commands?  We were getting CPU time
> readings of 41 on one sub-branch process.

I am sure that there are many bugs that remain in cvs, but I am not
aware of any particular performance problems. To create a branch tag,
all of the files that are being tagged will be mmap()ed into memory,
modified to have the new tag near the head of the file, written into a
',filename,' and then renamed as 'filename,v' when the operation is
complete before it moves along to the next file in the list to be
tagged.
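The write-to-`,file,`-then-rename step can be sketched in shell (real CVS does this in C via mmap; the file contents and paths below are fake placeholders):

```shell
# Shell sketch of the write-then-rename pattern described above.
T=$(mktemp -d)
printf 'head 1.1;\nsymbols;\n' > "$T/file,v"          # pretend RCS file
cp "$T/file,v" "$T/,file,"                            # work on a temp copy
printf 'symbols mybranch:1.1.0.2;\n' >> "$T/,file,"   # add the new tag (fake)
mv "$T/,file," "$T/file,v"                            # rename completes the update
```

Because the rename is atomic, readers always see either the old or the new ,v file, never a half-written one.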

>
> 4)
>
> In the doc, I read about setting the LockDir=directory in CVSROOT, where I
> assume I create my own dir in the repository (LockDir=TempLockFiles).
>
> We DO NOT have this set as yet, but I think I might like to try it for
> speed sake.  All our developers need write access to the repository, but
> the doc states:
>
> It can also be used to put the locks on a very fast in-memory file system
> to speed up locking and unlocking the repository.
>
>
>
> a) Just what is an in-memory file system?

Some operating systems have a way to create a mfs (memory file system).
I believe that the closest that Solaris comes is the use of a swap
filesystem which may be memory resident for much of the time.

>
> b) Is speed garnered because all the lock files are in one directory
> and cvs does not need to traverse the project repository?

No, there are still multiple directories using a LockDir and a traversal
is still done. The difference is that the operations are typically
handled much faster.

The creation of cvs locks is a multi-step process that ends with a
#cvs.lock directory being created for the duration of the lock and then
being removed. For some operations, the creation of the read file, the
creation of the lock file and reading the contents of the directory and
any files needed from the repository and the removal of the lock
directory can take milliseconds. Being able to improve the performance
of lock creation and deletion will improve the overall access time of
the repository.
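The key step is that mkdir(2) is atomic: only one process can create #cvs.lock, so creation doubles as acquisition. A shell sketch of the idea (paths illustrative):

```shell
# Sketch of why #cvs.lock works: mkdir is atomic, so only one process
# can create the directory, and creation doubles as lock acquisition.
D=$(mktemp -d)                       # stand-in for a repository directory
if mkdir "$D/#cvs.lock" 2>/dev/null; then
    echo "write lock acquired"
    rmdir "$D/#cvs.lock"             # release the lock when done
else
    echo "lock busy: back off and retry"
fi
```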

> c) Is the speed increase significant?

If you are able to write to memory faster than to your repository, then
the difference in speed between those two mediums is how much faster you
will be able to create your lock.  I would guess that in most cases of
a repository over NFS, or slow local disks, the use of a memory filesystem
would be faster.  The use of a swap system that is always being paged out
to disk may actually be slower if the page disk is slow.

more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Richard Pfeiffer
Good Afternoon,
 
I have a few more questions related to performance.  Some MAY be a bit 'out-of-the-box', but please bear with me!
 
BASICS:
We are running cvs-1.11.  I did migrate us to 1.11.9, but it turned out it does not mesh with Eclipse, which is what our developers use.  The latest upgrade Eclipse can use is 1.11.6.  From what I read, that has its own problems, so 1.11.5 would be the latest we could use. 
 
Our server machine is a Solaris 8, 2 processor box, 2GB RAM, 28GB disk,
900 MHz.  This machine is dedicated to cvs.  The only other things on it or hitting it are an LDAP server, bugzilla and viewcvs.
 
Our repository sits on a NetApp slice (just a big, beefy disk) that is NFS mounted to 
our server.  This is a production level NFS mount and there are NO other mounts.  We originally did this in the interest of speed – we had 4 minute checkouts on a local repository, 36 seconds on the NFS mount.  
I know there are NFS/CVS issues, but I have spoken to this list regarding this, and the
conclusion was that with a production level NFS server, “we will almost certainly not have any problems”.
 
And we haven’t.  We’ve been running like this for over a year now.  Our problem, since so many project and users have been added, is with performance.
 
We now can have as many as 77 concurrent cvs processes going.  That is excessive and very rare, but did happen when an 8Mb xml file was checked in as ascii, which causes a diff
to be made for any and every update command on it.  It was then re-checked in as binary and that took care of that.
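The fix described (re-marking an existing file as binary so no diffs are computed) is normally done with `cvs admin -kb`; a command sketch with an illustrative filename:

```
cvs admin -kb big-data.xml    # mark the file binary (-kb) in the repository
cvs update big-data.xml       # refresh the working copy under the new mode
```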
But normally, we can have 3 branching processes going at once on one project, along with numerous updates, co, etc against the same project – while various other projects are doing the same against their own.  I’d say 36 cvs processes going at once isn’t a stretch.  So, given this scenario:
 
 
1)
Should cvs even be able to handle this kind of load?  To some of us, it’s amazing and a credit to cvs that this thing hasn’t crashed already.  But, to avoid a crash, when we did the metrics and saw what our percentages on cpu, switching, kernel, etc., and especially load (46) were, we shut down inetd.conf, waited for some cvs processes to complete and the load drop to 10 before starting inetd.conf back up.
a) should we be splitting up our repository and giving each project their own?
b) is there a way to limit the number of pserver calls made at any one time? 
c) Should we be going to a 4x4 machine rather than our current 2x2?
 
 
2)
Context switching seems to be excessive, especially when we have more than 2 or 3 cvs ops running together. In the mornings, it's hitting as much as 12K per second, which is definitely a killer on a 2-processor system.  
a) Is this normal?
b) Is cvs setup with a ping parameter or some kind of “am I alive” setting that hits every 1, 2 or 5 seconds?  If so, can it be reset?
 
 
3) 
Is there any kind of performance bug where just a few processes take up a lot of CPU – especially branch commands?  We were getting CPU time readings of 41 on one sub-branch process.
  
 
 
4) 
In the doc, I read about setting the LockDir=directory in CVSROOT, where I assume I create my own dir in the repository (LockDir=TempLockFiles).  
We DO NOT have this set as yet, but I think I might like to try it for speed sake.  All our developers need write access to the repository, but the doc states:
It can also be used to put the locks on a very fast in-memory file system to speed up locking and unlocking the repository.
 
a) Just what is an in-memory file system?
b) Is speed garnered because all the lock files are in one directory and cvs does not need to traverse the project repository?
c) Is the speed increase significant?  
d) Will there be any problems with having lock files from multiple different projects  in the repository flooding this same directory?
If I need to search for errant locks, the way we are currently set up, I can go to the project where I know they exist and do a find for them.  In this LockDir case, we are going to have lock files from multiple different projects all in one dir. It appears by the statement:  “You need to create directory, but CVS will create subdirectories of directory as it needs them” that the full path is still used, correct?  (So, it would still be an easy search?) 
 
I then read the link: 10.5-Several developers simultaneously attempting to run CVS, that goes along with LockDir.
The beginning states that cvs will try every 30 seconds  to see if it still needs to wait for lock.  
e) Any chance this is a parameter that can be decreased – or would its checking more often just create more overhead and slow things down?
  
In the end, it states that if someone runs

cvs ci a/two.c b/three.c

and someone else runs cvs update at the same time, the person running update might get only the change to `b/three.c' and not the change to `a/two.c'.
f) I assume this does not relate only to when LockDir is set

Re: cvs performance questions

2003-10-28 Thread Derek Robert Price
Larry Jones wrote:

| It's probably worth a few words about locking details.  Most operations
| lock a single directory at a time, which minimizes lock contention.
| Checkout is the notable exception, it locks the entire tree for the
| duration of the checkout.  Some versions of CVS also lock the whole tree
_commit_ (not checkout) is the notable exception, with not-so-notable
exceptions being `cvs admin' and all of the watch commands.
Derek

--
~*8^)
Email: [EMAIL PROTECTED]

Get CVS support at !
--
"If triangles had a God, He'd have three sides."
   -- Old Yiddish Proverb




Re: cvs performance questions

2003-10-23 Thread Larry Jones
Richard Pfeiffer writes:
>  
> When the branch command is run, usually via Eclipse, the problem lies in
> that the tags get created but the new branch creation sometimes fails.  

I don't understand that statement -- as far as CVS is concerned,
creating the tags *is* creating the branch, there's nothing else that
needs to be done.

> Sometimes it does work, slowly, and the branch is created; but other
> times you get these locking warnings, cvs times out and branching fails:

Once again, I don't understand.  Locking messages are perfectly normal
in a busy repository: the messages are informational, not warnings, and
CVS never times out -- it will wait forever to try to obtain the lock.

> Now it appears to me, by the statement " When we get this error we are
> trying to create a new branch off of the proj-10_5_0_1_perf branch." ,
> that they are creating a branch off of a branch.  Thus, they are getting
> further and further from the trunk. I would imagine this alone would
> have to be causing at least some of their speed problems.

No.  Creating a branch in CVS is just creating a tag, which takes
constant time.  Checkin, checkout, etc., are more expensive, but not tag
creation.

It's probably worth a few words about locking details.  Most operations
lock a single directory at a time, which minimizes lock contention. 
Checkout is the notable exception, it locks the entire tree for the
duration of the checkout.  Some versions of CVS also lock the whole tree
for tag operations -- if your server is one of them, that could explain
the problem; you should probably upgrade to the latest stable release if
you haven't already.

Locking involves creating and deleting lots of little directories and
files; those can be expensive operations on some file systems.  If your
repository is on such a filesystem and you have another filesystem where
those operations are very cheap (such as an asynchronous in-memory
filesystem), you might want to set the LockDir option in your
CVSROOT/config file to put the lock files there rather than in the
repository.
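A minimal sketch of the change Larry describes (the LockDir option is real; the path shown is a hypothetical directory on a filesystem where mkdir/rmdir are cheap, such as tmpfs, and it must be writable by all CVS users):

```
# CVSROOT/config -- sketch only; /var/lock/cvs is a hypothetical
# location on a fast (e.g. in-memory) filesystem
LockDir=/var/lock/cvs
```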

-Larry Jones

All this was funny until she did the same thing to me. -- Calvin




Re: cvs performance questions

2003-10-23 Thread Richard Pfeiffer
Thank you very much, Larry and Eric, for the responses dealing with cvs performance issues, especially as they relate to branching.  I believe I understand that branching speed is affected by a number of things, including:
1) the size of the project dir
2) where the branch is taking place in the project relative to the number of subdirs that will need to be locked (lock overhead)
3) the number of users who are competing for the same dirs and waiting for each other's locks
4) how many revisions exist on the branch
5) branches that were created many revisions ago
6) the branches' distance from the trunk
 
 
 What follows is the scenario we have here; I would like to get your expert opinions if possible.  I am wondering if I need to have the developers coordinate branch creation so that no one else is creating one at the same time, which causes, at worst, timeout errors and, at best, a branch command that works but takes forever:
 
As stated by a developer:
Relevant info:  we have a codebase in the project directory.  We created a branch called "proj-10_5_0_1_perf".  When we get this error we are trying to create a new branch off of the proj-10_5_0_1_perf branch.  There are probably a couple of people trying to do this right now - perhaps that's why we're getting lock problems.  Also, due to our development process, people are checking in code and checking out code on a very frequent basis.
 
When the branch command is run, usually via Eclipse, the problem lies in that the tags get created but the new branch creation sometimes fails.  
Sometimes it does work, slowly, and the branch is created; but other times you get these locking warnings, cvs times out and branching fails:

proj_1_2_3: cvs server: [14:16:25]  waiting for bob's lock in /cvs_repos/proj/src ...
proj_1_2_3: cvs server: [14:16:55]  obtained lock in /cvs_repos/proj/src ...
etc...
 
Now it appears to me, from the statement "When we get this error we are trying to create a new branch off of the proj-10_5_0_1_perf branch.", that they are creating a branch off of a branch.  Thus, they are getting further and further from the trunk.  I would imagine this alone would have to be causing at least some of their speed problems.  Then combine that with competition among other users also doing branching and/or even simple co/ci commands.  We actually have two different project groups here, both of whom are doing massive branching exercises as described.  The rest of the groups I administer are using cvs in a more "normal" way and they have no performance issues whatsoever.
 
Comments?  Is this level of branching abnormal? 
 
I hope I explained this clearly and I thank you all VERY much for your time.  I have learned a ton from this list!
-R


Re: cvs performance questions

2003-10-21 Thread Larry Jones
Richard Pfeiffer writes:
>  
> 1)  If we have multiple users sharing the same userid, would that create
> any kind of locks, etc, that could hinder co/ci times?

Not that I can think of.

> 2) There is a great deal of branching taking place, and it has come to
> my attention that there are some  old undeleted tags hanging around.
> Could enough of these cause a performance issue?

Tags by themselves are very cheap.  Branches, on the other hand, can be
expensive, since the farther a revision is from the head of the trunk,
the more diffs have to be applied to generate it.  If you're actively
using lots of revisions that are far removed from the head of the trunk,
that could explain your performance problems.

Of course, the real way to tell is to take some measurements and analyze
them to determine the overt cause of the problems: is it excessive I/O,
CPU utilization, network latency, or what?  Once you know that, you'll
have a much better chance of fixing it.

-Larry Jones

It works on the same principle as electroshock therapy. -- Calvin




Re: cvs performance questions

2003-10-21 Thread Eric Siegerman
On Tue, Oct 21, 2003 at 02:10:46PM -0700, Richard Pfeiffer wrote:
> (some of the project directories in the repository are
> rather large, 357 Mb, etc)
>  
> 1)  If we have multiple users sharing the same userid, would
> that create any kind of locks, etc, that could hinder co/ci
> times?

Locks are certainly created, and that will certainly affect co/ci
times.  But the fact that multiple users share the same userid is
irrelevent.  (The practice is usually recommended against for
reasons of accountability, but it should have no impact on
performance.)

The overall size of the source tree being operated on will have
an effect here.  CVS's locking mechanism calls for creating
directories and/or files under *every* directory in the
repository that's to be visited, *not* just at the root of the
subtree of interest.  So locking overhead is O(N) on the number
of directories in the subtree, not O(1).  (But note that I said
"in the subtree".  If you do a "cvs co" on a leaf directory, i.e.
one containing only files, that'll be fast even if the repo as a
whole has thousands of directories.  Same if you do a "cvs co -l"
on any directory; since it doesn't recurse, it doesn't have to
lock the subdirectories.)
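Eric's O(N) point can be illustrated with a toy model (not CVS code; it merely counts the directories a recursive operation would have to lock):

```python
import os
import tempfile

def dirs_to_lock(root, recursive=True):
    """Directories a CVS-style operation must lock: every directory
    under root when recursing, else (like "cvs co -l") just root."""
    if not recursive:
        return [root]
    return [dirpath for dirpath, _, _ in os.walk(root)]

# Build a toy "repository": 3 modules with 4 subdirectories each.
repo = tempfile.mkdtemp()
for m in range(3):
    for s in range(4):
        os.makedirs(os.path.join(repo, f"module{m}", f"sub{s}"))

print(len(dirs_to_lock(repo)))          # whole tree: 1 + 3 + 12 = 16 locks
print(len(dirs_to_lock(os.path.join(repo, "module0", "sub0"))))  # leaf: 1
print(len(dirs_to_lock(repo, recursive=False)))   # non-recursive: 1
```

So checking out a leaf directory stays cheap no matter how large the repository grows, while a full checkout pays once per directory.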

Of course, as the number of simultaneous users goes up (whether
or not they're sharing userids), the likelihood of their
contending for the same directories, and thus having to wait for
each others' locks, also goes up.


> 2) There is a great deal of branching taking place, and it has
> come to my attention that there are some  old undeleted tags
> hanging around.  Could enough of these cause a performance
> issue?

Sure, but the value of "enough" is likely *huge* :-)  Tags per se
don't take up much space, or, I imagine, much processing time.
Deleting old tags is unlikely to have a noticeable effect.  (It
can make "cvs log" output less cluttered, at the cost of losing
information of course; but DON'T do it merely as a performance
measure.)

What's more likely to slow things down is performing operations
on branches that:
  - were created many revisions ago
  - have many revisions on the branch

Specifically, the longer the path (along the revision tree) from
the revision you're working with to the one at the head of the
trunk, the longer the operation will take.  See rcsfile(5) for an
explanation.

--

|  | /\
|-_|/  >   Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
|  |  /
When I came back around from the dark side, there in front of me would
be the landing area where the crew was, and the Earth, all in the view
of my window. I couldn't help but think that there in front of me was
all of humanity, except me.
- Michael Collins, Apollo 11 Command Module Pilot





cvs performance questions

2003-10-21 Thread Richard Pfeiffer
Good Afternoon,
 
We have been experiencing some slow cvs performance and I had a few quick questions to see if either of these would have any bearing (some of the project directories in the repository are rather large, 357 Mb, etc)
 
1)  If we have multiple users sharing the same userid, would that create any kind of locks, etc, that could hinder co/ci times?
 
2) There is a great deal of branching taking place, and it has come to my attention that there are some  old undeleted tags hanging around.  Could enough of these cause a performance issue?
 
Thank you!
-R


Re: Improving CVS Performance?

2003-03-30 Thread Larry Jones
Vivek Venugopalan writes:
> 
>We have a 2 GB repository running on a Linux system for over a year.  It
> has a few thousand files (approx 10,000) and has progressively
> become very slow.  We have a lot of tags (daily builds with a tag for each
> build) and a few branches ( < 10).  Can you folks suggest what are the
> things I can do to improve the overall performance of the system?

Tags are generally very cheap, they don't take up much space or take
much time to process.  I don't have any CVS-specific suggestions, you
just need to do normal system performance tuning: profile your system to
find out what it's doing, identify bottlenecks, then devise solutions. 
Since CVS is not generally CPU intensive, it's likely that your problem
involves disk I/O, but you need measurements to be sure.  You can
probably get better advice from a Linux list than you can from here.

-Larry Jones

Sheesh.  Who can fathom the feminine mind? -- Calvin




Re: Improving CVS Performance?

2003-03-29 Thread Greg A. Woods
[ On Saturday, March 29, 2003 at 08:34:13 (+0530), Vivek Venugopalan wrote: ]
> Subject: Improving CVS Performance?
>
>We have a 2 GB repository running on a Linux system for over a year.  It
> has a few thousand files (approx 10,000) and has progressively
> become very slow.  We have a lot of tags (daily builds with a tag for each
> build) and a few branches ( < 10).  Can you folks suggest what are the
> things I can do to improve the overall performance of the system?

Get rid of any unnecessary tags -- e.g. something like:  remove all the
daily tags older than a week, keeping only weekly tags, then remove all
the weekly tags older than a month keeping only the monthly tags (and
arrange the rest of your SCM procedures to fit this "schedule").
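Greg's pruning schedule is easy to script.  The sketch below assumes a hypothetical build-YYYYMMDD tag-naming scheme and keeps Monday tags as the weekly survivors; the actual deletion for each stale tag would then be `cvs rtag -d <tag> <module>`:

```python
from datetime import date

def tags_to_prune(tags, today, keep_days=7):
    """Given daily build tags named 'build-YYYYMMDD' (a hypothetical
    scheme), return those older than keep_days, sparing Mondays so a
    weekly tag survives each pruned week."""
    stale = []
    for t in tags:
        d = date(int(t[6:10]), int(t[10:12]), int(t[12:14]))
        if (today - d).days > keep_days and d.weekday() != 0:  # 0 = Monday
            stale.append(t)
    return stale

tags = [f"build-2003032{i}" for i in range(1, 9)]  # build-20030321 .. build-20030328
print(tags_to_prune(tags, date(2003, 3, 30)))
# -> ['build-20030321', 'build-20030322']  (both over a week old, neither a Monday)
```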

> For example, a cvs update takes close to half an hour for the entire
> repository on the local box itself.

That's not too unexpected I suppose.  What you should expect depends
almost entirely on your disk I/O system and filesystem implementation
and how much RAM for buffer cache you have.  CVS is all about I/O
throughput (especially when just running on the local system) and the
better your OS can cache FS data then the fewer I/Os will be needed.

Note also that Linux (like NetBSD prior to 1.6) has never been known to
be a stellar performer at optimal filesystem cache utilisation.  I would
choose FreeBSD-4.x for my CVS server if I were you.

With today's blindingly fast CPUs you can "waste" a lot of cycles just
to save a few I/Os and you'll still come out way on top, and the way to
do that is get your disks as close to your host bus as possible and then
use OS-based logical volume striping to get I/Os going in parallel and
as fast as possible.  Of course you also should use mirroring to keep
things reliable enough to use in production.

A decently fast (> 300MHz) Intel Xeon CPU (i.e. with at least 2MB of
cache RAM), a GB of RAM or two, with four or six (or so, an even number)
15,000 RPM Ultra-320 LVD SCSI drives on a matching high-end Adaptec
controller (or pair of controllers on a motherboard with at least two
completely separate PCI buses), and using Vinum for software striping
and mirroring (i.e. create two identical pairs of striped volumes from
half your disks and then mirror them) should give you a reasonably fast
CVS server.

I have my /cvs on one external RAID-5 box and my /work on another
(CRD-5500's with 20 MB/s host interfaces but older 8-bit drives and not
enough cache RAM), both on the same host adapter (AIC-7880 onboard).

Here are sample times from my nightly update of my local NetBSD working
directories from a local copy of the NetBSD repository:

START:2003/03/28-23:55:30: cvs update /work/NetBSD/src
[[ ... about 50 files updated ... ]]
DONE:2003/03/29-00:18:41: updating /work/NetBSD/src

START:2003/03/29-00:18:41: cvs update /work/NetBSD/src-1.6
[[ ... no files updated ... ]] 
DONE:2003/03/29-00:45:40: updating /work/NetBSD/src-1.6

START:2003/03/29-00:45:40: cvs update /work/NetBSD/xsrc
[[ .. one file updated ... ]]
DONE:2003/03/29-00:53:53: updating /work/NetBSD/xsrc

START:2003/03/29-00:53:53: cvs update /work/NetBSD/pkgsrc
[[ ... almost 200 very small files updated ... ]]
DONE:2003/03/29-01:26:14: updating /work/NetBSD/pkgsrc

According to the nightly RSYNC job that runs just before the above
updates the repository contains:

Number of files: 187695
Total file size: 2193649940 bytes

(the majority of those files are in pkgsrc, but they're all quite small)

Here's another view where /ocvs is approximately (within 10%) the
non-NetBSD portion of what's all combined in /cvs now:

Filesystem  1K-blocks UsedAvail %Cap  iUsed iAvail %iCap Mounted on
/dev/sd0f 2834732  1966837   726158  73% 183374  529072  25% /ocvs
/dev/sd5c12186596  4428041  7149225  38% 379877 2667545  12% /cvs


The FreeBSD system configuration outlined above should be at least an
order of magnitude faster than my system.

-- 
Greg A. Woods

+1 416 218-0098;<[EMAIL PROTECTED]>;   <[EMAIL PROTECTED]>
Planix, Inc. <[EMAIL PROTECTED]>; VE3TCP; Secrets of the Weird <[EMAIL PROTECTED]>




Improving CVS Performance?

2003-03-28 Thread Vivek Venugopalan
Hello
   We have a 2 GB repository running on a Linux system for over a year.  It
has a few thousand files (approx 10,000) and has progressively
become very slow.  We have a lot of tags (daily builds with a tag for each
build) and a few branches ( < 10).  Can you folks suggest what are the
things I can do to improve the overall performance of the system?

For example, a cvs update takes close to half an hour for the entire
repository on the local box itself.

TIA

Vivek






Re: CVS Performance

2001-01-05 Thread Noel L Yap

For permissions maintainability reasons, it'd be great if you could also specify
the location of CVSROOT files that need to be writable by everyone.

Noel




[EMAIL PROTECTED] on 2001.01.05 11:58:12

To:   [EMAIL PROTECTED]
cc:   [EMAIL PROTECTED], [EMAIL PROTECTED] (bcc: Noel L Yap)
Subject:  Re: CVS Performance




Michael Peck writes:
>
> I also discovered that as the history file (CVSROOT/history) grows, it
> really becomes a HUGE bottleneck.
>
> If you delete/rename the file, cvs will stop appending to it.

As of CVS 1.11, you can also set LogHistory in CVSROOT/config to only
record events that you're interested in.

-Larry Jones

I wonder if I can grow fangs when my baby teeth fall out. -- Calvin











Re: CVS Performance

2001-01-05 Thread Larry Jones

Michael Peck writes:
> 
> I also discovered that as the history file (CVSROOT/history) grows, it
> really becomes a HUGE bottleneck.
> 
> If you delete/rename the file, cvs will stop appending to it.

As of CVS 1.11, you can also set LogHistory in CVSROOT/config to only
record events that you're interested in.
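As a concrete sketch (the LogHistory option is real as of CVS 1.11; which event letters you keep is a policy choice):

```
# CVSROOT/config -- record only tagging (T) and commits
# (M=modified, A=added, R=removed); an empty value, "LogHistory=",
# disables history logging altogether
LogHistory=TMAR
```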

-Larry Jones

I wonder if I can grow fangs when my baby teeth fall out. -- Calvin




Re: CVS Performance

2001-01-05 Thread Michael Peck

I also discovered that as the history file (CVSROOT/history) grows, it
really becomes a HUGE bottleneck.

If you delete/rename the file, cvs will stop appending to it.

Mike

Eric Siegerman wrote:

> On Thu, Jan 04, 2001 at 04:51:23PM -0600, Cheryl Tipple wrote:
> > It appears we have a performance problem within our CVS system.
> > [CPU, network bandwidth, filesystem access path, RAM, swap are
> > all far from saturated]
> >
> > Can anyone tell me if there are any
> > [...] tests I can run to trace this problem?  We are also
> > going to try a network sniffer from our end to identify where the data
> > is moving.
>
> How about physical disk I/O?  You don't say which operations are
> causing you trouble, but update and commit both do a lot more
> disk I/O than one would naively expect -- they do locking by
> creating temporary directories (because mkdir() is atomic, I
> presume):
>   1. create a lock directory in each repository directory to be
>  operated on, recursively all the way down
>   2. do the operation
>   3. delete all the lock directories
>
> Note that:
>   - This is three recursive passes over the affected subtree, NOT
> a single pass in which all three operations are done on each
> directory as it is visited
>   - The per-directory progress messages are all printed during
> step 2; steps 1 and 3 are silent.
>
> To find out how much of the slowness is network-related, remove
> the network from the equation: try doing the same operations
> locally (ie. non-client/server) on the server machine.  To do
> this, set CVSROOT (or the value of the prefix -d option) to the
> unadorned absolute pathname of the repository, ie. with no access
> method specified.  (Even if you don't want to give accounts to
> the developers, the admin can do a checkout or two for testing --
> or even commits, on a scratch copy of the repository.)
>
> Good luck.
>
> --
>
> |  | /\
> |-_|/  >   Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
> |  |  /
> Interviewer: You've been looking at the stars all your life:
> Is there anything in astrology?
> Arthur C. Clarke: It's utter nonsense.  But I'm a Sagittarius,
> so I'm naturally skeptical.
>






Re: CVS Performance

2001-01-04 Thread Eric Siegerman

On Thu, Jan 04, 2001 at 04:51:23PM -0600, Cheryl Tipple wrote:
> It appears we have a performance problem within our CVS system.
> [CPU, network bandwidth, filesystem access path, RAM, swap are
> all far from saturated]
> 
> Can anyone tell me if there are any 
> [...] tests I can run to trace this problem?  We are also
> going to try a network sniffer from our end to identify where the data
> is moving.

How about physical disk I/O?  You don't say which operations are
causing you trouble, but update and commit both do a lot more
disk I/O than one would naively expect -- they do locking by
creating temporary directories (because mkdir() is atomic, I
presume):
  1. create a lock directory in each repository directory to be
 operated on, recursively all the way down
  2. do the operation
  3. delete all the lock directories
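A simplified model of that protocol (CVS's master lock directory really is named `#cvs.lock`, though real CVS also uses per-directory read/write lock files; this sketch keeps only the mkdir-is-atomic idea):

```python
import os
import tempfile

LOCK = "#cvs.lock"  # CVS's master lock directory name

def lock_tree(root):
    """Pass 1: take a lock in every directory, recursively.
    os.mkdir is atomic, so two processes can never both succeed."""
    for dirpath, dirnames, _ in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != LOCK]  # don't descend into locks
        os.mkdir(os.path.join(dirpath, LOCK))

def unlock_tree(root):
    """Pass 3: remove every lock again -- a second full traversal."""
    for dirpath, dirnames, _ in os.walk(root):
        if LOCK in dirnames:
            os.rmdir(os.path.join(dirpath, LOCK))
            dirnames.remove(LOCK)

repo = tempfile.mkdtemp()
os.makedirs(os.path.join(repo, "src", "lib"))

lock_tree(repo)
try:
    os.mkdir(os.path.join(repo, LOCK))   # a competing locker fails here...
except FileExistsError:
    print("lock contention detected")    # ...and must wait and retry
unlock_tree(repo)
```

Note that the operation itself is a third traversal in between, which is why update and commit do far more directory I/O than one would naively expect.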

Note that:
  - This is three recursive passes over the affected subtree, NOT
a single pass in which all three operations are done on each
directory as it is visited
  - The per-directory progress messages are all printed during
step 2; steps 1 and 3 are silent.


To find out how much of the slowness is network-related, remove
the network from the equation: try doing the same operations
locally (ie. non-client/server) on the server machine.  To do
this, set CVSROOT (or the value of the prefix -d option) to the
unadorned absolute pathname of the repository, ie. with no access
method specified.  (Even if you don't want to give accounts to
the developers, the admin can do a checkout or two for testing --
or even commits, on a scratch copy of the repository.)
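Concretely, the two root forms Eric contrasts look like this (host and repository path are placeholders):

```
# client/server -- the network is in the path:
CVSROOT=:pserver:user@cvshost:/cvs_repos
# local -- same repository, no access method, no network:
CVSROOT=/cvs_repos
```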

Good luck.

--

|  | /\
|-_|/  >   Eric Siegerman, Toronto, Ont.[EMAIL PROTECTED]
|  |  /
Interviewer: You've been looking at the stars all your life:
Is there anything in astrology?
Arthur C. Clarke: It's utter nonsense.  But I'm a Sagittarius,
so I'm naturally skeptical.




CVS Performance

2001-01-04 Thread Cheryl Tipple

It appears we have a performance problem within our CVS system.  We are
not able to make use of more than approximately 5% of the CPU on an
overall basis.  Our network bandwidth should be sufficient, we are using
100 base T, which is far from saturated.  We are using a 1 Gb port into the
file system, yet we are only able to get this small amount of
bandwidth.  Physical ram on the usage machine is 750 Meg, the program
typically uses about 60 Meg, leaving at least 50 Meg available for
expansion.  Virtual memory space available is well over 1G.  The
database we are dealing with is approximately 3 G.

Can anyone tell me if there are any settings that we might have
overlooked or any tests I can run to trace this problem?  We are also
going to try a network sniffer from our end to identify where the data
is moving.

Regards,
Cheryl Tipple


