Re: Resend importinfo patch

2003-10-28 Thread Wu Yongwei
Larry Jones wrote:
> > *I* can accept Larry's opinions if there are ways to limit the
> > import operation to only some specific persons, say, the PM and the
> > main developer.
>
> Why would you want to do that?  There's nothing you can do with
> import that you can't do with add, commit, tag, etc.
That sounds like a strange question to me, and I am not the only one who
wants this feature. Still, I'll try to name some reasons:
1) To avoid misuse. I once encountered a developer who, while just trying
out cvs, imported an existing module. As the CVS administrator I do not
like that at all. Of course he should not have done it, but it is better
to prevent it outright than to educate everyone, especially when I am not
his boss.
2) To restrict the creation of modules (and the initial files therein).
Maybe I am ignorant, but import is the only way I know of to create
modules on the CVS server, and I am sure none of my colleagues knows
another way either. I do not know how to use "cvs add" to do that, as you
implied is possible, nor have I read about it in any CVS book.
Best regards,

Wu Yongwei





Re: cvs performance questions

2003-10-28 Thread Larry Jones
Derek Robert Price writes:
> 
> _commit_ (not checkout) is the notable exception, with not-so-notable
> exceptions being `cvs admin' and all of the watch commands.

Yes, that's what I meant.  I can't imagine why my fingers typed
something different.  :-)

-Larry Jones

It doesn't have a moral, does it?  I hate being told how to live my life.
-- Calvin




Re: roll back repository ?

2003-10-28 Thread Larry Jones
Richard Pfeiffer writes:
>  
> Some of the projects in our cvs repository were corrupted, either when
> we had an extreme server overload  (77 cvs processes happening at once)
> and had to shutdown inetd to prevent our cvs server from crashing (swap
> space was almost gone, load was 46%, kernel was at 63%, some users
> sub-branching processes were taking huge amounts of CPU time) or when
> some changes were made to the server kernel parameters.

What kind of corruption?  CVS is very careful with the RCS files, so
unless you had a seriously broken kernel, the individual RCS files
should be intact.  Of course, you may have inconsistencies such as
partial commit or tag operations, but that should be resolvable by just
repeating the operation.
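
If you want to double-check the RCS files themselves and you have the
stock RCS tools installed, a quick sweep like this (path assumed) will
make rlog print an error for any ,v file it cannot parse:

  find /cvs/cvsroot/project -name '*,v' -print | xargs rlog -R > /dev/null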

-Larry Jones

He just doesn't want to face up to the fact that I'll be
the life of every party. -- Calvin




Re: Multilevel vendor branch import

2003-10-28 Thread Larry Jones
Mark D. Baushke writes:
> 
> Between now and the release of svn 1.0, I believe it is possible to
> address a number of the perceived limitations of CVS, but we need to
> determine how such changes fit into the overall way that CVS works
> rather than just adding a series of hacks.
> 
> If anyone has really thought out a series of changes to CVS that allow
> for an evolution into a more useful system, they are encouraged to post
> either on [EMAIL PROTECTED] or [EMAIL PROTECTED] about them.

My personal opinion is that CVS has gone about as far as it can go
without completely redesigning and reimplementing it.  In some ways, that
is what subversion has tried to do.  If they're successful, more power to
them; I'll be switching along with everyone else.  If not, we'll still
be here.

-Larry Jones

That's one of the remarkable things about life.  It's never so
bad that it can't get worse. -- Calvin




Re: commitinfo stopped working!

2003-10-28 Thread Larry Jones
Tom Marsh writes:
> 
> We want to control the names of directories that get created. All of a 
> sudden, it seems like commitinfo is getting ignored, as we're getting 
> names of directories not specifically allowed by our commitinfo!

Check to be sure that the actual $CVSROOT/CVSROOT/commitinfo file
contains what you think it should.  Don't trust a checked-out copy in a
sandbox; it's possible that something happened to the checked-out copy
that lives in the repository.  If that's the problem, "cvs init" will
re-check out all of the administrative files.
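
For example, one way to compare the committed head of commitinfo against
the copy CVS actually reads (repository path assumed; just a sketch):

  CVSROOT=/cvs/cvsroot
  cvs -d $CVSROOT checkout -p CVSROOT/commitinfo > /tmp/commitinfo.head
  if diff /tmp/commitinfo.head $CVSROOT/CVSROOT/commitinfo > /dev/null; then
      echo "checked-out copy matches the committed revision"
  else
      echo "mismatch -- consider re-running: cvs -d $CVSROOT init"
  fi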

-Larry Jones

I obey the letter of the law, if not the spirit. -- Calvin




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Larry Jones
Mark D. Baushke writes [about locks on in-memory filesystems]:
> 
> If you are able to write to memory faster than to your repository, then
> the difference in speed between those two mediums is how much faster you
> will be able to create your lock. I would guess that in most cases of
> a repository over NFS, or slow local disks, the use of a memory filesystem
> would be faster. The use of a swap system that is always being paged out
> to disk may actually be slower if the paging disk is slow.

Most filesystems have to worry about the on-disk filesystem being
consistent in the event of a system crash or power failure.  In many
cases, that means that operations that modify the filesystem metadata
(such as creating or deleting a file or directory) are synchronous --
the system actually waits for the data to get written to the disk before
continuing.  Memory filesystems aren't persistent, so they can avoid
that.  Since locking is just creating and deleting lots of little files
and directories, that can make a big difference, even if the memory
filesystem might get paged out to backing store eventually.

And anyone with a "slow" paging disk deserves what they get; the paging
disk should be the fastest disk in the system.

-Larry Jones

These findings suggest a logical course of action. -- Calvin




Re: Resend importinfo patch

2003-10-28 Thread Larry Jones
Wu Yongwei writes:
> 
> *I* can accept Larry's opinions if there are ways to limit the 
> import operation to only some specific persons, say, the PM and the main 
> developer.

Why would you want to do that?  There's nothing you can do with import
that you can't do with add, commit, tag, etc.
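
For instance, creating a brand-new top-level module with nothing but
checkout, add, and commit can look like this (repository path and module
name are only examples; on some setups you'd simply mkdir the directory
inside the repository instead):

  cd /tmp
  cvs -d /cvs/cvsroot checkout -l .     # check out just the top level of the repository
  mkdir newmodule
  cvs add newmodule                     # directory adds take effect in the repository immediately
  cd newmodule
  cp ~/src/*.c .                        # drop in the initial files
  cvs add *.c
  cvs commit -m "Initial revision of newmodule"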

-Larry Jones

Archaeologists have the most mind-numbing job on the planet. -- Calvin




Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Larry Jones
Richard Pfeiffer writes:
>
> MIME-Version: 1.0
> Content-Type: text/html; charset=us-ascii

Please do not send MIME and/or HTML encoded messages to the list.
Plain text only, PLEASE!

> We are running cvs-1.11.  I did migrate us to 1.11.9, but it turned out
> it does not mesh with Eclipse, which is what our developers use.  The
> latest upgrade Eclipse can use is 1.11.6.  From what I read, that has
> its own problems, so 1.11.5 would be the latest we could use.

What was the problem with 1.11.9?  I can't think of any incompatibilities,
and you're missing a lot of bug fixes by staying on 1.11.

> We now can have as many as 77 concurrent cvs processes going.

Wow.  That is one busy repository.  Are they all running, or are some of
them sleeping?

> Should cvs even be able to handle this kind of load?  To some of us,
> it's amazing and a credit to cvs that this thing hasn't crashed already.

There isn't any inherent reason that CVS can't handle the load.

> a) should we be splitting up our repository and giving each project
> their own?

That wouldn't help unless you gave each repository its own server
machine.

> b) is there a way to limit the number of pserver calls made at any
> one time?

Since CVS is invoked by inetd, that depends on your particular inetd
implementation.  I'm pretty sure that xinetd does allow you to limit the
number of concurrent servers for a particular service, so if your
implementation doesn't, you may want to consider switching (see
www.xinetd.org).
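
With xinetd the relevant attribute is "instances"; a service entry might
look roughly like this (paths and the limit are only illustrative):

  service cvspserver
  {
      disable      = no
      socket_type  = stream
      protocol     = tcp
      wait         = no
      user         = root
      server       = /usr/bin/cvs
      server_args  = -f --allow-root=/cvs/cvsroot pserver
      instances    = 30
  }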
 
> c) Should we be going to a 4x4 machine rather than our current 2x2?

It sounds like you should consider it, but you should probably ask
someone familiar with Solaris system performance tuning.  It's possible
that your problem is more with memory or I/O than it is with CPU.  Also,
the system may not be tuned appropriately for your work load.

> Context switching seems to be excessive, especially when we have more
> than 2 or 3 cvs ops running together. In the mornings, it's hitting as
> much as 12K per second, which is definitely a killer on a 2-processor
> system.
> 
> a) Is this normal?

Probably.  CVS is typically I/O intensive, which generally means lots of
context switches.

> b) Is cvs setup with a ping parameter or some kind of "am I alive"
> setting that hits every 1, 2 or 5 seconds?  If so, can it be reset?

No, CVS doesn't do any kind of pinging.

> Is there any kind of performance bug where just a few processes take up
> a lot of CPU - especially branch commands?  We were getting CPU time
> readings of 41 on one sub-branch process.

> In the doc, I read about setting the LockDir=directory in CVSROOT, where
> I assume I create my own dir in the repository (LockDir=TempLockFiles).

No, you create your own dir somewhere other than in the repository (and
you need to give LockDir an absolute path, not a relative path).  At the
very least, that allows you to offload the lock I/O to a different disk
than the regular I/O.

> a) Just what is an in-memory file system?

Just what it says -- a filesystem where the data only exists in memory
(rather than being written to a disk); they are commonly used for /tmp.
If you're already using such a filesystem for /tmp, you can just put
LockDir on /tmp (e.g., /tmp/CVSLockDir).  I believe the Solaris variety
is called tmpfs.
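
In that case the setup is only a couple of commands (the mode is a matter
of taste; the LockDir line goes in $CVSROOT/CVSROOT/config):

  mkdir /tmp/CVSLockDir
  chmod 1777 /tmp/CVSLockDir    # everyone who runs cvs must be able to create locks here

and then, in the config file:

  LockDir=/tmp/CVSLockDir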

> b) Is speed garnered because all the lock files are in one directory
> and cvs does not need to traverse the project repository?

No, the speed is gained by not waiting for physical I/O to a disk drive.
Because the data doesn't survive a reboot, the system may be able to
take other shortcuts, too.

> c) Is the speed increase significant?

It can be.

> d) Will there be any problems with having lock files from multiple
> different projects  in the repository flooding this same directory?

No.

> In this LockDir case, we are going to have lock files from multiple
> different projects all in one dir. It appears by the statement:  "You
> need to create directory, but CVS will create subdirectories of
> directory as it needs them" that the full path is still used, correct?

Correct.  CVS will mirror the repository directory structure under the
LockDir directory.
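
So hunting for errant locks stays easy -- you can sweep the whole LockDir
in one pass, e.g. (path assumed):

  find /tmp/CVSLockDir -name '#cvs.*' -print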

> The beginning states that cvs will try every 30 seconds  to see if it
> still needs to wait for lock.
> 
> e) Any chance this is a parameter that can be decreased - or would
> it's checking more often just create more overhead and slow things down?

As of CVS 1.11.6, it's actually a bit more sophisticated -- if your
system allows sub-second sleeping, CVS will first try sleeping for 2, 4,
8, ... , 512 microseconds before giving up and sleeping for 30 seconds. 
In a busy repository like yours, there can be a lot of contention for
the master locks, but they're only held for a very short time, so the
short sleep avoids a long wait in that case.  You get the "waiting for
x's lock" message with every 30 second sleep, so you can get a feel for
how often you're running into lock contention problems.

Re: more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Mark D. Baushke

Richard Pfeiffer <[EMAIL PROTECTED]> writes:

> BASICS:
>
> We are running cvs-1.11.  I did migrate us to 1.11.9, but it turned out it
> does not mesh with Eclipse, which is what our developers use.  The latest
> upgrade Eclipse can use is 1.11.6.  From what I read, that has its own
> problems, so 1.11.5 would be the latest we could use.

Have you reported the problems to the Eclipse folks? A separate e-mail
on what problems you ran into may be useful to help other folks some
day.

> 1)
>
> Should cvs even be able to handle this kind of load?  To some of us, it's
> amazing and a credit to cvs that this thing hasn't crashed already.  But,
> to avoid a crash, when we did the metrics and saw what our percentages on
> cpu, switching, kernel, etc., and especially load (46) were, we shut down
> inetd.conf, waited for some cvs processes to complete and the load drop to
> 10 before starting inetd.conf back up.
>
> a) should we be splitting up our repository and giving each project
> their own?

Well, that might help you scale a bit more.

> b) is there a way to limit the number of pserver calls made at any one
> time?

I am not aware of any way to do that with the Solaris inetd. However, you
could probably borrow code from the public tcp_wrappers software and have
it check the load on your system and then refuse connections to the
pserver for a time.
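
A cruder way to get the same effect is a tiny wrapper installed as the
server program in inetd.conf; this is purely a sketch -- the threshold,
the cvs path, and the uptime parsing are assumptions you would want to
check on Solaris:

  #! /bin/sh
  # Refuse new pserver connections while the 1-minute load average is too high.
  MAX_LOAD=40
  LOAD=`uptime | sed 's/.*average[s]*: *//' | cut -d, -f1 | cut -d. -f1`
  if [ "$LOAD" -gt "$MAX_LOAD" ]; then
      exit 1        # the client just sees the connection drop and can retry later
  fi
  exec /usr/local/bin/cvs -f --allow-root=/cvs/cvsroot pserver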

> c) Should we be going to a 4x4 machine rather than our current 2x2?

How fast are you growing? Will it give you enough room? If not, then you
might end up needing to shed some of the projects to another machine as
well as moving to more processors on your current hardware.

To be honest, I think you might be better off dropping another 14GB of
memory into the system to see if that improves your performance.

> 2)
>
> Context switching seems to be excessive, especially when we have more than
> 2 or 3 cvs ops running together. In the mornings, it's hitting as much as
> 12K per second, which is definitely a killer on a 2-processor system.
>
> a) Is this normal?

To be honest, I have not benchmarked cvs in this situation.

>
> b) Is cvs setup with a ping parameter or some kind of "am I alive"
> setting that hits every 1, 2 or 5 seconds?  If so, can it be reset?

No. The :pserver: client/server protocol assumes a tcp connection, but
to the best of my understanding does not send any kind of a keep-alive
over the link.

> 3)
>
> Is there any kind of performance bug where just a few processes take up a
> lot of CPU - especially branch commands?  We were getting CPU time
> readings of 41 on one sub-branch process.

I am sure that there are many bugs remaining in cvs, but I am not
aware of any particular performance problems. To create a branch tag,
each file being tagged is mmap()ed into memory, modified to carry the
new tag near the head of the file, written out as ',filename,', and then
renamed to 'filename,v' when the operation is complete, before cvs moves
along to the next file in the list to be tagged.

> 4)
>
> In the doc, I read about setting the LockDir=directory in CVSROOT, where I
> assume I create my own dir in the repository (LockDir=TempLockFiles).
>
> We DO NOT have this set as yet, but I think I might like to try it for
> speed sake.  All our developers need write access to the repository, but
> the doc states:
>
> It can also be used to put the locks on a very fast in-memory file system
> to speed up locking and unlocking the repository.
>
> a) Just what is an in-memory file system?

Some operating systems have a way to create a mfs (memory file system).
I believe that the closest that Solaris comes is the use of a swap
filesystem which may be memory resident for much of the time.

>
> b) Is speed garnered because all the lock files are in one directory
> and cvs does not need to traverse the project repository?

No, with a LockDir there are still multiple directories and a traversal
is still done. The difference is that the operations are typically
handled much faster.

The creation of cvs locks is a multi-step process that ends with a
#cvs.lock directory being created for the duration of the lock and then
being removed. For some operations, creating the read file, creating
the lock file, reading the contents of the directory and any files
needed from the repository, and removing the lock directory all take
only milliseconds. Being able to improve the performance of lock
creation and deletion will improve the overall access time of the
repository.
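
Spelled out as the rough on-disk sequence for a read (the #cvs.* names
are the ones from the manual; $REPO and the exact read-lock suffix are
illustrative):

  LOCKID=`hostname`.$$
  mkdir "$REPO/module/#cvs.lock"          # master lock; retry if it already exists
  touch "$REPO/module/#cvs.rfl.$LOCKID"   # leave a read-lock marker
  rmdir "$REPO/module/#cvs.lock"          # release the master lock
  # ... read the RCS files ...
  rm "$REPO/module/#cvs.rfl.$LOCKID"      # drop the read lock when done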

> c) Is the speed increase significant?

If you are able to write to memory faster than to your repository, then
the difference in speed between those two mediums is how much faster you
will be able to create your lock. I would guess that in most cases of
a repository over NFS, or slow local disks, the use of a memory filesystem
would be faster. The use of a swap system that is always being paged out
to disk may actually be slower if the paging disk is slow.

Re: Resend importinfo patch

2003-10-28 Thread Mark D. Baushke

Hi Wu,

I have not forgotten about the patch. 

I am under the impression that we were waiting for consensus. So far,
Larry has apparently voted that it is not needed. My vote is that I
think it would be okay to add it if it addresses all of the concerns
raised in the thread.

I have not heard Derek's opinion on the matter.

In addition, there is the other part of the patch, an admininfo trigger,
which still needed some work.

My guess is that Ralf Engelschall has been busy these last few weeks.

The last message on this thread was here:

  http://mail.gnu.org/archive/html/info-cvs/2003-10/msg00022.html

and there were some outstanding questions for Ralf on the way that
the importinfo trigger was to work.

Enjoy!
-- Mark




more cvs performance questions (I think they are at least interesting though!)

2003-10-28 Thread Richard Pfeiffer
Good Afternoon,
 
I have a few more questions related to performance.  Some MAY be a bit 'out-of-the-box', but please bear with me!
 
BASICS:
We are running cvs-1.11.  I did migrate us to 1.11.9, but it turned out it does not mesh with Eclipse, which is what our developers use.  The latest upgrade Eclipse can use is 1.11.6.  From what I read, that has its own problems, so 1.11.5 would be the latest we could use. 
 
Our server machine is a Solaris 8, 2 processor box, 2GB RAM, 28GB disk,
900 MHz.  This machine is dedicated to cvs.  The only other things on it or hitting it are an LDAP server, bugzilla and viewcvs.
 
Our repository sits on a NetApp slice (just a big, beefy disk) that is NFS mounted to
our server.  This is a production-level NFS mount and there are NO other mounts.  We originally did this in the interest of speed – we had 4-minute checkouts on a local repository, 36 seconds on the NFS mount.
I know there are NFS/CVS issues, but I have spoken to this list regarding this and the
conclusion was that with a production-level NFS server, “we will almost certainly not have any problems”.
 
And we haven’t.  We’ve been running like this for over a year now.  Our problem, since so many projects and users have been added, is with performance.
 
We now can have as many as 77 concurrent cvs processes going.  That is excessive and very rare, but it did happen when an 8MB xml file was checked in as ascii, which causes a diff
to be made for any and every update command on it.  It was then re-checked in as binary and that took care of that.
But normally, we can have 3 branching processes going at once on one project, along with numerous updates, co, etc. against the same project – while various other projects are doing the same against their own.  I’d say 36 cvs processes going at once isn’t a stretch.  So, given this scenario:
 
 
1)
Should cvs even be able to handle this kind of load?  To some of us, it’s amazing and a credit to cvs that this thing hasn’t crashed already.  But, to avoid a crash, when we did the metrics and saw what our percentages on cpu, switching, kernel, etc., and especially load (46) were, we shut down inetd.conf, waited for some cvs processes to complete and the load to drop to 10 before starting inetd.conf back up.
a) should we be splitting up our repository and giving each project their own?
b) is there a way to limit the number of pserver calls made at any one time? 
c) Should we be going to a 4x4 machine rather than our current 2x2?
 
 
2)
Context switching seems to be excessive, especially when we have more than 2 or 3 cvs ops running together. In the mornings, it's hitting as much as 12K per second, which is definitely a killer on a 2-processor system.  
a) Is this normal?
b) Is cvs setup with a ping parameter or some kind of “am I alive” setting that hits every 1, 2 or 5 seconds?  If so, can it be reset?
 
 
3) 
Is there any kind of performance bug where just a few processes take up a lot of CPU – especially branch commands?  We were getting CPU time readings of 41 on one sub-branch process.
  
 
 
4) 
In the doc, I read about setting the LockDir=directory in CVSROOT, where I assume I create my own dir in the repository (LockDir=TempLockFiles).  
We DO NOT have this set as yet, but I think I might like to try it for speed’s sake.  All our developers need write access to the repository, but the doc states:
It can also be used to put the locks on a very fast in-memory file system to speed up locking and unlocking the repository.
 
a) Just what is an in-memory file system?
b) Is speed garnered because all the lock files are in one directory and cvs does not need to traverse the project repository?
c) Is the speed increase significant?  
d) Will there be any problems with having lock files from multiple different projects  in the repository flooding this same directory?
If I need to search for errant locks, the way we are currently set up, I can go to the project where I know they exist and do a find for them.  In this LockDir case, we are going to have lock files from multiple different projects all in one dir. It appears by the statement:  “You need to create directory, but CVS will create subdirectories of directory as it needs them” that the full path is still used, correct?  (So, it would still be an easy search?) 
 
I then read the link: 10.5-Several developers simultaneously attempting to run CVS, that goes along with LockDir.
The beginning states that cvs will try every 30 seconds  to see if it still needs to wait for lock.  
e) Any chance this is a parameter that can be decreased – or would its checking more often just create more overhead and slow things down?
  
In the end, it states that if someone runs
cvs ci a/two.c b/three.c
and someone else runs cvs update at the same time, the person running update might get only the change to `b/three.c' and not the change to `a/two.c'.
f) I assume this does not relate only to when LockDir is set

Re: Resend importinfo patch

2003-10-28 Thread Wu Yongwei
Hi, I have seen no reply to my message so far, and it has been twenty
days. *I* can accept Larry's opinion if there are ways to limit the
import operation to only some specific persons, say, the PM and the main
developer. If that is currently not possible, are there any other ways
besides importinfo?

Best regards,

Wu Yongwei

Wu Yongwei wrote:
Larry Jones wrote:
I'd say the tags should go through taginfo (which could be extended if 
people feel that it is necessary to distinguish between the release 
tag and the vendor tag).  The import wouldn't occur until all
of the preliminaries were successful.  I'm still not convinced that
we need a separate importinfo.

-Larry Jones


OK, let's speak about an example. As project manager, I might give the
main developers full access rights to modify any file, but I want only
*I* to have the right to import. I also had a bad experience where a
developer re-imported an existing repository: nothing serious happened,
but new (unwanted) branches and tags were created.
That is exactly what I am doing now: my importinfo line ("ALL
/path/to/import_acls") executes a script like this:
#! /bin/sh
# Allow only user 'adah' to run cvs import; everyone else is rejected.
if [ -n "$LOGNAME" ]; then
  USERNAME=$LOGNAME
elif [ -n "$USER" ]; then
  USERNAME=$USER
else
  exit 1
fi
if [ "$USERNAME" != adah ]; then
  echo "You are not permitted to import!"
  exit 1
fi
exit 0






Re: Compression vs. streaming in cvs

2003-10-28 Thread Mark D. Baushke

Richard Pfeiffer <[EMAIL PROTECTED]> writes:

> Just got a question regarding compression vs. streaming data on cvs. 
> Since I know nothing about this, I looked it up at cvshome.org and
> google as well as in the book "Open Source Development with CVS, 2nd
> edition". 
> Nothing - at least that I can find in the index or search engines.
> Can anyone shed some light on what is even being referred to by this? 
> Supposedly, the thought is that streaming would be faster.
>  
> I thought maybe there was a streaming option or a compression option to
> the command line - but now I'm not so sure.
>  
> Thank you.

I believe you are looking for the '-z gzip-level' global option.
Read:

  http://www.cvshome.org/docs/manual/cvs-1.11.7/cvs_16.html#SEC117

for the page you were trying to find.

It is useful for :pserver: as well as :ext: (for CVS_RSH=rsh), but
not that useful when using :ext: over ssh (CVS_RSH=ssh) which already
does compression for you.
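
The easiest way to turn it on for everything is a global-options line in
~/.cvsrc (level 3 is just a common middle-of-the-road choice):

  cvs -z3

or per invocation, e.g. 'cvs -z3 update'.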

-- Mark




Re: Multilevel vendor branch import

2003-10-28 Thread Ross Patterson
On Sunday 26 October 2003 03:22 pm, Rodolfo Schulz de Lima wrote:
> For instance, is kind of interesting and educative to
> have all releases of linux kernel since linux-0.01 in a CVS repository so
> you can see how the kernel evolved through time. 

What you've learned the hard way is that CVS vendor support is pretty limited.  
But the good news is that it doesn't look like what you're trying to do is 
all that hard with normal CVS.  Instead of trying to "cvs import" each 
release, try just checking the updated code in to CVS.  Tag it where tags 
make sense, branch it where branches make sense, etc.

The one thing that the vendor support does that is interesting is helping you 
reconcile your changes to the vendor's code.  But you don't seem to need 
that, so the easiest answer may just be "don't use it."
-- 
Ross A. Patterson
Chief Technology Officer
CatchFIRE Systems, Inc.
5885 Trinity Parkway, Suite 220
Centreville, VA  20120
(703) 563-4164





Compression vs. streaming in cvs

2003-10-28 Thread Richard Pfeiffer
Just got a question regarding compression vs. streaming data on cvs.  
Since I know nothing about this, I looked it up at cvshome.org and google as well as in the book "Open Source Development with CVS, 2nd edition".
Nothing - at least that I can find in the index or search engines.
Can anyone shed some light on what is even being referred to by this?  Supposedly, the thought is that streaming would be faster.
 
I thought maybe there was a streaming option or a compression option to the command line - but now I'm not so sure.
 
Thank you.


Re: Commit Error and Checkout Error

2003-10-28 Thread david
> Hi,
>
> I am able to log in to CVS as root as well as a user.
> When I have logged in as root I face some difficulties when trying to
> commit a file after making changes to the file. It says root is not
> allowed to commit files.
>
Right; "root" is not allowed to commit files.  This is a design feature,
intended to increase the accountability of changes.  Since "root" is
often a shared account (it's a functional account rather than a personal
one), allowing it to check in would make it impossible to determine
who checked a change in.  (It isn't quite this simple; if CVS is able
to determine who "root" really is, CVS may allow the checkin.)
  
> When I log in as a user I am unable to check out the files. It says
> permission denied.
>
That would mean that your user account does not have the permissions
necessary.  Your account must have read permission on all files and
directories you are going to use, execute permission on all directories,
and normally write permission on all directories.  (It is possible to
get around this, for read-only access, by setting LockDir in 
CVSROOT/config.)
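
As a concrete example, one common setup (the group name and paths here
are assumptions) gives a developers group read/write access to a module
and lets newly created files inherit the group:

  chgrp -R cvsusers /cvs/cvsroot/web
  find /cvs/cvsroot/web -type d -exec chmod 2775 {} \;   # setgid so new files keep the group
  find /cvs/cvsroot/web -type f -exec chmod 664 {} \;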
  
-- 
Now building a CVS reference site at http://www.thornleyware.com
[EMAIL PROTECTED]





roll back repository ?

2003-10-28 Thread Richard Pfeiffer
Good Morning,
 
Some of the projects in our cvs repository were corrupted, either when we had an extreme server overload  (77 cvs processes happening at once) and had to shutdown inetd to prevent our cvs server from crashing (swap space was almost gone, load was 46%, kernel was at 63%, some users sub-branching processes were taking huge amounts of CPU time) or when some changes were made to the server kernel parameters.
 
We have backups to roll back to; I'm just wondering if there are repair tools for cvs that would permit a less drastic approach than having to revert the whole thing?
 
Thx!
 
 


Checkout Error

2003-10-28 Thread Narendhran K


Hi,
When I try to check out a file logged in as a user I get the following error:
cvs checkout: failed to create lock directory for `/cvs/cvsroot/web' (/cvs/cvsroot/web/#cvs.lock): Permission denied
cvs checkout: failed to obtain dir lock in repository `/cvs/cvsroot/web'
cvs [checkout aborted]: read lock failed - giving up
Could anyone clarify this for me?
Regards
Naren
 


Re: the server gets tougher!!!

2003-10-28 Thread Derek Robert Price
kent emia wrote:

| yes i re-config the LockDir to $CVSROOT/lock  <- is this O.K. ???
|
|and one more thing how can i say $CVSROOT/CVSROOT can only be accessed
|by root..
|
|chgrp -R root $CVSROOT/CVSROOT
|chmod 700 or 750 
|or can i say
|chmod ug+rwx . $CVSROOT/CVSROOT
You can't set $CVSROOT/CVSROOT to be accessible only by root and expect
CVS to operate properly.  Many CVS commands need to read the files in
$CVSROOT/CVSROOT, and processes need write permission on val-tags and
sometimes the history file.  It _is_ possible to take write permission
on the directory and all the files in it away from everyone but root,
and then give everybody write permission on only val-tags and the
history file (if you are using it).
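
In shell terms, roughly (a sketch only; it assumes locks already live in
a separate LockDir, as you've set up, so nobody needs to create lock
files inside CVSROOT itself):

  cd $CVSROOT/CVSROOT
  chmod 755 .                           # directory writable only by root
  chmod 644 checkoutlist commitinfo config cvswrappers editinfo loginfo \
            modules notify rcsinfo taginfo verifymsg *,v
  chmod 666 val-tags history            # the files CVS has to write to (skip history if unused)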
Derek
--
*8^)
Email: [EMAIL PROTECTED]

Get CVS support at !
--
"Thank GOD Microsoft doesn't build airplanes."




Re: Files disappeared without a trace

2003-10-28 Thread Derek Robert Price
Yamuna Ramasubramaniyan wrote:

| shows the directory with the files in it.  I am at a loss to figure out
| what happened to those files.  I'd appreciate any help.
Have you tried the `log' command?

http://www.cvshome.org/docs/manual/cvs-1.11.7/cvs_16.html#SEC142

Derek

--
*8^)
Email: [EMAIL PROTECTED]

Get CVS support at !
--
They are laughing at me, not with me.
They are laughing at me, not with me.
They are laughing at me, not with me...
  - Bart Simpson on chalkboard, _The Simpsons_




Commit Error and Checkout Error

2003-10-28 Thread Narendhran K
Hi,
 
I am able to log in to CVS as root as well as a user.
When I have logged in as root I face some difficulties when trying to commit a file after making changes to the file. It says root is not allowed to commit files.
 
When I log in as a user I am unable to check out the files. It says permission denied.
 
Could anyone help me with this?
 
Regards
Naren


Re: cvs performance questions

2003-10-28 Thread Derek Robert Price
Larry Jones wrote:

| It's probably worth a few words about locking details. Most operations
| lock a single directory at a time, which minimizes lock contention.
| Checkout is the notable exception, it locks the entire tree for the
| duration of the checkout.  Some versions of CVS also lock the whole tree
_commit_ (not checkout) is the notable exception, with not-so-notable
exceptions being `cvs admin' and all of the watch commands.
Derek

--
*8^)
Email: [EMAIL PROTECTED]

Get CVS support at !
--
"If triangles had a God, He'd have three sides."
-- Old Yiddish Proverb




Re: Multilevel vendor branch import

2003-10-28 Thread Paul Sander
The rename problem has been discussed at length in this forum, and
several methods have been discussed.  I believe that the best way to
solve the problem is to add a layer of indirection in the translation
between working file and RCS file.  The pointer to the RCS file would
be stored in a versioned object; changing an identifier paired with a
pointer renames the file.

I can see two practical implementations of the versioned mapping:
One is to distribute it across the entire repository in the form of
versioned directories, the other is to replace the modules database
with named collections of mappings that define entire modules.
Either way, the number of special cases to handle in a concurrent
environment is quite large and it will take some effort to work out
all of the semantics.  (Examples:  When committing a rename, is it
required to commit both the source and target of the rename in the
same command?  If user A renames file X to Y, should an existing Y be
removed?  If yes, how does user B resolve a merge conflict if he
modifies his copy of Y and when A's new Y is introduced to his
workspace during a subsequent "cvs update"?)

Note that by divorcing the identities of working files from those of
the history containers (RCS files), the CVS directory-level locking
mechanism in the repository becomes impractical to manage access.
It becomes necessary to lock tags (branches and versions) whenever
they are processed, and then lock individual RCS files.  (RCS
already does this, but something would have to be done to defer the
final rename until after all of the ,*, files have been updated.
So a two-phase commit mechanism, in the database sense, is called for.)

There are some tricks that can be brought to bear to improve performance.
Intent-mode locking can be used on the tag locks to manage concurrent reads
and writes.  Identifying desired versions in advance of read operations (by
version number or by branch/timestamp pair) eliminates the need for RCS file
locks on read operations.

When an object is guarded by an intent-mode lock, the following semantics
apply:  Read locks can exist in the presence of other read locks, and a
single intent lock, and no write locks.  An intent lock can exist in the
presence of read locks and in the absence of other intent locks and write
locks.  A write lock can exist in the absence of read and intent locks.
Read locks may upgrade to intent locks after waiting for all other intent
and write locks to vanish, and intent locks may upgrade to write locks
after all read locks have vanished.  Typically what happens is that while
an object has an intent lock, it is copied and the copy is updated.  The
copy replaces the original object while the object is write locked.  The
end result of all this is that write locks need be held only during the
second phase of the two-phase commit procedure, which is nothing but a
series of renames of RCS files.  That means that "cvs update" can run
concurrently with "cvs commit" and will complete first, and "cvs commit"
will only block access to the modified files and tags for a short period
of time at the very end of its processing.
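
Summarized as a compatibility table (lock already held across the top,
lock being requested down the side), the semantics above come out as:

                    read held   intent held   write held
  request read         yes          yes           no
  request intent       yes          no            no
  request write        no           no            no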

The two-phase commit also has the benefits of making changes to the
repository be truly atomic, and it opens up a better avenue for crash
recovery.

--- Forwarded mail from [EMAIL PROTECTED]

On Mon, Oct 27, 2003 at 11:52:19AM -0800, Mark D. Baushke wrote:
> CVS is a more mature product and needs to move a bit more conservatively
> when considering large changes in how it works. I suspect there will be
> a place for cvs even after svn 1.0 is released.

I know, it's easy to turn a mature, useful and complicated project like
cvs into a complete mess. Except for some esoteric uses, CVS fits the bill
for 95% of our versioning needs, and I've been using it for 2 years now
without major complaints. The missing 5% is due to the lack of
copy/move/rename changes in a repository (at least in my opinion). Maybe
some way of recording, alongside the diffs, the file name of a revision,
I don't know. RCS doesn't give us the flexibility needed :(

--- End of forwarded message from [EMAIL PROTECTED]


