Re: capturing all aspects of a commit

2003-05-27 Thread Ralf S. Engelschall
On Sun, May 25, 2003, Akos Maroy wrote:

> I'd like to find a way to capture all changes done by each commit
> operation. I'd like to maintain a database of commits, where I could
> list all the changes done by each commit (and have the possibility
> somehow to get sources prior to and after any commit).
> [...]

You can gather the information step by step during the commit, the same
way OSSP Shiela (http://www.ossp.org/pkg/tool/shiela/) does, by hooking
into the various CVS info hooks. Or you can parse the CVSROOT/history
file afterwards, as CVSTrac (http://www.cvstrac.org/) does; there you
can also identify a commit as a whole. If you need immediate action,
use the first approach. If it is OK to update your database at
intervals, use the second.
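
For illustration only (this is not Shiela's real code, and it assumes
the old-style single-argument %{sVv} loginfo expansion), a minimal hook
that appends one record per committed file to a flat file standing in
for your database could look like this:

    #!/usr/bin/perl -w
    # logcommit.pl -- toy loginfo hook, *not* OSSP Shiela's actual code.
    # CVSROOT/loginfo line (old-style %{sVv} expansion, one argument):
    #   ALL $CVSROOT/CVSROOT/logcommit.pl %{sVv}
    use strict;

    my ($info) = @ARGV;                      # "dir file,old,new file,old,new ..."
    my ($dir, @tuples) = split ' ', $info;   # breaks on file names with spaces
    my $message = do { local $/; <STDIN> };  # full log text CVS pipes to us

    open my $log, '>>', '/var/log/cvs-commits.txt' or die "open: $!";
    for my $t (@tuples) {
        my ($file, $old, $new) = split /,/, $t;
        print $log join('|', time(), $ENV{USER} || 'unknown',
                        $dir, $file, $old, $new), "\n";
    }
    print $log "MSG: $message\n";
    close $log;

A real tool would write into a proper database and group the records by
user, timestamp and log message to reconstruct each commit as a whole.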

   Ralf S. Engelschall
   [EMAIL PROTECTED]
   www.engelschall.com



___
Info-cvs mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/info-cvs


outsider's perspective

2003-05-27 Thread Phil R Lawrence
Well, I want to say thank you to all who posted regarding my query about dir
versioning.  That was a heck of a discussion.  My resulting perspective:  CVS seems
inappropriate for our real-world needs, preferring instead to serve a "purer"
versioning paradigm.  (A paradigm which, by the way, seems too complex for me to
easily understand.)

To recap, I was looking for:
  - the complete history and versioning of every individual file
  - the ability to recreate dir structures, including hard and
symbolic links

These 2 things would have allowed me to check out our whole ERP dir structure as of a
given date.  Sweet!

Greg says to use the right tool for the right job.  Well, I wish CVS were the right 
tool, because the two "right tools" I've read about have real problems!

ClearCase:
ClearCase costs a lot of money.  I mean a *lot* of money.  Now, my organization might 
pay for it, or they might not, I don't know.  We are a University in the USA, so we do 
have money.  But I guarantee most of this world would never in a million years be able 
to pay that sort of money.  So while my org might get by, the rest of the world 
suffers for the lack of an open source solution.

My own custom build tool, wrapped around CVS:
Gimme a break.  It's taken our ERP vendor a decade (more?) to evolve their current ...
um...  way of doing things.  I'm pretty good at hacking and munging, but I am not
prepared to try to automate all of the linking and the recreation of the other
inconsistent results of their upgrade scripts upon CVS checkout.  No, I need a tool
that can simply capture the *results* of their way of doing things and leave it at
that.

In conclusion, I know I have little choice but to follow Greg's advice.  I'll use CVS 
for my little perl modules, but I'll be sorry to report to my boss that CVS won't work 
for our ERP versioning project.

Phil




Re: outsider's perspective

2003-05-27 Thread Donald Sharp
Have you looked at subversion?  Or what about bitkeeper?

I'm pretty sure that subversion can handle directory versioning.

I don't know about bitkeeper, as I refuse to download the
source due to their restrictive licensing agreement...

donald
On Tue, May 27, 2003 at 08:50:02AM -0500, Phil R Lawrence wrote:
> Well, I want to say thank you to all who posted regarding my query about dir
> versioning.  That was a heck of a discussion.
> [...]




Re: outsider's perspective

2003-05-27 Thread Steve deRosier
Phil,

As you know, CVS doesn't version directories and frankly was never 
designed with that in mind (never mind the argument over if it should or 
should not, it simply doesn't).  But...there is a program that does 
record directory structure: tar.  Perhaps you could somehow use tar in 
conjunction w/ CVS to do what you want to do.

OR

Alternatively, since you're a Perl programmer, why not just write a 
simple script to walk the directories?  You could write two scripts:
Script1.pl -> This one recursively walks the directories, writing to a 
file (or to stdout) a representation (in text, best if easily human 
readable) of the directory structure.
Script2.pl -> This one parses the representation file and creates the 
directory structure.
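
For what it's worth, the recording half can be tiny.  A rough sketch
(the one-letter record format is just something made up here):

    #!/usr/bin/perl -w
    # Script1.pl (sketch) -- record a directory tree as text, one entry
    # per line: "d path", "l path -> target", "f path".
    use strict;
    use File::Find;

    my $root = shift || '.';
    find({ wanted => \&record, no_chdir => 1 }, $root);

    sub record {
        my $path = $File::Find::name;
        if (-l $path) {                       # check symlinks before -d
            printf "l %s -> %s\n", $path, readlink($path);
        } elsif (-d $path) {
            printf "d %s\n", $path;
        } else {
            printf "f %s\n", $path;           # plain file; CVS versions these
        }
    }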

The representation, since it is in text, _could_ and in this scenario 
_should_ be checked into CVS and properly versioned.  There may be a way 
to get CVS to handle a ci and co of the full ERP system using these 
scripts and the wrappers (not sure, I've never used the wrappers), but 
if not, you could always have a project 'ERP' that contains a Makefile 
and the representation file.  You check out the ERP module, run make, and 
boom, you've got your whole ERP retrieved from the repository.  The 
Makefile would have the instructions to:
1. run Script2.pl on the representation file and build the ERP directory 
structure
2. run the necessary CVS co or export commands to grab the required 
files and put them in the right places.
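
The restore half (Script2.pl) is just the mirror image; a sketch that
reads the format produced above:

    #!/usr/bin/perl -w
    # Script2.pl (sketch) -- re-create the tree recorded by Script1.pl.
    # Plain "f" lines are left for "cvs co"/"cvs export" to fill in.
    use strict;
    use File::Path qw(mkpath);
    use File::Basename qw(dirname);

    while (<>) {
        chomp;
        if (/^d (.+)$/) {
            mkpath($1) unless -d $1;
        } elsif (/^l (.+) -> (.+)$/) {
            my ($link, $target) = ($1, $2);
            mkpath(dirname($link)) unless -d dirname($link);
            unlink $link if -l $link;
            symlink($target, $link) or warn "symlink $link: $!";
        }
    }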

Maybe your Makefile could have both a ci and a co target so you could 
run 'make co' and 'make ci' to do what you want to do.

Anyone with some competence in Perl and make should be able to build 
this little build environment in a few days (and don't ask me, I already 
have a full time job).  Do it in house or hire a consultant.  (And don't 
forget to check the scripts into CVS while you're at it).

But, you've got to decide on a trade off -> pay the big bucks for 
ClearCase or pay (either time or money) to have someone build you a 
custom build environment.  And, you need to evaluate ClearCase to decide 
if it really does what you are trying to do, or if you're going to have 
to bend to their way of doing things.  I don't know, I have no 
experience or knowledge with ClearCase at all.

Really, this was all anyone was trying to suggest when they said that 
CVS doesn't do directories and you'd have to use a build tool to handle 
them (and frankly, this is the 'proper' way to do it from the CVS 
paradigm).

As to the CVS paradigm, it's pretty simple:
CVS versions files (text files primarily) and allows for multiple people 
to checkout and edit those files concurrently.  CVS doesn't think 
anything about directory structure except that it is possible to nest 
Modules within one another.  (Frankly it seems to work best and is 
happiest when the directory structure is relatively flat.)  Directories 
have no purpose other than to be containers for files (which CVS does 
care about and will version) and as such, other than structure and name, 
are so uninteresting as to be beneath notice.

That's it.  Don't think too hard on it.  As is typical with Linux and 
other UNIX systems, CVS is a simple tool meant to do one simple job and 
nothing else.  CVS versions files.  If you want to do more, add more 
tools into the mix (tar collects files and directories into one large 
file, but doesn't compress them; gzip compresses single files but 
doesn't collect them; together the two can grab and archive a large 
swath of the filesystem and make it small enough to fit on one CD).

I don't know if any of this helps, but I hope I gave you an idea that 
you hadn't considered.

- Steve

Phil R Lawrence wrote:
> Well, I want to say thank you to all who posted regarding my query about dir
> versioning.  That was a heck of a discussion.
> [...]

Re: outsider's perspective

2003-05-27 Thread Greg A. Woods
[ On Tuesday, May 27, 2003 at 11:39:22 (-0700), Steve deRosier wrote: ]
> Subject: Re: outsider's perspective
>
> As you know, CVS doesn't version directories and frankly was never 
> designed with that in mind (never mind the argument over if it should or 
> should not, it simply doesn't).  But...there is a program that does 
> record directory structure: tar.  Perhaps you could somehow use tar in 
> conjunction w/ CVS to do what you want to do.

CVS already does effectively what tar does, and it already does it with
support for recording the addition and deletion of files temporally,
something which tar obviously cannot do.

-- 
Greg A. Woods

+1 416 218-0098;<[EMAIL PROTECTED]>;   <[EMAIL PROTECTED]>
Planix, Inc. <[EMAIL PROTECTED]>; VE3TCP; Secrets of the Weird <[EMAIL PROTECTED]>




Re: outsider's perspective

2003-05-27 Thread Phil R Lawrence
On Tue, 27 May 2003 14:24:50 -0400
Donald Sharp <[EMAIL PROTECTED]> wrote:

> Have you looked at subversion?  Or what about bitkeeper?
> 
> I'm pretty sure that subversion can handle directory versioning.

Yeah, subversion looks promising, but symbolic links aren't slated for inclusion
until after version 1.0, and I didn't see anything about hard links.

Thanks for the tip on bitkeeper, though.  I emailed their staff just now.  We'll see 
what they have to say.

Phil




Re: outsider's perspective

2003-05-27 Thread Phil R Lawrence
On Tue, 27 May 2003 11:39:22 -0700
Steve deRosier <[EMAIL PROTECTED]> wrote:

> That's it.  Don't think too hard on it.  As is typical with Linux and 
> other UNIX systems, CVS is a simple tool meant to do one simple job and 
> nothing else.  CVS versions files.  If you want to do more, add more 
> tools into the mix

You know, that does paint a rosier, and fairer, picture.  I believe in the usefulness
of using the pipe on the command line, and this does seem analogous.  Thanks!

As for the perl wrappers, yeah, I had gone as far in my thinking as you did in your 
note.  The hangups start when considering how to determine at checkin time which of 
the n hard links will become a file actually stored in CVS, and how to track the other 
hard links to be sure they are updated upon checkout.
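
Just to make the detection half concrete, I imagine something like the
following (a rough sketch; picking the first path seen as the "real"
file is an arbitrary choice):

    #!/usr/bin/perl -w
    # Group paths by (device, inode): one path per group would be the
    # file actually stored in CVS, the rest get recorded as hard links
    # to be re-created at checkout time.
    use strict;
    use File::Find;

    my %group;                                   # "dev:ino" => [ paths ]
    find({ no_chdir => 1, wanted => sub {
        my @st = lstat $File::Find::name;        # lstat: skip symlinks
        return unless @st && -f _;               # regular files only
        my ($dev, $ino, $nlink) = @st[0, 1, 3];
        push @{ $group{"$dev:$ino"} }, $File::Find::name if $nlink > 1;
    } }, shift || '.');

    for my $paths (values %group) {
        my ($master, @links) = @$paths;          # arbitrary: first path wins
        print "store: $master\n";
        print "link:  $_ => $master\n" for @links;
    }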

I suppose that's doable also, but...  how much hacking and wrapping is too much?  I
guess that's another tradeoff.  Piping "this to that to the other" on my own personal
command line is one thing, but this will be an SCM system to be used by one and all.
Some in my department are known to be set in their ways, and complexity is anathema.
When I began looking into CVS, the complexity of the tagging/branching syntax was
almost enough to make it a non-contender for our users.

I'll keep looking for now.  Thanks again for your fair(er) thoughts on CVS.

Phil




Re: outsider's perspective

2003-05-27 Thread Kaz Kylheku
On Tue, 27 May 2003, Steve deRosier wrote:

> Alternatively, since you're a Perl programmer, why not just write a 
> simple script to walk the directories?  You could write two scripts:
> Script1.pl -> This one recursively walks the directories, writing to a 
> file (or to stdout) a representation (in text, best if easily human 
> readable) of the directory structure.
> Script2.pl -> This one parses the representation file and creates the 
> directory structure.
> 
> The representation, since it is in text, _could_ and in this scenario 
> _should_ be checked into CVS and properly versioned.

Or you could just use Meta-CVS, which has been out well over a year.

You have the basic idea right, but it takes more than a handful of Perl
scripts to make a robust, smooth version control application that takes
care of the corner cases and error situations.





Re: outsider's perspective

2003-05-27 Thread Kaz Kylheku
On Tue, 27 May 2003, Phil R Lawrence wrote:

> Date: Tue, 27 May 2003 10:16:00 -0500
> From: Phil R Lawrence <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: outsider's perspective
> 
> On Tue, 27 May 2003 14:24:50 -0400
> Donald Sharp <[EMAIL PROTECTED]> wrote:
> 
> > Have you looked at subversion?  Or what about bitkeeper?
> > 
> > I'm pretty sure that subversion can handle directory versioning.
> 
> Yeah, subversion looks promising, but symbolic links aren't slated
> for inclusion until after version 1.0, and I didn't see anything about
> hard links.

When I heard that the SVN guys were planning symlinks, I took the two
days of effort to put them into Meta-CVS. It took some huffing and
puffing, but it was worth it. I ended up using them in a couple of
projects.

Meta-CVS could in principle support hard links as well. I can visualize
quite crisply how it would work, but I currently have no incentive to
do the work.

If I think that something is useful to the product (i.e. *I* can use it
myself, or it provides some strategic advantage) then I might put it
in.

It shouldn't take more than around four 40-hour developer-weeks to do
it (and I mean do it right, so that every operation is properly aware
of hard link support, including ``mcvs grab'').





RE: cvs add

2003-05-27 Thread Paul Sander
>--- Forwarded mail from [EMAIL PROTECTED]

>[ On Sunday, May 25, 2003 at 19:51:07 (-0700), Paul Sander wrote: ]
>> Subject: RE: cvs add 
>>
>> The method you've described in the past depends on a linked structure
>> involving having users write special syntactic sugar in their commit
>> comments, then sorting out which versions of the RCS files to dump out.

>Yup.

>> How does traversing such a structure, robustly, be simpler than a single
>> layer of indirection?  I fail to see the elegance of such a solution to
>> this particular problem.

>Because it is done independently of the repository, and works just as
>well from any arbitrary client, and it can even be done by hand.

>It might not be as efficient as possible, but it's certainly simple and
>elegant, and since it's not by any measure a critical function it
>doesn't have to be all that efficient.

Efficiency is one of the characteristics of an elegant solution.  Solutions
are frequently both elegant and efficient, but rarely elegant and inefficient.

>Indeed given the rarity of this
>requirement in the real world it's usually sufficient to leave it as a
>manual process.  That's certainly worked well enough for me and for
>several of the free OS projects.  Even the NetBSD repo admins have come
>to realize that this is sufficient as they've backed down from their
>original policy of doing deep copies in the repo.

I think the NetBSD folks gave up when they figured out that the method
they were using to perform directory renames scaled poorly.  They had
no choice but to use a different workaround to the rename problem that
didn't cause their modules database and repository to explode.

As for the "rarity" of having to review the history of any given file,
that's just bogus.  My users review the history every time they perform
a merge.

>> What about converting ASCII text documentation to Rich Text format or
>> XML?  These are not binary formats, yet the same restrictions apply.
>> A product development effort should be able to make this type of change
>> at any time without having the tool balk in some way.

>Paul!  Use the right tool for the job!  If you have to manage change in
>files in which those changes cannot be readily displayed and managed
>with diff, patch, et al, then you need to choose a different version
>tracking tool -- one that can do what you need it to do.

Okay, let's assume for the moment that all of my source files are ASCII
text.  That includes HTML, XML, RTL, C, C++, Perl, plaintext, and whatnot.
Diff, patch, et al, theoretically all work on these.  The merge problem
arises when more than one datatype (XML and plaintext, for example) are
stored in a single RCS file.

What's happening is that CVS is trying to use a single version history
to describe more than one file.  That's just plain wrong.

>CVS is not, and was never intended to be, a complete software
>configuration managment solution.  Please RTFM again and try not to
>forget this fact this time!

We're discussing version control specifically, not general SCM solutions.

>A complete software configuration system for a large project may include
>several different version tracking tools, several different build tools,
>as well as overall processes and procedures which tie them all together.

>If you think you can achieve a coherent and comprehensive software
>configuration solution with one tool, then PLEASE GO USE IT!

Whatsay we limit this discussion to the handling of directories?

>> This only works if you can forsee all possible directions that a project
>> can take over its lifetime.

>Paul that's what "engineering" is all about -- dealing with real-world
>change and managing a project over its lifetime.

"Prescience" is not usually in the job description of the software architect.
They can plan for growth or to phase in new features.  They can try to
anticipate possible future directions.  Their predictions are never 100%
accurate, nor should they be expected to be.

>> > > - Storing version histories of distinct files separately, even if they
>> > >   share a spot in a workspace at some time.
>> > 
>> > This is only a requirement if you start down the slippery slope you seem
>> > to like to slide down every so often.  If, on the other hand, you simply
>> > keep things simple as they are now then you don't have to worry about
>> > this issue because it's automatically handled by the current design.
>> 
>> In other words, never reuse part of your filesystem.  Sound advice.  NOT!

>You're spreading F.U.D. again Paul.  That is most definitely not what I
>said -- quite the opposite.  See above.

What's happening is that if a file on one branch is replaced by a new
datatype, and then merged from a second branch, the result is meaningless.
The operation itself has questionable semantics.

In CVS, the only way to avoid this situation is to keep just one datatype
in each RCS file.  I daresay that this goes a step further, which is to

Common code across multiple projects

2003-05-27 Thread thomas . maciejewski
I have a related question to this to which I am seeking an answer.

I have two projects: ProjectA and ProjectB
both of them use part of a common code base that is in a project called:
Common

I would like to set up CVS so that I can have a reference to Common shared by
both ProjectA and ProjectB,
something like:

ProjectA/Common
ProjectB/Common

and common being in its own project:
Common

Common code should be able to grow on its own, and the current development
branches and main trunk of ProjectA and ProjectB should see these changes.
I should be able to go into ProjectA/Common, make a change and commit it,
and when I update from ProjectB I should see these changes.

However, when I create a production branch I want to have my own copy
of Common.  That is, once I do the production branch and then make a
production bug fix to ProjectA/Common, I do not want it to show up in
Common or in ProjectB/Common.

I tried to do this via & modules, but I found that even when I branched
ProjectA, Common was not also branched within it.  So if I updated the
branch I would get the current development version of Common, and if I
checked in a production bug fix it would then become part of the main
line of Common.
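
For reference, the CVSROOT/modules setup I was experimenting with looked
more or less like this (just a sketch of the idea, not our exact file):

    # CVSROOT/modules
    Common      Common
    ProjectA    ProjectA   &Common
    ProjectB    ProjectB   &Common

With that, "cvs co ProjectA" gives me ProjectA/Common as a checked-out
copy of the Common module, which is the layout I want for development.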

Does anyone have a solution to this?

I am sure this is fairly common for companies that have multiple
projects but wish to have code reuse.

Tom










Re: outsider's perspective

2003-05-27 Thread Steve deRosier
Greg,

I wasn't trying to indicate that tar was the perfect tool for this, or 
even that it was or might be the right tool.  I was simply suggesting 
that trying multiple tools in sequence might accomplish what was 
desired.  tar was simply an example suggestion of a place to start looking.

But, I would disagree that "CVS already does effectively what tar 
does..."  No, tar creates a single binary file for archive purposes. 
CVS does not do this.  Also, tar DOES handle directory information and 
can preserve owners, permissions and all other directory meta-info.  CVS 
does not do this.  tar DOES NOT handle versioning or history 
information.  CVS does do this.  I was suggesting that somehow combining 
the two tools it may be possible to create a system that did what he was 
looking for.

I've been lurking on the list for a very long time, and it seems to me 
that one of the largest perceived deficiencies with CVS is that it 
cannot handle directory meta information (in other words, it can't version 
directories).  "Perceived" because some people consider this a 
deficiency while others consider it the "right way".  Now, I'm not 
willing to debate if CVS is right or wrong in this, but it must be 
conceded that CVS doesn't do this, and as such may not be right for 
everyone.  I've never needed the functionality, so I don't have an 
opinion on it one way or another.

Regardless, it doesn't really matter.  CVS doesn't do as the poster 
wants.  There may be ways to handle it using CVS + some other standard 
Linux tools, or maybe the poster will just need to move on to another 
tool.

Also, if so many people NEED this functionality, why doesn't it get 
added to CVS?  If people only THINK they need it, then why don't we tell 
them how to do what it is they are actually trying to accomplish while 
using CVS?  Maybe this NEED is real, and maybe it's just marketing from 
commercial version control companies, but one way or another it gets 
asked for.  Giving people on this list the usual "if it doesn't work for 
you, well #*&@ you, go somewhere else" rant seems counter-productive.

Perhaps if we all discuss it calmly and logically, we can find a way to 
give people what they want and remain "true" to the "CVS-way".  Some of 
the stuff on this list lately is beginning to look like a religious war, 
instead of useful discussion.

Thanks,
- Steve
Greg A. Woods wrote:
> [ On Tuesday, May 27, 2003 at 11:39:22 (-0700), Steve deRosier wrote: ]
> > Subject: Re: outsider's perspective
> >
> > As you know, CVS doesn't version directories and frankly was never 
> > designed with that in mind (never mind the argument over if it should or 
> > should not, it simply doesn't).  But...there is a program that does 
> > record directory structure: tar.  Perhaps you could somehow use tar in 
> > conjunction w/ CVS to do what you want to do.
> 
> CVS already does effectively what tar does, and it already does it with
> support for recording the addition and deletion of files temporally,
> something which tar obviously cannot do.




cvs import -I ! hangs

2003-05-27 Thread thomas . maciejewski
I am trying to import a large amount of code from an old project that was
never under source control.

I tried to do a plain import, but ALL of the files come back with an I
(Ignored?)
cd CVS/ProjectA
cvs -m " " -I ! ProjectA ProjectA start

returns:

I ProjectA/foo1.cxx
I ProjectA/foo2.cxx

etc ...

so I read a bit about the ignore list and figured that I don't want to
ignore anything from that distribution, so I typed the following line:

cd CVS/ProjectA
cvs -m " " -I ! ProjectA ProjectA start

and it hangs

Any ideas?

Tom










Policies (was: RE: cvs add )

2003-05-27 Thread Paul Sander
>--- Forwarded mail from Greg Woods:

>[ On Sunday, May 25, 2003 at 19:51:07 (-0700), Paul Sander wrote: ]
>> Subject: RE: cvs add 

>> I believe that developers must be permitted to check in their work
>> arbitrarily.

Hmmm...  "Arbitrarily" was perhaps the wrong word to use here.

>THEN PLEASE GO USE SOME OTHER TOOL WHERE THIS USAGE PATTERN WORKS!

>In CVS this is even documented as a "Bad Thing", as it will be with most
>types of concurrent versioning systems which encourage developers to
>update early and update often:

>When to commit?
>===
>
>   Your group should decide which policy to use regarding commits.
>Several policies are possible, and as your experience with CVS grows
>you will probably find out what works for you.
>
>   If you commit files too quickly you might commit files that do not
>even compile.  If your partner updates his working sources to include
>your buggy file, he will be unable to compile the code.  On the other
>hand, other persons will not be able to benefit from the improvements
>you make to the code if you commit very seldom, and conflicts will
>probably be more common.
>
>   It is common to only commit files after making sure that they can be
>compiled.  Some sites require that the files pass a test suite.
>Policies like this can be enforced using the commitinfo file (*note
>commitinfo::.), but you should think twice before you enforce such a
>convention.  By making the development environment too controlled it
>might become too regimented and thus counter-productive to the real
>goal, which is to get software written. 

My policy is to allow developers to commit their work at any time they
deem appropriate.  There are many possible related policies that can
accommodate this.  These are a few that I have used on various projects
(usually in combination), all with success:

- Ownership of subsystems is assigned such that the concurrent paradigm
  isn't used; inter-subsystem references are resolved via baselines, and
  each user manages their own subsystems as they see fit.
- Users commit arbitrarily but must coordinate updates and integrations.
- Users commit only after acceptance criteria are met and may update at
  any time.
- The change control system (tags, branch/timestamp pairs, submit/assemble
  tools, whatever) sorts out what others see.  Changes given to the change
  control system meet acceptance criteria; other commits are arbitrary.
- Use personal branches for normal development, managed as needed;
  merge and commit to shared branches when acceptance criteria are met.

Consider it a treat when someone actually wants to commit something.
Coach them how to do it, create a branch, even create a special place for
them if necessary.  But NEVER, EVER turn somebody away when they want to
save their work.

>Note also that more and more people are choosing to use two-phase commit
>systems such as Aegis, and those definitely do not allow arbitrary
>checkins -- they take testing and peer review to a new level!

Good for them.

>--- End of forwarded message from [EMAIL PROTECTED]





Re: outsider's perspective

2003-05-27 Thread Kaz Kylheku
On Tue, 27 May 2003, Steve deRosier wrote:

> Also, if so many people NEED this functionality, why doesn't it get 
> added to CVS? 

One reason is that it doesn't have to be literally added into the CVS
program, but rather imposed on top of it. CVS can be used as a
subprocess in a version control system that has the functionality.
This is a legitimate way of creating a ``CVS II''. In fact, this
approach is better in many ways than hacking inside CVS.  Separate
processes provide fault isolation, and avoid forking the codebase. If
``CVS II'' has a bug that stems from CVS, you just upgrade CVS to the
bugfixed version. It's blackbox inheritance! CVS gives us the ``base
class'' which we ``override'' to get the ``CVS II'' behavior with
versioned directories, symbolic links, permissions, etc.

There are a few drawbacks. The command line interface sometimes is less
than ideal, and also systems impose limitations on its length, so the
``CVS II'' layer sometimes has to break up long command lines, turning
one logical CVS operation into multiple actual ones.  Another problem
is that the output of the CVS process sometimes has to be passed
through a text filter so that it makes sense at the ``CVS II'' level.
Doing the substitutions in CVS itself would mean altering its code.
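
To make that concrete, the wrapper layer amounts to something like this
(a toy sketch, not Meta-CVS's actual code; the F-001 name, the mapping
and the 8K budget are all made up for illustration):

    # Run one logical CVS operation over many files, splitting the
    # argument list to stay under a command-line length budget and
    # filtering the output before the user sees it.
    use strict;

    sub run_cvs {
        my ($cvs_args, $files, $filter) = @_;
        my $limit = 8 * 1024;                    # assumed length budget
        while (@$files) {
            my @chunk = (shift @$files);         # always take at least one
            my $len = length $chunk[0];
            while (@$files && $len + length($files->[0]) + 1 < $limit) {
                $len += length($files->[0]) + 1;
                push @chunk, shift @$files;
            }
            open my $out, '-|', 'cvs', @$cvs_args, @chunk or die "cvs: $!";
            while (my $line = <$out>) {
                $line = $filter->($line) if $filter;
                print $line;
            }
            close $out;
        }
    }

    # Example: map an internal name back to the user-visible one.
    my %map = ('F-001' => 'src/main.c');
    run_cvs(['-n', 'update'], ['F-001'],
            sub { my $l = shift; $l =~ s/\bF-001\b/$map{'F-001'}/; $l });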

``CVS II'' has already been written, and released almost 1.5 years ago,
but you see, it was unfortunately named ``Meta-CVS'', and so people
somehow don't see it as a proper sequel.  If the sequel to The Matrix
was called ``Riemann Sphere'', perhaps few would get it either.

Meta-CVS does directory structure versioning, and other things.  But
it's not CVS II!  Why? Because it's not *called* CVS II, and besides,
it's not backward compatible.  Never mind that it uses CVS for
everything: branching, merging, diffing, annotating, viewing logs etc.
and that it's nearly command-for-command compatible. What it stores
in the CVS repository can't be grokked by CVS I clients. (Just like
ANSI C programs can't be grokked by K&R compilers; are those programs
not written in C?)

Okay, so if this is not legitimate, let's hear a concrete plan about
how CVS can be extended to make a ``CVS II'' which is completely
backward compatible with CVS I clients, and works as well as Meta-CVS.
Better yet, let's see some code. It's not enough to propose alternative
ideas when the existing idea is already coded. The CVS mailing list has
seen more than a *decade* of idle discussions about this subject already.





RE: cvs add

2003-05-27 Thread Greg A. Woods
[ On Tuesday, May 27, 2003 at 09:29:13 (-0700), Paul Sander wrote: ]
> Subject: RE: cvs add 
>
> I think the NetBSD folks gave up when they figured out that the method
> they were using to perform directory renames scaled poorly.  They had
> no choice but to use a different workaround to the rename problem that
> didn't cause their modules database and repository to explode.

Actually, no, but you wouldn't know that.

> As for the "rarity" of having to review the history of any given file,
> that's just bogus.  My users review the history every time they perform
> a merge.

Perhaps your users are forced to merge too often because they're working
in a poorly engineered project.

-- 
Greg A. Woods

+1 416 218-0098;<[EMAIL PROTECTED]>;   <[EMAIL PROTECTED]>
Planix, Inc. <[EMAIL PROTECTED]>; VE3TCP; Secrets of the Weird <[EMAIL PROTECTED]>




Re: outsider's perspective

2003-05-27 Thread Greg A. Woods
[ On Tuesday, May 27, 2003 at 14:07:18 (-0700), Steve deRosier wrote: ]
> Subject: Re: outsider's perspective
>
> But, I would disagree that "CVS already does effectively what tar 
> does..."  No, tar creates a single binary file for archive purposes. 
> CVS does not do this.  Also, tar DOES handle directory information and 
> can preserve owners, permissions and all other directory meta-info.  CVS 
> does not do this.

No concurrent versioning system with a shared repository, and
particularly not one that can operate in a client/server mode, can ever
possibly make any use of ownership, nor even of most permissions bits.
Ownership information, and most permissions bits, "MUST" always be
specific to the client and it MUST NOT be dictated by the repository.

CVS already partly handles the one permission bit that's meaningful
to copy from the repository, for what it's worth (almost nothing in
practice).

If people would learn two things then all this stupidity would disappear
in a puff of smoke:  (1) CVS is a text file content change tracking tool, and
_only_ a text file content version tracking tool; (2) all these things
(file permissions, ownerships, symbolic and hard links, etc.) can far
FAR more elegantly, simply, and clearly be managed by a build script,
the source for which can be stored in CVS.
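
I.e. something as dumb as the following, checked in alongside the
sources (the manifest format here is invented purely for illustration):

    #!/usr/bin/perl -w
    # Post-checkout "fixup" script: read a small, versioned manifest and
    # apply symbolic links and permission bits in the working copy.
    use strict;

    open my $m, '<', 'MANIFEST.links' or die "MANIFEST.links: $!";
    while (<$m>) {
        chomp;
        next if /^\s*(#|$)/;                   # skip comments and blanks
        my ($kind, @rest) = split ' ';
        if ($kind eq 'symlink') {              # symlink <target> <linkname>
            unlink $rest[1];
            symlink $rest[0], $rest[1] or warn "symlink $rest[1]: $!";
        } elsif ($kind eq 'mode') {            # mode <octal> <path>
            chmod oct($rest[0]), $rest[1] or warn "chmod $rest[1]: $!";
        }
    }
    close $m;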

>  tar DOES NOT handle versioning or history 
> information.  CVS does do this.  I was suggesting that somehow combining 
> the two tools it may be possible to create a system that did what he was 
> looking for.

How do you expect to meaningfully combine a tool that creates binary
files with CVS?!?!?!?

> Also, if so many people NEED this functionality, why doesn't it get 
> added to CVS?

Have you not been paying any attention to the rationales I and others
have given for why CVS is the way it is and why it doesn't do some
things?

-- 
Greg A. Woods

+1 416 218-0098;<[EMAIL PROTECTED]>;   <[EMAIL PROTECTED]>
Planix, Inc. <[EMAIL PROTECTED]>; VE3TCP; Secrets of the Weird <[EMAIL PROTECTED]>




Re: outsider's perspective

2003-05-27 Thread Steve deRosier
Wait a sec...I was was not trying to suggest a solution (though I've got 
an idea, more in a sec), and if anything just a very specific solution 
to help with a very specific problem that one person was having.

I'm not that familiar with Meta-CVS, having never used it.  Though 
perhaps I should take a look soon.  The most basic CVS functionality has 
always been sufficient for my projects and frankly has more 
functionality than most people at my current company are willing to try 
(why I can't convince some people around here that CVS will always 
either merge stuff properly OR complain, I don't know; we've got people 
here who, when they need to merge, grab a printout of the local file, 
overwrite their local copy with a fresh checkout, hand-edit it and then 
check it in; they don't trust it to merge automatically even if it can 
and will; will the insanity never stop?).

A suggestion:
If Meta-CVS is essentially CVS II (or CVS III, as someone pointed out as 
I was writing this), then why not try to merge the two projects (or 
parts of them) together?  Make what is currently CVS into the "CVS 
engine" and make Meta-CVS the "new" command-line portion of CVS.  Bundle 
them together and call it CVS 2.0.  Old clients can still connect to the 
CVS engine via pserver, ssh or whatever and it can handle old calls, 
while newer commands are handled by the CVS 2.0 client.  Perhaps this 
requires some change in thought and maybe some people will need to 
upgrade, but as the usual response to people's problems is "upgrade to 
the current version" I don't see how this is an issue.

> Okay, so if this is not legitimate, let's hear a concrete plan about
> how CVS can be extended to make a ``CVS II'' which is completely
> backward compatible with CVS I clients, and works as well as Meta-CVS.
> Better yet, let's see some code. It's not enough to propose alternative
> ideas when the existing idea is already coded. The CVS mailing list has
> seen more than a *decade* of idle discussions about this subject already.

As Don already pointed out, this is very unfair.  I don't have any 
interest in working with a project that is held tightly by a very small 
group of CVS gurus who feel that their way is the only way to do it. 
As I pointed out already, I was just trying to help an individual with a 
specific problem, not suggest that what CVS did or does is right or 
wrong (and as stated, I don't have an opinion about it, it works for me 
as is).  But somehow despite the small and narrow scope of my help, I 
get pulled into this morass.  You all should examine your intentions 
here; do you actually _want_ people to help?  If so, you might want to 
make this place more open to new or other ideas.  Why would I spend time 
trying to learn the CVS codebase and then make changes if I feel that 
whatever I do is simply going to be summarily rejected anyway?  I've 
submitted patches on other open source projects (incl. Binutils and GCC) 
and frankly I'd rather spend my precious time helping out projects that 
want the help.

- Steve





Re: outsider's perspective

2003-05-27 Thread Larry Jones
Kaz Kylheku writes:
>
> This is a legitimate way of creating a ``CVS II''.

Just as a historical note, the current CVS is *already* officially
"CVS II" -- CVS I was a collection of shell scripts.  So please don't
continue using that designation to refer to a successor.

-Larry Jones

At times like these, all Mom can think of is how long she was in
labor with me. -- Calvin




Re: outsider's perspective

2003-05-27 Thread Donald Sharp
On Tue, May 27, 2003 at 02:45:14PM -0700, Kaz Kylheku wrote:
> On Tue, 27 May 2003, Steve deRosier wrote:
> 
> > Also, if so many people NEED this functionality, why doesn't it get 
> > added to CVS? 
> 
> [...]
> 
> Okay, so if this is not legitimate, let's hear a concrete plan about
> how CVS can be extended to make a ``CVS II'' which is completely
> backward compatible with CVS I clients, and works as well as Meta-CVS.
> Better yet, let's see some code. It's not enough to propose alternative
> ideas when the existing idea is already coded. The CVS mailing list has
> seen more than a *decade* of idle discussions about this subject already.

In some respects I think this last paragraph is unfair.  I've seen
lots of different ideas over the last couple of years that get
squashed loudly whenever they are brought up.  Why would people want
to contribute when there is no interest in changing CVS?  Or when
people do show interest, they get yelled at for not doing it the
'pure cvs' way.

donald




CVS 1.11.6 on Windows?

2003-05-27 Thread Rob Clevenger
I just built the 1.11.6 source archive on Windows, and the version
number is reporting itself as 1.11.5.1.  Is the source archive
incorrect?

Also, I had to copy the functions locate_file_in_dir and locate_rcs from
the src/filesubr.c to windows-NT/filesubr.c to get it to compile.  Has
this already been fixed, or did I screw something up when trying to
build it?

Thanks,

Rob






Re: outsider's perspective

2003-05-27 Thread Steve deRosier
> No concurrent versioning system with a shared repository, and
> particularly not one that can operate in a client/server mode, can ever
> possibly make any use of ownership, nor even of most permissions bits.
> Ownership information, and most permissions bits, "MUST" always be
> specific to the client and it MUST NOT be dictated by the repository.

Yes, this makes perfect sense, and I agree with you.  Frankly I'm not 
even sure what it is that various people are asking with keeping the 
other meta info.  But it seems that there are other bits (not as in 1s 
and 0s) of meta information that people want to keep about directories. 
 Are symbolic links kept and versioned by CVS?

> If people would learn two things then all this stupidity would disappear
> in a puff of smoke:  (1) CVS is a text file content change tracking tool, and
> _only_ a text file content version tracking tool; (2) all these things
> (file permissions, ownerships, symbolic and hard links, etc.) can far
> FAR more elegantly, simply, and clearly be managed by a build script,
> the source for which can be stored in CVS.

Again, I agree with your point of (1).  Also, I think that a build 
script is a good way to handle things (but I've always been a huge fan 
of the power of make and use it for many things beyond just building my 
latest C/C++ projects).  But, what about large directory structures of a 
web site?  The directory information is meaningful.  And the person 
doing the maintenance can't always log in as the pseudo-user the web server 
runs as.  And a build script isn't necessarily meaningful in this 
context either.



> >  tar DOES NOT handle versioning or history 
> > information.  CVS does do this.  I was suggesting that somehow combining 
> > the two tools it may be possible to create a system that did what he was 
> > looking for.
> 
> How do you expect to meaningfully combine a tool that creates binary
> files with CVS?!?!?!?

Frankly, I felt that was an exercise for the reader.  But one idea (not 
necessarily the best idea or even a good idea) was the user could tar up 
the directory structure and put the tar archive into CVS (using the 
appropriate -k flag of course).  I would think you'd want to separately ci 
your text files that were in those directories though so the changes 
could be tracked better.  Perhaps not good and maybe not any better than 
just keeping a directory with date-munged-file-named tar files.  As I 
said, example and not a good idea.  :)

Again, the point was missed.  It was an example that by combining tools 
(using make or other build facility), the user could come up with 
something that would do what it was they were trying to do.  And as I 
recall, isn't that the whole point you've been trying to get across -> 
use a build facility of some sort to do the parts that CVS doesn't do?



> > Also, if so many people NEED this functionality, why doesn't it get 
> > added to CVS?
> 
> Have you not been paying any attention to the rationales I and others
> have given for why CVS is the way it is and why it doesn't do some
> things?

Of course I have been.  Considering that the same song and dance has 
been done here regularly to avoid making meaningful improvements in CVS, 
I could hardly miss it.  Many of the rationales are perfectly valid, but 
sometimes it sounds like you (and others) are rationalizing out a reason 
it can't be done simply because you don't want it to be done.  I'm often 
guilty of the same thing :) , but maybe we should try to be honest with 
ourselves and try to come up with a VISION of what CVS should be and 
where it should go in the future.  These problems and questions won't 
stop simply because we rationalize that CVS was designed to solve this 
set of problems but not that set.

I agree that CVS is for versioning text files.  But there are many text 
file formats (XML, HTML, RTF and so on; all are text files).  And 
some of these do depend on directory structure to be meaningful.  And 
some of these are occasionally not handled properly by CVS if the 
reports on this list are any indication.

CVS is great, and it handles text files fine.  But putting arbitrary 
roadblocks in front of your users defeats the whole purpose.

- Steve





Re: CVS 1.11.6 on Windows?

2003-05-27 Thread Mark D. Baushke
Rob Clevenger <[EMAIL PROTECTED]> writes:

> I just built the 1.11.6 source archive on Windows, and the version
> number is reporting itself as 1.11.5.1.  Is the source archive
> incorrect?

It appears that the tarball has a windows-NT/config.h file that was not
properly updated to use 1.11.6.  If you apply the following patch:

Index: windows-NT/config.h
===
RCS file: /cvs/ccvs/windows-NT/config.h,v
retrieving revision 1.46
diff -u -p -r1.46 config.h
--- windows-NT/config.h 20 Jan 2003 21:58:43 -  1.46
+++ windows-NT/config.h 27 May 2003 23:24:39 -
@@ -447,4 +447,4 @@ typedef int ssize_t;
  * platforms, like some of the Makefiles are.  That way, there is only one
  * place the version string needs to be updated by hand for a new release.
  */
-#define PACKAGE_STRING "Concurrent Versions System (CVS) 1.11.5.1"
+#define PACKAGE_STRING "Concurrent Versions System (CVS) 1.11.6"

to your sources and rebuild, it should look better.
 
> Also, I had to copy the functions locate_file_in_dir and locate_rcs from
> the src/filesubr.c to windows-NT/filesubr.c to get it to compile.  Has
> this already been fixed, or did I screw something up when trying to
> build it?

No, that looks like it is still a problem in the cvs sources.
(I'd go ahead and fix it, but I don't have a windows box to
test that things would even still compile...)

-- Mark




RE: CVS 1.11.6 on Windows?

2003-05-27 Thread Rob Clevenger
Nevermind, it's fixed as issue 119 in issuezilla

Rob


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rob
Clevenger
Sent: Tuesday, May 27, 2003 3:58 PM
To: [EMAIL PROTECTED]
Subject: CVS 1.11.6 on Windows?


[...]







Re: outsider's perspective

2003-05-27 Thread Paul Sander
>--- Forwarded mail from [EMAIL PROTECTED]

>> No concurrent versioning system with a shared repository, and
>> particularly not one that can operate in a client/server mode, can ever
>> possibly make any use of ownership, nor even of most permissions bits.
>> Ownership information, and most permissions bits, "MUST" always be
>> specific to the client and it MUST NOT be dictated by the repository.

>Yes, this makes perfect sense, and I agree with you.  Frankly I'm not 
>even sure what it is that various people are asking with keeping the 
>other meta info.  But it seems that there are other bits (not as in 1s 
>and 0s) of meta information that people want to keep about directories. 

Metadata I can think of off the top of my head:
- Mapping to the history containers (RCS files) of its contents.
- Access controls (who can update, who can commit).
- Execute permissions.

CVS actually maintains all of this stuff in very primitive ways.  At one
time or another every one of them has been pointed out as being deficient
in CVS in some way.

>  Are symbolic links kept and versioned by CVS?

Nope.  Arguments go both ways as to whether or not they should be.  I
personally believe they should not be (for portability reasons).

>> If people would learn two things then all this stupidity would disappear
>> in a puff of smoke:  (1) CVS is a text file content change tracking tool, and
>> _only_ a text file content version tracking tool; (2) all these things
>> (file permissions, ownerships, symbolic and hard links, etc.) can far
>> FAR more elegantly, simply, and clearly be managed by a build script,
>> the source for which can be stored in CVS.

>Again, I agree with your point of (1).  Also, I think that a build 
>script is a good way to handle things (but I've always been a huge fan 
>of the power of make and use it for many things beyond just building my 
>latest C/C++ projects).  But, what about large directory structures of a 
>web site?  The directory information is meaningful.  And the person 
>doing the maintenance can't always log in as the pseudo-user the web server 
>runs as.  And a build script isn't necessarily meaningful in this 
>context either.

I agree that CVS *is* an ASCII text file version tracking tool.  I disagree
that it *must be* only an ASCII text file version tracking tool.  All it
needs to lift this restriction is to remove the hard-coded invocations of
diff and diff3 in the user interface, replacing them with a more general
tool that adapts to the data type.  I've already demonstrated that this can
be done by posting a patch to this forum on or around Sept. 18, 2001.
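
In outline (this is not that patch, just the shape of the idea, with
invented tool names):

    #!/usr/bin/perl -w
    # Pick a comparison tool by data type, falling back to plain diff.
    use strict;

    my %tool_for = (
        xml => ['xmldiff'],                  # hypothetical XML-aware differ
        tar => ['tardiff'],                  # hypothetical tar-aware differ
    );

    my ($old, $new) = @ARGV;
    die "usage: $0 old new\n" unless defined $new;

    my ($ext) = $new =~ /\.([^.]+)$/;        # crude data-type detection
    my @cmd = $ext && $tool_for{lc $ext}
              ? @{ $tool_for{lc $ext} }
              : ('diff', '-u');
    exit(system(@cmd, $old, $new) >> 8);

CVS would invoke such a front end where it now hard-codes diff and
diff3, and a similar table would select the merge tool.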

As for the rest:  There's a difference between building a project and
deploying it.  Sometimes a build procedure does both (e.g. with a "make
install" target).  Other procedures build a project in a way that it can
be tested and as a side effect produce data that drive the installation
(e.g. files that drive the creation of installable packages).  The
installers should solve the problems of permissions and access controls
on the deployed product.  Shops are usually receptive to building the
necessary infrastructure to have appropriate people run the installers
successfully.

>>> tar DOES NOT handle versioning or history 
>>>information.  CVS does do this.  I was suggesting that somehow combining 
>>>the two tools it may be possible to create a system that did what he was 
>>>looking for.
>> 
>> How do you expect to meaninfully combine a tool that creates binary
>> files with CVS?!?!?!?

>Frankly, I felt that was an exercise for the reader.  But one idea (not 
>necessarily the best idea or even a good idea) was the user could tar up 
>the directory structure and put the tar archive into CVS (using the 
>approprate -k flag of course).  I would think you'd want to separtely ci 
>your text files that were in those directories though so the changes 
>could be tracked better.  Perhaps not good and maybe not any better than 
>just keeping a directory with date-munged-file-named tar files.  As I 
>said, example and not a good idea.  :)

I think that if you're going to tar up a bunch of files and store them in
CVS then you're probably better off just storing the files themselves.
There are exceptions, of course (NextStep being one of them).  And in these
cases, it's appropriate to use merge tools tailored for tar files.  (Yes,
you can do 2- and 3-way diffs on tar files, and 2- and 3-way merges.  I
believe these can even be automated for interactive and non-interactive
use in the "CVS" way.)

>>>Also, if so many people NEED this functionality, why doesn't it get 
>>>added to CVS?
>> 
>> Have you not been paying any attention to the rationales I and others
>> have given for why CVS is the way it is and why it doesn't do some
>> things?

>Of course I have been.  Considering that the same song and dance has 
>been done here regularly to avoid making meaningful improvments in CVS, 
>I could hardly miss it.  Many of the rationales are perfectly val

Re: outsider's perspective

2003-05-27 Thread Greg A. Woods
[ On Tuesday, May 27, 2003 at 14:45:14 (-0700), Kaz Kylheku wrote: ]
> Subject: Re: outsider's perspective
>
> One reason is that it doesn't have to be literally added into the CVS
> program, but rather imposed on top of it. CVS can be used as a
> subprocess in a version control system that has the functionality.
> This is a legitimate way of creating a ``CVS II''.

Note that the CVS we're talking about these days _is_ literally called
``CVS-II''.

The first CVS was implemented as a set of shell scripts.  See the paper.

-- 
Greg A. Woods

+1 416 218-0098;<[EMAIL PROTECTED]>;   <[EMAIL PROTECTED]>
Planix, Inc. <[EMAIL PROTECTED]>; VE3TCP; Secrets of the Weird <[EMAIL PROTECTED]>




Meta-CVS and cmd.exe

2003-05-27 Thread Matthew Herrmann
Hi All,

Following the recent discussions on directory versioning, I'm trying out
Meta-CVS on my PC with Cygwin, possibly moving some projects to it if all
goes well.  I can get it to talk fine through bash, but cmd.exe won't talk to it.

I've tried:

> bash mcvs

and that gives:

>bash mcvs
Meta-CVS requires a command argument.
Use mcvs -H to view help.

but

>bash mcvs -H

gives the same message.  Is there an easy way to get this to work?


TIA,

Matthew Herrmann
--
VB6/SQL/Java/CVS Consultancy
Far Edge Technology
http://www.faredge.com.au/


