On Jan 31, 2013, at 20:05, Jason Keltz wrote:

> On 31/01/2013 6:06 PM, Bob Archer wrote:
>> 
>> What you need to do could work. I assume this "software", in order to run, can 
>> be built or whatever during your nightly update on each client?
>> 
>> You keep saying "rsyncing" ... you wouldn't use that, of course; you would use 
>> the svn client binary.
> Actually, maybe I wasn't clear..
> The software includes various packages like say, Matlab, or Maple, or 
> whatever else, already installed...  imagine a directory on the fileserver.. 
> say, /local/software, which includes "bin", "lib", etc... I'm not 
> "installing" the software. It's already been installed.. I'm just syncing 
> a directory between machines..
> As for rsyncing.. I would rsync the software from the "file server" to the 
> "software distribution" server, and then use svn from there to check in all 
> the changes.
> 
>> For your initial load... if the software is on the server where you will 
>> house your repository, you can just import the data into the repository from 
>> that file... there is no need to send the data twice. In other words, you 
>> can have both a working copy and a repository on your central server.
> Yes.  Initially I would do an import, but the problem is... the next day, the 
> software gets updated on the "real" file server... say, a new version of Matlab 
> or something...  in the evening, I want the process to run that would rsync 
> the data (with all the changes) from the file server to the software 
> distribution server,  do something to commit the changes, then the 100 
> clients would eventually each "svn update".     However, to be able to commit 
> the changes, I need to have a working copy on the software distribution 
> server....
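
Just to spell out what that nightly job on the distribution server would have 
to do, here is a rough sketch; the host and path names (fileserver, 
/srv/svn-wc/software) are made up:

    # pull the installed tree from the file server into the working copy,
    # keeping the Subversion metadata out of the sync
    rsync -a --delete --exclude=.svn fileserver:/local/software/ /srv/svn-wc/software/

    cd /srv/svn-wc/software
    # schedule new items for addition and now-missing items for deletion
    svn add --force .
    svn status | awk '/^!/ {print $2}' | xargs -r svn rm
    svn commit -m "nightly software sync $(date +%F)"

(The status/awk line assumes no spaces in the path names.) That is three extra 
svn steps every night, on top of the rsync you were already doing.
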
> 
>>> However, after the rsync happens, I now need to run a command that would
>>> update the repository with the state of the working directory.  However, 
>>> it's not
>>> exactly clear how this would work?  Running an "svn update"
>> "svn update" brings any changes in the repository to your working copy. "svn 
>> commit" does the opposite... it puts any changes in a working directory into 
>> the repository.
> See, this is where I'm confused... I created a few directories including 
> "bin" and "pkg" for a test.  All committed fine... erased them from the 
> working copy, did a commit and then a status, and I see:
> 
> !       bin
> !       pkg
> 
> but when I go into a different directory and check out the current state..
> 
> A    pkg
> A    bin
> Checked out revision 2.
> 
> they're still there...

Correct. Subversion does not track your movements. You must tell Subversion 
what you are moving and deleting by doing the moves and deletes using "svn mv" 
and "svn rm", not using regular OS commands.


>> Hth...
>> 
>> That said, if this is actual software, wouldn't using one of the many 
>> package management tools available in Linux be a better fit?
> 
> The thing is, I'm moving around already installed software, and there's 
> nothing that great, as far as I can see, for doing that. The Twitter guys are 
> using something they wrote called "murder", which uses BitTorrent to do this 
> kind of thing... excellent idea, but it uses Ruby and several other tools, 
> and I don't want to get into that at the moment...

Subversion is not going to be a satisfactory solution for this use case. 
Besides all the issues you're describing with setting up the server-side 
infrastructure, there is the problem already mentioned: when you check out a 
working copy on your clients, each one keeps a "duplicate" pristine copy of 
everything. So if you have 60GB of software, it'll take up 120GB of space on 
each client machine.

Subversion is not a software distribution tool; it is a document and revision 
management system. Use a different tool. As someone else said, rsync seems like 
a good tool for this job; I didn't understand why you think using rsync 
directly between your file server and your clients won't work.
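
A single rsync per client, run overnight from cron, would cover it; a sketch, 
with placeholder host and path names:

    # on each client: pull the installed tree straight from the file server
    rsync -a --delete fileserver:/local/software/ /local/software/

No repository, no working copy, and no doubled disk usage on the clients.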


