Imagine the following scenario:

- trunk has several large files (> 20 MB) that are updated regularly.  These 
files are +only+ changed on trunk.
- there are several branches, each of which updates from trunk at least once a 
week.

Merging the large files from trunk takes an excessive amount of time, and it 
creates new full-size versions of files that previously took up effectively 
no space at all, since they were cheap copies.

I'm thinking about writing a script that will detect when a merge can 
actually be done as a copy. This should be doable by either:
- finding all files that are unchanged on the branch and simply copying the 
  latest versions from trunk, followed by a normal svn merge of the files 
  that have changed (a rough sketch of this option follows the list), or
- deleting the branch, then recreating it as a fresh copy of trunk and 
  copying the changed files from the old branch to the new one, again 
  followed by a normal svn merge.
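Here is a rough sketch of the first option in Python, assuming the svn 
command-line client is on PATH and the script runs from the root of a 
working copy of the branch. The repository URL, branch path, and LAST_SYNC 
revision are hypothetical placeholders, and creations, deletions, and moves 
are deliberately ignored here:

    #!/usr/bin/env python3
    # Sketch: replace branch files that are unchanged on the branch with
    # cheap copies of the latest trunk versions, so the follow-up
    # "svn merge" only has real changes left to merge.
    import subprocess

    REPO      = "https://svn.example.com/repo"    # hypothetical URL
    TRUNK     = REPO + "/trunk"
    BRANCH    = REPO + "/branches/feature"        # hypothetical branch
    LAST_SYNC = 1234  # revision of the last sync merge from trunk

    def svn(*args):
        """Run an svn subcommand and return its stdout."""
        return subprocess.run(["svn", *args], check=True,
                              capture_output=True, text=True).stdout

    def changed_paths(root):
        """Paths (relative to root) modified since the last sync."""
        out = svn("diff", "--summarize",
                  f"{root}@{LAST_SYNC}", f"{root}@HEAD")
        paths = set()
        for line in out.splitlines():
            if not line.strip():
                continue
            url = line.split()[-1]
            if url != root:                 # skip the root entry itself
                paths.add(url[len(root) + 1:])
        return paths

    # Files trunk changed but the branch did not: replace each one in
    # the branch working copy with a cheap copy of the trunk version.
    for rel in changed_paths(TRUNK) - changed_paths(BRANCH):
        svn("rm", rel)
        svn("cp", f"{TRUNK}/{rel}@HEAD", rel)

    # Commit the replacements, then run the usual sync merge; only
    # files genuinely edited on both sides still need a real merge.

Replacing a file with svn rm followed by svn cp from the trunk URL records 
it as a replacement with copy history, so it stays a cheap copy instead of 
becoming a full new text on the branch.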

Naturally the script will have to handle file creations, deletions and moves 
on the branch, so it won't be trivial; the sketch below shows how those 
cases can at least be told apart.
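The status letters in the same "svn diff --summarize" output look like 
enough to distinguish those cases. A sketch reusing the svn() helper and 
LAST_SYNC placeholder from above (moves are the fiddly part, since svn 
reports a move as a delete plus an add that would have to be paired back 
up):

    # Classify branch-side changes by the first status column of
    # "svn diff --summarize": A = added, D = deleted, M = modified.
    def branch_changes_by_kind(root):
        out = svn("diff", "--summarize",
                  f"{root}@{LAST_SYNC}", f"{root}@HEAD")
        kinds = {"A": set(), "D": set(), "M": set()}
        for line in out.splitlines():
            if not line.strip():
                continue
            status, url = line[0], line.split()[-1]
            if status in kinds and url != root:
                kinds[status].add(url[len(root) + 1:])
        return kinds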

What I'd really like is a true 'svn rebase', but that doesn't seem to be 
possible.

Is this a realistic approach to my problem, or is there another approach?

Thanks
Lezz Giles
