On Monday, November 24, 2003, at 04:29 PM, Noel J. Bergman wrote:

if the whole thing is generated from source files in CVS,
so long as the actual source code is backed up, the web
site CAN be recovered in a relatively painless fashion
-- just restore the CVS backup and build the site.  Am
I missing something?

Consider the fact that not all of the project web sites are using the same
build tools. Consider, too, that sometimes you need a particular version,
or platform, to do a build due to bugs in a build tool. Add in the time
required to do the build, and any manual steps in the process. Scale that
to a couple of hundred projects. Those factors are a big part of why the
infrastructure team doesn't consider that a reasonable option. Basically,
if you don't do it in CVS, you're on your own. And that's not considered A
Good Thing. Another thing is that we can verify that a site hasn't been
tampered with by comparing it to the CVS.
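
As a rough illustration of that kind of check (nothing the infrastructure team
actually runs, and the paths are made up): compare checksums between a scratch
checkout and the live docroot, and flag anything that differs.

    import hashlib
    import os

    def tree_digests(root):
        """Map each file's path relative to root to its MD5 digest."""
        digests = {}
        for dirpath, dirnames, filenames in os.walk(root):
            # Skip the per-directory CVS bookkeeping folders in the checkout.
            dirnames[:] = [d for d in dirnames if d != "CVS"]
            for name in filenames:
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                with open(path, "rb") as handle:
                    digests[rel] = hashlib.md5(handle.read()).hexdigest()
        return digests

    def report_differences(checkout_dir, live_dir):
        """Print anything that differs between a fresh checkout and the live site."""
        expected = tree_digests(checkout_dir)
        actual = tree_digests(live_dir)
        for rel in sorted(set(expected) | set(actual)):
            if expected.get(rel) != actual.get(rel):
                print("differs or missing:", rel)

    if __name__ == "__main__":
        # Hypothetical paths: a scratch CVS checkout and the live docroot.
        report_differences("/tmp/site-checkout", "/www/incubator.apache.org")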


Stefano has put forth a fairly detailed proposal for a build system. There
is also discussion on the infrastructure list of getting additional
equipment, so that one server could handle site building from CVS. The
public server would synchronize from the build server. That could obviate
the need to keep a copy of the generated artifacts.
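
None of that is decided, but the sync half of the idea is simple enough to
sketch: the public server would periodically mirror the finished output from
the build box. The host name, paths, and schedule below are purely
hypothetical placeholders.

    import subprocess

    # Hypothetical locations: a build box that checks out and builds the sites,
    # and the docroot on the public server that mirrors the finished output.
    BUILD_HOST = "build.apache.org"          # placeholder, not an agreed-on host
    BUILD_OUTPUT = "/home/builder/sites/"
    PUBLIC_DOCROOT = "/www/"

    def pull_built_sites():
        """Mirror the generated sites from the build server onto the public server."""
        subprocess.check_call([
            "rsync", "-az", "--delete",
            "%s:%s" % (BUILD_HOST, BUILD_OUTPUT),
            PUBLIC_DOCROOT,
        ])

    if __name__ == "__main__":
        pull_built_sites()  # e.g. run from cron every few hours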


As I said, I don't believe that anyone thinks that keeping 100s of megabytes
of generated artifacts in CVS is a great idea.

That seems like a lot of work. Has anyone considered timestamped tarballs (e.g., incubator-geronimo-20031124)? We could simply upload a new tarball when we want the site updated. The server could manage disk space by deleting the oldest tarballs, and it would automatically update the site when it discovers a newer tarball.
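
Roughly what I have in mind, as a sketch only -- the paths, the
project-YYYYMMDD.tar.gz naming pattern, and the keep-the-last-three policy are
all placeholders:

    import os
    import re
    import tarfile
    from collections import defaultdict

    # Hypothetical layout: projects upload "project-YYYYMMDD.tar.gz" files into
    # DROP_DIR, and the newest one per project is unpacked into SITE_DIR.
    DROP_DIR = "/home/sitedrops"
    SITE_DIR = "/www/sites"
    KEEP = 3  # how many tarballs to keep per project before pruning

    NAME_RE = re.compile(r"^(?P<project>.+)-(?P<stamp>\d{8})\.tar\.gz$")

    def scan_drops():
        """Group uploaded tarballs by project, oldest first, newest last."""
        drops = defaultdict(list)
        for name in os.listdir(DROP_DIR):
            match = NAME_RE.match(name)
            if match:
                drops[match.group("project")].append((match.group("stamp"), name))
        for versions in drops.values():
            versions.sort()
        return drops

    def deploy_and_prune():
        for project, versions in scan_drops().items():
            stamp, newest = versions[-1]
            target = os.path.join(SITE_DIR, project)
            marker = os.path.join(target, ".deployed")
            deployed = open(marker).read().strip() if os.path.exists(marker) else ""
            if stamp > deployed:
                # A newer tarball appeared: unpack it over the project's site dir.
                os.makedirs(target, exist_ok=True)
                with tarfile.open(os.path.join(DROP_DIR, newest)) as tar:
                    tar.extractall(target)
                with open(marker, "w") as handle:
                    handle.write(stamp)
            # Manage disk space by discarding all but the most recent few uploads.
            for _, old in versions[:-KEEP]:
                os.remove(os.path.join(DROP_DIR, old))

    if __name__ == "__main__":
        deploy_and_prune()  # e.g. run periodically from cron

Run from cron, that would give automatic updates and bounded disk use without anyone keeping generated artifacts in CVS.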


I think it would be great to have an automatic build system for the site and daily binaries, but it seems like a lot of work if the motivation is just to eliminate the '190MB CVS problem.'

-dain


