On Thu, 26 Apr 2018 at 10:25 +0900, Junio C Hamano wrote:
> Marc Branchaud <marcn...@xiplink.com> writes:
> 
> > > But Git is not an archiver (tar), but is a source code control
> > > system, so I do not think we should spend any extra cycles to
> > > "improve" its behaviour wrt the relative ordering, at least for the
> > > default case.  Only those who rely on having build artifact *and*
> > > source should pay the runtime (and preferably also the
> > > maintenance) cost.
> > 
> > Anyone who uses "make" or some other mtime-based tool is affected by
> > this.  I agree that it's not "Everyone" but it sure is a lot of
> > people.
> 
> That's an exaggerated misrepresentation.  Only those who put build
> artifacts as well as source into SCM *AND* depend on mtime are
> affected.
> 
> A shipped tarball often contains configure.in as well as the
> generated configure, so that consumers can just say ./configure
> without needing the whole autoconf toolchain to regenerate it (I have
> also heard horror stories that this is done to pin the exact version
> of autoconf and avoid compatibility issues), but do people arrange
> for configure to be regenerated from configure.in automatically in
> the Makefile of such a project when building the default target?  In
> any case, that is a tarball use case, not an SCM one.
> 
> > Are we all that sure that the performance hit is that drastic?  After
> > all, we've just done write_entry().  Calling utime() at that point
> > should just hit the filesystem cache.
> 
> I do not know about others, but I personally am more disturbed by
> the conceptual ugliness that comes from having to have such a piece
> of code in the codebase.
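
For context, the change Marc is describing is small: right after
checkout writes a file, reset its mtime to the timestamp recorded for
it.  A minimal sketch of the idea, not the actual patch; the
restore_mtime() helper and the commit_time parameter here are
illustrative only:

    #include <time.h>
    #include <utime.h>

    /*
     * Illustrative helper: reset the mtime of a file that
     * write_entry() has just created.  Its inode is still hot in
     * the filesystem cache at that point, so the extra cost of the
     * utime() call should stay low.
     */
    static int restore_mtime(const char *path, time_t commit_time)
    {
            struct utimbuf times;

            times.actime = commit_time;     /* access time */
            times.modtime = commit_time;    /* what make(1) compares */

            return utime(path, &times);
    }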

For the record, we're using this with ebuilds and their corresponding
cache files (which are expensive to generate).  We use a separate
repository that combines sources and cache files, to keep
the development repository clean.  I have researched different
solutions for this, and git turned out to be the best option for
incremental updates for us.

Tarballs are out of the question, unless you expect users to fetch
>100 MiB every time, and they are also expensive to update.  Deltas of
tarballs are just slow and require storing a lot of extra data.  Rsync
is not very efficient for frequent updates, and has significant
overhead on every run.  With all its disadvantages, git is still what
lets our users fetch updates frequently with minimal network overhead.

So what did I do to deserve being called insane here?  Is it because I
wanted to use the tools that work for us?  Because I figured out that I
could improve support for our use case without really harming anyone in
the process?

-- 
Best regards,
Michał Górny
