On 2016-07-11 at 22:51, Eric Wong wrote:

> TL;DR: dumb HTTP clone from a certain badly-packed repo goes from
> ~2 hours to ~30 min; memory usage drops from 2G to 360M
> 
> 
> I hadn't packed the public repo at https://public-inbox.org/git
> for a few weeks.  As an admin of a small server with limited memory
> and CPU resources but fairly good bandwidth, I prefer clients
> use dumb HTTP for initial clones.

Hopefully the solution / workaround for the large initial clone
problem, namely bundles (`git bundle`), which can be transferred
resumably, will get standardized and automated.
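
Something along these lines already works by hand today (the URL and
paths below are made up for illustration):

  # server side: pack everything, including HEAD so a later clone
  # knows what to check out, into one file served as static content
  git bundle create repo.bundle --all HEAD

  # client side: the bundle is a plain file, so the download can be
  # resumed after interruption
  wget -c https://example.org/repo.bundle
  git clone repo.bundle myrepo
  cd myrepo
  # point origin at the live repository and catch up from there
  git remote set-url origin https://example.org/repo.git
  git fetch origin
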

Do you use bitmap indices for speeding up fetches?
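
(For reference, and assuming a reasonably recent git, enabling them
would be something like the following on the serving side; `-a`
matters, since bitmaps are only written when everything ends up in a
single pack:)

  # repack everything into one pack and write a reachability bitmap
  # next to it, speeding up object counting for smart-protocol fetches
  git repack -a -d --write-bitmap-index
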

BTW, IMVHO the problem with dumb HTTP is the latency (the client has
to walk the history itself, issuing a separate GET for each loose
object or pack it discovers), not the extra bandwidth needed...

Best,
-- 
Jakub Narębski

