On 10/9/2018 5:30 AM, Junio C Hamano wrote:
> Jonathan Tan writes:
>> @@ -1635,6 +1635,7 @@ int unpack_trees(unsigned len, struct tree_desc *t, struct unpack_trees_options
>> 		o->result.cache_tree = cache_tree();
>> 	if (!cache_tree_fully_valid(o->result.cache_tree))
On 10/8/2018 5:48 PM, Jonathan Tan wrote:
> Whenever a sparse checkout occurs, the existence of all blobs in the
> index is verified, whether or not they are included or excluded by the
> .git/info/sparse-checkout specification. This degrades performance,
> significantly in the case of a partial clone, because a lazy fetch
> occurs whenever the [...]
Jonathan Tan writes:
> @@ -1635,6 +1635,7 @@ int unpack_trees(unsigned len, struct tree_desc *t, struct unpack_trees_options
> 		o->result.cache_tree = cache_tree();
> 	if (!cache_tree_fully_valid(o->result.cache_tree))
Jonathan Tan writes:
> Because cache_tree_update() is used from multiple places, this new
> behavior is guarded by a new flag WRITE_TREE_SKIP_WORKTREE_MISSING_OK.
The name of the new flag is a mouthful, but we know we do not need to
materialize these blobs (exactly because the skip-worktree bit [...]