On 10/21/2018 10:14 PM, Junio C Hamano wrote:
> Jeff King writes:
>
>> On Wed, Oct 10, 2018 at 11:59:38AM -0400, Ben Peart wrote:
>>
>>> +static unsigned long load_cache_entries_threaded(struct index_state *istate, const char *mmap, size_t mmap_size,
>>> +	unsigned long src_offset, int nr_threads, struct index_entry_offset_table *ieot)

From: Ben Peart

This patch helps address the CPU cost of loading the index by utilizing
the Index Entry Offset Table (IEOT) to divide loading and conversion of
the cache entries across multiple threads in parallel.
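
To make the division of work concrete, here is a rough, self-contained sketch of
the idea (not the patch code itself): the IEOT records, for each block of cache
entries, the byte offset at which the block starts and how many entries it holds,
so a worker thread can jump straight to its own blocks and parse them
independently instead of scanning the variable-length entries sequentially. The
struct layouts, the fixed-size parse_one_entry() stub, the two-thread split, and
every name below other than those in the quoted signature are simplified
assumptions made up for illustration.

/*
 * Illustrative sketch only, not the patch itself.  The struct layouts and
 * the parse_one_entry() stub are simplified stand-ins for git's real
 * on-disk entry format and conversion helpers.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

/* One IEOT block: where a run of cache entries starts and how many it holds. */
struct ieot_block {
	size_t offset;	/* byte offset of the first entry in the mapped index */
	int nr;		/* number of entries in this block */
};

struct thread_data {
	const char *mmap;		 /* the whole index file, memory mapped */
	size_t mmap_size;
	const struct ieot_block *blocks; /* the IEOT blocks assigned to this thread */
	int nr_blocks;
	unsigned long consumed;		 /* bytes of entry data this thread parsed */
};

/* Stand-in for the real per-entry parser; pretend every entry is 64 bytes. */
static unsigned long parse_one_entry(const char *p, size_t avail)
{
	(void)p;
	return avail < 64 ? avail : 64;
}

/*
 * Each worker starts at the offsets its IEOT blocks give it, so no thread
 * has to walk the variable-length entries that precede its range.
 */
static void *load_entries_worker(void *arg)
{
	struct thread_data *d = arg;

	for (int i = 0; i < d->nr_blocks; i++) {
		const char *p = d->mmap + d->blocks[i].offset;
		for (int j = 0; j < d->blocks[i].nr; j++) {
			unsigned long used = parse_one_entry(p,
					d->mmap_size - (size_t)(p - d->mmap));
			p += used;
			d->consumed += used;
		}
	}
	return NULL;
}

int main(void)
{
	static char fake_index[4096];	/* stands in for the mmap'd index file */
	struct ieot_block blocks[4] = {
		{ 0, 16 }, { 1024, 16 }, { 2048, 16 }, { 3072, 16 }
	};
	struct thread_data data[2];
	pthread_t threads[2];
	unsigned long total = 0;

	/* Split the IEOT blocks evenly across the worker threads. */
	for (int t = 0; t < 2; t++) {
		data[t].mmap = fake_index;
		data[t].mmap_size = sizeof(fake_index);
		data[t].blocks = blocks + t * 2;
		data[t].nr_blocks = 2;
		data[t].consumed = 0;
		pthread_create(&threads[t], NULL, load_entries_worker, &data[t]);
	}
	for (int t = 0; t < 2; t++) {
		pthread_join(threads[t], NULL);
		total += data[t].consumed;
	}
	printf("parsed %lu bytes of entries across 2 threads\n", total);
	return 0;
}

In the patch itself the thread count and the block assignment come from the
nr_threads and ieot parameters of load_cache_entries_threaded() quoted above.
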
I used p0002-read-cache.sh to generate some performance data:
Test w/100,000