On 04/20/2018 07:34 PM, Jonathan Tan wrote:
> On Tue, 17 Apr 2018 16:34:39 +0000
> Jameson Miller <jam...@microsoft.com> wrote:

> > Jameson Miller (5):
> >    read-cache: teach refresh_cache_entry to take istate
> >    Add an API creating / discarding cache_entry structs
> >    mem-pool: fill out functionality
> >    Allocate cache entries from memory pools
> >    Add optional memory validations around cache_entry lifecycle

> In this patch set, there is no enforcement that the cache entry created
> by make_index_cache_entry() goes into the correct index when
> add_index_entry() is invoked. (Junio described similar things, I
> believe, in [1].) This might be an issue when we bring up and drop
> multiple indexes, and dropping one index causes a cache entry in another
> to become invalidated.

Correct - it is up to the caller to coordinate this. The code should be set up so that this is not a problem. In the case of a split-index, the cache entries should be allocated from the memory pool associated with the "most common" / base index. If you find a place I missed or that seems questionable, or if you have suggestions, I would be glad to look into it.
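
To make that concrete, here is roughly the invariant I expect callers to maintain, sketched with toy types (illustrative only, not code from the series; the pool here is a trivial stand-in for the real mem-pool implementation):

#include <stdlib.h>
#include <string.h>

/* Toy stand-ins for the real structures; illustrative only. */
struct pool_block {
    struct pool_block *next;
};

struct mem_pool {
    struct pool_block *blocks;
};

struct cache_entry {
    unsigned int ce_namelen;
    char name[1];               /* flex array in the real code */
};

struct index_state {
    struct mem_pool *ce_mem_pool;   /* owns the lifetime of its entries */
};

static void *mem_pool_alloc(struct mem_pool *pool, size_t len)
{
    struct pool_block *b = calloc(1, sizeof(*b) + len);
    b->next = pool->blocks;
    pool->blocks = b;
    return b + 1;
}

static void mem_pool_discard(struct mem_pool *pool)
{
    while (pool->blocks) {
        struct pool_block *next = pool->blocks->next;
        free(pool->blocks);
        pool->blocks = next;
    }
}

/*
 * An entry is allocated from the pool of the index it is destined for;
 * the caller must only add it to that index (or to an index sharing the
 * same pool, which is how the split index and its base index work here).
 */
static struct cache_entry *make_entry_for_index(struct index_state *istate,
                                                const char *name)
{
    size_t namelen = strlen(name);
    struct cache_entry *ce =
        mem_pool_alloc(istate->ce_mem_pool, sizeof(*ce) + namelen);
    memcpy(ce->name, name, namelen + 1);
    ce->ce_namelen = (unsigned int)namelen;
    return ce;
}

int main(void)
{
    struct mem_pool base_pool = { NULL };
    struct index_state base = { &base_pool };
    struct index_state split = { &base_pool };  /* shares the base pool */

    struct cache_entry *a = make_entry_for_index(&base, "Makefile");
    struct cache_entry *b = make_entry_for_index(&split, "README");
    (void)a; (void)b;       /* both stay valid until base_pool goes away */

    mem_pool_discard(&base_pool);   /* frees every entry in one shot */
    return 0;
}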


> One solution is to store the index for which the cache entry was created
> in the cache entry itself, but that does increase its size. Another is

Yes, this is an option. For this initial patch series, I decided not to add extra fields to the cache_entry type, but I think incorporating this in cache_entry is a viable option, and it has some positive properties.

> to change the API such that a cache entry is created and added in the
> same function, and then have some rollback if the cache entry turns out
> to be invalid (to support add-empty-entry -> fill -> verify), but I
> don't know if this is feasible. Anyway, all these alternatives should be
> at least discussed in the commit message, I think.

I can include a discussion of these in the commit message. Thanks.
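
For what it's worth, the first alternative could look roughly like the sketch below - a hypothetical owning-pool pointer in cache_entry that the add path checks. None of this is in the series as posted; it is just to make the trade-off (one extra pointer per entry) concrete:

#include <assert.h>
#include <stdlib.h>

struct mem_pool { int unused; };    /* only pointer identity matters here */

struct cache_entry {
    struct mem_pool *ce_pool;       /* hypothetical extra field: owning pool */
    unsigned int ce_namelen;
    char name[1];
};

struct index_state {
    struct mem_pool *ce_mem_pool;
    struct cache_entry **cache;
    unsigned int cache_nr, cache_alloc;
};

/* The add path could then reject entries created for a different index. */
static void add_index_entry_checked(struct index_state *istate,
                                    struct cache_entry *ce)
{
    assert(ce->ce_pool == istate->ce_mem_pool);
    if (istate->cache_nr == istate->cache_alloc) {
        istate->cache_alloc = istate->cache_alloc ? 2 * istate->cache_alloc : 8;
        istate->cache = realloc(istate->cache,
                                istate->cache_alloc * sizeof(*istate->cache));
    }
    istate->cache[istate->cache_nr++] = ce;
}

int main(void)
{
    struct mem_pool pool = { 0 };
    struct index_state istate = { &pool, NULL, 0, 0 };
    struct cache_entry ce = { &pool, 0, "" };   /* owned by the same pool */

    add_index_entry_checked(&istate, &ce);  /* would assert on a mismatch */
    free(istate.cache);
    return 0;
}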


> The make_transient_cache_entry() function might be poorly named, since
> as far as I can tell, the entries produced by that function are actually
> the longest lasting, since they are never freed.

They should always be freed (and are usually freed close to where they are allocated, or by the calling function). If you see an instance where this is not the case, please point it out, because that is not the intention.
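
The intended pattern is roughly the following (toy types again, not the actual call sites): a transient entry never goes into an index and is released in the same scope, or by the immediate caller:

#include <stdlib.h>
#include <string.h>

/* Toy "transient" entry: plain heap allocation, never tied to a pool. */
struct cache_entry {
    unsigned int ce_namelen;
    char name[1];
};

static struct cache_entry *make_transient_entry(const char *name)
{
    size_t namelen = strlen(name);
    struct cache_entry *ce = calloc(1, sizeof(*ce) + namelen);
    memcpy(ce->name, name, namelen + 1);
    ce->ce_namelen = (unsigned int)namelen;
    return ce;
}

int main(void)
{
    struct cache_entry *ce = make_transient_entry("foo.c");
    /* ... use the entry; it is never added to an index ... */
    free(ce);   /* freed in the same scope (or by the immediate caller) */
    return 0;
}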


> Along those lines, I was slightly surprised to find out in patch 4 that
> cache entry freeing is a no-op. That's fine, but in that case, it would
> be better to delete all the calls to the "discard" function, and
> document in the others that the entries they create will only be freed
> when the memory pool itself is discarded.

I can add a comment inside the function body. In the next commit, I do add logic to perform extra (optional) verification in the discard function. I did wrestle with this, but I feel there is value in keeping the points where it is appropriate to free these entries marked in the code, even if this particular implementation does not make use of them right now. Hopefully the verification logic added in 5/5 is enough to justify keeping this function around.
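
As a rough illustration of what I mean (toy code only - the switch name here is made up and the exact validation mechanism in 5/5 is not necessarily this one), the discard function stays a no-op for normal runs but can poison the entry when validation is enabled:

#include <stdlib.h>
#include <string.h>

struct cache_entry {
    unsigned int ce_namelen;
    char name[1];
};

/* Toy switch; the real mechanism and name in 5/5 may differ. */
static int validate_cache_entries;

static void discard_entry(struct cache_entry *ce)
{
    /*
     * Pool-allocated entries are not freed individually; their memory is
     * reclaimed when the owning pool is discarded.  With validation on,
     * poison the entry so a use-after-discard is more likely to be caught.
     */
    if (validate_cache_entries && ce)
        memset(ce, 0, sizeof(*ce) + ce->ce_namelen);
}

int main(void)
{
    size_t namelen = strlen("foo.c");
    struct cache_entry *ce;

    validate_cache_entries = getenv("VALIDATE_CE") != NULL;

    /* stand-in for a pool allocation; the layout is what matters here */
    ce = calloc(1, sizeof(*ce) + namelen);
    memcpy(ce->name, "foo.c", namelen + 1);
    ce->ce_namelen = (unsigned int)namelen;

    discard_entry(ce);  /* a no-op unless validation is switched on */
    free(ce);           /* only because this sketch used calloc directly */
    return 0;
}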


> [1] https://public-inbox.org/git/xmqqwox5i0f7....@gitster-ct.c.googlers.com/
