On Thu, Oct 11 2018, Jonathan Tan wrote:

> Using per-commit filters and restricting the bloom filter to a single
> parent increases the relative power of the filter in omitting tree
> inspections compared to the original (107/53096 vs 1183/66459), but the
> lack of coverage w.r.t. the non-first parents had a more significant
> effect than I thought (1.29s vs .24s). It might be best to have one
> filter for each (commit, parent) pair (or, at least, the first two
> parents of each commit - we probably don't need to care that much about
> octopus merges) - this would take up more disk space than if we only
> store filters for the first parent, but is still less than the original
> example of storing information for all commits in one filter.
>
> There are more possibilities like dynamic filter sizing, different
> hashing, and hashing to support wildcard matches, which I haven't looked
> into.

Another way to deal with that is to have the filter store changes since
the merge base, from an email of mine back in May[1] when this was
discussed:

    From: Ævar Arnfjörð Bjarmason <ava...@gmail.com>
    Date: Fri, 04 May 2018 22:36:07 +0200
    Message-ID: <87h8nnxio8....@evledraar.gmail.com> (raw)

    On Fri, May 04 2018, Jakub Narebski wrote:

    (Just off-the-cuff here and I'm surely about to be corrected by
    Derrick...)

    > * What to do about merge commits, and octopus merges in particular?
    >   Should Bloom filter be stored for each of the parents?  How to ensure
    >   fast access then (fixed-width records) - use large edge list?

    You could still store it fixed-width, you'd just say that if you
    encounter a merge with N parents the filter wouldn't store files changed
    in that commit, but rather whether any of the N (including the merge)
    had changes to files as of their common merge-base.

    Then if they did you'd need to walk all sides of the merge where each
    commit would also have the filter to figure out where the change(s)
    was/were, but if they didn't you could skip straight to the merge base
    and keep walking.
    [...]

Ideas are cheap and I don't have any code to back that up, but I thought
I'd mention it in case someone finds it interesting.
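
To make that a bit more concrete, here is a very rough sketch of what
such a walk could look like. It's purely illustrative Python with
made-up helper names (filter_for(), merge_base_of() and friends), not
anything resembling git's actual walk code:

    # Hypothetical sketch of the "skip to the merge base" walk; all the
    # helpers passed in are made-up names, nothing here exists in git.
    def walk_touching(path, tip, filter_for, merge_base_of, parents_of,
                      commit_touches_path):
        """Yield commits reachable from `tip` that change `path`."""
        todo = [tip]
        seen = set()
        while todo:
            commit = todo.pop()
            if commit in seen:
                continue
            seen.add(commit)

            # Filter recording whether any side of the merge changed
            # anything relative to the common merge base (None if this
            # commit has no such filter).
            bf = filter_for(commit)
            if bf is not None and path not in bf:
                # Nothing changed `path` since the merge base, so skip
                # the whole section and resume the walk from there.
                todo.append(merge_base_of(commit))
                continue

            if commit_touches_path(commit, path):
                yield commit
            todo.extend(parents_of(commit))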

Thinking about this again I wonder if something like that could be
generalized further, i.e. in the abstract the idea is really whether we
can store one filter covering N commits so we can skip across those N
commits in the walk as an optimization; doing this for merges is just an
implementation detail.

So what if the bloom filters were this sort of structure:

    <commit_the_filter_is_for> = [<bloom bitmap>, <next commit with filter>]

So e.g. given a history like ("-> " = parent relationship)

    A -> B
    B -> C
    C -> D
    D -> E
    E -> F
    F -> G

We could store:

    A -> B [<bloom bitmap for A..D>, D]
    B -> C
    C -> D
    D -> E [<bloom bitmap for D..F>, F]
    E -> F
    F -> G [<bloom bitmap for F..G>, G]

Note how the bitmaps aren't evenly spaced. That's because some algorithm
would have walked the graph and e.g. decided that from A..D we had few
enough changes that the bitmap should apply for 4 commits, and then 3
for the next set etc. Whether some range was worth extending could just
be a configurable implementation detail.
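
In the same hand-wavy spirit, a walk using that structure might look
something like the sketch below, where `filters` is the mapping shown
above and the other helper names are again made up:

    # Made-up sketch; `filters` maps a commit to (bloom filter for the
    # span, first commit after the span), e.g.
    # {"A": (<bitmap for A..D>, "D"), "D": (<bitmap for D..F>, "F"), ...}
    def walk_skipping(path, tip, filters, first_parent_of,
                      commit_touches_path):
        """Walk first-parent history from `tip`, yielding commits that
        change `path`."""
        commit = tip
        while commit is not None:
            span = filters.get(commit)
            if span is not None and path not in span[0]:
                # The span's filter says nothing in it touched `path`,
                # so jump straight to the next commit with a filter
                # instead of walking every commit in between.
                commit = span[1]
                continue
            if commit_touches_path(commit, path):
                yield commit
            commit = first_parent_of(commit)

Since a bloom filter can return false positives but never false
negatives, skipping a span on a miss is safe; on a (possibly false) hit
you just fall back to walking the span commit by commit.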

1. https://public-inbox.org/git/87h8nnxio8....@evledraar.gmail.com/
