On 12/21/2016 06:43 AM, Trevor Saunders wrote:
So a few interesting things have to be dealt with if we want to make this change.
I already mentioned the need to bias based on ref->offset so that the range
of bytes we're tracking is represented as 0..size.
While we know the length of the potential dead store, we don't know the
length of the subsequent stores that we hope will make the original a dead
store. Thus when we start clearing LIVE_BYTES based on those subsequent
stores, we have to normalize them against the ref->offset of the original
store.
What's even more fun is sizing. Those subsequent stores may be considerably
larger, which means a bitmap_clear_range call has to be a hell of a lot more
careful when working with sbitmaps (we'd just happily stomp all over memory
in that case), whereas with a bitmap the right things will "just happen".
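To make the normalization and the clamping concrete, here's a small
standalone sketch of the idea (the names and types are purely illustrative,
not the actual DSE code): bias the later store's byte range by the original
store's ref->offset, then clamp it to the tracked 0..size range before
clearing anything, so an oversized store can never run off the end of a
fixed-size bit vector.

#include <algorithm>
#include <vector>

/* Illustration only.  LIVE_BYTES tracks the bytes of the potential
   dead store (live_bytes.size () == orig_size), with element 0
   corresponding to the original store's ref->offset.  Clear the
   bytes covered by a later store.  */
static void
clear_killed_bytes (std::vector<bool> &live_bytes,
                    int orig_offset, int orig_size,
                    int later_offset, int later_size)
{
  /* Normalize the later store's range against the original store's
     starting offset.  */
  int start = later_offset - orig_offset;
  int end = start + later_size;

  /* The later store may begin before or extend past the original
     one; clamp so we never touch bits outside 0..orig_size.  */
  start = std::max (start, 0);
  end = std::min (end, orig_size);

  for (int i = start; i < end; i++)
    live_bytes[i] = false;
}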
On the positive side, since we've normalized the potential dead store's byte
range to 0..size, computing trims is easier because we inherently know how
many bits were originally set. So compute_trims becomes trivial and we can
simplify trim_complex_store a bit as well.
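Roughly, "trivial" here means the head trim is just the number of leading
dead bytes and the tail trim the number of trailing dead bytes. A standalone
sketch (not the real compute_trims, reusing the illustrative vector from the
sketch above):

#include <vector>

static void
compute_trims_sketch (const std::vector<bool> &live_bytes,
                      int *trim_head, int *trim_tail)
{
  int size = live_bytes.size ();

  /* Count leading dead bytes.  */
  *trim_head = 0;
  while (*trim_head < size && !live_bytes[*trim_head])
    (*trim_head)++;

  /* Count trailing dead bytes, stopping before the head trim.  */
  *trim_tail = 0;
  while (*trim_tail < size - *trim_head
         && !live_bytes[size - 1 - *trim_tail])
    (*trim_tail)++;
}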
And, of course, we don't have a bitmap_{clear,set}_range or a
bitmap_count_bits implementation for sbitmaps.
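For what it's worth, those helpers are straightforward to write for a
fixed-size, word-backed bit set. A standalone sketch of the general shape
(the struct and names here are made up for the example and are not sbitmap's
actual layout or API):

#include <algorithm>
#include <cstdint>
#include <vector>

/* Illustrative fixed-size bit set backed by 64-bit words.  */
struct simple_bitmap
{
  unsigned n_bits;
  std::vector<uint64_t> words;

  explicit simple_bitmap (unsigned n)
    : n_bits (n), words ((n + 63) / 64, 0) {}
};

/* Count the set bits one word at a time.  */
static unsigned
count_bits (const simple_bitmap &b)
{
  unsigned count = 0;
  for (uint64_t w : b.words)
    count += __builtin_popcountll (w);
  return count;
}

/* Set the bits in [START, START + COUNT), refusing to touch bits
   outside the map.  */
static void
set_range (simple_bitmap &b, unsigned start, unsigned count)
{
  unsigned end = std::min (start + count, b.n_bits);
  for (unsigned i = start; i < end; i++)
    b.words[i / 64] |= uint64_t (1) << (i % 64);
}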
It's all a bit of "ugh", but should be manageable.
Yeah, but that seems like enough work that you could reasonably stick
with bitmap instead.
Well, the conversion is done :-) It was largely as I expected, with the
devil being in the normalization details, which are well isolated.
p.s. sorry I've been falling behind lately.
It happens. You might want to peek briefly at Aldy's auto_bitmap class
as part of the 71691 patches. It looks like a fairly straightforward
conversion to auto_*. I'm sure there are all kinds of places we could use it.
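For anyone who hasn't looked at it, the attraction is the usual RAII one:
the wrapper owns the bitmap for its scope, so the explicit allocate/free
pair at each call site goes away. A standalone sketch of the pattern
(illustrative only, not the actual auto_bitmap from the 71691 patches):

#include <vector>

class auto_bitvec
{
public:
  explicit auto_bitvec (unsigned n) : m_bits (n, false) {}

  /* Implicit conversion so call sites that already take the
     underlying type keep working unchanged.  */
  operator std::vector<bool> & () { return m_bits; }

  /* Resource-owning wrappers shouldn't be copied.  */
  auto_bitvec (const auto_bitvec &) = delete;
  auto_bitvec &operator= (const auto_bitvec &) = delete;

private:
  /* Storage is released automatically when the wrapper goes out of
     scope.  */
  std::vector<bool> m_bits;
};

/* Usage sketch: declare `auto_bitvec live (size);` where the code used
   to allocate the bitmap and later explicitly free it.  */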
Jeff