On Saturday, 13 February 2016 at 22:01:45 UTC, deadalnix wrote:
On Saturday, 13 February 2016 at 21:10:50 UTC, Andrei Alexandrescu wrote:
There's no need. I'll do the implementation with the prefix,
and if you do it with a global hashtable within the same or
better speed, my hat is off to you.
That is a false dichotomy. What about storing the metadata at an
address that is computable from the object's address, while not
being contiguous with the allocated object? Is subtracting a
constant really the only option here? (Hint: it is not.)
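One such scheme (a sketch in C++ with made-up constants — the thread does not prescribe this design): over-align each block and keep the metadata at the block's start, so any object inside the block can locate its metadata by masking its own address rather than subtracting a fixed offset.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Illustrative non-contiguous metadata scheme: every allocation lives
// inside a block aligned to kBlockSize, and the block's first bytes hold
// the metadata. Any interior pointer recovers the metadata address by
// masking, so metadata and object need not be adjacent.
constexpr std::uintptr_t kBlockSize = 4096; // alignment/size, an assumption

struct BlockMeta {
    std::uint32_t refCount;
    std::uint32_t flags;
};

inline BlockMeta* metaFor(const void* obj) {
    // Round the interior address down to the block boundary.
    auto addr = reinterpret_cast<std::uintptr_t>(obj);
    return reinterpret_cast<BlockMeta*>(addr & ~(kBlockSize - 1));
}
```

Usage: allocate one block with `std::aligned_alloc(kBlockSize, kBlockSize)`, write the metadata at its start, and hand out interior pointers; `metaFor` finds the metadata from any of them.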
1) False-sharing overhead:
If you look at the case where the allocated data is non-shared,
the prefix actually gives you better cache locality; a shared
global metadata table, by contrast, turns every metadata access
into traffic on the cache-coherency system.
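A minimal sketch of the prefix layout under discussion (the names and the malloc-based backing are my assumptions — this is not D's AffixAllocator itself): the metadata sits immediately before the payload, so a single cache fill typically brings in both.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Minimal prefix-allocator sketch (illustrative, not the actual
// AffixAllocator implementation). The metadata is stored directly
// before the payload it describes.
struct Prefix {
    std::size_t refCount;
};

inline void* affixAllocate(std::size_t n) {
    auto* raw = static_cast<char*>(std::malloc(sizeof(Prefix) + n));
    if (!raw) return nullptr;
    new (raw) Prefix{1};         // construct the metadata in place
    return raw + sizeof(Prefix); // hand out the payload address
}

inline Prefix* prefixOf(void* payload) {
    // The "subtract a constant" lookup: metadata is contiguous with the data.
    return reinterpret_cast<Prefix*>(
        static_cast<char*>(payload) - sizeof(Prefix));
}

inline void affixDeallocate(void* payload) {
    std::free(static_cast<char*>(payload) - sizeof(Prefix));
}
```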
2) Unfriendly allocation size:
In general you're correct, but in the end, it depends on the type
of data. If you consider the case of shared_ptr - refcount as
metadata + ptr as the actual allocation - you get a nice
allocation size of 2*size_t.sizeof.
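The two-word claim can be checked directly; a C++ rendering, assuming a typical ABI where pointers and size_t have the same width:

```cpp
#include <cstddef>

// The refcount-as-metadata + pointer-as-payload layout from the post
// packs into exactly two words on common 32/64-bit ABIs (an assumption
// about the target, verified at compile time here).
struct RefCountedPtr {
    std::size_t refCount; // affix metadata
    void*       payload;  // the actual allocation's pointer
};

static_assert(sizeof(RefCountedPtr) == 2 * sizeof(std::size_t),
              "refcount + pointer pack into 2 * size_t.sizeof");
```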
As I see it, AffixAllocator is not a silver bullet, but in some
very specific cases it can actually be a very good fit. You just
have to consider it carefully case by case. E.g. for shared
types with irregular sizes we can simply switch to a different
allocator - that's the whole point of composable allocators.
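That switch can itself be expressed as composition. A hedged C++ sketch in the spirit of D's Segregator building block (the type names and threshold are made up for illustration, not that library's API):

```cpp
#include <cstddef>
#include <cstdlib>

// Compile-time composition: sizes up to Threshold go to Small (e.g. an
// affix-style allocator with a friendly size class), everything else to
// Large. Mirrors the idea of D's Segregator; this rendering is only an
// illustration.
template <std::size_t Threshold, class Small, class Large>
struct Segregator {
    Small small;
    Large large;
    void* allocate(std::size_t n) {
        return n <= Threshold ? small.allocate(n) : large.allocate(n);
    }
    void deallocate(void* p, std::size_t n) {
        if (n <= Threshold) small.deallocate(p, n);
        else                large.deallocate(p, n);
    }
};

// A trivial backing allocator for the sketch.
struct Mallocator {
    void* allocate(std::size_t n) { return std::malloc(n); }
    void deallocate(void* p, std::size_t) { std::free(p); }
};
```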
3) Increased probability/danger of buffer overflow/underflow:
First, I would say that things like slice bounds checks already
make D a safer language for the average user than C/C++, which
should make this somewhat less of a problem.
Second, I actually think that even if this suddenly leads to a
lot of problems for end users, it would bring more pressure to
add Ada/Rust-like static analysis to D - which is a Good Thing.