> https://escholarship.org/content/qt17j227zv/qt17j227zv.pdf
> 
> Thoughts on this?

Lots of misunderstandings in there.

The goal is to make addresses unguessable.

One does not need full entropy for that, because attack code doesn't
have the luxury of doing a search.  Attack code does 1 memory access.
If the object isn't there, it's accessing something else.  Maybe it is
accessing an unmapped page.  Problem solved.

An object just has to be at a different place each runtime.  That
paper does no study of specific objects.  It only looks at the
aggregate, and appears clueless that we have intentionally placed our
random objects close to each other (in different order) TO REDUCE
FRAGMENTATION AND MEMORY MANAGEMENT OVERHEAD.  At the end, the paper
even admits that they are unfamiliar with the strategy the code takes.

They wrote, but did not try to learn.

Each independent object is still at an unknowable address.

If you are trying to find me, and I am trying to hide, I don't
necessarily need to hide in a field in Africa or go to Ellesmere, I
could perhaps just hide 100m from where I am and you can't find me
in small constant time.

I'm still astounded at the disconnect among people who go whole hog
at believing ASLR should consume the whole address space.  Lost the
plot.  Forgot what attackers are actually capable of doing.

There is another problem with going full random.  You are killing the
CPU with additional TLB walks, TLB pressure, and cache slots consumed
holding page-table walk addresses and such.  Performance loss without
any benefit to security.

Mitigations should be as cheap as possible to satisfy the goal.


After the initial work, we have focused on splitting the address
space's objects into *more objects* with tighter permissions.

And, we have focused on never reusing an address space after a crash,
by designing software to use fork+exec.

Compilers generate code that dribbles library addresses and such into
registers and constant offsets on the stack.  Attackers use relative
accesses to those locations on the stack to know where your libc is.
Then they reach into the parts of libc with fixed relative offsets.

That's why libc, libcrypto, ld.so, and the kernel are now being
relinked.  This change understands what attackers utilize as tools,
and in an inexpensive fashion tries to make their tools brittle.

<sarcasm>I imagine years from now someone will see the wisdom of this
and write a random relinker that adds excessive randomization data in a
super expensive operation done at runtime... and they won't recognize
that the bar has already moved, and the next mitigation should
embrittle some other mechanism...</sarcasm>
