> On 28 Mar 2018, at 02:49, Matthew Wilcox <wi...@infradead.org> wrote:
>
> On Tue, Mar 27, 2018 at 03:53:53PM -0700, Kees Cook wrote:
>> I agree: pushing this off to libc leaves a lot of things unprotected.
>> I think this should live in the kernel. The question I have is about
>> making it maintainable/readable/etc.
>>
>> The state-of-the-art for ASLR is moving to finer granularity (over
>> just base-address offset), so I'd really like to see this supported in
>> the kernel. We'll be getting there for other things in the future, and
>> I'd like to have a working production example for researchers to
>> study, etc.
>
> One thing we need is to limit the fragmentation of this approach.
> Even on 64-bit systems, we can easily get into a situation where there isn't
> space to map a contiguous terabyte.
As I wrote before, shift_random was introduced precisely as a fragmentation limit. Even without it, the main question here is: "if we can't allocate N bytes, how much memory has the application already allocated?" From that point of view, I already showed in the previous version of the patch that an application which does not make very large allocations will still have enough address space to work with. If an application uses tens of gigabytes or terabytes of memory, it has every chance of being exploited with or without full randomization, since it becomes much easier to find (or guess) a usable pointer. For instance, with only 128 terabytes of user address space, an application occupying a terabyte gives an attacker roughly a 1/128 chance that a blind guess lands in mapped memory, which is not secure at all. This is a very rough estimate, but I hope it makes the point easier to understand.

Best regards,
Ilya

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc