On 24.02.20 18:36, Peter Xu wrote:
> On Mon, Feb 24, 2020 at 11:57:03AM +0100, David Hildenbrand wrote:
>> On 24.02.20 11:50, David Hildenbrand wrote:
>>> On 19.02.20 23:46, Peter Xu wrote:
>>>> On Wed, Feb 12, 2020 at 02:42:46PM +0100, David Hildenbrand wrote:
>>>>> Factor it out and add a comment.
>>>>>
>>>>> Reviewed-by: Igor Kotrasinski <i.kotrasi...@partner.samsung.com>
>>>>> Acked-by: Murilo Opsfelder Araujo <muri...@linux.ibm.com>
>>>>> Reviewed-by: Richard Henderson <richard.hender...@linaro.org>
>>>>> Cc: "Michael S. Tsirkin" <m...@redhat.com>
>>>>> Cc: Murilo Opsfelder Araujo <muri...@linux.ibm.com>
>>>>> Cc: Greg Kurz <gr...@kaod.org>
>>>>> Cc: Eduardo Habkost <ehabk...@redhat.com>
>>>>> Cc: "Dr. David Alan Gilbert" <dgilb...@redhat.com>
>>>>> Cc: Igor Mammedov <imamm...@redhat.com>
>>>>> Signed-off-by: David Hildenbrand <da...@redhat.com>
>>>>> ---
>>>>>  util/mmap-alloc.c | 21 ++++++++++++---------
>>>>>  1 file changed, 12 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
>>>>> index 27dcccd8ec..82f02a2cec 100644
>>>>> --- a/util/mmap-alloc.c
>>>>> +++ b/util/mmap-alloc.c
>>>>> @@ -82,17 +82,27 @@ size_t qemu_mempath_getpagesize(const char *mem_path)
>>>>>      return qemu_real_host_page_size;
>>>>>  }
>>>>>  
>>>>> +static inline size_t mmap_pagesize(int fd)
>>>>> +{
>>>>> +#if defined(__powerpc64__) && defined(__linux__)
>>>>> +    /* Mappings in the same segment must share the same page size */
>>>>> +    return qemu_fd_getpagesize(fd);
>>>>> +#else
>>>>> +    return qemu_real_host_page_size;
>>>>> +#endif
>>>>> +}
>>>>
>>>> Pure question: this will return 4K even for huge pages on x86; is
>>>> this what we want?
>>>
>>> (I was asking myself the same question.) I *think* it's intended. It's
>>> only used to allocate one additional guard page. The callers of
>>> qemu_ram_mmap() make sure that the size is properly aligned (e.g., to
>>> huge pages).
>>>
>>> Of course, a 4k guard page is sufficient, unless we can't use one
>>> (the ppc64 special case here).
>>>
>>> Thanks!
>>>
>>
>> We could rename the function to mmap_guard_pagesize(). Thoughts?
> 
> Yeah, this looks better.
> 
> Off topic: I'm actually still confused about why we need to do so
> much just to leave a PROT_NONE page after the buffer.  If the
> hypervisor is buggy, it can logically write anywhere after all.
> It's not a stack structure, but it can be any guest RAM, so I'm not
> sure overflow detection really matters that much...

The comment in the file says

"Leave a single PROT_NONE page allocated after the RAM block, to serve
as a guard page guarding against potential buffer overflows."

So it's really just a safety net.
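
For illustration, here is a minimal, self-contained sketch of the idea
(not the actual qemu_ram_mmap() code; the sizes and names are made up):
reserve the block plus one extra page, make only the block itself
accessible, and leave the trailing page PROT_NONE.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t pagesize = sysconf(_SC_PAGESIZE);
    size_t size = 16 * pagesize;        /* stand-in for the RAM block */

    /* Reserve block + guard page, initially inaccessible. */
    void *ptr = mmap(NULL, size + pagesize, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Make everything except the trailing guard page usable. */
    if (mprotect(ptr, size, PROT_READ | PROT_WRITE) < 0) {
        perror("mprotect");
        return 1;
    }

    memset(ptr, 0, size);               /* fine */
    /* ((char *)ptr)[size] = 0;           would SIGSEGV on the guard page */

    munmap(ptr, size + pagesize);
    return 0;
}

Without the PROT_NONE page, another mapping could end up directly
adjacent, and a linear overflow past the block would silently corrupt
it instead of faulting. The ppc64 special case above only changes how
big that guard page has to be.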

> 
> Thanks,
> 


-- 
Thanks,

David / dhildenb

