On 10/14/19 4:57 PM, Daniel Axtens wrote:
> Hi Andrey,
>
>
>>> +	/*
>>> +	 * Ensure poisoning is visible before the shadow is made visible
>>> +	 * to other CPUs.
>>> +	 */
>>> +	smp_wmb();
>>
>> I don't quite understand what this barrier does and why it is needed.
>> And if it's really needed, there should be a pairing barrier on the
> There is a potential problem here, as Will Deacon wrote up at:
>
>
> https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-w...@kernel.org/
>
> ... in the section starting:
>
> | *** Other architecture maintainers -- start here! ***
>
> ... whereby the CPU can spuriously fault on
>>> @@ -2497,6 +2533,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>>> 	if (!addr)
>>> 		return NULL;
>>>
>>> +	if (kasan_populate_vmalloc(real_size, area))
>>> +		return NULL;
>>> +
>>
>> KASAN itself uses __vmalloc_node_range() to
On 10/1/19 9:58 AM, Daniel Axtens wrote:
> core_initcall(kasan_memhotplug_init);
> #endif
> +
> +#ifdef CONFIG_KASAN_VMALLOC
> +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> +				      void *unused)
> +{
> + unsigned long page;
> +
Hi Uladzislau,
> Looking at it once more, I think the above part of the code is a bit
> wrong and should be separated from the merge_or_add_vmap_area() logic.
> The reason is to keep it simple and have it do only what it is
> supposed to do: merging or adding.
>
> Also the kasan_release_vmalloc() gets called twice
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
> 	struct list_head *next;
> 	struct rb_node **link;
> 	struct rb_node *parent;
> +
On Wed, Oct 02, 2019 at 11:23:06AM +1000, Daniel Axtens wrote:
> Hi,
>
> >> 	/*
> >> 	 * Find a place in the tree where VA potentially will be
> >> 	 * inserted, unless it is merged with its sibling/siblings.
> >> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
> >> 	if (sibling->va_end == va->va_start) {
Hook into vmalloc and vmap, and dynamically allocate real shadow
memory to back the mappings.
Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different