Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-09 Thread Jan Beulich
>>> On 09.09.16 at 15:50,  wrote:
> On Fri, Sep 09, 2016 at 02:33:18PM +0100, Ross Lagerwall wrote:
>> On 08/24/2016 03:22 AM, Konrad Rzeszutek Wilk wrote:
>> > So that when we apply the patch again the .bss is cleared.
>> > Otherwise we may find some variables containing old values.
>> > 
> > > The payloads may contain various .bss sections - especially if
> > > -fdata-sections is used, which can create .bss.* sections.
>> > 
>> 
>> After having thought about this again, I'm not sure it makes much sense. Any
>> data sections in the payload are not reset to their initial values, so
>> resetting the bss only may result in an unexpected combination of new & old
>> data/bss.
> 
> Regardless of that I think clearing the .bss upon applying the livepatch is
> still the right thing to do.
> 
> Regarding the .data - we could keep a copy of the .data the first time
> we load, and then during application copy it back from the original?
> 
>> 
>> Perhaps it just needs to be documented that a payload's bss/data is
>> untouched across revert/apply?
> 
> It really cuts down on bugs if we clear the .bss. It is kind of
> ingrained in every developer that the .bss is zeroed out at startup.

I don't think Ross meant to suggest not clearing it at all: clearing it
at load time is the equivalent of loading actual data into initialized
sections.

Jan
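
For concreteness, a minimal sketch of the clearing step under discussion,
assuming the payload records each SHT_NOBITS section's load address and size
(the bss/bss_size/n_bss fields this patch adds). struct payload_bss and
clear_payload_bss are illustrative names, not the patch's actual interface:

#include <string.h>

/* Illustrative stand-in for the bookkeeping this patch adds to struct payload. */
struct payload_bss {
    void **bss;          /* Load addresses of the payload's .bss sections. */
    size_t *bss_size;    /* Their sizes. */
    unsigned int n_bss;  /* Number of entries in both arrays. */
};

/*
 * Zero every recorded .bss section.  Run from the apply path, this gives a
 * re-applied payload the freshly-zeroed .bss that developers expect at
 * startup, regardless of what the previous apply/revert cycle left behind.
 */
static void clear_payload_bss(const struct payload_bss *p)
{
    unsigned int i;

    for ( i = 0; i < p->n_bss; i++ )
        memset(p->bss[i], 0, p->bss_size[i]);
}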




Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-09 Thread Ross Lagerwall

On 09/09/2016 02:50 PM, Konrad Rzeszutek Wilk wrote:
> On Fri, Sep 09, 2016 at 02:33:18PM +0100, Ross Lagerwall wrote:
>> On 08/24/2016 03:22 AM, Konrad Rzeszutek Wilk wrote:
>>> So that when we apply the patch again the .bss is cleared.
>>> Otherwise we may find some variables containing old values.
>>>
>>> The payloads may contain various .bss sections - especially if
>>> -fdata-sections is used, which can create .bss.* sections.
>>
>> After having thought about this again, I'm not sure it makes much
>> sense. Any data sections in the payload are not reset to their initial
>> values, so resetting the bss only may result in an unexpected
>> combination of new & old data/bss.
>
> Regardless of that I think clearing the .bss upon applying the
> livepatch is still the right thing to do.
>
> Regarding the .data - we could keep a copy of the .data the first time
> we load, and then during application copy it back from the original?
>
>> Perhaps it just needs to be documented that a payload's bss/data is
>> untouched across revert/apply?
>
> It really cuts down on bugs if we clear the .bss. It is kind of
> ingrained in every developer that the .bss is zeroed out at startup.

Sure.

IMO clearing one but not resetting the other is even more unexpected.
However, if we agree that it is desirable to do both, then this patch is
acceptable as a step in the right direction.

--
Ross Lagerwall



Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-09 Thread Konrad Rzeszutek Wilk
On Fri, Sep 09, 2016 at 02:33:18PM +0100, Ross Lagerwall wrote:
> On 08/24/2016 03:22 AM, Konrad Rzeszutek Wilk wrote:
> > So that when we apply the patch again the .bss is cleared.
> > Otherwise we may find some variables containing old values.
> > 
> > The payloads may contain various .bss sections - especially if
> > -fdata-sections is used, which can create .bss.* sections.
> > 
> 
> After having thought about this again, I'm not sure it makes much sense. Any
> data sections in the payload are not reset to their initial values, so
> resetting the bss only may result in an unexpected combination of new & old
> data/bss.

Regardless of that I think clearing the .bss upon applying the livepatch is
still the right thing to do.

Regarding the .data - we could keep a copy of the .data the first time
we load, and then during application copy it back from the original?

> 
> Perhaps it just needs to be documented that a payload's bss/data is
> untouched across revert/apply?

It really cuts down on bugs if we clear the .bss. It is kind of
ingrained in every developer that the .bss is zeroed out at startup.
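
A rough sketch of that .data idea - snapshot a payload's .data at first load
and copy it back on every apply. All names here (struct payload_data,
snapshot_data, restore_data) are hypothetical; nothing like this exists in
the patch as posted, and malloc stands in for the hypervisor's xmalloc:

#include <stdlib.h>
#include <string.h>

/* Hypothetical per-section bookkeeping for a payload's writable .data. */
struct payload_data {
    void *load_addr;  /* Where the section lives once loaded. */
    void *pristine;   /* Copy of the initial contents, taken at first load. */
    size_t size;
};

/* First load: squirrel away the section's original contents. */
static int snapshot_data(struct payload_data *d)
{
    d->pristine = malloc(d->size);
    if ( !d->pristine )
        return -1;  /* Would be -ENOMEM in the hypervisor. */
    memcpy(d->pristine, d->load_addr, d->size);
    return 0;
}

/* Every apply: copy the pristine contents back over the live section. */
static void restore_data(const struct payload_data *d)
{
    memcpy(d->load_addr, d->pristine, d->size);
}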



Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-09 Thread Ross Lagerwall

On 08/24/2016 03:22 AM, Konrad Rzeszutek Wilk wrote:
> So that when we apply the patch again the .bss is cleared.
> Otherwise we may find some variables containing old values.
>
> The payloads may contain various .bss sections - especially if
> -fdata-sections is used, which can create .bss.* sections.

After having thought about this again, I'm not sure it makes much sense.
Any data sections in the payload are not reset to their initial values,
so resetting the bss only may result in an unexpected combination of new
& old data/bss.

Perhaps it just needs to be documented that a payload's bss/data is
untouched across revert/apply?

--
Ross Lagerwall



Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-08 Thread Konrad Rzeszutek Wilk
On Wed, Sep 07, 2016 at 02:02:44AM -0600, Jan Beulich wrote:
> >>> On 06.09.16 at 18:47,  wrote:
> > On Wed, Aug 24, 2016 at 02:55:21AM -0600, Jan Beulich wrote:
> >> >>> On 24.08.16 at 04:22,  wrote:
> >> > --- a/xen/common/livepatch.c
> >> > +++ b/xen/common/livepatch.c
> >> > @@ -70,6 +70,9 @@ struct payload {
> >> >  unsigned int nsyms;  /* Nr of entries in .strtab and symbols. */
> >> >  struct livepatch_build_id id;/* ELFNOTE_DESC(.note.gnu.build-id) of the payload. */
> >> >  struct livepatch_build_id dep;   /* ELFNOTE_DESC(.livepatch.depends). */
> >> > +void **bss;  /* .bss's of the payload. */
> >> > +size_t *bss_size;/* and their sizes. */
> >> 
> >> Is size_t wide enough in the extreme case? Perhaps yes, because I
> >> don't think we'll ever load 64-bit ELF on a 32-bit platform.
> > 
> > Nonetheless having a huge .bss is kind of extreme? Perhaps we should
> > have a separate patch that checks the SHT_NOBITS and disallows .bss's
> > bigger than say 2MB?
> 
> Well, the extra check certainly wouldn't hurt, but I think before
> hitting the size_t limit you'd run out of address space to place
> the payload in (as that's iirc a less than 1Gb area).

True. And on ARM 32|64 even smaller (2MB). Let me force an
even smaller width type - say 'unsigned int'.



Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-07 Thread Jan Beulich
>>> On 06.09.16 at 18:51,  wrote:
> --- a/xen/common/livepatch_elf.c
> +++ b/xen/common/livepatch_elf.c
> @@ -86,7 +86,16 @@ static int elf_resolve_sections(struct livepatch_elf *elf, const void *data)
>  delta < sizeof(Elf_Ehdr) ? "at ELF header" : "is past end");
>  return -EINVAL;
>  }
> -
> +else if ( !(sec[i].sec->sh_flags & SHF_EXECINSTR) &&
> +  (sec[i].sec->sh_flags & SHF_WRITE) &&
> +  sec[i].sec->sh_type == SHT_NOBITS &&
> +  sec[i].sec->sh_size > MB(2) )

What good do the two sh_flags checks do here?

Jan
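
One reading of the question is that SHT_NOBITS alone identifies the
.bss-like sections being capped, which would make the two flags tests
redundant. A standalone sketch of the reduced check, under that reading
(check_nobits_size and the MB macro here are illustrative, not the
hypervisor's code):

#include <elf.h>     /* Elf64_Shdr, SHT_NOBITS */
#include <stddef.h>

#define MB(x) ((size_t)(x) << 20)

/*
 * Reject any SHT_NOBITS (i.e. .bss-like) section larger than 2MB.
 * The sh_flags tests are dropped, since sh_type alone selects the
 * sections whose size is being capped.
 */
static int check_nobits_size(const Elf64_Shdr *shdr)
{
    if ( shdr->sh_type == SHT_NOBITS && shdr->sh_size > MB(2) )
        return -1;  /* Would be -EINVAL in the hypervisor. */

    return 0;
}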




Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-07 Thread Jan Beulich
>>> On 06.09.16 at 18:47,  wrote:
> On Wed, Aug 24, 2016 at 02:55:21AM -0600, Jan Beulich wrote:
>> >>> On 24.08.16 at 04:22,  wrote:
>> > --- a/xen/common/livepatch.c
>> > +++ b/xen/common/livepatch.c
>> > @@ -70,6 +70,9 @@ struct payload {
>> >  unsigned int nsyms;  /* Nr of entries in .strtab and symbols. */
>> >  struct livepatch_build_id id;/* ELFNOTE_DESC(.note.gnu.build-id) of the payload. */
>> >  struct livepatch_build_id dep;   /* ELFNOTE_DESC(.livepatch.depends). */
>> > +void **bss;  /* .bss's of the payload. */
>> > +size_t *bss_size;/* and their sizes. */
>> 
>> Is size_t wide enough in the extreme case? Perhaps yes, because I
>> don't think we'll ever load 64-bit ELF on a 32-bit platform.
> 
> Nonetheless having a huge .bss is kind of extreme? Perhaps we should
> have a separate patch that checks the SHT_NOBITS and disallows .bss's
> bigger than say 2MB?

Well, the extra check certainly wouldn't hurt, but I think before
hitting the size_t limit you'd run out of address space to place
the payload in (as that's iirc a less than 1Gb area).

Jan




Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-06 Thread Konrad Rzeszutek Wilk
On Thu, Aug 25, 2016 at 05:08:16PM +0100, Andrew Cooper wrote:
> On 24/08/16 09:55, Jan Beulich wrote:
> > > > > On 24.08.16 at 04:22,  wrote:
> > > --- a/xen/common/livepatch.c
> > > +++ b/xen/common/livepatch.c
> > > @@ -70,6 +70,9 @@ struct payload {
> > >   unsigned int nsyms;  /* Nr of entries in .strtab and symbols. */
> > >   struct livepatch_build_id id;/* ELFNOTE_DESC(.note.gnu.build-id) of the payload. */
> > >   struct livepatch_build_id dep;   /* ELFNOTE_DESC(.livepatch.depends). */
> > > +void **bss;  /* .bss's of the payload. */
> > > +size_t *bss_size;/* and their sizes. */
> > Is size_t wide enough in the extreme case? Perhaps yes, because I
> > don't think we'll ever load 64-bit ELF on a 32-bit platform.
> 
> Even if we did, there is no chance that more than a single size_t's worth of
> data needs clearing, or the payload wouldn't fit in the current virtual
> address space.

Perhaps go even further and add an arbitrary limit?
Like so (compile tested only):

From 22d0a6e0c6fc61a9e257ec4db78c2e58978b2976 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk 
Date: Tue, 6 Sep 2016 12:45:50 -0400
Subject: [PATCH] livepatch: Add limit of 2MB to payload .bss sections.

The initial patch: 11ff40fa7bb5fdcc69a58d0fec49c904ffca4793
"xen/xsplice: Hypervisor implementation of XEN_XSPLICE_op" caps the
size of the binary at 2MB. We follow that in capping the size of the
.bss sections to be at maximum 2MB.

Signed-off-by: Konrad Rzeszutek Wilk 
---
 xen/common/livepatch_elf.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/common/livepatch_elf.c b/xen/common/livepatch_elf.c
index 789e8fc..4a4111d 100644
--- a/xen/common/livepatch_elf.c
+++ b/xen/common/livepatch_elf.c
@@ -86,7 +86,16 @@ static int elf_resolve_sections(struct livepatch_elf *elf, const void *data)
                     delta < sizeof(Elf_Ehdr) ? "at ELF header" : "is past end");
             return -EINVAL;
         }
-
+        else if ( !(sec[i].sec->sh_flags & SHF_EXECINSTR) &&
+                  (sec[i].sec->sh_flags & SHF_WRITE) &&
+                  sec[i].sec->sh_type == SHT_NOBITS &&
+                  sec[i].sec->sh_size > MB(2) )
+        {
+            /* Arbitrary limit. */
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Section [%u] .bss is bigger than 2MB!\n",
+                    elf->name, i);
+            return -EINVAL;
+        }
         sec[i].data = data + delta;
         /* Name is populated in elf_resolve_section_names. */
         sec[i].name = NULL;
-- 
2.4.11




Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-09-06 Thread Konrad Rzeszutek Wilk
On Wed, Aug 24, 2016 at 02:55:21AM -0600, Jan Beulich wrote:
> >>> On 24.08.16 at 04:22,  wrote:
> > --- a/xen/common/livepatch.c
> > +++ b/xen/common/livepatch.c
> > @@ -70,6 +70,9 @@ struct payload {
> >  unsigned int nsyms;  /* Nr of entries in .strtab and symbols. */
> >  struct livepatch_build_id id;/* ELFNOTE_DESC(.note.gnu.build-id) of the payload. */
> >  struct livepatch_build_id dep;   /* ELFNOTE_DESC(.livepatch.depends). */
> > +void **bss;  /* .bss's of the payload. */
> > +size_t *bss_size;/* and their sizes. */
> 
> Is size_t wide enough in the extreme case? Perhaps yes, because I
> don't think we'll ever load 64-bit ELF on a 32-bit platform.

Nonetheless having a huge .bss is kind of extreme? Perhaps we should
have a separate patch that checks the SHT_NOBITS and disallows .bss's
bigger than say 2MB?

I am using 2MB as that is the limit of the livepatch binaries right
now (see verify_payload):

    if ( upload->size > MB(2) )
        return -EINVAL;


> 
> > +size_t n_bss;/* Size of the array. */
> 
> As opposed to that, I think this one could be unsigned int (or else
> you end up with inconsistencies in {move,apply}_payload()).

/me nods. Changed to unsigned int.
> 
> > @@ -374,14 +392,24 @@ static int move_payload(struct payload *payload, struct livepatch_elf *elf)
> >  elf->name, elf->sec[i].name, elf->sec[i].load_addr);
> >  }
> >  else
> > -memset(elf->sec[i].load_addr, 0, elf->sec[i].sec->sh_size);
> > +{
> > +payload->bss[n_bss] = elf->sec[i].load_addr;
> > +payload->bss_size[n_bss++] = elf->sec[i].sec->sh_size;
> > +}
> >  }
> >  }
> > +ASSERT(n_bss == payload->n_bss);
> >  
> >   out:
> >  xfree(offset);
> >  
> >  return rc;
> > +
> > + out_mem:
> > +dprintk(XENLOG_ERR, LIVEPATCH "%s: Could not allocate memory for payload!\n",
> > +elf->name);
> > +rc = -ENOMEM;
> > +goto out;
> 
> You leak any of the three buffers here which you managed to
> successfully allocate.

I added a direct call to 'free_payload_data(payload)' there. It is not
strictly needed per se, as the caller unconditionally calls
free_payload_data() if the return value of any of the functions is
non-zero. But in case things get moved around and that assumption goes
away, having the call to free_payload_data makes sense in the function
(plus it looks much nicer to free/alloc in the same function).
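
For illustration, the shape of an error path that does not leak: allocate
the per-payload arrays, and on any failure free whatever did get allocated.
calloc/free stand in for the hypervisor's allocation primitives, and the
names are approximations rather than the actual Xen code:

#include <stdlib.h>

struct payload_bss_arrays {
    void **bss;        /* Load addresses of .bss sections. */
    size_t *bss_size;  /* Their sizes. */
    unsigned int n_bss;
};

/* Safe to call at any point: free() tolerates NULL. */
static void free_bss_arrays(struct payload_bss_arrays *p)
{
    free(p->bss);
    free(p->bss_size);
    p->bss = NULL;
    p->bss_size = NULL;
}

/*
 * Allocate both arrays up front (n > 0 assumed).  On failure, release
 * whichever buffer was successfully allocated instead of leaking it --
 * the problem Jan flagged in the original out_mem path.
 */
static int alloc_bss_arrays(struct payload_bss_arrays *p, unsigned int n)
{
    p->n_bss = n;
    p->bss = calloc(n, sizeof(*p->bss));
    p->bss_size = calloc(n, sizeof(*p->bss_size));

    if ( !p->bss || !p->bss_size )
    {
        free_bss_arrays(p);
        return -1;  /* -ENOMEM in the hypervisor. */
    }

    return 0;
}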



Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-08-25 Thread Andrew Cooper

On 24/08/16 09:55, Jan Beulich wrote:
> >>> On 24.08.16 at 04:22,  wrote:
> > --- a/xen/common/livepatch.c
> > +++ b/xen/common/livepatch.c
> > @@ -70,6 +70,9 @@ struct payload {
> >  unsigned int nsyms;  /* Nr of entries in .strtab and symbols. */
> >  struct livepatch_build_id id;/* ELFNOTE_DESC(.note.gnu.build-id) of the payload. */
> >  struct livepatch_build_id dep;   /* ELFNOTE_DESC(.livepatch.depends). */
> > +void **bss;  /* .bss's of the payload. */
> > +size_t *bss_size;/* and their sizes. */
>
> Is size_t wide enough in the extreme case? Perhaps yes, because I
> don't think we'll ever load 64-bit ELF on a 32-bit platform.

Even if we did, there is no chance that more than a single size_t's
worth of data needs clearing, or the payload wouldn't fit in the
current virtual address space.

~Andrew



Re: [Xen-devel] [PATCH v4 1/9] livepatch: Clear .bss when payload is reverted

2016-08-24 Thread Jan Beulich
>>> On 24.08.16 at 04:22,  wrote:
> --- a/xen/common/livepatch.c
> +++ b/xen/common/livepatch.c
> @@ -70,6 +70,9 @@ struct payload {
>  unsigned int nsyms;  /* Nr of entries in .strtab and symbols. */
>  struct livepatch_build_id id;/* ELFNOTE_DESC(.note.gnu.build-id) of the payload. */
>  struct livepatch_build_id dep;   /* ELFNOTE_DESC(.livepatch.depends). */
> +void **bss;  /* .bss's of the payload. */
> +size_t *bss_size;/* and their sizes. */

Is size_t wide enough in the extreme case? Perhaps yes, because I
don't think we'll ever load 64-bit ELF on a 32-bit platform.

> +size_t n_bss;/* Size of the array. */

As opposed to that, I think this one could be unsigned int (or else
you end up with inconsistencies in {move,apply}_payload()).

> @@ -374,14 +392,24 @@ static int move_payload(struct payload *payload, struct livepatch_elf *elf)
>  elf->name, elf->sec[i].name, elf->sec[i].load_addr);
>  }
>  else
> -memset(elf->sec[i].load_addr, 0, elf->sec[i].sec->sh_size);
> +{
> +payload->bss[n_bss] = elf->sec[i].load_addr;
> +payload->bss_size[n_bss++] = elf->sec[i].sec->sh_size;
> +}
>  }
>  }
> +ASSERT(n_bss == payload->n_bss);
>  
>   out:
>  xfree(offset);
>  
>  return rc;
> +
> + out_mem:
> +dprintk(XENLOG_ERR, LIVEPATCH "%s: Could not allocate memory for payload!\n",
> +elf->name);
> +rc = -ENOMEM;
> +goto out;

You leak any of the three buffers here which you managed to
successfully allocate.

Jan

