. snip ..
> >> * You may need support for adapting or augmenting exception tables if
> >>   patching such code is desired (it probably is).  Hotpatches may need
> >>   to bring their own small exception tables (similar to how Linux
> >>   modules support this).  If you don't plan on supporting hotpatches
> >>   that introduce additional exception-locations, one could also change
> >>   the exception table in-place and reorder it afterwards.
> > 
> > Each patch carries 'section' structures which define what kind
> > of massaging needs to be done. Each of these 'section' structures
> > says:
> >  - At the original address X there was an exception table entry
> >  - (or) At the original address Y there was an alt assembler entry
> > 
> > And we would:
> >  - During 'check' state, confirm that indeed X is in the exception table
> >    (or Y in the alt assembler)
> > 
> >  - During the apply state, fix the exception table X offset to point to
> >    the new virtual address.
> > 
> > Or perhaps I am over-simplifying it? My recollection of the exception
> > table code is that it would be mostly dealing with the table and
> > changing the virtual address, but I haven't dug into it in detail.
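The check/apply flow described above could be sketched roughly like this. This is a minimal sketch, not actual Xen code: the struct layout, the linear scan, and the function names are all illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative layout: one entry per faulting instruction. */
struct exception_table_entry {
    uintptr_t insn;   /* address of the faulting instruction */
    uintptr_t fixup;  /* address of the recovery code */
};

/* 'check' state: confirm the original address X really is in the table. */
static bool check_entry(const struct exception_table_entry *tab, size_t n,
                        uintptr_t x)
{
    for (size_t i = 0; i < n; i++)
        if (tab[i].insn == x)
            return true;
    return false;
}

/* 'apply' state: redirect the entry for X to the new virtual addresses. */
static bool apply_entry(struct exception_table_entry *tab, size_t n,
                        uintptr_t x, uintptr_t new_insn, uintptr_t new_fixup)
{
    for (size_t i = 0; i < n; i++) {
        if (tab[i].insn == x) {
            tab[i].insn  = new_insn;
            tab[i].fixup = new_fixup;
            return true;
        }
    }
    return false;
}
```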
> 
> A simple approach would indeed be to patch the table in-place with the
> new addresses for the replacement functions and fix-up sections.  But,
> you would also have to reorder the table to keep the binary search working.
> 
> And of course this approach would not support adding additional entries
> to the table as it is allocated at compile / link time.  So you cannot
> support a hotpatch that would introduce an additional entry.
> 
> Linux modules, in contrast, can bring their own exception tables and
> those are walked by the handler whenever the main table does not contain
> an entry.  I have not implemented that aspect yet, but would consider
> this to be the most desirable design upon first look.

It is in some ways an implementation detail. I will include it in the doc,
as there are many parts here and having them all nicely outlined
makes my head spin less.
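For the doc, the module-style fallback (and the re-sort needed after any in-place edit, to keep the binary search working) might be sketched like this. All names here are illustrative assumptions, not the real Xen or Linux interfaces.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

struct ex_entry { uintptr_t insn, fixup; };

/* Each applied hotpatch carries its own small exception table. */
struct hotpatch {
    struct ex_entry *ex_table;
    size_t ex_count;
    struct hotpatch *next;
};

static int cmp_ex(const void *a, const void *b)
{
    const struct ex_entry *ea = a, *eb = b;
    return (ea->insn > eb->insn) - (ea->insn < eb->insn);
}

/* After patching a table in place, re-sort so binary search still works. */
static void sort_ex_table(struct ex_entry *tab, size_t n)
{
    qsort(tab, n, sizeof(*tab), cmp_ex);
}

/* Binary search of one sorted table; 0 means "no fixup found". */
static uintptr_t search_one(const struct ex_entry *tab, size_t n, uintptr_t ip)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (tab[mid].insn == ip)
            return tab[mid].fixup;
        if (tab[mid].insn < ip)
            lo = mid + 1;
        else
            hi = mid;
    }
    return 0;
}

/* Walk the main table first, then every hotpatch's table. */
static uintptr_t search_exception_tables(const struct ex_entry *main_tab,
                                         size_t main_n,
                                         const struct hotpatch *patches,
                                         uintptr_t ip)
{
    uintptr_t fixup = search_one(main_tab, main_n, ip);
    for (const struct hotpatch *p = patches; !fixup && p; p = p->next)
        fixup = search_one(p->ex_table, p->ex_count, ip);
    return fixup;
}
```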
.. snip..
> >>>
> >>> We must also have a mechanism to: provide a copy of the old code - so
> >>> that
> >>> the hypervisor can verify it against the code in memory; the new code;
> >>
> >> As Xen has no stable in-hypervisor API / ABI, you need to make sure
> >> that a generated module matches a target hypervisor.  In our design,
> >> we use build IDs for that (ld --build-id).  We embed build IDs at Xen
> >> compile time and can query a running hypervisor for its ID and only
> >> load matching patches.
> >>
> >> This seems to be an alternative to your proposal to include old code
> >> into hotpatch modules.
> > 
> > That is much simpler.
> 
> Just to clarify, are you agreeing that the build ID approach is much
> simpler?

Yes.
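A minimal sketch of that check, assuming the payload header simply embeds the build ID of the Xen it was built against. The struct and field names are made up; a real loader would read the ELF NT_GNU_BUILD_ID note emitted by ld --build-id rather than a fixed-size field.

```c
#include <string.h>
#include <stdbool.h>

#define BUILD_ID_LEN 20  /* SHA1-style build ID, as ld --build-id emits */

/* Hypothetical payload header carried by each hotpatch module. */
struct payload_header {
    unsigned char target_build_id[BUILD_ID_LEN];
    /* ... sections describing new code, relocations, exception entries ... */
};

/* Load only if the payload targets exactly the running hypervisor. */
static bool payload_matches(const struct payload_header *hdr,
                            const unsigned char *running_build_id)
{
    return memcmp(hdr->target_build_id, running_build_id,
                  BUILD_ID_LEN) == 0;
}
```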
> 
> > There is a nice part to the old code check - you
> > can check (and deal with) patching already patched code.
> > As in, if the payload was configured to be applied on top of an already
> > patched function it would patch nicely. But if the payload is against
> > the virgin code - and the hypervisor is running an older patch, we would
> > bail out.
> 
> You can do that too with the build IDs if there is some mechanism that
> loads hotpatches in the same order as they were built in (if they
> overlap).  The simplest approach that comes to mind is a hotpatch stack,
> instead of independent patches.

True. Murphy's law, though, says somebody will do this in reverse order :-)
And that is my worry - some system admin will reverse the order, or pick
a patch out of order, and we end up patching .. and things eventually
break and blow up.
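One way to make that fail closed rather than blow up: have each payload name the state it must be applied on top of (the base build ID, or the previous patch in the stack), and have the hypervisor refuse anything else. A hypothetical sketch; the identifiers and string-based IDs are invented for illustration.

```c
#include <string.h>
#include <stdbool.h>

/* Hypothetical payload metadata: what it provides, what it requires. */
struct payload {
    const char *id;          /* this patch's identity */
    const char *depends_on;  /* required top-of-stack state */
};

/* Apply only if the payload sits exactly on the current stack top. */
static bool try_apply(const char **applied_top, const struct payload *p)
{
    if (strcmp(p->depends_on, *applied_top) != 0)
        return false;        /* out of order: bail instead of patching */
    *applied_top = p->id;
    return true;
}
```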

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
