Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-21 Thread Ross Philipson
On 10/21/20 12:18 PM, Arvind Sankar wrote:
> On Wed, Oct 21, 2020 at 05:28:33PM +0200, Daniel Kiper wrote:
>> On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
>>> On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
>>>> On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
>>>>> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
>>>>>>
>>>>>> I am discussing with Ross the other option. We can create a
>>>>>> .rodata.mle_header section and put it at a fixed offset, as
>>>>>> kernel_info is. So, we would have, e.g.:
>>>>>>
>>>>>> arch/x86/boot/compressed/vmlinux.lds.S:
>>>>>>         .rodata.kernel_info KERNEL_INFO_OFFSET : {
>>>>>>                 *(.rodata.kernel_info)
>>>>>>         }
>>>>>>         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>>>>>>
>>>>>>         .rodata.mle_header MLE_HEADER_OFFSET : {
>>>>>>                 *(.rodata.mle_header)
>>>>>>         }
>>>>>>         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>>>>>>
>>>>>> arch/x86/boot/compressed/sl_stub.S:
>>>>>> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>>>>>>
>>>>>>         .section ".rodata.mle_header", "a"
>>>>>>
>>>>>> SYM_DATA_START(mle_header)
>>>>>>         .long   0x9082ac5a              /* UUID0 */
>>>>>>         .long   0x74a7476f              /* UUID1 */
>>>>>>         .long   0xa2555c0f              /* UUID2 */
>>>>>>         .long   0x42b651cb              /* UUID3 */
>>>>>>         .long   0x00000034              /* MLE header size */
>>>>>>         .long   0x00020002              /* MLE version 2.2 */
>>>>>>         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
>>>>>>         .long   0x00000000              /* First valid page of MLE */
>>>>>>         .long   0x00000000              /* Offset within binary of first byte of MLE */
>>>>>>         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
>>>>>>         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
>>>>>>         .long   0x00000000              /* Starting linear address of command line (unused) */
>>>>>>         .long   0x00000000              /* Ending linear address of command line (unused) */
>>>>>> SYM_DATA_END(mle_header)
>>>>>>
>>>>>> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
>>>>>> Anyway, is it acceptable?
>>>>
>>>> What do you think about my MLE_HEADER_OFFSET and related stuff proposal?
>>>>
>>>
>>> I'm wondering if it would be easier to just allow relocations in these
>>> special "header" sections. I need to check how easy/hard it is to do
>>> that without triggering linker warnings.
>>
>> Ross and I are still bouncing some ideas. We came to the conclusion that
>> putting mle_header into the kernel's .rodata.kernel_info section, or even
>> into the arch/x86/boot/compressed/kernel_info.S file, would be the easiest
>> thing to do at this point. Of course I would suggest some renaming too,
>> e.g. .rodata.kernel_info to .rodata.kernel_headers, etc. Does that make
>> sense to you?
>>
>> Daniel
> 
> I haven't been able to come up with any different options that don't
> require post-processing of the kernel image. Allowing relocations in
> specific sections seems not to be possible with lld, and it would anyway
> require the fields to be 64-bit sized, so it doesn't really help.
> 
> Putting mle_header into kernel_info seems like a reasonable thing to me,
> and if you do that, putting it into kernel_info.S would seem to be
> necessary?  Would you also have a fixed field with the offset of the

That seems like a reasonable place for it to go.

> mle_header from kernel_info?  That seems nicer than having the
> bootloader scan the variable data for magic strings.

Yes, kernel_info will have a field with the offset of the mle_header. I
agree that would be nicer.
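
As a rough sketch, that could look something like the following in
kernel_info.S (purely illustrative, not the final patch; it reuses the
MLE_HEADER_OFFSET constant from the proposal above):

        .section ".rodata.kernel_info", "a"

kernel_info:
        /* Maximal allowed type for setup_data and setup_indirect structs. */
        .long   SETUP_TYPE_MAX

        /* Offset of mle_header from the start of the image. The bootloader
         * reads this instead of scanning for magic values; 0 means the
         * kernel is not Secure Launch enabled. */
#ifdef CONFIG_SECURE_LAUNCH
        .long   MLE_HEADER_OFFSET
#else
        .long   0
#endif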

Thanks
Ross

> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-21 Thread Arvind Sankar
On Wed, Oct 21, 2020 at 05:28:33PM +0200, Daniel Kiper wrote:
> On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
> > On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> > > On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > > > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> > > > >
> > > > > I am discussing with Ross the other option. We can create a
> > > > > .rodata.mle_header section and put it at a fixed offset, as
> > > > > kernel_info is. So, we would have, e.g.:
> > > > >
> > > > > arch/x86/boot/compressed/vmlinux.lds.S:
> > > > >         .rodata.kernel_info KERNEL_INFO_OFFSET : {
> > > > >                 *(.rodata.kernel_info)
> > > > >         }
> > > > >         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
> > > > >
> > > > >         .rodata.mle_header MLE_HEADER_OFFSET : {
> > > > >                 *(.rodata.mle_header)
> > > > >         }
> > > > >         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
> > > > >
> > > > > arch/x86/boot/compressed/sl_stub.S:
> > > > > #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
> > > > >
> > > > >         .section ".rodata.mle_header", "a"
> > > > >
> > > > > SYM_DATA_START(mle_header)
> > > > >         .long   0x9082ac5a              /* UUID0 */
> > > > >         .long   0x74a7476f              /* UUID1 */
> > > > >         .long   0xa2555c0f              /* UUID2 */
> > > > >         .long   0x42b651cb              /* UUID3 */
> > > > >         .long   0x00000034              /* MLE header size */
> > > > >         .long   0x00020002              /* MLE version 2.2 */
> > > > >         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
> > > > >         .long   0x00000000              /* First valid page of MLE */
> > > > >         .long   0x00000000              /* Offset within binary of first byte of MLE */
> > > > >         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
> > > > >         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
> > > > >         .long   0x00000000              /* Starting linear address of command line (unused) */
> > > > >         .long   0x00000000              /* Ending linear address of command line (unused) */
> > > > > SYM_DATA_END(mle_header)
> > > > >
> > > > > Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> > > > > Anyway, is it acceptable?
> > >
> > > What do you think about my MLE_HEADER_OFFSET and related stuff proposal?
> > >
> >
> > I'm wondering if it would be easier to just allow relocations in these
> > special "header" sections. I need to check how easy/hard it is to do
> > that without triggering linker warnings.
> 
> Ross and I are still bouncing some ideas. We came to the conclusion that
> putting mle_header into the kernel's .rodata.kernel_info section, or even
> into the arch/x86/boot/compressed/kernel_info.S file, would be the easiest
> thing to do at this point. Of course I would suggest some renaming too,
> e.g. .rodata.kernel_info to .rodata.kernel_headers, etc. Does that make
> sense to you?
> 
> Daniel

I haven't been able to come up with any different options that don't
require post-processing of the kernel image. Allowing relocations in
specific sections seems not to be possible with lld, and it would anyway
require the fields to be 64-bit sized, so it doesn't really help.

Putting mle_header into kernel_info seems like a reasonable thing to me,
and if you do that, putting it into kernel_info.S would seem to be
necessary?  Would you also have a fixed field with the offset of the
mle_header from kernel_info?  That seems nicer than having the
bootloader scan the variable data for magic strings.


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-21 Thread Daniel Kiper
On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
> On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> > On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> > > >
> > > > I am discussing with Ross the other option. We can create a
> > > > .rodata.mle_header section and put it at a fixed offset, as
> > > > kernel_info is. So, we would have, e.g.:
> > > >
> > > > arch/x86/boot/compressed/vmlinux.lds.S:
> > > >         .rodata.kernel_info KERNEL_INFO_OFFSET : {
> > > >                 *(.rodata.kernel_info)
> > > >         }
> > > >         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
> > > >
> > > >         .rodata.mle_header MLE_HEADER_OFFSET : {
> > > >                 *(.rodata.mle_header)
> > > >         }
> > > >         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
> > > >
> > > > arch/x86/boot/compressed/sl_stub.S:
> > > > #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
> > > >
> > > >         .section ".rodata.mle_header", "a"
> > > >
> > > > SYM_DATA_START(mle_header)
> > > >         .long   0x9082ac5a              /* UUID0 */
> > > >         .long   0x74a7476f              /* UUID1 */
> > > >         .long   0xa2555c0f              /* UUID2 */
> > > >         .long   0x42b651cb              /* UUID3 */
> > > >         .long   0x00000034              /* MLE header size */
> > > >         .long   0x00020002              /* MLE version 2.2 */
> > > >         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
> > > >         .long   0x00000000              /* First valid page of MLE */
> > > >         .long   0x00000000              /* Offset within binary of first byte of MLE */
> > > >         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
> > > >         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
> > > >         .long   0x00000000              /* Starting linear address of command line (unused) */
> > > >         .long   0x00000000              /* Ending linear address of command line (unused) */
> > > > SYM_DATA_END(mle_header)
> > > >
> > > > Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> > > > Anyway, is it acceptable?
> >
> > What do you think about my MLE_HEADER_OFFSET and related stuff proposal?
> >
>
> I'm wondering if it would be easier to just allow relocations in these
> special "header" sections. I need to check how easy/hard it is to do
> that without triggering linker warnings.

Ross and I are still bouncing some ideas. We came to the conclusion that
putting mle_header into the kernel's .rodata.kernel_info section, or even
into the arch/x86/boot/compressed/kernel_info.S file, would be the easiest
thing to do at this point. Of course I would suggest some renaming too,
e.g. .rodata.kernel_info to .rodata.kernel_headers, etc. Does that make
sense to you?

Daniel


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Ross Philipson
On 10/19/20 1:06 PM, Arvind Sankar wrote:
> On Mon, Oct 19, 2020 at 10:38:08AM -0400, Ross Philipson wrote:
>> On 10/16/20 4:51 PM, Arvind Sankar wrote:
>>> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
>>>>
>>>> I am discussing with Ross the other option. We can create a
>>>> .rodata.mle_header section and put it at a fixed offset, as
>>>> kernel_info is. So, we would have, e.g.:
>>>>
>>>> arch/x86/boot/compressed/vmlinux.lds.S:
>>>>         .rodata.kernel_info KERNEL_INFO_OFFSET : {
>>>>                 *(.rodata.kernel_info)
>>>>         }
>>>>         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>>>>
>>>>         .rodata.mle_header MLE_HEADER_OFFSET : {
>>>>                 *(.rodata.mle_header)
>>>>         }
>>>>         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>>>>
>>>> arch/x86/boot/compressed/sl_stub.S:
>>>> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>>>>
>>>>         .section ".rodata.mle_header", "a"
>>>>
>>>> SYM_DATA_START(mle_header)
>>>>         .long   0x9082ac5a              /* UUID0 */
>>>>         .long   0x74a7476f              /* UUID1 */
>>>>         .long   0xa2555c0f              /* UUID2 */
>>>>         .long   0x42b651cb              /* UUID3 */
>>>>         .long   0x00000034              /* MLE header size */
>>>>         .long   0x00020002              /* MLE version 2.2 */
>>>>         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
>>>>         .long   0x00000000              /* First valid page of MLE */
>>>>         .long   0x00000000              /* Offset within binary of first byte of MLE */
>>>>         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
>>>>         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
>>>>         .long   0x00000000              /* Starting linear address of command line (unused) */
>>>>         .long   0x00000000              /* Ending linear address of command line (unused) */
>>>> SYM_DATA_END(mle_header)
>>>>
>>>> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
>>>> Anyway, is it acceptable?
>>>>
>>>> There is also another problem. We have to put into mle_header the size of
>>>> the Linux kernel image. Currently it is done by the bootloader but
>>>> I think it is not a role of the bootloader. The kernel image should
>>>> provide all the data describing its properties and not rely on the
>>>> bootloader to do that. Ross and I investigated various options but we
>>>> did not find a good/simple way to do that. Could you suggest how we
>>>> should do that or at least where we should take a look to get some
>>>> ideas?
>>>>
>>>> Daniel
>>>
>>> What exactly is the size you need here? Is it just the size of the
>>> protected mode image, that's startup_32 to _edata. Or is it the size of
>>
>> It is the size of the protected mode image. Though how to reference
>> those symbols to get the size might add more relocation issues.
>>
> 
> Ok, then I think mleh_rva(_edata) should get you that -- I assume you
> don't want to include the uninitialized data in the size? The kernel
> will access memory beyond _edata (up to the init_size in the setup
> header), but that's not part of the image itself.

Yeah, we basically want the size of the image. There is nothing to measure
beyond the image as loaded into memory by the bootloader. rva(_edata)
seems to be the ticket.
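
For illustration, the two computed fields would then look something like
this (a sketch using the mleh_rva() macro from the proposal above; the
zero-valued fields stay constant per the discussion, and _edata comes
from the linker script):

        .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
        .long   0x00000000              /* First valid page of MLE */
        .long   0x00000000              /* Offset within binary of first byte of MLE */
        .long   mleh_rva(_edata)        /* Offset within binary of last byte + 1 of MLE */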

Thanks
Ross

> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Arvind Sankar
On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> > >
> > > I am discussing with Ross the other option. We can create a
> > > .rodata.mle_header section and put it at a fixed offset, as
> > > kernel_info is. So, we would have, e.g.:
> > >
> > > arch/x86/boot/compressed/vmlinux.lds.S:
> > >         .rodata.kernel_info KERNEL_INFO_OFFSET : {
> > >                 *(.rodata.kernel_info)
> > >         }
> > >         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
> > >
> > >         .rodata.mle_header MLE_HEADER_OFFSET : {
> > >                 *(.rodata.mle_header)
> > >         }
> > >         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
> > >
> > > arch/x86/boot/compressed/sl_stub.S:
> > > #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
> > >
> > >         .section ".rodata.mle_header", "a"
> > >
> > > SYM_DATA_START(mle_header)
> > >         .long   0x9082ac5a              /* UUID0 */
> > >         .long   0x74a7476f              /* UUID1 */
> > >         .long   0xa2555c0f              /* UUID2 */
> > >         .long   0x42b651cb              /* UUID3 */
> > >         .long   0x00000034              /* MLE header size */
> > >         .long   0x00020002              /* MLE version 2.2 */
> > >         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
> > >         .long   0x00000000              /* First valid page of MLE */
> > >         .long   0x00000000              /* Offset within binary of first byte of MLE */
> > >         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
> > >         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
> > >         .long   0x00000000              /* Starting linear address of command line (unused) */
> > >         .long   0x00000000              /* Ending linear address of command line (unused) */
> > > SYM_DATA_END(mle_header)
> > >
> > > Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> > > Anyway, is it acceptable?
> 
> What do you think about my MLE_HEADER_OFFSET and related stuff proposal?
> 

I'm wondering if it would be easier to just allow relocations in these
special "header" sections. I need to check how easy/hard it is to do
that without triggering linker warnings.

Thanks.


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Arvind Sankar
On Mon, Oct 19, 2020 at 10:38:08AM -0400, Ross Philipson wrote:
> On 10/16/20 4:51 PM, Arvind Sankar wrote:
> > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> >>
> >> I am discussing with Ross the other option. We can create a
> >> .rodata.mle_header section and put it at a fixed offset, as
> >> kernel_info is. So, we would have, e.g.:
> >>
> >> arch/x86/boot/compressed/vmlinux.lds.S:
> >>         .rodata.kernel_info KERNEL_INFO_OFFSET : {
> >>                 *(.rodata.kernel_info)
> >>         }
> >>         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
> >>
> >>         .rodata.mle_header MLE_HEADER_OFFSET : {
> >>                 *(.rodata.mle_header)
> >>         }
> >>         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
> >>
> >> arch/x86/boot/compressed/sl_stub.S:
> >> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
> >>
> >>         .section ".rodata.mle_header", "a"
> >>
> >> SYM_DATA_START(mle_header)
> >>         .long   0x9082ac5a              /* UUID0 */
> >>         .long   0x74a7476f              /* UUID1 */
> >>         .long   0xa2555c0f              /* UUID2 */
> >>         .long   0x42b651cb              /* UUID3 */
> >>         .long   0x00000034              /* MLE header size */
> >>         .long   0x00020002              /* MLE version 2.2 */
> >>         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
> >>         .long   0x00000000              /* First valid page of MLE */
> >>         .long   0x00000000              /* Offset within binary of first byte of MLE */
> >>         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
> >>         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
> >>         .long   0x00000000              /* Starting linear address of command line (unused) */
> >>         .long   0x00000000              /* Ending linear address of command line (unused) */
> >> SYM_DATA_END(mle_header)
> >>
> >> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> >> Anyway, is it acceptable?
> >>
> >> There is also another problem. We have to put into mle_header the size of
> >> the Linux kernel image. Currently it is done by the bootloader but
> >> I think it is not a role of the bootloader. The kernel image should
> >> provide all the data describing its properties and not rely on the
> >> bootloader to do that. Ross and I investigated various options but we
> >> did not find a good/simple way to do that. Could you suggest how we
> >> should do that or at least where we should take a look to get some
> >> ideas?
> >>
> >> Daniel
> > 
> > What exactly is the size you need here? Is it just the size of the
> > protected mode image, that's startup_32 to _edata. Or is it the size of
> 
> It is the size of the protected mode image. Though how to reference
> those symbols to get the size might add more relocation issues.
> 

Ok, then I think mleh_rva(_edata) should get you that -- I assume you
don't want to include the uninitialized data in the size? The kernel
will access memory beyond _edata (up to the init_size in the setup
header), but that's not part of the image itself.


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Daniel Kiper
On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> >
> > I am discussing with Ross the other option. We can create a
> > .rodata.mle_header section and put it at a fixed offset, as
> > kernel_info is. So, we would have, e.g.:
> >
> > arch/x86/boot/compressed/vmlinux.lds.S:
> >         .rodata.kernel_info KERNEL_INFO_OFFSET : {
> >                 *(.rodata.kernel_info)
> >         }
> >         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
> >
> >         .rodata.mle_header MLE_HEADER_OFFSET : {
> >                 *(.rodata.mle_header)
> >         }
> >         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
> >
> > arch/x86/boot/compressed/sl_stub.S:
> > #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
> >
> >         .section ".rodata.mle_header", "a"
> >
> > SYM_DATA_START(mle_header)
> >         .long   0x9082ac5a              /* UUID0 */
> >         .long   0x74a7476f              /* UUID1 */
> >         .long   0xa2555c0f              /* UUID2 */
> >         .long   0x42b651cb              /* UUID3 */
> >         .long   0x00000034              /* MLE header size */
> >         .long   0x00020002              /* MLE version 2.2 */
> >         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
> >         .long   0x00000000              /* First valid page of MLE */
> >         .long   0x00000000              /* Offset within binary of first byte of MLE */
> >         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
> >         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
> >         .long   0x00000000              /* Starting linear address of command line (unused) */
> >         .long   0x00000000              /* Ending linear address of command line (unused) */
> > SYM_DATA_END(mle_header)
> >
> > Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> > Anyway, is it acceptable?

What do you think about my MLE_HEADER_OFFSET and related stuff proposal?

> > There is also another problem. We have to put into mle_header the size of
> > the Linux kernel image. Currently it is done by the bootloader but
> > I think it is not a role of the bootloader. The kernel image should
> > provide all the data describing its properties and not rely on the
> > bootloader to do that. Ross and I investigated various options but we
> > did not find a good/simple way to do that. Could you suggest how we
> > should do that or at least where we should take a look to get some
> > ideas?
> >
> > Daniel
>
> What exactly is the size you need here? Is it just the size of the
> protected mode image, that's startup_32 to _edata. Or is it the size of
> the whole bzImage file, or something else? I guess the same question
> applies to "first valid page of MLE" and "first byte of MLE", and the
> linear entry point -- are those all relative to startup_32 or do they
> need to be relative to the start of the bzImage, i.e. you have to add
> the size of the real-mode boot stub?
>
> If you need to include the size of the bzImage file, that's not known
> when the files in boot/compressed are built. It's only known after the
> real-mode stub is linked. arch/x86/boot/tools/build.c fills in various
> details in the setup header and creates the bzImage file, but it does
> not currently modify anything in the protected-mode portion of the
> compressed kernel (i.e. arch/x86/boot/compressed/vmlinux, which then
> gets converted to binary format as arch/x86/boot/vmlinux.bin), so it
> would need to be extended if you need to modify the MLE header to
> include the bzImage size or anything depending on the size of the
> real-mode stub.

Ross clarified this. So, I do not have to add much here.

Daniel


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Ross Philipson
On 10/16/20 4:51 PM, Arvind Sankar wrote:
> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
>>
>> I am discussing with Ross the other option. We can create a
>> .rodata.mle_header section and put it at a fixed offset, as
>> kernel_info is. So, we would have, e.g.:
>>
>> arch/x86/boot/compressed/vmlinux.lds.S:
>>         .rodata.kernel_info KERNEL_INFO_OFFSET : {
>>                 *(.rodata.kernel_info)
>>         }
>>         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>>
>>         .rodata.mle_header MLE_HEADER_OFFSET : {
>>                 *(.rodata.mle_header)
>>         }
>>         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>>
>> arch/x86/boot/compressed/sl_stub.S:
>> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>>
>>         .section ".rodata.mle_header", "a"
>>
>> SYM_DATA_START(mle_header)
>>         .long   0x9082ac5a              /* UUID0 */
>>         .long   0x74a7476f              /* UUID1 */
>>         .long   0xa2555c0f              /* UUID2 */
>>         .long   0x42b651cb              /* UUID3 */
>>         .long   0x00000034              /* MLE header size */
>>         .long   0x00020002              /* MLE version 2.2 */
>>         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
>>         .long   0x00000000              /* First valid page of MLE */
>>         .long   0x00000000              /* Offset within binary of first byte of MLE */
>>         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
>>         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
>>         .long   0x00000000              /* Starting linear address of command line (unused) */
>>         .long   0x00000000              /* Ending linear address of command line (unused) */
>> SYM_DATA_END(mle_header)
>>
>> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
>> Anyway, is it acceptable?
>>
>> There is also another problem. We have to put into mle_header the size of
>> the Linux kernel image. Currently it is done by the bootloader but
>> I think it is not a role of the bootloader. The kernel image should
>> provide all the data describing its properties and not rely on the
>> bootloader to do that. Ross and I investigated various options but we
>> did not find a good/simple way to do that. Could you suggest how we
>> should do that or at least where we should take a look to get some
>> ideas?
>>
>> Daniel
> 
> What exactly is the size you need here? Is it just the size of the
> protected mode image, that's startup_32 to _edata. Or is it the size of

It is the size of the protected mode image. Though how to reference
those symbols to get the size might add more relocation issues.

> the whole bzImage file, or something else? I guess the same question
> applies to "first valid page of MLE" and "first byte of MLE", and the

Because of the way the launch environment is set up, both "first valid
page of MLE" and "first byte of MLE" are always zero, so there is nothing
to sort out there. The only fields that need to be updated are "Linear
entry point of MLE" and "Offset within binary of last byte + 1 of MLE".

Thanks
Ross

> linear entry point -- are those all relative to startup_32 or do they
> need to be relative to the start of the bzImage, i.e. you have to add
> the size of the real-mode boot stub?
> 
> If you need to include the size of the bzImage file, that's not known
> when the files in boot/compressed are built. It's only known after the
> real-mode stub is linked. arch/x86/boot/tools/build.c fills in various
> details in the setup header and creates the bzImage file, but it does
> not currently modify anything in the protected-mode portion of the
> compressed kernel (i.e. arch/x86/boot/compressed/vmlinux, which then
> gets converted to binary format as arch/x86/boot/vmlinux.bin), so it
> would need to be extended if you need to modify the MLE header to
> include the bzImage size or anything depending on the size of the
> real-mode stub.
> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-16 Thread Arvind Sankar
On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> 
> I am discussing with Ross the other option. We can create a
> .rodata.mle_header section and put it at a fixed offset, as
> kernel_info is. So, we would have, e.g.:
>
> arch/x86/boot/compressed/vmlinux.lds.S:
>         .rodata.kernel_info KERNEL_INFO_OFFSET : {
>                 *(.rodata.kernel_info)
>         }
>         ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>
>         .rodata.mle_header MLE_HEADER_OFFSET : {
>                 *(.rodata.mle_header)
>         }
>         ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>
> arch/x86/boot/compressed/sl_stub.S:
> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>
>         .section ".rodata.mle_header", "a"
>
> SYM_DATA_START(mle_header)
>         .long   0x9082ac5a              /* UUID0 */
>         .long   0x74a7476f              /* UUID1 */
>         .long   0xa2555c0f              /* UUID2 */
>         .long   0x42b651cb              /* UUID3 */
>         .long   0x00000034              /* MLE header size */
>         .long   0x00020002              /* MLE version 2.2 */
>         .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
>         .long   0x00000000              /* First valid page of MLE */
>         .long   0x00000000              /* Offset within binary of first byte of MLE */
>         .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
>         .long   0x00000223              /* Bit vector of MLE-supported capabilities */
>         .long   0x00000000              /* Starting linear address of command line (unused) */
>         .long   0x00000000              /* Ending linear address of command line (unused) */
> SYM_DATA_END(mle_header)
>
> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
> Anyway, is it acceptable?
>
> There is also another problem. We have to put into mle_header the size of
> the Linux kernel image. Currently it is done by the bootloader but
> I think it is not a role of the bootloader. The kernel image should
> provide all the data describing its properties and not rely on the
> bootloader to do that. Ross and I investigated various options but we
> did not find a good/simple way to do that. Could you suggest how we
> should do that or at least where we should take a look to get some
> ideas?
> 
> Daniel

What exactly is the size you need here? Is it just the size of the
protected mode image, that's startup_32 to _edata. Or is it the size of
the whole bzImage file, or something else? I guess the same question
applies to "first valid page of MLE" and "first byte of MLE", and the
linear entry point -- are those all relative to startup_32 or do they
need to be relative to the start of the bzImage, i.e. you have to add
the size of the real-mode boot stub?

If you need to include the size of the bzImage file, that's not known
when the files in boot/compressed are built. It's only known after the
real-mode stub is linked. arch/x86/boot/tools/build.c fills in various
details in the setup header and creates the bzImage file, but it does
not currently modify anything in the protected-mode portion of the
compressed kernel (i.e. arch/x86/boot/compressed/vmlinux, which then
gets converted to binary format as arch/x86/boot/vmlinux.bin), so it
would need to be extended if you need to modify the MLE header to
include the bzImage size or anything depending on the size of the
real-mode stub.
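
As a sketch of the kind of post-processing step described above
(illustrative only -- this helper is not from arch/x86/boot/tools/build.c,
and any field offsets would be assumptions), patching a 32-bit
little-endian field into the protected-mode image once the final layout
is known could look like:

        #include <stddef.h>
        #include <stdint.h>

        /* Write a 32-bit value little-endian into the image buffer at a
         * known byte offset, e.g. an MLE header field. */
        static void patch_le32(unsigned char *image, size_t offset, uint32_t value)
        {
                image[offset + 0] = (unsigned char)(value & 0xff);
                image[offset + 1] = (unsigned char)((value >> 8) & 0xff);
                image[offset + 2] = (unsigned char)((value >> 16) & 0xff);
                image[offset + 3] = (unsigned char)((value >> 24) & 0xff);
        }

The build tool would call something like this for each MLE header field
that depends on the size of the real-mode stub, after the bzImage layout
is final.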


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-15 Thread Daniel Kiper
On Tue, Sep 29, 2020 at 10:03:47AM -0400, Ross Philipson wrote:
> On 9/25/20 3:18 PM, Arvind Sankar wrote:

[...]

> > You should see them if you do
> > readelf -r arch/x86/boot/compressed/vmlinux
> >
> > In terms of the code, things like:
> >
> >         addl    %ebx, (sl_gdt_desc + 2)(%ebx)
> >
> > will create a relocation, because the linker interprets this as wanting
> > the runtime address of sl_gdt_desc, rather than just the offset from
> > startup_32.
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/tree/arch/x86/boot/compressed/head_64.S#n48
> >
> > has a comment with some explanation and a macro that the 32-bit code in
> > startup_32 uses to avoid creating relocations.
> >
> > Since the SL code is in a different assembler file (and a different
> > section), you can't directly use the same macro. I would suggest getting
> > rid of sl_stub_entry and entering directly at sl_stub, and then the code
> > in sl_stub.S can use sl_stub for the base address, defining the rva()
> > macro there as
> >
> > #define rva(X) ((X) - sl_stub)
> >
> > You will also need to avoid initializing data with symbol addresses.
> >
> > .long mle_header
> > .long sl_stub_entry
> > .long sl_gdt
> >
> > will create relocations. The third one is easy, just replace it with
> > sl_gdt - sl_gdt_desc and initialize it at runtime with
> >
> >         leal    rva(sl_gdt_desc)(%ebx), %eax
> >         addl    %eax, 2(%eax)
> >         lgdt    (%eax)
> >
> > The other two are more messy, unfortunately there is no easy way to tell
> > the linker what we want here. The other entry point addresses (for the
> > EFI stub) are populated in a post-processing step after the compressed
> > kernel has been linked, we could teach it to also update kernel_info.
> >
> > Without that, for kernel_info, you could change it to store the offset
> > of the MLE header from kernel_info, instead of from the start of the
> > image.
> >
> > For the MLE header, it could be moved to .head.text in head_64.S, and
> > initialized with
> > .long rva(sl_stub)
> > This will also let it be placed at a fixed offset from startup_32, so
> > that kernel_info can just be populated with a constant.

I am discussing with Ross the other option. We can create a
.rodata.mle_header section and put it at a fixed offset, as
kernel_info is. So, we would have, e.g.:

arch/x86/boot/compressed/vmlinux.lds.S:
        .rodata.kernel_info KERNEL_INFO_OFFSET : {
                *(.rodata.kernel_info)
        }
        ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")

        .rodata.mle_header MLE_HEADER_OFFSET : {
                *(.rodata.mle_header)
        }
        ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")

arch/x86/boot/compressed/sl_stub.S:
#define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)

        .section ".rodata.mle_header", "a"

SYM_DATA_START(mle_header)
        .long   0x9082ac5a              /* UUID0 */
        .long   0x74a7476f              /* UUID1 */
        .long   0xa2555c0f              /* UUID2 */
        .long   0x42b651cb              /* UUID3 */
        .long   0x00000034              /* MLE header size */
        .long   0x00020002              /* MLE version 2.2 */
        .long   mleh_rva(sl_stub_entry) /* Linear entry point of MLE (virt. address) */
        .long   0x00000000              /* First valid page of MLE */
        .long   0x00000000              /* Offset within binary of first byte of MLE */
        .long   0x00000000              /* Offset within binary of last byte + 1 of MLE */
        .long   0x00000223              /* Bit vector of MLE-supported capabilities */
        .long   0x00000000              /* Starting linear address of command line (unused) */
        .long   0x00000000              /* Ending linear address of command line (unused) */
SYM_DATA_END(mle_header)

Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
Anyway, is it acceptable?

There is also another problem. We have to put into mle_header the size of
the Linux kernel image. Currently it is done by the bootloader but
I think it is not a role of the bootloader. The kernel image should
provide all the data describing its properties and not rely on the
bootloader to do that. Ross and I investigated various options but we
did not find a good/simple way to do that. Could you suggest how we
should do that or at least where we should take a look to get some
ideas?

Daniel


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-29 Thread Arvind Sankar
On Tue, Sep 29, 2020 at 10:03:47AM -0400, Ross Philipson wrote:
> On 9/25/20 3:18 PM, Arvind Sankar wrote:
> > You will also need to avoid initializing data with symbol addresses.
> > 
> > .long mle_header
> > .long sl_stub_entry
> > .long sl_gdt
> > 
...
> > 
> > The other two are more messy, unfortunately there is no easy way to tell
> > the linker what we want here. The other entry point addresses (for the
> > EFI stub) are populated in a post-processing step after the compressed
> > kernel has been linked, we could teach it to also update kernel_info.
> > 
> > Without that, for kernel_info, you could change it to store the offset
> > of the MLE header from kernel_info, instead of from the start of the
> > image.

Actually, kernel_info is currently placed inside .rodata, which is quite
a ways into the compressed kernel, so an offset from kernel_info would
end up having to be negative, which would be ugly. I'll see if I can
come up with some way to avoid this.

> > 
> > For the MLE header, it could be moved to .head.text in head_64.S, and
> > initialized with
> > .long rva(sl_stub)
> > This will also let it be placed at a fixed offset from startup_32, so
> > that kernel_info can just be populated with a constant.
> 
> Thank you for the detailed reply. I am going to start digging into this now.
> 
> Ross
> 
> > 
> 


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-29 Thread Ross Philipson
On 9/25/20 3:18 PM, Arvind Sankar wrote:
> On Fri, Sep 25, 2020 at 10:56:43AM -0400, Ross Philipson wrote:
>> On 9/24/20 1:38 PM, Arvind Sankar wrote:
>>> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>>>
>>>> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
>>>> index 97d37f0..42043bf 100644
>>>> --- a/arch/x86/boot/compressed/head_64.S
>>>> +++ b/arch/x86/boot/compressed/head_64.S
>>>> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>>>>  SYM_FUNC_END(efi32_stub_entry)
>>>>  #endif
>>>>
>>>> +#ifdef CONFIG_SECURE_LAUNCH
>>>> +SYM_FUNC_START(sl_stub_entry)
>>>> +        /*
>>>> +         * On entry, %ebx has the entry abs offset to sl_stub_entry. To
>>>> +         * find the beginning of where we are loaded, sub off from the
>>>> +         * beginning.
>>>> +         */
>>>
>>> This requirement should be added to the documentation. Is it necessary
>>> or can this stub just figure out the address the same way as the other
>>> 32-bit entry points, using the scratch space in bootparams as a little
>>> stack?
>>
>> It is based on the state of the BSP when TXT vectors to the measured
>> launch environment. It is documented in the TXT spec and the SDMs.
>>
> 
> I think it would be useful to add to the x86 boot documentation how
> exactly this new entry point is called, even if it's just adding a link
> to some section of those specs. The doc should also say that an
> mle_header_offset of 0 means the kernel isn't secure launch enabled.

Ok will do.

> 
>>>
>>> For the 32-bit assembler code that's being added, tip/master now has
>>> changes that prevent the compressed kernel from having any runtime
>>> relocations.  You'll need to revise some of the code and the data
>>> structures initial values to avoid creating relocations.
>>
>> Could you elaborate on this some more? I am not sure I see places in the
>> secure launch asm that would be creating relocations like this.
>>
>> Thank you,
>> Ross
>>
> 
> You should see them if you do
>   readelf -r arch/x86/boot/compressed/vmlinux
> 
> In terms of the code, things like:
> 
>         addl    %ebx, (sl_gdt_desc + 2)(%ebx)
> 
> will create a relocation, because the linker interprets this as wanting
> the runtime address of sl_gdt_desc, rather than just the offset from
> startup_32.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/tree/arch/x86/boot/compressed/head_64.S#n48
> 
> has a comment with some explanation and a macro that the 32-bit code in
> startup_32 uses to avoid creating relocations.
> 
> Since the SL code is in a different assembler file (and a different
> section), you can't directly use the same macro. I would suggest getting
> rid of sl_stub_entry and entering directly at sl_stub, and then the code
> in sl_stub.S can use sl_stub for the base address, defining the rva()
> macro there as
> 
>   #define rva(X) ((X) - sl_stub)
> 
> You will also need to avoid initializing data with symbol addresses.
> 
>   .long mle_header
>   .long sl_stub_entry
>   .long sl_gdt
> 
> will create relocations. The third one is easy, just replace it with
> sl_gdt - sl_gdt_desc and initialize it at runtime with
> 
>         leal    rva(sl_gdt_desc)(%ebx), %eax
>         addl    %eax, 2(%eax)
>         lgdt    (%eax)
> 
> The other two are more messy, unfortunately there is no easy way to tell
> the linker what we want here. The other entry point addresses (for the
> EFI stub) are populated in a post-processing step after the compressed
> kernel has been linked, we could teach it to also update kernel_info.
> 
> Without that, for kernel_info, you could change it to store the offset
> of the MLE header from kernel_info, instead of from the start of the
> image.
> 
> For the MLE header, it could be moved to .head.text in head_64.S, and
> initialized with
>   .long rva(sl_stub)
> This will also let it be placed at a fixed offset from startup_32, so
> that kernel_info can just be populated with a constant.

Thank you for the detailed reply. I am going to start digging into this now.

Ross

> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-25 Thread Arvind Sankar
On Fri, Sep 25, 2020 at 10:56:43AM -0400, Ross Philipson wrote:
> On 9/24/20 1:38 PM, Arvind Sankar wrote:
> > On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
> > 
> >> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> >> index 97d37f0..42043bf 100644
> >> --- a/arch/x86/boot/compressed/head_64.S
> >> +++ b/arch/x86/boot/compressed/head_64.S
> >> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
> >>  SYM_FUNC_END(efi32_stub_entry)
> >>  #endif
> >>  
> >> +#ifdef CONFIG_SECURE_LAUNCH
> >> +SYM_FUNC_START(sl_stub_entry)
> >> +  /*
> >> +   * On entry, %ebx has the entry abs offset to sl_stub_entry. To
> >> +   * find the beginning of where we are loaded, sub off from the
> >> +   * beginning.
> >> +   */
> > 
> > This requirement should be added to the documentation. Is it necessary
> > or can this stub just figure out the address the same way as the other
> > 32-bit entry points, using the scratch space in bootparams as a little
> > stack?
> 
> It is based on the state of the BSP when TXT vectors to the measured
> launch environment. It is documented in the TXT spec and the SDMs.
> 

I think it would be useful to add to the x86 boot documentation how
exactly this new entry point is called, even if it's just adding a link
to some section of those specs. The doc should also say that an
mle_header_offset of 0 means the kernel isn't secure launch enabled.

> > 
> > For the 32-bit assembler code that's being added, tip/master now has
> > changes that prevent the compressed kernel from having any runtime
> > relocations.  You'll need to revise some of the code and the data
> > structures initial values to avoid creating relocations.
> 
> Could you elaborate on this some more? I am not sure I see places in the
> secure launch asm that would be creating relocations like this.
> 
> Thank you,
> Ross
> 

You should see them if you do
readelf -r arch/x86/boot/compressed/vmlinux

In terms of the code, things like:

        addl    %ebx, (sl_gdt_desc + 2)(%ebx)

will create a relocation, because the linker interprets this as wanting
the runtime address of sl_gdt_desc, rather than just the offset from
startup_32.

https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/tree/arch/x86/boot/compressed/head_64.S#n48

has a comment with some explanation and a macro that the 32-bit code in
startup_32 uses to avoid creating relocations.

Since the SL code is in a different assembler file (and a different
section), you can't directly use the same macro. I would suggest getting
rid of sl_stub_entry and entering directly at sl_stub, and then the code
in sl_stub.S can use sl_stub for the base address, defining the rva()
macro there as

#define rva(X) ((X) - sl_stub)

You will also need to avoid initializing data with symbol addresses.

.long mle_header
.long sl_stub_entry
.long sl_gdt

will create relocations. The third one is easy, just replace it with
sl_gdt - sl_gdt_desc and initialize it at runtime with

        leal    rva(sl_gdt_desc)(%ebx), %eax
        addl    %eax, 2(%eax)
        lgdt    (%eax)

The other two are more messy, unfortunately there is no easy way to tell
the linker what we want here. The other entry point addresses (for the
EFI stub) are populated in a post-processing step after the compressed
kernel has been linked, we could teach it to also update kernel_info.

Without that, for kernel_info, you could change it to store the offset
of the MLE header from kernel_info, instead of from the start of the
image.

For the MLE header, it could be moved to .head.text in head_64.S, and
initialized with
.long rva(sl_stub)
This will also let it be placed at a fixed offset from startup_32, so
that kernel_info can just be populated with a constant.
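
A rough sketch of that arrangement (section placement and the surrounding
fields are illustrative, not the final patch; rva() is the existing
head_64.S macro that expands to an offset from startup_32):

        /* In head_64.S, so the header sits at a fixed offset from startup_32 */
        __HEAD

SYM_DATA_START(mle_header)
        .long   0x9082ac5a              /* UUID0 */
        .long   0x74a7476f              /* UUID1 */
        .long   0xa2555c0f              /* UUID2 */
        .long   0x42b651cb              /* UUID3 */
        .long   0x00000034              /* MLE header size */
        .long   0x00020002              /* MLE version 2.2 */
        .long   rva(sl_stub)            /* entry point as an image offset, no relocation */
        /* remaining fields as in the earlier proposal */
SYM_DATA_END(mle_header)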


Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-25 Thread Ross Philipson
On 9/24/20 1:38 PM, Arvind Sankar wrote:
> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
>> later AMD SKINIT) to vector to during the late launch. The symbol
>> sl_stub_entry is that entry point and its offset into the kernel is
>> conveyed to the launching code using the MLE (Measured Launch
>> Environment) header in the structure named mle_header. The offset of the
>> MLE header is set in the kernel_info. The routine sl_stub contains the
>> very early late launch setup code responsible for setting up the basic
>> environment to allow the normal kernel startup_32 code to proceed. It is
>> also responsible for properly waking and handling the APs on Intel
>> platforms. The routine sl_main, which runs after entering 64b mode, is
>> responsible for measuring configuration and module information before
>> it is used, such as the boot params, the kernel command line, the TXT
>> heap, an external initramfs, etc.
>>
>> Signed-off-by: Ross Philipson 
> 
> Which version of the kernel is this based on?

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

master branch

> 
>> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
>> index 97d37f0..42043bf 100644
>> --- a/arch/x86/boot/compressed/head_64.S
>> +++ b/arch/x86/boot/compressed/head_64.S
>> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>>  SYM_FUNC_END(efi32_stub_entry)
>>  #endif
>>  
>> +#ifdef CONFIG_SECURE_LAUNCH
>> +SYM_FUNC_START(sl_stub_entry)
>> +/*
>> + * On entry, %ebx has the entry abs offset to sl_stub_entry. To
>> + * find the beginning of where we are loaded, sub off from the
>> + * beginning.
>> + */
> 
> This requirement should be added to the documentation. Is it necessary
> or can this stub just figure out the address the same way as the other
> 32-bit entry points, using the scratch space in bootparams as a little
> stack?

It is based on the state of the BSP when TXT vectors to the measured
launch environment. It is documented in the TXT spec and the SDMs.

> 
>> +        leal    (startup_32 - sl_stub_entry)(%ebx), %ebx
>> +
>> +/* More room to work in sl_stub in the text section */
>> +jmp sl_stub
>> +
>> +SYM_FUNC_END(sl_stub_entry)
>> +#endif
>> +
>>  .code64
>>  .org 0x200
>>  SYM_CODE_START(startup_64)
>> @@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
>>      shrq    $3, %rcx
>>  rep stosq
>>  
>> +#ifdef CONFIG_SECURE_LAUNCH
>> +/*
>> + * Have to do the final early sl stub work in 64b area.
>> + *
>> + * *** NOTE ***
>> + *
>> + * Several boot params get used before we get a chance to measure
>> + * them in this call. This is a known issue and we currently don't
>> + * have a solution. The scratch field doesn't matter and loadflags
>> + * have KEEP_SEGMENTS set by the stub code. There is no obvious way
>> + * to do anything about the use of kernel_alignment or init_size
>> + * though these seem low risk.
>> + */
> 
> There are various fields in bootparams that depend on where the
> kernel/initrd and cmdline are loaded in memory. If the entire bootparams
> page is getting measured, does that mean they all have to be at fixed
> addresses on every boot?

Yes that is a very good point. In other places when measuring we make
sure to skip things like addresses and sizes of things outside of the
structure being measured. This needs to be done with boot params too.

> 
> Also KEEP_SEGMENTS support is gone from the kernel since v5.7, since it
> was unused. startup_32 now always loads a GDT and then the segment
> registers. I think this should be ok for you as the only thing the flag
> used to do in the 64-bit kernel was to stop startup_32 from blindly
> loading __BOOT_DS into the segment registers before it had setup its own
> GDT.

Yeah, this was there to prevent that blind loading of __BOOT_DS. I see it
is gone so I will remove the comment and the place where the flag is set.

> 
> For the 32-bit assembler code that's being added, tip/master now has
> changes that prevent the compressed kernel from having any runtime
> relocations.  You'll need to revise some of the code and the data
> structures initial values to avoid creating relocations.

Could you elaborate on this some more? I am not sure I see places in the
secure launch asm that would be creating relocations like this.

Thank you,
Ross

> 
> Thanks.
> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-24 Thread Arvind Sankar
On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
> later AMD SKINIT) to vector to during the late launch. The symbol
> sl_stub_entry is that entry point and its offset into the kernel is
> conveyed to the launching code using the MLE (Measured Launch
> Environment) header in the structure named mle_header. The offset of the
> MLE header is set in the kernel_info. The routine sl_stub contains the
> very early late launch setup code responsible for setting up the basic
> environment to allow the normal kernel startup_32 code to proceed. It is
> also responsible for properly waking and handling the APs on Intel
> platforms. The routine sl_main, which runs after entering 64b mode, is
> responsible for measuring configuration and module information before
> it is used, such as the boot params, the kernel command line, the TXT
> heap, an external initramfs, etc.
> 
> Signed-off-by: Ross Philipson 

Which version of the kernel is this based on?

> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index 97d37f0..42043bf 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>  SYM_FUNC_END(efi32_stub_entry)
>  #endif
>  
> +#ifdef CONFIG_SECURE_LAUNCH
> +SYM_FUNC_START(sl_stub_entry)
> + /*
> +  * On entry, %ebx has the entry abs offset to sl_stub_entry. To
> +  * find the beginning of where we are loaded, sub off from the
> +  * beginning.
> +  */

This requirement should be added to the documentation. Is it necessary
or can this stub just figure out the address the same way as the other
32-bit entry points, using the scratch space in bootparams as a little
stack?

> +        leal    (startup_32 - sl_stub_entry)(%ebx), %ebx
> +
> + /* More room to work in sl_stub in the text section */
> + jmp sl_stub
> +
> +SYM_FUNC_END(sl_stub_entry)
> +#endif
> +
>   .code64
>   .org 0x200
>  SYM_CODE_START(startup_64)
> @@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
>        shrq    $3, %rcx
>   rep stosq
>  
> +#ifdef CONFIG_SECURE_LAUNCH
> + /*
> +  * Have to do the final early sl stub work in 64b area.
> +  *
> +  * *** NOTE ***
> +  *
> +  * Several boot params get used before we get a chance to measure
> +  * them in this call. This is a known issue and we currently don't
> +  * have a solution. The scratch field doesn't matter and loadflags
> +  * have KEEP_SEGMENTS set by the stub code. There is no obvious way
> +  * to do anything about the use of kernel_alignment or init_size
> +  * though these seem low risk.
> +  */

There are various fields in bootparams that depend on where the
kernel/initrd and cmdline are loaded in memory. If the entire bootparams
page is getting measured, does that mean they all have to be at fixed
addresses on every boot?

Also KEEP_SEGMENTS support is gone from the kernel since v5.7, since it
was unused. startup_32 now always loads a GDT and then the segment
registers. I think this should be ok for you as the only thing the flag
used to do in the 64-bit kernel was to stop startup_32 from blindly
loading __BOOT_DS into the segment registers before it had setup its own
GDT.

For the 32-bit assembler code that's being added, tip/master now has
changes that prevent the compressed kernel from having any runtime
relocations.  You'll need to revise some of the code and the data
structures initial values to avoid creating relocations.

Thanks.


[PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-24 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main, which runs after entering 64b mode, is
responsible for measuring configuration and module information before
it is used, such as the boot params, the kernel command line, the TXT
heap, an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/x86/boot.rst |   9 +
 arch/x86/boot/compressed/Makefile  |   1 +
 arch/x86/boot/compressed/head_64.S |  34 ++
 arch/x86/boot/compressed/kernel_info.S |   7 +
 arch/x86/boot/compressed/sl_main.c | 390 +
 arch/x86/boot/compressed/sl_stub.S | 606 +
 arch/x86/kernel/asm-offsets.c  |  16 +
 7 files changed, 1063 insertions(+)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
index 7fafc7a..7232801 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -1026,6 +1026,15 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect structs.
 
+	============	==================
+	Field name:	mle_header_offset
+	Offset/size:	0x0010/4
+	============	==================
+
+  This field contains the offset to the Secure Launch Measured Launch
+  Environment (MLE) header. This offset is used to locate information needed
+  during a secure late launch using Intel TXT and AMD SKINIT.
+
 
 The Image Checksum
==================
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 35947b9..d881ff7 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,6 +102,7 @@ vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA512) += $(obj)/early_sha512.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/tpm/tpmio.o $(obj)/tpm/tpm_buff.o \
 	$(obj)/tpm/tis.o $(obj)/tpm/crb.o $(obj)/tpm/tpm1_cmds.o \
 	$(obj)/tpm/tpm2_cmds.o $(obj)/tpm/tpm2_auth.o $(obj)/tpm/tpm.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sl_main.o $(obj)/sl_stub.o
 
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 97d37f0..42043bf 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
 SYM_FUNC_END(efi32_stub_entry)
 #endif
 
+#ifdef CONFIG_SECURE_LAUNCH
+SYM_FUNC_START(sl_stub_entry)
+	/*
+	 * On entry, %ebx has the entry abs offset to sl_stub_entry. To
+	 * find the beginning of where we are loaded, sub off from the
+	 * beginning.
+	 */
+	leal	(startup_32 - sl_stub_entry)(%ebx), %ebx
+
+	/* More room to work in sl_stub in the text section */
+	jmp	sl_stub
+
+SYM_FUNC_END(sl_stub_entry)
+#endif
+
.code64
.org 0x200
 SYM_CODE_START(startup_64)
@@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
	shrq	$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+	/*
+	 * Have to do the final early sl stub work in 64b area.
+	 *
+	 * *** NOTE ***
+	 *
+	 * Several boot params get used before we get a chance to measure
+	 * them in this call. This is a known issue and we currently don't
+	 * have a solution. The scratch field doesn't matter and loadflags
+	 * have KEEP_SEGMENTS set by the stub code. There is no obvious way
+	 * to do anything about the use of kernel_alignment or init_size
+	 * though these seem low risk.
+	 */
+	pushq	%rsi
+	movq	%rsi, %rdi
+	callq	sl_main
+	popq	%rsi
+#endif
+
 /*
  * Do the extraction, and jump to the new kernel..
  */
diff --git a/arch/x86/boot/compressed/kernel_info.S b/arch/x86/boot/compressed/kernel_info.S
index f818ee8..192d557 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -17,6 +17,13 @@ kernel_info:
 	/* Maximal allowed type for setup_data and setup_indirect structs. */
 	.long	SETUP_TYPE_MAX
 
+	/* Offset to the MLE header structure */
+#ifdef CONFIG_SECURE_LAUNCH
+	.long	mle_header