Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-20 Thread ross . philipson

On 6/19/24 5:18 PM, Jarkko Sakkinen wrote:

On Thu Jun 6, 2024 at 7:49 PM EEST,  wrote:

For any architecture, dig up a similar fact:

1. It is not dead.
2. It will also be there in the future.

Make each architecture's existential relevance clear, without too much
coloring in the text, so that it is easy to check.

The patch set is nearing 5k lines so you should be really good with
measured facts too (not just launch) :-)


... but overall I get your meaning. We will spend time on this sort of
documentation for the v10 release.


Yeah, I mean we live in the universe of 3 letter acronyms so
it is better to summarize the existential part, especially
in a ~5 KSLOC patch set ;-)


Indeed, thanks.

Ross



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-06 Thread ross . philipson

On 6/5/24 11:02 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 10:03 PM EEST,  wrote:

So I did not mean to imply that DRTM support on various
platforms/architectures has a short expiration date. In fact we are
actively working on DRTM support through the TrenchBoot project on
several platforms/architectures. Just a quick rundown here:

Intel: Plenty of Intel platforms are vPro with TXT. It is really just
the lower end systems that don't have it available (like Core i3). And
my guess was wrong about x86s. You can find the spec on the page in the
following link. There is an entire subsection on SMX support on x86s and
the changes to the various GETSEC instruction leaves that were made to
make it work there (see 3.15).

https://www.intel.com/content/www/us/en/developer/articles/technical/envisioning-future-simplified-architecture.html


Happened to bump into the same PDF specification, and the sought
information is exactly "3.15 SMX Changes". So just write this down in
some patch that starts adding SMX things.

Link: 
https://cdrdv2.intel.com/v1/dl/getContent/776648

So link and document, and other stuff above is not relevant from
upstream context, only potential maintenance burden :-)


I am not 100% sure what you mean exactly here...



For any architecture, dig up a similar fact:

1. It is not dead.
2. It will also be there in the future.

Make each architecture's existential relevance clear, without too much
coloring in the text, so that it is easy to check.

The patch set is nearing 5k lines so you should be really good with
measured facts too (not just launch) :-)


... but overall I get your meaning. We will spend time on this sort of 
documentation for the v10 release.


Thanks for the feedback,
Ross



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-05 Thread ross . philipson

On 6/4/24 9:04 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 5:33 AM EEST,  wrote:

On 6/4/24 5:22 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 2:00 AM EEST,  wrote:

On 6/4/24 3:36 PM, Jarkko Sakkinen wrote:

On Tue Jun 4, 2024 at 11:31 PM EEST,  wrote:

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can
contain architecture specific sub-entities. E.g. there is a TXT specific
table and in the future there will be an AMD and ARM one (and hopefully
some others). I hope that addresses what you are pointing out or maybe I
don't fully understand what you mean here...


At least Intel SDM has a definition of any possible architecture
specific data structure. It is handy to also have this available
in inline comment for any possible such structure pointing out the
section where it is defined.


The TXT specific structure is not defined in the SDM or the TXT dev
guide. Part of it is driven by requirements in the TXT dev guide but
that guide does not contain implementation details.

That said, if you would like links to relevant documents in the comments
before arch specific structures, I can add them.


Vol. 2D 7-40, in the description of GETSEC[WAKEUP], there is in fact a
description of the MLE JOIN structure at least:

1. GDT limit (offset 0)
2. GDT base (offset 4)
3. Segment selector initializer (offset 8)
4. EIP (offset 12)

So is this only exercised in protected mode, and not in long mode? Just
wondering whether I should file a bug report on this for the SDM or not.


I believe you can issue the SENTER instruction in long mode, compat mode
or protected mode. On the other side, though, you will pop out of the
TXT initialization in protected mode. The SDM outlines which registers
will hold which values and what is valid and not valid. The APs will also
vector through the join structure mentioned above to the location
specified in protected mode using the GDT information you provide.



Especially this puzzles me, given that x86s won't have protected
mode in the first place...


My guess is the simplified x86 architecture will not support TXT. It is
not supported on a number of CPUs/chipsets as it stands today. Just a
guess but we know only vPro systems support TXT today.


I'm wondering whether this could bootstrap itself inside TDX or SNP, and
that way provide a path forward? AFAIK, TDX can be nested straight off
the bat, and SNP from 2nd generation EPYCs, which contain the feature.

I do buy the idea of attesting the host, not just the guests, even in
the "confidential world". That said, I'm not sure it makes sense
to add all this infrastructure for a technology with such a short
expiration date?

I would not want to say this at v9, and it is not really your fault
either, but for me this would make a lot more sense if the core of
Trenchboot was redesigned around these newer technologies with a
long-term future.


So I did not mean to imply that DRTM support on various 
platforms/architectures has a short expiration date. In fact we are 
actively working on DRTM support through the TrenchBoot project on 
several platforms/architectures. Just a quick rundown here:


Intel: Plenty of Intel platforms are vPro with TXT. It is really just 
the lower end systems that don't have it available (like Core i3). And 
my guess was wrong about x86s. You can find the spec on the page in the 
following link. There is an entire subsection on SMX support on x86s and 
the changes to the various GETSEC instruction leaves that were made to 
make it work there (see 3.15).


https://www.intel.com/content/www/us/en/developer/articles/technical/envisioning-future-simplified-architecture.html

AMD: We are actively working on SKINIT DRTM support that will go into 
TrenchBoot. There are changes coming soon to AMD SKINIT to make it more 
robust and address some earlier issues. We hope to be able to start 
sending AMD DRTM support up in the posts to LKML in the not too distant 
future.


Arm: They have recently released their DRTM specification and at least 
one Arm vendor is close to releasing firmware that will support DRTM. 
Again we are actively working in this area on the TrenchBoot project.


https://developer.arm.com/documentation/den0113/latest/

One final thought I had. The technologies you mentioned above seem to me
to be complementary to DRTM as opposed to being a replacement for it,
though I am not an expert on them.


Perhaps Daniel Smith would like to expand on what I have said here.

Thanks
Ross




The idea itself is great!

BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-04 Thread ross . philipson

On 6/4/24 5:22 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 2:00 AM EEST,  wrote:

On 6/4/24 3:36 PM, Jarkko Sakkinen wrote:

On Tue Jun 4, 2024 at 11:31 PM EEST,  wrote:

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can
contain architecture specific sub-entities. E.g. there is a TXT specific
table and in the future there will be an AMD and ARM one (and hopefully
some others). I hope that addresses what you are pointing out or maybe I
don't fully understand what you mean here...


At least Intel SDM has a definition of any possible architecture
specific data structure. It is handy to also have this available
in inline comment for any possible such structure pointing out the
section where it is defined.


The TXT specific structure is not defined in the SDM or the TXT dev
guide. Part of it is driven by requirements in the TXT dev guide but
that guide does not contain implementation details.

That said, if you would like links to relevant documents in the comments
before arch specific structures, I can add them.


Vol. 2D 7-40, in the description of GETSEC[WAKEUP], there is in fact a
description of the MLE JOIN structure at least:

1. GDT limit (offset 0)
2. GDT base (offset 4)
3. Segment selector initializer (offset 8)
4. EIP (offset 12)

So is this only exercised in protected mode, and not in long mode? Just
wondering whether I should file a bug report on this for the SDM or not.


I believe you can issue the SENTER instruction in long mode, compat mode
or protected mode. On the other side, though, you will pop out of the
TXT initialization in protected mode. The SDM outlines which registers
will hold which values and what is valid and not valid. The APs will also
vector through the join structure mentioned above to the location
specified in protected mode using the GDT information you provide.




Especially this puzzles me, given that x86s won't have protected
mode in the first place...


My guess is the simplified x86 architecture will not support TXT. It is 
not supported on a number of CPUs/chipsets as it stands today. Just a 
guess but we know only vPro systems support TXT today.


Thanks
Ross



BR, Jarkko






Re: [PATCH v9 16/19] tpm: Add ability to set the preferred locality the TPM chip uses

2024-06-04 Thread ross . philipson

On 6/4/24 3:50 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 1:14 AM EEST,  wrote:

On 6/4/24 1:27 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Currently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
   drivers/char/tpm/tpm-chip.c  | 24 +++-
   drivers/char/tpm/tpm-interface.c | 15 +++
   drivers/char/tpm/tpm.h   |  1 +
   include/linux/tpm.h  |  4 
   4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..73eac54d61fb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
   
-	rc = chip->ops->request_locality(chip, 0);

+   rc = chip->ops->request_locality(chip, chip->pref_locality);
if (rc < 0)
return rc;
   
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)

   }
   EXPORT_SYMBOL_GPL(tpm_chip_stop);
   
+/**

+ * tpm_chip_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >= TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->pref_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_preferred_locality);
+
   /**
* tpm_try_get_ops() - Get a ref to the tpm_chip
* @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
   
   	chip->locality = -1;

+   chip->pref_locality = 0;
return chip;
   
   out:

diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 5da134f12c9a..35f14ccecf0e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -274,6 +274,21 @@ int tpm_is_tpm2(struct tpm_chip *chip)
   }
   EXPORT_SYMBOL_GPL(tpm_is_tpm2);
   
+/**

+ * tpm_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   return tpm_chip_preferred_locality(chip, locality);
+}
+EXPORT_SYMBOL_GPL(tpm_preferred_locality);


   What good does this extra wrapping do?

   tpm_set_default_locality() and default_locality would make so much more
   sense in any case.


Are you mainly just talking about my naming choices here and in the
follow-on response? Can you clarify what you are requesting?


I'd prefer:

1. Name the variable as default_locality.
2. Only create a single exported function in tpm-chip.c:
tpm_chip_set_default_locality().
3. Call this function at all call sites.

"tpm_preferred_locality" should just be removed, as tpm_chip_*
is exported anyway.


Ok got it, thanks.



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-04 Thread ross . philipson

On 6/4/24 3:36 PM, Jarkko Sakkinen wrote:

On Tue Jun 4, 2024 at 11:31 PM EEST,  wrote:

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can
contain architecture specific sub-entities. E.g. there is a TXT specific
table and in the future there will be an AMD and ARM one (and hopefully
some others). I hope that addresses what you are pointing out or maybe I
don't fully understand what you mean here...


At least Intel SDM has a definition of any possible architecture
specific data structure. It is handy to also have this available
in inline comment for any possible such structure pointing out the
section where it is defined.


The TXT specific structure is not defined in the SDM or the TXT dev 
guide. Part of it is driven by requirements in the TXT dev guide but 
that guide does not contain implementation details.


That said, if you would like links to relevant documents in the comments 
before arch specific structures, I can add them.


Ross



BR, Jarkko





Re: [PATCH v9 16/19] tpm: Add ability to set the preferred locality the TPM chip uses

2024-06-04 Thread ross . philipson

On 6/4/24 1:27 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Currently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
  drivers/char/tpm/tpm-chip.c  | 24 +++-
  drivers/char/tpm/tpm-interface.c | 15 +++
  drivers/char/tpm/tpm.h   |  1 +
  include/linux/tpm.h  |  4 
  4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..73eac54d61fb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
  
-	rc = chip->ops->request_locality(chip, 0);

+   rc = chip->ops->request_locality(chip, chip->pref_locality);
if (rc < 0)
return rc;
  
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)

  }
  EXPORT_SYMBOL_GPL(tpm_chip_stop);
  
+/**

+ * tpm_chip_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >= TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->pref_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_preferred_locality);
+
  /**
   * tpm_try_get_ops() - Get a ref to the tpm_chip
   * @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
  
  	chip->locality = -1;

+   chip->pref_locality = 0;
return chip;
  
  out:

diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 5da134f12c9a..35f14ccecf0e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -274,6 +274,21 @@ int tpm_is_tpm2(struct tpm_chip *chip)
  }
  EXPORT_SYMBOL_GPL(tpm_is_tpm2);
  
+/**

+ * tpm_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   return tpm_chip_preferred_locality(chip, locality);
+}
+EXPORT_SYMBOL_GPL(tpm_preferred_locality);


  What good does this extra wrapping do?

  tpm_set_default_locality() and default_locality would make so much more
  sense in any case.


Are you mainly just talking about my naming choices here and in the 
follow-on response? Can you clarify what you are requesting?


Thanks
Ross



  BR, Jarkko





Re: [PATCH v9 10/19] x86: Secure Launch SMP bringup support

2024-06-04 Thread ross . philipson

On 6/4/24 1:05 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
As the APs are woken up by writing the monitor, the APs jump to the Secure
Launch entry point in the RM piggy which mimics what the real mode code would
do then jumps to the standard RM piggy protected mode entry point.

Signed-off-by: Ross Philipson 
---
  arch/x86/include/asm/realmode.h  |  3 ++
  arch/x86/kernel/smpboot.c| 58 +++-
  arch/x86/realmode/init.c |  3 ++
  arch/x86/realmode/rm/header.S|  3 ++
  arch/x86/realmode/rm/trampoline_64.S | 32 +++
  5 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
  #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
  #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
  };
  
  /* This must match data at realmode/rm/trampoline_{32,64}.S */

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 0c35207320cb..adb521221d6c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -868,6 +869,56 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
  }
  
+#ifdef CONFIG_SECURE_LAUNCH

+
+static bool slaunch_is_txt_launch(void)
+{
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return true;
+
+   return false;
+}


static inline bool slaunch_is_txt_launch(void)
{
	u32 mask = SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT;

	return (slaunch_get_flags() & mask) == mask;
}


Actually I think I can take your suggested change and move this function 
to the main header files since this check is done elsewhere. And later I 
can make others like slaunch_is_skinit_launch(). Thanks.






+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   struct sl_ap_stack_and_monitor *stack_monitor = NULL;


struct sl_ap_stack_and_monitor *stack_monitor; /* note: no initialization */
struct sl_ap_wake_info *ap_wake_info;


Will fix.





+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor 
*)__va(ap_wake_info->ap_wake_block +
+  
ap_wake_info->ap_stacks_offset);
+
+   for (unsigned int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   /* Write the monitor */


I'd remove this comment.


Sure.

Ross




+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline bool slaunch_is_txt_launch(void)
+{
+   return false;
+}
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
  /*
   * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
   * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -877,7 +928,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
  static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
  {
unsigned long start_ip = real_mode_header->trampoline_start;
-   int ret;
+   int ret = 0;
  
  #ifdef CONFIG_X86_64

/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -922,12 +973,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct 
task_struct *idle)
  
  	/*

 * Wake up a CPU in difference cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,
 

Re: [PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 12:59 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
  arch/x86/kernel/Makefile   |   1 +
  arch/x86/kernel/setup.c|   3 +
  arch/x86/kernel/slaunch.c  | 525 +
  drivers/iommu/intel/dmar.c |   4 +
  4 files changed, 533 insertions(+)
  create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d128167e2e2..b35ca99ab0a0 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -76,6 +76,7 @@ obj-$(CONFIG_X86_32)  += tls.o
  obj-$(CONFIG_IA32_EMULATION)  += tls.o
  obj-y += step.o
  obj-$(CONFIG_INTEL_TXT)   += tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o


Hmm... should that be CONFIG_X86_SECURE_LAUNCH?

Just asking...


It could be if you would like. I guess we just thought it was implied 
given its location.


Ross



BR, Jarkko






Re: [PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 12:58 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.


"memory protections" is not too helpful tbh.

Better to describe very briefly the VT-d usage.


We can enhance the commit message and talk about VT-d usage and what 
PMRs are and do.


Thanks



BR, Jarkko





Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 1:54 PM, Ard Biesheuvel wrote:

On Tue, 4 Jun 2024 at 19:34,  wrote:


On 6/4/24 10:27 AM, Ard Biesheuvel wrote:

On Tue, 4 Jun 2024 at 19:24,  wrote:


On 5/31/24 6:33 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
Documentation/arch/x86/boot.rst|  21 +
arch/x86/boot/compressed/Makefile  |   3 +-
arch/x86/boot/compressed/head_64.S |  30 +
arch/x86/boot/compressed/kernel_info.S |  34 ++
arch/x86/boot/compressed/sl_main.c | 577 
arch/x86/boot/compressed/sl_stub.S | 725 +
arch/x86/include/asm/msr-index.h   |   5 +
arch/x86/include/uapi/asm/bootparam.h  |   1 +
arch/x86/kernel/asm-offsets.c  |  20 +
9 files changed, 1415 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/boot/compressed/sl_main.c
create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
   - If 1, KASLR enabled.
   - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
  Bit 5 (write): QUIET_FLAG

   - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

  This field contains maximal allowed type for setup_data and 
setup_indirect structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch Environment
+  (MLE) header. This offset is used to locate information needed during a secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



The Image Checksum
==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
   $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
   pushq   $0
   popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is covered

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 12:56 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
  
+  Bit 2 (kernel internal): SLAUNCH_FLAG

+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG
  
  	- If 0, print early messages.

@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4
  
This field contains maximal allowed type for setup_data and setup_indirect structs.
  
+	=

+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest of which can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
  
  The Image Checksum

  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
  
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o

+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
  
  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE

$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
  
+#ifdef CONFIG_SECURE_LAUNCH

+   /* Ensure the relocation region is covered by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
  /*
   * Copy the compressed kernel to the end of our buffer
   * where decompression in place becomes safe.
@@ -462,6 +469,29 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
  
+#ifdef CONFIG_SECURE_LAUNCH

+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure

Re: [PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-06-04 Thread ross . philipson

On 6/4/24 11:52 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lies with the platform firmware, not with
software, and is often outside of the user's control.

Even if we'd prefer to use SHA-256-only, if firmware elected to start us
with the SHA-1 and SHA-256 banks active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
  arch/x86/boot/compressed/Makefile |  2 +
  arch/x86/boot/compressed/early_sha1.c | 12 
  include/crypto/sha1.h |  1 +
  lib/crypto/sha1.c | 81 +++
  4 files changed, 96 insertions(+)
  create mode 100644 arch/x86/boot/compressed/early_sha1.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index e9522c6893be..3307ebef4e1b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,6 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
  
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o

+
  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
  
diff --git a/arch/x86/boot/compressed/early_sha1.c b/arch/x86/boot/compressed/early_sha1.c

new file mode 100644
index ..8a9b904a73ab
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../lib/crypto/sha1.c"

}

Yep, makes sense. Thinking only: should this just be sha1.c?

Comparing this mainly to drivers/firmware/efi/tpm.c, which is not named
early_tpm.c, even though "early" would arguably make more sense there
than it does here. Here just the sha1 primitive is needed.

This is definitely a nitpick but why carry a prefix that is not
that useful, right?


I am not 100% sure what you mean here, sorry. Could you clarify what you 
mean about the prefix? Do you mean why we chose early_*? There was 
precedent for doing that, e.g. early_serial_console.c.





diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const 
u8 *data,
  #define SHA1_WORKSPACE_WORDS  16
  void sha1_init(__u32 *buf);
  void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
  
  #endif /* _CRYPTO_SHA1_H */

diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 1aebe7be9401..10152125b338 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,4 +137,85 @@ void sha1_init(__u32 *buf)
  }
  EXPORT_SYMBOL(sha1_init);
  
+static void __sha1_transform(u32 *digest, const char *data)

+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));


For the sake of future reference I'd carry always some inline comment
with any memzero_explicit() call site.


We can do that.




+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int 
len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {



if (unlikely((partial + len) < SHA1_BLOCK_SIZE))
goto out;

?


We could do it that way. I guess it would cut down on indentation. I defer 
to Daniel Smith on this...





+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+  

Re: [PATCH v9 05/19] x86: Secure Launch main header file

2024-06-04 Thread ross . philipson

On 6/4/24 11:24 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 


Right and anything AMD specific should also have legit references. I
actually always compare to the spec when I review, so not just
nitpicking really.

I'm actually a bit confused: is this usable on both Intel and AMD in the
current state? It might just be that I have not had time to follow this
for some time.


This header file mostly has TXT/Intel specific definitions in it right 
now but that is just because TXT is the first target architecture. I am 
working on the AMD side of things as we speak and yes, AMD specific 
definitions will go in here and later ARM specific definitions too.


If you would like to see say a comment block with links to relevant 
specifications in this header file, that can be done and they will be 
added as new support is added.


Thanks
Ross



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-04 Thread ross . philipson

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If this is uarch specific, I'd appreciate an Intel SDM reference here so
that I can look it up and compare, ideally at section granularity.


This table is meant to be architecture agnostic, though it can contain 
architecture specific sub-entities. E.g. there is a TXT specific table, 
and in the future there will be AMD and ARM ones (and hopefully some 
others). I hope that addresses what you are pointing out, or maybe I 
don't fully understand what you mean here...


Thanks
Ross



BR, Jarkko





Re: [PATCH v9 01/19] x86/boot: Place kernel_info at a fixed offset

2024-06-04 Thread ross . philipson

On 6/4/24 11:18 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.


So either there are other use cases that you should enumerate, or just
be straight and state that this is done for Trenchboot.


The kernel_info concept came about because of the work we were doing on 
TrenchBoot but it was not done for TrenchBoot. It was a collaborative 
effort between the TrenchBoot team and H. Peter Anvin at Intel. He 
actually envisioned it being useful elsewhere. If you find the original 
commits for it (that went in stand-alone) from Daniel Kiper, there is a 
fair amount of detail what kernel_info is supposed to be and should be 
used for.




I believe latter is the case, and there is no reason to project further.
If it does not interfere kernel otherwise, it should be fine just by
that.

Also I believe that it is written as Trenchboot, without "series" ;-)
Think, when writing a commit message, that it will some day be part of the
commit log, not a series flying in the air.

Sorry for the nitpicks but better to be punctual and that way also
transparent as possible, right?


No problem. We submit the patch sets to get feedback :)

Thanks for the feedback.





Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is


"To allow Trenchboot to access the fields of kernel_info..."

Much more understandable.


placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.


Aligned to which boundary, and a short explanation of why that boundary,
i.e. state the obvious if you bring it up here anyway.

This just seems to be progressing pretty well, so I'm taking out my
eyeglass and looking into the nitty-gritty details...


So a lot of this is up in the air if you read the responses between us 
and Ard Biesheuvel. It would be nice to get rid of the part where 
kernel_info is forced to a fixed offset in the setup kernel.


Thanks
Ross



BR, Jarkko





Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 10:27 AM, Ard Biesheuvel wrote:

On Tue, 4 Jun 2024 at 19:24,  wrote:


On 5/31/24 6:33 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
   Documentation/arch/x86/boot.rst|  21 +
   arch/x86/boot/compressed/Makefile  |   3 +-
   arch/x86/boot/compressed/head_64.S |  30 +
   arch/x86/boot/compressed/kernel_info.S |  34 ++
   arch/x86/boot/compressed/sl_main.c | 577 
   arch/x86/boot/compressed/sl_stub.S | 725 +
   arch/x86/include/asm/msr-index.h   |   5 +
   arch/x86/include/uapi/asm/bootparam.h  |   1 +
   arch/x86/kernel/asm-offsets.c  |  20 +
   9 files changed, 1415 insertions(+), 1 deletion(-)
   create mode 100644 arch/x86/boot/compressed/sl_main.c
   create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
  - If 1, KASLR enabled.
  - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
 Bit 5 (write): QUIET_FLAG

  - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

 This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest of which can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



   The Image Checksum
   ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
   vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
   vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

   $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
  $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
  pushq   $0
  popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 5/31/24 7:04 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 15:33, Ard Biesheuvel  wrote:


On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
 - If 1, KASLR enabled.
 - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG

 - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest of which can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



  The Image Checksum
  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
 pushq   $0
 popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
  /*
   * Copy

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 5/31/24 6:33 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
 - If 1, KASLR enabled.
 - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG

 - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest of which can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



  The Image Checksum
  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
 pushq   $0
 popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
  /*
   * Copy the compressed kernel to the end of our buffer
   * where

Re: [PATCH v9 19/19] x86: EFI stub DRTM launch support for Secure Launch

2024-06-04 Thread ross . philipson

On 5/31/24 4:09 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 


Just some minor remarks below. The overall approach in this patch
looks fine now.



---
  drivers/firmware/efi/libstub/x86-stub.c | 98 +
  1 file changed, 98 insertions(+)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
b/drivers/firmware/efi/libstub/x86-stub.c
index d5a8182cf2e1..a1143d006202 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
  #include 
  #include 
  #include 
+#include 
+#include 

  #include 
  #include 
@@ -830,6 +832,97 @@ static efi_status_t efi_decompress_kernel(unsigned long 
*kernel_entry)
 return efi_adjust_memory_range_protection(addr, kernel_text_size);
  }

+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))


IS_ENABLED() is mostly used for C conditionals not CPP ones.

It would be nice if this #if could be dropped, and replaced with ... (see below)



+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+struct boot_params 
*boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u32)(u64)boot_params;
+


Why is this safe?


The size of the boot_params_addr is a holdover from the legacy boot 
world when boot params were always loaded at a low address. We will 
increase the size of the field.





+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+   if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+   struct setup_header *hdr = (struct setup_header *)((u8 
*)image->image_base + 0x1f1);


Could we use something other than a bare 0x1f1 constant here? struct
boot_params has a struct setup_header at the correct offset, so with
some casting of offsetof() use, we can make this look a lot more self
explanatory.


Yes we can do this.





+   u64 cmdline_ptr, hi_val;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   hi_val = boot_params->ext_cmd_line_ptr;


We have efi_set_u64_split() for this.


Ok I will use that then.




+   cmdline_ptr = boot_params->hdr.cmd_line_ptr | hi_val << 32;
+   boot_params->hdr.cmdline_size = strlen((const char 
*)cmdline_ptr);
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+


... a C conditional here, e.g.,

if (!IS_ENABLED(CONFIG_SECURE_LAUNCH))
 return;

The difference is that all the code will get compile test coverage
every time, instead of only in configs that enable
CONFIG_SECURE_LAUNCH.

This significantly reduces the risk that your stuff will get broken
inadvertently.


Understood, I will address these as you suggest.




+   /*
+* The presence of this table indicates a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_confi

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 5/31/24 4:00 AM, Ard Biesheuvel wrote:

Hello Ross,


Hi Ard,



On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
 - If 1, KASLR enabled.
 - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG

 - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

This field contains maximal allowed type for setup_data and setup_indirect structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch Environment
+  (MLE) header. This offset is used to locate information needed during a secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
  https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.


What you suggest here may be possible with respect to the location of 
the MLE header itself, we need to give that more thought. The real issue 
though is covered in my response below concerning the fields in the MLE 
header.






  The Image Checksum
  ==
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
 pushq   $0
 popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


Ack

Re: [PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-05-31 Thread ross . philipson

On 5/30/24 7:16 PM, Eric Biggers wrote:

On Thu, May 30, 2024 at 06:03:18PM -0700, Ross Philipson wrote:

From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lies with the platform firmware, not with
software, and is often outside of the user's control.

Even if we'd prefer to use SHA-256-only, if firmware elected to start us
with the SHA-1 and SHA-256 banks active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 


Thanks.  This explanation doesn't seem to have made it into the actual code or
documentation.  Can you please get it into a more permanent location?

Also, can you point to where the "deliberately cap the SHA-1 PCRs" thing happens
in the code?

That paragraph is also phrased as a hypothetical, "Even if we'd prefer to use
SHA-256-only".  That implies that you do not, in fact, prefer SHA-256 only.  Is
that the case?  Sure, maybe there are situations where you *have* to use SHA-1,
but why would you not at least *prefer* SHA-256?


Yes those are fair points. We will address them and indicate we prefer 
SHA-256 or better.





/*
  * An implementation of SHA-1's compression function.  Don't use in new code!
  * You shouldn't be using SHA-1, and even if you *have* to use SHA-1, this 
isn't
  * the correct way to hash something with SHA-1 (use crypto_shash instead).
  */
#define SHA1_DIGEST_WORDS   (SHA1_DIGEST_SIZE / 4)
#define SHA1_WORKSPACE_WORDS16
void sha1_init(__u32 *buf);
void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);

> Also, the comment above needs to be updated.


Ack, will address.

Thank you



- Eric





[PATCH v9 18/19] x86: Secure Launch late initcall platform module

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel, along with fetching events
from and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 513 +
 2 files changed, 514 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index b35ca99ab0a0..f2432c4a747a 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -77,6 +77,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..0e1354e3a914
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,513 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024 Assured Information Security, Inc.
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ *
+ * Co-developed-by: Garnet T. Grimm 
+ * Signed-off-by: Garnet T. Grimm 
+ * Signed-off-by: Daniel P. Smith 
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   void __iomem *txt;  \
+   \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(&reg_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,	\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"even

[PATCH v9 16/19] tpm: Add ability to set the preferred locality the TPM chip uses

2024-05-30 Thread Ross Philipson
Currently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c  | 24 +++-
 drivers/char/tpm/tpm-interface.c | 15 +++
 drivers/char/tpm/tpm.h   |  1 +
 include/linux/tpm.h  |  4 
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..73eac54d61fb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   rc = chip->ops->request_locality(chip, chip->pref_locality);
if (rc < 0)
return rc;
 
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)
 }
 EXPORT_SYMBOL_GPL(tpm_chip_stop);
 
+/**
+ * tpm_chip_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >= TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->pref_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_preferred_locality);
+
 /**
  * tpm_try_get_ops() - Get a ref to the tpm_chip
  * @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
 
chip->locality = -1;
+   chip->pref_locality = 0;
return chip;
 
 out:
diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 5da134f12c9a..35f14ccecf0e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -274,6 +274,21 @@ int tpm_is_tpm2(struct tpm_chip *chip)
 }
 EXPORT_SYMBOL_GPL(tpm_is_tpm2);
 
+/**
+ * tpm_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   return tpm_chip_preferred_locality(chip, locality);
+}
+EXPORT_SYMBOL_GPL(tpm_preferred_locality);
+
 /**
  * tpm_pcr_read - read a PCR value from SHA1 bank
   * @chip:  a &struct tpm_chip instance, %NULL for the default chip
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index 6b8b9956ba69..be465422d3fa 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -267,6 +267,7 @@ static inline void tpm_msleep(unsigned int delay_msec)
 int tpm_chip_bootstrap(struct tpm_chip *chip);
 int tpm_chip_start(struct tpm_chip *chip);
 void tpm_chip_stop(struct tpm_chip *chip);
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality);
 struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip);
 
 struct tpm_chip *tpm_chip_alloc(struct device *dev,
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 363f7078c3a9..935a3457d7c8 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -219,6 +219,9 @@ struct tpm_chip {
u8 null_ec_key_y[EC_PT_SZ];
struct tpm2_auth *auth;
 #endif
+
+   /* preferred locality - default 0 */
+   int pref_locality;
 };
 
 #define TPM_HEADER_SIZE10
@@ -461,6 +464,7 @@ static inline u32 tpm2_rc_value(u32 rc)
 #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
 
 extern int tpm_is_tpm2(struct tpm_chip *chip);
+extern bool tpm_preferred_locality(struct tpm_chip *chip, int locality);
 extern __must_check int tpm_try_get_ops(struct tpm_chip *chip);
 extern void tpm_put_ops(struct tpm_chip *chip);
 extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
-- 
2.39.3




[PATCH v9 15/19] tpm: Make locality requests return consistent values

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

The function tpm_tis_request_locality() is expected to return the locality
value that was requested, or a negative error code upon failure. If it is called
while locality_count of struct tis_data is non-zero, no actual locality request
will be sent. Because the ret variable is initially set to 0, the
locality_count will still get increased, and the function will return 0. For a
caller, this would incorrectly indicate that locality 0 was successfully
requested, masking the state changes just mentioned.

Additionally, the function __tpm_tis_request_locality() provides inconsistent
error codes. It will provide either a failed IO write or a -1 should it have
timed out waiting for locality request to succeed.

This commit changes __tpm_tis_request_locality() to return valid negative error
codes to reflect the reason it fails. It then adjusts the return value check in
tpm_tis_request_locality() to check for a non-negative return value before
incrementing locality_count. In addition, the initial value of ret is
set to a negative error code to ensure the check does not pass if
__tpm_tis_request_locality() is not called.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 9fb53bb3e73f..685bdeadec51 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -208,7 +208,7 @@ static int __tpm_tis_request_locality(struct tpm_chip *chip, int l)
 again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
-   return -1;
+   return -EBUSY;
	rc = wait_event_interruptible_timeout(priv->int_queue,
					      (check_locality(chip, l)),
@@ -227,18 +227,21 @@ static int __tpm_tis_request_locality(struct tpm_chip *chip, int l)
tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
}
-   return -1;
+   return -EBUSY;
 }
 
 static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
 {
	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
-   int ret = 0;
+   int ret = -EBUSY;
+
+   if (l < 0 || l > TPM_MAX_LOCALITY)
+   return -EINVAL;
 
	mutex_lock(&priv->locality_count_mutex);
if (priv->locality_count == 0)
ret = __tpm_tis_request_locality(chip, l);
-   if (!ret)
+   if (ret >= 0)
priv->locality_count++;
	mutex_unlock(&priv->locality_count_mutex);
return ret;
-- 
2.39.3




[PATCH v9 14/19] tpm: Ensure tpm is in known state at startup

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

When tis core initializes, it assumes all localities are closed. There
are cases when this may not be true. This commit addresses this by
ensuring all localities are closed before initialization begins.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 ++-
 include/linux/tpm.h |  6 ++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 7c1761bd6000..9fb53bb3e73f 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -1104,7 +1104,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
u32 intmask;
u32 clkrun_val;
u8 rid;
-   int rc, probe;
+   int rc, probe, i;
struct tpm_chip *chip;
 
	chip = tpmm_chip_alloc(dev, &tpm_tis);
@@ -1166,6 +1166,15 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
goto out_err;
}
 
+   /*
+* There are environments, like Intel TXT, that may leave a TPM
+* locality open. Close all localities to start from a known state.
+*/
+   for (i = 0; i <= TPM_MAX_LOCALITY; i++) {
+   if (check_locality(chip, i))
+   tpm_tis_relinquish_locality(chip, i);
+   }
+
/* Take control of the TPM's interrupt hardware and shut it off */
	rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index c17e4efbb2e5..363f7078c3a9 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -147,6 +147,12 @@ struct tpm_chip_seqops {
  */
 #define TPM2_MAX_CONTEXT_SIZE 4096
 
+/*
+ * The maximum locality (0 - 4) for a TPM, as defined in section 3.2 of the
+ * Client Platform Profile Specification.
+ */
+#define TPM_MAX_LOCALITY   4
+
 struct tpm_chip {
struct device dev;
struct device devs;
-- 
2.39.3




[PATCH v9 11/19] kexec: Secure Launch kexec SEXIT support

2024-05-30 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 73 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 48c9ca78e241..f35b4ba433fa 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile ("getsec\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* these mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* In case SMX mode was disabled, enable it for SEXIT */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 0e96f6b24344..ba2fd1c0ddd9 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1046,6 +1047,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
2.39.3




[PATCH v9 13/19] tpm: Protect against locality counter underflow

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

Commit 933bfc5ad213 introduced the use of a locality counter to control when a
locality request is allowed to be sent to the TPM. In that commit, the counter
is decremented unconditionally, creating the conditions for an integer
underflow of the counter.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reported-by: Kanth Ghatraju 
Fixes: 933bfc5ad213 ("tpm, tpm: Implement usage counter for locality")
---
 drivers/char/tpm/tpm_tis_core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 176cd8dbf1db..7c1761bd6000 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -180,7 +180,8 @@ static int tpm_tis_relinquish_locality(struct tpm_chip *chip, int l)
	struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
 
	mutex_lock(&priv->locality_count_mutex);
-   priv->locality_count--;
+   if (priv->locality_count > 0)
+   priv->locality_count--;
if (priv->locality_count == 0)
__tpm_tis_relinquish_locality(priv, l);
	mutex_unlock(&priv->locality_count_mutex);
-- 
2.39.3




[PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-05-30 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing settings for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 525 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 533 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d128167e2e2..b35ca99ab0a0 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -76,6 +76,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 55a1fc332e20..31d1e6b9bd36 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -936,6 +937,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..48c9ca78e241
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   u64 base, size, offset = 0;
+   void *heap;
+   int i;
+
+   if (type > TXT_SINIT_TABLE_MAX)
+   slaun

[PATCH v9 17/19] tpm: Add sysfs interface to allow setting and querying the preferred locality

2024-05-30 Thread Ross Philipson
Expose a sysfs interface to allow user mode to set and query the preferred
locality for the TPM chip.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-sysfs.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c
index 94231f052ea7..5f4a966a4599 100644
--- a/drivers/char/tpm/tpm-sysfs.c
+++ b/drivers/char/tpm/tpm-sysfs.c
@@ -324,6 +324,34 @@ static ssize_t null_name_show(struct device *dev, struct device_attribute *attr,
 static DEVICE_ATTR_RO(null_name);
 #endif
 
+static ssize_t preferred_locality_show(struct device *dev,
+  struct device_attribute *attr, char *buf)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+
+   return sprintf(buf, "%d\n", chip->pref_locality);
+}
+
+static ssize_t preferred_locality_store(struct device *dev, struct device_attribute *attr,
+   const char *buf, size_t count)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+   unsigned int locality;
+
+   if (kstrtouint(buf, 0, &locality))
+   return -ERANGE;
+
+   if (locality >= TPM_MAX_LOCALITY)
+   return -ERANGE;
+
+   if (tpm_chip_preferred_locality(chip, (int)locality))
+   return count;
+   else
+   return 0;
+}
+
+static DEVICE_ATTR_RW(preferred_locality);
+
 static struct attribute *tpm1_dev_attrs[] = {
	&dev_attr_pubek.attr,
	&dev_attr_pcrs.attr,
@@ -336,6 +364,7 @@ static struct attribute *tpm1_dev_attrs[] = {
	&dev_attr_durations.attr,
	&dev_attr_timeouts.attr,
	&dev_attr_tpm_version_major.attr,
+	&dev_attr_preferred_locality.attr,
NULL,
 };
 
@@ -344,6 +373,7 @@ static struct attribute *tpm2_dev_attrs[] = {
 #ifdef CONFIG_TCG_TPM2_HMAC
	&dev_attr_null_name.attr,
 #endif
+   &dev_attr_preferred_locality.attr,
NULL
 };
 
-- 
2.39.3




[PATCH v9 12/19] reboot: Secure Launch SEXIT support on reboot paths

2024-05-30 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths
like when poweroff action does not have a poweroff callback (into
ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized but SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index f3130f762784..66060fdb0822 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -766,6 +767,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -776,6 +778,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -784,8 +789,12 @@ static void native_machine_power_off(void)
if (kernel_can_power_off()) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
do_kernel_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -813,6 +822,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
2.39.3




[PATCH v9 10/19] x86: Secure Launch SMP bringup support

2024-05-30 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically, they cannot have #INIT asserted on them, so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
When an AP is woken by a write to its monitor, it jumps to the Secure
Launch entry point in the RM piggy, which mimics what the real mode code
would do and then jumps to the standard RM piggy protected mode entry point.
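The BSP-side wake logic can be sketched as a small standalone model (not kernel code: the struct mirrors the patch's sl_ap_stack_and_monitor, TXT_MAX_CPUS is illustrative here, and the parked AP's MWAIT side is reduced to the flag that the model writes):

```c
#include <assert.h>
#include <string.h>

#define TXT_MAX_CPUS 16 /* illustrative; the real bound comes from the TXT heap layout */

/* Per-AP record the early SL stub leaves behind: each parked AP executes
 * MONITOR on its own 'monitor' word and then MWAITs until it is written. */
struct ap_stack_and_monitor {
	unsigned int apicid;
	unsigned int monitor;
};

/* BSP side: locate the parked AP by APIC ID and write its monitored word,
 * which causes that AP's MWAIT to return and the AP to jump to the RM piggy. */
static int wake_ap(struct ap_stack_and_monitor *slots, unsigned int apicid)
{
	for (int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
		if (slots[i].apicid == apicid) {
			slots[i].monitor = 1;
			return 0;
		}
	}
	return -1; /* no parked AP with this APIC ID */
}
```

In the real patch the array lives in the AP wake block mapped via __va(), and the store to the monitor word is what satisfies the waiting MWAIT.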

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 58 +++-
 arch/x86/realmode/init.c |  3 ++
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 32 +++
 5 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 0c35207320cb..adb521221d6c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 
@@ -868,6 +869,56 @@ int common_cpu_up(unsigned int cpu, struct task_struct *idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+static bool slaunch_is_txt_launch(void)
+{
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return true;
+
+   return false;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   struct sl_ap_stack_and_monitor *stack_monitor = NULL;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor *)__va(ap_wake_info->ap_wake_block +
+  ap_wake_info->ap_stacks_offset);
+
+   for (int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   /* Write the monitor */
+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline bool slaunch_is_txt_launch(void)
+{
+   return false;
+}
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -877,7 +928,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct *idle)
 static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
 {
unsigned long start_ip = real_mode_header->trampoline_start;
-   int ret;
+   int ret = 0;
 
 #ifdef CONFIG_X86_64
/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -922,12 +973,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
 
/*
 * Wake up a CPU in difference cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,
 * - Use an INIT boot APIC message
 */
-   if (apic->wakeup_secondary_cpu_64)
+   if (slaunch_is_txt_launch())
+   slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   else if (apic->wakeup_secondary_cpu_64)
ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
else if (apic->wakeup_secondary_cpu)
ret = apic->wakeup_secondary_cpu(apicid, start_ip);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f9bc444a3064..d95776cb30d3 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 

[PATCH v9 07/19] x86: Add early SHA-256 support for Secure Launch early measurements

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA-256 algorithm is necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA-256 libraries directly in
the code since the compressed kernel is not uncompressed at this point.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   | 2 +-
 arch/x86/boot/compressed/early_sha256.c | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 3307ebef4e1b..9189a0e28686 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,7 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/early_sha256.c b/arch/x86/boot/compressed/early_sha256.c
new file mode 100644
index ..293742a90ddc
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha256.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ */
+
+#include "../../../../lib/crypto/sha256.c"
-- 
2.39.3




[PATCH v9 02/19] Documentation/x86: Secure Launch kernel documentation

2024-05-30 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reviewed-by: Bagas Sanjaya 
---
 Documentation/security/index.rst  |   1 +
 .../security/launch-integrity/index.rst   |  11 +
 .../security/launch-integrity/principles.rst  | 320 ++
 .../secure_launch_details.rst | 587 ++
 .../secure_launch_overview.rst| 227 +++
 5 files changed, 1146 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 59f8fc106cb0..56e31fb3d91f 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -19,3 +19,4 @@ Security Documentation
digsig
landlock
secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..838328186dd2
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,11 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+   :maxdepth: 1
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..68a415aec545
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,320 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright © 2019-2023 Daniel P. Smith 
+
+===
+System Launch Integrity
+===
+
+:Author: Daniel P. Smith
+:Date: October 2023
+
+This document serves to establish a common understanding of what a system
+launch is, the integrity concern for system launch, and why using a Root of
+Trust (RoT) from a Dynamic Launch may be desired. Throughout this document,
+terminology from the Trusted Computing Group (TCG) and the National Institute
+of Standards and Technology (NIST) is used to ensure vendor-neutral language
+is used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to consider the classical power-on boot as the only
+means to launch an Operating System (OS) on a computer system, but in fact most
+modern processors support two methods to launch the system. To provide clarity,
+a common definition of a system launch should be established. This definition
+is that during a single power life cycle of a system, a System Launch
+consists of an initialization event, typically in hardware, that is followed by
+an executing software payload that takes the system from the initialized state
+to a running state. Driven by the Trusted Computing Group (TCG) architecture,
+modern processors are able to support two methods to launch a system; these two
+types of system launch are known as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus, static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPU architectures provide a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the initialization event for a dynamic launch and is referred to as
+the Dynamic Launch Event (DLE). The DLE functions by accepting a software
+payload, referred to as the Dynamic Configuration Environment (DCE), to which
+execution is handed after the DLE is invoked. The DCE is responsible for
+bringing the system back to a running state. Since the dynamic launch is not
+tied to a power event like the static launch, a dynamic launch can be initiated
+at any time and multiple times during a single power life cycle. This dynamism
+is the reasoning behind referring to this system launch as being dynamic.
+
+Because a dynamic launch can be conducted at any time during a single power
+life cycle, dynamic launches are classified into one of two types: an early
+launch or a late launch.
+
+:Early Launch: W

[PATCH v9 05/19] x86: Secure Launch main header file

2024-05-30 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.
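As a rough illustration of how the register offsets defined below are meant to be used (a standalone model, not kernel code: the public MMIO bank at TXT_PUB_CONFIG_REGS_BASE is simulated with a plain buffer, and readq() on the ioremapped bank is replaced with memcpy):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Byte offsets into the TXT public config register bank, from this header. */
#define TXT_CR_ESTS		0x0008
#define TXT_CR_ERRORCODE	0x0030

/* Model of a 64-bit TXT register read at a byte offset into the bank. */
static uint64_t txt_read64(const uint8_t *bank, uint32_t reg)
{
	uint64_t v;

	memcpy(&v, bank + reg, sizeof(v)); /* kernel code would use readq() */
	return v;
}
```

Post-launch code reads registers such as TXT.ERRORCODE this way to report why a measured launch failed.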

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 542 
 1 file changed, 542 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..90a7f22ddbdd
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,542 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms are supported
+ * by the capabilities of an Intel Trusted Execution Technology (TXT) platform.
+ * SMX is the processor's programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STSBIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAXTXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE   0xc

[PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-05-30 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main, which runs after entering 64b mode, is
responsible for measuring configuration and module information before
it is used, such as the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.
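The way a preamble loader would locate the MLE header through this chain can be sketched as follows (a standalone model over a flat image buffer; the 0x500 kernel_info offset and the 0x10 mle_header_offset field position come from this series, while rd32() and the sample values are purely illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define KERNEL_INFO_OFFSET	0x500	/* 64-bit fixed offset from patch 1 of this series */
#define MLE_HEADER_FIELD	0x10	/* mle_header_offset field inside kernel_info */

/* Read a little-endian u32 out of the (unrelocated) kernel image. */
static uint32_t rd32(const uint8_t *image, uint32_t off)
{
	uint32_t v;

	memcpy(&v, image + off, sizeof(v));
	return v;
}

/* Return the image offset of the MLE header, or 0 if the kernel
 * has no Secure Launch support. */
static uint32_t find_mle_header(const uint8_t *image)
{
	return rd32(image, KERNEL_INFO_OFFSET + MLE_HEADER_FIELD);
}
```

A loader preparing a TXT launch would use the resulting offset to fill in the MLE fields of the TXT heap before invoking GETSEC[SENTER].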

Signed-off-by: Ross Philipson 
---
 Documentation/arch/x86/boot.rst|  21 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  30 +
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 577 
 arch/x86/boot/compressed/sl_stub.S | 725 +
 arch/x86/include/asm/msr-index.h   |   5 +
 arch/x86/include/uapi/asm/bootparam.h  |   1 +
 arch/x86/kernel/asm-offsets.c  |  20 +
 9 files changed, 1415 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4
 
  This field contains maximal allowed type for setup_data and setup_indirect structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch Environment
+  (MLE) header. This offset is used to locate information needed during a secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide; the latest can be found here:
+  https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is covered by a PMR */
+   movq    %rbx, %rdi
+   movl    $(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -462,6 +469,29 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low risk

[PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lies with the platform firmware, not with
software, and is often outside of the user's control.

Even if we'd prefer to use SHA-256 only, if the firmware elected to start us
with the SHA-1 and SHA-256 banks active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced into lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/boot/compressed/early_sha1.c | 12 
 include/crypto/sha1.h |  1 +
 lib/crypto/sha1.c | 81 +++
 4 files changed, 96 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index e9522c6893be..3307ebef4e1b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,6 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index ..8a9b904a73ab
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../lib/crypto/sha1.c"
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const u8 *data,
 #define SHA1_WORKSPACE_WORDS   16
 void sha1_init(__u32 *buf);
 void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
 
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 1aebe7be9401..10152125b338 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,4 +137,85 @@ void sha1_init(__u32 *buf)
 }
 EXPORT_SYMBOL(sha1_init);
 
+static void __sha1_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));
+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits = cpu_to_be64(sctx->count << 3);
+   __sha1_transform(sctx->state, sctx->buffer);
+
+   for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
+   put_unaligned_be32(sctx->sta

[PATCH v9 01/19] x86/boot: Place kernel_info at a fixed offset

2024-05-30 Thread Ross Philipson
From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.

Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is
placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.
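The offset arithmetic the patch relies on can be checked with a tiny standalone model (the addresses below are made up; the point is that the computation is invariant under a load-address shift, which is why it needs no run-time relocation):

```c
#include <assert.h>
#include <stdint.h>

#define KERNEL_INFO_OFFSET 0x500 /* fixed image offset of kernel_info (64-bit value) */

/* rva(X) from the patch: offset of symbol X from the start of the image,
 * computed from link-time addresses only. */
static uint64_t rva(uint64_t sym, uint64_t kernel_info)
{
	return (sym - kernel_info) + KERNEL_INFO_OFFSET;
}
```

Shifting both link-time addresses by the same base leaves the result unchanged, so the value can be emitted as a constant by the assembler.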

Signed-off-by: Arvind Sankar 
Cc: Ross Philipson 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/kernel_info.S | 19 +++
 arch/x86/boot/compressed/kernel_info.h | 12 
 arch/x86/boot/compressed/vmlinux.lds.S |  6 ++
 3 files changed, 33 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/boot/compressed/kernel_info.h

diff --git a/arch/x86/boot/compressed/kernel_info.S b/arch/x86/boot/compressed/kernel_info.S
index f818ee8fba38..c18f07181dd5 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,23 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
+#include "kernel_info.h"
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * If a field needs to hold the offset of a symbol from the start
+ * of the image, use the macro below, eg
+ * .long   rva(symbol)
+ * This will avoid creating run-time relocations, which are not
+ * allowed in the compressed kernel.
+ */
+
+#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -19,4 +30,4 @@ kernel_info:
 
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
diff --git a/arch/x86/boot/compressed/kernel_info.h b/arch/x86/boot/compressed/kernel_info.h
new file mode 100644
index ..c127f84aec63
--- /dev/null
+++ b/arch/x86/boot/compressed/kernel_info.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
+#define BOOT_COMPRESSED_KERNEL_INFO_H
+
+#ifdef CONFIG_X86_64
+#define KERNEL_INFO_OFFSET 0x500
+#else /* 32-bit */
+#define KERNEL_INFO_OFFSET 0x100
+#endif
+
+#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
index 083ec6d7722a..718c52f3f1e6 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
 
 #include 
 #include 
+#include "kernel_info.h"
 
 #ifdef CONFIG_X86_64
 OUTPUT_ARCH(i386:x86-64)
@@ -27,6 +28,11 @@ SECTIONS
HEAD_TEXT
_ehead = . ;
}
+   .rodata.kernel_info KERNEL_INFO_OFFSET : {
+   *(.rodata.kernel_info)
+   }
+   ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
+
.rodata..compressed : {
*(.rodata..compressed)
}
-- 
2.39.3




[PATCH v9 00/19] x86: Trenchboot secure dynamic launch Linux kernel support

2024-05-30 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also known as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvement in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. The AMD DRTM implementation has
been present since the introduction of AMD-V but requires an additional
component that is AMD-specific and referred to in the specification as the
Secure Loader, for which the TrenchBoot project has an active prototype in
development. Finally, Arm's implementation is in the specification development
stage and the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capability, and relevant API/ABI documentation. In addition to the
documentation, the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

The Secure Launch feature starts with patch #2. Patch #1 was authored by Arvind
Sankar. There is no further status on this patch at this point but
Secure Launch depends on it so it is included with the set.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

The TrenchBoot project provides a quick start guide to help get a system
up and running with Secure Launch for Linux:

https://github.com/TrenchBoot/documentation/blob/master/QUICKSTART.md

Patch set based on commit:

torvalds/master/ea5f6ad9ad9645733b72ab53a98e719b460d36a6

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry point code.
 - Security audit changes:
   - Validate buffers passed to MLE do not overlap the MLE and are
 properly laid out.
   - Validate buffers and memory regions used by the MLE are
 protected by IOMMU PMRs.
 - Force IOMMU to not use passthrough mode during a Secure Launch.
 - Prevent KASLR use during a Secure Launch.

Changes in v3:

 - Introduce x86 documentation patch to provide background, overview
   and configuration/ABI information for the Secure Launch kernel
   feature.
 - Remove the IOMMU patch with special cases for disabling IOMMU
   passthrough. Configuring the IOMMU is now a documentation matter
   in the previously mentioned new patch.
 - Remove special case KASLR disabling code. Configuring KASLR is now
   a documentation matter in the previously mentioned new patch.
 - Fix incorrect panic on TXT public register read.
 - Properly handle and measure setup_indirect bootparams in the early
   launch code.
 - Use correct compressed kernel image base address when testing buffers
   in the early launch stub code. This bug was introduced by the changes
   to avoid 

[PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-05-30 Thread Ross Philipson
Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 
---
 include/linux/slr_table.h | 271 ++
 1 file changed, 271 insertions(+)
 create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
new file mode 100644
index ..213d8ac16f0f
--- /dev/null
+++ b/include/linux/slr_table.h
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Secure Launch Resource Table
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLR_TABLE_H
+#define _LINUX_SLR_TABLE_H
+
+/* Put this in efi.h if it becomes a standard */
+#define SLR_TABLE_GUID	EFI_GUID(0x877a9b2a, 0x0385, 0x45d1, 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f)
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC		0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION		1
+#define SLR_UEFI_CONFIG_REVISION   1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT  1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB		1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED   0x1
+#define SLR_POLICY_IMPLICIT_SIZE   0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH  32
+#define TXT_VARIABLE_MTRRS_LENGTH  32
+
+/* Tags */
+#define SLR_ENTRY_INVALID  0x
+#define SLR_ENTRY_DL_INFO  0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_ENTRY_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO   0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO0x0007
+#define SLR_ENTRY_UEFI_CONFIG  0x0008
+#define SLR_ENTRY_END  0x
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x
+#define SLR_ET_SLRT			0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA  0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_TXT_OS2MLE  0x0010
+#define SLR_ET_UNUSED  0x
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Primary Secure Launch Resource Table Header
+ */
+struct slr_table {
+   u32 magic;
+   u16 revision;
+   u16 architecture;
+   u32 size;
+   u32 max_size;
+   /* table entries */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr {
+   u16 tag;
+   u16 size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context {
+   u16 bootloader;
+   u16 reserved[3];
+   u64 context;
+} __packed;
+
+/*
+ * Dynamic Launch Callback Function type
+ */
+typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info {
+   struct slr_entry_hdr hdr;
+   u32 dce_size;
+   u64 dce_base;
+   u64 dlme_size;
+   u64 dlme_base;
+   u64 dlme_entry;
+   struct slr_bl_context bl_context;
+   u64 dl_handler;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info {
+   struct slr_entry_hdr hdr;
+   u16 format;
+   u16 reserved[3];
+   u32 size;
+   u64 addr;
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry {
+   u16 pcr;
+   u16 entity_type;
+   u16 flags;
+   u16 reserved;
+   u64 size;
+   u64 entity;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   struct slr_policy_entry policy_entries[];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair {
+   u64 mtrr_physbase;
+   u64 mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state {
+   u64 default_mem_type;
+   u64 mtrr_vcnt;
+   struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info {
+   struct slr_entry_hdr hdr;
+   u16 reserved[2];
+   u64 txt_heap;
+   u64 saved_misc_enable_msr;
+   struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * UEFI config measurement entry
+ */
+struct slr_uefi_cfg_entry {
+   u16 pcr;
+   u16 reserved;
+   u32 size;
+   u64 cfg; /* address or value */
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * UEFI config measurements
+ */
+struct slr_entry_uefi_config {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   struct slr_uefi_cfg_entry

[PATCH v9 03/19] x86: Secure Launch Kconfig

2024-05-30 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bc47bc9841ff..ee8e0cbc9a3e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2067,6 +2067,17 @@ config EFI_RUNTIME_MAP
 
  See also Documentation/ABI/testing/sysfs-firmware-efi-runtime-map.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+	depends on X86_64 && X86_X2APIC && TCG_TPM && CRYPTO_LIB_SHA1 && CRYPTO_LIB_SHA256
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
 source "kernel/Kconfig.hz"
 
 config ARCH_SUPPORTS_KEXEC
-- 
2.39.3




[PATCH v9 19/19] x86: EFI stub DRTM launch support for Secure Launch

2024-05-30 Thread Ross Philipson
This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 
---
 drivers/firmware/efi/libstub/x86-stub.c | 98 +
 1 file changed, 98 insertions(+)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index d5a8182cf2e1..a1143d006202 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -830,6 +832,97 @@ static efi_status_t efi_decompress_kernel(unsigned long *kernel_entry)
return efi_adjust_memory_range_protection(addr, kernel_text_size);
 }
 
+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+						 struct boot_params *boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u32)(u64)boot_params;
+
+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+		if (policy->policy_entries[i].entity_type == SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+		struct setup_header *hdr = (struct setup_header *)((u8 *)image->image_base + 0x1f1);
+   u64 cmdline_ptr, hi_val;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   hi_val = boot_params->ext_cmd_line_ptr;
+   cmdline_ptr = boot_params->hdr.cmd_line_ptr | hi_val << 32;
+		boot_params->hdr.cmdline_size = strlen((const char *)cmdline_ptr);
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+
+   /*
+	 * The presence of this table indicates a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_config_table(guid);
+   if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
+   return;
+
+   /*
+* Since the EFI stub library creates its own boot_params on entry, the
+* SLRT and TXT heap have to be updated with this version.
+*/
+   if (!efi_secure_launch_update_boot_params(slrt, boot_params))
+   return;
+
+   /* Jump through DL stub to initiate Secure Launch */
+   dlinfo = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+   handler_callback = (dl_handler_func)dlinfo->dl_handler;
+
+	handler_callback(&dlinfo->bl_context);
+
+   unreachable();
+}
+#endif
+
 static void __noreturn enter_kernel(unsigned long kernel_addr,
struct boot_params *boot_params)
 {
@@ -957,6 +1050,11 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
goto fail;
}
 
+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
+   /* If a Secure Launch is in progress, this never returns */
+   efi_secure_launch(boot_params);
+#endif
+
/*
 * Call the SEV init code while still running with the firmware's
 * GDT/IDT, so #VC exceptions will be handled by EFI.
-- 
2.39.3




Re: [PATCH v7 02/13] Documentation/x86: Secure Launch kernel documentation

2023-11-16 Thread ross . philipson

On 11/12/23 10:07 AM, Alyssa Ross wrote:

+Load-time Integrity
+---
+
+It is critical to understand what load-time integrity establishes about a
+system and what is assumed, i.e. what is being trusted. Load-time integrity is
+when a trusted entity, i.e. an entity with an assumed integrity, takes an
+action to assess an entity being loaded into memory before it is used. A
+variety of mechanisms may be used to conduct the assessment, each with
+different properties. A particular property is whether the mechanism creates
+evidence of the assessment. Cryptographic signature checking and hashing are
+the two most common assessment operations.
+
+A signature checking assessment functions by requiring a representation of the
+accepted authorities and using those representations to assess whether the
+entity has been signed by an accepted authority. The benefit of this process is
+that the assessment includes an adjudication of the result. The drawbacks
+are that 1) the adjudication is susceptible to tampering by the Trusted
+Computing Base (TCB), 2) there is no evidence to assert that an untampered
+adjudication was completed, and 3) the system must be an active participant in
+the key management infrastructure.
+
+A cryptographic hashing assessment does not adjudicate the assessment but
+instead, generates evidence of the assessment to be adjudicated independently.
+The benefit of this approach is that the assessment may be simple enough to be
+implemented in an immutable mechanism, e.g. in hardware. Additionally,
+it is possible for the adjudication to be conducted where it cannot be tampered
+with by the TCB. The drawback is that a compromised environment will be allowed
+to execute until an adjudication can be completed.
+
+Ultimately, load-time integrity provides confidence that the correct entity was
+loaded and in the absence of a run-time integrity mechanism assumes, i.e.
+trusts, that the entity will never become corrupted.


I'm somewhat familiar with this area, but not massively (so probably the
sort of person this documentation is aimed at!), and this was the only
section of the documentation I had trouble understanding.

The thing that confused me was that the first time I read this, I was
thinking that a hashing assessment would be comparing the generated hash
to a baked-in known good hash, similar to how e.g. a verity root hash
might be specified on the kernel command line, baked in to the OS image.
This made me wonder why it wasn't considered to be adjudicated during
assessment.  Upon reading it a second time, I now understand that what
it's actually talking about is generating a hash, but not comparing it
automatically against anything, and making it available for external
adjudication somehow.


Yes, there is nothing baked into an image in the way we currently use it. 
I take what you call a hashing assessment to be what we would call 
remote attestation where an independent agent assesses the state of the 
measured launch. This is indeed one of the primary use cases. There is 
another use case closer to the baked in one where secrets on the system 
are sealed to the TPM using a known good PCR configuration. Only by 
launching and attaining that known good state can the secrets be unsealed.




I don't know if the approach I first thought of is used in early boot
at all, but it might be worth contrasting the cryptographic hashing
assessment described here with it, because I imagine that I'm not going
to be the only reader who's more used to thinking about integrity
slightly later in the boot process where adjudicating based on a static
hash is common, and whose mind is going to go to that when they read
about a "cryptographic hashing assessment".

The rest of the documentation was easy to understand and very helpful to
understanding system launch integrity.  Thanks!


I am glad it was helpful. We will revisit the section that caused 
confusion and see if we can make it clearer.


Thank you,
Ross



Re: [PATCH v7 10/13] kexec: Secure Launch kexec SEXIT support

2023-11-15 Thread ross . philipson

On 11/10/23 3:41 PM, Sean Christopherson wrote:

On Fri, Nov 10, 2023, Ross Philipson wrote:

Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
  arch/x86/kernel/slaunch.c | 73 +++
  kernel/kexec_core.c   |  4 +++
  2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index cd5aa34e395c..32b0c24a6484 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)
  
  	pr_info("Intel TXT setup complete\n");

  }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));


SMX has been around for what, two decades?  Is open coding getsec actually 
necessary?


There were some older gcc compilers that still did not like the getsec 
mnemonic. Perhaps they are old enough now where they don't matter any 
longer. I will check on that...





+   /* Disable SMX mode */


Heh, the code and the comment don't really agree.  I'm guessing the intent of 
the
comment is referring to leaving the measured environment, but it looks odd.   If
manually setting SMXE is necessary, I'd just delete this comment, or maybe move
it to above SEXIT.


I will look it over and see what makes sense.




+   cr4_set_bits(X86_CR4_SMXE);


Is it actually legal to clear CR4.SMXE while post-SENTER?  I don't see anything
in the SDM that says it's illegal, but allowing software to clear SMXE in that
case seems all kinds of odd.


I am pretty sure I coded this up using the pseudo code in the TXT dev 
guide and some guidance from Intel/former Intel folks. I will revisit it 
to make sure it is correct.


Thanks
Ross




+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}





[PATCH v7 02/13] Documentation/x86: Secure Launch kernel documentation

2023-11-10 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reviewed-by: Bagas Sanjaya 
---
 Documentation/security/index.rst  |   1 +
 .../security/launch-integrity/index.rst   |  11 +
 .../security/launch-integrity/principles.rst  | 320 ++
 .../secure_launch_details.rst | 584 ++
 .../secure_launch_overview.rst| 226 +++
 5 files changed, 1142 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 59f8fc106cb0..56e31fb3d91f 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -19,3 +19,4 @@ Security Documentation
digsig
landlock
secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..838328186dd2
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,11 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+   :maxdepth: 1
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..68a415aec545
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,320 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright © 2019-2023 Daniel P. Smith 
+
+===
+System Launch Integrity
+===
+
+:Author: Daniel P. Smith
+:Date: October 2023
+
+This document serves to establish a common understanding of what is system
+launch, the integrity concern for system launch, and why using a Root of Trust
+(RoT) from a Dynamic Launch may be desired. Throughout this document
+terminology from the Trusted Computing Group (TCG) and National Institute for
+Science and Technology (NIST) is used to ensure a vendor natural language is
+used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to consider the classical power-on boot as the only means
+to launch an Operating System (OS) on a computer system, but in fact most
+modern processors support two methods to launch the system. To provide clarity,
+a common definition of a system launch should be established. The definition is
+that during a single power life cycle of a system, a System Launch consists of
+an initialization event, typically in hardware, that is followed by an
+executing software payload that takes the system from the initialized state to
+a running state. Driven by the Trusted Computing Group (TCG) architecture,
+modern processors support two methods to launch a system; these two types of
+system launch are known as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus, static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPU architectures provide a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the event for a dynamic launch and is referred to as the Dynamic
+Launch Event (DLE). The DLE functions by accepting a software payload, referred
+to as the Dynamic Configuration Environment (DCE), to which execution is handed
+after the DLE is invoked. The DCE is responsible for bringing the system back
+to a running state. Since the dynamic launch is not tied to a power event like
+the static launch, this enables a dynamic launch to be initiated at any time
+and multiple times during a single power life cycle. This dynamism is the
+reasoning behind referring to this system launch as being dynamic.
+
+Because a dynamic launch can be conducted at any time during a single power
+life cycle, it is classified into one of two types: an early launch or a
+late launch.
+
+:Early Launch: W

[PATCH v7 07/13] x86: Secure Launch kernel early boot stub

2023-11-10 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
+it is used, such as the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/arch/x86/boot.rst|  21 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  34 ++
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 582 
 arch/x86/boot/compressed/sl_stub.S | 705 +
 arch/x86/include/asm/msr-index.h   |   5 +
 arch/x86/include/uapi/asm/bootparam.h  |   1 +
 arch/x86/kernel/asm-offsets.c  |  20 +
 9 files changed, 1404 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index f5d2f2414de8..03a2c5302a89 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the compressed kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1027,6 +1035,19 @@ Offset/size: 0x000c/4
 
  This field contains maximal allowed type for setup_data and setup_indirect structs.
 
+   =============  =================
+   Field name:    mle_header_offset
+   Offset/size:   0x0010/4
+   =============  =================
+
+  This field contains the offset to the Secure Launch Measured Launch Environment
+  (MLE) header. This offset is used to locate information needed during a secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 07a2f56cd571..3186d303ec8b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index bf4a10a5794f..6fa5bb87195b 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -415,6 +415,17 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   pushq   %rsi
+
+	/* Ensure the relocation region is covered by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+
+   popq%rsi
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -457,6 +468,29 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment

[PATCH v7 08/13] x86: Secure Launch kernel late boot stub

2023-11-10 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 525 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 533 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 325ab98f..5848ea310175 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -74,6 +74,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1526747bedf2..0b885742c297 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -937,6 +938,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..cd5aa34e395c
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+	return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+	memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+					     u32 bytes)
+{
+   u64 base, size, offset = 0;
+   void *heap;
+   int i;
+
+   if (type > TXT_SINIT_TABLE_MAX)
+   slaun

[PATCH v7 12/13] x86: Secure Launch late initcall platform module

2023-11-10 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 517 +
 2 files changed, 518 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5848ea310175..948346ff4595 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -75,6 +75,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..992469bf15a4
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,517 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2022 Apertus Solutions, LLC
+ * Copyright (c) 2021 Assured Information Security, Inc.
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ *
+ * Co-developed-by: Garnet T. Grimm 
+ * Signed-off-by: Garnet T. Grimm 
+ * Signed-off-by: Daniel P. Smith 
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   void __iomem *txt;  \
+   \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(&reg_value, txt + offset, sizeof(u##size)); \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"eventlog", NULL, 0};
+static void *tx

[PATCH v7 05/13] x86: Secure Launch main header file

2023-11-10 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 542 
 1 file changed, 542 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..da2988e32ada
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,542 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms are supported
+ * by the capabilities of an Intel Trusted Execution Technology (TXT) platform.
+ * SMX is the processor's programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x0000
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS    BIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAX    TXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE   0xc

[PATCH v7 10/13] kexec: Secure Launch kexec SEXIT support

2023-11-10 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.
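
The finalize path only acts when the launch is both active and TXT-based. A
small user-space sketch of that guard (the flag values come from slaunch.h in
this series; the helper function itself is illustrative, not part of the
patch):

```c
#include <assert.h>
#include <stdint.h>

/* Flag values as defined in include/linux/slaunch.h in this series. */
#define SL_FLAG_ACTIVE    0x00000001
#define SL_FLAG_ARCH_TXT  0x00000004

/* True only when BOTH bits are set, matching the check in
 * slaunch_finalize(): a plain '& (A | B)' would also accept either
 * bit alone, so the result is compared against the full mask. */
static int slaunch_txt_active(uint32_t flags)
{
    return (flags & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) ==
           (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT);
}
```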

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 73 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index cd5aa34e395c..32b0c24a6484 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private regs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* these mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public regs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* Enable SMX mode (CR4.SMXE) so GETSEC[SEXIT] can be executed */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index be5642a4ec49..98b2db21a952 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1264,6 +1265,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
2.39.3




[PATCH v7 06/13] x86: Add early SHA support for Secure Launch early measurements

2023-11-10 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA algorithms are necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA libraries directly in
the code since the compressed kernel is not uncompressed at this point.

The SHA code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the sha256 code and allow it to be pulled into the
setup kernel in the same manner as sha256 is.
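
The lib/crypto sha1_update() added below follows the usual block-buffering
pattern: bytes accumulate in a 64-byte buffer and the compression function
runs once per completed block. A user-space sketch of just that buffering
logic, with the transform replaced by a counting stub (the stub and struct
names are assumptions for illustration; this is not SHA-1 itself):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define BLK 64  /* SHA1_BLOCK_SIZE */

struct hstate {
    unsigned long long count;   /* total bytes consumed */
    unsigned char buffer[BLK];  /* pending partial block */
    int blocks_run;             /* how many times the transform ran */
};

static void transform_stub(struct hstate *s, const unsigned char *blk)
{
    (void)blk;          /* a real transform would compress this block */
    s->blocks_run++;
}

static void hash_update(struct hstate *s, const unsigned char *data, size_t len)
{
    size_t partial = (size_t)(s->count % BLK);

    s->count += len;
    if (partial + len >= BLK) {
        if (partial) {
            /* top up the buffered partial block first */
            size_t p = BLK - partial;

            memcpy(s->buffer + partial, data, p);
            data += p;
            len -= p;
            transform_stub(s, s->buffer);
        }
        /* then consume whole blocks straight from the input */
        while (len >= BLK) {
            transform_stub(s, data);
            data += BLK;
            len -= BLK;
        }
        partial = 0;
    }
    if (len)
        memcpy(s->buffer + partial, data, len);
}
```

The final step (not shown) pads the last partial block with 0x80, zeros, and
the bit count, exactly as sha1_final() in the diff does.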

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   |  2 +
 arch/x86/boot/compressed/early_sha1.c   | 12 
 arch/x86/boot/compressed/early_sha256.c |  6 ++
 include/crypto/sha1.h   |  1 +
 lib/crypto/sha1.c   | 81 +
 5 files changed, 102 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 71fc531b95b4..07a2f56cd571 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,6 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c 
b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index ..0c7cf6f8157a
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../lib/crypto/sha1.c"
diff --git a/arch/x86/boot/compressed/early_sha256.c 
b/arch/x86/boot/compressed/early_sha256.c
new file mode 100644
index ..54930166ffee
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha256.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Apertus Solutions, LLC
+ */
+
+#include "../../../../lib/crypto/sha256.c"
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const u8 *data,
 #define SHA1_WORKSPACE_WORDS   16
 void sha1_init(__u32 *buf);
 void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
 
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 1aebe7be9401..10152125b338 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,4 +137,85 @@ void sha1_init(__u32 *buf)
 }
 EXPORT_SYMBOL(sha1_init);
 
+static void __sha1_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));
+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+

[PATCH v7 00/13] x86: Trenchboot secure dynamic launch Linux kernel support

2023-11-10 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance the boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also known as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is a cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvement in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD DRTM implementation has
been present since the introduction of AMD-V but requires an additional
component that is AMD specific and referred to in the specification as the
Secure Loader, which the TrenchBoot project has an active prototype in
development. Finally Arm's implementation is in specification development stage
and the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capability, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

The Secure Launch feature starts with patch #2. Patch #1 was authored by Arvind
Sankar. There is no further status on this patch at this point but
Secure Launch depends on it so it is included with the set.

## NOTE: EFI-STUB CONFLICTS

The primary focus of the v7 patch set was to align with Thomas Gleixner's
changes to support parallel CPU bring-up on x86 platforms. In the process of
rebasing and testing v7, it was discovered that there were significant changes
to the efi-stub code. As a result, the efi-stub patch was dropped pending
maintainer feedback on an appropriate means to re-integrate Secure Launch. The
primary goal being to best align the DL stub functionality with efi-stub design.

It was discovered that the efi-stub now subsumes all the setup for which
head_64.S was responsible. When attempting to rebase the DL stub patch on these
changes,
it became apparent that it would not be a simple relocation of the Secure Launch
call. There are numerous things, such as efi-stub decompressing the main line
kernel, which make simple relocation challenging. There may also be additional
changes that should be considered when integrating Secure Launch support. It
would be beneficial, and much appreciated, to obtain guidance from maintainers.
Upon successful collaboration with the efi-stub maintainers, a Secure Launch v8
series will be produced to re-introduce the DL stub patch.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

GRUB2 pre-launch support branch (WIP):

https://github.com/TrenchBoot/grub/tree/grub-sl-fc-38-dlstub

Patch set based on commit:

torvalds/master/6bc986ab839c844e78a2333a02e55f02c9e57935

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry point code.
 - Secu

[PATCH v7 04/13] x86: Secure Launch Resource Table header file

2023-11-10 Thread Ross Philipson
Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.
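
The table is a fixed header followed by a sequence of tag/size entries
terminated by SLR_ENTRY_END, as defined in the header below. A user-space
sketch of how a consumer might walk the entry list (struct layout and tag
values are taken from the patch; the walker function itself is illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Tag values from include/linux/slr_table.h in this patch. */
#define SLR_ENTRY_LOG_INFO 0x0002
#define SLR_ENTRY_END      0xffff

#pragma pack(push, 1)
struct slr_entry_hdr {
    uint16_t tag;
    uint16_t size;   /* total size of this entry, header included */
};
#pragma pack(pop)

/* Return a pointer to the first entry matching 'tag', or NULL.
 * 'buf' starts at the first entry, i.e. just past struct slr_table. */
static const struct slr_entry_hdr *
slr_find_entry(const uint8_t *buf, size_t len, uint16_t tag)
{
    size_t off = 0;

    while (off + sizeof(struct slr_entry_hdr) <= len) {
        struct slr_entry_hdr h;

        memcpy(&h, buf + off, sizeof(h));   /* unaligned-safe read */
        if (h.tag == SLR_ENTRY_END)
            break;
        if (h.tag == tag)
            return (const struct slr_entry_hdr *)(buf + off);
        if (h.size < sizeof(h))
            break;              /* malformed entry: avoid looping forever */
        off += h.size;
    }
    return NULL;
}
```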

Signed-off-by: Ross Philipson 
---
 include/linux/slr_table.h | 270 ++
 1 file changed, 270 insertions(+)
 create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
new file mode 100644
index ..42020988233a
--- /dev/null
+++ b/include/linux/slr_table.h
@@ -0,0 +1,270 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Secure Launch Resource Table
+ *
+ * Copyright (c) 2023, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLR_TABLE_H
+#define _LINUX_SLR_TABLE_H
+
+/* Put this in efi.h if it becomes a standard */
+#define SLR_TABLE_GUID EFI_GUID(0x877a9b2a, 0x0385, 0x45d1, 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f)
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION1
+#define SLR_UEFI_CONFIG_REVISION   1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT  1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED   0x1
+#define SLR_POLICY_IMPLICIT_SIZE   0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH  32
+#define TXT_VARIABLE_MTRRS_LENGTH  32
+
+/* Tags */
+#define SLR_ENTRY_INVALID  0x0000
+#define SLR_ENTRY_DL_INFO  0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_ENTRY_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO   0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO0x0007
+#define SLR_ENTRY_UEFI_CONFIG  0x0008
+#define SLR_ENTRY_END  0xffff
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x
+#define SLR_ET_SLRT0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA  0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_TXT_OS2MLE  0x0010
+#define SLR_ET_UNUSED  0xffff
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Primary SLR Table Header
+ */
+struct slr_table {
+   u32 magic;
+   u16 revision;
+   u16 architecture;
+   u32 size;
+   u32 max_size;
+   /* entries[] */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr {
+   u16 tag;
+   u16 size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context {
+   u16 bootloader;
+   u16 reserved;
+   u64 context;
+} __packed;
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info {
+   struct slr_entry_hdr hdr;
+   struct slr_bl_context bl_context;
+   u64 dl_handler;
+   u64 dce_base;
+   u32 dce_size;
+   u64 dlme_entry;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info {
+   struct slr_entry_hdr hdr;
+   u16 format;
+   u16 reserved;
+   u64 addr;
+   u32 size;
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   /* policy_entries[] */
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry {
+   u16 pcr;
+   u16 entity_type;
+   u16 flags;
+   u16 reserved;
+   u64 entity;
+   u64 size;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair {
+   u64 mtrr_physbase;
+   u64 mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state {
+   u64 default_mem_type;
+   u64 mtrr_vcnt;
+   struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info {
+   struct slr_entry_hdr hdr;
+   u64 saved_misc_enable_msr;
+   struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * AMD SKINIT Info table
+ */
+struct slr_entry_amd_info {
+   struct slr_entry_hdr hdr;
+} __packed;
+
+/*
+ * ARM DRTM Info table
+ */
+struct slr_entry_arm_info {
+   struct slr_entry_hdr hdr;
+} __packed;
+
+struct slr_entry_uefi_config {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   /* uefi_cfg_entries[] */
+} __packed;
+
+struct slr_uefi_cfg_entry {
+   u16 pcr;
+   u16 reserved;
+   u64 cfg; /* address or value */
+   u32 size;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+static inline void *slr_end_of_entrys(struct slr_table *table)
+{
+   return (((void *)table) + table

[PATCH v7 13/13] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch

2023-11-10 Thread Ross Philipson
The Secure Launch MLE environment uses PCRs that are only accessible from
the DRTM locality 2. By default the TPM drivers always initialize the
locality to 0. When a Secure Launch is in progress, initialize the
locality to 2.
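
The change amounts to a small locality selection in tpm_request_locality().
A user-space sketch of that decision (the flag value comes from slaunch.h;
the helper function is illustrative, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

#define SL_FLAG_ACTIVE 0x00000001  /* from include/linux/slaunch.h */

/* DRTM PCRs are only reachable from locality 2; default to locality 0
 * when no Secure Launch is in progress. */
static int tpm_pick_locality(uint32_t slaunch_flags)
{
    return (slaunch_flags & SL_FLAG_ACTIVE) ? 2 : 0;
}
```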

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 42b1062e33cd..0217ceb96c42 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "tpm.h"
 
 DEFINE_IDR(dev_nums_idr);
@@ -39,12 +40,18 @@ dev_t tpm_devt;
 
 static int tpm_request_locality(struct tpm_chip *chip)
 {
+   int locality;
int rc;
 
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   if (slaunch_get_flags() & SL_FLAG_ACTIVE)
+   locality = 2;
+   else
+   locality = 0;
+
+   rc = chip->ops->request_locality(chip, locality);
if (rc < 0)
return rc;
 
-- 
2.39.3




[PATCH v7 01/13] x86/boot: Place kernel_info at a fixed offset

2023-11-10 Thread Ross Philipson
From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.

Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is
placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.
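
The offset computation the rva() macro in this patch performs can be sketched
numerically: with kernel_info pinned at KERNEL_INFO_OFFSET by the linker
script, the image offset of any symbol in that section is
(symbol - kernel_info) + KERNEL_INFO_OFFSET. The addresses below are simulated
integers, not real link-time symbols.

```c
#include <assert.h>
#include <stdint.h>

#define KERNEL_INFO_OFFSET 0x500  /* x86_64 value from kernel_info.h */

/* Position-independent image offset of 'sym', given the (arbitrary)
 * link-time address of kernel_info: the difference is a constant the
 * linker can fold, so no run-time relocation is needed. */
static uint64_t rva(uint64_t sym, uint64_t kernel_info)
{
    return (sym - kernel_info) + KERNEL_INFO_OFFSET;
}
```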

Signed-off-by: Arvind Sankar 
Cc: Ross Philipson 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/kernel_info.S | 19 +++
 arch/x86/boot/compressed/kernel_info.h | 12 
 arch/x86/boot/compressed/vmlinux.lds.S |  6 ++
 3 files changed, 33 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/boot/compressed/kernel_info.h

diff --git a/arch/x86/boot/compressed/kernel_info.S 
b/arch/x86/boot/compressed/kernel_info.S
index f818ee8fba38..c18f07181dd5 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,23 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
+#include "kernel_info.h"
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * If a field needs to hold the offset of a symbol from the start
+ * of the image, use the macro below, eg
+ * .long   rva(symbol)
+ * This will avoid creating run-time relocations, which are not
+ * allowed in the compressed kernel.
+ */
+
+#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -19,4 +30,4 @@ kernel_info:
 
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
diff --git a/arch/x86/boot/compressed/kernel_info.h 
b/arch/x86/boot/compressed/kernel_info.h
new file mode 100644
index ..c127f84aec63
--- /dev/null
+++ b/arch/x86/boot/compressed/kernel_info.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
+#define BOOT_COMPRESSED_KERNEL_INFO_H
+
+#ifdef CONFIG_X86_64
+#define KERNEL_INFO_OFFSET 0x500
+#else /* 32-bit */
+#define KERNEL_INFO_OFFSET 0x100
+#endif
+
+#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S 
b/arch/x86/boot/compressed/vmlinux.lds.S
index 083ec6d7722a..718c52f3f1e6 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
 
 #include 
 #include 
+#include "kernel_info.h"
 
 #ifdef CONFIG_X86_64
 OUTPUT_ARCH(i386:x86-64)
@@ -27,6 +28,11 @@ SECTIONS
HEAD_TEXT
_ehead = . ;
}
+   .rodata.kernel_info KERNEL_INFO_OFFSET : {
+   *(.rodata.kernel_info)
+   }
+   ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
+
.rodata..compressed : {
*(.rodata..compressed)
}
-- 
2.39.3




[PATCH v7 09/13] x86: Secure Launch SMP bringup support

2023-11-10 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
As the APs are woken up by writing the monitor, the APs jump to the Secure
Launch entry point in the RM piggy which mimics what the real mode code would
do, then jumps to the standard RM piggy protected mode entry point.
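
The wake mechanism described above can be modeled in user space: each parked
AP spins (via MONITOR/MWAIT in the real code) on a per-CPU monitor word, and
the BSP wakes one AP by writing the slot whose APIC ID matches. This mirrors
slaunch_wakeup_cpu_from_txt() from the diff below; the structures here are
simplified stand-ins, not the kernel types.

```c
#include <assert.h>
#include <stdint.h>

struct ap_slot {
    uint32_t apicid;
    uint32_t monitor;   /* 0 = parked, 1 = wake */
};

/* Write the monitor for the AP with the given apicid. Returns 1 if a
 * matching slot was found, 0 otherwise. The real code scans from the
 * top slot downward, which is preserved here. */
static int wake_ap(struct ap_slot *slots, int n, uint32_t apicid)
{
    for (int i = n - 1; i >= 0; i--) {
        if (slots[i].apicid == apicid) {
            slots[i].monitor = 1;   /* the mwait-ing AP observes this */
            return 1;
        }
    }
    return 0;
}
```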

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 56 +++-
 arch/x86/realmode/init.c |  3 ++
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 32 
 5 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 2cc2aa120b4b..6f2a5ee458ce 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -986,6 +987,56 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+static bool slaunch_is_txt_launch(void)
+{
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) ==
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return true;
+
+   return false;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   struct sl_ap_stack_and_monitor *stack_monitor;
+   int i;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor *)
+   __va(ap_wake_info->ap_wake_block + ap_wake_info->ap_stacks_offset);
+
+   /* A signed index is required here; with an unsigned i the
+    * i >= 0 test is always true and the loop never terminates. */
+   for (i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   /* Write the monitor to wake the AP from MWAIT */
+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline bool slaunch_is_txt_launch(void)
+{
+   return false;
+}
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -1040,12 +1091,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct 
task_struct *idle)
 
/*
 * Wake up a CPU in different cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,
 * - Use an INIT boot APIC message
 */
-   if (apic->wakeup_secondary_cpu_64)
+   if (slaunch_is_txt_launch())
+   slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   else if (apic->wakeup_secondary_cpu_64)
ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
else if (apic->wakeup_secondary_cpu)
ret = apic->wakeup_secondary_cpu(apicid, start_ip);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 788e5559549f..b548b3376765 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -210,6 +211,8 @@ void __init init_real_mode(void)
 
setup_real_mode();
set_real_mode_permissions();
+
+   slaunch_fixup_jump_vector();
 }
 
 static int __init do_init_real_mode(void)
diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
index 2eb62be6d256..3b5cbcbbfc90 100644
--- a/arch/x86/realmode/rm/header.S
+++ b/arch/x86/r

[PATCH v7 03/13] x86: Secure Launch Kconfig

2023-11-10 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 12 
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..1b983e336611 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2066,6 +2066,18 @@ config EFI_RUNTIME_MAP
 
  See also Documentation/ABI/testing/sysfs-firmware-efi-runtime-map.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   default n
+   depends on X86_64 && X86_X2APIC
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
 source "kernel/Kconfig.hz"
 
 config ARCH_SUPPORTS_KEXEC
-- 
2.39.3




[PATCH v7 11/13] reboot: Secure Launch SEXIT support on reboot paths

2023-11-10 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths
like when poweroff action does not have a poweroff callback (into
ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized but SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 830425e6d38e..668cfc5e4c92 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -766,6 +767,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -776,6 +778,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -784,8 +789,12 @@ static void native_machine_power_off(void)
if (kernel_can_power_off()) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
do_kernel_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -813,6 +822,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
2.39.3




Re: [PATCH v6 05/14] x86: Secure Launch main header file

2023-10-31 Thread ross . philipson

On 5/12/23 9:10 AM, Ross Philipson wrote:

On 5/12/23 07:00, Matthew Garrett wrote:

On Thu, May 04, 2023 at 02:50:14PM +, Ross Philipson wrote:


+static inline int tpm12_log_event(void *evtlog_base, u32 evtlog_size,
+  u32 event_size, void *event)
+{
+    struct tpm12_event_log_header *evtlog =
+    (struct tpm12_event_log_header *)evtlog_base;
+
+    if (memcmp(evtlog->signature, TPM12_EVTLOG_SIGNATURE,
+   sizeof(TPM12_EVTLOG_SIGNATURE)))
+    return -EINVAL;
+
+    if (evtlog->container_size > evtlog_size)
+    return -EINVAL;
+
+    if (evtlog->next_event_offset + event_size > 
evtlog->container_size)

+    return -E2BIG;
+
+    memcpy(evtlog_base + evtlog->next_event_offset, event, event_size);
+    evtlog->next_event_offset += event_size;
+
+    return 0;
+}
+
+static inline int tpm20_log_event(struct 
txt_heap_event_log_pointer2_1_element *elem,

+  void *evtlog_base, u32 evtlog_size,
+  u32 event_size, void *event)
+{
+    struct tcg_pcr_event *header =
+    (struct tcg_pcr_event *)evtlog_base;
+
+    /* Has to be at least big enough for the signature */
+    if (header->event_size < sizeof(TCG_SPECID_SIG))
+    return -EINVAL;
+
+    if (memcmp((u8 *)header + sizeof(struct tcg_pcr_event),
+   TCG_SPECID_SIG, sizeof(TCG_SPECID_SIG)))
+    return -EINVAL;
+
+    if (elem->allocated_event_container_size > evtlog_size)
+    return -EINVAL;
+
+    if (elem->next_record_offset + event_size >
+    elem->allocated_event_container_size)
+    return -E2BIG;
+
+    memcpy(evtlog_base + elem->next_record_offset, event, event_size);
+    elem->next_record_offset += event_size;
+
+    return 0;
+}
+


These seem like they'd potentially be useful outside the context of SL,
maybe put them in a more generic location? Very much a nice to have, not
a blocker from my side.


Yea we can look into finding a nice home somewhere in the TPM event log 
code for these.


After looking at it, it seems we would have to drag a whole bunch of TXT 
related structures into the TPM event log code. I don't think this is 
really worth it for what these functions do.


Thanks
Ross






+/*
+ * External functions avalailable in mainline kernel.


Nit: "available"


Ack

Thanks









Re: [PATCH v6 07/14] x86: Secure Launch kernel early boot stub

2023-09-20 Thread ross . philipson

On 5/12/23 11:04 AM, Thomas Gleixner wrote:


On Thu, May 04 2023 at 14:50, Ross Philipson wrote:

+
+/* CPUID: leaf 1, ECX, SMX feature bit */
+#define X86_FEATURE_BIT_SMX(1 << 6)
+
+/* Can't include apiddef.h in asm */


Why not? All it needs is a #ifndef __ASSEMBLY__ guard around the C parts.


+#define XAPIC_ENABLE   (1 << 11)
+#define X2APIC_ENABLE  (1 << 10)
+
+/* Can't include traps.h in asm */


NMI_VECTOR is defined in irq_vectors.h which just has a include
 for no real good reason.


+#define X86_TRAP_NMI   2





+/*
+ * See the comment in head_64.S for detailed information on what this macro
+ * is used for.
+ */
+#define rva(X) ((X) - sl_stub_entry)


I'm having a hard time to find that comment in head_64.S. At least it's
not in this patch.


+.Lsl_ap_cs:
+   /* Load the relocated AP IDT */

[ 11 more citation lines. Click/Enter to show. ]

+   lidt(sl_ap_idt_desc - sl_txt_ap_wake_begin)(%ecx)
+
+   /* Fixup MTRRs and misc enable MSR on APs too */
+   callsl_txt_load_regs
+
+   /* Enable SMI with GETSEC[SMCTRL] */
+   GETSEC $(SMX_X86_GETSEC_SMCTRL)
+
+   /* IRET-to-self can be used to enable NMIs which SENTER disabled */
+   lealrva(.Lnmi_enabled_ap)(%ebx), %eax
+   pushfl
+   pushl   $(__SL32_CS)
+   pushl   %eax
+   iret


So from here on any NMI which hits the AP before it can reach the wait
loop will corrupt EDX...


+/* This is the beginning of the relocated AP wake code block */
+   .global sl_txt_ap_wake_begin

[ 10 more citation lines. Click/Enter to show. ]

+sl_txt_ap_wake_begin:
+
+   /*
+* Wait for NMI IPI in the relocated AP wake block which was provided
+* and protected in the memory map by the prelaunch code. Leave all
+* other interrupts masked since we do not expect anything but an NMI.
+*/
+   xorl%edx, %edx
+
+1:
+   hlt
+   testl   %edx, %edx
+   jz  1b


This really makes me nervous. A stray NMI and the AP starts going.

Can't this NMI just bring the AP out of HLT w/o changing any state and
the AP evaluates a memory location which indicates whether it should
start up or not.


I have switched the existing code to use MONITOR/MWAIT and got rid of 
the use of the NMIs here. I am currently using a monitor variable on the 
stack of each AP but I think I may refactor that. The next step is to 
rebase this work on top of your hotplug patchset. Is the code in your 
devel repo on the hotplug branch still the latest bits?


I had a question - more a request for your thoughts on this. I am 
currently assuming a cache line size and alignment of 64b for the 
monitor variable location. Do you think this is sufficient for x86 
platforms or do I need to dynamically find a way to read the CPUID 
information for MONITOR and get my size/alignment values from there?


Thanks
Ross Philipson




+   /*
+* This is the long absolute jump to the 32b Secure Launch protected
+* mode stub code in the rmpiggy. The jump address will be fixed in


Providing an actual name for the stub might spare to rummage through
code to figure out where this is supposed to jump to.


+* the SMP boot code when the first AP is brought up. This whole area
+* is provided and protected in the memory map by the prelaunch code.

[ 2 more citation lines. Click/Enter to show. ]

+*/
+   .byte   0xea
+sl_ap_jmp_offset:
+   .long   0x
+   .word   __SL32_CS


Thanks,

tglx




[PATCH v5 02/12] Documentation/x86: Secure Launch kernel documentation

2022-02-18 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 Documentation/security/index.rst   |   1 +
 Documentation/security/launch-integrity/index.rst  |  10 +
 .../security/launch-integrity/principles.rst   | 313 
 .../launch-integrity/secure_launch_details.rst | 552 +
 .../launch-integrity/secure_launch_overview.rst| 214 
 5 files changed, 1090 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 16335de..e8dadec 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -17,3 +17,4 @@ Security Documentation
tpm/index
digsig
landlock
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst 
b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..28eed91d
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,10 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst 
b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..73cf063
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,313 @@
+===
+System Launch Integrity
+===
+
+This document serves to establish a common understanding of what is system
+launch, the integrity concern for system launch, and why using a Root of Trust
+(RoT) from a Dynamic Launch may be desired. Throughout this document
+terminology from the Trusted Computing Group (TCG) and National Institute of
+Standards and Technology (NIST) is used to ensure a vendor-neutral language is
+used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to only consider the classical power-on boot as the only
+means to launch an Operating System (OS) on a computer system, but in fact most
+modern processors support two methods to launch the system. To provide clarity,
+a common definition of a system launch should be established. This definition is
+that during a single power life cycle of a system, a System Launch consists
+of an initialization event, typically in hardware, that is followed by an
+executing software payload that takes the system from the initialized state to
+a running state. Driven by the Trusted Computing Group (TCG) architecture,
+modern processors are able to support two methods to launch a system, these two
+types of system launch are known as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPU architectures provide a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the event for a dynamic launch and is referred to as the Dynamic
+Launch Event (DLE). The DLE functions by accepting a software payload, referred
+to as the Dynamic Configuration Environment (DCE), to which execution is handed
+after the DLE is invoked. The DCE is responsible for bringing the system back
+to a running state. Since the dynamic launch is not tied to a power event like
+the static launch, this enables a dynamic launch to be initiated at any time
+and multiple times during a single power life cycle. This dynamism is the
+reasoning behind referring to this system launch as being dynamic.
+
+Because dynamic launches can be conducted at any time during a single power
+life cycle, they are classified into one of two types, an early launch or a
+late launch.
+
+:Early Launch: When a dynamic launch is used as a transition from a static
+   launch chain to the final Operating System.
+
+:Late Launch: The usage of a dynamic launch by an execut

[PATCH v5 01/12] x86/boot: Place kernel_info at a fixed offset

2022-02-18 Thread Ross Philipson
From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.

Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is
placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.

Signed-off-by: Arvind Sankar 
Cc: Ross Philipson 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/kernel_info.S | 19 +++
 arch/x86/boot/compressed/kernel_info.h | 12 
 arch/x86/boot/compressed/vmlinux.lds.S |  6 ++
 3 files changed, 33 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/boot/compressed/kernel_info.h

diff --git a/arch/x86/boot/compressed/kernel_info.S 
b/arch/x86/boot/compressed/kernel_info.S
index f818ee8..c18f071 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,23 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
+#include "kernel_info.h"
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * If a field needs to hold the offset of a symbol from the start
+ * of the image, use the macro below, eg
+ * .long   rva(symbol)
+ * This will avoid creating run-time relocations, which are not
+ * allowed in the compressed kernel.
+ */
+
+#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -19,4 +30,4 @@ kernel_info:
 
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
diff --git a/arch/x86/boot/compressed/kernel_info.h 
b/arch/x86/boot/compressed/kernel_info.h
new file mode 100644
index ..c127f84
--- /dev/null
+++ b/arch/x86/boot/compressed/kernel_info.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
+#define BOOT_COMPRESSED_KERNEL_INFO_H
+
+#ifdef CONFIG_X86_64
+#define KERNEL_INFO_OFFSET 0x500
+#else /* 32-bit */
+#define KERNEL_INFO_OFFSET 0x100
+#endif
+
+#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S 
b/arch/x86/boot/compressed/vmlinux.lds.S
index 112b237..84c7b4d 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
 
 #include 
 #include 
+#include "kernel_info.h"
 
 #ifdef CONFIG_X86_64
 OUTPUT_ARCH(i386:x86-64)
@@ -27,6 +28,11 @@ SECTIONS
HEAD_TEXT
_ehead = . ;
}
+   .rodata.kernel_info KERNEL_INFO_OFFSET : {
+   *(.rodata.kernel_info)
+   }
+   ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
+
.rodata..compressed : {
*(.rodata..compressed)
}
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v5 06/12] x86: Secure Launch kernel early boot stub

2022-02-18 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/x86/boot.rst |  21 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  37 ++
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 556 ++
 arch/x86/boot/compressed/sl_stub.S | 685 +
 arch/x86/include/uapi/asm/bootparam.h  |   1 +
 arch/x86/kernel/asm-offsets.c  |  19 +
 8 files changed, 1355 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
index 894a198..4f10c40 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -481,6 +481,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the compressed kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1026,6 +1034,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a successful measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index d1791bb..3a4cd6f 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -105,7 +105,8 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index fd9441f..6ef0089 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -501,6 +501,17 @@ trampoline_return:
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   pushq   %rsi
+
+   /* Ensure the relocation region is covered by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+
+   popq%rsi
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -559,6 +570,32 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low risk with all the PMR and overlap

[PATCH v5 07/12] x86: Secure Launch kernel late boot stub

2022-02-18 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 467 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 475 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 6aef9ee..5a189ad 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 6e29c20..eb9e2a2 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -970,6 +971,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..ef6ef08
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,467 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags;
+static struct sl_ap_wake_info ap_wake_info;
+static u64 evtlog_addr;
+static u32 evtlog_size;
+static u64 vtd_pmr_lo_size;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+EXPORT_SYMBOL(slaunch_get_flags);
+
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header 
*dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(&txt_dmar[0]);
+}
+
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   u64 base, size, offset = 0;
+   void *heap;
+   int i;
+
+   if (type > TXT_SINIT_TABLE_MAX)
+   slaunch_txt_reset(txt,
+   "Error invalid table type for early heap walk\n",
+   SL_ERROR_HEAP_WALK);
+
+   memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
+   memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
+
+   /* Iterate over heap tables looking for table of "type" */
+   for (i = 0; i < type; i++) {
+   base += offset;
+   heap = early_memremap(base, sizeof(u64));
+   if (!heap)
+   slaunch_txt_reset(txt,
+   "Error early_memremap of heap for heap walk\n",
+   SL_ERROR_HEAP_MAP);
+
+ 

[PATCH v5 11/12] x86: Secure Launch late initcall platform module

2022-02-18 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 493 +
 2 files changed, 494 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5a189ad..8c4d602 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -84,6 +84,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..5bb4c0c
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,493 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and
+ * finalization support.
+ *
+ * Copyright (c) 2022 Apertus Solutions, LLC
+ * Copyright (c) 2021 Assured Information Security, Inc.
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ *
+ * Author(s):
+ * Daniel P. Smith 
+ * Garnet T. Grimm 
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   void __iomem *txt;  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(&reg_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"eventlog", 0, 0};
+static void *txt_heap;
+static struct txt_heap_event_log_pointer2_1_element __iomem *evtlog20;
+static DEFINE_MUTEX(sl_evt_log_mutex);
+
+static ssize_t sl_evtlog_read(struct file *file, char __user *buf,
+ size_t count, loff_t *pos)
+{
+   ssize_t s

[PATCH v5 00/12] x86: Trenchboot secure dynamic launch Linux kernel support

2022-02-18 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance the boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also known as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvement in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. The AMD DRTM implementation
has been present since the introduction of AMD-V but requires an additional
AMD-specific component, referred to in the specification as the Secure Loader,
for which the TrenchBoot project has an active prototype in development.
Finally, Arm's implementation is in the specification development stage, and
the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capability, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

The Secure Launch feature starts with patch #2. Patch #1 was authored by Arvind
Sankar. There is no further status on this patch at this point but
Secure Launch depends on it so it is included with the set.

Links:

The TrenchBoot project including documentation:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

GRUB2 pre-launch support patchset (WIP):

https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00011.html

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry point code.
 - Security audit changes:
   - Validate buffers passed to MLE do not overlap the MLE and are
 properly laid out.
   - Validate buffers and memory regions used by the MLE are
 protected by IOMMU PMRs.
 - Force IOMMU to not use passthrough mode during a Secure Launch.
 - Prevent KASLR use during a Secure Launch.

Changes in v3:

 - Introduce x86 documentation patch to provide background, overview
   and configuration/ABI information for the Secure Launch kernel
   feature.
 - Remove the IOMMU patch with special cases for disabling IOMMU
   passthrough. Configuring the IOMMU is now a documentation matter
   in the previously mentioned new patch.
 - Remove special case KASLR disabling code. Configuring KASLR is now
   a documentation matter in the previously mentioned new patch.
 - Fix incorrect panic on TXT public register read.
 - Properly handle and measure setup_indirect bootparams in the early
   launch code.
 - Use correct compressed kernel image base address when testing buffers
   in the early launch stub code. This bug was introduced by the changes
   to avoid relocation in the compressed kernel.
 - Use CPUID feature bits instead of CPUID vendor strings to determine
   if SMX mode is supported and the system is Intel.
 - Remove early NMI re-enable on the BSP. This can be safely done later
   

[PATCH v5 10/12] reboot: Secure Launch SEXIT support on reboot paths

2022-02-18 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths
like when poweroff action does not have a poweroff callback (into
ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized but SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index fa700b4..96d9c78 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -724,6 +725,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -734,6 +736,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -742,8 +747,12 @@ static void native_machine_power_off(void)
if (pm_power_off) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
pm_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -771,6 +780,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v5 12/12] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch

2022-02-18 Thread Ross Philipson
The Secure Launch MLE environment uses PCRs that are only accessible from
the DRTM locality 2. By default the TPM drivers always initialize the
locality to 0. When a Secure Launch is in progress, initialize the
locality to 2.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index b009e74..7b8d4bb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "tpm.h"
 
 DEFINE_IDR(dev_nums_idr);
@@ -34,12 +35,18 @@
 
 static int tpm_request_locality(struct tpm_chip *chip)
 {
+   int locality;
int rc;
 
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   if (slaunch_get_flags() & SL_FLAG_ACTIVE)
+   locality = 2;
+   else
+   locality = 0;
+
+   rc = chip->ops->request_locality(chip, locality);
if (rc < 0)
return rc;
 
-- 
1.8.3.1



[PATCH v5 09/12] kexec: Secure Launch kexec SEXIT support

2022-02-18 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 69 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 73 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index ef6ef08..d3ea491 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -465,3 +465,72 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private regs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* these mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public regs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* Enable SMX mode (CR4.SMXE) so GETSEC[SEXIT] can be executed */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 68480f7..cb67bfb 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1178,6 +1179,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
1.8.3.1



[PATCH v5 03/12] x86: Secure Launch Kconfig

2022-02-18 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9f5bd41..3f69aeb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1983,6 +1983,40 @@ config EFI_MIXED
 
   If unsure, say N.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   default n
+   depends on X86_64 && X86_X2APIC
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
+config SECURE_LAUNCH_ALT_DLME_AUTHORITY
+   bool "Secure Launch Alternate DLME Authority PCR"
+   default n
+   depends on SECURE_LAUNCH
+   help
+  As the DLME environment, Secure Launch by default measures
+  the configuration information as the DLME Authority into
+  PCR18. This feature allows separating these measurements
+  into the TCG DRTM specification PCR (PCR.DLME_AUTHORITY),
+  PCR19.
+
+config SECURE_LAUNCH_ALT_DLME_DETAIL
+   bool "Secure Launch Alternate DLME Detail PCR"
+   default n
+   depends on SECURE_LAUNCH
+   help
+  As the DLME environment, Secure Launch by default measures
+  the image data like any external initrd as a DRTM Detail
+  into PCR17. This feature allows separating these
+  measurements into the Secure Launch's Detail PCR
+  (PCR.DLME_DETAIL), PCR20.
+
 source "kernel/Kconfig.hz"
 
 config KEXEC
-- 
1.8.3.1



[PATCH v5 04/12] x86: Secure Launch main header file

2022-02-18 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 532 
 1 file changed, 532 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..87ab663
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,532 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT    0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms are supported
+ * by the capabilities of an Intel Trusted Execution Technology (TXT) platform.
+ * SMX is the processor's programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x0000
+#define TXT_CR_ESTS    0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS    (1<<0)
+#define TXT_SEXIT_DONE_STS (1<<1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE        1
+#define TXT_OS_MLE_DATA_TABLE      2
+#define TXT_OS_SINIT_DATA_TABLE    3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAX        TXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE  

[PATCH v5 08/12] x86: Secure Launch SMP bringup support

2022-02-18 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code parked the APs in a pause/jmp loop waiting for an NMI.
The modified SMP boot code is called for the Secure Launch case. The
jump address for the RM piggy entry point is fixed up in the jump where
the APs are waiting and an NMI IPI is sent to the AP. The AP vectors to
the Secure Launch entry point in the RM piggy which mimics what the real
mode code would do then jumps to the standard RM piggy protected mode
entry point.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 86 
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 37 
 4 files changed, 129 insertions(+)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 331474b..151d09a 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -37,6 +37,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 617012f..94b37ab 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1073,6 +1074,83 @@ int common_cpu_up(unsigned int cpu, struct task_struct *idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+static atomic_t first_ap_only = {1};
+
+/*
+ * Called to fix the long jump address for the waiting APs to vector to
+ * the correct startup location in the Secure Launch stub in the rmpiggy.
+ */
+static int
+slaunch_fixup_jump_vector(void)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   u32 *ap_jmp_ptr = NULL;
+
+   if (!atomic_dec_and_test(&first_ap_only))
+   return 0;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   ap_jmp_ptr = (u32 *)__va(ap_wake_info->ap_wake_block +
+ap_wake_info->ap_jmp_offset);
+
+   *ap_jmp_ptr = real_mode_header->sl_trampoline_start32;
+
+   pr_debug("TXT AP long jump address updated\n");
+
+   return 0;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs in a pause loop waiting to receive an NMI. This will wake the APs
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static int
+slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   unsigned long send_status = 0, accept_status = 0;
+
+   /* Only done once */
+   if (slaunch_fixup_jump_vector())
+   return -1;
+
+   /* Send NMI IPI to idling AP and wake it up */
+   apic_icr_write(APIC_DM_NMI, apicid);
+
+   if (init_udelay == 0)
+   udelay(10);
+   else
+   udelay(300);
+
+   send_status = safe_apic_wait_icr_idle();
+
+   if (init_udelay == 0)
+   udelay(10);
+   else
+   udelay(300);
+
+   accept_status = (apic_read(APIC_ESR) & 0xEF);
+
+   if (send_status)
+   pr_err("Secure Launch IPI never delivered???\n");
+   if (accept_status)
+   pr_err("Secure Launch IPI delivery error (%lx)\n",
+   accept_status);
+
+   return (send_status | accept_status);
+}
+
+#else
+
+#define slaunch_wakeup_cpu_from_txt(cpu, apicid)   0
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -1127,6 +1205,13 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
cpumask_clear_cpu(cpu, cpu_initialized_mask);
smp_mb();
 
+   /* With Intel TXT, the AP startup is totally different */
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+  (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) {
+   boot_error = slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   goto txt_wake;
+   }
+
/*
 * Wake up a CPU in difference cases:
 * - Use the method in the APIC driver if it's defined
@@ -1139,6 +1224,7 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
 cpu0_nmi_registered);
 
+txt_wake:
if (!boot_error) {
 

[PATCH v5 05/12] x86: Add early SHA support for Secure Launch early measurements

2022-02-18 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA algorithms are necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA libraries directly in
the code since the compressed kernel is not uncompressed at this point.

The SHA code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

That code could not be pulled directly into the setup portion of the
compressed kernel because of other dependencies it pulls in. The result
is this is a modified copy of that code that still leverages the core
SHA algorithms.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   |  2 +
 arch/x86/boot/compressed/early_sha1.c   | 97 +
 arch/x86/boot/compressed/early_sha1.h   | 17 ++
 arch/x86/boot/compressed/early_sha256.c |  7 +++
 lib/crypto/sha256.c |  8 +++
 lib/sha1.c  |  4 ++
 6 files changed, 135 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c
 create mode 100644 arch/x86/boot/compressed/early_sha1.h
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 6115274..d1791bb 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -105,6 +105,8 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index ..476bda2
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "early_sha1.h"
+
+#define SHA1_DISABLE_EXPORT
+#include "../../../../lib/sha1.c"
+
+/* The SHA1 implementation in lib/sha1.c was written to get the workspace
+ * buffer as a parameter. This wrapper function provides a container
+ * around a temporary workspace that is cleared after the transform completes.
+ */
+static void __sha_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));
+}
+
+void early_sha1_init(struct sha1_state *sctx)
+{
+   sha1_init(sctx->state);
+   sctx->count = 0;
+}
+
+void early_sha1_update(struct sha1_state *sctx,
+  const u8 *data,
+  unsigned int len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+void early_sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits = cpu_to_be64(sctx->count << 3);
+   __sha_transform(sctx->state, sctx->buffer);
+
+   for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
+   put_unaligned_be32(sctx->state[i], digest++);
+
+   *sctx = (struct sha1_state){};
+}
diff --git a/arch/x86/boot/com

Re: [PATCH v4 04/14] Documentation/x86: Secure Launch kernel documentation

2021-12-03 Thread Ross Philipson
On 12/3/21 11:03, Robin Murphy wrote:
> On 2021-12-03 15:47, Ross Philipson wrote:
>> On 12/2/21 12:26, Robin Murphy wrote:
>>> On 2021-08-27 14:28, Ross Philipson wrote:
>>> [...]
>>>> +IOMMU Configuration
>>>> +---
>>>> +
>>>> +When doing a Secure Launch, the IOMMU should always be enabled and
>>>> the drivers
>>>> +loaded. However, IOMMU passthrough mode should never be used. This
>>>> leaves the
>>>> +MLE completely exposed to DMA after the PMR's [2]_ are disabled.
>>>> First, IOMMU
>>>> +passthrough should be disabled by default in the build configuration::
>>>> +
>>>> +  "Device Drivers" -->
>>>> +  "IOMMU Hardware Support" -->
>>>> +  "IOMMU passthrough by default [ ]"
>>>> +
>>>> +This unset the Kconfig value CONFIG_IOMMU_DEFAULT_PASSTHROUGH.
>>>
>>> Note that the config structure has now changed, and if set, passthrough
>>> is deselected by choosing a different default domain type.
>>
>> Thanks for the heads up. We will have to modify this for how things
>> exist today.
>>
>>>
>>>> +In addition, passthrough must be disabled on the kernel command line
>>>> when doing
>>>> +a Secure Launch as follows::
>>>> +
>>>> +  iommu=nopt iommu.passthrough=0
>>>
>>> This part is a bit silly - those options are literally aliases for the
>>> exact same thing, and furthermore if the config is already set as
>>> required then the sole effect either of them will have is to cause "(set
>>> by kernel command line)" to be printed. There is no value in explicitly
>>> overriding the default value with the default value - if anyone can
>>> append an additional "iommu.passthrough=1" (or "iommu=pt") to the end of
>>> the command line they'll still win.
>>
>> I feel like when we worked on this, it was still important to set those
>> values. This could have been in an older kernel version. We will go back
>> and verify what you are saying here and adjust the documentation
>> accordingly.
>>
>> As to anyone just adding values to the command line, that is why the
>> command line is part of the DRTM measurements.
> 
> Yeah, I had a vague memory that that was the case - basically if you can
> trust the command line as much as the config then it's definitely
> redundant to pass an option for this (see iommu_subsys_init() - it's now
> all plumbed through iommu_def_domain_type), and if you can't then
> passing them is futile anyway.

Thanks you for your feedback.

Ross

> 
> Cheers,
> Robin.


Re: [PATCH v4 04/14] Documentation/x86: Secure Launch kernel documentation

2021-12-03 Thread Ross Philipson
On 12/2/21 12:26, Robin Murphy wrote:
> On 2021-08-27 14:28, Ross Philipson wrote:
> [...]
>> +IOMMU Configuration
>> +---
>> +
>> +When doing a Secure Launch, the IOMMU should always be enabled and
>> the drivers
>> +loaded. However, IOMMU passthrough mode should never be used. This
>> leaves the
>> +MLE completely exposed to DMA after the PMR's [2]_ are disabled.
>> First, IOMMU
>> +passthrough should be disabled by default in the build configuration::
>> +
>> +  "Device Drivers" -->
>> +  "IOMMU Hardware Support" -->
>> +  "IOMMU passthrough by default [ ]"
>> +
>> +This unset the Kconfig value CONFIG_IOMMU_DEFAULT_PASSTHROUGH.
> 
> Note that the config structure has now changed, and if set, passthrough
> is deselected by choosing a different default domain type.

Thanks for the heads up. We will have to modify this for how things
exist today.

> 
>> +In addition, passthrough must be disabled on the kernel command line
>> when doing
>> +a Secure Launch as follows::
>> +
>> +  iommu=nopt iommu.passthrough=0
> 
> This part is a bit silly - those options are literally aliases for the
> exact same thing, and furthermore if the config is already set as
> required then the sole effect either of them will have is to cause "(set
> by kernel command line)" to be printed. There is no value in explicitly
> overriding the default value with the default value - if anyone can
> append an additional "iommu.passthrough=1" (or "iommu=pt") to the end of
> the command line they'll still win.

I feel like when we worked on this, it was still important to set those
values. This could have been in an older kernel version. We will go back
and verify what you are saying here and adjust the documentation
accordingly.

As to anyone just adding values to the command line, that is why the
command line is part of the DRTM measurements.

Thank you,
Ross

> 
> Robin.


[PATCH v4 08/14] x86: Secure Launch kernel early boot stub

2021-08-27 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used, such as the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/x86/boot.rst |  13 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  37 ++
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 549 ++
 arch/x86/boot/compressed/sl_stub.S | 685 +
 arch/x86/kernel/asm-offsets.c  |  19 +
 7 files changed, 1339 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
index 894a198..2fbcb77 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -1026,6 +1026,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch
+  Environment (MLE) header. This offset is used to locate information
+  needed during a secure late launch using Intel TXT. If the offset is
+  zero, the kernel does not have Secure Launch capabilities. The MLE
+  entry point is called from TXT on the BSP following a successful
+  measured launch. The specific state of the processors is outlined in
+  the TXT Software Development Guide; the latest can be found here:
+  https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 059d49a..1fe55a5 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,7 +102,8 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o \
+	$(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index a2347de..b35e072 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -498,6 +498,17 @@ trampoline_return:
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   pushq   %rsi
+
+	/* Ensure the relocation region is covered by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+
+   popq%rsi
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -556,6 +567,32 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low risk with all the PMR and overlap
+* checks in place.
+*/
+   pushq   %rsi
+
+   movq%rsi, %rdi
+   callq   sl_main
+
+	/* Ensure the decompression location is covered by a PMR */
+   movq%rbp, %rdi
+   movqoutput_len(%rip), %rsi
+   callq   sl_check_region
+
+   popq%rsi
+#endif
+
 /*
  * If running as an SEV guest, the encryption mask is required in the
  * page-table setup code below. When the guest also has SEV-ES enabled
diff --git a/arch/x86
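The sl_check_region() calls in the patch above guard both the relocation region and the decompression target. A minimal userspace sketch of the underlying interval check follows; this is a hypothetical model, not the kernel routine — the real code looks the PMR ranges up from the TXT heap and performs a TXT reset on failure instead of returning.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch: return true when [base, base + size) lies entirely
 * inside the low PMR [0, pmr_lo_size). The real sl_check_region() does
 * not return on failure; it triggers a TXT reset with a sticky error.
 */
static bool region_covered_by_pmr(uint64_t base, uint64_t size,
				  uint64_t pmr_lo_size)
{
	uint64_t end;

	/* Reject address wrap-around before comparing against the limit. */
	if (__builtin_add_overflow(base, size, &end))
		return false;

	return end <= pmr_lo_size;
}
```

The wrap-around check matters: a huge `size` that overflows past zero would otherwise appear to fit under the PMR limit.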

[PATCH v4 13/14] x86: Secure Launch late initcall platform module

2021-08-27 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 494 +
 2 files changed, 495 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index d6ee904..09b730a 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -81,6 +81,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-$(CONFIG_STACKTRACE)   += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index 000..bad50f0a
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,494 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and
+ * finalization support.
+ *
+ * Copyright (c) 2021 Apertus Solutions, LLC
+ * Copyright (c) 2021 Assured Information Security, Inc.
+ * Copyright (c) 2021, Oracle and/or its affiliates.
+ *
+ * Author(s):
+ * Daniel P. Smith 
+ * Garnet T. Grimm 
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define SL_FS_ENTRIES  10
+/* root directory node must be last */
+#define SL_ROOT_DIR_ENTRY  (SL_FS_ENTRIES - 1)
+#define SL_TXT_DIR_ENTRY   (SL_FS_ENTRIES - 2)
+#define SL_TXT_FILE_FIRST  (SL_TXT_DIR_ENTRY - 1)
+#define SL_TXT_ENTRY_COUNT 7
+
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   void __iomem *txt;  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (IS_ERR(txt))\
+   return PTR_ERR(txt);\
	memcpy_fromio(&reg_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
	&msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,		\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"e
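Each DECLARE_TXT_PUB_READ_U() expansion in the patch above follows one shape: read a register-sized value, format it into a small buffer, then serve the read from that buffer. A userspace analogue of that read path is sketched below; read_from_buffer() is a stand-in for the kernel's simple_read_from_buffer(), and the ioremap of the TXT register bank is replaced by a plain value, so everything here is illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Bounded copy from a formatted buffer, advancing *ppos the way
 * simple_read_from_buffer() does (no __user copy in this model).
 */
static size_t read_from_buffer(char *dst, size_t len, size_t *ppos,
			       const char *src, size_t available)
{
	size_t n;

	if (*ppos >= available)
		return 0;
	n = available - *ppos;
	if (n > len)
		n = len;
	memcpy(dst, src + *ppos, n);
	*ppos += n;
	return n;
}

/* Analogue of txt_pub_read_u32(): format a 32-bit register value. */
static size_t txt_read_u32(uint32_t reg_value, char *dst, size_t len,
			   size_t *ppos)
{
	char msg[12];			/* "%#010x\n" plus NUL = 12 bytes */

	snprintf(msg, sizeof(msg), "%#010x\n", reg_value);
	return read_from_buffer(dst, len, ppos, msg, sizeof(msg));
}
```

The 12-byte buffer matches the msg_size passed to DECLARE_TXT_PUB_READ_U(32, ...) in the patch.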

[PATCH v4 04/14] Documentation/x86: Secure Launch kernel documentation

2021-08-27 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 Documentation/x86/index.rst |   1 +
 Documentation/x86/secure-launch.rst | 716 
 2 files changed, 717 insertions(+)
 create mode 100644 Documentation/x86/secure-launch.rst

diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index 3830483..e5a058f 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -31,6 +31,7 @@ x86-specific Documentation
tsx_async_abort
buslock
usb-legacy-support
+   secure-launch
i386/index
x86_64/index
sva
diff --git a/Documentation/x86/secure-launch.rst b/Documentation/x86/secure-launch.rst
new file mode 100644
index 000..95bb193
--- /dev/null
+++ b/Documentation/x86/secure-launch.rst
@@ -0,0 +1,716 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=
+Secure Launch
+=
+
+Background
+==
+
+The Trusted Computing Group (TCG) architecture defines two methods in
+which the target operating system is started, aka launched, on a system
+for which Intel and AMD provide implementations. These two launch types
+are static launch and dynamic launch. Static launch is referred to as
+such because it happens at one fixed point, at system startup, during
+the defined life-cycle of a system. Dynamic launch is referred to as
+such because it is not limited to being done once nor bound to system
+startup. It can in fact happen at any time without incurring/requiring an
+associated power event for the system. Since dynamic launch can happen
+at any time, this results in dynamic launch being split into two types of
+its own. The first is referred to as an early launch, where the dynamic
+launch is done in conjunction with the static launch of the system. The
+second is referred to as a late launch, where a dynamic launch is
+initiated after the static launch was fully completed and the system was
+under the control of some target operating system or run-time kernel.
+These two top-level launch methods, static launch and dynamic launch,
+provide different models for establishing the launch integrity, i.e. the
+load-time integrity, of the target operating system. When cryptographic
+hashing is used to create an integrity assessment for the launch
+integrity, then for a static launch this is referred to as the Static
+Root of Trust for Measurement (SRTM) and for dynamic launch it is
+referred to as the Dynamic Root of Trust for Measurement (DRTM).
+
+The reasoning for needing the two integrity models is driven by the fact
+that these models leverage what is referred to as a "transitive trust".
+A transitive trust is commonly referred to as a "trust chain", which is
+created through the process of an entity making an integrity assessment
+of another entity and upon success transfers control to the new entity.
+This process is then repeated by each entity until the Trusted Computing
+Base (TCB) of the system has been established. A challenge for transitive
+trust is that the process is susceptible to cumulative error
+and in this case that error is inaccurate or improper integrity
+assessments. The way to address cumulative error is to reduce the
+number of instances that can introduce error into the process. In this
+case that means reducing the number of entities involved in the
+transitive trust. It is not possible to reduce the number of firmware
+components or the boot loader(s) involved during static launch. This is
+where dynamic launch comes in, as it introduces the concept for a CPU to
+provide an instruction that results in a transitive trust starting with
+the CPU doing an integrity assessment of a special loader that can then
+start a target operating system. This reduces the trust chain down to
+the CPU, a special loader, and the target operating system. It is also
+why it is said that the DRTM is rooted in hardware since the CPU is what
+does the first integrity assessment, i.e. the first measurement, in the
+trust chain.
+
+Overview
+
+
+Prior to the start of the TrenchBoot project, the only active Open
+Source project supporting dynamic launch was Intel's tboot project to
+support their implementation of dynamic launch known as Intel Trusted
+eXecution Technology (TXT). The approach taken by tboot was to provide
+an exokernel that could handle the launch protocol implemented by
+Intel's special loader, the SINIT Authenticated Code Module (ACM [3]_)
+and remained in memory to manage the SMX CPU mode that a dynamic launch
+would put a system in. While it is not precluded from being used for doing
+a late launch, tboot's primary use case was to be used as an early
+launch solution. As a result the TrenchBoot project started the
+development of the Secure Launch kernel feature to provide a more
+generalized approach. The focus of the effort is twofold, the first i
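The transitive-trust model described in the documentation above reduces, for measurement purposes, to an iterated extend operation: each stage measures the next before handing over control, and the final value depends on every link in the chain. A toy model of that chaining is sketched below; a 64-bit FNV-1a hash stands in for SHA-1/SHA-256 and a real TPM PCR, and every name here is illustrative, not part of the patch set.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy 64-bit FNV-1a hash standing in for a real digest algorithm. */
static uint64_t toy_hash(const void *data, size_t len)
{
	const unsigned char *p = data;
	uint64_t h = 0xcbf29ce484222325ULL;
	size_t i;

	for (i = 0; i < len; i++) {
		h ^= p[i];
		h *= 0x100000001b3ULL;
	}
	return h;
}

/* PCR-style extend: new = H(old || H(component)). */
static uint64_t extend(uint64_t pcr, const char *component)
{
	uint64_t m = toy_hash(component, strlen(component));
	unsigned char buf[16];

	memcpy(buf, &pcr, sizeof(pcr));
	memcpy(buf + 8, &m, sizeof(m));
	return toy_hash(buf, sizeof(buf));
}

/* Chain a boot sequence: both content and order affect the result. */
static uint64_t measure_chain(const char **components, size_t n)
{
	uint64_t pcr = 0;	/* PCRs start from a known initial value */
	size_t i;

	for (i = 0; i < n; i++)
		pcr = extend(pcr, components[i]);
	return pcr;
}
```

Tampering with any component, or reordering the chain, yields a different final value — which is why the extend operation cannot be "rewound" by later software.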

[PATCH v4 09/14] x86: Secure Launch kernel late boot stub

2021-08-27 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 460 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 468 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 3e625c6..d6ee904 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -80,6 +80,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-$(CONFIG_STACKTRACE)   += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 055a834..482bd76 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -976,6 +977,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index 000..f91f0b5
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,460 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2021, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags;
+static struct sl_ap_wake_info ap_wake_info;
+static u64 evtlog_addr;
+static u32 evtlog_size;
+static u64 vtd_pmr_lo_size;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+EXPORT_SYMBOL(slaunch_get_flags);
+
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
	return &ap_wake_info;
+}
+
struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
	return (struct acpi_table_header *)(&txt_dmar[0]);
+}
+
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
	memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
	memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
	memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
	memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   void *heap;
+   u64 base, size, offset = 0;
+   int i;
+
+   if (type > TXT_SINIT_TABLE_MAX)
+   slaunch_txt_reset(txt,
+   "Error invalid table type for early heap walk\n",
+   SL_ERROR_HEAP_WALK);
+
	memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
	memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
+
+   /* Iterate over heap tables looking for table of "type" */
+   for (i = 0; i < type; i++) {
+   base += offset;
+   heap = early_memremap(base, sizeof(u64));
+   if (!heap)
+   slaunch_txt_reset(txt,
+   "Error early_memremap of heap for heap walk\n",
+   SL_ERROR_HEAP_MAP);
+
+ 
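txt_early_get_heap_table() in the patch above walks the TXT heap one table at a time because the heap is too large for early_ioremap to map at once. Assuming the layout the code implies — each table begins with a u64 size that counts the size field itself, with the next table immediately after — the walk can be modeled in userspace as below; this is a sketch over an in-memory image, not the kernel routine, which also resets the platform on errors.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch: find table index 'type' in a TXT-style heap image where each
 * table starts with a u64 size that includes the size field itself.
 * Returns the table's offset, or (uint64_t)-1 on a malformed heap.
 */
static uint64_t heap_table_offset(const uint8_t *heap, uint64_t heap_size,
				  unsigned int type)
{
	uint64_t base = 0, size;
	unsigned int i;

	for (i = 0; i < type; i++) {
		/* The size field itself must fit inside the heap. */
		if (base + sizeof(uint64_t) > heap_size)
			return (uint64_t)-1;
		memcpy(&size, heap + base, sizeof(size));
		/* A size smaller than its own field, or overrunning
		 * the heap, means the image is corrupt. */
		if (size < sizeof(uint64_t) || base + size > heap_size)
			return (uint64_t)-1;
		base += size;		/* advance to the next table */
	}
	return base;
}
```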

[PATCH v4 02/14] x86/boot: Add setup_indirect support in early_memremap_is_setup_data

2021-08-27 Thread Ross Philipson
The x86 boot documentation describes the setup_indirect structures and
how they are used. Only one of the two functions in ioremap.c that needed
to be modified to be aware of the introduction of setup_indirect
functionality was updated. This adds comparable support to the other
function where it was missing.

Fixes: b3c72fc9a78e ("x86/boot: Introduce setup_indirect")

Signed-off-by: Ross Philipson 
---
 arch/x86/mm/ioremap.c | 21 +++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index ab74e4f..f2b34c5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -669,17 +669,34 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
 
paddr = boot_params.hdr.setup_data;
while (paddr) {
-   unsigned int len;
+   unsigned int len, size;
 
if (phys_addr == paddr)
return true;
 
data = early_memremap_decrypted(paddr, sizeof(*data));
+   size = sizeof(*data);
 
paddr_next = data->next;
len = data->len;
 
-   early_memunmap(data, sizeof(*data));
+   if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
+   early_memunmap(data, sizeof(*data));
+   return true;
+   }
+
+   if (data->type == SETUP_INDIRECT) {
+   size += len;
+   early_memunmap(data, sizeof(*data));
+   data = early_memremap_decrypted(paddr, size);
+
+		if (((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+			paddr = ((struct setup_indirect *)data->data)->addr;
+			len = ((struct setup_indirect *)data->data)->len;
+		}
+   }
+
+   early_memunmap(data, size);
 
if ((phys_addr > paddr) && (phys_addr < (paddr + len)))
return true;
-- 
1.8.3.1

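The traversal this patch fixes can be modeled outside the kernel: walk the setup_data list and, for SETUP_INDIRECT entries, take the payload location from the embedded setup_indirect descriptor rather than the inline data. The structures and the SETUP_INDIRECT_T value below are simplified stand-ins purely for illustration; the real definitions live in arch/x86/include/uapi/asm/bootparam.h.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SETUP_INDIRECT_T 4		/* stand-in for SETUP_INDIRECT */

struct setup_data_s {
	uint64_t next;			/* "physical address" of next node */
	uint32_t type;
	uint32_t len;
	uint8_t data[];
};

struct setup_indirect_s {
	uint32_t type;
	uint32_t reserved;
	uint64_t len;
	uint64_t addr;
};

/* Does phys_addr fall inside this entry's payload (inline or indirect)? */
static int entry_contains(const struct setup_data_s *data, uint64_t paddr,
			  uint64_t phys_addr)
{
	uint64_t start = paddr, len = data->len;

	if (data->type == SETUP_INDIRECT_T) {
		struct setup_indirect_s ind;

		memcpy(&ind, data->data, sizeof(ind));
		if (ind.type != SETUP_INDIRECT_T) {
			/* Payload lives elsewhere; use the descriptor. */
			start = ind.addr;
			len = ind.len;
		}
	}
	/* Mirrors the kernel's (phys_addr > paddr) && (phys_addr < paddr + len). */
	return phys_addr > start && phys_addr < start + len;
}
```

The bug being fixed is orthogonal to this logic: the kernel was reading the setup_indirect fields while only sizeof(struct setup_data) bytes were mapped, so the remap size has to grow before the descriptor is dereferenced.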


[PATCH v4 05/14] x86: Secure Launch Kconfig

2021-08-27 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 32 
 1 file changed, 32 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 88fb922..b5e25c5 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1949,6 +1949,38 @@ config EFI_MIXED
 
   If unsure, say N.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   default n
+   depends on X86_64 && X86_X2APIC
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
+config SECURE_LAUNCH_ALT_PCR19
+   bool "Secure Launch Alternate PCR 19 usage"
+   default n
+   depends on SECURE_LAUNCH
+   help
+  In the post ACM environment, Secure Launch by default measures
+  configuration information into PCR18. This feature allows finer
+  control over measurements by moving configuration measurements
+  into PCR19.
+
+config SECURE_LAUNCH_ALT_PCR20
+   bool "Secure Launch Alternate PCR 20 usage"
+   default n
+   depends on SECURE_LAUNCH
+   help
+  In the post ACM environment, Secure Launch by default measures
+  image data like any external initrd into PCR17. This feature
+  allows finer control over measurements by moving image measurements
+  into PCR20.
+
 source "kernel/Kconfig.hz"
 
 config KEXEC
-- 
1.8.3.1



[PATCH v4 14/14] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch

2021-08-27 Thread Ross Philipson
The Secure Launch MLE environment uses PCRs that are only accessible from
the DRTM locality 2. By default the TPM drivers always initialize the
locality to 0. When a Secure Launch is in progress, initialize the
locality to 2.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index ddaeceb..ce0a3b7 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "tpm.h"
 
 DEFINE_IDR(dev_nums_idr);
@@ -34,12 +35,18 @@
 
 static int tpm_request_locality(struct tpm_chip *chip)
 {
+   int locality;
int rc;
 
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   if (slaunch_get_flags() & SL_FLAG_ACTIVE)
+   locality = 2;
+   else
+   locality = 0;
+
+   rc = chip->ops->request_locality(chip, locality);
if (rc < 0)
return rc;
 
-- 
1.8.3.1



[PATCH v4 01/14] x86/boot: Fix memremap of setup_indirect structures

2021-08-27 Thread Ross Philipson
As documented, the setup_indirect structure is nested inside
the setup_data structures in the setup_data list. The code currently
accesses the fields inside the setup_indirect structure but only
the sizeof(struct setup_data) is being memremapped. No crash
occurred but this is just due to how the area is remapped under the
covers.

The fix is to properly memremap both the setup_data and setup_indirect
structures in these cases before accessing them.

Fixes: b3c72fc9a78e ("x86/boot: Introduce setup_indirect")

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/e820.c | 31 -
 arch/x86/kernel/kdebugfs.c | 32 +++---
 arch/x86/kernel/ksysfs.c   | 56 --
 arch/x86/kernel/setup.c| 23 +--
 arch/x86/mm/ioremap.c  | 13 +++
 5 files changed, 113 insertions(+), 42 deletions(-)

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index bc0657f..e023950 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -996,7 +996,8 @@ static int __init parse_memmap_opt(char *str)
 void __init e820__reserve_setup_data(void)
 {
struct setup_data *data;
-   u64 pa_data;
+   u64 pa_data, pa_next;
+   u32 len;
 
pa_data = boot_params.hdr.setup_data;
if (!pa_data)
@@ -1004,6 +1005,9 @@ void __init e820__reserve_setup_data(void)
 
while (pa_data) {
data = early_memremap(pa_data, sizeof(*data));
+   len = sizeof(*data);
+   pa_next = data->next;
+
		e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
 
/*
@@ -1015,18 +1019,23 @@ void __init e820__reserve_setup_data(void)
 sizeof(*data) + data->len,
 E820_TYPE_RAM, 
E820_TYPE_RESERVED_KERN);
 
-		if (data->type == SETUP_INDIRECT &&
-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
-			e820__range_update(((struct setup_indirect *)data->data)->addr,
-					   ((struct setup_indirect *)data->data)->len,
-					   E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
-			e820__range_update_kexec(((struct setup_indirect *)data->data)->addr,
-						 ((struct setup_indirect *)data->data)->len,
-						 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+		if (data->type == SETUP_INDIRECT) {
+			len += data->len;
+			early_memunmap(data, sizeof(*data));
+			data = early_memremap(pa_data, len);
+
+			if (((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+				e820__range_update(((struct setup_indirect *)data->data)->addr,
+						   ((struct setup_indirect *)data->data)->len,
+						   E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+				e820__range_update_kexec(((struct setup_indirect *)data->data)->addr,
+							 ((struct setup_indirect *)data->data)->len,
+							 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+			}
}
 
-   pa_data = data->next;
-   early_memunmap(data, sizeof(*data));
+   pa_data = pa_next;
+   early_memunmap(data, len);
}
 
e820__update_table(e820_table);
diff --git a/arch/x86/kernel/kdebugfs.c b/arch/x86/kernel/kdebugfs.c
index 64b6da9..e5c72d8 100644
--- a/arch/x86/kernel/kdebugfs.c
+++ b/arch/x86/kernel/kdebugfs.c
@@ -92,7 +92,8 @@ static int __init create_setup_data_nodes(struct dentry *parent)
struct setup_data *data;
int error;
struct dentry *d;
-   u64 pa_data;
+   u64 pa_data, pa_next;
+   u32 len;
int no = 0;
 
d = debugfs_create_dir("setup_data", parent);
@@ -112,12 +113,27 @@ static int __init create_setup_data_nodes(struct dentry *parent)
error = -ENOMEM;
goto err_dir;
}
-
-		if (data->type == SETUP_INDIRECT &&
-		    ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
-			node->paddr = ((struct setup_indirect *)data->data)->addr;
-			node->type  = ((struct setup_indirect *)data->data)->type;
-			node->len   = ((struct setup_indirect *)data->data)->

[PATCH v4 03/14] x86/boot: Place kernel_info at a fixed offset

2021-08-27 Thread Ross Philipson
From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.

Since commit (note: commit ID from tip/master)

  527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is
placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.

Signed-off-by: Arvind Sankar 
Cc: Ross Philipson 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/kernel_info.S | 19 +++
 arch/x86/boot/compressed/kernel_info.h | 12 
 arch/x86/boot/compressed/vmlinux.lds.S |  6 ++
 3 files changed, 33 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/boot/compressed/kernel_info.h

diff --git a/arch/x86/boot/compressed/kernel_info.S b/arch/x86/boot/compressed/kernel_info.S
index f818ee8..c18f071 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,23 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
+#include "kernel_info.h"
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * If a field needs to hold the offset of a symbol from the start
+ * of the image, use the macro below, eg
+ * .long   rva(symbol)
+ * This will avoid creating run-time relocations, which are not
+ * allowed in the compressed kernel.
+ */
+
+#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -19,4 +30,4 @@ kernel_info:
 
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
diff --git a/arch/x86/boot/compressed/kernel_info.h b/arch/x86/boot/compressed/kernel_info.h
new file mode 100644
index 000..c127f84
--- /dev/null
+++ b/arch/x86/boot/compressed/kernel_info.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
+#define BOOT_COMPRESSED_KERNEL_INFO_H
+
+#ifdef CONFIG_X86_64
+#define KERNEL_INFO_OFFSET 0x500
+#else /* 32-bit */
+#define KERNEL_INFO_OFFSET 0x100
+#endif
+
+#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
index 112b237..84c7b4d 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
 
 #include 
 #include 
+#include "kernel_info.h"
 
 #ifdef CONFIG_X86_64
 OUTPUT_ARCH(i386:x86-64)
@@ -27,6 +28,11 @@ SECTIONS
HEAD_TEXT
_ehead = . ;
}
+   .rodata.kernel_info KERNEL_INFO_OFFSET : {
+   *(.rodata.kernel_info)
+   }
+	ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
+
.rodata..compressed : {
*(.rodata..compressed)
}
-- 
1.8.3.1

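The rva() macro introduced in this patch works because (X - kernel_info) is a link-time constant, and kernel_info itself sits at a fixed, linker-enforced offset — so the image-relative address needs no run-time relocation. That arithmetic can be checked with a tiny model; the addresses below are hypothetical, only the 64-bit KERNEL_INFO_OFFSET of 0x500 comes from the patch.

```c
#include <assert.h>
#include <stdint.h>

#define KERNEL_INFO_OFFSET 0x500	/* fixed by the linker script (64-bit) */

/*
 * Model of rva(X): the delta (X - kernel_info) is the same no matter
 * where the image ends up, so adding the fixed section offset always
 * yields the same image-relative value.
 */
static uint32_t rva(uint32_t sym_addr, uint32_t kernel_info_addr)
{
	return (sym_addr - kernel_info_addr) + KERNEL_INFO_OFFSET;
}
```

Whatever base the image is "loaded" at, only the delta matters — which is exactly why the compressed kernel can embed such offsets without violating its no-run-time-relocations rule.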


[PATCH v4 07/14] x86: Add early SHA support for Secure Launch early measurements

2021-08-27 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA algorithms are necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA libraries directly in
the code since the compressed kernel is not uncompressed at this point.

The SHA code here has its origins in the code from the main kernel, commit
c4d5b9ffa31f (crypto: sha1 - implement base layer for SHA-1). That code could
not be pulled directly into the setup portion of the compressed kernel because
of other dependencies it pulls in. The result is a modified copy of
that code that still leverages the core SHA algorithms.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   |   2 +
 arch/x86/boot/compressed/early_sha1.c   | 103 
 arch/x86/boot/compressed/early_sha1.h   |  17 ++
 arch/x86/boot/compressed/early_sha256.c |   7 +++
 lib/crypto/sha256.c |   8 +++
 lib/sha1.c  |   4 ++
 6 files changed, 141 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c
 create mode 100644 arch/x86/boot/compressed/early_sha1.h
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 431bf7f..059d49a 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,6 +102,8 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c 
b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index 000..74f4654
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2021 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "early_sha1.h"
+
+#define SHA1_DISABLE_EXPORT
+#include "../../../../lib/sha1.c"
+
+/* The SHA1 implementation in lib/sha1.c was written to get the workspace
+ * buffer as a parameter. This wrapper function provides a container
+ * around a temporary workspace that is cleared after the transform completes.
+ */
+static void __sha_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memset(ws, 0, sizeof(ws));
+   /*
+* As this is cryptographic code, prevent the memset 0 from being
+* optimized out potentially leaving secrets in memory.
+*/
+   wmb();
+
+}
+
+void early_sha1_init(struct sha1_state *sctx)
+{
+   sha1_init(sctx->state);
+   sctx->count = 0;
+}
+
+void early_sha1_update(struct sha1_state *sctx,
+  const u8 *data,
+  unsigned int len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+void early_sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits = cpu_to_be64(sctx->count << 3);
+   __sha_transform(sctx->state, sctx->buffer);
+
+   for (i = 0; i < SHA1_DIGEST_SIZ
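The update path in early_sha1_update() above only runs the compression function on full 64-byte blocks, buffering any partial tail for the next call. A minimal userspace sketch of that buffering arithmetic, with the transform stubbed out as a block counter (not the real SHA-1 compression, just an illustration of the block accounting):

```c
#include <assert.h>
#include <string.h>

#define BLK 64  /* SHA1_BLOCK_SIZE */

struct st {
	unsigned long long count;   /* total bytes fed in */
	unsigned char buf[BLK];     /* partial-block buffer */
	int blocks_run;             /* how many times the transform ran */
};

/* Stand-in for __sha_transform(): only counts block consumption. */
static void fake_transform(struct st *s)
{
	s->blocks_run++;
}

static void upd(struct st *s, const unsigned char *data, unsigned int len)
{
	unsigned int partial = s->count % BLK;

	s->count += len;

	if (partial + len >= BLK) {
		if (partial) {
			unsigned int p = BLK - partial;

			/* top up the buffered partial block first */
			memcpy(s->buf + partial, data, p);
			data += p;
			len -= p;
			fake_transform(s);
		}
		/* consume whole blocks straight from the input */
		while (len >= BLK) {
			fake_transform(s);
			data += BLK;
			len -= BLK;
		}
		partial = 0;
	}
	if (len)
		memcpy(s->buf + partial, data, len);
}
```

Feeding 10, then 60, then 122 bytes runs the transform once at the second call (70 >= 64) and twice at the third (6 + 122 = 128 = two blocks), leaving no tail.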

[PATCH v4 00/14] x86: Trenchboot secure dynamic launch Linux kernel support

2021-08-27 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance boot security and integrity in a unified manner. The first area of
focus has been the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also known as DRTM (Dynamic Root of
Trust for Measurement). The project has worked and continues to work on providing
a unified means of Dynamic Launch that is cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvement in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD's DRTM implementation has
been present since the introduction of AMD-V but requires an additional
AMD-specific component, referred to in the specification as the Secure Loader,
of which the TrenchBoot project has an active prototype in development.
Finally, Arm's implementation is in the specification development stage, and
the project is looking to support it when it becomes available.

The approach that the TrenchBoot project is taking requires the Linux kernel
to be directly invoked by the Dynamic Launch. The Dynamic Launch will
be initiated by a boot loader with associated support added to it, for
example the first targeted boot loader will be GRUB2. An integral part of
establishing the DRTM involves measuring everything that is intended to
be run (kernel image, initrd, etc) and everything that will configure
that kernel to run (command line, boot params, etc) into specific PCRs,
the DRTM PCRs (17-22), in the TPM. Another key aspect is that the Dynamic
Launch is rooted in hardware; that is, the hardware (CPU) takes the first
measurement in the chain of integrity measurements. On Intel this is done
using the GETSEC instruction provided by Intel's TXT, and on AMD using the
SKINIT instruction provided by AMD-V. Information on these technologies can
be readily found online. This patchset introduces Intel TXT support.

To enable the kernel to be launched by GETSEC, a stub must be built
into the setup section of the compressed kernel to handle the specific
state that the dynamic launch process leaves the BSP in. Also this stub
must measure everything that is going to be used as early as possible.
This stub code and subsequent code must also deal with the specific
state that the dynamic launch leaves the APs in.

A quick note on terminology. The larger open source project itself is
called TrenchBoot, which is hosted on GitHub (links below). The kernel
feature enabling the use of the x86 technology is referred to as "Secure
Launch" within the kernel code. As such the prefixes sl_/SL_ or
slaunch/SLAUNCH will be seen in the code. The stub code discussed above
is referred to as the SL stub.

The new feature starts with patch #4. There are several preceding patches
before that. Patches 1 and 2 are fixes to an earlier patch set that
introduced the x86 setup_data type setup_indirect. Patch 3 was authored
by Arvind Sankar. There is no further status on this patch at this point but
Secure Launch depends on it so it is included with the set.

The basic flow is:

 - Entry from the dynamic launch jumps to the SL stub
 - SL stub fixes up the world on the BSP
 - For TXT, SL stub wakes the APs, fixes up their worlds
 - For TXT, APs are left halted waiting for an NMI to wake them
 - SL stub jumps to startup_32
 - SL main locates the TPM event log and writes the measurements of
   configuration and module information into it.
 - Kernel boot proceeds normally from this point.
 - During early setup, slaunch_setup() runs to finish some validation
   and setup tasks.
 - The SMP bringup code is modified to wake the waiting APs. APs vector
   to rmpiggy and start up normally from that point.
 - SL platform module is registered as a late initcall module. It reads
   the TPM event log and extends the measurements taken into the TPM PCRs.
 - SL platform module initializes the securityfs interface to allow
   access to the TPM event log and TXT public registers.
 - Kernel boot finishes booting normally
 - SEXIT support to leave SMX mode is present on the kexec path and
   the various reboot paths (poweroff, reset, halt).

Links:

The Trenchboot project including documentation:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

GRUB2 pre-launch support patchset (WIP):

https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00011.html

Thanks
Ross Philipson an
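The measurement flow above hinges on the TPM extend operation: a PCR can only be updated as new = H(old PCR || measurement), so the final PCR value commits to the entire ordered sequence of measurements. A toy sketch of that accumulate-only behavior (the 64-bit mixer below is a hypothetical stand-in for SHA-1/SHA-256 with no cryptographic strength whatsoever):

```c
#include <assert.h>
#include <stdint.h>

/* Toy 64-bit mixer standing in for a real hash -- illustration only. */
static uint64_t toy_hash(uint64_t a, uint64_t b)
{
	uint64_t h = a ^ 0x9e3779b97f4a7c15ULL;

	h = (h ^ b) * 0xff51afd7ed558ccdULL;
	h ^= h >> 33;
	return h;
}

/* TPM-style extend: the PCR can only be folded forward, never set. */
static uint64_t pcr_extend(uint64_t pcr, uint64_t measurement)
{
	return toy_hash(pcr, measurement);
}
```

Because the old value is an input to every update, software after the launch cannot rewrite a PCR to hide an earlier measurement; it can only extend further, and extending in a different order generally yields a different final value.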

[PATCH v4 06/14] x86: Secure Launch main header file

2021-08-27 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 532 
 1 file changed, 532 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index 000..c125b67
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,532 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2021, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE), using the measurement and protection mechanisms supported
+ * by the capabilities of an Intel Trusted Execution Technology (TXT) platform.
+ * SMX is the processor's programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x0000
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS(1<<0)
+#define TXT_SEXIT_DONE_STS (1<<1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAX        TXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE  
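The register bank defines above place the private bank 64 KiB below the public one, so TXT_NR_CONFIG_PAGES works out to 16 pages of 4 KiB. A quick standalone check of that arithmetic (PAGE_SHIFT hard-coded to 12 here purely for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages, as on x86 */

/* Mirrors the defines in include/linux/slaunch.h */
#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000ULL
#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000ULL
#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
			      TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
```

The private bank spans exactly the 0x10000 bytes between the two bases, which is what slaunch_finalize() maps with ioremap() before issuing TXT commands.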

[PATCH v4 11/14] kexec: Secure Launch kexec SEXIT support

2021-08-27 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 71 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 75 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index f91f0b5..60a193a 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -458,3 +458,74 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+void slaunch_finalize(int do_sexit)
+{
+   void __iomem *config;
+   u64 one = TXT_REGVALUE_ONE, val;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private regs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* these mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public regs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0) {
+   pr_emerg("Error TXT SEXIT must be called on CPU 0\n");
+   return;
+   }
+
+   /* Ensure SMXE is set in CR4 so the GETSEC[SEXIT] below can execute */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_emerg("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 4b34a9a..fdf0a27 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 
@@ -1179,6 +1180,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
1.8.3.1
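Each TXT command in slaunch_finalize() above follows the same pattern: write TXT_REGVALUE_ONE to the command register, then read E2STS back so the posted MMIO write is known to have reached the chipset before proceeding. A sketch of that access pattern over a plain in-memory stand-in for the ioremap()ed register bank (offsets mirror the TXT_CR_* values; this models only the pattern, not real hardware side effects):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Plain-memory stand-in for the 4 KiB ioremap()ed TXT private bank. */
static uint8_t bank[4096];

#define CR_CMD_NO_SECRETS 0x8e8  /* mirrors TXT_CR_CMD_NO_SECRETS */
#define CR_E2STS          0x8f0  /* mirrors TXT_CR_E2STS */

static void reg_write64(unsigned int off, uint64_t v)
{
	memcpy(bank + off, &v, sizeof(v));   /* models memcpy_toio() */
}

static uint64_t reg_read64(unsigned int off)
{
	uint64_t v;

	memcpy(&v, bank + off, sizeof(v));   /* models memcpy_fromio() */
	return v;
}

/* Issue a command: write 1, then read E2STS as a completion fence. */
static uint64_t txt_command(unsigned int cmd_off)
{
	reg_write64(cmd_off, 1ULL);
	return reg_read64(CR_E2STS);
}
```

The read-back is what matters on real hardware: without it, the posted write to the command register could still be in flight when the code moves on to the next step.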



[PATCH v4 10/14] x86: Secure Launch SMP bringup support

2021-08-27 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code parked the APs in a pause/jmp loop waiting for an NMI.
The modified SMP boot code is called for the Secure Launch case. The
jump address for the RM piggy entry point is fixed up in the jump where
the APs are waiting and an NMI IPI is sent to the AP. The AP vectors to
the Secure Launch entry point in the RM piggy, which mimics what the real
mode code would do and then jumps to the standard RM piggy protected mode
entry point.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 86 
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 37 
 4 files changed, 129 insertions(+)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 5db5d08..ef37bf1 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -37,6 +37,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 9320285..aafe627 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -57,6 +57,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 
@@ -1021,6 +1022,83 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+static atomic_t first_ap_only = {1};
+
+/*
+ * Called to fix the long jump address for the waiting APs to vector to
+ * the correct startup location in the Secure Launch stub in the rmpiggy.
+ */
+static int
+slaunch_fixup_jump_vector(void)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   u32 *ap_jmp_ptr = NULL;
+
+   if (!atomic_dec_and_test(&first_ap_only))
+   return 0;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   ap_jmp_ptr = (u32 *)__va(ap_wake_info->ap_wake_block +
+ap_wake_info->ap_jmp_offset);
+
+   *ap_jmp_ptr = real_mode_header->sl_trampoline_start32;
+
+   pr_info("TXT AP long jump address updated\n");
+
+   return 0;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs in a pause loop waiting to receive an NMI. This will wake the APs
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static int
+slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   unsigned long send_status = 0, accept_status = 0;
+
+   /* Only done once */
+   if (slaunch_fixup_jump_vector())
+   return -1;
+
+   /* Send NMI IPI to idling AP and wake it up */
+   apic_icr_write(APIC_DM_NMI, apicid);
+
+   if (init_udelay == 0)
+   udelay(10);
+   else
+   udelay(300);
+
+   send_status = safe_apic_wait_icr_idle();
+
+   if (init_udelay == 0)
+   udelay(10);
+   else
+   udelay(300);
+
+   accept_status = (apic_read(APIC_ESR) & 0xEF);
+
+   if (send_status)
+   pr_err("Secure Launch IPI never delivered???\n");
+   if (accept_status)
+   pr_err("Secure Launch IPI delivery error (%lx)\n",
+   accept_status);
+
+   return (send_status | accept_status);
+}
+
+#else
+
+#define slaunch_wakeup_cpu_from_txt(cpu, apicid)   0
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -1075,6 +1153,13 @@ static int do_boot_cpu(int apicid, int cpu, struct 
task_struct *idle,
cpumask_clear_cpu(cpu, cpu_initialized_mask);
smp_mb();
 
+   /* With Intel TXT, the AP startup is totally different */
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+  (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) {
+   boot_error = slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   goto txt_wake;
+   }
+
/*
 * Wake up a CPU in difference cases:
 * - Use the method in the APIC driver if it's defined
@@ -1087,6 +1172,7 @@ static int do_boot_cpu(int apicid, int cpu, struct 
task_struct *idle,
boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
 cpu0_nmi_registered);
 
+txt_wake:
if (!boot_error) {
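slaunch_fixup_jump_vector() above uses an atomic counter initialized to 1 with atomic_dec_and_test() so that the fixup of the AP jump address happens exactly once, no matter how many APs are brought up. The same do-once idiom in portable C11 (a sketch, not kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

/* Starts at 1 so only the first caller sees the 1 -> 0 transition,
 * mirroring atomic_dec_and_test(&first_ap_only) in the patch. */
static atomic_int first_only = 1;

/* Returns 1 only for the single caller that should do the fixup. */
static int fixup_once(void)
{
	/* atomic_fetch_sub returns the value before the decrement */
	return atomic_fetch_sub(&first_only, 1) == 1;
}
```

In the kernel, atomic_dec_and_test() returns true when the result of the decrement is zero, which is equivalent to the old value having been 1 as tested here.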
 

[PATCH v4 12/14] reboot: Secure Launch SEXIT support on reboot paths

2021-08-27 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths,
e.g. when the poweroff action does not have a poweroff callback (into
the ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized and SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index ebfb911..fe9d8cc 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include 
 #include 
 #include 
@@ -731,6 +732,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -741,6 +743,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -749,8 +754,12 @@ static void native_machine_power_off(void)
if (pm_power_off) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
pm_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -778,6 +787,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
1.8.3.1



Re: [PATCH v3 14/14] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch

2021-08-16 Thread Ross Philipson

On 8/10/21 12:21 PM, Jarkko Sakkinen wrote:

On Mon, Aug 09, 2021 at 12:38:56PM -0400, Ross Philipson wrote:

The Secure Launch MLE environment uses PCRs that are only accessible from
the DRTM locality 2. By default the TPM drivers always initialize the
locality to 0. When a Secure Launch is in progress, initialize the
locality to 2.

Signed-off-by: Ross Philipson 
---
  drivers/char/tpm/tpm-chip.c | 13 +++--
  1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index ddaeceb..48b9351 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -23,6 +23,7 @@
  #include 
  #include 
  #include 
+#include <linux/slaunch.h>
  #include "tpm.h"
  
  DEFINE_IDR(dev_nums_idr);

@@ -34,12 +35,20 @@
  
  static int tpm_request_locality(struct tpm_chip *chip)

  {
-   int rc;
+   int rc, locality;


 int locality;
 int rc;


Will do.



  
  	if (!chip->ops->request_locality)

return 0;
  
-	rc = chip->ops->request_locality(chip, 0);

+   if (slaunch_get_flags() & SL_FLAG_ACTIVE) {
+   dev_dbg(&chip->dev, "setting TPM locality to 2 for MLE\n");
+   locality = 2;
+   } else {
+   dev_dbg(&chip->dev, "setting TPM locality to 0\n");
+   locality = 0;
+   }


Please, remove dev_dbg()'s.


Will do.

Thanks
Ross




+
+   rc = chip->ops->request_locality(chip, locality);
if (rc < 0)
return rc;
  
--

1.8.3.1


/Jarkko
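The change under review boils down to picking the TPM locality from the Secure Launch flags: locality 2 while an MLE is active (so the DRTM PCRs 17-22 are accessible), locality 0 otherwise. As a tiny sketch, with the flag value taken from the slaunch.h patch in this series:

```c
#include <assert.h>

#define SL_FLAG_ACTIVE 0x00000001  /* from include/linux/slaunch.h */

/* Locality 2 is required to access the DRTM PCRs; 0 stays the default. */
static int tpm_locality_for(unsigned int sl_flags)
{
	return (sl_flags & SL_FLAG_ACTIVE) ? 2 : 0;
}
```

Any additional architecture flags (SKINIT/TXT) do not change the choice; only the active bit matters for locality selection.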





Re: [PATCH v3 02/14] x86/boot: Add missing handling of setup_indirect structures

2021-08-16 Thread Ross Philipson

On 8/10/21 12:19 PM, Jarkko Sakkinen wrote:

On Mon, Aug 09, 2021 at 12:38:44PM -0400, Ross Philipson wrote:

One of the two functions in ioremap.c that handles setup_data was
missing the correct handling of setup_indirect structures.


What is "correct handling", and how was it broken?

What is 'setup_indirect'?


Functionality missing from original commit:


Remove this sentence.


commit b3c72fc9a78e (x86/boot: Introduce setup_indirect)


Should be.

Fixes: b3c72fc9a78e ("x86/boot: Introduce setup_indirect")


I will fix these things and make the commit message clearer.

Thanks,
Ross



  

Signed-off-by: Ross Philipson 
---
  arch/x86/mm/ioremap.c | 21 +++--
  1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index ab74e4f..f2b34c5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -669,17 +669,34 @@ static bool __init 
early_memremap_is_setup_data(resource_size_t phys_addr,
  
  	paddr = boot_params.hdr.setup_data;

while (paddr) {
-   unsigned int len;
+   unsigned int len, size;
  
  		if (phys_addr == paddr)

return true;
  
  		data = early_memremap_decrypted(paddr, sizeof(*data));

+   size = sizeof(*data);
  
  		paddr_next = data->next;

len = data->len;
  
-		early_memunmap(data, sizeof(*data));

+   if ((phys_addr > paddr) && (phys_addr < (paddr + len))) {
+   early_memunmap(data, sizeof(*data));
+   return true;
+   }
+
+   if (data->type == SETUP_INDIRECT) {
+   size += len;
+   early_memunmap(data, sizeof(*data));
+   data = early_memremap_decrypted(paddr, size);
+
+   if (((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) {
+   paddr = ((struct setup_indirect *)data->data)->addr;
+   len = ((struct setup_indirect *)data->data)->len;
+   }
+   }
+
+   early_memunmap(data, size);
  
  		if ((phys_addr > paddr) && (phys_addr < (paddr + len)))

return true;
--
1.8.3.1




/Jarkko
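The fix above relies twice on the same containment test: phys_addr is considered part of a setup_data blob when it lies strictly between paddr and paddr + len (the phys_addr == paddr case is handled by the earlier equality check). Sketched and exercised on its own:

```c
#include <assert.h>
#include <stdint.h>

/* Strict containment test as used in early_memremap_is_setup_data();
 * the base address itself is matched by a separate '==' comparison. */
static int within(uint64_t phys_addr, uint64_t paddr, uint64_t len)
{
	return phys_addr > paddr && phys_addr < paddr + len;
}
```

With the patch applied, the same test is run a second time against the paddr/len of the payload a SETUP_INDIRECT entry points at, so indirect data is also treated as setup_data.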





[PATCH v3 06/14] x86: Secure Launch main header file

2021-08-09 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 532 
 1 file changed, 532 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index 000..c125b67
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,532 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2021, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE), using the measurement and protection mechanisms supported
+ * by the capabilities of an Intel Trusted Execution Technology (TXT) platform.
+ * SMX is the processor's programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x0000
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS(1<<0)
+#define TXT_SEXIT_DONE_STS (1<<1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAX        TXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE  

[PATCH v3 12/14] reboot: Secure Launch SEXIT support on reboot paths

2021-08-09 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths,
e.g. when the poweroff action does not have a poweroff callback (into
the ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized and SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index ebfb911..fe9d8cc 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include 
 #include 
 #include 
@@ -731,6 +732,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -741,6 +743,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -749,8 +754,12 @@ static void native_machine_power_off(void)
if (pm_power_off) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
pm_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -778,6 +787,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v3 08/14] x86: Secure Launch kernel early boot stub

2021-08-09 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/x86/boot.rst |  13 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  37 ++
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 549 ++
 arch/x86/boot/compressed/sl_stub.S | 685 +
 arch/x86/kernel/asm-offsets.c  |  19 +
 7 files changed, 1339 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
index 894a198..2fbcb77 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -1026,6 +1026,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch
+  Environment (MLE) header. This offset is used to locate information needed
+  during a secure late launch using Intel TXT. If the offset is zero, the
+  kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 059d49a..1fe55a5 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,7 +102,8 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index a2347de..b35e072 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -498,6 +498,17 @@ trampoline_return:
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   pushq   %rsi
+
+   /* Ensure the relocation region is covered by a PMR */
+   movq    %rbx, %rdi
+   movl    $(_bss - startup_32), %esi
+   callq   sl_check_region
+
+   popq    %rsi
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -556,6 +567,32 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low risk with all the PMR and overlap
+* checks in place.
+*/
+   pushq   %rsi
+
+   movq    %rsi, %rdi
+   callq   sl_main
+
+   /* Ensure the decompression location is covered by a PMR */
+   movq    %rbp, %rdi
+   movq    output_len(%rip), %rsi
+   callq   sl_check_region
+
+   popq    %rsi
+#endif
+
 /*
  * If running as an SEV guest, the encryption mask is required in the
  * page-table setup code below. When the guest also has SEV-ES enabled
diff --git a/arch/x86

[PATCH v3 07/14] x86: Add early SHA support for Secure Launch early measurements

2021-08-09 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA algorithms are necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA libraries directly in
the code since the compressed kernel is not uncompressed at this point.

The SHA code here has its origins in the code from the main kernel, commit
c4d5b9ffa31f (crypto: sha1 - implement base layer for SHA-1). That code could
not be pulled directly into the setup portion of the compressed kernel because
of other dependencies it pulls in. The result is this is a modified copy of
that code that still leverages the core SHA algorithms.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   |   2 +
 arch/x86/boot/compressed/early_sha1.c   | 103 
 arch/x86/boot/compressed/early_sha1.h   |  17 ++
 arch/x86/boot/compressed/early_sha256.c |   7 +++
 lib/crypto/sha256.c |   8 +++
 lib/sha1.c  |   4 ++
 6 files changed, 141 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c
 create mode 100644 arch/x86/boot/compressed/early_sha1.h
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 431bf7f..059d49a 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,6 +102,8 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index 000..74f4654
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2021 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "early_sha1.h"
+
+#define SHA1_DISABLE_EXPORT
+#include "../../../../lib/sha1.c"
+
+/* The SHA1 implementation in lib/sha1.c was written to get the workspace
+ * buffer as a parameter. This wrapper function provides a container
+ * around a temporary workspace that is cleared after the transform completes.
+ */
+static void __sha_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memset(ws, 0, sizeof(ws));
+   /*
+* As this is cryptographic code, prevent the memset 0 from being
+* optimized out potentially leaving secrets in memory.
+*/
+   wmb();
+
+}
+
+void early_sha1_init(struct sha1_state *sctx)
+{
+   sha1_init(sctx->state);
+   sctx->count = 0;
+}
+
+void early_sha1_update(struct sha1_state *sctx,
+  const u8 *data,
+  unsigned int len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+void early_sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits = cpu_to_be64(sctx->count << 3);
+   __sha_transform(sctx->state, sctx->buffer);
+
+   for (i = 0; i < SHA1_DIGEST_SIZ

[PATCH v3 09/14] x86: Secure Launch kernel late boot stub

2021-08-09 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing settings for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 460 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 468 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 3e625c6..d6ee904 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -80,6 +80,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-$(CONFIG_STACKTRACE)   += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 055a834..482bd76 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include 
 #include 
 #include 
@@ -976,6 +977,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index 000..f91f0b5
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,460 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2021, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags;
+static struct sl_ap_wake_info ap_wake_info;
+static u64 evtlog_addr;
+static u32 evtlog_size;
+static u64 vtd_pmr_lo_size;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+EXPORT_SYMBOL(slaunch_get_flags);
+
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(&txt_dmar[0]);
+}
+
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   void *heap;
+   u64 base, size, offset = 0;
+   int i;
+
+   if (type > TXT_SINIT_TABLE_MAX)
+   slaunch_txt_reset(txt,
+   "Error invalid table type for early heap walk\n",
+   SL_ERROR_HEAP_WALK);
+
+   memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
+   memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
+
+   /* Iterate over heap tables looking for table of "type" */
+   for (i = 0; i < type; i++) {
+   base += offset;
+   heap = early_memremap(base, sizeof(u64));
+   if (!heap)
+   slaunch_txt_reset(txt,
+   "Error early_memremap of heap for heap walk\n",
+   SL_ERROR_HEAP_MAP);
+
+ 

[PATCH v3 04/14] Documentation/x86: Secure Launch kernel documentation

2021-08-09 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 Documentation/x86/index.rst |   1 +
 Documentation/x86/secure-launch.rst | 714 
 2 files changed, 715 insertions(+)
 create mode 100644 Documentation/x86/secure-launch.rst

diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index 3830483..e5a058f 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -31,6 +31,7 @@ x86-specific Documentation
tsx_async_abort
buslock
usb-legacy-support
+   secure-launch
i386/index
x86_64/index
sva
diff --git a/Documentation/x86/secure-launch.rst b/Documentation/x86/secure-launch.rst
new file mode 100644
index 000..cc5995c6
--- /dev/null
+++ b/Documentation/x86/secure-launch.rst
@@ -0,0 +1,714 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=
+Secure Launch
+=
+
+Background
+==
+
+The Trusted Computing Group (TCG) architecture defines two methods in
+which the target operating system is started, aka launched, on a system
+for which Intel and AMD provide implementations. These two launch types
+are static launch and dynamic launch. Static launch is referred to as
+such because it happens at one fixed point, at system startup, during
+the defined life-cycle of a system. Dynamic launch is referred to as
+such because it is not limited to being done once nor bound to system
+startup. It can in fact happen at any time without incurring/requiring an
+associated power event for the system. Since dynamic launch can happen
+at any time, this results in dynamic launch being split into two types of
+its own. The first is referred to as an early launch, where the dynamic
+launch is done in conjunction with the static launch of the system. The
+second is referred to as a late launch, where a dynamic launch is
+initiated after the static launch was fully completed and the system was
+under the control of some target operating system or run-time kernel.
+These two top-level launch methods, static launch and dynamic launch
+provide different models for establishing the launch integrity, i.e. the
+load-time integrity, of the target operating system. When cryptographic
+hashing is used to create an integrity assessment for the launch
+integrity, then for a static launch this is referred to as the Static
+Root of Trust for Measurement (SRTM) and for dynamic launch it is
+referred to as the Dynamic Root of Trust for Measurement (DRTM).
+
+The reasoning for needing the two integrity models is driven by the fact
+that these models leverage what is referred to as a "transitive trust".
+A transitive trust is commonly referred to as a "trust chain", which is
+created through the process of an entity making an integrity assessment
+of another entity and upon success transfers control to the new entity.
+This process is then repeated by each entity until the Trusted Computing
+Base (TCB) of the system has been established. A challenge for transitive
+trust is that the process is susceptible to cumulative error, which in
+this case means inaccurate or improper integrity assessments. The way
+to address cumulative error is to reduce the number of instances that
+can introduce error into the process. In this case that means reducing
+the number of entities involved in the
+transitive trust. It is not possible to reduce the number of firmware
+components or the boot loader(s) involved during static launch. This is
+where dynamic launch comes in, as it introduces the concept for a CPU to
+provide an instruction that results in a transitive trust starting with
+the CPU doing an integrity assessment of a special loader that can then
+start a target operating system. This reduces the trust chain down to
+the CPU, a special loader, and the target operating system. It is also
+why it is said that the DRTM is rooted in hardware since the CPU is what
+does the first integrity assessment, i.e. the first measurement, in the
+trust chain.
+
+Overview
+
+
+Prior to the start of the TrenchBoot project, the only active Open
+Source project supporting dynamic launch was Intel's Tboot project to
+support their implementation of dynamic launch known as Intel Trusted
+eXecution Technology (TXT). The approach taken by Tboot was to provide
+an exokernel that could handle the launch protocol implemented by
+Intel's special loader, the SINIT Authenticated Code Module (ACM [3]_)
+and remained in memory to manage the SMX CPU mode that a dynamic launch
+would put a system in. While it is not precluded from being used for doing
+a late launch, Tboot's primary use case was to be used as an early
+launch solution. As a result the TrenchBoot project started the
+development of the Secure Launch kernel feature to provide a more
+generalized approach. The focus of the effort is twofold, the first i

[PATCH v3 14/14] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch

2021-08-09 Thread Ross Philipson
The Secure Launch MLE environment uses PCRs that are only accessible from
the DRTM locality 2. By default the TPM drivers always initialize the
locality to 0. When a Secure Launch is in progress, initialize the
locality to 2.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index ddaeceb..48b9351 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include "tpm.h"
 
 DEFINE_IDR(dev_nums_idr);
@@ -34,12 +35,20 @@
 
 static int tpm_request_locality(struct tpm_chip *chip)
 {
-   int rc;
+   int rc, locality;
 
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   if (slaunch_get_flags() & SL_FLAG_ACTIVE) {
+   dev_dbg(&chip->dev, "setting TPM locality to 2 for MLE\n");
+   locality = 2;
+   } else {
+   dev_dbg(&chip->dev, "setting TPM locality to 0\n");
+   locality = 0;
+   }
+
+   rc = chip->ops->request_locality(chip, locality);
if (rc < 0)
return rc;
 
-- 
1.8.3.1



[PATCH v3 11/14] kexec: Secure Launch kexec SEXIT support

2021-08-09 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 71 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 75 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index f91f0b5..60a193a 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -458,3 +458,74 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+void slaunch_finalize(int do_sexit)
+{
+   void __iomem *config;
+   u64 one = TXT_REGVALUE_ONE, val;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* these mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0) {
+   pr_emerg("Error TXT SEXIT must be called on CPU 0\n");
+   return;
+   }
+
+   /* Enable SMX mode (CR4.SMXE) so the GETSEC[SEXIT] leaf can be executed */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_emerg("TXT SEXIT complete.");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 4b34a9a..fdf0a27 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 
@@ -1179,6 +1180,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
1.8.3.1
