Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-13 Thread Andy Lutomirski
On May 13, 2016 2:42 AM, "Dr. Greg Wettstein"  wrote:
>
> On Sun, May 08, 2016 at 06:32:10PM -0700, Andy Lutomirski wrote:
>
> Good morning, running behind on e-mail this week but wanted to get
> some reflections out on Andy's well taken comments and concerns.
>
> > On May 8, 2016 2:59 AM, "Dr. Greg Wettstein"  wrote:
> > >
> > >
> > > This now means the security of SGX on 'unlocked' platforms, at least
> > > from a trust perspective, will be dependent on using TXT so as to
> > > provide a hardware root of trust on which to base the SGX trust model.
>
> > Can you explain what you mean by "trust"?  In particular, what kind
> > of "trust" would you have with a verified or trusted boot plus
> > verified SGX launch root key that you would not have in the complete
> > absence of hardware launch control?
> >
> > I've heard multiple people say they want launch control but I've
> > never heard a cogent explanation of a threat model in which it's
> > useful.
>
> Trust means a lot of things and does not always have a 'threat model'
> associated with it.  Security is all about the intersection of
> technology and economics; moving forward it will be driven by
> contractual obligations and re-insurance requirements.  That is at
> least what we see happening and are involved with.
>
> In a single root of trust, as was originally developed for SGX, trust
> consists of a bi-lateral contractual guarantee between Intel and a
> software developer: a guarantee by Intel that an enclave launched
> under the security of its root key will have prescribed integrity and
> confidentiality guarantees.  In reciprocation the developer delivers
> to Intel an implied trust that it will not use those guarantees to
> protect illicit or malicious software behavior.
>
> That may not have implications with respect to a specific threat model
> but it could have significance in a re-insurance model where a client
> of the software environment can indicate that they had an expectation
> that code/data which was committed to this environment was
> appropriately protected.  The refusal to launch, ie. a launch control
> policy, provides a hardware implementation of that trust guarantee.
>

If so, this means that the client and/or their lawyers screwed up
severely.  The certification that integrity and confidentiality are
protected has *nothing* to do with launch control.  (It's trivial to
break, too - just run the code in an SGX simulator or tweak the MACs
and launch in debug mode.  The code can't directly tell the
difference.)  In this regard, SGX is very much like TPM-based
security.  With a TPM, even if the hardware is somehow fully protected
against physical attacks, the TPM will not prevent corrupted or
malicious code from running.  At best, it prevents such code from
unsealing things protected by PCRs or from attesting to PCR state.
Similarly, SGX won't prevent code from running on a corrupted platform
(e.g. one that simulates SGX instructions).  Instead, it prevents
EREPORT and EGETKEY from deriving the expected keys in such an
environment.
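
To make the distinction concrete, here is a rough sketch of the check
involved; the structure is heavily simplified relative to the real SDM
REPORT layout and cmac_verify() is a hypothetical helper, but it shows
why the result is bound to the physical CPU rather than to launch
control:

#include <stddef.h>
#include <stdint.h>

/* Heavily simplified REPORT; not the real SDM layout. */
struct sgx_report {
        uint8_t  mrenclave[32];   /* measurement of the enclave contents   */
        uint8_t  mrsigner[32];    /* hash of the key that signed SIGSTRUCT */
        uint64_t attributes;      /* DEBUG, MODE64BIT, ...                 */
        uint8_t  report_data[64]; /* caller-supplied binding data          */
        uint8_t  mac[16];         /* CMAC under a CPU-derived report key   */
};

/* Hypothetical helper: AES-128 CMAC verification. */
int cmac_verify(const uint8_t key[16], const void *msg, size_t len,
                const uint8_t mac[16]);

/*
 * The report key fed in here comes from EGETKEY, which derives it from
 * secrets fused into the physical package plus the identity of the
 * verifying enclave.  A simulator, or a debug-mode launch with tweaked
 * MACs, produces a report whose MAC fails this check -- the code still
 * runs, but it cannot produce credentials the verifier will accept.
 */
int verify_local_report(const struct sgx_report *r,
                        const uint8_t report_key[16])
{
        return cmac_verify(report_key, r,
                           offsetof(struct sgx_report, mac), r->mac);
}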

The correct way to do this is using the Quoting Enclave from Intel or
using a differently bootstrapped approach -- see below.  The verifier
and Intel have an agreement and share some keys, and quoting actually
checks something relevant.

> All of this changes in a future which includes unlocked identity
> modulus signatures.

No, the only thing that changes is that the ability for the
lawyers or security architects to screw up in this particular way is
reduced.  EREPORT still checks signatures.

>
> > > I would assume that everyone is using signed Launch Control Policies
> > > (LCP) as we are.  This means that TXT/tboot already has access to the
> > > public key which is used for the LCP data file signature.  It would
> > > seem logical to have tboot compute the signature on that public key
> > > and program that signature into the module signature registers.  That
> > > would tie the hardware root of trust to the SGX root of trust.
>
> > Now I'm confused.  TXT, in theory*, lets you establish a good root
> > of trust for TPM PCR measurements.  So, with TXT, if you had
> > one-shot launch control MSRs, you could attest that you've locked
> > the launch control policy.
>
> Correct.
>
> In the absence of launch control with an authoritative root of trust
> an alternative trust root has to be established.  Integrating the load
> of the SGX identity signatures into tboot provides a framework where
> the trust guarantees discussed previously can be tied to the identity
> of the hardware platform and its provisioner.
>
> This in turn provides a framework for contractual security guarantees
> between the platform provisioner and potential clients.

No, at most it provides a way for the platform provisioner and the
client to mess up.

I can see two ways to get these types of assurances.

1. Use Intel's provisioning and quoting mechanism and sign Intel's

Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-13 Thread Dr. Greg Wettstein
On Sun, May 08, 2016 at 06:32:10PM -0700, Andy Lutomirski wrote:

Good morning, running behind on e-mail this week but wanted to get
some reflections out on Andy's well taken comments and concerns.

> On May 8, 2016 2:59 AM, "Dr. Greg Wettstein"  wrote:
> >
> >
> > This now means the security of SGX on 'unlocked' platforms, at least
> > from a trust perspective, will be dependent on using TXT so as to
> > provide a hardware root of trust on which to base the SGX trust model.

> Can you explain what you mean by "trust"?  In particular, what kind
> of "trust" would you have with a verified or trusted boot plus
> verified SGX launch root key that you would not have in the complete
> absence of hardware launch control?
>
> I've heard multiple people say they want launch control but I've
> never heard a cogent explanation of a threat model in which it's
> useful.

Trust means a lot of things and does not always have a 'threat model'
associated with it.  Security is all about the intersection of
technology and economics; moving forward it will be driven by
contractual obligations and re-insurance requirements.  That is at
least what we see happening and are involved with.

In a single root of trust, as was originally developed for SGX, trust
consists of a bi-lateral contractual guarantee between Intel and a
software developer: a guarantee by Intel that an enclave launched
under the security of its root key will have prescribed integrity and
confidentiality guarantees.  In reciprocation the developer delivers
to Intel an implied trust that it will not use those guarantees to
protect illicit or malicious software behavior.

That may not have implications with respect to a specific threat model
but it could have significance in a re-insurance model where a client
of the software environment can indicate that they had an expectation
that code/data which was committed to this environment was
appropriately protected.  The refusal to launch, ie. a launch control
policy, provides a hardware implementation of that trust guarantee.

All of this changes in a future which includes unlocked identity
modulus signatures.

> > I would assume that everyone is using signed Launch Control Policies
> > (LCP) as we are.  This means that TXT/tboot already has access to the
> > public key which is used for the LCP data file signature.  It would
> > seem logical to have tboot compute the signature on that public key
> > and program that signature into the module signature registers.  That
> > would tie the hardware root of trust to the SGX root of trust.

> Now I'm confused.  TXT, in theory*, lets you establish a good root
> of trust for TPM PCR measurements.  So, with TXT, if you had
> one-shot launch control MSRs, you could attest that you've locked
> the launch control policy.

Correct.

In the absence of launch control with an authoritative root of trust
an alternative trust root has to be established.  Integrating the load
of the SGX identity signatures into tboot provides a framework where
the trust guarantees discussed previously can be tied to the identity
of the hardware platform and its provisioner.

This in turn provides a framework for contractual security guarantees
between the platform provisioner and potential clients.

> But what do you gain by doing such a thing?  All you're actually
> attesting is that you locked it until the next reboot.  Someone who
> subsequently compromises you can reboot you, bypass TXT on the next
> boot, and launch any enclave they want.  In any event, SGX is
> supposed to make it so that your enclaves remain secure regardless
> of what happens to the kernel, so I'm at a loss for what you're
> trying to do.

As a Trusted Execution Environment (TEE) the notion of SGX is that it
can run code and data in an Iago threat environment, where the
hardware and operating system have been lost to an aggressor.  You can
technically provide that guarantee in the original SGX root of trust
model but that changes in the presence of an unlocked identity model.

The TPM2 architecture will be the hardware security model moving
forward.  If you look at the new attestation model the hardware
reference quote includes an irreversible clock field in order to
defeat the threat model you describe above with a malware induced
platform reboot.
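
For reference, the field in question looks roughly like this in the
TPM 2.0 structures specification (layout abbreviated here), together
with an illustrative sketch of the freshness policy a counter-party
might apply:

#include <stdbool.h>
#include <stdint.h>

/* Abbreviated TPMS_CLOCK_INFO as carried inside a TPM 2.0 quote. */
struct tpms_clock_info {
        uint64_t clock;          /* TPM time in ms, non-decreasing           */
        uint32_t reset_count;    /* bumped on every TPM reset (reboot)       */
        uint32_t restart_count;  /* bumped on restart/resume events          */
        uint8_t  safe;           /* clock guaranteed not to have rolled back */
};

/*
 * Illustrative policy only: a counter-party that recorded the clock
 * info from the last trusted attestation can refuse a new quote whose
 * reset_count has moved, closing the reboot-and-relaunch window.
 */
bool quote_is_fresh(const struct tpms_clock_info *now,
                    const struct tpms_clock_info *last_good)
{
        return now->reset_count == last_good->reset_count &&
               now->restart_count == last_good->restart_count &&
               now->clock >= last_good->clock;
}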

Beyond that, at least in our work, we directly tie the launch of our
security supervisor to an integrity chain which must be delivered from
a functional TXT/TPM implementation.  Our platform won't boot and
deliver its qualifying attestation to a security counter-party if the
TXT based boot was bypassed.

Given Microsoft's findings in their follow-on paper to their Haven
work, a hardware/OS root of trust model is still important in a single
root of trust model.  At least until Intel comes up with a
constant-time hardware memory model, which is what we expect to see if
Intel continues to move forward with refinements to SGX and the notion
of a

Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-12 Thread Dr. Greg Wettstein
On Mon, May 09, 2016 at 08:27:04AM +0200, Thomas Gleixner wrote:

Good morning.

> On Fri, 6 May 2016, Jarkko Sakkinen wrote:
> > I fully understand if you (and others) want to keep this standpoint but
> > what if we could get it to staging after I've revised it with suggested
> > changes and internal changes in my TODO? Then it would not pollute the
> > mainline kernel but still would be easily available for experimentation.
>
> This should not go to staging at all. Either this is going to be a
> real useful driver or we just keep it out of tree.

> How are we supposed to experiment with that if there is no launch
> enclave for Linux available?

Build one in a simulator where an independent root enclave key can be
established.  At least that's the approach we are working on with
Jarkko's patches.

Intel does have an instruction-accurate simulator; Microsoft used it
for the work which was reported in the Haven paper.  I believe the Air
Force Academy used that simulator for their work on SGX as well.

As with other SGX-related issues, it is unclear why access to the
simulator was/is restricted.  Given that Gen6 hardware is now emerging
there would seem to be even less reason not to have the simulator
generically available to allow implementations to be tested.

> Thanks,
> 
>   tglx

Have a good day.

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"Everything should be made as simple as possible, but not simpler."
-- Albert Einstein


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-06 Thread Andy Lutomirski
On Fri, May 6, 2016 at 4:23 AM, Jarkko Sakkinen
 wrote:
> On Wed, Apr 27, 2016 at 10:18:05AM +0200, Ingo Molnar wrote:
>>
>> * Andy Lutomirski  wrote:
>>
>> > > What new syscalls would be needed for ssh to get all this support?
>> >
>> > This patchset or similar, plus some user code and an enclave to use.
>> >
>> > Sadly, on current CPUs, you also need Intel to bless the enclave.  It 
>> > looks like
>> > new CPUs might relax that requirement.
>>
>> That looks like a fundamental technical limitation in my book - to an open 
>> source
>> user this is essentially a very similar capability as tboot: it only allows 
>> the
>> execution of externally blessed static binary blobs...
>>
>> I don't think we can merge any of this upstream until it's clear that the 
>> hardware
>> owner running open-source user-space can also freely define/start his own 
>> secure
>> enclaves without having to sign the enclave with any external party. I.e.
>> self-signed enclaves should be fundamentally supported as well.
>
> Post Skylake we will have a set of MSRs for defining your own root of
> trust: IA32_SGXLEPUBKEYHASH.
>
> Andy had a concern that you could set root of trust multiple times,
> which could lead to potential attack scenarios. These MSRs are one-shot.
> ENCLS will fail if the launch control is locked. There's no possibility
> to have a root of trust that is unlocked.

If this is actually true, can you ask the architecture folks to
clarify their manual?

The MSR description in table 35-2 says "Write permitted if
CPUID.(EAX=12H,ECX=0H): EAX[0]=1 && IA32_FEATURE_CONTROL[17] = 1 &&
IA32_FEATURE_CONTROL[0] = 1"

39.1.4 says "If IA32_FEATURE_CONTROL is locked with bit 17 set,
IA32_SGXLEPUBKEYHASH MSRs are reconfigurable (writeable). If either
IA32_FEATURE_CONTROL is not locked or
bit 17 is clear, the MSRs are read only. By leaving these MSRs
writable, system SW or a VMM can support a plurality of Launch
Enclaves for hosting multiple execution environments."

This does not sound like one-shot to me.  It sounds quite clear, in
fact, that it's *not* one-shot so a "plurality" of these things are
supported.
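
Read that way, the SDM rule reduces to a simple predicate.  A sketch of
it, with MSR indices as published in the SDM at the time and rdmsr64()
standing in as a hypothetical wrapper for whatever MSR-read primitive
the OS provides:

#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_FEATURE_CONTROL        0x0000003a
#define FC_LOCKED                       (1ull << 0)   /* lock bit         */
#define FC_SGX_LAUNCH_CONTROL           (1ull << 17)  /* LE hash writable */

#define MSR_IA32_SGXLEPUBKEYHASH0       0x0000008c    /* ...HASH3 = 0x8f  */

/* Hypothetical helper wrapping rdmsr/rdmsrl on the target OS. */
uint64_t rdmsr64(uint32_t msr);

/*
 * Per the quoted 39.1.4 text: the launch-enclave public key hash MSRs
 * remain writable only when IA32_FEATURE_CONTROL is locked *with* bit
 * 17 set.  That is the opposite of one-shot -- firmware opts in to
 * leaving them reconfigurable so the OS or a VMM can host several
 * launch enclaves.
 */
bool sgx_le_pubkeyhash_writable(void)
{
        uint64_t fc = rdmsr64(MSR_IA32_FEATURE_CONTROL);

        return (fc & FC_LOCKED) && (fc & FC_SGX_LAUNCH_CONTROL);
}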

--Andy


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-06 Thread Jarkko Sakkinen
On Wed, Apr 27, 2016 at 10:18:05AM +0200, Ingo Molnar wrote:
> 
> * Andy Lutomirski  wrote:
> 
> > > What new syscalls would be needed for ssh to get all this support?
> > 
> > This patchset or similar, plus some user code and an enclave to use.
> > 
> > Sadly, on current CPUs, you also need Intel to bless the enclave.  It looks 
> > like 
> > new CPUs might relax that requirement.
> 
> That looks like a fundamental technical limitation in my book - to an open 
> source 
> user this is essentially a very similar capability as tboot: it only allows 
> the 
> execution of externally blessed static binary blobs...
> 
> I don't think we can merge any of this upstream until it's clear that the 
> hardware 
> owner running open-source user-space can also freely define/start his own 
> secure 
> enclaves without having to sign the enclave with any external party. I.e. 
> self-signed enclaves should be fundamentally supported as well.

Post Skylake we will have a set of MSRs for defining your own root of
trust: IA32_SGXLEPUBKEYHASH.

Andy had a concern that you could set root of trust multiple times,
which could lead to potential attack scenarios. These MSRs are one-shot.
ENCLS will fail if the launch control is locked. There's no possibility
to have a root of trust that is unlocked.

> Thanks,
> 
>   Ingo

/Jarkko


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-06 Thread Pavel Machek
On Fri 2016-05-06 01:52:04, Jarkko Sakkinen wrote:
> On Mon, May 02, 2016 at 11:37:52AM -0400, Austin S. Hemmelgarn wrote:
> > On 2016-04-29 16:17, Jarkko Sakkinen wrote:
> > >On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> > >>On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> > >>>Intel(R) SGX is a set of CPU instructions that can be used by
> > >>>applications to set aside private regions of code and data.  The code
> > >>>outside the enclave is disallowed to access the memory inside the
> > >>>enclave by the CPU access control.
> > >>>
> > >>>The firmware uses PRMRR registers to reserve an area of physical memory
> > >>>called Enclave Page Cache (EPC). There is a hardware unit in the
> > >>>processor called Memory Encryption Engine. The MEE encrypts and decrypts
> > >>>the EPC pages as they enter and leave the processor package.
> > >>
> > >>What are non-evil use cases for this?
> > >
> > >I'm not sure what you mean by non-evil.
> > >
> > I would think that this should be pretty straightforward.  Pretty much every
> > security technology integrated in every computer in existence has the
> > potential to be used by malware for various purposes.  Based on a cursory
> > look at SGX, it is pretty easy to figure out how to use this to hide
> > arbitrary code from virus scanners and the OS itself unless you have some
> > way to force everything to be a debug enclave, which entirely defeats the
> > stated purpose of the extensions.  I can see this being useful for tight
> > embedded systems.  On a desktop which I have full control of physical access
> > to though, it's something I'd immediately turn off, because the risk of
> > misuse is so significant (I've done so on my new Thinkpad L560 too, although
> > that's mostly because Linux doesn't support it yet).
> 
> The code in an enclave binary is in clear text, so it does not really
> allow you to completely hide any code. It's a signed binary, not an
> encrypted binary.

Umm. Now you are evil.

Yes, the code that starts in the enclave may not be encrypted, but I'm
pretty sure the enclave will download some more code from a remote
server after attestation... x86 or some kind of interpreted code.

(But of course we already know that the technology is evil, as only
Intel can use it, see Ingo's reply.)
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-05 Thread Jarkko Sakkinen
On Mon, May 02, 2016 at 11:37:52AM -0400, Austin S. Hemmelgarn wrote:
> On 2016-04-29 16:17, Jarkko Sakkinen wrote:
> >On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> >>On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> >>>Intel(R) SGX is a set of CPU instructions that can be used by
> >>>applications to set aside private regions of code and data.  The code
> >>>outside the enclave is disallowed to access the memory inside the
> >>>enclave by the CPU access control.
> >>>
> >>>The firmware uses PRMRR registers to reserve an area of physical memory
> >>>called Enclave Page Cache (EPC). There is a hardware unit in the
> >>>processor called Memory Encryption Engine. The MEE encrypts and decrypts
> >>>the EPC pages as they enter and leave the processor package.
> >>
> >>What are non-evil use cases for this?
> >
> >I'm not sure what you mean by non-evil.
> >
> I would think that this should be pretty straightforward.  Pretty much every
> security technology integrated in every computer in existence has the
> potential to be used by malware for various purposes.  Based on a cursory
> look at SGX, it is pretty easy to figure out how to use this to hide
> arbitrary code from virus scanners and the OS itself unless you have some
> way to force everything to be a debug enclave, which entirely defeats the
> stated purpose of the extensions.  I can see this being useful for tight
> embedded systems.  On a desktop which I have full control of physical access
> to though, it's something I'd immediately turn off, because the risk of
> misuse is so significant (I've done so on my new Thinkpad L560 too, although
> that's mostly because Linux doesn't support it yet).

The code in an enclave binary is in clear text, so it does not really
allow you to completely hide any code. It's a signed binary, not an
encrypted binary.

/Jarkko


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-04 Thread Pavel Machek
Hi!

> Good morning, I hope everyone's day is starting out well.

:-). Rainy day here.

> > > In the TL;DR department I would highly recommend that anyone
> > > interested in all of this read MIT's 170+ page review of the
> > > technology before jumping to any conclusions :-)
> 
> > Would you have links for 1-5?
> 
> First off, my apologies to the list, as I loathe personal inaccuracy:
> the MIT review paper is only 117 pages long.  I was typing the last
> e-mail at 0405 in the morning and was scrambling for the opportunity
> to get 50 minutes of sleep so my proofreading was sloppy... :-)

Thanks a lot for the links, I'd still say it was more accurate than
average for the lkml.

Best regards,
Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-04 Thread Dr. Greg Wettstein
On Tue, May 03, 2016 at 05:38:40PM +0200, Pavel Machek wrote:

> Hi!

Good morning, I hope everyone's day is starting out well.

> > I told my associates the first time I reviewed this technology that
> > SGX has the ability to be a bit of a Pandora's box and it seems to be
> > following that course.

> Can you elaborate on the Pandora's box? System administrator should
> be able to disable SGX on the system, and use system to do anything
> that could be done with the older CPUs, right?

Correct, there is certainly the on/off switch.

I viewed it as a Pandora's box secondary to the fact that it was the
first commodity-based shrouded TEE that had the opportunity for
significant market penetration.  As such, and secondary to its
technical characteristics, it has the potential for both good and bad,
and like TXT in the last decade it was/is bound to induce significant
debate over software freedom and potential monopolistic practices.

> > Intel is obviously cognizant of the risk surrounding illicit uses of
> > this technology since it clearly calls out that, by agreeing to have
> > their key signed, a developer agrees to not implement nefarious or
> > privacy invasive software.  Given the known issues that Certificate

> Yeah, that's likely to work ... not :-(. "It is not spyware, it is
> just collecting some anonymous statistics."

The notion that an enclave can look out but cannot be looked into
introduces privacy issues into the conversation; see my reflections on
Pandora's box... :-)

> > domination and control.  They probably have enough on their hands with
> > attempting to convert humanity to FPGA's and away from devices which
> > are capable of maintaining a context of execution... :-)

> Heh. FPGAs are not designed to replace CPUs anytime soon... And
> probably never.

Never is a long time.

Intel has clearly drawn a very significant line in the sand with
respect to FPGA technology if you read Krzanich's reflections
regarding his re-organization of Intel.  Whether or not they are
successful, they are going to declare a demarcation point with respect
to IOT devices which has the potential to impact the industry in
general and security in particular.  On one side are going to be FPGA
based devices and on the other side devices with a context of
execution.

It doesn't require a long stretch of the imagination to see hordes of
IOT devices with specific behaviors burned into them which export
sensor or telemetry data upstream.  Depending on how successful they
are with the Altera acquisition there are potentially positive
economic security factors which could be in play.

All of that is certainly not a conversation specific to SGX though.

> > In the TL;DR department I would highly recommend that anyone
> > interested in all of this read MIT's 170+ page review of the
> > technology before jumping to any conclusions :-)

> Would you have links for 1-5?

First off, my apologies to the list, as I loathe personal inaccuracy:
the MIT review paper is only 117 pages long.  I was typing the last
e-mail at 0405 in the morning and was scrambling for the opportunity
to get 50 minutes of sleep so my proofreading was sloppy... :-)

The following should provide ample bedstand reading material for those
interested in SGX and TEE's:

1.) HASP/SGX paper:
https://software.intel.com/sites/default/files/article/413939/hasp-2013-innovative-technology-for-attestation-and-sealing.pdf

2.) IAGO threat model:
https://cseweb.ucsd.edu/~hovav/dist/iago.pdf

3.) Haven paper:
http://research.microsoft.com/pubs/223450/osdi2014-haven.pdf

4.) Controlled sidechannel attacks:
http://research.microsoft.com/pubs/246400/ctrlchannels-oakland-2015.pdf

https://software.intel.com/en-us/blogs/2015/05/19/look-both-ways-and-watch-out-for-side-channels

5.) MIT/SGX analysis:
https://eprint.iacr.org/2016/086.pdf

> Thanks,
>   Pavel

No problem, enjoy the reading :-)

Have a good day.

Greg

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"One problem with monolithic business structures is losing sight
 of the fundamental importance of mathematics.  Consider committees;
 commonly forgotten is the relationship that given a projection of N
 individuals to complete an assignment the most effective number of
 people to assign to the committee is given by f(N) = N - (N-1)."
-- Dr. G.W. Wettstein
   Guerrilla Tactics for Corporate Survival


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-03 Thread Dr. Greg Wettstein
On May 2, 11:37am, "Austin S. Hemmelgarn" wrote:
} Subject: Re: [PATCH 0/6] Intel Secure Guard Extensions

Good morning, I hope the day is starting out well for everyone.

> On 2016-04-29 16:17, Jarkko Sakkinen wrote:
> > On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> >> On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> >>> Intel(R) SGX is a set of CPU instructions that can be used by
> >>> applications to set aside private regions of code and data.  The code
> >>> outside the enclave is disallowed to access the memory inside the
> >>> enclave by the CPU access control.
> >>>
> >>> The firmware uses PRMRR registers to reserve an area of physical memory
> >>> called Enclave Page Cache (EPC). There is a hardware unit in the
> >>> processor called Memory Encryption Engine. The MEE encrypts and decrypts
> >>> the EPC pages as they enter and leave the processor package.
> >>
> >> What are non-evil use cases for this?
> >
> > I'm not sure what you mean by non-evil.

> I would think that this should be pretty straightforward.  Pretty
> much every security technology integrated in every computer in
> existence has the potential to be used by malware for various
> purposes.  Based on a cursory look at SGX, it is pretty easy to
> figure out how to use this to hide arbitrary code from virus
> scanners and the OS itself unless you have some way to force
> everything to be a debug enclave, which entirely defeats the stated
> purpose of the extensions.  I can see this being useful for tight
> embedded systems.  On a desktop which I have full control of
> physical access to though, it's something I'd immediately turn off,
> because the risk of misuse is so significant (I've done so on my new
> Thinkpad L560 too, although that's mostly because Linux doesn't
> support it yet).

We were somewhat surprised to see Intel announce the SGX driver for
Linux without a bit more community preparation given the nature of the
technology.  But, given the history of opacity around this technology,
it probably isn't surprising.  We thought it may be useful to offer a
few thoughts on this technology as discussion around integrating the
driver moves forward.

We have been following and analyzing this technology since the first
HASP paper was published detailing its development.  We have been
working to integrate, at least at the simulator level, portions of
this technology in solutions we deliver.  We have just recently begun
to acquire validated reference platforms to test these
implementations.

I told my associates the first time I reviewed this technology that
SGX has the ability to be a bit of a Pandora's box and it seems to be
following that course.

SGX belongs to a genre of solutions collectively known as Trusted
Execution Environments (TEE's).  The intent of these platforms is to
support data and application confidentiality and integrity in the face
of an Iago threat environment, ie. a situation where a security
aggressor has complete control of the hardware and operating system,
up to and including the OS 'lying' about what it is doing to the
application.

There are those, including us, who question the quality of the
security guarantee that can be provided but that doesn't diminish the
usefulness or demand for such technology.  If one buys the notion that
all IT delivery will move into the 'cloud' there is certainly a
rationale for a guarantee that clients can push data into a cloud
without concern for whether or not the platform is compromised or
being used to spy on the user's application or data.

As is the case with any security technology, the only way that such a
guarantee can be made is to have a definable origin or root of trust.
At the current time, and this may be the biggest problem with SGX, the
only origin for that root of trust is Intel itself.  Given the nature
and design of SGX this is actually a bilateral root of trust since
Intel, by signing a developer's enclave key, is trusting the developer
to agree to do nothing nefarious while being shrouded by the security
guarantee that SGX provides.

It would be helpful and instructive for anyone involved in this debate
to review the following URL which details Intel's SGX licensing
program:

https://software.intel.com/en-us/articles/intel-sgx-product-licensing

It details what a developer is required to do in order to obtain an
enclave signing key which will be recognized by an SGX-capable
processor.  Without a valid signing key an SGX-capable system will
only launch an enclave in 'debug' mode, which allows the enclave to be
single-stepped and examined in a debugger, which obviously invalidates
any TEE-based security guarantees which SGX is designed to effect.

Intel is obviously cognizant of the risk surrounding illicit uses of
this technology

Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-02 Thread Austin S. Hemmelgarn

On 2016-04-29 16:17, Jarkko Sakkinen wrote:

On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:

On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:

Intel(R) SGX is a set of CPU instructions that can be used by
applications to set aside private regions of code and data.  The code
outside the enclave is disallowed to access the memory inside the
enclave by the CPU access control.

The firmware uses PRMRR registers to reserve an area of physical memory
called Enclave Page Cache (EPC). There is a hardware unit in the
processor called Memory Encryption Engine. The MEE encrypts and decrypts
the EPC pages as they enter and leave the processor package.


What are non-evil use cases for this?


I'm not sure what you mean by non-evil.

I would think that this should be pretty straightforward.  Pretty much 
every security technology integrated in every computer in existence has 
the potential to be used by malware for various purposes.  Based on a 
cursory look at SGX, it is pretty easy to figure out how to use this to 
hide arbitrary code from virus scanners and the OS itself unless you 
have some way to force everything to be a debug enclave, which entirely 
defeats the stated purpose of the extensions.  I can see this being 
useful for tight embedded systems.  On a desktop which I have full 
control of physical access to though, it's something I'd immediately 
turn off, because the risk of misuse is so significant (I've done so on 
my new Thinkpad L560 too, although that's mostly because Linux doesn't 
support it yet).



Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-29 Thread Jarkko Sakkinen
On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> > Intel(R) SGX is a set of CPU instructions that can be used by
> > applications to set aside private regions of code and data.  The code
> > outside the enclave is disallowed to access the memory inside the
> > enclave by the CPU access control.
> > 
> > The firmware uses PRMRR registers to reserve an area of physical memory
> > called Enclave Page Cache (EPC). There is a hardware unit in the
> > processor called Memory Encryption Engine. The MEE encrypts and decrypts
> > the EPC pages as they enter and leave the processor package.
> 
> What are non-evil use cases for this?

Virtual TPMs for containers/guests would be one such use case.

/Jarkko


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-29 Thread Jarkko Sakkinen
On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> > Intel(R) SGX is a set of CPU instructions that can be used by
> > applications to set aside private regions of code and data.  The code
> > outside the enclave is disallowed to access the memory inside the
> > enclave by the CPU access control.
> > 
> > The firmware uses PRMRR registers to reserve an area of physical memory
> > called Enclave Page Cache (EPC). There is a hardware unit in the
> > processor called Memory Encryption Engine. The MEE encrypts and decrypts
> > the EPC pages as they enter and leave the processor package.
> 
> What are non-evil use cases for this?

I'm not sure what you mean by non-evil.

> 
>   Pavel
> 
> -- 
> (english) http://www.livejournal.com/~pavelmachek
> (cesky, pictures) 
> http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

/Jarkko


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-27 Thread Andy Lutomirski
On Apr 27, 2016 1:18 AM, "Ingo Molnar"  wrote:
>
>
> * Andy Lutomirski  wrote:
>
> > > What new syscalls would be needed for ssh to get all this support?
> >
> > This patchset or similar, plus some user code and an enclave to use.
> >
> > Sadly, on current CPUs, you also need Intel to bless the enclave.  It looks 
> > like
> > new CPUs might relax that requirement.
>
> That looks like a fundamental technical limitation in my book - to an open 
> source
> user this is essentially a very similar capability as tboot: it only allows 
> the
> execution of externally blessed static binary blobs...
>
> I don't think we can merge any of this upstream until it's clear that the 
> hardware
> owner running open-source user-space can also freely define/start his own 
> secure
> enclaves without having to sign the enclave with any external party. I.e.
> self-signed enclaves should be fundamentally supported as well.

Certainly, if this were a *graphics* driver, airlied would refuse to
merge it without open source userspace available.

We're all used to Intel sending patches that no one outside Intel can
test because no one has the hardware.  Heck, I recently sent a
vdso patch that *I* can't test.  But in this case I have the hardware
and there is no way that I can test it, and I don't like this at all.

See my earlier comments about not allowing user code to provide
EINITTOKEN.  Implementing that would mostly solve this problem, with
the big caveat that it may be impossible to implement that suggestion
until Intel changes its stance (which is clearly in progress, given
the recent SDM updates).

This could easily end up being a CNL-only feature in Linux.  (Or
whatever generation that change is in.)

--Andy


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-27 Thread Ingo Molnar

* Andy Lutomirski  wrote:

> > What new syscalls would be needed for ssh to get all this support?
> 
> This patchset or similar, plus some user code and an enclave to use.
> 
> Sadly, on current CPUs, you also need Intel to bless the enclave.  It looks 
> like 
> new CPUs might relax that requirement.

That looks like a fundamental technical limitation in my book - to an open 
source 
user this is essentially a very similar capability as tboot: it only allows the 
execution of externally blessed static binary blobs...

I don't think we can merge any of this upstream until it's clear that the 
hardware 
owner running open-source user-space can also freely define/start his own 
secure 
enclaves without having to sign the enclave with any external party. I.e. 
self-signed enclaves should be fundamentally supported as well.

Thanks,

Ingo


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-27 Thread Pavel Machek
Hi!

> > > Preventing cold boot attacks is really just icing on the cake.  The
> > > real point of this is to allow you to run an "enclave".  An SGX
> > > enclave has unencrypted code but gets access to a key that only it can
> > > access.  It could use that key to unwrap your ssh private key and sign
> > > with it without ever revealing the unwrapped key.  No one, not even
> > > root, can read enclave memory once the enclave is initialized and gets
> > > access to its personalized key.  The point of the memory encryption
> > > engine is to prevent even cold boot attacks from being used to read
> > > enclave memory.
> >
> > Ok, so the attacker can still access the "other" machine, but ok, key
> > is protected.
> >
> > But... that will mean that my ssh will need to be SGX-aware, and that
> > I will not be able to switch to AMD machine in future. ... or to other
> > Intel machine for that matter, right?
> 
> That's the whole point.  You could keep an unwrapped copy of the key
> offline so you could provision another machine if needed.
> 
> >
> > What new syscalls would be needed for ssh to get all this support?
> 
> This patchset or similar, plus some user code and an enclave to use.
> 
> Sadly, on current CPUs, you also need Intel to bless the enclave.  It
> looks like new CPUs might relax that requirement.

Umm. I'm afraid my evil meter just went over "smells evil" and "bit
evil" areas straight to "certainly looks evil".

> > > Replay Protected Memory Block.  It's a device that allows someone to
> > > write to it and confirm that the write happened and the old contents
> > > is no longer available.  You could use it to implement an enclave that
> > > checks a password for your disk but only allows you to try a certain
> > > number of times.
> >
> > Ookay... I guess I can get a fake Replay Protected Memory block, which
> > will confirm that write happened and not do anything from China, but
> > ok, if you put that memory on the CPU, you raise the bar to a "rather
> > difficult" (tm) level. Nice.
> 
> It's not so easy for the RPMB to leak things.  It would be much easier
> for it to simply not provide replay protection (i.e. more or less what
> the FBI asked from Apple: keep allowing guesses even though that
> shouldn't work).

Yup.

> > But that also means that when my CPU dies, I'll no longer be able to
> > access the encrypted data.
> 
> You could implement your own escrow policy and keep a copy in the
> safe.

And then Intel would have to bless my own escrow policy, which is,
realistically, not going to happen, right?

> > And, again, it means that quite complex new kernel-user interface will
> > be needed, right?
> 
> It's actually fairly straightforward, and the kernel part doesn't care
> what you use it for (the kernel part is the same for disk encryption
> and ssh, for example, except that disk encryption would care about
> replay protection, whereas ssh wouldn't).

So we end up with parts of the kernel we cannot change, and where we may
not even change the compiler. That means assembly. Hey, user, you have
freedom to change this code, except it will not work. That was called
TiVo before. We'd have security-relevant parts of the kernel where we
could not even fix security holes without Intel.

If anything, this is reason to switch to GPLv3.

I'm sorry. This is evil.

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-26 Thread Pavel Machek
On Tue 2016-04-26 21:59:52, One Thousand Gnomes wrote:
> > But... that will mean that my ssh will need to be SGX-aware, and that
> > I will not be able to switch to AMD machine in future. ... or to other
> > Intel machine for that matter, right?
> 
> I'm not privy to AMD's CPU design plans.
> 
> However I think for the ssl/ssh case you'd use the same interfaces
> currently available for plugging in TPMs and dongles. It's a solved
> problem in the crypto libraries.
> 
> > What new syscalls would be needed for ssh to get all this support?
> 
> I don't see why you'd need new syscalls.

So the kernel will implement a few selected crypto algorithms, similar
to what TPM would provide, using SGX, and then userspace no longer
needs to know about SGX?

Ok, I guess that's simple.

It also means it is boring, and the multiuser-game-of-the-day will not
be able to protect the (plain text) password from the cold boot
attack.

Nor will emacs be able to protect the in-memory copy of my diary from
a cold boot attack.

So I guess yes, some new syscalls would be nice :-).
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-26 Thread One Thousand Gnomes
> But... that will mean that my ssh will need to be SGX-aware, and that
> I will not be able to switch to AMD machine in future. ... or to other
> Intel machine for that matter, right?

I'm not privy to AMD's CPU design plans.

However I think for the ssl/ssh case you'd use the same interfaces
currently available for plugging in TPMs and dongles. It's a solved
problem in the crypto libraries.

> What new syscalls would be needed for ssh to get all this support?

I don't see why you'd need new syscalls.

> Ookay... I guess I can get a fake Replay Protected Memory block, which
> will confirm that write happened and not do anything from China, but

It's not quite that simple because there are keys and a counter involved
but I am sure doable.

> And, again, it means that quite complex new kernel-user interface will
> be needed, right?

Why? For user space we have perfectly good existing system calls, for
kernel space we have existing interfaces to the crypto and key layers for
modules to use.

Alan


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-26 Thread One Thousand Gnomes
> Replay Protected Memory Block.  It's a device that allows someone to
> write to it and confirm that the write happened and the old contents
> is no longer available.  You could use it to implement an enclave that
> checks a password for your disk but only allows you to try a certain
> number of times.

RPMB is found in a load of hardware today, notably MMC/SD cards. Android
phones often use it to store sensitive system data.

Alan


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-26 Thread Andy Lutomirski
On Tue, Apr 26, 2016 at 12:41 PM, Pavel Machek  wrote:
> On Tue 2016-04-26 12:05:48, Andy Lutomirski wrote:
>> On Tue, Apr 26, 2016 at 12:00 PM, Pavel Machek  wrote:
>> > On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
>> >> Intel(R) SGX is a set of CPU instructions that can be used by
>> >> applications to set aside private regions of code and data.  The code
>> >> outside the enclave is disallowed to access the memory inside the
>> >> enclave by the CPU access control.
>> >>
>> >> The firmware uses PRMRR registers to reserve an area of physical memory
>> >> called Enclave Page Cache (EPC). There is a hardware unit in the
>> >> processor called Memory Encryption Engine. The MEE encrypts and decrypts
>> >> the EPC pages as they enter and leave the processor package.
>> >
>> > What are non-evil use cases for this?
>>
>> Storing your ssh private key encrypted such that even someone who
>> completely compromises your system can't get the actual private key
>
> Well, if someone gets root on my system, he can get my ssh private
> key right?
>
> So, you can use this to prevent "cold boot" attacks? (You know,
> stealing machine, liquid nitrogen, moving DIMMs to different machine
> to read them?) Ok. That's non-evil.

Preventing cold boot attacks is really just icing on the cake.  The
real point of this is to allow you to run an "enclave".  An SGX
enclave has unencrypted code but gets access to a key that only it can
access.  It could use that key to unwrap your ssh private key and sign
with it without ever revealing the unwrapped key.  No one, not even
root, can read enclave memory once the enclave is initialized and gets
access to its personalized key.  The point of the memory encryption
engine is to prevent even cold boot attacks from being used to read
enclave memory.
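
A conceptual sketch of that ssh use case (not real SGX SDK code;
egetkey_seal_key(), aes_gcm_decrypt() and rsa_sign() are hypothetical
helpers) to show where the plaintext key lives:

#include <stddef.h>
#include <stdint.h>

int egetkey_seal_key(uint8_t key[16]);          /* EGETKEY(SEAL_KEY) wrapper */
int aes_gcm_decrypt(const uint8_t key[16],
                    const uint8_t *blob, size_t blob_len,
                    uint8_t *out, size_t *out_len);
int rsa_sign(const uint8_t *privkey, size_t privkey_len,
             const uint8_t *msg, size_t msg_len,
             uint8_t *sig, size_t *sig_len);

/* Runs entirely inside the enclave (entered via EENTER). */
int enclave_ssh_sign(const uint8_t *wrapped_key, size_t wrapped_len,
                     const uint8_t *challenge, size_t challenge_len,
                     uint8_t *sig, size_t *sig_len)
{
        uint8_t seal_key[16];
        uint8_t privkey[4096];
        size_t privkey_len = sizeof(privkey);

        if (egetkey_seal_key(seal_key))         /* bound to MRENCLAVE/MRSIGNER */
                return -1;
        if (aes_gcm_decrypt(seal_key, wrapped_key, wrapped_len,
                            privkey, &privkey_len))
                return -1;                      /* wrong CPU or wrong enclave */

        /* The unwrapped key never leaves enclave memory; only the
         * signature is returned to the untrusted caller. */
        return rsa_sign(privkey, privkey_len, challenge, challenge_len,
                        sig, sig_len);
}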

This could probably be used for evil, but I think the evil uses are
outweighed by the good uses.

>
> Is there reason not to enable this for whole RAM if the hw can do it?

The HW can't, at least not in the current implementation.  Also, the
metadata has considerable overhead (no clue whether there's a
performance hit, but there's certainly a memory usage hit).

>
>> out.  Using this in conjunction with an RPMB device to make it Rather
>> Difficult (tm) for third parties to decrypt your disk even if your
>> password has low entropy.  There are plenty more.
>
> I'm not sure what RPMB is, but I don't think you can make it too hard
> to decrypt my disk if my password has low entropy. ... And I don't see
> how encrypting RAM helps there.

Replay Protected Memory Block.  It's a device that allows someone to
write to it and confirm that the write happened and the old contents
is no longer available.  You could use it to implement an enclave that
checks a password for your disk but only allows you to try a certain
number of times.

There are some hints in the whitepapers that such a mechanism might be
present on existing Skylake chipsets.  I'm not really sure.
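
A sketch of the guess-limiter described above, with
rpmb_read_counter()/rpmb_increment_counter() as hypothetical wrappers
around authenticated RPMB transactions:

#include <stdbool.h>
#include <stdint.h>

#define MAX_GUESSES 10

int rpmb_read_counter(uint32_t *value);        /* authenticated read  */
int rpmb_increment_counter(void);              /* authenticated write */
bool password_matches(const char *guess);      /* e.g. PBKDF2 compare */

/*
 * Every failed guess is recorded by an authenticated, replay-protected
 * write, so an attacker cannot roll the counter back and retry
 * indefinitely.  At worst a fake RPMB device can refuse to cooperate,
 * which only locks the data out; it does not leak it.
 */
int try_unlock(const char *guess)
{
        uint32_t used;

        if (rpmb_read_counter(&used) || used >= MAX_GUESSES)
                return -1;                      /* locked out */
        if (rpmb_increment_counter())
                return -1;                      /* must burn a guess first */

        return password_matches(guess) ? 0 : -1;
}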

>
> Pavel
> --
> (english) http://www.livejournal.com/~pavelmachek
> (cesky, pictures) 
> http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



-- 
Andy Lutomirski
AMA Capital Management, LLC


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-25 Thread Jarkko Sakkinen
On Mon, Apr 25, 2016 at 10:53:52AM -0700, Greg KH wrote:
> On Mon, Apr 25, 2016 at 08:34:07PM +0300, Jarkko Sakkinen wrote:
> > Intel(R) SGX is a set of CPU instructions that can be used by
> > applications to set aside private regions of code and data.  The code
> > outside the enclave is disallowed to access the memory inside the
> > enclave by the CPU access control.
> > 
> > The firmware uses PRMRR registers to reserve an area of physical memory
> > called Enclave Page Cache (EPC). There is a hardware unit in the
> > processor called Memory Encryption Engine. The MEE encrypts and decrypts
> > the EPC pages as they enter and leave the processor package.
> > 
> > Jarkko Sakkinen (5):
> >   x86, sgx:  common macros and definitions
> >   intel_sgx: driver for Intel Secure Guard eXtensions
> >   intel_sgx: ptrace() support for the driver
> >   intel_sgx: driver documentation
> >   intel_sgx: TODO file for the staging area
> > 
> > Kai Huang (1):
> >   x86: add SGX definition to cpufeature
> > 
> >  Documentation/x86/intel_sgx.txt   |  86 +++
> >  arch/x86/include/asm/cpufeature.h |   1 +
> >  arch/x86/include/asm/sgx.h| 253 +++
> 
> Why are you asking for this to go into staging?
> 
> What is keeping it out of the "real" part of the kernel tree?

Now that I think of it, nothing, as long as the API is fixed the way you
suggested and my TODO list is cleared.

I think I'll prepare a new version of the patches and point it directly
to arch/x86.

> And staging code is self-contained, putting files in arch/* isn't ok for
> it, which kind of implies that you should get this merged correctly.
> 
> I need a lot more information here before I can take this code...
> 
> thanks,
> 
> greg k-h

/Jarkko


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-04-25 Thread Greg KH
On Mon, Apr 25, 2016 at 08:34:07PM +0300, Jarkko Sakkinen wrote:
> Intel(R) SGX is a set of CPU instructions that can be used by
> applications to set aside private regions of code and data.  The code
> outside the enclave is disallowed to access the memory inside the
> enclave by the CPU access control.
> 
> The firmware uses PRMRR registers to reserve an area of physical memory
> called Enclave Page Cache (EPC). There is a hardware unit in the
> processor called Memory Encryption Engine. The MEE encrypts and decrypts
> the EPC pages as they enter and leave the processor package.
> 
> Jarkko Sakkinen (5):
>   x86, sgx:  common macros and definitions
>   intel_sgx: driver for Intel Secure Guard eXtensions
>   intel_sgx: ptrace() support for the driver
>   intel_sgx: driver documentation
>   intel_sgx: TODO file for the staging area
> 
> Kai Huang (1):
>   x86: add SGX definition to cpufeature
> 
>  Documentation/x86/intel_sgx.txt   |  86 +++
>  arch/x86/include/asm/cpufeature.h |   1 +
>  arch/x86/include/asm/sgx.h| 253 +++

Why are you asking for this to go into staging?

What is keeping it out of the "real" part of the kernel tree?

And staging code is self-contained, putting files in arch/* isn't ok for
it, which kind of implies that you should get this merged correctly.

I need a lot more information here before I can take this code...

thanks,

greg k-h