Re: [PATCH v6 00/11] Intel SGX Driver

2018-01-09 Thread Dr. Greg Wettstein
On Jan 9,  4:25pm, Jarkko Sakkinen wrote:
} Subject: Re: [PATCH v6 00/11] Intel SGX Driver

Good afternoon, I hope the week is going well for everyone.

In order to avoid spamming mailboxes with two separate mails, I'm
incorporating a reply to Jarkko's second e-mail on the Memory
Encryption Engine below as well, since the issues are all related.

> On Thu, Jan 04, 2018 at 03:06:43AM -0600, Dr. Greg Wettstein wrote:
> > If we are talking about the issues motivating the KPTI work I don't
> > have any useful information beyond what is raging through the industry
> > right now.
> > 
> > With respect to SGX, the issues giving rise to KPTI are characteristic
> > of what this technology is designed to address.  The technical 'news'
> > sites, which are even more of an abomination than usual with this
> > issue, are talking about privileged information such as credentials,
> > passwords, et al., being leaked by this vulnerability.
> > 
> > Data committed to enclaves is only accessible by the enclave; even
> > the kernel, by definition, can't access the memory.  Given current
> > events that is an arguably useful behavior.

> Exactly. You could think of an adversary using a Meltdown-based leak,
> utilizing malware, as having the same capabilities as a peripheral
> connected to a bus, which we can defend against with SGX.

I believe caution needs to be applied to these statements.

Since we design high assurance computing devices that use SGX to
protect our autonomous introspection engine, we obviously have very
significant concerns regarding whether the SGX security guarantees are
still operative in the face of these micro-architectural probing
attacks.  Absent official guidance, we have been poring over the SGX
architectural documents for a week in order to develop risk guidance.

Based on that review, our conclusion was that there was nothing
inherent in the SGX architectural model that implies protection
against confidentiality losses through micro-architectural side
channel inspection.  Our conclusion was reinforced by a group in
London which has reportedly demonstrated the effectiveness of the
conditional branch misprediction exploit against data processed inside
of an enclave.

We have not yet verified the exploit in our lab, but given our
architectural review there would seem to be no reason why it shouldn't
work.  I posted a note to the SGX developer's forum early this morning
with a summary of our analysis but haven't received any responses.

To wit, in summary:

In this attack scenario, the potential lack of confidentiality inside
of an enclave is the same as if the code were running in unprotected
memory space.  The MM{U,E} infrastructure is servicing micro-op
resource requests for instructions inside of an enclave, just as it
would normally do in untrusted space.  As a result, code running in an
enclave induces cache state changes which can be externally probed,
ie. the effects of a forced branch mispredict on cache state are the
same whether the code executes inside of an enclave or in untrusted
memory.
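
For illustration, here is a minimal sketch of the kind of externally
observable cache probe being described, in the style of a FLUSH+RELOAD
measurement.  The buffer layout, threshold and function names are
hypothetical; this is not the exploit code referenced above, only the
measurement primitive it relies on:

    #include <stdint.h>
    #include <x86intrin.h>

    /* Hypothetical probe array; each 4 KiB slot corresponds to one
     * possible secret byte value. */
    static uint8_t probe[256 * 4096];

    /* Time a single load; a fast load means the line was already cached,
     * ie. it was touched by the execution being observed. */
    static uint64_t time_load(volatile uint8_t *p)
    {
        unsigned int aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }

    /* Flush every slot before the victim (enclave or not) runs. */
    static void flush_probe(void)
    {
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * 4096]);
    }

    /* Afterwards, reload each slot; a fast one reveals which value the
     * victim's (possibly mispredicted) execution touched. */
    static int recover_byte(uint64_t threshold)
    {
        for (int i = 0; i < 256; i++)
            if (time_load(&probe[i * 4096]) < threshold)
                return i;
        return -1;
    }

The point of the sketch is simply that the reload timing is visible to
ordinary untrusted code, regardless of whether the access which warmed
the cache line occurred inside or outside an enclave.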

As I noted in my post to the SGX forum, this would be really
interesting if it could be done by an arbitrary process against an
enclave.  As the sample code demonstrates, however, the exploit binary
has to be able to invoke at least two ECALLs (invocations of functions
in trusted space) in order to carry out the attack.  This is somewhat
analogous to an exploit where a process is able to attack its own
memory map.
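
For readers who have not worked with the SDK, a minimal sketch of what
'invoking an ECALL' looks like from the untrusted side, assuming the
Intel SGX SDK's EDL/proxy model; the enclave name, the two ECALLs and
the generated header are hypothetical placeholders:

    #include <sgx_urts.h>
    #include "attack_enclave_u.h"   /* edger8r-generated untrusted proxies */

    int main(void)
    {
        sgx_enclave_id_t eid;
        sgx_launch_token_t token = { 0 };
        int updated = 0;

        /* The attacker must first be able to create/open the enclave... */
        if (sgx_create_enclave("attack_enclave.signed.so", 1 /* debug */,
                               &token, &updated, &eid, NULL) != SGX_SUCCESS)
            return 1;

        /* ...and then drive at least two ECALLs into trusted space, ie.
         * it is attacking an enclave it is already allowed to invoke. */
        ecall_train_branch_predictor(eid);   /* hypothetical ECALL #1 */
        ecall_touch_secret(eid);             /* hypothetical ECALL #2 */

        sgx_destroy_enclave(eid);
        return 0;
    }

In other words the attack requires the same level of access as a
legitimate caller of the enclave's own interface, which is why it
resembles a process attacking its own memory map.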

With respect to the other mail:

> Everything going out of L1 gets encrypted. This is done to defend
> against peripheral like adversaries and should work also against
> meltdown.

I don't believe this is an architecturally correct assertion.  The
encryption/decryption occurs at the 'bottom' of the cache hierarchy.

Based on Shay Gueron's paper, which describes the Memory Encryption
Engine (MEE) and its security characteristics and proofs, the MEE acts
as an extension of the memory controller and mediates CACHE<->DRAM
traffic to the Enclave Page Cache (EPC), ie, the protected data
region.  It is responsible for encrypting and decrypting page data as
well as the generation of the tags which are used to populate the
Merkle integrity tree.

As I mentioned in a previous mail, the MEE is responsible for emitting
the 'drop and lock' verification signal which locks the memory
controller if a memory integrity check fails.  This is to support a
fundamental design tenet of the architecture that no unverified data
reaches the caches.
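
As a conceptual illustration of that data path (a software model only;
the real logic lives in the MEE and memory controller hardware, and all
of the helper names below are hypothetical):

    #include <stdint.h>
    #include <string.h>

    struct merkle_node;                  /* integrity-tree node (opaque) */

    /* Hypothetical primitives standing in for MEE hardware operations. */
    void mee_decrypt(const uint8_t *ciphertext, uint8_t *plaintext);
    void mee_compute_tag(const uint8_t *plaintext, uint8_t *tag);
    int  merkle_verify(const struct merkle_node *leaf, const uint8_t *tag);
    void drop_and_lock(void);            /* locks the memory controller */

    /* Conceptual model of an EPC cache-line fill: decrypt and verify
     * before anything is allowed to reach the cache hierarchy. */
    static int mee_fill_cache_line(const uint8_t *dram_ciphertext,
                                   uint8_t *cache_line,
                                   const struct merkle_node *leaf)
    {
        uint8_t plaintext[64];
        uint8_t tag[16];

        mee_decrypt(dram_ciphertext, plaintext);
        mee_compute_tag(plaintext, tag);

        if (!merkle_verify(leaf, tag)) {
            /* Integrity failure: 'drop and lock', no unverified data
             * ever reaches the caches. */
            drop_and_lock();
            return -1;
        }

        memcpy(cache_line, plaintext, sizeof(plaintext));
        return 0;
    }

The essential point is the final copy: nothing is installed in the
cache until it has been decrypted and verified.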

Based on this I believe all of the data in the caches is in plaintext,
not just from L1 upward.  So by inference, speculative execution is
able to induce the population of the caches with unencrypted data and
act on those results.  If this were not the case it would be difficult
to understand how the demonstrated branch mispredict attack could be
successful.

With respect to protecting access t

Re: [PATCH v6 00/11] Intel SGX Driver

2018-01-04 Thread Dr. Greg Wettstein
On Jan 4,  3:27pm, Greg Kroah-Hartman wrote:
} Subject: Re: [PATCH v6 00/11] Intel SGX Driver

Wild day, enjoyed by all I'm sure.

> On Thu, Jan 04, 2018 at 03:17:24PM +0100, Cedric Blancher wrote:
> > So how does this protect against the MELTDOWN attack (CVE-2017-5754)
> > and the MELTATOMBOMBA4 worm which uses this exploit?

> It has nothing to do with it at all, sorry.

Precision seems to be everything in these discussions.

Since SGX obviously does not mitigate micro-architectural state
probing, it is not an effective general remediation against MELTDOWN.
Does your statement indicate there is solid documentation that
MELTDOWN can be used by a process of any privilege level to dump out
the unencrypted contents of an initialized enclave?

That would obviously be a big story as well.

> greg k-h

Have a good evening.

Greg

}-- End of excerpt from Greg Kroah-Hartman

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102    development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"If you get to thinkin' you're a person of some influence, try
 orderin' somebody else's dog around."
-- Cowboy Wisdom


Re: [PATCH v6 00/11] Intel SGX Driver

2018-01-04 Thread Dr. Greg Wettstein
On Jan 3, 10:48am, Pavel Machek wrote:
} Subject: Re: [PATCH v6 00/11] Intel SGX Driver

> Hi!

Good morning.

> :-). Stuff proceeds as usual. Too bad it is raining outside, instead
> of snowing.

-19C here, so we have snow... :-)

> > > So ... even with SGX, host can generate bitflips in the enclave,
> > > right?

> > Correct.

> I'd say that you can't generate bitflips because if you do hardware
> will kill the enclave. This seems to be significant difference from
> AMD "secure" memory encryption.

SGX is an entirely different class of technology compared to AMD SME
(Secure Memory Encryption) or the Intel equivalent TME (Total Memory
Encryption).  Both of these are best described as applying the concept
of whole disk encryption to memory.

There are lots of well understood issues surrounding this approach,
whether the target is memory pages or disk sectors.  I think the issue
comes down to the fact that there is a desire to enable a BIOS option
and become 'secure'; unfortunately, the world is not that simple.

> > Forcing a bitflip in enclave memory causes the next page fetch
> > containing the bitflipped location to fail its integrity check.
> > Since this technically shouldn't be possible, this situation was
> > classified as a hardware failure which is handled by the processor
> > locking its execution state, thus taking the machine down.

> So you can't really do bitflips on the SGX protected memory, because
> MM{E,U} hardware will catch that and kill machine if you try?

Correct.

Which obviously has issues in a multi-tenant cloud environment, but
again, it comes down to risk management.  Killing a machine is
problematic, but a massive data compromise isn't much fun either.

> So SGX protected memory is not swappable?

The architecture provides support for swapping enclave pages and the
Linux driver supports it.

The swapped pages retain their confidentiality and integrity
protections.

> > It would seem to be a misfeature for the self-protection mechanism to
> > generate a processor lockup rather than some type of trappable fault,
> > but hindsight is always 20/20.  Philosophically this is a good example
> > of security risk management.  Locking a machine is obviously
> > problematic in a cloud service environment, but it has to be weighed
> > against whether it would be preferable to have a successful privilege
> > escalation attack which could result in the exfiltration of sensitive
> > data.

> Ok, right, it should fault. They can fix it in new version?

Good question and something only Intel can answer.

A large part of SGX is implemented in microcode, in part due to the
complexity of the technologies involved, notably the group signing
(EPID) implementation.

Beyond that, there was a specific acknowledgement that this was
security sensitive code and may need an upgrade, hence the microcode
implementation.

Since the drop and lock 'feature' is closely tied to the MM{E,U}
implementation, the question would be whether or not this behavior
could be changed with updated firmware.  If it were easy to change the
behavior of the MMU with microcode, the industry would be less frantic
right now... :-)

If it were possible to change the drop and lock response, it would
arguably improve the utility of the technology in certain
environments.  SGX2 would have been a great time for that.

> > Arguably not as much fun as what appears to be pending, given the
> > difficulty some Intel processors have dealing with page faults
> > induced by speculative memory references... :-)

> Do you have more info on that? Will they actually leak information,
> or is it just good for rowhammering the kernel memory?

If we are talking about the issues motivating the KPTI work I don't
have any useful information beyond what is raging through the industry
right now.

With respect to SGX, the issues giving rise to KPTI are characteristic
of what this technology is designed to address.  The technical 'news'
sites, which are even more of an abomination than usual with this
issue, are talking about privileged information such as credentials,
passwords, et al., being leaked by this vulnerability.

Data committed to enclaves is only accessible by the enclave; even
the kernel, by definition, can't access the memory.  Given current
events that is an arguably useful behavior.

> Best regards,
>   Pavel

Stay dry.

Have a good day.

Greg

}-- End of excerpt from Pavel Machek

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102    development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"Lots of folks confuse bad management with destiny."
-- Kin Hubbard


Re: [PATCH v6 00/11] Intel SGX Driver

2018-01-02 Thread Dr. Greg Wettstein
On Dec 27,  9:46pm, Pavel Machek wrote:
} Subject: Re: [PATCH v6 00/11] Intel SGX Driver

> Hi!

Good evening Pavel, et al., I hope the New Year has started well for
everyone.

> > > Would you list guarantees provided by SGX?
> >
> > Obviously, confidentiality and integrity.  SGX was designed to address
> > an Iago threat model, a very difficult challenge to address in
> > reality.

> Do you have link on "Iago threat model"?

https://cseweb.ucsd.edu/~hovav/dist/iago.pdf

> > I don't have the citation immediately available, but a bit-flip attack
> > has also been described on enclaves.  Due to the nature of the
> > architecture, they tend to crash the enclave so they are more in the
> > category of a denial-of-service attack, rather than a functional
> > confidentiality or integrity compromise.

> So ... even with SGX, host can generate bitflips in the enclave,
> right?

Correct.

Here is the reference I was trying to recall in my last e-mail:

https://sslab.gtisc.gatech.edu/assets/papers/2017/jang:sgx-bomb.pdf

> People usually assume that bitflip will lead "only" to
> denial-of-service, but rowhammer work shows that even "random" bit
> flips easily lead to privilege escalation on javascript virtual
> machines, and in similar way you can get root if you have user and
> bit flips happen.
>
> So... I believe we should assume compromise is possible, not just
> denial-of-service.

Prudence always dictates that one assumes the worst.  In this case
however, the bitflip attacks against SGX enclaves are very definitely
in the denial-of-service category.  The attack is designed to trigger
a hardware self-protection feature on the processor.

Each page of memory which is initialized into an enclave has a
metadata block associated with it which contains the integrity state
of that page of memory.  The MM{E,U} hardware on an SGX capable
platform checks this integrity data on each page fetch request arising
from addresses/pages inside of an enclave.

Forcing a bitflip in enclave memory causes the next page fetch
containing the bitflipped location to fail its integrity check.  Since
this technically shouldn't be possible, this situation was classified
as a hardware failure which is handled by the processor locking its
execution state, thus taking the machine down.
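
For illustration only, here is a conceptual software model of the check
being described; the real check is performed by the MM{E,U} hardware,
and all of the names below are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Conceptual per-page integrity metadata kept for each enclave page. */
    struct epc_page_meta {
        uint8_t  mac[16];        /* integrity tag over the page contents */
        uint64_t version;        /* anti-replay version counter */
    };

    void compute_page_tag(const void *page, uint64_t version, uint8_t tag[16]);
    void drop_and_lock(void);    /* locks execution state, halting the machine */

    /* Model of the check performed on each fetch from an enclave page. */
    static bool check_epc_page(const void *page,
                               const struct epc_page_meta *meta)
    {
        uint8_t tag[16];

        compute_page_tag(page, meta->version, tag);
        if (memcmp(tag, meta->mac, sizeof(tag)) != 0) {
            /* A forced bit flip lands here: the mismatch is treated as a
             * hardware failure rather than a trappable fault. */
            drop_and_lock();
            return false;
        }
        return true;
    }

Whether the drop_and_lock() branch should instead raise a trappable
fault is exactly the design question discussed next.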

It would seem to be a misfeature for the self-protection mechanism to
generate a processor lockup rather than some type of trappable fault,
but hindsight is always 20/20.  Philosophically this is a good example
of security risk management.  Locking a machine is obviously
problematic in a cloud service environment, but it has to be weighed
against whether it would be preferable to have a successful privilege
escalation attack which could result in the exfiltration of sensitive
data.

Philosophically we take the approach that for high security assurance
environments it is virtually impossible to allow any untrusted code to
run on a platform.  Which is why we focus on autonomous introspection
for these environments.

> > Unfortunately, in the security field it is way more fun, and
> > seemingly advantageous from a reputational perspective, to break
> > things than to build solutions :-)

> Well, yes :-). And I believe someone is going to have fun with SGX
> ;-).
>   Pavel

Arguably not as much fun as what appears to be pending, given the
difficulty some Intel processors have dealing with page faults
induced by speculative memory references... :-)

Best wishes for a productive New Year.

Dr. Greg

}-- End of excerpt from Pavel Machek

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102    development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"It is difficult to produce a television documentary that is both
 incisive and probing when every twelve minutes one is interrupted by
 twelve dancing rabbits singing about toilet paper."
-- Rod Serling


Re: [PATCH v6 00/11] Intel SGX Driver

2017-12-27 Thread Dr. Greg Wettstein
On Dec 12,  3:07pm, Pavel Machek wrote:
} Subject: Re: [PATCH v6 00/11] Intel SGX Driver

Good morning, I hope this note finds the holiday season going well for
everyone.  This note is a bit delayed due to the holidays, my
apologies.

This e-mail casts a pretty wide swath, but I will keep the copy list
due to the possible general interest and impact of these issues.  We have
done an independent implementation of Intel's platform software (PSW),
directed at the use of SGX on intelligent network endpoint devices, so
we have some experience with the issues under discussion.

> On Sat 2017-11-25 21:29:17, Jarkko Sakkinen wrote:
> > Intel(R) SGX is a set of CPU instructions that can be used by
> > applications to set aside private regions of code and data. The
> > code outside the enclave is disallowed to access the memory inside
> > the enclave by the CPU access control.  In a way you can think
> > that SGX provides inverted sandbox. It protects the application
> > from a malicious host.

> Would you list guarantees provided by SGX?

Obviously, confidentiality and integrity.  SGX was designed to address
an Iago threat model, a very difficult challenge to address in
reality.

On SGX capable platforms, the Memory Encryption Engine (MEE) is an
integrated component of the hardware MMU, as SGX is a virtual memory
play.  As a result, the executable code and data are encrypted in main
memory and only decrypted when the data is fed from memory onto the
hardware fetch queues.  Regardless of anything else, this has
implications with respect to cold boot attacks, if an architect
chooses to worry about that threat modality.

In reality, we believe the guarantee that is most important is
integrity, given the issues below.

> For example, host can still observe timing of cachelines being
> accessed by "protected" app, right? Can it also introduce bit flips?

Timing attacks are the bane of SGX, just as they are throughout the
rest of the commodity architectures.  Jarkko cited Beecham's work,
which is a good reference.  Oakland's work on controlled side-channel
attacks is also a very good, and fundamental, read on the issues
involved.

Microsoft Research and Georgia Tech have a paper out discussing the
use of transactional memory to mitigate these attacks.
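
As a minimal sketch of the idea behind those transactional-memory
defenses, assuming Intel TSX/RTM intrinsics; this illustrates the
general technique rather than the specific construction in that paper:

    #include <immintrin.h>   /* build with -mrtm on a TSX-capable CPU */

    /* Perform secret-dependent accesses inside an RTM transaction.  If an
     * attacker evicts a cache line touched by the transaction, or an
     * interrupt fires, the transaction aborts; the defense treats the
     * abort as a signal that it may be under observation. */
    static int guarded_lookup(const unsigned char *secret_table,
                              unsigned int index)
    {
        int value = -1;

        if (_xbegin() == _XBEGIN_STARTED) {
            value = secret_table[index & 0xff];
            _xend();
        } else {
            /* Abort path: retry, add noise, or refuse to continue. */
        }
        return value;
    }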

I don't have the citation immediately available, but a bit-flip attack
has also been described on enclaves.  Due to the nature of the
architecture, they tend to crash the enclave so they are more in the
category of a denial-of-service attack, rather than a functional
confidentiality or integrity compromise.

At the end of the day, giving up complete observational and functional
control to an adversary is a difficult challenge to address.  There is
also a large difference between attacks that can be conducted in a
carefully controlled lab environment and what an adversary or malware
can implement in practice.

Platforms which require security assurances ultimately need a root of
trust.  That either comes from a TPM or a Trusted Execution
Environment like SGX.  Realistically, we think the future involves an
integration of both technologies.  The only other alternative is
perfect software and I think the jury has already weighed in on that.

The advantage of SGX over a TPM is that it is blindingly fast with
respect to performance.  The IMA community has been involved in a
debate over the list digest patches in order to overcome performance
issues with TPM based extension measurements.  We lifted most of the
IMA infrastructure into an SGX enclave and demonstrated significant
performance impacts as a result.

The bigger question, for community integration, is the availability of
hardware.  I see Jarkko's patches are based on the notion of having
flexible launch control available, ie. the ability to program the
relevant MSRs with the checksum of the identity modulus which is to
serve as the root of trust.  I'm not sure there is any hardware in the
wild that currently supports this; Jarkko, comments?
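
As a sketch of what that programming step amounts to, assuming the
SDM-documented IA32_SGXLEPUBKEYHASH0..3 MSRs (0x8c-0x8f) and a
precomputed SHA-256 digest of the launch enclave's signing modulus;
this is illustrative kernel-style code, not what the driver actually
does:

    #include <asm/msr.h>
    #include <linux/types.h>

    #define MSR_IA32_SGXLEPUBKEYHASH0   0x0000008c
    /* HASH1..HASH3 follow at 0x8d, 0x8e and 0x8f. */

    /* hash[] holds SHA-256(signing modulus) of the trusted launch
     * enclave key, split into four 64-bit words. */
    static void sgx_set_le_pubkeyhash(const u64 hash[4])
    {
        int i;

        for (i = 0; i < 4; i++)
            wrmsrl(MSR_IA32_SGXLEPUBKEYHASH0 + i, hash[i]);
    }

On parts without flexible launch control those MSRs are locked, which
is exactly the hardware availability question being raised here.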

Even with that, the question arises as to what is going to be trusted
to program those registers.  The obvious candidate for this is
TXT/tboot which underscores a future involving the integration of
these technologies.

Unfortunately, in the security field it is way more fun, and seemingly
advantageous from a reputational perspective, to break things than to
build solutions :-)

> Pavel

I hope the above clarifications are helpful.

Best wishes for a pleasant holiday weekend to everyone.

Dr. Greg

}-- End of excerpt from Pavel Machek

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102    development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"I suppose that could could happen but he wouldn't know a Galois Field
 if it kicked him in the nuts."
  

Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-13 Thread Dr. Greg Wettstein
On Sun, May 08, 2016 at 06:32:10PM -0700, Andy Lutomirski wrote:

Good morning, running behind on e-mail this week but wanted to get
some reflections out on Andy's well-taken comments and concerns.

> On May 8, 2016 2:59 AM, "Dr. Greg Wettstein" <g...@enjellic.com> wrote:
> >
> >
> > This now means the security of SGX on 'unlocked' platforms, at least
> > from a trust perspective, will be dependent on using TXT so as to
> > provide a hardware root of trust on which to base the SGX trust model.

> Can you explain what you mean by "trust"?  In particular, what kind
> of "trust" would you have with a verified or trusted boot plus
> verified SGX launch root key that you would not have in the complete
> absence of hardware launch control.
>
> I've heard multiple people say they want launch control but I've
> never heard a cogent explanation of a threat model in which it's
> useful.

Trust means a lot of things and does not always have a 'threat model'
associated with it.  Security is all about the intersection of
technology and economics and, moving forward, will be driven by
contractual obligations and re-insurance requirements; that is at
least what we see happening and are involved with.

In a single root of trust, as was originally developed for SGX, trust
consists of a bi-lateral contractual guarantee between Intel and a
software developer: a guarantee by Intel that an enclave launched
under the security of its root key will have prescribed integrity and
confidentiality guarantees.  In reciprocation, the developer delivers
to Intel an implied trust that it will not use those guarantees to
protect illicit or malicious software behavior.

That may not have implications with respect to a specific threat model
but it could have significance in a re-insurance model where a client
of the software environment can indicate that they had an expectation
that code/data which was committed to this environment was
appropriately protected.  The refusal to launch, ie. a launch control
policy, provides a hardware implementation of that trust guarantee.

All of this changes in a future which includes unlocked identity
modulus signatures.

> > I would assume that everyone is using signed Launch Control Policies
> > (LCP) as we are.  This means that TXT/tboot already has access to the
> > public key which is used for the LCP data file signature.  It would
> > seem logical to have tboot compute the signature on that public key
> > and program that signature into the module signature registers.  That
> > would tie the hardware root of trust to the SGX root of trust.

> Now I'm confused.  TXT, in theory*, lets you establish a good root
> of trust for TPM PCR measurements.  So, with TXT, if you had
> one-shot launch control MSRs, you could attest that you've locked
> the launch control policy.

Correct.

In the absence of launch control with an authoritative root of trust
an alternative trust root has to be established.  Integrating the load
of the SGX identity signatures into tboot provides a framework where
the trust guarantees discussed previously can be tied to the identity
of the hardware platform and its provisioner.

This in turn provides a framework for contractual security guarantees
between the platform provisioner and potential clients.

> But what do you gain by doing such a thing?  All you're actually
> attesting is that you locked it until the next reboot.  Someone who
> subsequently compromises you can reboot you, bypass TXT on the next
> boot, and launch any enclave they want.  In any event, SGX is
> supposed to make it so that your enclaves remain secure regardless
> of what happens to the kernel, so I'm at a loss for what you're
> trying to do.

As a Trusted Execution Environment (TEE), the notion of SGX is that it
can run code and data in an Iago threat environment, where the
hardware and operating system have been lost to an aggressor.  You can
technically provide that guarantee in the original SGX root of trust
model, but that changes in the presence of an unlocked identity model.

The TPM2 architecture will be the hardware security model moving
forward.  If you look at the new attestation model, the hardware
reference quote includes an irreversible clock field in order to
defeat the threat model you describe above, a malware-induced
platform reboot.
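
Concretely, the quoted attestation structure carries clock information
that a verifier can check.  The field names below follow the TPM 2.0
TPMS_CLOCK_INFO structure; the struct packing and the verifier logic
are an illustrative sketch, not a TSS library API:

    #include <stdbool.h>
    #include <stdint.h>

    /* Mirrors the TPM 2.0 TPMS_CLOCK_INFO fields carried in a quote. */
    struct clock_info {
        uint64_t clock;          /* monotonic, persisted TPM time */
        uint32_t reset_count;    /* increments on every TPM reset/reboot */
        uint32_t restart_count;  /* increments on resume/restart */
        uint8_t  safe;           /* clock guaranteed not to have rolled back */
    };

    /* A verifier keeps the clock info from the last good quote and rejects
     * any quote whose counters betray an unexpected reboot or rollback. */
    static bool quote_clock_acceptable(const struct clock_info *prev,
                                       const struct clock_info *cur)
    {
        if (cur->reset_count != prev->reset_count)
            return false;        /* platform rebooted since last quote */
        if (cur->clock < prev->clock || !cur->safe)
            return false;        /* clock rollback or unsafe clock value */
        return true;
    }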

Beyond that, at least in our work, we directly tie the launch of our
security supervisor to an integrity chain which must be delivered from
a functional TXT/TPM implementation.  Our platform won't boot and
deliver its qualifying attestation to a security counter-party if the
TXT based boot was bypassed.

Given Microsoft's findings in their follow-on paper to their Haven
work, a hardware/OS root of trust model is still important in a single
root of trust model.  At least until In

Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-12 Thread Dr. Greg Wettstein
On Mon, May 09, 2016 at 08:27:04AM +0200, Thomas Gleixner wrote:

Good morning.

> > On Fri, 6 May 2016, Jarkko Sakkinen wrote:
> > I fully understand if you (and others) want to keep this standpoint but
> > what if we could get it to staging after I've revised it with suggested
> >
> This should not go to staging at all. Either this is going to be a
> real useful driver or we just keep it out of tree.
> >
> > changes and internal changes in my TODO? Then it would not pollute the
> > mainline kernel but still would be easily available for experimentation.

> How are we supposed to experiment with that if there is no launch
> enclave for Linux available?

Build one in a simulator where an independent root enclave key can be
established.  At least that's the approach we are working on with
Jarkko's patches.

Intel does have an instruction-accurate simulator; Microsoft used it
for the work which was reported in the Haven paper.  I believe the Air
Force Academy used that simulator for their work on SGX as well.

As with other SGX-related issues, it is unclear why access to the
simulator was/is restricted.  Given that Gen6 hardware is now emerging,
there would seem to be even less reason not to have the simulator
generically available to allow implementations to be tested.

> Thanks,
> 
>   tglx

Have a good day.

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102    development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"Everything should be made as simple as possible, but not simpler."
-- Albert Einstein


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-04 Thread Dr. Greg Wettstein
On Tue, May 03, 2016 at 05:38:40PM +0200, Pavel Machek wrote:

> Hi!

Good morning, I hope everyone's day is starting out well.

> > I told my associates the first time I reviewed this technology that
> > SGX has the ability to be a bit of a Pandora's box and it seems to be
> > following that course.

> Can you elaborate on the Pandora's box? System administrator should
> be able to disable SGX on the system, and use system to do anything
> that could be done with the older CPUs, right?

Correct, there is certainly the on/off switch.

I viewed it as a Pandora's box because it was the first
commodity-based shrouded TEE that had the opportunity for significant
market penetration.  As such, and owing to its technical
characteristics, it has the potential for both good and bad, and like
TXT in the last decade it was/is bound to induce significant debate
over software freedom and potential monopolistic practices.

> > Intel is obviously cognizant of the risk surrounding illicit uses of
> > this technology since it clearly calls out that, by agreeing to have
> > their key signed, a developer agrees to not implement nefarious or
> > privacy invasive software.  Given the known issues that Certificate

> Yeah, that's likely to work ... not :-(. "It is not spyware, it is
> just collecting some anonymous statistics."

The notion that an enclave can look out but cannot be looked into
introduces privacy issues into the conversation; see my reflections on
Pandora's box... :-)

> > domination and control.  They probably have enough on their hands with
> > attempting to convert humanity to FPGA's and away from devices which
> > are capable of maintaining a context of execution... :-)

> Heh. FPGAs are not designed to replace CPUs anytime soon... And
> probably never.

Never is a long time.

Intel has clearly drawn a very significant line in the sand with
respect to FPGA technology if you read Krzanich's reflections
regarding his re-organization of Intel.  Whether or not they are
successful, they are going to declare a demarcation point with respect
to IoT devices which has the potential to impact the industry in
general and security in particular.  On one side are going to be
FPGA-based devices and on the other side devices with a context of
execution.

It doesn't require a long stretch of the imagination to see hordes of
IoT devices with specific behaviors burned into them which export
sensor or telemetry data upstream.  Depending on how successful they
are with the Altera acquisition, there are potentially positive
economic security factors which could be in play.

All of that is certainly not a conversation specific to SGX though.

> > In the TL;DR department I would highly recommend that anyone
> > interested in all of this read MIT's 170+ page review of the
> > technology before jumping to any conclusions :-)

> Would you have links for 1-5?

First off, my apologies to the list, as I loathe personal inaccuracy:
the MIT review paper is only 117 pages long.  I was typing the last
e-mail at 0405 in the morning and was scrambling for the opportunity
to get 50 minutes of sleep, so my proofreading was sloppy... :-)

The following should provide ample bedstand reading material for those
interested in SGX and TEE's:

1.) HASP/SGX paper:
https://software.intel.com/sites/default/files/article/413939/hasp-2013-innovative-technology-for-attestation-and-sealing.pdf

2.) IAGO threat model:
https://cseweb.ucsd.edu/~hovav/dist/iago.pdf

3.) Haven paper:
http://research.microsoft.com/pubs/223450/osdi2014-haven.pdf

4.) Controlled sidechannel attacks:
http://research.microsoft.com/pubs/246400/ctrlchannels-oakland-2015.pdf

https://software.intel.com/en-us/blogs/2015/05/19/look-both-ways-and-watch-out-for-side-channels

5.) MIT/SGX analysis:
https://eprint.iacr.org/2016/086.pdf

> Thanks,
>   Pavel

No problem, enjoy the reading :-)

Have a good day.

Greg

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.   Specializing in information infra-structure
Fargo, ND  58102    development.
PH: 701-281-1686
FAX: 701-281-3949   EMAIL: g...@enjellic.com
--
"One problem with monolithic business structures is losing sight
 of the fundamental importance of mathematics.  Consider committees;
 commonly forgotten is the relationship that given a projection of N
 individuals to complete an assignment the most effective number of
 people to assign to the committee is given by f(N) = N - (N-1)."
-- Dr. G.W. Wettstein
   Guerrilla Tactics for Corporate Survival


Re: [PATCH 0/6] Intel Secure Guard Extensions

2016-05-03 Thread Dr. Greg Wettstein
On May 2, 11:37am, "Austin S. Hemmelgarn" wrote:
} Subject: Re: [PATCH 0/6] Intel Secure Guard Extensions

Good morning, I hope the day is starting out well for everyone.

> On 2016-04-29 16:17, Jarkko Sakkinen wrote:
> > On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> >> On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> >>> Intel(R) SGX is a set of CPU instructions that can be used by
> >>> applications to set aside private regions of code and data.  The code
> >>> outside the enclave is disallowed to access the memory inside the
> >>> enclave by the CPU access control.
> >>>
> >>> The firmware uses PRMRR registers to reserve an area of physical memory
> >>> called Enclave Page Cache (EPC). There is a hardware unit in the
> >>> processor called Memory Encryption Engine. The MEE encrypts and decrypts
> >>> the EPC pages as they enter and leave the processor package.
> >>
> >> What are non-evil use cases for this?
> >
> > I'm not sure what you mean by non-evil.

> I would think that this should be pretty straightforward.  Pretty
> much every security technology integrated in every computer in
> existence has the potential to be used by malware for various
> purposes.  Based on a cursory look at SGX, it is pretty easy to
> figure out how to use this to hide arbitrary code from virus
> scanners and the OS itself unless you have some way to force
> everything to be a debug enclave, which entirely defeats the stated
> purpose of the extensions.  I can see this being useful for tight
> embedded systems.  On a desktop which I have full control of
> physical access to though, it's something I'd immediately turn off,
> because the risk of misuse is so significant (I've done so on my new
> Thinkpad L560 too, although that's mostly because Linux doesn't
> support it yet).

We were somewhat surprised to see Intel announce the SGX driver for
Linux without a bit more community preparation given the nature of the
technology.  But, given the history of opacity around this technology,
it probably isn't surprising.  We thought it may be useful to offer a
few thoughts on this technology as discussion around integrating the
driver moves forward.

We have been following and analyzing this technology since the first
HASP paper was published detailing its development.  We have been
working to integrate, at least at the simulator level, portions of
this technology in solutions we deliver.  We have just recently begun
to acquire validated reference platforms to test these
implementations.

I told my associates the first time I reviewed this technology that
SGX has the ability to be a bit of a Pandora's box and it seems to be
following that course.

SGX belongs to a genre of solutions collectively known as Trusted
Execution Environments (TEE's).  The intent of these platforms is to
support data and application confidentiality and integrity in the face
of an Iago threat environment, ie. a situation where a security
aggressor has complete control of the hardware and operating system,
up to and including the OS 'lying' about what it is doing to the
application.

There are those, including us, who question the quality of the
security guarantee that can be provided, but that doesn't diminish the
usefulness of or demand for such technology.  If one buys the notion
that all IT delivery will move into the 'cloud', there is certainly a
rationale for a guarantee that clients can push data into a cloud
without concern for whether or not the platform is compromised or
being used to spy on the user's application or data.

As is the case with any security technology, the only way that such a
guarantee can be made is to have a definable origin or root of trust.
At the current time, and this may be the biggest problem with SGX, the
only origin for that root of trust is Intel itself.  Given the nature
and design of SGX this is actually a bilateral root of trust since
Intel, by signing a developer's enclave key, is trusting the developer
to agree to do nothing nefarious while being shrouded by the security
guarantee that SGX provides.

It would be helpful and instructive for anyone involved in this debate
to review the following URL, which details Intel's SGX licensing
program:

https://software.intel.com/en-us/articles/intel-sgx-product-licensing

It details what a developer is required to do in order to obtain an
enclave signing key which will be recognized by an SGX-capable
processor.  Without a valid signing key, an SGX-capable system will
only launch an enclave in 'debug' mode, which allows the enclave to be
single-stepped and examined in a debugger, obviously invalidating any
TEE-based security guarantees which SGX is designed to effect.

Intel is obviously cognizant of the risk surrounding illicit uses of
this technology since it clearly calls out that, by agreeing to have
their key signed, a developer agrees to not implement nefarious or
privacy invasive software.  Given the known issues that Certificate
Authorities have