On Feb 4, 2005, at 6:58 AM, Eric Murray wrote:
So a question for the TCPA proponents (or opponents):
how would I do that using TCPA?
check out
enforcer.sourceforge.net
We also had a paper at ACSAC 2004 with some of the apps we've built on
it.
Two things we've built that haven't made it yet to th
has a TLS server (or client, for that matter) key ever actually been
compromised?
Hi, Marc!
I don't know of any in-the-wild attacks.
However, there have been proof-of-concept attacks:
Server-side: Brumley and Boneh did timing attacks on Apache SSL
servers---see their Usenix Security paper from 2003.
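The leak that attack exploits is data-dependent work inside RSA private-key operations. A toy sketch (my own illustration, not the paper's remote attack; a naive square-and-multiply, whereas real libraries use CRT and blinding) shows how the secret exponent's bit pattern changes the amount of work done:

```python
# Toy illustration of a timing side channel: naive left-to-right
# square-and-multiply does one extra multiply per 1-bit of the secret
# exponent, so execution time correlates with the key's bit pattern.
def square_and_multiply(base, exponent, modulus):
    """Return (base**exponent % modulus, count of extra multiplies)."""
    result = 1
    multiplies = 0
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus      # square for every bit
        if bit == "1":
            result = (result * base) % modulus    # extra multiply for 1-bits only
            multiplies += 1
    return result, multiplies

# Same bit-length, different Hamming weight => different work:
_, light = square_and_multiply(7, 0b100000001, 1009)  # two 1-bits
_, heavy = square_and_multiply(7, 0b111111111, 1009)  # nine 1-bits
assert light < heavy
assert square_and_multiply(7, 257, 1009)[0] == pow(7, 257, 1009)
```

An attacker who can measure many such operations can average out noise and recover key bits; constant-time code and RSA blinding are the standard countermeasures.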
Client-side
For what it's worth, last week I had the chance to eat dinner with
Carlisle Adams (author of the PoP RFC), and he commented that he didn't
know of any CA that did PoP any other way than to have the client sign
part of a CRM.
Clearly, this seems to contradict Peter's experience.
I'd REALLY love to
it isn't sufficient that you show there is some specific
authentication protocol with unread, random data ... that has
countermeasures against a dual-use attack ... but you have to
exhaustively show that the private key has never, ever signed any
unread random data that failed to contain dual-use
at the NIST PKI workshop a couple months ago there were a number
of infrastructure presentations where various entities in the
infrastructure were ...signing random data as part of authentication
protocol
I believe our paper may have been one of those that Lynn objected to.
We used the sam
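The dual-use hazard under discussion can be made concrete with a toy sketch (hypothetical protocol, textbook RSA with tiny insecure parameters; illustration only): if the "random challenge" an authentication protocol asks a key to sign is actually the hash of a document, the authentication response doubles as a valid signature on that document.

```python
import hashlib

# Toy textbook-RSA keypair -- tiny, NOT secure; for illustration only.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+)

def sign_auth_challenge(challenge: bytes) -> int:
    # "Authentication" path: blindly signs whatever challenge bytes arrive.
    return pow(int.from_bytes(challenge, "big") % n, d, n)

def verify_document_sig(document: bytes, sig: int) -> bool:
    # "Document signature" path: valid iff sig is over sha256(document).
    m = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(sig, e, n) == m

contract = b"I, the victim, owe the attacker $1,000,000."
challenge = hashlib.sha256(contract).digest()  # looks random to the victim
response = sign_auth_challenge(challenge)      # victim "authenticates"...
assert verify_document_sig(contract, response) # ...and has signed the contract
```

This is why the countermeasure is structural (e.g., distinct key pairs per purpose, or signing a value the prover partially controls) rather than hoping the challenge "looks random."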
>(To those people who missed the original comment a year or two back, the first
> PKI workshop required that people use plain passwords for the web-based
> submission system due to the lack of a PKI to handle the task).
Hey, but at least the password was protected by an SSL channel,
which was aut
Just to clarify...
I'm NOT saying that any particular piece of "secure" hardware can never be
broken. Steve Weingart (the hw security guy for the 4758) used to insist that
there was no such thing as "tamper-proof." On the HW level, all you can do is
talk about what defenses you tried, what att
>You propose to put a key into a physical device and give it
>to the public, and expect that they will never recover
>the key from it?
It's been on the market for six years now; so far, the foundation
has held up. (We were also darn careful about the design
and evaluation; we ended up earning
> So this doesn't
> work unless you put a "speed limit" on CPU's, and that's ridiculous.
Go read about the 4758. CPU speed won't help unless
you can crack 2048-bit RSA, or figure out a way around
the physical security, or find a flaw in the application.
> Yes. Protocol designers have been exp
>
> >How can you verify that a remote computer is the "real thing, doing
> >the right thing?"
>
> You cannot.
Using a high-end secure coprocessor (such as the 4758, but not
with a flawed application) will raise the threshold for the adversary
significantly.
No, there are no absolutes. But ther
The Bear/Enforcer Project
Dartmouth College
http://enforcer.sourceforge.net
http://www.cs.dartmouth.edu/~sws/abstracts/msmw03.shtml
How can you verify that a remote computer is the "real thing, doing
the right thing?" High-end secure coprocessors are expensive and
computationally limited; lower
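The attestation question above is typically answered with a measured-boot hash chain: each component hashes the next into a register before handing over control, so the final value commits to the whole sequence. A minimal sketch of the TCG-style "extend" operation (SHA-1 as in TPM 1.2; component names are hypothetical):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-1.2-style PCR extend: new = SHA1(old_pcr || SHA1(measurement)).
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20                    # register starts zeroed at reset
for component in [b"bootloader", b"kernel", b"enforcer-module"]:
    pcr = extend(pcr, component)      # each stage measures the next

# Altering or reordering any component yields a different final value:
alt = b"\x00" * 20
for component in [b"kernel", b"bootloader", b"enforcer-module"]:
    alt = extend(alt, component)
assert pcr != alt
```

A remote verifier then checks a signed quote of the register against known-good values; the register itself can only be extended, never set, which is what makes the chain trustworthy.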
>A guy in Google can do it.
Check out:
Using caching for browsing anonymity
Anna M. Shubina, Sean W. Smith
Dartmouth TR2003-470
http://www.cs.dartmouth.edu/reports/abstracts/TR2003-470/
The code's available for download, too.
--
Sean W. Smith, Ph.D. [EMAIL
> I apologise for the snippety email last night,
no problem!
> That is significant! Was this code not
> folded back into Mozilla?
No, unfortunately.
According to Eileen (who was the lead on this),
it didn't easily fit into things:
- it was not clearly a "bug fix"
- it touched many modules (so
> > Are other platforms more secure or do they just receive
> > less scrutiny? Or is it that Microsoft does not react quickly to
> > found bugs?
My point was just that the browser paradigm was not really designed with the
idea of making the security status information always clearly distinguishable
Does this really surprise anyone?
When I had some students try this out (providing content
that browsers render in a way that makes it look like security
info from the browser) a few years ago, there was just no end
to the tricks one could play...
If you don't design a trusted path into the s
>Yuan, Ye and Smith, Trusted Path for Browsers, 11th Usenix security symp,
>2002.
Minor nit: just Ye and Smith. (Yuan had helped with some of the spoofing)
Advertisement: we also built this into Mozilla, for Linux and Windows.
http://www.cs.dartmouth.edu/~pkilab/demos/countermeasures/
--Sean