Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread SM

At 20:32 05-09-2013, Vinayak Hegde wrote:
While it is nice to do a dedication of this meeting to the NSA 
surveillance, I do not see us solving any issue here. It is merely a 
"feel-good" measure without real impact.


:-)

Second, technology can never fix what is essentially a political 
problem. For example, we mandate strong security protocols and end-to-end 
encryption in HTTP(S) by default. Let's


In a Last Call comment a few months ago it was mentioned that a 
specification takes the stance that security is an optional 
feature.  I once watched a Security Area Director spend thirty 
minutes trying to explain to a working group that a security feature 
should be implemented.  If I recall correctly the working group was 
unconvinced.


Would the community raise it as an issue during a Last Call if a 
proposed protocol did not have strong security features?  It's up to 
the reader to determine the answer to that.


 assume all browsers implement this and do this perfectly without 
software flaws. All the NSA has to do is to compromise the other 
endpoint (controlled by ACME major corp). ACME gives over the 
encryption keys and access to all the unencrypted data to the NSA. So now


Yes.

 what are we going to do. The IETF can make a political statement 
by taking a stand but that may mean nothing in reality when the 
laws are weak. Another example is when you have


Taking a stand that means nothing is a feel-good measure.

 encrypted your drive and do not want to hand over the keys as it 
has some personal (and possibly incriminating) evidence. In several 
countries you can be held in jail indefinitely (with obvious 
renewals of sentences) until you hand the keys over[1]. So in 
summary, technology cannot solve political and legal issues. At 
best it can make it harder. But in this case maybe not even that.


The IETF outlook does not apply in several countries.  The IETF does 
not seem to pay much attention to such details (re. handing over the 
keys).  It's not clear what the emergency is.  Phillip Hallam-Baker 
and Brian Carpenter already mentioned that it's not like this is a surprise.


According to a news article, key architects of the Internet plan to 
fight back by drawing up a plan to defend against state-sponsored 
surveillance.  Anyway, if someone really wanted to call for an 
emergency response the person would have sent it to an IETF mailing list.


At 20:08 05-09-2013, Ted Lemon wrote:
I think we all knew NSA was collecting the data.   Why didn't we do 
something about it sooner?   Wasn't it an emergency when the PATRIOT 
act was passed?   We certainly thought it was an emergency back in 
the days of Skipjack, but then they convinced us we'd won.   Turns 
out they just went around us.


I would describe it as a scuffle instead of a battle.  My guess is 
that the IETF did not do anything sooner because nobody knew what to do, 
or it may be that the IETF has become conservative and does not 
pay attention to the minority report.


At 23:04 05-09-2013, Jari Arkko wrote:
I think we should seize this opportunity to take a hard look at what 
we can do better.


:-)

And please do not think about all this just in terms of the recent 
revelations. The


That's an interesting perspective.

 security in the Internet is still a challenge, and if there are 
improvements they will be generally useful for many reasons and for 
many years to come. Perhaps this year's discussions are our ticket 
to motivate the world to move from "by default insecure" 
communications to "by default secure". Publicity and motivation are 
important, too.


Yes.

Regards,
-sm  



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Måns Nilsson
Subject: Re: Bruce Schneier's Proposal to dedicate November meeting to saving 
the Internet from the NSA Date: Fri, Sep 06, 2013 at 09:04:41AM +0300 Quoting 
Jari Arkko (jari.ar...@piuha.net):
> I think we should seize this opportunity to take a hard look at what we can 
> do better. Yes, it is completely correct that this is only partially a 
> technical problem, and that there is a lot of technology that, if used, would 
> help. And that technical issues outside IETF space, like endpoint security, 
> or the properties of specific products or implementations, affect the end result 
> in major ways. And that no amount of communication security helps you if you 
> do not trust the guy at the other end.
> 
> But it is also obvious to me that we do not have a situation where everything 
> that could be done has been done. I think we can do more. Some examples:
> 
> * we're having a discussion in http 2.0 work whether encryption should be 
> mandatory

Given the relative impact of http I think that this is the most important
of your suggestions. Frankly, I do not think it is sensible to block
mandatory crypto in http 2.0. 

However, I think it is also important to look at how we handle the
key distribution problem. The traditional X.509 model has repeatedly
been shown to be extremely vulnerable to bad management and directed
attacks. Further, the dependency on relatively few root CA instances
and the lack of domain name scope limitations makes an attack on said
CA not only likely but also most rewarding to the attacker.

I do think that more distributed technologies like DANE play an important
rôle here.
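
A rough sketch of that model, assuming the dnspython package (nothing above
specifies it): a client can look up the TLSA record a domain owner publishes
for a service, scoped to that domain's own name rather than to a global CA:

    # Sketch only (assumes dnspython >= 2.0): fetch the DANE TLSA record for
    # an HTTPS service. Real DANE use additionally requires that the answer
    # be DNSSEC-validated and matched against the certificate the server
    # actually presents.
    import dns.resolver

    def fetch_tlsa(host, port=443, proto="tcp"):
        name = "_%d._%s.%s" % (port, proto, host)
        try:
            answer = dns.resolver.resolve(name, "TLSA")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []  # no DANE statement published for this service
        # Each record carries certificate usage, selector, matching type, data.
        return [(r.usage, r.selector, r.mtype, r.cert.hex()) for r in answer]

    if __name__ == "__main__":
        for record in fetch_tlsa("www.ietf.org"):
            print(record)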

-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE +46 705 989668
Like I always say -- nothing can beat the BRATWURST here in DUSSELDORF!!




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Hannes Tschofenig
Bruce might not know that we already have various activities ongoing. I 
recently produced a short writeup on my blog about the efforts related to this 
topic that were underway at the last IETF meeting:

http://www.tschofenig.priv.at/wp/?p=993

Ciao
Hannes

On 06.09.2013 03:17, Dean Willis wrote:


This is bigger than the "perpass" list.

I suggested that the surveillance/broken crypto challenge represents "damage to the 
Internet". I'm not the only one thinking that way.

I'd like to share the challenge raised by Bruce Schneier in:

http://www.theguardian.com/commentisfree/2013/sep/05/government-betrayed-internet-nsa-spying


To quote:

---
We need to know exactly how the NSA and other agencies are subverting 
routers, switches, the internet backbone, encryption technologies and cloud 
systems. I already have five stories from people like you, and I've just 
started collecting. I want 50. There's safety in numbers, and this form of 
civil disobedience is the moral thing to do.

Two, we can design. We need to figure out how to re-engineer the internet to 
prevent this kind of wholesale spying. We need new techniques to prevent 
communications intermediaries from leaking private information.

We can make surveillance expensive again. In particular, we need open 
protocols, open implementations, open systems – these will be harder for the 
NSA to subvert.

The Internet Engineering Task Force, the group that defines the standards that 
make the internet run, has a meeting planned for early November in Vancouver. 
This group needs to dedicate its next meeting to this task. This is an emergency, 
and demands an emergency response.


The gauntlet is in our face. What are we going to do about it?


--
Dean Willis




RE: [CCAMP] Last Call: (Generalized Multi-Protocol Label Switching (GMPLS) Signaling Extensions for the evolving G.709 Optical Transport Networks Contr

2013-09-06 Thread Fatai Zhang
Hi Adrian,

I am updating this draft, but one issue is about the new small section.

You said "adding a small section including all of the statements you made in 
your email", but I really don't know which kind of section should be added to 
cover various aspects including management tools, OAM, alarm, MIB, etc.

I also question the value of having this kind of new section that talks about 
something (and nothing new), which may not be related to the subject of this 
draft (i.e., RSVP-TE extensions for OTN *connection* establishment).

Therefore, I would like to hear more from you. Could you give a title for this 
new section?


Best Regards

Fatai

From: Adrian Farrel [mailto:adr...@olddog.co.uk]
Sent: Wednesday, August 21, 2013 6:12 PM
To: Fatai Zhang; ietf@ietf.org
Cc: cc...@ietf.org
Subject: RE: [CCAMP] Last Call: 
 (Generalized Multi-Protocol 
Label Switching (GMPLS) Signaling Extensions for the evolving G.709 Optical 
Transport Networks Control) to Proposed Standard

Hi Fatai,

I think you nicely answered your own questions :-)

I would suggest adding a small section including all of the statements you made 
in your email. (Well, no need to refer to Berlin and the CCAMP chairs :-)

Cheers,
Adrian

From: Fatai Zhang [mailto:zhangfa...@huawei.com]
Sent: 21 August 2013 08:40
To: adr...@olddog.co.uk; ietf@ietf.org
Cc: cc...@ietf.org
Subject: RE: [CCAMP] Last Call: 
 (Generalized Multi-Protocol 
Label Switching (GMPLS) Signaling Extensions for the evolving G.709 Optical 
Transport Networks Control) to Proposed Standard


Hi Adrian,



Thanks very much.



I can update the nits and editorial issues quickly, but I would like to discuss 
more with you for the following points to make things clear before I update the 
draft.



=

Please consider and note what updates to GMPLS management tools are needed.



[Fatai] This has been mentioned in the [Framework] document. Did you mean that we 
need to add a sentence somewhere in this document referring to the [Framework] 
document to mention management tools?



Are there any changes to the Alarms that might arise? We have a document for 
that.



[Fatai] No. RFC4783 is still applicable.



Are there any changes to the way OAM is controlled? We have a document for that.



[Fatai] No, it could be done through NMS or 
[draft-ietf-ccamp-rsvp-te-sdh-otn-oam-ext].



Should the new G-PIDs show in the TC MIB managed by IANA at

https://www.iana.org/assignments/ianagmplstc-mib/ianagmplstc-mib.xhtml

This should happen automagically when the feeding registries are updated

but it is probably best to add a specific request for IANA.



[Fatai] Will do that.



Will other MIB work be needed (in the future) to make it possible to

read new information (labels, tspecs) from network devices?



[Fatai] I am not sure. I asked a similar question (not on this draft) during the 
Berlin meeting. The chairs answered that it could be driven by drafts.







Best Regards



Fatai



-Original Message-
From: ccamp-boun...@ietf.org [mailto:ccamp-boun...@ietf.org] On Behalf Of 
Adrian Farrel
Sent: Wednesday, August 21, 2013 2:51 AM
To: ietf@ietf.org
Cc: cc...@ietf.org
Subject: Re: [CCAMP] Last Call: 
 (Generalized Multi-Protocol 
Label Switching (GMPLS) Signaling Extensions for the evolving G.709 Optical 
Transport Networks Control) to Proposed Standard



As sponsoring AD I have the following last call comments I hope you will take on

board.



Thanks,

Adrian



Please fix the two lines that are too long (see idnits)



---



Please expand "OTN" on first use in the main text.

Please expand "TS" on its first use.



---



6.2



   The ingress node of an LSP MAY include Label ERO (Explicit Route

   Object) to indicate the label in each hops along the path.



Missing "subobject".



---



6.2.1



   When an upstream node receives a Resv message containing an

   GENERALIZED_LABEL object



s/an/a/



---



Please consider and note what updates to GMPLS management tools are

needed.



Are there any changes to the Alarms that might arise? We have a document

for that.



Are there any changes to the way OAM is controlled? We have a document

for that.



Should the new G-PIDs show in the TC MIB managed by IANA at

https://www.iana.org/assignments/ianagmplstc-mib/ianagmplstc-mib.xhtml

This should happen automagically when the feeding registries are updated

but it is probably best to add a specific request for IANA.



Will other MIB work be needed (in the future) to make it possible to

read new information (labels, tspecs) from network devices?



---



Please fix so that you have three sections:



Authors' Addresses (only those people on the front page)

Contributors (other people who made significant text contributions to

the document)

Acknowledgements (other people who helped with the work)



---



[OTN-OSPF] should be a normative reference for its use to define the

value of the switching

Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Adam Novak

On 09/05/2013 08:19 PM, Brian E Carpenter wrote:

Tell me what the IETF could be doing that it isn't already doing.

I'm not talking about what implementors and operators and users should
be doing; still less about what legislators should or shouldn't be
doing. I care about all those things, but the question here is what
standards or informational outputs from the IETF are needed, in addition
to what's already done or in the works.

I don't intend that to be a rhetorical question.

  Brian


One way to frustrate this sort of dragnet surveillance would be to 
reduce centralization in the Internet's architecture. Right now, the way 
the Internet works in practice for private individuals, all your traffic 
goes up one pipe to your ISP. It's trivial to tap, since the tapping can 
be centralized at the ISP end.


If the IETF focused on developing protocols (and reserving the necessary 
network numbers) to facilitate direct network peering between private 
individuals, it could make it much more expensive to mount large-scale 
traffic interception attacks.


Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread t . p .
- Original Message -
From: "Phillip Hallam-Baker" 
To: "Andrew Sullivan" 
Cc: "IETF Discussion Mailing List" 
Sent: Friday, September 06, 2013 4:56 AM
> On Thu, Sep 5, 2013 at 11:32 PM, Andrew Sullivan
wrote:
>
> > On Fri, Sep 06, 2013 at 03:28:28PM +1200, Brian E Carpenter wrote:
> > >
> > > OK, that's actionable in the IETF, so can we see the I-D before
> > > the cutoff?
> >
> > Why is that discussion of this nailed to the cycle of IETF meetings?
>
>
> It is not. I raised the challenge over a week ago in another forum.
Last
> thing I would do is to give any institution veto power.
>
>
> The design I think is practical is to eliminate all UI issues by
insisting
> that encryption and decryption are transparent. Any email that can be
sent
> encrypted is sent encrypted.

That sounds like the 'End User Fallacy number one' that I encounter all
the time in my work.  If only everything were encrypted, then we would
be completely safe.  Well, no (as you, Phillip, know well).  It depends on
the strength of the ciphers (you can get a little padlock on your screen
with SSL 2 which was the default in my local public access system until
recently).  It depends on the keys being secret (one enterprise system I
was enrolled on in 2003 will not let me change my password, ever - only
the system administrator has that power).  It depends on authentication
(I have a totally secure channel, unbreakable in the next 50 years, but
it is not to my bank but to a Far Eastern Power).  And so on.  Yet every
few weeks I hear the media saying, 'look for the padlock'.

I think that the obvious step to improving security is to get the world
at large possessing and using certificates, in the same way as the
governments of the world, not very long ago, persuaded us to use
passports.

Tom Petch

>
> So that means that we have to have a key distribution infrastructure
such
> that when you register a key it becomes available to anyone who might
need
> to send you a message. We would also wish to apply the Certificate
> Transparency approach to protect the Trusted Third Parties from being
> coerced, infiltrated or compromised.
>
> Packaging the implementation is not difficult, a set of proxies for
IMAP
> and SUBMIT enhance and decrypt the messages.
>
> The client side complexity is separated from the proxy using
Omnibroker.
>




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Vinayak Hegde
On Fri, Sep 6, 2013 at 12:16 PM, SM  wrote:

> In a Last Call comment a few months ago it was mentioned that a
> specification takes the stance that security is an optional feature.  I
> once watched a Security Area Director spend thirty minutes trying to
> explain to a working group that security feature should be implemented.  If
> I recall correctly the working group was unconvinced.
>
> Would the community raise it as an issue during a Last Call if a proposed
> protocol did not have strong security features?  It's up to the reader to
> determine the answer to that.


It is tragic if the community does not understand that strong encryption is
essential in many cases (with the caveat that it is not a panacea for all
security breaches). As for raising issues at the Last Call: why not? The
Last Call is no different from any other mailing list discussion or going
to the mic in a physical meeting (other than the urgency of having the
last chance to comment).

-- Vinayak


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Hannes Tschofenig

On 06.09.2013 04:36, Brian E Carpenter wrote:

I'm not saying there's no issue or no work to do, but what's new about
any of this?


As recently as the end of last year I remember conversations in working groups 
that questioned why we need TLS security for protocols like SCIM (a 
protocol that shuffles credentials around).


I don't think that the decision in the RTCWeb group against SDES would 
have been possible without the NSA news.


I also remember the Internet Privacy workshop the IAB and others 
organized about 2 years ago and back then we argued whether government 
surveillance is something we should focus on or whether we are mainly 
interested in companies who impact your privacy.


While some (many) had already anticipated that the NSA (and other 
governments) deploy massive surveillance technologies, the extent to 
which it is done has surprised most security people I know.


In a nutshell, the understanding and awareness of the wider Internet 
community has changed with this news.


Ciao
Hannes


RE: pgp signing in van

2013-09-06 Thread l.wood

Surely, pgp signing in vain?

Don't know about you, but I value plausible deniability.

Lloyd Wood
http://sat-net.com/L.Wood/



From: ietf-boun...@ietf.org [ietf-boun...@ietf.org] On Behalf Of Randy Bush 
[ra...@psg.com]
Sent: 06 September 2013 01:45
To: IETF Disgust
Subject: pgp signing in van

so, it might be a good idea to hold a pgp signing party in van.  but
there are interesting issues in doing so.  we have done lots of parties
so have the social protocols and n00b cheat sheets.  but that is the
trivial tip of the iceberg.

  o is pgp compromised?  just because it is not listed in [0] is not
very strong assurance in these dark days.

  o what are the hashes of audited software, and who did the audits?

  o what are the recommended algs/digest/keylen parameters?

  o do we really need elliptic curves, or is that a poison pill?

  o your questions go here ...

randy

---

[0] 
http://www.nytimes.com/interactive/2013/09/05/us/unlocking-private-communications.html


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Stewart Bryant

On 06/09/2013 04:19, Brian E Carpenter wrote:

On 06/09/2013 15:08, Ted Lemon wrote:

On Sep 5, 2013, at 9:36 PM, Brian E Carpenter  
wrote:

I'm sorry, I don't detect the emergency.

I think we all knew NSA was collecting the data.   Why didn't we do something 
about it sooner?   Wasn't it an emergency when the PATRIOT act was passed?   We 
certainly thought it was an emergency back in the days of Skipjack, but then 
they convinced us we'd won.   Turns out they just went around us.

Tell me what the IETF could be doing that it isn't already doing.

I'm not talking about what implementors and operators and users should
be doing; still less about what legislators should or shouldn't be
doing. I care about all those things, but the question here is what
standards or informational outputs from the IETF are needed, in addition
to what's already done or in the works.


There is a whole bunch of stuff we can do to make transit traffic less 
observable.


In other words we can modify things so the only thing you know about a 
packet is where it is going, not what it is or who it came from.


Stewart


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Stephen Farrell

Summarising a *lot* :-)

On 09/06/2013 11:30 AM, Stewart Bryant wrote:
> 
> There is a whole bunch of stuff we can do

I fully agree. Some more detail on one of those...

We set up the perpass list [1] as a venue for triaging
specific proposals in this space. A few weeks in, we
have one I-D [2] (very much a -00) that tries to describe
a threat model that matches the recent revelations,
and that could be a good reference when folks are
developing protocols.

We have found volunteers to write a draft for a BCP
on how to use perfect forward secrecy in TLS, more
common use of which (we still think) would mitigate a
bunch of the ways in which TLS traffic could be
subverted, given various forms of collusion/coercion.
I hope the -00 for that will pop out in a weekish.
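
For illustration, a minimal sketch with Python's standard ssl module (not
text from that draft): a server operator can already restrict a TLS context
to ephemeral Diffie-Hellman key exchange, which is the property such a BCP
is about:

    # Sketch: prefer forward-secret ciphersuites so a later compromise of the
    # server's long-term private key does not expose previously recorded
    # traffic. "server.crt" and "server.key" are hypothetical file names.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")  # ephemeral (EC)DH only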

We've had some discussion about how to do better with
email, but that's not yet landed on specifics that
could be taken further. And a couple of other topics
have come up. More are welcome.

For any such topic that looks like it'll turn into
something actionable (in the IETF context), I'm very
happy to push to get it adopted by a relevant WG or
to get it AD sponsored.

If you care about this stuff, then get on that list
and make concrete proposals and write I-Ds about ways
the IETF can improve the situation. If the content
is good, you'll find you're pushing on an open door
(at least as far as the SEC ADs are concerned:-).

And as we all know the IETF cannot "solve the problem"
here, but as Stewart rightly said: there is stuff we
can do better. So let's do it.

I do think some kind of session in Vancouver would be
useful to move this along some more and there's
discussion ongoing within the IESG and IAB on how to
best do that. If we (IESG/IAB) fail in that, please do
beat us up mightily at the mic in Vancouver.

Cheers,
S.

[1] https://www.ietf.org/mailman/listinfo/perpass
[2] http://tools.ietf.org/html/draft-trammell-perpass-ppa




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Hannes Tschofenig

On 06.09.2013 13:30, Stewart Bryant wrote:

Tell me what the IETF could be doing that it isn't already doing.

It really depends where you see the boundaries of the IETF.

For some the IETF only produces documents and that's it. Clearly, we 
have a lot of specification work ongoing in different areas that helps 
to mitigate various security vulnerabilities. This ranges from recent 
work on XMPP end-to-end security (as in 
http://tools.ietf.org/html/draft-miller-3923bis-02) all the way to the 
recent RTCWEB discussions on using DTLS-SRTP as a key management protocol.


For other folks the IETF does much more, such as reaching out to those 
deploying our technology. Many folks involved in the IETF community 
produce open source code, write articles in popular computer magazines 
explaining how to use the technology, give presentations at various 
conferences, teach at universities and research institutes, provide 
consulting, etc. The list is long.


It is obviously easier to write (security) documents but somewhat more 
complex to get them widely deployed. Examples: TLS everywhere, DNSSEC, 
email security, routing security, etc.


While we are able to fill gaps in security protocols fairly quickly, we 
don't always seem to make the right choices because the interests of 
various participants are not necessarily aligned. In general, we seem to 
develop an insecure version and a secure version of a protocol. 
Unfortunately, the insecure version gets widely deployed and we have an 
incredibly hard time introducing the secure version.


In addition to the specification work we could think about how to reach 
out to the broader Internet ecosystem a bit better. Since we have lots 
of folks in the IETF I don't think it is an impossible task but it might 
require a bit of coordination. Right now would be a good time to launch 
some of those initiatives since most people currently understand the 
need for security.


Ciao
Hannes



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Jorge Amodio
IMHO. There is no amount of engineering that can fix stupid people doing
stupid things... on both sides of the stupid line.

-J


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Peter Saint-Andre

On 9/6/13 1:47 AM, Adam Novak wrote:
> On 09/05/2013 08:19 PM, Brian E Carpenter wrote:
>> Tell me what the IETF could be doing that it isn't already
>> doing.
>> 
>> I'm not talking about what implementors and operators and users
>> should be doing; still less about what legislators should or
>> shouldn't be doing. I care about all those things, but the
>> question here is what standards or informational outputs from the
>> IETF are needed, in addition to what's already done or in the
>> works.
>> 
>> I don't intend that to be a rhetorical question.
>> 
>> Brian
> 
> One way to frustrate this sort of dragnet surveillance would be to 
> reduce centralization in the Internet's architecture. Right now,
> the way the Internet works in practice for private individuals, all
> your traffic goes up one pipe to your ISP. It's trivial to tap,
> since the tapping can be centralized at the ISP end.
> 
> If the IETF focused on developing protocols (and reserving the
> necessary network numbers) to facilitate direct network peering
> between private individuals, it could make it much more expensive
> to mount large-scale traffic interception attacks.

+1. There's already work on things like MANET, but this seems a useful
avenue of work.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Alan Johnston
On Fri, Sep 6, 2013 at 7:07 AM, Hannes Tschofenig  wrote:

> On 06.09.2013 13:30, Stewart Bryant wrote:
>
>> Tell me what the IETF could be doing that it isn't already doing.
>>
> It really depends where you see the boundaries of the IETF.
>
> For some the IETF only produces documents and that's it. Clearly, we have
> a lot of specification work ongoing in different areas that helps to
> mitigate various security vulnerabilities. This ranges from recent work on
> XMPP end-to-end security (as in http://tools.ietf.org/html/**
> draft-miller-3923bis-02)
> all the way to the recent RTCWEB discussions on using DTLS-SRTP as a key
> management protocol.
>

If we took protection against MitM attacks seriously, we would be using
ZRTP for RTCWEB instead of DTLS-SRTP.  See

 http://tools.ietf.org/html/draft-johnston-rtcweb-zrtp

- Alan -


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Martin Sustrik

On 06/09/13 14:07, Hannes Tschofenig wrote:


While we are able to fill gaps in security protocols fairly quickly we
don't always seem to make the right choices because the interests of
various participants are not necessarily aligned.


So, what if an NSA guy comes in and proposes a backdoor to be added to a 
protocol? Is it even a valid interest? Does the IETF as an organisation have 
anything to say about that, or does it remain strictly neutral?


Martin



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Eliot Lear

On 9/6/13 3:04 PM, Martin Sustrik wrote:
> So, what if an NSA guy comes in and proposes a backdoor to be added to
> a protocol? Is it even a valid interest? Does the IETF as an organisation
> have anything to say about that, or does it remain strictly neutral?
>
It's happened before and we as a community have said no.  See RFC 2804.


Re: pgp signing in van

2013-09-06 Thread Russ Housley
Dave:

>> is pgp compromised?
> 
> PGP is a packaging method.  Absent grossly incompetent packaging -- and I've 
> never heard claims that PGP or S/MIME were guilty of that -- my sense is that 
> the interesting security mechanisms are the underlying algorithms.
> 
> Is there something about PGP that creates different exposures than S/MIME, in 
> terms of those algorithms?  (Key management has obvious differences, of 
> course.)

The biggest difference is PKI vs. web of trust.  You do not need a key signing 
event for a PKI -- you have already decided (or a vendor decided for you) to 
trust the Certificate Authority.
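
A small sketch of the PKI side of that distinction, using Python's standard
library (an illustration, not anything from the message above): the trust
decision was effectively made in advance by whoever shipped the CA bundle,
so no key-signing event is needed before verifying a peer:

    # Sketch: in the PKI model, trust anchors come from a pre-installed CA
    # bundle (chosen by the OS or browser vendor), so a connection can be
    # verified without any prior key exchange between the two users.
    import socket
    import ssl

    ctx = ssl.create_default_context()  # loads the vendor-shipped CA store
    with socket.create_connection(("www.ietf.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.ietf.org") as tls:
            print(tls.getpeercert()["subject"])  # verified against those CAs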

Russ



Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Pete Resnick

On 9/6/13 12:54 AM, t.p. wrote:

- Original Message -
From: "Phillip Hallam-Baker" 
Cc: "IETF Discussion Mailing List" 
Sent: Friday, September 06, 2013 4:56 AM

The design I think is practical is to eliminate all UI issues by 
insisting that encryption and decryption are transparent. Any email 
that can be sent encrypted is sent encrypted.


That sounds like the 'End User Fallacy number one' that I encounter 
all the time in my work. If only everything were encrypted, then we 
would be completely safe.


Actually, I disagree that this fallacy is at play here. I think we need 
to separate the concept of end-to-end encryption from authentication 
when it comes to UI transparency. We design UIs now where we get in the 
user's face about doing encryption if we cannot authenticate the other 
side and we need to get over that. In email, we insist that you 
authenticate the recipient's certificate before we allow you to install 
it and to start encrypting, and prefer to send things in the clear until 
that is done. That's silly and is based on the assumption that 
encryption isn't worth doing *until* we know it's going to be done 
completely safely. We need to separate the trust and guarantees of 
safeness (which require *later* out-of-band verification) from the whole 
endeavor of getting encryption used in the first place.


pr

--
Pete Resnick
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: REVISED Last Call: (The Pseudowire (PW) & Virtual Circuit Connectivity Verification (VCCV) Implementation Survey Results) to Informational RFC

2013-09-06 Thread Andrew G. Malis
Abdussalam,

Thanks again, following IETF last call I'll discuss actions to take on the
draft with the IESG.

Cheers,
Andy


On Thu, Sep 5, 2013 at 6:00 PM, Abdussalam Baryun <
abdussalambar...@gmail.com> wrote:

> Thanks Andrew, I am happy to see a survey draft; I have never seen one
> before in the IETF. However, if a survey was done before in the IETF, it
> would be interesting to mention that, if you think it is relevant.
>
> On 9/5/13, Andrew G. Malis  wrote:
> > Abdussalam,
> >
> > Many thanks for your review and comments on the draft. I have some
> answers
> > inline.
> >
> > On Wed, Sep 4, 2013 at 10:24 PM, Abdussalam Baryun <
> > abdussalambar...@gmail.com> wrote:
> >
> >> The Reviewer: Abdussalam Baryun
> >> Date: 05.09.2013
> >> I-D name: draft-ietf-pwe3-vccv-impl-survey-results
> >> Received your Request dated 04.09.2013
> >> ++
> >>
> >> The reviewer supports the draft subject to amendments. Overall the
> >> survey is not easy to use as a source of information about the users of
> >> such technology, but easier as a source of information about the
> >> responses of companies.
> >>
> >> AB> I prefer the title to start as: A Survey of ..
> >>
> >
> > Andy> The draft is reporting the results of the survey, rather than being
> > the survey, so the title couldn't start as you suggested. A possibility
> > could be "The Results of a Survey on Pseudowire (PW) & Virtual Circuit
> > Connectivity Verification (VCCV) Implementations", but I think the
> existing
> > title is more concise.
>
> Yes that was my aim, thanks,
> >
> > Abstract> This survey of the PW/VCCV user community was conducted to
> >> determine implementation trends. The survey and results is presented
> >> herein.
> >>
> >> AB> How did the survey determine implementations related to users (are
> >> they generally known or unknown, or chosen by the authors...etc). What kind of
> >> results?
> >>
> >
> > Andy> The survey was of service providers deploying pseudowires and VCCV.
> > The "users", in this case, are service providers.
>
> ok, if described in the document, and how were they selected? Is it on
> the basis of their work volume, or something else?
> >
> >
> >> AB> the abstract starts interesting but ends making the results not
> >> clear what it was (good, reasonable, expected, positive, had
> >> conclusions..etc)?
> >> AB> The draft states that it has no conclusion, because it is not
> >> intended for that but to help in knowing results to help in other
> >> future drafts. However, the abstract mentions that the survey
> >> conducted to determine (not understood how to determine without
> >> conclusions or analysis).
> >>
> >
> > Andy> It wasn't the job of the people conducting the survey to draw
> > conclusions from the results, it was for them to report the results so
> that
> > the working group could collectively draw conclusions in their ongoing
> > work. At the time, the WG needed information on which combinations of PW
> > and VCCV options were actually in use, and the survey was used to collect
> > that information.
>
> Ok, the WG needs information, but if I remember correctly, the document
> does not state/define such a need to match the survey.
>
> >
> >
> >> Introduction>
> >> In order to assess the best approach to address the observed
> >> interoperability issues, the PWE3 working group decided to solicit
> >> feedback from the PW and VCCV user community regarding
> >> implementation.  This document presents the survey and the
> >> information returned by the user community who participated.
> >>
> >> AB> the introduction needs to show the importance of the survey, or
> >> what makes such decision from the WG (i.e. seems like the WG has not
> >> cover all types of community, not sure)?
> >> AB> Why did the WG decide to conduct the survey using a questionnaire?
> >>
> >
> > Andy> The part of the Introduction on page 3 provides the background,
> > rationale, and importance of the survey. We used a questionnaire as that
> > form of survey is easiest for the respondents and allowed us to use
> > SurveyMonkey to conduct the survey.
>
> The questionnaire method has advantages and disadvantages, so if one
> section mentions that the validity of the results is linked to the method,
> I think the reader will know how much he can depend on such results.
> >
> >
> >> AB> suggest amending> the document presents the questionnaire form
> >> questions and information returned ..
> >>
> >
> > Andy> We could change the sentence to say "This document presents the
> > survey questionnaire and the information returned by the user community
> who
> > participated."
> >
>
> my language may not be perfect, but I agree with amending it to show the
> survey method and the method of result collection.
> >
> >> Sections 1.1 1.2 and 1.3>
> >> ..questions based on direction of the WG chairs..
> >> There were seventeen responses to the survey that met the validity
> >> requirements in
> >> Section 3.  The responding companies are listed below in Section 2.1.
> >>
> >> AB> Why we

Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Scott Brim
I wouldn't focus on government surveillance per se.  The IETF should
consider that breaking privacy is much easier than it used to be,
particularly given consolidation of services at all layers, and take
that into account in our engineering best practices.  Our mission is
to make the Internet better, and right now the Internet's weakness in
privacy is far from "better".  The mandatory security considerations
section should become security and privacy considerations.  The
privacy RFC should be expanded and worded more strongly than just nice
suggestions.  Perhaps the Nomcom should ask candidates about their
understanding of privacy considerations.

Scott


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Bjoern Hoehrmann
* Brian E Carpenter wrote:
>Tell me what the IETF could be doing that it isn't already doing.

The United States justify these programs saying they are primarily used
to support their various current and future war efforts. Not meeting at
any level in countries currently at war might be a sound IETF policy.
-- 
Björn Höhrmann · mailto:bjo...@hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Noel Chiappa
> From: Martin Millnert 

> Bruce was ... suggesting that encrypting everything on the wire makes
> both metadata and payload collection from wires less valuable. Here
> comes the key point: Encrypting everything on the wire raises the cost
> for untargeted mass surveillance significantly. And that is what it is
> all about.

I have no problems with encrypting everything, as long as we realize that in
doing so, we're only solving one corner of the problem, and the watchers will
just move their efforts elsewhere; all intelligent attackers always look for
the weak point, no?

(Although I have to wonder at the computing load needed to do so. I gather
e.g. Google's datacenters use enormous amounts of energy - I wonder if mass
encryption of all traffic on the Internet would be literally a 'boiling the
ocean' solution... I'm amused by the memory of people who used to react with
shock and horror to variable length addresses, because of the extra
computational load required to handle _them_)

> And best is of course if this can be end to end

That's going to take quite a while to accomplish; it requires updating all the
hosts. (I know, we don't have to get to 99.9%, but it's still non-trivial to
get to, say, 70%.)

Noel


Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 06:20 -0700 Pete Resnick
 wrote:

> Actually, I disagree that this fallacy is at play here. I
> think we need to separate the concept of end-to-end encryption
> from authentication when it comes to UI transparency. We
> design UIs now where we get in the user's face about doing
> encryption if we cannot authenticate the other side and we
> need to get over that. In email, we insist that you
> authenticate the recipient's certificate before we allow you
> to install it and to start encrypting, and prefer to send
> things in the clear until that is done. That's silly and is
> based on the assumption that encryption isn't worth doing
> *until* we know it's going to be done completely safely. We
> need to separate the trust and guarantees of safeness (which
> require *later* out-of-band verification) from the whole
> endeavor of getting encryption used in the first place.

Pete,

At one level, I completely agree.  At another, it depends on the
threat model.  If the presumed attacker is skilled and has
access to packets in transit, then it is necessary to assume that
circumventing safeguards against MITM attacks is well within that attacker's
resource set.  If those conditions are met, then encrypting on
the basis of a key or certificate that can't be authenticated
is delusional protection against that threat.  It may still be
good protection against more casual attacks, but we do the users
the same disservice by telling them that their transmissions are
secure under those circumstances that we do by telling them that
their data are secure when they see a little lock in their web
browsers.

Certainly "encrypt first, authenticate later" is reasonable if
one doesn't send anything sensitive until authentication has
been established, but it seems to me that would require a rather
significant redesign of how people do things, not just how
protocols work.

best,
   john



Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Tony Finch
Theodore Ts'o  wrote:

> Speaking of which, Jim Gettys was trying to tell me yesterday that
> BIND refuses to do DNSSEC lookups until the endpoint client has
> generated a certificate.

That is wrong. DNSSEC validation affects a whole view - i.e. it is
effectively global.

Clients can request DNSSEC records or not, regardless of whether they do
any transaction security. Clients can do DNSSEC validation without any
private keys.
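
To illustrate (a sketch assuming the dnspython package, not part of the text
above): a stub client can set the DO bit and check the AD flag returned by a
validating resolver without holding any certificate or private key of its own:

    # Sketch: ask a validating resolver for DNSSEC data (DO=1) and inspect
    # the AD (authenticated data) flag. No client-side key material is
    # involved; validation chains up to the published root trust anchor.
    import dns.flags
    import dns.message
    import dns.query

    def query_with_do(name, rdtype="A", resolver_ip="8.8.8.8"):
        # resolver_ip is just an example validating resolver address
        query = dns.message.make_query(name, rdtype, want_dnssec=True)  # DO=1
        response = dns.query.udp(query, resolver_ip, timeout=3)
        authenticated = bool(response.flags & dns.flags.AD)
        return authenticated, response.answer

    if __name__ == "__main__":
        ad, answer = query_with_do("ietf.org")
        print("AD flag set:", ad)
        for rrset in answer:
            print(rrset)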

Tony.
-- 
f.anthony.n.finch  http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.


Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Theodore Ts'o
On Fri, Sep 06, 2013 at 03:26:42PM +0100, Tony Finch wrote:
> Theodore Ts'o  wrote:
> 
> > Speaking of which, Jim Gettys was trying to tell me yesterday that
> > BIND refuses to do DNSSEC lookups until the endpoint client has
> > generated a certificate.
> 
> That is wrong. DNSSEC validation affects a whole view - i.e. it is
> effectively global.
> 
> Clients can request DNSSEC records or not, regardless of whether they do
> any transaction security. Clients can do DNSSEC validation without any
> private keys.

That's what I hoped, thanks.

- Ted


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Stefan Winter
+1. I'd +10 if I could :-)

> One thing that would be helpful is to encourage the use of
> Diffie-Hellman everywhere.  Even without certificates that can be
> trusted, we can eliminate the ability of casual, dragnet-style
> surveillance.  Sure, an attacker can still do a MITM attack.  But (a)
> people who are more clueful can do certificate pinning/verification,
> and (b) if the NSA is really putting data taps into tier 1 providers'
> high speed interconnects, they can only carry out MITM attacks on a
> bulk scale by placing racks and racks of servers, which will require
> significant amounts of cooling and power, in places where they would be much
> more likely to be noticed.  It's no longer a data tap hidden
> away somewhere in a closet near a tier 1's NAP.
> 
> For too long, I think, we've let the perfect be the enemy of the good.
> Using TLS with DH to secure SMTP connections is valuable even if it is
> subject to MITM attacks, and even if the NSA/FBI can hand a National
> Security Letter to the cloud provider.  At least this way they will be
> forced to go the NSL route (and it will show up in whatever
> transparency reports that Google or Microsoft or Facebook are allowed
> to show to the public), or spend $$$ on huge racks of servers in
> public data centers, which maybe means less money to subvert standards
> setting activities.
> 
> Although perfect security is ideal, increasing the cost of casual
> style dragnet surveillance is still a Good Thing.
> 
>   - Ted
> 


-- 
Stefan WINTER
Ingenieur de Recherche
Fondation RESTENA - Réseau Téléinformatique de l'Education Nationale et
de la Recherche
6, rue Richard Coudenhove-Kalergi
L-1359 Luxembourg

Tel: +352 424409 1
Fax: +352 422473

PGP key updated to 4096 Bit RSA - I will encrypt all mails if the
recipient's key is known to me

http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xC0DE6A358A39DC66




Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Pete Resnick

On 9/6/13 7:02 AM, John C Klensin wrote:

...It may still be
good protection against more casual attacks, but we do the users
the same disservice by telling them that their transmissions are
secure under those circumstances that we do by telling them that
their data are secure when they see a little lock in their web
browsers.

Certainly "encrypt first, authenticate later" is reasonable if
one doesn't send anything sensitive until authentication has
been established, but it seems to me that would require a rather
significant redesign of how people do things, not just how
protocols work.
   


Actually, I think the latter is really what I'm suggesting. We've got to do 
the encryption (for both the minimal protection from passive attacks as 
well as setting things up for doing good security later), but we've also 
got to design UIs that not only make it easier for users to deal with 
encryption, but change the way people think about it.


(Back when we were working on Eudora, we got user support complaints 
that "people can read my email without typing my password". What they in 
fact meant was that if you started the application, it would normally 
ask for your POP password in order to check mail, but you could always 
click "Cancel" and read the mail that had been previously downloaded. 
Users presumed that since they were being prompted for the password when 
the program launched -- just like what used to happen when they "logged 
in" to read mail on their Unix/etc. accounts -- the password was 
protecting the local data, not that it was only being used to 
authenticate to the server to download mail. You'd ask them why they 
weren't so worried about people reading their Microsoft Word files and 
they'd give you dumb looks. Sometimes you do have to redesign "how 
people do things".)


pr

--
Pete Resnick
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Theodore Ts'o
One thing that would be helpful is to encourage the use of
Diffie-Hellman everywhere.  Even without certificates that can be
trusted, we can eliminate the ability of casual, dragnet-style
surveillance.  Sure, an attacker can still do a MITM attack.  But (a)
people who are more clueful can do certificate pinning/verification,
and (b) if the NSA is really putting data taps into tier 1 providers'
high speed interconnects, they can only carry out MITM attacks on a
bulk scale by placing racks and racks of servers, which will require
significant amounts of cooling and power, in places where they would be much
more likely to be noticed.  It's no longer a data tap hidden
away somewhere in a closet near a tier 1's NAP.

For too long, I think, we've let the perfect be the enemy of the good.
Using TLS with DH to secure SMTP connections is valuable even if it is
subject to MITM attacks, and even if the NSA/FBI can hand a National
Security Letter to the cloud provider.  At least this way they will be
forced to go the NSL route (and it will show up in whatever
transparency reports that Google or Microsoft or Facebook are allowed
to show to the public), or spend $$$ on huge racks of servers in
public data centers, which maybe means less money to subvert standards
setting activities.
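
A minimal sketch of that trade-off, using Python's standard library (an
illustration, not anything from the message above): negotiate STARTTLS even
when the peer certificate cannot be verified, rather than falling back to
cleartext:

    # Sketch: opportunistic STARTTLS for SMTP. The session gets encrypted
    # even if the server certificate cannot be verified, which raises the
    # cost of passive dragnet collection but does not stop an active MITM.
    import smtplib
    import ssl

    def open_opportunistic_smtp(host, port=25):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # accept unverifiable certificates...
        ctx.verify_mode = ssl.CERT_NONE  # ...instead of staying in cleartext
        conn = smtplib.SMTP(host, port, timeout=10)
        conn.ehlo()
        if conn.has_extn("starttls"):
            conn.starttls(context=ctx)   # encrypted, ideally with (EC)DHE
            conn.ehlo()                  # re-identify over protected channel
        return conn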

Although perfect security is ideal, increasing the cost of casual
style dragnet surveillance is still a Good Thing.

- Ted


Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Theodore Ts'o
On Fri, Sep 06, 2013 at 06:20:48AM -0700, Pete Resnick wrote:
> 
> In email,
> we insist that you authenticate the recipient's certificate before
> we allow you to install it and to start encrypting, and prefer to
> send things in the clear until that is done. That's silly and is
> based on the assumption that encryption isn't worth doing *until* we
> know it's going to be done completely safely.

Speaking of which, Jim Gettys was trying to tell me yesterday that
BIND refuses to do DNSSEC lookups until the endpoint client has
generated a certificate.  Which is bad, since out-of-box, a home
router doesn't have much in the way of entropy at that point, so you
shouldn't be trying to generate certificates at the time of the first
boot-up, but rather to delay until you've had enough of a chance to
gather some entropy.  (Or put in a real hardware RNG, but a
race-to-the-bottom in terms of BOM costs makes that not realistic.)  I
told him that sounds insane, since you shouldn't need a
certificate/private key in order to do digital signature verification.

Can someone please tell me that BIND isn't being this stupid?

- Ted


Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Joe Abley

On 2013-09-06, at 10:16, Theodore Ts'o  wrote:

> On Fri, Sep 06, 2013 at 06:20:48AM -0700, Pete Resnick wrote:
>> 
>> In email,
>> we insist that you authenticate the recipient's certificate before
>> we allow you to install it and to start encrypting, and prefer to
>> send things in the clear until that is done. That's silly and is
>> based on the assumption that encryption isn't worth doing *until* we
>> know it's going to be done completely safely.
> 
> Speaking of which, Jim Gettys was trying to tell me yesterday that
> BIND refuses to do DNSSEC lookups until the endpoint client has
> generated a certificate.

All modern DNSSEC-capable resolvers (regardless of whether validation has been 
turned on) will set DO=1 in the EDNS0 header and will retrieve signatures in 
responses if they are available. BIND9 is not a counter-example. Regardless, an 
end host downstream of a resolver that behaves differently (but that is capable 
of and desires to perform its own validation) can detect an inability to 
receive signatures, and can act accordingly.

There is no client certificate component of DNSSEC. The trust anchor for the 
system is published as part of root zone processes at IANA, and a variety of 
mechanisms are available to infer trust in a retrieved trust anchor. (These 
could use more work, but they exist.)

There is a (somewhat poorly-characterised and insufficiently-measured) 
interaction with a variety of middleware in firewalls, captive hotel hotspots, 
etc. that will prevent an end host from being able to validate responses from 
the DNS, but in those cases the inability to validate is known by the end host; 
you still have the option of closing your laptop and reattaching it to the 
network somewhere else.

>  Which is bad, since out-of-box, a home
> router doesn't have much in the way of entropy at that point, so you
> shouldn't be trying to generate certificates at the time of the first
> boot-up, but rather to delay until you've had enough of a chance to
> gather some entropy.

In DNSSEC, signatures are generated before publication of zone data, and are 
verified by validators. You don't need a high-quality entropy source to 
validate a signature. There is no DNSSEC requirement for entropy in a home 
router or an end host.

>  (Or put in a real hardware RNG, but a
> race-to-the-bottom in terms of BOM costs makes that not realistic.)  I
> told him that sounds insane, since you shouldn't need a
> certificate/private key in order to do digital signature verification.

I think you were on the right track, there.

> Can someone please tell me that BIND isn't being this stupid?

This thread has mainly been about privacy and confidentiality. There is nothing 
in DNSSEC that offers either of those, directly (although it's an enabler 
through approaches like DANE to provide a framework for secure distribution of 
certificates). If every zone was signed and if every response was validated, it 
would still be possible to tap queries and tell who was asking for what name, 
and what response was returned.


Joe

Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Noel Chiappa
> From: Scott Brim 

> I wouldn't focus on government surveillance per se. The IETF should
> consider that breaking privacy is much easier than it used to be ...
> right now the Internet's weakness in privacy is far from "better". The
> mandatory security considerations section should become security and
> privacy considerations. The privacy RFC should be expanded and worded
> more strongly than just nice suggestions. 

Excellent point. There are a lot more threats to privacy than just the NSA
(and similar agencies in other large, powerful countries, which probably do
their own snooping, although not on the scale of the NSA's).

I am minded of the 'recent' revelations that Google, etc trawl through email
they handle, looking for URLs, which they then crawl. (I say 'recent' because
I discovered this some years ago. A 'private' page of mine - i.e. one with no
links to it - wound up in Google's search results, because I'd sent someone
on gmail a message with the URL in it...) Etc, etc. Added up across all the
large companies, I reckon the amount of 'private' surveillance is probably
close to what the NSA does.


> From: Theodore Ts'o 

> For too long, I think, we've let the perfect be the enemy of the good.
> At least this way they will be forced to go the NSL route ... or spend
> $$$ on huge racks of servers in public data centers, which maybe means
> less money to subvert standards setting activities.
> ...
> Although perfect security is ideal, increasing the cost of casual style
> dragnet surveillance is still a Good Thing.

Good point. But let's not make a similar diversion ourselves.

I suspect that for most people, the results of having their machine infected
with a virus, or identity theft from compromised information, is probably a
lot more painful than being the subject of dragnet surveillance by a
government (irritating though that may be).

So if we throw resources at attacking the dragnet surveillance, and take
those resources from efforts to tackle other security problems, that might
not be in the best overall interests of the networks' users.

Noel


PS: I'm having fun trying to imagine the reaction of the people at the NSA,
GCHQ, etc who are reading this thread. (Hi, all!)


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 2:46 AM, SM  wrote:
> At 20:08 05-09-2013, Ted Lemon wrote:
>> I think we all knew NSA was collecting the data.   Why didn't we do 
>> something about it sooner?   Wasn't it an emergency when the PATRIOT act was 
>> passed?   We certainly thought it was an emergency back in the days of 
>> Skipjack, but then they convinced us we'd won.   Turns out they just went 
>> around us.
> 
> I would describe it as a scuffle instead of a battle.  My guess is that the 
> IETF did not do anything sooner as nobody knows what to do, or it may be that 
> the IETF has become conservative and it does not pay attention to the 
> minority report.

It was definitely a battle.   There were threats of imprisonment, massive 
propaganda dumps (think of the children!), etc.   People broke the law, moved 
countries, etc.   We just forget it because "we" "won" it, and it seems smaller 
in memory than it was when it was happening.

The IETF didn't do anything because the tin foil hat contingent didn't have 
consensus, and we had no data to force the point.   As you alluded to earlier, 
it's historically been very difficult to get people to treat security and 
privacy seriously, and frankly it still is.

So this isn't an emergency.   It's a teachable moment.   We should pay 
attention.



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Stephane Bortzmeyer
On Fri, Sep 06, 2013 at 08:20:17AM -0700,
 Dave Crocker  wrote 
 a message of 21 lines which said:

> We currently do not have a concise catalog of the basic 'privacy'
> threats and their typical mitigations, appropriate for concern with
> IETF protocols.

What about RFC 6973?


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
On Fri, Sep 6, 2013 at 10:55 AM, Dave Crocker  wrote:
> In other words, the IETF needs to assume that we don't know what will work
> for end users and we need to therefore focus more on processing by end
> /systems/ rather than end /users/.

... and do not close off any options because we assume people won't
want to use them.


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dave Crocker

On 9/6/2013 5:51 AM, Jorge Amodio wrote:


IMHO. There is no amount of engineering that can fix stupid people doing
stupid things... on both sides of the stupid line.



Correct.  Within the IETF, the most serious example of stupidity is any 
line of analysis that considers end-users to be stupid or lazy, rather 
than treating them as system components with various pragmatic 
constraints, just like any other system component.


So the real challenge is for us to be clear about the pragmatics when we 
talk about end-users.  Here the real problem is that the pragmatics are 
only superficially understood, even by the usability (HCI, UXD, UCE, 
UCD...) experts.


That points to a second serious challenge, namely that we can't know 
very well what will work for end-users and what won't.


The model that I've described for some years is that the best cognitive 
processing models of end-users -- processing limits, memory limits, 
attention limits, etc. -- suggest reasonable theories 
for /starting/ designs, but never ensure good /final/ designs.  That 
requires testing.


At this summer's SOUPS conference I floated this summary past a variety 
of senior Usable Security folks during one of the sessions, and they 
generally nodded in agreement.


In other words, the IETF needs to assume that we don't know what will 
work for end users and we need to therefore focus more on processing by 
end /systems/ rather than end /users/.


We also need to avoid the 'then a miracle happens' faith that end system 
designers will magically figure out the best user interface design for 
security, since they have failed at that for the last 25 years; they'll 
eventually succeed but they haven't, so far.


d/


--
Dave Crocker
Brandenburg InternetWorking
bbiw.net


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
On Fri, Sep 6, 2013 at 11:41 AM, Pete Resnick  wrote:
> OK, one last nostalgic anecdote about Eudora before I go back to finishing
> my spfbis Last Call writeup:
>
> MacTCP (the TCP/IP stack for the original MacOS) required a handler routine
> for ICMP messages for some dumb reason; you couldn't just set it to null in
> your code. So Steve implemented one. Whenever an ICMP message came in for a
> current connection (e.g., Destination Unreachable), Eudora would put up a
> dialog box. It read "Eudora has received an ICMP Destination Unreachable
> message." The box had a single button. It read, "So What?"
>
> Working for Steve was a hoot.

(like)


Re: Re: [v6ops] Last Call: (Internet Protocol Version 6 (IPv6) Profile for 3GPP Mobile Devices) to Informational RFC

2013-09-06 Thread Ray Hunter

Gert Doering wrote:
> Hi,
>
> On Wed, Sep 04, 2013 at 06:25:17PM +0900, Lorenzo Colitti wrote:
>>> Sure, but the majority are mandatory, and don't forget that some of them
>>> are quite large (e.g., "implement RFC 6204"). Also, I believe it's not the
>>> IETF's role to produce vendor requirements documents. The considerations
>>> that the IETF deals with are primarily technical, and "we want this stuff
>>> from our vendors" is not a technical issue.
>>>
>>> *[Med] With all due respect, you have been making the same argument since the
>>> initial call for adoption and you seem to ignore that we are no longer at that
>>> stage. That's not fair at all.*
>>>
>> I'm just saying it here so that everyone in the community can see it. If
>> it's an IETF document it has to have IETF consensus, and since I feel that
>> the arguments were not properly taken into account in the WG (read:
>> ignored), I think it's important that the community see them before we
>> publish this document.
>
> +1
>
> Gert Doering
> -- NetMaster

I know I'm formally a couple of days late on the WGLC (work!).

I agree with Lorenzo.

And in any case it isn't ready to ship IMHO. e.g. How can REQ#33 and
REQ#34 be enforced by a manufacturer (during compliance testing)?

-- 
Regards,
RayH



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Noel Chiappa
> From: Spencer Dawkins 

> I have to wonder whether weakening crypto systems to allow pervasive
> passive monitoring by "national agencies" would weaken them enough for
> technologically savvy corporations to monitor their competitors, for
> instance.

More importantly, if crypto systems are weakened so that the intelligence
agencies of the 'good guys' can monitor them, they're probably weak enough
that the intelligence agencies of the 'bad guys' can monitor them too.

The smarts level on the other side should not be under-estimated, although I
fear this often happens. 


> From: Ted Lemon 

> What we should probably be thinking about here is:
> - Mitigating single points of failure (IOW, we _cannot_ rely
>  on just the root key)
> - Hybrid solutions (more trust sources means more work to
> compromise)
> ...
> - Multiple trust anchors (for stuff that really matters, we
> can't rely on the root or on a third party CA)

I'm not sure if this is entirely responsive to your points here, but it is
possible to have multiple 'root trust anchors' with the DNS. I have worked
this out in some detail, which I won't give here.

But basically the concept is that multiple entities (e.g. IEEE, EFF,
add-your-favourite here) can all sign the root zone (independently, but in
parallel), and also any subsidiary zones they care about (e.g. .EDU).
(Signing everything all the way down is clearly impractical, but if you can
n-way secure the root of the tree, that will help.)

I seem to recall that DNSSEC as it stands could deal with this; the real
issue would be gaining agreement from the zone owner to include multiple
signatures. Of course, it's possible to distribute those signatures in other
ways, but that would require new mechanisms.
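A schematic sketch of that n-way idea in Python, purely for illustration: HMAC
stands in for real RRSIG/DNSKEY processing, and the key names are invented.

    import hmac, hashlib

    def sign(key, data):
        # Stand-in for producing a signature over the RRset with a given key.
        return hmac.new(key, data, hashlib.sha256).digest()

    def trusted(data, signatures, trust_anchors):
        # Accept the data if any locally configured anchor matches any of
        # the independently published, parallel signatures.
        return any(
            hmac.compare_digest(sign(key, data), sig)
            for key in trust_anchors
            for sig in signatures
        )

    root_rrset = b"root zone apex RRset (placeholder bytes)"
    signers = [b"key-held-by-IEEE", b"key-held-by-EFF", b"key-held-by-ICANN"]
    published = [sign(k, root_rrset) for k in signers]   # parallel signatures

    # A resolver that trusts only one of the parties still validates.
    print(trusted(root_rrset, published, [b"key-held-by-EFF"]))   # True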

Noel


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 3:25 AM, Måns Nilsson  wrote:
> I do think that more distributed technologies like DANE play an important
> rôle here.

Right, because there's no way the NSA could ever pwn the DNS root key.

What we should probably be thinking about here is:

  - Mitigating single points of failure (IOW, we _cannot_ rely
on just the root key)
  - Hybrid solutions (more trust sources means more work to
compromise)
  - Sanity checking (if a key changes unexpectedly, we should
be able to notice)
  - Multiple trust anchors (for stuff that really matters, we
can't rely on the root or on a third party CA)
  - Trust anchor establishment for sensitive communications
(e.g. with banks)

The threat model isn't really the NSA per se—if they really want to bug you, 
they will, and you can't stop them, and that's not a uniformly bad thing.   The 
problem is the breathtakingly irresponsible weakening of crypto systems that 
has been alleged here, and what we can do to mitigate that.   Even if we aren't 
sure that it's happened, or precisely what's happened, it's likely that it has 
happened, or will happen in the near future.  We should be thinking in those 
terms, not crossing our fingers and hoping for the best.
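One of the bullets above, sanity checking when a key changes unexpectedly, can
be sketched in a few lines: remember the SHA-256 fingerprint of a peer's
certificate the first time it is seen and complain loudly if it later changes.
A minimal Python illustration; the in-memory dict stands in for persistent
storage, and legitimate rollovers would need real handling.

    import hashlib, socket, ssl

    pinned = {}   # hostname -> hex fingerprint seen earlier

    def check_peer(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        fp = hashlib.sha256(der).hexdigest()
        if host in pinned and pinned[host] != fp:
            raise RuntimeError("certificate for %s changed unexpectedly" % host)
        pinned[host] = fp
        return fp

    print(check_peer("www.example.com"))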



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 08:41 -0700 Pete Resnick
 wrote:

>...
> Absolutely. There is clearly a good motivation: A particular
> UI choice should not *constrain* a protocol, so it is
> essential that we make sure that the protocol is not
> *dependent* on the UI. But that doesn't mean that UI issues
> should not *inform* protocol design. If we design a protocol
> such that it makes assumptions about what the UI will be able
> to provide without verifying those assumptions are realistic,
> we're in serious trouble. I think we've done that quite a bit
> in the security/application protocol space.

Yes.  It also has another implication that goes to Dave's point
about how the IETF should interact with UI designers.   In my
youth I worked with some very good early generation HCI/ UI
design folks.  Their main and most consistent message was that,
from a UI functionality standpoint, the single most important
consideration for a protocol, API, or similar interface was to
be sure that one had done a thorough analysis of the possible
error and failure conditions and that sufficient information
about those conditions could get to the outside to permit the UI
to report things and take action in an appropriate way.  From
that point of view, any flavor of a "you lose" -> "ok" message,
including blue screens and "I got irritated and disconnected
you"  is a symptom of bad design and much more commonly bad
design in the protocols and interfaces than in the UI.  

Leaving the UI designs to the UI designers is fine but, if we
don't give them the tools and information they need, most of the
inevitable problems are ours.


> OK, one last nostalgic anecdote about Eudora before I go back
> to finishing my spfbis Last Call writeup:
>...
> Working for Steve was a hoot.

I can only imagine, but the story is not a great surprise.

   john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dave Crocker



There are a lot more threats to privacy than just the NSA


We currently do not have a concise catalog of the basic 'privacy' threats 
and their typical mitigations, appropriate for concern with IETF 
protocols.  In effect, every new protocol effort must start with a blank 
sheet, and invent its own list of threats and possible protections 
against them.


One common outcome from this is that we tend to think of very localized 
mechanisms, rather than end-to-end.  So we assume a model of things 
being one-hop or we implicitly trust intermediaries.  (Hint, the web is 
often not 1-hop, what with proxies, etc...)


We need privacy templates for protocol design.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 07:38 -0700 Pete Resnick
 wrote:

> Actually, I think the latter is really what I'm suggesting.
> We've got do the encryption (for both the minimal protection
> from passive attacks as well as setting things up for doing
> good security later), but we've also got to design UIs that
> not only make it easier for users to deal with encrpytion, but
> change the way people think about it.
> 
> (Back when we were working on Eudora, we got user support
> complaints that "people can read my email without typing my
> password". What they in fact meant was that if you started the
> application, it would normally ask for your POP password in
>...

Indeed.  And I think that one of the more important things we
can do is to rethink UIs to give casual users more information
about what is going on and to enable them to take intelligent
action on decisions that should be under their control.  There
are good reasons why the IETF has generally stayed out of the UI
area but, for the security and privacy areas discussed in this
thread, there may be no practical way to design protocols that
solve real problems without starting from what information a UI
needs to inform the user and what actions the user should be
able to take and then working backwards.  As I think you know,
one of my personal peeves is the range of unsatisfactory
conditions --from an older version of certificate format or
minor error to a verified revoked certificate -- that can
produce a message that essentially says "continuing may cause
unspeakable evil to happen to you" with an "ok" button (and only
an "ok" button).  

Similarly, even if users can figure out which CAs to trust and
which ones not (another issue and one where protocol work to
standardize distribution of CA reputation information might be
appropriate) editing CA lists whose main admission qualification
today seems to be cosy relationships with vendors (and maybe the
US Govt) to remove untrusted ones and add trusted ones requires
rocket scientist-level skills.  If we were serious, it wouldn't
be that way.  

And the fact that those are 75% or more UI issues is probably no
longer an excuse.
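To make the contrast concrete: narrowing the trusted CA set is easy for a
programmer and still effectively out of reach for a casual user.  A minimal
Python sketch, assuming a locally reviewed bundle named my-trusted-cas.pem
(a hypothetical file name) instead of the platform's default CA list:

    import socket, ssl

    # Trust only the CAs in an explicit, locally reviewed bundle.
    ctx = ssl.create_default_context(cafile="my-trusted-cas.pem")
    with socket.create_connection(("www.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.getpeercert()["subject"])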

john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Brian Trammell
hi Scott, all,

On Sep 6, 2013, at 3:45 PM, Scott Brim  wrote:

> I wouldn't focus on government surveillance per se.  The IETF should
> consider that breaking privacy is much easier than it used to be,
> particularly given consolidation of services at all layers, and take
> that into account in our engineering best practices.  Our mission is
> to make the Internet better, and right now the Internet's weakness in
> privacy is far from "better".

Indeed, pervasive surveillance is merely a special case of eavesdropping as a 
privacy threat, with the important difference that eavesdropping (as discussed 
in RFC 6973) explicitly has a target in mind, while pervasive surveillance 
explicitly doesn't. So what we do to improve privacy will naturally make 
surveillance harder, in most cases; I hope that draft-trammell-perpass-ppa will 
evolve to fill in the gaps.

> The mandatory security considerations
> section should become security and privacy considerations.  The
> privacy RFC should be expanded and worded more strongly than just nice
> suggestions.  Perhaps the Nomcom should ask candidates about their
> understanding of privacy considerations.

Having read RFC 6973 in detail while working on that draft, I'd say it's a very 
good starting point, and indeed even consider it required reading. We can 
certainly take its guidance to heart as if it were more strongly worded than it 
is. :)

Cheers,

Brian

Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 10:43 -0400 Joe Abley
 wrote:

>> Can someone please tell me that BIND isn't being this stupid?
> 
> This thread has mainly been about privacy and confidentiality.
> There is nothing in DNSSEC that offers either of those,
> directly (although it's an enabler through approaches like
> DANE to provide a framework for secure distribution of
> certificates). If every zone was signed and if every response
> was validated, it would still be possible to tap queries and
> tell who was asking for what name, and what response was
> returned.

Please correct me if I'm wrong, but it seems to me that
DANE-like approaches are significantly better than traditional
PKI ones only to the extent to which:

- The entities needing or generating the certificates
are significantly more in control of the associated DNS
infrastructure than entities using conventional CAs are
in control of those CAs.

- For domains that are managed by registrars or other
third parties (I gather a very large fraction of them at
the second level), whether one believes those registrars
or other operators have significantly more integrity and
are harder to compromise than traditional third party CA
operators.

best,
   john




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dave Crocker

On 9/6/2013 8:34 AM, Stephane Bortzmeyer wrote:

On Fri, Sep 06, 2013 at 08:20:17AM -0700,
  Dave Crocker  wrote
  a message of 21 lines which said:


We currently do not have a concise catalog of the basic 'privacy'
threats and their typical mitigations, appropriate for concern with
IETF protocols.


What about RFC 6973?



It certainly provides useful background.  As such, it's an excellent 
starting point for the topic.


However, it is not concise, nor does it offer threat templates or design 
templates.


It also doesn't define privacy...

d/


--
Dave Crocker
Brandenburg InternetWorking
bbiw.net


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread SM

Hi Vinayak,
At 01:13 06-09-2013, Vinayak Hegde wrote:
It is tragic if the community does not understand that strong encryption is 
essential in many cases (with the caveat that it is not a panacea 
for all security breaches). As for raising issues at the last-call: 
why not? The last-call is no different than any other mailing list 
discussion or going to the mic in a physical meeting (other than 
the urgency of having the last chance to comment).


The Last Call is different from going to the microphone in a physical 
meeting.  Fancy speeches at the microphone are impressive.  It takes 
more work to identify issues and explain why it is or can be a 
problem.  Martin Sustrik asked:


At 06:04 06-09-2013, Martin Sustrik wrote:
So, what if an NSA guy comes in and proposes a backdoor to be added 
to a protocol? Is it even a valid interest? Does the IETF as an 
organisation have anything to say about that or does it remain 
strictly neutral?


Would anyone notice it on a Last Call?  Would anyone say something 
about it?  I doubt that.  Ted Lemon said it nicely: "we should pay attention".


Regards,
-sm 



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Pete Resnick

On 9/6/13 8:23 AM, John C Klensin wrote:


I think that one of the more important things we
can do is to rethink UIs to give casual users more information
about what is going on and to enable them to take intelligent
action on decisions that should be under their control.  There
are good reasons why the IETF has generally stayed out of the UI
area but, for the security and privacy areas discussed in this
thread, there may be no practical way to design protocols that
solve real problems without starting from what information a UI
needs to inform the user and what actions the user should be
able to take and then working backwards.
[...]
And the fact that those are 75% or more UI issues is probably no
longer an excuse.
   


Absolutely. There is clearly a good motivation: A particular UI choice 
should not *constrain* a protocol, so it is essential that we make sure 
that the protocol is not *dependent* on the UI. But that doesn't mean 
that UI issues should not *inform* protocol design. If we design a 
protocol such that it makes assumptions about what the UI will be able 
to provide without verifying those assumptions are realistic, we're in 
serious trouble. I think we've done that quite a bit in the 
security/application protocol space.



one of my personal peeves is the range of unsatisfactory
conditions --from an older version of certificate format or
minor error to a verified revoked certificate -- that can
produce a message that essentially says "continuing may cause
unspeakable evil to happen to you" with an "ok" button (and only
an "ok" button).
   


OK, one last nostalgic anecdote about Eudora before I go back to 
finishing my spfbis Last Call writeup:


MacTCP (the TCP/IP stack for the original MacOS) required a handler 
routine for ICMP messages for some dumb reason; you couldn't just set it 
to null in your code. So Steve implemented one. Whenever an ICMP message 
came in for a current connection (e.g., Destination Unreachable), Eudora 
would put up a dialog box. It read "Eudora has received an ICMP 
Destination Unreachable message." The box had a single button. It read, 
"So What?"


Working for Steve was a hoot.

pr

--
Pete Resnick
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Michael Richardson

Brian E Carpenter  wrote:
>> I think we all knew NSA was collecting the data.  Why didn't we do
>> something about it sooner?  Wasn't it an emergency when the PATRIOT
>> act was passed?  We certainly thought it was an emergency back in the
>> days of Skipjack, but then they convinced us we'd won.  Turns out they
>> just went around us.

> Tell me what the IETF could be doing that it isn't already doing.

1) We could be telling the public about the protocols that we designed 10, 15,
   and even 20 years ago. Some of which even have rather widespread
   implementation, but seem to have zero use.
   (S/MIME is in every copy of Outlook and Thunderbird, AFAIK)

What would the spam situation be like if 90% of emails were regularly
signed back in 1999?  Yes, and DKIM can sign message bodies now too.
We should be telling people about it.

2) Use this stuff ourselves

--
Michael Richardson , Sandelman Software Works






Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Adam Novak
You're right that a flat mesh is not the best topology for
long-distance communication, especially with current routing
protocols, which require things like global lists of all routeable
prefixes.

On the protocol front, I suggest that the IETF develop routing
protocols that can work well in a flat mesh topology. Parallelizing
traffic streams over many available routes, so it all doesn't try to
take the shortest path, would appear to be a particularly important
feature, as would preventing all of a node's links from being swamped
by through traffic that other nodes want it to route.
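As a toy illustration of the parallelizing point, here is a sketch (in Python,
not a routing protocol) of choosing several edge-disjoint paths through a small
mesh so that no single link carries all of a flow's traffic:

    from collections import deque

    def bfs_path(graph, src, dst, banned):
        # Shortest path avoiding edges in `banned`; graph maps node -> set of neighbours.
        prev = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in graph[u]:
                if v not in prev and frozenset((u, v)) not in banned:
                    prev[v] = u
                    q.append(v)
        return None

    def disjoint_paths(graph, src, dst, k):
        # Greedily collect up to k paths that share no links.
        banned, paths = set(), []
        for _ in range(k):
            p = bfs_path(graph, src, dst, banned)
            if p is None:
                break
            paths.append(p)
            banned.update(frozenset(e) for e in zip(p, p[1:]))
        return paths

    mesh = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"}, "d": {"b", "c"}}
    print(disjoint_paths(mesh, "a", "d", 2))   # e.g. [['a','b','d'], ['a','c','d']]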

The problem with long-distance traffic over flat mesh networks is less
with throughput (if everything isn't taking the shortest path) than
with the latencies involved in sending traffic over a very large
number of hops. I think the solution there is to send traffic that's
leaving your local area over the existing (tapable) long-distance
infrastructure. The idea is to make tapping expensive, not impossible.

There's also the point to be made that current traffic patterns depend
to a significant extent on current Internet architectural decisions.
If everyone had a gigabit connection to their neighbors, but only a 10
megabit uplink to route long-distance traffic over, they might find a
use for all that extra local bandwidth.

On Fri, Sep 6, 2013 at 7:22 AM, Noel Chiappa  wrote:
> > One way to frustrate this sort of dragnet surveillance would be to
> > reduce centralization in the Internet's architecture.
> > ...
> > [If] The IETF focused on developing protocols (and reserving the
> > necessary network numbers) to facilitate direct network peering between
> > private individuals, it could make it much more expensive to mount
> > large-scale traffic interception attacks.
>
> I'm not sure this is viable (although it's an interesting concept).
>
> With our current routing tools, switching to a flat mesh, as opposed to the
> current fairly-structured system, would require enormous amounts of
> configuration/etc work on the part of smaller entities.
>
> Also, traffic patterns being what they are (e.g. most of my traffic goes
> quite a distance, and hardly any to things close by), everyone would wind up
> handling a lot of 'through' traffic - orders of magnitude more than their
> current traffic load.
>
> Noel


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Tony Finch
John C Klensin  wrote:
>
> Please correct me if I'm wrong, but it seems to me that
> DANE-like approaches are significantly better than traditional
> PKI ones only to the extent to which:
>
>   - The entities needing or generating the certificates
>   are significantly more in control of the associated DNS
>   infrastructure than entities using conventional CAs are
>   in control of those CAs.
>
>   - For domains that are managed by registrars or other
>   third parties (I gather a very large fraction of them at
>   the second level), whether one believes those registrars
>   or other operators have significantly more integrity and
>   are harder to compromise than traditional third party CA
>   operators.

Yes, but there are some compensating pluses:

You can get a meaningful improvement to your security by good choice of
registrar (and registry if you have flexibility in your choice of name).
Other weak registries and registrars don't reduce your DNSSEC security,
whereas PKIX is only as secure as the weakest CA.

DNSSEC has tricky timing requirements for key rollovers. This makes it
hard to steal a domain without causing validation failures.

An attacker can use a compromise of your DNS infrastructure to get a
certificate from a conventional CA, just as much as they could compromise
DNSSEC-based service authentication.

Tony.
-- 
f.anthony.n.finch  http://dotat.at/
Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first.
Rough, becoming slight or moderate. Showers, rain at first. Moderate or good,
occasionally poor at first.


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Abdussalam Baryun
On 9/6/13, Brian E Carpenter  wrote:
>
> Tell me what the IETF could be doing that it isn't already doing.
>
> I'm not talking about what implementors and operators and users should
> be doing; still less about what legislators should or shouldn't be
> doing. I care about all those things, but the question here is what
> standards or informational outputs from the IETF are needed, in addition
> to what's already done or in the works.

I think we need to rethink the way we do protocols or the way security
WGs do standards. It will be easy to blame/ask the Security Area
participants/experts of what was not done or what should been done,
however, this area seems more as a cross-area, and I suggest that the
IETF re-thinks or re-structures the Security area and/or its WGs'
Charteres.

AB


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Hannes Tschofenig

On 06.09.2013 18:53, SM wrote:

At 06:04 06-09-2013, Martin Sustrik wrote:

So, what if an NSA guy comes in and proposes a backdoor to be added to
a protocol? Is it even a valid interest? Does the IETF as an organisation
have anything to say about that or does it remain strictly neutral?


Would anyone notice it on a Last Call?  Would anyone say something about
it?  I doubt that.  Ted Lemon said it nicely: "we should pay attention".



You will have to interpret what a backdoor in a protocol would be.
I guess that would be a weaker security feature, delaying work or
starting some other work that plays in their favor.

That would, however, be a bit tricky.

In some sense this is not really needed by them since we have lots of 
companies who already argue for weaker security properties, for a 
variety of different reasons.


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Hannes Tschofenig

Dave,

On 06.09.2013 18:58, Dave Crocker wrote:

On 9/6/2013 8:34 AM, Stephane Bortzmeyer wrote:

On Fri, Sep 06, 2013 at 08:20:17AM -0700,
Dave Crocker  wrote
a message of 21 lines which said:


We currently do not have a concise catalog of the basic 'privacy'
threats and their typical mitigations, appropriate for concern with
IETF protocols.


What about RFC 6973?



It certainly provides useful background. As such, it's an excellent
starting point for the topic.

However, it is not concise, nor does it offer threat templates or design
templates.


The document actually contains a list of common threats that we found 
applicable in the Internet protocol standardization context.


The design template is essentially the questions listed in the 
guidelines section.


Unfortunately, as with security, the story is not so simple that you can 
give a single recommendation. As a protocol designer, you unfortunately 
have to think a bit.



It also doesn't define privacy...


It does define privacy but not in a single sentence.


Ciao
Hannes



d/






Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dean Willis

On Sep 6, 2013, at 8:07 AM, Eliot Lear  wrote:

> 
> On 9/6/13 3:04 PM, Martin Sustrik wrote:
>> So, what if an NSA guy comes in and proposes a backdoor to be added to
>> a protocol? Is it even a valid interest? Does the IETF as an organisation
>> have anything to say about that or does it remain strictly neutral?
>> 
> It's happened before and we as a community have said no.  See RFC 2804.

What if they didn't say they were NSA guys, but just discreetly worked a 
weakness into a protocol? What if they were a trusted senior member of the 
community?

That way lies madness -- but it is a madness we must contemplate. Broader REAL 
consensus, rather than apathetic agreement with a single contributor's 
assertions, is probably the right way to go.

That means an increasing thrust on educating IETFers, broadly, about security 
issues. Not just the math, but the whole op-sec envelope.

--
Dean

Re: pgp signing in van

2013-09-06 Thread Peter Saint-Andre

On 9/6/13 11:17 AM, Michael Richardson wrote:

> We just put our GPG fingerprint into the MEMO part of a vcard,

Actually, vCard has a KEY field:

http://tools.ietf.org/html/rfc6350#section-6.8.1
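A minimal vCard 4.0 sketch using that property; the name and URI below are
made-up examples:

    # Build a vCard 4.0 entry whose KEY property points at a public key
    # (RFC 6350, section 6.8.1).  Name and URI are placeholders.
    vcard = "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:4.0",
        "FN:Jane Example",
        "KEY:https://keys.example.net/jane.asc",
        "END:VCARD",
    ])
    print(vcard)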

Peter

- -- 
Peter Saint-Andre
https://stpeter.im/




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Spencer Dawkins

On 9/6/2013 10:46 AM, Ted Lemon wrote:
The threat model isn't really the NSA per se—if they really want to 
bug you, they will, and you can't stop them, and that's not a 
uniformly bad thing. The problem is the breathtakingly irresponsible 
weakening of crypto systems that has been alleged here, and what we 
can do to mitigate that. Even if we aren't sure that it's happened, or 
precisely what's happened, it's likely that it has happened, or will 
happen in the near future. We should be thinking in those terms, not 
crossing our fingers and hoping for the best. 


IIUC, I'm with Ted.

We should be thinking in those terms, and thinking broadly.

I have to wonder whether weakening crypto systems to allow pervasive 
passive monitoring by "national agencies" would weaken them enough for 
technologically savvy corporations to monitor their competitors, for 
instance.


Spencer


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Arturo Servin

   
On 9/6/13 4:47 AM, Adam Novak wrote:
> On 09/05/2013 08:19 PM, Brian E Carpenter wrote:
>> Tell me what the IETF could be doing that it isn't already doing.
>>
>> I'm not talking about what implementors and operators and users should
>> be doing; still less about what legislators should or shouldn't be
>> doing. I care about all those things, but the question here is what
>> standards or informational outputs from the IETF are needed, in addition
>> to what's already done or in the works.
>>
>> I don't intend that to be a rhetorical question.
>>
>>   Brian
>
> One way to frustrate this sort of dragnet surveillance would be to
> reduce centralization in the Internet's architecture. Right now, the
> way the Internet works in practice for private individuals, all your
> traffic goes up one pipe to your ISP. It's trivial to tap, since the
> tapping can be centralized at the ISP end.
And all our security is based on single points that are easy to abuse.

/as


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dean Willis

On Sep 6, 2013, at 9:55 AM, Dave Crocker  wrote:
> 
> In other words, the IETF needs to assume that we don't know what will work 
> for end users and we need to therefore focus more on processing by end 
> /systems/ rather than end /users/.

But we are also end users. I recall being laughed at 6 or 7 years ago when I 
suggested that email security implementations would "get better" if the IETF 
insisted on using them for our email. My proposal at the time was, that since 
we thought S/MIME was the cat's whiskers, we should set up a CA and issue free 
end-user certs to all participants. Messages to IETF lists would require 
signing with said certs to be considered valid. This would make it easy to 
eliminate most of our SPAM.
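A rough sketch of the list-side check that implies, using the stock
"openssl smime -verify" invocation; the CA bundle and file names here are
hypothetical:

    import subprocess

    def posting_is_signed_by_member(msg_path, ca_bundle="ietf-member-ca.pem"):
        # Returns True only if the S/MIME signature on the posting verifies
        # against the member CA; openssl exits non-zero otherwise.
        result = subprocess.run(
            ["openssl", "smime", "-verify", "-in", msg_path, "-CAfile", ca_bundle],
            capture_output=True,
        )
        return result.returncode == 0

    print(posting_is_signed_by_member("incoming-post.eml"))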

So, we could eat our own dogfood, with whatever anti-surveillance mechanisms we 
specify. I am positive that would make things more end-user usable, over time.

--
Dean

Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dave Crocker

On 9/6/2013 11:42 AM, Dean Willis wrote:

On Sep 6, 2013, at 9:55 AM, Dave Crocker  wrote:

In other words, the IETF needs to assume that we don't know what
will work for end users and we need to therefore focus more on
processing by end /systems/ rather than end /users/.


But we are also end users.


Mostly we are /not/.

That is... of course we are end-users.  And to the extent that the
target market for something is users similar to "us", then fine.

The problem is when the target market is mass-market end-users.  The 3-4
billion other folk who don't participate in the IETF.  The average IETF
participant is  wildly different from the average mass-market end-user,
in many different ways.  Very many.



So, we could eat our own dogfood,


Oh we definitely /should/ eat our own dogfood.  If the stuff we produce
is not even usable for us, well then...  And I think we can learn quite
a bit of how to improve things.

But my deeper point is that that is nowhere close to sufficient, for
demonstrating mass-market usability or efficacy.




On 9/6/2013 10:25 AM, Michael Richardson wrote:

1) We could be telling the public about the protocols that we
designed 10, 15, and even 20 years ago. Some of which even have
rather widespread implementation, but seem to have zero use. (S/MIME
is in every copy of Outlook and Thunderbird, AFAIK)


To what end?  Their poor uptake clearly demonstrates some basic 
usability deficiencies.  That doesn't get fixed by promotional efforts.




What would the spam situation be like if 90% of emails were
regularly signed back in 1999?


You mean the way that postal mail and telephone calls require you to 
authenticate yourself personally before you can use them?


Or the way you have to authenticate yourself before you can buy anything 
in a store?


There are tradeoffs here and they can have very considerable downsides.

d/

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 2:31 PM, Dean Willis  wrote:
> What if they didn't say they were NSA guys, but just discreetly worked a 
> weakness into a protocol? What if they were a trusted senior member of the 
> community?

If we have trusted senior members making false statements that can be shown to 
be false, then they won't get consensus if we do our consensus process right.   
So if this has happened in the past, you should be able to find evidence that 
it has happened.  If you can't find such evidence, I think it's harmful to 
assume that it has happened.   The IETF process is in principle extremely 
robust in the face of this kind of behavior.   I encourage you to go looking, 
but don't descend into madness.



Re: pgp signing in van

2013-09-06 Thread Michael Richardson

I will be happy to participate in a pgp signing party.
Organized or not.

I suggest that an appropriate venue is during the last 15 minutes of the
newcomer welcome and the first 15 minutes of the welcome reception.

Because:
  1) the WG-chairs and IESG will all be there, and a web of trust
 still needs some significant good connectivity, and we already
 know each other rather well, without needing "ID"
 (I am not interested myself in verifying anyone's NSA^WGovernment
 identity. I don't trust that Certification Authority...)

  2) getting newbies on-board, meeting them well enough to sign
 their key seems like a good thing.

But, Randy, of what use is my signing your key, if you never use it?

I would be happy to sign a key for a network personality who posts
signed messages regularly to @ietf.org mailing lists.  I would simply give
them a nonce to sign.   (For a while, I was convinced s...@resistor.net,
whose full name I did not know until Orlando, was a gestalt network
identity...)

My key is still available via finger m...@sandelman.ca, and r...@sandelman.ca
is offline (I used to have a 286 in the corner), and has web of trust
signatures going back to 1994.
pub   1024R/B0C8713D 1994-11-08 <- it's a bit weak these days.
pub   2208R/FCA16F90 2006-10-10 <- new "modern" offline key.

We just put our GPG fingerprint into the MEMO part of a vcard,
http://zxing.appspot.com/generator/ or using qrencode
http://fukuchi.org/works/qrencode/index.html.en (in debian/ubuntu)
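A small sketch of that workflow, assuming the qrencode command-line tool is
installed; the name, address and fingerprint are made up, and the fingerprint
text is carried in the vCard NOTE property:

    import subprocess

    fingerprint = "AAAA BBBB CCCC DDDD EEEE  FFFF 0000 1111 2222 3333"
    vcard = "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        "FN:Jane Example",
        "EMAIL:jane@example.net",
        "NOTE:OpenPGP fingerprint " + fingerprint,
        "END:VCARD",
    ])
    # Hand the vCard text to qrencode to produce a scannable image.
    subprocess.run(["qrencode", "-o", "jane-vcard.png", vcard], check=True)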

I suggest that perhaps this might be a useful way to exchange info:
   http://www.sandelman.ca/tmp/IMG_20130906_125920.jpg
one would take a picture of the other person with their QR code
and fingerprint.  It also just works to remember the names of new people!

(Sadly, I can't scan the QR code with my phone from the photo displayed
on my screen, but I can read the fingerprint)

Patrik has a blog post: http://stupid.domain.name/node/1323
that does exactly that.

ps: nice address book entry for ietf@ietf.

--
]   Never tell me the odds! | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works| network architect  [
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[










Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Keith Moore
On 09/06/2013 11:46 AM, Ted Lemon wrote:
> The threat model isn't really the NSA per se—if they really want to bug you, 
> they will, and you can't stop them, and that's not a uniformly bad thing. 

I disagree, or at least, I think that your statement conflates two
different threat models.

One kind of threat is that the NSA will bug you specifically.   And yes,
if they consider it important to do so, they very likely will.  There is
almost certainly some vulnerability in your hardware or software or
physical security, and they have lots of resources that can be invested
in finding it.

The other kind of threat is that the NSA will bug you because it's
currently really easy for them to engage in mass surveillance.   Most
traffic isn't even encrypted; and at least some of what is encrypted is
trivially broken.

I don't think IETF can (or should) do much about the former kind of
threat.   Most of it is out of our scope.But we should be working
hard to address the latter kind of threat.

Keith



Re: pgp signing in van

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 2:51 PM, Phillip Hallam-Baker  wrote:
> The issue is that smime email clients are more common so I would
> rather teach the smime doggie pgp like tricks than vice versa

The problem is getting your smime program to stop using CA keys and only use 
your local key as a CA key.   And someone would have to code up something to do 
all the certs.   It's not a bad idea in theory though, if it can be made to 
work.



Re: pgp signing in van

2013-09-06 Thread Phillip Hallam-Baker
Could we do smime as well?

If we had a list of smime cert fingerprints it can be used for trust
reinforcement
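A tiny sketch of how such a fingerprint list could be produced with the
standard openssl CLI; the file name is hypothetical, and the exact output
format varies a little between openssl versions:

    import subprocess

    def cert_fingerprint(path):
        # SHA-256 fingerprint of an X.509 certificate,
        # e.g. "sha256 Fingerprint=AB:CD:...".
        out = subprocess.run(
            ["openssl", "x509", "-in", path, "-noout", "-fingerprint", "-sha256"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    print(cert_fingerprint("participant.pem"))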

The issue is that smime email clients are more common so I would
rather teach the smime doggie pgp like tricks than vice versa


Sent from my difference engine


On Sep 6, 2013, at 1:20 PM, Michael Richardson  wrote:

>
> I will be happy to participate in a pgp signing party.
> Organized or not.
>
> I suggest that an appropriate venue is during the last 15 minutes of the
> newcomer welcome and the first 15 minutes of the welcome reception.
>
> Because:
>  1) the WG-chairs and IESG will all be there, and a web of trust
> still needs some significant good connectivity, and we already
> know each other rather well, without needing "ID"
> (I am not interested myself in verifying anyone's NSA^WGovernment
> identity. I don't trust that Certification Authority...)
>
>  2) getting newbies on-board, meeting them well enough to sign
> their key seems like a good thing.
>
> But, Randy, of what use is my signing your key, if you never use it?
>
> I would be happy to sign a key for a network personality who posts
> signed messages regularly to @ietf.org mailing lists.  I would simply give
> them a nonce to sign.   (For a while, I was convinced s...@resistor.net,
> whose full name I did not know until Orlando, was a gestalt network
> identity...)
>
> My key is still available via finger m...@sandelman.ca, and r...@sandelman.ca
> is offline (I used to have a 286 in the corner), and has web of trust
> signatures going back to 1994.
> pub   1024R/B0C8713D 1994-11-08 <- it's a bit weak these days.
> pub   2208R/FCA16F90 2006-10-10 <- new "modern" offline key.
>
> We just put our GPG fingerprint into the MEMO part of a vcard,
> http://zxing.appspot.com/generator/ or using qrencode
> http://fukuchi.org/works/qrencode/index.html.en (in debian/ubuntu)
>
> I suggest that perhaps this might be a useful way to exchange info:
>   http://www.sandelman.ca/tmp/IMG_20130906_125920.jpg
> one would take a picture of the other person with their QR code
> and fingerprint.  It also just works to remember the names of new people!
>
> (Sadly, I can't scan the QR code with my phone from the photo displayed
> on my screen, but I can read the fingerprint)
>
> Patrik has a blog post: http://stupid.domain.name/node/1323
> that does exactly that.
>
> ps: nice address book entry for ietf@ietf.
>
> --
> ]   Never tell me the odds! | ipv6 mesh networks [
> ]   Michael Richardson, Sandelman Software Works| network architect  [
> ] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails
> [
>
>
>
>
>
>


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Spencer Dawkins

On 9/6/2013 11:38 AM, Noel Chiappa wrote:

 > From: Spencer Dawkins 

 > I have to wonder whether weakening crypto systems to allow pervasive
 > passive monitoring by "national agencies" would weaken them enough for
 > technologically savvy corporations to monitor their competitors, for
 > instance.

More importantly, if crypto systems are weakened so that the intelligence
agencies of the 'good guys' can monitor them, they're probably weak enough
that the intelligence agencies of the 'bad guys' can monitor them too.

The smarts level on the other side should not be under-estimated, although I
fear this often happens.


Noel,

I agree that's important (and perhaps "more important"), and that 
underestimating 'bad guys' is all too tempting, and all too easy.


I thought to call attention to the opportunities for commercial leakage, 
from everything from trade secrets to medical records, if our strong 
crypto turns out to contain intentional weaknesses.


We have plenty of potential exposures to worry about, depending on who's 
likely to be interested in seeing what we're trying to hide.


Spencer


decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Roger Jørgensen
On Fri, Sep 6, 2013 at 9:47 AM, Adam Novak  wrote:
>
> One way to frustrate this sort of dragnet surveillance would be to reduce
> centralization in the Internet's architecture. Right now, the way the
> Internet works in practice for private individuals, all your traffic goes up
> one pipe to your ISP. It's trivial to tap, since the tapping can be
> centralized at the ISP end.

excellent idea... any suggestion on how that should be done?

The only one I can remember right now is LISP, which sort of creates a new
network on top of our current network, and the EID-block drafts being
worked on by some people (including me) try to address how the
IP-space of this "new" network can be handled.

But there must be other ways than a LISP-like way of doing it?


>> [If] The IETF focused on developing protocols (and reserving the necessary
> network numbers) to facilitate direct network peering between private
> individuals, it could make it much more expensive to mount large-scale
> traffic interception attacks.

I think there is work being done on the topic? However, how are you
going to interconnect all of these private peerings? It sort of implies
that everyone needs to have their own netblock they can exchange with
others.



-- 

Roger Jorgensen   | ROJO9-RIPE
rog...@gmail.com  | - IPv6 is The Key!
http://www.jorgensen.no   | ro...@jorgensen.no


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread David Conrad
On Sep 6, 2013, at 2:06 PM, Måns Nilsson  wrote:
>> Right, because there's no way the NSA could ever pwn the DNS root key.
> It is probably easier for NSA or similar agencies in other countries
> to coerce X.509 root CA providers that operate on a competitive market
> than fooling the entire international DNS black helicopter cabal. 

Probably the wrong place to apply the paranoia. How much do you trust the AEP 
Keyper HSM tamperproof blackbox hasn't had a backdoor installed into it at the 
factory?

> Audit and open source seem to be good starting points. 

Where feasible, sure. Unfortunately, the rabbit hole is deep.  How many 
billions of transistors are there in commodity chips these days?

Regards,
-drc





Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Tim Chown
On 6 Sep 2013, at 21:32, Roger Jørgensen  wrote:

> On Fri, Sep 6, 2013 at 9:47 AM, Adam Novak  wrote:


>> [If] The IETF focused on developing protocols (and reserving the necessary
>> network numbers) to facilitate direct network peering between private
>> individuals, it could make it much more expensive to mount large-scale
>> traffic interception attacks.
> 
> I think there is work being done on the topic? However, how are you
> going to interconnect all of these private peerings? It sort of implies
> that everyone needs to have their own netblock they can exchange with
> others.

Mobile IPv6 gives one way to run multiple devices in one subnet. Someone needs 
to be the HA though. And/or if future homes have multiple /64's, it's not 
infeasible to dedicate one or more to virtual/overlay LANs.

Tim



RE: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread George, Wes
+Bruce Schneier (at least the email address published in his latest I-D), since 
he should be at least aware of the discussion his callout has generated.

> -Original Message-
> From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On Behalf Of
> Ted Lemon
>
> On Sep 5, 2013, at 8:46 PM, Lucy Lynch  wrote:
> >> I'd like to share the challenge raised by Bruce Schneier in:
>
> I thought it was a great call to action.   Is Bruce coming to Vancouver?

[WEG] Sounds to me like he just volunteered to be the keynote for the Tech 
Plenary.

Wes George

Anything below this line has been added by my company's mail server, I have no 
control over it.
-




Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Brian E Carpenter
On 07/09/2013 08:55, Tim Chown wrote:
> On 6 Sep 2013, at 21:32, Roger Jørgensen  wrote:
> 
>> On Fri, Sep 6, 2013 at 9:47 AM, Adam Novak  wrote:
> 
> 
>>> [If] The IETF focused on developing protocols (and reserving the necessary
>>> network numbers) to facilitate direct network peering between private
>>> individuals, it could make it much more expensive to mount large-scale
>>> traffic interception attacks.
>> I think there is work being done on the topic? However, how are you
>> going to interconnect all of these private peerings? It sort of implies
>> that everyone needs to have their own netblock they can exchange with
>> others.
> 
> Mobile IPv6 gives one way to run multiple devices in one subnet. Someone 
> needs to be the HA though. And/or if future homes have multiple /64's, it's 
> not infeasible to dedicate one or more to virtual/overlay LANs.

It serves no purpose as long as there's an underlying customer/provider
relationship, because it's the provider that is suborned by the government
agency.

 Brian



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread SM

Hi Dean,
At 11:31 06-09-2013, Dean Willis wrote:
What if they didn't say they were NSA guys, but just discreetly 
worked a weakness into a protocol? What if they were a trusted 
senior member of the community?


Trust does not work well without accountability.  There is less to 
worry about if you do not implicitly trust senior members of the community.


Regards,
-sm 



Teachable moment

2013-09-06 Thread Brian E Carpenter
Ted,

On 07/09/2013 03:32, Ted Lemon wrote:
> On Sep 6, 2013, at 2:46 AM, SM  wrote:
>> At 20:08 05-09-2013, Ted Lemon wrote:
>>> I think we all knew NSA was collecting the data.   Why didn't we do 
>>> something about it sooner?   Wasn't it an emergency when the PATRIOT act 
>>> was passed?   We certainly thought it was an emergency back in the days of 
>>> Skipjack, but then they convinced us we'd won.   Turns out they just went 
>>> around us.
>> I would describe it as a scuffle instead of a battle.  My guess is that the 
>> IETF did not do anything sooner as nobody knows what to do, or it may be 
>> that the IETF has become conservative and it does not pay attention to the 
>> minority report.
> 
> It was definitely a battle.   There were threats of imprisonment, massive 
> propaganda dumps (think of the children!), etc.   People broke the law, moved 
> countries, etc.   We just forget it because "we" "won" it, and it seems 
> smaller in memory than it was when it was happening.
> 
> The IETF didn't do anything because the tin foil hat contingent didn't have 
> consensus, and we had no data to force the point.   As you alluded to 
> earlier, it's historically been very difficult to get people to treat 
> security and privacy seriously, and frankly it still is.
> 
> So this isn't an emergency.   It's a teachable moment.   We should pay 
> attention.

Absolutely. I have noted at least 20 messages in the recent flood that
mention useful things the IETF can do, which is exactly what my provocative
message asked for. But (as Bruce's own recent posts show) the main weak spots
are not protocols and algorithms.

  Brian


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Måns Nilsson
Subject: Re: Bruce Schneier's Proposal to dedicate November meeting to saving 
the Internet from the NSA Date: Fri, Sep 06, 2013 at 11:46:17AM -0400 Quoting 
Ted Lemon (ted.le...@nominum.com):
> On Sep 6, 2013, at 3:25 AM, Måns Nilsson  wrote:
> > I do think that more distributed technologies like DANE play an important
> > rôle here.
> 
> Right, because there's no way the NSA could ever pwn the DNS root key.

It is probably easier for NSA or similar agencies in other countries
to coerce X.509 root CA providers that operate on a competitive market
than fooling the entire international DNS black helicopter cabal. But
that is -- I admit -- an educated guess, based on personal relations.

> What we should probably be thinking about here is:
> 
>   - Mitigating single points of failure (IOW, we _cannot_ rely
> on just the root key)

In effect, DANE exchanges one trust model for another. I happen
to believe that the damage risk is lower with DNSSEC + DANE than the
traditional "any CA can issue a certificate for any domain name" setup.

>   - Hybrid solutions (more trust sources means more work to
> compromise)
>   - Sanity checking (if a key changes unexpectedly, we should
> be able to notice)
>   - Multiple trust anchors (for stuff that really matters, we
> can't rely on the root or on a third party CA)
>   - Trust anchor establishment for sensitive communications
> (e.g. with banks)

agree on all. 
 
> The threat model isn't really the NSA per se—if they really want to bug you, 
> they will, and you can't stop them, and that's not a uniformly bad thing.   
> The problem is the breathtakingly irresponsible weakening of crypto systems 
> that has been alleged here, and what we can do to mitigate that.   Even if we 
> aren't sure that it's happened, or precisely what's happened, it's likely 
> that it has happened, or will happen in the near future.  We should be 
> thinking in those terms, not crossing our fingers and hoping for the best.
 
Audit and open source seem to be good starting points. 

-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE +46 705 989668
Yow!  It's some people inside the wall!  This is better than mopping!




Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread manning bill
hum…

i did work on a DNS architecture that can be fully disconnected from 
the "Internet" and still work with nodes within the visible topology.

Needs serious rework of DNSSEC and has some assumptions about topology 
discovery - but it might be a basis for starting some discussion on 
decentralization of that part of the centralized DNS.


/bill



Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Tim Bray
How about a BCP saying conforming implementations of a wide-variety of
security-area RFCs MUST be open-source?

*ducks*


On Fri, Sep 6, 2013 at 2:34 PM, David Conrad  wrote:

> On Sep 6, 2013, at 2:06 PM, Måns Nilsson 
> wrote:
> >> Right, because there's no way the NSA could ever pwn the DNS root key.
> > It is probably easier for NSA or similar agencies in other countries
> > to coerce X.509 root CA providers that operate on a competitive market
> > than fooling the entire international DNS black helicopter cabal.
>
> Probably the wrong place to apply the paranoia. How much do you trust that
> the AEP Keyper HSM tamperproof black box hasn't had a backdoor installed into
> it at the factory?
>
> > Audit and open source seem to be good starting points.
>
> Where feasible, sure. Unfortunately, the rabbit hole is deep.  How many
> billions of transistors are there in commodity chips these days?
>
> Regards,
> -drc
>
>


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 6:02 PM, Tim Bray  wrote:
> How about a BCP saying conforming implementations of a wide-variety of 
> security-area RFCs MUST be open-source?

So clearly we should do all our crypto on devices built out of 7400-series 
logic.   Hm, where has my old wire-wrap tool gone?



Re: pgp signing in van

2013-09-06 Thread Joe Touch



On 9/6/2013 10:17 AM, Michael Richardson wrote:


I will be happy to participate in a pgp signing party.
Organized or not.

I suggest that an appropriate venue is during the last 15 minutes of the
newcomer welcome and the first 15 minutes of the welcome reception.

Because:
   1) the WG-chairs and IESG will all be there, and a web of trust
  still needs some significant good connectivity, and we already
  know each other rather well, without needing "ID"
  (I am not interested myself in verifying anyone's NSA^WGovernment
  identity. I don't trust that Certification Authority...)

   2) getting newbies on-board, meeting them well enough to sign
  their key seems like a good thing.


And whose key would you sign? Anyone who showed up with a form of ID?

I've noted elsewhere that the current typical key-signing party methods 
are very weak. You should sign only the keys of those who you know well 
enough to claim you can attest to their identity.


If that's the case, how will this get newbies on-board except to invite 
them to have keys whose signatures aren't relevant, and to devalue the 
trust in WG-chairs and IESG members?


Joe


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread SM

Hi Tim,
At 15:02 06-09-2013, Tim Bray wrote:
How about a BCP saying conforming implementations of a wide-variety 
of security-area RFCs MUST be open-source?


A BCP is not needed to do that.  It is already doable "but we [1] 
know that you [2] are not going to do it".


Speaking of open source, 
http://svn.debian.org/viewsvn/pkg-openssl/openssl/trunk/rand/md_rand.c?rev=141&view=diff&r1=141&r2=140&p1=openssl/trunk/rand/md_rand.c&p2=/openssl/trunk/rand/md_rand.c



*ducks*


Where?  I don't see any ducks. :-)

Regards,
-sm

1. The word "we" is used in a general context.
1. The word "you" is used in a general context. 



Re: pgp signing in van

2013-09-06 Thread Phillip Hallam-Baker
On Fri, Sep 6, 2013 at 3:34 PM, Ted Lemon  wrote:

> On Sep 6, 2013, at 2:51 PM, Phillip Hallam-Baker  wrote:
> > The issue is that smime email clients are more common so I would
> > rather teach the smime doggie pgp like tricks than vice versa
>
> The problem is getting your smime program to stop using CA keys and only
> use your local key as a CA key.   And someone would have to code up
> something to do all the certs.   It's not a bad idea in theory though, if
> it can be made to work
>

I am working towards doing both.

Point is that it would be very useful to be able to offer confidentiality
to a core of people in the developer community. There may not be a way to
make use of the data leaving Vancouver but there might be a way to use it
before London.





-- 
Website: http://hallambaker.com/


Re: pgp signing in van

2013-09-06 Thread Phillip Hallam-Baker
On Fri, Sep 6, 2013 at 6:42 PM, Joe Touch  wrote:

>
>
> On 9/6/2013 10:17 AM, Michael Richardson wrote:
>
>>
>> I will be happy to participate in a pgp signing party.
>> Organized or not.
>>
>> I suggest that an appropriate venue is during the last 15 minutes of the
>> newcomer welcome and the first 15 minutes of the welcome reception.
>>
>> Because:
>>1) the WG-chairs and IESG will all be there, and a web of trust
>>   still needs some significant good connectivity, and we already
>>   know each other rather well, without needing "ID"
>>   (I am not interested myself in verifying anyone's NSA^WGovernment
>>   identity. I don't trust that Certification Authority...)
>>
>>2) getting newbies on-board, meeting them well enough to sign
>>   their key seems like a good thing.
>>
>
> And whose key would you sign? Anyone who showed up with a form of ID?
>
> I've noted elsewhere that the current typical key-signing party methods
> are very weak. You should sign only the keys of those who you know well
> enough to claim you can attest to their identity.
>
> If that's the case, how will this get newbies on-board except to invite
> them to have keys whose signatures aren't relevant, and to devalue the
> trust in WG-chairs and IESG members?
>
> Joe
>

I can write a key ceremony spec. I have done that before.

Almost everyone arriving in Vancouver will have a passport in any case. The
protocol will probably be something like: provide your key and related data in
advance, print something out, and present that plus your ID document at the
ceremony.
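
[A toy sketch of the "print something out" step being described, not the
actual ceremony spec: turn pre-submitted key data into a worksheet that
participants tick off against ID documents and fingerprints read out at the
ceremony. The attendee entry below is invented for illustration.]

attendees = [
    {"name": "A. Example", "keyid": "0xDEADBEEFDEADBEEF",
     "fpr": "ABCD 1234 ABCD 1234 ABCD  1234 ABCD 1234 ABCD 1234"},
]

def worksheet(entries):
    # One numbered line per pre-registered key, plus checkboxes to mark
    # during the ceremony itself.
    lines = ["Key ceremony worksheet", "=" * 60]
    for i, e in enumerate(entries, 1):
        lines.append(f"{i:3}. {e['name']:<20} {e['keyid']}")
        lines.append(f"     fingerprint: {e['fpr']}")
        lines.append("     [ ] fingerprint verified   [ ] ID document checked")
    return "\n".join(lines)

print(worksheet(attendees))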


-- 
Website: http://hallambaker.com/


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Dave Crocker

On 9/6/2013 4:19 PM, Scott Brim wrote:

On Sep 6, 2013 3:34 PM, "Dave Crocker"  wrote:
 > To what end?  Their poor uptake clearly demonstrates some basic
usability deficiencies.  That doesn't get fixed by promotional efforts.

Or rather, as we've seen in other cases, people just don't see potential
benefits large enough to motivate them.



Perhaps.  But fundamental usability deficiencies can move these issues 
into the realm that warrants quoting Marshall Rose: "With enough thrust, 
pigs /can/ fly."  Only in this case, it's more like "for some issues, no 
amount of thrust can get this pig into the air."


In other words, considering the issues only in terms of user motivation 
ignores actual basic usability design deficiencies.


Currently, problems with security usability include:

   0. Systems providing very poor information

   1. Systems providing information at very poor times

   2. Users having to know too much

   3. Users having to do too much

Working on user motivation can help a little bit with #3 and none of the 
rest.  It can't help with all of #3 because there are cognitive limits 
that frequently apply.


d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread David Morris


On Fri, 6 Sep 2013, Ted Lemon wrote:

> On Sep 6, 2013, at 6:02 PM, Tim Bray  wrote:
> > How about a BCP saying conforming implementations of a wide-variety of 
> > security-area RFCs MUST be open-source?
> 
> So clearly we should do all our crypto on devices built out of 7400-series 
> logic.   Hm, where has my old wire-wrap tool gone?

Only if you purchased the 7400 stuff 20 years ago so that you know modern 
logic isn't hidden in the 74xx case.

Seriously though, the NSA makes a nice villain, but much of our hardware is 
manufactured in countries with fewer restraints than the NSA when it
comes to the right to privacy, etc. It wouldn't surprise me if my major
brand router has sniffers from more than one country's security agency.


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
On Sep 6, 2013 3:34 PM, "Dave Crocker"  wrote:
> To what end?  Their poor uptake clearly demonstrates some basic usability
deficiencies.  That doesn't get fixed by promotional efforts.

Or rather, as we've seen in other cases, people just don't see potential
benefits large enough to motivate them.


Re: pgp signing in van

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 6:42 PM, Joe Touch  wrote:
> I've noted elsewhere that the current typical key-signing party methods are 
> very weak. You should sign only the keys of those who you know well enough to 
> claim you can attest to their identity.

This is a ridiculously high bar.   The bar should be about at the level of a 
facebook friend request.  The PGP key signing model of attesting to legal 
identities is solving the wrong problem.  But you are right that we can't 
require this sort of thing in order for people to participate in the IETF.



Re: pgp signing in van

2013-09-06 Thread Melinda Shore
On 9/6/13 4:10 PM, Ted Lemon wrote:
> On Sep 6, 2013, at 6:42 PM, Joe Touch  wrote:
>> I've noted elsewhere that the current typical key-signing party
>> methods are very weak. You should sign only the keys of those who
>> you know well enough to claim you can attest to their identity.

> This is a ridiculously high bar.   The bar should be about at the
> level of a facebook friend request.  

People's personal policies about Facebook friend requests seem
to be all over the map, so I'm not sure what that means in
practice.  I'm not sure that's a great model in any event, since
when you vouch for someone's identity - in an authoritative
trust system - you're also vouching for the authenticity of
their transactions.  Those transactions would also include
*them* making attestations about the identity of people you've
likely never heard of.

Melinda



Re: Bruce Schneier's Proposal to dedicate November meeting to savingthe Internet from the NSA

2013-09-06 Thread Phillip Hallam-Baker
On Fri, Sep 6, 2013 at 9:20 AM, Pete Resnick wrote:

> On 9/6/13 12:54 AM, t.p. wrote:
>
>> - Original Message -
>> From: "Phillip Hallam-Baker" 
>> Cc: "IETF Discussion Mailing List" 
>> Sent: Friday, September 06, 2013 4:56 AM
>>
>>  The design I think is practical is to eliminate all UI issues by
>>> insisting that encryption and decryption are transparent. Any email that
>>> can be sent encrypted is sent encrypted.
>>>
>>
>> That sounds like the 'End User Fallacy number one' that I encounter all
>> the time in my work. If only everything were encrypted, then we would be
>> completely safe.
>>
>
> Actually, I disagree that this fallacy is at play here. I think we need to
> separate the concept of end-to-end encryption from authentication when it
> comes to UI transparency. We design UIs now where we get in the user's face
> about doing encryption if we cannot authenticate the other side and we need
> to get over that. In email, we insist that you authenticate the recipient's
> certificate before we allow you to install it and to start encrypting, and
> prefer to send things in the clear until that is done. That's silly and is
> based on the assumption that encryption isn't worth doing *until* we know
> it's going to be done completely safely. We need to separate the trust and
> guarantees of safeness (which require *later* out-of-band verification)
> from the whole endeavor of getting encryption used in the first place.


Actually, let me correct my earlier statement.

I believe that UIs fail because they require too much effort from the user
and they fail because they present too little information. Many times they
do both.

What I have been looking at as short term is how to make sending and
receiving secure email to be ZERO effort and how to make initialization no
more difficult than installing and configuring a regular email app. And I
think I can show how that can be done. And I think that is a part of the
puzzle we can just start going to work on in weeks without having to do
usability studies.


The other part, too little (or inconsistent) information, is also a big
problem. Take the email I got from Gmail this morning telling me that
someone tried to access my email from Sao Paulo. The message told me to
change my password but did not tell me that the attacker already knew my
password. That is a problem of too little information.

The problem security usability often faces is that the usability mafia are
trained how to make things easy to learn in ten minutes because that is how
to sell a product. They are frequently completely clueless when it comes to
making software actually easy to use long term. Apple, Google and Microsoft
are all terrible at this. They all hide information the user needs to know.

I have some ideas on how to fix that problem as well, in fact I wrote a
whole chapter in my book suggesting how to make email security usable by
putting an analog of the corporate letterhead onto emails. But that part is
a longer discussion and focuses on authentication rather than
confidentiality.


The perfect is the enemy of the good. I think that the NSA/GCHQ has often
managed to discourage the use of crypto by pushing the standards community
to make the pudding so rich nobody can eat it.



-- 
Website: http://hallambaker.com/


Re: pgp signing in van

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 8:21 PM, Melinda Shore  wrote:
> when you vouch for someone's identity - in an authoritative
> trust system - you're also vouching for the authenticity of
> their transactions.

This is what I mean by "a high bar."   Signing someone's PGP key should mean "I 
know this person as X," not "this person is X."



Re: pgp signing in van

2013-09-06 Thread Melinda Shore
On 9/6/13 5:09 PM, Ted Lemon wrote:
> This is what I mean by "a high bar."   Signing someone's PGP key
> should mean "I know this person as X," not "this person is X."

I have no idea what "should" means in this context.  It seems
to me, from looking at this discussion (as well as from other
discussions around this topic) that different people have
different trust models in mind with quite possibly no two alike.
I guess part of the question here is whether or not PGP key
signatures entail the signer being willing to vouch that the
key holder is who they say they are.  I'm not sure why
"I know this person as X" provides much more reliability
than someone asserting their own identity.

Melinda


Re: pgp signing in van

2013-09-06 Thread Joe Touch



On 9/6/2013 5:10 PM, Ted Lemon wrote:

On Sep 6, 2013, at 6:42 PM, Joe Touch  wrote:

I've noted elsewhere that the current typical key-signing party
methods are very weak. You should sign only the keys of those who you
know well enough to claim you can attest to their identity.


This is a ridiculously high bar.   The bar should be about at the
level of a facebook friend request.


Given I'm not on Facebook, the latter bar is infinitely high.

As per the PGP description:

---
There are several levels of confidence which can be included in such 
signatures. Although many programs read and write this information, few 
(if any) include this level of certification when calculating whether to 
trust a key.

---

And that's the problem - as long as endorsements are equal, they're only 
as good as your weakest one.
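
[For reference, the four OpenPGP certification signature types that carry the
"levels of confidence" quoted above are defined in RFC 4880, section 5.2.1.
The summaries below are paraphrased, and the snippet is only an illustration
of the point that tools treating all endorsements alike fall back to the
weakest one.]

CERTIFICATION_LEVELS = {
    0x10: "Generic certification: no particular assertion about checking",
    0x11: "Persona certification: no verification of the claimed identity",
    0x12: "Casual certification: some casual verification was done",
    0x13: "Positive certification: substantial verification was done",
}

def describe(sig_type: int) -> str:
    # Map an OpenPGP signature type octet to its certification meaning.
    return CERTIFICATION_LEVELS.get(sig_type, "not a certification signature")

print(describe(0x13))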


Joe


Re: pgp signing in van

2013-09-06 Thread Scott Kitterman


Phillip Hallam-Baker  wrote:
>On Fri, Sep 6, 2013 at 6:42 PM, Joe Touch  wrote:
>
>>
>>
>> On 9/6/2013 10:17 AM, Michael Richardson wrote:
>>
>>>
>>> I will be happy to participate in a pgp signing party.
>>> Organized or not.
>>>
>>> I suggest that an appropriate venue is during the last 15 minutes of the
>>> newcomer welcome and the first 15 minutes of the welcome reception.
>>>
>>> Because:
>>>1) the WG-chairs and IESG will all be there, and a web of trust
>>>   still needs some significant good connectivity, and we already
>>>   know each other rather well, without needing "ID"
>>>   (I am not interested myself in verifying anyone's NSA^WGovernment
>>>   identity. I don't trust that Certification Authority...)
>>>
>>>2) getting newbies on-board, meeting them well enough to sign
>>>   their key seems like a good thing.
>>>
>>
>> And whose key would you sign? Anyone who showed up with a form of ID?
>>
>> I've noted elsewhere that the current typical key-signing party methods
>> are very weak. You should sign only the keys of those who you know well
>> enough to claim you can attest to their identity.
>>
>> If that's the case, how will this get newbies on-board except to invite
>> them to have keys whose signatures aren't relevant, and to devalue the
>> trust in WG-chairs and IESG members?
>>
>> Joe
>>
>
>I can write a key ceremony spec. I have done that before.
>
>Almost everyone arriving in Vancouver will have a passport in any case. The
>protocol will probably be something like provide your key etc data in
>advance, print something out and present that plus your ID document in the
>ceremony.

Here's one approach that works reasonably well:


http://www.debian.org/events/keysigning

The scripts in the mentioned signing party package make things much easier. 

Scott K


Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
On Sep 6, 2013 4:33 PM, "Roger Jørgensen"  wrote:
>
> On Fri, Sep 6, 2013 at 9:47 AM, Adam Novak  wrote:
> >
> > One way to frustrate this sort of dragnet surveillance would be to reduce
> > centralization in the Internet's architecture. Right now, the way the
> > Internet works in practice for private individuals, all your traffic goes up
> > one pipe to your ISP. It's trivial to tap, since the tapping can be
> > centralized at the ISP end.
>
> excellent idea... any suggestion on how that should be done?
>
> Only one I can remember right now is LISP, which sort of creates a new
> network on top of our current network, and the EID-block drafts being
> worked on by some people (including me) try to address how the
> IP-space of this "new" network can be done.

LISP does nothing for decentralization.  Traffic still flows
hierarchically,  encapsulated or not, and you add the mapping system which
is naturally hierarchical and another vulnerability.  The diameter of the
Internet has not increased much despite its growth, due to both
cross-connects and hubs. I don't think there is much more that can be done
practically to decentralize traffic flow.

Scott


Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Noel Chiappa
> From: Scott Brim 

> LISP does nothing for decentralization. Traffic still flows
> hierarchically

Umm, no. In fact, one of LISP's architectural scaling issues is that it's
non-hierarchical, so xTRs have neighbour fanouts that are much larger than
typical packet switches. In basic unicast mode, any xTR is always a direct
neighbour to any other xTR; no xTR (in basic unicast mode, at least) ever goes
_through_ another xTR to get to a third xTR. All LISP basic unicast paths
always include exactly two xTRs.

The actual detailed paths do mimic the underlying network, of course: if the
network is hierarchical, the paths will be hierarchical, but if the network
were flat, the paths would be flat. (Or is that what you meant?)

> you add the mapping system which is naturally hierarchical and another
> vulnerability.  

No more so than DNS; they are exactly parallel in their functional design.

Noel


Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Scott Brim
On Sep 6, 2013 10:06 PM, "Noel Chiappa"  wrote:
>
> > From: Scott Brim 
>
> > LISP does nothing for decentralization. Traffic still flows
> > hierarchically
>
> Umm, no. In fact, one of LISP's architectural scaling issues is that it's
> non-hierarchical, so xTRs have neighbour fanouts that are much larger than
> typical packet switches. In basic unicast mode, any xTR is always a direct
> neighbour to any other xTR; no xTR (in basic unicast mode, at least) ever goes
> _through_ another xTR to get to a third xTR. All LISP basic unicast paths
> always include exactly two xTRs.
> The actual detailed paths do mimic the underlying network, of course: if the
> network is hierarchical, the paths will be hierarchical, but if the network
> were flat, the paths would be flat. (Or is that what you meant?)

Yup. The encapsulation is not much of an obstacle to packet examination.

> > you add the mapping system which is naturally hierarchical and another
> > vulnerability.
>
> No more so than DNS; they are exactly parallel in their functional design.

Yes but DNS vulnerabilities have been covered elsewhere.

Cheers... Scott


Re: pgp signing in van

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 9:24 PM, Melinda Shore  wrote:
> I'm not sure why
> "I know this person as " provides much more reliability
> than someone asserting their own identity.

Actually it's quite useful.   It allows me to differentiate email coming from 
someone I know as X from email coming from someone claiming to be that person, 
but who does not possess their key.



Re: pgp signing in van

2013-09-06 Thread Scott Brim
On Sep 6, 2013 9:10 PM, "Ted Lemon"  wrote:
>
> On Sep 6, 2013, at 8:21 PM, Melinda Shore  wrote:
> > when you vouch for someone's identity - in an authoritative
> > trust system - you're also vouching for the authenticity of
> > their transactions.
>
> This is what I mean by "a high bar."   Signing someone's PGP key should
mean "I know this person as X," not "this person is X."
>

Dilution of trust is a problem with PGP. "I know this person as X" is way
too lax if you want the system to scale.

Scott


Re: pgp signing in van

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 10:18 PM, Scott Brim  wrote:
> Dilution of trust is a problem with PGP. "I know this person as X" is way too 
> lax if you want the system to scale.

It's naive to think that keys are any more trustworthy than this, because any 
signature's trustworthiness is only as good as the trustworthiness of the 
individual who decides to sign it.   If you trust a key signed by someone you 
don't know, but who someone you know trusts, just how trustworthy is that?

The web of trust scales just fine if you don't expect too much from it.   If 
you expect the kind of trustworthiness you seem to be talking about, then it's 
pretty much useless, because you can really only trust yourself to that degree.

I don't know if this is the sort of absolutism Ted Ts'o was talking about, but 
I think it is.   Sometimes best is the enemy of good enough, and this is 
particularly true when best is actually not achievable anyway.



Re: pgp signing in van

2013-09-06 Thread Melinda Shore
On 9/6/13 6:24 PM, Ted Lemon wrote:
> It's naive to think that keys are any more trustworthy than this,
> because any signature's trustworthiness is only as good as the
> trustworthiness of the individual who decides to sign it.   If you
> trust a key signed by someone you don't know, but who someone you
> know trusts, just how trustworthy is that?

I actually don't think that pgp is likely to be particularly
useful as a "serious" trust mechanism, mostly because of
issues like this.  I don't believe that it's an argument for
less rigor in how we assign trust to signatures but rather
an example of several underlying problems, including lack
of agreement about what it actually means to sign something,
acknowledgment that you don't know much about how the
people whose keys you're signing think about trust ("My friends
are fine but some of their friends are jerks"), etc.

One of the useful things that PKI provides is some agreement,
at least, about what we expect from certification authorities
and what it means to issue and sign a certificate.  That is
to say, the semantics are reasonably well sorted-out, which is
not the case with pgp.

Melinda



Re: decentralization of Internet (was Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Noel Chiappa
> From: Scott Brim 

> The encapsulation is not much of an obstacle to packet examination.

There was actually a proposal a couple of weeks back in the WG to encrypt all
traffic on the inter-xTR stage.

The win in doing it in the xTRs, of course, is that you don't have to go
change all the hosts, application by application: _all_ traffic, of any kind,
from that site to any/all other sites which are encryption-enabled, will get
a certain degree of confidentiality.

Does this count as something the IETF can do reasonably quickly that will
help somewhat? :-)

Noel


Re: pgp signing in van

2013-09-06 Thread Ted Lemon
On Sep 6, 2013, at 10:35 PM, Melinda Shore  wrote:
> I actually don't think that pgp is likely to be particularly
> useful as a "serious" trust mechanism, mostly because of
> issues like this.

It's not at all clear to me that "serious" trust mechanisms should be digital 
at all.   Be that as it may, we have an existence proof that a web of trust is 
useful—Facebook, G+ and LinkedIn all operate on a web of trust model, and it 
works well, and, privacy issues aside, adds a lot of value.   IETF uses an 
informal web of trust, and it works well.   Most open source projects use 
informal webs of trust, and they work well.   PGP signing for software 
distribution works well.

What these mechanisms are not is a web of trust that you could use to 
authenticate a real estate transaction.   You shouldn't accept them as 
signatures on legal contracts.   You shouldn't use them to transfer large sums 
of money to strangers.   But they are definitely useful.


