On Mon, Jun 10, 2013 at 10:50 AM, Ben Laurie <[email protected]> wrote:
> On 7 June 2013 17:30, Phillip Hallam-Baker <[email protected]> wrote:
> > On Mon, Jun 3, 2013 at 5:21 AM, Ben Laurie <[email protected]> wrote:
> >>
> >> Omnibroker introduces a trusted third party. It may be better than the
> >> status quo, but I think we've got adequate proof that we can't
> >> actually trust TTPs.
> >
> > Like the 9 companies allegedly involved in PRISM?
> >
> > The real point is that I get to choose whether to trust Google but not
> > whether to trust Comodo or ICANN. And that is the difference between a
> > CA and an Omnibroker as TTPs. They are both TTPs, but one is an agent of
> > and chosen by the server operator and the other is chosen by the client
> > operator.
>
> My point is that CT introduces _untrusted_ third parties, and I think
> that's the way forward. Replacing the UTP with a TTP you trust to
> verify the UTP seems like a step backwards, regardless of my freedom
> to choose who I misplace my trust in.

What CT does is provide a mechanism that (1) allows any party to audit
certain aspects of the operations of a TTP from public data and (2) allows
the audit to be performed as part of certificate validation checking.

If your view of the Web is limited to the Web browser on a capable machine,
then there is an argument to be made for moving all checking etc. into the
browser. But I have network-enabled light bulbs in the house. I do not want
to be configuring trust in every light bulb.

Regardless, Omnibroker supports the same two models that SCVP does:

1) Validation: The client delegates trust to the broker completely; the
broker makes the decision.

2) Discovery: The client delegates a veto to the broker, which can refuse a
connection to a site known to be dangerous, and the client revalidates all
the trust data returned by the broker before relying on it.

One advantage of the discovery-only model is that the broker can collect
all the data required to make the choice for the browser ahead of the
decision and cache it.
So in the case of DANE data, browser checking adds latency, so a browser is
unlikely to check for DANE records unless at least 10% of sites have them.
But a broker can do most of its checks offline, so they don't add latency.
In the case of DANE there is a second point to the discovery mode, which is
to provide a bypass around local DNS resolvers that block TLSA / DNSSEC
records.

A second advantage is that it enables discretion. Any security check that
you implement in the browser has to be reduced to a set of codified,
completely standard rules. The spam filtering companies don't take that
approach: they have a more tactical scheme making use of heuristic data,
and they actively change their strategy in response to changes in their
opponents' tactics.

> > Ah but now you are going to say that I can compile Chrome from source.
> > Which just leaves me with the task of checking a billion lines of
> > source for a backdoor. Omnibroker is designed to provide the same
> > option.
> >
> > If you install your own Omnibroker service you can do all the checking
> > yourself for every client that supports the protocol. You can do DANE
> > checks and CT and Convergence and anything else that you might invent
> > in the future. You can do all those checks all by yourself, or you can
> > ask a Symantec or a Comodo or a Kaspersky for information on possible
> > bad IP addresses, botnets, etc.
>
> Hmm. I think you need to incorporate some way for an evil Omnibroker
> to be both detectable and possible to bring to justice (i.e. something
> like CT consistency proofs).

The broker can be required to hand over all the data used to make its
decision.

-- 
Website: http://hallambaker.com/
_______________________________________________
dane mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dane
