> On 25 Apr 2019, at 08:39, t3sserakt <[email protected]> wrote:
>
> I recently watched your talk "Practical Mix Network Design" at 34c3.
>
> At minute 8:45
>
> https://media.ccc.de/v/34c3-8974-practical_mix_network_design#t=525
>
> you talk about epistemic attacks against a PKI. I did not quite get
> whether you stated that it is better to have a central, or at least
> federated, PKI over a completely distributed one, and that you are in
> favor of not running a mix net on top of GNUnet.
It’s not that simple. I’ve never believed RPS was quite ready for route selection, but I also never agreed with David on abandoning a decentralised solution.

> Can you please point me to papers dealing with epistemic attacks
> against distributed PKIs?

I’m not overly familiar with that literature, but the attack styles are clear enough: either you poison the relay selection database of a target, or else you use your knowledge of a target’s replay protection database when launching a long-term statistical disclosure attack.

I think the sensible question is: in what ways should the PKI be centralised or decentralised? Should relay admission be open or closed? Centralised or decentralised? Should relay measurement be open or closed, or even closed and manual? Tor has open but centralised admission, automated measurement, and no stratification, unless you count MyFamily. David’s project Panoramix/Katzenpost had closed and centralised admission, and even manual measurement and stratification.

In my opinion, we require decentralised admission and measurement because the corruption/censorship risk remains high, and thus the optics remain extremely bad for Tor-like approaches. We thus require gossip about new relay admission at the very least.

We’ll always have “consensus” on some relay list, if only for bootstrap, but should we have a large consensus list or just gossip relay information? Should clients see the full consensus list, or does it merely serve network functions?

In the long run, we cannot give clients the full consensus list because beyond 10k relays even a minimised list requires roughly 400 MB, or 1 GB with CSIDH (rough arithmetic below). If we accept using slow, insecure pairing-based crypto for privacy, and reject the post-quantum protection of CSIDH, then we can compress the full consensus enormously for clients, but those are two big ifs. In the short run, if we do not give clients some full consensus list, then we’re vulnerable to epistemic attacks.

We should always place relays into network strata using collaborative randomness (sketch below), but a large enough network can likely do this without a consensus list, while a small network should benefit more from the consensus list and capacity measurements. We thus favour a consensus list for strata until we can argue we do not require one.

Jeff

p.s. I’m interested in capacity measurements for admitted relays, or even in incentivising them. You can read my proposal at
https://github.com/w3f/messaging/blob/master/incentives.md
In it, we sample relays using cover traffic generated by a VRF lottery (sketch below), but a consensus list avoids biased sampling. It’s plausible Danezis’ Blockmania might provide some nominally non-global “verifiable gossip” scheme that would also suffice, but a consensus list sounds much easier initially. In the distant future, we’d love it if relay admission took similar capacity measurements into consideration too. Applying my scheme naively creates bias and converges too slowly, but maybe biased information beats no information. If we sample new relays that are merely under consideration, but do not tell users about them, then relay-generated cover traffic becomes distinguishable from client-generated cover traffic. Relay-generated cover packets then do less to protect users, but we might still argue that this difference favours sharing even new nodes with users.
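A rough sanity check on the consensus size figure above, with parameters that are purely my own guesses (10k relays, one mix key per 3-hour epoch, keys published about six months ahead, 32-byte X25519 vs 64-byte CSIDH-512 public keys), and ignoring addresses, signatures, and other descriptor fields:

    # Back-of-envelope consensus size; all parameters guessed for illustration.
    RELAYS         = 10_000
    EPOCHS_PER_DAY = 8        # 3-hour mix key epochs
    DAYS_AHEAD     = 180      # keys published roughly six months in advance
    X25519_KEY     = 32       # bytes per X25519 public key
    CSIDH_KEY      = 64       # bytes per CSIDH-512 public key

    def consensus_megabytes(key_bytes):
        return RELAYS * EPOCHS_PER_DAY * DAYS_AHEAD * key_bytes / 1e6

    print(consensus_megabytes(X25519_KEY))  # ~460 MB
    print(consensus_megabytes(CSIDH_KEY))   # ~920 MB, i.e. roughly 1 GB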
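For the strata assignment, a minimal sketch of what I mean by collaborative randomness: given a shared beacon value for the epoch, anyone can recompute every relay’s layer, so no single party controls placement. The names and parameters here are only illustrative, and a real deployment would additionally weight placement by measured capacity.

    import hashlib

    def stratum(beacon: bytes, relay_id: bytes, num_strata: int) -> int:
        # Deterministic, publicly recomputable layer assignment from the
        # epoch's shared randomness and the relay's identity key.
        digest = hashlib.sha256(beacon + relay_id).digest()
        return int.from_bytes(digest[:8], "big") % num_strata

    # Example: three mix layers.
    layer = stratum(b"epoch-1234-beacon", b"relay-identity-key", 3)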
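And a sketch of the VRF lottery shape for capacity probes, not the exact construction in the incentives.md proposal: a relay wins the right to emit a probe-carrying cover packet when its VRF output falls under a public threshold, and the proof lets others check afterwards that the sample was not cherry-picked. The HMAC stand-in below only exists to make the sketch runnable; a real deployment needs an actual VRF, e.g. an ECVRF, so the output is publicly verifiable.

    import hashlib, hmac

    # Stand-in VRF for illustration only; not a real VRF.
    def vrf_prove(sk: bytes, msg: bytes):
        output = hmac.new(sk, msg, hashlib.sha256).digest()
        return output, b"proof-placeholder"

    THRESHOLD = 2**256 // 50                  # win roughly one draw in fifty

    def probe_ticket(sk: bytes, epoch: int, counter: int):
        msg = epoch.to_bytes(8, "big") + counter.to_bytes(8, "big")
        output, proof = vrf_prove(sk, msg)
        if int.from_bytes(output, "big") < THRESHOLD:
            return output, proof              # attach to a cover packet as a capacity probe
        return None                           # otherwise send ordinary cover traffic

    ticket = probe_ticket(b"relay-secret", epoch=1234, counter=7)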
