hi,

Patrick Schleizer wrote (16 Feb 2016 05:36:08 GMT) :
> Patrick Schleizer:
>> intrigeri wrote:
>> Is the reasoning behind Whonix design decision on this topic summed
>> up anywhere?
> No.

> To make it quick to save time, let's see if the following off the top
> of my head sounds already convincing enough...

Thanks!

> - The burden of proof is on you.

Only from the place you're in :)

I believe my complex design is better. You believe your simple design
is better. Fine. I think I understand what you mean (it's simple
enough). The fact that you don't understand how my design works, and
why it is the way it is (as is made obvious below), doesn't put the
burden of the _proof_ on me. Besides, we have nothing to prove to each
other :)

What this fact does, though, is put on me the burden of _explaining_
the rationale of my complex design, because I would like to build some
minimal understanding with you on this one. So I'll try.

> The claim here is "asking foes for something is better than just
> asking pals".

I believe that's not the claim. See below. (And yes, maybe I could
stand behind even _that_ claim, but really that's not the point, since
the one we're really making, and relying on, is much weaker than that.)

> -- The absence of an argument in favor of that. Nothing in the design
> documentation or mailing list discussion explained why it makes sense
> to ask someone you think to be a foe [by putting them into a pool
> called foe] for anything.

I _believe_ I see what you mean. (Spoiler alert: I happen to disagree.)

If we were to trust each "foe" more than each "pal" (and this is not
only about trusting the people behind the systems, it's also about
trusting the systems themselves), and if we were willing to base a
security design on this trust, then what you say here would make some
sense to me. But I don't think that this is the case here.

Let's clear up a misunderstanding first (I didn't expect we would have
to nitpick about the exact meaning of the words we wrote, but well):
the design doc tries to make it clear that "foe" and "pal" (quotes
_are_ there) are *relative to each other*.
I admit there is some ambiguity here, starting with the name of the
pools, and the fact that we put into the "pals" pool some systems run
by organizations that host Tails infrastructure. It's probably the
source of the misunderstanding. I'm sorry about that. I now think we
should call them pool A, pool B and pool C.

So, yes, I personally trust the people behind Riseup (who I was lucky
enough to build trust with over 12+ years) more than the people behind
github.com (who I have never interacted with). But it's not relevant
here:

 * this doesn't translate as-is to their systems, and here we are
   asking the time from computers, not from people; e.g. I personally
   believe that compromising github.com is harder, for most attackers,
   than compromising db.debian.org, even if this doesn't match the
   trust I would maybe put in the individuals behind these systems;

 * when feasible, I'd rather not base a security design on my personal
   trust relationships; Tails is not only for me, nor for people
   like me;

 * what matters in our current design is not how much we happen to
   trust X or Y: it's that "any member in one pool should be unlikely
   [...] to agree to send fake time information with a member from the
   other pools".

Of course, the design you prefer (ask only people you personally
trust) has its advantages. E.g. it's simpler to analyze and evaluate.
Tails' current design also has its advantages.

The way I see it, in security, a design that requires trusting any
given actor is generally weaker than a design that avoids the need for
that trust in the first place. I'm not quite sure how to explain that
more clearly. E.g. here, compared to your design, we avoid increasing
the value of what you call "pals" as targets for some attackers, and
in turn this makes their replies to our query even more trustworthy
(we're dealing with something akin to an ecosystem here: Tails
querying example.com for its time is not a pure function).
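To make the cross-pool property more concrete, here is a minimal
sketch of the general idea in Python. The pool names, member replies,
tolerance value and function name are all made up for illustration;
this is not Tails' actual time-syncing code, just the shape of the
argument: an attacker has to control cooperating members in *several*
pools, not just one, to feed the user a fake time.

```python
from statistics import median

# Hypothetical pools of time-source replies (Unix timestamps).
# Names and values are invented for this example.
POOLS = {
    "A": [1700000000, 1700000002, 1700000001],
    "B": [1700000001, 1700000003],
    "C": [1700000002, 1700000000, 1700000004],
}

MAX_DIVERGENCE = 60  # seconds; an arbitrary illustrative tolerance


def consensus_time(pools, max_divergence):
    """Return an agreed-upon time, or None if the pools disagree.

    Take the median reply within each pool (so a single lying member
    cannot skew its own pool), then require every pool's median to
    agree with the others within `max_divergence`. Faking the result
    thus requires collusion across pools.
    """
    medians = [median(replies) for replies in pools.values()]
    if max(medians) - min(medians) > max_divergence:
        return None  # cross-pool disagreement: refuse to set the clock
    return median(medians)
```

With the sample data above, all three pool medians fall within one
second of each other, so a consensus time is returned; if one whole
pool reported a wildly different time, the function would instead
refuse to answer.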
And we limit the risk of cooperative attacks from members of one
unique category against Tails users, which IMO is good both for them,
and for us.

> - Try a real life analogy. Would you walk around and ask someone
> likely to work against you "what time is it please?" I think not.
> You probably try to avoid these and ask people you consider 'pal'.

If I had a secure communication channel to these "pals", and these
"pals" benefited from perfect freedom, and people I consider "pals"
were reliable when it comes to implementations of time, then: yes,
sure. Except this is really not what we are dealing with here. We're
dealing with communication channels whose security is suboptimal, and
talking with systems that can be compromised.

Try another analogy: the Tor network isn't only made of nodes vouched
for by core Tor people. Do you think it would be stronger if it was?

> - The marketing argument. Try to explain why you must connect to
> nsa.gov [made up foe example]. In the absence of a good explanation
> why this is useful, I would avoid it.

I can relate to that, at least for the sake of limiting our user
support workload. So far I believe it's been OK. This is why we don't
have nsa.gov in our current "foe" pool :P

Cheers,
--
intrigeri

_______________________________________________
Tails-dev mailing list
Tails-dev@boum.org
https://mailman.boum.org/listinfo/tails-dev
To unsubscribe from this list, send an empty email to
tails-dev-unsubscr...@boum.org.