Re: [tor-dev] Scalability or Onionbalance for v3 ephemeral/ADD_ONION services

2021-07-26 Thread George Kadianakis
Holmes Wilson  writes:

> Hi George,
>
> Sorry for the slow reply here! Just getting back to this. 
>
>>> For our application (a messaging app) it would be super useful to get the
>>> full list of known online (or recently seen online) onion addresses in
>>> possession of some frontend key. This would let us use onionbalance for
>>> peer discovery instead of blindly trying the set of all known peers, which
>>> won't work well for large groups / large numbers of peers.
>>> 
>> 
>> Hmm, can you please give us some more details on what you are looking
>> for? What is peer discovery in the above context, and what do you mean
>> with "full list of ... onion addresses in possession of some frontend
>> key"? I'm asking because the frontend key of onionbalance is also the
>> onion address that users should access.
>
> Our context is we are building a Discord-like team messaging app where peers 
> are connected to each other over Tor, via onion addresses, rather than to a 
> central server. So each user connects to a few peers, and messages travel 
> across peers on a gossip network, and there’s a mechanism for syncing 
> messages you missed, say, if you went offline for a bit. 
>
> One problem we have is, when a new peer comes online, how do they know which 
> other peers are online? Right now, they can try all of the peers they know 
> about, or perhaps try recently-seen peers. But if there are hundreds of peers 
> and only a few are currently online, it will be necessary to try many 
> unreachable peers before finding one who’s online. So that’s not ideal.
>
> One solution to this would be for each online peer to host the same onion 
> service, using a shared key, in addition to their normal peer onion address. 
> And at this address they could return a list of peers they knew were online. 
> So a user would just have to connect to one address, at which point the Tor 
> network would connect them to some online peer, and then that peer could tell 
> them about other online peers. The problem with this approach, as pointed out 
> by folks on this list, was that all those peers would have to really trust 
> each other, since any one of them could go rogue and host malicious 
> information instead of the peer list, gumming up the works. I’m not sure this 
> is a fatal problem, since it would still *help* in cases where there wasn’t a 
> malicious peer, and users could still fall back to the slower method of 
> trying every peer. 
>
> But what I’m wondering is whether there is any mechanism for a bunch of onion 
> addresses that *don’t* completely trust each other to share a “meta” onion 
> address on the Tor network, such that when the user looks up that identifier 
> instead of getting connected directly to whatever content one of those onion 
> addresses is serving, they get a list of all onion addresses that hold the 
> keys to the “meta” address. 
>
> It’d be like asking Tor, "show me a list of all onion addresses that have 
> registered this meta address.” Sort of like asking, “show me a list of 
> mirrors for this address…” at which point the user could try connecting to 
> one or more of them, but would not have as serious a problem if one of the 
> sites went rogue and started serving useless content.
>
> This is a bit of a long explanation, and my guess is that there isn’t 
> anything like this and that the above scenario isn’t common enough to be 
> worth targeting, but I was curious if anything like this had ever been 
> discussed.
>

Hello Holmes,

I don't know much about these kinds of P2P protocols, but my intuition is
that this "get list of online peers" feature should be handled at the gossip
protocol layer, and not at the Tor layer. As a naive strawman example,
each peer can keep its own list of online peers and return it when asked.
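That strawman can be made concrete with a short sketch (our own hypothetical code, not part of Tor or any existing gossip library): each peer tracks when it last heard from other peers and answers "who is online" queries itself.

```python
import time

class PeerRegistry:
    """Toy per-peer registry of recently-seen peers (illustrative only)."""

    def __init__(self, freshness_secs=600):
        self.freshness_secs = freshness_secs
        self.last_seen = {}  # onion address -> unix timestamp

    def mark_seen(self, onion_addr, now=None):
        """Record that we just heard from this peer."""
        self.last_seen[onion_addr] = now if now is not None else time.time()

    def recently_online(self, now=None):
        """Return peers seen within the freshness window, newest first."""
        now = now if now is not None else time.time()
        fresh = {a: t for a, t in self.last_seen.items()
                 if now - t <= self.freshness_secs}
        return sorted(fresh, key=fresh.get, reverse=True)
```

A newly arrived peer would then ask any one reachable peer for `recently_online()` instead of probing every known onion address.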

I feel like the idea of "connect to meta address to get list of online
peers" is kinda the same as "ask any peer you can find for the list of
online peers". That's because with the meta address idea you don't have
a way to know whether the meta address result is a trusted peer; in the
same way that you don't know whether the peers you get through gossip
are trusted peers. This means that in either case you will have to
handle malicious nodes somehow.

In any case, the "meta" address idea is not handled natively by Tor
right now. You could in theory do it by having multiple peers share the
same private key, but I don't know if the results would be ideal. For
example, a single such peer can DoS the system by continuously sending a
corrupt onion descriptor for the meta address.

Good luck with designing your P2P protocol!
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] A series of questions about Tor (m1 support, forward secrecy, v3 auth)

2021-07-26 Thread George Kadianakis
Holmes Wilson  writes:

> Hi everyone,
>

Hello Holmes,

here are some attempts to answer your questions.

> 2. FORWARD SECRECY
>
> Is there a good source for documentation on how forward secrecy works in Tor, 
> and on what security guarantees it provides? Googling finds things like this 
> reddit post (https://www.reddit.com/r/TOR/comments/cryrjx/does_tor_use_pfs/) 
> but I can’t find any detailed information about it, what threat models it 
> fits, etc. 
>
> One specific question is, if two users are communicating by sending messages 
> over a connection to an onion service (like ricochet) and an attacker 
> surveils their internet traffic and compromises their devices at a later 
> date, will the attacker be able to recover the clear text of their 
> conversation? When are keys for a given connection destroyed? Does it happen 
> continuously throughout the course of a Tor connection? Or on the creation of 
> a new circuit? Or what?
>

tl;dr Onion service sessions are protected with forward secrecy.

In particular, v3 onion services use a variant of the ntor key exchange
(see [NTOR-WITH-EXTRA-DATA] in rend-spec-v3.txt) when doing their
rendezvous. The ntor key exchange provides forward secrecy, which means
that if the long-term secret key is compromised (e.g. by pwning the
device), past sessions remain secure as long as the short-term ephemeral
session secrets don't get compromised.

The forward secrecy "happens" at the creation of the rendezvous circuit
and not continuously through the course of a Tor connection (i.e. no
ratcheting happens). This means that if an attacker has the transcript
of the entire circuit, and manages to compromise the session at its
midpoint, it should be possible for her to decrypt back to the start of
the session.

Here is the original ntor paper:  http://www.cypherpunks.ca/~iang/pubs/ntor.pdf
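The forward-secrecy property comes from mixing a fresh ephemeral key exchange into each session. A toy finite-field Diffie-Hellman sketch (this is not ntor itself, and the small 32-bit prime is insecure and chosen purely for illustration) shows the idea: once both sides discard their ephemeral secrets, a later compromise of long-term keys cannot recover the session key from the transcript.

```python
import secrets

# Toy DH group -- a real deployment would use a vetted group; this
# 32-bit prime is for demonstration only.
P = 4294967291  # largest 32-bit prime (insecure toy modulus)
G = 5

def dh_keypair():
    # A fresh ephemeral secret per session; forward secrecy depends on
    # deleting it once the session key has been derived.
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Each side generates an ephemeral keypair for this circuit only.
a_secret, a_public = dh_keypair()
b_secret, b_public = dh_keypair()

# Both sides derive the same session key from the other's public value.
k_a = pow(b_public, a_secret, P)
k_b = pow(a_public, b_secret, P)
assert k_a == k_b

# Deleting the ephemeral secrets is what yields forward secrecy: the
# public transcript (a_public, b_public) alone does not reveal k_a.
del a_secret, b_secret
```

Note that, as the paragraph above says, this happens once at circuit creation; there is no per-message ratcheting within a circuit.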

> 3. V3 AUTH AND DOS ATTACKS
>
> Does v3 onion authentication protect against DOS attacks? That is, can 
> someone who is not authorized to connect to an onion address with 
> authentication enabled still cause problems for that onion address? Can they 
> connect to it at all, in the sense of being able to send data to the tor 
> client at that onion address? Or does the Tor network itself prevent this 
> connection from even happening? 
>
> A related question is, if we’re looking to deny connections to an onion 
> address to any unauthorized users, and we’re considering turning off onion 
> authentication and implementing some standard authentication scheme that 
> seems fairly well-supported at the web server layer, is there any 
> security-related reason why we would be better off using Tor’s own 
> authentication instead? Using our own authentication scheme will be a bit 
> easier to control, rather than having to send commands to Tor (and possibly 
> restart it for removing users?) but I’m wondering if there are security 
> properties we lose by doing that. 
>

Like hackerncoder said, v3 onion authentication protects against DoS
attacks because the access control happens very early in the connection
process.

An attacker with no access to the auth keys cannot decrypt the onion
descriptor, which means that they cannot do introduction or rendezvous
with the onion service. It so happens that all onion service DoS attack
vectors occur during intro or rendezvous, and hence v3 onion auth
protects against them.
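As a toy illustration of why access control happens so early (our own sketch; real v3 descriptor encryption uses x25519-derived keys and a proper cipher, not this XOR toy): the intro-point list itself is encrypted to authorized clients, so an attacker without the auth key cannot even locate where to send intro traffic.

```python
import hashlib

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR against a sha256-derived keystream.
    Illustration only -- NOT Tor's actual descriptor encryption."""
    stream = hashlib.sha256(key).digest()
    assert len(data) <= len(stream)
    return bytes(a ^ b for a, b in zip(data, stream))

# The service encrypts the intro-point list to its authorized clients' key.
auth_key = b"\x01" * 32              # stand-in for a client auth key
intro_points = b"intro:1.2.3.4:9001" # hypothetical intro-point payload
descriptor = xor_crypt(intro_points, auth_key)

# An authorized client recovers the intro points...
assert xor_crypt(descriptor, auth_key) == intro_points
# ...while an attacker holding the wrong key sees only ciphertext, so
# there is nothing to flood: the DoS surface never opens.
assert xor_crypt(descriptor, b"\x02" * 32) != intro_points
```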

WRT your second question: if you swap Tor's client authentication for
your own application-layer authentication scheme, you lose the above
properties, since an attacker will be able to reach the web server
before they get denied access. This means that the attacker will be
able to abuse DoS vectors during the intro and rendezvous steps of the
connection.

There is something to be said about the UX issues of Tor's own client
authentication mechanism not living at the application layer, and this
is something we should be improving in the future.


Re: [tor-dev] Scalability or Onionbalance for v3 ephemeral/ADD_ONION services

2021-06-28 Thread George Kadianakis
Holmes Wilson  writes:

> Would this return a list of currently-online onion addresses in possession
> of the frontend address key?
>
> Or would it just route traffic to one of those addresses invisibly?
>

Hello Holmes,

I think the feature that Chad was asking for would just allow them to
enable OnionBalance through the control port (since setting
OnionbalanceMasterKey is a necessary step of configuring onionbalance
backends).

> For our application (a messaging app) it would be super useful to get the
> full list of known online (or recently seen online) onion addresses in
> possession of some frontend key. This would let us use onionbalance for
> peer discovery instead of blindly trying the set of all known peers, which
> won't work well for large groups / large numbers of peers.
>

Hmm, can you please give us some more details on what you are looking
for? What is peer discovery in the above context, and what do you mean
with "full list of ... onion addresses in possession of some frontend
key"? I'm asking because the frontend key of onionbalance is also the
onion address that users should access.

Cheers!


> I'd be interested in working with others on a spec for this!
>
> On Mon, Jun 14, 2021 at 6:25 AM George Kadianakis 
> wrote:
>
>> Chad Retz  writes:
>>
>> > A quick glance at the code shows that ADD_ONION (i.e. "ephemeral"
>> > onion services) doesn't support setting an Onionbalance
>> > frontend/master onion address (specifically
>> > https://gitlab.torproject.org/tpo/core/tor/-/issues/32709 doesn't seem
>> > to have a control-side analogue). Would a feature request for adding a
>> > `*(SP "OnionbalanceMasterKey=" OBKey)` (or "OBMasterKey" or whatever)
>> > to ADD_ONION be reasonable? If so, just add in Gitlab?
>> >
>>
>> Hello Chad,
>>
>> that's indeed something that is missing and a reasonable feature
>> request. A spec/code patch would be particularly welcome ;)
>>
>> > Also curious alternative scalability and load balancing options for
>> > ephemeral v3 onion services. I have read
>> >
>> https://www.benthamsgaze.org/wp-content/uploads/2015/11/sucu-torscaling.pdf
>> > but unsure if anything more recent has been written. Beyond that and
>> > Onionbalance, any other interesting approaches I could employ
>> > (assuming I can dev anything from a control port pov, but am wanting
>> > to work w/ an unmodified Tor binary)?
>> >
>>
>> Another complementary approach is to split the 'introduction' and
>> 'rendezvous' functionalities to different hosts:
>>
>> https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/255-hs-load-balancing.txt
>> However it hasn't been implemented yet...
>>
>> Cheers!


Re: [tor-dev] Scalability or Onionbalance for v3 ephemeral/ADD_ONION services

2021-06-14 Thread George Kadianakis
Chad Retz  writes:

> A quick glance at the code shows that ADD_ONION (i.e. "ephemeral"
> onion services) doesn't support setting an Onionbalance
> frontend/master onion address (specifically
> https://gitlab.torproject.org/tpo/core/tor/-/issues/32709 doesn't seem
> to have a control-side analogue). Would a feature request for adding a
> `*(SP "OnionbalanceMasterKey=" OBKey)` (or "OBMasterKey" or whatever)
> to ADD_ONION be reasonable? If so, just add in Gitlab?
>

Hello Chad,

that's indeed something that is missing and a reasonable feature
request. A spec/code patch would be particularly welcome ;)
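For concreteness, a sketch of what the extended command might look like (our own hypothetical code: the `OnionbalanceMasterKey=` argument is only being proposed in this thread and does not exist in Tor's control spec; `ADD_ONION NEW:ED25519-V3` and `Port=` are real control-spec syntax):

```python
def build_add_onion(key_blob, ports, ob_master_key=None):
    """Assemble an ADD_ONION control-port command string.

    `ob_master_key` models the hypothetical OnionbalanceMasterKey
    argument discussed above -- NOT an existing control-spec field.
    """
    parts = ["ADD_ONION", key_blob]
    for virt_port, target in ports:
        parts.append(f"Port={virt_port},{target}")
    if ob_master_key is not None:
        parts.append(f"OnionbalanceMasterKey={ob_master_key}")
    return " ".join(parts)

cmd = build_add_onion("NEW:ED25519-V3", [(80, "127.0.0.1:8080")],
                      ob_master_key="frontendonionaddrexample")
```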

> Also curious alternative scalability and load balancing options for
> ephemeral v3 onion services. I have read
> https://www.benthamsgaze.org/wp-content/uploads/2015/11/sucu-torscaling.pdf
> but unsure if anything more recent has been written. Beyond that and
> Onionbalance, any other interesting approaches I could employ
> (assuming I can dev anything from a control port pov, but am wanting
> to work w/ an unmodified Tor binary)?
>

Another complementary approach is to split the 'introduction' and
'rendezvous' functionalities to different hosts:
 
https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/255-hs-load-balancing.txt
However it hasn't been implemented yet...

Cheers!


[tor-dev] [RFC] Proposal 332: Vanguards lite

2021-06-01 Thread George Kadianakis
Hello list,

I present you with a simplified version of prop292 which protects
against guard discovery attacks.

The proposal can also be found in:
   https://gitlab.torproject.org/asn/torspec/-/commits/vg-lite

---

```
Filename: 332-vanguards-lite.md
Title: Vanguards lite
Author: George Kadianakis, Mike Perry
Created: 2021-05-20
Status: Draft
```

# 0. Introduction & Motivation

  This proposal specifies a simplified version of Proposal 292 "Mesh-based
  vanguards" for the purposes of implementing it directly into the C Tor
  codebase.

  For more details on guard discovery attacks and how vanguards defend against
  them, we refer to Proposal 292 [PROP292_REF].

# 1. Overview

  We propose an identical system to the Mesh-based Vanguards from proposal 292,
  but with the following differences:

  - No third layer of guards is used.
  - The Layer2 lifetime is sampled from the max(X,X) distribution with a
    minimum of one day and a maximum of 12 days. This makes the average
    lifetime approximately a week. We let NUM_LAYER2_GUARDS=4.
  - We don't write guards to disk. This means that the guard topology resets
    when tor restarts.
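As a sanity check on the "approximately a week" claim (a quick simulation of our own, not part of the proposal text), the mean of max(X,X) for X uniform on [1, 12] days is 1 + 2*(12-1)/3 ≈ 8.3 days:

```python
import random

random.seed(1)

def layer2_lifetime_days(low=1.0, high=12.0):
    """Sample a lifetime from the max(X,X) distribution described above."""
    return max(random.uniform(low, high), random.uniform(low, high))

samples = [layer2_lifetime_days() for _ in range(200_000)]
mean = sum(samples) / len(samples)
# Analytic mean: low + 2*(high - low)/3 = 1 + 22/3 = 8.33 days,
# i.e. roughly a week.
```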

  By avoiding a third layer of guards we reduce the linkability issues
  of Proposal 292, which means that we don't have to add an extra hop on top of
  our paths. This simplifies engineering.

# 2. Rotation Period Analysis

  From the table in Section 3.1 of Proposal 292, with NUM_LAYER2_GUARDS=4 it
  can be seen that the Sybil attack on Layer2 will complete with 50% chance in
  18*7 days (126 days) for the 1% adversary, 4*7 days (about one month) for the
  5% adversary, and 2*7 days (two weeks) for the 10% adversary.
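The figures above can be reproduced with a back-of-the-envelope calculation (our own sketch, assuming independent guard draws per roughly week-long rotation): a fraction-f adversary captures at least one of the 4 Layer2 slots per rotation with probability 1-(1-f)^4, so the median number of rotations until success is ln(0.5)/ln((1-f)^4).

```python
import math

NUM_LAYER2_GUARDS = 4
ROTATION_DAYS = 7  # approximate average Layer2 lifetime

def median_rotations_to_sybil(f):
    """Rotations until a fraction-f adversary has >= 50% chance of
    having occupied at least one Layer2 guard slot."""
    p_miss_all = (1 - f) ** NUM_LAYER2_GUARDS  # adversary misses every slot
    return math.ceil(math.log(0.5) / math.log(p_miss_all))

for f in (0.01, 0.05, 0.10):
    n = median_rotations_to_sybil(f)
    print(f"{f:.0%} adversary: ~{n} rotations (~{n * ROTATION_DAYS} days)")
```

This recovers 18, 4, and 2 rotations for the 1%, 5%, and 10% adversaries, matching the numbers quoted from Proposal 292.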

# 3. Tradeoffs from Proposal 292

  This proposal has several advantages over Proposal 292:

  By avoiding a third layer of guards we reduce the linkability issues of
  Proposal 292, which means that we don't have to add an extra hop on top of
  our paths. This simplifies engineering and makes paths shorter by default:
  this means less latency and quicker page load times.

  This proposal also comes with disadvantages:

  The lack of third-layer guards makes it easier to launch guard discovery
  attacks against clients and onion services. Long-lived services are not well
  protected, and this proposal might provide those services with a false sense
  of security. Such services should still use the vanguards addon 
[VANGUARDS_REF].

# 4. References

  [PROP292_REF]: 
https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/292-mesh-vanguards.txt
  [VANGUARDS_REF]: https://github.com/mikeperry-tor/vanguards


Re: [tor-dev] Uptime stats for "Tor user can access an otherwise-functional hidden service"?

2021-05-06 Thread George Kadianakis
Holmes Wilson  writes:

> And I just saw today's blog post about the new status page. Congrats on 
> launching this! Someone is reading my mind :) 
>

Hello Holmes,

glad you like the status page! It's indeed great!

> But is there any good source for historical data on incidents or re: the 
> questions below? 
>

I don't think we have historical data about v3 downtimes unfortunately,
apart from the January event mentioned on status.torproject.org.

We *have* developed tools to monitor the health of v3 onion services in
an attempt to weed out reachability issues [0], but we mainly used the
tool to find specific bugs, and not as a downtime scanner. That is, we
never performed truly long-term experiments with it, or hooked it up to
some global dashboard.

> Also, is there currently some monitoring in place such that someone on the 
> Tor team gets a phonecall or SMS alert if onion services seem to be globally 
> down? Several projects I’ve been a part of over the years have benefitted 
> immensely from this, using tools like PagerDuty, so I’m curious if Tor has 
> something like this for onion services. 
>

We are currently not aware of any unresolved reachability issues with v3
onion services.

Fortunately, the onion community is pretty active so we usually become
aware of such issues pretty quickly if they appear on a widespread
scale. That said, I don't think getting alerted more quickly (via an
SMS) would be a bad idea.

Regarding your question: 

> 2. What percentage of attempts by a user attempting to connect to an
> onion service are successful, assuming no successful censorship of the
> user’s network?

I would say 100% of attempts modulo unknown reachability bugs. Even if
the client picks a bad path, and a circuit gets broken, the Tor client
should be smart enough to rebuild the circuit and retry the onion
connection.

As always, if you have encountered reachability issues, please do get in
touch with us (and also please provide some logs) so that we can look into
this more deeply.

[0]: https://gitlab.torproject.org/tpo/core/tor/-/issues/28841

>
>> On May 5, 2021, at 3:27 PM, Holmes Wilson  wrote:
>> 
>> Hi everyone,
>> 
>> I’m building a messaging app based on Tor v3 onion services and I’m 
>> wondering what kind of uptime expectations we should set with users and 
>> other stakeholders. 
>> 
>> Is there data over time on uptime for onion service functionality? That is, 
>> not for a particular onion service, but for something like, given that the 
>> user’s access to Tor is not being limited by their ISP, and given that the 
>> onion service is fully operational, whether a Tor user can reach the onion 
>> service?
>> 
>> Some more concrete versions of this question are: 
>> 
>> 1. For what percentage of time over a given time period (say the past 3 
>> years) are there no known network-wide problems affecting onion services?
>> 2. What percentage of attempts by a user attempting to connect to an onion 
>> service are successful, assuming no successful censorship of the user’s 
>> network?
>> 3. Is there some incident log somewhere of problems that affected onion 
>> services network wide that includes how long these problems persisted for? 
>> (I don’t see any onion service outage notes in this document, though I seem 
>> to remember there was an issue a few months back? 
>> https://metrics.torproject.org/news.html 
>> )  
>> 
>> I see there’s uptime data for various relays, but I’m not sure how to 
>> translate this into a meaningful answer to the two above questions. Are 
>> there any good answers to these questions out there in the wild? Even 
>> approximate answers or lower bounds for uptime are fine and super helpful!  
>> 
>> Thanks!!!
>> Holmes
>


Re: [tor-dev] Question about hidden services shared by multiple hosts

2021-04-06 Thread George Kadianakis
David Goulet  writes:

> On 26 Mar (08:55:54), Holmes Wilson wrote:
>> Hi everyone,
>
> Greetings,
>
>> 
>> We’re working on a peer-to-peer group chat app where peers connect over v3
>> onion addresses. 
>> 
>> One issue are groups where there are many users but only a few are online in
>> a given moment.  Onion addresses are forever, and existing peers might know
>> every peer in the network, but it will take a while to try connecting to all
>> of them to find one that is online. 
>> 
>> In this case, it seems helpful for one or more peers to share one or more
>> onion addresses that would serve as reliable  “trackers", e.g. 
>> 
>> 1. All members know the keypairs for these addresses.
>> 2. All online members ping these addresses at random intervals to say
>>they’re online.
>> 3. If they can’t connect to an address, they start hosting it themselves.
>> 
>> We’re going to start testing it, but we’re wondering if folks here know the
>> likely outcome of trying to “share” hosting of an onion service in this
>> spontaneous-volunteer sort of way and if there are downsides.
>> 
>> I *think* the most important question is how long it takes for the network
>> to stop routing incoming traffic to an offline client when there’s an online
>> one available. How long will the address likely be unreachable in one of
>> these transition moments, assuming some peer immediately detects that a
>> “tracker” onion address has gone offline and begins hosting it themselves?
>> (And does this question make sense?)
>
> Interesting idea!
>
> So sharing onion address key material between peers can be fine until they are
> used at the same time. What will happen is that the two peers hosting the same
> onion address (service) will start competing on the onion service directory
> side where service's upload what we call a "descriptor" which is what client
> fetch in order to initiate a connection to the service.
>
> With v3, it gets even more complicated actually because of the "revision
> counter" in the descriptor which v2 didn't have.
>
> It is simply a number that keeps going up in the descriptor so the onion
> service directory (relay) doesn't accept a previous descriptor (replay). And
> so, your two peers sharing the onion keys will need to somehow sync that
> revision counter for your idea to work (it is located on disk in the state
> file, "HidServRevCounter").
>
> Otherwise, one counter will inevitably be higher than the other, and thus
> one peer's uploads will always succeed while the other's will always fail.
>

Hello all,

this revision counter sync issue is not a problem anymore since we
introduced the Order-Preserving-Encryption revision counter logic:

https://gitlab.torproject.org/tpo/core/tor/-/blob/master/src/feature/hs/hs_service.c#L2979

https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/rend-spec-v3.txt#L2548
Feel free to try it and let us know if it doesn't work. The solution
assumes that all peers have reasonably synchronized clocks.
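The intuition behind the fix can be sketched in a few lines (a simplified stand-in of our own, not Tor's actual OPE construction): each host derives the revision counter from the shared clock rather than from local state, so a host that publishes later automatically publishes a higher value, with no on-disk counter to sync.

```python
def revision_counter(now, period_start=0):
    """Simplified stand-in for Tor's OPE-based revision counter: a value
    monotonically increasing in (synchronized) wall-clock time, so
    independent hosts never need to share counter state.

    Tor actually applies order-preserving encryption to the seconds
    elapsed in the current time period; we just use the raw seconds here.
    """
    return int(now - period_start)
```

Two hosts with reasonably synchronized clocks then always produce counters consistent with upload order, which is exactly the clock-sync assumption mentioned above.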

In other news, the above "all members know all keypairs" approach seems
super dangerous in terms of security, especially if not all those
members are 100% trusted by each other.

To answer the performance question, if a peer immediately notices the
onion service being offline and begins hosting it, it should be
pretty-much immediately reachable by new clients.


Re: [tor-dev] Proposal 328: Make Relays Report When They Are Overloaded

2021-03-02 Thread George Kadianakis
David Goulet  writes:

> Greetings,
>
> Attached is a proposal from Mike Perry and I. Merge requsest is here:
>
> https://gitlab.torproject.org/tpo/core/torspec/-/merge_requests/22
>

Hello all,

while working on this proposal I had to change it slightly to add a few
more metrics and also to simplify some engineering issues that we would
encounter. You can find the changes here:
   
https://gitlab.torproject.org/asn/torspec/-/commit/b57743b9764bd8e6ef8de689d14483b7ec9c91ec

Mike, based on your comments in the #40222 ticket, I would appreciate
comments on the way the DNS issues will be reported. David argued that
they should not be part of the "overload-general" line because they are
not an overload and it's not the fault of the network in any way. This
is why we added them as separate lines. Furthermore, David suggested we
turn them into a threshold ("only report if 25% of the total requests
have timed out") instead of "only report if at least one timeout has
occurred", since that would be more useful.
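The thresholding idea is simple enough to sketch (our own illustrative code, not the relay's actual implementation; the 25% figure is the one suggested above):

```python
def should_report_dns_timeouts(timeouts: int, total: int,
                               threshold: float = 0.25) -> bool:
    """Report a DNS-timeout line only when at least `threshold` of all
    DNS requests in the measurement period timed out, rather than on
    the first timeout."""
    return total > 0 and timeouts / total >= threshold
```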

We also decided to simplify the 'overload-ratelimits' line to make it
easier to implement (learning whether it was a burst or rate overload in
Tor seems to be quite hard, so we decided to merge these two events).

Cheers!


[tor-dev] [RFC] Proposal: "Res tokens: Anonymous Credentials for Onion Service DoS Resilience"

2021-02-11 Thread George Kadianakis
Hello all,

after lots of investigation on anonymous credentials, we are glad to
present you with a draft of the onion services anti-DoS proposal using
tokens.

While the basic idea of the proposal should remain reasonably solid,
there are various XXX sprinkled around the proposal and some of them
definitely need to be addressed before the proposal becomes truly
usable.

We are particularly looking forward to feedback about:
- Token issuance services
- The anonymous credential scheme chosen
- The XXXs and design decisions of the proposal

Hope you have a pleasant read!

---

```
Filename: 331-res-tokens-for-anti-dos.md
Title: Res tokens: Anonymous Credentials for Onion Service DoS Resilience
Author: George Kadianakis, Mike Perry
Created: 11-02-2021
Status: Draft
```

      +--------------+                  +---------------+
      | Token Issuer |                  | Onion Service |
      +--------------+                  +---------------+
             ^                                  ^
             |            +---------+           |
    Issuance |     1.     |         |     2.    | Redemption
             +----------->|  Alice  |<----------+
                          |         |
                          +---------+


# 0. Introduction

  This proposal specifies a simple anonymous credential scheme based on Blind
  RSA signatures designed to fight DoS abuse against onion services. We call
  the scheme "Res tokens".

  Res tokens are issued by third-party issuance services, and are verified by
  onion services during the introduction protocol (through the INTRODUCE1
  cell).

  While Res tokens are used for denial of service protection in this proposal,
  we demonstrate how they can have application in other Tor areas as well, like
  improving the IP reputation of Tor exit nodes.

# 1. Motivation

  Denial of service attacks against onion services have been explored in the
  past and various defenses have been proposed:
  - Tor proposal #305 specifies network-level rate-limiting mechanisms.
  - Onionbalance allows operators to scale their onions horizontally.
  - Tor proposal #327 increases the attacker's computational requirements (not
    implemented yet).

  While the above proposals in tandem should provide reasonable protection
  against many DoS attackers, they fundamentally work by reducing the asymmetry
  between the onion service and the attacker. This won't work if the attacker
  is extremely powerful, because the asymmetry is already huge and cutting it
  down does not help.

  We believe that a proposal based on cryptographic guarantees -- like Res
  tokens -- can offer protection against even extremely strong attackers.

# 2. Overview

  In this proposal we introduce an anonymous credential scheme -- Res tokens --
  that is well fitted for protecting onion services against DoS attacks. We
  also introduce a system where clients can acquire such anonymous credentials
  from various types of Token Issuers and then redeem them at the onion service
  to gain access even when under DoS conditions.

  In section [TOKEN_DESIGN], we list our requirements from an anonymous
  credential scheme and provide a high-level overview of how the Res token
  scheme works.

  In section [PROTOCOL_SPEC], we specify the token issuance and redemption
  protocols, as well as the mathematical operations that need to be conducted
  for these to work.

  In section [TOKEN_ISSUERS], we provide a few examples and guidelines for
  various token issuer services that could exist.

  In section [DISCUSSION], we provide more use cases for Res tokens as well as
  future improvements we can conduct to the scheme.

# 3. Design [TOKEN_DESIGN]

  In this section we will go over the high-level design of the system, and on
  the next section we will delve into the lower-level details of the protocol.

## 3.1. Anonymous credentials

  Anonymous credentials or tokens are cryptographic identifiers that allow
  their bearer to maintain an identity while also preserving anonymity.

  Clients can acquire a token in a variety of ways (e.g. registering on a
  third-party service, solving a CAPTCHA, completing a PoW puzzle) and then
  redeem it at the onion service proving this way that work was done, but
  without linking the act of token acquisition with the act of token
  redemption.
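The unlinkability between acquisition and redemption comes from the blind-signature step. A toy Blind RSA round trip (textbook parameters far too small for real use, purely illustrative of the scheme named in the introduction) looks like this:

```python
# Toy RSA parameters -- insecure, for illustration only.
p, q = 61, 53
n = p * q        # issuer's modulus (3233)
e = 17           # issuer's public exponent
d = 413          # issuer's private exponent: e*d == 1 (mod lcm(p-1, q-1))

token = 42       # the client's token (normally a hash of a random value)
r = 99           # client's random blinding factor, gcd(r, n) == 1

# 1. Issuance: the client blinds the token; the issuer signs the blinded
#    value without ever seeing the token itself.
blinded = (token * pow(r, e, n)) % n
blind_sig = pow(blinded, d, n)          # computed by the issuer

# 2. The client unblinds, obtaining a valid signature the issuer has
#    never seen -- this is what breaks the issuance/redemption link.
sig = (blind_sig * pow(r, -1, n)) % n   # pow(r, -1, n): Python 3.8+

# 3. Redemption: the onion service verifies with only the issuer's
#    public key (e, n) -- the public verifiability property above.
assert pow(sig, e, n) == token
```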

## 3.2. Anonymous credential properties

  The anonymous credential literature is vast and there are dozens of
  credential schemes with different properties [REF_TOKEN_ZOO]. In this section
  we detail the properties we care about for this use case:

  - Public Verifiability: Because of the distributed trust properties of the
  Tor network, we need anonymous credentials that can be issued by one
  party (the token issuer) and verified by a different party (in this case
  the onion service).

  - Perfect unlinkability: Unlinkability between token issuance and token

Re: [tor-dev] Trouble with onionperf visualize and S61 performance experiments

2020-11-23 Thread George Kadianakis
Karsten Loesing  writes:

> On 2020-11-03 17:16, Karsten Loesing wrote:
>> On 2020-11-03 15:01, George Kadianakis wrote:
>>> Hello Karsten,
>> 
>> Hi George!
>
> Hi again!
>
>>> hope you are doing well!
>>>
>>> I've been working on the S61 performance experiments [0] and I would 
>>> appreciate
>>> some help with onionperf.
>>>
>>> I have done various onionperf measurements using something the following 
>>> command:
>>>   $ onionperf measure -i --tgen ~/tgen/build/src/tgen --tor 
>>> ~/onionperf/tor/src/app/tor --drop-guards 10
>>>
>>> I put each of the measurements on a different directory and now I want
>>> to analyze them and derive the CDF-TTFB graphs etc. I attempted doing
>>> that using the following calls:
>>>
>>>  $ onionperf analyze --tgen ./tgen-client/onionperf.tgen.log --torctl 
>>> ./tor-client/onionperf.torctl.log
>>>  $ onionperf visualize --data onionperf.analysis.json.xz "test"
>>>
>>> Unfortunately, the 'visualize' call can fail for the attached 
>>> 'onionperf-mbps.json.xz':
>>>
>>>   $ onionperf visualize --data onionperf.analysis.json.xz "Test 
>>> Measurements"
>>>   2020-11-03 15:51:31 1604411491.540736 [onionperf] [INFO] loading analysis 
>>> results from /user/tmp/onionperf/analysis/onionperf.analysis.json.xz
>>>   2020-11-03 15:51:31 1604411491.577864 [onionperf] [INFO] done!
>>>   2020-11-03 15:51:31 1604411491.586845 [onionperf] [INFO] NumExpr 
>>> defaulting to 8 threads.
>>>   
>>> /user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/onionperf/visualization.py:251:
>>>  UserWarning: Attempting to set identical left == right == -1e-06 results 
>>> in singular transformations; automatically expanding.
>>>   
>>> /user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/onionperf/visualization.py:251:
>>>  UserWarning: Attempting to set identical left == right == -1e-06 results 
>>> in singular transformations; automatically expanding.
>>>   Traceback (most recent call last):
>>> File "/user/.local/bin/onionperf", line 4, in 
>>>   __import__('pkg_resources').run_script('OnionPerf==0.8', 'onionperf')
>>> File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 
>>> 650, in run_script
>>>   self.require(requires)[0].run_script(script_name, ns)
>>> File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 
>>> 1453, in run_script
>>>   exec(script_code, namespace, namespace)
>>> File 
>>> "/user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/EGG-INFO/scripts/onionperf",
>>>  line 622, in 
>>> File 
>>> "/user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/EGG-INFO/scripts/onionperf",
>>>  line 382, in main
>>> File 
>>> "/user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/EGG-INFO/scripts/onionperf",
>>>  line 522, in visualize
>>> File 
>>> "/user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/onionperf/visualization.py",
>>>  line 48, in plot_all
>>> File 
>>> "/user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/onionperf/visualization.py",
>>>  line 205, in __plot_throughput_ecdf
>>> File 
>>> "/user/.local/lib/python3.8/site-packages/OnionPerf-0.8-py3.8.egg/onionperf/visualization.py",
>>>  line 235, in __draw_ecdf
>>> File "/usr/lib/python3/dist-packages/pandas/core/frame.py", line 5000, 
>>> in dropna
>>>   raise KeyError(list(np.compress(check, subset)))
>>>   KeyError: ['mbps']
>> 
>> Indeed, that's a bug in the visualize mode.
>> 
>> However, before it fails it writes a .csv file that tells us why: none
>> of the measurements are successful! I'm seeing lots of TOR/CANT_ATTACH
>> errors in that file. There's something wrong in your measurement setup.
>> If you fix that, you'll be able to visualize the results.
>
> Did you figure out what went wrong? Do you need help figuring that out?
>
>> (We should still fix the bug and produce a nicer error message.)
>
> I'm going to file an issue and start working on a possible fix tomorrow.
>
> All the best,
> Karsten
>

Hey hey Karsten,

yes, there was an issue with the port forwarding (or the incoming IP
addr) and tgen could not do its thing, and I didn't realize it because
no errors were exposed to this effect.

In any case, I fixed this and then onionperf worked just fine. For
example see here 
https://gitlab.torproject.org/tpo/core/tor/-/issues/40157#note_2714605

So no worries about this, it's all good on this front.

Also onionperf has been performing just fine in general for the purposes
of #40157 so far.

Cheers!
(and welcome back (?))

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits

2020-09-22 Thread George Kadianakis
George Kadianakis  writes:

> tevador  writes:
>
>> Hi all,
>>

Hello,

I have pushed another update to the PoW proposal here:
  https://github.com/asn-d6/torspec/tree/pow-over-intro
I also (finally) merged it upstream to torspec as proposal #327:
  
https://github.com/torproject/torspec/blob/master/proposals/327-pow-over-intro.txt

The most important improvements are:
- Add tevador as an author.
- Update PoW algorithms based on tevador's Equix feedback.
- Update effort estimation algorithm based on tevador's simulation.
- Include hybrid attack section.
- Remove a bunch of blocker tags.

Two things I'd like to work more on:

- I'd like people to take tevador's Equix PoW function and run it on
  their boxes and post back benchmarks of how it performed. Particularly
  so if you have a GPU-enabled box, so that we can get some benchmarks
  from GPUs as well. That will help us tune the proposal even more.

  For my laptop (with an Intel i7-8550U CPU @ 1.80GHz) I got pretty
  accurate benchmarks (compared to 
https://github.com/tevador/equix#performance):
  $ ./equix-bench 
 Solving nonces 0-499 (interpret: 0, hugepages: 0, threads: 1) ...
 1.91 solutions/nonce
 283.829505 solutions/sec. (1 thread)
 22810.327943 verifications/sec. (1 thread)
  $ ./equix-bench --threads 16
 Solving nonces 0-499 (interpret: 0, hugepages: 0, threads: 16) ...
 1.91 solutions/nonce
 2296.585708 solutions/sec. (16 threads)
 20223.196324 verifications/sec. (1 thread)

  See how to do this here: https://github.com/tevador/equix#build

- I'd like to improve the effort estimation algorithm by dynamically adjusting
  SVC_BOTTOM_CAPACITY instead of having it as a static value. Otherwise, I
  would like to reduce the currently suggested SVC_BOTTOM_CAPACITY because I
  feel that 180 is too big. I would like to put it to 100 which is much more
  conservative.  I tried to do so while updating tevador's simulation
  accordingly, but I found out that the simulation code does not do the graphs
  itself, so I didn't make much progress here.

  tevador do you have the graphing code somewhere so that I can run the
  experiments again and see how the graphs are influenced?

Apart from that, I think the proposal is really solid. I have hence merged it
as proposal #327 to torspec and further revisions can be done on top of that
from now on.

Thanks for all the work here and I'm looking forward to further feedback!


Re: [tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits

2020-08-26 Thread George Kadianakis
tevador  writes:

> Hi all,
>

Hello tevador,

thanks so much for your work here and for the great simulation. Also for
the hybrid attack which was definitely missing from the puzzle.

I've been working on a further revision of the proposal based on your
comments. I have just one small question I would like your feedback on.

>> 3.4.3. PoW effort estimation [EFFORT_ESTIMATION]
>> {XXX: BLOCKER: Figure out if this system makes sense}
>
> I wrote a simple simulation in Python to test different ways of
> adjusting the suggested effort. The results are here:
> https://github.com/tevador/scratchpad/blob/master/tor-pow/effort_sim.md
>
> In summary, I suggest to use MIN_EFFORT = 1000 and the following
> algorithm to calculate the suggested effort:
>
> 1. Sum the effort of all valid requests that have been received since the
>last HS descriptor update. This includes all handled requests, trimmed
>requests and requests still in the queue.
> 2. Divide the sum by the max. number of requests that the service could have
>handled during that time (SVC_BOTTOM_CAPACITY * HS_UPDATE_PERIOD).
> 3. Suggested effort = max(MIN_EFFORT, result)
>
> This algorithm can both increase and reduce the suggested effort.
>

I like the above logic but I'm wondering how we can get the real
SVC_BOTTOM_CAPACITY for every scenario. In particular, the
SVC_BOTTOM_CAPACITY=180 value from 6.2.2 might have been true for
David's testing but it will not be true for every computer and every
network.

I wonder if we can adapt the above effort estimation algorithm to use an
initial SVC_BOTTOM_CAPACITY magic value for the first run (let's say
180), but then derive the real SVC_BOTTOM_CAPACITY of the host in
runtime and use that for subsequent runs of the algorithm.

Do you think this is possible?
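For reference, tevador's three-step estimation above can be sketched in a few lines of Python. This is only an illustrative sketch, not code from any implementation: the constant names follow the proposal, but HS_UPDATE_PERIOD's value and the function shape are assumptions.

```python
MIN_EFFORT = 1000
HS_UPDATE_PERIOD = 300      # seconds between descriptor updates (assumed value)
SVC_BOTTOM_CAPACITY = 180   # requests/sec the service can handle (initial magic value)

def suggested_effort(request_efforts, bottom_capacity=SVC_BOTTOM_CAPACITY):
    """request_efforts: efforts of all valid requests (handled, trimmed,
    and still queued) received since the last HS descriptor update."""
    # Step 1: sum the effort of all valid requests received.
    total_effort = sum(request_efforts)
    # Step 2: divide by the max number of requests the service could have handled.
    max_handled = bottom_capacity * HS_UPDATE_PERIOD
    # Step 3: suggested effort = max(MIN_EFFORT, result).
    return max(MIN_EFFORT, total_effort / max_handled)
```

Deriving the real bottom capacity at runtime would then amount to replacing the `bottom_capacity` default with a measured requests-per-second figure.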


Re: [tor-dev] Safe Alternative Uses of Onion Service Keys

2020-07-30 Thread George Kadianakis
Matthew Finkel  writes:

> Hello everyone,
>

Hello hello!

These are all good questions and they become more and more important as
the onionspace grows and more use cases appear.

> 
>
> For computing the blinded key, the first 32 bytes of the long-term
> secret key (LH) are multiplied with a blinding factor (h*a mod l), see
> the specification for the value of **h** [4]. This becomes LH'
> (LH-prime). The second 32 bytes of the secret key (RH) are concatenated
> with a string prefix and then the SHA3-256 digest is computed of the
> concatenated string. The first 32 bytes of the resulting digest become
> RH' (RH-prime). LH' and RH' are used as regular ed25519 secret keys for
> signing and verifying messages following EdDSA.
>

Hmm, not sure about this last sentence. Are you implying that LH' and RH' are
two different secret keys? Because I don't think that's the case. LH' and RH'
are components of the final public/private keypair.

> Tor's EdDSA signature is "R|S", R concatenated with S (the message is
> not included in the signature).
>
>
> The above process seems like a lot to ask from application developers.
> Can we make it easier for them?
>

Yes I totally agree that this procedure is too much to ask from application
developers.

> Open questions:
>
>  1) Going back to the long-term secret key, can LH and RH be used
> directly in EdDSA without reducing the security and unlinkability of
> the blinded keys?
>

In which way would we use LH and RH directly in EdDSA?

>  2) Should other use cases of the long-term keys only derive distinct
> (blinded) keys, instead of using the long-term keys directly?
>

I think that's the most important question. That is, what can we safely do with
the long-term keys (given that they are also used for this key blinding 
procedure)?

My intuition is that it's safe to use the long-term keys to sign other things
directly, because the blinding factor of the key blinding procedure does not
contain any attacker-controlled input. However, this is just an intuition from
a non-cryptographer and hence we need a proper security proof, especially given
the various complexities of the whole system
(e.g. clamping: 
https://lists.torproject.org/pipermail/tor-dev/2017-April/012204.html),
and especially if we want to do more complicated things than just signing (like
use those keys for x25519 or something).

In some way I think that exploring this problem is the first step before
deciding to use derived keys, since as you said having to use derived keys will
complicate things a lot for application developers.

>  3) If other use cases should only use derived keys, then is there an
> alternative derivation scheme for when unlinkability between derived
> keys is not needed (without reducing the security properties of the
> onion service blinded keys), and is allowing linkability
> useful/worthwhile?
>

Hm, if linkability between derived keys is _desired_, then perhaps you could
generate subsequent derived keys by iterating on top of previous derived keys,
instead of iterating on top of the long-term key like HSv3 does. This way you
can prove relations between derived public keys without leaking the long-term
key.
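The chained-derivation idea could look roughly like this. This is only an illustrative sketch of the concept, not the HSv3 blinding scheme: the function name and the "chain-derive" tweak string are made up, and real key derivation would need scalar arithmetic and a security proof.

```python
import hashlib

def next_derived_key(prev_key: bytes) -> bytes:
    # Derive key_i from key_{i-1} rather than from the long-term key, so
    # relations between successive derived keys can be demonstrated
    # without revealing the long-term key. "chain-derive" is a made-up tweak.
    return hashlib.sha3_256(b"chain-derive" + prev_key).digest()

# key_0 -> key_1 -> key_2 -> ...
root = b"\x01" * 32   # stand-in for the long-term key material
k1 = next_derived_key(root)
k2 = next_derived_key(k1)
```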

In any case, IANAC so a security proof is what we need here.

>
>  4) Is the above example derivation scheme safe if different
> applications tweak the above prefix strings in similar ways?
>

Again my intuition says that it should be OK, since tweaking the BLIND_STRING
tweaks the blind factor 'h':
  h = H(BLIND_STRING | A | s | B | N)

And because the blind factor is the output of a hash function, tweaking the
BLIND_STRING is not any different from using a blind factor with a different
time period value:
  N = "key-blind" | INT_8(period-number) | INT_8(period_length)

In any case, IANAC so a security proof is what we need here.
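As a rough illustration of why a tweaked BLIND_STRING behaves like a different N: both only change the input to the hash that produces h. This sketch only computes the hash from the formula above; it deliberately ignores the clamping and the scalar multiplication that the real blinding performs, and the dummy byte strings are placeholders.

```python
import hashlib

def blind_factor(blind_string: bytes, A: bytes, s: bytes, B: bytes, N: bytes) -> bytes:
    # h = H(BLIND_STRING | A | s | B | N) from the formula above;
    # H is SHA3-256 in the v3 spec.
    return hashlib.sha3_256(blind_string + A + s + B + N).digest()

# Placeholder inputs, only to show the effect of the tweak:
A, s, B, N = b"A" * 32, b"s" * 32, b"B" * 32, b"N" * 11
h1 = blind_factor(b"Derive temporary signing key\x00", A, s, B, N)
h2 = blind_factor(b"some other application tweak\x00", A, s, B, N)
# Different tweak strings yield independent-looking blinding factors.
```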

>  5) Should Tor simply derive one blinded key that can be used by all
> alternative applications? Is that safe?
>

If we assume that derived keys are as safe as long-term keys, then that should
be fine, as it is safe for many applications to use the same long-term ed25519
key, assuming that they don't do anything silly with it (like converting
between ed25519 and x25519 and exposing a DH oracle that might generate valid
signatures for the attacker).

In any case, IANAC so a security proof is what we need here.

---

Hope that was useful. It's as far as I can get here without spending days on it
and without going into dangerous waters.

I've seen more and more interest about hierarchical key derivation
lately, and it seems like our design is one of the popular ones (and
probably the oldest), but there are more these days:
https://mailarchive.ietf.org/arch/msg/cfrg/qDJKIMRctVvYuZBYBcACLLeS7hM/
https://forum.web3.foundation/t/key-recovery-attack-on-bip32-ed25519/44
https://github.com/satoshilabs/slips/blob/master/slip-0010.md

In general, I'd be interested in participating in 

Re: [tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits

2020-06-22 Thread George Kadianakis
Hello there,

here is another round of PoW revisions:
 https://github.com/asn-d6/torspec/tree/pow-over-intro
I'm inlining the full proposal in the end of this email.

Here is a changelog:
- Actually used tevador's EquiX scheme as our PoW scheme for now. This is still
  tentative, but I needed some ingredients to cook with so I went for it.
- Fold in David's performance measurements and use them to get some
  guesstimates on the default PoW difficulty etc.
- Enable overlapping seed system.
- Enrich the attack section of the proposal some more.
- Attempt to fix an effort estimation attack pointed out by tevador.
- Added a bunch of "BLOCKER" tags around the proposal for things that we need
  to figure out or at least have some good intuition if we want to have
  guarantees that the proposal can work before we start implementing.

Here is what needs to happen next:

- David's performance measurements have been really useful, but they open a
  bunch of questions on auxiliary overheads. We are now performing more
  experiments to confirm the performance numbers we got and make sure we are
  not overshooting. I noted these issues down as BLOCKER in the proposal.
  While doing so we also found a pretty serious bug with our scheduler that we
  are trying to fix:
 https://gitlab.torproject.org/tpo/core/tor/-/issues/40006
- Did not have time to think about the priority queue's max size. I added a
  BLOCKER about this in the [HANDLE_QUEUE] section.
- Did not have time to think about a minimum effort feature on the queue. I
  guess this also depends on the scheduler.
- Need to think more about the effort estimation logic and make sure that it
  can't backfire big time.
- Need to kill all the XXXs, TODOs and BLOCKERs.

Also, tevador let me know if you'd like me to add you as a co-author on the
proposal based on all your great feedback so far.

This is looking more and more plausible but let's wait for more data before we
seal the deal.

Thanks for all the feedback and looking forward to more!

---

Filename: xxx-pow-over-intro-v1
Title: A First Take at PoW Over Introduction Circuits
Author: George Kadianakis, Mike Perry, David Goulet
Created: 2 April 2020
Status: Draft

0. Abstract

  This proposal aims to thwart introduction flooding DoS attacks by introducing
  a dynamic Proof-Of-Work protocol that occurs over introduction circuits.

1. Motivation

  So far our attempts at limiting the impact of introduction flooding DoS
  attacks on onion services have been focused on horizontal scaling with
  Onionbalance, optimizing the CPU usage of Tor and applying congestion control
  using rate limiting. While these measures move the goalpost forward, a core
  problem with onion service DoS is that building rendezvous circuits is a
  costly procedure both for the service and for the network. For more
  information on the limitations of rate-limiting when defending against DDoS,
  see [REF_TLS_1].

  If we ever hope to have truly reachable global onion services, we need to
  make it harder for attackers to overload the service with introduction
  requests. This proposal achieves this by allowing onion services to specify
  an optional dynamic proof-of-work scheme that its clients need to participate
  in if they want to get served.

  With the right parameters, this proof-of-work scheme acts as a gatekeeper to
  block amplification attacks by attackers while letting legitimate clients
  through.

1.1. Related work

  For a similar concept, see the three internet drafts that have been proposed
  for defending against TLS-based DDoS attacks using client puzzles [REF_TLS].

1.2. Threat model [THREAT_MODEL]

1.2.1. Attacker profiles [ATTACKER_MODEL]

  This proposal is written to thwart specific attackers. A simple PoW proposal
  cannot defend against every DoS attack on the Internet, but there are
  adversary models we can defend against.

  Let's start with some adversary profiles:

  "The script-kiddie"

The script-kiddie has a single computer and pushes it to its
limits. Perhaps it also has a VPS and a pwned server. We are talking about
an attacker with total access to 10 GHz of CPU and 10 GB of RAM. We
consider the total cost for this attacker to be $0.

  "The small botnet"

The small botnet is a bunch of computers lined up to do an introduction
flooding attack. Assuming 500 medium-range computers, we are talking about
an attacker with total access to 10 THz of CPU and 10 TB of RAM. We consider
the upfront cost for this attacker to be about $400.

  "The large botnet"

The large botnet is a serious operation with many thousands of computers
organized to do this attack. Assuming 100k medium-range computers, we are
talking about an attacker with total access to 200 THz of CPU and 200 TB of
RAM. The upfront cost for this attacker is about $36k.

  We hope that this proposal can help us defend against the script-kiddie
  attac

Re: [tor-dev] Onion Client Auth on v3 descriptor via Control port

2020-06-17 Thread George Kadianakis
Miguel Jacq  writes:

> Hi George,
>
> On Wed, Jun 17, 2020 at 12:37:18PM +0300, George Kadianakis wrote:
>> 
>> Hmm, this is a bit embarassing for both of us, but if I'm not mistaken
>> ONION_CLIENT_AUTH_ADD only controls the client-side of client auth
>> credentials. This is not obvious at all by the command name, and it only
>> becomes a bit clearer by reading the control-spec.txt...
>> 
>> We added that control port command so that the browser could present a
>> UX for client authorization.
>
> Ahahahah. Riiight, thanks for that clarification. This whole time I indeed 
> thought this was a novel way for adding Client Auth for v3 onions via the 
> control port.
>
> I had been reading the rend-spec-v3 
> https://github.com/torproject/torspec/blob/master/rend-spec-v3.txt 
>
> G.2.1 'Service side' says '[XXX figure out control port command format]' and 
> I figured it just hadn't been updated to reflect the new command. I hadn't 
> even thought to read the control spec..
>
>> 
>> AFAIK there is no control port command for adding service-side client
>> auth credentials. You will need to do this using the filesystem by using
>> the '/authorized_clients/' directory as displayed by
>> the "CLIENT AUTHORIZATION" section of the manual... Or you will need to
>> implement the control port commands in tor :/
>> 
>> Sorry for the sad news here... :/
>
> Okay, thanks for all the clarification. Indeed, OnionShare uses purely 
> ephemeral onions, so the standard filesystem method won't work (unless we 
> switch to that).
>

Right. Seems like v2 supports adding client auth credentials through
the control port using the ADD_ONION command, but that's not the case
for v3...

Just a simple matter of programming as always ;)



Re: [tor-dev] Onion Client Auth on v3 descriptor via Control port

2020-06-17 Thread George Kadianakis
Miguel Jacq  writes:

> Hi,
>
> I'm one of the OnionShare developers, looking at what can be done to support 
> Client Auth with v3 onions.
>
> OnionShare depends on Stem for all its interaction setting up ephemeral 
> onions, so we need Stem to support that first.
>
> So I have been working on adding support for ONION_CLIENT_AUTH_ADD to Stem. I 
> actually have it working as far as getting a 250 OK back from the controller! 
> Nice.
>
> But I'm puzzled, because despite successfully adding the client auth, I can 
> access my onion service *without* auth in Tor Browser.
>
> 
>
> So that all looks good. But what is weird, is if I go to 
> http://gmdo3idszymnvfbuf2fm6miepearldgwbo7qfc4lsrw2kact2ka77kqd.onion/ , I 
> see my 'hello world', I never had to add any client auth to Tor Browser.
>
> What am I doing wrong? How do I make the onion auth actually be 'required' 
> since I succeeded at adding it? I was under the impression that as soon as I 
> ran ONION_CLIENT_AUTH_ADD and got a success, from that point on, client auth 
> would be *needed*.
>
> Maybe it's a problem with how I'm generating the keys? I had a bit of trouble 
> figuring out how to send the base64 encoded private key. Even so, it accepts 
> the private key, and yet it allows access without auth, which surprised me...
>
> It's probably really obvious but I've been working on this a while so I'm 
> tired :) Time to embarass myself on a public mailing list..
>
> Thanks in advance!
>

Hmm, this is a bit embarassing for both of us, but if I'm not mistaken
ONION_CLIENT_AUTH_ADD only controls the client-side of client auth
credentials. This is not obvious at all by the command name, and it only
becomes a bit clearer by reading the control-spec.txt...

We added that control port command so that the browser could present a
UX for client authorization.

AFAIK there is no control port command for adding service-side client
auth credentials. You will need to do this using the filesystem by using
the '/authorized_clients/' directory as displayed by
the "CLIENT AUTHORIZATION" section of the manual... Or you will need to
implement the control port commands in tor :/

Sorry for the sad news here... :/
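For services that can fall back to the filesystem method, writing a service-side authorization file could be sketched as below. Check the tor manual's "CLIENT AUTHORIZATION" section for the authoritative format; the unpadded-base32 key encoding and the helper name here are assumptions for illustration.

```python
import base64
import os

def write_service_side_auth(hs_dir: str, client_name: str, x25519_pubkey: bytes) -> str:
    # Drop a <name>.auth file into <HiddenServiceDir>/authorized_clients/.
    # Assumed file format: descriptor:x25519:<base32-encoded-public-key>
    auth_dir = os.path.join(hs_dir, "authorized_clients")
    os.makedirs(auth_dir, exist_ok=True)
    key_b32 = base64.b32encode(x25519_pubkey).decode().rstrip("=")
    path = os.path.join(auth_dir, client_name + ".auth")
    with open(path, "w") as f:
        f.write("descriptor:x25519:" + key_b32 + "\n")
    return path
```

Tor picks the file up on HUP or restart; for purely ephemeral (ADD_ONION) services this route does not apply, which is exactly the gap described above.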

PS: All this confusion stems from the name of this feature being "client
authorization". The fact that the name includes the string "client"
makes it confusing to specify whether functionality is client-side
or service-side... We should rename that feature, but making it
simply "authorization" is weird because then people are gonna wonder
whether onion services offer no authentication by default. Perhaps
we need to find a cooler name for this feature...



Re: [tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits

2020-06-10 Thread George Kadianakis
Hello,

after reading all the excellent feedback on this thread, I did another
revision on this proposal:
  https://github.com/asn-d6/torspec/tree/pow-over-intro
I'm inlining the full proposal in the end of this email.

Here is a changelog:
- Improve attack vector section
- Shrink nonce size on cells to 16 bytes
- Change effort definition to linear

Here is a few things I did not do and might need some help with:

- I did not decide on the PoW function. I think to do this we miss the
  scheduler number crunching from dgoulet, and also I need to understand the
  possible options a bit more. I removed most references to argon2 and replaced
  them with XXX_POW.

  Tevador, thanks a lot for your tailored work on equix. This is fantastic.  I
  have a question that I don't see addressed in your very well written
  README. In your initial email, we discuss how Equihash does not have good GPU
  resistance:
  https://lists.torproject.org/pipermail/tor-dev/2020-May/014268.html

  Since equix is using Equihash, isn't this gonna be a problem here too? I'm not
  too worried about ASIC resistance since I doubt someone is gonna build ASICs
  for this problem just yet, but script kiddies with their CS:GO graphics cards
  attacking equix is something I'm concerned about. I bet you have thought of
  this, so I'm wondering what's your take here.

  Right now I think the possible options are equix or the reduced Randomx
  (again thanks tevador) or yespower. In theory we could do all three of them
  and just support different versions; but that means more engineering.

  In any case, we are also waiting for some Tor-specific numbers from dgoulet,
  so we need those before we proceed here.

- In their initial mail, tevador points out an attack where the adversary games
  the effort estimation logic, by pausing an attack a minute before descriptor
  upload, so that the final descriptor has a very small target effort. They
  suggest using the median effort over a long period of time to fix this. Mike,
  can you check that out and see how we can adapt our logic to fix this?

- In tevador's initial mail, they point how the cell should include POW_EFFORT
  and that we should specify a "minimum effort" value instead of just inserting
  any effort in the pqueue. I can understand how this can have benefits (like
  the June discussion between tevador and yoehoduv) but I'm also concerned that
  this can make us more vulnerable to [ATTACK_BOTTOM_HALF] types of attacks, by
  completely dropping introduction requests instead of queueing them for an
  abstract future. I wouldn't be surprised if my concerns are invalid and
  harmful here. Does anyone have intuition?

- tevador suggests we use two seeds, and always accept introductions with the
  previous seed. I agree this is a good idea, and it's not as complex as I
  originally thought (I have trauma from the v3 design where we try to support
  multiple time periods at the same time). However, because this doubles the
  verification time, I decided to wait for dgoulet's scheduler numbers and until
  the PoW function is finalized to understand if we can afford the verification
  overhead.

- Solar Designer suggested we do Ethash's anti-DDoS trick to avoid instances of
  [ATTACK_TOP_HALF]. This involves wrapping the final PoW token in a fast hash
  with a really low difficulty, and having the verifier check that fast hash
  POW first. This means that a target trying to flood us with invalid PoW would
  need to do some work for every PoW instead of it being free. This is a
  decision we should take at the end after we do some number crunching and see
  where we are at in terms of verification time and attack models.
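Solar Designer's trick could be sketched like so. This is purely illustrative: the hash choice (BLAKE2b), the difficulty, and all names are placeholders, not anything from Ethash or the proposal.

```python
import hashlib

FAST_PREFIX_BITS = 8  # placeholder low difficulty for the outer fast hash

def fast_precheck(pow_token: bytes) -> bool:
    # Cheap outer check: reject tokens whose fast hash does not start with
    # FAST_PREFIX_BITS zero bits, before running the expensive PoW
    # verification. Flooding with garbage now costs the attacker roughly
    # 2^FAST_PREFIX_BITS fast hashes per token that reaches the verifier.
    digest = hashlib.blake2b(pow_token, digest_size=32).digest()
    return digest[0] == 0  # 8 leading zero bits

def verify(pow_token: bytes, expensive_verify) -> bool:
    # expensive_verify: the real (slow) PoW check, only run after the
    # cheap filter passes; invalid-looking tokens are dropped for free.
    return fast_precheck(pow_token) and expensive_verify(pow_token)
```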

Thanks a lot! :)

---

Filename: xxx-pow-over-intro-v1
Title: A First Take at PoW Over Introduction Circuits
Author: George Kadianakis, Mike Perry, David Goulet
Created: 2 April 2020
Status: Draft

0. Abstract

  This proposal aims to thwart introduction flooding DoS attacks by introducing
  a dynamic Proof-Of-Work protocol that occurs over introduction circuits.

1. Motivation

  So far our attempts at limiting the impact of introduction flooding DoS
  attacks on onion services have been focused on horizontal scaling with
  Onionbalance, optimizing the CPU usage of Tor and applying congestion control
  using rate limiting. While these measures move the goalpost forward, a core
  problem with onion service DoS is that building rendezvous circuits is a
  costly procedure both for the service and for the network. For more
  information on the limitations of rate-limiting when defending against DDoS,
  see [REF_TLS_1].

  If we ever hope to have truly reachable global onion services, we need to
  make it harder for attackers to overload the service with introduction
  requests. This proposal achieves this by allowing onion services to specify
  an optional dynamic proof-of-work scheme that its clients need to participate
  in if they want to get served.

  With the right 

Re: [tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits

2020-04-14 Thread George Kadianakis
> Hello list,
>
> hope everyone is safe and doing well!
>
> I present you an initial draft of a proposal on PoW-based defences for
> onion services under DoS.
>

Hello again,

many thanks for all the thoughtful feedback!

In the end of this email I inline a new version of the proposal
addressing various issues discussed over IRC and on this thread.
Here is a rough changelog:

- Specifying some features we might want from "v1.5".
- Adding suggested-effort to the descriptor.
- Specifying the effort() function.
- Specifying the format of the expiration time.
- Adding a protocol-specific label to the PoW computation.
- Removing the seed and output values from the INTRODUCE1 cell.
- Specifying what happens when a client does not send a PoW token when PoW is 
enabled.
- Revamping the UX section.
- Added Mike and David in the authors list.

I'm also pushing the spec to my git repo so that you can see a diff:
https://github.com/asn-d6/torspec/tree/pow-over-intro

Now before going in to the proposal here are the three big topics currently
under discussion in the thread:

== How the scheduler should work ==

   I'm not gonna touch on this, since David is writing an initial draft of a
   scheduler design soon, so let's wait for that email before we discuss this
   further.

== Should there be a target difficulty on the descriptor? ==

   I have made changes in the proposal to this effect. See sections
   [EFFORT_ESTIMATION] and [CLIENT_TIMEOUT] for more information.

   While there is no hard-target difficulty, the descriptor now contains
   a suggested difficulty that clients should aim at. The service will
   still add requests with lower effort than the suggested one in the
   priority queue. That's to make the system more resilient to attacks
   in cases where the client cannot get the latest descriptor (and hence
   latest suggested effort) due to the descriptor upload/fetch
   rate-limiting restrictions in place.

== Which PoW function should we use? ==

   The proposal suggests argon2, and Mike has been looking at Randomx. However,
   after further consideration and speaking with some people (props to Alex
   Biryukov), it seems like those two functions are not well fitted for this
   purpose, since they are memory-hard both for the client and the service. And
   since we are trying to minimize the verification overhead, so that the
   service can do hundreds of verifications per second, they don't seem like
   good fits.

   In particular, slimming down argon2 to the point that services can do
   hundreds of those verifications per second, results in an argon2 without any
   memory-hardness. And Randomx is even heavier, since it uses argon2 under the
   hood and also does extra stuff. In particular, from some preliminary
   computations, it seems like the top-half of the cell processing takes about
   2ms, whereas Randomx takes at least 17ms on my computer, which means that it
   puts an almost 1000% overhead to the top-half processing of a single
   introduction.

   This means that asymmetric PoW schemes like Equihash and its family are what we
   should be looking at next. These schemes aim to have small proof sizes, and
   be memory-hard for the prover, but lightweight for the verifier. They are
   currently used by Zcash so there is quite some literature and improvements.

   In particular, Equihash has two important parameters (n,k). These parameters
   together control the proof size (so for example, Equihash<144,5> has a 100B
   proof, and Equihash<200,9> has a 1344B proof), and the 'k' parameter
   controls the verification speed (the verifier has to do 2^k hash invocations
   to do the verification). Also see this for more details:
  https://forum.bitcoingold.org/t/our-new-equihash-equihash-btg/1512
  https://www.cryptolux.org/images/b/b9/Equihash.pdf
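   The proof sizes quoted above follow directly from the parameters: an
   Equihash<n,k> proof is 2^k solution indices of roughly n/(k+1)+1 bits each.
   A back-of-the-envelope sketch (ignoring encoding details):

```python
def equihash_proof_bytes(n: int, k: int) -> int:
    # 2^k solution indices, each n/(k+1) + 1 bits wide.
    index_bits = n // (k + 1) + 1
    return (2 ** k * index_bits) // 8

print(equihash_proof_bytes(144, 5))  # 100 bytes, matching Equihash<144,5>
print(equihash_proof_bytes(200, 9))  # 1344 bytes, matching Equihash<200,9>
```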

   The good thing here is that these parameters look good and offer good
   security. Furthermore, Equihash is used by big and friendly projects like
   Zcash.

   The negative thing is that because Equihash is widely used there is already
   ASIC hardware for it, so we would need to look at the parameters we pick and
   how ASIC-friendly they are. Furthermore, an attacker who buys Equihash ASIC
   can also use it for coin mining which makes it an easier investment.

   IMO, we should look more into Equihash and other asymmetric types of PoW, as
   well as speak with people who know Equihash well.

   Finally, our proposal has a big benefit over the blockchain use cases: it's
   much more agile. We can deploy changes to the PoW algorithm without having
   to hard-fork, and we can do this even via the consensus for maximum
   agility. This means that we should try to use this agility to our advantage.

Looking forward to more feedback!

=

And here comes the updated proposal:

Filename: xxx-pow-over-intro-v1
Title: A First Take at PoW Over Introduction Circuits
Author: George Kadianakis

Re: [tor-dev] Does a design document for the DoS subsystem exist?

2020-04-13 Thread George Kadianakis
Lennart Oldenburg  writes:

> Hi all,
>
> We are investigating how Tor protects itself against Denial-of-Service
> (DoS) attacks. So far, it has been difficult to find a comprehensive
> top-level design document for the DoS subsystem (e.g., a torspec or
> proposal) that reflects the decisions that led to the subsystem in its
> current form.
>
> Specifically, we are looking at the DoS mitigation subsystem code for
> entry guards at src/core/or/dos.{h,c} [1]. We are trying to understand
> the chosen countermeasures and how the default and current consensus
> values came to be, e.g., the decision to limit to 3 circuits per second
> after the initial burst.
>
> 1) Could you kindly point us in the right direction if any such document
> exists?
>
> 2) If it does not exist, would you mind briefly explaining how the DoS
> threshold values (such as DoSCircuitCreationMinConnections,
> DoSCircuitCreationRate, DoSCircuitCreationBurst, and
> DoSConnectionMaxConcurrentCount) were chosen?
>

Hello there,

First of all, let me say that the DoS subsystem of Tor is under active
development, so things are subject to change and mutate in various
directions (e.g. 
https://lists.torproject.org/pipermail/tor-dev/2020-April/014215.html).

However, since you are asking for resources on the currently existing
DoS subsystem, here are some things you can look at:

- Resources on general Tor rate limiting:
https://trac.torproject.org/projects/tor/ticket/24902 

https://lists.torproject.org/pipermail/tor-relays/2018-January/014357.html

- The proposal for the HS DoS subsystem:

https://github.com/torproject/torspec/blob/master/proposals/305-establish-intro-dos-defense-extention.txt

- More information on HS DoS subsystem:
  https://lists.torproject.org/pipermail/tor-dev/2019-April/013790.html
  https://lists.torproject.org/pipermail/tor-dev/2019-May/013837.html
  https://lists.torproject.org/pipermail/tor-dev/2019-July/013923.html
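For intuition, the rate/burst parameters you mention (DoSCircuitCreationRate,
DoSCircuitCreationBurst) behave like a per-address token bucket. A rough
sketch of that idea follows; the defaults mirror the consensus values discussed
above, but this is an illustration, not the actual dos.c logic:

```python
import time

class CircuitRateLimiter:
    """Token-bucket sketch of per-address circuit limiting, in the spirit of
    src/core/or/dos.c. A client can create `burst` circuits immediately, and
    is then throttled to `rate` circuits per second."""

    def __init__(self, rate=3.0, burst=90):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow_circuit(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: a real relay would apply its defense
```

With rate=3 and burst=90, a client gets 90 circuits up front and then 3 per
second, matching the "3 circuits per second after the initial burst" behavior
you describe.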

Good luck with your research and please let us know if you reach the
point where you can break or fix things! :)

Cheers!
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] [RFC] Proposal: A First Take at PoW Over Introduction Circuits

2020-04-02 Thread George Kadianakis
Hello list,

hope everyone is safe and doing well!

I present you an initial draft of a proposal on PoW-based defences for
onion services under DoS.

The proposal is not finished yet and it needs tuning and fixing. There
are many places marked with XXX and TODO around the proposal that should
be addressed.

The important part is that looking at the numbers it does seem like this
proposal can work as a concept and serve its intended purpose. The most
handwavey parts of the proposal right now are [INTRO_QUEUE] and
[POW_SECURITY] and if this thing fails in the end, it's probably gonna
be something that slipped over there. Hence, we should polish these
sections before we proceed with any sort of engineering here.

In any case, I decided to send it to the list even in premature form, so
that it can serve as a stable point of reference in subsequent
discussions. It can also be found in my git repo:
https://github.com/asn-d6/torspec/tree/pow-over-intro

Cheers and stay safe!

---

Filename: xxx-pow-over-intro-v1
Title: A First Take at PoW Over Introduction Circuits
Author: George Kadianakis
Created: 2 April 2020
Status: Draft

0. Abstract

  This proposal aims to thwart introduction flooding DoS attacks by introducing
  a dynamic Proof-Of-Work protocol that occurs over introduction circuits.

1. Motivation

  So far our attempts at limiting the impact of introduction flooding DoS
  attacks on onion services has been focused on horizontal scaling with
  Onionbalance, optimizing the CPU usage of Tor and applying congestion control
  using rate limiting. While these measures move the goalpost forward, a core
  problem with onion service DoS is that building rendezvous circuits is a
  costly procedure both for the service and for the network. If we ever hope to
  have truly reachable global onion services, we need to make it harder for
  attackers to overload the service with introduction requests.

  This proposal achieves this by allowing onion services to specify an optional
  dynamic proof-of-work scheme that its clients need to participate in if they
  want to get served.

  With the right parameters, this proof-of-work scheme acts as a gatekeeper to
  block amplification attacks by attackers while letting legitimate clients
  through.

1.1. Threat model [THREAT_MODEL]

1.1.1. Attacker profiles [ATTACKER_MODEL]

  This proposal is written to thwart specific attackers. A simple PoW proposal
  cannot defend against each and every DoS attack on the Internet, but there are
  adversary models we can defend against.

  Let's start with some adversary profiles:

  "The script-kiddie"

The script-kiddie has a single computer and pushes it to its
limits. Perhaps they also have a VPS and a pwned server. We are talking about
an attacker with total access to 10 GHz of CPU and 10 GB of RAM. We
consider the total cost for this attacker to be $0.

  "The small botnet"

The small botnet is a bunch of computers lined up to do an introduction
flooding attack. Assuming 500 medium-range computers, we are talking about
an attacker with total access to 10 THz of CPU and 10 TB of RAM. We consider
the upfront cost for this attacker to be about $400.

  "The large botnet"

The large botnet is a serious operation with many thousands of computers
organized to do this attack. Assuming 100k medium-range computers, we are
talking about an attacker with total access to 200 THz of CPU and 200 TB of
RAM. The upfront cost for this attacker is about $36k.

  We hope that this proposal can help us defend against the script-kiddie
  attacker and small botnets. To defend against a large botnet we would need
  more tools at our disposal (see [FUTURE_WORK]).

  {XXX: Do the above make sense? What other attackers do we care about? What
other metrics do we care about? Network speed? I got the botnet costs
from here [REF_BOTNET] Back up our claims of defence.}

1.1.2. User profiles [USER_MODEL]

  We have attackers and we have users. Here are a few user profiles:

  "The standard web user"

This is a standard laptop/desktop user who is trying to browse the
web. They don't know how these defences work and they don't care to
configure or tweak them. They are gonna use the default values and if the
site doesn't load, they are gonna close their browser and be sad at Tor.
They run a 2 GHz computer with 4 GB of RAM.

  "The motivated user"

This is a user that really wants to reach their destination. They don't
care about the journey; they just want to get there. They know what's going
on; they are willing to tweak the default values and make their computer do
expensive multi-minute PoW computations to get where they want to be.

  "The mobile user"

This is a motivated user on a mobile phone. Even though they want to read the
news article, they don't have much leeway on stressing their machi

Re: [tor-dev] Improving onion service availability during DoS using anonymous credentials

2020-03-30 Thread George Kadianakis
George Kadianakis  writes:

> Hello list,
>
> there has been lots of discussions about improving onion service availability
> under DoS conditions. Many approaches have been proposed [OOO] but only a few
> have been tried and even fewer show any real improvements to the availability
> of the service.
>
> An approach that we've been considering is the use of anonymous credentials as
> a way to prioritize good clients over bad clients. The idea is that the 
> service
> gives tokens to clients it believes to be good, and prioritizes clients with
> tokens over clients without tokens whenever possible. This is a post to start 
> a
> discussion of how such approaches could work and whether they are worth
> pursuing further.
>
> == Preliminaries ==
>
> === When should the access control take place? ===
>
> Doing DoS defenses with anon credentials is all about enforcing access control
> at the right point of the protocol so that the amplification factor of evil
> clients gets cut as early as possible.
>
> Very roughly the phases of the onion service protocol are: descriptor fetch
> phase, intro phase, rendezvous phase. Let's see how those look like for the
> purposes of access control:
>
> - Doing the access control during the descriptor fetch stage is something 
> worth
>   considering because it's the first phase of the protocol and hence the
>   earliest and best place to soak up any damage from evil clients. There is
>   already a form of optional access control implemented here called "client
>   authorization" and it's worth thinking of what's lacking to make it useful
>   against DoS attackers. I'm gonna address this in section [CLIENTAUTH].
>
> - Doing the access control during the introduction phase is another fruitful
>   approach. Blocking bad clients during introduction means that they don't get 
> to
>   force the service to create a costly rendezvous circuit, and since services
>   have a long-term circuit open towards the intro points it makes it easier 
> for
>   services to pass access control related data to the intro point. This is
>   actually the approach we are gonna be talking most about in this post.
>
> - Finally, doing the access control during the rendezvous phase is way too 
> late
>   since by that time the onion service has already spent lots of resources
>   catering the evil client, so let's ignore that.
>
> === Entities of an anonymous credential system ===
>
> Anonymous credential systems traditionally have three entities that concern 
> us:
>
>   - The Issuer:   the entity who issues the credentials/tokens
>   - The Prover:   the entity who collects tokens and uses them to get access
>   - The Verifier: the entity who verifies that tokens are legit and
>                   grants/restricts access
>
> In the world of onion services, the Issuer is naturally the onion service, and
> the Prover is the onion service client. The Verifier could either be
> the onion service itself or its introduction points. We will see below how 
> this
> could work and the relevant tradeoffs.
>
>  +--------+      +------------+       +--------------------+
>  | Client |<---->|Intro point |<----->|Onion service       |
>  |(Prover)|      |(Verifier?) |       |(Issuer)(Verifier?) |
>  +--------+      +------------+       +--------------------+
>
>
> === How do tokens get around? ===
>
> A main question here is "How do good clients end up with tokens?". For the
> purposes of this post, we will assume that clients get these tokens in an out
> of band fashion. For example, a journalist can give tokens to her sources over
> Signal so they can use them with Securedrop. Or a forum operator can give
> tokens to old-time members of the forum to be used during a DoS.
>
> A natural chicken-and-egg problem occurs here since how is an onion service
> supposed to give tokens to its users if it's unreachable because of a DoS? We
> realize this is a big problem and we are not sure exactly how to solve it. 
> This
> problem naturally limits the use of anonymous credential solutions, and sorta
> makes them a second-layer of defense since it assumes a first-layer of defense
> that allows operators to pass tokens to the good people. A first-layer 
> approach
> here could perhaps look like PrivacyPass where users get tokens by solving
> CAPTCHAs.
>
> == Anonymous credentials ==
>
> By surveying the anonymous credential literature we have found various types 
> of
> anonymous credential schemes that are relevant for us:
>
> - Discrete-logarithm-based credentials based on blind signatures:
>
> This is a class of 

[tor-dev] Improving onion service availability during DoS using anonymous credentials

2020-03-23 Thread George Kadianakis
Hello list,

there has been lots of discussions about improving onion service availability
under DoS conditions. Many approaches have been proposed [OOO] but only a few
have been tried and even fewer show any real improvements to the availability
of the service.

An approach that we've been considering is the use of anonymous credentials as
a way to prioritize good clients over bad clients. The idea is that the service
gives tokens to clients it believes to be good, and prioritizes clients with
tokens over clients without tokens whenever possible. This is a post to start a
discussion of how such approaches could work and whether they are worth
pursuing further.

== Preliminaries ==

=== When should the access control take place? ===

Doing DoS defenses with anon credentials is all about enforcing access control
at the right point of the protocol so that the amplification factor of evil
clients gets cut as early as possible.

Very roughly the phases of the onion service protocol are: descriptor fetch
phase, intro phase, rendezvous phase. Let's see what those look like for the
purposes of access control:

- Doing the access control during the descriptor fetch stage is something worth
  considering because it's the first phase of the protocol and hence the
  earliest and best place to soak up any damage from evil clients. There is
  already a form of optional access control implemented here called "client
  authorization" and it's worth thinking of what's lacking to make it useful
  against DoS attackers. I'm gonna address this in section [CLIENTAUTH].

- Doing the access control during the introduction phase is another fruitful
  approach. Blocking bad clients during introduction means that they don't get to
  force the service to create a costly rendezvous circuit, and since services
  have a long-term circuit open towards the intro points it makes it easier for
  services to pass access control related data to the intro point. This is
  actually the approach we are gonna be talking most about in this post.

- Finally, doing the access control during the rendezvous phase is way too late
  since by that time the onion service has already spent lots of resources
  catering the evil client, so let's ignore that.

=== Entities of an anonymous credential system ===

Anonymous credential systems traditionally have three entities that concern us:

  - The Issuer:   the entity who issues the credentials/tokens
  - The Prover:   the entity who collects tokens and uses them to get access
  - The Verifier: the entity who verifies that tokens are legit and
                  grants/restricts access

In the world of onion services, the Issuer is naturally the onion service, and
the Prover is the onion service client. The Verifier could either be
the onion service itself or its introduction points. We will see below how this
could work and the relevant tradeoffs.

 +--------+      +------------+       +--------------------+
 | Client |<---->|Intro point |<----->|Onion service       |
 |(Prover)|      |(Verifier?) |       |(Issuer)(Verifier?) |
 +--------+      +------------+       +--------------------+
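A toy blind-signature token flow makes this Issuer/Prover/Verifier split
concrete: the Issuer signs a token without seeing it, and any Verifier holding
the public key (e.g. an intro point) can check it. This sketch is RSA-based for
brevity and uses insecure toy parameters; it is an illustration of the flow,
not any scheme Tor would actually deploy:

```python
# Toy RSA blind signature (insecure parameters, illustration only).
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))       # Issuer's secret signing key

token = 123456                           # Prover's token (e.g. a random nonce)
r = 3                                    # Prover's blinding factor, coprime to n
blinded = (token * pow(r, e, n)) % n     # Prover blinds the token
blind_sig = pow(blinded, d, n)           # Issuer signs without seeing the token
sig = (blind_sig * pow(r, -1, n)) % n    # Prover unblinds the signature
assert pow(sig, e, n) == token % n       # Verifier accepts (token, sig)
```

The point is that the Issuer never learns `token`, so later presentations of
(token, sig) to a Verifier cannot be linked back to the issuance.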


=== How do tokens get around? ===

A main question here is "How do good clients end up with tokens?". For the
purposes of this post, we will assume that clients get these tokens in an out
of band fashion. For example, a journalist can give tokens to her sources over
Signal so they can use them with Securedrop. Or a forum operator can give
tokens to old-time members of the forum to be used during a DoS.

A natural chicken-and-egg problem occurs here since how is an onion service
supposed to give tokens to its users if it's unreachable because of a DoS? We
realize this is a big problem and we are not sure exactly how to solve it. This
problem naturally limits the use of anonymous credential solutions, and sorta
makes them a second-layer of defense since it assumes a first-layer of defense
that allows operators to pass tokens to the good people. A first-layer approach
here could perhaps look like PrivacyPass where users get tokens by solving
CAPTCHAs.

== Anonymous credentials ==

By surveying the anonymous credential literature we have found various types of
anonymous credential schemes that are relevant for us:

- Discrete-logarithm-based credentials based on blind signatures:

This is a class of anon credential schemes that allow us to separate the
verifier from the issuer. In particular this means that we can have the
service issue the tokens, but the introduction point being the verifier.

They are usually based on blind signatures like in the case of Microsoft's
U-Prove system [UUU].

- Discrete-logarithm-based credentials based on OPRF:

Another approach here is to use OPRF constructions based on the discrete
logarithm problem to create an anonymous credential scheme like in the case
of PrivacyPass [PPP]. The 

Re: [tor-dev] Request for onionbalance v3 pre-alpha testing

2020-03-03 Thread George Kadianakis
George Kadianakis  writes:

> George Kadianakis  writes:
>
>> Hello list,
>>
>> we've been developing Onionbalance v3 for the past months, and I'm
>> pretty hyped to say that the project has reached a stability point that
>> could benefit from some initial testing by curious and adventurous
>> developers and users.
>>
>
> Hello people,
>
> I haven't received much testing for onionbalance v3 yet, so I'm
> shamelessly bumping this thread in hopes for more activity. I bet many
> people would love to test this but they don't know it exists so perhaps
> this works.
>
> Also, I just uploaded this guide to my github so that I can dynamically
> update it if bugs are found by testers without having to post errata to
> a mailing list: 
> https://github.com/asn-d6/onionbalance/blob/master/docs/alpha-testing-v3.txt
>
> Thanks!

Hello again,

you can now find an even better guide for setting up OnionBalance V3 here:

https://onionbalance-v3.readthedocs.io/en/latest/v3/tutorial-v3.html#tutorial-v3

I'm currently working on packages and CI for it.

Let me know if you find any bugs or issues :)


Re: [tor-dev] CVE-2020-8516 Hidden Service deanonymization

2020-02-06 Thread George Kadianakis
David Goulet  writes:

> On 04 Feb (19:03:38), juanjo wrote:
>
> Greetings!
>
>> Since no one is posting it here and talking about it, I will post it.
>> 
>> https://nvd.nist.gov/vuln/detail/CVE-2020-8516
>> 
>> The guy: 
>> http://www.hackerfactor.com/blog/index.php?/archives/868-Deanonymizing-Tor-Circuits.html
>> 
>> Is this real?
>> 
>> Are we actually not verifying if the IP of the Rend is a node in the Tor
>> network?
>
> We (network team) actually don't think this is a bug but it is actually done
> on purpose for specific reasons. Please see asn's answer on
> https://bugs.torproject.org/33129 that explains why that is.
>
> Onto the bigger issue at hand that the post explains. I'm going to extract the
> relevant quote that this post is all about:
>
> Remember: the guard rarely changes but the other two hops change often.
> If he can repeatedly map out my circuit's last node, then he can build a
> large exclusion list. If he can exclude everything else, then he can find
> my guard node. And if he can't exclude everything, then he can probably
> whittle it down to a handful of possible guard nodes.
>
> That is indeed a known attack. One can build a set of relays out of the 3rd
> node (the last one before connecting to the rendezvous point) selected by the
> service: by doing enough requests to the service, you can end up with a very
> large set of relays that can _not_ be your Guard, due to how path selection
> works as explained in the blog post.
>

For what it's worth, I'm glad this discussion has been restarted because
we did lots of research work in 2018 about this sort of attack, but we
were kinda drowned in the various tradeoffs and ended up not doing much
after releasing the vanguard tool.

For people who are following from home and would like to help out here
is some reading materials:
   https://lists.torproject.org/pipermail/tor-dev/2018-April/013070.html
   https://lists.torproject.org/pipermail/tor-dev/2018-May/013162.html
   https://trac.torproject.org/projects/tor/ticket/25754

Basically, from what I remember, to defend against such attacks we
either need to change our path selection logic (#24487), or abandon the
path restrictions that cause infoleaks (big thread above), or use two
guards (prop#291 plus big thread above). Each of these options has its
own tradeoffs and we need to analyze them again. If someone could do a
summary that would be great to get this started again...

For now, if you are afraid of such attacks, you should use and love vanguards!

Thanks a lot! :-)


Re: [tor-dev] Request for onionbalance v3 pre-alpha testing

2020-02-06 Thread George Kadianakis
George Kadianakis  writes:

> Hello list,
>
> we've been developing Onionbalance v3 for the past months, and I'm
> pretty hyped to say that the project has reached a stability point that
> could benefit from some initial testing by curious and adventurous
> developers and users.
>

Hello people,

I haven't received much testing for onionbalance v3 yet, so I'm
shamelessly bumping this thread in hopes for more activity. I bet many
people would love to test this but they don't know it exists so perhaps
this works.

Also, I just uploaded this guide to my github so that I can dynamically
update it if bugs are found by testers without having to post errata to
a mailing list: 
https://github.com/asn-d6/onionbalance/blob/master/docs/alpha-testing-v3.txt

Thanks!


[tor-dev] Request for onionbalance v3 pre-alpha testing

2020-01-31 Thread George Kadianakis
Hello list,

we've been developing Onionbalance v3 for the past months, and I'm
pretty hyped to say that the project has reached a stability point that
could benefit from some initial testing by curious and adventurous
developers and users.

The project is not yet ready for proper use in actual production environments
(more on this later), but my hope is that this testing will reveal bugs that I
can't catch in my test environment and receive user feedback that will speed up
the overall process and allow a faster and more stable initial release.

This email features a pretty complicated pseudo-guide that, if followed to
completion, will hopefully result in a functional onionbalance setup.

As I said the guide is complicated and in ugly text format, so I suggest you
put on some calming music (perhaps some Bill Evans or some Com Truise or some
Tor) and go slowly. Trying to speed run this will result in disappointment and
perhaps even partial or total loss of faith in humanity.

== So what's the current status of Onionbalance v3? ==

The project lives as a fork of Donncha's original Onionbalance project in this
repository: https://github.com/asn-d6/onionbalance

The current status is that during my testing I have onionbalanced a real v3
service with two backend instances successfully for a few days with no obvious
reachability issues. It also works on a private tor network (using Chutney) for
hours with no issues. All the above happened on Debian stable and testing.

Also, here are some features that are currently missing but will be
included in the initial launch:
- This new onionbalance will support both v2 and v3 onion services, although
  currently v2 support is botched and only v3 will work for the purposes of
  this testing. If you can quickly revive v2 support patches are welcome!
- Any sort of documentation about v3 mode (this post is all the documentation 
there is for now!)
- Missing streamlined setup process (installation scripts, packages or 
dependency tracking)
- Code documentation and unittests missing
- There is no daemon mode. You will have to run this in screen for now
- Fun bugs included!

Furthermore, v2 and v3 support will not have feature parity at the time of the
initial launch, and we will have to incrementally build these features:
- Cool v2 features like DISTINCT_DESCRIPTORS are not yet implemented so v3
  cannot load balance as deeply as v2 yet (patches welcome!)
- There is no way to transfer your existing v3 onion service to onionbalance 
(patches welcome!)
- There is no torrc generation for instances as in v2 (patches welcome!)
- It will be possible for clients and HSDirs to figure out whether a descriptor
  comes from an onionbalance service vs a regular v3 onion service. Making them
  indistinguishable is possible but requires more engineering and will be left
  as a TODO for now (see #31857)

Finally, there is no guarantee that configs generated with this pre-alpha
version of Onionbalance will be forward compatible when we actually launch the
project. This means that if you setup onionbalance v3 right now, you might need
to re-set it up in a month or so when the actual release happens.

tl;dr This is a call for experimental testing and it's meant to help the
developers and accelerate development and not to be used in anything remotely
serious or close to production. The actual release for operators and users is
coming in a month or so.

== Before we get started... ==

OK so let's assume that you still want to help out!

Given that this is a pre-alpha test I'm gonna assume that you meet the
following prerequisites, otherwise you are gonna have an adventure:

[ ] Comfortable with git
[ ] Familiar with v2 onionbalance (at least in terms of terminology and how it 
works)
[ ] Familiar with setting up and configuring (v3) onion services
[ ] Familiar with Linux (or in the mood to find weird bugs in other platforms)
[ ] Familiar with compiling C projects and chasing their dependencies
[ ] Familiar with running Python projects and chasing their dependencies
[ ] Patience to push through various import error messages that might appear

The above are not mandatory, but if you don't check all the boxes you might
encounter various errors and roadblocks that will frustrate you. Actually... even
if you do check all these boxes you might encounter various errors that might
frustrate you.

== Some theory ==

In any case, if you want to learn more about the architecture of Onionbalance, 
see:
   https://blog.torproject.org/cooking-onions-finding-onionbalance

As part of this guide, you might hear me use the term "frontend", which is what
we call the master onionbalance onion service instance (whose address clients
type into their browser), and the term "backend", which refers to the load
balancing instances that do the actual introduction/rendezvous work for the
frontend. So for
example, in the visual below, Alice fetches the frontend's descriptor but does
the introduction/rendezvous with the backends:


Re: [tor-dev] HSv3 descriptor work in stem

2019-12-04 Thread George Kadianakis
Hello Damian,

I reported a bug report here:
https://trac.torproject.org/projects/tor/ticket/31823#comment:19

I just reopened the old trac ticket but I think this is suboptimal.

Would you prefer me to open new tickets in the future, or maybe open an
issue on Github? I can do whatever is convenient for you!

Thanks for all the code! So far it works great!


Re: [tor-dev] Raising exceptions in add_event_listener() threads (was Re: HSv3 descriptor work in stem)

2019-12-03 Thread George Kadianakis
Damian Johnson  writes:

> Thanks George, this is a great question! I've expanded our tutorial to
> hopefully cover this a bit better...
>
> https://stem.torproject.org/tutorials/tortoise_and_the_hare.html#advanced-listeners
>

Thanks both for this information! It was very useful!
I basically followed the tutorial and it now works just fine!


[tor-dev] Onion DoS: Killing rendezvous circuits over the application layer

2019-12-02 Thread George Kadianakis
Greetings!

This is another thread [0] about onion service denial-of-service attacks.

It has long been suggested that onion service operators should be given the
option to kill spammy rendezvous circuits at will if they feel they are causing
too much damage.

Right now this is possible using the HiddenServiceExportCircuitID torrc option
(introduced in #4700) and then using the CLOSECIRCUIT control port command to
close circuits.
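A sketch of what that looks like from the operator side, speaking the control
protocol directly (command framing per the control spec; authentication is
elided for brevity and would be required in any real setup):

```python
import socket

def closecircuit_command(circ_id):
    """Build the CLOSECIRCUIT line tor's control port expects."""
    return f"CLOSECIRCUIT {circ_id}\r\n".encode()

def close_spammy_circuit(circ_id, host="127.0.0.1", port=9051):
    """Connect to tor's ControlPort and close one circuit by id.
    Assumes an open (unauthenticated) control port purely for illustration."""
    with socket.create_connection((host, port)) as s:
        s.sendall(b"AUTHENTICATE\r\n")
        s.recv(1024)                              # expect "250 OK"
        s.sendall(closecircuit_command(circ_id))
        return s.recv(1024).startswith(b"250")    # True if tor accepted it
```

The circuit id here is the one exported via HiddenServiceExportCircuitID, which
is what lets the application layer know which circuit to kill.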

Unfortunately, we have recently got reports that this technique is not viable
for busy onion services under DoS because their control port gets overwhelmed
by (useful for them) events, and it's basically rendered useless to the point
that any CLOSECIRCUIT command takes several seconds to become effective.

For this reason, multiple onion operators [1] have resorted to using the
actual HTTP protocol as a direct channel of communication to the Tor
daemon to request circuit shutdowns. This works by embedding a special
string (or HTTP error code) to the HTTP responses from nginx to the Tor
daemon and adding special custom code to the Tor daemon to close
circuits that carry this string. This seems to work well enough for
people so far.

This is a thread to discuss this approach and other alternatives since it seems
a useful tool against application-layer onion service denial of service attacks.

Let me go through the positives and negatives of actually merging this
defence upstream to little-t-tor:

---

Positives:

1) This is a solid defense that actually helps people and has been reported as
   a positive countermeasure in an area that has been hard to find concrete
   defences (also see [0]).

2) Seems like more and more people are doing this already in a custom ad-hoc
   fashion, so merging this upstream will at least give them a secure way of
   doing it (instead of writing custom C code).

3) It's actually a pretty simple patch in terms of tech-debt and maintenance.

4) The more we address DoS vectors like this one, the less incentive will exist
   for DoS actors to exist. Effectively improving the long-term health of the
   network.

Negatives:

a) It's a dirty hotfix that blends the networking layers and might be annoying
   to maintain in the long-term.

b) It only works for HTTP (and without SSL?).

---

For me, point (1) is extremely important, since we've been struggling with
helping onion services that are getting DoSsed and this feature offers a solid
defense against practical attacks.

However, IMO the right way to do this feature, would be to improve the control
port code and design so that it doesn't get so overwhelmed by multiple
events. That said, I'm not sure exactly what kind of changes we would have to
do to the control port to actually make it a viable option, and it seems to me
like a pretty big project that serves as a medium-term to long-term solution
(which we have no resources to pursue right now), whereas the hack of this
thread is more of a short-term solution.

I'm looking forward to constructive feedback here, since this seems like a
controversial feature that users really need.

Thanks! :)

[0]: serving as a continuation of previous classics such as:
 https://lists.torproject.org/pipermail/tor-dev/2019-June/013875.html
 https://lists.torproject.org/pipermail/tor-dev/2019-July/013923.html
 etc.

[1]: 
http://www.hackerfactor.com/blog/index.php?/archives/804-A-New-Tor-Attack.html
 https://trac.torproject.org/projects/tor/ticket/32511
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Raising exceptions in add_event_listener() threads (was Re: HSv3 descriptor work in stem)

2019-11-27 Thread George Kadianakis
Hello Damian (and list),

here is another question about an issue I have encountered while
developing onionbalance v3.

In particular, I'm fetching HS descriptors using HSFETCH and then adding
an add_event_listener() event to a function that does the descriptor
parsing and handling as follows:

controller.add_event_listener(handle_new_desc_content_event, 
EventType.HS_DESC_CONTENT)

The problem is that the handle_new_desc_content_event() callback has
grown to a non-trivial size and complexity, since it needs to parse the
descriptor, put it in the right places, and tick off the right
checkboxes.

Since its size has increased, so has the number of bugs and errors that
appear during development. The problem is that because the callback runs
on a separate thread (?), any errors and exceptions raised in that thread
never surface to my console, and hence I don't see them. This means that
I need to do very tedious printf debugging to find the exact place and
type of error every time something happens.

What's the proper way to do debugging and development in callbacks like
that? Is there a way to make the exceptions float back to the main
thread or something? Or any other tips?
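For context, the kind of workaround I've been experimenting with (a rough
sketch only — `make_safe_listener` and the queue plumbing are my own
invention, not stem API): wrap the callback so that tracebacks raised in
the event thread are shipped back to the main thread via a queue, where
they can be printed or re-raised.

```python
import queue
import traceback

def make_safe_listener(callback, error_queue):
    # Wrap an event callback so that exceptions raised in the event
    # thread are captured as formatted tracebacks instead of vanishing.
    def wrapper(event):
        try:
            callback(event)
        except BaseException:
            error_queue.put(traceback.format_exc())
    return wrapper

# Hypothetical usage with stem (names assumed from my own code):
#
#   errors = queue.Queue()
#   controller.add_event_listener(
#       make_safe_listener(handle_new_desc_content_event, errors),
#       EventType.HS_DESC_CONTENT)
#
#   # ...then in the main loop, drain the queue to surface tracebacks:
#   while True:
#       try:
#           print(errors.get(timeout=1))
#       except queue.Empty:
#           pass
```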

Thanks! :)



Re: [tor-dev] Practracker regen in #30381

2019-11-27 Thread George Kadianakis
teor  writes:

> Hi George, David,
>
> It looks like you regenerated the whole practracker file in #30381:
> https://trac.torproject.org/projects/tor/ticket/30381
> https://github.com/torproject/tor/commit/53ac9a9a91a8f2ab45c75550456716074911e685#diff-9fd3400f062c4541d79881e199fd9e1f
>
> But we usually just add exceptions for the files that we modified.
>
> When we do a full regeneration, we lose a whole lot of warnings that
> tell us where our code quality is getting worse.
>
> Do you mind if I revert the unrelated changes?
>

No problem either! Sorry for the trouble.

FWIW, what happened there is that when I need to rebase an old dev
branch onto master (because of revisions etc.), there are almost always
multiple conflicts in practracker. Resolving these manually is very
annoying (they are many and confusing), so sometimes I have ditched
exceptions.txt completely and just regenerated the practracker exceptions
from scratch. That's what happened in this case. If someone has a tip
for this situation, it would be cool :)



Re: [tor-dev] Acceptable clock skew in tor 0.4.1

2019-11-11 Thread George Kadianakis
intrigeri  writes:

> Hi,
>
> recently, tor has become more tolerant to skewed system clocks;
> great, thanks!
>
> At Tails, we would like to take advantage of these improvements in
> order to remove as much as we can of our not-quite-safe clock fixing
> code. Our testing suggests that:
>
>  - A ±24h clock skew is now acceptable in most cases¹: tor
>bootstraps successfully.
>
>  - While with a ±48h clock skew, tor fails to bootstrap.
>
> Could someone on the network team please confirm that these empirical
> results match what the code is currently supposed to do?
>

Hello intri!

I'm not really 100% up to date with the clock skew tolerance of Tor, but
the ±24h value seems plausible since that's the range where tor
considers a consensus to be "reasonably live" [0], which is what most of
its subsystems require to work.

An unfortunate exception here is v3 onion services: v3 onion services
only tolerate skews of at most ±3 hours [1], and in most cases even
tighter than that. This is to ensure that v3 clients and services have a
recent and accurate view of the network. In theory all of Tor needs a
recent and accurate view of the network, but v3 is particularly fragile
because of the shared random value and the precise time periods.

Sorry to break the bad news, but it is what it is. In theory we could
potentially do improvements for v3 here, but this is not in the scope
right now.

Cheers!

[0]: see networkstatus_consensus_reasonably_live()
[1]: see networkstatus_get_live_consensus() which is used in the v3
 system, and basically checks if the current time is between the
 consensus valid-after and valid-until.
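To illustrate the difference between the two checks (a Python sketch; the
function names and the exact 24-hour tolerance are my assumptions about
tor's behaviour, not its actual code):

```python
from datetime import datetime, timedelta

def is_live(valid_after, valid_until, now):
    # The strict check v3 onion services need: the current time must
    # fall within the consensus [valid-after, valid-until] window.
    return valid_after <= now <= valid_until

def is_reasonably_live(valid_after, valid_until, now):
    # The looser check most other subsystems use: tolerate a consensus
    # that is stale by up to (assumed) 24 hours past valid-until.
    return valid_after <= now <= valid_until + timedelta(hours=24)
```

This would explain the observed behaviour: a ±24h skew still lands inside
the "reasonably live" window, while ±48h does not.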


> [1] In some corner cases I see weird behavior (#32438).
> And obfs4proxy is stricter than that, which I should report on Trac.
>
> Cheers,
> -- 
> intrigeri


Re: [tor-dev] HSv3 descriptor work in stem

2019-10-29 Thread George Kadianakis
George Kadianakis  writes:

> Damian Johnson  writes:
>
>> Thanks George! Yup, work on that branch is in progress:
>>
>> https://gitweb.torproject.org/user/atagar/stem.git/log/?h=hsv3
>
> Hello Damian,
>
> thanks for the reply here! I'm now back and ready to start working again
> on onionbalance/stem.
>
> What is your plan with the hsv3 branch? Should I start reviewing your
> changes already, or give you more time to do more?
>
> Thanks a lot for all the work! :)

Hello again,

I took a super quick look (particularly at the easy parts of your
changes). Thanks for all the changes!

My only feedback so far is that the python2 port commits have broken
python3 for me (particularly the ed25519 blinding implementation). In
general, the ed25519 blinding implementation is very hairy Python3
crypto code, and I don't think it will be easy to support both versions.

Would it be egregious to provide hsv3 support only for python3 users so
that we can use python3 features as we wish?

I personally plan to use HSv3 support for onionbalance and that will be
in python3, so I wouldn't mind that at all. Not sure who else is gonna
use hsv3 support in the near future.

Cheers!

PS: From now on perhaps we can use #31823 for code related discussions
(sorry for the medium mixing)


Re: [tor-dev] HSv3 descriptor work in stem

2019-10-28 Thread George Kadianakis
Damian Johnson  writes:

> Thanks George! Yup, work on that branch is in progress:
>
> https://gitweb.torproject.org/user/atagar/stem.git/log/?h=hsv3

Hello Damian,

thanks for the reply here! I'm now back and ready to start working again
on onionbalance/stem.

What is your plan with the hsv3 branch? Should I start reviewing your
changes already, or give you more time to do more?

Thanks a lot for all the work! :)


Re: [tor-dev] HSv3 descriptor work in stem

2019-10-17 Thread George Kadianakis
Damian Johnson  writes:

>>Can I use `_descriptor_content()` to do that? Or should I call
>>`_descriptor_content()` to generate the whole thing _without_ the
>>sig, and then do the signature computation on its result and
>>concatenate it after?
>
> Hi George. Yup, to create a signed descriptor we create the bulk of
> the content then append the signature. Server and extrainfo
> descriptors already do this so I suspect you can do something
> similar...
>
> https://gitweb.torproject.org/stem.git/tree/stem/descriptor/server_descriptor.py#n902
> https://gitweb.torproject.org/stem.git/tree/stem/descriptor/__init__.py#n1388
>
> Will this do the trick?
>
> PS. Sorry about the duplicate. Hit reply rather than reply-all
> forgetting that you included the list.

Thanks for the reply Damian! That was super useful!

The current state of affairs can be found here: 
https://trac.torproject.org/projects/tor/ticket/31823#comment:1
(just in case you didn't check IRC that day)

peace


Re: [tor-dev] Optimistic SOCKS Data

2019-10-10 Thread George Kadianakis
David Goulet  writes:

> On 08 Oct (19:49:34), Matthew Finkel wrote:
>> On Wed, Oct 2, 2019 at 5:46 PM Nick Mathewson  wrote:
>> >
>> > On Fri, Sep 27, 2019 at 1:35 PM Tom Ritter  wrote:
>> > >
>> > > On Mon, 5 Aug 2019 at 18:33, Tom Ritter  wrote:
>> > > >
>> > > > On Tue, 2 Jul 2019 at 09:23, Tom Ritter  wrote:
>> > > > > Or... something else?  Very interested in what David/asn think since
>> > > > > they worked on #30382 ...
>> > > >
>> > > > I never updated this thread after discussing with people on irc.
>> > > >
>> > > > So the implementation of
>> > > > SOCKS-error-code-for-an-Onion-Service-needs-auth implementation is
>> > > > done. David (if I'm summarizing correctly) felt that the SOCKS Error
>> > > > code approach may not be the best choice given our desire for
>> > > > optimistic data; but felt it was up to the Tor Browser team to decide.
>> > > >
>> > > > In the goal of something that works for 90%+ of use case today, the
>> > > > rest later, I'll propose the following:
>> > > >
>> > > > In little-t tor, detect if we're connecting to an onion site, and if
>> > > > so do not early-report SOCKS connection.
>> > > >
>> > > > Another ugly option is to early-report a successful SOCKS connection
>> > > > even for onion sites, and if we later receive an auth request, send an
>> > > > HTTP error code like 407 that we then detect over in the browser and
>> > > > use to prompt the user. I don't like this because it is considerably
>> > > > more work (I expect), horrible ugly layering violations, and I don't
>> > > > think it will work for https://onion links.
>> > >
>> > > I attached an updated proposal taking this into account, and I'd like
>> > > to request it be entered into torspec's proposals list.
>> >
>> > Okay!  This is now proposal 309.
>>
>> I went for a walk and I came to the realization that we're going about this
>> (a little bit) wrong.
>>

Thanks for the updates Matt!

>> The advantage of optimistic data is that application data is available when
>> tor sends the RELAY_BEGIN cell (therefore it is able to send a RELAY_DATA
>> cell immediately after the RELAY_BEGIN cell is sent). So, tor doesn't need
>> to reply immediately, just early enough such that the application can start
>> writing data on the connection.
>>
>> For exit connections, Tor should probably reply a success/failure
>> immediately (where failures result from impossible connection requests or
>> other early failures).
>>
>> For onion service connections, tor can reply much later. I might suggest as
>> late as successfully retrieving the onion service descriptor. Of course,
>> this will introduce a race between the application writing data and tor
>> completing the introduction and rendezvous, but this may be worth the risk.
>

So are you suggesting that we can still do SOCKS error codes? But as
David said, some of the errors we care about are after the descriptor
fetch, so how would we do those?

Also, please help me understand the race condition you refer to. I tried
to draw this in a diagram form:
  https://gist.github.com/asn-d6/55fbe7a3d746dc7e00da25d3ce90268a

IIUC, for onions the advantage of optimistic SOCKS is that we would
send DATA to the service right after finishing rendezvous, whereas right
now we need to do a round-trip with Tor Browser after finishing
rendezvous. Is that right?

If that's the case, then sending the SOCKS reply after the rendezvous
circuit is completed would be the same as the current behavior, and
hence not an optimization, right?

And sending the SOCKS reply after the introduction is completed (as
David is suggesting) would be an optimization indeed, but we lose
errors (we lose the rendezvous failed error, which can occur if the
onion service is under DoS and cannot build new circuits but can still
receive introductions).

What other problems exist here?



[tor-dev] HSv3 descriptor work in stem

2019-10-02 Thread George Kadianakis
Hello atagar,

I'm starting this thread to ask you questions about stem and the HSv3
work we've been doing over email so that we don't do it over IRC.

Here is an initial question:

   I'm working on HSv3 descriptor encoding, and I'm trying to understand
   how `_descriptor_content()` works. In particular, I want to compute the
   signature of a descriptor, but I see that `descriptor_content()`
   fills it with random bytes in all the `content()` methods I managed
   to find:

('signature', _random_crypto_blob('SIGNATURE')),

   What's the right way to compute the signature for such objects? In
   particular, I would need a method that first generates the whole
   descriptor body, and then computes the signature of that with a given
   private key.

   Can I use `_descriptor_content()` to do that? Or should I call
   `_descriptor_content()` to generate the whole thing _without_ the
   sig, and then do the signature computation on its result and
   concatenate it after?
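   In other words, the generic pattern I have in mind is roughly the
   following (a sketch with hypothetical names — `sign_fn` stands in for
   whatever ed25519 signing gets used, and the real descriptor format has
   more framing than this):

```python
import base64

def build_signed_descriptor(body_without_sig, sign_fn):
    # Generate the whole descriptor body first, sign that exact
    # serialization with the given callable, then append the
    # base64-encoded signature as the final line.
    sig = sign_fn(body_without_sig.encode("utf-8"))
    sig_b64 = base64.b64encode(sig).decode("utf-8").rstrip("=")
    return body_without_sig + "signature " + sig_b64 + "\n"
```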


Thanks! :)


[tor-dev] Exposing onion service errors to Tor Browser

2019-09-30 Thread George Kadianakis
Hello list,

we've recently been thinking about how to expose onion-service-related
errors to Tor Browser so that we can give more useful error pages to
users.  We currently return "Unable to connect" error pages for any kind
of onion service error, and I think we can do better.

This is a thread to think about the errors we want to expose, how that
should look like, and what options we should give to the users when it
happens. Relevant master tickets are #30022, #30025 and #3.

We decided (in #14389) that Tor will export these errors through the
SOCKS port, and the relevant spec is proposal 304 [0].

As part of #30090 antonela started making a table of potential
errors. I'm gonna use that in this thread and also add a few more.

Let's go:

= Client-level errors =

These are errors on the user side of things: 

=== 1) Typo error on address ===

This can be detected by Tor using the checksum, or if the address is
too long or too short.

TODO: We will need to add a new error code to prop304. Not sure if
the error code should distinguish between checksum fail or length fail.

There is no recovery here since the address is busted. The user
needs to find the right one.

=== 2) Missing Client Authorization ===

This is prop304's 'F4' error (see #30382), and it means that we
can't decrypt the descriptor because it requires client auth, but we
don't have it configured.

The recovery here is the whole point of #30237 where we make a
dialog for the user to insert their client auth credentials.

=== 3) Wrong Client Authorization ===

This is prop304's 'F5' error, and it means that the client auth
credentials configured for this onion are wrong.

The user recovery here is unclear but it might be that they need to
change their client auth credentials. IMO, we should not try to make
the perfect UX here, and we should just go with something super
simple.

= Service-level errors =

These are errors on the onion service side:

=== 4) Service Descriptor Can Not be Found ===

This is prop304's 'F0' error, and it means that we could not find
the descriptor of the service on the directory servers. This means
that the service is not up right now (or, more unlikely, that some
bug has happened somewhere).

The user recovery here is unclear. The user can try to reconnect in
case the service got up in the meanwhile, but this is not so likely
in a small period of time.

Perhaps we can give the user the option to reconnect every 10
seconds or so? Does this make sense from a UX PoV?

Again, this is equivalent to a "Remote host is down" error and we should
treat it as such.

= Network-level errors =

These are errors caused by the network (directory servers, intro points,
rendezvous points) or even the service itself. It's kinda unclear given
all the hops involved. 

=== 5) Onion Service Descriptor Is Invalid ===

This is prop304's 'F1' error and it means that we got a descriptor
back from the directory but it's corrupted.

This is very unlikely to happen since directory servers do not keep
corrupted descriptors, so it usually means that some bug happened
somewhere (or that the directory is bad or confused).

In terms of recovery and error page, this is kinda an
"Oops. Internal error." situation where this is rare and weird and
hence we don't know what's the best recovery option. We can give the
option to reconnect but it's likely not gonna help much.

Again this should never really appear, so let's not stress too much
over it.

=== 6) Onion Service Introduction Failed ===

This is prop304's 'F2' error and it means that for some reason the
introduction did not complete. This could be because the onion
service is not up anymore, or it could be because the network is
screwed in some way (e.g. the service is DoSed).

The recovery here might be some 'reconnect' button which could be
helpful in case of a DoS situation, but it would not help much if
the service is not up anymore.

=== 7) Onion Service Rendezvous Failed ===

This is prop304's 'F3' error and it means that the rendezvous did
not complete. This usually means that the service is having a bad
time, and is either DoSed or it generally cannot cope.

The recovery again here is some 'reconnect' button, since if we did
the introduction successfully, the service is up, and reconnecting
might work at some point.

This one and (6) are very related and perhaps they can be handled
identically, since exposing terms like "intro" and "rend" to users
will not be nice. Still, we might want to expose a technical error
value somewhere for debugging purposes when users come to us.

===

I think the above set of errors will satisfy all our needs. In
particular:
- #30022 (typos ticket) needs error (1) from above.
- 

Re: [tor-dev] [prop305] Introduction Point Behavior

2019-08-20 Thread George Kadianakis
David Goulet  writes:

> Greetings,
>
> This is part of the many discussions about proposal 305 which is the
> ESTABLISH_INTRO DoS defenses cell extension.
>
> Implementation is close to done and under review in ticket #30924. However,
> there is one part that is yet to be cleared out. asn and I thought it would be
> better to bring it to tor-dev@ to get a more informed decision.
>
> As a reminder, the service operator will be able set torrc options that are
> the DoS defenses parameters. Those values are validated (bound check) and then
> sent to an introduction point, supporting the extension (protover HSIntro=5),
> in the ESTABLISH_INTRO cell. The intro point then gets them and apply them
> only to that specific circuit. If no cell extension is seen, the intro point
> will honor the consensus parameters for these DoS defenses.
>
> What we want to discuss is what happens when the introduction point receives
> bad values. What does it do with the circuit? Below is the list of possible
> bad values and the proposed behavior:
>
> 1) One of the parameters (at this point in time, only 2 exist) is out of bounds,
>    that is, above INT32_MAX.
>
>Behavior: We propose to ignore the cell extension, and fallback to follow
>  the consensus parameters. Keeping the circuit alive and working.
>
>The reason for this is because if let say the intro point would close the
>circuit due to "bad protocol", then the service would open a new circuit to
>an intro point supporting the extension and it would fail again.
>Effectively turning the service into a "zombie" and "DoS" weapon itself ;).
>
>At this point, there is really no reason on why the service would send bad
>values since torrc options are validated and then sent to the intro point.
>But this doesn't protect us from our future-developer-self making coding
>or protocol mistake ;).
>
> 
>
> I'm leaning towards not closing the circuit and falling back on the consensus
> parameters. And at some point in time, we'll be able to implement the
> INTRO_ESTABLISHED response. In the meantime, there is little chances that tor
> vanilla start sending bad values since they are validated from the torrc file.
>

Hello David,

I agree with your evaluation about keeping the circuit open on bad
values and going with the consensus parameters!

That said, let's also make a ticket about the INTRO_ESTABLISHED
enhancement that will allow us to send back status messages. Same goes
for a ticket that allows us to send multiple ESTABLISH_INTRO in the same
circuit, so that we can update the values in a hotplug way.

Finally, this is off-topic but another intro<->service communication we
might want to add in the future, is a message from the intro informing
the service that the rate-limiting parameters have been hit.

Cheers! :)


[tor-dev] Status of open circuit padding tickets

2019-07-23 Thread George Kadianakis
Hello Nick and Mike,

here is a summary of the current state of open circpad tickets, which I
tried to tidy up today. These are all the tickets I had in my radar and
I hope I didn't miss any.

I will be on leave starting the day after tomorrow (25th) so I wanted to
inform you of the status quo:

0.4.1:
#31024: Coverity: circpadding: always check circpad_machine_current_state() 
- [needs_review]
- This is done and in needs_review for Mike.

#30992: circpadding: Circsetup machines give out warnings when client-side 
intro gets NACKed
- This is still unresolved and in 041. I couldn't figure out by
  just looking at the logs... Perhaps we could push it in 042 given
  that it's a pretty rare client behavior.

0.4.2:
#30942: [warn] Unexpected INTRODUCE_ACK on circuit 3944288021.
- I took a stab at this today since this seems to be a central
  issue that can cause lots of log warns. It's now in needs_review.

#31112: remove specified target_hopnum from relay-side machines - 
[merge_ready]
#31113: Circuitpadding updated comments
#31098: transition when we send our first padding packet, not on received - 
[merge_ready]
- These are all Tobias' patches. I made changes files and fixed a
  few of them and also made a PR. I moved them to 042 since they
  are not really urgent bugfixes. The only one I would consider
  backporting to 041 is #31098.

#31002: circpadding: Middle node did not accept our padding request - 
[assigned]
- This is still unresolved. Mike took a look during PETS but not
  sure what's the verdict. I pushed to 042 since it does not seem
  urgent but we might want to reconsider.

#30578: The circuitpadding_circuitsetup_machine test: Re-enable, remove, 
re-document, or ___?
- This is still unresolved but not urgent. I pushed it to 042.

0.4.0:
#30649: Every few hours, relays [warn] Received circuit padding stop command
- This was hanging on a 040 backport which I just did.


Re: [tor-dev] Fwd: Re: Onion Service - Intropoint DoS Defenses

2019-07-08 Thread George Kadianakis
juanjo  writes:

>  Forwarded Message 
> Subject:  Re: [tor-dev] Onion Service - Intropoint DoS Defenses
> Date: Thu, 4 Jul 2019 20:38:48 +0200
> From: juanjo 
> To:   David Goulet 
>
>
>
> These experiments and final note confirm what I thought about this rate 
> limiting feature from the start: it is missing important parts. Ok, you 
> can protect the network a little and the HS, but the general 
> availability is not affected so it actually does not help for that.
>
> I wanna make a proposal including many things at the same time, but I 
> don't have much time to follow the guidelines to make a official 
> proposal. Maybe in some weeks?
>

Hello!

Ideally I would make one proposal for each of the things you care
about. Doing one huge proposal with all the things will make it less
likely for things to be done, since someone will disagree about one
small part of the proposal, and it will block the whole proposal
altogether.

> Again, I repeat: things that should be done now:
>
> -Authenticated rend signature. This would help a lot I think.
>

Current attacks do not spoof rendezvous points, they actually do make
the circuits, so I don't think that would help a whole lot. Still future
attacks might, so I agree this is worth doing (#25066 needs more
thinking and a proposal).

> -Mid-term: PoW for the client when reaching the 305prop limit instead of 
> denying access? IDK, all always configurable.
>

Plausible.

> -Deprecate clients or allow the Hidden Service to configure the IP to 
> allow access for old version clients (not supporting new antiDoS 
> features) or not. If we allow old version without protections, all 
> security measures are useless.
>

Plausibl-ish.

> And just a new idea: what about make the rotation of IP dynamic based on 
> this prop305 values? + time based rotation:
> One of the goal for rotation was defending against correlation attacks: 
> if we set a lower limit we have a potential DoS (right now), if we set 
> it high we have a potential correlation attack, bigger surface.
> What about we join time based rotation (ex. 24 hours) + or limit reached 
> based on the prop305 values.
>

Please see #26294 which is about to be merged upstream and will remove
some more useless parameters from intro point rotation. After #26294,
intro points will only rotate based on time.

What is the correlation attack you are worrying about? And why do you
think that rotating more frequently will make it safer? Usually rotating
less frequently helps against attacks by ensuring that it's less likely
to cycle into bad nodes.

Cheers! :)


Re: [tor-dev] Onion Service - Intropoint DoS Defenses

2019-07-04 Thread George Kadianakis
David Goulet  writes:

> On 30 May (09:49:26), David Goulet wrote:
>> Greetings!
>
> [snip]
>
> Hi everyone,
>
> I'm writing here to update on where we are about the introduction rate
> limiting at the intro point feature.
>
> The branch of #15516 (https://trac.torproject.org/15516) is ready to be merged
> upstream which implements a simple rate/burst combo for controlling the amount
> of INTRODUCE2 cells that are relayed to the service.
>

Great stuff! Thanks for the update!
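For intuition, I assume the rate/burst combo behaves like a token bucket
along these lines (a sketch only — the class and parameter names are mine,
not what #15516 actually implements):

```python
class TokenBucket:
    # Sketch of a rate/burst limiter for relaying INTRODUCE2 cells:
    # `rate` tokens refill per second, capped at `burst` tokens total.
    def __init__(self, rate, burst, now):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = float(burst)  # start with a full bucket
        self.last = now

    def allow(self, now):
        # Refill tokens based on elapsed time, then spend one per cell.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # relay the INTRODUCE2 cell to the service
        return False      # defense engaged: drop and NACK the client
```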

> 
>
> The bad news is that availability is _not_ improved. One of the big reasons
> for that is because the rate limit defenses, once engaged at the intro point,
> will send back a NACK to the client. A vanilla tor client will stop using that
> introduction point for 120 seconds if it gets 3 NACKs from it. This leads
> to tor quickly giving up on trying to connect and thus telling the client that
> connection is impossible to the .onion.
>
> We've hacked a tor client to play along and stop ignoring the NACKs to see how
> much time it would take to reach it. On average, a client would roughly need
> around 70 seconds with more than 40 NACKs on average.
>
> However, it varied a _lot_ during our experiments with many outliers from 8
> seconds with 1 NACK up to 160 seconds with 88 NACKs. (For this, the
> SocksTimeout had to be bumped quite a bit).
>

That makes sense.

So it seems like this change will shift the UX of clients visiting
DoSed onion services in a sideways direction (not better/worse), right?
Clients will immediately see a "Can't connect" page in their browser
since the SOCKS conn will abort after getting 3 NACKs. Is that the
case?

This change also affects the performance cost of these legitimate
clients, since now they will immediately try all three introduction
points by extending the introduction circuit two times. This means that
legitimate clients will be slightly more damaging to the network, but
the DoS attacker will be much less damaging to the network, and since
the DoS attacker causes all the damage here this seems like a net
positive change.

> There is an avenue of improvement here to make the intro point sends a
> specific NACK reason (like "Under heavy load" or ...) which would make the
> client consider it like "I should retry soon-ish" and thus making the client
> possibly able to connect after many seconds (or until the SocksTimeout).
>
> Another bad news there! We can't do that anytime soon because of this bug that
> basically crash clients if an unknown status code is sent back (that is a new
> NACK value): https://trac.torproject.org/30454. So yeah... quite unfortunate
> there but also a superb reason for everyone out there to upgrade :).
>

Do we have any view on what's the ideal client behavior here? Is
"retrying soon-ish" actually something we want to do? Does it have
security implications?

> 
>
> Overall, this rate limit feature does two things:
>
> 1. Reduce the overall network load.
>
>Soaking the introduction requests at the intro point helps avoid the
>service creating pointless rendezvous circuits which makes it "less" of an
>amplification attack.
>

I think it would be really useful to get a baseline of how much we
"Reduce the overall network load" here, given that this is the reason we
are doing this.

That is, it would be great to get a graph with how many rendezvous
circuits and/or bandwidth attackers can induce to the network right now
by attacking a service, and what's the same number if we do this feature
with different parameters.

> 2. Keep the service usable.
>
>The tor daemon doesn't go in massive CPU load and thus can be actually used
>properly during the attack.
>
> The problem with (2) is the availability part where for a legit client to
> reach the service, it is close to impossible for a vanilla tor without lots of
> luck.  However, if let say the tor daemon would be configured with 2 .onion
> where one is public and the other one is private with client authorization,
> then the second .onion would be totally usable due to the tor daemon not being
> CPU overloaded.
>

That's more like a "Keep the service CPU usable, but not the service itself" ;)

> 
>
> At this point in time, we don't have a good grasp on what happens in terms of
> CPU if the rate or the burst is bumped up or even how availability is
> affected. During our experimentation, we did observe a "sort of" linear
> progression between CPU usage and rate. But we barely touched the surface
> since it was changed from 25 to 50 to 75 and that is it.
>

I wonder how we can get a better grasp at this given that we are about
to deploy it on the real net. Perhaps some graphs with the effect of
these parameters on (1) and (2) above would be useful.

In particular, I think it would be smart and not a huge delay to wait
until Stockholm before we merge this so that we can discuss it in person
with more people and come up with exact parameters, client behaviors, etc.

Thanks again! :)

Re: [tor-dev] Proposal for PoW DoS defenses during introduction (was Re: Proposal 305: ESTABLISH_INTRO Cell DoS Defense Extension)

2019-06-14 Thread George Kadianakis
juanjo  writes:

> On 13/6/19 12:21, George Kadianakis wrote:
>> Is this a new cell? What's the format? Are these really keys or are they
>> just nonces?
>
> Yes sorry, they are nonces.
>
>
> This was only a proposal for a proposal.
>
>> Is this a new cell? What's the format? Are these really keys or are they
>> just nonces?
>>
>> IMO we should not do this through a new cell because that increases the
>> round-trip by one. Instead we should just embed the PoW parameters in
>> the onion service descriptor and clients find them there.
> Yes, this is a new cell triggered only when DoS limit is reached.
>
> We can't embed it on the onion service descriptor because the attacker 
> could precompute the PoW and make a dictionary attack. The IPKey (will 
> be a nonce) should be unique for each new connecting client that wants to 
> send the INTRODUCE2.
>
> What we want this way is increasing the cost of an attacker by many 
> times vs only a little overhead to the I.P.
>

I see. So you were going for an interactive PoW protocol. I wonder what
else we can get if we admit we want interactive. Can we get a CAPTCHA?
What else?

Still, I think the above protocol can be optimized to not require an
extra round trip (extra round trips are bad for the network and for the
intro point): For example, in place of an IPKey nonce that the IP
explicitly sends to the client, we could use some sort of unpredictable
crypto object from the circuit setup (e.g. ntor) between the client and
intro point.
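To make the idea concrete: both endpoints of an intro circuit already share
unpredictable key material after the circuit handshake, so a per-circuit
challenge can be derived locally on each side with no extra cells. A hedged
Python sketch — the label, input names, and 8-byte output (matching the
draft's IPKey size) are all invented for illustration and are not part of
tor's actual key schedule:

```python
import hashlib

def derive_pow_challenge(ntor_shared_secret: bytes, circuit_id: int) -> bytes:
    """Derive a per-circuit, unpredictable PoW challenge from material that
    both the client and the intro point already hold, so the intro point
    never has to send an explicit IPKey nonce (no extra round trip)."""
    h = hashlib.sha256()
    h.update(b"pow-challenge-v0")             # hypothetical domain-separation label
    h.update(ntor_shared_secret)              # stand-in for ntor handshake output
    h.update(circuit_id.to_bytes(4, "big"))   # bind the challenge to this circuit
    return h.digest()[:8]                     # 8 bytes, like the draft's IPKey

```

Because the derived challenge differs per circuit, precomputed (dictionary)
solutions are useless — the same property the explicitly-sent IPKey nonce was
meant to provide, but for free.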

>> That looks like a naive PoW scheme. It would perhaps be preferable to
>> try to find a GPU/ASIC-resistant or memory-hard PoW scheme here, to
>> minimize the advantage of adversaries with GPUs etc.?  Are there any
>> good such schemes?
>>
>> Also services should definitely be able to configure the difficulty of
>> the PoW, and IMO this should again happen through the descriptor.
> That PoW scheme was just a simple example. We should find the right 
> choice. Something hard to find but easy to check.
>

Yep. We should indeed find the right choice here. I have briefly tried
and failed to find papers that compare PoW schemes in a useful way for
this project.
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Proposal for PoW DoS defenses during introduction (was Re: Proposal 305: ESTABLISH_INTRO Cell DoS Defense Extension)

2019-06-13 Thread George Kadianakis
juanjo  writes:

> Hello, this is my view of things, please be gentle as this is my first 
> proposal draft :)
>

Hello,

thanks for working on this. IMO any proof-of-work introduction proposal
can be seen as orthogonal to David's prop305 which is a rate-limiting
proposal (even tho it's not named as such) and hence deserves its own
thread.

> _ADAPTIVE POW PROPOSAL:_
>
> Client sends the INTRODUCE1 as normal.
>
> Introduction Point checks the Current Requests Rate and checks the DoS 
> settings.
>
> -DoS check is OK: send INTRODUCE2 to Hidden Service etc...
>

So far so good (even tho this is not our usual proposal format).

> -DoS settings/rate limit reached: then
>
>      1.Introduction Point generates a random 8 bytes key (IPKey) and 
> associates it with the client circuit. Then send INTRODUCE_POW to the 
> Client with the IPKey.

Is this a new cell? What's the format? Are these really keys or are they
just nonces?

IMO we should not do this through a new cell because that increases the
round-trip by one. Instead we should just embed the PoW parameters in
the onion service descriptor and clients find them there. 

>      2.Client computes POW.
>      Do{
> Generates random 8 bytes key (ClientKey).
> Generates hash(sha512/256 or sha3??) of
> hash(IPKey + ClientKey)
> } while (hash does not start with "abcde")
>

That looks like a naive PoW scheme. It would perhaps be preferable to
try to find a GPU/ASIC-resistant or memory-hard PoW scheme here, to
minimize the advantage of adversaries with GPUs etc.?  Are there any
good such schemes?

Also services should definitely be able to configure the difficulty of
the PoW, and IMO this should again happen through the descriptor.
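For reference, the quoted do/while loop is essentially hashcash: the client
searches for a suffix that gives the hash a required prefix, and the intro
point checks a candidate with a single hash. A minimal sketch, with sha256
standing in for the draft's unspecified hash and a one-byte zero prefix
standing in for the "abcde" difficulty target (both choices illustrative):

```python
import hashlib
import os

DIFFICULTY_PREFIX = b"\x00"  # illustrative difficulty target, not from the draft

def compute_pow(ip_nonce: bytes) -> bytes:
    """Client side: search for a ClientKey such that
    H(IPKey || ClientKey) starts with the difficulty prefix."""
    while True:
        client_key = os.urandom(8)
        digest = hashlib.sha256(ip_nonce + client_key).digest()
        if digest.startswith(DIFFICULTY_PREFIX):
            return client_key

def verify_pow(ip_nonce: bytes, client_key: bytes) -> bool:
    """Intro point side: a single hash, so verification is cheap."""
    digest = hashlib.sha256(ip_nonce + client_key).digest()
    return digest.startswith(DIFFICULTY_PREFIX)

```

The asymmetry (many hashes to solve, one to verify) is the whole point, and a
descriptor-published difficulty would simply change the prefix length. Note
that plain hash-prefix search like this is trivially GPU-friendly, which is
exactly the concern raised above.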

>      3. Client sends INTRODUCE_POWR to the I.P. with the generated POW 
> and the ClientKey.

IMO this should happen as part of the INTRODUCE1 cell.

>      4. I.P. checks the POW:
>
>          -POW is correct: send INTRODUCE2 to HS.
>          -POW is not correct: send INTRODUCE_POW_ERROR to client with 
> new IPKey.
>
> *I say 8 bytes for the Keys just for example.
>
> PROS AND CONS, who needs to update Tor version?:
> --
>
> Only rate limit: Introduction Point and Hidden Service. No breakage.
>
> POW: Client, Introduction Point and Hidden Service. POW will break 
> compatibility with other v3 Hidden Services clients, if we implement a 
> way to bypass POW for old clients then this feature won't work as intended.
>
> A forgotten guy here: Authenticated Rends cell: where we make sure the 
> Client established a connection to the Rend Point before requesting the 
> INTRODUCE1.
>

Yep, that's yet another proposal (ticket #25066).


Re: [tor-dev] Proposal 305: ESTABLISH_INTRO Cell DoS Defense Extension

2019-06-12 Thread George Kadianakis
David Goulet  writes:

> Filename: 305-establish-intro-dos-defense-extention.txt
> Title: ESTABLISH_INTRO Cell DoS Defense Extension
> Author: David Goulet, George Kadianakis
> Created: 06-June-2019
> Status: Draft
>

Thanks for this proposal, it's most excellent and an essential building
block for future work on intro point related defences.

>
>We propose a new EXT_FIELD_TYPE value:
>
>   [01] -- DOS_PARAMETERS.
>
>   If this flag is set, the extension should be used by the
>   introduction point to learn what values the denial of service
>   subsystem should be using.
>

Perhaps we can name it "rate-limiting parameters"? But no strong opinion.

>The EXT_FIELD content format is:
>
>   N_PARAMS[1 byte]
>   N_PARAMS times:
>  PARAM_TYPE  [1 byte]
>  PARAM_VALUE [8 byte]
>
>The PARAM_TYPE proposed values are:
>
>   [01] -- DOS_INTRODUCE2_RATE_PER_SEC
>   The rate per second of INTRODUCE2 cell relayed to the service.
>
>   [02] -- DOS_INTRODUCE2_BURST_PER_SEC
>   The burst per second of INTRODUCE2 cell relayed to the service.
>
>The PARAM_VALUE size is 8 bytes in order to accommodate 64-bit values
>(uint64_t). It MUST match the specified limit for the following PARAM_TYPE:
>
>   [01] -- Min: 0, Max: INT_MAX
>   [02] -- Min: 0, Max: INT_MAX
>

How would this new addition to the cell impact the size of the cell? How
much free space do we have for additional features to this cell (e.g. to
do the PoW stuff of the other thread)?
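For a rough size estimate from the wire format quoted above: each parameter
costs 9 bytes (1-byte type plus 8-byte value), plus a 1-byte count. A hedged
encoding sketch (network byte order assumed, as is usual for Tor cells; the
function name is invented):

```python
import struct

DOS_INTRODUCE2_RATE_PER_SEC = 0x01
DOS_INTRODUCE2_BURST_PER_SEC = 0x02

def encode_dos_params(params: dict) -> bytes:
    """Encode the EXT_FIELD body: N_PARAMS, then one (PARAM_TYPE, PARAM_VALUE)
    pair per parameter, with PARAM_VALUE as an 8-byte unsigned integer."""
    body = struct.pack("!B", len(params))
    for ptype, pvalue in sorted(params.items()):
        body += struct.pack("!BQ", ptype, pvalue)
    return body

blob = encode_dos_params({DOS_INTRODUCE2_RATE_PER_SEC: 50,
                          DOS_INTRODUCE2_BURST_PER_SEC: 200})
# 1 + 2 * 9 = 19 bytes for both proposed parameters

```

So the two proposed parameters consume only 19 bytes of extension body, which
suggests there is room left for future fields (like PoW parameters) — though
the exact free space depends on the rest of the ESTABLISH_INTRO payload.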

>A value of 0 means the defense is disabled which has precedence over the
>network wide consensus parameter.
>
>In this case, if the rate per second is set to 0 (param 0x01) then the
>burst value should be ignored. And vice-versa, if the burst value is 0,
>then the rate value should be ignored. In other words, setting one single
>parameter to 0 disables the INTRODUCE2 rate limiting defense.
>

I think it could be cool to add a discussion section where we introduce
a new cell from the intro point to the service which informs the service
that its rate limits have been hit, so that the service gets feedback
that it's under attack or capped by limits. Otherwise, there is simply
no way to learn this.

This can be a later feature fwiw.

> 3. Protocol Version
>
>We introduce a new protocol version in order for onion services to
>specifically select introduction points supporting this new extension. It
>is also used to decide when to send this extension or not.
>
>The new version for the "HSIntro" protocol is:
>
>   "5" -- support ESTABLISH_INTRO cell DoS parameters extension for onion
>  service version 3 only.
>
> 4. Configuration Options
>
>We also propose new torrc options in order for the operator to control
>those values passed through the ESTABLISH_INTRO cell.
>
>   "HiddenServiceEnableIntroDoSDefense 0|1"
>
>  If this option is set to 1, the onion service will always send to the
>  introduction point denial of service defense parameters regardless of
>  whether the consensus enables them. The value will be taken from
>  the consensus and if not present, the default values will be used.
>  (Default: 0)
>
>   "HiddenServiceEnableIntroDoSRatePerSec N sec"
>
>  Controls the introduce rate per second the introduction point should
>  impose on the introduction circuit.
>  (Default: 25, Min: 0, Max: 4294967295)
>
>   "HiddenServiceEnableIntroDoSBurstPerSec N sec"
>
>  Controls the introduce burst per second the introduction point should
>  impose on the introduction circuit.
>  (Default: 200, Min: 0, Max: 4294967295)
>
>They respectively control the parameter type 0x01 and 0x02 in the
>ESTABLISH_INTRO cell detailed in section 2.
>
>The default values of the rate and burst are taken from ongoing anti-DoS
>implementation work [1][2]. They aren't meant to be defined with this
>proposal.
>
> 5. Security Considerations
>
>Using this new extension leaks to the introduction point the service's tor
>version. This could in theory help any kind of de-anonymization attack on a
>service since, at first, it partitions the service into a very small group
>of running tor versions.
>
>Furthermore, when the first tor version supporting this extension is
>released, very few introduction points will be updated to that version,
>which means that we could end up in a situation where many services want to
>us

Re: [tor-dev] Onion Service - Intropoint DoS Defenses

2019-06-06 Thread George Kadianakis
David Goulet  writes:

> Greetings!
>
> 
>

Hello, I'm here to brainstorm about this suggested feature. I don't have
a precise plan forward here, so I'm just talking.

> Unfortunately, our circuit-level flow control does not apply to the
> service introduction circuit which means that the intro point is
> allowed, by the Tor protocol, to send an arbitrary large amount of cells
> down the circuit.  This means for the service that even after the DoS
> has stopped, it would still receive massive amounts of cells because
> some are either inflight on the circuit or queued at the intro point
> ready to be sent (towards the service).
> 

== SENDME VS Token bucket

So it seems like we are going with a token bucket approach (#15516) to
rate-limit introduce cells, even tho the rest of the Tor protocol is
using SENDME cells. Are we reinventing the wheel here?
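For readers unfamiliar with the #15516 approach: a token bucket enforces a
sustained rate while tolerating short bursts, and needs no feedback cells at
all — unlike SENDME, which is an end-to-end flow-control signal. A minimal
sketch using the rate/burst values discussed later in this mail (50/200);
the class and method names are illustrative, not tor's:

```python
import time

class IntroTokenBucket:
    """Token bucket: refill `rate` tokens/sec up to `burst` capacity.
    Each INTRODUCE2 cell relayed consumes one token; no token, cell dropped."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst                # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

```

Note that per the prop305 discussion, a rate or burst of 0 is meant to
disable the defense entirely, so a real implementation would special-case
that before ever consulting the bucket.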

> > That being all said, our short-term goal here is to add INTRODUCE2
> rate-limiting (similar to the Guard DoS subsystem deployed early last year)
> *at* the intro point but much simpler. The goal is to soak up the introduction
> load directly at the intro points which would help reduce the load on the
> network overall and thus preserve its health.
>

== We need to understand the effects of this feature: 

First of all, the main thing to note here is that this is a feature that
primarily intends to improve network health against DoS adversaries. It
achieves this by greatly reducing the amount of useless rendezvous
circuits opened by the victim service, which then improves the health of
guard nodes (when guard nodes break, circuits start retrying endlessly,
and hell begins).

We don't know how this feature will impact the availability of an
attacked service. Right now, my hypothesis is that even with this
feature enabled, an attacked service will remain unusable. That's
because an attacker who spams INTRO1 cells will always saturate the
intro point and innocent clients with a browser will be very unlikely to
get service (kinda like sitting under a waterfall and trying to fill a
glass with your spit). That said, with this defense, the service won't
be at 100% CPU, so perhaps innocent clients who manage to sneak in will get
service, whereas now they don't anyway.

IMO, it's very important to understand exactly how this feature will
impact the availability of the service: If this feature does not help
the availability of the service, then victim operators will be
incentivized to disable the feature (or crank up the limits) which means
that we will not improve the health of the network, which is our primary
goal here.

---

== Why are we doing all this?

Another thing I wanted to mention here is the second order effect we are
facing. The only reason we are doing all this is because attackers are
incentivized to attack onion services. Perhaps the best thing we
could do here is to create tools to make denial of service attacks less
effective against onion services, which would make attackers stop
performing them, and hence we won't need to implement rate-limits to
protect the network in case they do. Right now the best things we have
towards that direction is the incomplete-but-plausible design of [0] and
the inelegant 1b from [1].

This is especially true since to get this rate-limiting feature deployed
to the whole network we need all relays (intro points) to upgrade to the
new version so we are looking at years in the future anyway.

[0]: https://lists.torproject.org/pipermail/tor-dev/2019-May/013849.html
 https://lists.torproject.org/pipermail/tor-dev/2019-June/013862.html
[1]: https://lists.torproject.org/pipermail/tor-dev/2019-April/013790.html

>
> One naive approach is to see how many cells an attacker can send towards a
> service. George and I have conducted an experiment where with 10 *modified* tor
> clients bombarding a service at a much faster rate than 1 per-second (what
> vanilla tor does if asked to connect a lot), we see in 1 minute ~15000
> INTRODUCE2 cells at the service. This varies in the thousands depending on
> different factors but overall that is a good average of our experiment.
>
> This means that 15000/60 = 250 cells per second.
>
> Considering that this is an absurd amount of INTRODUCE2 cells (maybe?), we can
> put a rate per second of, let's say, a fifth (50) and a burst of 200.
>
> Over the normal 3 intro points a service has, it means 150 introductions
> per second are allowed with a burst of 600 in total. Or in other words, 150
> clients can reach the service every second up to a burst of 600 at once. This
> probably will ring alarm bells for very popular services that probably get
> 1000+ users a second so please check next section.
>
> I'm not that excited about hardcoded network wide values so this is why the
> next section is more exciting but much more work for us!
>

Yes, I'm also very afraid of imposing network wide values here. What
happens to hypothetical onion services that outperform the hard limits
we impose here, 

Re: [tor-dev] Proposal 304: Extending SOCKS5 Onion Service Error Codes

2019-06-06 Thread George Kadianakis
David Goulet  writes:

> Filename: 304-socks5-extending-hs-error-codes.txt
> Title: Extending SOCKS5 Onion Service Error Codes
> Author: David Goulet, George Kadianakis
> Created: 22-May-2019
> Status: Open
>

Merged to torspec as proposal 304! :)


Re: [tor-dev] Onion Service - Intropoint DoS Defenses

2019-06-03 Thread George Kadianakis
George Kadianakis  writes:

> George Kadianakis  writes:
>
>> juanjo  writes:
>>
>>> Ok, thanks, I was actually thinking about PoW on the Introduction Point 
>>> itself, but it would need to add a round trip, like some sort of 
>>> "authentication based PoW" before allowing to send the INTRODUCE1 cell. 
>>> At least it would make the overhead of clients higher than I.P. as the 
>>> clients would need to compute the PoW function and the I.P. only to 
>>> verify it. So if right now the cost of the attack is "low" we can add an 
>>> overhead of +10 to the client and only +2 to the I.P. (for example) and 
>>> the hidden service doesn't need to do anything.
>>>
>>
>> Also see the idea in (b) (1) here: 
>> https://lists.torproject.org/pipermail/tor-dev/2019-April/013790.html
>> and how it couples with the "rendezvous approver" from ticket #16059.
>> Given a generic system there, adding proof-of-work is a possibility.
>>
>> Another option would be to add the proof-of-work in the public parts of
>> INTRO1 and have the introduction point verify it which is not covered in
>> our email above.
>>
>> Proof-of-work systems could be something to consider, altho tweaking a
>> proof-of-work system that would deny attackers and still allow normal
>> clients to visit it (without e.g. burning the battery of mobile clients)
>> is an open problem AFAIK.
>>
>>
>
> Here is how this could work after a discussion with dgoulet and arma on IRC:
>
> 1) Service enables DoS protection in its torrc.
>
> 2) Service uploads descriptor with PoW parameters.
>
> 3) Service sends special flag in its ESTABLISH_INTRO to its intro points
>that says "Enable PoW defences".
>
> 4) Clients fetch descriptor, parse the PoW parameters and now need to
>complete PoW before they send a valid INTRO1 cell, otherwise it gets
>dropped by the intro point.
>
> All the above seems like they could work for some use cases.
>
> As said above, I doubt there are parameters that would help against DoS
> and still allow people to pleasantly visit such onion services through
> an uncharged mobile phone, but this choice is up to the onion
> service. The onion service can turn this feature on when they want, and
> disable it when they want. And mobile clients could also disallow visits
> to such sites to avoid battery/CPU burns.
>
> All the above seems likely, but it's significant work. We first need a
> proposal to discuss, and then there is lots of code to be written...
>

FWIW, thinking about this more, I think it's quite unlikely that we will
find a non-interactive PoW system here (like hashcash) whose parameters
would allow a legit client to compute a PoW in a reasonable time frame,
and still disallow a motivated attacker with GPUs to compute
hundreds/thousands of them in a single second (which can be enough to
DoS a service).

We should look into parameters and tuning for non-interactive PoW systems,
or we could look into interactive proof-of-work systems like CAPTCHAs
(or something else), which would be additional work but might suit us
more.


Re: [tor-dev] Onion Service - Intropoint DoS Defenses

2019-05-31 Thread George Kadianakis
George Kadianakis  writes:

> juanjo  writes:
>
>> Ok, thanks, I was actually thinking about PoW on the Introduction Point 
>> itself, but it would need to add a round trip, like some sort of 
>> "authentication based PoW" before allowing to send the INTRODUCE1 cell. 
>> At least it would make the overhead of clients higher than I.P. as the 
>> clients would need to compute the PoW function and the I.P. only to 
>> verify it. So if right now the cost of the attack is "low" we can add an 
>> overhead of +10 to the client and only +2 to the I.P. (for example) and 
>> the hidden service doesn't need to do anything.
>>
>
> Also see the idea in (b) (1) here: 
> https://lists.torproject.org/pipermail/tor-dev/2019-April/013790.html
> and how it couples with the "rendezvous approver" from ticket #16059.
> Given a generic system there, adding proof-of-work is a possibility.
>
> Another option would be to add the proof-of-work in the public parts of
> INTRO1 and have the introduction point verify it which is not covered in
> our email above.
>
> Proof-of-work systems could be something to consider, altho tweaking a
> proof-of-work system that would deny attackers and still allow normal
> clients to visit it (without e.g. burning the battery of mobile clients)
> is an open problem AFAIK.
>
>

Here is how this could work after a discussion with dgoulet and arma on IRC:

1) Service enables DoS protection in its torrc.

2) Service uploads descriptor with PoW parameters.

3) Service sends special flag in its ESTABLISH_INTRO to its intro points
   that says "Enable PoW defences".

4) Clients fetch descriptor, parse the PoW parameters and now need to
   complete PoW before they send a valid INTRO1 cell, otherwise it gets
   dropped by the intro point.
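One wrinkle with descriptor-published parameters (raised earlier in this
thread) is that a static challenge lets attackers precompute solutions. A
common mitigation is to make each solution single-use at the intro point via
a replay cache. A hedged sketch — the class, the seed/difficulty encoding,
and the hash choice are all invented for illustration:

```python
import hashlib

class PowIntroFilter:
    """Intro-point-side check for PoW solutions computed against
    descriptor-published parameters (a seed plus a difficulty prefix).
    A replay cache makes each accepted solution single-use."""

    def __init__(self, seed: bytes, difficulty_prefix: bytes):
        self.seed = seed
        self.difficulty_prefix = difficulty_prefix
        self.seen = set()                 # would need expiry/bounding in practice

    def check_intro1(self, client_nonce: bytes) -> bool:
        if client_nonce in self.seen:
            return False                  # replayed (possibly precomputed) solution
        digest = hashlib.sha256(self.seed + client_nonce).digest()
        if not digest.startswith(self.difficulty_prefix):
            return False                  # PoW not done; drop this INTRO1 cell
        self.seen.add(client_nonce)
        return True

```

Rotating the seed whenever the service re-uploads its descriptor bounds both
the replay cache's size and the attacker's precomputation window.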

All the above seems like they could work for some use cases.

As said above, I doubt there are parameters that would help against DoS
and still allow people to pleasantly visit such onion services through
an uncharged mobile phone, but this choice is up to the onion
service. The onion service can turn this feature on when they want, and
disable it when they want. And mobile clients could also disallow visits
to such sites to avoid battery/CPU burns.

All the above seems likely, but it's significant work. We first need a
proposal to discuss, and then there is lots of code to be written...




Re: [tor-dev] Onion Service - Intropoint DoS Defenses

2019-05-31 Thread George Kadianakis
juanjo  writes:

> Ok, thanks, I was actually thinking about PoW on the Introduction Point 
> itself, but it would need to add a round trip, like some sort of 
> "authentication based PoW" before allowing to send the INTRODUCE1 cell. 
> At least it would make the overhead of clients higher than I.P. as the 
> clients would need to compute the PoW function and the I.P. only to 
> verify it. So if right now the cost of the attack is "low" we can add an 
> overhead of +10 to the client and only +2 to the I.P. (for example) and 
> the hidden service doesn't need to do anything.
>

Also see the idea in (b) (1) here: 
https://lists.torproject.org/pipermail/tor-dev/2019-April/013790.html
and how it couples with the "rendezvous approver" from ticket #16059.
Given a generic system there, adding proof-of-work is a possibility.

Another option would be to add the proof-of-work in the public parts of
INTRO1 and have the introduction point verify it which is not covered in
our email above.

Proof-of-work systems could be something to consider, altho tweaking a
proof-of-work system that would deny attackers and still allow normal
clients to visit it (without e.g. burning the battery of mobile clients)
is an open problem AFAIK.





Re: [tor-dev] Proposal 302: Hiding onion service clients using WTF-PAD

2019-05-27 Thread George Kadianakis
David Goulet  writes:

> On 16 May (14:20:05), George Kadianakis wrote:
>
> Hello!
>
>> 4.1. A dive into general circuit construction sequences [CIRCCONSTRUCTION]
>> 
>>In this section we give an overview of how circuit construction looks like
>>to a network or guard-level adversary. We use this knowledge to make the
>>right padding machines that can make intro and rend circuits look like 
>> these
>>general circuits.
>> 
>>In particular, most general Tor circuits used to surf the web or download
>>directory information, start with the following 6-cell relay cell 
>> sequence (cells
>>surrounded in [brackets] are outgoing, the others are incoming):
>> 
>>  [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [BEGIN] -> CONNECTED
>> 
>>When this is done, the client has established a 3-hop circuit and also
>>opened a stream to the other end. Usually after this comes a series of 
>> DATA
>>cell that either fetches pages, establishes an SSL connection or fetches
>>directory information:
>> 
>>  [DATA] -> [DATA] -> DATA -> DATA
>> 
>>The above stream of 10 relay cells defines the grand majority of general
>>circuits that come out of Tor browser during our testing, and it's what we
>>are gonna use to make introduction and rendezvous circuits blend in.
>
> Considering "either fetches pages,..." is in the description, I'm confused how
> only 2 data cells is the grand majority?
>
> A simple "wget torproject.org" gives me an index.html of 16KB meaning at least
> 32 DATA cells. Even a directory fetch can't only be 2 data cells... ?
>

Perhaps I should have made it more clear but the pattern:

[DATA] -> [DATA] -> DATA -> DATA -> ...

comes from the SSL handshake that happens in most general circuits. In
particular the first two [DATA] cells are the ClientHello etc. SSL
records that get sent by the client, and then the subsequent DATA cells
are the ServerHello etc. of the server.

>> 5.1. Client-side introduction circuit hiding machines [INTRO_CIRC_HIDING]
>> 
>>These two machines are meant to hide client-side introduction circuits. 
>> The
>>origin-side machine sits on the client and sends padding towards the
>>introduction circuit, whereas the relay-side machine sits on the 
>> middle-hop
>>(second hop of the circuit) and sends padding towards the client. The
>>padding from the origin-side machine terminates at the middle-hop and does
>>not get forwarded to the actual introduction point.
>> 
>>Both of these machines only get activated for introduction circuits, and
>>only after an INTRODUCE1 cell has been sent out.
>> 
>>This means that before the machine gets activated our cell flow looks 
>> like this:
>> 
>> [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> EXTENDED2 -> [EXTEND2] -> 
>> EXTENDED2 -> [INTRODUCE1]
>> 
>>Comparing the above with section [CIRCCONSTRUCTION], we see that the above
>>cell sequence matches the one from general circuits up to the first 7 
>> cells.
>> 
>>However, in normal introduction circuits this is followed by an
>>INTRODUCE_ACK and then the circuit gets torn down, which does not match
>>the sequence from [CIRCCONSTRUCTION].
>> 
>>Hence when our machine is used, after sending an [INTRODUCE1] cell, we 
>> also
>>send a [PADDING_NEGOTIATE] cell, which gets answered by a 
>> PADDING_NEGOTIATED
>>cell and an INTRODUCE_ACKED cell. This makes us match the 
>> [CIRCCONSTRUCTION]
>>sequence up to the first 10 cells.
>> 
>>After that, we continue sending padding from the relay-side machine so as 
>> to
>>fake a directory download, or an SSL connection setup. We also want to
>>continue sending padding so that the connection stays up longer to destroy
>>the "Duration of Activity" fingerprint.
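The claim that the padded introduction sequence matches general circuits for
the first 10 cells can be checked mechanically. A small sanity check using
the cell names from the proposal text, with direction inferred from the
[bracket] = outgoing convention (this only models cell directions, which is
what the paper's first-10-cells classifier looks at):

```python
# First 10 cells of a general circuit (prop302 section [CIRCCONSTRUCTION]).
GENERAL = ["[EXTEND2]", "EXTENDED2", "[EXTEND2]", "EXTENDED2",
           "[BEGIN]", "CONNECTED", "[DATA]", "[DATA]", "DATA", "DATA"]

# First 10 cells of a client-side intro circuit with the padding machine on.
PADDED_INTRO = ["[EXTEND2]", "EXTENDED2", "[EXTEND2]", "EXTENDED2",
                "[EXTEND2]", "EXTENDED2", "[INTRODUCE1]",
                "[PADDING_NEGOTIATE]", "PADDING_NEGOTIATED", "INTRODUCE_ACKED"]

def directions(cells):
    """Bracketed cells are outgoing (client->relay); the rest are incoming."""
    return ["out" if c.startswith("[") else "in" for c in cells]

# The direction patterns coincide for all 10 cells.
assert directions(GENERAL) == directions(PADDED_INTRO)

```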
>
> I've looked at the implementation quickly and these DROP cells aren't
> accounted for in our circuit flow control which means that there will be a
> difference between a "real" DATA circuit and a circuit being sent PADDING in
> order to look like the former. And that will be the flow control cell(s)
> (SENDME) coming back from the end point that is receiving the data.
>
> In other words, one circuit (the padded one) will have only a long stream of
> cells going in one direction and the second circuit (with legit data) will
> have that long stream but now and then a cell coming back down the ci

Re: [tor-dev] Proposal 302: Hiding onion service clients using WTF-PAD

2019-05-20 Thread George Kadianakis
Tom Ritter  writes:

> On Thu, 16 May 2019 at 11:20, George Kadianakis  wrote:
>> 3) Duration of Activity ("DoA")
>>
>>   The USENIX paper uses the period of time during which circuits send and
>>   receive cells to distinguish circuit types. For example, client-side
>>   introduction circuits are really short lived, wheras service-side
>>   introduction circuits are really short lived, whereas service-side
>> have
>>   the same median lifetime as general Tor circuits which is 10 minutes.
>>
>>   We use WTF-PAD to destroy this feature of client-side introduction
>>   circuits by setting a special WTF-PAD option, which keeps the circuits
>>   open for 10 minutes completely mimicking the DoA of general Tor 
>> circuits.
>
> 10 minutes exactly; or a median of 10 minutes?  Wouldn't 10 minutes
> exactly be a near-perfect distinguisher? And if it's a median of 10
> minutes, do we know if it follows a normal distribution/what is the
> shape of the distribution to mimic?
>

Oops, you are right, Tom.

It's not 10 minutes exactly. The right thing to say is that it's a median
of 10 minutes, altho I'm not entirely sure of the exact distribution.

These circuits basically now follow the MaxCircuitDirtiness
configuration like general circuits, and it gets orchestrated by
circuit_expire_old_circuits_clientside(). Not sure if it's in a spec
somewhere.

I will update the spec soon with the fix. Thanks!


[tor-dev] Proposal 302: Hiding onion service clients using WTF-PAD

2019-05-16 Thread George Kadianakis
Filename: 302-padding-machines-for-onion-clients.txt
Title: Hiding onion service clients using padding
Author: George Kadianakis, Mike Perry
Created: Thursday 16 May 2019
Status: Accepted
Ticket: #28634

0. Overview

   Tor clients use "circuits" to do anonymous communications. There are various
   types of circuits. Some of them are for navigating the normal Internet,
   others are for fetching Tor directory information, others are for connecting
   to onion services, while others are simply for measurements and testing.

   It's currently possible for MITM type of adversaries (like tor-network-level
   and local-area-network adversaries) to distinguish Tor circuit types from
   each other using a wide array of metadata and distinguishers.

   In this proposal, we study various techniques that can be used to
   distinguish client-side onion service circuits and provide WTF-PAD circuit
   padding machines (using prop#254) to hide them against certain adversaries.

1. Motivation

   We are writing this proposal for various reasons:

   1) We believe that in an ideal setting MITM adversaries should not be able
  to distinguish circuit types by inspecting traffic. Tor traffic should
  look amorphous to an outside observer to maximize uncertainty and
  anonymity properties.

  Client-side onion service circuits are an easy target for this proposal,
  because we believe we can improve their privacy with low bandwidth
  overhead.

   2) We want to start experimenting with the WTF-PAD subsystem of Tor, and
  this use-case provides us with a good testbed.

   3) We hope that by actually starting to use the WTF-PAD subsystem of Tor, we
  will encourage more researchers to start experimenting with it.

2. Scope of the proposal [SCOPE]

   Given the above, this proposal sets forth to use the WTF-PAD system to hide
   client-side onion service circuits against the classifiers of the paper by
   Kwon et al. cited below.

   By client-side onion service circuits we refer to these two types of 
circuits:
  - Client-side introduction circuits: Circuit from client to the 
introduction point
  - Client-side rendezvous circuits: Circuit from client to the rendezvous 
point

   Service-side onion service circuits are not in scope for this proposal, and
   this is because hiding those would require more bandwidth and also more
   advanced WTF-PAD features.

   Furthermore, this proposal only aims to cloak the naive distinguishing
   features mentioned in the [KNOWN_DISTINGUISHERS] section, and can by no
   means guarantee that client-side onion service circuits are totally
   indistinguishable by other means.

   The machines specified in this proposal are meant to be lightweight and
   created for a specific purpose. This means that they can be easily extended
   with additional states to do more advanced hiding.

3. Known distinguishers against onion service circuits [KNOWN_DISTINGUISHERS]

   Over the past years it's been assumed that motivated adversaries can
   distinguish onion-service traffic from normal Tor traffic given their
   special characteristics.

   As far as we know, there has been relatively little research-level work done
   to this direction. The main article published in this area is the USENIX
   paper "Circuit Fingerprinting Attacks: Passive Deanonymization of Tor Hidden
   Services" by Kwon et al. [0]

   The above paper deals with onion service circuits in sections 3.2 and 5.1.
   It uses the following three "naive" circuit features to distinguish circuits:
  1) Circuit construction sequence
  2) Number of incoming and outgoing cells
  3) Duration of Activity ("DoA")

   All onion service circuits have particularly loud signatures to the above
   characteristics, but WTF-PAD (prop#254) gives us tools to effectively
   silence those signatures to the point where the paper's classifiers won't
   work.

4. Hiding circuit features using WTF-PAD

   According to section [KNOWN_DISTINGUISHERS] there are three circuit features
   we are attempting to hide. Here is how we plan to do this using the WTF-PAD
   system:

   1) Circuit construction sequence

  The USENIX paper uses the directions of the first 10 cells sent in a
  circuit to fingerprint them. Client-side onion service circuits have
  unique circuit construction sequences and hence they can be fingerprinted
  using just the first 10 cells.

  We use WTF-PAD to destroy this feature of onion service circuits by
  carefully sending padding cells (relay DROP cells) during circuit
  construction and making them look exactly like most general tor circuits
  up till the end of the circuit construction sequence.
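  As a toy illustration (not Tor code; the direction prefixes below are
  invented), the classifier keys on the direction of a circuit's first 10
  cells, and the padding defence works by inserting dummy cells until the
  observable prefix matches that of a general circuit:

```python
# Toy model of the first-10-cells fingerprint from Kwon et al.:
# +1 = outgoing cell, -1 = incoming cell. Both sequences are hypothetical.
GENERAL  = [+1, -1, +1, -1, +1, -1, +1, +1, -1, +1]  # "general circuit" prefix
HS_INTRO = [+1, -1, +1, -1, +1, -1, +1, -1, +1, -1]  # "client intro" prefix

def fingerprint(cells):
    """What a passive observer sees: the first 10 cell directions."""
    return tuple(cells[:10])

def pad_to_match(cells, target):
    """Insert dummy (relay DROP) cells until the first-10 prefix equals target."""
    padded = list(cells)
    for i, direction in enumerate(target):
        if i >= len(padded) or padded[i] != direction:
            padded.insert(i, direction)  # send a DROP cell in the needed direction
    return padded

print(fingerprint(pad_to_match(HS_INTRO, GENERAL)) == tuple(GENERAL))  # True
```

  After padding, the observer's feature vector for the intro circuit is
  identical to the general-circuit one, which is the effect the padding
  machines aim for.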

   2) Number of incoming and outgoing cells

  The USENIX paper uses the amount of incoming and outgoing cells to
  distinguish circuit types. For example, client-side introduction circuits
  have the same amount of inc

Re: [tor-dev] [RFC] control-spec: Specify add/remove/view client auth commands (client-side).

2019-05-07 Thread George Kadianakis
George Kadianakis  writes:

> Hello list,
>
> here is a control spec patch for adding v3 client auth commands to
> add/remove/view clients from the client-side (so Tor Browser -> Tor):
> 
> https://github.com/torproject/torspec/pull/81/commits/3a26880e80617210b4729f96664ef9f0345b0b7c
>
> I'm currently unhappy with the naming of those commands, and in general
> with how easy it is to confuse them with the (non-existent) service-side
> commands. I'm wondering how to name them better so that when we add the
> respective service-side commands (at some point we should) there is no
> confusion.
>

Thanks for all the comments. I think I took everything into account, and
I'm inlining an updated version of the patch. My apologies if I forgot
something.

There will likely be updates (e.g. on the error codes) as we get to
implement this, because we always forget something.

Thanks for the feedback, very much appreciated! :)

---

+
+ 3.30. ONION_CLIENT_AUTH_ADD
+ 
+   The syntax is:
+ "ONION_CLIENT_AUTH_ADD" SP HSAddress
+ SP "X25519PrivKey=" PrivateKeyBlob
+ [SP "ClientName=" Nickname]
+ [SP "Type=" TYPE] CRLF
+ 
+ HSAddress = 56*Base32Character
+ PrivateKeyBlob = base64 encoding of x25519 key
+ 
+   Tells the connected Tor to add client-side v3 client auth credentials for the
+   onion service with "HSAddress". The "PrivateKeyBlob" is the x25519 private
+   key that should be used for this client, and "Nickname" is an optional
+   nickname for the client.
+ 
+   TYPE is a comma-separated tuple of types for this new client. For now, the
+   currently supported types are:
+ "Permanent" - This client's credentials should be stored in the filesystem.
+   If this is not set, the client's credentials are ephemeral
+   and stored in memory.
+ 
+   On success, "250 OK" is returned. Otherwise, the following error codes exist:
+ 251 - Client with this "PrivateKeyBlob" already existed.
+ 512 - Syntax error in "HSAddress", "PrivateKeyBlob", or "Nickname"
+ 551 - Client with this "Nickname" already exists
+ 
+ 3.31. ONION_CLIENT_AUTH_REMOVE
+ 
+   The syntax is:
+ "ONION_CLIENT_AUTH_REMOVE" SP HSAddress
+ SP "X25519PrivKey=" PrivateKeyBlob CRLF
+ 
+   Tells the connected Tor to remove the client-side v3 client auth credentials
+   for the onion service with "HSAddress" and client with key "PrivateKeyBlob".
+ 
+   On success "250 OK" is returned. Otherwise, the following error codes exist:
+ 512 - Syntax error in "HSAddress", or "PrivateKeyBlob".
+ 251 - Client with "PrivateKeyBlob" did not exist.
+ 
+ 3.32. ONION_CLIENT_AUTH_VIEW
+ 
+   The syntax is:
+ "ONION_CLIENT_AUTH_VIEW" [SP HSAddress] CRLF
+ 
+   Tells the connected Tor to list all the stored client-side v3 client auth
+   credentials for "HSAddress". If no "HSAddress" is provided, list all the
+   stored client-side v3 client auth credentials.
+ 
+   The server reply format is:
+ "250-ONION_CLIENT_AUTH_VIEW" [SP HSAddress] CRLF
+ *("250-CLIENT X25519PrivKey=" PrivateKeyBlob
+   [SP "ClientName=" Nickname]
+   [SP "Type=" TYPE] CRLF)
+ "250 OK" CRLF
+ 
+   Where "PrivateKeyBlob" is the x25519 private key of this client. "Nickname"
+   is an optional nickname for this client, which can be set either through the
+   ONION_CLIENT_AUTH_ADD command, or it's the filename of this client if the
+   credentials are stored in the filesystem.
+ 
+   TYPE is a comma-separated field of types for this client, the currently
+   supported types are:
+   "Permanent" - This client's credentials are stored in the filesystem.
+ 
+   On success "250 OK" is returned. Otherwise, the following error codes exist:
+ 512 - Syntax error in "HSAddress".
+
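As a sketch of what a controller would send, here is a hedged Python helper
that formats an ONION_CLIENT_AUTH_ADD line following the ABNF above (the
helper name and validation are mine, purely illustrative; a real controller
would write this to the control socket and check the reply code):

```python
import base64
import re

def onion_client_auth_add(hs_address, x25519_privkey, client_name=None, permanent=False):
    """Format an ONION_CLIENT_AUTH_ADD command per the ABNF above (sketch only)."""
    if not re.fullmatch(r"[a-z2-7]{56}", hs_address):
        raise ValueError("HSAddress must be 56 base32 characters")
    if len(x25519_privkey) != 32:
        raise ValueError("x25519 private keys are 32 bytes")
    line = "ONION_CLIENT_AUTH_ADD %s X25519PrivKey=%s" % (
        hs_address, base64.b64encode(x25519_privkey).decode())
    if client_name is not None:
        line += " ClientName=%s" % client_name
    if permanent:
        line += " Type=Permanent"
    return line + "\r\n"
```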
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] control-spec: Specify add/remove/view client auth commands (client-side).

2019-05-06 Thread George Kadianakis
Mark Smith  writes:

> On 5/6/19 11:19 AM, George Kadianakis wrote:
>> Hello list,
>> 
>> here is a control spec patch for adding v3 client auth commands to
>> add/remove/view clients from the client-side (so Tor Browser -> Tor):
>> 
>> https://github.com/torproject/torspec/pull/81/commits/3a26880e80617210b4729f96664ef9f0345b0b7c
>> 
>> I'm currently unhappy with the naming of those commands, and in general
>> with how easy it is to confuse them with the (non-existent) service-side
>> commands. I'm wondering how to name them better so that when we add the
>> respective service-side commands (at some point we should) there is no
>> confusion.
>> 
>> Let me know what you think!
>
> Thanks for working on this.  I have a couple of comments:
>
> 1. How does Permanent get set?  Should there by an option added to
> ADD_ONION_CLIENT_AUTH to let the client say "store this on disk"?
>

Yes we do want that! We just thought it adds to engineering complexity and
it shouldn't get in as part of the first implementation (i.e. as an s27-must).

I will still add it to the spec, and just not implement it.

> 2. For VIEW_ONION_CLIENT_AUTH it would be nice if the HSAddress
> parameter was optional.  We may want to build an interface that allows
> users to see all of their keys and choose which ones to remove, etc.
>

Good point! Will do.

Will probs have a revision for this list tomorrow!


[tor-dev] [RFC] control-spec: Specify add/remove/view client auth commands (client-side).

2019-05-06 Thread George Kadianakis
Hello list,

here is a control spec patch for adding v3 client auth commands to
add/remove/view clients from the client-side (so Tor Browser -> Tor):

https://github.com/torproject/torspec/pull/81/commits/3a26880e80617210b4729f96664ef9f0345b0b7c

I'm currently unhappy with the naming of those commands, and in general
with how easy it is to confuse them with the (non-existent) service-side
commands. I'm wondering how to name them better so that when we add the
respective service-side commands (at some point we should) there is no
confusion.

Let me know what you think!

Thanks! :)


[tor-dev] Denial of service defences for onion services

2019-04-30 Thread George Kadianakis
Hello list,

This is a thread summarizing and brainstorming various defences against denial
of service attacks on onion services, after an in-depth discussion with David
Goulet.

We've been thinking about denial of service defences for onion services
lately. This has been a recurrent topic that has been creeping up every once in
a while: Last time we had to tackle this issue it was back in early 2018 when
we had to design a DoS mitigation subsystem because the network was crumbling
down (https://trac.torproject.org/projects/tor/ticket/24902).

Unfortunately, while the DoS mitigation subsystem improved the health of the
network and stopped the DoS attacks back then, it did not address the total
space of possible attacks, and onion services and the network is still open to
various attacks. The main DoS attack right now is the naive attack of flooding
the service with too many introduction requests, and this is the attack that
this post is gonna be dealing with.

We don't like DoS attacks because they cause two issues to Tor:

   a) They damage the health of the Tor network impacting every user
   b) They kill availability of legitimate onion services.

In this thread we will handle these two issues independently, as there is no
single solution that improves both areas at once. We have some pretty good
ideas on (a), but we would appreciate ideas on (b), so feel free to give us
your input.

== a) Minimizing the damage to the network caused by DoS attacks:

   Most of the damage caused during DoS attacks is from the circuits created by
   the attacker to introduce/rendezvous to the victim onion service, and also
   by the circuits created by the victim onion service as it tries to
   rendezvous with all those clients. An attacker can literally create tens of
   thousands of introduction circuits in less than a minute, which get
   amplified by the service launching that many rendezvous circuits. Not good.

   Here are a few ways to reduce the damage to the network:

   == 1) Rate limiting introduction circuits

  There should be a way to rate-limit introductions so that services do not
  get overwhelmed. There are various places where we can rate-limit: we
  could rate-limit on the guard-layer, or on the intro-point layer or on
  the service-layer.

  We have already attempted rate-limiting on the guard-layer with
  #24902, but it's hard to go deeper there because the guard does not know
  if the circuit is a DoS attacker, or a busy onion service, or 150 Tor
  users in an airport. We also think that rate-limiting on the
  service-layer won't do much good since that's too far down the circuit,
  and we are trying to reduce the operations it has to do so that it
  doesn't get overwhelmed (see #15463 for various queue-management
  approaches for rate-limiting on the service side).

  So we've been thinking of rate-limiting on the introduction point layer,
  since it's a nice soaking point that does not do much right now. See
  #15516 (comment 28) for a concrete proposal by arma which results in far
  less damage to the network (since evil traffic does not get carried
  through to the service-side introduction circuit, and no extra rendezvous
  circuits get launched), and also a swifter way for legit clients to know
  that an onion-service circuit won't work.
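  As a rough sketch of what rate-limiting at the introduction point could look
  like (the token-bucket shape and the rate/burst numbers are invented for
  illustration, not taken from any proposal):

```python
import time

class IntroRateLimiter:
    """Per-service token bucket at the intro point; rate/burst values are
    illustrative placeholders, not numbers from any Tor proposal."""

    def __init__(self, rate=25.0, burst=200, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate, float(burst), clock
        self.tokens = float(burst)
        self.last = clock()

    def allow_introduce2(self):
        now = self.clock()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # relay the INTRODUCE2 cell towards the service
        return False      # drop (or NACK) the introduction request
```

  Dropping at the intro point means the evil traffic never reaches the
  service-side introduction circuit and no extra rendezvous circuits get
  launched, which is exactly the damage-reduction argued for above.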

   == 2) Stop needless circuit rotation on service-side

  Right now, services will rotate their introduction circuits after a
  certain number of introductions (#26294). This means that during an
  attack, the service not only needs to handle thousands of fake
  introduction circuits, but also continuously tear down and recreate
  introduction circuits and publish new descriptors. See comment 8 on that
  ticket for a short-term proposal on how to improve the situation here,
  by not continuously rotating introduction points.

   == 3) Optimize CPU performance on the service-side

  Right now, onion services during an attack are actually CPU bound. See
  #30221 for various improvements we can do to improve the performance of
  services. However, improving CPU performance might have the opposite effect,
  since processing cells quicker means that the service will make even more
  rendezvous circuits.

   == 4) Make sure attackers don't take shortcuts around the protocol

  We should make sure that attackers don't take shortcuts around the Tor
  protocol to launch their attacks. Examples here involve requiring a
  proof-of-rendezvous from clients (#25066), and not allowing single-hop
  proxies to do introductions (#22689).

   The above suggestions (maybe in priority order) are ways we can improve the
   damage dealt to the network by DoS attackers. But that still does not make
   DoS attacks less effective. So here follows the section about improving
   service availability:

== b) Improve service availability during 

Re: [tor-dev] prob_distr.c: LogLogistic fails stochastic tests on 32-bits mingw

2018-12-12 Thread George Kadianakis
George Kadianakis  writes:

> Hello Riastradh,
>
> as discussed on IRC, Appveyor recently started failing the stochastic
> tests of LogLogistic on 32-bit builds:
>  https://github.com/torproject/tor/pull/576
>  https://ci.appveyor.com/project/torproject/tor/builds/20897462
>
> I managed to reproduce the breakage by cross-compiling Tor and running
> the tests with wine, using this script of ahf: 
> https://github.com/ahf/tor-win32/
>
> Here are my findings:
>
> The following two test cases are breaking 100% reproducibly:
>
> ok = test_stochastic_log_logistic_impl(M_E, 1e-1);
> ok = test_stochastic_log_logistic_impl(exp(-10), 1e-2);
>

And here are some updates:

I followed your suggestion and turned the tests into deterministic by
sampling from a deterministic randomness source. I verified that all the
crypto_rand() call outputs are now the same between the 32-bit mingw
build and the 64-bit gcc one:
  
https://github.com/asn-d6/tor/commit/3d8c86c2f08ad2cc7ed030bbf8e11b110351f5c8

I then focused on the test_stochastic_log_logistic_impl(M_E, 1e-1) test
case and tried to figure out where the deviation was happening between
64-bit gcc and 32-bit mingw... That took a while but I finally got some
figures. Check out my commit that adds some printfs as well:
  
https://github.com/asn-d6/tor/commit/36999c640fe824ab9fb85b5d2cd15017a97a532f

So using the output from that commit I noticed that many times
log_logistic_sample() would give different outputs in these two
systems. In particular sometimes the x value would differ even with the
same (s, p0) pair, and other times the x value would be the same but the
final alpha*pow(x,1/beta) value would differ. Even tho this is the case,
the test would only fail for certain values for beta (as mentioned in my
previous email).

I now inline various such failure cases and one correct one:

Case #1 (same x, different sample value):

 mingw-32:
beta: 0x1.ap-4
s: 3122729323, p0: 0x1.68d18a44b82fbp-1
x: 0x1.d686a1e7fa35p+0
alpha*pow(x, 1/beta): 0x1.2affd5bfff433p+10

 gcc-64:
beta: 0x1.ap-4
s: 3122729323, p0: 0x1.68d18a44b82fbp-1
x: 0x1.d686a1e7fa35p+0
alpha*pow(x, 1/beta): 0x1.2affd5bfff434p+10

Case #2 (same x, different sample value):

 mingw-32:
beta: 0x1.ap-4
s: 738208646, p0: 0x1.a1ecd53def5d3p-2
x: 0x1.068987864c2aep-2
alpha*pow(x, 1/beta): 0x1.bfba380255bb8p-19

 linux:
beta: 0x1.ap-4
s: 738208646, p0: 0x1.a1ecd53def5d3p-2
x: 0x1.068987864c2aep-2
alpha*pow(x, 1/beta): 0x1.bfba380255bb9p-19
 
Case #3 (different x, different sample value):

 mingw-32:
beta: 0x1.ap-4
s: 95364755, p0: 0x1.575b5ea720e3cp-1
x: 0x1.fb7949976ab04p+0
alpha*pow(x, 1/beta): 0x1.3e605e169e8cbp+11

 gcc-64:
beta: 0x1.ap-4
s: 95364755, p0: 0x1.575b5ea720e3cp-1
x: 0x1.fb7949976ab03p+0
alpha*pow(x, 1/beta): 0x1.3e605e169e8c5p+11

Case #4 (different x, different sample value):

 mingw-32:
beta: 0x1.ap-4
s: 2082443965, p0: 0x1.530a8759113bp-2
x: 0x1.42989e50ac641p+2
alpha*pow(x, 1/beta): 0x1.b724d48bf0f6cp+24

 gcc-64:
beta: 0x1.ap-4
s: 2082443965, p0: 0x1.530a8759113bp-2
x: 0x1.42989e50ac64p+2
alpha*pow(x, 1/beta): 0x1.b724d48bf0f5ep+24

Case #5 (different x, different sample value):

 mingw-32:
beta: 0x1.ap-4
s: 443038967, p0: 0x1.b0124b971bbf3p-4
x: 0x1.1f5b72f5f6a3ep+4
alpha*pow(x, 1/beta): 0x1.143a16cdae94fp+43

 gcc-64:
beta: 0x1.ap-4
s: 443038967, p0: 0x1.b0124b971bbf3p-4
x: 0x1.1f5b72f5f6a3fp+4
alpha*pow(x, 1/beta): 0x1.143a16cdae958p+43

Case #6 (same sample value):

 mingw-32:
beta: 0x1.ap-4
s: 2932701594, p0: 0x1.b407f600e6d87p-1
x: 0x1.7bb183ccc47efp-1
alpha*pow(x, 1/beta): 0x1.181016f03c09p-3

 gcc-64:
beta: 0x1.ap-4
s: 2932701594, p0: 0x1.b407f600e6d87p-1
x: 0x1.7bb183ccc47efp-1
alpha*pow(x, 1/beta): 0x1.181016f03c09p-3
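For what it's worth, the mismatches above are all last-place floating-point
differences: x differs by at most one ulp between the two builds (my guess,
unverified, is 32-bit mingw's x87 extended-precision intermediates rounding
differently), and pow() then amplifies that by a few ulps in the final
sample. A quick check with Case #1's values:

```python
import math

# Case #1 above: same (s, p0) and same x, but the final samples differ
# in the last hexadecimal digit only, i.e. by exactly one ulp.
mingw = float.fromhex("0x1.2affd5bfff433p+10")
gcc64 = float.fromhex("0x1.2affd5bfff434p+10")
print(gcc64 - mingw == math.ulp(mingw))  # True: a 1-ulp discrepancy
```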


[tor-dev] prob_distr.c: LogLogistic fails stochastic tests on 32-bits mingw

2018-12-11 Thread George Kadianakis
Hello Riastradh,

as discussed on IRC, Appveyor recently started failing the stochastic
tests of LogLogistic on 32-bit builds:
 https://github.com/torproject/tor/pull/576
 https://ci.appveyor.com/project/torproject/tor/builds/20897462

I managed to reproduce the breakage by cross-compiling Tor and running
the tests with wine, using this script of ahf: https://github.com/ahf/tor-win32/

Here are my findings:

The following two test cases are breaking 100% reproducibly:

ok = test_stochastic_log_logistic_impl(M_E, 1e-1);
ok = test_stochastic_log_logistic_impl(exp(-10), 1e-2);

The breakage seems to be because of the beta parameter. In particular,
it seems like the test will break with any beta <= 0.26, and will
succeed with a beta >= 0.27. The space in between is still unclear ;)

I haven't managed to figure out what's actually offending the test but I
can reproduce this so I can do some digging if you have any ideas.

FWIW, I haven't noticed any other stochastic test breakage.

Cheers!


[tor-dev] Updates and review on "Proposal 254: Padding Negotiation"

2018-10-30 Thread George Kadianakis
Hey Mike,

I took another look at prop#254 and made some changes of my own in my
torspec branch circuitpadding-proposal-updates (see commit ab37543). Let
me know if they look right to you. Some of those I had to look into the
code to understand, and I hope I got them right.

Furthermore, I opened a pull request with a few questions and comments
for further improvements and clarifications:
   https://github.com/torproject/torspec/pull/39

Cheers!


Re: [tor-dev] Temporary hidden services

2018-10-19 Thread George Kadianakis
Michael Rogers  writes:

> On 18/10/2018 13:26, George Kadianakis wrote:
>> Michael Rogers  writes:
>> 
>>> Hi George,
>>>
>>> On 15/10/2018 19:11, George Kadianakis wrote:
>>>> Nick's trick seems like a reasonable way to avoid the issue with both 
>>>> parties
>>>> knowing the private key.
>>>
>>> Thanks! Good to know. Any thoughts about how to handle the conversion
>>> between ECDH and EdDSA keys?
>>>
>> 
>> Hmm, that's a tricky topic! Using the same x25519 keypair for DH and
>> signing is something that should be done only under supervision by a
>> proper cryptographer(tm). I'm not a proper cryptographer so I'm
>> literally unable to evaluate whether doing it in your case would be
>> secure. If it's possible I would avoid it altogether...
>> 
>> I think one of the issues is that when you transform your x25519 DH key
>> to an ed25519 key and use it for signing, if the attacker is able to
>> choose what you sign, the resulting signature will basically provide a
>> DH oracle to the attacker, which can result in your privkey getting
>> completely pwned. We actually do this x25519<->ed255519 conversion for
>> onionkeys cross-certificates (proposal228) but we had the design
>> carefully reviewed by people who know what's going on (unlike me).
>> 
>> In your case, the resulting ed25519 key would be used to sign the
>> temporary HS descriptor. The HS descriptor is of course not entirely
>> attacker-controlled data, but part of it *could be considered* attacker
>> controlled (e.g. the encrypted introduction points), and I really don't
>> know whether security can be impacted in this case. Also there might be
>> other attacks that I'm unaware of... Again, you need a proper
>> cryptographer for this.
>
> Thanks, that confirms my reservations about converting between ECDH and
> EdDSA keys, especially when we don't fully control what each key will be
> used for. I think we'd better hold off on that approach unless/until the
> crypto community comes up with idiot-proof instructions.
>
>> A cheap way to avoid this, might be to include both an x25519 and an
>> ed25519 key in the "link" you send to the other person. You use the
>> x25519 key to do the DH and derive the shared secret, and then both
>> parties use the shared secret to blind the ed25519 key and derive the
>> blinded (aka hierarchically key derived) temporary onion service
>> address... Maybe that works for you but it will increase the link size
>> to double, which might impact UX.
>
> Nice! Link size aside, that sounds like it ought to work.
>
> A given user's temporary hidden service addresses would all be related
> to each other in the sense of being derived from the same root Ed25519
> key pair. If I understand right, the security proof for the key blinding
> scheme says the blinded keys are unlinkable from the point of view of
> someone who doesn't know the root public key (and obviously that's a
> property the original use of key blinding requires). I don't think the
> proof says whether the keys are unlinkable from the point of view of
> someone who does know the root public key, but doesn't know the blinding
> factors (which would apply to the link-reading adversary in this case,
> and also to each contact who received a link). It seem like common sense
> that you can't use the root key (and one blinding factor, in the case of
> a contact) to find or distinguish other blinded keys without knowing the
> corresponding blinding factors. But what seems like common sense to me
> doesn't count for much in crypto...
>

Hm, where did you get this about the security proof? The only security
proof I know of is https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf and I
don't see that assumption anywhere in there, but it's also been a long while since
I read it.

I think in general you are OK here. An informal argument: according to
rend-spec-v3.txt appendix A.2 the key derivation is as follows:

derived private key: a' = h a (mod l)
derived public key: A' = h A = (h a) B

In your case, the attacker does not know 'h' (the blinding factor),
whereas in the case of onion service the attacker does not know 'a' or
'a*B' (the private/public key). In both cases, the attacker is missing
knowledge of a secret scalar, so it does not seem to make a difference
which scalar the attacker does not know.

Of course, the above is super informal, and I'm not a cryptographer,
yada yada.
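To make the informal argument concrete, here is a toy Python sketch (slow,
unhardened, textbook ed25519 arithmetic for illustration only) showing that
the blinding commutes: h*(a*B) equals (h*a mod l)*B, so either party can
compute the blinded public key A' from what they know:

```python
# Minimal ed25519 group arithmetic (not constant-time, illustration only).
p = 2**255 - 19
l = 2**252 + 27742317777372353535851937790883648493   # subgroup order
d = -121665 * pow(121666, p - 2, p) % p

def add(P, Q):
    """Twisted Edwards point addition on -x^2 + y^2 = 1 + d*x^2*y^2."""
    (x1, y1), (x2, y2) = P, Q
    den = d * x1 * x2 * y1 * y2
    x3 = (x1 * y2 + x2 * y1) * pow(1 + den, p - 2, p) % p
    y3 = (y1 * y2 + x1 * x2) * pow(1 - den, p - 2, p) % p
    return (x3, y3)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    Q = (0, 1)  # identity element
    while k:
        if k & 1:
            Q = add(Q, P)
        P = add(P, P)
        k >>= 1
    return Q

# Recover the standard base point B = (Bx, 4/5).
By = 4 * pow(5, p - 2, p) % p
xx = (By * By - 1) * pow(d * By * By + 1, p - 2, p) % p
Bx = pow(xx, (p + 3) // 8, p)
if (Bx * Bx - xx) % p:
    Bx = Bx * pow(2, (p - 1) // 4, p) % p
if Bx % 2:
    Bx = p - Bx
B = (Bx, By)

a, h = 12345, 67890            # toy private scalar and blinding factor
A = mul(a, B)                  # long-term public key A = a*B
print(mul(h, A) == mul(h * a % l, B))  # True: A' = h*A = (h*a)*B
```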

> We'd also have to be careful about the number of blinded keys generated
> from a given root key. The security proof uses T = 2^16 as an example
> for the maximum number of epochs, giving a 16-bit security loss 

Re: [tor-dev] Temporary hidden services

2018-10-15 Thread George Kadianakis
Michael Rogers  writes:

> Hi all,
>
> The Briar team is working on a way for users to add each other as
> contacts by exchanging links without having to meet in person.
>
> We don't want to include the address of the user's long-term Tor hidden
> service in the link, as we assume the link may be observed by an
> adversary, who would then be able to use the availability of the hidden
> service to tell whether the user was online at any future time.
>
> We're considering two solutions to this issue. The first is to use a
> temporary hidden service that's discarded after, say, 24 hours. The
> address of the temporary hidden service is included in the link. This
> limits the window during which the user's activity is exposed to an
> adversary who observes the link, but it also requires the contact to use
> the link before it expires.
>
> The second solution is to include an ECDH public key in the link,
> exchange links with the contact, and derive a hidden service key pair
> from the shared secret. The key pair is known to both the user and the
> contact. One of them publishes the hidden service, the other connects to
> it. They exchange long-term hidden service addresses via the temporary
> hidden service, which is then discarded.
>
> The advantage of the second solution is that the user's link is static -
> it doesn't expire and can be shared with any number of contacts. A
> different shared secret, and thus a different temporary hidden service,
> is used for adding each contact.
>
> But using a hidden service in such a way that the client connecting to
> the service knows the service's private key is clearly a departure from
> the normal way of doing things. So before pursuing this idea I wanted to
> check whether it's safe, in the sense that the hidden service still
> conceals its owner's identity from the client.
>

Hello Michael,

Nick's trick seems like a reasonable way to avoid the issue with both parties
knowing the private key.

I have a separate question wrt the threat model:

It seems to me that adversary in this game can observe the link, and all
these stunts are done just for the case where the adversary steals the
link (i.e. the temporary ECDH public keys).

In that case, given that both Alice and Bob are completely
unauthenticated and there is no root trust, how can you ensure that the
adversary Eve won't perform the ECDH herself, then connect to the
temporary onion service, and steal the long-term onion service link
(thereby destroying the secrecy of the long-term onion service for ever,
even if the attack is detected in the future through Alice and Bob
communicating in an out-of-band way).

Are we assuming that Alice and Bob have no common shared-secret in
place?  Because if they did, then you could use that from the start to
encrypt the long-term onion service identifier. If you don't, you could
potentially fall into attacks like the one above.

Cheers!


Re: [tor-dev] State of the HA proxy onion patch

2018-09-17 Thread George Kadianakis
Mahrud S  writes:

> Hi George,
>
> I think it looks good. Only comment I have is that it would be nice to have
> an option to change the ipv6 subset, though I imagine people who would use
> it can easily recompile with their own setting.
>

Agreed.

IMO we should open a ticket about making the subnet configurable, and handle 
that in the future.




Re: [tor-dev] State of the HA proxy onion patch

2018-09-15 Thread George Kadianakis
Mahrud S  writes:

> Hi George,
>
> I was trying to find a way to use the virtual port (i.e.
> blahblah.onion:*port*) as dst_port, but I couldn't find a suitable in time.
> For our purposes specifically, we only needed virtual port 443 for https,
> so I hard-coded 443 in an almost identical branch on top of
> 0.3.5.0-alpha-dev here:
> https://github.com/mahrud/tor/commit/a81eac6d0c0a35adc6036e736565f4a8e2f806fd
>
> As far as I know we haven't run into any issues so I kept it minimal, but
> the torrc option would be very much appreciated!
>

Hey Mahrud,

we have a ready-to-merge version of #4700 ready.

Check: https://github.com/torproject/tor/pull/343
for the latest PR.

and https://trac.torproject.org/projects/tor/ticket/4700#comment:21
https://github.com/torproject/tor/pull/327 (the old PR)
if you want to read the review comments and bugs.

Let us know if you have any questions or if you don't like something.

Thanks! :)



[tor-dev] State of the HA proxy onion patch

2018-09-05 Thread George Kadianakis
Hello Mahrud,

I wanted to ask if you've been using the #4700 branch and how is it going?

We've been planning to include #4700 in the upcoming 0.3.5 release if
possible, and we remember that you had some pending patches to it. Do
you think you can publish those somewhere if they are to be included upstream?

There are also some further mods that need to happen that I'm not sure
if you've performed in your local branch (torrc option & restricting the
feature only to onion connections, as per #4700).

Let us know how it's working for you and whether you have any patches
that we should have in mind, so that we can see if we can fit it in the
035 release.

Thank you! :)


Re: [tor-dev] Alternative directory format for v3 client auth

2018-08-14 Thread George Kadianakis
George Kadianakis  writes:

> George Kadianakis  writes:
>
>> Hello haxxpop and David,
>>
>> here is a patch with an alternative directory format for v3 client auth
>> crypto key bookkeeping as discussed yesterday on IRC:
>>https://github.com/torproject/torspec/pull/23
>>
>> Thanks for making me edit the spec because it made me think of various
>> details that had to be thought of.
>
> Hello again,
>
> there have been many discussions about client auth since that last email
> a month ago. Here is a newer branch that we want to get merged so that
> we proceed with implementation: https://github.com/torproject/torspec/pull/33
>
> The first commit is the same as in the original post, and all subsequent
> commits are improvements on top of it.
>
> Here are a few high-level changes that were made after discussion:
>
> - Ditched intro auth for now, since descriptor auth is sufficient for
>   our threat model, and trying to support two different auth types would
>   complicate things.
>
> - Opted for a KISS design for now where we don't ask Tor to generate
>   client auth keys on either the client side or the service side.
>   For now we assume that client/service-side generated their keys with
>   an external tool, and we will build such tools in the future, instead
>   of spending too much time bikeshedding about it right now.
>
> - Client auth is enabled if the client auth directory is populated with
>   the right files, instead of relying on torrc switches etc.
>
> Furthermore, the last three commits are quick mainly-cosmetic changes I
> did alone before posting this here. Inform me if you don't like those.
>
> I'll let this simmer here for a few days before merging it in torspec.
> Let me know if you have questions! Thanks for reading!
>

FWIW, the above spec branch has been merged upstream to torspec.git!

Feedback is still welcome and we will patch upstream if needed! :)


Re: [tor-dev] Alternative directory format for v3 client auth

2018-08-08 Thread George Kadianakis
George Kadianakis  writes:

> Hello haxxpop and David,
>
> here is a patch with an alternative directory format for v3 client auth
> crypto key bookkeeping as discussed yesterday on IRC:
>https://github.com/torproject/torspec/pull/23
>
> Thanks for making me edit the spec because it made me think of various
> details that had to be thought of.

Hello again,

there have been many discussions about client auth since that last email
a month ago. Here is a newer branch that we want to get merged so that
we proceed with implementation: https://github.com/torproject/torspec/pull/33

The first commit is the same as in the original post, and all subsequent
commits are improvements on top of it.

Here are a few high-level changes that were made after discussion:

- Ditched intro auth for now, since descriptor auth is sufficient for
  our threat model, and trying to support two different auth types would
  complicate things.

- Opted for a KISS design for now where we don't ask Tor to generate
  client auth keys on either the client side or the service side.
  For now we assume that client/service-side generated their keys with
  an external tool, and we will build such tools in the future, instead
  of spending too much time bikeshedding about it right now.

- Client auth is enabled if the client auth directory is populated with
  the right files, instead of relying on torrc switches etc.

Furthermore, the last three commits are quick, mainly-cosmetic changes I
made alone before posting this here. Let me know if you don't like them.

I'll let this simmer here for a few days before merging it in torspec.
Let me know if you have questions! Thanks for reading!



Re: [tor-dev] Reviewing Trac #18642 (Teach the OOM handler about the DNS cache)

2018-08-06 Thread George Kadianakis
n...@neelc.org writes:

> Hi tor-dev@ mailing list,
>
> I have a patch for Bug #18642 (Teach the OOM handler about the DNS cache) 
> which I would like reviewed.
>
> The URL is here: https://trac.torproject.org/projects/tor/ticket/18642 
> (https://trac.torproject.org/projects/tor/ticket/18642)
>
> Originally, dgoulet said he would review, but after no response on the 
> patch, he has emailed me that he's on vacation until the 16th of August, 
> hence the reason why I'm emailing here. I am really keen on getting this 
> patch in, and if there's any Tor developer here, could someone please review 
> and merge it?
>

Hey Neel!

Sorry about that! We are still working on our review-assignment workflow
so that kinda got stuck in the pipeline!

It has been defragged and I plan to assign it to someone-not-dgoulet today!

Cheers! :)


Re: [tor-dev] WTF-PAD and the future

2018-07-29 Thread George Kadianakis
Mike Perry  writes:

> George Kadianakis:
>> Hello Mike,
>> 
>> I had a talk with Marc and Mohsen today about WTF-PAD. I now understand
>> much more about WTF-PAD and how it works with regards to histograms.  I
>> think I might even understand enough to start some sort of conversation
>> about it:
>> 
>> Here are some takeaways:
>> 
>> 1) Marc and Mohsen think that WTF-PAD might not be the way forward
>>because of its various drawbacks and its complexity. Apparently there
>>are various attacks on WTF-PAD that Roger has discovered (SENDME
>>cells side-channels?) and also the deep learning crowd has done some
>>pretty good damage to the WTF-PAD padding (90%-60% accuracy?). They
>>also told me that achieving needed precision on the timings might be
>>a PITA.
>
> Are there citations for any of this? Last I heard Matt Wright was
> working on a deep learning study but the results were mixed.
>

I think this is the best we have in terms of public results:
  https://arxiv.org/abs/1801.02265

>> 2) From what I understand you are also hoping to use WTF-PAD to protect
>>against circuit fingerprinting and not just website
>>fingerprinting. They told me that while this might be plausible,
>>there is no current research on how well it can achieve that.  Are we
>>hoping to do that? And what research remains here? How can I help?
>>Which parts of the Tor circuit protocol are we hoping to hide?
>
> I am designing WTF-PAD to be a framework for deploying padding against
> arbitrary traffic analysis attacks. It is meant to allow us to define
> histograms on the fly (in the Tor consensus) as these are studied. The
> fact that they have not yet been studied is not super relevant to
> deploying the framework for it now.
>

ACK.

What other traffic analysis attacks are we looking at addressing here?

I'm thinking of stuff like "circuit fingerprinting of onion services",
but I wonder if histograms and random sampling are too crude to actually
be able to help against sophisticated attacks. I don't currently have a
suggestion for something better.

On that topic, is it decided whether the adaptive padding of WTF-PAD
will also happen during circuit construction, or only after that?

>> 3) Marc and Mohsen suggested using application-layer defences because
>>    the application layer has a much better view of the actual structures
>>that are sent on the wire, instead of the black box view that the
>>network layer has.
>> 
>>In particular they were mainly concerned about onion services
>>fingerprinting because they are part of a restricted closed world,
>>whereas they were less concerned about the entire internet because of
>>its vast size.
>> 
>>They suggested that we could investigate using the service-side
>>"alpaca" library for onion services (e.g. as part of securedrop?)
>>which should resolve the most pressing concern of HS identification.
>
> I mean yeah application-layer defenses are useful for website traffic
> fingerprinting, but that is a very narrow slice of the traffic analysis
> problems that I want this framework to solve.
>
> WTF-PAD also doesn't rule out hidden service operators using alpaca,
> either. 
>

Agreed.

>> 4) They also told me of research by Tobias Pulls which eliminates the
>>    need for histograms in WTF-PAD and instead samples from the
>>probability distribution directly. They think that this can simplify
>>things somewhat. Any thoughts on this?
>
> Yes this is actually exactly what I want to do with the next iteration
> of WTF-PAD! The question is what form/model to use for these probability
> distributions. Right now we're encoding inter-burst and inter-packet
> timings with some weird geometric distribution determining how long
> these bursts should go on for, when it might be more natural to encode
> and sample from length-based distributions/histograms.
>
> (Histograms vs distribution is not the problem -- its what they encode
> and how they encode it that matters).
>
> I don't see this paper on Tobias's website. Is it up anywhere yet?
>  

Hmm. Looking at the README of wtfpad (see the APE section), I think this
blog post is the best resource we have on this:
 https://www.cs.kau.se/pulls/hot/thebasketcase-ape/
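For readers unfamiliar with the distinction discussed above, here is a
minimal Python sketch of histogram-based versus direct-distribution
sampling of padding delays. The bins, weights, and distribution
parameters are made up for illustration; they are not WTF-PAD's or
APE's actual values.

```python
import random

# Illustrative histogram: (low, high) delay bins in seconds, with weights.
BINS = [(0.0, 0.01), (0.01, 0.1), (0.1, 1.0)]
WEIGHTS = [5, 3, 1]

def sample_histogram():
    """Histogram-style: pick a bin by weight, then a uniform delay in it."""
    low, high = random.choices(BINS, weights=WEIGHTS, k=1)[0]
    return random.uniform(low, high)

def sample_distribution(scale=0.05):
    """APE-style simplification: draw the delay directly from a
    parametric distribution instead of a histogram."""
    return random.expovariate(1.0 / scale)
```

The second approach trades the flexibility of arbitrary histogram shapes
for a much smaller parameter space to tune and ship in the consensus.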



[tor-dev] WTF-PAD and the future

2018-07-27 Thread George Kadianakis
Hello Mike,

I had a talk with Marc and Mohsen today about WTF-PAD. I now understand
much more about WTF-PAD and how it works with regards to histograms.  I
think I might even understand enough to start some sort of conversation
about it:

Here are some takeaways:

1) Marc and Mohsen think that WTF-PAD might not be the way forward
   because of its various drawbacks and its complexity. Apparently there
   are various attacks on WTF-PAD that Roger has discovered (SENDME
   cells side-channels?) and also the deep learning crowd has done some
   pretty good damage to the WTF-PAD padding (90%-60% accuracy?). They
   also told me that achieving needed precision on the timings might be
   a PITA.

2) From what I understand you are also hoping to use WTF-PAD to protect
   against circuit fingerprinting and not just website
   fingerprinting. They told me that while this might be plausible,
   there is no current research on how well it can achieve that.  Are we
   hoping to do that? And what research remains here? How can I help?
   Which parts of the Tor circuit protocol are we hoping to hide?

3) Marc and Mohsen suggested using application-layer defences because
   the application layer has a much better view of the actual structures
   that are sent on the wire, instead of the black box view that the
   network layer has.

   In particular they were mainly concerned about onion services
   fingerprinting because they are part of a restricted closed world,
   whereas they were less concerned about the entire internet because of
   its vast size.

   They suggested that we could investigate using the service-side
   "alpaca" library for onion services (e.g. as part of securedrop?)
   which should resolve the most pressing concern of HS identification.

4) They also told me of research by Tobias Pulls which eliminates the
   need for histograms in WTF-PAD and instead samples from the
   probability distribution directly. They think that this can simplify
   things somewhat. Any thoughts on this?

Let me know what you think. I still don't understand the entire space
completely yet, so please be gentle. ;) 

Cheers! :)


Re: [tor-dev] Alternative directory format for v3 client auth

2018-07-26 Thread George Kadianakis
Alex Xu  writes:

> Quoting George Kadianakis (2018-07-11 19:26:06), as excerpted
>> Michael Rogers  writes:
>> 
>> > On 11/07/18 14:22, George Kadianakis wrote:
>> >> Michael Rogers  writes:
>> >> 
>> > First, Ed25519-based authentication ("intro auth"). Could this be punted
>> > to the application layer, or is there a reason it has to happen at the
>> > Tor layer?
>> >
>> 
>> Yes, it could be stuffed into the application layer. However that could be
>> an argument for everything (including end-to-end encryption of onions).
>> 
>> It might be the case that some application-layer protocols don't allow
>> any sort of pluggable authentication to happen on top of them, or that
>> users wouldn't want to enable them for some reason. Does this feel like
>> an artificial reason to you?
>> 
>> Another positive thing about intro auth is that it allows fine-grained
>> control over authentication, potentially allowing different tiers of
>> users etc.
>
> That might be true, but it's not an argument for intro auth, because
> application-layer authentication offers that too.
>
>> Also see https://lists.torproject.org/pipermail/tor-dev/2018-May/013155.html
>> 
>> > Fourth, what goals does desc auth achieve in the v3 design? If I
>> > understand right, in v2 its major goal was to hide the intro points from
>> > everyone except authorised clients (including HSDirs). In v3 the intro
>> > points are already hidden from anyone who doesn't know the onion address
>> > (including HSDirs), so this goal can be achieved by not revealing the
>> > onion address to anyone except authorised clients.
>> >
>> > I'm probably missing something, but as far as I can see the only other
>> > goal achieved by desc auth is the ability to revoke a client's access
>> > without needing to distribute a new onion address to other clients. This
>> > seems useful. But again, I'd ask whether it could be punted to the
>> > application layer. The only advantage I can see from putting it at the
>> > Tor layer is that the list of intro points is hidden from revoked
>> > clients. Is there a real world use case where that's a big enough
>> > advantage to justify putting all this authorisation machinery at the Tor
>> > layer? Or maybe there are other things this design achieves that I
>> > haven't thought of.
>> >
>> 
>> Yes, you identified the point of desc auth correctly.
>> 
>> Another very important reason to have an authorization system inside
>> Tor, is because it allows only authorized clients to rendezvous (and in
>> general directly interact) with the onion service. That can mitigate all
>> sorts of guard discovery and correlation attacks that could be doable by
>> anyone, and restrict them only to authorized users.
>> 
>> Of course the above is achieved with either desc auth or intro
>> auth. Having both of them does not offer any benefits in this direction.
>
> asn said that a benefit of Tor-level authentication is that users may be
> likely to accidentally reveal their onion service address, e.g. by
> posting screenshots, or copying and pasting the URL, but are less likely
> to accidentally reveal their separate authentication credentials.
>
> I thought of a minor benefit of desc auth: revoked clients are prevented
> entirely from attacking the onion service, e.g. by DDoS.

True. This is actually one of the most useful benefits of client auth
right now: blocking introduction requests from non-authenticated clients
and hence blocking guard discovery or DDoS attacks.


Re: [tor-dev] HS v3 client authorization types

2018-07-12 Thread George Kadianakis
David Goulet  writes:

> On 18 May (19:03:09), George Kadianakis wrote:
>> Ian Goldberg  writes:
>> 
>> > On Thu, May 10, 2018 at 12:20:05AM +0700, Suphanat Chunhapanya wrote:
>> >> On 05/09/2018 03:50 PM, George Kadianakis wrote:
>> >> > b) We might also want to look into XEdDSA and see if we can potentially
>> >> >use the same keypair for both intro auth (ed25519) and desc auth
>> >> (x25519).
>> >> 
>> >> This will be a great advantage if we can do that because putting two
>> >> private keys in the HidServAuth is so frustrating.
>> >
>> > The private key for intro auth is used to make a signature (that will be
>> > different per client), while the private key for desc auth is used to
>> > decrypt the descriptor (which will be the same for all clients), no?
>> >
>> 
>> Hm. Both intro auth and desc auth keys are different for each client. In
>> the case of desc auth we do that so that we can revoke a client without
>> needing to refresh desc auth keys for all other clients.
>
> Following yesterday's discussion on IRC with haxxpop and asn, and some more
> today, I worked on a revised version of the spec:
>
> https://gitweb.torproject.org/user/dgoulet/torspec.git/commit/?h=ticket20700_01
>
> Probably will be easier to just read the whole thing instead of the diff:
>
> https://gitweb.torproject.org/user/dgoulet/torspec.git/tree/rend-spec-v3.txt?h=ticket20700_01#n2279
>
> So the idea is that instead of making the HS client/operator have to pass
> around portions of a file containing private and public keys, it is to
> logically separate them so that the operator only deals with one single file
> when wanting to transmit the keys to a client.
>

Thanks for the fixes David.

Please see last commit of https://github.com/torproject/torspec/pull/24
for some stuff on top of your branch.

Some things we need to think about:
- The ".pubkeys" files are now used internally by Tor, whereas the
  "./client_cfg_lines" file is the one that the operator is supposed to
  look at and interact with. Is it easier for the operator to deal with
  one big file, or with many small files? We should think about that and
  maybe reverse our choices.

  As an example, how is the operator supposed to know which line in
  "./client_cfg_lines" is for which client? In my patch above I used
  # comments to separate lines but that might not be straightforward for
  people.

- Assuming that we are not doing intro auth any time soon, I deleted all
  mentions of ed25519 keys from that side of the spec, in the assumption
  that we will need to introduce them back the right way if we ever
  decide to do intro auth. Is this a good idea or not?

  As an example of the complexity I'm trying to hide, if we keep ed25519
  in the spec, we need to specify how the HidServAuth line knows whether
  a key is x25519 or ed25519.

- Do we need to define new torrc options for service-side and client-side?

Some more things to do:
- Rename "./client_authorized" to "./authorized_clients"?
- Rename "./client_cfg_lines" to 
- What's the "auth-type"? I assume standard.


Re: [tor-dev] Alternative directory format for v3 client auth

2018-07-11 Thread George Kadianakis
Michael Rogers  writes:

> On 11/07/18 14:22, George Kadianakis wrote:
>> Michael Rogers  writes:
>> 
>>> On 10/07/18 19:58, George Kadianakis wrote:
>>>> here is a patch with an alternative directory format for v3 client auth
>>>> crypto key bookkeeping as discussed yesterday on IRC:
>>>>https://github.com/torproject/torspec/pull/23
>>>>
>>>> Thanks for making me edit the spec because it made me think of various
>>>> details that had to be thought of.
>>>>
>>>> Let me know if you don't like it or if something is wrong.
>>>
>>> Minor clarification: line 2298 says the keypair is stored, it might be
>>> clearer to say the private key is stored.
>>>
>>> Nitpick: should the directory be called "client_authorized_privkeys" if
>>> it might contain private keys, public keys, or a mixture of the two?
>>>
>> 
>> Good points in both cases. Will fix soon (along with other feedback if 
>> received).
>> 
>> Other than that, what do you think about the whole concept? Too complex?
>> Logical? Too much?
>> 
>> Cheers for the feedback! :)
>
> Sorry for being late to the party - I just this morning finished reading
> the thread from 2016 where the client auth design was hashed out. :-/
>
> I think putting each client's keys in a separate file makes a lot of sense.
>
> At a higher level there are some things I'm not sure about. Sorry if
> this is threadjacking, but you said the magic words "whole concept". ;-)
>

Thanks for raising these issues and for taking the time to read the
previous thread. We really need feedback like this from people like you
who have used our systems :)

> First, Ed25519-based authentication ("intro auth"). Could this be punted
> to the application layer, or is there a reason it has to happen at the
> Tor layer?
>

Yes, it could be stuffed into the application layer. However, that could be
an argument for everything (including end-to-end encryption of onions).

It might be the case that some application-layer protocols don't allow
any sort of pluggable authentication to happen on top of them, or that
users wouldn't want to enable them for some reason. Does this feel like
an artificial reason to you?

Another positive thing about intro auth is that it allows fine-grained
control over authentication, potentially allowing different tiers of
users etc.

Also see https://lists.torproject.org/pipermail/tor-dev/2018-May/013155.html

> Second, X25519-based authorization ("desc auth"). If I understand right,
> using asymmetric keypairs here rather than symmetric keys makes it
> possible for the client to generate a keypair and send the public key to
> the service over an authenticated but not confidential channel. But the
> client may not know how to do that, so we also need to support an
> alternative workflow where the service generates the keypair and sends
> the private key to the client over an authenticated and confidential
> channel.
>
> The upside of this design is the ability to use an authenticated but not
> confidential channel (as long as the client and service understand which
> workflow they need to use). The downside is extra complexity. I'm not
> really convinced this is a good tradeoff. But I'm guessing this argument
> has already been had, and my side lost. :-)
>

Yes, you have described it very well.
And I agree that the tradeoff is complicated.

> Third, what's the purpose of the fake auth-client lines for a service
> that doesn't use client auth? I understand that when a service does use
> client auth, it may not want clients (or anyone else who knows the onion
> address) to know the exact number of clients. But when a service doesn't
> use client auth, anyone who can decrypt the first layer of the
> descriptor can also decrypt the second layer, and therefore knows that
> the auth-client lines are fake. So are they just for padding in that
> case? But the first layer's padded before encryption anyway.
>

Yes, fake auth-client lines when client auth is disabled are not very
useful as you point out (also see #23641).

> Fourth, what goals does desc auth achieve in the v3 design? If I
> understand right, in v2 its major goal was to hide the intro points from
> everyone except authorised clients (including HSDirs). In v3 the intro
> points are already hidden from anyone who doesn't know the onion address
> (including HSDirs), so this goal can be achieved by not revealing the
> onion address to anyone except authorised clients.
>
> I'm probably missing something, but as far as I can see the only other
> goal achieved by desc auth is the ability to revoke a client's access
> without needing to distribute a new onion address to other clients.

Re: [tor-dev] Alternative directory format for v3 client auth

2018-07-11 Thread George Kadianakis
Michael Rogers  writes:

> On 10/07/18 19:58, George Kadianakis wrote:
>> here is a patch with an alternative directory format for v3 client auth
>> crypto key bookkeeping as discussed yesterday on IRC:
>>https://github.com/torproject/torspec/pull/23
>> 
>> Thanks for making me edit the spec because it made me think of various
>> details that had to be thought of.
>> 
>> Let me know if you don't like it or if something is wrong.
>
> Minor clarification: line 2298 says the keypair is stored, it might be
> clearer to say the private key is stored.
>
> Nitpick: should the directory be called "client_authorized_privkeys" if
> it might contain private keys, public keys, or a mixture of the two?
>

Good points in both cases. Will fix soon (along with other feedback if 
received).

Other than that, what do you think about the whole concept? Too complex?
Logical? Too much?

Cheers for the feedback! :)


[tor-dev] Alternative directory format for v3 client auth

2018-07-10 Thread George Kadianakis
Hello haxxpop and David,

here is a patch with an alternative directory format for v3 client auth
crypto key bookkeeping as discussed yesterday on IRC:
   https://github.com/torproject/torspec/pull/23

Thanks for making me edit the spec because it made me think of various
details that had to be thought of.

Let me know if you don't like it or if something is wrong.

Cheers!


[tor-dev] The case with Tor2Web

2018-07-09 Thread George Kadianakis
Hello!

It's a semi-secret that tor2web traffic has been blocked from the Tor
network since we introduced the DoS subsystem this March [0]. The reason
is that a big part of the DoS traffic was coming from one-hop clients
continuously hammering onion services.

This is something that we've been considering doing for a while (for
security and code-complexity reasons), and it just happened naturally
during the DoS incident.

As part of this, and since the DoS subsystem is going to stick around,
we are planning to permanently kill the Tor2Web subsystem of Tor, in an
effort to simplify our codebase and our feature list.

If you've been relying on tor2web for something, please consider
switching to a normal 3-hop client. This is a heads-up so that you can
let us know if that won't work for you, or if you need help
transitioning.

Cheers and hope we are not making you sad.

[0]: https://trac.torproject.org/projects/tor/ticket/24902
 
https://blog.torproject.org/new-stable-tor-releases-security-fixes-and-dos-prevention-03210-03110-02915


Re: [tor-dev] DoH over non-HTTPS onion v3

2018-06-23 Thread George Kadianakis
nusenu  writes:

> Hi,
>
> this is just a short heads-up.
>
> I'm currently tinkering with how we could
> improve DNS security and privacy for tor clients. My idea write-up is not done
> yet but since the IETF DoH WG [1] is proceeding towards their next steps
> I wanted to move now before it might be too late, and let you know that I
> might ask them if they want to allow non-HTTPS uris in the case of
> onion v3 addresses (currently HTTPS is required). This might be handy for TB 
> in the future.
> If you have objections let me know.
>
> I also reached out to Seth Schoen and asked him about his
> efforts to make onion v3 DV certificates acceptable to the CA/Browser Forum 
> (if that is possible then the HTTPS requirement isn't a problem for DoH over 
> onion v3).
>

IIUC, you are trying to persuade the working group to allow plain-HTTP
v3 onion services as DoH resolvers.

Sounds good to me! Let us know how we can support you with this :)


[tor-dev] Proposal 292: Mesh-based vanguards

2018-05-28 Thread George Kadianakis
Hello list,

here is the vanguard proposal that supersedes proposal 247.

It specifies the newest vanguard design which we've been working on:
   https://github.com/mikeperry-tor/vanguards

FWIW, the above project is still in an experimental state and we are
expecting to launch it officially in the near future. Until then,
feel free to experiment with it if you like.

Check out the proposal and code and let us know if you have any
questions or feedback!

Thanks!

---

Filename: 292-mesh-vanguards.txt
Title: Mesh-based vanguards
Authors: George Kadianakis and Mike Perry
Created: 2018-05-08
Status: Open
Supersedes: 247

0. Motivation

  A guard discovery attack allows attackers to determine the guard
  node of a Tor client. The hidden service rendezvous protocol
  provides an attack vector for a guard discovery attack since anyone
  can force an HS to construct a 3-hop circuit to a relay (#9001).

  Following the guard discovery attack with a compromise and/or
  coercion of the guard node can lead to the deanonymization of a
  hidden service.

1. Overview

  This document tries to make the above guard discovery + compromise
  attack harder to launch. It introduces a configuration
  option which makes the hidden service also pin the second and third
  hops of its circuits for a longer duration.

  With this new path selection, we force the adversary to perform a
  Sybil attack and two compromise attacks before succeeding. This is
  an improvement over the current state where the Sybil attack is
  trivial to pull off, and only a single compromise attack is required.

  With this new path selection, an attacker is forced to do one or
  more node compromise attacks before learning the guard node of a hidden
  service. This increases the uncertainty of the attacker, since
  compromise attacks are costly and potentially detectable, so an
  attacker will have to think twice before beginning a chain of node
  compromise attacks that they might not be able to complete.

1.1. Tor integration

  The mechanisms introduced in this proposal are currently implemented
  partially in Tor and partially through an external Python script:
https://github.com/mikeperry-tor/vanguards

  The Python script uses the new Tor configuration options HSLayer2Nodes and
  HSLayer3Nodes to be able to select nodes for the guard layers. The Python
  script is tasked with maintaining and rotating the guard nodes as needed
  based on the lifetimes described in this proposal.

  In the future, we are aiming to include the whole functionality into Tor,
  with no need for external scripts.

1.2. Visuals

  Here is what a hidden service rendezvous circuit currently looks like:

                       -> middle_1 -> middle_A
                       -> middle_2 -> middle_B
                       -> middle_3 -> middle_C
                       -> middle_4 -> middle_D
    HS -> guard        -> middle_5 -> middle_E
                       -> middle_6 -> middle_F
                       -> middle_7 -> middle_G
                       -> middle_8 -> middle_H
                       ->   ...    ->   ...
                       -> middle_n -> middle_n

  this proposal pins the two middle positions into much more
  restricted sets, as follows:

                        -> guard_2A
                                    -> guard_3A
           -> guard_1A  -> guard_2B -> guard_3B
    HS                              -> guard_3C
           -> guard_1B  -> guard_2C -> guard_3D
                                    -> guard_3E
                        -> guard_2D -> guard_3F

  Additionally, to avoid linkability, we insert an extra middle node
  after the third layer guard for client side intro and hsdir circuits,
  and service-side rendezvous circuits. This means that the set of
  paths for Client (C) and Service (S) side look like this:

 C - G - L2 - L3 - R
 S - G - L2 - L3 - HSDIR
 S - G - L2 - L3 - I
 C - G - L2 - L3 - M - I
 C - G - L2 - L3 - M - HSDIR
 S - G - L2 - L3 - M - R
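
  As a rough illustration of the layered path construction above, here
  is a Python sketch. The set sizes and function names are illustrative
  only (the normative values and rotation logic live in the proposal and
  the vanguards script), but it shows the idea: small pinned sets for
  the second and third hops, with one member of each chosen per circuit.

```python
import random

def pick_layers(relays, n_l2=4, n_l3=6):
    # Pin small vanguard sets for the 2nd and 3rd hops.
    # Set sizes here are illustrative, not the proposal's parameters.
    l2 = random.sample(relays, n_l2)
    remaining = [r for r in relays if r not in l2]
    l3 = random.sample(remaining, n_l3)
    return l2, l3

def build_path(guard, l2, l3):
    # Every circuit uses the fixed guard, then one vanguard
    # from each pinned layer.
    return [guard, random.choice(l2), random.choice(l3)]
```

  An attacker Sybil-attacking the last hop now only discovers a layer-3
  vanguard, and must repeat the attack (or compromise nodes) layer by
  layer to reach the guard.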

1.3. Threat model, Assumptions, and Goals

  Consider an adversary with the following powers:

 - Can launch a Sybil guard discovery attack against any node of a
   rendezvous circuit. The slower the rotation period of the node,
   the longer the attack takes. Similarly, the higher the percentage
   of the network is compromised, the faster the attack runs.

 - Can compromise any node on the network, but this compromise takes
   time and potentially even coercive action, and also carries risk
   of discovery.

  We also make the following assumptions about the types of attacks:

  1. A Sybil attack is observable both by people monitoring the network
 for large numbers of new nodes and by vigilant hidden service
 operators. It will require either large amounts of traffic sent
 towards the hidden service, multiple test circuits, or both.

Re: [tor-dev] HS v3 client authorization types

2018-05-18 Thread George Kadianakis
Ian Goldberg <i...@cs.uwaterloo.ca> writes:

> On Thu, May 10, 2018 at 12:20:05AM +0700, Suphanat Chunhapanya wrote:
>> On 05/09/2018 03:50 PM, George Kadianakis wrote:
>> > b) We might also want to look into XEdDSA and see if we can potentially
>> >use the same keypair for both intro auth (ed25519) and desc auth
>> (x25519).
>> 
>> This will be a great advantage if we can do that because putting two
>> private keys in the HidServAuth is so frustrating.
>
> The private key for intro auth is used to make a signature (that will be
> different per client), while the private key for desc auth is used to
> decrypt the descriptor (which will be the same for all clients), no?
>

Hm. Both intro auth and desc auth keys are different for each client. In
the case of desc auth we do that so that we can revoke a client without
needing to refresh desc auth keys for all other clients.


Re: [tor-dev] HS v3 client authorization types

2018-05-14 Thread George Kadianakis
Suphanat Chunhapanya <haxx@gmail.com> writes:

> On 05/09/2018 03:50 PM, George Kadianakis wrote:
>> I thought about this some more and discussed it with haxxpop on IRC. In
>> the end, I think that perhaps starting with just desc auth and then in
>> the future implementing intro auth is also an acceptable plan forward.
>
> I think we have two more things to think about.
>
> 1. I forgot to think about the format of client_authorized_pubkeys file.
> In the client_authorized_pubkeys file, each line should indicate the
> auth type for which the pubkey is used instead of just specifying the
> client name and the pubkey. So the line should be as follows.
>
>   <client-name> <auth-type> <pubkey>
>
> and, if auth-type is "standard", it will be equivalent to two lines of
> "desc" and "intro".
>

Sounds plausible.

BTW, what's the role of `client_authorized_pubkeys` in your opinion? Is
it only used by little-t-tor internally to see which clients are
recognized or not? IIUC, the onion service operator should not really
need to use it since it contains pubkeys.

BTW, I noticed that in v2, when we enable client auth, the onion service
also edits the `hostname` file to produce different lines for each
client, so that the operator can copy-paste them directly to the
users. Do you find that useful? Do you think we should do it too for v3?

Ideally we should ask for feedback from people who use client auth here,
because all these questions are basically UX questions...

> 2. If we want to release the "desc" auth first, I want to say something
> about the config lines.
>
> The "standard" auth config on the client side will not contain the
> ed25519 private key and it will look like this:
>
> HidServAuth onion-address standard x25519-private-key
>
> However, after we release the intro auth, that config line (which does
> not contain the ed25519 private key) should still be valid because, if
> the client upgrades its version, it doesn't need to change the word
> "standard" to the word "desc" in the HidServAuth config line.
>
> On the service side, it will be different. After releasing "desc" auth
> but before releasing "intro" auth, the line in client_authorized_pubkeys
> will look like this (without ed25519 pubkey).
>
>  <client-name> standard x25519-public-key
>
> But after we release the "intro" auth, that line shouldn't be valid
> anymore because the "standard" line should contain both x25519 and
> ed25519 public keys. It's different from the client side.
>

Yeah this is another great UX question I'm not entirely sure about...

Perhaps the "standard" type should use the keys provided and do the best
it can with those keys. If both keys are provided it should do both "desc"
and "intro" auth, otherwise it should just do the best it can. But to do
this, we need to be able to differentiate "desc" keys from "intro" keys...
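That "do the best it can" behavior could be sketched like this (the function name and the key representation are mine, purely illustrative):

```python
def enabled_auth_modes(x25519_pub=None, ed25519_pub=None):
    # Enable each auth layer only when the matching key type is present:
    # descriptor encryption needs an x25519 key, intro-layer auth needs
    # an ed25519 key.  Telling the key types apart is the hard part
    # noted above; here they simply arrive as separate named arguments.
    modes = []
    if x25519_pub is not None:
        modes.append("desc")
    if ed25519_pub is not None:
        modes.append("intro")
    return modes
```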

> --
>
> I think (1) may not have problems (I guess) but for (2) is it acceptable
> to make ed25519-private-key optional on the HidServAuth "standard"
> config line?
>

Sounds reasonable perhaps... But we need to think more about the UX 
implications!

> --
>
> On 05/09/2018 03:50 PM, George Kadianakis wrote:
>> b) We might also want to look into XEdDSA and see if we can potentially
>>use the same keypair for both intro auth (ed25519) and desc auth
> (x25519).
>
> This will be a great advantage if we can do that because putting two
> private keys in the HidServAuth is so frustrating.
>

Yeah we should think about this too. I'll try to do some research this week.

BTW an alternative approach here when both keys are used would be to
concatenate them into one string so that the user does not need to care
about two different keys, and they should just care about a single
"authentication token".

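As a sketch of that single-token idea (the encoding choice and helper names are mine, not a spec): two 32-byte private keys could be glued into one base32 string and split back apart on the other side:

```python
import base64

def make_auth_token(x25519_priv, ed25519_priv):
    # Concatenate the two 32-byte keys and encode once, so the user
    # handles a single opaque "authentication token" string.
    assert len(x25519_priv) == 32 and len(ed25519_priv) == 32
    return base64.b32encode(x25519_priv + ed25519_priv).decode().rstrip("=")

def split_auth_token(token):
    # Restore base32 padding, decode, and split back into the two keys.
    pad = "=" * (-len(token) % 8)
    raw = base64.b32decode(token + pad, casefold=True)
    return raw[:32], raw[32:]
```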
Thanks for raising these issues haxxpop and sorry for not having
straightforward answers for them just yet!
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal #291 (two guards) IRC meeting Wed Apr 18, 17:00 UTC

2018-05-09 Thread George Kadianakis
Mike Perry  writes:

> Mike Perry:
>> Heyo.
>> 
>> We're going to have a meeting to discuss Proposal 291. See this thread:
>> https://lists.torproject.org/pipermail/tor-dev/2018-April/013053.html
>
> Ok, we had this meeting. High level (amended) action items are:
>
> 1. Use patches in https://trac.torproject.org/projects/tor/ticket/25843
>to set NumEntryGuards=2 in torrc, and observe results. Please join us!
>Stuff we are looking for during testing is on that ticket!
> 2. Merge that patch to make the torrc guard options do what we meant for
>them to do. Probably backport it.
> 3. Describe adversary models for our variant proposals from the notes.
>(Why do we disagree? In Mike's case, my disagreements are because I
> think each step is an improvement over previous/status quo -- we can
> decide harder things later and still do better both now and later.)
> 4. Agree on an order of operations for fixes+changes, ideally such that we
>don't block forever trying to come up with a perfect solution. Things
>are pretty bad now. All we really need to do is agree on steps to make
>it better.
>
> 
>
> Concrete things we can do now:
> #1: ourselves set those guard params to 2 and find bugs. once #3 below is 
> done, encourage others, like on tor-talk, to do it too.
> #2: enumerate the current situations where we use a guard other than our 
> first guard, especially noting the ones where the attacker can make us use a 
> guard other than our first guard. fix as many as we want to fix. maybe 
> categorize by whether they cause us to mark our first guard as down or not.

OK, I did a bit of #2 yesterday as part of an IRC discussion with Mike
and Roger. In particular, I attempted to enumerate the places in our
codebase where we mark a guard as unreachable and hence skip it for
future circuits.

The key functions here are entry_guard_failed() and entry_guard_chan_failed().
These are called in the following places:

1) circuit_build_failed(): We blame the guard if there was an error
   during path building when we don't have the first hop open on the
   circuit yet. We don't blame the guard for errors during path
   selection.

2) connection_dir_request_failed(): We blame the guard if we fail to
   connect to a dirserver because of network error.

3) connection_or_about_to_close(): We blame the guard when we are
   closing an OR connection that started at us but never made it to
   state open. We do this because otherwise we would keep beating our
   heads against a broken guard.

4) connection_or_client_learned_peer_id(): We blame the guard when we
   receive the wrong RSA identity key from the guard during the TLS handshake.

The first 3 cases here seem to handle the cases of network errors and
unreachable guards. It's interesting how we have to handle this case in
three different places. I wonder if we are missing any other places here.

The last case seems to handle the case of network MITM attacks. I don't
see anything wrong with that, since encountering an MITM certainly means
that something bad is going on, and also an MITM adversary could also
cause one of the first 3 cases.


Re: [tor-dev] HS v3 client authorization types

2018-05-09 Thread George Kadianakis
George Kadianakis <desnac...@riseup.net> writes:

> Suphanat Chunhapanya <haxx@gmail.com> writes:
>
>> Hi,
>>
>> On 04/28/2018 06:19 AM, teor wrote:
>>>> Or should we require the service to enable both for all clients?
>>>>
>>>> If you want to let the service be able to enable one while disabling the
>>>> other, do you have any opinion on how to configure the torrc?
>>> 
>>> If someone doesn't understand client auth in detail, and just wants
>>> to be more secure, we should give them a single option that enables
>>> both kinds of client auth. (Security by default.)
>>> 
>>> OnionServiceClientAuthentication 1
>>> (Default: 0)
>>> 
>>> If someone knows they only want a particular client auth method,
>>> we should give them another option that contains a list of active
>>> client auth methods. (Describe what you have, not what you don't
>>> have, because negatives confuse humans.)
>>> 
>>> OnionServiceClientAuthenticationMethods intro
>>> (Default: descriptor, intro)
>>
>>
>> Do you have any opinion on specifying the client names in your
>> recommendation? Also, the list of client names in "descriptor" and "intro"
>> should be independent.
>>
>> However, what I am currently thinking of is that we can use the existing
>> format.
>>
>> HiddenServiceAuthorizeClient auth-type client-name,client-name,...
>>
>> But instead of allowing only two auth-types "descriptor" and "intro", we
>> allow another type called "default" which includes both "descriptor" and
>> "intro"
>>
>> So if I put an option:
>> HiddenServiceAuthorizeClient default client-name,client-name,...
>>
>> It will be equivalent to two lines of:
>> HiddenServiceAuthorizeClient descriptor client-name,client-name,...
>> HiddenServiceAuthorizeClient intro client-name,client-name,...
>>
>> And on the client side, if I put an option:
>> HidServAuth onion-address default x25519-private-key ed25519-private-key
>>
>> It will be equivalent to two lines of:
>> HidServAuth onion-address descriptor x25519-private-key
>> HidServAuth onion-address intro ed25519-private-key
>>
>
> In general, I feel like being able to individually enable "intro" or
> "descriptor" auth might be a worthwhile approach for advanced use cases
> (see end of my email).  However, I can see the following issues:
>
> a) It's gonna be hard to communicate what "intro" or "descriptor" auth
>do when enabled individually, and motivate people to use them. I
>think it might actually confuse most operators, except for the super
>advanced ones.
>
> b) It will be more complicated in terms of engineering. Because we would
>have to support three auth types instead of one. Especially so if we
>try to support the special benefits you mentioned in the top post,
>like "If only intro auth is enabled, we can revoke a client without
>republishing the descriptor".
>
> I think my approach here would be to try to support both auth types by
> the time we launch this feature (under the "standard" auth type), and
> then in the future as we get more insight on how people use them, we
> should start allowing to individually switch them on and off.
>
> ---
>
> For reference here is how I see the various auth types:
>
> "desc" auth:
>Encrypts the descriptor using x25519.  Protects against HSDirs
>who know the onion address from snooping into the descriptor and
>learning intro points, etc.
>
> "intro" auth:
>Authentication at the introduction layer. Allows end-to-end
>authentication with the onion service. Allows more fine-grained
>control over authentication. Also allows the service to know
>which client is visiting it (see #4700).
>
> "standard" auth:
>The combination of "desc" auth and "intro" auth. This basically
>provides the same security logic as v2 "basic" auth.  IMO, this
>is what most operators want when they turn on client auth.
>
> And here are use cases for switching the auths individually:
>
> "intro" auth without "desc" auth:
>Like haxxpop said in top post, if we only have intro auth we can
>revoke/add clients without needing to republish our descriptor.
>
> "desc" auth without "intro" auth:
>This might be useful if a use case does not appreciate the
>anti-feature of intro auth where it allows the HS to know which
>client is visiting it at any given time (see #4700).

Re: [tor-dev] Proposal #291 Properties (was IRC meeting)

2018-05-03 Thread George Kadianakis
Mike Perry  writes:

> Mike Perry:
>> teor:
>> > 
>> > 
>> > > On 25 Apr 2018, at 18:30, Mike Perry  wrote:
>> > > 
>> > > 1. Hidden service use can't push you over to an unused guard (at all).
>> > >  2. Hidden service use can't influence your choice of guard (at all).
>> > >  3. Exits and websites can't push you over to an unused guard (at all)
>> > >  4. DoS/Guard node downtime signals are rare (absent)
>> > >  5. Nodes are not reused for Guard and Exit positions ("any" positions)
>> > >  6. Information about the guard(s) does not leak to the website/RP (at 
>> > > all).
>> > >  7. Relays in the same family can't be forced to correlate Exit traffic.
>> > 
>> > I think this list is missing some important user-visible properties, or 
>> > it's
>> > not clear which property above corresponds to these properties:
>> > 
>> > * Is Tor reliable and responsive when guards go down, or when I move
>> >   networks, or when I have lost and regained service?
>> 
>> I think this is implicitly provided by #4. Downtime is a security issue.
>> If (any of) a client Guard(s) are down, and the adversary can detect
>> this based on client behavior, well, that is a side channel signal that
>> provides information about the Guard. So by satisfying #4, we also
>> satisfy the weaker conditions of general reliability and responsiveness.
>>  
>> > I also think it's missing an implicit property, which we should make 
>> > explicit:
>> > 
>> > * Can Tor users be fingerprinted by their set of guards or directory 
>> > guards?
>> > 
>> > Perhaps this property is out of scope.
>> 
>> I think it is worth considering. We should add it if we need to do
>> another round of evaluation.
>
> Alright, for the sake of argument, let's call this Property #8:
>   8. Less information from guard fingerprinting (the least information)
>
> I argue that this #8 is also equivalent to a #9 that Roger would ask
> for:
>   9. Fewer points of observation into the network (the fewest points).
>

If we are actually aiming for 8 and 9, we need to do something about the
numdirguard=3 situation; otherwise we still have a huge guard fingerprint
and we still expose ourselves to more of the network even if we keep one guard.

> To avoid TL;DR, that argument is an exercise to the reader ;).
>
> Here is a proposal that beats my previous proposal on Property #8 and
> #9, while trying to preserve as many of the other properties as
> possible:
>
>  * Set "num primary guards"=1 and "num primary guards to use"=1
>  * Set "num directory guards"=1 and "num directory guards to use"=1
>  * Don't give Exit nodes the Guard flag.
>  * Allow "same node, same /16, same family" between guard and last hop,
>but only for HS circuits (which are at least 4 hops).
>  * Allow same /16 and same family for HS circuits.

Is this for all hops? So all service-side HS circ hops can share the same
family? I guess that's OK since we don't know what's happening on the
other side of the HS circuit anyhow? Or what?

>  * When a primary guard leaves the consensus, pick a new one.
>  * When a primary guard fails circuits, do $MAGIC_FAILURE_HEURISTIC.

What is the $MAGIC_FAILURE_HEURISTIC supposed to do? Also, I doubt we can
do anything magic here; we even have trouble doing very naive stuff when
it comes to network-uptime response.

>
> This proposal gets strong:
>   1. Hidden service use can't push you over to an unused guard (at all).
>   2. Hidden service use can't influence your choice of guard (at all).
>   3. Exits and websites can't push you over to an unused guard (at all)
>   8. Less information from guard fingerprinting (the least information)
>
> It loses #4 (and your reliability point above), because if we transition
> to a second guard too quickly when the first one starts failing, then we
> lose the winning fingerprinting property we want to keep. So then
> therefore, we must tolerate failure and RESOURCELIMIT issues and suffer
> through connectivity issues during DoS:
>   4. DoS/Guard node downtime signals are rare (absent) 
>
> It then gets us regular:
>   5. Nodes are not reused for Guard and Exit positions ("any" positions)
>   6. Information about the guard(s) does not leak to the website/RP (at all).
>   7. Relays in the same family can't be forced to correlate Exit traffic.
>
> And again, we could get strong #6 if we allow the guard node for both RP
> and the node before the RP:
>   6. Information about the guard(s) does not leak to the website/RP (at all).
>
> So the key thing (in this property list) that forcing one guard causes us
> to lose is reliability under DoS, which is a guard discovery vector (and
> probably a source of other side channels, too).
>



Re: [tor-dev] HS v3 client authorization types

2018-05-02 Thread George Kadianakis
Suphanat Chunhapanya  writes:

> Hi,
>
> On 04/28/2018 06:19 AM, teor wrote:
>>> Or should we require the service to enable both for all clients?
>>>
>>> If you want to let the service be able to enable one while disabling the
>>> other, do you have any opinion on how to configure the torrc?
>> 
>> If someone doesn't understand client auth in detail, and just wants
>> to be more secure, we should give them a single option that enables
>> both kinds of client auth. (Security by default.)
>> 
>> OnionServiceClientAuthentication 1
>> (Default: 0)
>> 
>> If someone knows they only want a particular client auth method,
>> we should give them another option that contains a list of active
>> client auth methods. (Describe what you have, not what you don't
>> have, because negatives confuse humans.)
>> 
>> OnionServiceClientAuthenticationMethods intro
>> (Default: descriptor, intro)
>
>
> Do you have any opinion on specifying the client names in your
> recommendation? Also, the list of client names in "descriptor" and "intro"
> should be independent.
>
> However, what I am currently thinking of is that we can use the existing
> format.
>
> HiddenServiceAuthorizeClient auth-type client-name,client-name,...
>
> But instead of allowing only two auth-types "descriptor" and "intro", we
> allow another type called "default" which includes both "descriptor" and
> "intro"
>
> So if I put an option:
> HiddenServiceAuthorizeClient default client-name,client-name,...
>
> It will be equivalent to two lines of:
> HiddenServiceAuthorizeClient descriptor client-name,client-name,...
> HiddenServiceAuthorizeClient intro client-name,client-name,...
>
> And on the client side, if I put an option:
> HidServAuth onion-address default x25519-private-key ed25519-private-key
>
> It will be equivalent to two lines of:
> HidServAuth onion-address descriptor x25519-private-key
> HidServAuth onion-address intro ed25519-private-key
>

In general, I feel like being able to individually enable "intro" or
"descriptor" auth might be a worthwhile approach for advanced use cases
(see end of my email).  However, I can see the following issues:

a) It's gonna be hard to communicate what "intro" or "descriptor" auth
   do when enabled individually, and motivate people to use them. I
   think it might actually confuse most operators, except for the super
   advanced ones.

b) It will be more complicated in terms of engineering. Because we would
   have to support three auth types instead of one. Especially so if we
   try to support the special benefits you mentioned in the top post,
   like "If only intro auth is enabled, we can revoke a client without
   republishing the descriptor".

I think my approach here would be to try to support both auth types by
the time we launch this feature (under the "standard" auth type), and
then in the future as we get more insight on how people use them, we
should start allowing to individually switch them on and off.

---

For reference here is how I see the various auth types:

"desc" auth:
   Encrypts the descriptor using x25519.  Protects against HSDirs
   who know the onion address from snooping into the descriptor and
   learning intro points, etc.

"intro" auth:
   Authentication at the introduction layer. Allows end-to-end
   authentication with the onion service. Allows more fine-grained
   control over authentication. Also allows the service to know
   which client is visiting it (see #4700).

"standard" auth:
   The combination of "desc" auth and "intro" auth. This basically
   provides the same security logic as v2 "basic" auth.  IMO, this
   is what most operators want when they turn on client auth.

And here are use cases for switching the auths individually:

"intro" auth without "desc" auth:
   Like haxxpop said in top post, if we only have intro auth we can
   revoke/add clients without needing to republish our descriptor.

"desc" auth without "intro" auth:
   This might be useful if a use case does not appreciate the
   anti-feature of intro auth where it allows the HS to know which
   client is visiting it at any given time (see #4700).



Re: [tor-dev] onion v2 deprecation plan?

2018-04-27 Thread George Kadianakis
Jonathan Marquardt  writes:

> On Wed, Apr 25, 2018 at 04:58:36PM -0400, grarpamp wrote:
>> In onionland, there seems to be little knowledge of v3, thus little worry
>> about v2 in cases where v3 would actually apply to benefit, that's bad.
>
> v3 onion services just seem like a way worse deal to the average user and 
> the unknowledgeable admin. Mainly because the addresses are way too long. I 
> can remember a couple of v2 addresses, but not a single v3 address. So that's 
> just bad advertising from the start.
>

IMO if you have the ability to memorize v2 addresses by heart, you are
already not an average user. Average users just google most things they
try to visit.

That said, I do share your concerns, and that's why I mentioned that
finding a solution to the onion name issue is a priority before v3 can
go mainstream (or even v2).

> Before at least Facebook, DuckDuckGo, The New York Times, the Debian Project 
> and even the Tor Project themselves (!) have rolled out their v3 onion 
> services, one shouldn't even think about deprecating HSv2. It's going to be 
> around for many years to come, taking just as long to vanish as an old 
> SSL version, I think, unfortunately.

Agreed. We are indeed a long way from deprecating HSv2 :)


Re: [tor-dev] Proposal #291 Properties (was IRC meeting)

2018-04-26 Thread George Kadianakis
Mike Perry  writes:

> Mike Perry:
>> Mike Perry:
>> > Heyo.
>> > 
>> > We're going to have a meeting to discuss Proposal 291. See this thread:
>> > https://lists.torproject.org/pipermail/tor-dev/2018-April/013053.html
>> 
>> 3. Describe adversary models for our variant proposals from the notes.
>>(Why do we disagree? In Mike's case, my disagreements are because I
>> think each step is an improvement over previous/status quo -- we can
>> decide harder things later and still do better both now and later.)
>
> Ok, in the interest of getting closer to an adversary model, let's first
> start with enumerating the properties the proposals below provide.
> Properties #1-5 have parenthesis at the end of them.  When the condition
> in parenthesis is met for property #N, we'll call that "strong #N".
>

Thanks Mike for this email. I think this moves us forward quite a bit
with an adversary model here! Here is some feedback:

>   1. Hidden service use can't push you over to an unused guard (at all).
>   2. Hidden service use can't influence your choice of guard (at all).

Can we have a bit more detailed description of the two properties above?
(2) seems like a superset of (1), so making these properties clear would be 
useful.

>   3. Exits and websites can't push you over to an unused guard (at all)
>   4. DoS/Guard node downtime signals are rare (absent)

Also, what does property (4) mean exactly?

>   5. Nodes are not reused for Guard and Exit positions ("any" positions)
>   6. Information about the guard(s) does not leak to the website/RP (at all).
>   7. Relays in the same family can't be forced to correlate Exit traffic.
>

Also it might be useful to rate the current guard design with these
properties and see how well we are currently doing.

IIUC, since we use all the primaries for dirguards it provides:
  1. Hidden service use can't push you over to an unused guard (at all).
  3. Exits and websites can't push you over to an unused guard (at all)

Because of the path restrictions it also provides:
  5. Nodes are not reused for Guard and Exit positions ("any" positions)
  7. Relays in the same family can't be forced to correlate Exit traffic.

It does *not* provide
  2. Hidden service use can't influence your choice of guard (at all).
  4. DoS/Guard node downtime signals are rare (absent)
  6. Information about the guard(s) does not leak to the website/RP (at all).

Let me know if I messed it up.

Clearly since everyone in this thread wants to improve the current
situation, the properties the current system lacks are important. In
particular it seems like (2) and (6) are particularly important properties.

>> Roger's proposal:
>> * Remove /16 and family path restrictions between guard and last hop
>> * Optionally, dir auths don't give you Guard if you're an Exit
>> * Use first guard but pad to backup guard so the switch isn't as obvious
>> * First and backup guard are chosen in different /16's and different families
>
> Depending on how good the padding is, this proposal maybe-provides:
>   1. Hidden service use can't push you over to an unused guard (at all).
>   3. Exits and websites can't push you over to an unused guard (at all)
>
> Depending on how good the detection mechanism is:
>   4. DoS/Guard node downtime signals are much more rare (absent)
>
> It provides strong:
>   5. Nodes are not reused for Guard and Exit positions ("any" positions)
>
> It provides:
>   7. Relays in the same family can't be forced to correlate Exit traffic.
>

How does it provide 7?

>
> 
>
>> Aaron's proposal:
>> * Use first guard but pad to backup guard so the switch isn't as obvious
>> * First and backup guard are chosen in different /16's and different families
>
> Depending on how good the padding is, this proposal maybe-provides:
>   1. Hidden service use can't push you over to an unused guard (at all).
>   3. Exits and websites can't push you over to an unused guard (at all)
>
> Depending on how good the detection mechanism is:
>   4. DoS/Guard node downtime signals are much more rare (absent)
>
> It provides strong #5:
>   5. Nodes are not reused for Guard and Exit positions ("any" positions)
>
> It provides #7:
>   7. Relays in the same family can't be forced to correlate Exit traffic.
>
> It does not provide #2 or #6:
>   2. Hidden service use can't influence your choice of guard (at all).
>   6. Information about the guard(s) does not leak to the website/RP (at all).
>  

How come Aaron's proposal provides the same benefits as Roger's even though
they differ? Am I missing something?

> 
>
> Ok, so here's a proposal that gets strong #1-4, and regular #5-7. It is
> my current favorite:
>
>>  * Set "num primary guards"=2 and "num primary guards to use"=2
>>  * Don't give Exit nodes the Guard flag.
>>  * Allow "same node, same /16, same family" between guard and last hop,
>>but only for HS circuits (which are at least 4 hops long for these
>>

Re: [tor-dev] onion v2 deprecation plan?

2018-04-26 Thread George Kadianakis
nusenu  writes:

> Hi,
>
> even though you are probably years away from deprecating onion v2 services
> it is certainly good to have a clear plan.
>
> I'm asking because the sooner onion v2 services are deprecated, the sooner
> some people can stop worrying about malicious HSDirs.
>

Yes indeed. The sooner we deprecate v2 the sooner we can stop worrying
about malicious HSDirs. We will also be able to reduce the
requirements for becoming an HSDir, which will make our network
stronger and more robust.

That said, I think we are unfortunately still far from deprecating v2
onions:

The first actual step to v2 deprecation, is to make v3 the default
version.  But to get there, we first need to solve various bugs and
issues with the current v3 system (#25552, #22893, #23662, #24977,
etc.).  We also need to implement various needed features, like offline
keys (#18098), client-authorization (#20700 ; WIP 
https://github.com/torproject/tor/pull/36),
control port commands like HSFETCH (#25417) and revive onionbalance for
v3.  We might also want to consider possible improvements to the UX of
long onion names (like #24310) 
(https://blog.torproject.org/cooking-onions-names-your-onions).

After we do most of the above, we can turn the switch to make v3 the
default, and then we need to wait some time for most of the users to
migrate from v2 to v3. After that we can initiate the countdown, and
eventually deprecate v2s for real.

It's hard to provide an actual timeline for the above right
now. However, we are currently applying for some onion-service-related
grants, and hopefully if we get them we will have the funding to
accelerate the development pace.

Cheers!


Re: [tor-dev] HS desc replay protection and ed25519 malleability

2018-04-24 Thread George Kadianakis
isis agora lovecruft <i...@torproject.org> writes:

> Ian Goldberg transcribed 2.5K bytes:
>> On Wed, Apr 18, 2018 at 04:53:59PM +0300, George Kadianakis wrote:
>> > Thanks for the help!
>> > 
>> > Hmm, so everyone gets a shot at a single malleability "attack" with
>> > everye d25519 sig? What's the point of the (RS[63] & 224) check then?
>> > 
>> > In this case, we can't use S as the replay cache index since the
>> > attacker can mutate it and still get the sig to verify.
>> 
>> You could still use (S mod l) as the cache index, though, right?
>
> Yes, although with the caveat that this is somewhat expensive.
>
> 
>
>> Perhaps use the *output* of the hash H(R,A,M)?  Or the pair
>> (R, S mod l)?
>
> H(R || A || M) might be okay… but it's still a little icky given that it's
> still not strictly tied to the secret key that produces the eventual
> signature (cf. the calculation of `V` below).
>
> I would highly advise reusing Trevor Perrin's work on building a VRF into
> Generalised EdDSA, [0] which solves precisely this problem (i.e. "how do I get
> verifiable randomness, given an ed25519 signature?") in the following way:
>  |
>  | The VXEdDSA signing algorithm takes the same inputs as XEdDSA. The output
>  | is a pair of values. First, a signature (V || h || s), which is a byte
>  | sequence of length 3b bits, where V encodes a point and h and s encode
>  | integers modulo q. Second, a VRF output byte sequence v of length equal to
>  | b bits, formed by multiplying the V output by the cofactor c.
>  |
>  | The VXEdDSA verification algorithm takes the same inputs as XEdDSA, except
>  | with a VXEdDSA signature instead of an XEdDSA signature. If VXEdDSA
>  | verification is successful, it returns a VRF output byte sequence v of
>  | length equal to b bits; otherwise it returns false.)
>  |
>  |vxeddsa_sign(k, M, Z):
>  |A, a = calculate_key_pair(k)
>  |B_{v} = hash_to_point(A || M)
>  |V = a B_{v}
>  |r = hash3(a || V || Z) (mod q)
>  |R = r B
>  |R_{v} = r B_{v}
>  |h = hash4(A || V || R || Rv || M) (mod q)
>  |s = r + ha (mod q)
>  |v = hash5(c V) (mod 2^{b})
>  |return (V || h || s), v
>
> (Trevor is using q, where you, Ian, and I are using \ell, and floodyberry
> uses n.  Also ignore the `calculate_key_pair` function, that's just because
> Signal stores keys in Montgomery form.  Also ignore the numbers after the
> hashes, that's just denoting the labelset system for domain separation.)
>
> Personally, I'd redefine `hash_to_point` such that it does two elligator2
> encodings from a 512-bit hash, and then adds the points together, before
> multiplying by the cofactor, to ensure it produces all (instead of roughly
> half of all) possible points.
>
> Also, (shameless plug) you get all of this basically for free if you just do
> generalised EdDSA with Ristretto, [1] since the cofactor is eliminated.
> I've been working with Trevor on this, it's called D.A.V.R.O.S. and it'll
> be done… (as) soon (as I have more free time to finish it).
>

Hello Isis and thanks for the help!

I found some time to read about XEdDSA and VXEdDSA: they seem like very
interesting constructions and indeed the latter seems like the thing we
should be using here!

That said, my impression is that swapping ed25519 for VXEdDSA in the HS
protocol is not gonna be easy (especially since we are hoping to fit
this in 034), because then HSDirs will need to be able to verify both
ed25519 and VXEdDSA signatures, which IIUC are different constructions
(that is, an HSDir node can't produce a VRF output with an ed25519 sig).

IMO, a reasonable plan for now is to use either (R, S mod l) or H(R,A,M)
in the replay cache and only *after* the desc signature has been
verified.  Then perhaps in the future we should consider whether we can
eventually swap out ed25519 for VXEdDSA so that we can use its VRF
output directly.
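To make the (R, S mod l) idea concrete, here is a rough standalone sketch (Python for illustration, not tor code; function names are mine) of deriving a replay-cache key from a 64-byte ed25519 signature, with the H(R,A,M) variant for comparison:

```python
import hashlib

# Order of the ed25519 base point (the "l" in the thread).
L = 2**252 + 27742317777372353535851937790883648493

def cache_index_rs(sig):
    """Replay-cache key (R, S mod l) for sig = R || S (64 bytes)."""
    assert len(sig) == 64
    R = sig[:32]
    S = int.from_bytes(sig[32:], "little")
    # Reducing S mod l maps a mutated signature (R, S + l), which
    # still verifies, onto the same cache entry as (R, S).
    return R + (S % L).to_bytes(32, "little")

def cache_index_hram(sig, pubkey_A, msg):
    """Alternative: hash over R || A || M, as suggested in the thread."""
    return hashlib.sha512(sig[:32] + pubkey_A + msg).digest()
```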

I hope this makes sense. I'm gonna start implementing this concept this
week, in hopes that we can fit it in the 034 schedule, because otherwise
revision counters are gonna linger around for longer!

Thanks! :)


Re: [tor-dev] ahmia ~ summer of privacy

2018-04-24 Thread George Kadianakis
Stelios Barberakis  writes:

> Hello all,
>
> My name is Stelios and I am a CS student at Technical University of Crete.
> This summer I will be working on ahmia  project, during
> the "summer of privacy".
>
> This will be the first time to be engaged with the tor-ecosystem community
> as a developer, and I am really glad about it.
>
> The current plan includes some updates, enhancements of the codebase and
> the building/deployment process (including docker), as well as some new
> features like search filters, but we are looking to come up with new ideas
> for additional features, etc.
>
> Everyone is welcomed to contribute to the brainstorming, so feel free to
> ask anything you would like, either here or by joining #ahmia channel in
> OFTC if you feel so.
>

Hello Stelios and welcome to the SoP!

One of the things I'd be really interested in seeing as part of this
ahmia SoP is some more transparency on how people are using Ahmia. It
would be great to revive the old stats panel that Ahmia had, which
showed stuff like "unique users per day", "queries per day", "unique
queries per day", etc. Ahmia used to have these kinds of stats back in
2014, so perhaps reviving them might be easier than building them from
scratch:
https://lists.torproject.org/pipermail/tor-reports/2014-May/000536.html

Cheers! :)


Re: [tor-dev] Proposal #291 (two guards) IRC meeting Wed Apr 18, 17:00 UTC

2018-04-20 Thread George Kadianakis
Mike Perry  writes:

> Mike Perry:
>> Heyo.
>> 
>> We're going to have a meeting to discuss Proposal 291. See this thread:
>> https://lists.torproject.org/pipermail/tor-dev/2018-April/013053.html
>
> Ok, we had this meeting. High level (ammended) action items are:
>
> 1. Use patches in https://trac.torproject.org/projects/tor/ticket/25843
>to set NumEntryGuards=2 in torrc, and observe results. Please join us!
>Stuff we are looking for during testing is on that ticket!
> 2. Merge that patch to make the torrc guard options do what we meant for
>them to do. Probably backport it.

Hello,

I wrote the patch on #25843 and I'm now testing 2-guards on my Tor. So far so
good, but I think we need people on more unstable connections to test this.

> 3. Describe adversary models for our variant proposals from the notes.
>(Why do we disagree? In Mike's case, my disagreements are because I
> think each step is an improvement over previous/status quo -- we can
> decide harder things later and still do better both now and later.)

Here is my proposal, but please don't consider it set in stone. I
actually think these are really complicated issues that take a while to
understand, and we should probably not rush it. Even in a short first
IRC meeting we came up with new issues and ideas while discussing this
topic.

asn proposal:
  1) Allow "same node, same /16, same family" between guard and last hop.
     If it's a 3-hop circ (A - B - A), extend it to a 4-hop circ
     (A - B - C - A).
  2) Switch to two primary guards; and revisit prop#271 as needed to make
     this possible and good.

Rationale:

I care about an attacker who is trying to deanonymize a Tor client by
setting up Tor nodes and combining various active attacks. In particular,
I worry about an adversary who uses guard discovery to learn a client's
guard nodes and then uses #14917 or tries to DoS them.

I like two guards because it makes us stronger and more redundant
against such attacks, and also because it reduces congestion. The
"pad-to-backup" idea seems too experimental to me, and not sufficiently
specified right now, hence I'm unable to analyze it (e.g. how much do we
pad, how often, and can this actually mask us against an adversary who
launches #14917 repeatedly?).

I propose altering the above path restrictions because that seems to be
the only way to concretely defend against #14917 (e.g. see the attacks
against idle clients in the meeting log, etc.). Attackers who have already
owned our guard node are not in my threat model wrt these attacks. IMO
simple A - B - A path restrictions don't help us against such persistent
adversaries; e.g. the attacker can simply spin up another tiny relay C in
another data center and do an A - B - C correlation attack.
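
Point (1) of the proposal above can be sketched as follows (a toy model of circuit building under assumed names, not tor's actual code):

```python
def build_circuit(guard, middles, last_hop):
    # Normally: a 3-hop circuit G - M - last_hop.
    # If the last hop collides with our guard (A - B - A), extend to a
    # 4-hop circuit (A - B - C - A) instead of switching to another guard.
    if last_hop == guard:
        return [guard, middles[0], middles[1], last_hop]
    return [guard, middles[0], last_hop]

assert build_circuit("A", ["B", "C"], "A") == ["A", "B", "C", "A"]
assert build_circuit("A", ["B", "C"], "D") == ["A", "B", "D"]
```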

> 4. Agree on an order of operations for fixes+changes, ideally such that we
>don't block forever trying to come up with a perfect solution. Things
>are pretty bad now. All we really need to do is agree on steps to make
>it better.
>

I think (1) and (2) above can be considered as orthogonal issues and get
done in any order. IMO, here are the prerequisites for doing these tasks:

For path restrictions: Specify the current path restrictions through the
   whole Tor circuit and write a concrete proposal with
   the proposed changes. I think we are looking at 0.3.5
   if we want to do this.

For 2-guards: Get the 2-guard design sufficiently tested to ensure that we
  are not gonna bug out the whole network by switching to
  2-guards. I'm particularly worried about clients on bad
  networks, and clients continuously flapping on-and-off the net.
  If we toggle the consensus param switch soon, we should be
  prepared for another round of guard bugs in 034, and that's fine.

Cheers! :)


Re: [tor-dev] HS desc replay protection and ed25519 malleability

2018-04-18 Thread George Kadianakis
Watson Ladd <watsonbl...@gmail.com> writes:

> On Wed, Apr 18, 2018 at 6:15 AM, George Kadianakis <desnac...@riseup.net> 
> wrote:
>> Hello Ian, isis, and other crypto people around here!
>>
>> Here is an intro: In HSv3 we've been using revision counters
>> (integers++) to do HS desc replay protection, so that bad HSDirs cannot
>> replay old descs to other HSDirs. We recently learned that this is a bad
>> idea from a scalability prespective (multiple sites need to track rev
>> counter...), and also it's needless complexity in the code (tor needs to
>> cache the counters etc.). See ticket #25552 for more details:
>>   https://trac.torproject.org/projects/tor/ticket/25552
>>   https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt#n1078
>>
>> In #25552 we've been making plans to ditch the rev counters and replace
>> them with a casual replay cache. (These replay caches also don't need to
>> be big, since descriptors are only replayable for a day before the
>> ephemeral blinded key changes, and the cache can be reset).
>>
>> Anyhow, now we've been playing the game of "which part of the desc
>> should we use in the replay cache"? The latest plan here has been to use
>> the ed25519 descriptor signature since it's something small, simple and
>> necessarily changes with every fresh descriptor. And this is how we
>> entered the ed25519 malleability scene.
>>
>> The basic question here is, can we use the ed25519 signature in our
>> replay cache and consider it immutable by attackers without the private
>> key? And should we use R, or S, or both?
>>
>> According to RFC8032:
>>
>>  Ed25519 and Ed448 signatures are not malleable due to the
>>  check that decoded S is smaller than l.  Without this
>>  check, one can add a multiple of l into a scalar part and
>>  still pass signature verification, resulting in malleable
>>  signatures.
>>
>> However, neither donna nor ref10 includes such a check explicitly
>> IIUC. Instead they check whether (RS[63] & 224), which basically ensures
>> that the high 3 bits of S are zeroed, which ensures S < 2^253. Is that
>> equivalent to the RFC check? Because if I'm counting right, for most
>> legit S values you can still add a single l as the attacker and get an
>> S' = S + l < 2^253 equivalent signature (you can't add 2*l tho).
>
> This seems right. Malleability is not part of the standard security
> definition for signatures.

Thanks for the help!

Hmm, so everyone gets a shot at a single malleability "attack" with
every ed25519 sig? What's the point of the (RS[63] & 224) check then?

In this case, we can't use S as the replay cache index since the
attacker can mutate it and still get the sig to verify. Can we use R as
the replay cache index then? Can an attacker given (R,S) find (R',S')
such that the sig still verifies?


[tor-dev] HS desc replay protection and ed25519 malleability

2018-04-18 Thread George Kadianakis
Hello Ian, isis, and other crypto people around here!

Here is an intro: In HSv3 we've been using revision counters
(integers++) to do HS desc replay protection, so that bad HSDirs cannot
replay old descs to other HSDirs. We recently learned that this is a bad
idea from a scalability perspective (multiple sites need to track rev
counter...), and also it's needless complexity in the code (tor needs to
cache the counters etc.). See ticket #25552 for more details:
  https://trac.torproject.org/projects/tor/ticket/25552
  https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt#n1078

In #25552 we've been making plans to ditch the rev counters and replace
them with a casual replay cache. (These replay caches also don't need to
be big, since descriptors are only replayable for a day before the
ephemeral blinded key changes, and the cache can be reset).

Anyhow, now we've been playing the game of "which part of the desc
should we use in the replay cache"? The latest plan here has been to use
the ed25519 descriptor signature since it's something small, simple and
necessarily changes with every fresh descriptor. And this is how we
entered the ed25519 malleability scene.

The basic question here is, can we use the ed25519 signature in our
replay cache and consider it immutable by attackers without the private
key? And should we use R, or S, or both?

According to RFC8032:

 Ed25519 and Ed448 signatures are not malleable due to the
 check that decoded S is smaller than l.  Without this
 check, one can add a multiple of l into a scalar part and
 still pass signature verification, resulting in malleable
 signatures.

However, neither donna nor ref10 includes such a check explicitly,
IIUC. Instead they check (RS[63] & 224), which basically ensures that
the high 3 bits of S are zero, i.e. that S < 2^253. Is that
equivalent to the RFC check? Because if I'm counting right, for most
legit S values you can still add a single l as the attacker and get an
S' = S + l < 2^253 equivalent signature (you can't add 2*l tho).
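
The counting in the last paragraph is easy to check numerically (a quick sketch; l is the group order from RFC 8032, and 2^253 is the bound the high-bit check effectively enforces):

```python
# Order of the ed25519 base-point subgroup (RFC 8032).
l = 2**252 + 27742317777372353535851937790883648493
bound = 2**253  # what zeroing the top 3 bits of S (RS[63] & 224) enforces

# A legitimate S lies in [0, l). S + l still passes the weak check
# whenever S < 2^253 - l, i.e. for almost the entire range [0, l):
headroom = bound - l
assert headroom / l > 0.999999  # nearly every legit S admits S' = S + l

# Adding 2*l always overflows the bound, even for S = 0:
assert 2 * l > bound
```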

So what's the state of ed25519 malleability? I know that after the
Bitcoin incident, people have thought about this a lot, so I doubt we
are in a broken state, but I just wanted to make sure that we will not
mess something up. :)

Thanks for the help! :)


Re: [tor-dev] Proposal: The move to two guard nodes

2018-04-12 Thread George Kadianakis
Mike Perry  writes:

> In-line below for ease of comment. Also available at:
> https://gitweb.torproject.org/user/mikeperry/torspec.git/tree/proposals/xxx-two-guard-nodes.txt?h=twoguards
>
> ===
>
> Filename: xxx-two-guard-nodes.txt
> Title: The move to two guard nodes
> Author: Mike Perry
> Created: 2018-03-22
> Supersedes: Proposal 236
>
> 
>
> 3.1. Eliminate path restrictions entirely
>
>   If Tor decided to stop enforcing /16, node family, and also allowed the
>   guard node to be chosen twice in the path, then under normal conditions,
>   it should retain the use of its primary guard.
>
>   This approach is not as extreme as it seems on face. In fact, it is hard
>   to come up with arguments against removing these restrictions. Tor's
>   /16 restriction is of questionable utility against monitoring, and it can
>   be argued that since only good actors use node family, it gives influence
>   over path selection to bad actors in ways that are worse than the benefit
>   it provides to paths through good actors[10,11].
>
>   However, while removing path restrictions will solve the immediate
>   problem, it will not address other instances where Tor temporarily opts
>   to use a second guard due to congestion, OOM, or failure of its primary
>   guard, and we're still running into bugs where this can be adversarially
>   controlled or just happen randomly[5].
>

Seems like the above paragraph is our main argument against removing
path restrictions.

Might be worth pointing out that if congestion/OOM attacks are in our
threat model against the current single guard design, then the same
adversary can force prop#291 to open a connection to the *third* guard
by first doing an OOM/congestion attack against one of your first two
guards, and then pushing you to your third guard using a path
restriction attack (#14917).

Thought that I should mention that because it might be an argument for
both moving to two guards and also lifting some path restrictions...


Re: [tor-dev] Proposal: The move to two guard nodes

2018-04-10 Thread George Kadianakis
Mike Perry  writes:

> In-line below for ease of comment. Also available at:
> https://gitweb.torproject.org/user/mikeperry/torspec.git/tree/proposals/xxx-two-guard-nodes.txt?h=twoguards
>
> ===
>
> Filename: xxx-two-guard-nodes.txt
> Title: The move to two guard nodes
> Author: Mike Perry
> Created: 2018-03-22
> Supersedes: Proposal 236
>
> 
>
> 3.1. Eliminate path restrictions entirely
>
>   If Tor decided to stop enforcing /16, node family, and also allowed the
>   guard node to be chosen twice in the path, then under normal conditions,
>   it should retain the use of its primary guard.
>
>   This approach is not as extreme as it seems on face. In fact, it is hard
>   to come up with arguments against removing these restrictions. Tor's
>   /16 restriction is of questionable utility against monitoring, and it can
>   be argued that since only good actors use node family, it gives influence
>   over path selection to bad actors in ways that are worse than the benefit
>   it provides to paths through good actors[10,11].
>
>   However, while removing path restrictions will solve the immediate
>   problem, it will not address other instances where Tor temporarily opts
>   to use a second guard due to congestion, OOM, or failure of its primary
>   guard, and we're still running into bugs where this can be adversarially
>   controlled or just happen randomly[5].
>

Hello Mike,

IMO we should not portray removing the above path restrictions as
something extreme, until we have good evidence that those path
restrictions offer something positive in the cases we are
examining. Personally, I see this proposal's result of making Sybil
attacks twice as fast (section 2.3) as an equally radical change.

That said, I feel that this proposal is valuable and I'm not trying to
say that I don't like this proposal, or that I don't buy the
arguments. I'm trying to say that I don't know how to weigh the
tradeoffs here so that I gain confidence, because I'm not sure how
people are trying to attack Tor clients right now.

The way I see it is that if we adopt this proposal:
+ We are better defended against active attacks like congestion attacks
  and OOM/DoS attacks.
+ We improve network health by reducing congestion to certain guards.
- Sybil attacks can be performed twice as quickly.

IMO, we should not rush this decision for 034, given that it's a
consensus parameter change that can happen instantaneously. However, we
should do the following soon:

1) Accept that there is no single best guard topology, and fix our
   codebase to work well with either one guard or two guards, so that we
   are ready for when we flip the switch. Perhaps we can fix
   #25753/#25705/etc. in a way that works well both now and in the
   2-guard future?

2) Investigate our current prop#271 codebase and make sure that the
   paragraph below will work as intended if we do this proposal.

3) Involve more people in this (Roger, NRL, etc.) and have them think
   about this, to gain more confidence.

Do you think this approach is too slow or backwards?

To speed things up, I already did (2) below:

>   Note that for this analysis to hold, we have to ensure that nodes that
>   are at RESOURCELIMIT or otherwise temporarily unresponsive do not cause
>   us to consider other primary guards beyond the two we have chosen.
>   This is accomplished by setting guard-n-primary-guards to 2 (in addition
>   to setting guard-n-primary-guards-to-use to 2). With this parameter
>   set, the proposal 271 algorithm will avoid considering more than our two
>   guards, unless *both* are down at once.
>

OK, the above paragraph is basically the juice of this proposal! I spent
all day today investigating how this would work! The results are very
positive, but also not 100% straightforward because of the various
intricacies of prop#271.

[First of all, there is no way to simulate the above topology using the
config file, because if you set NumEntryGuards=2 in your torrc, Tor will
set up 4 primary guards because of the way get_n_primary_guards()
works. So I hacked my Tor client to *have* 2 primary guards
(guard-n-primary-guards), and *use* 2 primary guards
(guard-n-primary-guards-to-use).]

The good part: This topology works exactly how the proposal wants it to
work. Because of the way primary guards work, you will have 2 primary
guards, and if one of them goes down you will always use the other
primary, instead of falling back to a third guard. That's excellent, but
it also abuses the primary-guard feature: in a good way, yet not the way
we intended it to be used.

Here are the side-effects from this abuse:

- By reducing the number of primaries from three to two, it's more
  likely that all primaries are down at a given time. Prop#271 was
  written with an inherent assumption that one of the primaries will
  always be reachable, because when all of them are down the code goes
  into an "oh 

Re: [tor-dev] Setting NumEntryGuards=2

2018-03-27 Thread George Kadianakis
Mike Perry  writes:

> [ text/plain ]
> Back in 2014, Tor moved from three guard nodes to one guard node:
> https://blog.torproject.org/improving-tors-anonymity-changing-guard-parameters
> https://trac.torproject.org/projects/tor/ticket/12206
>
> We made this change primarily to limit points of observability of entry
> into the Tor network for clients and onion services, as well as to
> reduce the ability of an adversary to track clients as they move from
> one internet connection to another by their choice of guards.
>
> At the time, I was in favor of two entry guards but did not have a
> strong preference, and we ended up choosing one guard. After seeing
> various consequences of using only one entry guard, I think a much
> stronger case can now be made for bumping back up to two.
>
> Roger suggested that I enumerate the pros and cons of this increase on
> this mailing list, so we can discuss and consider this switch. So here
> is my attempt at that list. Let's start with a more in-depth recap of
> the one-guard arguments, along with some recent observations that change
> things.
>
>
> Arguments for staying with just one guard:
>
> 1. One guard means less observability.
>

Hello!

Here is some small analysis of Sybil resistance on 1-guard vs 2-guards.

I think this analysis is important even given the #14917 issue, since we
could defend against that by lifting those particular path
restrictions. I agree that's not ideal, but IMO it's definitely
something we should consider as part of a thorough analysis, since by
solving #14917 correctly we could still maintain connection to just
1-guard (assuming it's a stable node).

===

So on to Sybil resistance analysis:

For a 5% bandwidth adversary and a single guard, an attacker would have
a 50% chance to Sybil your G1 (i.e. deanonymize you) after 14
rotations. For a 3.5-month rotation frequency, this means you would
expect your guard to remain uncompromised by Sybils for about 4 years.
For a 10% adversary you need 7 rotations for 50%, so that's 2 years.

Now if we go to two guards, a 5% adversary would need 2 years to Sybil
your G1, whereas a 10% adversary could do that in 1 year.

All the above numbers assume a completely stable guard node that you
only switch because its lifetime expired, and not because of
reachability issues etc. So in the real world, the actual guarantees are
probably weaker.
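
These figures follow from a simple geometric model (a sketch under my assumptions: 3.5-month rotations, an adversary holding bandwidth fraction f, and each guard pick treated as an independent trial):

```python
import math

def rotations_to_sybil(f, guards=1, p=0.5):
    # Rotations until P(at least one adversary-controlled guard chosen) >= p,
    # with `guards` independent picks per rotation, each landing on the
    # adversary with probability f.
    return math.ceil(math.log(1 - p) / (guards * math.log(1 - f)))

assert rotations_to_sybil(0.05) == 14            # x 3.5 months ~= 4 years
assert rotations_to_sybil(0.10) == 7             # ~2 years
assert rotations_to_sybil(0.05, guards=2) == 7   # ~2 years
assert rotations_to_sybil(0.10, guards=2) == 4   # ~1 year
```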

In general, I obviously feel more comfortable with the single guard
results, but also the dual-guard results are not so bad.

===

Now with regards to engineering, here is also something to be said about
how the prop271 algorithm will handle NumEntryGuards=2:

IIRC, the way it's currently handled means that if either of the first
two primary guards is down, the algorithm will skip it and choose between
the next two available, potentially going to the third primary guard
in the list [see how select_entry_guard_for_circuit() uses
get_n_primary_guards_to_use()]. This might not be ideal, and perhaps we
should tolerate some brief instability of the primary guards so that
we don't expose ourselves to even more guards...
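
The behavior in question can be sketched as follows (assumption: a heavily simplified model of select_entry_guard_for_circuit(), not tor's actual code):

```python
def usable_primaries(primary_guards, n_to_use=2):
    # Skip primaries believed to be down; the next primary in the ordered
    # list slides into the usable window, exposing the client to a guard
    # beyond the first n_to_use.
    up = [g for g in primary_guards if not g["down"]]
    return up[:n_to_use]

primaries = [{"id": "G1", "down": True},
             {"id": "G2", "down": False},
             {"id": "G3", "down": False}]
# G1 flaps briefly, and G3 (a third guard) is now used:
assert [g["id"] for g in usable_primaries(primaries)] == ["G2", "G3"]
```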

Also, we need to look at how
guard_selection_get_err_str_if_dir_info_missing() will work after we
increase NumEntryGuards, since that function is what caused #21969, and
we should make sure that it's not gonna get more annoying if we bump up
the number of guards.

That's it for now! :)

Cheers!


Re: [tor-dev] Setting NumEntryGuards=2

2018-03-22 Thread George Kadianakis
David Goulet <dgou...@torproject.org> writes:

> [ text/plain ]
> On 22 Mar (13:46:36), George Kadianakis wrote:
>> Mike Perry <mikepe...@torproject.org> writes:
>> 
>> > [ text/plain ]
>> > Back in 2014, Tor moved from three guard nodes to one guard node:
>> > https://blog.torproject.org/improving-tors-anonymity-changing-guard-parameters
>> > https://trac.torproject.org/projects/tor/ticket/12206
>> >
>> > We made this change primarily to limit points of observability of entry
>> > into the Tor network for clients and onion services, as well as to
>> > reduce the ability of an adversary to track clients as they move from
>> > one internet connection to another by their choice of guards.
>> >
>> > At the time, I was in favor of two entry guards but did not have a
>> > strong preference, and we ended up choosing one guard. After seeing
>> > various consequences of using only one entry guard, I think a much
>> > stronger case can now be made for bumping back up to two.
>> >
>> 
>> 
>> 
>> > Roger suggested that I enumerate the pros and cons of this increase on
>> > this mailing list, so we can discuss and consider this switch. So here
>> > is my attempt at that list. Let's start with a more in-depth recap of
>> > the one-guard arguments, along with some recent observations that change
>> > things.
>> >
>> >
>> > Arguments for staying with just one guard:
>> >
>> > 1. One guard means less observability.
>> >
>> > As Roger put it in the above blog post: "I think the analysis of the
>> > network-level adversary in Aaron's paper is the strongest argument for
>> > restricting the variety of Internet paths that traffic takes between the
>> > Tor client and the Tor network."
>> > http://freehaven.net/anonbib/#ccs2013-usersrouted
>> >
>> > Unfortunately, we have since learned that Tor's path selection has the
>> > effect of giving the adversary the ability to generate at least one
>> > additional observation path. We first became aware of this in
>> > https://trac.torproject.org/projects/tor/ticket/14917, where the change
>> > to one guard allowed an adversary to discover your guard by choosing it
>> > as a rendezvous point and observing the circuit failure. After the fix
>> > for #14917, the onion service will build a connection to a second guard
>> > that it keeps in reserve. By using this attack (as well as a similar but
>> > more involved attack with unique exit policies and carefully chosen /16
>> > exit node subnets), the adversary can force clients to be observed over
>> > two paths whenever they like.
>> >
>> > So while we may get benefit for moving from three guards to two guards,
>> > we don't get much (or any) benefit from moving to two guards to one
>> > guard against an active adversary that either connects to onion
>> > services, or serves content to clients and runs exits.
>> >
>> 
>> Hmm, that's a fair point. However, the fact that this behavior exists
>> currently does not mean that it's the best we can do with what we have.
>> 
>> Example of what we can do to stop this bad behavior: instead of using
>> our second guard when our "exit" conflicts with our first guard like
>> this: [G2 -> M1 -> G1], we could instead make a 4-hop circuit as
>> follows: [G1 -> M1 -> M2 -> G2]. This would stop us from using our
>> second guard and would hide the obvious signal you are worrying about.
>> (I see that dgoulet also suggested that in the ticket comment:7)
>
> For hidden service, I think you meant [G1 -> M1 -> M2 -> *G1*] considering
> that G1 is the chosen RP. But also, I think my comment was very wrong 3 years
> ago: a service already builds a 4-hop circuit to the RP, so it should then be this in
> your example?: [G1 -> M1 -> M2 -> M3 -> G1]
>

Yep, you are right in everything here.

> This makes it VERY easy for a Guard node to learn that it is the guard of a
> specific .onion but considering an evil guard of a .onion, there are other
> effective methods to learn it, so I'm not convinced that this path will be
> worse, just maybe bad for performance.
>

Why bad for performance? It will be the same length as it is currently.

> But also this would violate the tor protocol rule of "never having the same hop
> in the path", so overall making an exception for this makes me worry a bit :S.
>

I think this is your main objection to this approach, and I understand
it, but I'm not sure how

Re: [tor-dev] Setting NumEntryGuards=2

2018-03-22 Thread George Kadianakis
Mike Perry  writes:

> [ text/plain ]
> Back in 2014, Tor moved from three guard nodes to one guard node:
> https://blog.torproject.org/improving-tors-anonymity-changing-guard-parameters
> https://trac.torproject.org/projects/tor/ticket/12206
>
> We made this change primarily to limit points of observability of entry
> into the Tor network for clients and onion services, as well as to
> reduce the ability of an adversary to track clients as they move from
> one internet connection to another by their choice of guards.
>
> At the time, I was in favor of two entry guards but did not have a
> strong preference, and we ended up choosing one guard. After seeing
> various consequences of using only one entry guard, I think a much
> stronger case can now be made for bumping back up to two.
>

Hello Mike,

thanks for writing this post. Thinking about entry guards is extremely
important since guards and path selection is pretty much the whole
security of Tor.

However, we should think hard here before flapping from one configuration
to another. In the grand scheme of things, I see the positives of moving
to two guards but also the positives of staying with one guard; I think
we need more data to decide what's best and for which threat models.

In general, the main argument for me to stay with one guard is to
minimize client exposure to guards over a period of time. If we choose
two guards instead of one, clients will expose themselves to twice as
many guards over time (not taking into account flaky, unreachable
guards). Perhaps we could compensate for that by increasing the lifetime of
guards if we switch to two guards... I think simulations and graphs here
to show exposure of guards per number of guards would be useful, and we
have some of those already for prop247.
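
As a rough illustration of the exposure argument (my own back-of-the-envelope model, not from any Tor spec: each guard slot is replaced once per expired lifetime, defaulting to 3.5 months):

```python
def expected_guards(n_slots, months, lifetime_months=3.5):
    # Distinct guards a stable client is expected to use over `months`:
    # the initial set plus one replacement per elapsed lifetime, per slot.
    return n_slots * (1 + months // lifetime_months)

assert expected_guards(1, 42) == 13.0  # one guard slot over 3.5 years
assert expected_guards(2, 42) == 26.0  # two slots double the exposure
# doubling the guard lifetime compensates for the second slot:
assert expected_guards(2, 42, lifetime_months=7.0) == 14.0
```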

OTOH the main arguments for me to switch to two guards is not so much
security but performance improvements and reducing congestion of guard
nodes.

> Roger suggested that I enumerate the pros and cons of this increase on
> this mailing list, so we can discuss and consider this switch. So here
> is my attempt at that list. Let's start with a more in-depth recap of
> the one-guard arguments, along with some recent observations that change
> things.
>
>
> Arguments for staying with just one guard:
>
> 1. One guard means less observability.
>
> As Roger put it in the above blog post: "I think the analysis of the
> network-level adversary in Aaron's paper is the strongest argument for
> restricting the variety of Internet paths that traffic takes between the
> Tor client and the Tor network."
> http://freehaven.net/anonbib/#ccs2013-usersrouted
>
> Unfortunately, we have since learned that Tor's path selection has the
> effect of giving the adversary the ability to generate at least one
> additional observation path. We first became aware of this in
> https://trac.torproject.org/projects/tor/ticket/14917, where the change
> to one guard allowed an adversary to discover your guard by choosing it
> as a rendezvous point and observing the circuit failure. After the fix
> for #14917, the onion service will build a connection to a second guard
> that it keeps in reserve. By using this attack (as well as a similar but
> more involved attack with unique exit policies and carefully chosen /16
> exit node subnets), the adversary can force clients to be observed over
> two paths whenever they like.
>
> So while we may get benefit for moving from three guards to two guards,
> we don't get much (or any) benefit from moving to two guards to one
> guard against an active adversary that either connects to onion
> services, or serves content to clients and runs exits.
>

Hmm, that's a fair point. However, the fact that this behavior exists
currently does not mean that it's the best we can do with what we have.

Example of what we can do to stop this bad behavior: instead of using
our second guard when our "exit" conflicts with our first guard like
this: [G2 -> M1 -> G1], we could instead make a 4-hop circuit as
follows: [G1 -> M1 -> M2 -> G2]. This would stop us from using our
second guard and would hide the obvious signal you are worrying about.
(I see that dgoulet also suggested that in the ticket comment:7)

> 2. Guard fingerprintability is lower with one guard
>
> An adversary who is watching netflow connection records for an entire
> area is able to track users as they move from internet connection to
> internet connection through the degree of uniqueness of their guard
> choice. There is much less information in two guards than three, but
> still significantly more than with one guard:
> https://trac.torproject.org/projects/tor/ticket/9273#comment:3
>
> But, even with one guard, if there are not very many Tor users in your
> area, you still may be trackable. "Guard bucket" designs are discussed
> on the blog post and in related tickets, but they are complicated and
> involve tricky tradeoffs (see
> 

Re: [tor-dev] Enhancement for Tor 0.3.4.x

2018-02-19 Thread George Kadianakis
Nick Mathewson  writes:

> [ text/plain ]
> On Mon, Feb 12, 2018 at 2:32 PM, David Goulet  wrote:
>> Hello everone!
>>
>> As an effort to better organize our 0.3.4.x release for which the merge 
>> window
>> opens in 3 days (Feb 15th, 2018), we need to identify the enhancement(s) that
>> we want so we can better prioritize the development during the merge window
>> timeframe (3 months).
>>
>> Each feature should have its ticket marked for 0.3.4 milestone and with an
>> Owner set so we know who is "leading" that effort. It doesn't have to be the
>> person who code the whole thing but should be a good point of contact to 
>> start
>> with (and it can change over time as well).
>>
>> It is possible that an enhancement can have more than one ticket so in this
>> case, please make a "parent" ticket that explains the whole thing and child
>> tickets assigned to it.
>>
>> The network team just had its weekly meeting and if I recall correctly, these
>> enhancement should be planned for 0.3.4 (please the people who works on this,
>> can you tell us the tickets and make sure they are up to date?)
>>
>> - Privcount (prop#280)
>> - large CREATE cells (prop#249)
>>
>> If you plan to do a set of patches for a feature or enhancement, please do
>> submit it on this thread and make sure a proper ticket exists with an Owner.
>
> My biggest additional wishlist items for 0.3.4 are:
>
>   * ZSTD tuning (#24368)
>   * Fewer wakeups when idle (#14039)
>
> And as a reach:
>   * Improved TLS 1.3 support
>

Hello,

I agree it's a great idea to prioritize features for the next releases
so that we don't go blind!

Question wrt TLS 1.3: Is it a lot of work to support TLS 1.3? And what
do we gain by supporting it? Should we prioritize it for this release or
for a subsequent one?

Cheers!


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2018-02-02 Thread George Kadianakis
Georg Koppen <g...@torproject.org> writes:

> George Kadianakis:
>> As discussed in this mailing list and in IRC, I'm posting a subsequent
>> version of this proposal. Basic improvements:
>> - Uses a new custom HTTP header, instead of Alt-Svc or Location.
>> - Does not do auto-redirect; it instead suggests the onion based on
>>   antonella's mockup: 
>> https://trac.torproject.org/projects/tor/attachment/ticket/21952/21952.png
>
> I don't see that or any particular idea of informing the user in the
> proposal itself, though. I think more about those browser side plans
> should be specified in it (probably in section 2). Right now you are
> quite specific about the redirection part and its pro and cons but
> rather vague on the actual UX improvements (having the header is just
> half of what you need).
>

Hello,

I pushed another commit to the onion-location branch in my repo,
addressing the concerns in GeKo's email:

   https://gitweb.torproject.org/user/asn/torspec.git/commit/?h=onion-location&id=14fc750e3afcd759f4235ab955535a07eed24286

I was not sure what else to put in section 2, but please let me know if
you feel the current content is lacking!

Also, I removed the improvements section because I was not sure what to
put there.

As a side thing, I found this extension which does the bottombar part of
this proposal, but it gets the redirection list from a local file
instead of an HTTP header: https://github.com/Someguy123/HiddenEverywhere

Cheers!


>> 
>> 
>> 
>> UX improvement proposal: Onion redirects using Onion-Location HTTP header
>> 
>> 
>> 1. Motivation:
>> 
>>Lots of high-profile websites have onion addresses these days (e.g. Tor ,
>
> Tor,
>
>>NYT, blockchain, ProPublica).  All those websites seem confused on what's
>>the right way to inform their users about their onion addresses. Here are
>>some confusion examples:
>>  a) torproject.org does not even advertise their onion address to Tor 
>> users (!!!)
>>  b) blockchain.info throws an ugly ASCII page to Tor users mentioning 
>> their onion
>> address and completely wrecking the UX (loses URL params, etc.)
>>  c) ProPublica has a "Browse via Tor" section which redirects to the 
>> onion site.
>> 
>>Ideally there would be a consistent way for websites to inform their users
>>about their onion counterpart. This would provide the following positives:
>>  + Tor users would use onions more often. That's important for user
>>education and user perception, and also to partially dispel the 
>> darkweb myth.
>>  + Website operators wouldn't have to come up with ad-hoc ways to 
>> advertise
>>their onion services, which sometimes results in complete breakage of
>>the user experience (particularly with blockchain)
>> 
>>This proposal specifies a simple way forward here that's far from perfect,
>>but can still provide benefits and also improve user-education around 
>> onions
>>so that in the future we could employ more advanced techniques.
>> 
>>Also see Tor ticket #21952 for more discussion on this:
>>   https://trac.torproject.org/projects/tor/ticket/21952
>> 
>> 2. Proposal
>> 
>>We introduce a new HTTP header called "Onion-Location" with the exact same
>>restrictions and semantics as the Location HTTP header. Websites can use 
>> the
>>Onion-Location HTTP header to specify their onion counterpart, in the same
>>way that they would use the Location header.
>> 
>>The Tor Browser intercepts the Onion-Location header (if any) and informs
>>the user of the existense of the onion site, giving them the option to 
>> visit
>
> s/existense/existence/
>
>>it. Tor Browser only does so if the header is served over HTTPS.
>> 
>>Browsers that don't support Tor SHOULD ignore the Onion-Location header.
>> 
>> 3. Improvements
>
> Did you plan to write anything here? I guess there are at least UX
> improvements to the ad-hoc things you mentioned at the beginning of the
> proposal.
>
>> 4. Drawbacks
>> 
>> 4.1. No security/performance benefits
>> 
>>While we could come up with onion redirection proposals that provide
>>security and performance benefits, this proposal does not actually provide
>>any of those.
>> 
>>As a matter of fact, the security rem

Re: [tor-dev] Starting with contributing to Anonymous Local Count Statistics.

2018-02-02 Thread George Kadianakis
Aruna Maurya <aruna.maury...@gmail.com> writes:

> Hey!
>
> What is the current status of the project, how much work has been done and
> where can I pick up from?
>

Hi!

The project is currently not being worked on.

Mainly design work has been done so far; no code has been written.
See:   https://lists.torproject.org/pipermail/tor-dev/2017-March/012001.html
   https://lists.torproject.org/pipermail/tor-dev/2017-March/012073.html

I suggest you pick it up by fleshing out the design work and seeing if
it works for you, and then checking out the codebase to see where your
changes would need to go. Perhaps you can also get in touch with Jaskaran
Singh (jvsg1...@gmail.com) who did all the previous design work to see
if he is interested in collaborating!

Cheers!



> On Fri, Feb 2, 2018 at 3:04 PM, Aruna Maurya <aruna.maury...@gmail.com>
> wrote:
>
>>
>> -- Forwarded message --
>> From: George Kadianakis <desnac...@riseup.net>
>> Date: Wed, Jan 31, 2018 at 6:32 PM
>> Subject: Re: [tor-dev] Starting with contributing to Anonymous Local Count
>> Statistics.
>> To: Aruna Maurya <aruna.maury...@gmail.com>, tor-dev@lists.torproject.org
>>
>>
>> Aruna Maurya <aruna.maury...@gmail.com> writes:
>>
>> > Hey!
>> >
>> > I was going through the Tor Volunteer page and came across the Anonymous
>> > local count statistics project. As a student it would be a great starting
>> > point and an even bigger opportunity to get a chance to collaborate and
>> > learn in the process.
>> >
>> > I would like to contribute to it, and would love to start as soon as
>> > possible. It would be great if someone could guide me through.
>> >
>>
>> Hello Aruna,
>>
>> thanks for reaching out.
>>
>> I also find this project interesting. I'd like to help you but my time
>> is quite limited lately.
>>
>> What would you like guidance with?
>>
>> With regards to design, I suggest you take a look at the last comments
>> of this trac ticket:
>> https://trac.torproject.org/projects/tor/ticket/7532#comment:22
>> Particularly it seems like the PCSA algorithm might be a reasonable way
>> forward.
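[As background on the algorithm named above: a minimal sketch of PCSA (Flajolet–Martin probabilistic counting with stochastic averaging) in Python. This is purely illustrative, assumes SHA-256 as the hash function, and is not Tor code:

```python
import hashlib

PHI = 0.77351  # Flajolet-Martin correction constant


class PCSA:
    """Estimate the number of distinct items seen, keeping only m small
    bitmaps instead of remembering the items themselves."""

    def __init__(self, num_maps=64, bits=32):
        self.m = num_maps
        self.bits = bits
        self.bitmaps = [0] * num_maps

    def add(self, item):
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
        bucket = h % self.m  # stochastic averaging: spread items over m maps
        rest = h // self.m
        # rho = index of the lowest set bit of the remaining hash bits
        rho = 0
        while rho < self.bits - 1 and not (rest >> rho) & 1:
            rho += 1
        self.bitmaps[bucket] |= 1 << rho

    def estimate(self):
        # R = average index of the lowest *unset* bit across the bitmaps
        total = 0
        for bm in self.bitmaps:
            r = 0
            while (bm >> r) & 1:
                r += 1
            total += r
        return int(self.m / PHI * 2 ** (total / self.m))
```

Because only the bitmaps are kept, duplicate sightings of the same client never change the estimate, which is what makes this family of algorithms attractive for privacy-preserving user counting.]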
>>
>> With regards to coding, I suggest you familiarize yourself with the Tor
>> codebase. Some specific places to look at would be the way that Tor
>> currently counts users. For example, see geoip_note_client_seen() and
>> its callers, for when bridges register new clients to their stats
>> subsystem. Also check geoip_format_bridge_stats() for when bridges
>> finally report those stats.
>>
>> Let us know if you have any specific questions!
>>
>> Cheers!
>>
>>
>>
>> --
>> Regards,
>> Aruna Maurya,
>> CSE,B.tech,
>> Blog <https://themindreserves.wordpress.com/> | Medium
>> <https://medium.com/@arunamaurya>
>>
>>
>
>
> -- 
> Regards,
> Aruna Maurya,
> CSE,B.tech,
> Blog <https://themindreserves.wordpress.com/> | Medium
> <https://medium.com/@arunamaurya>

