Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread teor
> 
> On 16 Nov 2017, at 00:38, Alec Muffett  wrote:
> 
>> I think it's important to point out that a Tor client is never
>> guaranteed to hold a *definitive* consensus.
>> 
> That's why I say "(mostly) definitive" in my text - my feeling is that a 
> locally-held copy of the consensus to be queried is going to be on average of 
> far higher quality, completeness, and non-stagnancy than something that one 
> tries to scrape out of Onionoo every 15 minutes.

Please don't use a consensus or a tor client to check for exits for
this purpose. It produces significant numbers of false negatives,
because some exits use other IP addresses for their exit traffic.

Using Onionoo or TorDNSEL reduces your false negatives, because it
pulls data from Exitmap to populate exit_addresses. (Tor clients do
not pull data from Exitmap, and that data is not in the consensus.)

> On 16 Nov 2017, at 03:03, Tom Ritter  wrote:
> 
> Detecting exit nodes is error prone, as you point out. Some exit nodes
> have their traffic exit a different address than their listening
> port.[1]
> 
> ...
> [1] Hey does Exonerator handle these?

ExoneraTor uses data from Exitmap, which queries a service through each
exit to discover the address(es) the exit uses to send client requests
to websites.

The list is updated every 24 hours.
So there's really no need to scrape Onionoo every 15 minutes.

>> but now we are discussing weird tor
>> modules that communicate with the Tor daemon to decide whether to
>> redirect clients, so it seems to me like an equally "special" Tor setup
>> for sysadmins.
>> 
> I can see how you would think that, and I would kind-of agree, but at least 
> this would be local and cheap.  Perhaps instead of a magic protocol, it 
> should be a REST API that's embedded in the local Tor daemon?  That would be 
> a really, REALLY common pattern for an enterprise to query.


You can download the set of exit addresses every 24 hours, and write a
small tool that implements a REST API to query it:

https://check.torproject.org/exit-addresses
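The core of such a tool could be a few lines that pull the ExitAddress entries out of that file. A sketch (the fingerprint and timestamps in the sample below are invented, but the line format matches the exit-addresses dump; refresh the file every 24 hours):

```python
# Sketch of the "small tool": parse ExitAddress lines from the dump served
# at https://check.torproject.org/exit-addresses into a set, then answer
# membership queries locally. The sample data below is made up; only the
# line format is the real one.

def parse_exit_addresses(text):
    """Return the set of exit IP addresses in an exit-addresses dump."""
    exits = set()
    for line in text.splitlines():
        parts = line.split()
        # e.g. "ExitAddress 176.10.104.243 2017-11-15 14:21:37"
        if len(parts) >= 2 and parts[0] == "ExitAddress":
            exits.add(parts[1])
    return exits

SAMPLE = """\
ExitNode 0011223344556677889900112233445566778899
Published 2017-11-15 13:34:45
LastStatus 2017-11-15 14:02:45
ExitAddress 176.10.104.243 2017-11-15 14:21:37
"""

exits = parse_exit_addresses(SAMPLE)
print("176.10.104.243" in exits)  # True
print("192.0.2.1" in exits)       # False
```

Wrapping that set in a REST endpoint is then a few more lines in any web framework.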

In fact, you could even adapt the "check" service to your needs, if it
doesn't do what you want already:

https://gitweb.torproject.org/check.git

Is this the kind of JSON reply you would want?

https://check.torproject.org/api/ip

{"IsTor":true,"IP":"176.10.104.243"}
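Handling that reply is a one-liner in most languages. A hedged Python sketch (fetching over HTTPS is elided; this just parses the JSON shape shown above):

```python
import json

# The JSON shape returned by https://check.torproject.org/api/ip, as shown
# above. A real client would fetch it over HTTPS and cache the answer;
# here we only parse the reply.
reply = '{"IsTor":true,"IP":"176.10.104.243"}'

info = json.loads(reply)
print(info["IsTor"], info["IP"])  # True 176.10.104.243
```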

Or for the interactive version, see:

https://check.torproject.org/cgi-bin/TorBulkExitList.py

(And if you supply a destination port, it's more accurate, because it
checks exit policies as well.)

T

--
Tim / teor

PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n




___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Connection, Channel and Scheduler - An Intense Trek

2017-11-15 Thread Nick Mathewson
On Mon, Oct 30, 2017 at 3:57 PM, David Goulet  wrote:

> Hello everyone!
>
> DISCLAIMER: The following is enormous and tries to describe in some level
> of detail the situation in tor with connection<->channel<->scheduler. This
> comes after we've merged the KIST scheduler, when we realized many things
> weren't what they were supposed to be or meant for. In the end, I'm asking
> questions so we can move forward with development and fixing things.
>
> Last thing before you start your journey into the depths of Tor: the 3
> subsystems I'm going to talk about, and how they interact, are very
> complicated, so it is quite possible that I have gotten things wrong or
> missed some details. Please point them out so we can better document,
> be better informed, and make good decisions. I plan to document as much
> as I can from this process for a new file in the torguts.git repository.
>
>
Snipping the analysis, and going straight to the conclusions.  I'll leave
one sentence in the analysis because it's such a great summary:



> Many things are problematic currently


They sure are. :)

> == Part Four - The Conclusion ==
>
> Through this epic journey, we've discovered some issues as well as design
> problems. Now the question is: what should and can we do about it?
>
> In a nutshell, there are a couple of questions we should ask ourselves and
> try to answer so we can move forward:
>
> * I believe now that we should seriously discuss the relevance of channels.
>   Originally, the idea was good: providing an abstraction layer for the
>   relay-to-relay handshake and for sending/processing cells related to the
>   protocol. But, as of now, they are only half doing it.
>
>   There is an important cost in code and maintenance of something that is
>   not properly implemented/finished (the channel abstraction) and also
>   unused. An abstraction implemented for only one thing is not really
>   useful, except maybe to offer an example for others? But we aren't
>   providing a good example right now, imo...
>
>   That being said, we can spend time fixing the channel subsystem, trying
>   to turn it into a nicer interface, fixing all the issues I've described
>   above (and I suspect there might be more) so the cell scheduler can play
>   nicely with channels. Or, we could rip them out, eliminating lots of
>   code and reducing our technical debt. I would like us to think seriously
>   about what we want, because that channel subsystem is _complicated_ and
>   very few of us fully understand it, afaict.
>
>   Which would bring us back to (which is btw basically what we have now,
>   considering the channel queues are useless):
>
> conn inbuf -> circ queue -> conn outbuf
>
>   If we don't want to get rid of channels, the fixes are non-trivial. For
>   starters, we have to decide if we want to keep the channel queue or not,
>   and if yes, we need to almost start from square one in terms of testing,
>   because we would basically be introducing a new layer of queuing cells.
>
>

So, this is the question I'm least sure about. Please take the following as
tentative.

I think that the two choices ("refactor channels" and "rip out channels")
may be less different than we think. Neither one is going to be trivial to
do, and we shouldn't assume that sticking everything together into one big
type will actually make the code _simpler_.

The way I think about the code right now, "channel" is an interface which
"connection_or" implements, and there is no meaningful barrier between
connection_or and channeltls.  I _do_ like the idea of keeping some kind of
abstraction barrier, though: a "channel" is "whatever we can send and
receive cells from", whereas an "or_connection" has a lot of other baggage
that comes with it.

From my POV, we *should* definitely abolish the channels' queues, and
minimize the amount of logic that channels do on their own. I'm not sure if
we should rip them out entirely, or just simplify them a lot. I don't think
either is necessarily simpler or less bug-prone than the other.

Perhaps we should sketch out what the new interface would look like?  Or
maybe do an hour or two worth of exploratory hacking on each approach?
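As a starting point for that sketch, the barrier could look something like this (Python used for brevity even though tor itself is C; all names here are hypothetical, not tor's actual API):

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the abstraction barrier described above: a
# "channel" is just "whatever we can send and receive cells from", with no
# queues of its own, while the or_connection baggage lives entirely in the
# concrete implementation.

class Channel(ABC):
    @abstractmethod
    def write_cell(self, cell):
        """Hand one cell to the transport; no channel-side queueing."""

    @abstractmethod
    def num_cells_writeable(self):
        """How many cells the transport can accept right now."""

class ORConnChannel(Channel):
    """Toy stand-in for channeltls/connection_or: cells go straight to an outbuf."""
    def __init__(self):
        self.outbuf = []

    def write_cell(self, cell):
        self.outbuf.append(cell)

    def num_cells_writeable(self):
        return 32 - len(self.outbuf)  # pretend the outbuf holds 32 cells

chan = ORConnChannel()
chan.write_cell(b"destroy")
print(chan.outbuf)  # [b'destroy']
```

The point of the sketch is that the scheduler only ever sees `write_cell` and `num_cells_writeable`, so "conn inbuf -> circ queue -> conn outbuf" holds with no intermediate channel queue.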

(This reminds me of another change I want someday, which is splitting
edge_connection_t into an "edge_connection" type that implements a "stream"
interface: right now, we have quite a few streams that aren't actually edge
connections, but which use the type anyway.)

> * Dealing with the DESTROY cell design issue will require a bit more
>   tricky work, I think. We must not starve a circuit with a DESTROY cell
>   pending to be sent, or else the other side keeps sending data. But we
>   should also not starve all the other circuits, because if we ever need
>   to send a gazillion DESTROY cells in priority, we'll make the relay
>   useless (a DoS vector).
>
>   The question is: do we trust our EWMA policy to be wise enough to pick
>   the circuit in a reasonable 

[tor-dev] Detecting multi-homed exit relays (was: Onion auto-redirects using Alt-Svc HTTP header)

2017-11-15 Thread Philipp Winter
On Wed, Nov 15, 2017 at 10:03:39AM -0600, Tom Ritter wrote:
> Detecting exit nodes is error prone, as you point out. Some exit nodes
> have their traffic exit a different address than their listening
> port.[1]

Right.  It's not trivial for tor to figure out what exit relays are
multi-homed -- at least not without actually establishing circuits and
fetching content over each exit relay.

I just finished an exitmap scan and found 17 exit relays that exit from
an IP address that is different from what's listed in the consensus:

193.171.202.146 -> 193.171.202.150 for 
104.223.123.99  -> 104.223.123.98 for 
87.118.83.3     -> 87.118.82.3 for 
89.31.57.58     -> 89.31.57.5 for 
37.187.105.104  -> 196.54.55.14 for 
77.247.181.164  -> 77.247.181.162 for 
198.211.103.26  -> 185.165.169.23 for 
52.15.62.13     -> 69.181.127.85 for 
138.197.4.77    -> 163.172.45.46 for 
52.15.62.13     -> 104.132.0.104 for 
31.185.27.203   -> 31.185.27.201 for 
104.223.123.101 -> 104.223.123.98 for 
77.247.181.166  -> 77.247.181.162 for 
149.56.223.240  -> 149.56.223.241 for 
88.190.118.95   -> 94.23.201.80 for 
192.241.79.175  -> 192.241.79.178 for 
143.106.60.70   -> 193.15.16.4 for 




Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Philipp Winter
On Tue, Nov 14, 2017 at 02:51:55PM +0200, George Kadianakis wrote:
> Let me know what you think :)

Section 9.4 in the Alt-Svc draft talks about abusing the header for
tracking.  In particular, a malicious website could give each Tor user
a unique onion domain to track their activity.  That's particularly
problematic if the "persist" flag is used in the Alt-Svc header.

Granted, malicious websites can already do that to an extent by serving
unique onion domains on each page load, but we should still keep this
issue in mind.


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Tom Ritter
On 15 November 2017 at 05:35, Alec Muffett  wrote:
> Apologies, I am waiting for a train and don't have much bandwidth, so I will
> be brief:
>
> 1) There is no point in issuing  to anyone unless
> they are accessing  via an exit node.
>
> 2) It's inefficient to issue the header upon every web access by every
> person in the world; when the header is only relevant to 1-in-a-few-thousand
> users, you will be imposing extra bandwidth cost upon the remaining
> 99.99...% -- which is unfair to them

Agreed (mostly). I could see use cases where users not accessing a
website via Tor may wish to know an onionsite is available, but they
are also the vast minority.


> 3) Therefore: the header should only be issued to people arriving via an
> exit node.  The means of achieving this are
>
> a) Location
>
> b) Bending Alt-Svc to fit and breaking web standards
>
> c) Creating an entirely new header
>
> 4) Location already works and does the right thing.  Privacy International
> already use this and issue it to people who connect to their .ORG site from
> an Exit Node.
>
> 5) Bending Alt-Svc to fit, is pointless, because Location already works
>
> 6) Creating a new header? Given (4) and (5) above, the only potential
> material benefit of it that I can see would be to "promote Tor branding" -
> and (subjective opinion) this would actually harm the cause of Tor
> because it is *special*.
>
> 6 Rationale) The majority of the "Dark Net" shade which has been thrown at Tor
> over the past 10 years has pivoted upon "needing special software to
> access", and creating (pardon me) a "special" header to onionify a fetch
> seems to be promoting the weirdness of Tor, again.
>
> The required goal of redirection to the corresponding Onion site does not
> require anything more than a redirect, and - pardon me - but there are
> already 4x different kinds of redirects that are supported by the Location
> header (301, 302, 307, 308) with useful semantics. Why reinvent 4 wheels
> specially for Tor?


I think there are some additional things to gain by using a new header:

Software that understands the header can handle it differently than
Location. I think the notification bar and the 'Don't redirect me to
the onionsite' options are pretty good UI things we should consider.
They're actually not great UX, but it might be 'doing our part' to try
and not confuse users about trusted browser chrome.[0]

Users who _appear_ to be coming from an exit node but are not using
Tor are not blackholed. How common is this? I've seen reports from
users who do this. If I were in a position to, I would consider having
exit node traffic 'blend into' more general non-exit traffic (like a
university connection) just to make the political statement that "Tor
traffic is internet traffic".

Detecting exit nodes is error prone, as you point out. Some exit nodes
have their traffic exit a different address than their listening
port.[1]


Location is really close to what we need, but it is limited in some
ways. I'm still on the fence.


[0] Except of course that notification bars are themselves spoofable
chrome, but let's ignore that for now...
[1] Hey does Exonerator handle these?



On 15 November 2017 at 07:38, Alec Muffett  wrote:
> I can see how you would think that, and I would kind-of agree, but at least
> this would be local and cheap.  Perhaps instead of a magic protocol, it
> should be a REST API that's embedded in the local Tor daemon?  That would be
> a really, REALLY common pattern for an enterprise to query.

This information should already be exposed via the Control Port,
although there would be more work on behalf of the implementer to
parse more information than desired and pare it down to what is
needed.

-tom


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Alec Muffett
I think it's important to point out that a Tor client is never
guaranteed to hold a *definitive* consensus.


That's why I say "(mostly) definitive" in my text - my feeling is that a
locally-held copy of the consensus to be queried is going to be on average
of far higher quality, completeness, and non-stagnancy than something that
one tries to scrape out of Onionoo every 15 minutes.

True "definitiveness" can wait. A solution that does not require treading
beyond the local area network for a "good enough" result is a sufficient
90+% solution :-)


If we were to create "the definitive exit node oracle" we would need a
Tor client that polls the dirauths the second a new consensus comes out,


So let's not do that, then.


Furthermore, you said that enterprises might be spooked out by
tor-specific "special" HTTP headers,


Yes.


but now we are discussing weird tor
modules that communicate with the Tor daemon to decide whether to
redirect clients, so it seems to me like an equally "special" Tor setup
for sysadmins.


I can see how you would think that, and I would kind-of agree, but at least
this would be local and cheap.  Perhaps instead of a magic protocol, it
should be a REST API that's embedded in the local Tor daemon?  That would
be a really, REALLY common pattern for an enterprise to query.

How about that?

- alec


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread George Kadianakis
Alec Muffett  writes:

> On 15 Nov 2017 12:18, "Iain R. Learmonth"  wrote:
>
> Is this not what TorDNSEL does?
> https://www.torproject.org/projects/tordnsel.html.en
>
>
> Hi Iain!
>

Hey Alec,

thanks for the feedback.

> That certainly sounds like it will give you the answer! But although it
> would give the right kind of answer, it is not what I am asking for.
>
> At the scale of websites like Facebook or the New York Times, a timely
> response is required for the purposes of rendering a page. The benefits of
> solving the problem at "enterprise" scale then trickle down to
> implementations of all sizes.
>
> Speaking as a programmer, it would be delightfully easy to make a DNS query
> and wait for a response to give you an answer... but then you have to send
> the query, wait for propagation, wait for a result, trust the result, debug
> cached versions of the results, leak the fact that all these lookups are
> going on, and so forth.
>
> This all adds up to latency and cost, as well as leaking the metadata of
> your lookups; plus your local DNS administrator will hate you (cf: doing
> name resolution for every webpage fetch to write Apache logs is frowned
> upon; better to log the raw IP address and resolve it later if you need it).
>
> On the other hand: if you are running a local Tor daemon, a copy of the
> entire consensus is held locally and is (basically) definitive.  You query
> it with near zero lookup latency, you get an instant response with no
> practical lag behind "real time", plus there are no men in the middle, and
> there is no unwanted metadata leakage.
>

I think it's important to point out that a Tor client is never
guaranteed to hold a *definitive* consensus. Currently Tor clients can
stay perfectly happy with a consensus that is up to 3 hours old, even if
they don't fetch the latest one (which gets made every hour).

In general, the Tor network does not have a definitive state at any
point, and different clients/relays can have different states at the
same time.

If we were to create "the definitive exit node oracle" we would need a
Tor client that polls the dirauths the second a new consensus comes out,
and maybe even then there could be desynchs. Perhaps it's worthwhile
doing such a thing, and maybe that's exactly what tordnsel is doing, but
it's something that can bring extra load to the dirauths and should not
be done in many instances.

Furthermore, you said that enterprises might be spooked out by
tor-specific "special" HTTP headers, but now we are discussing weird tor
modules that communicate with the Tor daemon to decide whether to
redirect clients, so it seems to me like an equally "special" Tor setup
for sysadmins.


Re: [tor-dev] Understanding the guard/md issue (#21969)

2017-11-15 Thread George Kadianakis
George Kadianakis  writes:

> Hey Tim,
>

OK updates here.

We merged #23895 and #23862 to 032 and master.

#23817 is now in needs_review and hopefully will get in the next 032 alpha.
I think this next alpha should be much better in terms of mds.

Next tickets in terms of importance should probably be #23863 and #24113.
I have questions/feedback in both of them, and I'm ready to move in.

Cheers!


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Alec Muffett
On 15 Nov 2017 12:18, "Iain R. Learmonth"  wrote:

Is this not what TorDNSEL does?
https://www.torproject.org/projects/tordnsel.html.en


Hi Iain!

That certainly sounds like it will give you the answer! But although it
would give the right kind of answer, it is not what I am asking for.

At the scale of websites like Facebook or the New York Times, a timely
response is required for the purposes of rendering a page. The benefits of
solving the problem at "enterprise" scale then trickle down to
implementations of all sizes.

Speaking as a programmer, it would be delightfully easy to make a DNS query
and wait for a response to give you an answer... but then you have to send
the query, wait for propagation, wait for a result, trust the result, debug
cached versions of the results, leak the fact that all these lookups are
going on, and so forth.

This all adds up to latency and cost, as well as leaking the metadata of
your lookups; plus your local DNS administrator will hate you (cf: doing
name resolution for every webpage fetch to write Apache logs is frowned
upon; better to log the raw IP address and resolve it later if you need it).

On the other hand: if you are running a local Tor daemon, a copy of the
entire consensus is held locally and is (basically) definitive.  You query
it with near zero lookup latency, you get an instant response with no
practical lag behind "real time", plus there are no men in the middle, and
there is no unwanted metadata leakage.

If the Tor daemon is on the local machine, then the lookup cost is
near-zero, and - hey! - you are encouraging more people to run more tor
daemons, which (as above) has to be a good thing.

So: the results are very close to what TorDNSEL provides, but what I seek
is something with different and better latency, security, reliability and
privacy qualities than TorDNSEL offers.

- alec


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Iain R. Learmonth
Hi,

On 15/11/17 11:35, Alec Muffett wrote:
> 8) So, to pass concrete advice on the basis of experience: rather than
> pursue novel headers and reinvent a bunch of established,
> widely-understood web redirection technologies, I would ask that Tor
> focus its efforts instead upon providing a service - perhaps a listener
> service embedded in little-t tor as an enable-able service akin to
> SOCKSListener - which can accept a request from ,
> receive a newline-terminated IP address, and return a set of flags
> associated with that IP (exit node, relay, whatever) - or "none" where
> the IP is not part of the tor network.  Riff on this protocol as you see
> fit.

Is this not what TorDNSEL does?

https://www.torproject.org/projects/tordnsel.html.en

Thanks,
Iain.





Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread Alec Muffett
Apologies, I am waiting for a train and don't have much bandwidth, so I
will be brief:

1) There is no point in issuing  to anyone unless
they are accessing  via an exit node.

2) It's inefficient to issue the header upon every web access by every
person in the world; when the header is only relevant to
1-in-a-few-thousand users, you will be imposing extra bandwidth cost upon
the remaining 99.99...% -- which is unfair to them

3) Therefore: the header should only be issued to people arriving via an
exit node.  The means of achieving this are

a) Location

b) Bending Alt-Svc to fit and breaking web standards

c) Creating an entirely new header

4) Location already works and does the right thing.  Privacy International
already use this and issue it to people who connect to their .ORG site from
an Exit Node.

5) Bending Alt-Svc to fit, is pointless, because Location already works

6) Creating a new header? Given (4) and (5) above, the only potential
material benefit of it that I can see would be to "promote Tor branding" -
and (subjective opinion) this would actually harm the cause of Tor
because it is *special*.

6 Rationale) The majority of the "Dark Net" shade which has been thrown at Tor
over the past 10 years has pivoted upon "needing special software to
access", and creating (pardon me) a "special" header to onionify a fetch
seems to be promoting the weirdness of Tor, again.

The required goal of redirection to the corresponding Onion site does not
require anything more than a redirect, and - pardon me - but there are
already 4x different kinds of redirects that are supported by the Location
header (301, 302, 307, 308) with useful semantics. Why reinvent 4 wheels
specially for Tor?
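For illustration, the Location approach from point 4 can be sketched as a conditional redirect keyed on the client address (WSGI in Python; the exit set and onion URL below are placeholders, not real data):

```python
# Hedged sketch of point 4: answer visitors arriving from a known exit with
# a Location redirect to the onion site. TOR_EXITS and ONION_URL are
# placeholders; in practice the set comes from an exit list refreshed
# periodically.

TOR_EXITS = {"176.10.104.243"}
ONION_URL = "http://exampleonionaddress.onion/"

def app(environ, start_response):
    client = environ.get("REMOTE_ADDR", "")
    if client in TOR_EXITS:
        # 301/302/307/308 all work here; 307/308 preserve the request method.
        start_response("307 Temporary Redirect", [("Location", ONION_URL)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, non-Tor visitor\n"]
```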

7) Story: when I was implementing the Facebook onion, I built infra to
support such (eventual) redirection and/or exit-node-usage tracking. Hit "
facebook.com/si/proxy/" from Tor/NonTor to see it in action. The most
challenging thing for me was to get a reliable and cheap way to look-up,
locally, quickly, cheaply and reliably, whether a given IP address
corresponded to an exit node. The closest that I could get to that idea was
to scrape Onionoo every so often and to cache the results into a
distributed, almost-memcache-like table for the entire site. ie: squillions
of machines. This mechanism suffers from all the obvious flaws, notably
Onionoo crashes and/or "lag" behind the state of the consensus.

8) So, to pass concrete advice on the basis of experience: rather than
pursue novel headers and reinvent a bunch of established, widely-understood
web redirection technologies, I would ask that Tor focus its efforts
instead upon providing a service - perhaps a listener service embedded in
little-t tor as an enable-able service akin to SOCKSListener - which can
accept a request from , receive a newline-terminated IP
address, and return a set of flags associated with that IP (exit node,
relay, whatever) - or "none" where the IP is not part of the tor network.
Riff on this protocol as you see fit.
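The core of that lookup protocol is tiny. A minimal sketch (Python; the flag names and example data are assumptions, in the "riff on this as you see fit" spirit):

```python
# Sketch of the listener protocol described above: read one newline-
# terminated IP address, reply with the flags for that IP, or "none" if it
# is not part of the Tor network. The flag table is example data, standing
# in for a lookup against the local tor daemon's view of the network.

FLAGS = {
    "176.10.104.243": ["exit", "relay"],
}

def handle_line(line):
    ip = line.strip()
    flags = FLAGS.get(ip)
    return (" ".join(flags) if flags else "none") + "\n"

print(handle_line("176.10.104.243\n"))  # exit relay
print(handle_line("192.0.2.1\n"))       # none
```

Wrapped in a socket server, a web frontend would open a local connection, write `$CLIENT_ADDR\n`, and branch on whether the reply contains `exit`.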

This would mean more people running more tor daemons in more datacentres
(and possibly configuring some of them as relays) - using this lookup
service to establish quickly whether $CLIENT_ADDR is an exit node or not,
and whether it should be issued "308 Permanent Redirect With Same Method"

I think this is a better goal for Tor to be pursuing.  What do you think?

- alec


Re: [tor-dev] UX improvement proposal: Onion auto-redirects using Alt-Svc HTTP header

2017-11-15 Thread George Kadianakis
Tom Ritter  writes:

> I am a big proponent of websites advertising .onions in their Alt-Svc.
>
>> 4.2. No security/performance benefits
>>
>>While we could come up with auto-redirect proposals that provide security
>>and performance benefits, this proposal does not actually provide any of
>>those.
>>
>>As a matter of fact, the security remains the same as connecting to normal
>>websites (since we trust its HTTP headers), and the performance gets worse
>>since we first need to connect to the website, get its headers, and then
>>also connect to the onion.
>>
>>Still _all_ the website approaches mentioned in the "Motivation" section
>>suffer from the above drawbacks, and sysadmins still come up with ad-hoc
>>ways to inform users about their onions. So this simple proposal will still
>>help those websites and also pave the way forward for future auto-redirect
>>techniques.
>
> I envision a future Onion Everywhere extension like HTTPS Everywhere
> that works similar to the HSTS preload list. Crawlers validate a
> websites intention to be in the Onion Everywhere extension, and we
> cache the Alt-Srv information so it is used on first load.
>

Yep, that's yet another cool way to do this. 

>
>> 4.3. Alt-Svc does not do exactly what we want
>>
>>I read in a blog post [*] that using the Alt-Svc header "doesn’t change
>>the URL in the location bar, document.location or any other indication of
>>where the resource is; this is a “layer below” the URL." IIUC, this is
>>not exactly what we want, because users will not notice the onion
>>address, they will not get the user-education part of the proposal, and
>>their connection will still be slowed down.
>>
>>I think we could perhaps change this in Tor Browser so that it rewrites
>>the onion address to make it clear to people that they are now surfing
>>the onionspace.
>>
>>[*]: https://www.mnot.net/blog/2016/03/09/alt-svc
>
>
> I am a big opponent of changing the semantics of Alt-Svc.
>
> We'd have to change the semantics to only do redirection for onion
> domains. We'd also have to figure out how to handle cases where the
> onion lives alongside non-onion (which takes precedence?) We'd also
> have to maintain and carry this patch ourselves because it's pretty
> antithetical to the very intent of the header and I doubt the
> networking team at Mozilla would be interested in maintaining it.
>
> Besides those issues, it also eliminates Alt-Svc as a working option
> to something *else* websites may want: to silently redirect users to
> their .onion _without_ the possibility of confusion for the user by
> changing the address bar. I think Alt-Svc is an option for partial
> petname support in TB.
>
> There is a perfectly functioning mechanism for redirecting users: the
> Location header. It does a lot of what you want: including temporary
> or persistent redirects, updating the address bar. Obviously it doesn't
> work for all users, most don't know what .onion is, so Facebook isn't
> going to deploy a .onion Location redirect even if they attempted to
> detect TB users.
>
> But they could send a new Onion-Redirect header that is recognized and
> processed (with a notification bar) by any UA that supports Tor and
> wants to implement it. This header would have a viable path to uplift,
> support in extensions, and even standardization. Onion Everywhere can
> preload these headers too.
>

Agreed, the semantics of Alt-Svc are not what we want, and it's probably
not a good idea to change it from an engineering/policy perspective.

Establishing our own header, with the same semantics as Location, seems
to be a cleaner way to approach this.

When I find some time (hopefully this or next week) I will fix up the
proposal based on received feedback.

>
> On 14 November 2017 at 11:25, teor  wrote:
>>> 4. Drawbacks
>>
>> You missed the biggest one:
>>
>> If the onion site is down, the user will be redirected to the downed site.
>> (I've used onion site redirects with this issue, it's really annoying.)
>> Similarly, if a feature is broken on the onion site, the user will be
>> redirected to a site they can't use.
>>
>> Or if the user simply wants to use the non-onion site for some reason
>> (maybe they want a link they can share with their non-onion friends,
>> or maybe they don't want to reveal they're using Tor Browser).
>>
>> Users *must* have a way to disable the redirect on every redirect.
>
> Right now, I don't agree with this. (I reserve the right to change my
> mind.) Websites can already redirect users to broken links through
> mistakes. Why is "my onion site is not running" a scenario we want to
> code around but "my subdomain is not running" is not? If a website
> wants to redirect users they should be responsible enough to monitor
> the uptime of their onion domain and keep it running.  Maybe we need
> better tooling for that, but that's