[tor-dev] Hidden Service authorization UI

2014-11-09 Thread George Kadianakis
Hidden Service authorization is a pretty obscure feature of HSes that
can be quite useful for small-to-medium HSes.

Basically, it allows client access control during the introduction
step. If the client doesn't prove itself, the Hidden Service will not
proceed to the rendezvous step.

This allows HS operators to block access at a lower level than the
application layer. It also prevents guard discovery attacks, since the
HS will not show up in the rendezvous. It's also a way for current
HSes to hide their address and list of IPs from the HSDirs (we get
this for free in rend-spec-ng.txt).

In the current HS implementation there are two ways to do authorization:
https://gitweb.torproject.org/torspec.git/blob/HEAD:/rend-spec.txt#l768
The two have different threat models.

In the future "Next Generation Hidden Services" specification there
are again two ways to do authorization:
https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/224-rend-spec-ng.txt#l1446
One way is with a password and the other is with a public key.

I suspect that HS authorization is very rare in the current network,
and if we believe it's a useful tool, it might be worthwhile to make
it more usable by people.

For example, it would be interesting if TBB would allow people to
input a password/pubkey upon visiting a protected HS. Protected HSes
can be recognized by looking at the "authentication-required" field of
the HS descriptor. Typing your password in the browser is much more
usable than editing a config file.

Furthermore, on the server side, as meejah recently suggested [0], it
would be nice if there were a way for HSes to dynamically
add/remove authorized clients using the control port.

[0]: https://lists.torproject.org/pipermail/tor-dev/2014-October/007693.html
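The control-port workflow described above might look something like the sketch below. It assumes stem is installed and that tor accepts hidden-service options via SETCONF as a positional group (which matches the torrc syntax, but has not been verified here); the helper names are invented for illustration, not part of any existing API.

```python
# Sketch: reconfiguring a hidden service's authorized clients at runtime
# over the control port. Assumption: tor accepts the grouped
# hidden-service options via SETCONF; treat this as an illustration of
# the proposal, not a tested recipe.

def auth_client_options(hs_dir, client_names, port_map='80 127.0.0.1:8080'):
    """Build the grouped option list for a stealth-auth hidden service.

    Hidden-service options are positional: the HiddenServiceDir line
    must come first, followed by its sub-options.
    """
    return [
        ('HiddenServiceDir', hs_dir),
        ('HiddenServicePort', port_map),
        ('HiddenServiceAuthorizeClient', 'stealth ' + ','.join(client_names)),
    ]

def set_authorized_clients(hs_dir, client_names):
    from stem.control import Controller  # assumption: stem is available
    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        controller.set_options(auth_client_options(hs_dir, client_names))
```

A controller UI could then call set_authorized_clients('/var/lib/tor/my_hs', ['alice', 'bob']) whenever the operator edits the client list.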
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Griffin Boyce
So most of my work over the next three days is writing and editing 
documentation on hidden services. 

I'm in Boston, and the purpose of this trip is to rewrite existing documentation 
to be more useful; with authenticated hidden services, what's available is 
extremely sparse. GlobaLeaks and SecureDrop have good authenticated hidden 
service setups (and good use cases for them). A friend of mine uses an 
authenticated HS for his personal cloud.  It's more secure for him than logging 
into Dropbox, etc. So they're also useful for mere mortals like us. ;-) 

Is there something you need/want in terms of documentation?

best,
Griffin

PS: yes I'm aware of the hilarious timing of this trip.




Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Andrea Shepard
On Sun, Nov 09, 2014 at 08:18:40AM -0500, Griffin Boyce wrote:
> [...]
> PS: yes I'm aware of the hilarious timing of this trip.

No particular suggestions to offer on documentation, but 'hilarious' may
actually be 'good': in situations like this, where an HS doesn't need to
be open to the general public, authorization denies attackers the ability
to make the HS produce traffic on demand, and thus probably makes it more
resistant to any HS exploits that may have been involved in recent events.

-- 
Andrea Shepard

PGP fingerprint (ECC): BDF5 F867 8A52 4E4A BECF  DE79 A4FF BC34 F01D D536
PGP fingerprint (RSA): 3611 95A4 0740 ED1B 7EA5  DF7E 4191 13D9 D0CF BDA5




Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Andrea Shepard
On Sun, Nov 09, 2014 at 12:50:00PM +, George Kadianakis wrote:
> I suspect that HS authorization is very rare in the current network,
> and if we believe it's a useful tool, it might be worthwhile to make
> it more useable by people.

Yes, HS authorization is rare.  It's rare enough that it was broken
for a whole series of releases and no one noticed or complained.  That
sucks; it should be used more, because it probably does help resist
attacks for a large category of use cases.

> For example, it would be interesting if TBB would allow people to
> input a password/pubkey upon visiting a protected HS. Protected HSes
> can be recognized by looking at the "authentication-required" field of
> the HS descriptor. Typing your password on the browser is much more
> useable than editing a config file.

How would Tor Browser learn about this reason for not being able to
connect, or tell Tor the authentication info?  This is starting to sound
like the SOCKS5 extensions proposed in #6031 to indicate different causes
for connection failures.

-- 
Andrea Shepard

PGP fingerprint (ECC): BDF5 F867 8A52 4E4A BECF  DE79 A4FF BC34 F01D D536
PGP fingerprint (RSA): 3611 95A4 0740 ED1B 7EA5  DF7E 4191 13D9 D0CF BDA5




Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Yawning Angel
On Sun, 9 Nov 2014 16:19:24 +
Andrea Shepard  wrote:

> How would Tor Browser learn about this reason for not being able to
> connect/ tell Tor the authentication info?  This is starting to sound
> like wanting SOCKS5 extensions to indicate different causes for
> connection failures in #6031 did.

Well, prop 229 is on my todo list, though it's not very high up.  The
last time I spoke to people about this, it seemed like a nice-to-have
but not massively important sort of thing, but I'd be more than happy
to rearrange things in that department.

Especially as my tentative plans for obfsng (aka obfs6, depending on how
long it gets stuck in design and deployment) involve 1 KiB keys...

Regards,

-- 
Yawning Angel




Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Garrett Robinson
SecureDrop (and former Firefox) dev here. A few months ago I started
working on a patch to support prompting users for an authenticated
hidden service cookie in the manner of HTTP Basic Auth. [0] We require
journalists who use SecureDrop to download submissions from an
authenticated Tor hidden service, and bootstrapping that for them is
currently a major UX pain point. [1]

The main difficulty was that there was not a clear way to communicate
the HidServAuth info to the Tor Browser's running Tor process. AFAICT,
that is not currently supported in the Tor control protocol, so an
extension to the control protocol would be useful here. It would also be
possible to edit the torrc, reload Tor, and have the TB wait for that,
but that is a) incredibly ugly and b) probably prone to causing all
kinds of fun problems. Haven't tried it yet.
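To make the torrc-based alternative concrete: the client-side syntax is a single HidServAuth line, so whichever delivery mechanism wins, something has to assemble a value like the one below. This is a sketch; the helper name and validation are illustrative, and how to hand the value to an already-running tor is exactly the open question in this thread.

```python
def hidserv_auth_value(onion_address, auth_cookie):
    """Format a client-side HidServAuth value: 'onion-address auth-cookie'.

    This is the torrc syntax; delivering it to a running tor (SETCONF
    vs. a new control command) is the unresolved part.
    """
    onion_address = onion_address.strip().lower()
    if not onion_address.endswith('.onion'):
        raise ValueError('expected a .onion address')
    return '%s %s' % (onion_address, auth_cookie)
```

A browser prompt could then build this value from user input before passing it on, whatever the transport ends up being.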

> How would Tor Browser learn about this reason for not being able to
> connect/ tell Tor the authentication info? This is starting to sound like
> wanting SOCKS5 extensions to indicate different causes for
> connection failures in #6031 did.

My current patch waits for a connection timeout on a .onion, then offers
a tab-modal prompt that says "A connection to a Tor Hidden Service
failed. If you are trying to connect to an authenticated Tor hidden
service, enter your authentication string below:". A SOCKS5 extension
would be even better, to avoid annoying users who mistype onions or who
are trying to access an onion that is down. I included a "Don't ask
again" checkbox but it would probably still be annoying.

Would be interested in hearing ideas about how hard it would be to
extend the control protocol and add a SOCKS5 extension for connection
failures, and if anybody is already working in those directions. I'll
try to return to this patch when I have time in the coming weeks.

[0] https://trac.torproject.org/projects/tor/ticket/8000
[1]
https://github.com/freedomofpress/securedrop/blob/develop/tails_files/README.md





[tor-dev] high latency hidden services

2014-11-09 Thread Mansour Moufid
Hi everyone,

Operation Onymous, the anecdotes about it (I don't think the DoS was a
DoS), the wording of the related legal documents, and the previous CMU
research make me think that traffic confirmation attacks are now
widely used in practice.  Other cat-and-mouse implementation
vulnerabilities may be diversions or parallel construction.

This kind of attack would mean it's game over for HSes that use HTTP or
other low-latency protocols.

Has there been research on integrating high-latency message delivery
protocols with the hidden service model of location hiding?  The
SecureDrop or Pynchon Gate protocols sound like good starting points.
I would love to participate, and encourage everyone to start in this
direction (in your copious free time ;).


Mansour


Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Vlad Tsyrklevich
I'm probably missing significant Tor development history here, but section
5.2 of the Tor design paper mentions using
the domain format x.y.onion, where x is used for authorization and y.onion
is used for the actual addressing. I'm not sure if this idea was
ever taken any further, but it seems preferable to the UI flow
you're talking about and might mean that TBB doesn't have to be updated at
all! The concerns I can see are 1) potentially caching the authorization
component in the history and 2) essentially disallowing sub-domains for
hidden services (a minor problem, since if hidden services want the
security benefits of single-origin policy separation they can just do what
Facebook did and have a separate onion address). Upstreaming this into
the tor daemon would also allow any application to address authenticated
hidden services easily, instead of just TBB.
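A sketch of how tor's SOCKS layer might split such names under this scheme. The label semantics (everything left of the service label is the authorization component) are my reading of the x.y.onion form, not something the design paper specifies in detail.

```python
def split_auth_onion(hostname):
    """Split an 'x.y.onion' name into (auth_token, service_address).

    Assumption: any labels left of the service label form the
    authorization component. Returns (None, name) when no auth
    component is present.
    """
    name = hostname.strip().lower().rstrip('.')
    labels = name.split('.')
    if len(labels) < 2 or labels[-1] != 'onion':
        raise ValueError('not an onion address: %r' % hostname)
    if len(labels) == 2:
        return None, name          # plain y.onion, no auth component
    return '.'.join(labels[:-2]), '.'.join(labels[-2:])
```

One nice property of this shape is that the credential travels inside the hostname, so existing SOCKS clients would not need to change at all.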


Re: [tor-dev] [tor-internal] HS attack blog post (was Re: Hidden services and drug markets takedown)

2014-11-09 Thread A. Johnson
I think the option to rate-limit guard selection is a great idea to defend 
against guard DoS. The downside is possible connection loss even if you're not 
under attack and you just happen to pick flaky guards. In case you're 
interested, I examined this defense, and how often such benign service loss 
would occur, in section 6.B of "The Sniper Attack: Anonymously Deanonymizing 
and Disabling the Tor Network". Table 6 shows the 
probability that this happens for a Tor client that operates continuously for 
two months (after 2-3 months all guards will have expired and the process 
repeats). If, for example, you are willing to require only one active guard 
(a_g=1) and you limit yourself to no more than 4 new guards (r=4) chosen in the 
last 28 days (t=28), then you have a 0.0008 chance of having any downtime 
(whether or not it happens depends on which guards you chose). If you increase 
the number of allowed new guards to 5 (r=5), then the probability of downtime 
is zero.

Cheers,
Aaron

> Got it. Though on the client side, could we have a warning or have an option 
> to hibernate a HS if they have been forced to switch guards N times in the 
> last N minutes or hours or such? This would allow a DoS on the HS of course, 
> but that may be preferable to discovery.
> 
> David Chasteen
> 
> PGP 0x48458ecd78833c0d
> 
> On Nov 9, 2014 12:49 PM, "Matthew Finkel"  wrote:
> On Sun, Nov 09, 2014 at 12:34:01PM -0500, David Chasteen wrote:
> > Would it be possible to create some kind of tor weather-like detection
> > and alert system to warn if such a massive DoS attack were underway such
> > that users could know that perhaps now might not be the best possible
> > time to use Tor? A hidden threat is worse than a known one. We're not
> > going to be able to mitigate every known threat, but making users aware
> > that the threat profile is heightened can allow them to make informed
> > risk decisions.
> >
> 
> Unfortunately we don't receive any real-time statistics from relays,
> and our metrics calculations are run daily, so there's a significant
> delay in any visualizations we generate. Perhaps we can still look for
> and see long-term attacks (> 24-36 hours) but nothing that will
> significantly benefit users shortly after the attack begins.



[tor-dev] HSDir Auth and onion descriptor scraping

2014-11-09 Thread grarpamp
> George K:
> I suspect that HS authorization is very rare in the current network,
> and if we believe it's a useful tool, it might be worthwhile to make
> it more useable by people.

Is anyone making their HSDir onion descriptor scraping patches
available somewhere? I'd suspect the rarity of HS authorization
could also be determined with that since some fields would be
obfuscated and thus not match patterns.

s/scraping/logging/

rend-spec.txt:
2. Authentication and authorization.
2.1. Service with large-scale client authorization
2.2. Authorization for limited number of clients
2.3. Hidden service configuration
2.4. Client configuration


Re: [tor-dev] [HTTPS-Everywhere] "darkweb everywhere" extension

2014-11-09 Thread rufo
This might be a good use for the Alternate-Protocol header currently
used by Chrome to allow opportunistic upgrade from HTTP to SPDY.

See also the Alt-Svc header proposed by the HTTPbis WG earlier this year.


Re: [tor-dev] yes hello, internet supervillain here

2014-11-09 Thread Fears No One
I have some news to report, along with more data.

The August DoS attempt appears to have been a crawler bot after all. An
old friend came forward after reading tor-dev and we laughed about his
dumb crawler bot vs my dumb "must-serve-200-codes-at-everything" nginx
config. His user agent string only accounts for the spike in August, and
I see no evidence of a mass crawl from it in my log reports. The
2014-09_24.old file's spike in traffic doesn't match up with his crawl
times in any way, but he theorizes that somebody else maybe used the
same crawler package. For reference, this directory output shows when
each of his mass onion crawls ended:

drwxrwxr-x  3 username group 4096 Jul 27 04:30 onion-1
drwxrwxr-x  3 username group 4096 Jul 28 13:40 onion-2
drwxrwxr-x  3 username group 4096 Jul 28 14:36 onion-3
drwxrwxr-x  3 username group 4096 Jul 31 01:47 onion-4
drwxrwxr-x  3 username group 4096 Jul 31 06:48 onion-5
drwxrwxr-x  3 username group 4096 Aug 17 01:43 onion-6
drwxrwxr-x  3 username group 4096 Aug 28 00:49 onion-7
drwxrwxr-x  3 username group 4096 Sep 13 23:30 onion-8

This is probably the part where I mention that he mass crawled a bunch
of onions, not just mine. To save others the time of grepping for his
user agent string in log reports, I'm going to be slightly rude and
paste my grep command + the results here:

grep -R "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML,
like Gecko) Chrome/35.0.1916.153 Safari/537.36" | sort
06/doxbin_2014_06_11.txt: 57 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_12.txt:186 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_13.txt:103 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_14.txt: 70 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_15.txt:106 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_16.txt: 47 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_17.txt: 68 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_18.txt: 51 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_19.txt: 71 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_20.txt: 27 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_21.txt: 32 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_22.txt:104 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_23.txt:169 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_24.txt: 68 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_25.txt: 65 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_26.txt: 44 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_27.txt: 86 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_28.txt: 62 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_29.txt: 35 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
06/doxbin_2014_06_30.txt: 97 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
07/doxbin_2014_07_01.txt: 56 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
07/doxbin_2014_07_02.txt:131 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
07/doxbin_2014_07_03.txt: 86 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
07/doxbin_2014_07_04.txt: 80 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36
07/doxbin_2014_07_05.txt:219 Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, 

Re: [tor-dev] yes hello, internet supervillain here

2014-11-09 Thread Matthew Finkel
On Sun, Nov 09, 2014 at 07:25:39PM +, Fears No One wrote:
> In other news, the same guy runs a bot that records uptimes for various
> onions, and he gave me output related to up/down times for doxbin,
> Cloud9, and Silk Road 2.0.
> 
> NOTE: Time zone is GMT+9:30 on all of these. He used sed to replace 0
> with down and 1 with up for readability reasons on the doxbin and Silk
> Road pastes, but the Cloud9 paste is raw.
> 
> doxbin: http://pastebin.com/pVxQDS9u
> Cloud9: http://pastebin.com/5uYmpmfQ (0 = down, 1 = up)
> Silk Road 2.0: http://pastebin.com/jQvgz0VF

Thanks!

It would be interesting to see if any of these down-times correlate
to any relays restarting or descriptors significantly changing (such
as looking at 2014-10-22 UTC+9:30 when all three services went down at
some point that day).


Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Jacob Appelbaum
> In the future "Next Generation Hidden Services" specification there
> are again two ways to do authorization:
> https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/224-rend-spec-ng.txt#l1446
> One way is with a password and the other is with a public key.

A {shared secret, key} and a user-specific onion?

>
> I suspect that HS authorization is very rare in the current network,
> and if we believe it's a useful tool, it might be worthwhile to make
> it more useable by people.
>

I've used this feature extensively. I love it.

> For example, it would be interesting if TBB would allow people to
> input a password/pubkey upon visiting a protected HS. Protected HSes
> can be recognized by looking at the "authentication-required" field of
> the HS descriptor. Typing your password on the browser is much more
> useable than editing a config file.

That sounds interesting.

All the best,
Jacob


Re: [tor-dev] HSDir Auth and onion descriptor scraping

2014-11-09 Thread Gareth Owen
I have several hundred thousand (or million? Haven't counted) HS descriptors 
saved on my hard disk from a data collection experiment (from 70k HSes).  I'm a 
bit nervous about sharing these en masse: whilst not confidential, they're 
supposed to be difficult to obtain in this quantity.  However, if someone wants 
to write a quick script that goes through all of them and counts the number of 
authenticated vs non-authenticated ones, I do not mind running it on the 
dataset and publishing the results.  I have a directory where each file is an 
HS descriptor.

The introduction point data is base64-encoded plaintext when unauthenticated, 
and has high entropy otherwise.

Best
Gareth
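A quick counting script along those lines might look like this sketch. The detection heuristic is the plaintext-vs-entropy one described above, but the descriptor framing and the exact plaintext prefix are my assumptions from rend-spec.txt and should be checked against real descriptors before trusting the numbers.

```python
import base64
import os
import re

# The base64 introduction-point block inside a v2 HS descriptor
# (framing assumed from rend-spec.txt).
INTRO_BLOCK = re.compile(
    r'introduction-points\n-----BEGIN MESSAGE-----\n(.*?)-----END MESSAGE-----',
    re.S)

def looks_authenticated(descriptor_text):
    """Heuristic: an unauthenticated descriptor's intro-point block
    decodes to readable plaintext (assumed to start with the
    'introduction-point' keyword), while an authenticated one decodes
    to high-entropy ciphertext."""
    m = INTRO_BLOCK.search(descriptor_text)
    if m is None:
        return False               # no intro points published at all
    decoded = base64.b64decode(m.group(1))
    return not decoded.startswith(b'introduction-point')

def count_directory(path):
    """Return (authenticated, total) over a directory of descriptor files."""
    authed = total = 0
    for name in os.listdir(path):
        with open(os.path.join(path, name)) as f:
            total += 1
            authed += looks_authenticated(f.read())
    return authed, total
```

Running count_directory on the dataset would give exactly the authenticated/total split asked for above.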





Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Fabio Pietrosanti - lists

On 11/9/14 8:58 PM, Jacob Appelbaum wrote:
>> For example, it would be interesting if TBB would allow people to
>> input a password/pubkey upon visiting a protected HS. Protected HSes
>> can be recognized by looking at the "authentication-required" field of
>> the HS descriptor. Typing your password on the browser is much more
>> useable than editing a config file.
> That sounds interesting.

Also, I love this idea, but I would suggest preserving the copy&paste
self-authenticated URL property of Tor HSes, even in the presence of
authorization.

It could be done with a parameter in the URL:
http://blahblah.onion/?authTorHBauBauMeowMeow=PASSWORD
or with a URL handler: httpA://PASSWORD@blahblah.onion

That way it would be possible to use such an authenticated Tor HS by
bookmarking a URL in TBB or by copy/pasting it from a password manager.
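A sketch of extracting the credential from either proposed form. Both the query-parameter name and the httpA userinfo scheme are taken from the illustrative examples in this message, not anything tor or TBB actually supports.

```python
from urllib.parse import urlsplit, parse_qs

def extract_hs_credential(url):
    """Pull an authorization credential out of either proposed URL form.

    Hypothetical: the parameter name and the userinfo-style scheme are
    the suggestions from this message, shown for illustration only.
    """
    parts = urlsplit(url)
    if parts.username:                     # httpA://PASSWORD@blahblah.onion
        return parts.username
    params = parse_qs(parts.query)
    values = params.get('authTorHBauBauMeowMeow')  # name from the example
    return values[0] if values else None
```

Whichever form is chosen, the browser would strip the credential before issuing the HTTP request, so it never reaches the HS application layer.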

-- 
Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
http://logioshermes.org - http://globaleaks.org - http://tor2web.org - 
http://ahmia.fi



Re: [tor-dev] yes hello, internet supervillain here

2014-11-09 Thread Paul Syverson
On Sun, Nov 09, 2014 at 07:25:39PM +, Fears No One wrote:
> I have some news to report, along with more data.
> 
> The August DoS attempt appears to have been a crawler bot after all. An
> old friend came forward after reading tor-dev and we laughed about his
> dumb crawler bot vs my dumb "must-serve-200-codes-at-everything" nginx
> config. His user agent string only accounts for the spike in August, and
> I see no evidence of a mass crawl from it in my log reports. The
> 2014-09_24.old file's spike in traffic doesn't match up with his crawl
> times in any way, but he theorizes that somebody else maybe used the
> same crawler package. 

I don't know the exact timing, but 9/24 would line up with HS crawl activity
that was being conducted in association with the kickoff of Sponsor R work
https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorR

HTH,
Paul


[tor-dev] Hi everyone!

2014-11-09 Thread Conny Hermansson
Hi!
I'm new to the Tor project and I'm looking for an easy project to get me
started, preferably in Java. I can do debugging or write some code.

Conny


Re: [tor-dev] Hi everyone!

2014-11-09 Thread Damian Johnson
Hi Conny, glad you want to get involved! Please take a peek at...

https://www.torproject.org/getinvolved/volunteer.html.en#Projects

If you're interested in Java then Orbot
(https://guardianproject.info/apps/orbot/) and Metrics
(https://metrics.torproject.org/) are your best bets.

Cheers! -Damian




[tor-dev] Pluggable-transport implementations of your website fingerprinting defenses

2014-11-09 Thread David Fifield
NB I'm copying the tor-dev mailing list on this message.

At CCS I saw Rishab present these papers:

"CS-BuFLO: A Congestion Sensitive Website Fingerprinting Defense"
http://www3.cs.stonybrook.edu/~rnithyanand/pubs/wpes2014-csb.pdf
"Glove: A Bespoke Website Fingerprinting Defense"
http://www3.cs.stonybrook.edu/~rnithyanand/pubs/wpes2014-glove.pdf
"A Systematic Approach to Developing and Evaluating Website Fingerprinting 
Defenses"
http://www3.cs.stonybrook.edu/~rnithyanand/pubs/ccs2014.pdf

I spoke quite a lot to Rishab and suggested, since these schemes have
source code, that they be wrapped in a pluggable transports interface so
they can be easily used by tor clients. It is not super difficult to
turn a network program into a pluggable transport, as there are
libraries like pyptlib and liballium that implement the internal
pluggable transports protocol. You might be familiar with wfpadtools
(https://bitbucket.org/mjuarezm/obfsproxy-wfpadtools), which aims to
make it easy to prototype fingerprinting defenses by specifying them in
terms of (e.g. padding) primitives.
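
As a concrete illustration of what pyptlib and liballium take care of: the
managed pluggable-transport protocol (pt-spec.txt) is essentially an
environment-variable handshake plus a few lines printed to stdout. A
minimal stdlib-only sketch of the client side (the function name and
default bind address are mine):

```python
import os

def pt_client_handshake(transports, bind_addr=("127.0.0.1", 4200)):
    """Sketch of the client side of Tor's managed-transport handshake:
    read the TOR_PT_* environment, then report each transport we can
    serve via CMETHOD lines on stdout. Returns the transports served."""
    versions = os.environ.get("TOR_PT_MANAGED_TRANSPORT_VER", "").split(",")
    if "1" not in versions:
        print("VERSION-ERROR no-version")
        return []
    print("VERSION 1")
    served = []
    for name in os.environ.get("TOR_PT_CLIENT_TRANSPORTS", "").split(","):
        if name in transports:
            # A real transport would launch its SOCKS listener here.
            print("CMETHOD %s socks5 %s:%d" % (name, bind_addr[0], bind_addr[1]))
            served.append(name)
        else:
            print("CMETHOD-ERROR %s unsupported" % name)
    print("CMETHODS DONE")
    return served
```

A defense wrapped this way only needs its padding/obfuscation logic behind
the SOCKS listener; everything tor-facing is the handshake above.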

I looked for the source code mentioned in the papers and I wasn't sure
what would be best to use. I found
https://crysp.uwaterloo.ca/software/webfingerprint/
which links to some short Python files like
https://crysp.uwaterloo.ca/software/webfingerprint/tamaraw.py
There is also
https://github.com/xiang-cai/CSBuFLO
but it doesn't appear to be directly usable. It appears to be a modified
OpenSSH, without a commit history showing what was modified.

What source code do you recommend for an implementation?

David Fifield


Re: [tor-dev] HSDir Auth and onion descriptor scraping

2014-11-09 Thread grarpamp
On Sun, Nov 9, 2014 at 3:22 PM, Gareth Owen  wrote:
> I have several hundred thousand (or million? Haven't counted) hs descriptors
> saved on my hard disk from a data collection experiment (from 70k HSes).
> I'm a bit nervous about sharing these en masse as whilst not confidential
> they're supposed to be difficult to obtain in this quantity.  However, if
> someone wants to write a quick script that goes through all of them and
> counts the number of authenticated vs nonauthed then I do not mind running
> it on the dataset and publishing the results.  I have a directory where each
> file is a hs descriptor.
>
> The introduction point data is base64-encoded plaintext when unauthed, or
> has high entropy otherwise.

What version descriptors are you collecting?

There are a few reports I could think of running against your dataset, even
if the IntroPoints were replaced with 127.0.0.n (n = 1, 2, 3, ... for each
IntroPoint in a descriptor's list)... or 1:1 mapped across all descriptors,
either a) randomly into a new parallel IPv4/IPv6 space (dot-quads), or
b) serially into a respective 32- or 128-bit number (not dot-quads).
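
Option b), the serial 1:1 mapping, is just a little bookkeeping; a sketch
(function name and list-of-lists input format are mine):

```python
def pseudonymize_intro_points(descriptors):
    """Map every distinct IntroPoint seen across a set of descriptors to a
    serial integer, so reports can still correlate intro points shared
    between descriptors without exposing the real relays."""
    mapping = {}  # real intro point -> serial number
    out = []
    for intro_points in descriptors:
        # setdefault evaluates len(mapping) + 1 before inserting, so each
        # previously unseen intro point gets the next serial number.
        out.append([mapping.setdefault(ip, len(mapping) + 1)
                    for ip in intro_points])
    return out, mapping
```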

Whether on or off list, I could use your collection patches and a raw
sample of a single recent on-disk descriptor from a public service such as
hbjw7wjeoltskhol or kpvz7ki2v5agwt35, so we know your data format.

It's effectively public info anyway; I'll get to it sooner or later, and
others already have.


Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread grarpamp
On Sun, Nov 9, 2014 at 3:30 PM, Fabio Pietrosanti - lists
 wrote:
> On 11/9/14 8:58 PM, Jacob Appelbaum wrote:
>>> For example, it would be interesting if TBB would allow people to
>>> input a password/pubkey upon visiting a protected HS. Protected HSes
>>> can be recognized by looking at the "authentication-required" field of
>>> the HS descriptor. Typing your password on the browser is much more
>>> useable than editing a config file.
>> That sounds interesting.
>
> I also love this idea, but I would suggest preserving the copy&paste
> self-authenticated URL property of Tor HSes, even in the presence of authorization.
>
> It could be done with a parameter in the URL
> http://blahblah.onion/?authTorHBauBauMeowMeow=PASSWORD
> Or it could be done with a URL handler httpA://PASSWORD@blahblah.onion .
>
> That way it will be possible to use such an authenticated Tor HS by
> bookmarking a URL in TBB or by copy/pasting it from a password manager.

This assumes you're using a Tor-aware browser, or that Tor is somehow
protocol-aware and can MITM every user protocol (such as non-web TLS),
which is impossible. So this won't work. Any such descriptor
authentication would need to be done at the onion 'hostname' level, since
that's the only non-user-protocol area available,
ie: authtoken.16char.onion. Or in the torrc, as today.
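
A sketch of splitting an auth token out of such a hostname (this
authtoken.16char.onion scheme is only a proposal in this thread, not
something tor implements; the function is hypothetical):

```python
def split_onion_auth(hostname):
    """Split 'authtoken.16char.onion' into (auth_token, onion_address);
    a bare 16char.onion yields (None, onion_address)."""
    labels = hostname.lower().split(".")
    if len(labels) == 3 and labels[2] == "onion" and len(labels[1]) == 16:
        return labels[0], labels[1] + ".onion"
    if len(labels) == 2 and labels[1] == "onion" and len(labels[0]) == 16:
        return None, labels[0] + ".onion"
    raise ValueError("not an onion hostname: %r" % hostname)
```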


Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Griffin Boyce

On 2014-11-09 15:30, Fabio Pietrosanti - lists wrote:

> On 11/9/14 8:58 PM, Jacob Appelbaum wrote:
>>> For example, it would be interesting if TBB would allow people to
>>> input a password/pubkey upon visiting a protected HS. Protected HSes
>>> can be recognized by looking at the "authentication-required" field of
>>> the HS descriptor. Typing your password on the browser is much more
>>> useable than editing a config file.
>> That sounds interesting.
>
> Also i love this idea but i would suggest to preserve the copy&paste
> self-authenticated URL property of TorHS, also in presence of
> authorization.


  I'm conflicted about this idea.  Much better for usability, ~but~ there
should be an option for authenticated hidden services that want to *not*
prompt and instead fail silently if the key isn't in the torrc (or
x.y.onion URL, depending on the design).


  Use case: if someone finds my hidden service url written in my planner 
while traveling across the border, they might visit it to see what it 
contains. If it offers a prompt, then they know it exists and can press 
me for the auth key (perhaps with an M4 carbine).  If there's no prompt 
and the request fails, then perhaps it "used to exist" a long time ago, 
or I wrote down an example URL.


best,
Griffin

--
"I believe that usability is a security concern; systems that do
not pay close attention to the human interaction factors involved
risk failing to provide security by failing to attract users."
~Len Sassaman


Re: [tor-dev] Running a Separate Tor Network

2014-11-09 Thread Tom Ritter
On 22 October 2014 05:48, Roger Dingledine  wrote:
>> What I had to do was make one of my Directory Authorities an exit -
>> this let the other nodes start building circuits through the
>> authorities and upload descriptors.
>
> This part seems surprising to me -- directory authorities always publish
> their dirport whether they've found it reachable or not, and relays
> publish their descriptors directly to the dirport of each directory
> authority (not through the Tor network).
>
> So maybe there's a bug that you aren't describing, or maybe you are
> misunderstanding what you saw?
>
> See also https://trac.torproject.org/projects/tor/ticket/11973
>
>> Another problem I ran into was that nodes couldn't conduct
>> reachability tests when I had exits that were only using the Reduced
>> Exit Policy - because it doesn't list the ORPort/DirPort!  (I was
>> using nonstandard ports actually, but indeed the reduced exit policy
>> does not include 9001 or 9030.)  Looking at the current consensus,
>> there are 40 exits that exit to all ports, and 400-something exits
>> that use the ReducedExitPolicy.  It seems like 9001 and 9030 should
>> probably be added to that for reachability tests?
>
> The reachability tests for the ORPort involve extending the circuit to
> the ORPort -- which doesn't use an exit stream. So your relays should
> have been able to find themselves reachable, and published a descriptor,
> even with no exit relays in the network.

I think I traced down the source of the behavior I saw.  In brief, I
don't think reachability tests happen when there are no Exit nodes
because of a quirk in the bootstrapping process, where we never think
we have a minimum of directory information:

Nov 09 22:10:26.000 [notice] I learned some more directory
information, but not enough to build a circuit: We need more
descriptors: we have 5/5, and can only build 0% of likely paths. (We
have 100% of guards bw, 100% of midpoint bw, and 0% of exit bw.)

In long form: https://trac.torproject.org/projects/tor/ticket/13718
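
The check behind that log line multiplies the per-position bandwidth
fractions; with no exits the product is zero, so tor never considers its
directory information sufficient. A sketch (the 0.6 threshold is my
assumption, matching the default of the min_paths_for_circs consensus
parameter in recent tors):

```python
def have_enough_dir_info(guard_frac, mid_frac, exit_frac, threshold=0.6):
    """Return whether tor would consider itself able to build circuits:
    the fraction of likely paths is the product of the known bandwidth
    fractions for each position."""
    return guard_frac * mid_frac * exit_frac >= threshold
```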




>> Continuing in this thread, another problem I hit was that (I believe)
>> nodes expect the 'Stable' flag when conducting certain reachability
>> tests.  I'm not 100% certain - it may not prevent the relay from
>> uploading a descriptor, but it seems like if no acceptable exit node
>> is Stable - some reachability tests will be stuck.  I see these sorts
>> of errors when there is no stable Exit node (the node generating the
>> errors is in fact a Stable Exit though, so it clearly uploaded its
>> descriptor and keeps running):
>
> In consider_testing_reachability() we call
>
> circuit_launch_by_extend_info(CIRCUIT_PURPOSE_TESTING, ei,
> CIRCLAUNCH_NEED_CAPACITY|CIRCLAUNCH_IS_INTERNAL);
>
> So the ORPort reachability test doesn't require the Stable flag.

You're right, reachability doesn't depend on Stable, sorry.



>> I then added auth5 to a second DirAuth (auth2) as a trusted DirAuth.
>> This results in a consensus for auth1, auth2, and auth5 - but auth3
>> and auth4 did not sign it or produce a consensus.  Because the
>> consensus was only signed by 2 of the 4 Auths (e.g., not a majority) -
>> it was rejected by the relays (which did not list auth5).
>
> Right -- when you change the set of directory authorities, you need to
> get a sufficient clump of them to change all at once. This coordination
> has been a real hassle as we grow the number of directory authorities,
> and it's one of the main reasons we don't have more currently.

I'm going to try thinking more about this problem.
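
The rule that bit me here is simple majority: a consensus signed by only
2 of 4 known authorities is rejected. As a sketch:

```python
def consensus_acceptable(known_auths, valid_signatures):
    """A client accepts a consensus only when more than half of the
    directory authorities it knows about have signed it."""
    return valid_signatures > known_auths // 2
```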



> This was fixed in git commit c03cfc05, and I think the fix went into
> Tor 0.2.4.13-alpha. What ancient version is your man page from?

/looks sheepish
I was using http://linux.die.net/man/1/tor because it's very quick to
pull up :-p


>>  And how there _is no_
>> V3AuthInitialVotingInterval?  And that you can't modify these
>> parameters without turning on TestingTorParameters (despite the fact
>> that they will be used without TestingTorNetwork?)  And also,
>> unrelated to the naming, these parameters are a fallback case for when
>> we don't have a consensus, but if they're not kept in sync with
>> V3AuthVotingInterval and their kin - the DirAuth can wind up
>> completely out of sync and be unable to recover (except by luck).
>
> Yeah, don't mess with them unless you know what you're doing.
>
> As for the confusing names, you're totally right:
> https://trac.torproject.org/projects/tor/ticket/11967

Ahha.


>>  - The Directory Authority information is a bit out of date.
>> Specifically, I was most confused by V1 vs V2 vs V3 Directories.  I am
>> not sure if the actual network's DirAuths set V1AuthoritativeDirectory
>> or V2AuthoritativeDirectory - but I eventually convinced myself that
>> only V3AuthoritativeDirectory was needed.
>
> Correct. Can you submit a ticket to fix this, wherever you found it?
> Assuming it wasn't from your ancient man page that is? :)

It was.



>>  - The ne

Re: [tor-dev] Hidden Service authorization UI

2014-11-09 Thread Andrea Shepard
On Sun, Nov 09, 2014 at 09:16:40PM -0500, Griffin Boyce wrote:
> On 2014-11-09 15:30, Fabio Pietrosanti - lists wrote:
> >On 11/9/14 8:58 PM, Jacob Appelbaum wrote:
> >>>For example, it would be interesting if TBB would allow people to
> >>>input a password/pubkey upon visiting a protected HS. Protected HSes
> >>>can be recognized by looking at the "authentication-required"
> >>>field of
> >>>the HS descriptor. Typing your password on the browser is much more
> >>>useable than editing a config file.
> >>That sounds interesting.
> >
> >Also i love this idea but i would suggest to preserve the copy&paste
> >self-authenticated URL property of TorHS, also in presence of
> >authorization.
> 
>   I'm conflicted about this idea.  Much better for usability ~but~
> there should be an option for authenticated hidden services that
> want to *not* prompt and instead fail silently if the key isn't in
> the torrc (or x.y.onion url, depending on the design).
> 
>   Use case: if someone finds my hidden service url written in my
> planner while traveling across the border, they might visit it to
> see what it contains. If it offers a prompt, then they know it
> exists and can press me for the auth key (perhaps with an M4
> carbine).  If there's no prompt and the request fails, then perhaps
> it "used to exist" a long time ago, or I wrote down an example URL.
> 
> best,
> Griffin

I believe it's verifiable whether an authenticated HS exists anyway; you can
get the descriptor, but the list of intro points is encrypted.

-- 
Andrea Shepard

PGP fingerprint (ECC): BDF5 F867 8A52 4E4A BECF  DE79 A4FF BC34 F01D D536
PGP fingerprint (RSA): 3611 95A4 0740 ED1B 7EA5  DF7E 4191 13D9 D0CF BDA5

