[tor-talk] Not comfortable with the new single-hop system merged into Tor

2016-12-20 Thread hikki
I just think that this new single-hop system should have been reserved for a 
separate Tor build/installation, dedicated only to non-anonymous hidden 
services, rather than merged into the regular Tor software. This is purely a 
security concern.

I once witnessed a piece of software (non-Tor related) that had a special 
function which was disabled by default, but was accidentally enabled by a bug 
that occurred under special circumstances, causing big trouble for some. In 
that case it caused a big financial loss for some, but with the Tor software 
we are talking about the lives and wellbeing of humans.

How do I know that my hidden service is really running anonymously, and not 
over just a single hop, besides trusting the config defaults?
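
For what it's worth, the only way I can imagine checking, assuming the
control port is enabled on 9051 and the stem library is installed, is to
ask the running tor whether the single-hop options are set (a sketch, not
an authoritative test):

    # Sketch: ask tor over the control port whether the non-anonymous
    # single onion service options are enabled. Assumes ControlPort 9051
    # is configured in torrc; stem is pip-installable.
    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        for opt in ("HiddenServiceSingleHopMode",
                    "HiddenServiceNonAnonymousMode"):
            # get_conf() returns the current value, "0" if unset
            print(opt, "=", controller.get_conf(opt, "0"))

But that still trusts the same tor daemon that I am worried about.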

Please prove me wrong. I'm just concerned here, and just want some feedback.
Thanks for understanding!

-Hikki
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Massive Bandwidth Onion Services

2016-12-20 Thread Alec Muffett
On 20 December 2016 at 17:17, Alec Muffett  wrote:

> 4) They get booted; each launches its own Worker onion, and each scrapes
> the descriptors of all the other workers, synthesising a "master"
> descriptor and publishing it once a day to the HSDirs.
>
> 5) This means that, for workers A B C D E F, occasionally the master
> descriptor which B's onionbalance uploads to the HSDirs will get stomped-on
> a few minutes later by the one from F, and then the one from D will
> overwrite them, etc.
>
> 6) There is some (small?) extra load on the HSDirs this way - BUT the big
> win is that to take this onion site "offline" will require killing all 6
> daemons, all 6 machines - hence the "Horcrux" reference from Harry Potter.
>

ps: obviously the Horcrux can synchronise data around itself (for
webserving, etc) by using rsync over ssh to the worker onion addresses, as
a backchannel.
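
In practice that could be as simple as this sketch, assuming torsocks is
installed and ssh keys are already set up on the workers (the onion
address and both paths are placeholders):

    # Sketch: push the web root to one worker over its onion address,
    # tunnelling rsync's ssh transport through Tor via torsocks.
    # "abcdefghijklmnop.onion" and the paths are placeholders.
    import subprocess

    worker = "abcdefghijklmnop.onion"
    subprocess.run(
        ["torsocks", "rsync", "-az", "--delete",
         "/var/www/site/", "tor@" + worker + ":/var/www/site/"],
        check=True,
    )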

Or something like that.

Maybe "git" even.

-a

-- 
http://dropsafe.crypticide.com/aboutalecm
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Massive Bandwidth Onion Services

2016-12-20 Thread Alec Muffett
Hi George!

On 20 December 2016 at 14:03, George Kadianakis 
wrote:

> BTW and to slightly diverge the topic, I really like this experiment and
> its blazing fast results, but I still get a weird feeling when I see
> that to start functioning it requires 432 descriptors uploaded to the
> HSDir system (72 daemons * 6 descriptors).


There is a hypothetical way around that, but Donncha will need to comment,
too.

I don't _feel_ that this is a problem which needs to be solved in the next
2 years.

What _could_ happen in the future is this (a code sketch follows the list):

1) the 72 workers could each set up an IP but *not* publish it in a
descriptor, and then

2) the master daemon could poll the 72 workers for their list of current
IPs via a backchannel, and then...

3) construct the "master descriptor" from that information.
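
A minimal sketch of that flow, where fetch_intro_points() and
build_descriptor() are hypothetical stand-ins for the backchannel and
descriptor plumbing, not real OnionBalance APIs:

    # Sketch of steps 1-3: the master polls every worker for its current
    # (unpublished) intro points and assembles one master descriptor.
    def build_master_descriptor(workers, fetch_intro_points,
                                build_descriptor):
        intro_points = []
        for worker in workers:                    # step 2: backchannel poll
            intro_points.extend(fetch_intro_points(worker))
        return build_descriptor(intro_points)     # step 3: master descriptor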

This has pros and cons.

It makes the architecture more "active" - using backchannels and lots of
chattiness - which is great for physically co-located clusters (i.e.
everything in one rack), but it makes matters *really* complicated for the
kinds of physically distributed clustering that Tor would be awesome and
cutting-edge for.

For one thing, in such a scenario, it would not be possible to use the
slave onion addresses to coordinate with each other, which would make it
really hard to build a high-availability solution like a Horcrux* (see
below).

It's probably easier to think of this not as "432 descriptors" but instead
as "73 Hidden Services" - comprising 72x "physical" onion addresses and 1x
"virtual" onion address using OnionBalance. This is much the same as the
physical-versus-VIP split which one sees in other load-balancing
architectures.

It also resonates with the slides I posted in the thread, here:

https://twitter.com/AlecMuffett/status/802161730591793152

...arguing that Onion addresses are the Layer-2 addresses of a new network
space.

With such an approach, rather than seeing Onion addresses / HSDirs as
scarce resources, we would do better to engineer them to be abundant and
cheap, for they will become as popular and as ephemeral as any other
Layer-2 address.


tl;dr - I am doing the bonkers stuff so that nobody else has to. 72 is
above-and-beyond, especially since Facebook does it with two; but if
streaming HD video over Tor eventually becomes a thing, something like this
will need to happen. :-)



> To be clear, I think this is
> fine for an innovative experiment like this, but it's not a setup that
> everyone on the network should replicate.


Concur. Only semi-retired enterprise architects with spare time need apply.
:-)

If you would like to talk to one of the 72 daemons, check out:

http://jmlwiy2xu3lmrh66.onion/

...which is probably okay for the next 24h or so.


> I guess to improve the
> situation here, we would need some sort of "onionbalance complex mode"
> where instead of uploading the intermediary descriptors to the HSDir
> system, we upload them to an internal master onionbalance node which
> does the intro point multiplexing.
>

Agreed, we can do that, and that's very efficient for localised clusters.

However, I had this idea the other evening*, which smells very "Tor" and
has some interesting properties.

1) Say that, instead of 72, we chose a more sensible number like "6" onion
addresses

2) We configure 6 cheap devices (Raspberry Pi?) each to have a single
"worker" onion address

3) We also configure OnionBalance on all 6 computers, so that they all know
about each other's onion addresses, plus the *same* master key; so we have
an n-squared mesh.

4) They get booted; each launches its own Worker onion, and each scrapes
the descriptors of all the other workers, synthesising a "master"
descriptor and publishing it once a day to the HSDirs.

5) This means that, for workers A B C D E F, occasionally the master
descriptor which B's onionbalance uploads to the HSDirs will get stomped-on
a few minutes later by the one from F, and then the one from D will
overwrite them, etc.

6) There is some (small?) extra load on the HSDirs this way - BUT the big
win is that to take this onion site "offline" will require killing all 6
daemons, all 6 machines - hence the "Horcrux" reference from Harry Potter.

7) This works because the 6 daemons use the HSDir as a source of truth
about the descriptors, which is an idea Donncha had for OnionBalance, and
is awesome, because it enables this kind of trusted distributed directory.

8) To make it forgery-proof as well, you'd want to use certificates, or
signing, or something; but this would be an intensely robust
High-Availability architecture.
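
In outline, every node would just run the same loop; scrape_descriptor(),
merge() and publish_to_hsdirs() below are hypothetical stand-ins for the
OnionBalance internals:

    # Sketch of steps 4-5: each worker scrapes its peers' descriptors and
    # republishes the combined master descriptor roughly once a day, with
    # jitter so the six uploads interleave rather than collide.
    import random
    import time

    def horcrux_loop(my_onion, peers, scrape_descriptor, merge,
                     publish_to_hsdirs):
        while True:
            descriptors = [scrape_descriptor(p) for p in peers
                           if p != my_onion]
            publish_to_hsdirs(merge(descriptors))  # step 5: last writer wins
            time.sleep(86400 + random.uniform(0, 3600))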

I want to build one, for test/fun, but not until the bandwidth testing is
done.

- alec

* Horcrux thread: https://twitter.com/AlecMuffett/status/810219913314992128

-- 
http://dropsafe.crypticide.com/aboutalecm
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] Massive Bandwidth Onion Services

2016-12-20 Thread George Kadianakis
Alec Muffett  writes:

> I would post this to the tor-onions list, but it might be more generally
> interesting to folk, so I'm posting here and will shift it if it gets too
> technical.
>
>
> I'm working on load-balanced, high-availability Tor deployment
> architectures, and on that basis I am running 72 tor daemons on a cluster
> of 6 quad-core Debian boxes.
>
> I am then using Donncha's "OnionBalance" to:
>
> * scrape the descriptors of those 72 daemons,
>
> * select 60 random(ish) introduction points from among those daemons,
>
> * rebundle those 60 introduction points into 6 distinct descriptors of 10
> introduction points apiece, and then
>
> * sign those distinct descriptors with a "service" onion address and
> emplace them onto the HSDir ring.
>
>
> This means that, at any one time, the daemon will be able to have 60x the
> CPU and network-bus bandwidth, above/beyond what is available to a typical
> single-daemon instance.
>
> Why "72"? Because it's a number >60 and I'm seeking to stress-test things a
> little.
>
> Specifically: one eventual goal of this project is to adjust the timings a
> little, so that the HSDir cache acts a little like "Round-Robin DNS Load
> Balancing" - people accessing the "service" onion address at lunchtime will
> receive/cache different descriptors from those who access it some hours
> later, and since the descriptors persist, the whole 72 daemons get
> used/averaged-out over an entire day.
>
> In my attempts to go fast-and-wide, however, I appear to have run into a
> hardcoded limit:
>
> Dec 19 09:24:43.365 [warn] HiddenServiceNumIntroductionPoints should be
> between 3 and 10, not 1
>
> There's little point in publishing more than 2 - and perhaps* no more than
> 1 - introduction point for each of the 72 daemons; extra ones also make
> scraping and reassembly slower/more expensive.
>
> So I am writing to ask whether it is possible (and whether it is wise?) to
> have a mode that will bypass this (otherwise very sensible) control?
>
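
(For my own understanding, the rebundling step above boils down to roughly
this sketch; the helper is invented, not OnionBalance's real code:)

    # Sketch: pick 60 intro points at random from the 72 daemons' pool
    # and split them into 6 descriptors of 10 intro points apiece.
    import random

    def rebundle(intro_points, n_descriptors=6, per_descriptor=10):
        chosen = random.sample(intro_points, n_descriptors * per_descriptor)
        return [chosen[i * per_descriptor:(i + 1) * per_descriptor]
                for i in range(n_descriptors)]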

Seems like a reasonable request. Doesn't make sense to refuse to start
up hidden services with a legitimate number of intro points. I added a
positive comment to #21033 which was courteously opened by David.
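
For reference, the warning above fires on a per-worker torrc as minimal as
this sketch (directory and ports are illustrative):

    # one worker's torrc (illustrative paths/ports)
    HiddenServiceDir /var/lib/tor/worker01
    HiddenServicePort 80 127.0.0.1:8080
    HiddenServiceNumIntroductionPoints 1
    # -> tor warns: "HiddenServiceNumIntroductionPoints should be
    #    between 3 and 10, not 1"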

---

BTW and to slightly diverge the topic, I really like this experiment and
its blazing fast results, but I still get a weird feeling when I see
that to start functioning it requires 432 descriptors uploaded to the
HSDir system (72 daemons * 6 descriptors). To be clear, I think this is
fine for an innovative experiment like this, but it's not a setup that
everyone on the network should replicate. I guess to improve the
situation here, we would need some sort of "onionbalance complex mode"
where instead of uploading the intermediary descriptors to the HSDir
system, we upload them to an internal master onionbalance node which
does the intro point multiplexing.

Best of luck with your experiment :]

-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] 33c3 and tor?

2016-12-20 Thread fatal

> CCC has opted not to accept any talks by Tor this year, 

Unfortunate, but understandable.

thanks for the quick reply!
-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


Re: [tor-talk] 33c3 and tor?

2016-12-20 Thread Sebastian Hahn
Hi,

> On 20 Dec 2016, at 14:37, fatal  wrote:
> an early version of the Fahrplan (schedule) for 33c3 has been released¹
> (btw, the 33c3 app is also already available on F-Droid).
> 
> I couldn't find any talks from the Tor Project yet, but maybe I've
> overlooked them? Will there be any?
> 
> And will there be a tor relay operators meetup?
> 
> Thanks
> 
> f.
> 
> ¹ https://fahrplan.events.ccc.de/congress/2016/Fahrplan/

CCC has opted not to accept any talks by Tor this year, but there
will be Tor people there. Someone might organize a relay operators
meetup.

Cheers
Sebastian

-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


[tor-talk] 33c3 and tor?

2016-12-20 Thread fatal
Hello,

an early version of the Fahrplan (schedule) for 33c3 has been released¹
(btw, the 33c3 app is also already available on F-Droid).

I couldn't find any talks from the Tor Project yet, but maybe I've
overlooked them? Will there be any?

And will there be a tor relay operators meetup?

Thanks

f.

¹ https://fahrplan.events.ccc.de/congress/2016/Fahrplan/


-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk


[tor-talk] OONI & Sinar Project Report: The State of Internet Censorship in Malaysia

2016-12-20 Thread Maria Xynou
Hi,

Today OONI and Sinar Project jointly released a report that examines
internet censorship in Malaysia over the last few months.

This study involved collecting network measurements with ooniprobe, to
gather data that can serve as evidence of internet censorship events, as
well as examining relevant laws and policies.

Our report can be found here:
https://ooni.torproject.org/post/malaysia-report/

The key findings of our study include:

1. We found *39 different websites* to be blocked through the *DNS
injection of block pages* (a detection sketch follows the list below).
These sites include:

- *News outlets*, *blogs*, and *medium.com* for covering the *1MDB scandal*.

- A site that expresses heavy criticism towards Islam (and which, in the
Malaysian context, can be viewed as inciting hatred towards Islam).

- Popular torrenting sites (such as thepiratebay.org).

- A popular online dating site.

- Pornographic sites.

- Gambling sites.
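
(As a rough illustration of how this kind of DNS-level tampering can show
up, one can compare the local resolver's answers against an outside
resolver. The sketch below assumes the dnspython package and uses 8.8.8.8
purely as an example comparison resolver; it is not what ooniprobe
actually runs:)

    # Sketch: compare A records from the system (ISP) resolver with those
    # from an outside resolver; a mismatch is a hint (not proof) of DNS
    # injection. Requires: pip install dnspython
    import dns.resolver

    def a_records(domain, nameserver=None):
        resolver = dns.resolver.Resolver(configure=(nameserver is None))
        if nameserver:
            resolver.nameservers = [nameserver]
        return sorted(rr.address for rr in resolver.resolve(domain, "A"))

    domain = "thepiratebay.org"
    local, outside = a_records(domain), a_records(domain, "8.8.8.8")
    if local != outside:
        print("answers differ - possible DNS injection:", local, outside)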

2. This study provides evidence of *politically motivated censorship*,
since news outlets and blogs covering the 1MDB scandal were found to be
blocked. Malaysian authorities have, however, only vaguely justified this
censorship on the grounds of "national security" under the Communications
and Multimedia Act (CMA) 1998.

3. The pornographic and gambling sites that were found to be blocked
might be part of the thousands of websites that were recently announced
to be blocked by Malaysian authorities (reference:
http://english.astroawani.com/malaysia-news/mcmc-blocks-over-5-000-websites-various-offences-jailani-125528).
In any case, the blocking of these sites can be justified under
Malaysia's laws and regulations (see "Legal environment" section of the
report).

4. Some previously blocked sites (Bersih rally websites) were found to
be accessible as part of this study. Social media and censorship
circumvention tools also appeared to be accessible.

5. Even though Sharia law places restrictions on LGBTI rights, all the
LGBTI sites that we tested were found to be accessible.

Please contact us with any questions you may have.

All the best,

Maria. 

-- 
Maria Xynou
Research and Partnerships Coordinator
Open Observatory of Network Interference (OONI)
https://ooni.torproject.org/
PGP Key Fingerprint: 2DC8 AFB6 CA11 B552 1081 FBDE 2131 B3BE 70CA 417E



-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk