Re: [tor-dev] Timers in Arti?

2024-01-15 Thread Michael Rogers
It's hard to say what the battery impact would be because it depends on 
what else the device is doing.


The best case for battery savings would be when the device is stationary
with the screen turned off, no apps exempted from doze mode except the
one running the hidden service, and no incoming push notifications. In
that situation there should be few things bringing the device out of
suspension, and minimal network traffic apart from the app running the
hidden service, so being able to release the wake lock and let the
device suspend should have a relatively big impact.


Conversely, if the screen is turned on, or a bunch of other apps are
using the network or doing background work, or push notifications are
constantly waking the device, then there will be relatively little
impact.


I can run experiments to measure the savings in the ideal case, but it's
hard to say how representative that is of what's happening on real
devices.


Strongly agree that it would be good to think about what Tor would look 
like if built for mobile first!


Cheers,
Michael

On 13/01/2024 15:19, Holmes Wilson wrote:
Michael, what kind of reduction in battery impact would you expect if 
you were able to make these changes?


Also, is anyone aware of any work that has been done within this 
community or in academia to consider from first principles what Tor 
would look like if built for mobile first? (e.g. built to only run when 
an app is in the foreground, or when a notification wakes up the app.) 
This seems like a discussion worth having once Arti settles and it's 
easier to build new things again!


On Wed Jan 10, 2024, 11:26 AM GMT, Michael Rogers
<mich...@briarproject.org> wrote:


On 10/01/2024 01:46, Nick Mathewson wrote:

On Tue, Jan 9, 2024 at 12:58 PM Micah Elizabeth Scott wrote:

Ah. If you want to run an onion service you'll need to keep at least a
couple of circuits open continuously for the introduction points. I'm
not sure where you would meaningfully find time to deep sleep in that
scenario. There will be ongoing obligations from the wifi/wan and tcp
stacks. You need a continuous TCP connection to the guard, and multiple
circuits that are not discarded as idle. Incoming traffic on those
circuits needs to be addressed quickly or clients won't be able to
connect.

If we were really optimizing for a low-power mobile onion service
platform we'd have a different way to facilitate introductions without
a continuously open set of circuits, but that would also be much more
abuse-prone.

-beth

Hm. Do you know, is it possible to make network-driven wakeup events
relatively prompt? (Like, within a second or so if a TCP stream we're
waiting on wakes up.) If so, then onion services have a decent chance
of being made to work.

Fantastic! Yes, the response to network events should be reasonably
prompt. I'll try to get some measurements.

As for the original question about timers, it would be good to know if
the variance between scheduled wakeup and actual wakeup can be bounded,
or if there's any way to mark a timer as high-priority vs low-priority
or something.

This is unfortunately one of those things that's constantly changing on
Android and varies between manufacturers. In theory we should be able
to set a periodic alarm that fires within fifteen minutes of the last
firing, although not all manufacturers honour this.

When the alarm fires, the device will come out of suspension and the
app will be able to grab a temporary wake lock, which we can hold for
some amount of time (say a minute, for the sake of argument) to let Tor
do whatever it needs to do.

As far as I know these alarms can only be scheduled via a Java API, so
Tor would either need to signal to the controller that an alarm was
needed, or the controller could just assume this whenever hidden
services were published, and wake the device every fifteen minutes
without explicitly communicating with Tor about alarms.

Cheers,
Michael


Re: [tor-dev] Timers in Arti?

2024-01-10 Thread Michael Rogers

On 10/01/2024 01:46, Nick Mathewson wrote:

On Tue, Jan 9, 2024 at 12:58 PM Micah Elizabeth Scott wrote:


Ah. If you want to run an onion service you'll need to keep at least a
couple of circuits open continuously for the introduction points. I'm
not sure where you would meaningfully find time to deep sleep in that
scenario. There will be ongoing obligations from the wifi/wan and tcp
stacks. You need a continuous TCP connection to the guard, and multiple
circuits that are not discarded as idle. Incoming traffic on those
circuits needs to be addressed quickly or clients won't be able to
connect.

If we were really optimizing for a low-power mobile onion service
platform we'd have a different way to facilitate introductions without
a continuously open set of circuits, but that would also be much more
abuse-prone.

-beth


Hm. Do you know, is it possible to make network-driven wakeup events
relatively prompt? (Like, within a second or so if a TCP stream we're
waiting on wakes up.) If so, then onion services have a decent chance
of being made to work.


Fantastic! Yes, the response to network events should be reasonably 
prompt. I'll try to get some measurements.



As for the original question about timers, it would be good to know if
the variance between scheduled wakeup and actual wakeup can be bounded,
or if there's any way to mark a timer as high-priority vs low-priority
or something.


This is unfortunately one of those things that's constantly changing on 
Android and varies between manufacturers. In theory we should be able to 
set a periodic alarm that fires within fifteen minutes of the last 
firing, although not all manufacturers honour this.


When the alarm fires, the device will come out of suspension and the
app will be able to grab a temporary wake lock, which we can hold for
some amount of time (say a minute, for the sake of argument) to let Tor
do whatever it needs to do.


As far as I know these alarms can only be scheduled via a Java API, so 
Tor would either need to signal to the controller that an alarm was 
needed, or the controller could just assume this whenever hidden 
services were published, and wake the device every fifteen minutes 
without explicitly communicating with Tor about alarms.
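
For concreteness, the Java-side scheduling I have in mind looks roughly
like the sketch below (class and tag names are illustrative, the
receiver would need to be registered in the app's manifest, and Android
rate-limits the ...AllowWhileIdle alarms while the device is idle, on
the order of one per app every 9-15 minutes depending on the version):

    import android.app.AlarmManager;
    import android.app.PendingIntent;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.os.PowerManager;
    import android.os.SystemClock;

    public class TorWakeupReceiver extends BroadcastReceiver {

        private static final long INTERVAL_MS = 15 * 60 * 1000;
        private static final long WAKE_LOCK_MS = 60 * 1000; // "a minute"

        public static void scheduleNext(Context ctx) {
            AlarmManager am = (AlarmManager)
                    ctx.getSystemService(Context.ALARM_SERVICE);
            PendingIntent pi = PendingIntent.getBroadcast(ctx, 0,
                    new Intent(ctx, TorWakeupReceiver.class),
                    PendingIntent.FLAG_UPDATE_CURRENT
                            | PendingIntent.FLAG_IMMUTABLE);
            // Fires even in doze; exact-while-idle alarms are one-shot,
            // so each firing has to re-arm the next one.
            am.setExactAndAllowWhileIdle(
                    AlarmManager.ELAPSED_REALTIME_WAKEUP,
                    SystemClock.elapsedRealtime() + INTERVAL_MS, pi);
        }

        @Override
        public void onReceive(Context ctx, Intent intent) {
            PowerManager pm = (PowerManager)
                    ctx.getSystemService(Context.POWER_SERVICE);
            PowerManager.WakeLock wl = pm.newWakeLock(
                    PowerManager.PARTIAL_WAKE_LOCK, "app:tor-maintenance");
            wl.acquire(WAKE_LOCK_MS); // auto-released after the timeout
            // ... let Tor do whatever it needs to do here ...
            scheduleNext(ctx); // re-arm for the next period
        }
    }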


Cheers,
Michael




Re: [tor-dev] Timers in Arti?

2024-01-10 Thread Michael Rogers
Thanks for this information. Android devices are generally able to keep 
TCP connections open during deep sleep, as long as there are keepalives 
from the other side (which we can request for Tor guard connections via 
KeepalivePeriod). The device will wake from deep sleep to handle network 
activity, so any state changes in Tor that are driven by network events 
should also work fine. The problem really is state changes that are 
driven by local timers.
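
For reference, the torrc option in question (the value here is
illustrative; the default is 5 minutes):

    # Send a padding (keepalive) cell every NUM seconds on open
    # connections, to keep NATs and firewalls from expiring them.
    KeepalivePeriod 60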


If we assume optimistically that the device wakes often enough (due to
network activity, background work scheduled by other apps, etc.) to
handle HS descriptor publication and consensus fetches in a reasonably
timely way, then it seems to me that the main question is whether we
can keep the introduction circuits alive if local timers don't fire in
a timely way. And it would also be great to know whether we can not
only keep the circuits alive but also avoid network-visible changes in
behaviour compared with a device that's not sleeping.


Cheers,
Michael

On 09/01/2024 17:57, Micah Elizabeth Scott wrote:
Ah. If you want to run an onion service you'll need to keep at least a
couple of circuits open continuously for the introduction points. I'm
not sure where you would meaningfully find time to deep sleep in that
scenario. There will be ongoing obligations from the wifi/wan and tcp
stacks. You need a continuous TCP connection to the guard, and multiple
circuits that are not discarded as idle. Incoming traffic on those
circuits needs to be addressed quickly or clients won't be able to
connect.


If we were really optimizing for a low-power mobile onion service
platform we'd have a different way to facilitate introductions without
a continuously open set of circuits, but that would also be much more
abuse-prone.


-beth


On 1/9/24 05:19, Michael Rogers wrote:
Sorry, I should have said that we're interested in keeping a hidden 
service running. Without that requirement, I agree we could just close 
the guard connection via DisableNetwork after some idle period.


So the question is about keeping circuits alive, and I guess also 
keeping HS descriptors published and the consensus fresh, although 
hopefully the timing requirements for those are a bit looser due to 
the longer timescales involved.


Cheers,
Michael

On 08/01/2024 21:01, Micah Elizabeth Scott wrote:
A variety of timers are needed on the relay side; on the client side 
though, aren't you mostly limited by the requirement of keeping a TCP 
connection open?


Really deep sleep would involve avoiding any protocol-level 
keepalives as well as TCP keepalives. This seems a lot like just 
shutting down the connection to the guard when sleeping; but of 
course that's got a latency penalty and it's visible to anyone who 
can see network activity.


-beth

On 1/4/24 04:48, Michael Rogers wrote:


Hi,

If I understand right, the C implementation of Tor has various state 
machines that are driven by local libevent timers as well as events 
from the network. For example, when building a circuit, if there 
hasn't been any progress for a certain amount of time then a timer 
fires to handle the timeout.


When running Tor on Android, we need to prevent the OS from
suspending so that these timers fire when they're supposed to. This
uses a lot of battery, because normally the OS would spend most of
its time suspended. Unlike a laptop, an Android device with its
screen turned off is constantly dipping in and out of suspension, and
a lot of the platform's power optimisations are aimed at batching
whatever work needs to be done so that the periods of suspension can
be longer.


If we allowed the OS to suspend then the timers would fire with
arbitrary delays, and I don't really know what impact that would
have: I'd speculate that there might be connectivity problems (e.g.
dead circuits not being detected and replaced) and/or
network-visible changes in the behaviour of circuits that could
affect anonymity.


So I've had a longstanding but not-very-hopeful request that maybe 
Tor's reliance on timers could be reduced, or at least clarified, so 
that we could either allow the OS to suspend without breaking 
anything, or at least know what was likely to break.


And basically I'd just like to renew that request for Arti. Could 
anyone give me an overview of how these local timers are handled in 
Arti, or any information about what's likely to happen if the timers 
are arbitrarily delayed?


Thanks,
Michael


Re: [tor-dev] Timers in Arti?

2024-01-09 Thread Michael Rogers
Sorry, I should have said that we're interested in keeping a hidden 
service running. Without that requirement, I agree we could just close 
the guard connection via DisableNetwork after some idle period.
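
For concreteness, that idle path could be as simple as this over the
control port (a minimal sketch, assuming a local control port on 9051
with no authentication configured; a real controller should use cookie
or password auth):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class IdleNetworkToggle {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket("127.0.0.1", 9051);
                 PrintWriter out = new PrintWriter(s.getOutputStream());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.print("AUTHENTICATE \"\"\r\n");
                out.flush();
                System.out.println(in.readLine()); // expect "250 OK"
                // Closes the guard connection; set back to 0 when the
                // app sees activity again.
                out.print("SETCONF DisableNetwork=1\r\n");
                out.flush();
                System.out.println(in.readLine()); // expect "250 OK"
            }
        }
    }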


So the question is about keeping circuits alive, and I guess also 
keeping HS descriptors published and the consensus fresh, although 
hopefully the timing requirements for those are a bit looser due to the 
longer timescales involved.


Cheers,
Michael

On 08/01/2024 21:01, Micah Elizabeth Scott wrote:
A variety of timers are needed on the relay side; on the client side 
though, aren't you mostly limited by the requirement of keeping a TCP 
connection open?


Really deep sleep would involve avoiding any protocol-level keepalives 
as well as TCP keepalives. This seems a lot like just shutting down the 
connection to the guard when sleeping; but of course that's got a 
latency penalty and it's visible to anyone who can see network activity.


-beth

On 1/4/24 04:48, Michael Rogers wrote:


Hi,

If I understand right, the C implementation of Tor has various state 
machines that are driven by local libevent timers as well as events 
from the network. For example, when building a circuit, if there 
hasn't been any progress for a certain amount of time then a timer 
fires to handle the timeout.


When running Tor on Android, we need to prevent the OS from suspending
so that these timers fire when they're supposed to. This uses a lot of
battery, because normally the OS would spend most of its time
suspended. Unlike a laptop, an Android device with its screen turned
off is constantly dipping in and out of suspension, and a lot of the
platform's power optimisations are aimed at batching whatever work
needs to be done so that the periods of suspension can be longer.


If we allowed the OS to suspend then the timers would fire with
arbitrary delays, and I don't really know what impact that would have:
I'd speculate that there might be connectivity problems (e.g. dead
circuits not being detected and replaced) and/or network-visible
changes in the behaviour of circuits that could affect anonymity.


So I've had a longstanding but not-very-hopeful request that maybe 
Tor's reliance on timers could be reduced, or at least clarified, so 
that we could either allow the OS to suspend without breaking 
anything, or at least know what was likely to break.


And basically I'd just like to renew that request for Arti. Could 
anyone give me an overview of how these local timers are handled in 
Arti, or any information about what's likely to happen if the timers 
are arbitrarily delayed?


Thanks,
Michael



[tor-dev] Timers in Arti?

2024-01-04 Thread Michael Rogers

Hi,

If I understand right, the C implementation of Tor has various state 
machines that are driven by local libevent timers as well as events from 
the network. For example, when building a circuit, if there hasn't been 
any progress for a certain amount of time then a timer fires to handle 
the timeout.


When running Tor on Android, we need to prevent the OS from suspending
so that these timers fire when they're supposed to. This uses a lot of
battery, because normally the OS would spend most of its time
suspended. Unlike a laptop, an Android device with its screen turned
off is constantly dipping in and out of suspension, and a lot of the
platform's power optimisations are aimed at batching whatever work
needs to be done so that the periods of suspension can be longer.


If we allowed the OS to suspend then the timers would fire with
arbitrary delays, and I don't really know what impact that would have:
I'd speculate that there might be connectivity problems (e.g. dead
circuits not being detected and replaced) and/or network-visible
changes in the behaviour of circuits that could affect anonymity.


So I've had a longstanding but not-very-hopeful request that maybe Tor's 
reliance on timers could be reduced, or at least clarified, so that we 
could either allow the OS to suspend without breaking anything, or at 
least know what was likely to break.


And basically I'd just like to renew that request for Arti. Could anyone 
give me an overview of how these local timers are handled in Arti, or 
any information about what's likely to happen if the timers are 
arbitrarily delayed?


Thanks,
Michael




Re: [tor-dev] A way to connect quickly to a newly-online hidden service?

2023-02-23 Thread Michael Rogers

Hi Holmes,

The Briar team has been working on a new app that creates a hidden 
service when the app is first launched. What we've found during testing 
is that the newly published hidden service is usually reachable in the 
first connection attempt (with a timeout of 2 minutes, based on Tor's 
internal timeouts for HS client connections). We wait for a single copy 
of the HS descriptor to be uploaded before showing a QR code, which the 
other device scans and then immediately tries to connect. It sounds like 
you're seeing different results with a similar scenario, so it would be 
interesting to find out what's different between your scenario and ours.
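
Roughly, the "wait for one upload" check can be done over the control
port like this (a minimal sketch on a raw control connection, assuming
no authentication is configured):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class DescriptorUploadWatcher {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket("127.0.0.1", 9051);
                 PrintWriter out = new PrintWriter(s.getOutputStream());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.print("AUTHENTICATE \"\"\r\nSETEVENTS HS_DESC\r\n");
                out.flush();
                String line;
                while ((line = in.readLine()) != null) {
                    // Each successful upload arrives as an asynchronous
                    // "650 HS_DESC UPLOADED <address> ..." event.
                    if (line.startsWith("650 HS_DESC UPLOADED")) {
                        System.out.println("first copy uploaded; " +
                                "safe to show the QR code");
                        break;
                    }
                }
            }
        }
    }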


When you say newly-online, do you mean that the HS has never been 
published before, or that it's been published and has then spent some 
time offline? Is the Tor process still running when the app is offline?


Cheers,
Michael

On 23/02/2023 14:35, Holmes Wilson wrote:

Hi everyone,

In the p2p messaging app we're building, Quiet, users exchange some 
information out-of-band (an onion address) and use that to connect to 
each other over Tor, as they would for direct messages in Ricochet 
Refresh or Cwtch.


One UX failure we see now is that newly-online hidden services are not 
available when users first try to connect, so connecting after users 
have just come online takes unreasonably long (15 minutes or more) and 
Quiet seems like it's broken.


Any thoughts on how to speed up connecting to a newly-online hidden 
service?


One idea we had is to include enough information from the HSDir entry in 
what peers exchange out-of-band, so that they wouldn't need to rely on 
finding the HSDir entry. (And if this information became out of date, we 
could simply encourage users to share information out of band again.) Is 
there an easy way for us to get this information from Tor on the 
receiver side and pass it to Tor when connecting?


Another idea was to run some service of our own as a fallback, that we 
knew would always have the latest HSDir entries posted by our users. Is 
there some straightforward way to hardcode our own HSDir so that Tor 
would always check it alongside others?


If I'm reading the situation wrong or there's some other possibility I'm 
not thinking of, please let me know! We've definitely observed long 
delays in onion addresses becoming available, on top of general 
connection delays, and I'm trying to find a way to improve this situation.


Thanks!

Holmes



[tor-dev] Latest releases missing from changelog

2022-11-09 Thread Michael Rogers

Hi all,

The latest releases (0.4.7.10, 0.4.6.12 and 0.4.5.14) seem to be missing 
from the changelog:


https://gitlab.torproject.org/tpo/core/tor/-/raw/main/ChangeLog

Is this file still the right place to look?

Cheers,
Michael




Re: [tor-dev] Counting HS descriptor uploads

2022-08-17 Thread Michael Rogers

Hi David,

Many thanks for the reply. I think there are still some basic pieces I 
need to get my head around in order to understand this:


1. When a brand new hidden service is created, what are the minimum and 
maximum number of HSDirs that Tor will try to upload the descriptor to?


2. Does each successful upload produce exactly one HS_DESC UPLOADED 
control port event?


3. If Tor can't immediately upload as many copies of the descriptor as 
it wants to, does it retry immediately or is there some delay before 
retrying? (We're trying to decide how to design the UX for this 
situation.) Are there any control port events we could use to track the 
status of the upload process in this case?


4. When a (non-ephemeral) hidden service is created and then the Tor 
process is stopped and restarted, what are the minimum and maximum 
number of HSDirs that Tor will try to upload the descriptor to after 
restarting? In particular, are there any situations where Tor might 
decide that it doesn't need to upload any copies at all after 
restarting, because the previously uploaded copies are still good?


Thanks,
Michael

On 09/08/2022 14:01, David Goulet wrote:

On 28 Jun (13:27:03), Michael Rogers wrote:

Hi,


Better late than never I guess :S...



The Briar team is working on a new app that uses Tor hidden services, and
we're trying to work out when the server publishing a hidden service should
consider the service to be reachable by clients.

We've tried counting HS descriptor uploads via control port events, but we
found that when republishing a hidden service that has been published
before, the number of descriptor uploads seems to vary.

When republishing a hidden service, is it guaranteed that at least one copy
of the descriptor will be uploaded? Or are there circumstances where Tor
might decide that enough copies were previously uploaded, and still contain
up-to-date information about introduction points etc, so no new copies need
to be uploaded?


Descriptor uploads can be quite chaotic and unpredictable. The reason is
that there are various cases that can make a service regenerate its
descriptors and thus republish them.

But in all these cases the descriptor will change in terms of its
"version", while the intro point information might not change. One such
case is when the HSDir hashring changes and the service notices, so it
immediately uploads the existing descriptor(s).

Sometimes one introduction point disappears, so a new one is picked and
a re-upload is done.

And so on; there are really various cases that can change it.

Not sure I'm fully answering the question but if not, let me know.

Cheers!
David




Re: [tor-dev] bridge:// URI and QR codes

2022-08-02 Thread Michael Rogers



On 20/07/2022 18:15, Nathan Freitas wrote:



On Wed, Jul 20, 2022, at 8:01 AM, meskio wrote:

Quoting Torsten Grote (2022-07-19 14:54:01)

On Monday, 18 July 2022 13:47:21 -03 meskio wrote:

What do you think of the proposal? How can we improve it?


A slightly unrelated question:

Was there any consideration about deanonymization attacks by giving the user a
bridge controlled by the attacker? I worry that those get more likely when
getting bridges via links and QR codes becomes normalized.

Apart from the source IP address of the user and their Tor traffic pattern, is
there anything else an attacker can learn from operating the bridge?


At least from my side there was no consideration of this topic yet.
Thank you for bringing it up; I think it's a pretty valid concern and we
should do some planning on it.

I wonder if we should only accept bridge URIs/QR codes when the user
clicks on 'add bridges' inside the Tor-related app. Or would it be
enough to accept bridge URIs at any moment, but clearly communicate to
the user what is happening and ask them for confirmation? We should
never change the bridge configuration silently from a bridge URI
without any user intervention.

I think we should add something about it to the "Recommendations to
implementers" section of the proposal.


I believe in Orbot today we do prompt the user after they scan a code or
click on a bridge link. Definitely agree there should be that step.


Another thing that would be useful for this scenario would be for 
BridgeDB to publish some kind of signed record saying "the bridge with 
such-and-such a fingerprint was known to BridgeDB at such-and-such a 
time" - similar to what can already be queried via the API, but in a 
form that could be distributed offline.


If users were able to distribute these records alongside the 
corresponding bridge lines then apps might decide to treat BridgeDB 
bridges differently - for example, showing a warning if the bridge 
entered by the user was *not* signed by BridgeDB. This would provide a 
useful second layer of trust when finding bridges from sources like 
Telegram bots, where the provenance isn't always clear.
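
To make that concrete, the record could be a canonical string signed by
a BridgeDB key shipped with the app. A hypothetical sketch (the record
format and all names here are invented for illustration; Ed25519 in
java.security needs JDK 15+):

    import java.nio.charset.StandardCharsets;
    import java.security.PublicKey;
    import java.security.Signature;

    public class BridgeAttestation {
        // Verifies a hypothetical record of the form
        // "bridgedb-attestation <fingerprint> <unix-time>".
        static boolean verify(PublicKey bridgeDbKey, String fingerprint,
                long timestamp, byte[] sig) throws Exception {
            String record =
                    "bridgedb-attestation " + fingerprint + " " + timestamp;
            Signature v = Signature.getInstance("Ed25519");
            v.initVerify(bridgeDbKey);
            v.update(record.getBytes(StandardCharsets.UTF_8));
            return v.verify(sig);
        }
    }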


However, including these signatures in a bridge URI might make the URI 
quite long, which in turn might cause issues with scanning QR codes. So 
there might be tradeoffs here.


Cheers,
Michael




[tor-dev] Bootstrapping with a very inaccurate clock

2022-06-28 Thread Michael Rogers

Hello again,

Apologies for starting two threads in quick succession.

First of all I want to say thank you for all the improvements in 
handling clock skew between 0.3.5 and 0.4.5. We recently migrated Briar 
to 0.4.5, and some situations that are common on mobile, such as the 
system clock being configured with the wrong timezone, are now much less 
likely to affect Tor connectivity (except when using obfs4).


When the system clock is very far behind real time, Tor's clock skew
events enable us to warn the user to check their time and date settings.
When the clock is ahead of real time by up to a day, there are no clock
skew events but Tor can bootstrap. But when the clock is several days
ahead of real time, there are no clock skew events and Tor gets stuck
bootstrapping at 14%.


At the moment we're having trouble distinguishing this situation from 
other situations that might cause bootstrapping to get stuck, such as 
access to the Tor network being blocked. So ideally we'd like to be able 
to detect that the clock is the problem and ask the user to fix it.
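
For the behind-clock case, the warning can be driven from control port
status events, roughly like this (a minimal sketch, assuming an
unauthenticated local control port); the far-ahead case is exactly the
one where no CLOCK_SKEW event appears:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class ClockSkewWatcher {
        public static void main(String[] args) throws Exception {
            try (Socket s = new Socket("127.0.0.1", 9051);
                 PrintWriter out = new PrintWriter(s.getOutputStream());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.print("AUTHENTICATE \"\"\r\n"
                        + "SETEVENTS STATUS_GENERAL STATUS_CLIENT\r\n");
                out.flush();
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("650") &&
                            line.contains("CLOCK_SKEW")) {
                        // Clock behind real time: ask the user to check
                        // their time and date settings.
                        System.out.println("skew reported: " + line);
                    } else if (line.contains("BOOTSTRAP") &&
                            line.contains("PROGRESS=14")) {
                        // Stuck here with no CLOCK_SKEW event is the
                        // ambiguous case described above.
                        System.out.println("stuck at 14%: " + line);
                    }
                }
            }
        }
    }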


I should say that this is probably a rare situation. Old devices that 
haven't been turned on for a while sometimes reset their clocks to a 
point in the distant past, but I've never seen a device spontaneously 
change its clock to a point in the distant future. Still, people do mess 
around with the time and date settings so we'd like to detect and fix 
this issue if possible.


Is there any way for Tor to tell, during bootstrapping, that the system 
clock appears to be far ahead of real time?


Thanks,
Michael




[tor-dev] Counting HS descriptor uploads

2022-06-28 Thread Michael Rogers

Hi,

The Briar team is working on a new app that uses Tor hidden services, 
and we're trying to work out when the server publishing a hidden service 
should consider the service to be reachable by clients.


We've tried counting HS descriptor uploads via control port events, but 
we found that when republishing a hidden service that has been published 
before, the number of descriptor uploads seems to vary.


When republishing a hidden service, is it guaranteed that at least one 
copy of the descriptor will be uploaded? Or are there circumstances 
where Tor might decide that enough copies were previously uploaded, and 
still contain up-to-date information about introduction points etc, so 
no new copies need to be uploaded?


Thanks,
Michael




Re: [tor-dev] Onion Service Intro Point Retry Behavior

2019-11-01 Thread Michael Rogers
Hi David,

On 29/10/2019 14:52, David Goulet wrote:
> Long story short, a couple of weeks ago we almost merged a new behavior on
> the service side with #31561 that would have ditched an intro point if its
> circuit timed out, instead of retrying it. (Today, a service always retries
> its intro points up to 3 times on any type of circuit failure.)

Thanks for not merging this yet. :-)

> The primary original argument for retrying is based on the mobile use case.
> If a .onion is running on a cellphone and the network suddenly happens to be
> bad, the service is better off re-establishing the intro circuits, which
> would let the client's retry attempts finally succeed after a bit, instead
> of the client having to re-fetch a descriptor and go to the new intro points.
> 
> Thus, in theory, it is mostly a reachability argument.
> 
> One question that arises from this is: will the client be able to reconnect
> using the old intro points by the time the service has re-established them?
> 
> In other words, does the retry behavior of the *client* allow enough time
> for the service to stabilize in the mobile use case? I'm curious to learn
> from people with experience with this!

For what it's worth, we used to run into the following problem with Briar:

* Device X tries to connect to device Y's hidden service
* X has a cached descriptor for Y's HS
* Since the time when X cached the descriptor, Y has lost its guard
connection, so it's built new intro circuits to new intro points
* After multiple connection attempts, X gives up on the intro points in
the cached descriptor and fetches a new descriptor
* This causes a delay in X connecting to Y

A typical mobile device loses its guard connection frequently - not
necessarily because it loses internet access, but because it switches
between wifi and mobile data. So the scenario above was very common.

Before the HS behaviour was changed to reuse the old intro points, we
had to maintain a patch against Tor to add a controller command for
flushing a cached HS descriptor before trying to connect. This
essentially made the client's descriptor cache redundant, so it was a
slight loss of efficiency, but better than trying a bunch of stale intro
points and then fetching a new descriptor anyway.

If you're considering switching back to the old behaviour, I'd like to
discuss whether we could make one of the following changes to continue
supporting the mobile HS use case:

1. Add a controller command for flushing an HS descriptor
2. Add a controller command for notifying Tor that we lost/gained
internet access, or switched between wifi and mobile data, so Tor knows
that (a) its guard connection may be dead, and (b) its intro circuits
may be dead, but not due to an attack by the intro points, so it can
safely reuse the intro points
3. If intro circuits are closed due to DisableNetwork changing from 0 to
1, remember this and reuse the intro points when the network is re-enabled

Android notifies apps of connectivity changes, so Briar could easily
pass this information on to Tor via a new controller command or by
setting DisableNetwork. (The general problem of detecting whether our
internet connectivity is broken for some definition of broken remains
hard, but fortunately we don't need to solve that to handle the common
cases of switching between wifi and mobile data, and losing mobile
signal, which the OS can tell us about.)

My one-sided two cents. ;-)

Cheers,
Michael


Re: [tor-dev] reproducible builds for Android tor daemon

2019-09-13 Thread Michael Rogers
Hi all,

Just saw this thread while heading out the door and wanted to mention
that we already have a reproducible build setup for Tor and obfs4proxy
binaries for Android and Linux. The binaries are published on JCenter.
Hans-Christoph, hope this shortens your path! :-)

https://code.briarproject.org/briar/tor-reproducer
https://code.briarproject.org/briar/go-reproducer
https://bintray.com/briarproject/org.briarproject

Cheers,
Michael

On 13/09/2019 16:24, Santiago Torres-Arias wrote:
> On Fri, Sep 13, 2019 at 12:32:06PM +0200, Nicolas Vigier wrote:
>> On Thu, 12 Sep 2019, Hans-Christoph Steiner wrote:
>>
>>>
>>> And third, any tips on getting a Linux shared library to build
>>> reproducibly. E.g. is faketime a hard requirement?
>>
>> Usually it's not needed to use faketime. It's only useful if the
>> toolchain has bugs that cannot easily be fixed, causing some timestamps
>> to be inserted somewhere.
>>
> +1, using faketime so liberally would just mean there are other, bigger
> underlying issues (e.g., lack of support for S_D_E...)
> 
> Cheers!
> -Santiago.
> 
> 
> 




Re: [tor-dev] Onion Service - Intropoint DoS Defenses

2019-07-05 Thread Michael Rogers
On 04/07/2019 12:46, George Kadianakis wrote:
> David Goulet  writes:
>> Overall, this rate limit feature does two things:
>>
>> 1. Reduce the overall network load.
>>
>>Soaking the introduction requests at the intro point helps avoid the
>>service creating pointless rendezvous circuits which makes it "less" of an
>>amplification attack.
>>
> 
> I think it would be really useful to get a baseline of how much we
> "Reduce the overall network load" here, given that this is the reason we
> are doing this.
> 
> That is, it would be great to get a graph with how many rendezvous
> circuits and/or bandwidth attackers can induce to the network right now
> by attacking a service, and what's the same number if we do this feature
> with different parameters.

If you're going to do this comparison, I wonder if it would be worth
including a third option in the comparison: dropping excess INTRODUCE2
cells at the service rather than NACKing them at the intro point.

In terms of network load, it seems like this would fall somewhere
between the status quo and the intro point rate-limiting mechanism:
excess INTRODUCE2 cells would be relayed from the intro point to the
service (thus higher network load than intro point rate-limiting), but
they wouldn't cause rendezvous circuits to be built (thus lower network
load than the status quo).
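
To make the drop-at-the-service idea concrete, the decision could be as
simple as a token bucket per service (parameters here are illustrative,
not taken from any proposal):

    public class Intro2RateLimiter {

        private final double ratePerSec; // sustained INTRODUCE2 budget
        private final double burst;      // bucket capacity
        private double tokens;
        private long lastRefill = System.nanoTime();

        public Intro2RateLimiter(double ratePerSec, double burst) {
            this.ratePerSec = ratePerSec;
            this.burst = burst;
            this.tokens = burst;
        }

        // Returns true if this INTRODUCE2 cell should be processed,
        // false if it should be silently dropped (no NACK is sent, so
        // the client sees a rendezvous timeout).
        public synchronized boolean allow() {
            long now = System.nanoTime();
            tokens = Math.min(burst,
                    tokens + (now - lastRefill) * 1e-9 * ratePerSec);
            lastRefill = now;
            if (tokens >= 1.0) {
                tokens -= 1.0;
                return true; // go ahead and build the rendezvous circuit
            }
            return false;
        }
    }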

Unlike intro point rate-limiting, a backlog of INTRODUCE2 cells would
build up in the intro circuits if the attacker was sending cells faster
than the service could read and discard them, so I'd expect availability
to be affected for some time after the attack stopped, until the service
had drained the backlog.

Excess INTRODUCE2 cells would be dropped rather than NACKed, so
legitimate clients would see a rendezvous timeout rather than an intro
point failure; I'm not sure if that's good or bad.

On the other hand there would be a couple of advantages vs intro point
rate-limiting: services could deploy the mechanism immediately without
waiting for intro points to upgrade, and services could adjust their
rate-limiting parameters quickly in response to local conditions (e.g.
CPU load), without needing to define consensus parameters or a way for
services to send custom parameters to their intro points.

Previously I'd assumed these advantages would be outweighed by the
better network load reduction of intro point rate-limiting, but if
there's an opportunity to measure how much network load is actually
saved by each mechanism then maybe it's worth including this mechanism
in the evaluation to make sure that's true?

I may have missed parts of the discussion, so apologies if this has
already been discussed and ruled out.

Cheers,
Michael




Re: [tor-dev] Proposal 300: Walking Onions: Scaling and Saving Bandwidth

2019-02-05 Thread Michael Rogers
Argh, I'm really sorry, I thought I'd reached the end of the proposal
but my questions were addressed further down. Sorry for the noise.

Cheers,
Michael

On 05/02/2019 17:42, Michael Rogers wrote:
> I'm very happy to see this proposal! Two quick questions about relay
> selection:
> 
> * Can a client specify that it wants an exit node whose policy allows
> something unusual, e.g. exiting to a port that's not allowed by the
> default policy? If not, does the client need to keep picking exit nodes
> until it gets a SNIP with a suitable policy?
> 
> * Similarly, if a client has restrictions on the guard nodes it can
> connect to (fascist firewall or IPv4/v6 restrictions, for example), does
> it need to keep picking guards via the directory fallback circuit until
> it gets a suitable one?
> 
> In both cases, perhaps a client with unusual requirements could first
> download the consensus, find a relay matching its requirements, then
> send that relay's index in its extend cell, so the relay receiving the
> extend cell wouldn't know whether the index was picked randomly by a
> client with no special requirements, or non-randomly by a client with
> special requirements?
> 
> I think this would allow the majority of clients to save bandwidth by
> not downloading the consensus, without allowing relays to distinguish
> the minority of clients with unusual exit/guard requirements. (The
> presence of the full consensus on disk would indicate that the client
> had unusual exit/guard requirements at some point, however.)
> 
> Cheers,
> Michael
> 
> On 05/02/2019 17:02, Nick Mathewson wrote:
>> Filename: 300-walking-onions.txt
>> Title: Walking Onions: Scaling and Saving Bandwidth
>> Author: Nick Mathewson
>> Created: 5-Feb-2019
>> Status: Draft
>>
>> 0. Status
>>
>>This proposal describes a mechanism called "Walking Onions" for
>>scaling the Tor network and reducing the amount of client bandwidth
>>used to maintain a client's view of the Tor network.
>>
>>This is a draft proposal; there are problems left to be solved and
>>questions left to be answered.  Once those parts are done, we can
>>fill in section 4 with the final details of the design.
>>
>> 1. Introduction
>>
>>In the current Tor network design, we assume that every client has a
>>complete view of all the relays in the network.  To achieve this,
>>clients download consensus directories at regular intervals, and
>>download descriptors for every relay listed in the directory.
>>
>>The substitution of microdescriptors for regular descriptors
>>(proposal 158) and the use of consensus diffs (proposal 140) have
>>lowered the bytes that clients must dedicate to directory operations.
>>But we still face the problem that, if we force each client to know
>>about every relay in the network, each client's directory traffic
>>will grow linearly with the number of relays in the network.
>>
>>Another drawback in our current system is that client directory
>>traffic is front-loaded: clients need to fetch an entire directory
>>before they begin building circuits.  This places extra delays on
>>clients, and extra load on the network.
>>
>>To anonymize the world, we will need to scale to a much larger number
>>of relays and clients: requiring clients to know about every relay in
>>the set simply won't scale, and requiring every new client to download
>>a large document is also problematic.
>>
>>There are obvious responses here, and some other anonymity tools have
>>taken them.  It's possible to have a client only use a fraction of
>>the relays in a network--but doing so opens the client to _epistemic
>>attacks_, in which the difference in clients' views of the
>>network is used to partition their traffic.  It's also possible to
>>move the problem of selecting relays from the client to the relays
>>themselves, and let each relay select the next relay in turn--but
>>this choice opens the client to _route capture attacks_, in which a
>>malicious relay selects only other malicious relays.
>>
>>In this proposal, I'll describe a design for eliminating up-front
>>client directory downloads.  Clients still choose relays at random,
>>but without ever having to hold a list of all the relays. This design
>>does not require clients to trust relays any more than they do today,
>>or open clients to epistemic attacks.
>>
>>    I hope to maintain feature parity with the current Tor design; I'll
>>    list the places in which I haven't figured out how to do so yet.

Re: [tor-dev] Proposal 300: Walking Onions: Scaling and Saving Bandwidth

2019-02-05 Thread Michael Rogers
I'm very happy to see this proposal! Two quick questions about relay
selection:

* Can a client specify that it wants an exit node whose policy allows
something unusual, e.g. exiting to a port that's not allowed by the
default policy? If not, does the client need to keep picking exit nodes
until it gets a SNIP with a suitable policy?

* Similarly, if a client has restrictions on the guard nodes it can
connect to (fascist firewall or IPv4/v6 restrictions, for example), does
it need to keep picking guards via the directory fallback circuit until
it gets a suitable one?

In both cases, perhaps a client with unusual requirements could first
download the consensus, find a relay matching its requirements, then
send that relay's index in its extend cell, so the relay receiving the
extend cell wouldn't know whether the index was picked randomly by a
client with no special requirements, or non-randomly by a client with
special requirements?

I think this would allow the majority of clients to save bandwidth by
not downloading the consensus, without allowing relays to distinguish
the minority of clients with unusual exit/guard requirements. (The
presence of the full consensus on disk would indicate that the client
had unusual exit/guard requirements at some point, however.)

Cheers,
Michael

On 05/02/2019 17:02, Nick Mathewson wrote:
> Filename: 300-walking-onions.txt
> Title: Walking Onions: Scaling and Saving Bandwidth
> Author: Nick Mathewson
> Created: 5-Feb-2019
> Status: Draft
> 
> 0. Status
> 
>This proposal describes a mechanism called "Walking Onions" for
>scaling the Tor network and reducing the amount of client bandwidth
>used to maintain a client's view of the Tor network.
> 
>This is a draft proposal; there are problems left to be solved and
>questions left to be answered.  Once those parts are done, we can
>fill in section 4 with the final details of the design.
> 
> 1. Introduction
> 
>In the current Tor network design, we assume that every client has a
>complete view of all the relays in the network.  To achieve this,
>clients download consensus directories at regular intervals, and
>download descriptors for every relay listed in the directory.
> 
>The substitution of microdescriptors for regular descriptors
>(proposal 158) and the use of consensus diffs (proposal 140) have
>lowered the bytes that clients must dedicate to directory operations.
>But we still face the problem that, if we force each client to know
>about every relay in the network, each client's directory traffic
>will grow linearly with the number of relays in the network.
> 
>Another drawback in our current system is that client directory
>traffic is front-loaded: clients need to fetch an entire directory
>before they begin building circuits.  This places extra delays on
>clients, and extra load on the network.
> 
>To anonymize the world, we will need to scale to a much larger number
>of relays and clients: requiring clients to know about every relay in
>the set simply won't scale, and requiring every new client to download
>a large document is also problematic.
> 
>There are obvious responses here, and some other anonymity tools have
>taken them.  It's possible to have a client only use a fraction of
>the relays in a network--but doing so opens the client to _epistemic
>attacks_, in which the difference in clients' views of the
>network is used to partition their traffic.  It's also possible to
>move the problem of selecting relays from the client to the relays
>themselves, and let each relay select the next relay in turn--but
>this choice opens the client to _route capture attacks_, in which a
>malicious relay selects only other malicious relays.
> 
>In this proposal, I'll describe a design for eliminating up-front
>client directory downloads.  Clients still choose relays at random,
>but without ever having to hold a list of all the relays. This design
>does not require clients to trust relays any more than they do today,
>or open clients to epistemic attacks.
> 
>I hope to maintain feature parity with the current Tor design; I'll
>list the places in which I haven't figured out how to do so yet.
> 
>I'm naming this design "walking onions".  The walking onion (Allium x
>proliferum) reproduces by growing tiny little bulbs at the
>end of a long stalk.  When the stalk gets too top-heavy, it flops
>over, and the little bulbs start growing somewhere new.
> 
>The rest of this document will run as follows.  In section 2, I'll
>explain the ideas behind the "walking onions" design, and how they
>can eliminate the need for regular directory downloads.  In section 3, I'll
>answer a number of follow-up questions that arise, and explain how to
>keep various features in Tor working.  Section 4 (not yet written)
>will elaborate all the details needed 

Re: [tor-dev] Upcoming Tor change with application impact: "Dormant Mode"

2019-01-04 Thread Michael Rogers
Sorry, I have one more follow-up question.

On 02/01/2019 21:00, Nick Mathewson wrote:
> On Fri, Dec 21, 2018 at 6:34 AM Michael Rogers wrote:
>>
>> Hi Nick,
>>
>> Is the guard connection closed when becoming dormant?
> 
> No; it times out independently.

In that case I assume keepalives from the guard don't prevent Tor from
becoming dormant, or wake it from dormant mode. On Android, a keepalive
will briefly wake the CPU. When that happens, will Tor consume the
keepalive from the network buffer while remaining dormant, or does the
controller need to wake Tor from dormant mode periodically to ensure it
reads from the guard connection?

Thanks again,
Michael




Re: [tor-dev] Upcoming Tor change with application impact: "Dormant Mode"

2019-01-04 Thread Michael Rogers
Hi Nick,

Thanks very much for the reply. Follow-ups inline below.

On 02/01/2019 21:00, Nick Mathewson wrote:
> On Fri, Dec 21, 2018 at 6:34 AM Michael Rogers wrote:
>>
>> Hi Nick,
>>
>> Is the guard connection closed when becoming dormant?
> 
> No; it times out independently.

That's good news from my point of view, but in that case I think the
idea of terminating the pluggable transport process when becoming
dormant might need to be reconsidered?

>> On 13/12/2018 20:56, Nick Mathewson wrote:
>>>DormantTimeoutDisabledByIdleStreams 0|1
>>>If true, then any open client stream (even one not reading or
>>>writing) counts as client activity for the purpose of
>>>DormantClientTimeout. If false, then only network activity 
>>> counts.
>>>(Default: 1)
>>
>> When this option's set to 0 and Tor becomes dormant, will it close any
>> idle client connections that are still open?
> 
> No.  By default, it won't go dormant if there are any idle client
> connections. See DormantTimeoutDisabledByIdleStreams for the option
> that overrides that behavior.

When DormantTimeoutDisabledByIdleStreams is set to 0, what happens to
idle client connections when Tor goes dormant?

>> Will it close client connections on receiving SIGNAL DORMANT?
> 
> No.
> 
>> If Tor doesn't close client connections when becoming dormant, will it
>> become active again (or send an event that the controller can use to
>> trigger SIGNAL ACTIVE) if there's activity on an open client stream?
> 
> No, but that might be a good idea if
> DormantTimeoutDisabledByIdleStreams is set to 0.

Sorry, do you mean it might be a good idea for Tor to become active
again/send an event in that case? Should I open a ticket for this if it
looks like we'll need it?

Cheers,
Michael




Re: [tor-dev] Upcoming Tor change with application impact: "Dormant Mode"

2018-12-21 Thread Michael Rogers
Hi Nick,

Is the guard connection closed when becoming dormant?

On 13/12/2018 20:56, Nick Mathewson wrote:
>DormantTimeoutDisabledByIdleStreams 0|1
>If true, then any open client stream (even one not reading or
>writing) counts as client activity for the purpose of
>DormantClientTimeout. If false, then only network activity counts.
>(Default: 1)

When this option's set to 0 and Tor becomes dormant, will it close any
idle client connections that are still open?

Will it close client connections on receiving SIGNAL DORMANT?

If Tor doesn't close client connections when becoming dormant, will it
become active again (or send an event that the controller can use to
trigger SIGNAL ACTIVE) if there's activity on an open client stream?

Cheers,
Michael




Re: [tor-dev] Dealing with frequent suspends on Android

2018-12-04 Thread Michael Rogers
On 26/11/2018 21:46, Nick Mathewson wrote:
> On Wed, Nov 21, 2018 at 5:10 PM Michael Rogers wrote:
>>
>> On 20/11/2018 19:28, Nick Mathewson wrote:
>>> Hi!  I don't know if this will be useful or not, but I'm wondering if
>>> you've seen this ticket:
>>>   https://trac.torproject.org/projects/tor/ticket/28335
>>>
>>> The goal of this branch is to create a "dormant mode" where Tor does
>>> not run any but the most delay- and rescheduling-tolerant of its
>>> periodic events.  Tor enters this mode if a controller tells it to, or
>>> if (as a client) it passes long enough without user activity.  When in
>>> dormant mode, it doesn't disconnect from the network, and it will wake
>>> up again if the controller tells it to, or it receives a new client
>>> connection.
>>>
>>> Would this be at all helpful for any of this?
>>
>> This looks really useful for mobile clients, thank you!
> 
> Glad to hear it -- it's now merged into Tor's master branch.

Fantastic.

>> The comments on the pull request
>> (https://github.com/torproject/tor/pull/502) suggest that Tor won't
>> enter dormant mode if it's running a hidden service. Are there any plans
>> to support that in future?
> 
> I want to support this for hidden services.  Here's the ticket to
> track that: https://trac.torproject.org/projects/tor/ticket/28424
> 
> This is going to be harder than the other cases, though, so we decided
> to defer it for now and see if we have time later.

Makes sense. I added some simulation results to the ticket that support
dgoulet's assessment that the descriptor may become unreachable within a
few hours if not republished, due to HSDir churn.

However, even one hour in dormant mode would be a huge improvement over
what we can do now. What are the other blockers for running a hidden
service in dormant mode?

Would it be possible/easier to keep an ordinary client circuit alive in
dormant mode? That would allow us to keep a connection to a mailbox*
while dormant.

Cheers,
Michael

* Briar term for a device connected to power and internet that runs a
hidden service on behalf of the owner's main device.




Re: [tor-dev] Dealing with frequent suspends on Android

2018-11-21 Thread Michael Rogers
On 20/11/2018 19:28, Nick Mathewson wrote:
> Hi!  I don't know if this will be useful or not, but I'm wondering if
> you've seen this ticket:
>   https://trac.torproject.org/projects/tor/ticket/28335
> 
> The goal of this branch is to create a "dormant mode" where Tor does
> not run any but the most delay- and rescheduling-tolerant of its
> periodic events.  Tor enters this mode if a controller tells it to, or
> if (as a client) it passes long enough without user activity.  When in
> dormant mode, it doesn't disconnect from the network, and it will wake
> up again if the controller tells it to, or it receives a new client
> connection.
> 
> Would this be at all helpful for any of this?

This looks really useful for mobile clients, thank you!

The comments on the pull request
(https://github.com/torproject/tor/pull/502) suggest that Tor won't
enter dormant mode if it's running a hidden service. Are there any plans
to support that in future?

One of the comments mentions a break-even point for consensus diffs,
where it costs less bandwidth to fetch a fresh consensus than all the
diffs from the last consensus you know about. Are diffs likely to remain
available up to the break-even point, or are there times when it would
be cheaper to use diffs, but you have to fetch a fresh consensus because
some of the diffs have expired? Essentially I'm wondering whether we'd
want to wake Tor from dormant mode occasionally to fetch diffs before
they expire, so we can avoid fetching a fresh consensus later.

Cheers,
Michael




Re: [tor-dev] Dealing with frequent suspends on Android

2018-11-12 Thread Michael Rogers
On 07/11/2018 09:04, teor wrote:
> 
> 
>> On 7 Nov 2018, at 04:10, Michael Rogers  wrote:
>>
>> On 06/11/2018 01:58, Roger Dingledine wrote:
>>> On Tue, Nov 06, 2018 at 11:38:33AM +1000, teor wrote:
>>>>> so if we could ask the guard for
>>>>> regular keepalives, we might be able to promise that the CPU will wake
>>>>> once every keepalive interval, unless the guard connection's lost, in
>>>>> which case it will wake once every 15 minutes. But keepalives from the
>>>>> guard would require a protocol change, which would take time to roll
>>>>> out, and would let the guard know (if it doesn't already) that the
>>>>> client's running on Android.
>>>>
>>>> Tor implemented netflow padding in 0.3.1.1-alpha (May 2017):
>>>> https://gitweb.torproject.org/torspec.git/tree/padding-spec.txt
>>>>
>>>> Connection padding may act like a keepalive, we should consider this
>>>> use case as we improve our padding designs.
>>>
>>> Relays already send a keepalive (padding) cell every 5 minutes: see
>>> the KeepalivePeriod config option. That's separate from any of the new
>>> netflow padding. Clients send them too.
>>
>> Ah, thanks! I didn't realise keepalives were sent from relays to clients
>> as well as vice versa.
>>
>> That gives us a max sleep of 5 minutes when a guard connection's open
>> and 15 minutes when it's not, which is a great improvement.
>>
>> Would it have much impact on the network to reduce the default keepalive
>> interval to, say, one minute?
> 
> It's doable, but it would take a while to roll out to all relays.
> And it seems like a strange solution to a platform-specific problem.

You're right, it's not pretty. Using this hack on the client is bad
enough, and asking the network to support the hack is worse.

On the other hand, Android has a lot of users, and battery-friendly
hidden services on mobile devices would be a great building block for
privacy tools (assuming we could overcome the other obstacles too).

> We might also have to be careful, because a multiple of the keepalive
> interval is used to close connections without any circuits.
> 
>>> The netflow padding is more interesting for the Briar case, since it
>>> comes way way more often than keepalives: so if you're trying for deep
>>> sleep but you wake up for network activity every several seconds, you'll
>>> probably end up sad.
>>
>> Unfortunately we've disabled netflow padding due to bandwidth and
>> battery usage.
> 
> Even with ReducedConnectionPadding?

Interestingly we found that ReducedConnectionPadding didn't make much of
a difference for our use case, perhaps because the hidden service keeps
the OR connection open. If I understand right, closing the connection is
one of the ways ReducedConnectionPadding would normally save bandwidth.

I ran some experiments over the weekend to double-check this. Here's the
traffic per hour for Tor 0.3.4.8 running a hidden service with no
incoming or outgoing connections, averaged over 12 hours:

Default:         sent 415 kB (stdev  90 kB), received 434 kB (stdev 114 kB)
No padding:      sent 176 kB (stdev  74 kB), received 232 kB (stdev 182 kB)
Reduced padding: sent 418 kB (stdev  87 kB), received 312 kB (stdev 183 kB)

Cheers,
Michael




Re: [tor-dev] Dealing with frequent suspends on Android

2018-11-06 Thread Michael Rogers
On 06/11/2018 01:58, Roger Dingledine wrote:
> On Tue, Nov 06, 2018 at 11:38:33AM +1000, teor wrote:
>>>  so if we could ask the guard for
>>> regular keepalives, we might be able to promise that the CPU will wake
>>> once every keepalive interval, unless the guard connection's lost, in
>>> which case it will wake once every 15 minutes. But keepalives from the
>>> guard would require a protocol change, which would take time to roll
>>> out, and would let the guard know (if it doesn't already) that the
>>> client's running on Android.
>>
>> Tor implemented netflow padding in 0.3.1.1-alpha (May 2017):
>> https://gitweb.torproject.org/torspec.git/tree/padding-spec.txt
>>
>> Connection padding may act like a keepalive, we should consider this
>> use case as we improve our padding designs.
> 
> Relays already send a keepalive (padding) cell every 5 minutes: see
> the KeepalivePeriod config option. That's separate from any of the new
> netflow padding. Clients send them too.

Ah, thanks! I didn't realise keepalives were sent from relays to clients
as well as vice versa.

That gives us a max sleep of 5 minutes when a guard connection's open
and 15 minutes when it's not, which is a great improvement.

Would it have much impact on the network to reduce the default keepalive
interval to, say, one minute?
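
For local experiments, the relevant knob seems to be this torrc option
(value in seconds) - though if I understand right it only changes what
our client sends, so on its own it wouldn't make the guard wake us any
more often:

    KeepalivePeriod 60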

> The netflow padding is more interesting for the Briar case, since it
> comes way way more often than keepalives: so if you're trying for deep
> sleep but you wake up for network activity every several seconds, you'll
> probably end up sad.

Unfortunately we've disabled netflow padding due to bandwidth and
battery usage.

Cheers,
Michael




[tor-dev] Dealing with frequent suspends on Android

2018-11-05 Thread Michael Rogers
Hi all,

It's great to see that some children of #25500 have already been
released in the 0.3.4 series. Can I ask about the longer-term plan for
this work, and whether #23289 (or something similar) is part of it?

The context for my question is that we're trying to reduce Briar's power
consumption. Until now we've held a wake lock to keep the CPU awake all
the time, but normally an Android device would go into "deep sleep"
(which corresponds to suspend on other platforms) whenever the screen's
turned off, apart from brief wakeups for alarms and incoming network
traffic. Holding a permanent wake lock has a big impact on battery life.

Most of our background work can be handled with alarms, but we still
need to hold a wake lock whenever Tor's running because libevent timers
don't fire when the CPU's asleep, and Tor gets a nasty surprise when it
wakes up and all its timers are late.
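
For the alarm-driven work, the pattern looks roughly like this -
illustrative Java, not our real code: a repeating alarm wakes the CPU,
and the receiver grabs a short wake lock so the pending work can run,
after which the device is free to sleep again.

    import android.app.AlarmManager;
    import android.app.PendingIntent;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.os.PowerManager;
    import android.os.SystemClock;

    public class WakeupReceiver extends BroadcastReceiver {

        static void schedule(Context ctx, PendingIntent pi) {
            AlarmManager am =
                    (AlarmManager) ctx.getSystemService(Context.ALARM_SERVICE);
            // Inexact 15-minute alarms are the coarsest interval that
            // still wakes the CPU on most devices.
            am.setInexactRepeating(AlarmManager.ELAPSED_REALTIME_WAKEUP,
                    SystemClock.elapsedRealtime()
                            + AlarmManager.INTERVAL_FIFTEEN_MINUTES,
                    AlarmManager.INTERVAL_FIFTEEN_MINUTES, pi);
        }

        @Override
        public void onReceive(Context ctx, Intent intent) {
            PowerManager pm =
                    (PowerManager) ctx.getSystemService(Context.POWER_SERVICE);
            PowerManager.WakeLock lock = pm.newWakeLock(
                    PowerManager.PARTIAL_WAKE_LOCK, "briar:wakeup");
            lock.acquire(5000); // auto-releases after 5 seconds
            // ... run the pending background work here ...
        }
    }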

It looks like most of the work has been moved off the one-second
periodic timer, which is great, but I assume that work's now being
scheduled by other means and still needs to be done punctually, which we
can't currently guarantee on Android without a wake lock.

As far as I can tell, getting rid of the wake lock requires one of the
following:

1. Tor becomes extremely tolerant of unannounced CPU sleeps. I don't
know enough about Tor's software architecture to know how feasible this
is, but my starting assumption would be that adapting a network-oriented
codebase that's been written for a world where time passes at a steady
rate and timers fire punctually, to a world where time passes in fits
and starts and timers fire eventually, would be a nightmare.

2. Tor tolerates unannounced CPU sleeps within some limits. This is
similar to the previous scenario, except the controller sets a regular
alarm to ensure the CPU never sleeps for too long, and libevent ensures
that when the CPU wakes up, any overdue timers fire immediately (maybe
this happens already?). Again, I'd assume that adapting Tor to this
environment would be a huge task, but at least there'd be limits on the
insanity.

One of the difficulties with this option is that under some conditions,
the controller can only schedule one alarm every 15 minutes. Traffic
from the guard would also wake the CPU, so if we could ask the guard for
regular keepalives, we might be able to promise that the CPU will wake
once every keepalive interval, unless the guard connection's lost, in
which case it will wake once every 15 minutes. But keepalives from the
guard would require a protocol change, which would take time to roll
out, and would let the guard know (if it doesn't already) that the
client's running on Android.

3. Tor knows when it next needs to wake up, and relies on the controller
to wake it. This requires a way for the controller to query Tor, and Tor
to query libevent, for the next timer that needs to fire (perhaps from
some subset of timers that must fire punctually even if the CPU's
asleep). Libevent doesn't need to detect overdue timers by itself, but
it needs to provide a hook for re-checking whether timers are overdue.
The delay until the next timer needs to be at least a few seconds long,
at least some of the time, for sleeping to be worthwhile. And finally,
even if all those conditions are met, we run up against the 15-minute
limit on alarms again.
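
Concretely, I imagine the controller exchange for option 3 looking
something like this (entirely hypothetical - no such GETINFO key exists
today):

    GETINFO wakeup/next-timer
    250-wakeup/next-timer=42
    250 OK

i.e. Tor reports that its next punctual timer fires in 42 seconds, and
the controller schedules an alarm accordingly.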

None of these options are good. I'm not even sure if the first and third
are feasible. But I don't know enough about Tor or libevent to be sure.
If you do, I'd be really grateful for your help in understanding the
constraints here.

Cheers,
Michael




Re: [tor-dev] obfs4, meek, active probing and the timeline of pluggable transports

2018-10-29 Thread Michael Rogers
On 27/10/2018 12:50, Piyush Kumar Sharma wrote:
> 2.) What was the motivation to bring in meek as a pluggable transport,
> given the fact that obfs4 works great to cover all the existing problems
> with Tor detection. Was the motivation just the fact that, it will be
> much easier for the users to use meek than obfs4 or something other than
> this?

Hi Piyush,

I'm not a Tor dev but I'll try to answer this.

To use obfs4 the client needs to know the IP address of an obfs4 bridge.
If these addresses are distributed in a public or semi-private way,
eventually the adversary may learn them in the same way that clients do,
in which case they can be blacklisted without active probing.

Meek allows the client to connect to any server that belongs to a "front
domain". If the front domain also hosts a lot of popular services or
important infrastructure then the adversary may be reluctant to block
it, in which case it isn't necessary to keep the front domain secret
from the adversary.
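
Schematically, fronting works because the censor only sees the TLS SNI,
while the real destination travels inside the encrypted HTTP Host header
(hostnames below are placeholders):

    TLS SNI (visible to the censor):   front-domain.example
    HTTP Host header (encrypted):      bridge-backend.example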

Until recently, Meek used AWS and Google App Engine as front domains. I
believe those services have stopped supporting domain fronting, but a
similar tactic may soon become possible with encrypted SNI, which is now
supported by Cloudflare.

If anyone on the list knows whether/when we're likely to see a pluggable
transport based on encrypted SNI I'd love to hear about it.

Cheers,
Michael




Re: [tor-dev] Temporary hidden services

2018-10-22 Thread Michael Rogers
On 19/10/2018 16:05, Leif Ryge wrote:
> On Wed, Oct 17, 2018 at 07:27:32PM +0100, Michael Rogers wrote:
> [...] 
>> If we decided not to use the key blinding trick, and just allowed both
>> parties to know the private key, do you see any attacks?
> [...]
> 
> If I'm understanding your proposal correctly, I believe it would leave
> you vulnerable to a Key Compromise Impersonation (KCI) attack.
> 
> Imagine the scenario where Alice and Bob exchange the information to
> establish their temporary rendezvous onion which they both know the
> private key to, and they agree that Bob will be the client and Alice
> will be the server.
> 
> But, before Bob connects to it, the adversary somehow obtains a copy of
> everything Bob knows (but they don't have the ability to modify data or
> software on his computer - they just got a copy of his secrets somehow).
> 
> Obviously the adversary could then impersonate Bob to Alice, because
> they know everything that Bob knows. But, perhaps less obviously, in the
> case that Bob is supposed to connect to Alice's temporary onion which
> Bob (and now the adversary) know the key to, the adversary can also
> now impersonate Alice to Bob (by overwriting Alice's descriptors and
> taking over her temporary onion service).
> 
> In this scenario, if Bob hadn't known the private key for Alice's
> temporary onion that he is connecting to, impersonating Alice to Bob
> would not have been possible.
> 
> And of course, if the adversary can successfully impersonate both
> parties to each other at this phase, they can provide their own long-term
> keys to each of them, and establish a long-term bidirectional mitm -
> which, I think, would otherwise not be possible even in the event that
> one party's long-term key was compromised.
> 
> Bob losing control of the key material before using it (without his
> computer being otherwise compromised) admittedly seems like an unlikely
> scenario, but you asked for "any attacks", so, I think KCI is one (if
> I'm understanding your proposal correctly).

Hi Leif,

Thanks for pointing this out - I'd heard about this kind of attack but
I'd forgotten to consider it.

We're planning to do a key exchange at the application layer after
making the hidden service connection, so I don't think the adversary's
ability to impersonate Alice's hidden service to Bob would necessarily
lead to application-layer impersonation on its own. But if you hadn't
raised this we might have designed the application-layer key exchange in
a way that was vulnerable to KCI as well, so thanks!

Cheers,
Michael




Re: [tor-dev] Temporary hidden services

2018-10-22 Thread Michael Rogers
On 19/10/2018 14:01, George Kadianakis wrote:
> Michael Rogers  writes:
>> A given user's temporary hidden service addresses would all be related
>> to each other in the sense of being derived from the same root Ed25519
>> key pair. If I understand right, the security proof for the key blinding
>> scheme says the blinded keys are unlinkable from the point of view of
>> someone who doesn't know the root public key (and obviously that's a
>> property the original use of key blinding requires). I don't think the
>> proof says whether the keys are unlinkable from the point of view of
>> someone who does know the root public key, but doesn't know the blinding
>> factors (which would apply to the link-reading adversary in this case,
>> and also to each contact who received a link). It seems like common sense
>> that you can't use the root key (and one blinding factor, in the case of
>> a contact) to find or distinguish other blinded keys without knowing the
>> corresponding blinding factors. But what seems like common sense to me
>> doesn't count for much in crypto...
>>
> 
> Hm, where did you get this about the security proof? The only security
> proof I know of is https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf and 
> I don't see
> that assumption anywhere in there, but it's also been a long while since
> I read it.

I may have misunderstood the paper, but I was talking about the
unlinkability property defined in section 4.1.

If I understand right, the proof says that descriptors created with a
given identity key are unlinkable to each other, in the sense that an
adversary who's allowed to query for descriptors created with the
identity key can't tell whether one of the descriptors has been replaced
with one created with a different identity key.

It seems to follow that the blinded keys used to sign the descriptors*
are unlinkable, in the sense that an adversary who's allowed to query
for blinded keys derived from the identity key can't tell whether one of
the blinded keys has been replaced with one derived from a different
identity key - otherwise the adversary could use that ability to
distinguish the corresponding descriptors.

What I was trying to say before is that although I don't understand the
proof in section 5.1 of the paper, I *think* it's based on an adversary
who only sees the descriptors and doesn't also know the identity public
key. This is totally reasonable for the original setting, where we're
not aiming to provide unlinkability from the perspective of someone who
knows the identity public key. But it becomes problematic in this new
setting we're discussing, where the adversary is assumed to know the
identity public key and we still want the blinded keys to be unlinkable.

* OK, strictly speaking the blinded keys aren't used to sign the
descriptors directly, they're used to certify descriptor-signing keys -
but the paper argues that the distinction doesn't affect the proof.

> I think in general you are OK here. An informal argument: according to
> rend-spec-v3.txt appendix A.2 the key derivation is as follows:
> 
> derived private key: a' = h a (mod l)
> derived public key: A' = h A = (h a) B
> 
> In your case, the attacker does not know 'h' (the blinding factor),
> whereas in the case of onion service the attacker does not know 'a' or
> 'a*B' (the private/public key). In both cases, the attacker is missing
> knowledge of a secret scalar, so it does not seem to make a difference
> which scalar the attacker does not know.
> 
> Of course, the above is super informal, and I'm not a cryptographer,
> yada yada.

I agree it seems like it should be safe. My point is really just that we
seem to have gone beyond what's covered by the proof, which tends to
make me think I should prefer a solution that I understand a bit better.
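
To make the quoted derivation concrete for myself, here's a toy sketch
of the scalar side - illustrative Java, not a real Ed25519
implementation; it only shows which secret scalar each party is missing:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class BlindingSketch {
        // Order of the Ed25519 base point subgroup:
        // l = 2^252 + 27742317777372353535851937790883648493
        static final BigInteger L = BigInteger.valueOf(2).pow(252)
                .add(new BigInteger("27742317777372353535851937790883648493"));

        public static void main(String[] args) {
            SecureRandom rng = new SecureRandom();
            BigInteger a = new BigInteger(250, rng).mod(L); // identity privkey
            BigInteger h = new BigInteger(250, rng).mod(L); // blinding factor
            BigInteger aPrime = h.multiply(a).mod(L);       // a' = h a (mod l)
            // The onion-service adversary knows h but not a; the adversary
            // in our setting knows the identity key but not h. Either way,
            // they are missing one secret scalar.
            System.out.println("a' = " + aPrime);
        }
    }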

(At the risk of wasting your time though, I just want to suggest an
interesting parallel. Imagine we're just dealing with a single ordinary
key pair, no blinding involved. The public key X = xB, where x is the
private key and B is the base point. Now obviously we rely on this property:

1. Nobody can find x given X and B

But we don't usually require that:

2. Nobody can tell whether public keys X and Y share the same base point
without knowing x, y, or the base point
3. Nobody can tell whether X has base point B without knowing x

We don't usually care about these properties because the base point is
public knowledge. But in the key blinding setting, the base point is
replaced with the identity public key. As far as I can see, the proof in
the paper covers property 2 but not property 3. I'm certainly not saying
that I know whether property 3 is true - I just want to point out that
it seems to be distinct from properties 1 and 2.)


Re: [tor-dev] Temporary hidden services

2018-10-19 Thread Michael Rogers
On 18/10/2018 13:26, George Kadianakis wrote:
> Michael Rogers  writes:
> 
>> Hi George,
>>
>> On 15/10/2018 19:11, George Kadianakis wrote:
>>> Nick's trick seems like a reasonable way to avoid the issue with both 
>>> parties
>>> knowing the private key.
>>
>> Thanks! Good to know. Any thoughts about how to handle the conversion
>> between ECDH and EdDSA keys?
>>
> 
> Hmm, that's a tricky topic! Using the same x25519 keypair for DH and
> signing is something that should be done only under supervision by a
> proper cryptographer(tm). I'm not a proper cryptographer so I'm
> literally unable to evaluate whether doing it in your case would be
> secure. If it's possible I would avoid it altogether...
> 
> I think one of the issues is that when you transform your x25519 DH key
> to an ed25519 key and use it for signing, if the attacker is able to
> choose what you sign, the resulting signature will basically provide a
> DH oracle to the attacker, which can result in your privkey getting
> completely pwned. We actually do this x25519<->ed255519 conversion for
> onionkeys cross-certificates (proposal228) but we had the design
> carefully reviewed by people who know what's going on (unlike me).
> 
> In your case, the resulting ed25519 key would be used to sign the
> temporary HS descriptor. The HS descriptor is of course not entirely
> attacker-controlled data, but part of it *could be considered* attacker
> controlled (e.g. the encrypted introduction points), and I really don't
> know whether security can be impacted in this case. Also there might be
> other attacks that I'm unaware of... Again, you need a proper
> cryptographer for this.

Thanks, that confirms my reservations about converting between ECDH and
EdDSA keys, especially when we don't fully control what each key will be
used for. I think we'd better hold off on that approach unless/until the
crypto community comes up with idiot-proof instructions.

> A cheap way to avoid this, might be to include both an x25519 and an
> ed25519 key in the "link" you send to the other person. You use the
> x25519 key to do the DH and derive the shared secret, and then both
> parties use the shared secret to blind the ed25519 key and derive the
> blinded (aka hierarchically key derived) temporary onion service
> address... Maybe that works for you, but it will double the link size,
> which might impact UX.

Nice! Link size aside, that sounds like it ought to work.

A given user's temporary hidden service addresses would all be related
to each other in the sense of being derived from the same root Ed25519
key pair. If I understand right, the security proof for the key blinding
scheme says the blinded keys are unlinkable from the point of view of
someone who doesn't know the root public key (and obviously that's a
property the original use of key blinding requires). I don't think the
proof says whether the keys are unlinkable from the point of view of
someone who does know the root public key, but doesn't know the blinding
factors (which would apply to the link-reading adversary in this case,
and also to each contact who received a link). It seems like common sense
that you can't use the root key (and one blinding factor, in the case of
a contact) to find or distinguish other blinded keys without knowing the
corresponding blinding factors. But what seems like common sense to me
doesn't count for much in crypto...

We'd also have to be careful about the number of blinded keys generated
from a given root key. The security proof uses T = 2^16 as an example
for the maximum number of epochs, giving a 16-bit security loss vs
normal Ed25519. In this scheme T would be the maximum number of contacts
rather than epochs. 2^16 is more than enough for the current context,
where contacts are added manually, but we'd have to bear in mind that it
wouldn't be safe to use this for automatic exchanges initiated by other
parties.

> And talking about UX, this is definitely a messy protocol UX-wise. One
> person has to wait for the other person to start up a temporary HS. What
> happens if the HS never comes up? How often does the other person check
> for the HS? How long does the first person keep the HS up? You can
> probably hide all these details on the UI, but it still seems like a
> messy situation.

Messy? Yeah, welcome to P2P. ;-)

We're testing a prototype of the UX at the moment.

Bringing up the hidden service tends to take around 30 seconds, which is
a long time if you make the user sit there and watch a progress wheel,
but not too bad if you let them go away and do other things until a
notification tells them it's done.

Of course that's the happy path, where the contact's online and has
already opened the user's link. If the contact sent their link and then
went offline, th

Re: [tor-dev] Temporary hidden services

2018-10-17 Thread Michael Rogers
Hi George,

On 15/10/2018 19:11, George Kadianakis wrote:
> Nick's trick seems like a reasonable way to avoid the issue with both parties
> knowing the private key.

Thanks! Good to know. Any thoughts about how to handle the conversion
between ECDH and EdDSA keys?

If we decided not to use the key blinding trick, and just allowed both
parties to know the private key, do you see any attacks?

> I have a separate question wrt the threat model:
> 
> It seems to me that adversary in this game can observe the link, and all
> these stunts are done just for the case where the adversary steals the
> link (i.e. the temporary ECDH public keys).
> 
> In that case, given that both Alice and Bob are completely
> unauthenticated and there is no root trust, how can you ensure that the
> adversary Eve won't perform the ECDH herself, then connect to the
> temporary onion service, and steal the long-term onion service link
> (thereby destroying the secrecy of the long-term onion service for ever,
> even if the attack is detected in the future through Alice and Bob
> communicating in an out-of-band way).
> 
> Are we assuming that Alice and Bob have no common shared-secret in
> place?  Because if they did, then you could use that from the start to
> encrypt the long-term onion service identifier. If you don't, you could
> potentially fall into attacks like the one above.

Great question. We assume the channel over which the links are exchanged
is insecure, so an adversary who can read and modify the channel can
carry out a man-in-the-middle attack as you describe. However, we also
assume there are some adversaries that can read but not modify the
channel, and it's worth protecting against those adversaries even if we
can't protect against stronger ones that can also modify the channel.

A realistic example of a read-only adversary would be one that gets
retrospective access to chat logs.

As you pointed out, modification can potentially be detected later
(although we haven't designed the protocol for doing that yet). I guess
that may help to deter adversaries who don't want to reveal that they
can read and modify the channel.

Cheers,
Michael




Re: [tor-dev] Temporary hidden services

2018-10-01 Thread Michael Rogers
On 28/09/2018 02:40, Nick Mathewson wrote:
> On Thu, Sep 27, 2018 at 9:26 AM Michael Rogers  
> wrote:
>>
>> Hi all,
>>
>> The Briar team is working on a way for users to add each other as
>> contacts by exchanging links without having to meet in person.
>>
>> We don't want to include the address of the user's long-term Tor hidden
>> service in the link, as we assume the link may be observed by an
>> adversary, who would then be able to use the availability of the hidden
>> service to tell whether the user was online at any future time.
>>
>> We're considering two solutions to this issue. The first is to use a
>> temporary hidden service that's discarded after, say, 24 hours. The
>> address of the temporary hidden service is included in the link. This
>> limits the window during which the user's activity is exposed to an
>> adversary who observes the link, but it also requires the contact to use
>> the link before it expires.
>>
>> The second solution is to include an ECDH public key in the link,
>> exchange links with the contact, and derive a hidden service key pair
>> from the shared secret. The key pair is known to both the user and the
>> contact. One of them publishes the hidden service, the other connects to
>> it. They exchange long-term hidden service addresses via the temporary
>> hidden service, which is then discarded.
> 
> 
> Here's a third idea to think about:  What if you use the same key
> derivation trick we use in v3 onion services?
> 
> That is, every user could have a long term private key x and a public
> key g^x.  If the user with key g^x wants to allow a user with g^y to
> meet them with them, they derive s=h( g^xy) as the shared secret...
> but instead of using s as a private key, they do the key derivation
> trick, so that the single-use public key is derived as (g^x)*(g^s) =
> g^(x+s), and the single use secret key is derived as (x + s).  This
> way, each of them winds up with a private/public keypair for use with
> the other; each user is the only one that knows their private key; and
> the two of them are the only ones who can derive the public key.
> 
> You could do this in Tor's v3 onion-service design as-is: Just put
> h(g^xy) as the the "optional secret s" input when deriving the daily
> key.
> 
> For more info on this key derivation mechanism, see rend-spec-v3.txt,
> appendix A.
> 
> (Warning: I am not a cryptographer; I haven't thought about this idea
> very hard yet.)

Hi Nick,

This is a really interesting idea, thanks. I'm kicking myself because we
tried to come up with a way to derive a key pair such that the user gets
the private key and the contact gets the public key, and we couldn't
come up with anything - but it's already there as part of the HS design!

However, I'm much further away from being a cryptographer than you are,
and there are aspects of this that definitely go beyond my expertise.

The biggest one is that the user and her contact need to start with ECDH
key pairs (in order to derive a shared secret) and end up with an
Ed25519 key pair (in order to use it for a hidden service), whereas the
v3 key blinding scheme starts and ends with Ed25519 key pairs. I believe
there are ways to convert between X25519 and Ed25519 keys, but I don't
know what caveats would come with doing that, especially considering
that we want to reuse the DH keys for other exchanges.
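
To check my own understanding of the trick, I worked through it in a toy
group - illustrative Java over integers mod a prime, emphatically NOT
Curve25519, with made-up numbers:

    import java.math.BigInteger;

    public class DerivationTrick {
        public static void main(String[] args) {
            BigInteger p = BigInteger.valueOf(2).pow(127)
                    .subtract(BigInteger.ONE);        // toy prime modulus
            BigInteger g = BigInteger.valueOf(5);     // toy generator
            BigInteger x = new BigInteger("1234567890123456789"); // Alice
            BigInteger y = new BigInteger("9876543210987654321"); // Bob
            BigInteger gx = g.modPow(x, p);
            BigInteger gy = g.modPow(y, p);

            // s = h(g^xy); a real design hashes the shared secret, here
            // we use it directly for brevity.
            BigInteger s = gy.modPow(x, p); // Alice's view == gx.modPow(y, p)

            // Both sides can derive the single-use public key g^(x+s),
            // but only Alice can derive the private key x + s.
            BigInteger singleUsePub = gx.multiply(g.modPow(s, p)).mod(p);
            BigInteger singleUsePriv = x.add(s);
            System.out.println(
                    singleUsePub.equals(g.modPow(singleUsePriv, p))); // true
        }
    }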

I understand that the Tor devs are at a meeting this week, but hoping to
pick this up when you get back.

Cheers,
Michael




Re: [tor-dev] Temporary hidden services

2018-10-01 Thread Michael Rogers
Hi Chad,

On 27/09/2018 20:02, Chad Retz wrote:
> I am no expert here, but I'm confused by "the client connecting to the
> service knows the service's private key". Why not just create an onion
> service (per contact) and then use the client authentication feature
> to ensure they share the same secret? Client auth is built in to
> discovery and from what I understand, retains anonymity of the owner.

Creating an onion service per contact would be a possibility. Some
information about the user's online and offline periods would still be
leaked to an adversary who observed the link, via the publication and
renewal of the hidden service descriptor. But I think you're right that
client auth would reduce the leakage by preventing the adversary from
connecting to the service to test whether it was online at any given
moment. Thanks for the suggestion!

> Also, why derive the hidden service key pair from the shared secret
> instead of having the sender provide the address based on a privately
> derived key pair?

I admit it's a weird way of doing things. The shared secret approach
allows the user to use the same link for all contacts over a long
period, without exposing a long-term hidden service address to an
adversary who observes the links.

This has some advantages, such as being able to publish your link via
multiple channels (email sig, social media profile, etc) that recipients
can check to increase their confidence that they've received your
authentic link.

On the other hand, time-limited or single-use links are less likely to
become surveillance selectors in their own right, and the "key
fingerprints on business cards" pattern has never really caught on. So
there are pros and cons to both approaches.

Cheers,
Michael

> To your primary question, I too would like to know
> that, given the private key of an onion service, the anonymity of the
> original publisher is maintained. I would guess that it is (granted
> they can overwrite the descriptor and take it over and what not).
> 
> Chad
> On Thu, Sep 27, 2018 at 8:26 AM Michael Rogers  
> wrote:
>>
>> Hi all,
>>
>> The Briar team is working on a way for users to add each other as
>> contacts by exchanging links without having to meet in person.
>>
>> We don't want to include the address of the user's long-term Tor hidden
>> service in the link, as we assume the link may be observed by an
>> adversary, who would then be able to use the availability of the hidden
>> service to tell whether the user was online at any future time.
>>
>> We're considering two solutions to this issue. The first is to use a
>> temporary hidden service that's discarded after, say, 24 hours. The
>> address of the temporary hidden service is included in the link. This
>> limits the window during which the user's activity is exposed to an
>> adversary who observes the link, but it also requires the contact to use
>> the link before it expires.
>>
>> The second solution is to include an ECDH public key in the link,
>> exchange links with the contact, and derive a hidden service key pair
>> from the shared secret. The key pair is known to both the user and the
>> contact. One of them publishes the hidden service, the other connects to
>> it. They exchange long-term hidden service addresses via the temporary
>> hidden service, which is then discarded.
>>
>> The advantage of the second solution is that the user's link is static -
>> it doesn't expire and can be shared with any number of contacts. A
>> different shared secret, and thus a different temporary hidden service,
>> is used for adding each contact.
>>
>> But using a hidden service in such a way that the client connecting to
>> the service knows the service's private key is clearly a departure from
>> the normal way of doing things. So before pursuing this idea I wanted to
>> check whether it's safe, in the sense that the hidden service still
>> conceals its owner's identity from the client.
>>
>> Attacks against the availability of the service (such as uploading a
>> different descriptor) are pointless in this scenario because the client
>> is the only one who would connect to the service anyway. So I'm just
>> interested in attacks against anonymity.
>>
>> Can anyone shed any light on this question? Or is this way of using
>> hidden services too disgusting to even discuss? :-)
>>
>> Thanks,
>> Michael




[tor-dev] Temporary hidden services

2018-09-27 Thread Michael Rogers
Hi all,

The Briar team is working on a way for users to add each other as
contacts by exchanging links without having to meet in person.

We don't want to include the address of the user's long-term Tor hidden
service in the link, as we assume the link may be observed by an
adversary, who would then be able to use the availability of the hidden
service to tell whether the user was online at any future time.

We're considering two solutions to this issue. The first is to use a
temporary hidden service that's discarded after, say, 24 hours. The
address of the temporary hidden service is included in the link. This
limits the window during which the user's activity is exposed to an
adversary who observes the link, but it also requires the contact to use
the link before it expires.

The second solution is to include an ECDH public key in the link,
exchange links with the contact, and derive a hidden service key pair
from the shared secret. The key pair is known to both the user and the
contact. One of them publishes the hidden service, the other connects to
it. They exchange long-term hidden service addresses via the temporary
hidden service, which is then discarded.

The advantage of the second solution is that the user's link is static -
it doesn't expire and can be shared with any number of contacts. A
different shared secret, and thus a different temporary hidden service,
is used for adding each contact.

But using a hidden service in such a way that the client connecting to
the service knows the service's private key is clearly a departure from
the normal way of doing things. So before pursuing this idea I wanted to
check whether it's safe, in the sense that the hidden service still
conceals its owner's identity from the client.

Attacks against the availability of the service (such as uploading a
different descriptor) are pointless in this scenario because the client
is the only one who would connect to the service anyway. So I'm just
interested in attacks against anonymity.

Can anyone shed any light on this question? Or is this way of using
hidden services too disgusting to even discuss? :-)

Thanks,
Michael




Re: [tor-dev] Alternative directory format for v3 client auth

2018-07-11 Thread Michael Rogers
On 11/07/18 14:22, George Kadianakis wrote:
> Michael Rogers  writes:
> 
>> On 10/07/18 19:58, George Kadianakis wrote:
>>> here is a patch with an alternative directory format for v3 client auth
>>> crypto key bookkeeping as discussed yesterday on IRC:
>>>https://github.com/torproject/torspec/pull/23
>>>
>>> Thanks for making me edit the spec because it made me think of various
>>> details that had to be thought of.
>>>
>>> Let me know if you don't like it or if something is wrong.
>>
>> Minor clarification: line 2298 says the keypair is stored, it might be
>> clearer to say the private key is stored.
>>
>> Nitpick: should the directory be called "client_authorized_privkeys" if
>> it might contain private keys, public keys, or a mixture of the two?
>>
> 
> Good points in both cases. Will fix soon (along with other feedback if 
> received).
> 
> Other than that, what do you think about the whole concept? Too complex?
> Logical? Too much?
> 
> Cheers for the feedback! :)

Sorry for being late to the party - I just this morning finished reading
the thread from 2016 where the client auth design was hashed out. :-/

I think putting each client's keys in a separate file makes a lot of sense.

At a higher level there are some things I'm not sure about. Sorry if
this is threadjacking, but you said the magic words "whole concept". ;-)

First, Ed25519-based authentication ("intro auth"). Could this be punted
to the application layer, or is there a reason it has to happen at the
Tor layer?

Second, X25519-based authorization ("desc auth"). If I understand right,
using asymmetric keypairs here rather than symmetric keys makes it
possible for the client to generate a keypair and send the public key to
the service over an authenticated but not confidential channel. But the
client may not know how to do that, so we also need to support an
alternative workflow where the service generates the keypair and sends
the private key to the client over an authenticated and confidential
channel.

The upside of this design is the ability to use an authenticated but not
confidential channel (as long as the client and service understand which
workflow they need to use). The downside is extra complexity. I'm not
really convinced this is a good tradeoff. But I'm guessing this argument
has already been had, and my side lost. :-)

Third, what's the purpose of the fake auth-client lines for a service
that doesn't use client auth? I understand that when a service does use
client auth, it may not want clients (or anyone else who knows the onion
address) to know the exact number of clients. But when a service doesn't
use client auth, anyone who can decrypt the first layer of the
descriptor can also decrypt the second layer, and therefore knows that
the auth-client lines are fake. So are they just for padding in that
case? But the first layer's padded before encryption anyway.

Fourth, what goals does desc auth achieve in the v3 design? If I
understand right, in v2 its major goal was to hide the intro points from
everyone except authorised clients (including HSDirs). In v3 the intro
points are already hidden from anyone who doesn't know the onion address
(including HSDirs), so this goal can be achieved by not revealing the
onion address to anyone except authorised clients.

I'm probably missing something, but as far as I can see the only other
goal achieved by desc auth is the ability to revoke a client's access
without needing to distribute a new onion address to other clients. This
seems useful. But again, I'd ask whether it could be punted to the
application layer. The only advantage I can see from putting it at the
Tor layer is that the list of intro points is hidden from revoked
clients. Is there a real world use case where that's a big enough
advantage to justify putting all this authorisation machinery at the Tor
layer? Or maybe there are other things this design achieves that I
haven't thought of.

Anyway, sorry for the bag of assorted questions. I've been meaning to
catch up on all the discussions where they've probably been answered
already, but it's becoming clear that's a losing battle, so I'd better
just send them. Apologies if they're redundant or uninformed.

Cheers,
Michael




Re: [tor-dev] Alternative directory format for v3 client auth

2018-07-11 Thread Michael Rogers
On 10/07/18 19:58, George Kadianakis wrote:
> here is a patch with an alternative directory format for v3 client auth
> crypto key bookkeeping as discussed yesterday on IRC:
>https://github.com/torproject/torspec/pull/23
> 
> Thanks for making me edit the spec because it made me think of various
> details that had to be thought of.
> 
> Let me know if you don't like it or if something is wrong.

Minor clarification: line 2298 says the keypair is stored, it might be
clearer to say the private key is stored.

Nitpick: should the directory be called "client_authorized_privkeys" if
it might contain private keys, public keys, or a mixture of the two?

Cheers,
Michael




Re: [tor-dev] Proposal 286: Controller APIs for hibernation access on mobile

2017-12-06 Thread Michael Rogers
On 05/12/17 22:18, teor wrote:
> 
>> On 6 Dec 2017, at 05:12, Michael Rogers <mich...@briarproject.org> wrote:
>>
>> If the service needs to fetch a consensus and microdescs before it can
>> respond to a rendezvous cell, the delay could be far longer than the
>> difference in latency between a mobile phone and a laptop. So my point
>> is that the client will be able to tell that the service was woken from
>> idle by the rendezvous cell, which might have implications for the
>> service's anonymity.
>>
>> For example, it lets the client know that the service isn't running on
>> the same device as another service the client recently connected to,
>> otherwise the device wouldn't have been idle. Maybe that's unavoidable,
>> or not worth avoiding, but I just wanted to flag the issue.
> 
> We try to avoid attacks like this.
> Or, if we can't, we try to minimise their effect.
> 
> But when multiple onion services or clients share a tor instance, they also
> share the state of the consensus, directory documents, and guards.
> 
> Our best answer is probably: "don't share a tor instance if you want
> unlinkable onion services".
> 
> Or: "don't IDLE if you want unlinkable onion services".
> (Also, never lose your network connection.)

Sounds reasonable. Maybe something to this effect could be added to the
proposal, so app developers know what to expect in terms of linkability?

Could a long delay between receiving a rendezvous cell and responding
cause any other issues? For example, is there a high probability of the
client timing out before the service has fetched enough directory info
to be able to respond? If so, maybe it's worthwhile for the service to
be more proactive about keeping its directory info fresh?

>> Maybe I've misunderstood the proposal, but I thought the intent was that
>> Tor wouldn't fetch anything in IDLE mode, and wouldn't automatically
>> change from IDLE to IDLE_UPDATING - it would need a SLEEPWALK signal to
>> tell it to change to IDLE_UPDATING, and then it would automatically
>> change back to IDLE when it was done.
> 
> LIVE fetches directory documents so it always has a live consensus.
> IDLE fetches directory documents just often enough to stay online.
> SLEEP fetches nothing.

OK, so I guess the use case for SLEEPWALK is telling Tor to fetch a live
consensus and microdescs when it otherwise wouldn't have done - i.e. it
allows the controller to manage the freshness of the directory info?

But I'm really just guessing here. Nick, can you clarify?

>> Rather than the controller picking an interval, would it be better for
>> Tor to specify (maybe in its response to the IDLE signal) when it next
>> needs to be woken?
> 
> Or, "the latest time it can be woken to have directory documents with
> property X", where X is some combination of:
> * a live consensus
> * a reasonably live consensus
> * enough non-expired descriptors to build circuits

Yup, I think that makes sense - it achieves [what I guess is] the
purpose of SLEEPWALK while keeping knowledge about *why* Tor needs to be
woken at that time encapsulated within Tor, which is an improvement.

>>> So if IDLE doesn't meet your needs, it would help us to know why. If
>>> there's enough demand for it, it may be better to add a "WARM" state,
>>> where Tor checks for directory documents whenever a consensus
>>> expires, and otherwise acts like IDLE.
>>
>> Within the scope of this proposal that sounds like a good solution. But
>> if we're looking ahead to changes that allow the device to sleep without
>> shutting down Tor or disabling its network connectivity, then the
>> controller will need to be responsible for managing sleeps and wakeups,
>> which fits better with [my guess as to the intent of] the SLEEPWALK
>> mechanism than a WARM state.
> 
> We do need a use case here :-)
> 
> And yes, I agree that the controller should be able to manage wakeups.

OK, I have two use cases. They go beyond the scope of this proposal
because they're also concerned with CPU wakeups, but I'm not sure we can
really design the controller API without considering CPU wakeups at all.

The first use case is saving power by putting the device to sleep, while
keeping a hidden service available.

"Sleep" on Android is similar to suspend on Linux (for recent Android
kernels it's identical). User-space code is paused and the kernel only
responds to a limited set of interrupts, including network activity and
alarms.

Entering this state without disabling Tor's network connectivity causes
it to panic when the device wakes up - its libevent timers don't fire
during sleep, so it thinks the clock has jumped. Just suppressing

Re: [tor-dev] Proposal 286: Controller APIs for hibernation access on mobile

2017-12-05 Thread Michael Rogers
On 01/12/17 12:16, teor wrote:
> 
>> On 1 Dec 2017, at 21:56, Michael Rogers <mich...@briarproject.org> wrote:
>>
>>> On 30/11/17 12:55, Nick Mathewson wrote:
>>>
>>>   When a Tor instance that is running an onion service is IDLE, it
>>>   does the minimum to try to remain responsive on the onion
>>>   service: It keeps its introduction points open if it can. Once a
>>>   day, it fetches new directory information and opens new
>>>   introduction points.
>>
>> If a client connects to the service, the service will need to build a
>> circuit to the rendezvous point. Does it fetch up-to-date directory
>> information before doing so?
> 
> Interesting question.
> 
> It's not required, because the INTRODUCE cell contains all the
> rendezvous point details. But I think we should be consistent,
> and fetch a consensus and enough microdescs before performing
> any client or service activity, just like we do when bootstrapping.
> Otherwise, we'll end up with weird bugs.

Could/should this be done by reusing the existing bootstrapping process,
i.e. by reverting to an earlier stage in the process and repeating the
rest of it?

>> If so, there's a delay that may let the
>> client know the service was idle. Is that a problem?
> 
> Mobile clients typically have high latency already.
> If enough clients do this, it won't be a problem.

If the service needs to fetch a consensus and microdescs before it can
respond to a rendezvous cell, the delay could be far longer than the
difference in latency between a mobile phone and a laptop. So my point
is that the client will be able to tell that the service was woken from
idle by the rendezvous cell, which might have implications for the
service's anonymity.

For example, it lets the client know that the service isn't running on
the same device as another service the client recently connected to,
otherwise the device wouldn't have been idle. Maybe that's unavoidable,
or not worth avoiding, but I just wanted to flag the issue.

>>> 3.2. Changing the hibernation state
>>>
>>>   We add the following new possible values to the SIGNAL controller
>>>   command:
>>>  "SLEEP" -- enter the sleep state, after an appropriate
>>> shutdown interval.
>>>
>>>  "IDLE" -- enter the idle state
>>>
>>>  "SLEEPWALK" -- If in sleep or idle, start probing for
>>> directory information in the sleep-update or idle-update
>>> state respectively.  Remain in that state until we've
>>> probed for directory information, or until we're told to
>>> IDLE or SLEEP again, or (if we're idle) until we get client
>>> activity. Has no effect if not in sleep or idle.
>>>
>>>  "WAKEUP" -- If in sleep, sleep-update, idle, idle-update, or
>>> shutdown:sleep state, enter the live state.  Has no effect
>>> in any other state.
>>
>> How does the controller find out when the Tor instance next needs to
>> fetch directory information (or post a hidden service descriptor) so it
>> can send a SLEEPWALK command at the right time? Or should the controller
>> just send the command periodically, maybe once an hour?
> 
> I'm trying to work out what the use case is here, and why SLEEPWALK
> is a good solution,
> 
> If the controller sends SLEEPWALK, and Tor has nothing to do, it should
> immediately return to IDLE or SLEEP.
> 
> If the controller puts the Tor instance in IDLE mode, it doesn't need to
> issue a SLEEPWALK command every hour, because Tor will do the
> minimum it needs to do to be connected.

Maybe I've misunderstood the proposal, but I thought the intent was that
Tor wouldn't fetch anything in IDLE mode, and wouldn't automatically
change from IDLE to IDLE_UPDATING - it would need a SLEEPWALK signal to
tell it to change to IDLE_UPDATING, and then it would automatically
change back to IDLE when it was done.

I'm guessing that although limiting CPU wakeups is outside the scope of
this proposal, the SLEEPWALK mechanism is meant to be compatible with
some future changes where the device will be allowed to go into a sleep
state from which Tor can't wake it, and the controller will use the
platform's alarm API to schedule a SLEEPWALK signal to wake Tor so it
can perform its periodic tasks.

> The more options that Tor provides, and the more unusual things a
> controller tries to do, the more clients will stick out due to delays.
> So I don't think SLEEPWALK is a good idea, because it allows every
> different controller to pick a different update interval.

Rather than the controller picking an interval, would it be better for
Tor to specify (maybe in its response to the IDLE signal) when it next
needs to be woken?

Re: [tor-dev] Proposal 286: Controller APIs for hibernation access on mobile

2017-12-01 Thread Michael Rogers
Hi Nick,

On 30/11/17 12:55, Nick Mathewson wrote:
> 2. Improvements to the hibernation model
> 
>To present a consistent interface that applications and
>controllers can use to manage power consumption, we make these
>enhancements to our hibernation model.
> 
>First, we add three new hibernation states: "IDLE",
>"IDLE_UPDATING", "SLEEP", and "SLEEP_UPDATING".
> 
>"IDLE" is like the current "idle" or "no predicted ports" state:
>Tor doesn't launch circuits or start any directory activity, but
>its listeners are still open.  Tor clients can enter the IDLE
>state on their own when they are LIVE, but haven't gotten any
>client activity for a while.  Existing connections and circuits
>are not closed. If the Tor instance receives any new connections,
>it becomes LIVE.

Does receiving a new connection include receiving a rendezvous cell from
one of the instance's intro points? If not, do we need a new status
message to tell the controller about this, or is there an existing
message we can use?

> 2.2. Onion service operation
> 
>When a Tor instance that is running an onion service is IDLE, it
>does the minimum to try to remain responsive on the onion
>service: It keeps its introduction points open if it can. Once a
>day, it fetches new directory information and opens new
>introduction points.

If a client connects to the service, the service will need to build a
circuit to the rendezvous point. Does it fetch up-to-date directory
information before doing so? If so, there's a delay that may let the
client know the service was idle. Is that a problem?

Two other possibilities would be for the service to fetch directory
information every hour in case a client connects, or to build the
circuit using whatever information it has available, which may be up to
a day old. Is that a problem?

> 3.2. Changing the hibernation state
> 
>We add the following new possible values to the SIGNAL controller
>command:
>   "SLEEP" -- enter the sleep state, after an appropriate
>  shutdown interval.
> 
>   "IDLE" -- enter the idle state
> 
>   "SLEEPWALK" -- If in sleep or idle, start probing for
>  directory information in the sleep-update or idle-update
>  state respectively.  Remain in that state until we've
>  probed for directory information, or until we're told to
>  IDLE or SLEEP again, or (if we're idle) until we get client
>  activity. Has no effect if not in sleep or idle.
> 
>   "WAKEUP" -- If in sleep, sleep-update, idle, idle-update, or
>  shutdown:sleep state, enter the live state.  Has no effect
>  in any other state.

How does the controller find out when the Tor instance next needs to
fetch directory information (or post a hidden service descriptor) so it
can send a SLEEPWALK command at the right time? Or should the controller
just send the command periodically, maybe once an hour?
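
For concreteness, I'd expect the controller's side to look something
like this on the control port, using the proposal's new signal values:

    SIGNAL IDLE        (app moves to the background)
    250 OK
    SIGNAL SLEEPWALK   (periodic alarm fires)
    250 OK
    SIGNAL WAKEUP      (user returns to the app)
    250 OK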

Cheers,
Michael




Re: [tor-dev] Action items wrt prop224 onion address encoding (was Re: Proposition: Applying an AONT to Prop224 addresses?)

2017-04-11 Thread Michael Rogers
On 11/04/17 11:45, George Kadianakis wrote:
> We basically add the canonical onion address in the inner encrypted
> layer of the descriptor, and expect the client to verify it. I made this
> feature optional in case we ever decide it was a bad idea.

Is the version number also included in the blinded key derivation? I
haven't been keeping up with prop224 developments, so apologies if
that's already been settled, but in your previous email it sounded like
it was one of the suggestions but not one of the action items.

If the version number is included in the descriptor but not in the
blinded key derivation, can a service publish descriptors for multiple
protocol versions? Would there be a conflict if the HS directories store
the descriptors under the same blinded key?

Cheers,
Michael




Re: [tor-dev] [RFC] Proposal for the encoding of prop224 onion addresses

2017-02-07 Thread Michael Rogers
On 05/02/17 07:44, Taylor R Campbell wrote:
>> Date: Sat, 04 Feb 2017 20:14:00 -0600
>> From: Vi Grey 
>> Also, should we consider including a Version option eventually in the
>> ADD_ONION control port command when using a KeyBlob?  It wouldn't matter
>> much for this new version and probably wouldn't matter much for a while,
>> but it might be good to keep this in mind for the future.
> 
> Versioning onion addresses and versioning network-internal service
> descriptors need not be done the same way.
> 
> Addresses are likely to be long-term, and should really change only if
> the meaning of the encoded public key has changed incompatibly but
> otherwise imperceptibly -- e.g., if for some reason Tor switched from
> Edwards coordinates to Montgomery coordinates on Curve25519.  (That
> would be a silly thing to do -- it is just an example of a change that
> could still interpret existing addresses, but differently.)

It seems to me that different people in this thread have different ideas
about what the version number in the onion address represents, and it
would be good to make that explicit in the spec. Does it represent:

1. The version of the *onion address format* - meaning that, for
example, a v4 address and a v5 address could point to the same
descriptor, which would be compatible with both addresses?

2. The version of the *descriptor format* - meaning that a v4 address
must point to a v4 descriptor, but a v4 descriptor and a v5 descriptor
could point to the same hidden service, which would be compatible with
both descriptors?

3. The version of the *entire protocol* - meaning that a v4 address must
point to a v4 descriptor, which must point to a v4 hidden service?

Cheers,
Michael




Re: [tor-dev] Is it safe for second_elapsed_callback() to be called less often?

2016-12-07 Thread Michael Rogers
On 02/12/16 16:56, Nick Mathewson wrote:
> On Thu, Dec 1, 2016 at 9:54 AM, Michael Rogers <mich...@briarproject.org> 
> wrote:
>> Hi,
>>
>> When running hidden services on Android I've found it's necessary to
>> hold a wake lock to keep the CPU awake. Otherwise Tor's
>> second_elapsed_callback() doesn't get called once per second. When the
>> CPU eventually wakes and the callback is called, if 100 seconds or more
>> have passed since the last call, second_elapsed_callback() calls
>> circuit_note_clock_jumped(), which marks any dirty circuits as unusable.
>>
>> If I understand right, this will have the effect of breaking any
>> existing connections to the hidden service and making the service
>> unreachable until it's built new intro circuits and published a new
>> descriptor.
>>
>> #8239, #16387 and #19522 might help to reduce the downtime by improving
>> our chances of reusing the same intro points, but we'd still lose the
>> existing circuits.
>>
>> It would be nice if we could avoid this downtime without holding a
>> permanent wake lock, which has a big impact on battery life.
>>
>> A possible workaround would be to use Android's alarm API to wake the
>> controller process once per minute. The controller process can acquire a
>> wake lock for a couple of seconds to allow second_elapsed_callback() to
>> be called, then release the lock until the next alarm.
>>
>> However, it's pretty clear that Tor wants the callback to be called once
>> per second and gets nervous if that doesn't happen, so I wanted to ask
>> about the implications of calling the callback once or twice per minute
>> before pursuing this idea further. Is it in any way sane?
> 
> That's a hard question!  We'd need to audit everything in
> second_elapsed_callback() to see what would actually happen if it
> weren't called so often.  In many cases, it might be that we could
> make it happen only as needed, or more infrequently, without bad
> consequences... but I can't be too sure without going through all the
> code.
> 
> It's not crazy to try it and find out what breaks; let us know if you try?
> 
>> Another possibility would be to look into how Libevent's timers work.
>> Perhaps we can ensure that the timers wake the CPU on Android, so
>> second_elapsed_callback() and any other timer-based functions get called
>> without keeping the CPU permanently awake?
> 
> Interesting; I'd like to pursue that one too, but I don't know much
> about timed cpu waking on android; could you link to some good
> documentation?

I'll spare you a long rant about the quality of Android's documentation
and just say that I looked into this, and in summary, an unprivileged
process can't use native calls to schedule alarms that will wake the CPU.

(There may be a few devices where alarm timer support is enabled in the
kernel, *and* timerfd_create() supports the relevant clocks, *and* it
hasn't yet been patched to check the CAP_WAKE_ALARM capability like
timer_create() does, but those devices will never be the majority.)

Unfortunately, I also found that in doze mode, which was introduced in
Android M, alarms can't be scheduled via the Android API more frequently
than once every nine minutes. I don't think we can realistically expect
to refactor second_elapsed_callback() to be called once every nine
minutes, especially if no other timers can fire during that time. So my
original question is moot. We'll have to stick with a wake lock.

I would just like to plant the seed of a thought, though. Is it possible
to imagine a protocol for connecting a service to Tor, such that the
device running the service can sleep for long periods without losing
connectivity?

This has two variants. In the easy variant, the device can wake up to
handle network traffic. In the hard variant, the device can't wake up
until its next alarm.

This list probably isn't the place to discuss solutions to that problem
- I just mention it as food for thought.

Cheers,
Michael




[tor-dev] Is it safe for second_elapsed_callback() to be called less often?

2016-12-01 Thread Michael Rogers
Hi,

When running hidden services on Android I've found it's necessary to
hold a wake lock to keep the CPU awake. Otherwise Tor's
second_elapsed_callback() doesn't get called once per second. When the
CPU eventually wakes and the callback is called, if 100 seconds or more
have passed since the last call, second_elapsed_callback() calls
circuit_note_clock_jumped(), which marks any dirty circuits as unusable.

If I understand right, this will have the effect of breaking any
existing connections to the hidden service and making the service
unreachable until it's built new intro circuits and published a new
descriptor.
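
In Python-flavoured pseudocode, the behaviour described above amounts to
something like this (the real logic is in C; only the 100-second
threshold is taken from it):

    import time

    CLOCK_JUMP_THRESHOLD = 100  # seconds, per the description above

    last_tick = time.monotonic()

    def on_second_elapsed(mark_dirty_circuits_unusable):
        # Normally called once per second; after a long suspension the
        # elapsed time is large and Tor assumes the clock jumped.
        global last_tick
        elapsed = time.monotonic() - last_tick
        last_tick = time.monotonic()
        if elapsed >= CLOCK_JUMP_THRESHOLD:
            mark_dirty_circuits_unusable()  # breaks existing HS connections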

#8239, #16387 and #19522 might help to reduce the downtime by improving
our chances of reusing the same intro points, but we'd still lose the
existing circuits.

It would be nice if we could avoid this downtime without holding a
permanent wake lock, which has a big impact on battery life.

A possible workaround would be to use Android's alarm API to wake the
controller process once per minute. The controller process can acquire a
wake lock for a couple of seconds to allow second_elapsed_callback() to
be called, then release the lock until the next alarm.

However, it's pretty clear that Tor wants the callback to be called once
per second and gets nervous if that doesn't happen, so I wanted to ask
about the implications of calling the callback once or twice per minute
before pursuing this idea further. Is it in any way sane?

Another possibility would be to look into how Libevent's timers work.
Perhaps we can ensure that the timers wake the CPU on Android, so
second_elapsed_callback() and any other timer-based functions get called
without keeping the CPU permanently awake?

Thanks,
Michael




Re: [tor-dev] Different trust levels using single client instance

2016-10-31 Thread Michael Rogers
On 21/10/16 21:38, ban...@openmailbox.org wrote:
> Cons:
> *Some unforeseen way malicious VM "X" can link activities of or
> influence traffic of VM "Y"
> **Maybe sending NEWNYM requests in a timed pattern that changes exit IPs
> of VM Y's traffic, revealing they are behind the same client?
> **Maybe eavesdropping on HSes running on VM Y's behalf?
> **Something else we are not aware of?

If each VM has full access to the control port, even something as simple
as "SETCONF DisableNetwork" could be used for traffic confirmation.

ExcludeNodes, ExcludeExitNodes and MapAddress could be used to force
another VM's traffic through certain nodes.

Bandwidth events could be used for traffic analysis of another VM's traffic.

ADDRMAP events look like they might leak information about the hosts
another VM connects to. Likewise DANGEROUS_PORT leaks information about
ports, HS_DESC about HS descriptor lookups.

I'm not sure if covert channels between two VMs (e.g. for exfiltration)
are part of your threat model, but events would be a rich source of
those too.

Cheers,
Michael





Re: [tor-dev] [prop269] Further changes to the hybrid handshake proposal (and NTor)

2016-10-17 Thread Michael Rogers
On 14/10/16 22:45, isis agora lovecruft wrote:
>  1. [NTOR] Inputs to HKDF-extract(SALT, SECRET) which are not secret
> (e.g. server identity ID, and public keys A, X, Y) are now removed from
> SECRET and instead placed in the SALT.
> 
> Reasoning: *Only* secret data should be placed into the HKDF extractor,
> and public data should not be mixed into whatever entropic material is
> used for key generation.  This eliminates a theoretical attack in which
> the server chooses its public material in such a way as to bias the
> entropy extraction.  This isn't reasonably assumed to be possible in a
> "hash functions aren't probablistically pineapple slicers" world.
> 
> Previously, and also in NTor, we were adding the transcript of the
> handshake(s) and other public material (e.g. ID, A, X, Y, PROTOID)
> directly into the secret portion of an HMAC call, the output of which is
> eventually used to derive the key material.  The SALT for HKDF (as
> specified in RFC5869) can be anything, even a static string, but if we're
> going to be adding transcript material into the handshake, it shouldn't be
> in the entropy extraction phase.

Hi Isis,

Sorry if this is a really stupid question, but there's something I've
never fully understood about how RFC 5869 describes the requirements for
the HKDF salt. Section 3.4 says:

   While there is no need to keep the salt secret, and the
   same salt value can be used with multiple IKM values, it is assumed
   that salt values are independent of the input keying material.  In
   particular, an application needs to make sure that salt values are
   not chosen or manipulated by an attacker.  As an example, consider
   the case (as in IKE) where the salt is derived from nonces supplied
   by the parties in a key exchange protocol.  Before the protocol can
   use such salt to derive keys, it needs to make sure that these nonces
   are authenticated as coming from the legitimate parties rather than
   selected by the attacker (in IKE, for example this authentication is
   an integral part of the authenticated Diffie-Hellman exchange).

As far as I can tell, the assumption in this example is that the
attacker is not the other party in the key exchange - otherwise
authenticating the nonces wouldn't tell us whether they were safe to
include in the salt.

If we're concerned with the server choosing its public material in such
a way as to bias the entropy extraction, does that mean that in this
case, the attacker is the server, and therefore the server's public
material shouldn't be included in the salt?
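
For concreteness, the two placements being compared look like this
(HKDF-Extract(salt, IKM) = HMAC-Hash(salt, IKM), per RFC 5869; the byte
strings are placeholders):

    import hashlib
    import hmac

    def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
        return hmac.new(salt, ikm, hashlib.sha256).digest()

    shared_secret = b'placeholder for the 32-byte DH output'
    transcript = b'ID || A || X || Y || PROTOID'  # public values

    # NTor-style placement: public transcript mixed into the secret input.
    prk_old = hkdf_extract(b'static protocol label', shared_secret + transcript)

    # Proposed placement: only secret material as IKM, transcript as salt.
    prk_new = hkdf_extract(transcript, shared_secret)

The question above is whether the transcript, being chosen partly by the
server, is safe to use as the salt under RFC 5869's "not chosen or
manipulated by an attacker" assumption.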

Again, probably just a failure on my part to understand the context, but
I thought I should ask just in case.

Cheers,
Michael




Re: [tor-dev] Proposal 273: Exit relay pinning for web services

2016-10-06 Thread Michael Rogers
On 05/10/16 21:09, Philipp Winter wrote:
>Web servers support ERP by advertising it in the "Tor-Exit-Pins" HTTP
>header.  The header contains two directives, "url" and "max-age":
> 
>  Tor-Exit-Pins: url="https://example.com/pins.txt"; max-age=2678400
> 
>The "url" directive points to the full policy, which MUST be HTTPS.
>Tor Browser MUST NOT fetch the policy if it is not reachable over
>HTTPS.  Also, Tor Browser MUST abort the ERP procedure if the HTTPS
>certificate is not signed by a trusted authority.  The "max-age"
>directive determines the time in seconds for how long Tor Browser
>SHOULD cache the ERP policy.

If I run a bad exit and intercept the user's first HTTP connection to
the server, I can substitute the URL of a policy on my own server that
permanently pins the user to my bad exit. Who cares if the policy has to
be served over HTTPS, if I get to say where it's served from?

A couple of possible mitigations (the first is sketched below):
* Require the pin URL to have the same FQDN as the connection that
supplies the header
* Forbid the pin header from being served over plain HTTP, and apply the
same trusted certificate rules to the connection that supplies the
header as to the connection that supplies the policy (sites that want
pinning can use HSTS to upgrade HTTP to HTTPS before serving the pin
header)
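
The first check is trivial to state in code (a minimal sketch, assuming
the header has already been parsed into its "url" directive):

    from urllib.parse import urlparse

    def pin_url_acceptable(pin_url: str, origin_host: str) -> bool:
        parsed = urlparse(pin_url)
        # Must be HTTPS (already required by the proposal) and must be
        # served from the same FQDN that supplied the header.
        return parsed.scheme == 'https' and parsed.hostname == origin_host

    assert pin_url_acceptable('https://example.com/pins.txt', 'example.com')
    assert not pin_url_acceptable('https://evil.example.net/pins.txt', 'example.com')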

Cheers,
Michael




Re: [tor-dev] stopping the censoring of tor users.

2016-02-26 Thread Michael Rogers
As far as I can tell, this would work, and you could do it without any
changes to the Tor network. Just set up a hidden service where the
service is an open proxy.

It wouldn't be transparent to clients, however - they'd need to do some
proxychains-style juggling to connect to their local onion proxy, then
connect through that to your hidden service, then connect through that
to the internet. You could of course create a library to do the extra
work on the client side - but the point is, it wouldn't work for
unmodified clients.

But perhaps the bigger problem with hidden exit nodes is that when
someone does something illegal through your exit node and the police
come to your door, you can't point to the public list of exit nodes and
say "It wasn't me, I'm just an exit node". The best you can do is point
to the hidden exit nodes library and say "It might not have been me,
anyone could be an exit node".

Cheers,
Michael

On 25/02/16 22:06, blacklight . wrote:
> About the issue of exit nodes needing to know which bridge they need
> to connect to, could we not make a system similar to hidden
> services, so that the nodes can connect to them without knowing the
> actual ip address? If we could design an automatic system in which flash
> proxies could be configured like that, then it might work i think, what
> are your thoughts?
> 
On 25 Feb 2016 at 22:37, "Thom Wiggers" wrote:
> 
> You may be interested in the following from the FAQ:
> 
> https://www.torproject.org/docs/faq.html.en#HideExits
> 
> You should hide the list of Tor relays, so people can't block the exits.
> 
> There are a few reasons we don't:
> 
> a) We can't help but make the information available, since Tor
> clients need to use it to pick their paths. So if the "blockers"
> want it, they can get it anyway. Further, even if we didn't tell
> clients about the list of relays directly, somebody could still make
> a lot of connections through Tor to a test site and build a list of
> the addresses they see.
> b) If people want to block us, we believe that they should be
> allowed to do so. Obviously, we would prefer for everybody to allow
> Tor users to connect to them, but people have the right to decide
> who their services should allow connections from, and if they want
> to block anonymous users, they can.
> c) Being blockable also has tactical advantages: it may be a
> persuasive response to website maintainers who feel threatened by
> Tor. Giving them the option may inspire them to stop and think about
> whether they really want to eliminate private access to their
> system, and if not, what other options they might have. The time
> they might otherwise have spent blocking Tor, they may instead spend
> rethinking their overall approach to privacy and anonymity.
> 
> On 25/02/16 20:04, blacklight . wrote:
>> hello there! i don't know if this mailing list works but i thought
>> of giving it a try.
>>
>> i was lately reading an article
>> (http://www.pcworld.com/article/3037180/security/tor-users-increasingly-treated-like-second-class-web-citizens.html)
>> about tor users getting blocked from accessing a lot of
>> websites. but after giving this some thought i think i came up with
>> a possible solution to the problem: there is a thing called
>> bridges; they are used to access the tor network without your isp
>> knowing that you use tor, but if you can use those proxies to
>> enter the network, it might also be possible to exit the network
>> with them. But then we face a second challenge: the exit nodes
>> have to be configured in such a way that they will relay traffic to
>> such a bridge, so the exit node owners also need to know the ip of
>> the bridge. While this doesn't seem difficult to do, it can become
>> difficult. You see, if the bridges are published on a public
>> list (like normal bridges are) then the blocking sites in question
>> will be able to block those addresses too. While this also poses a
>> problem, a possible solution could be found in something called
>> flashproxies. flashproxies are bridges with a really short life
>> span; they are created and destroyed fairly swiftly, and when this is
>> done at a rapid pace, they become really hard to block because the
>> ip changes all the time. So if the exit nodes can be configured to
>> make use of such flash proxies, then the problem could be solved.
>> I must admit that i'm not an expert on this or anything, and it needs
>> a lot more thought, but it could work. so i was wondering if
>> there are any experts who could help me think through this
>> subject and maybe confirm whether this idea could work.
>>
>>
>> greetings, blacklight
>>
>>

Re: [tor-dev] Revisiting Proposal 246: Merging Hidden Service Directories and Introduction Points

2016-01-14 Thread Michael Rogers
On 13/01/16 20:42, s7r wrote:
> After prop 250, a malicious HSDir can not do so much, but a merged
> HSDir + IP can for example kill the circuit and force the hidden
> service to rebuild it. Under the current design, we retry a few
> times and give up if an introduction point is unreliable, but with 246
> we have to retry much more. If the same attacker also holds a
> reasonable % of middle bandwidth fraction in the Tor network, it will
> at least perform a successful guard discovery over the hidden service
> which is terrible. This may be fixed by not retrying a HSDir+IP too
> many times as described in section 4.3 of the proposal, but it will
> automatically lead to a capacity decrease (we have 5 HSDir + IP left
> out of 6).

The countermeasures in section 4.3 could be problematic for mobile
hidden services, which have unreliable internet connections and
therefore lose their intro circuits frequently. A service that lost its
internet connection more than INTRO_RETRIES times in a consensus period
would be left with no introduction points.

The discussions on guard selection have suggested that clients can't
easily distinguish between relay failures and internet connection
failures. On Android the OS broadcasts connectivity events that could be
used to reset the retry counter via the control port, but I don't know
about other platforms.

Cheers,
Michael




Re: [tor-dev] Fwd: Orbot roadmap feedback

2016-01-13 Thread Michael Rogers
On 13/01/16 16:04, Nathan Freitas wrote:
> On Tue, Jan 12, 2016, at 11:52 AM, Michael Rogers wrote:
>> On 12/01/16 16:16, Nathan Freitas wrote:
>>> The broader idea is to determine which Tor torrc settings are relevant
>>> to the mobile environment, and that could use a more intuitive user
>>> interface than the empty text input we currently offer in our Settings
>>> panel. This may also mean implement a slider type interface mechanism
>>> similar to Tor Browser. In addition, users are interested in being able
>>> to control which network types (wifi vs 3g) Orbot runs on, in order to
>>> conserve spending on bandwidth.
>>
>> Briar's TorPlugin has an option to disable Tor when using mobile data,
>> feel free to backport it to Orbot. Likewise for the plugin as a whole,
>> if it would be a suitable basis for LibOrbot. We've benefitted a lot
>> from your work and I'd love to send some contributions back upstream!
> 
> Is TorPlugin already a distinct library / module?

It's not, but it should be easy to separate, especially if you only need
to run on Android. An earlier version of TorPlugin was forked into an
Android/J2SE library by the Thali project, but I'd recommend starting
from the current version of the plugin.

https://github.com/thaliproject/Tor_Onion_Proxy_Library

Cheers,
Michael




Re: [tor-dev] Fwd: Orbot roadmap feedback

2016-01-12 Thread Michael Rogers
On 12/01/16 16:16, Nathan Freitas wrote:
> The broader idea is to determine which Tor torrc settings are relevant
> to the mobile environment, and that could use a more intuitive user
> interface than the empty text input we currently offer in our Settings
> panel. This may also mean implementing a slider-type interface mechanism
> similar to Tor Browser. In addition, users are interested in being able
> to control which network types (wifi vs 3g) Orbot runs on, in order to
> conserve spending on bandwidth.

Briar's TorPlugin has an option to disable Tor when using mobile data,
feel free to backport it to Orbot. Likewise for the plugin as a whole,
if it would be a suitable basis for LibOrbot. We've benefitted a lot
from your work and I'd love to send some contributions back upstream!

Cheers,
Michael




Re: [tor-dev] Shared random value calculation edge cases (proposal 250)

2015-12-07 Thread Michael Rogers
On 21/11/15 12:05, George Kadianakis wrote:
>> If there's no consensus on a fresh SRV, why not just use the disaster 
>> recovery procedure?
>>
>> That is:
>>
>># Consensus
>>if majority agrees on SR value(s):
>>put in consensus
>>else:
>>put disaster recovery SR value (based on last round's agreed SR 
>> value, if any) in consensus
>>
>>Output: consensus is created
>>
>>(Proceed like the 16:00 period)
>>
> 
> True. However, if the majority cannot agree on SR values, how can a
> majority be formed to agree on disaster recovery SRVs? The problem
> here is that the disaster SRVs are derived from the previous SRV.

Would it help if we defined the disaster SRV as being derived from the
last SRV for which there was a consensus, rather than from the previous
round's SRV?

That would allow the network to converge on the same sequence of
disaster SRVs, and then switch back from disaster mode to
business-as-usual mode, just by sharing valid consensuses.
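
In other words (a sketch; the hash label is invented, and prop250 would
need to define the real formula):

    import hashlib

    def disaster_srv(last_consensus_srv: bytes, rounds_since_consensus: int) -> bytes:
        # Chain forward from the last SRV that appeared in a consensus, so
        # anyone holding that consensus computes the same value for round N.
        value = last_consensus_srv
        for _ in range(rounds_since_consensus):
            value = hashlib.sha3_256(b'example-disaster-label' + value).digest()
        return value

Two dirauths that have seen the same last valid consensus then agree on
every subsequent disaster value, however many rounds have failed in
between.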

>> That way, clients and relays don't need to do anything special: there will 
>> always be a SRV in the consensus.
>>
>> This means that the SR consensus method will always produce a SR value, 
>> which I believe is a much better property than occasionally failing to 
>> produce a value.
>>
> 
> Indeed, that's something very important.
> 
> Yesterday, we were discussing with David how bad it would be if 2-3
> SR-enabled dirauths drop offline during voting, and the rest of the
> dirauths generate a consensus without an SR value (because the
> consensus method requirement for SR failed or something).
> 
> In this case, some clients will have a consensus with an SR value, and
> some other clients will have a consensus with no SR value. Those two
> client sets will use a different formula to connect to hidden services,
> bringing out terrible reachability consequences.

When the failed dirauths come back online they should accept the
consensus that was generated in their absence. So in the following
round, all dirauths will vote for an SRV based on the one that was
agreed while the failed dirauths were offline.

Cheers,
Michael

> I wonder what's the solution here. We don't want a single consensus to
> break reachability for all hidden services. I wonder what should we do?
> Should we just make it ultra unlikely that a consensus will be generated
> without SR values? For example, we could require every dirauth to
> participate in the protocol so that we either have a consensus with SR
> or no consensus at all. This will slow down deployment, but it might
> make the system more bullet proof.





Re: [tor-dev] Shared random value calculation edge cases (proposal 250)

2015-11-20 Thread Michael Rogers
On 20/11/15 16:24, David Goulet wrote:
> # Consensus
> (This goes for both previous and current SRV)
> if SRV in consensus:
> dirauth MUST keep it even if the one they have doesn't match.
> Majority has decided what should be used.
> else:
> dirauth MUST discard the SRV it has.

This seems like an excellent idea. Relying on the consensus ensures that
no matter what crazy shit happens to the individual dirauths, they can
eventually converge on the same previous and current SRV values (or
agree that no such SRV values exist) by sharing the valid consensus
documents they've seen.

> Side effect of only keeping SRV that are in the consensus. If one voting
> round goes bad for X reason and consensus end up with no SRV, we end up
> in bootstrapping mode that is no previous nor current SRV in the
> consensus which is problematic because for 48 hours, we won't have a
> previous SRV which is the one used by everyone.
> 
> I don't see a way to get out of this because consensus is decided from
> the votes deterministically thus if not enough vote for SR values, we'll
> end up with a consensus with none so this is why client/HS have to
> fallback to a disaster value by themselves I think which can NOT be
> based on the previous SRV.

If there's no consensus on a fresh SRV for a while, clients and relays
can keep producing emergency SRVs by hashing the last fresh SRV they
know about, right? It doesn't have to be yesterday's SRV - if the last
fresh SRV was produced a week ago, they keep hashing based on that
(which is not ideal of course). As above, using the consensus seems like
a good idea because it allows the network to converge on the same values
just by sharing valid consensus documents.

Cheers,
Michael




Re: [tor-dev] Proposal: Padding Negotiation

2015-10-14 Thread Michael Rogers
On 12/09/15 01:34, Mike Perry wrote:
> 4.1. Rate limiting
> 
> Fully client-requested padding introduces a vector for resource
> amplification attacks and general network overload due to
> overly-aggressive client implementations requesting too much padding.
> 
> Current research indicates that this form of statistical padding should
> be effective at overhead rates of 50-60%. This suggests that clients
> that use more padding than this are likely to be overly aggressive in
> their behavior.
> 
> We recommend that three consensus parameters be used in the event that
> the network is being overloaded from padding to such a degree that
> padding requests should be ignored:
> 
>   * CircuitPaddingMaxRatio
> - The maximum ratio of padding traffic to non-padding traffic
>   (expressed as a percent) to allow on a circuit before ceasing
>   to pad. Ex: 75 means 75 padding packets for every 100 non-padding
>   packets.
> - Default: 100
>   * CircuitPaddingLimitCount
> - The number of padding cells that must be transmitted before the
>   ratio limit is applied.
> - Default: 500
>   * CircuitPaddingLimitTime
> - The time period in seconds over which to count padding cells for
>   application of the ratio limit.
> - Default: 60

Without disputing the need for limits, I wonder whether the ratio of
padding to non-padding traffic during a fixed time period is definitely
the right thing to limit. I'm guessing that the research showing that an
overhead of 50-60% is effective applies to website fingerprinting - what
about other applications?

Consider a chat client that sends a very small amount of very bursty
traffic. It will be hard to choose a padding distribution that provides
any useful concealment of the real traffic pattern without risking
exceeding the padding-to-traffic limit during some time periods. But
this client is hardly a threat to the network, even if it sometimes
sends more padding than traffic.
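
To make that concern concrete, here's a toy model of the quoted
accounting (my reading of the three parameters, not the proposal's
reference behaviour):

    import time

    class PaddingBudget:
        def __init__(self, max_ratio=100, limit_count=500, limit_time=60):
            self.max_ratio = max_ratio      # padding cells per 100 non-padding cells
            self.limit_count = limit_count  # padding cells before the ratio applies
            self.limit_time = limit_time    # accounting window in seconds
            self.window_start, self.padding, self.non_padding = 0.0, 0, 0

        def note_cell(self, is_padding, now=None):
            now = time.monotonic() if now is None else now
            if now - self.window_start >= self.limit_time:
                self.window_start, self.padding, self.non_padding = now, 0, 0
            if is_padding:
                self.padding += 1
            else:
                self.non_padding += 1

        def may_pad(self):
            if self.padding < self.limit_count:
                return True  # ratio not applied below the count threshold
            return self.padding * 100 <= self.max_ratio * self.non_padding

A bursty chat circuit contributes almost nothing to non_padding in most
windows, so any distribution heavy enough to hide its bursts has to live
off the limit_count allowance alone.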

Calculating the ratio over the whole lifetime of the circuit would be an
improvement, but the problem would still arise at the start of the
circuit's lifetime (it seems like the appropriate value of
CircuitPaddingLimitCount would be application-dependent).

I'm not sure what to suggest here, but it seems a shame to have a
super-flexible way to specify padding distributions and then impose
limits that mean clients will only use simple padding distributions that
they can be sure will stay within the limits.

> XXX: Should we cap padding at these rates, or fully disable it once
> they're crossed?

If the ratio exceeds the limit and the relay stops padding, will the
client know that the circuit's no longer protected? Perhaps the safe
thing would be to kill the circuit?

Cheers,
Michael




Re: [tor-dev] Proposal: Merging Hidden Service Directories and Introduction Points

2015-08-19 Thread Michael Rogers
On 12/07/15 22:48, John Brooks wrote:
> 1.3. Other effects on proposal 224
>
> An adversarial introduction point is not significantly more capable than a
> hidden service directory under proposal 224. The differences are:
>
>   1. The introduction point maintains a long-lived circuit with the service
>   2. The introduction point can break that circuit and cause the service to
>      rebuild it

Regarding this second difference: the introduction point (cooperating
with a corrupt middle node) could potentially try to discover the
service's guard by repeatedly breaking the circuit until it was rebuilt
through the corrupt middle node. Would it make sense to use vanguards
here, as well as on rendezvous circuits?

Cheers,
Michael




Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-28 Thread Michael Rogers
On 12/05/15 20:41, Jeff Burdges wrote:
> Alright, what if we collaboratively select the RP as follows :
>
> Drop the HSDirs and select IPs the way 224 selects HSDirs, like John
> Brooks suggests. Protect HSs from malicious IPs by partially pinning
> their second hop, ala (2).
>
> An HS opening a circuit to an IP shares with it a new random number
> y. I donno if y could be a hash of an existing shared random value,
> maybe, maybe not.
>
> A client contacts a hidden service as follows :
> - Select an IP for the HS and open a circuit to it.
> - Send HS a random number w, encrypted so the IP cannot see it.
> - Send IP a ReqRand packet identifying the HS connection.
>
> An IP responds to a ReqRand packet as follows :
> - We define a function h_c(x,y) = hash(x++y++c), or maybe some
> hmac-like construction, where c is a value dependent upon the current
> consensus.
> - Generate z = h_c(x,y) where x is a new random number.
> - Send the client z and send the HS both x and z.
>
> An HS verifies that z = h_c(x,y).
>
> Finally, the client and HS determine the RP to build the circuit
> using h_c(z,w) similarly to how 224 selects HSDirs.

One small problem with this suggestion (hopefully fixable) is that
there's no single current consensus that the client and IP are
guaranteed to share:

https://lists.torproject.org/pipermail/tor-dev/2014-September/007571.html

Cheers,
Michael




Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-28 Thread Michael Rogers
On 26/04/15 23:14, John Brooks wrote:
> It occurred to me that with proposal 224, there’s no longer a clear reason
> to use both HSDirs and introduction points. I think we could select the IP
> in the same way that we plan to select HSDirs, and bypass needing
> descriptors entirely.
> [...]
> One notable loss is that we won’t be able to use legacy relays as IP, which
> the current proposal tries to do. Another difference is that we’ll select
> IPs uniformly, instead of by bandwidth weight - I think this doesn’t create
> new problems, because being a HSDir is just as intensive.

A quick thought: to select IPs by bandwidth weight, divide each IP into
one or more slices depending on its bandwidth, then select slices
instead of selecting IPs directly.
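
A minimal sketch of that idea, assuming relays advertise a bandwidth
figure and that selection walks a hash ring (as the HSDir scheme does);
the slice size and hash are arbitrary here:

    import hashlib

    SLICE_UNIT = 1000  # hypothetical bandwidth units per slice

    def build_slices(relays):
        """relays: list of (fingerprint: bytes, bandwidth: int) tuples."""
        slices = []
        for fingerprint, bandwidth in relays:
            for i in range(max(1, bandwidth // SLICE_UNIT)):
                index = hashlib.sha1(fingerprint + i.to_bytes(2, 'big')).digest()
                slices.append((index, fingerprint))
        return sorted(slices)  # fast relays occupy proportionally more of the ring

    def pick(slices, blinded_key: bytes):
        point = hashlib.sha1(blinded_key).digest()
        for index, fingerprint in slices:
            if index >= point:
                return fingerprint
        return slices[0][1]  # wrap around the ring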

Cheers,
Michael




Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-28 Thread Michael Rogers
On 28/05/15 11:28, Jeff Burdges wrote:
>> One small problem with this suggestion (hopefully fixable) is that
>> there's no single current consensus that the client and IP are
>> guaranteed to share:
>>
>> https://lists.torproject.org/pipermail/tor-dev/2014-September/007571.html
>
> If I understand, you’re saying the set of candidate RPs is larger
> than the set of candidate IPs which is larger than the set of
> candidate HSDirs, so disagreements about the consensus matter
> progressively more.  Alright, fair enough.

I wasn't thinking about the sizes of the sets so much as the probability
of overlap. If the client picks n HSDirs or IPs from the 1:00 consensus
and the service picks n HSDirs or IPs from the 2:00 consensus, and the
set of candidates is fairly stable between consensuses, and the ordering
is consistent, we can adjust n to get an acceptable probability of
overlap. But if the client and service (or client and IP) are picking a
single RP, there's no slack - they have to pick exactly the same one.
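
A quick simulation makes the point (toy numbers: 1000 relays, each
consensus missing a random 2%):

    import hashlib
    import random

    def closest(view, point, n):
        # Deterministic "n closest to the point" pick, shared by both sides.
        return set(sorted(view, key=lambda r: hashlib.sha1(r + point).digest())[:n])

    random.seed(1)
    relays = [('relay%d' % i).encode() for i in range(1000)]
    client_view = [r for r in relays if random.random() < 0.98]   # 1:00 consensus
    service_view = [r for r in relays if random.random() < 0.98]  # 2:00 consensus

    for n in (1, 3, 6):
        hits = sum(
            1 for trial in range(300)
            if closest(client_view, hashlib.sha1(str(trial).encode()).digest(), n)
            & closest(service_view, hashlib.sha1(str(trial).encode()).digest(), n))
        print('n=%d: overlap in %.1f%% of trials' % (n, 100.0 * hits / 300))

With n = 1 the two sides miss each other a few percent of the time from
churn alone; with n = 3 or more the failures all but disappear.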

> An IP is quite long lived already, yes?  There is no route for the HS
> to tell the client its consensus time, but the client could share its
> consensus time, assuming that information is not sensitive elsewhere,
> so the HS could exclude nodes not considered by the client.  It's
> quite complicated though, so maybe not the best approach.

Even if the service knows what consensus the client is using, the
service may not have a copy of that consensus (and may not ever have had
one).

Cheers,
Michael




Re: [tor-dev] Hidden Services and IP address changes

2015-05-21 Thread Michael Rogers
On 21/05/15 14:04, Nathan Freitas wrote:
> On Thu, May 21, 2015, at 07:16 AM, Martin Florian wrote:
>> I think I've found one or more bugs that appear when Tor clients
>> hosting HSes change their IP address during operation. I'm slightly
>> overwhelmed from reading the Tor source code and not sure how to best
>> fix them.
>
> Thanks for bringing this up. I know Michael from Briar has definitely
> focused on solving this at some point, and Yaron from the Thali Project
> (who built this library:
> https://github.com/thaliproject/Tor_Onion_Proxy_Library), as well. I've
> been implementing an OnionShare-type app myself, and had hoped this was
> solved by some recent changes, but it seems not, from your experience.

I think this may be a different problem from the one I looked at, which
was related to introduction points rather than rendezvous points.

But that reminds me, the solution for the stale introduction point issue
(flushing the HS descriptor from the cache on the client before
connecting) still needs to go through the patch workshop. I'll bump that
up the todo list.

>> The central issue that I discovered can be reproduced like this
>> (assuming Tor clients A, B and C):
>> 1. (Setup) A hosts the HS X and A, B and C are all booted up.
>> 2. B connects to X - it works!
>> 3. A changes its IP address.
>> 4. B tries to talk to X again - doesn't work!
>> 5. C tries to talk to X (for the first time) - works like a charm (so
>> X IS working)
>>
>> I dug through the Tor log and source code and have now arrived at the
>> following hypothesis for why this particular error happens:
>> - after A changes its IP address, it never establishes a circuit to
>> the old RP with B again.
>> - B, on the other hand, keeps trying to talk with A through that RP,
>> saying that it is an Active rendezvous point. B never stops trying
>> to use that RP.

I wonder whether the RP knows that the service-RP circuit has died, and
if so, should it tear down the client-RP circuit?

> I wonder if B also was running a hidden service, if it would be possible
> at the application level for A to tell B that it has changed IP
> addresses, and then through some interaction with the Tor Control Port,
> to refresh the RP?

It would be nice if we could find a way to detect this at the Tor level
so we don't have to maintain two circuits between A and B.

Cheers,
Michael




Re: [tor-dev] Listen to Tor

2015-05-21 Thread Michael Rogers
On 17/05/15 06:17, Kenneth Freeman wrote:
> I recently had a brainstorm at Feast VI, a locally crowd sourced
> micro-grant catered dinner where ten artists each pitch their projects
> for ten or twelve minutes. The diners vote, and the winner takes home
> the gate; this time around it was approximately $1,300.
>
> http://www.feastboise.com/
>
> I edit Wikipedia a lot, and thus am delighted by the ambient music of
> Listen to Wikipedia.
>
> http://listen.hatnote.com/#en
>
> So, why not Listen to Tor? More specifically, to a Tor exit node?
>
> I'm a bit surprised that this music of anonymity (so to speak) hasn't,
> AFAIK, occurred to anyone else. I recall that Vidalia, long since
> deprecated, offered several options for exit node traffic...
>
> So if anyone wants to make aleatoric music from Tor, keep me informed...
> Feast VII is, I believe, in September. If you want to pitch generating
> art from Tor, that's one venue. And if the basic idea isn't technically
> or otherwise feasible, kick it around until it is!

Hi Kenneth,

What a cool idea! I played around with sonification of network traffic
once upon a time, using kismet, tcpdump and fluidsynth glued together
with a bit of perl. You can listen to the results here:

http://sonification.eu/

To avoid the privacy issues with monitoring exit node traffic, perhaps
you could run this on the client's LAN, producing two pieces of music,
one for unanonymised traffic and the other for the same traffic passed
through Tor? Then we'd know what privacy sounds like. :-)

Cheers,
Michael




Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-12 Thread Michael Rogers
On 26/04/15 23:14, John Brooks wrote:
> It occurred to me that with proposal 224, there’s no longer a clear reason
> to use both HSDirs and introduction points. I think we could select the IP
> in the same way that we plan to select HSDirs, and bypass needing
> descriptors entirely.
>
> Imagine that we select a set of IPs for a service using the HSDir process in
> section 2.2 of the proposal. The service connects to each and establishes an
> introduction circuit, identified by the blinded signing key, and using an
> equivalent to the descriptor-signing key (per IP) for online crypto.
>
> The client can calculate the current blinded public key for the service and
> derive the list of IPs as it would have done for HSDirs. We likely need an
> extra step for the client to request the “auth-key” and “enc-key” on this IP
> before building an INTRODUCE1 cell, but that seems straightforward.
>
> The IPs end up being no stronger as an adversary than HSDirs would have
> been, with the exception that an IP also has an established long-term
> circuit to the service. Crucially, because the IP only sees the blinded key,
> it can’t build a valid INTRODUCE1 without external knowledge of the master
> key.

Something like this was suggested last May, and a concern was raised
about a malicious IP repeatedly killing the long-term circuit in order
to cause the HS to rebuild it. If the HS were ever to rebuild the
circuit through a malicious middle node, the adversary would learn the
identity of the HS's guard.

I don't know whether that's a serious enough threat to outweigh the
benefits of this idea, but I thought it should be mentioned.

Cheers,
Michael





Re: [tor-dev] PT-themed stuffed animals: huggable transports

2015-04-01 Thread Michael Rogers
On 01/04/15 16:50, Rishab Nithyanand wrote:
> Next, we went on to study possible replacements and found that the
> Scramblesuit rabbit [1] did significantly better! As a side benefit, it
> made all the censors go "aww" and let it right through.
>
> Expect our full results at USENIX Security.
>
> [1] http://en.wikipedia.org/wiki/Rabbit

That's a plaintext rabbit - surely you meant [2]?

Cheers,
Michael

[2] https://en.wikipedia.org/wiki/File:Hare_Tonic.jpg





Re: [tor-dev] Brainstorming ideas for controller features for improved testing; want feedback

2015-03-20 Thread Michael Rogers
On 20/03/15 15:55, Nick Mathewson wrote:
> 27. Hidden service intropoint changes, desc changes, uploads
>
> Many hidden service transitions currently generate no events.  We
> could at minimum generate events for changed introduction points,
> changed hidden service descriptors, uploading our own HS descriptor.

An HS descriptor upload event would be useful for apps that use hidden
services for p2p connections - we can avoid polling for descriptors if
we know when our own descriptor has been published.
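
(For what it's worth, Tor has since grown HS_DESC events with an
UPLOADED action, which is roughly this feature; a stem-based listener
would look something like the sketch below, with the details treated as
illustrative.)

    from stem.control import Controller, EventType

    def on_hs_desc(event):
        # event.action is e.g. UPLOAD (started) or UPLOADED (succeeded)
        if event.action == 'UPLOADED':
            print('descriptor for %s published; no need to poll' % event.address)

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        controller.add_event_listener(on_hs_desc, EventType.HS_DESC)
        input('listening for HS_DESC events; press enter to quit\n')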

> 32. Forget cached information
>
> To better test our download logic, it would be helpful to have a way
> to drop items from our caches.

This too. (I have a patch for purging entries from the HS descriptor
cache that I'll bring to the patch workshop one of these weeks...)

Cheers,
Michael





Re: [tor-dev] bittorrent based pluggable transport

2015-03-07 Thread Michael Rogers
On 03/03/15 16:54, Tariq Elahi wrote:
> What I am getting at here is that we ought to figure out properties of
> CRSs that all CRSs should have based on some fundamentals/theories
> rather than what happens to be the censorship landscape today. The
> future holds many challenges and changes and getting ahead of the game
> will come from CRS designs that are resilient to change and do not
> make strong assumptions about the operating environment.

Responding to just one of many good points: I think your insight is the
same one that motivated the creation of pluggable transports. That is,
we need censorship resistance systems that are resilient to changes in
the operating environment, and one way to achieve that is to separate
the core of the CRS from the parts that are exposed to the environment.
Then we can replace the outer parts quickly in response to new
censorship tactics, without replacing the core.

In my view this is a reasonable strategy because there's very little we
can say about censorship tactics in general, as those tactics are
devised by intelligent people observing and responding to our own
tactics. If we draw a line around certain tactics and say, "This is what
censors do", the censor is free to move outside that line. We've seen
that happen time and time again with filtering, throttling, denial of
service attacks, active probing, internet blackouts, and the promotion
of domestic alternatives to blocked services. Censors are too clever to
be captured by a fixed definition. The best we can do is to make
strategic choices, such as protocol agility, that enable us to respond
quickly and flexibly to the censor's moves.

Is it alright to use a tactic that may fail, perhaps suddenly, perhaps
silently, perhaps for some users but not others? I think it depends on
the censor's goals and the nature of the failure. If the censor just
wants to deny access to the CRS and the failure results in some users
losing access, then yes, it's alright - nobody's worse off than they
would've been without the tactic, and some people are better off for a
while.

If the censor wants to identify users of the CRS, perhaps to monitor or
persecute them, and the failure exposes the identities of some users,
it's harder to say whether using the tactic is alright. Who's
responsible for weighing the potential benefit of access against the
potential cost of exposure? It's tempting to say that developers have a
responsibility to protect users from any risk - but I've been told that
activists don't want developers to manage risks on their behalf; they
want developers to give them enough information to manage their own
risks. Is that true of all users? If not, perhaps the only responsible
course of action is to disable risky features by default and give any
users who want to manage their own risks enough information to decide
whether to override the defaults.

Cheers,
Michael





Re: [tor-dev] RFC: Ephemeral Hidden Services via the Control Port

2015-02-16 Thread Michael Rogers

(CCing the hidden-services list.)

On 16/02/15 16:11, Leif Ryge wrote:
>> If someone has a suggestion for an alternative interface that
>> can handle applications crashing (possibly before they persist
>> the list of HSes they need to clean up), applications that are
>> just poorly written (and not cleaning up all the ephemeral HSes),
>> and (optionally, though lacking this would be a reduction in
>> features) limiting cross application HS enumeration, I'd be more
>> inclined to change things.
>
> First, thanks for doing this! I think it's a great feature which
> will make it much easier to create new hidden service
> applications.

Seconded!

> I like the idea of tying HS lifetime to the control port connection
> for the reasons you state, namely that cleanup is automatic when
> applications crash.

As an app developer this strikes me as the right approach. But having
said that, I wouldn't actually need this feature because Briar already
uses __OwningControllerProcess to shut down Tor if the control
connection is closed. I imagine the same would apply to any app that
manages its own Tor process - so this feature would only be useful for
apps that share a Tor process with other apps and communicate directly
with it via the control port, rather than indirectly via a controller
such as Orbot.

Are there any such apps, and is it a good idea to support such apps
(has the rest of the control protocol been examined for
cross-controller data leaks, what happens if apps tread on each
other's configuration changes, etc)?

> However, it seems like in the case of applications which are not
> HS-specific this will necessitate keeping another process running
> just to keep the HS alive.

Dormant processes are essentially free, so does this matter?

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2015-01-20 Thread Michael Rogers

On 08/01/15 11:25, Mike Perry wrote:
> You might also like the Adaptive Padding defense:
> http://freehaven.net/anonbib/cache/ShWa-Timing06.pdf. It
> implements pretty much what you describe here. It is one of my
> current low-resource favorites.

Thanks for the link! I agree that what I proposed is very similar to
adaptive padding. The major difference is that with adaptive padding,
the padding cells travel to the end of the circuit, whereas with
single-hop padding they only travel to the next relay.

There are pros and cons to both approaches. Adaptive padding protects
against a malicious relay that manipulates the flow of data along a
circuit in order to see whether the flow of data at a colluding
downstream relay is affected. With adaptive padding, the first relay
downstream from the point of manipulation will inject padding to
smooth out the flow, and the colluding downstream relay won't be able
to distinguish the injected padding cells from data cells. But this
defense doesn't work if the adversary controls the downstream endpoint
rather than a downstream relay, as the endpoint can distinguish
padding from data.

Single-hop padding doesn't prevent colluding relays from discovering
that they belong to the same circuit, but it may provide better
protection than adaptive padding against an external adversary,
because it makes it possible to have a different start time, end time
and traffic rate for each hop of a given circuit. It's also compatible
with Tor's existing cell encryption and end-to-end flow control,
whereas it's not clear to me that the same's true for adaptive padding.

> Marc Juarez actually has a reference implementation of this defense
> as a pluggable transport:
> https://bitbucket.org/mjuarezm/obfsproxy-wfpadtools/
>
> It is a generalization of the Adaptive Padding scheme, and is
> described here:
> https://gitweb.torproject.org/user/mikeperry/torspec.git/tree/proposals/ideas/xxx-multihop-padding-primitives.txt?h=multihop-padding-primitives
>
> Our goal is to study this as a defense for Website Traffic
> Fingerprinting, but I also believe it has promise for frustrating
> E2E correlation.

Great! Preventing end-to-end correlation of hidden service traffic is
my main goal here.

> Unfortunately, hidden services also represent the hardest form of
> E2E correlation - where the adversary actually gets to control when
> you connect, and to a large degree what you send (both in terms of
> amount, and traffic pattern). In this sense, it actually resembles
> a Global Active Adversary more than a GPA. It may even be harder
> than a generic GAA, because of the "adversary also decides when you
> send data" property.

That's a really tough problem, but I think it's also worthwhile to
consider the easier problem of preventing E2E correlation when the
client and the hidden service are cooperating - for example, a Pond
client and server that want unlinkability rather than mutual anonymity.

For that use case we may be able to find a relatively simple,
low-overhead solution that doesn't depend on datagram transports, new
flow control algorithms, etc.

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2015-01-20 Thread Michael Rogers

On 09/01/15 14:40, Yawning Angel wrote:
> I believe most of BuFLO's shortcomings documented in Cai, X.,
> Nithyanand, R., Johnson R., "New Approaches to Website
> Fingerprinting Defenses" 5.A. apply to the currently proposed
> defense, though some of the problems have been solved via
> CS-BuFLO/Basket.

Thanks for the pointer to an excellent paper. The single-hop padding
scheme I suggested is closer to CS-BuFLO than BuFLO: it operates over
TCP, doesn't inject padding when the TCP connection is congested, and
allows the initiator to decide when to close each hop of the circuit
(similar to CS-BuFLO's early termination).

> CS-BuFLO as implemented in Basket without application assistance
> (to terminate padding early) has an absolutely gargantuan amount
> of bandwidth overhead, and the smarter Basket variant that doesn't
> have stupid amounts of sender side buffering isn't portable (for
> the same reasons that the new KIST code isn't).
Why do you need stupid amounts of buffering? Bursts of data from the
application can be smoothed out by keeping a small buffer and making
the application block until there's space in the buffer - as TCP does,
for example.

In general I don't see any need for a padding scheme to touch anything
below the TLS layer, or buffer any more data at the endpoints or
relays than is already buffered.

> None of the schemes I've seen proposed so far fit well into Tor as
> it is now, due to the fact that multiple circuits are multiplexed
> over a single channel (that enforces in-order-reliable delivery
> semantics). HOL blocking is a thing that needs to be considered.
> Solving this undoubtedly has really interesting anonymity impacts
> that I haven't given much thought to.

I don't see why padding needs to make multiplexing more complicated.
We already have logic for multiplexing circuits over a TLS connection.
Padding cells don't behave any differently from data cells in terms of
head-of-line blocking.

> Another issue with all of the padding schemes that I currently
> don't have a solid suggestion for is how to actually detect
> malicious peers that don't pad/pad incorrectly.

Is it necessary to detect peers that aren't padding correctly? In
adaptive padding, if a relay detects a gap in the incoming traffic
along a circuit, it doesn't try to assign blame for the existence of
the gap - it just fills it. Likewise for the single-hop padding scheme
I suggested: each relay is responsible for normalising its output, not
speculating about why its input was or wasn't normalised.
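
That "fill, don't blame" rule is simple to express (illustrative only;
real adaptive padding samples inter-arrival histograms rather than the
fixed distribution used here):

    import random
    import time

    class GapFiller:
        def __init__(self, send_cell):
            self.send_cell = send_cell
            self.last_sent = time.monotonic()
            self.threshold = random.expovariate(10.0)  # stand-in distribution

        def on_cell(self, cell):
            # Real traffic resets the clock; no judgement about upstream gaps.
            self.send_cell(cell)
            self.last_sent = time.monotonic()
            self.threshold = random.expovariate(10.0)

        def tick(self):
            # Called frequently: if the incoming flow has a gap, fill it.
            if time.monotonic() - self.last_sent > self.threshold:
                self.send_cell(b'PADDING')
                self.last_sent = time.monotonic()
                self.threshold = random.expovariate(10.0)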

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2015-01-19 Thread Michael Rogers

On 08/01/15 06:03, grarpamp wrote:
 If that's what you're suggesting, then what happens if a client
 wants to extend a circuit from relay A to relay B, but A and B
 aren't exchanging chaff with each other?
 
 This doesn't happen. You have a lower layer of nodes doing fill 
 guided by knowledge of who their own guards are (in this model 
 relays must have notions of their own guards / first hops).
 Circuits are then strung unaffected and unaware over that as usual.
 Relays know the difference between their own action doing p2p for
 fill, and non fill (circuit) data trying to leave them... so they
 can make room in their existing first hop links, or negotiate new
 ones with where that data is trying to go.

Thanks for the explanation.

If relays A and B negotiate a new link between them when a client
wants to extend a circuit from A to B, then A and B must each subtract
some bandwidth from their existing links to allocate to the new link
(since they're already using their full bandwidth allowance, by design).

I suspect the details of how that reallocation is done will be
important for anonymity. If bandwidth is subtracted from existing
links without taking into account how much wheat the existing links
are carrying, then any circuits using those links will feel the
squeeze - the adversary will be able to tell when a relay's opening a
new link by building a circuit through the relay, filling the circuit
with wheat, and waiting for its throughput to get squeezed.

On the other hand, if bandwidth is subtracted from existing links in
such a way that existing wheat is never affected - in other words, if
you only reallocate the spare bandwidth - then it's possible for an
adversary observing a relay to find out how much wheat each link is
carrying by asking the relay to negotiate new links until it says no
because it can't reallocate any more spare bandwidth, at which point
any links that weren't requested by the adversary are now carrying
nothing but wheat.

 If anyone knows of networks (whether active, defunct or
 discredited) that have used link filling, I'd like a reference.
 Someone out there has to have at least coded one for fun.

PipeNet was a proposal for an onion-routing-like network with
constant-rate traffic:
http://cypherpunks.venona.com/date/1998/01/msg00878.html

Tarzan was an onion-routing-like network in which each relay exchanged
constant-rate traffic with a fixed set of other relays called its
mimics, and circuits could only be constructed over links between mimics:
http://pdos.csail.mit.edu/tarzan/docs/tarzan-ccs02.pdf
http://pdos.csail.mit.edu/tarzan/docs/tarzan-thesis.pdf

George Danezis looked at the anonymity properties of paths chosen from
a restricted graph rather than a complete graph (this was in the
context of mix networks, but the findings may also be relevant to
onion routing):
http://www.freehaven.net/anonbib/cache/danezis:pet2003.pdf

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2015-01-05 Thread Michael Rogers

On 04/01/15 09:45, grarpamp wrote:
 That tells you how much chaff to send in total, but not how much
 to send on each link.
 
 No. Buy or allocate 1Mbit of internet for your tor. You now have to
 fill that 1Mbit with tor. So find enough nodes from consensus who
 also have a need to fill some part of their own quota and start
 passing fill with them.
 
 You'd have to think about whether to use one of them as your own
 guard or make another filled connection to your real guard.

To be clear, are you suggesting that each relay and each client should
pick some relays from the consensus and exchange chaff with them, and
clients should also exchange chaff with their guards?

If that's what you're suggesting, then what happens if a client wants
to extend a circuit from relay A to relay B, but A and B aren't
exchanging chaff with each other?

 The amount of wheat+chaff on each link must change in response to
 the amount of wheat.
 
 Of course, wheat occupies space in a stream of buckets you'd
 otherwise be filling with chaff.

Are you saying that wheat can only be sent over links that have been
chosen to carry chaff?

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2015-01-01 Thread Michael Rogers

Resurrecting a thread from last year...

On 11/12/14 16:05, grarpamp wrote:
 On Thu, Dec 11, 2014 at 8:26 AM, Michael Rogers 
 mich...@briarproject.org wrote:
 * Which links should carry chaff?
 
 First you need it to cover the communicating endpoints entry links 
 at the edges. But if your endpoints aren't generating enough
 traffic to saturate the core, or even worse if there's not enough
 talking clients to multiplex each other through shared entry points
 densely enough, that's bad. So all links that any node has
 established to any other nodes seem to need chaffed.

Are you proposing that chaff would be sent end-to-end along circuits?
(That's what generating enough traffic to saturate the core seems to
imply.) If so, that would raise a number of problems:

1. Chaff would start and end at the same time for all hops of a given
circuit.

2. Each hop of a given circuit would carry at least as much traffic
away from the initiator as the next hop, and at most as much traffic
towards the initiator as the next hop (where traffic = wheat + chaff
in this context).

3. A delay introduced at one point in a circuit (e.g. by inducing
congestion) would be visible along the rest of circuit, potentially
revealing the path taken by the circuit.

The mechanism I proposed doesn't suffer from these problems.

 * How much chaff should we send on each link?
 
 Today, all nodes have an idea that the most bw you're ever going to
 get out of the system anyways is up to your pipe capacity, whether
 you let it free run, or you set a limit... all within your personal
 or purchased limits. So you just decide how much you can bear, set
 your committed rate, and fill it up.

That tells you how much chaff to send in total, but not how much to
send on each link.

 At present, relays don't divide their bandwidth between links
 ahead of time - they allocate bandwidth where it's needed. The
 bandwidth allocated to each link could be a smoothed function of
 the load
 
 This sounds like picking some chaff ratio (even a ratio function)
 and scaling up the overall link bandwidth as needed to carry enough
 overall wheat within that. Not sure if that works to cover the
 'first, entries, GPA watching there' above. Seems too user session
 driven bursty at those places, or the ratio/scale function is too
 slow to accept fast wheat demand. So you need a constant flow of
 CAR to hide all style wheat demands in. I scrap the ethernet
 thinking and recall clocked ATM. You interleave your wheat in place
 of the base fixed rate chaff of the link as needed. You negotiate
 the bw at the switch (node).

Here it sounds like you're proposing hop-by-hop chaff, not end-to-end?

 but then we need to make sure that instantaneous changes in the
 function's output don't leak any information about instantaneous
 changes in its input.
 
 This is the point of filling the links fulltime, you don't see any
 such ripples. (Maybe instantaneous pressure gets translated into a
 new domain of some nominal random masking jitter below. Which may
 still be a bit ethernet-ish.)

A relay can't send chaff to every other relay, so you can't fill all
the links fulltime. The amount of wheat+chaff on each link must change
in response to the amount of wheat.

The question is, do those changes only occur when circuits are opened
and closed - in which case the endpoint must ask each relay to
allocate some bandwidth for the lifetime of the circuit, as in my
proposal - or do changes occur in response to changes in the amount of
wheat, in which case we would need to find a function that allocates
bandwidth to each link in response to the amount of wheat, without
leaking too much information about the amount of wheat?

 That isn't trivial.
 
 What needs work is the bw negotiation protocol between nodes. 
 You've set the CAR on your box, now how to divide it among 
 established links? Does it reference other CARs in the consensus, 
 does it accept subtractive link requests until full, does it span
 just one hop or the full path, is it part of each node's first hop
 link extension relative to itself as a circuit builds its way
 through, are there PVCs, SVCs, then the recalc as nodes and paths
 come and go, how far in does the wheat/chaff recognition go, etc?
 Do you need to drop any node that doesn't keep up RX/TX at the
 negotiated rate as then doing something nefarious?

It seems like there are a lot of unanswered questions here. I'm not
saying the idea I proposed is perfect, but it does avoid all these
questions.

 * Is it acceptable to add any delay at all to low-latency
 traffic? My assumption is no - people already complain about Tor
 being slow for interactive protocols.
 
 No fixed delays, because yes it's annoyingly additive, and you 
 probably can't clock packets onto the wire from userland Tor
 precisely enough for [1]. Recall the model... a serial bucket 
 stream. Random jitter within an upper bound is different from fixed
 delay

Re: [tor-dev] TOR C# application

2014-12-17 Thread Michael Rogers

Hi,

I don't know if it's possible to use Tor as a library, but there are a
few Java apps that communicate over Tor by launching a Tor process and
using it as a SOCKS proxy. Does that sound like what you're aiming to do?

Here's a quick rundown of what you need to do:

1. Compile tor.exe from source, or copy it from the latest Tor Browser
Bundle (your app won't need to execute the Tor Browser).

2. Launch tor.exe from within your app, using command-line arguments
to pass the location of the config file and the PID of the controlling
process (i.e. your app). The complete command will look something like
this:

/path/to/tor.exe -f /path/to/config __OwningControllerProcess <pid>

3. Control tor.exe via the local control port using the Tor control
protocol. There's a Java library that implements this protocol, but I
don't know of a C# library, so you may need to write your own using
the Java library as guidance. You can set the control port in the
config file so it doesn't conflict with the Tor Browser.

You should use the AUTHENTICATE command to prevent other apps from
accessing the control port, and the TAKEOWNERSHIP command to ensure
tor.exe exits when your app exits.

https://gitweb.torproject.org/torspec.git/plain/control-spec.txt
https://github.com/guardianproject/jtorctl

4. Communicate over Tor by connecting to the local SOCKS port. You can
set this port in the config file so it doesn't conflict with the Tor
Browser.

Here's some Java source code that may be useful for guidance:
https://code.briarproject.org/akwizgran/briar/blob/master/briar-android/src/org/briarproject/plugins/tor/TorPlugin.java
https://github.com/Psiphon-Labs/ploggy/blob/master/AndroidApp/src/ca/psiphon/ploggy/TorWrapper.java
https://github.com/thaliproject/Tor_Onion_Proxy_Library

All of the above are based on Orbot by the Guardian Project:
https://gitweb.torproject.org/orbot.git
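
To make steps 2-4 concrete, here's a minimal, untested Java sketch
(the C# translation is mechanical). The paths, ports, PID and
password are placeholders, and a real app would monitor the Tor
process and wait for it to finish bootstrapping before connecting:

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.net.InetSocketAddress;
    import java.net.Proxy;
    import java.net.Socket;

    public class TorLauncher {
        public static void main(String[] args) throws Exception {
            // Step 2: launch tor, passing our PID ("12345" here)
            // so it exits when we do.
            Process tor = new ProcessBuilder("/path/to/tor.exe",
                    "-f", "/path/to/config",
                    "__OwningControllerProcess", "12345").start();

            // Step 3: authenticate on the control port (9051 here,
            // set in the config) and take ownership. The password
            // must match a HashedControlPassword line in the config.
            Socket control = new Socket("127.0.0.1", 9051);
            BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                    control.getOutputStream(), "US-ASCII"));
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    control.getInputStream(), "US-ASCII"));
            out.write("AUTHENTICATE \"password\"\r\n");
            out.write("TAKEOWNERSHIP\r\n");
            out.flush();
            System.out.println(in.readLine()); // expect "250 OK"
            System.out.println(in.readLine()); // expect "250 OK"

            // Step 4: connect through the SOCKS port (9050 here).
            // An unresolved address makes Java pass the hostname to
            // the proxy, so Tor does the name resolution.
            Proxy proxy = new Proxy(Proxy.Type.SOCKS,
                    new InetSocketAddress("127.0.0.1", 9050));
            Socket s = new Socket(proxy);
            s.connect(InetSocketAddress.createUnresolved(
                    "example.onion", 80));
        }
    }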

Cheers,
Michael

On 15/12/14 15:51, Hollow Quincy wrote:
 Hi all,
 
 I would like to write a C# application (IRC client) that is using
 TOR. I read a lot, but I still don't know how can I run TOR proxy
 in transparent way (from my c# code).
 
 I see that Tor Stem (https://stem.torproject.org/) can be used by 
 Python code or there are packages for Linux, but not C# (.NET).
 
 How can I run Tor proxy from C# ? Is there a library (dll) that I
 can run ? (without need to have open Tor Bundle browser).
 
 Thanks for help
 


Re: [tor-dev] high latency hidden services

2014-12-11 Thread Michael Rogers

On 10/12/14 00:14, grarpamp wrote:
 Guilty of tldr here, yet similarly, with the easily trackable 
 characteristics firstly above, I'm not seeing a benefit to
 anything other than filling all links with chaff which then hides
 all those parameters but one...

I'm not opposed to this approach, but filling all links isn't as
simple as it sounds:

* Which links should carry chaff? We can't send chaff between every
pair of relays, especially as the network grows.

* How much chaff should we send on each link? At present relays don't
divide their bandwidth between links ahead of time - they allocate
bandwidth where it's needed. The bandwidth allocated to each link
could be a smoothed function of the load - but then we need to make
sure that instantaneous changes in the function's output don't leak
any information about instantaneous changes in its input. That isn't
trivial.

* Is it acceptable to add any delay at all to low-latency traffic? My
assumption is no - people already complain about Tor being slow for
interactive protocols. So we'd still need to mark certain circuits as
high-latency, and only those circuits would benefit from chaff.

Once you fill in those details, the chaffing approach looks pretty
similar to the approach I suggested: the relay treats low-latency
circuits in the same way as now, allocates some bandwidth to
high-latency traffic, and uses that bandwidth for data or padding
depending on whether it has any data to send.

 I can't see any other way to have both low latency and hide the 
 talkers other than filling bandwidth committed links with talkers. 
 And when you want to talk, just fill in your voice in place of the 
 fake ones you'd otherwise send. That seems good against the GPA 
 above.

The alternative to filling the link with talkers is mixing the talkers
- i.e. preventing the adversary from matching the circuits entering a
relay with the circuits leaving it. But as I said above, when you get
down to the details these approaches start to look similar - perhaps
they're really just different ways of describing the same thing.

Some papers on this topic that I haven't read for a while:

http://acsp.ece.cornell.edu/papers/VenkTong_Anonymous_08SSP.pdf
http://www.csl.mtu.edu/cs6461/www/Reading/08/Wang-ccs08.pdf
https://www.cosic.esat.kuleuven.be/publications/article-1230.pdf

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2014-12-09 Thread Michael Rogers

On 25/11/14 12:45, George Kadianakis wrote:
 Yes, integrating low-latency with high-latency anonymity is a very 
 interesting problem. Unfortunately, I haven't had any time to
 think about it.
 
 For people who want to think about it there is the Blending
 different latency traffic with alpha-mixing paper. Roger mentioned
 that one of the big challenges of making the paper usable with Tor,
 is switching from the message-based approach to stream-based.
 
 Other potential papers are Stop-and-Go-MIX by Kesdogan et al.
 and Garbled Routing (GR): A generic framework towards unification
 of anonymous communication systems by Madani et al. But I haven't
 looked into them at all...

Two of these papers were also mentioned in the guardian-dev thread, so
I guess we're thinking along similar lines.

Alpha mixes and stop-and-go mixes are message-oriented, which as you
said raises the question of how to integrate them into Tor. Judging by
the abstract of the garbled routing paper (paywalled), it's a hybrid
design combining message-oriented and circuit-oriented features. I
think there might also be scope for circuit-oriented designs with
higher latency than Tor currently provides, which might fit more
easily into the Tor architecture than message-oriented or hybrid designs.

A circuit-oriented design would aim to prevent an observer from
matching the circuits entering a relay with the circuits leaving the
relay. In other words it would prevent traffic confirmation at each
hop, and thus also end-to-end.

At least four characteristics can be used to match circuits entering
and leaving a relay: start time, end time, total traffic volume and
traffic timing. The design would need to provide ways to mix a circuit
with other circuits with respect to each characteristic.

The current architecture allows start and end times to be mixed by
pausing at each hop while building or tearing down a circuit. However,
each hop of a given circuit must start earlier and end later than the
next hop.

Traffic volumes can also be mixed by discarding padding at each hop,
but each hop must carry at least as much traffic as the next hop (or
vice versa for traffic travelling back towards the initiator). This is
analogous to the problem of messages shrinking at each hop of a
cypherpunk mix network, as padding is removed but not added.

There's currently no way to conceal traffic timing - each relay
forwards cells as soon as it can.

Here's a crude sketch of a design that allows all four characteristics
to be mixed, with fewer constraints than the current architecture.
Each hop of a circuit must start earlier than the next hop, but it can
end earlier or later, carry more or less traffic, and have different
traffic timing.

The basic idea is that the initiator chooses a traffic pattern for
each direction of each hop. The traffic pattern is described by a
distribution of inter-cell delays. Each relay sends the specified
traffic pattern regardless of whether it has any data to send, and
regardless of what happens at other hops.

Whenever a relay forwards a data cell along a circuit, it picks a
delay from the specified distribution, adds it to the current time,
and writes the result on the circuit's queue. When the scheduler
round-robins over circuits, it skips any circuits with future times
written on them. If a circuit's time has come, the relay sends the
first queued data cell if there is one; if not, it sends a single-hop
padding cell.
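
A rough Java sketch of that per-circuit logic (invented names, with
an exponential distribution standing in for the initiator-chosen
pattern; Cell is a stand-in for Tor's real cell type):

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.Random;

    class Cell {
        final boolean padding;
        Cell(boolean padding) { this.padding = padding; }
    }

    class PacedCircuit {
        private final Queue<Cell> dataQueue = new ArrayDeque<>();
        private final double meanDelayMillis; // chosen by the initiator
        // A real implementation would use a cryptographically strong RNG.
        private final Random rng = new Random();
        private long nextSendTime = 0; // when this circuit is next due

        PacedCircuit(double meanDelayMillis) {
            this.meanDelayMillis = meanDelayMillis;
        }

        void enqueue(Cell c) { dataQueue.add(c); }

        // Called by the round-robin scheduler: returns the cell to
        // send, or null if this circuit's time hasn't come yet.
        Cell maybeSend(long nowMillis) {
            if (nowMillis < nextSendTime) return null; // skip circuit
            // Sample the next inter-cell delay from Exp(1/mean) and
            // write it on the circuit.
            double gap = -meanDelayMillis * Math.log(1.0 - rng.nextDouble());
            nextSendTime = nowMillis + (long) gap;
            Cell data = dataQueue.poll();
            // Queued data if any, otherwise a single-hop padding cell.
            return data != null ? data : new Cell(true);
        }
    }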

Flow control works end-to-end in the same way as any other Tor
circuit: single-hop padding cells aren't included in the circuit's
flow control window.

When tearing down the circuit, the initiator tells each relay how long
to continue sending the specified traffic pattern in each direction.
Thus each hop may stop sending traffic before or after the next hop.

Even this crude design has multiple parameters, so its anonymity
properties may not be easy to reason about. Even if we restrict
traffic patterns to a single-parameter distribution such as the
exponential, we also have to consider the pause time at each hop while
building circuits and the 'hangover time' at each hop while tearing
them down. But I think we can mine the mix literature for some ideas
to apply - and probably some attacks against this first attempt at a
design as well.

Cheers,
Michael

Re: [tor-dev] obfs4 questions

2014-11-29 Thread Michael Rogers

On 29/11/14 00:35, Yawning Angel wrote:
 On Fri, 28 Nov 2014 17:57:26 + Michael Rogers
 mich...@briarproject.org wrote:
 
 -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
 
 On 28/11/14 15:50, Yawning Angel wrote:
 A one time poly1305 key is calculated for each box, based on
 32 bytes of zeroes encrypted with a one time Salsa20
 key/counter derived from the nonce and the box key.  You can
 view the use of Salsa20 there as an arbitrary keyed hash
 function (in the case of the original paper, AES was used).
 
 Hope that clarifies things somewhat,
 
 Thanks - this is similar to the argument I came up with. I called
 my argument hand-wavy because it relies on HSalsa20 and Salsa20
 being PRFs, and I don't know how big an assumption that is.
 
 For what it's worth 7 Nonce and stream both support using a
 counter here as the nonce, and refers to 'The standard (PRF)
 security conjecture for Salsa20'.  IIRC the security proof for the
 extended nonce variants also hinges on the underlying algorithms
 being PRFs as well, so it's something I'm comfortable in assuming.
 
 http://cr.yp.to/highspeed/naclcrypto-20090310.pdf

Awesome, thanks!

Cheers,
Michael



[tor-dev] obfs4 questions

2014-11-28 Thread Michael Rogers

Hi,

In the obfs4 spec I couldn't find a description of how the secretbox
nonces for the frames are constructed. A 16-byte nonce prefix comes
from the KDF, but what about the remaining 8 (presumably
frame-specific) bytes?

If an attacker changes the order of the secretboxes so that the
recipient tries to open a secretbox with the wrong nonce, is that
guaranteed to fail, as it would if the secretbox had been modified? I
can make a hand-wavy argument for why I think it will fail, but I
don't know whether the secretbox construct is designed to ensure this.

Any particular reason for using two different MACs (HMAC-SHA256-128
for the handshake, Poly1305 for the frames) and two different hashes
(SHA-256 for the handshake, SipHash-2-4 for obfuscation)?

Cheers,
Michael


Re: [tor-dev] obfs4 questions

2014-11-28 Thread Michael Rogers

On 28/11/14 15:50, Yawning Angel wrote:
 A one time poly1305 key is calculated for each box, based on 32
 bytes of zeroes encrypted with a one time Salsa20 key/counter
 derived from the nonce and the box key.  You can view the use of
 Salsa20 there as an arbitrary keyed hash function (in the case of
 the original paper, AES was used).
 
 Hope that clarifies things somewhat,

Thanks - this is similar to the argument I came up with. I called my
argument hand-wavy because it relies on HSalsa20 and Salsa20 being
PRFs, and I don't know how big an assumption that is.

I mean, I'm sure it's fine, I was just wondering if the designers had
explicitly said anywhere that it was fine.

 So yes, it is a property of crypto_secretbox because that's how
 Poly1305 works.  It wouldn't be a workable AEAD mode if nonces
 (which usually are transmitted in the clear) could be modified
 undetected by attackers either.

Well that's the thing - crypto_secretbox isn't an AEAD mode, it
doesn't support additional authenticated data. With a typical AEAD
mode like GCM (which doesn't derive the authentication key from the
nonce) you can include the nonce in the AAD, so it's explicitly
authenticated. With crypto_secretbox it seems like the nonce is
implicitly authenticated, but I just wanted to be sure.

Cheers,
Michael


Re: [tor-dev] high latency hidden services

2014-11-20 Thread Michael Rogers

On 09/11/14 18:33, Mansour Moufid wrote:
 Has there been research on integrating high-latency message
 delivery protocols with the hidden service model of location
 hiding?  The SecureDrop or Pynchon Gate protocols sound like good
 starting points. I would love to participate, and encourage
 everyone to start in this direction (in your copious free time ;).

This issue has just come up on the guardian-dev list, so we're moving
the conversation over here. Context quoted below.

On Thu, Nov 20, 2014, at 09:46 AM, Michael Rogers wrote:
 On 20/11/14 14:21, Nathan of Guardian wrote:
 If we simply use Tor as a low-latency transport for
 asynchronous messaging then we're limited to Tor's threat
 model, i.e. we can't prevent traffic confirmation attacks. If
 we revive one of the remailers or build a new system then we're
 limited to a small number of users, i.e. a small anonymity set.
 So ideally we'd find some way of adding high-latency mix-like
 features to Tor.
 
 How much difference in latency are we talking about? Can we just
  introduce some sort of randomness or delay into our existing 
 stacks/protocols?
 
 If we add delays at the application layer then those delays will
 be the same all along the Tor circuit. So from the point of view of
 an adversary doing a traffic confirmation attack against Tor, the
 delays are irrelevant: the adversary sees the same pattern of
 delays at both ends of the circuit, so the ends are still
 correlated with each other.
 
 To decorrelate the traffic entering Tor from the traffic leaving
 Tor we need to delay the traffic at each hop. Ideally we'd go
 further than that and decouple high-latency traffic from circuits,
 so that traffic could enter Tor on one circuit and leave on another
 circuit, long after the first circuit was closed. But that's a much
 harder problem than adding a delay at each hop, I think.
 
 Done right, this could provide a large anonymity set for the 
 high-latency users and improve the traffic analysis resistance
 of Tor for the low-latency users at the same time, by providing
 a pool of latency-insensitive traffic to smooth out the bursty 
 low-latency traffic between relays.
 
 I think this really makes the case, why a native Tor-based 
 messaging channel/layer/link/substrate should be implemented.
 
 Great! Maybe we should move this discussion to the thread on
 tor-dev that Mansour Moufid started recently?

Cheers,
Michael


Re: [tor-dev] Hidden Service authorization UI

2014-11-11 Thread Michael Rogers

On 09/11/14 12:50, George Kadianakis wrote:
 I suspect that HS authorization is very rare in the current
 network, and if we believe it's a useful tool, it might be
 worthwhile to make it more useable by people.

For what it's worth, the reason I haven't (so far) implemented HS
authorization for Briar is that it's inconvenient to manage the list
of auth cookies (effectively a second copy of the controlling app's
contact list) via the control port.

It would be really nice if the controlling app could store the auth
cookies itself and supply the relevant cookie when making a SOCKS
connection. The SOCKS password field seems like a natural fit, but I
think perhaps Tor's already using the username and password for some
other purpose?

Cheers,
Michael


[tor-dev] 5-hop hidden service circuits (was: Potential projects for SponsorR (Hidden Services))

2014-10-21 Thread Michael Rogers

On 20/10/14 14:37, George Kadianakis wrote:
 On an even more researchy tone, Qingping Hou et al wrote a
 proposal to reduce the length of HS circuits to 5 hops (down from
 6). You can find their proposal here: 
 https://lists.torproject.org/pipermail/tor-dev/2014-February/006198.html

  The project is crazy and dangerous and needs lots of analysis,
 but it's something worth considering. Maybe this is a good time to
 do this analysis?

One aspect of this proposal that might be problematic: the client and
hidden service negotiate a random number and use it to pick a
rendezvous point from a list of candidates. They must have matching
lists of candidates.

With a similar idea in mind, I recently looked into how long it takes
for two clients to obtain copies of the same consensus. I found out
that this is never guaranteed to happen, because each client may skip
a consensus each time it downloads a fresh one. That would need to be
addressed before implementing the 5-hop proposal.

https://lists.torproject.org/pipermail/tor-dev/2014-September/007571.html

Cheers,
Michael


Re: [tor-dev] Patches to improve mobile hidden service performance

2014-10-08 Thread Michael Rogers

On 08/10/14 02:55, John Brooks wrote:
 1. Each time the network's enabled, don't try to build 
 introduction circuits until we've successfully built a circuit. 
 This avoids a problem where we'd try to build introduction
 circuits immediately, all the circuits would fail, and we'd wait
 for 5 minutes before trying again.
 
 This sounds like a useful patch. Do you mean that 
 status/circuit-established currently returns true after 
 DisableNetwork is set?

Yup, that's right. So when the network is re-enabled we try building
intro circuits straight away, in contrast to startup when we wait for
the first circuit to be built.

 I suggest submitting that separately from the control changes.

Thanks, I'll do that.

 2. Added a command to the control protocol to purge any cached 
 state relating to a specified hidden service. This command can
 be used before trying to connect to the service to force Tor to 
 download a fresh descriptor. It's a more fine-grained
 alternative to SIGNAL NEWNYM, which purges all cached descriptors
 and also discards circuits.
 
 You'll need to update control-spec. I'm curious what would happen
 if you use this command on a HSDir or for a local service - those
 cases need to be defined.

Hmm, good point. The behaviour will be similar to SIGNAL NEWNYM but
only affecting a single service. I'll look into the details.

 I wonder what problem you're trying to solve, and if this is the
 best solution. From a quick reading of the code, tor's behavior is:
 
 1. If an intro point remains, try it
 2. If we run out of intro points, re-fetch the descriptor
 3. If we've tried fetching the descriptor from all HSDirs within
 the past 15 minutes, fail
 4. If the descriptor is re-fetched, try all intro points again
 
 The list of HSDirs used in the past 15 minutes seems to be cleared 
 when a connection succeeds, when we give up connecting to a
 service, or on rendezvous failures.
 
 Even if you hit that 15-minute-wait case, it looks like another 
 connection attempt will re-fetch the descriptor and start over.
 See rendclient.c:1065, leading to 
 purge_hid_serv_from_last_hid_serv_requests. Can you point out
 where your client gets stalled, or where I've read the logic
 wrong?

I agree with your reading - this change is meant to avoid the first
failed connection attempt in cases where we know it's likely to fail.

Another way to approach the problem would be to make the retention
time of the HS descriptor cache configurable. Then clients connecting
to mobile hidden services could set a shorter retention time than the
default. But that would affect all hidden services used by the client.
What I'm trying to achieve with this patch is a way of saying "I know
this specific descriptor is likely to be out-of-date, don't bother
trying it".

Cheers,
Michael


Re: [tor-dev] Patches to improve mobile hidden service performance

2014-10-08 Thread Michael Rogers

On 08/10/14 08:34, grarpamp wrote:
 I've added a link to this thread to the below ticket. Somewhere 
 therein I and Qingping were describing need and syntax to 
 flush/fetch hidden service descriptors via control. For which your
 HS descriptor purge may be a useful component. If it is able to
 flush both single specified, or all, HS desc's that could be good
 semantic so as not to disturb other state. Thanks.
 
 # Allow controllers to retrieve HS descriptors from Tor 
 https://trac.torproject.org/projects/tor/ticket/3521

Thanks! The patch only supports purging a single descriptor at
present, but it would be easy enough to make the hostname optional and
purge all descriptors if it's absent.

Cheers,
Michael


[tor-dev] Patches to improve mobile hidden service performance

2014-10-07 Thread Michael Rogers

Hi all,

I've been experimenting with small changes to Tor to improve the
performance of mobile hidden services. The following patches for Tor
and jtorctl make two performance improvements:

1. Each time the network's enabled, don't try to build introduction
circuits until we've successfully built a circuit. This avoids a
problem where we'd try to build introduction circuits immediately, all
the circuits would fail, and we'd wait for 5 minutes before trying again.

2. Added a command to the control protocol to purge any cached state
relating to a specified hidden service. This command can be used
before trying to connect to the service to force Tor to download a
fresh descriptor. It's a more fine-grained alternative to SIGNAL
NEWNYM, which purges all cached descriptors and also discards circuits.

https://code.briarproject.org/akwizgran/briar/blob/9e5e2e2df24d84135f14adaa42111c3ea2c55df8/tor.patch

https://code.briarproject.org/akwizgran/briar/blob/9e5e2e2df24d84135f14adaa42111c3ea2c55df8/jtorctl.patch

The Tor patch is based on the tor-0.2.24 tag, and the jtorctl patch is
based on the Guardian Project's repo, which is ahead of the upstream
repo (n8fr8 needs commit privileges for upstream, I think).

https://github.com/guardianproject/jtorctl

I've only done small-scale testing of these patches so far. If they
seems like they might be useful I'll create a trac ticket to merge them.

Cheers,
Michael


[tor-dev] Getting the network consensus

2014-09-30 Thread Michael Rogers

Hi all,

While working on peer-to-peer hidden services I've been wondering how
long two clients need to wait before arriving at the same view of the
Tor network. The answer, which I find surprising, is that this is
never guaranteed to happen. Here's why.

In what follows I'll assume that the interval between network
consensuses being published is one hour, but the conclusions remain
the same if the interval is longer or shorter - just change the units.
I'll use "1:00 consensus" to mean a consensus that becomes valid at
1:00, is fresh until 2:00, and is valid until 4:00.

According to dir-spec.txt, each directory cache downloads the 1:00
consensus at a randomly chosen time between 1:00 and 1:30. Each client
replaces the 1:00 consensus with a newer consensus at a randomly
chosen time between 2:45 and 3:51. (The precise endpoint is 3:50:37.5,
due to a calculation described in section 5.1 of the spec. I'm curious
to know the origin of that calculation.)

Some observations:

1. The period when clients are replacing each consensus slightly
overlaps the period when they're replacing the previous consensus. For
about five and a half minutes of every hour there are twice as many
clients replacing their consensuses, so directory caches will
experience greater load during this period.

How much greater? We can divide clients that are downloading the
consensus into three groups:

A. Clients that have never downloaded a consensus.

B. Clients that are replacing a consensus that's no longer valid.

C. Clients that are replacing a valid consensus with a newer consensus.

Clients in group A bypass the directory caches and hit the
authorities, so they don't contribute to load on the caches. Clients
in group B have just started up and loaded an expired consensus from
disk. These clients are equally likely to hit the caches at any point
in the consensus interval (although the load will vary by time of day,
day of week, etc). Clients in group C are twice as likely to hit the
caches between 45 minutes and 51 minutes past the hour as at any other
time, due to the overlap between replacement intervals. So unless
group C is dwarfed by group B, we should see a noticeable increase in
load on the caches between 45 and 51 minutes past the hour. (If group
C is dwarfed by group B then the increase will be lost in the noise.)

If you run a directory cache, do your logs show that pattern?

2. The time a client chooses for replacing a newly downloaded
consensus may be earlier than the time it chose for replacing the
previous consensus. Again, this is due to the overlap between
replacement intervals: a client may replace the 1:00 consensus at 3:48
and then choose 3:47 as the random time to replace the newly
downloaded 2:00 consensus.

What happens in that case? I haven't looked at the code, but I'm going
to speculate that it doesn't explode in flames or travel through time.
A reasonable way to handle the situation would be to round up the
replacement time to the present time - in other words, to replace the
newly downloaded consensus immediately. If that guess is correct, the
extra load during the overlap between replacement intervals will get
squeezed towards the end of the overlap, as replacement times
occasionally get rounded up but never get rounded down. Some clients
will download two consensuses back-to-back during the overlap.

Again, if you run a directory cache I'd be interested to know whether
you're seeing this pattern.

3. Clients often skip consensuses. The interval for clients to replace
the 1:00 consensus runs from 2:45 to 3:51, and the interval for caches
to download the 3:00 consensus runs from 3:00 to 3:30. So there's
roughly a 46% chance that a client replacing the 1:00 consensus will
get the 2:00 consensus, and roughly a 54% chance that it will get the
3:00 consensus.
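
To spell out those figures under the uniform assumptions above: the
client's replacement time is uniform over the 65.625 minutes from
2:45 to 3:50:37.5. Before 3:00 (15 minutes of that interval) only the
2:00 consensus is available; between 3:00 and 3:30 the chance that
the chosen cache hasn't yet fetched the 3:00 consensus falls linearly
from 1 to 0, contributing another 15 minutes in expectation; after
3:30 the client always gets the 3:00 consensus. That gives
(15 + 15) / 65.625, or roughly 46%.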

This means it's possible for two clients to replace their consensuses
regularly but never see the same consensus: one of them sees the
consensuses published on odd hours, the other sees the consensuses
published on even hours. This situation is unlikely to continue for
long if each client makes a fresh random choice for each replacement
time, but there's no time at which we can say the clients will
definitely have seen the same consensus.

I can't see any partitioning attacks arising from this, but I thought
I'd point it out anyway because I find the result surprising.

As I said before, I'm curious to know the origin of the calculation in
section 5.1 that produces the overlap between replacement intervals.
The replacement time is chosen uniformly at random from the interval
between the time 3/4 into the first interval after the consensus is no
longer fresh, and 7/8 of the time remaining after that before the
consensus is invalid. It's the 7/8 part that results in the odd
figure of 50 minutes 37.5 seconds.
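
For concreteness, the arithmetic behind that figure:

    no longer fresh at 2:00, plus 3/4 of an hour         = 2:45
    valid until 4:00, so time remaining after 2:45       = 75 min
    2:45 + (7/8 * 75 min) = 2:45 + 65 min 37.5 s         = 3:50:37.5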

If the replacement time were to be chosen uniformly at random from the
interval between the time 3/4 

Re: [tor-dev] Defending against guard discovery attacks by pinning middle nodes

2014-09-13 Thread Michael Rogers

On 13/09/14 14:07, George Kadianakis wrote:
 a) To reduce the ownage probabilities we could pick a single
 middle node instead of 6. That will greatly improve guard
 discovery probabilities, and make us look like this:
 
 HS - guard - middle - exit - RP (where exit is chosen from
 the set of all relays)
 
 However, that will definitely degrade HS performance. I'm not sure 
 if Tor relays are able to handle all that concentrated HS traffic.
 Specifically, the guards/middles that get picked by popular HSes
 will get flooded with traffic that is never accounted for in Tor's
 load balancing equations (since HS traffic is not counted) and they
 will get hammered both by HS traffic and regular Tor traffic.

Hi George,

Could you explain what it means to say that HS traffic isn't counted
in the load balancing equations? Why is that so, and can it be changed
if that would allow a more secure HS design?

Cheers,
Michael


Re: [tor-dev] Running Tor with static path

2014-06-16 Thread Michael Rogers

On 16/06/14 17:47, Zack Weinberg wrote:
 On Mon, Jun 16, 2014 at 11:38 AM, mahdi mahdi.it2...@gmail.com
 wrote:
 Hi all, I want to execute an experiment on Tor in which i need to
 fix the ip address of entry,relay and exit onion router. For that
 i need to determine the IPs and keys of ORs in OP permanently. Is
 there any idea of what function of Tor's code should be
 replace?!
 
 You can do this using a custom controller; you don't need to
 modify the Tor program itself. There's a library for writing
 controllers here: https://stem.torproject.org/

A related question: is it possible to build introduction/rendezvous
circuits via the controller protocol?

Cheers,
Michael



Re: [tor-dev] Question about HS code

2014-06-02 Thread Michael Rogers

On 29/05/14 19:36, Nick Mathewson wrote:
 So what's the effect of REND_HID_SERV_DIR_REQUERY_PERIOD?
 
 
 Hello, Michael!
 
 This looks like a possible bug to me.  Could you open a ticket at 
 trac.torproject.org?

Hi Nick,

Robert Ransom replied off-list explaining the intent of this code. For
the sake of the list archives: the constant prevents repeated
successful queries to the same HSDir, and repeated unsuccessful
queries during a single connection attempt. The rationale for not also
using it to limit unsuccessful queries during distinct connection
attempts is given in ticket 3335.

Cheers,
Michael


[tor-dev] Question about HS code

2014-05-27 Thread Michael Rogers

Hi,

I've been trying to get to grips with the hidden service code, and I
have a question that I was hoping someone on the list could answer.

The constant REND_HID_SERV_DIR_REQUERY_PERIOD is defined as 15 * 60
(15 minutes) in rendclient.c, with the comment "The period for which a
hidden service directory cannot be queried for the same descriptor ID
again". As far as I can tell, the purpose of this constant is to
prevent a client from repeatedly asking an HS directory for a
descriptor that the directory doesn't have.

However, when a descriptor fetch fails and there's no reusable cached
descriptor, rend_client_desc_trynow(query) calls
rend_client_note_connection_attempt_ended(onion_address), which calls
purge_hid_serv_from_last_hid_serv_requests(onion_address), which (as
far as I can tell) forgets which HS directories have been tried for
the descriptor, allowing the same directories to be tried again before
REND_HID_SERV_DIR_REQUERY_PERIOD elapses.

So what's the effect of REND_HID_SERV_DIR_REQUERY_PERIOD?

Thanks for any guidance,
Michael


Re: [tor-dev] RFC: obfs4 (Name not final)

2014-05-23 Thread Michael Rogers
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 23/05/14 13:16, Philipp Winter wrote:
 - ScrambleSuit's framing mechanism is vulnerable to this attack: 
 http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf In a nutshell, the
 receiver needs to decrypt the ScrambleSuit header before it is able
 to verify the HMAC which makes it possible for an attacker to 
 tamper with the length fields.  While there are probably simpler
 attacks, it would be nice to have a fix for this problem.

In the next version of the Briar transport protocol we're addressing
that problem by dividing each frame into two parts. The first part is
a fixed-length header, the second is a variable-length body. Each part
is separately encrypted and MACed. The header contains the length of
the body.

This requires two MACs per frame, but I prefer that to the
alternatives: using fixed-length frames, or using the decrypted length
field before checking whether it's been tampered with.
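
A sketch of the resulting frame layout (field sizes illustrative, not
Briar's actual values):

    +------------------+--------+---------------------+----------+
    | encrypted header | header | encrypted body      | body MAC |
    | (fixed length,   | MAC    | (variable length,   |          |
    |  incl. body len) |        |  per header field)  |          |
    +------------------+--------+---------------------+----------+

The receiver reads and verifies the fixed-length header first, learns
the body length from it, then reads and verifies exactly that many
bytes - so no length field is ever used before it's authenticated.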

Cheers,
Michael


Re: [tor-dev] Introduction Points and their rotation periods (was Re: Hidden Service Scaling)

2014-05-14 Thread Michael Rogers
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 13/05/14 18:28, Michael Rogers wrote:
 A fourth possibility is to rank the candidate nodes for each
 position in the circuit, and build the circuit through the
 highest-ranked candidate for each position that's currently online.
 Thus whenever the circuit's rebuilt it will pass through the same
 nodes if possible, or mostly the same nodes if some of the
 favourite candidates are offline. Over time the circuit will
 gradually move to new nodes due to churn - but that will happen as
 slowly for this design as it can happen for any design.

Sorry for the self-reply. I've realised this has the same problem as
one of the other possibilities - a bad node can veto any attempt to
extend the circuit to a good node, so any circuit that hits a bad node
will pass through bad nodes from that point on.

It seems the only safe thing is to rebuild the circuit using all the
same nodes, or if that isn't possible, build an entirely new circuit.

Cheers,
Michael


Re: [tor-dev] Introduction Points and their rotation periods (was Re: Hidden Service Scaling)

2014-05-13 Thread Michael Rogers
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 11/05/14 17:36, George Kadianakis wrote:
 I think it would be good for the performance of mobile hidden
 services, but I'm concerned about the attack waldo described
 eariler in this thread, in which a malicious IP breaks circuits
 until the service builds a circuit through a malicious middle 
 node, allowing the attacker to discover the service's entry
 guard.
 
 
 I couldn't find the attack you described in this thread. This
 thread is quite big.

The attack was described here:
https://lists.torproject.org/pipermail/tor-dev/2014-May/006807.html

 However, I'm not sure how rotating IPs _more frequently_ can help 
 against the guard discovery attack you described. It would seem to
 me that the contrary is true (the fewer IPs you go through, the
 less probability you have for one of them to be adversarial).

I'm not suggesting that fast rotation would be better than slow
rotation, but there are some possibilities that don't involve periodic
rotation at all.

One possibility (which might be the current behaviour?) is that if the
circuit to an IP fails, you build a new circuit to a new IP rather
than a new circuit to the same IP. Advantage: not vulnerable to
waldo's attack. Disadvantage: rapid turnover of IPs.

Another possibility is to rebuild the circuit through the same nodes
if possible, or if not, build an entirely new circuit to a new IP.
This would prevent waldo's attack, but it might still cause rapid
turnover of IPs unless all nodes in the circuit were chosen from the
high-uptime pool.

A third possibility (which might be the virtual circuits idea?) is to
reuse the nodes in the circuit up to the point of failure, and pick
new nodes beyond that point. But that would seem to be vulnerable to a
selective DoS attack where a bad node vetoes any attempt to extend the
circuit to a good node, thus causing any circuit that hits a bad node
to pass through bad nodes from that point on (similar to MorphMix).

A fourth possibility is to rank the candidate nodes for each position
in the circuit, and build the circuit through the highest-ranked
candidate for each position that's currently online. Thus whenever the
circuit's rebuilt it will pass through the same nodes if possible, or
mostly the same nodes if some of the favourite candidates are offline.
Over time the circuit will gradually move to new nodes due to churn -
but that will happen as slowly for this design as it can happen for
any design.

The ranking for each position should be secret so that if an attacker
knows what a client's favourite node is, she can't make one of her
nodes the second-favourite and wait for the favourite to fail. And the
ranking for different circuits should be independent so the client's
circuits don't leak information about each other.

One way to achieve those properties would be to generate a secret key
for each circuit and rank the candidates for each position according
to a pseudo-random function of the key, the candidate's fingerprint
and the position. The key wouldn't be shared with anyone, it would
just be used to rank the candidates.

A MAC function such as HMAC could serve as the pseudo-random function:
sort by HMAC(key,fingerprint|position). HASH(key|fingerprint|position)
would be another possibility, but MAC functions are explicitly
designed to keep the key secret.
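
For concreteness, here's a minimal sketch of the ranking in Java
(that's just what I happen to be working in - the class and method
names are for illustration only, this isn't proposed tor code):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class PersistentCircuitRanking {

    // Ranks the candidate fingerprints for one circuit position,
    // highest HMAC value first. The per-circuit key is never shared,
    // so the ranking is unpredictable to an attacker and independent
    // across circuits.
    static List<String> rankCandidates(byte[] circuitKey, int position,
            List<String> fingerprints) throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(circuitKey, "HmacSHA256"));
        final Map<String, BigInteger> scores =
                new HashMap<String, BigInteger>();
        for (String fp : fingerprints) {
            // HMAC(key, fingerprint | position); doFinal() resets
            // the Mac for the next candidate.
            byte[] tag = mac.doFinal(
                    (fp + "|" + position).getBytes(StandardCharsets.UTF_8));
            scores.put(fp, new BigInteger(1, tag));
        }
        List<String> ranked = new ArrayList<String>(fingerprints);
        Collections.sort(ranked, new Comparator<String>() {
            public int compare(String a, String b) {
                return scores.get(b).compareTo(scores.get(a));
            }
        });
        return ranked;
    }
}

Building the circuit would then mean walking each position's ranked
list and taking the first candidate that's currently online.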

The key would be preserved across sessions for as long as the circuit
was wanted - I guess the lifetime would be unlimited for IP circuits,
and application-determined for ordinary client circuits. It would be
great if an application could signal to the OP "this circuit belongs
to long-lived identity X" and get the same circuit (churn permitting)
that was previously used for identity X.

As far as I can see, the same entry guards should be used for all
circuits, regardless of application-layer identity - otherwise a local
observer watching a user could make observations like "Every Wednesday
morning, the user connects to guard Y" and correlate those with the
activity of a pseudonym (emails, blog updates, etc.). So when choosing
nodes for a circuit, the candidates for the first position should be
the client's entry guards.

If this idea hasn't already been proposed, I suggest we call it
"persistent circuits".

 Perhaps the attack could be mitigated by keeping the same middle
 node and IP for as long as possible, then choosing a new middle
 node *and* a new IP when either of them became unavailable? Then
 a malicious IP that broke a circuit would push the circuit onto a
 new IP.
 
 
 Also see
 https://lists.torproject.org/pipermail/tor-dev/2013-October/005621.html.
 
 Unfortunately, it seems to me that the 'virtual circuit' idea is 
 messier than we imagine, and taking the 'guard layers' approach
 might be less dangerous and easier to analyze.

Interesting, thanks for the link! Has anything been written about how
the guard layers approach would work other than Mike Perry's comment
on ticket #9001?

Cheers,
Michael

Re: [tor-dev] Introduction Points and their rotation periods (was Re: Hidden Service Scaling)

2014-05-11 Thread Michael Rogers

On 10/05/14 21:09, George Kadianakis wrote:
 It's interesting that you say this, because we pretty much took
 the opposite approach with guard nodes. That is, the plan is to
 extend their rotation period to 9 months (from the current 2-3
 months). See: 
 https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/236-single-guard-node.txt

  I was even planning on writing an extension to rend-spec-ng.txt
 to specify how IPs should be picked and to extend their rotation 
 period. That's for the same reason we do it for entry guards:

Hi George,

Is there an analysis somewhere of why it would be better to change IPs
less frequently? I think it would be good for the performance of
mobile hidden services, but I'm concerned about the attack waldo
described earlier in this thread, in which a malicious IP breaks
circuits until the service builds a circuit through a malicious middle
node, allowing the attacker to discover the service's entry guard.

Perhaps the attack could be mitigated by keeping the same middle node
and IP for as long as possible, then choosing a new middle node *and*
a new IP when either of them became unavailable? Then a malicious IP
that broke a circuit would push the circuit onto a new IP.

However, that might require all three nodes in the circuit to be
picked from the high-uptime pool.

Cheers,
Michael


Re: [tor-dev] Hidden Service Scaling

2014-05-07 Thread Michael Rogers

On 06/05/14 22:07, Christopher Baines wrote:
 On 06/05/14 15:29, Michael Rogers wrote:
 Does this mean that at present, the service builds a new IP
 circuit (to a new IP?) every time it receives a connection? If
 so, is it the IP or the service that closes the old circuit?
 
 Not quite. When the service (instance, or instances) selects an
 introduction point, a circuit to that introduction point is built.
 This is a long term circuit, through which the
 RELAY_COMMAND_INTRODUCE2 cells can be sent. This circuit enables
 the IP to contact the service when a client asks it to do so.
 
 Currently, any IPs will close any existing circuits which are for
 a common purpose and service.

Thanks for the explanation!

Cheers,
Michael


Re: [tor-dev] Hidden Service Scaling

2014-05-07 Thread Michael Rogers

On 07/05/14 17:32, Christopher Baines wrote:
 What about the attack suggested by waldo, where a malicious IP 
 repeatedly breaks the circuit until it's rebuilt through a
 malicious middle node? Are entry guards enough to protect the
 service's anonymity in that case?
 
 I think it is a valid concern. Assuming the attacker has
 identified their node as an IP and has the corresponding public
 key, they can then get the service to create new circuits to their
 node by just causing the existing ones to fail.
 
 Using guard nodes for those circuits would seem to be helpful, as
 this would greatly reduce the chance that the attacker's nodes are
 used in the first hop.
 
 If guard nodes were used (assuming that they are currently not),
 you would have to be careful to act correctly when the guard node
 fails, in terms of using a different guard, or selecting a new
 guard to use instead (in an attempt to still connect to the
 introduction point).

Perhaps it would make sense to pick one or more IPs per guard, and
change those IPs when the guard is changed? Then waldo's attack by a
malicious IP would only ever discover one guard.
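
A hedged sketch of what that selection could look like (Java, with
illustrative names only - serviceKey would be a long-term secret held
by the service, not anything tor currently has):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class GuardBoundIntroPoints {

    // Picks 'count' introduction points deterministically from the
    // current guard's fingerprint, so the IPs change exactly when the
    // guard changes and a malicious IP can expose at most one guard.
    static List<String> ipsForGuard(byte[] serviceKey, String guard,
            List<String> candidates, int count)
            throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(serviceKey, "HmacSHA256"));
        final Map<String, BigInteger> scores =
                new HashMap<String, BigInteger>();
        for (String c : candidates) {
            // Keying the score on the guard ties the IP set to it.
            byte[] tag = mac.doFinal(
                    (guard + "|" + c).getBytes(StandardCharsets.UTF_8));
            scores.put(c, new BigInteger(1, tag));
        }
        List<String> ranked = new ArrayList<String>(candidates);
        Collections.sort(ranked, new Comparator<String>() {
            public int compare(String a, String b) {
                return scores.get(b).compareTo(scores.get(a));
            }
        });
        return ranked.subList(0, Math.min(count, ranked.size()));
    }
}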

Cheers,
Michael


Re: [tor-dev] Hidden Service Scaling

2014-05-06 Thread Michael Rogers

Hi Christopher,

I'm interested in your work because the hidden service protocol
doesn't seem to perform very well for hidden services running on
mobile devices, which frequently lose network connectivity. I wonder
if the situation can be improved by choosing introduction points
deterministically.

On 30/04/14 22:06, Christopher Baines wrote:
 - multiple connections for one service to an introduction point are
 allowed (previously, existing ones were closed)

Does this mean that at present, the service builds a new IP circuit
(to a new IP?) every time it receives a connection? If so, is it the
IP or the service that closes the old circuit?

Thanks,
Michael


Re: [tor-dev] jtorctl

2014-04-04 Thread Michael Rogers

On 04/04/14 15:44, Nathan Freitas wrote:
 *ahem* Orbot represents 2 million users of jtorctl, and it works
 just fine for us (in our limited use of it).
 
 https://gitweb.torproject.org/orbot.git/tree/HEAD:/external

Hehe, I was just this second cloning the Orbot repo to see if you were
still using jtorctl. :-) Briar also uses it, as does Ploggy.

On 04/04/14 07:54, Karsten Loesing wrote:
 Michael, are you working on JTorCtl because you want to help out
 with writing some Java code?  If so, I might have some other fine
 Java code for you to hack on that is in active use:

I'd love to help, but right now I've got too much on my plate to take
on any new projects. Sorry!

 Or are you using JTorCtl for some project of yours?  In that case
 it might be easier to keep your own JTorCtl branch for now and ask
 for your changes to be merged when you publish your application.
 Of course, you could also open a Trac ticket and ask for review,
 but that might take between a while and forever.

I'm happy to maintain a fork - that's what I've been doing until now,
but I thought it might be useful to push my changes upstream. Here's a
summary of the changes:

1. Make TorControlConnection thread-safe
2. Convert TorControlError and TorControlSyntaxError into checked
exceptions
3. Reduce the visibility of classes and methods
4. Use the Java 1.5 for-each loop for readability
5. Fix formatting of javadoc comments

Points 2 and 3 might affect existing users of the library. I'm happy
to post five small patches or one big patch for eventual review if you
think Trac's the way to go.
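
As an illustration of the kind of change in point 1 (a hedged sketch
with made-up names, not the actual jtorctl internals): the core of it
is serialising writes to the single control socket so that concurrent
callers can't interleave partial commands.

import java.io.IOException;
import java.io.Writer;

class SafeControlWriter {

    private final Object writeLock = new Object();
    private final Writer out;

    SafeControlWriter(Writer out) {
        this.out = out;
    }

    // Each command is written and flushed atomically with respect to
    // other callers, so two threads can't interleave their bytes.
    void sendCommand(String command) throws IOException {
        synchronized (writeLock) {
            out.write(command + "\r\n");
            out.flush();
        }
    }
}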

Cheers,
Michael



[tor-dev] jtorctl

2014-04-03 Thread Michael Rogers

Hi,

I have a patch to improve thread safety in jtorctl. How should I
submit it?

Cheers,
Michael


[tor-dev] Using the HS protocol for unlinkability only

2014-03-26 Thread Michael Rogers

Hi all,

(Please let me know if this belongs on tor-talk instead of here.)

I'm working on a messaging app that uses Tor hidden services to
provide unlinkability (from the point of view of a network observer)
between users and their contacts. Users know who their contacts are,
so we don't need mutual anonymity, just unlinkability.

I wonder whether we need everything that the Tor hidden service
protocol provides, or whether we might be able to save some bandwidth
(for clients and the Tor network) and improve performance by using
parts of the hidden service protocol in a different way.

First of all, we may not need to publish hidden service descriptors in
the HS directory, because we have a way for clients to exchange static
information such as HS public keys out-of-band.

Second, we may not need to use introduction points to protect services
from DoS attacks - we can assume that users trust their contacts not
to DoS them.

Third, we may be able to reduce the number of hops in the
client-service circuits, because we don't need mutual anonymity.

This isn't the first app to use hidden services for unlinkability, so
I expect this topic's come up before. Are there any discussions I
should look at before coming up with hare-brained schemes to misuse
the hidden service protocol?

Cheers,
Michael