[tor-relays] do the 800+ UbuntuCore relays constitute a Sybil attack?

2017-11-28 Thread starlight . 2017q4
The population of these has been climbing for more than a week and no-one has 
commented, which seems odd.  No contact provided.

https://atlas.torproject.org/#search/UbuntuCore



Re: [tor-relays] do the 800+ UbuntuCore relays constitute a Sybil attack?

2017-11-30 Thread starlight . 2017q4
Thank you for the reply.

I missed the post when searching the topic--the relays are a bit unusual and I 
keep tripping over them. . .


On Nov 29, 2017 01:37, "Chad MILLER"  wrote:

>To be honest, I reckon these UbuntuCore nodes are almost all mundane
>desktops and servers.
>
>I think there are only a few added each day, for a total of about one or
>two thousand that intend to be bridges and relays. Intent doesn't mean they
>have the inbound connectivity to join the consensus, though.
>
>
>
>On Nov 28, 2017 17:18, "Roger Dingledine"  wrote:
>
>> On Tue, Nov 28, 2017 at 08:06:11PM -0500, starlight.2017q4 at binnacle.cx
>> wrote:
>> > The population of these has been climbing for more than a week and
>> no-one has commented, which seems odd.  No contact provided.
>> >
>> > https://atlas.torproject.org/#search/UbuntuCore
>>
>> See this thread:
>> https://lists.torproject.org/pipermail/tor-relays/2016-August/010046.html
>>
>> So they are not a single unified operator.
>>
>> What fraction of consensus weights are they? I'm under the impression
>> they're running on refrigerators or whatever so most of them have crappy
>> connectivity.
>>
>> --Roger
>>



[tor-relays] botnet? abusing/attacking guard nodes

2017-12-17 Thread starlight . 2017q4
Guard relay here appears to have come under steadily increasing abuse over the 
last several months.  I believe the two previous threads relate to the same issue:

   Failing because we have 4063 connections already
   // Number of file descriptors

   DoS attacks are real

Several times a day a large burst of circuit extends is attempted, resulting in 
log flooding with

   [Warning] assign_to_cpuworker failed. Ignoring.

where the above indicates a circuit launch failed due to a full circuit-request 
queue.  Presently the guard runs on an old system lacking AES-NI, so the 
operation is expensive rather than trivial.  Originally I thought the events were 
very brief, but after reducing MaxClientCircuitsPending from a larger value to 
the default it appears they last between five and ten minutes.

The abuser also contrives to create huge circuit queues, which resulted in an 
OOM kill of the daemon a couple of days back.  I lowered MaxMemInQueues to 1G, 
set vm.overcommit_memory=2 with vm.overcommit_ratio=X (X such that 
/proc/meminfo:CommitLimit is comfortably less than physical memory), and now 
instead of a daemon take-out see

   [Warning] We're low on memory.  Killing circuits with over-long
   queues. (This behavior is controlled by MaxMemInQueues.)
   
   Removed 1060505952 bytes by killing 1 circuits;
   19k circuits remain alive. Also killed 0 non-
   linked directory connections.

As you can see, the one circuit was consuming all of MaxMemInQueues.
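
For reference, the overcommit clamp described above amounts to sysctl settings 
along these lines; the ratio below is purely illustrative and must be sized so 
that CommitLimit lands below physical RAM on the particular machine:

   # illustrative values only -- not the exact settings used on this relay
   # mode 2 = strict accounting: allocations beyond swap + ratio% of RAM fail
   # outright instead of triggering the OOM killer later
   sysctl -w vm.overcommit_memory=2
   sysctl -w vm.overcommit_ratio=80   # pick so /proc/meminfo:CommitLimit < physical RAM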

And today this showed up in the middle of an "assign_to_cpuworker failed" blast:

   [Warning] Failing because we have Y connections already. . .

Digging into the source, the message indicates ENOMEM/ENOBUFS was returned from 
an attempt to create a socket.  The socket max on the system is much higher than 
Y, so kernel memory exhaustion is the cause.  The implication is a burst of 
client connections associated with the events, but I haven't verified that.

An old server was dusted off after a hardware failure and the machine is a bit 
underpowered, but it is certainly up to the load that corresponds with the 
connection speed and assigned consensus weight.  AFAICT normal Tor clients 
experience acceptable performance.  The less-than-blazing current hardware makes 
the abuse/attack incidents stand out, and inspired the writing of this post.



Re: [tor-relays] botnet? abusing/attacking guard nodes

2017-12-17 Thread starlight . 2017q4
>My relay ran out of connections once and also crashed once so I followed 
>the suggestions in the "DoS attacks are real (probably)" thread and 
>implemented connection limits in my firewall. Everything has run 
>smoothly since.

I missed this thread, thank you for highlighting it!

>My only concern is how low I can set the number of connections per IP 
>address. Someone wrote a legit client will only open max 2 tcp 
>connections. I'd like this verified before I lower my limits further.

A two-connection limit is fine for single-IP clients, but will
penalize multiple clients operating behind NAT IPs.  I've
decided that's too bad for them for the moment. . .

Limiting connections-per-IP fixes it.  I set 

   -m connlimit --connlimit-above 2 --connlimit-mask 32 -j DROP

and obtained good mitigation.  The attacker relies on opening
tons of connections and this simple rule squashes it.  The rule
accumulated 15 million hits over a short span.
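
Spelled out in full, the rule is along these lines (sketch only; 9001 stands
in for the relay's real ORPort, and chain placement depends on the local
firewall layout):

   # drop a new inbound ORPort connection when the source IPv4 address
   # already has two connections open (9001 is a placeholder ORPort)
   iptables -A INPUT -p tcp --dport 9001 --syn \
      -m connlimit --connlimit-above 2 --connlimit-mask 32 -j DROP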

I have kept an eye on the number of peer-relay connections
and client connections for a long time and the client
connection count has been artificially high of late.
With the above rule it went right back to the usual
level.

The limit can be higher than two and it should work
as well--the rate of new connections appears to be
critical to the attack.  Possibly the low performance
CPU here lacking AES hardware mitigates it since each
connection appears to require an onionskin calculation,
and whatever old-connection cleanup logic exists in Tor
easily keeps pace.  This has been going on for months
and only became a problem now with the attacker
enhancing it to somehow queue huge amounts of data
on circuits--but per my initial post that's simple
to mitigate.



[tor-relays] could Tor devs provide an update on DOS attacks?

2017-12-30 Thread starlight . 2017q4
I realize we're in the middle of the Christmas / New Year dead week, but it 
would be great if one of the developers could say something (anything) about the 
ongoing denial-of-service attacks.

My node crashed a second time a few days back, and while the second iteration 
of hardening appears to have held, I see many others crack under the stress.  If 
one opens a control channel and issues

   setevents extended orconn

volumes of

   650 ORCONN ... LAUNCHED ID=
   650 ORCONN ...  FAILED REASON=CONNECTREFUSED NCIRCS=x ID=

messages appear.  First I thought these were operators with some new mitigation, 
but finally realized the relays had crashed and not yet dropped off the 
consensus.
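
For anyone who wants to watch the same thing, a control-port session along
these lines works (sketch only; the port and password are placeholders and
ControlPort plus authentication must already be configured in torrc):

   # placeholders: adjust address, port and credentials to the local setup
   telnet 127.0.0.1 9051
   AUTHENTICATE "controlpassword"
   SETEVENTS EXTENDED ORCONN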



Re: [tor-relays] could Tor devs provide an update on DOS attacks?

2017-12-31 Thread starlight . 2017q4
At 18:25 12/30/2017 -0500, Roger Dingledine wrote:

Thank you Roger for your detailed reply.

I have some observations:

1) An additional contingent of dubious clients exists aside from the newly 
arrived big-name-hoster instances each generating _hundreds_of_thousands_ of 
connection requests _per_guard_per_day_:  hundreds of scattered client IPs 
behave in a distinctive bot-like manner, and seem a likely source of excess 
circuit-extend activity.  These IPs have been active since late August this 
year.

2) Intervals of extreme circuit-extend activity come and go in patterns that 
resemble attacks to my eyes.  In one incident my guard relay was so overloaded 
before crashing that no normal user circuits could be created whatsoever.  That 
has never come close to happening before.

3) I run an exit on a much more powerful machine.  Normally the exit does not 
complain "assign_to_cpuworker failed," but recently the exit was attacked two 
different ways in rapid succession:  first it was hit with a DDOS 
packet-saturation blast calibrated to overload the network interface but not so 
strong as to trigger the ISP's anti-DDOS system (which works well); the first 
attack had little effect.  Then within two hours the exit was hit with a 
singular and massive circuit-extend attack that pegged the crypto-worker 
thread, generating thousands of "assign_to_cpuworker failed" messages.  Both 
attacks degraded traffic flow noticeably but did not severely impact the exit.  
The attacker gave up (or accomplished their goal), presumably moving on to 
other targets.

4) Aside from "assign_to_cpuworker failed" overloading, the recent aggravation 
commenced with a "sniper attack" against my guard relay that resulted in Linux 
OOM kill of the daemon.  Brought it back up with a more appropriate 
MaxMemInQueues setting and they tried again exactly two times, then ceased.  I 
am certain it was a sniper attack due to the subsequent attempts, and it 
appears the perpetrator was actively and consciously engaged in attacking a 
selected target.

https://trac.torproject.org/projects/tor/ticket/24737

Here are my two cents:  The current stress activity is either or both of 1) a 
long-running guerilla campaign to harass Tor relay operators and the Tor 
network, calibrated to avoid attracting an all-hands mitigation and associated 
bad press, 2) an effort to deanonymize hidden services with various 
guard-discovery guard-substitution techniques.

In light of the above I suggest adding support for circuit-extend rate-limiting 
of some kind or another.  I run Tor relays to help regular flesh-and-blood 
users, not to facilitate volume traffic/abuse initiated by dubious actors.  I 
wish to favor the former and have no qualms hobbling and blocking the latter.



Re: [tor-relays] could Tor devs provide an update on DOS attacks?

2017-12-31 Thread starlight . 2017q4
At 07:36 12/31/2017 -0500, I wrote:
>
>. . . suggest adding support for circuit-extend rate-limiting of some kind or 
>another. . .

Further in support of the request, for _12_hours_ preceding the most recent 
crash, the daemon reported:

Your computer is too slow to handle this many circuit creation requests. . .
[450043 similar message(s) suppressed in last 60 seconds]

and for the attack on the fast exit machine:

[1091489 similar message(s) suppressed in last 60 seconds]

I see no reason _any_ router should _ever_ have to handle this volume of 
circuit requests.  These are DoS attacks, no doubt whatsoever.  A rate limit is 
needed to mitigate the problem.



Re: [tor-relays] could Tor devs provide an update on DOS attacks?

2018-01-01 Thread starlight . 2017q4
At 07:36 12/31/2017 -0500, I wrote:
>
>. . . suggest adding support for circuit-extend rate-limiting of some kind or 
>another. . .

also:

Heartbeat: Tor's uptime is 10 days 0:00 hours, with 115583 circuits open.
I've sent 5190.11 GB and received 5048.62 GB.
Circuit handshake stats since last time: 538253/637284 TAP,
5878399/5922888 NTor.

Heartbeat: Tor's uptime is 10 days 6:00 hours, with 179761 circuits open.
I've sent 5314.34 GB and received 5193.56 GB.
Circuit handshake stats since last time: 34639/34885 TAP,
*** 18741983/144697651 NTor. ***



[tor-relays] Disable CellStatistics !!!

2018-02-04 Thread starlight . 2017q4
After many crashes and much pain, I determined that
having CellStatistics enabled causes a busy relay
to consume at least two or three _gigabytes_ of
additional memory.  Relay operators with less than
16GB per instance are advised to disable it.

By default CellStatistics is disabled unless explicitly
set in torrc.

CellStatistics can be turned off without restarting,
via the control-channel command

   setconf CellStatistics=0



[tor-relays] 1 circuit using 1.5Gig or ram? [0.3.3.2-alpha]

2018-02-12 Thread starlight . 2017q4
On 12 Feb (19:44:02 UTC), David Goulet wrote:
>Wow... 1599323088 bytes is insane. This should _not_ happen for only 1
>circuit. We actually have checks in place to avoid this, but it seems they
>either totally failed or we have an edge case.
>
>Can you tell me what scheduler you were using (look for "Scheduler" in the
>notice log)?
>
>Any warnings in the logs that you could share, or was everything normal?
>
>Finally, can you share the OS you are running this relay on and, if Linux, the
>kernel version?


Don't know if it's relevant but my relay was hit in similar fashion in December.
Running 0.2.9.14 (no KIST) on Linux at the time (no other related log messages,
MaxMemInQueues=1GB reduced from 2GB after OOM termination):

Dec 15 15:28:52 Tor[]: assign_to_cpuworker failed. Ignoring.
Dec 15 15:48:16 Tor[]: assign_to_cpuworker failed. Ignoring.
Dec 15 16:39:44 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Dec 15 17:39:45 Tor[]: Removed 442695264 bytes by killing 1 circuits; 18766 
circuits remain alive. Also killed 0 non-linked directory connections.
Dec 15 19:03:22 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Dec 15 19:03:23 Tor[]: Removed 1060505952 bytes by killing 1 circuits; 19865 
circuits remain alive. Also killed 0 non-linked directory connections.

More recently (and more reasonably, MaxMemInQueues=512MB), running 0.3.2.9:

Feb  4 20:12:39 Tor[]: Scheduler type KIST has been enabled.
Feb  6 08:12:41 Tor[]: Heartbeat: Tor's uptime is 1 day 11:59 hours. I've sent 
29.00 MB and received 364.99 MB.
Feb  6 14:04:43 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Feb  6 14:04:43 Tor[]: Removed 166298880 bytes by killing 2 circuits; 20213 
circuits remain alive. Also killed 0 non-linked directory connections.
Feb  6 14:11:17 Tor[]: Heartbeat: Tor's uptime is 1 day 17:59 hours, with 20573 
circuits open. I've sent 910.29 GB and received 902.58 GB.
Feb  6 14:11:17 Tor[]: Circuit handshake stats since last time: 1876499/3018306 
TAP, 4322015/4322131 NTor.
Feb  6 14:11:17 Tor[]: Since startup, we have initiated 0 v1 connections, 0 v2 
connections, 1 v3 connections, and 23846 v4 connections; and received 6 v1 
connections, 7844 v2 connections, 11906 v3 connections, and 214565 v4 
connections.
Feb  6 14:12:41 Tor[]: Heartbeat: Tor's uptime is 1 day 17:59 hours. I've sent 
31.62 MB and received 420.63 MB.
Feb  6 14:22:50 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Feb  6 14:22:50 Tor[]: Removed 181501584 bytes by killing 2 circuits; 19078 
circuits remain alive. Also killed 0 non-linked directory connections.
Feb  6 15:01:50 Tor[]: We're low on memory.  Killing circuits with over-long 
queues. (This behavior is controlled by MaxMemInQueues.)
Feb  6 15:01:50 Tor[]: Removed 105918912 bytes by killing 1 circuits; 19679 
circuits remain alive. Also killed 0 non-linked directory connections.
Feb  6 15:46:24 Tor[]: Channel padding timeout scheduled 157451ms in the past. 
Feb  6 19:30:36 Tor[]: new bridge descriptor 'Binnacle' (fresh): 
$4F0DB7E687FC7C0AE55C8F243DA8B0EB27FBF1F2~Binnacle at 108.53.208.157
Feb  6 20:11:17 Tor[]: Heartbeat: Tor's uptime is 1 day 23:59 hours, with 18045 
circuits open. I've sent 1043.74 GB and received 1034.65 GB.
Feb  6 20:11:17 Tor[]: Circuit handshake stats since last time: 260970/368918 
TAP, 3957087/3957791 NTor.

Perhaps this indicates some newer KIST mitigation logic is effective.



[tor-relays] gabelmoo's BW scanner, temporary or permanent leave?

2018-02-19 Thread starlight . 2017q4
Noticed gabelmoo's BW scanner is offline and the entry for it at

   https://consensus-health.torproject.org/#bwauthstatus

was removed; is it gone or just taking an extended break?



Re: [tor-relays] gabelmoo's BW scanner, temporary or permanent leave?

2018-02-19 Thread starlight . 2017q4
>> On 19. Feb 2018, at 22:38, starlight.2017q4 at binnacle.cx wrote:
>> 
>> Noticed gabelmoo's BW scanner is offline and the entry for it at
>> 
>>   https://consensus-health.torproject.org/#bwauthstatus
>> 
>> was removed; is it gone or just taking an extended break?
>
>It is broken because the current machine cannot fulfill its memory
>requirement. Hardware upgrades are initiated but stalled due to
>personal reasons.

Thank you for this great contribution!

and thank you for the status update



[tor-relays] DoS mitigation taking hold!

2018-03-06 Thread starlight . 2017q4
Back to normal!  Check out the final two log entries.

Thank you David Goulet and all the Tor devs!!!


Feb 24 22:45 Tor 0.3.3.2-alpha (...) running on Linux with . . .
.
.
.
Mar  2 21:09 Circuit handshake stats since last time: 247305/247482 TAP, 
4620773/4622133 NTor.
Mar  3 03:09 Circuit handshake stats since last time: 969715/1129528 TAP, 
5195458/5196319 NTor.
Mar  3 09:09 Circuit handshake stats since last time: 1847030/2202846 TAP, 
5027827/5029068 NTor.
Mar  3 13:24 Tor 0.3.3.3-alpha (...) running on Linux with . . .
Mar  3 19:24 Circuit handshake stats since last time: 93572/93573 TAP, 
5009863/5010456 NTor.
Mar  4 01:24 Circuit handshake stats since last time: 169018/262021 TAP, 
4539061/4539069 NTor.
Mar  4 07:24 Circuit handshake stats since last time: 169753/235140 TAP, 
4436073/4436112 NTor.
Mar  4 13:24 Circuit handshake stats since last time: 119517/119547 TAP, 
4040264/4041342 NTor.
Mar  4 19:24 Circuit handshake stats since last time: 1098774/1119662 TAP, 
4087379/4088944 NTor.
Mar  5 01:24 Circuit handshake stats since last time: 223758/267280 TAP, 
3828115/3828568 NTor.
Mar  5 07:24 Circuit handshake stats since last time: 99975/113118 TAP, 
3537541/3537974 NTor.
Mar  5 13:24 Circuit handshake stats since last time: 173710/175951 TAP, 
3792871/3793745 NTor.
Mar  5 19:24 Circuit handshake stats since last time: 657254/747645 TAP, 
3678773/3679332 NTor.
Mar  6 01:24 Circuit handshake stats since last time: 606685/704653 TAP, 
3186116/3186477 NTor.
Mar  6 07:24 Circuit handshake stats since last time: 582036/708742 TAP, 
2043464/2043654 NTor.
Mar  6 13:24 Circuit handshake stats since last time: 54266/54266 TAP, *** 
586327/586328 NTor. ***
Mar  6 19:24 Circuit handshake stats since last time: 122817/122817 TAP, *** 
558229/558231 NTor. ***



Re: [tor-relays] Previous Guard not getting Guard flag back

2018-03-20 Thread starlight . 2017q4
>While I understand that my relay lost the guard flag because of a weekend
>of downtime, I would expect it to get the flag back after a while of being
>stable again? Anyone able to shed some light on when it will get the flag
>back?
>https://metrics.torproject.org/rs.html#details/924B24AFA7F075D059E8EEB284CC400B33D3D036


Guard flag calculation is somewhat involved (due to several
get-out-of-jail promotions/bypasses), but the essential
part is weighted uptime in excess of the median uptime of the guard
candidates, or 98%, whichever is lower.  Presently the authorities
have it somewhere around 96% (they do not publish the value).
Your relay is at about 91% and will be back as a Guard in less
than five days, when it will hit 97%.  The attached XLS shows
it roughly, left side today and right side as of 3/26.  Uptime
data came from the "1_month" section of

   https://onionoo.torproject.org/uptime?search=NSDFreedom

Built this sheet for myself recently and simply stuffed in data
for your relay.  Enjoy.

P.S. If anyone spots flaws in the approach, please comment.

NSDFreedom_guard_recovery.xls
Description: Binary data


Re: [tor-relays] Previous Guard not getting Guard flag back

2018-03-20 Thread starlight . 2017q4
What might not be directly obvious is that the calculation is based
on 12-hour intervals, where each older interval's uptime is
down-weighted to 95% of the next-most-recent one.  OnionOO
uptime intervals are four hours, so each set of three OO
intervals is averaged into a single 12-hour interval and then
weighted.  This only approximates the logic of rephist.c, which
keeps track of uptime with compact records indicating the
start and end of each relay-is-up span.

At 00:34 3/21/2018 -0500, starlight.201...@binnacle.cx wrote:
>>While I understand that my relay lost the guard flag because of a weekend
>>of downtime, I would expect that it would get it back after a while of
>>stable again? Anyone able to shed some light on when it will get the flag
>>back?
>>https://metrics.torproject.org/rs.html#details/924B24AFA7F075D059E8EEB284CC400B33D3D036
>
>
>Guard flag calculation is somewhat involved (due to several
>get-out-jail promotions/bypasses), but the essential
>part is weighted uptime in excess of the median uptime of guard
>candidates, or 98% whichever is lower.  Presently the authorities
>have it somewhere around 96% (they do not publish the value).
>Your relay is at about 91% and will be back as a Guard in less
>than five days when it will hit 97%.  The attached XLS shows
>it roughly, left side today and right side as-of 3/26.  Uptime
>data came from the "1_month" section of
>
>   https://onionoo.torproject.org/uptime?search=NSDFreedom
>
>Built this sheet for myself recently and simply stuffed in data
>for your relay.  Enjoy.
>
>P.S. If anyone spots flaws in the approach, please comment.



Re: [tor-relays] Previous Guard not getting Guard flag back

2018-03-20 Thread starlight . 2017q4
Flubbed a date paste on the right-side (visual aid only)
but calc is correct.  Fix is to copy I3, paste it to I4:I8.



Re: [tor-relays] Previous Guard not getting Guard flag back

2018-03-20 Thread starlight . 2017q4
Actually I badly munged the entire right side.
Should have written a Perl script or C program
to do this--a spreadsheet is a terrible hack.
Fixup attached here.

Your relay should be a guard again by the end
of next Wednesday, 3/28, assuming the auths
are in a nice mood and 96.2% is adequate.
Worst case should be Thursday.

NSDFreedom_guard_recovery.xls
Description: Binary data


Re: [tor-relays] Previous Guard not getting Guard flag back

2018-03-20 Thread starlight . 2017q4
Can't win!  Lesson here is never hack a spreadsheet
for someone else's relay ;-)

Data elements were in reverse order relative to the
sheet and I forgot to reverse them.  I _think_
this is correct. . . Guard flag comes back Saturday 3/24.

I'll have to write a Perl or Python script sometime
to pull data from OnionOO and run the calc. . .
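
Something along these lines, perhaps--a minimal sketch that assumes the usual
OnionOO history-object layout (interval, factor, values scaled 0-999, null for
missing samples) and treats missing samples as downtime:

   import json, urllib.request

   def weighted_fractional_uptime(nickname):
       url = 'https://onionoo.torproject.org/uptime?search=' + nickname
       with urllib.request.urlopen(url) as resp:
           hist = json.load(resp)['relays'][0]['uptime']['1_month']
       # raw values are 0..999 (or null); 'factor' scales them to 0..1
       samples = [(v or 0) * hist['factor'] for v in hist['values']]
       per_bucket = (12 * 3600) // hist['interval']      # three 4-hour samples
       buckets = [samples[i:i + per_bucket]
                  for i in range(0, len(samples), per_bucket)]
       means = [sum(b) / len(b) for b in buckets]        # oldest bucket first
       num = den = 0.0
       weight = 1.0
       for m in reversed(means):                         # newest bucket first
           num += weight * m
           den += weight
           weight *= 0.95                                # 5% decay per 12 hours
       return 100.0 * num / den

   print('%.1f%%' % weighted_fractional_uptime('NSDFreedom'))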

NSDFreedom_guard_recovery.xls
Description: Binary data


[tor-relays] DoSer is back, Tor dev's please consider

2018-03-22 Thread starlight . 2017q4
Please note:

Here parameter DoSCircuitCreationMinConnections=1 is set (rather than the 
default value of 3).

Mar 11 17:23:53 Tor[]: DoS mitigation since startup: 0 circuits rejected . . .
. . .
Mar 22 11:23:54 Tor[]: DoS mitigation since startup: 299608 circuits rejected. 
. .
Mar 22 17:23:54 Tor[]: DoS mitigation since startup: 806025 circuits rejected. 
. .

I.e., mitigation circuit rejections increased roughly 170% in six hours after 
climbing only gradually for over ten days.

Also:

top - 19:05:53 up 11 days.
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
 1998 tor   20   0  662m 611m 108m R 47.2 15.4   7901:32 tor
 2000 tor   20   0  662m 611m 108m S 42.2 15.4 343:28.28 tor
 2001 tor   20   0  662m 611m 108m R 56.8 15.4 343:24.46 tor

I.e., the crypto workers are pegged after barely registering since the DoSer was 
shut down on March 7th.

'iptables' mitigation rule here shows the DoS source-IPs ablaze.

==

Suggestion:  DoSCircuitCreationMinConnections=1 be established in the consensus



Re: [tor-relays] Previous Guard not getting Guard flag back

2018-03-22 Thread starlight . 2017q4
The Guard flag came back a day early, with six authorities voting yea and three 
voting nay.

This implies the median uptime percentage for guard candidates is slightly under 
95.8%.


NSDFreedom_guard_recovery.xls
Description: Binary data


Re: [tor-relays] DoSer is back, Tor dev's please consider

2018-03-23 Thread starlight . 2017q4
At 03:20 3/23/2018 +, tor  wrote:
>> Suggestion: DoSCircuitCreationMinConnections=1 be established in consensus
>
>The man page for the above option says:
>
>"Minimum threshold of concurrent connections before a client address can be 
>flagged as executing a circuit creation DoS. In other words, once a client 
>address reaches the circuit rate and has a 
>minimum of NUM concurrent connections, a detection is positive. "0" means use 
>the consensus parameter. If not defined in the consensus, the value is 3. 
>(Default: 0)"
[snip]
>
>Am I misunderstanding?

"concurrent connections" refers to concurrent TCP+TLS network layer 
connections, not to Tor circuits--nominally one-connection-per-peer IP.  It 
means the excess circuit-extend rate logic does not kick in at all until at 
least N TCP connections from a particular IP exist.  Once the configured number 
of TCP connections is present, the circuit extend rate is examined.

An adversary who stays under the configured limit (presently three) can extend 
circuits at extreme rates over (two) TCP connections.  The adversary must marshal 
a larger number of IP addresses than previously to obtain the same effect, which 
raises the cost of the attack, but they can still cause significant trouble, as 
my relay's statistics demonstrate.
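
Until the consensus value changes, operators can of course tighten this locally
in torrc; a minimal sketch (these DoS* options exist in 0.3.3.x; the values here
are only illustrative):

   # illustrative torrc overrides; options left unset fall back to the consensus
   DoSCircuitCreationEnabled 1
   DoSCircuitCreationMinConnections 1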



[tor-relays] can dirport be disabled on fallback directory?

2018-05-18 Thread starlight . 2017q4
Lately seeing escalating abuse traffic on the relay dirport, now up to 20k 
rotating source IP addresses per week.

The simple solution is to disable dirport, but the relay is a fallback 
directory and I don't want to make a change that will negatively affect the 
relay's ability to function as such.  Would disabling dirport be a problem?

also:

can a non-advertised dirport be left configured for local-system use while the 
public advertised dirport is disabled?

does a command utility or method exist for querying dirport documents via 
tunnelled-dir-server, including miscellaneous documents such as

/tor/status-vote/current/consensus.z
/tor/keys/all.z
/tor/server/all.z
/tor/extra/all.z

/tor/server/fp/++.z
/tor/extra/fp/++.z
/tor/micro/d/-.z
/tor/keys/fp/+.z

thanks!



Re: [tor-relays] Unusual load returning?

2018-05-18 Thread starlight . 2017q4
At 19:25 5/15/2018 +, r1610091651  wrote:
>I've noticed unusual load on the relay. Notice the huge change in load
>between 3-8 am (CET).
...
>Wondering if others experienced it recently?


One here.  Perhaps isolated probes?

May 18 11:23 Tor[]: Circuit ... : 10279/10279 TAP, 296594/296595 NTor.
May 18 17:23 Tor[]: Heartbeat: uptime is 68 days ...
May 18 17:23 Tor[]: Circuit ... : 654119/660742 TAP, 318906/318907 NTor.



Re: [tor-relays] can dirport be disabled on fallback directory?

2018-05-19 Thread starlight . 2017q4
Dirport is a handy convenience, but is not essential to the proper
functioning of the network.  I put a connection rate-limit on the
dirport and it stopped the abuser cold.  Dirport traffic went
from 15% of the total back down to 1-2%, where it belongs.

Nonetheless the questions posed are valid.



At 12:25 5/18/2018 -0400, starlight.201...@binnacle.cx wrote:
>Lately seeing escalating abuse traffic on the relay dirport, now up to 20k 
>rotating source IP addresses per week.
>
>The simple solution is to disable dirport, but the relay is a fallback 
>directory and I don't want to make a change that will negatively affect the 
>relay's ability to function as such.  Would 
>disabling dirport be a problem?
>
>also:
>
>can a non-advertised dirport be left configured for local-system use while the 
>public advertised dirport is disabled?
>
>does a command utility or method exist for querying dirport documents via 
>tunnelled-dir-server, including miscellaneous documents such as
>
>/tor/status-vote/current/consensus.z
>/tor/keys/all.z
>/tor/server/all.z
>/tor/extra/all.z
>
>/tor/server/fp/++.z
>/tor/extra/fp/++.z
>/tor/micro/d/-.z
>/tor/keys/fp/+.z
>
>thanks!
>



Re: [tor-relays] can dirport be disabled on fallback directory?

2018-05-20 Thread starlight . 2017q4
On May 20, 2018 10:08:17 UTC, gustavo  wrote:
>
>On May 18, 2018 4:25:23 PM UTC, starlight.2017q4 at binnacle.cx wrote:
>>Lately seeing escalating abuse traffic on the relay dirport, now up to
>>20k rotating source IP addresses per week.
>
>How do you detect it?

FIRST: your relays are not impacted by this issue because
DirPort is disabled in their configuration.  So you can
stop reading here if you like.

>Will tor log it in the logs where I can look for
>it, or do you monitor the TCP/IP stack?
>
>I run two relays (milanese one of them); besides basic
>OS-level monitoring I don't monitor much else.
>
>I wonder if I should monitor more, or what to search for
>in logs (I run my relays without logs since I don't
>have a use for them).

Simply perusing the /var/log/messages log lines for
the relay on occasion should be sufficient for most
operators.  The daemon will complain about many if
not all important problems.

--

For those with DirPort configured, one can check for the
problem by looking at the 'state' file with the command

   egrep '^BWHistory.*WriteValues' state

and calculating the percentage BWHistoryDirWriteValues represents
relative to BWHistoryWriteValues for the same samples.  It should
be under 5%, more like 1-3%; if it is around 15%, the attacker
is harassing your relay.
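
A one-shot way to do the arithmetic, run from the relay's DataDirectory (sketch
only; it sums the full sample lists rather than matching individual samples):

   # sketch: prints BWHistoryDirWriteValues as a percent of BWHistoryWriteValues
   awk '/^BWHistoryWriteValues/    { n = split($2, v, ","); for (i = 1; i <= n; i++) total += v[i] }
        /^BWHistoryDirWriteValues/ { n = split($2, v, ","); for (i = 1; i <= n; i++) dir   += v[i] }
        END { printf "DirPort writes: %.1f%% of relay writes\n", 100 * dir / total }' state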

This particular abuse scenario can be mitigated by
applying an 'iptables -m limit' rule set to incoming
DirPort connection requests

  -or-

by disabling DirPort in the config, since clear-text
DirPort is no longer required for the Tor network to
function properly.  Those running fallback directories
should probably send an update to this list if they
apply this change, as I believe the FallBackDir script
excludes relays whose ports differ from the whitelist
or have changed in OnionOO historical data.
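
The rate-limit variant looks roughly like this (sketch only; 9030 stands in for
the advertised DirPort and the rates should be tuned to the relay's normal
DirPort load):

   # admit a modest rate of new DirPort connections, drop the excess
   iptables -A INPUT -p tcp --dport 9030 --syn \
      -m limit --limit 4/second --limit-burst 16 -j ACCEPT
   iptables -A INPUT -p tcp --dport 9030 --syn -j DROP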



Re: [tor-relays] can dirport be disabled on fallback directory?

2018-05-20 Thread starlight . 2017q4
On May 20, 2018 17:37:07 UTC, Damian Johnson atagar at torproject.org wrote:
>
>> There don't seem to be any examples of ORPort endpoints in the Stem
>> repository. I think Dave plans to add some more documentation as part of Tor
>> Summer of Privacy.
>
>Oh interesting. You're right. I recently added Stem's ORPort
>capabilities but seems I forgot to add usage examples in the docs.
>Thanks for pointing that out, I'll add it to my todo list. In the
>meantime I provided an example here...
>
>https://blog.atagar.com/april2018/

Thank you!

Does a command-line utility to make such requests exist,
or can one be created?  The availability of a command-line
tool for ORPort document downloads would bring DirPort
one step closer to complete obsolescence.
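
For what it's worth, a rough sketch of the sort of tool I have in mind, assuming
Stem's new ORPort endpoint support (the address, port and single-fingerprint
handling are placeholders, not a finished utility):

   # sketch only: fetch one relay's descriptor over an ORPort with Stem >= 1.7
   import sys
   import stem
   from stem.descriptor.remote import DescriptorDownloader

   downloader = DescriptorDownloader()
   query = downloader.get_server_descriptors(
       fingerprints = [sys.argv[1]],                  # relay fingerprint argument
       endpoints = [stem.ORPort('127.0.0.1', 9001)],  # placeholder ORPort
   )
   for desc in query.run():
       print(desc.nickname, desc.address, desc.or_port)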



[tor-relays] DirPort DOS activity against Fallback Directories

2018-05-21 Thread starlight . 2017q4
Recently I noticed excessive DirPort requests to my relay, where DirPort 
bandwidth reached 15% of ORPort bandwidth.  Normal DirPort load is around 2%.

https://lists.torproject.org/pipermail/tor-relays/2018-May/015253.html

Just looked over a sample of FallBackDir relays in Relay Search and
it appears this excess-load abuse is directed at them in particular.
Some fall-back directories show more than a month of excess request
traffic, presumably on the DirPort.  Logs here indicate six weeks
of abuse escalating in increments.  Possibly this foreshadows a major
increase in an effort to impair FallBackDir relay functionality.

Either an iptables connection-rate limit or disabling DirPort
resolves the problem.



Re: [tor-relays] DirPort DOS activity against Fallback Directories

2018-05-21 Thread starlight . 2017q4
At 18:29 5/21/2018 +, Logforme  wrote:
>
>How can I find this information on my relay? 
>(855BC2DABE24C861CD887DB9B2E950424B49FC34)
>

It is visible here:

https://metrics.torproject.org/rs.html#details/855BC2DABE24C861CD887DB9B2E950424B49FC34

Click on the Bandwidth History "3-Month" tab.  Your relay
shows indications of excess load.  You can verify this on the
local system as follows:

>For those with DirPort configured, one can check for the
>problem by looking at the 'state' file with the command
>
>   egrep '^BWHistory.*WriteValues' state | tr ',' '\n'
>
>and calculating the percent BWHistoryDirWriteValues is
>relative to BWHistoryWriteValues for the same samples.
>Should be under 5%, more like 1-3%.  If 15% the attacker
>is harassing your relay.

The above was written for lower-bandwidth relays
of around 10MB/sec.  Faster relays show a smaller percentage increase,
but if the absolute DirPort traffic level is on the order of
60MB or more, an attack is likely.  A more reasonable DirPort
traffic level is around 10MB.



Re: [tor-relays] Verizon AS701 blocking Tor consensus server tor26 (86.59.21.38)

2018-05-25 Thread starlight . 2017q4
>Hi tor-relays mailing list,
>
>Good news! Verizon unblocked tor26 (86.59.21.38).
>
>I posted something similar on NANOG (with modifications for network people) 
>here: 
>https://mailman.nanog.org/pipermail/nanog/2018-May/095386.html
>
>Someone nice at Verizon must have read NANOG (VZ NOC people probably do 
>read NANOG) and unblocked tor26. Here is a (successful) traceroute:


Thank you Neel!  My relay, along with all the other Verizon FiOS
relays, is now visible to 'tor26'.  This can be viewed by
navigating to

https://consensus-health.torproject.org/

and clicking the 'detailed page' link at the bottom
(takes time to load and render).

https://metrics.torproject.org/rs.html#search/as:701
