Re: hidden service maps

2008-05-19 Thread Ben Wilhelm

Grant Heller wrote:

Thank you for replying, Ben.
 
Can (the concept of) a hidden service be simplified to that of any 
arbitrary protocol?  Reconfigure an application to point to Tor instead 
of the Internet and if the hidden service exists, the application will 
communicate normally?


No prob. :)

Short answer: Yes.

Longer answer: Yes, but it's a little more difficult. The application 
can't try to connect via IP, because hidden services don't have IPs. It 
has to connect via a hostname. As I understand it, this means the 
application must be using SOCKS 4a or SOCKS 5, and must be configured to 
let the proxy resolve the hostname - a lot of applications simply aren't, 
and expect to be able to resolve the host and then connect to the IP in 
two separate steps. Which, in fairness, works in virtually all ordinary 
cases - just not for hidden services.


If the application is properly coded, then, yes, any system which 
doesn't break the network layering hierarchy will work just fine, given 
an appropriate onion address.
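
To make the "connect via a hostname" part concrete, here's a minimal 
sketch of a SOCKS5 connection that hands the hostname to the proxy 
instead of resolving it locally. It assumes Tor's SOCKS port is at 
127.0.0.1:9050 with no authentication, and the onion address is a 
placeholder, not a real service:

import socket
import struct

def socks5_connect(host, port, proxy=("127.0.0.1", 9050)):
    s = socket.create_connection(proxy)
    s.sendall(b"\x05\x01\x00")                  # SOCKS5, one method: no auth
    if s.recv(2) != b"\x05\x00":
        raise RuntimeError("proxy refused the no-auth method")
    # CONNECT with address type 0x03 (domain name): the proxy resolves it.
    req = (b"\x05\x01\x00\x03" + bytes([len(host)]) + host.encode("ascii")
           + struct.pack(">H", port))
    s.sendall(req)
    reply = s.recv(10)                          # VER, REP, RSV, ATYP, ADDR, PORT
    if len(reply) < 2 or reply[1] != 0x00:
        raise RuntimeError("proxy could not connect: %r" % (reply,))
    return s                                    # now a plain stream to the service

# Hypothetical usage:
# conn = socks5_connect("someonionaddress.onion", 80)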


It would presumably be possible to rig up some kind of wacky system that 
mapped private IPs to onion addresses on the fly, but Tor doesn't have 
that yet (and I don't know if they've even bothered considering 
implementing it, there are bigger issues.)


-Ben


Re: USAF wants to violate federal criminal law

2008-05-18 Thread Ben Wilhelm


Scott Bennett wrote:

It's
worth noting that the BSD users and even LINUX users don't have Windows
users' problem of always having to watch where they step to avoid falling
through security holes.


Yes, the great strength of Linux is that there are never massive 
pervasive security holes, and even if there were, they would certainly 
be fixed within days.


Oh wait, http://www.theregister.co.uk/2008/05/16/debian_openssl_flaw/ - 
whoops! Linux has serious long-term security breaches also!


Well, at least there aren't any constantly-exploited packages with a 
history of insecurity that are still commonly used, oh wait ha ha 
http://www.phpbb.com/ yes there are.


Is Linux *more secure*? Absolutely. Does Linux let you walk along in 
cheerful oblivion, knowing that the Grandmaster of Linux won't let any 
security holes onto your computer? Not in the least. If you don't watch 
where you go, you won't fall through as *many* security holes - but 
you'll still fall through a few.


Claiming that isn't the case, especially with such a horrible 
counterexample mere days ago, isn't likely to make people 
believe you.


-Ben


Re: USAF wants to violate federal criminal law

2008-05-18 Thread Ben Wilhelm

Wilfred L. Guerin wrote:

Even worse, you read FCC Part 15 rules and ask "why would I WANT it to
ACCEPT INTERFERENCE??"


You may want to read 
http://www.proz.com/kudoz/english/electronics_elect_eng/1105076-device_must_accept_any_interference_received.html 
for information on what "accept interference" means. Basically, it means 
that it must not explode or melt down - not that it must take orders 
from arbitrary other people and send them your credit card numbers.


 This httpS message sends the wire negotiated encryption key over the
 wire WITH the encrypted data. Do you frequently write the lock
 combination on the safe or tape the key to the lock when it is left in
 hostile environments?

I think you really, really need to go learn more about cryptography and 
the HTTPS protocol, as there's no point where what you described 
actually happens. The closest is when the client sends a chunk of random 
data to the server, which they both use to generate the encryption keys 
. . . and that chunk is only ever sent after being encrypted with the 
server's public key, meaning nobody besides the server can read it.
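
For what it's worth, here's a toy sketch of that step using the 
third-party Python "cryptography" package (an assumption - the real TLS 
handshake is considerably more involved; this only shows that the 
client's random secret crosses the wire encrypted under the server's 
public key):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the server's key pair; in real TLS the public half arrives
# inside the server's certificate. (Recent versions of the package.)
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_key.public_key()

pre_master = os.urandom(48)          # the client's random contribution
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
on_the_wire = server_pub.encrypt(pre_master, oaep)   # only ciphertext is sent

# Only the holder of the private key can recover the secret:
assert server_key.decrypt(on_the_wire, oaep) == pre_master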


As a side note, HTTPS is basically HTTP wrapped in an SSL/TLS session . 
. . and guess what Tor uses? If it's as insecure as you claim, Tor is 
pretty hilariously broken.


-Ben


Re: Ports 443 & 80

2008-05-18 Thread Ben Wilhelm


As I understand it, there's still a problem here - Tor thinks it's 
listening on port 9001, so it'll advertise to the directories that it's 
waiting on port 9001. Which obviously won't work all that well if they 
have to connect to port 80.


Here's what the relevant section of my torrc looks like:

## Required: what port to advertise for Tor connections.
ORPort 80
## If you want to listen on a port other than the one advertised
## in ORPort (e.g. to advertise 443 but bind to 9090), uncomment the
## line below too. You'll need to do ipchains or other port forwarding
## yourself to make this work.
ORListenAddress 0.0.0.0:8080

There's a chunk below it for directory ports also, although I have that 
disabled due to low bandwidth.
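
For the 443/80 case being discussed in this thread, the equivalent lines 
would presumably look like this (untested, hypothetical - just a sketch 
of the intended mapping):

## Hypothetical equivalent for the 443/80 case: advertise the low ports,
## keep Tor bound to its usual ones, and have the router or ipchains
## forward 443 -> 9001 and 80 -> 9030.
ORPort 443
ORListenAddress 0.0.0.0:9001
DirPort 80
DirListenAddress 0.0.0.0:9030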


-Ben

Nathaniel Dube wrote:
I just tried something else and I managed to get it working. :-)  The problem 
was I was overthinking the solution.  I set the ports in torrc back to their 
defaults: ORPort 9001 & DirPort 9030.


Instead, what I did was have the port forwarding at the router level...  
443 --> 9001 & 80 --> 9030.  Then I had the router forward ports 9001 & 9030 
to my private IP on the network.  So now I only need to open ports 9001 & 9030 
on my local software firewall.


This solution is the easiest and most efficient way I can see of doing it.  I 
hope this helps out everyone else.  Here's my entire torrc so everyone 
knows what settings I used to get it working.


SocksPort 9050
SocksListenAddress 127.0.0.1
DataDirectory /home/tor/.tor
ControlPort 9051
Nickname [Left Out]
ContactInfo [Left Out]
ORPort 9001
DirPort 9030

It's with this torrc and these hardware router settings that I managed to get 
everything working.  Thanks, everyone, for all the help.




Re: More GSoC Ideas

2008-03-21 Thread Ben Wilhelm


Various comments, regarding why some of these are dubious ideas:


  A. I had at least one connection to legal-preteen.com. I am willing
to take some chances of getting into trouble with the law for the sake
of avoiding internet censorship, but not to that end. Child pornography
and The Great Firewall of China are two completely separate things.


You will never, ever, ever block all child porn websites. It's simply 
impossible. To make things worse, in the US there's at least some 
possibility that filtering things by content leaves you open to 
lawsuits based on what you didn't filter - meaning that blocking child 
porn websites might leave you liable for the ones you missed. From a 
purely PR perspective, people might also argue "well, he clearly knew 
child porn was being viewed through his server, and he kept his server 
up! Burn him, he's a witch!"



  B. I've had to block Google because my roommates were getting the
nasty "this might be spyware" page and weren't all too happy about
that.


I don't really have a problem with this one :) (Although if you can get 
a second IP from your ISP, this can be solved neatly - I have all Tor 
traffic going through its own special IP. Still, this is often impractical.)



  C. I've blocked The Pirate Bay, and when I have time, will block
other such sites. (See idea 2). If operators want to let tor users go
through to those sites that's fine, I don't even care all that much
except that I think the limited tor bandwidth can go to better uses.


The Pirate Bay itself uses extraordinarily little bandwidth, and to my 
knowledge nobody has ever been prosecuted for downloading .torrent 
files. The actual process of running the torrent doesn't necessarily 
even touch TPB (what with distributed hash tables and the like) and even 
the parts that do touch TPB use a minimal amount of bandwidth. 
Essentially, this doesn't do what you might think it does.



2. On *nix systems, make it easy for snort to filter out tor traffic
on a protocol level. I realize there are plenty of legal uses for
BitTorrent, Gnutella, etc., but most of them do not require anonymity
in a strong sense. That is, they can get the same content through http
(most of the time) anyway, and downloading a Linux distribution (or
whatever) won't be flagged by most governments/agencies/whatever. It's
my bandwidth, I have the right to let *others* use it as I see fit.


First off, it's nearly impossible to make Tor capable of filtering at 
this sort of level - the Tor client simply doesn't know what kind of 
traffic may be sent through it until the connection is already made, and 
thus it can't possibly avoid servers that disallow certain protocols. 
The only thing you could do here is sever connections as soon as you 
determine that they're of the wrong type, and this obviously has severe 
usability implications.


Second, an increasing number of protocols are encrypted, thanks to the 
efforts of Verizon and co - I certainly turn on encryption on my 
bittorrent client whenever I use it, and I don't even use it to download 
illegal stuff. Obviously anything encrypted will pass straight through 
your clever protocol filter.



However, the last thing my parents
need is the FBI knocking on their door wondering why they are visiting
legal-preteen.com.


I think they may be even more irritated when you assure them that 
legal-preteen.com is blocked, and then the FBI shows up wanting to know 
why they're visiting hot-hot-hot-15-and-under.com :)


-Ben



Re: Compromised entry guards rejecting safe circuits (was Re: OSI 1-3 attack on Tor? in it.wikipedia)

2008-02-17 Thread Ben Wilhelm

Anon Mus wrote:

Ben,

I think you are using the purely theoretical numbers and applying them 
to the problem as if they were reality.

As I remember, the problems with the selection of primes for PKE are:

1. the seeding of the pseudo-random number generator

e.g. with a 16-bit seed there are only 65,000 or so entry points into the 
number generation, which leads to that number of keys.

Even for an 8-byte random seed the number of keys generated would be 
about 10^19 keys and obviously, following your example, this represents 
less than a milligram of your hydrogen memory, about a breath of air in 
the lungs of the average human being.


Yes, this is correct - if you use a horrifically insecure random-number 
generator, you'll end up with a horrifically insecure public key. Any 
serious application of crypto will use a random-number generator with 
far more than 16 bits of entropy. I don't actually know what the current 
standard for pseudo-random crypto generators is, but I give as a simple 
example Boost's Mersenne Twister generator, which, as I understand it, 
can be given something on the order of 20,000 bits of entropy as a seed. 
(Obviously, this is far more than is strictly needed to generate all 
256-bit primes.)
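
A tiny illustration of the seeding point (Python standard library only; 
random.Random is not a cryptographic generator, it's just here to show 
how small the seed space can be):

import os
import random

weak = random.Random(12345)     # deterministic stream; with 16-bit seeds there
print(weak.getrandbits(256))    # are only 65,536 possible streams to enumerate

strong = os.urandom(32)         # 256 bits straight from the OS entropy pool,
print(strong.hex())             # which is what real key generation should use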



2. the pseudo-random number generators themselves have not been proven 
to be numerically complete. Indeed their very form suggests not.


This is untrue in several ways. There's nothing in the structure of a 
pseudorandom generator which makes it impossible to analyse, and many 
pseudorandom generators are understood extremely well. Again, this isn't 
something I'm particularly expert in, but it's a solved problem to 
roughly the same extent that the entire public-key cryptography issue is 
a solved problem (i.e. solved, barring spectacular and unexpected 
advances.)


Note that you could simply use a source of truly secure entropy to 
bypass these issues entirely, and most non-embedded operating systems 
include such a thing built-in.


Of course, the scenario for this attack, as originally outlined (Re: 
OSI 1-3 attack on Tor? in it.wikipedia), is still intact, fully correct 
and easily provable.


We've described logically why your original attack would not work (at 
least, why it would not allow any kind of security breaches - obviously 
you can bring the Tor network down using such an attack, but that's not 
exactly avoidable.) It is neither intact nor correct, and, assuming no 
security bugs in the Tor implementation, I believe it is provably so.


-Ben


Re: Compromised entry guards rejecting safe circuits (was Re: OSI 1-3 attack on Tor? in it.wikipedia)

2008-02-16 Thread Ben Wilhelm


Anon Mus wrote:
A fully global networked array of prime number testers, prime numbers 
being the underlying basis for your public key encryption technology.

1 million decimal digit long primes achieved, the search for 10 million 
digit primes underway.

http://en.wikipedia.org/wiki/Great_Internet_Mersenne_Prime_Search

http://mersenne.org/primenet/

"The virtual machine's sustained throughput ( http://mersenne.org/ips/stats.html ) 
is currently 29479 billion floating point operations per second (gigaflops), 
or 2448.9 CPU years (Pentium 90MHz) computing time per day. For the testing 
of Mersenne numbers, this is equivalent to 1052 Cray T916 supercomputers."

Take a look at just which org is offering the $100,000 prize!!! (In the 
para. headed by "v22.12 Mersenne Research Software Released".)

http://mersenne.org/ips/index.html#contest

This project went live in 1997 and the CM5 
( http://en.wikipedia.org/wiki/FROSTBURG ) was phased out in 1999 .. you 
decide.

Makes 512-bit prime location and storage look like a walk in the park.


You're suffering from several very serious misconceptions.

First off, the Mersenne primality testing network is designed to test 
numbers of a very specific form, namely 2^n - 1, for primality. It turns 
out that you can test primality for those numbers in a much more efficient 
manner than for general primes. The Mersenne algorithm is useless for 
general primes, and virtually no prime used in modern cryptography is 
going to be a Mersenne prime.
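
For the curious, the specialised test GIMPS relies on is the Lucas-Lehmer 
test, and it only applies to numbers of the form 2^p - 1 with p itself 
prime - a quick sketch:

def lucas_lehmer(p):
    # Primality test for the Mersenne number 2^p - 1 (p itself must be prime).
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(13), lucas_lehmer(11))   # True False (8191 is prime, 2047 = 23 * 89)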


Second, merely testing to see if something is prime isn't 
particularly helpful in breaking modern cryptography. You already know 
that the public key isn't prime (since it's the product of the two large 
primes that make up the private key) and you also already know that 
those primes are prime (since that's necessary for the algorithm to 
function.) What you'd need to do in order to derive the private key from 
a public key is to *factor* an extremely large number with no convenient 
properties. This is an entirely different issue from mere primality testing.


Barring major breakthroughs in factoring, I seem to remember there are 
serious arguments that modern public keys literally cannot be brute-force 
factored before the heat death of the universe. As in, if you turned every 
atom of the universe into energy, and used it to power a universe-sized 
supercomputer which reaches the theoretical limits of efficiency, you 
would not be done factoring a single public key by the time you ran out 
of energy. Unless you want to claim that the US government is actually 
*more powerful* than this, any number of supercomputers and databases 
they might have is completely irrelevant.


Now, if you do want to keep on with the "the government is all-powerful 
and can corrupt Tor installations easily" line, there are a few easy 
tactics you can use.


First, you can claim that the US government has come up with a 
mathematical breakthrough that makes factoring - and thus recovering 
private keys - far, far easier. There's certainly nothing we've 
discovered yet that proves this is impossible. Of course, there's no 
evidence for it being possible either.


Second, private keys are only as secure as the system they are stored 
on. Much more plausibly, you could claim that the US government has 
backdoors into most (if not all) modern OSes, including the ones used to 
generate Tor's directory server private keys. If the government got the 
private keys that way, there would be, of course, no barrier to them 
intercepting Tor communications in exactly the way you claim.


But claiming that the government has huge datacenters that derive private 
keys from public keys is simply impossible. The math doesn't add up.


Now for a bit of hard math, just to demonstrate that you need to think 
about your numbers a bit further:


The density of prime numbers near N can be approximated as 1/ln(N), as 
you've mentioned. This means, for 256-bit primes, the density is 
approximately 1/ln(2^256), or about 0.00564. There are 2^255 (that's 
about 5.7896 * 10^76) 256-bit numbers, therefore we can assume that 
there are approximately 1/ln(2^256) * 2^255 primes in that range.


This is approximately 3.26 * 10^74 primes.

If we assume we can store each prime number on a single atom of hydrogen 
(this is obviously a hilarious overestimation of storage density, but 
bear with me) we can store 6.02214 * 10^23 prime numbers in one gram of 
hydrogen. Thus we will need about 5.42 * 10^50 grams in order to store 
our prime database. The Sun masses approximately 1.98892 * 10^33 grams, 
so we'll need the hydrogen of roughly 270 thousand million million suns 
merely to store a list of all the 256-bit prime numbers.
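
A quick sanity-check script for those figures (everything is an 
approximation; the density uses the natural log, per the prime number 
theorem):

import math

bits = 256
density = 1 / math.log(2 ** bits)      # ~0.00564 near 2^256
primes = density * 2 ** (bits - 1)     # ~3.26e74 256-bit primes
grams = primes / 6.02214e23            # one prime stored per hydrogen atom
suns = grams / 1.98892e33              # solar masses of hydrogen needed
print(f"{primes:.3e} primes, {grams:.3e} grams, {suns:.3e} suns")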


If Tor uses 512-bit keys then this estimate is still roughly 
seventy-seven orders of magnitude too small, however.


(That was actually kind of fun to work out.)

-Ben


Re: Compromised entry guards rejecting safe circuits (was Re: OSI 1-3 attack on Tor? in it.wikipedia)

2008-02-16 Thread Ben Wilhelm

Anon Mus wrote:

Ben,

Yes, you are right, factorising this is hard, but that's not what I've 
been suggesting. What if every time you generated a pair of keys you 
stored the result somewhere!

Say you owned a huge network of, say, mil/gov computers which communicate 
securely using self-generated rotating keys. As any client finishes with 
a key pair they send them off to a central storage location.  If they 
are not there already they are added to the store.

To find the private key(s) you only need to search through the list of 
public keys. If you only find 1% of the server community's private keys 
then you've got many extra nodes to add to your dummy network.

Hopefully you understand this and I'll get some sleep tonite ( :D ).

-K-


You're continuing to drastically underestimate the numbers involved. 
Let's say that a computer is a cube, one half foot on each side. Now 
let's take the Earth, and *cover the Earth with solid computers* to a 
depth of one mile. This gives us approximately 232 billion billion 
computers. If you assume that each computer can generate a thousand 
private/public pairs per second (I believe this is an exaggeration for 
commodity hardware, though you could likely build a custom system to do 
so) then that means we get 2.32 * 10^23 keys every second.


I'm going to go handwavy here and assume that one key is approximately 
equal to one prime. This isn't true, but we'll end up within an order of 
magnitude of the right answer, and honestly more precision than that 
isn't needed.


With roughly 3.3 * 10^74 primes, attempting to cover 1% of the keyspace at 
2.32 * 10^23 keys per second would take nearly half a million million 
million million million million million *years*. Excuse me for 
not being particularly worried about this. And remember, this assumes 
the entire surface of the planet is covered, a mile thick, with 
computers. Last I checked this was not the case.
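
For anyone who wants to check the arithmetic (same assumed numbers as 
above):

keys_per_second = 2.32e23                    # mile-thick shell of computers, 1000 keys/s each
primes = 3.3e74                              # estimate from the earlier post
seconds = 0.01 * primes / keys_per_second    # covering 1% of the keyspace
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years")                  # roughly 4.5e41 years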


(Again, this also ignores the issue of where you store all this data.)

Seriously, sit down and think about the numbers some. The numbers are 
*gigantic* - so gigantic that brute force becomes implausible even if 
you assume the adversary owns all the governments and corporations of 
the world and has access to alien supercomputers.


-Ben


Re: The use of malicious botnets to disrupt The Onion Router

2008-02-01 Thread Ben Wilhelm


A manually administered . . . centralized list? Because, call me crazy, 
but a centralized list of authorized routers has some very, very 
obvious flaws in it, both technical and security-related.


-Ben

Ron Wireman wrote:
It seems to me that we owe a lot the roughly 1,500 people who donate 
their bandwidth to our project at any one time.  They give us a 
tremendous gift that allows us to participate in unpopular or even 
dangerous political speech and debate, to by-pass inappropriately 
restrictive filters, and to limit the amount of information about 
ourselves that we reveal to the organizations who run the Internet sites 
we access.  I don't wish to divulge some of the ways in which I've used 
tor to protect myself, but I'm sure all of you reading this list can 
think of many examples where it has assisted you in your own life and 
most of you use it on a frequent basis.  All of this comes at the cost 
of time and money from many volunteers who receive no benefit whatsoever 
from relaying your traffic for you.


It seems to me, however, that even this gracious act of charity may be 
no match for the types of attacks we may be faced with as we become more 
popular and, as a result, more of a target. The number of users running 
tor nodes pales in comparison to the number of computers that may be in 
any one of the many individual botnets, which are groups of hijacked 
computers controlled in unison by a single entity.  The largest of these 
botnets ever discovered had over 1,000 times the number of nodes that 
tor does.  What happens when one of these botnets is commanded to join 
tor all at once and begin harvesting private data that people naively 
did not encrypt or, worse, replacing all pictures requested with 
goatse.jpg?  These and other malicious acts could easily take place, 
perhaps even perpetrated by a malevolent government entity, and would 
cause significant disruption to our router.


We must take expedient measures to prevent this type of attack, because 
as of now, tor is quite vulnerable, perhaps even critically so.  The 
group of computers that make up the official Network Time Protocol pool, 
a network that is used to provide extremely accurate time 
synchronization for millions of computers around the world, has a 
manually administered list.  Since it has about as many nodes on it as 
tor has, it suggests that maintaining such a list would not be 
difficult.  It seems to me that this would be an excellent way to 
prevent a node flood attack.  Without it, tor will rot.


Awaiting your comments anxiously,

Ron Wireman


Re: Child pornography blocking again

2008-01-25 Thread Ben Wilhelm


Kraktus wrote:

On 25/01/2008, Eugen Leitl [EMAIL PROTECTED] wrote:

I just want to know if there is a technically feasible way of

Use your brain. Packets have no EVIL bit to test for.


I'm pretty sure my suggestion is better than an RFC April Fools' Joke.


Actually, I disagree - the April Fools' joke was obviously a joke, while 
your suggestion - which is dangerous and badly designed on several 
fronts - could be taken seriously by people.


If you can solve all those problems, there might be something to it, but 
I personally do not believe that those problems are solvable.


-Ben


Re: Child pornography blocking again

2008-01-24 Thread Ben Wilhelm


Kraktus wrote:

I realise, of course, there are problems with this.


* Use of effort that could be spent other places
* Possible legal liability issues
* Cries of "you're blocking child porn, why not also block warez/hate 
speech/freenet/political propaganda that I don't like"
* Every single problem that comes along with trying to maintain a 
blacklist, including malicious submissions, manpower, filtering


And, the biggest problems to my mind:

* If the blacklist is stored on some central server, creating a very 
nice system where people must report what they're browsing to a central 
authority
* If the blacklist is stored in a downloadable form of any kind, 
effectively making a *list of child pornography sites*


The second might be avoidable through some clever hashing, but that 
simultaneously eliminates any sort of accountability or auditability, 
and as much as I like the Tor guys I don't want them to be able to knock 
entire sites off the Tor network.
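
For what it's worth, that "clever hashing" would look roughly like this - 
ship only salted hashes of hostnames, so the list isn't readable as a 
directory (the salt and names below are made up, and it has exactly the 
auditability problem just mentioned):

import hashlib

SALT = b"some-shared-salt"      # hypothetical, shipped alongside the list

blocked_hashes = {
    hashlib.sha256(SALT + b"blocked-example.onion").hexdigest(),
}

def is_blocked(hostname):
    return hashlib.sha256(SALT + hostname.encode()).hexdigest() in blocked_hashes

print(is_blocked("blocked-example.onion"))    # True
print(is_blocked("innocent-example.onion"))   # False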


(I'm also kind of entertained at the idea of a privacy group saying, 
effectively, "okay, now that our behavior is no longer trackable, please 
send us all the kiddieporn sites you know of, thanks in advance.")


-Ben


Re: Tor server using Vista?

2008-01-04 Thread Ben Wilhelm



Alexander W. Janssen wrote:

I ain't no Windows advocate but I find this argument a bit weak.
Nowadays all the modern operating systems have the same problems: too
many services installed by default, weak administration and the general
reluctance of users to pay attention to security updates and
best practice when it comes to using common sense.


Amen to that. Out of all the systems I've administered, I've had zero 
Windows boxes compromised and one Linux box. And that isn't because 
Linux is less secure - it's because I knew Windows a lot better by the 
time I started doing stuff online, and I didn't know enough Linux at the 
time to realize I was making a horrible security vulnerability with one 
bad decision.


The most secure operating system in the world will be insecure in the 
hands of someone who doesn't understand it. The least secure operating 
system - which is probably Windows at the moment - can still be run 
quite securely if you keep on top of it.


I use Windows as a desktop system, and keep it behind an OpenBSD 
firewall/router. If for some reason I felt like this was the system I 
had to run a Tor server on, I'd run it on this system with little worry 
of compromise.


-Ben


Re: Passing another,second,individual torrc on command line to Tor possible ?

2007-12-31 Thread Ben Wilhelm


tor -f my_alternate_torrc_file

There ya go. :)

-Ben

Ben Stover wrote:

Can I start Tor with a second, individual torrc configuration file?

In general I want to use the original torrc. But occasionally I want to use 
e.g. specific exit nodes, so I must use a modified torrc. Instead of always 
having to manually edit the one and only torrc, I would appreciate having a 
second torrc which I can pass to Tor as a starting parameter on the command line.

Is this possible?
Or is there a workaround for this?

Ben








Re: some civically irresponsible exits?

2007-11-07 Thread Ben Wilhelm



Eugen Leitl wrote:

On Wed, Nov 07, 2007 at 01:41:18PM +0100, Lexi Pimenidis wrote:


Our exit node has already been used to send spam over Port 80, i.e.
using the yahoo web interface (there was a small discussion on that a


No. Spam has been sent via Yahoo. It's their problem, not Tor's.
Blocking port 25 is different, because here you can spam end
users directly.


Devil's advocate position: No, you can't. You can't connect directly to 
any user to send spam directly to their inbox. You *can* connect to 
arbitrary mail servers and request that they spam users, but that's 
their problem, not Tor's.


I think blocking port 25 is probably the right thing to do as a default, 
but I personally have all ports open on my server.


-Ben


Re: Browser dos/don'ts ( was Re: Incognito Live CD using Polipo)

2007-10-13 Thread Ben Wilhelm



TOR Admin (gpfTOR1) wrote:

Robert Hogan wrote:

Do:
Spoof user-agent (is this necessary even with javascript disabled?) (browser)


I think it is necessary. Do this job in the browser, because no proxy can
do it for SSL-encrypted stuff. And change the fake from time to time.


I disagree. Don't do anything that makes you stand out. That includes 
changing to a multitude of fake user-agents.


Pick the most common user-agent and use it. That's probably whatever the 
latest version of Firefox returns. (I'm assuming Tor traffic is 
firefox-heavy - I may be wrong on this. IE6 or IE7 may be a better 
choice. Remember, they can tell you're probably coming from Tor, so you 
want to blend in with average Tor traffic.) Then only change it if the 
most popular browser changes.


That way you blend in with the herd. It's easy to track the guy who's 
using "Bob's Krazy Web Browzur" one day, and "xXxDeAtHxXx" the next day, and 
"lol ive got a new useragent today" after that. It's not so easy to 
track one guy out of ten thousand using Firefox.


-Ben


Tor takes too much RAM

2007-07-21 Thread Ben Wilhelm


# free
 total   used   free sharedbuffers cached
Mem: 98520  96772   1748  0   2220   5848
-/+ buffers/cache:  88704   9816
Swap:65528  58480   7048
# killall tor
# free
 total   used   free sharedbuffers cached
Mem: 98520  41464  57056  0644  10356
-/+ buffers/cache:  30464  68056
Swap:65528  22496  43032

I'd love to keep it running, but when it's singlehandedly chewing up 
more than half of my system's RAM, it just isn't going to happen. Any 
suggestions on this? Are there config options I can tweak to make it a 
little less RAM-hungry, or is it just intrinsically a memory gobbler?


-Ben


Re: Tor takes too much RAM

2007-07-21 Thread Ben Wilhelm


Here are some docs for you to look over, since you clearly don't know the 
free command.


http://swoolley.org/man.cgi/1/free
http://rimuhosting.com/howto/memory.jsp (look at "interpreting free".)

Also, running it on a system and comparing the output to /proc/meminfo 
can be enlightening. I find it gives a good overview of what the system 
is doing - top might say "hey, this program is using a lot of memory", 
but free can tell you "this entire system is struggling" right on the 
command line, which is occasionally much nicer than opening up top. I 
would have included top output except that I didn't think of posting to 
the list before doing this, and I suspect that starting tor cleanly 
would not have demonstrated the memory usage as well.


Essentially, it's telling me that before I killed tor, I had 7m of 
swapfile and 1.7m of RAM free. Deducting buffers and cache, I had a 
whopping 10mb of RAM free. After I killed tor, that changed to 43mb of 
swap and 57mb of main memory free, with 11m used by buffers and cache. 
Note that the former was after I'd killed apache and mysql in an attempt 
to have a usable command-line - I imagine it would have looked worse if 
I hadn't already done that.
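
The arithmetic behind that reading, using the numbers from the first 
free run above (all values in KB):

total, used, free_kb = 98520, 96772, 1748
buffers, cached = 2220, 5848

really_used = used - buffers - cached     # 88704, the "-/+ buffers/cache" used figure
really_free = free_kb + buffers + cached  # 9816, i.e. the whopping ~10 MB actually free
print(really_used, really_free)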


So, essentially, tor was eating 90mb or so of RAM at that point. 
Considering Olaf's hilarious 1.5gb example, I guess I was getting off 
lightly.


-Ben

Scott Bennett wrote:

 On Fri, 20 Jul 2007 23:39:45 -0700 Ben Wilhelm [EMAIL PROTECTED]
wrote:


# free
 total   used   free sharedbuffers cached
Mem: 98520  96772   1748  0   2220   5848
-/+ buffers/cache:  88704   9816
Swap:65528  58480   7048
# killall tor
# free
 total   used   free sharedbuffers cached
Mem: 98520  41464  57056  0644  10356
-/+ buffers/cache:  30464  68056
Swap:65528  22496  43032

I'd love to keep it running, but when it's singlehandedly chewing up 
more than half of my system's RAM, it just isn't going to happen. Any 
suggestions on this? Are there config options I can tweak to make it a 
little less RAM-hungry, or is it just intrinsically a memory gobbler?



 How can you tell that it is?  The display of numbers above doesn't seem to
show the important figure, namely, the working set size for tor.  The rest of
what tor allocates in user space is irrelevant.  Kernel space allocations that
are page-fixed (in slang, wired [down]) are important, but those that are not
fixed shouldn't usually matter either.  What, for example, does the used
column above mean?  Is that the total virtual memory allocated to a particular
process?  To all processes?  Or is it just the page frame memory currently in
use by a particular process?  By all processes?  Even top(1) gives more useful
information than your free command.


  Scott Bennett, Comm. ASMELG, CFIAG
**
* Internet:   bennett at cs.niu.edu  *
**
* A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army.   *
*-- Gov. John Hancock, New York Journal, 28 January 1790 *
**


Re: Tor takes too much RAM

2007-07-21 Thread Ben Wilhelm



Scott Bennett wrote:

 Does LINUX have vmstat(8)?  Or swapinfo(8)/pstat(8)?  In any case,
it must have ps(1), which should give some sort of breakdown of what tor
is using.


It does have vmstat. I should point out, however, that vmstat shows 
pretty much the exact same stuff that free does, only with less math 
done for you, a few important missing numbers, and a few added numbers 
which aren't really important in this case.


(This is a different server, which is why it has more RAM)
$ vmstat  free
procs ---memory-- ---swap-- -io -system-- 
cpu
 r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy 
id wa
 0  0   2716  27472 143508  2087611 1 333  3 12 
77  7

 total   used   free sharedbuffers cached
Mem:385840 358368  27472  0 143508  20876
-/+ buffers/cache: 193984 191856
Swap:   369452   2716 366736

Tor is currently disabled on that first server because I run other 
things on it that I'd rather stayed functioning :) My relatively small 
90mb usage is pretty meaningless compared to the 1.5gb usage of others, 
though, and on my other computer it's using 136m RES according to top.


  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
24579 zorba 15   0  148m 136m 4724 S  5.3 36.3 278:10.78 tor


 So how was your system affected?  Was there heavy paging going on?
Or worse, swapping?  Those are what one would expect when there is a lot
of contention for the available page frames.


It's not a local system, so I didn't have an easy way of seeing a hard 
drive indicator. I can tell you that I was having literally 
thirty-second-long delays just trying to use the command line, which 
went away when I killed tor. So I suspect it was swapping like a mofo.


Version 0.1.2.14 - I'm just updating with Debian (unstable, I suspect.)

-Ben


Re: Blocking child pornography exits

2007-07-21 Thread Ben Wilhelm



Scott Bennett wrote:
  Not AFAIK.  It blocks exits for whatever ports you tell it to block exits 
for.  The sample torrc that comes with the package has several example lines 
that you can uncomment or that you can simply use as examples for syntax when 
writing your own ExitPolicy statements.  One of those may be an "ExitPolicy 
reject *:25", but it starts out, IIRC, having only an "ExitPolicy reject *:*" 
statement uncommented for those who want to dabble in running a middleman-only 
server.

For quite a few versions, Tor has come with a significant number of 
ports blocked, including standard ports for email, exploits, and p2p 
filesharing. I don't know if this is still the case, but if not, it's 
changed recently.


The relevant code, which seems to still be active, starts at line 542 in 
policies.c, and I'll copy the exit policy itself and relevant comment in:


#define DEFAULT_EXIT_POLICY \
  "reject *:25,reject *:119,reject *:135-139,reject *:445," \
  "reject *:465,reject *:563,reject *:587," \
  "reject *:1214,reject *:4661-4666," \
  "reject *:6346-6429,reject *:6699,reject *:6881-6999,accept *:*"

/** Parse the exit policy <b>cfg</b> into the linked list *<b>dest</b>. If
 * cfg doesn't end in an absolute accept or reject, add the default exit
 * policy afterwards. If <b>rejectprivate</b> is true, prepend
 * "reject private:*" to the policy. Return -1 if we can't parse cfg,
 * else return 0.
 */

So chances are that if you haven't explicitly added an absolute accept 
or reject to the end of your cfg, you're blocking a large number of 
ports that the tor developers have decided they don't want on their network.
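
In torrc terms, that means ending your own policy with an absolute rule, 
for example (hypothetical lines, shown commented out):

## Ending the policy with an absolute rule stops the built-in defaults
## above from being appended.
#ExitPolicy accept *:80,accept *:443,reject *:*
#ExitPolicy reject *:*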


Last I heard, the tor developers did this solely to keep the network 
usable, and not for moral reasons. But I may be wrong on that. 
Nevertheless, trying to block something as nebulous and ill-defined as 
child pornography is obviously a far, far different thing than simply 
blocking a pile of ports frequently used for p2p traffic. Tor doesn't 
even try to recognize common p2p packets, so hey.


-Ben



Re: more letters from the feds

2007-01-11 Thread Ben Wilhelm


xiando wrote:

I think this is a valid point. I ran an exit-node for a short while at home
without thinking too much about it. The huge amount of traffic I was
attracting (even within minutes of booting up) made me shut it off for the
sake of personal convenience, but I don't think I will ever go back -


Use bandwidth limiting?

BandwidthRate 45 KB
BandwidthBurst 95 KB

This is low, but that's all I can spare on my home ADSL, and at least it 
contributes something. Limit your home node and it won't attract more than 
you can spare. Also, you can lower it temporarily (the minimum is 20 KB) and 
just -HUP tor if you need to upload a big file somewhere or something like 
that.




I use 20kb because I've got 45kb upstream, and I've got my router set up 
with QOS so the Tor traffic doesn't interfere at all. (And, presumably, 
so I don't interfere with it.)


I should point out that I have since run into one small issue with 
hosting locally - several IRC servers blacklist Tor server IPs. One of 
these servers was pulling the list off somewhere that also blacklists 
entire /24s containing exit servers, and was using that mode, so my home 
computer was banned since it was a Tor IP. Luckily I know several 
IRCops, asked around, and that server no longer does that ;) but 
it was a mild inconvenience.


So, life isn't perfect running a Tor server.

-Ben


Re: more letters from the feds

2007-01-10 Thread Ben Wilhelm



Robert Hogan wrote:
* From a common-sense, peace-of-mind point of view, is running an exit-node 
strictly for co-located servers? Does anyone here run one at home? If so, 
have you had second thoughts?


I run one at home, but it's on a dedicated IP, within a virtual machine. 
 I wouldn't want to run one off any IP I actually personally used (I 
like being able to use Slashdot and Wikipedia, and I have IRC channels 
that have some security based on my IP :P) but I don't mind having it 
segmented off onto a separate IP.


Plus, this way when they say "hey, tor.pavlovian.net was downloading 
child porn" I can say "I have never once used that IP for anything of my 
own." Honestly, I don't think getting that letter about a colo server 
would be any better than getting one at home, so hey. :)


-Ben



MyFamily issues

2006-12-27 Thread Ben Wilhelm


I am having some trouble setting up MyFamily properly. I've found my 
fingerprint without any issue - it looks like DD91 7584 0D14 450F A3F9 
482C 22AC 83B8 D861 E802 - but when I try to add it to the torrc in the 
obvious way:


MyFamily DD9175840D14450FA3F9482C22AC83B8D861E802

I get the following error:

Dec 27 14:03:31.002 [warn] Failed to parse/validate config: Invalid 
nickname 'DD9175840D14450FA3F9482C22AC83B8D861E802' in MyFamily line


Suggestions? Right now I have two servers running without MyFamily, 
which isn't ideal, but I'll fix it as soon as Tor lets me :P


-Ben


Re: MyFamily issues

2006-12-27 Thread Ben Wilhelm


That did it. Perhaps the wiki should be updated? 
http://wiki.noreply.org/noreply/TheOnionRouter/TorFAQ#MultipleServers 
doesn't mention needing a dollar sign at all, but I can't figure out how 
to change it myself, so someone else oughta deal with that. :)


Thanks :)

-Ben

Joerg Maschtaler wrote:

Hi Ben,

Ben Wilhelm wrote on 27.12.2006 23:11:
I am having some trouble setting up MyFamily properly. I've found my 
fingerprint without any issue - it looks like DD91 7584 0D14 450F A3F9 
482C 22AC 83B8 D861 E802 - but when I try to add it to the torrc in the 
obvious way:


MyFamily DD9175840D14450FA3F9482C22AC83B8D861E802


Add the Dollar sign:
MyFamily $DD9175840D14450FA3F9482C22AC83B8D861E802

This is the way it works for me.

regards,
Joerg


satellite delay (was: Re: [or-talk] Re: Win32.Trojan.Agent appear when close Torpark)

2006-11-17 Thread Ben Wilhelm


For what reason doesn't the same apply on the internet? One packet sent, 
then wait a couple of seconds for it to reach its destination and the 
answer packet returns, then the next packet and so on. In my imagination, 
just loading a non-graphic website, or sending this email to the list, 
would take hours, or what? But apparently it doesn't... 


You're imagining the following situation (C: client, S: server):

C: Send me the next packet.
S: Okay, here it is.
C: Send me the next packet.
S: Okay, here it is.
C: Send me the next packet.

In reality, it's more like this:

S: Here's packet 0.
S: Here's packet 1.
S: Here's packet 2.
S: Here's packet 3.
S: Here's packet 4.
S: Here's packet 5.
S: Here's packet 6.
C: I've received packets 0 through 4 correctly.
S: Here's packet 7.
S: Here's packet 8.
S: Here's packet 9.
S: Here's packet 10.
S: Here's packet 11.
C: I've received packets 5 through 8 correctly.
S: Here's packet 12.
S: Here's packet 13.
S: Here's packet 14.
S: Here's packet 15.
S: Here's packet 16.
C: Please resend packets 12 and 14.
S: Here's packet 12 again.
S: Here's packet 14 again.
S: Here's packet 17.
S: Here's packet 18.
S: Here's packet 19.
S: Here's packet 20.
S: Here's packet 21.
C: I've received packets 9 through 18 correctly.

TCP is very complicated, but is designed to keep sending a large amount 
of data even under high lag.
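
A rough feel for the numbers, with assumed values for a geostationary 
satellite link:

rtt = 0.6                         # seconds round trip via satellite (assumed)
segment = 1460                    # bytes of payload per TCP segment
window = 64 * 1024                # bytes TCP may keep in flight before an ACK

one_at_a_time = segment / rtt     # ~2.4 KB/s if you wait for each reply
windowed = window / rtt           # ~109 KB/s with a full window in flight
print(one_at_a_time, windowed)    # (ignores slow start, loss, and Tor overhead)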


Now, as for HTTP, yes, there's going to be one lag cycle just based on 
the fact that you request a webpage and *then* you request the contents. 
But HTTP is designed so that you can request a ton of items 
simultaneously or in series, so you can just download the main webpage, 
start requesting images on it while the main webpage itself is still 
downloading, and parallelize as much as possible.


-Ben


Re: low bandwidth utilization

2006-07-03 Thread Ben Wilhelm


Or perhaps you're running Tor under VMWare? It turns out VMWare is 
absolutely awful at keeping the system clock sane, and it can easily 
create weirdnesses like this.


-Ben

Florian Reitmeir wrote:

Hi,


  /var/log/tor/log after the crash?

Nothing unusual as far as I can see:

Jul 03 06:25:34.506 [notice] Tor 0.1.1.21 opening new log file.
Jul 03 14:32:13.626 [notice] hibernate_begin(): Interrupt: will shut down in 30 
seconds. Interrupt again to exit now.
Jul 03 14:32:43.584 [notice] consider_hibernation(): Clean shutdown finished. 
Exiting.
Jul 03 14:32:49.316 [notice] Tor 0.1.1.21 opening log file.
Jul 03 14:32:49.527 [notice] Your Tor server's identity key fingerprint is 
'redgene 3BB0 DC6E A321 256D DD11 5519 7DBD 3F1E 4862 3549'
Jul 03 14:32:50.523 [notice] We now have enough directory information to build 
circuits.
Jul 03 14:32:50.687 [notice] router_orport_found_reachable(): Self-testing 
indicates your ORPort is reachable from the outside. Excellent. Publishing 
server descriptor.
Jul 03 14:32:51.383 [notice] Tor has successfully opened a circuit. Looks like 
client functionality is working.
Jul 02 06:25:22.809 [notice] Tor 0.1.1.21 opening new log file.
Jul 03 06:25:34.435 [notice] do_hup(): Received reload signal (hup). Reloading 
config.
Jul 01 06:25:31.956 [notice] Tor 0.1.1.21 opening new log file.
Jul 01 23:16:39.030 [notice] hibernate_begin(): Interrupt: will shut down in 30 
seconds. Interrupt again to exit now.
Jul 01 23:17:09.703 [notice] consider_hibernation(): Clean shutdown finished. 
Exiting.
Jul 01 23:17:24.867 [notice] Tor 0.1.1.21 opening log file.
Jul 01 23:17:25.349 [notice] Your Tor server's identity key fingerprint is 
'redgene 3BB0 DC6E A321 256D DD11 5519 7DBD 3F1E 4862 3549'
Jul 01 23:17:27.182 [notice] We now have enough directory information to build 
circuits.
Jul 01 23:17:27.746 [notice] router_orport_found_reachable(): Self-testing 
indicates your ORPort is reachable from the outside. Excellent. Publishing 
server descriptor.
Jul 01 23:17:28.275 [notice] Tor has successfully opened a circuit. Looks like 
client functionality is working.
Jul 02 06:25:22.758 [notice] do_hup(): Received reload signal (hup). Reloading 
config.
Jun 30 06:25:31.988 [notice] Tor 0.1.1.21 opening new log file.
Jul 01 06:25:31.955 [notice] do_hup(): Received reload signal (hup). Reloading 
config.
Jun 29 06:25:03.123 [notice] Tor 0.1.1.21 opening new log file.
Jun 30 06:25:31.987 [notice] do_hup(): Received reload signal (hup). Reloading 
config.
Jun 28 06:25:32.537 [notice] Tor 0.1.1.21 opening new log file.
Jun 29 06:25:03.122 [notice] do_hup(): Received reload signal (hup). Reloading 
config.


is this your log file? the timestamps really make me shudder, normally log
file entries are sorted in time...

do you use ntpdate with cron? or an ntpd on the system?




Re: Improvement of memory allocation possible?

2006-05-11 Thread Ben Wilhelm


Does your allocator actually return memory to the OS? Many don't, and in 
my (admittedly brief) look through the source, I don't remember seeing a 
custom allocator.


If it doesn't return memory to the OS, it'll just sit at its maximum 
allocated size for all eternity, despite not using much of this memory. 
(Although your buffer-shrinking would help reduce that 
maximum-allocated-size.)


-Ben

Nick Mathewson wrote:

On Thu, May 11, 2006 at 09:21:05PM +0200, Joerg Maschtaler wrote:

Hi,

is it possible to add an option which allows shrinking memory buffers
immediately if they are not full? [1]


I don't think you would want that; the CPU usage would be *insanely*
high.  Every time you transmitted any information at all, you'd need
to shrink the buffer, and then immediately re-grow the buffer
when you had more data to transmit.

Right now, Tor shrinks buffers every 60 seconds, down to the next
largest power of two above the largest amount of data in the buffer at any
time in the last 60 seconds.  A 60-second lag here probably does no
harm memory-wise, but the power-of-two thing will, on average, make
25% of your buffer space unused.

The only thing that would actually help trade cpu for RAM here won't
be a more frequent shrinking; instead, we'd have to switch off the
power-of-two buffers implementation.  But if we're going to do *that*,
we may as well move to an mbuf/skbuff-style implementation, and get
improved RAM usage and improved CPU usage at the same time.  (That
approach will make our SSL frame-size uniformity code a little
trickier, but I think we can handle that.)


I run a Tor server on a virtual server in which the amount of RAM is
the bottleneck of the system.
For ~3 weeks I have been measuring the resident memory allocation and the
corresponding traffic of the Tor server, and the thing I notice is that
the allocation only shrinks if I shut down (and restart) the Tor
server. [2]


Hm.  I should look at a breakdown of buffer size; I'll try to do that
later tonight, once I've had my server running for a bit.  It's
probably important to know whether our real problem is wedged
connections whose buffers get impossibly large, or buffers whose
capacities are larger than they have to be.

yrs,