Re: Any way to secure/anonymize ALL traffic?

2010-12-22 Thread 7v5w7go9ub0o
On 12/22/10 20:32, Kyle Williams wrote:
> On Wed, Dec 22, 2010 at 8:39 AM,
> 7v5w7go9ub0o<7v5w7go9u...@gmail.com>wrote:
>
>>
>> Any and ALL suggests a machine that allows only HTTP/S activity
>> to/from a TOR process; to/from a TOR entry node; all other traffic
>> (e.g. UDP from some sneaky plugin) is blocked.
>>
>> An iptables script or Windows firewall could do that. Presumably a
>> second script would be invoked for normal operation.
>>
>> Alternatively, VMs dedicated to TOR applications could achieve
>> your goal, plus protect your box if something grabs your e.g.
>> browser and tries to sniff around.
>>
>> JanusVM(.com) does exactly this and works with any OS.

Dang. I went to that site and was impressed; yet I was not at
all inclined to try it out.

Why? Suddenly it dawned on me that my closed-minded attitude was
because of VM prejudice ( :-) ) - I'm a Linux user and so am oriented
toward QEMU and VirtualBox (I presume that VMware is a favorite and best
choice for Windows users). I'd guess there are a number of us who
have never checked out JanusVM because we don't want to learn VMware
just to experiment with a single application.

A quick google came up with this:
<http://www.ubuntugeek.com/howto-convert-vmware-image-to-virtualbox-image.html>
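
The conversion itself looks like it should be a one-liner with either
qemu-img or VBoxManage (an untested sketch; the file names are made up,
and recent VirtualBox versions can reportedly attach a .vmdk directly,
so conversion may not even be needed):

    # Convert the VMware disk to a VirtualBox VDI with qemu-img
    # (requires a qemu-img build that supports the vdi output format)...
    qemu-img convert -f vmdk -O vdi JanusVM.vmdk JanusVM.vdi

    # ...or do the same with VirtualBox's own tool.
    VBoxManage clonehd JanusVM.vmdk JanusVM.vdi --format VDI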

JanusVM seems an important application, and I don't want to reinvent the
wheel putting TOR into a VM! So I hope to play with conversion
sometime next week. But if you already know how to do this (convert),
how about putting a note on your web page telling VB and Qemu users how
to use JanusVM on their VM host of choice?




Re: Any way to secure/anonymize ALL traffic?

2010-12-22 Thread 7v5w7go9ub0o
On 12/22/10 17:10, Praedor Atrebates wrote:
> Could one setup a VM with some arbitrary timezone for it alone and
> run tor and bind there so that flash and javascript cannot get such
> info as local timezone, etc?  Would it be possible to have the VM
> change timezone in some random/semi-random fashion so that any
> timezone (and other) info that could be otherwise acquired would be
> just as unreliable an identifier of your system/location as
> information acquired from a tor session?  Then, even if flash or
> javascript did try to pull information outside tor it would be
> totally bogus and ever-changing.  It would still be nice to be able
> to squelch any attempt by flash to find your REAL IP address by
> forcing it to ALWAYS exit via tor no matter what.
>

Yes. Feed the VM either random or standardized (every TOR VM has the
same "fingerprint") data.

As mentioned earlier, a firewall (in this case within the VM) can block
all connections except those between TOR and the TOR entry nodes; the VM
insulates any unique user info from a roving plugin/extension. The VM
also protects the host, should the application within be compromised
(e.g. memory attack).

JAVA is capable of more identity-revealing mischief than JS; within a VM
you could safely run even JAVA.

HTH



Re: Any way to secure/anonymize ALL traffic?

2010-12-22 Thread 7v5w7go9ub0o
On 12/22/10 08:38, Praedor Atrebates wrote:
> I have always been disturbed by the fact that javascript or flash
> can sidestep tor and give away your real IP.  Is there truly no way
> to control one's own computer so that any and ALL traffic that goes
> out to the ethernet port or wlan gets directed through tor no matter
> what?  Can any combination of software and hardware prevent software
> on one's own computer from acting the way someone else wants rather
> than as the owner wants?  I would love to be able to use javascript
> and flash (some site require one or the other or both to be
> functional) and know that ANY traffic that exits my own system WILL
> be directed through the tor network.
>
>

"Any and ALL" suggests a machine that allows only HTTP/S activity
to/from a TOR process, and TOR traffic to/from a TOR entry node; all
other traffic (e.g. UDP from some sneaky plugin) is blocked.

An iptables script or Windows firewall could do that. Presumably a
second script would be invoked for normal operation.
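
Something along these lines, for instance (an untested sketch that
assumes Tor runs under a dedicated user named "debian-tor"; adjust the
user name to your install):

    #!/bin/sh
    # Default-deny all outbound traffic.
    iptables -F OUTPUT
    iptables -P OUTPUT DROP

    # Allow replies to connections that are already established.
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Let local applications reach Tor's SocksPort on loopback only.
    iptables -A OUTPUT -o lo -j ACCEPT

    # Let the Tor daemon itself (running as its own user) reach its entry node.
    iptables -A OUTPUT -m owner --uid-owner debian-tor -p tcp --syn -j ACCEPT

    # Everything else - stray UDP from plugins, direct HTTP, DNS - is dropped.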

Alternatively, VMs dedicated to TOR applications could achieve your
goal, plus protect your box if something grabs your e.g. browser and
tries to sniff around.

HTH



Re: Adding voip to torchat

2010-12-19 Thread 7v5w7go9ub0o
On 12/18/10 12:25, intrigeri wrote:
> 7v5w7go9ub0o wrote (18 Dec 2010 16:45:06 GMT) :
>> Did you do any testing with SIP clients - e.g. SIP Communicator
>> and/or SFLphone? I ask because each of these seem to be very
>> active, and also offer ZRTP.
>
> I did not. Please let me know any testing results about such
> matters. The Tor wiki is probably a nice place to do so.

Well, I'm having some problems with SIP communicator (have lost interest
in SFLphone) - but I do intend some testing.

How would I use the Tor wiki
<https://trac.torproject.org/projects/tor/wiki>?  Are you referring to a
hidden service?




Re: Adding voip to torchat

2010-12-18 Thread 7v5w7go9ub0o
On 12/18/10 09:27, intrigeri wrote:
> Hi,
>
> Gregory Maxwell wrote (18 Dec 2010 11:08:39 GMT) :
>> On Sat, Dec 18, 2010 at 4:55 AM,  xhdhx
>> wrote:
>>> I figured the lgical thing to add to torchat would be voip .Is
>>> there any move to that end , can anyone give me pointers as to
>>> probable protocols , packages that can be ported to torchat .Or
>>> how abt getting ekiga to do the same along with zrtp .???
>
> Preliminary testing showed OnionCat + Mumble to be a working and
> relatively easy to setup Tor-enabled VoIP solution; the 1/2s - 1s
> delay is only slightly annoying.

Nice! And it seems that Mumble is working on video as well! I'm guessing
that the strength of multi-platform Mumble is its ability to handle delays?
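
For anyone wanting to reproduce that OnionCat + Mumble setup, the Tor
side appears to be roughly this (an untested sketch assuming OnionCat's
default port 8060; check the OnionCat documentation for your version):

    # torrc: publish a hidden service that forwards to the local OnionCat port.
    echo 'HiddenServiceDir /var/lib/tor/onioncat/' >> /etc/tor/torrc
    echo 'HiddenServicePort 8060 127.0.0.1:8060'   >> /etc/tor/torrc

    # Restart Tor, read the generated hostname, then bring up the tunnel.
    ONION=$(cat /var/lib/tor/onioncat/hostname)
    ocat "$ONION"

    # Mumble is then pointed at the peer's OnionCat IPv6 address
    # instead of a public IP.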

Did you do any testing with SIP clients - e.g. SIP Communicator and/or
SFLphone? I ask because each of these seem to be very active, and also
offer ZRTP.

TIA





Re: shadowserver.org

2010-06-14 Thread 7v5w7go9ub0o
On 06/14/10 18:52, John Brooks wrote:
[]
>>  And second, the exit policy of my node does not allow
>> IRC.
>>
>> For me this makes no sense at all.
>>
>
> From my experience, shadowserver has a habit of being overzealous
> like this. I've never dealt with them in the context of Tor, but I
> had an experience trying to get them to remove a large, legitimate
> IRC network from their blacklists awhile ago (apparently, some
> wireless providers use these blacklists to block traffic by IP). My
> impression is that anything that they consider to be even
> peripherally related to botnet or spam activity gets blacklisted and
> reported, without much further investigation. I was told that they
> removed those servers from their blacklists, but as of now (many
> months later), they are still listed.
>
> Many ISPs are willing to simply ignore automated and often-incorrect
>  abuse reports like these.

Given that tor-readme.spamt.net does not allow IRC, this may indeed be a
false alarm. "Details" are necessary to understand what may, or may not
have happened!

Perhaps a gentle offensive would be appropriate in this situation!? e.g.

-  A letter to server4you (cc to shadowserver) re-emphasizing tor's
commitment to legitimate use, and educating them about "automated and
often-incorrect abuse reports"!? The objective of this gentle offensive
would be to add server4you to the list of ISPs that ignore
shadowserver alarms.

This raises the question: does anyone have a well-written letter and/or
links to articles documenting honeypot/shadowserver false positives?
Perhaps a well-constructed letter would be something that could be
maintained on the TOR home page, available to other node operators in
similar situations!?

(Also, perhaps TOR should additionally start documenting cases of false
positives - it may become very useful when the next political onslaught
against anonymity becomes active.)





Re: shadowserver.org

2010-06-14 Thread 7v5w7go9ub0o
On 06/14/10 10:02, alex-...@copton.net wrote:

> I am running the exit-node tor-readme.spamt.net.

Thank You tor-readme.spamt.net, for your generous contribution to tor!!!


> My provider, server4you, keeps getting abuse reports from
> shadowserver.org. According to the abuse service they are running
> honeynets which record activity comming from my exit-node's IP.

What, specifically, are they tracking to your IP? This unspecific
complaint could be anything from an innocent series of pings to an
out-and-out stream from Metasploit!?

>
> I have tried to communicate directly with shadowserver.org but got
> no answer.

[]

> Any recommendations on what I could do?

Guess I'd politely tell server4you, with a copy to shadowserver, that
you want to accommodate shadowserver; that they've been unresponsive;
and that you'll need specific information to fix the problem. Once you
have that info, you may only need to block specific ports at shadowserver.
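
In torrc terms that would just be a couple of narrow ExitPolicy lines; a
sketch (the netblock and ports below are placeholders, not
shadowserver's real addresses):

    # Show what the relay's policy currently looks like.
    grep -n '^ExitPolicy' /etc/tor/torrc

    # Lines of this form go BEFORE any broad "accept" entries, since Tor
    # evaluates ExitPolicy lines in order:
    #   ExitPolicy reject 192.0.2.0/24:*
    #   ExitPolicy reject *:6660-6667

    # Reload Tor so the edited policy takes effect.
    kill -HUP "$(pidof tor)"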

HTH





Re: [GSoC] Improving Snakes on a Tor

2010-05-01 Thread 7v5w7go9ub0o
On 05/01/10 00:15, John M. Schanck wrote:
[]
> I'm going to be working on improving the Snakes on a Tor (SoaT) exit
> scanner. For those of you not familiar with it, SoaT aims to detect
> malicious, misconfigured, or heavily censored exit nodes by comparing the
> results of queries fetched across those exits to results obtained without
> Tor. It's an ambitious project, originally developed by Mike Perry and
> crafted into its current form by Aleksei Gorney during GSoC 2008, so my
> goals are modest. I'm going to begin by stabilizing the existing codebase,
> and then work on minimizing the number of false positives generated by the
> current filters. If time permits I'll also begin designing new filters to
> handle adversaries not yet accounted for.
>
[]

GSoC is God's work; thank you!




Re: Torlock - a simple script to prevent outgoing packets from bypassing Tor.

2010-03-01 Thread 7v5w7go9ub0o

On 03/01/10 11:38, Kyle Williams wrote:

You might want to look at JanusVM.


I can't quite tell; I'm guessing that JanusVM uses a VPN (TUN/TAP) to
redirect all host packets to the VM - thereby blocking any "loose"
packets (any non-TOR interaction with the ISP - which may be a hotspot)?


TIA

[]


This script, named Torlock, does the following things when used to start
Tor:
- Creates a special user named torlock by default (if you run it first time
  or have removed that user after previous Tor session).
- Uses Iptables to block network access for everyone except for torlock.
- Setuids to torlock and starts Tor. Tor will be started in background mode,
  and its output redirected to a file.

When used to stop Tor, it stops Tor, unlocks network access, and
(optionally)
removes torlock user.

[]
Nice!  Thank You
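
For readers who just want the gist, a stripped-down sketch of the same
idea (this is not the actual Torlock code; the user name and log path
are arbitrary):

    #!/bin/sh
    # Create the dedicated user if it does not exist yet.
    id torlock >/dev/null 2>&1 || useradd -r -s /bin/false torlock

    # Default-deny outbound traffic, then allow only loopback, established
    # replies, and the torlock user itself.
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner torlock -j ACCEPT

    # Start Tor as that user, in the background, logging to a file.
    su -s /bin/sh -c 'tor >> /var/log/torlock.log 2>&1 &' torlock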


Re: Create a SAFE TOR Hidden Service in a VM (Re: Please Help Me Test my Hidden Service Pt. 2)

2010-02-25 Thread 7v5w7go9ub0o
On 02/24/10 23:16, Ted Smith wrote:
> On Wed, 2010-02-24 at 11:56 -0500, 7v5w7go9ub0o wrote:
[]
>> Perhaps mention the benefits of TPM chips (on 'ix, they can be
>> configured to benefit the user, not some record company)?
>>
> Yup. Check out Trusted Grub if you're blessed with the appropriate
> hardware.
[]
>> - FWIW, I run a quick MD5 hash check on the boot partition as part
>> of my boot up. Quick and easy; again, IDS, not IPS.
>>
> Do you read the source for your shell script before every boot? The
> attacker could just replace your hash check with a no-op and print
> "Everything is fine", and you wouldn't be any wiser.
>

That's right - unless, I suppose, you could store it somewhere in the
TPM chip, and have TPM oversee the hashing. But as you mention, Trusted
Grub is the more elegant solution. (Wish I could get a TPM chip for my
Asus P6T :-( )

FWIW, I run this check after boot-up, once Loop-AES OTFE is active
and makes the encrypted hash available (sigh... intrusion detection, not
intrusion prevention).






Re: TorChat is a security hazard

2010-02-24 Thread 7v5w7go9ub0o
On 02/23/10 22:38, Paul Campbell wrote:
[snip]
>
> It is possible to run Off-the-Record Messaging over Tor.
> Off-the-Record Messaging has all kinds of features: encryption,
> perfect forward secrecy and deniable authentication.  And it doesn't
> have the problems of "TorChat".

Good point on OTR messaging.

Many of us use Pidgin to facilitate OTR messaging within a SILC network
chatroom (created on the fly). SILC is quick and flexible - much better
than freenode.

TOR is comfortably used by Pidgin to get to SILC, making the OTR
conversation secure, and anonymous to all but the participants





Re: Create a SAFE TOR Hidden Service in a VM (Re: Please Help Me Test my Hidden Service Pt. 2)

2010-02-24 Thread 7v5w7go9ub0o

On 02/24/10 00:10, Ringo wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

One update that should be noted is that this doesn't protect against
"bad nanny" attacks. With full disk encryption, the boot partition isn't
encrypted (as you have to load it so it can ask for your passphrase and
decrypt the rest of the drive). If the machine isn't physically secured,
it's vulnerable to this type of attack.


Perhaps mention the benefits of TPM chips (on 'ix, they can be 
configured to benefit the user, not some record company)?


- Alternatively, a simple BIOS boot password will block the nanny from
using your own CPU against you (e.g. loading a CD or USB OS). Should she
delete the password - which she wouldn't do - she won't be able to
replace it, and you'll then know that you need to use a different HD.


- FWIW, I run a quick MD5 hash check on the boot partition as part of my
boot-up. Quick and easy; again, IDS, not IPS (a sketch follows below).


 - Somewhere I read of using smartmontools to keep track of disk-usage; 
a script interrogates the HD at shutdown and again at startup; if they 
don't match, the drive was used outside of the OS (e.g. removed and 
copied by a forensic program). Suppose you could add a second, manual 
test (or hidden script) to verify that they didn't crack your
encryption and use your own OS (a smartctl sketch also follows below).
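
Rough sketches of those two checks (untested; the paths, device name,
and SMART attribute names are assumptions to adapt):

    # Boot-partition check.  One-time setup from a known-good state, with the
    # hash list stored on the encrypted volume:
    #   find /boot -type f -exec md5sum {} + > /secure/boot.md5
    # Then, once the encrypted volume is mounted after boot:
    md5sum -c --quiet /secure/boot.md5 \
        || echo "WARNING: boot partition has been modified!" >&2

    # Disk-usage check.  At shutdown, record usage-related SMART counters:
    smartctl -A /dev/sda | grep -E 'Power_On_Hours|Power_Cycle_Count' \
        > /secure/smart.shutdown
    # At the next boot, compare a fresh reading (allowing for the one
    # legitimate power cycle between shutdown and startup):
    smartctl -A /dev/sda | grep -E 'Power_On_Hours|Power_Cycle_Count' \
        | diff /secure/smart.shutdown - \
        || echo "WARNING: drive counters changed while the OS was down" >&2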


Of course, nothing is 100%




Create a SAFE TOR Hidden Service in a VM (Re: Please Help Me Test my Hidden Service Pt. 2)

2010-02-23 Thread 7v5w7go9ub0o

Good job!

IMHO this is a very nice paper; well written!

(Adjusted the title of this post a bit, in case readers weren't aware
of your goal.)


(FWIW, some might want to read the paper - to gain a lot of insight and 
background - and then download/test a copy of your (sanitized) .img 
file. The first run of the VM would be -with- saving of changes to the
VM, so as to create and save a unique, permanent service name;
subsequent runs would discard changes!?)




Re: Tor on the Nokia N900 (Mobile Tor stuff)

2010-02-20 Thread 7v5w7go9ub0o
On 02/19/10 20:09, Jacob Appelbaum wrote:
> 7v5w7go9ub0o wrote:
>> On 02/18/10 20:07, Jacob Appelbaum wrote:
>>> The performance of Tor is similar to any other Tor client - this is our
>>> reference C implementation running on the N900.
>>>
>>> With that said - You may want to hold out and get an Android phone.
>>> We're looking to do a release of Tor on Android next week. We have some
>>> very promising alphas and it's quite exciting!
>>
>> Please correct me if I'm wrong, but am a little surprised at the
>> interest in TOR on Android  - Android seems a closed, phone-home "cloud"
>> computer with little/no regard for privacy or anonymity. I'd always
>> wonder about a nice little log somewhere on my phone and/or in the "cloud".
>>
>
> I think that Android offers us a new possibility for telephones. I also
> like the N900 but I feel that Nokia often screws their user community.
> It's good to have options and so the more Tor on the more devices, the
> better.
>

Understood/Agreed. Especially given the periodic political wars on
privacy and anonymity.

> You may be interested in hearing about the Guardian project:
> http://openideals.com/guardian/
>
> Additionally, you may also be interested in Noisedroid:
> https://www.noisebridge.net/wiki/Noisedroid
>
> Or perhaps the more well known cyanogen firmware:
> http://www.cyanogenmod.com/
>
> All of those offer a possibility for an Android system built entirely
> from Free Software pieces. The big missing piece is the baseband and
> when last I checked there was not a single smart phone with a free
> baseband firmware. Harald Welte is currently working on on solving this
> problem for the Calypso chipset:
> http://laforge.gnumonks.org/weblog/2010/02/19/#20100219-announcing_osmocom_bb
>
> The future looks nice all around. Having Tor on as many of these devices
> will provide many people with options beyond what we can imagine.
>
>> OTOH, IIUC, The N900 can be configured as a traditional lap/desktop.
>> (Arguably, one may want to hold out for an entirely open-source meego
>> N900 with the new Intel chip)
>>

Thank you for the informative reply. I'm quite clueless about the
mobile/cell world and these are very useful links.

I presently carry a TracFone for emergencies, a small camera for photos,
and use a laptop at wifi hotspots for telephone and net use - a lot of
"stuff".

My goal is to consolidate all of that into a powerful Linux
cell phone that I can maintain from my desktop (Ubuntu or MeeGo) - as I
maintain my laptop (Gentoo) now. The x86 Moorestown seems a powerful
chip; MeeGo is open source; I'm guessing that Moorestown and MeeGo will
go into the next high-end Nokia.

I'd look for an open(?) phone with a good camera and not use it for cell
phoning (or perhaps get a limited monthly T-mobile plan when I'm on the
road). Add micro-USB or wireless USB, and I could occasionally attach a
folding keyboard.

(thoughts about the above welcomed)





Re: Tor on the Nokia N900 (Mobile Tor stuff)

2010-02-20 Thread 7v5w7go9ub0o
On 02/19/10 16:13, Rich Jones wrote:
Thanks for the reply. After some thought, I realize that the perfect 
should not be an enemy of the good; that TORing the net remains 
valuable, even if the OS is handshaking the cloud.

It is also important that as many (cell) users as possible use/support 
TOR - use it or lose it.

> Perhaps, that is a good reason FOR Tor to be on Android.
>
> I'm a huge android fan (currently making my living off of Android
> development, in fact) - and the reason that it is so interesting to me
> is because it covers all of the major US carriers, numerous
> international carriers, and it's available on both high and low end
> phones. There is a huge amount of interoperability that you just don't
> get by just developing for a single Nokia phone.




Re: Tor on the Nokia N900 (Mobile Tor stuff)

2010-02-19 Thread 7v5w7go9ub0o
On 02/18/10 20:07, Jacob Appelbaum wrote:
> The performance of Tor is similar to any other Tor client - this is our
> reference C implementation running on the N900.
>
> With that said - You may want to hold out and get an Android phone.
> We're looking to do a release of Tor on Android next week. We have some
> very promising alphas and it's quite exciting!

Please correct me if I'm wrong, but I am a little surprised at the
interest in TOR on Android - Android seems a closed, phone-home "cloud"
computer with little/no regard for privacy or anonymity. I'd always 
wonder about a nice little log somewhere on my phone and/or in the "cloud".

OTOH, IIUC, The N900 can be configured as a traditional lap/desktop. 
(Arguably, one may want to hold out for an entirely open-source meego 
N900 with the new Intel chip)





Re: browser fingerprinting - panopticlick

2010-01-31 Thread 7v5w7go9ub0o

Kyle Williams wrote:

7v5w7go9ub0o wrote:

Andrew Lewman wrote:


On 01/29/2010 08:20 PM, 7v5w7go9ub0o wrote:

As we slowly transition to web 2.0, probably the next step is 
putting the TOR browser in a VM full of bogus, randomized 
userid/sysid/network information - carefully firewalled to 
allow TOR access only (TOR would be running somewhere outside 
the browser VM).


Already working on that, https://www.torproject.org/torvm/ or 
pick a live cd with tor integrated into it.



Good to see these projects being developed. IIUC, the TORVM is a 
tor client; so the TORVM is designed for easy installation, and 
perhaps to contain any exploit of TOR!?



This was one of the design points of Tor VM; to protect Tor by 
running it inside a VM, so if your browser in the HOST OS goes bad on

 you Tor would be protected inside the VM.

Guess I was thinking of a different approach: putting Firefox in a 
VM and just letting it go ahead and get crazy with flash, JS, 
cookies (.. I have tired of tweaking NoScript, RequestPolicy, and 
CS Lite all the time.).   TOR is running in a chroot jail on 
the "regular" OS, connected by network.


JS/Flash will presumably look for unique or geographic information
 within the VM and will get only bogus stuff which is cleaned and 
randomized every few minutes, along with cookies and caches. DNS is
 "unbound", elsewhere on the internal network, and has protection 
against many of the "DNS tricks". FWICT the obtainable network 
information all reflects the virtual Ethernet.




You may want to take a look at another project I've had out for a few
 months, but haven't really made much light of it. Chromium Browser 
VM http://www.janusvm.com/chromium_vm/


The name says it all.  It's Chromium running inside a VM.  Unlike 
traditional VMs, this VM attempts to make the browser feel like a 
native application to the HOST OS even though it's running inside the
 VM.  If you open a "Incognito" session with Chromium, it does a 
pretty good job at protecting your privacy with regards to your 
history and cookies, preventing the disclosure of what sites you've 
visited on the Internet (tested against JS & CSS).  Check it out.


You can run it in different modes: - Exported browser display 
(default) - Exported browser display with plugins disabled - Browser 
in a local X server (inside the VM's window or as a boot CD.) - 
Browser in a local X server with plugins disabled (inside the VM's 
window or as a boot CD.) - All the above options + Tor


The ISO is also bootable from a CD-ROM, just burn it, boot it, and 
choose a boot option with "Local X Server".  It uses the same drivers
 turnkey linux (aka: Ubuntu 8.04). So it's over kill for driver 
support from the VM stand point, but it's good as bootable CD for 
lots of different hardware vendors.



Dang!   This makes a lot of sense! A fast, "throwaway" browser, quickly
(instantly?) reloaded in a virgin state - as opposed to the traditional
approach of a heavily-protected Firefox remaining in memory for a while.

As you know, on Linux one simply QEMU/KVMs the .iso on storage; dead easy.

I'd guess there is reluctance to try it, as many believe that Google is
satan and fear that there is home-phoning to the "cloud" going on with
Chromium. Of course, running it in a well-firewalled, standardized VM
may render that information meaningless, and any reporting outside of
TOR impossible.

[]


Against the EFF's new fingerprinting tool, this browser VM masks most
 of your real attributes, but fails when it comes your screen size. 
Interestingly, the color depth was off and reported 24 when should be
 32.  BTW, the performance benchmarks with this browser inside (or 
outside) a VM smoke FF and IE hands down.  Kudos to Google. :)


Got a copy; gonna give it a try!

(FWIW, I have had good luck with a hardened-Gentoo FF QEMU/KVM VM, except
for graphics, which suck. Once they/I figure out how to get GPU
pass-through, I'll do routine browsing - including flash/silverlight
streaming - in it. IIUC Chromium does HTML5 video; I will see if I can
get some HTML5 pass-through video streaming out of your .iso (though,
obviously, not through TOR).)





Re: browser fingerprinting - panopticlick

2010-01-31 Thread 7v5w7go9ub0o
Andrew Lewman wrote:
> On 01/30/2010 08:40 PM, 7v5w7go9ub0o wrote:
>> Given the implications of panopticlick, have you any interest/plans
>> in making Torbutton fingerprints even more indistinguishable (e.g.
>> give every user a windows I.E. fingerprint)
> 
> Just to highlight what Mike said,
> 
> "As an aside, since there are already some questions in #tor and 
> #tor-dev, I want to point out that Torbutton's obfuscation features 
> are only intended to make you appear uniform amongst other Tor users.
>  Tor users already stick out like a sore thumb because of using exit 
> IPs, and the small numbers relative to the rest of your vistor base 
> will make Torbutton's obfuscated settings appear very unique compared
>  to regular visitors."
> 
> All Tor users should look the same.  Not the same as all Tor users
> look like the rest of the Internet.  You already know it's a tor user
> because of the easily identifiable exit relay ip address.  It should
> be hard to tell if there is 1 tor user or 1 million from the other
> information gleaned about the browser.
> 

Agreed; first of all, TOR users should look the same.

1. FWICT, the TORBUTTON obfuscation occurs only on the User-Agent
string. To make us look the same, ISTM the HTTP_ACCEPT headers should
also be standardized (a concrete sketch is at the end of this message).

Perhaps all of the fields tested by panopticlick could be standardized -
reporting that JS is active even if it isn't (or vice versa)? Obviously
there will be additional tests beyond those of panopticlick.

2. Given that the goal is for all TOR users to look the same, there
seems a parallel argument for all TOR users to also be as
indistinguishable as possible from the dominant browser on the net (IE
7 on XP?) - just in case some signature collector doesn't correlate with
the tor exit, but can still tell we were using TOR because we bear the
TORBUTTON signature.

It just seems to me that the panopticlick signature trick is now out of
the bag, and it will become widely implemented. Best would be for ALL
browsers to appear the same and be indistinguishable.
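
As a concrete illustration of point 1 (not a Torbutton feature, just
hand-set prefs), the two headers could be pinned in the profile's
user.js; the profile path and the "standard" values below are made-up
examples:

    P=~/.mozilla/firefox/anon.profile        # example profile path
    echo 'user_pref("general.useragent.override", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)");' >> "$P"/user.js
    echo 'user_pref("network.http.accept.default", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");' >> "$P"/user.js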







Re: browser fingerprinting - panopticlick

2010-01-31 Thread 7v5w7go9ub0o
scar wrote:

> 
> thanks for the suggestions, 7v5w7go9ub0o.
> 
> i also read through [1] and am trying out the LinkStatus add-on[2].
> 
> it seems to work, and is kind of useful in that it tells me in the 
> status bar the time i last visited a link.
> 
> 
> 1. http://whattheinternetknowsaboutyou.com/docs/solutions.html 2.
> https://addons.mozilla.org/en-US/firefox/addon/12312

Nice links; thanks. Thanks for the status report.

Being a little extension-averse, I think I'll start some initial testing
of layout.css.visited_links_enabled=false.

Please post your future thoughts about this here!  :-)










Re: browser fingerprinting - panopticlick

2010-01-30 Thread 7v5w7go9ub0o
Mike Perry wrote:
[]
> 
> The reason why Torbutton didn't opt for the same origin policy method
>  is because Tor exit nodes can impersonate any non-https origin they
>  choose, and query your history or store global cache identifiers
> that way. It was basically all or nothing for us.

Ah. makes sense.

> 
> But yes, it would be nice if Colin Jackson and company kept 
> SafeHistory and SafeCache updated for regular users. Sadly they seem
>  to have forgotten about it. I wonder if anyone will make a fork and
>  update it.
> 
IIRC, they were also concerned about the "wild west" of FF internal
extension management - that a bad guy can wreak havoc in there (of course,
Torbutton has done that to our benefit :-) ).

Given the implications of panopticlick, have you any interest/plans in
making Torbutton fingerprints even more indistinguishable (e.g. give
every user a Windows IE fingerprint)?







Re: browser fingerprinting - panopticlick

2010-01-30 Thread 7v5w7go9ub0o
scar wrote:

> 
> Mike Perry @ 01/28/2010 02:04 PM:
>> After all, in normal operation, your history leaks one fuckload of
>> a lot of bits. And that's a technical term. Sensitive ones too,
>> like what diseases and genetic conditions you may have (via Google
>> Health url history, or Wikipedia url history). It's pretty annoying
>> that the browser makers really have no plan to do anything about
>> that massive privacy leak.
> 
> isn't there any way to protect against that without using
> Tor/Torbutton? i think there was a SafeHistory add-on, but it's still
> not been ported to FF 3.0+.

IIUC, SafeHistory (with other stuff) has been incorporated into Torbutton.

Also, you can use Torbutton - benefiting from its protections and
obscuration - without using TOR. So if you have, say, three Firefox
browser profiles (e.g. banker, anonymous, and general browsing), you can
equip each with Torbutton and configure only the anonymous browser to go out
via Polipo/TOR. The other two can go out via Polipo/direct (obviously,
two instances of polipo). Simply configure each Torbutton's proxy
setting appropriately.
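
A sketch of that layout (profile names and ports are just examples;
Polipo accepts its settings as name=value arguments):

    #!/bin/sh
    # One polipo chained to Tor for the anonymous profile...
    polipo proxyPort=8118 socksParentProxy=localhost:9050 socksProxyType=socks5 &

    # ...and one going out directly for the other profiles.
    polipo proxyPort=8123 &

    # Each profile's Torbutton then points at the matching local port.
    firefox -no-remote -P anonymous &
    firefox -no-remote -P banker &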




Re: browser fingerprinting - panopticlick

2010-01-30 Thread 7v5w7go9ub0o
scar wrote:
> -BEGIN PGP SIGNED MESSAGE- Hash: SHA256
> 
> Mike Perry @ 01/28/2010 02:04 PM:
>> After all, in normal operation, your history leaks one fuckload of
>>  a lot of bits. And that's a technical term. Sensitive ones too, 
>> like what diseases and genetic conditions you may have (via Google
>>  Health url history, or Wikipedia url history). It's pretty
>> annoying that the browser makers really have no plan to do anything
>> about that massive privacy leak.
> 
> isn't there any way to protect against that without using 
> Tor/Torbutton? i think there was a SafeHistory add-on, but it's still
>  not been ported to FF 3.0+.

IIUC, SafeHistory (with other stuff) has been incorporated into Torbutton.

Also, you can use Torbutton - benefiting from its protections and
obscuration - without using TOR. So if you have, say, three Firefox
browser profiles (e.g. banker, anonymous, and general browsing), you can
equip each with Torbutton and configure only the anonymous browser to go out
via Polipo/TOR. The other two can go out via Polipo/direct. Simply
configure each Torbutton proxy setting appropriately.




Re: browser fingerprinting - panopticlick

2010-01-30 Thread 7v5w7go9ub0o
Andrew Lewman wrote:
> On 01/29/2010 08:20 PM, 7v5w7go9ub0o wrote:
>> As we slowly transition to web 2.0, probably the next step is 
>> putting the TOR browser in a VM full of bogus, randomized 
>> userid/sysid/network information - carefully firewalled to allow 
>> TOR access only (TOR would be running somewhere outside the browser
>>  VM).
> 
> Already working on that, https://www.torproject.org/torvm/ or pick a
>  live cd with tor integrated into it.
> 

Good to see these projects being developed. IIUC, the TORVM is a tor client;
so the TORVM is designed for easy installation, and perhaps to contain
any exploit of TOR!?

Guess I was thinking of a different approach: putting Firefox in a VM
and just letting it go ahead and get crazy with flash, JS, and cookies
(I have tired of tweaking NoScript, RequestPolicy, and CS Lite all the
time). TOR is running in a chroot jail on the "regular" OS,
connected by network.

JS/Flash will presumably look for unique or geographic information
within the VM and will get only bogus stuff which is cleaned and
randomized every few minutes, along with cookies and caches. DNS is
"unbound", elsewhere on the internal network, and has protection against
many of the "DNS tricks". FWICT the obtainable network information all
reflects the virtual Ethernet.

Any "infections" would be temporary, as the VM is set to make temporary
changes only; am using VNC to control it and to transfer any permanent
data back and forth between it and the "regular" OS.

I suspect others have similar approaches under way!?   It would be nice
to have a list somewhere of all of the "compromising" files and data
available to flash/silverlight/JS - by OS - so that those running VMs
know what to randomize (I presume Linux would be easier to contain than
Windows).











Re: browser fingerprinting - panopticlick

2010-01-29 Thread 7v5w7go9ub0o
Andrew Lewman wrote:
> On 01/29/2010 04:36 PM, Michael Holstein wrote:
>>> The main cause was the screen resolution.
> 
> https://blog.torproject.org/blog/effs-panopticlick-and-torbutton
> 
>> Running TOR and leaving javascript enabled sort of defeats the 
>> point, doesn't it?
> 
> Not really.  Most of the websites are useless without javascript 
> enabled.  Torbutton protects against known attacks via javascript 
> (yes there's something to be said about unknown attacks...).
> 

(Sigh)...Exactly right; and add flash to that prerequisite.

As we slowly transition to web 2.0, probably the next step is putting
the TOR browser in a VM full of bogus, randomized userid/sysid/network
information - carefully firewalled to allow TOR access only (TOR would
be running somewhere outside the browser VM).



Re: browser fingerprinting - panopticlick

2010-01-29 Thread 7v5w7go9ub0o

Mike Perry wrote:

Thus spake Seth David Schoen (sch...@eff.org):


Mike Perry writes:


Thus spake coderman (coder...@gmail.com):


EFF has an interesting tool available:
  https://panopticlick.eff.org/

technical details at
https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy

an interesting look at exactly how distinguishable your default
browser configuration may be...

FYI, Torbutton has defended against many of these anonymity set
reduction attacks for years, despite how EFFs site may make it appear
otherwise.

Are you unhappy with the phrase "modern versions" in

http://panopticlick.eff.org/self-defense.php

or do you think that page as a whole isn't prominent enough?


Ah yeah. I didn't see that at all. You should be linking to the
sentence subjects instead of "here" :). The modern versions phrase
could be changed to "Torbutton 1.2.0 and above" and still be correct,
but I actually didn't notice that page at all.

I also think the "Your browser fingerprint appears to be unique among
the N tested so far" string could be perhaps increased in size or also
have the number bolded too.

As an aside, since there are already some questions in #tor and
#tor-dev, I want to point out that Torbutton's obfuscation features
are only intended to make you appear uniform amongst other Tor users.
Tor users already stick out like a sore thumb because of using exit
IPs, and the small numbers relative to the rest of your vistor base
will make Torbutton's obfuscated settings appear very unique compared
to regular visitors.



These guys have been warning about the browser fingerprint issue for
years.



They offer a Firefox plugin that attempts to provide a more generic
signature.

(I love it when my Firefox/Linux browser registers as I.E./Windows.) :-)

(It is also fun watching the Suricata and Snort IDS logs after changing 
to I.E.)






Re: /r/onions

2009-09-15 Thread 7v5w7go9ub0o
Rich Jones wrote:
> Hello!
> 
> I know there have to be some really interesting .onions out there, but I've
> been having trouble finding them, so I started a reddit.com 'subreddit' to
> do some of that web 2.0 magic and let the good ones bubble up.
> 
> http://www.reddit.com/r/onions/ , if any of you have anything to share,
> that'd be pretty cool.
> 
> R
> 

Necessary!  Thanks for doing this!



Re: More Secure Tor Browsing Through A Virtual Machine in Ubuntu

2009-08-24 Thread 7v5w7go9ub0o

Kyle Williams wrote:
> On Mon, Aug 24, 2009 at 6:18 AM, 7v5w7go9ub0o 
> <7v5w7go9u...@gmail.com>wrote:
> 

==

>> 
>> 1. Which VM software are the most breakout proof, should an 
>> attacker gain access with a root shell?
>> 
> 
> This is a tricky question which depends on your requirements.  Before
>  I get into which is better than which VM engine, I first must 
> understand what it is I want to do with my browser. Do I want to be 
> able to download data and save it, or just be able to observe it? If 
> you want to save data, where/how do I store it in such a way that I 
> can securely access it from other mini VMs without compromising that 
> VMs integrity?

My initial presumption on this question is to approach desktop use as
one does chroot jails:

- use X clients to provide desktop windowed use of X-applications
on the VMs (e.g. Firefox/TB/etc.)

- use SSH to transfer data, and if necessary, root up to manipulate
stuff within the VMs. Alternatively, maintain the VMs separately, and
copy the "virgin" VM into memory where they are started (this is how I
run my chroot jails).

(I suppose one could use VNC/SSH)

either way, one would have some very restrictive firewalls/packet filters.
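
In day-to-day use that boils down to something like this (the host names
are placeholders):

    # Display the VM's browser on the host's X server over ssh.
    ssh -X user@browser-vm firefox

    # Pull a downloaded file out of the VM without sharing any folders.
    scp user@browser-vm:Downloads/report.pdf ~/incoming/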

Presuming that the above applies to any VM application, do you know
which of the current crop is the most breakout-proof? If that hasn't been
determined (by the folks who track vulnerabilities - e.g. SecurityFocus,
SANS, etc.), is there a variety of VM that is known to be weak?


> 
> Older vulnerabilities exist for both VMware and Qemu that allowed 
> access to the host OS through the guest OS because the VM engine 
> itself was "folder sharing" from the host to the guest OS.

Nah.. I'd keep them separate, and communicate through networking -
likely X and SSH.


> These have been patched for quite some time, but such a feature that 
> allows "guest to host" sharing makes me nervous, and I think should 
> be avoided *at the VM engine level* at all cost.

understood; agreed.


> The downside is, I'm not sure how to save my downloads from the VM to
> another VM or the host OS itself without opening up a hole for an
> attacker to get through assuming a root compromise inside the VM.

I'd assume that the host (or primary) OS would SSH into a guest, using an
unprivileged account, and simply copy/move files via ssh. The SSH client
on the host/primary could be easily jailed; the connection could be
generally kept open.

> 
> 
>> 2. Which VMs' guest software are the most opaque - i.e. have NO 
>> information available to a roving root?
>> 
> 
> I'm a fan of having a base ISO and storing any changes made in 
> memory.  If I do get infected by some 0day, the only information they
>  could pull would be any existing cookies, cache, or history I may 
> have from my browser (depending on my browser's settings).  Not to 
> mention, since it's a read only ISO, no permanent changes can be 
> made.

(like this? <http://opensource.dyc.edu/tinhat>  (p.s. IIRC, it is
encrypted with loop-aes))

What I'm getting at in this question is the ability of root to gain
information about the host - ideally the VM doesn't know anything about
the host, or the real world in which the host/hypervisor is running.

IIRC QEMU created a hardware emulation layer, in which you could tell
the guest OS all sorts of lies :-)

> 
> 
>> 3. Which VMs require the least overhead?
>> 
> 

Here is where I'd *guess* that a hypervisor would be the way to go - I
haven't looked into it.

> 
>> 4. IIUC, one can attach a VM to his existing OS, or one can first 
>> install some sort of hypervisor followed by a primary OS, and a 
>> series of secondary OS's? If this is true, what are the pros and 
>> cons of either approach. (I presume that you want a number of VMs -
>>  each containing sensitive or vulnerable applications)
>> 
>> 
> Take a look at Moka5(.com).  This is pretty much what we were aiming 
> to do at one point. Moka5 is however based on (funded by) VMware , 
> and vmware totally jacked this idea from two guys after sending them 
> a cease and desist letter first... errr.

Moka5 appears to be a VNC client within a VM client.

> 
> Back to the hypervisor; this would be really nice.  I have see some 
> problems though.
> 
> If I have a browser VM (B-VM), and a email VM (E-VM), how does the 
> B-VM communicate with E-VM through things like the "e-mail:" URI 
> type?

Using X, I right-click/copy the email link in the mail client jail/VM,
and then middle-click it into the browser that is in the "browser" jail/VM.

> Should B-VM and E-VM even be able to know about each other?

On my box, they don't (B-jail a

Re: More Secure Tor Browsing Through A Virtual Machine in Ubuntu

2009-08-24 Thread 7v5w7go9ub0o
Ringo wrote:
> I would appreciate any feedback people have on this. This is just an
> idea and it's kind of beta, so don't use this unless you know what
> you're doing. PGP key at bottom of message
> 
> 
> 
> 
> More Secure Tor Browsing Through A Virtual Machine in Ubuntu
> 

IMHO, you're on the right track.


Due to limited resources on my laptop, I've used (hardened) chroot jails
to contain tor, my browser, mail client, dhcpd client, etc. - primarily
to contain any successful intruder. Hotspot laptop users are constantly
being probed and subjected to the latest attack scripts.

But ISTM that small, optimized, hardened little VMs would be ideal -
additionally protecting anonymity; perhaps reasonably allowing the use
of JS on your browser within your browser VM.


Your post raises the questions:

1. Which VM software are the most breakout proof, should an attacker
gain access with a root shell?

2. Which VMs' guest software are the most opaque - i.e. have NO
information available to a roving root?

3. Which VMs require the least overhead?

4. IIUC, one can attach a VM to an existing OS, or one can first
install some sort of hypervisor followed by a primary OS and a series
of secondary OSes? If this is true, what are the pros and cons of either
approach? (I presume that you want a number of VMs - each containing
sensitive or vulnerable applications.)












Re: Torbutton for Mozilla Thunderbird

2009-08-09 Thread 7v5w7go9ub0o
Sigh... yes; especially when one (upon rare occasion) requests the
embedded http images, and thereby asks TBird to visit a web page. One
would then want the transaction monitored by both TorButton and
NoScript.  :-(

(p.s. open up about:config in TB and scan for "jav". I hope there really
is no js.)

Flamsmark wrote:
> This is potentially a less-than-ideal solution. Torbutton for Friefox
> is carefully and specifically designed to address web-browsing
> privacy concerns, making the user seem to belong to the largest 
> possible set of potential users. Simply sending Thunderbird traffic
> through Tor may not provide the desired level of anonymity. There are
> other  privacy concerns with Thunderbird, for instance, it sometimes
> broadcasts its hostname. Moreover, there hasn't been an anonymity
> audit of Thunderbird like there has with Firefox. There may be other
> behaviours like this which completely compromise the anonymity
> benefits provided by Tor.
> 
> On Sat, Aug 8, 2009 at 14:33, Karsten N.
> wrote:
> 
>> James Brown schrieb:
>>> How can I get the Torbutton for the Mozilla Thunderbird?
>> I use ProxyButton for this job. I does a proxy switch to Tor and
>> rewrite my IP address in the header of the mail.
>> 
>> Received: from [85.245.13.68] (helo=[0.0.0.0]) by n...@domain.tld
>> 
>> download http://proxybutton.mozdev.org/installation.html
>> 
>> Karsten N.
>> 
> 





Re: TOR and HADOPI

2009-05-28 Thread 7v5w7go9ub0o

Juliusz Chroboczek wrote:

Is anyone know where find an "how to use TOR against HADOPI" ?


Using tor to evade the French data retention and HADOPI laws is no different
from using tor for evading the surveillance of other police states.


(Hadopi is the new law in france about P2P: if you download some music or
movie with a P2P system, the provider will send you a mail to say stop; if
you continue, they send a real letter and after, they stop your connexion
and FINE you (and you will continue to pay provider but you will have no
right to have an internet connexion :-(( )  -
http://www.p2pnet.net/story/21764 - )


Now don't get me started about how stupid HADOPI is.

Under HADOPI, the ISP is required to monitor your Internet usage, at
their cost.  After three warnings, they are meant to disconnect you
while you continue paying your ISP bill.  I'm sure that's going to do
wonders for the ISPs' customer relations.

While HADOPI mandates massive surveillance of Internet users, the total
budget voted for enforcing it is a mere 6.7 M¤ per annum, which implies
that enforcement will be entirely from the ISPs' pockets.  I'm sure
they'll love it.

Juliusz


The ISPs' pockets? I'd guess they'll all quickly raise their rates by an
amount generous enough to cover those additional costs. Heh... only
people pay taxes and fees. :-)




Re: Fwd: [Wikitech-l] Planning to tighten TorBlock settings

2009-04-03 Thread 7v5w7go9ub0o

Gregory Maxwell wrote:

FYI—

-- Forwarded message -- From: Brion Vibber 
 Date: Fri, Apr 3, 2009 at 5:44 AM Subject: 
[Wikitech-l] Planning to tighten TorBlock settings To: Wikimedia 
developers 



en.wikipedia.org and others have seen a rash of abuse coming via Tor
 in the form of account creations with abusive names and such; this
is taking up a large chunk of CheckUser and oversighter time and 
effort, which is apparently not too fun.


It looks like the current settings don't generally restrict various 
actions to a logged-in user when accessing through Tor; is there any 
objection to tightening this up to restrict edits, account creations,

etc via Tor except when the account is explicitly excepted?


Thank you for bringing this up! How sad for us all!

I sure hope that the Tor community can quickly effect some sort of
short-term solution. The precedent of destination sites restricting
Tor access - even temporarily - is something that must be avoided
(before governments become involved), even if we have to use a blunt
instrument (e.g. temporarily blocking Wikipedia access entirely).




Re: looking for an FTP -> SOCKS proxy

2009-01-05 Thread 7v5w7go9ub0o

Scott Bennett wrote:

 On Mon, 05 Jan 2009 12:01:29 +0100 gabrix  wrote:

Scott Bennett wrote:

 I know people are doing FTP transfers via tor, but I don't know how
they are doing it.  What are people using for a proxy to sit between either
a native FTP client or a web browser to do FTP transfers?
 Thanks in advance for suggestions.



Yeah! How ???


 How what?  How do I know?  Because I see lots of exits from my relay
on ports 20 and 21.  How can I do it?  Well, the suggestion to use proxychains
seems to have been a fruitful one, though it would be nice if the package
were better documented.  I changed one entry in its proxychains.conf file to
get it to use SOCKS5 rather than SOCKS4 to connect to tor, so that it could
forward DNS queries through tor, but it appears that a different entry causes
proxychains to blab queries from my machine to a name server at gtei.net.
I guess I'll have to ponder a while on how to work around that. :-(


I have had good luck with both of the SOCKS-aware clients FileZilla and
lftp. lftp is faster and more flexible; FileZilla better handles FTP
over SSH.
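
For reference, the proxychains change Scott describes (SOCKS5 so that
DNS is also resolved through Tor) amounts to roughly this, assuming
Tor's default SOCKS port:

    mkdir -p ~/.proxychains
    {
      echo 'strict_chain'
      echo 'proxy_dns'
      echo '[ProxyList]'
      echo 'socks5 127.0.0.1 9050'
    } > ~/.proxychains/proxychains.conf

    # then, e.g.:  proxychains lftp ftp.example.org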


HTH




Re: Jailed/sandboxed/chrooted applications

2009-01-02 Thread 7v5w7go9ub0o

Adlesshaven wrote:
Does anyone here jail, sandbox or chroot the applications they use 
with Tor?


yep.

1. Separate, individual (GRSecurity-hardened) jails on Linux for
Thunderbird, Opera, and TOR itself.

2. Opera connects to TOR via polipo - which is jailed in a "common"
jail; and Thunderbird connects to tor via Socat - which is also jailed
in that common jail (as are lftp and filezilla, which occasionally use
TOR, or go direct).

I jail TOR because it may be attacked directly from the WAN; I
separately jail the tools connecting to TOR (socat and polipo) because
they occasionally connect to the WAN directly. Because they are
highly targeted, I figure that browsers and mail clients ought to be
individually jailed as a general principle.
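
The socat leg for Thunderbird looks roughly like this (the local port
and the mail server name are placeholders):

    # Listen locally on 8110 and relay through Tor's SOCKS port, using SOCKS4A
    # so the host name is resolved at the far end rather than locally.
    socat TCP4-LISTEN:8110,bind=127.0.0.1,fork \
          SOCKS4A:127.0.0.1:pop.example.org:110,socksport=9050 &

    # Thunderbird is then pointed at 127.0.0.1:8110 instead of the real server.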




I have been trying to adapt the Wiki's transparent proxy 
recommendations to a FreeBSD jail for the last couple weeks with no 
luck.


Do FreeBSD jails automatically provide a different address (e.g.
127.0.0.2)? If so, you may need to check the proxy addresses.


What is the

best way to isolate applications completely for use with Tor?



IMHO, In order of priority:

1. Separate machine on LAN.
   or
2. Separate virtual machines on hardened (e.g. bsds; hardened linux) box.
   or
3. Jails
   or
4. none of the above, but running each application as an individual,
privilege-less user that cannot read beyond its own home directory. So
if user "tor:tor" is compromised, it can only read files on /home/tor
and not beyond.

(obviously, item 4 actually applies to each of the three preceding items
as well :-) )

(obviously, most users choose alternative 5 ... a single user runs a
host of programs, including browser, tor, mail, etc.)

HTH

p.s.

I don't know what a transparent proxy is. From Wikipedia: "Transparent and
non-transparent proxy server

The term "transparent proxy" is most often used incorrectly to mean
"intercepting proxy" (because the client does not need to configure a
proxy and cannot directly detect that its requests are being proxied).
Transparent proxies can be implemented using Cisco's WCCP (Web Cache
Control Protocol). This proprietary protocol resides on the router and
is configured from the cache, allowing the cache to determine what ports
and traffic is sent to it via transparent redirection from the router.
This redirection can occur in one of two ways: GRE Tunneling (OSI Layer
3) or MAC rewrites (OSI Layer 2).

However, RFC 2616 (Hypertext Transfer Protocol -- HTTP/1.1) offers
different definitions:
"A 'transparent proxy' is a proxy that does not modify the request or
response beyond what is required for proxy authentication and
identification".
"A 'non-transparent proxy' is a proxy that modifies the request or
response in order to provide some added service to the user agent, such
as group annotation services, media type transformation, protocol
reduction, or anonymity filtering".







Re: FYI: ultimate security proxy with tor

2008-10-29 Thread 7v5w7go9ub0o

Eugen Leitl wrote:

FYI: http://howtoforge.com/ultimate-security-proxy-with-tor

Ultimate Security Proxy With Tor

Nowadays, within the growing web 2.0 environment you may want to have some 
anonymity, and use other IP addresses than your own IP. Or, for some special 
purposes - a few IPs or more, frequently changed. So no one will be able to 
track you. A solution exists, and it is called Tor Project, or simply tor. 
There are a lot of articles and howtos giving you the idea of how it works, I'm 
not going to describe here onion routing and its principles, I'll rather tell 
you how practically pull out the maximum out of it.

So, let's start.

Major disadvantages of tor, with all its benefits like security (however, tor 
manufacturers advice not to rely on its strong anonymity, because there are 
some cases, when it is possible to track young h4x0r...) are:

1. Long enough session (60 seconds for one connection per one exit node, that 
means, that you will have one external IP for a minute)
2. Slow perfomance (request takes up to 20 seconds to complete, what makes 
surfing sites with lot of elements a disaster)
3. All the requests come through one node, and possibly route how your requests 
arrive can be calculated.

In order to fight these three we are going to use:

1. 8 tor processes, each using separate spool directory
2. 8 privoxy processes, each configured to talk to separate tor.
3. First squid, with malware domains blacklist, will have 8 round robin cache 
peers configured. (squid-in)
4. Havp, with squid-in as parent. (Anti-virus proxy, using clamav :) )
5. Second squid, that will use havp as parent (squid-out). Users will connect 
to this one.

etc.




Thanks for posting this.

IIUC, Squid rotates requests across the peer proxies (round robin).
Alternatively, could one achieve the same benefit using iptables
load-balancing in front of those peer proxies?
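
In principle it looks doable: the nat table only sees the first packet
of each connection, so a statistic match could spread new connections
across the Privoxy listeners. An untested sketch with just two of the
eight (ports assumed):

    # Send half of the new local connections aimed at the front port (8080)
    # to one privoxy, and let the rest fall through to the other.
    iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 8080 \
             -m statistic --mode nth --every 2 --packet 0 \
             -j DNAT --to-destination 127.0.0.1:8118
    iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 8080 \
             -j DNAT --to-destination 127.0.0.1:8119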


Would there be more sophisticated controls using one or the other (e.g. 
multiple requests when one peer slows down)?


TIA, Newbie








Re: Default ORPort 443 [was: Re: German data rentention law]

2008-10-19 Thread 7v5w7go9ub0o

Erilenz wrote:

* on the Sun, Oct 19, 2008 at 07:14:31AM -0500, Scott Bennett wrote:


Besides, opening ports < 1024 usually requires root-privileges,
which could introduce serious security issues if an exploitable
flaw were found in Tor. You can still advertise port 443 as your
ORPort and listen on 9001, but this requires some port-forwarding
magic, which is not entirely feasible for a default
configuration. (But your other reason is sound as well)

 Also good points.  Another is that an unprivileged user on a multi-user
system may wish to run a tor relay, which would require a few configuration
tricks, but should definitely be doable.  However, as you point out, an
unprivileged user ought not to be able to open a secured port, so the default
should not be a port in the secure ports range.


I just took a quick glance and there seem to be at least a couple of hundred
nodes running an OR port on 443, so people must be taking note of the
documentation at http://www.torproject.org/docs/tor-doc-relay.html.en



Indeed!  And these would be a couple of hundred nodes that are not 
running HTTPS servers, and have likely configured an unprivileged port 
assignment of 443 - either through configuration changes of the OS, or 
as in my case, by running TOR in a chroot jail with a wrapper that drops 
privilege.
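
One common recipe for the configuration-change variant is roughly this
(a sketch using the old ORListenAddress option plus an iptables
redirect; details vary by distribution):

    # torrc: advertise 443 to the network, but actually listen on 9001.
    echo 'ORPort 443'                   >> /etc/tor/torrc
    echo 'ORListenAddress 0.0.0.0:9001' >> /etc/tor/torrc

    # Redirect incoming connections on 443 to the unprivileged 9001 listener.
    iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 9001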


Sigh... It is SO easy to come up with "gotchas" on any proposal to
change default rules ... (e.g. consider all the "gotchas" on 
allowing default access to mail ports).


The question SHOULD be: "will the change in defaults work well in
most situations, and in general improve the goals of TOR?" It should
NOT be: "can I find a 'gotcha' and kill the suggestion?" In those cases
where a conflict occurs, the operator would simply deactivate the
default 443 TOR port.


I don't know the answer to the "real" question; I doubt that anything
will confuse the NSA for long; but I suspect there are lesser agencies in
the US and other countries that would find their monitoring and data
(including connection info) retention schemes confounded by it.





Re: German data retention law

2008-10-18 Thread 7v5w7go9ub0o

Roger Dingledine wrote:

On Sat, Oct 18, 2008 at 06:43:34PM -0400, 7v5w7go9ub0o wrote:

Roger Dingledine wrote:





Otherwise, all German nodes have to switch to middleman.




To be clear, I didn't write the above line.

1. Given that the ISP will have logs anyway, why disallow German exit 
nodes?


A fine question. Hopefully as we learn more about what ISPs will log,
we will come to decide that having Tor exit relays in Germany doesn't
pose much risk -- as long as we take appropriate other steps to make
sure the other end of the circuit isn't logged by German ISPs too.


2. How about changing all TOR port usage - including relays and entry 
ports - to 443?

'Twould be hard to know which are entry nodes, which are relays, and 
which is browser traffic. That ought to make "mapping" the onion, and 
ISP log analysis a little more challenging :-) .


It isn't just a matter of what port they listen on. So long as there's
a public list of Tor relays, then people can just compare IP addresses
they see to the public relay list. And that public relay list isn't
going away anytime soon, since Tor clients need it when picking a path.


Am presuming that some on that list are "multi-function" servers!?

Guess I'm thinking along the lines of a PC that has a TOR relay and 
a bridge (both) that's being logged by its ISP.


If all inbound and outbound TOR circuits were on port 443, all the ISP 
would log is a bewildering collection of inbound, SSL-encrypted 
connections to 443 and outbound, SSL-encrypted connections to 443 - 
hard to know whether any given inbound connection is an entry connection 
or a relay connection.


Likewise, outbound connections to 443 somewhere else might be TOR, or it 
might be the operator browsing his bank account.


If nothing else, defaulting to 443 would allow a greater number of 
"hotspot" laptops access to TOR from HTTP/S-only networks.




Re: German data retention law

2008-10-18 Thread 7v5w7go9ub0o

Roger Dingledine wrote:





Otherwise, all German nodes have to switch to middleman.







1. Given that the ISP will have logs anyway, why disallow German exit 
nodes?


2. How about changing all TOR port usage - including relays and entry 
ports - to 443?


'Twould be hard to know which are entry nodes, which are relays, and 
which is browser traffic. That ought to make "mapping" the onion, and 
ISP log analysis a little more challenging :-) .





Re: Does TOR use any non-ephemeral (non-DHE) ciphers?

2008-09-25 Thread 7v5w7go9ub0o


Nick Mathewson wrote:




Thank You.


Does TOR use any non-ephemeral (non-DHE) ciphers?

2008-09-24 Thread 7v5w7go9ub0o
David Howe has been running some tests, and has discovered that in many 
cases SSL transactions can be recorded and decrypted by Wireshark 
after the fact - this is because an ephemeral cipher was NOT chosen by the 
server; i.e. a cipher was chosen that does not provide "Perfect Forward 
Secrecy". This ability of Wireshark provides a motivation to steal or 
subpoena private keys - which may awaken governmental interest in TOR 
private keys!?


So this raises the question:




Does TOR use any non-ephemeral (non-DHE) ciphers?




The following is from David Howe's 9/23/08 posting in GRC's 
"cryptography" newsgroup:



"Apache 2.2 webserver, default configuration
XCA generated self signed webserver cert
Internet explorer (versions 6,7,8beta)
Firefox (versions 2.x,3.x)
Wireshark 1.0.3

Testing: for each session, a Wireshark capture was created *without*
access to the key. Fresh instance of Wireshark each capture. After all
captures are made, they are copied to another machine where Wireshark is
configured with the private key, to examine the packets.

Results:
IE (all versions) readable
FF (both versions) unreadable (error in dissector log)

After further analysis, it appears that the apache webserver takes the
first suitable match from the list of offered cryptographic suites, not
an abstract "Best" match.

In the case of IE, the first match is for TLS_RSA_WITH_RC4_128_MD5 which
has no DHE (perfect forward secrecy) component. in FF, the first match
is for TLS_DHE_RSA_WITH_AES_256_CBC_SHA which DOES have a PFS component.

Further testing is required, first to see if I can configure Apache to
give preference to DHE enabled solutions, and second to see what the
default behaviour of IIS is. I will update this post once I have more
results."


Re: OnionCat 0.1.9 now supports IPv4

2008-09-15 Thread 7v5w7go9ub0o

Bernhard Fischer wrote:

On Monday 15 September 2008, Sven Anderson wrote:

Am 15.09.2008 um 16:16 schrieb Bernhard Fischer:

We have a new version of OnionCat ready which is now capable of
IPv4-forwarding.

Read http://www.abenteuerland.at/onioncat/ for further instructions
on how to
use OnionCat and IP.

Does it really work in an acceptable way? I ask because "TCP Over TCP
Is A Bad Idea"[1]. I would expect it to have awful performance.

[1] http://sites.inka.de/~bigred/devel/tcp-tcp.html


Regards,

Sven


What somebody would define as "acceptable" may differ, but basically your 
concern about it is right.


TCP over TCP is an issue which has some kind of "rubber-band" effect on the 
packet transmission, but that's a problem shared by all kinds of VPNs. 
Hopefully TOR will work over UDP eventually, because with respect to performance 
this would give lower latencies in my opinion and would also make OnionCat 
more powerful.


Specifically with TOR and OnionCat, the rubber-band effect can even be observed 
when sending pings over OnionCat (and that's ICMP over TCP ;).
I'm not sure, but I think that's because of the numerous concatenated TCP 
sessions that TOR circuits are built of.


But the biggest problem currently is the bad performance of hidden 
services in general, which overshadows all other difficulties.


The advantage of OnionCat is its IP transparency, and together with TOR it is 
location hidden. If someone wants or needs to use hidden services he will 
also use OnionCat; there's not really a difference in performance.


Bernhard


Is there a performance difference between IPv4 and IPv6 (TCP over TCP)? 
I'd presume none; and I'd also presume that a UDP VPN would work fine.


(Thanks for developing OnionCat!)


Re: Google's Chrome Web Browser and Tor

2008-09-05 Thread 7v5w7go9ub0o

Nick Mathewson wrote:

On Thu, Sep 04, 2008 at 03:20:34PM -0700, Kyle Williams wrote:

Hi all,

I've been playing around with Google's new web browser and Tor.  I thought
it might be good to share my findings with everyone.
After reading Google's privacy policy[1], I for one would not want to use
this on a regular basis, if at all.

The first bug I tried was an old one I found with Firefox; the NEWS:// URI
type.
Any link that has a NEWS:// URI will launch Outlook Express and attempt to
contact the server in the URL...without using Tor.

The second bug I found resulted in local file/folder disclosure.
This is very similar to the one I found in Internet Explorer.

The third bug I found was with MIME-TYPEs, specifically Windows Media Player
supported formats.
The BANNER tag can also leak your IP address when the playlist is loaded
*IF* WMP is not set to use a proxy.
Also, a playlist in WMP can specify protocols that use UDP, hence, no proxy
support...no Tor.


On the flip-side, it is very cool how each browser tab is its own process,
making several types of attacks much more difficult.
However, with an invasive privacy policy, local proxy bypassing, and local
files/folders able to be read from your hard drive, I've decided not to use
this browser.

It just doesn't feel privacy/anonymity friendly to me.
Anyone else want to chime in on this?


I dig what I've heard of the Chrome architecture, but it seems clear
that, like every other consumer browser, it's not suitable for
anonymous browsing out-of-the-box.  The real question will be how easy
it is to adapt it to be safe.  Torbutton, for instance, has proven to
take some pretty extreme hackery to try to shut down all of Firefox's
interesting leaks.  If it turned out to be (say) an order of magnitude
easier to extend Chrome to be anonymity-friendly, that would be pretty
awesome.  We'll see, I guess.

Has anybody looked into Chrome's extension mechanisms?  It would be
neat to know how hard it would be to address the information leaks
addressed in, say, https://www.torproject.org/torbutton/design/ .




ISTM this thing is more a web 2.0 portal than a browser; it is conceived 
and designed first and foremost to make user access to Google online 
services smooth, slick (and advertisement-laden). Its secondary 
function as a good browser simply allows most users to have only one 
browser active.


Given that it is open source (BSD-licensed), if it proves to be a good design 
(interesting for sure) with potential to become a good privacy browser, 
and proves to have the very quick JS engine that some claim, it might be 
"forked" at some point.


The first thing the sibling would hopefully do is remove the "unique 
application number" business (see below); the second would be to remove 
all the phone-home features (see below).


Even if it doesn't become officially forked, if it becomes a good 
package (say, 6 months from now after intense support and development), 
there will likely be patch files and/or "enthusiast" versions available.


Certainly Linux/TOR users will "repair" the userid business before 
compiling it (or with a hex editor), and firewall off any connection 
with home base.


Thankfully, Opera with plugins removed is already an extremely quick, 
lightweight, secure, general-purpose TOR and non-TOR browser; Firefox 
with extensions well addresses "expanded features", so dual-browser 
users can comfortably wait for Chrome to mature.


 

"When you type URLs or queries in the address bar, the letters you type
are sent to Google so the Suggest feature can automatically recommend
terms or URLs you may be looking for. If you choose to share usage
statistics with Google and you accept a suggested query or URL, Google
Chrome will send that information to Google as well. You can disable
this feature as explained here.
If you navigate to a URL that does not exist, Google Chrome may send the
URL to Google so we can help you find the URL you were looking for. You
can disable this feature as explained here.
Google Chrome's SafeBrowsing feature periodically contacts Google's
servers to download the most recent list of known phishing and malware
sites. In addition, when you visit a site that we think could be a
phishing or malware site, your browser will send Google a hashed,
partial copy of the site's URL so that we can send more information
about the risky URL. Google cannot determine the real URL you are
visiting from this information. More information about how this works is
here.
Your copy of Google Chrome includes one or more unique application
numbers. These numbers and information about your installation of the
browser (e.g., version number, language) will be sent to Google when you
first install and use it and when Google Chrome automatically checks for
updates.  If you choose to send usage statistics and crash reports to
Google, the browser will send us this information along with a unique
application number as well.  Crash repor

Re: Vidalia exit-country

2008-08-21 Thread 7v5w7go9ub0o

Camilo Viecco wrote:

7v5w7go9ub0o wrote:

What a great idea!

Thank you for working on this!! And thanks to Google for supporting 
this project.


Sadly, I get a clean linux compilation, but no extra tab. Is there an 
additional dependency? e.g. geoip?


TIA

gcc-3.4.6, glibc-2.6.1

There are no other dependencies except a recent version of tor.
Maybe it is a terminology issue. Check whether on the 'Settings' page you find a 
button named 'Node Policy'.
If you find it, click on it and enable 'Enable Vidalia Relay Policy 
Management', then enable 'Strict Exit Relay Management'.

You should be set.

Let me know if you have more problems

Camilo



Thanks for replying!   Still no success.

I tried both the posted source, and the svn.

Could there be some outdated source code at both places?

The top line of the changelog looks like this; no mention of exit-country:


0.1.8   xx-xxx-2008
  o Reduce the default number of messages to retain in the message log to 50
    messages. Most people never look at the window and the extra 200 messages
    just needlessly eat memory.


0.1.7   02-Aug-2008
  o Handle spaces in the Tor version number we get from 'getinfo version'
since Tor has included svn revision numbers in its response (e.g.
"0.2.0.30 (r12345)") for a while now.


TIA


Re: Update to default exit policy

2008-08-20 Thread 7v5w7go9ub0o
anonym wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 20/08/08 15:42, 7v5w7go9ub0o wrote:
>> anonym wrote:
>>> Email clients leak tons of information, the most critical I know of
>>> being your IP address and/or host in the EHLO/HELO in the beginning
>>> of the SMTP(S) transaction.
>> Nope.
>>
>> The encrypted connection occurs before the smtp handshake.
>>
>> IP/host info is not compromised, this is not an issue.
> 
> Care to elaborate on this?
> 
> The way I understand it, the encrypted connection will only prevent
> eavesdroppers from snooping the IP address/host, but the destination
> email server will get it in the EHLO/HELO message. IMHO, that equals a
> compromise of grand scale.

Ah! We were talking about two different things. :-(

I was referring to third-parties being unable to sniff your email
contents or your host address within an SSL/SMTP transaction via TOR.
You're talking about withholding information from the mail server itself
(e.g. you're on the road with a laptop, and don't want to leave records
of where you were as you sent your messages).

And indeed, you raise an interesting point!

FWICT, different clients put different information into that HELO. Even
a common client such as TBird puts different info on Mac OS (unique
registration information) than it does on Windows (an IP address octet).

- Having the option to configure what goes into this field may be a
basis for selecting one's email client.

- Guess it's time to sniff some SMTP connections, and if I become
irritated enough, tweak the source code and recompile my client; hexedit
my client; change clients; or install a proxy or server. (sigh)
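For the sniffing step, a quick way to see exactly what a client volunteers in
its EHLO/HELO is to watch the submission ports while sending a test message.
This only shows anything for plain or pre-STARTTLS sessions, since SMTPS (465)
is encrypted from the first byte (eth0 is an assumption):

tcpdump -i eth0 -A -s 0 'tcp port 587 or tcp port 25'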





Re: Update to default exit policy

2008-08-20 Thread 7v5w7go9ub0o

anonym wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 19/08/08 17:46, Dawney Smith wrote:

I have a *lot* of experience with email administration on a very large
scale, I know what I'm talking about.


I'm sure you do. I'd love to have email work flawlessly and securly with
Tor, so opening ports 465 and 587 would be great (currently I do have
problems since there's few exit nodes which do that). But as I
understand it, email clients + Tor might be a very bad idea ATM. Email
clients leak tons of information, the most critical I know of being your
IP address and/or host in the EHLO/HELO in the beginning of the SMTP(S)
transaction.


Nope.

The encrypted connection occurs before the smtp handshake.

IP/host info is not compromised, this is not an issue.



Really, this isn't an argument countering your in any way, but rather a
plea that the issues of using email clients with Tor are researched and
resolved before that combination gets promoted (IMHO opening ports 465
and 587 is a step towards promoting it). It's very likely your average
user will screw up given the current state of things.


TOR guidelines are clear.

Don't use active content; Do use encrypted protocols.

(Now it will be the case that some users do NOT use email encryption - 
they are lost anyway!)


Re: Update to default exit policy

2008-08-19 Thread 7v5w7go9ub0o

Dawney Smith wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

krishna e bera wrote:


I'm not clear on how authentication (on any port) stops spam,
other than the ISP cutting off a given userid after complaints.
A lot of spam already comes from malware infected computers 
via their legitimately configured email.
Those computers are probably not using Tor, let alone transparent proxy, 
but malware could grab their credentials and then 
use Tor on another host to send out spam over port 587,
if that port was allowed in exit policies.


There is a clear misunderstanding of the issue at hand by many people
here. The exit policy was put in place to prevent connections between
Tor users and the last hop (the end MX server), *not* to prevent
connections between Tor users and SMTP relays, which is what everybody
keeps repeating.

There is no problem with a Tor user connecting to an SMTP relay and
sending email. If they can do it using Tor, they can do it without using
Tor, faster. In those cases, it is the administrator of the SMTP relay
that is responsible to stop spam.

Just to repeat the problem. It is Tor users connecting to the
destination MX server that is the problem. Mail relay, not mail submission.

Ports 465 and 587 are mail submission ports. Port 25 is for both
submission *and* relay.

I have a *lot* of experience with email administration on a very large
scale, I know what I'm talking about.


Thanks for pursuing this!

1. Your arguments make good technical sense.

2. In fact, many endpoints have already enabled those ports without
experiencing problems.

3. Many of us routinely handle our ssl email accounts via TOR, and your
proposal (open them by default) would help spread the load, as well as
reasonably expanding the default functionality of TOR.

Thanks Again!

(p.s. this post is being sent via SSL GMail, which will include the 
"posting host" when using SMTPS. My posting host will be a TOR exit node 
:-) )
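For relay operators who don't want to wait for the default to change, something
like this in torrc should open the submission ports while leaving relaying on
port 25 blocked (Tor appends its default exit policy after whatever you list;
treat this as an untested sketch, not a recommended policy):

ExitPolicy accept *:465
ExitPolicy accept *:587
ExitPolicy reject *:25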









Re: Vidalia exit-country

2008-08-15 Thread 7v5w7go9ub0o

Camilo Viecco wrote:

Hello Maillist

As part of the 'Google Summer of Code' (GSoC) I was able to add some of 
blossom's functionality to vidalia. The project consisted of adding a 'select 
exit by country' option to vidalia so that users could leverage the Tor network 
to select the 'perspective' of the network they wished to have. The idea is that 
many entities select their content based on the traffic's source IP address, and 
users might like to have different perspectives easily controlled by them.

The project added a new 'tab' in vidalia's settings window where users can 
select a country from which they want to 'exit'. Users can also select 
countries they would like to avoid. There is no Tor version prerequisite to 
use this tool.


This exit-aware vidalia version has been tested on Windows XP (MinGW), Linux 
(2.6-386) and OS X (Leopard PPC) with both Qt 4.3.5 and 4.4.1.


Binary packages for Windows (thanks to Matt Edman) are available at:

  http://vidalia-project.net:8001/vidalia/vidalia-0.1.8-svn-exit-country.exe
  http://vidalia-project.net:8001/vidalia/vidalia-0.1.8-svn-exit-country.exe.asc


Unix tarballs are available at:

http://www.vidalia-project.net/dist/exit-country.tar.gz
http://www.vidalia-project.net/dist/exit-country.tar.gz.asc


The source code can also be downloaded from the vidalia svn by doing:

svn co https://svn.vidalia-project.net/svn/vidalia/branches/exit-country 
exit-country-vidalia


Building instructions and prerequisites are the same as for vidalia. The
build requires cmake and qt.
The complete build instructions can be found at:

 http://trac.vidalia-project.net/wiki/InstallSource

Have a nice summer

Camilo Viecco
/ 
GPG (GPG) Key fingerprint = 0781 10A0 44CC C441 594F  E5A9 858A 173E 3EC5 EA42

/


What a great idea!

Thank you for working on this!! And thanks to Google for supporting this 
project.


Sadly, I get a clean linux compilation, but no extra tab. Is there an 
additional dependency? e.g. geoip?


TIA

gcc-3.4.6, glibc-2.6.1


Tor controller errors

2008-08-11 Thread 7v5w7go9ub0o
IIUC, I should be able to list the current configuration via the control 
port. My attempt at this fails... help!


getinfo config/*
552 Unrecognized key "config/*"

(when I list valid GETINFO commands, I get this:

getinfo info/names
250+info/names=
accounting/bytes -- Number of bytes read/written so far in the 
accounting interval.
accounting/bytes-left -- Number of bytes left to write/read so far in 
the accounting interval.

accounting/enabled -- Is accounting currently enabled?
accounting/hibernating -- Are we hibernating or awake?
accounting/interval-end -- Time when the accounting period ends.
accounting/interval-start -- Time when the accounting period starts.
accounting/interval-wake -- Time to wake up in this accounting period.
addr-mappings/all -- Current address mappings without expiry times.
addr-mappings/cache -- Current cached DNS replies without expiry times.
addr-mappings/config -- Current address mappings from configuration 
without expiry times.
addr-mappings/control -- Current address mappings from controller 
without expiry times.

address -- IP address of this Tor host, if we can guess it.
address-mappings/all -- Current address mappings.
address-mappings/cache -- Current cached DNS replies.
address-mappings/config -- Current address mappings from configuration.
address-mappings/control -- Current address mappings from controller.
circuit-status -- List of current circuits originating here.
config-file -- Current location of the "torrc" file.
config/* -- Current configuration values.
config/names -- List of configuration options, types, and documentation.
desc-annotations/id/* -- Router annotations by hexdigest.
desc/all-recent -- All non-expired, non-superseded router descriptors.
desc/id/* -- Router descriptors by ID.
desc/name/* -- Router descriptors by nickname.
dir-usage -- Breakdown of bytes transferred over DirPort.
dir/server/* -- Router descriptors as retrieved from a DirPort.
dir/status/* -- Networkstatus docs as retrieved from a DirPort.
entry-guards -- Which nodes are we using as entry guards?
events/names -- Events that the controller can ask for with SETEVENTS.
exit-policy/default* -- The default value appended to the configured 
exit policy.

extra-info/digest/* -- Extra-info documents by digest.
features/names -- What arguments can USEFEATURE take?
info/names -- List of GETINFO options, types, and documentation.
ip-to-country/* -- Perform a GEOIP lookup
network-status -- Brief summary of router status (v1 directory format)
ns/all -- Brief summary of router status (v2 directory format)
ns/id/* -- Brief summary of router status by ID (v2 directory format).
ns/name/* -- Brief summary of router status by nickname (v2 directory 
format).
ns/purpose/* -- Brief summary of router status by purpose (v2 directory 
format).

orconn-status -- A list of current OR connections.
status/bootstrap-phase -- The last bootstrap phase status event that Tor 
sent.
status/circuit-established -- Whether we think client functionality is 
working.
status/enough-dir-info -- Whether we have enough up-to-date directory 
information to build circuits.

status/version/current -- Status of the current version.
status/version/num-concurring -- Number of versioning authorities 
agreeing on the status of the current version

status/version/num-versioning -- Number of versioning authorities.
status/version/recommended -- List of currently recommended versions.
stream-status -- List of current streams.
version -- The current version of Tor.
.
250 OK)


Thanks in advance


p.s.

getinfo version
250-version=0.2.1.4-alpha (r16409)
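p.p.s. For what it's worth, a sketch of an interactive control-port session 
that does work for individual values - per-option settings come back from 
GETCONF rather than GETINFO, if memory serves. 9051 and the password are 
assumptions, and the lines beginning with # are annotations, not input:

# connect and authenticate (use your own control password)
telnet 127.0.0.1 9051
AUTHENTICATE "yourcontrolpassword"
# current values of specific options:
GETCONF SocksPort ORPort
# GETINFO works for the keys listed under info/names, e.g.:
GETINFO config-file version
QUIT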










Re: browser footprint

2008-07-21 Thread 7v5w7go9ub0o

Karsten N. wrote:

I have read a thread at the JonDos forum about browser footprints.

A browser is not only identified by the user-agent, it is possible to
use the accepted language, the accepted content, accepted charsets...

To create a highly anonymous group, many user should use the same
settings for HTTP header values.

You may check your browser at: https://www.jondos.de/de/anontest#

At the page you will see the recommended settings. A developer of
JonDos wrote, they are in contact with the tor dev team about this.
Is it true? I can not find anything about this at torproject.org.

In Firefox / Iceweasel you may set all recommendations at about:config

 intl.charset.default  utf-8
 intl.accept_charsets  *
 intl.accept_languages en
 network.http.accept.default   */*

add a new string value to the configuration:

general.useragent.override  Mozilla/5.0 Gecko/20070713 Firefox/2.0.0.0

and use some plugins like RefControl, CookieSafe, NoScript

For Konqueror I think, it is only possible, to set the following
values in $HOME/.kde/share/config/kio_httprc

  Language=en
  SendUserAgent=true
  UserAgent=Mozilla/5.0 Gecko/20070713 Firefox/2.0.0.0
  SendReferrer=false

More options possible?

Are there recommendations by others?

Karsten N.





Thanks for posting this; I think it is an important topic.

1. ISTM that one should go out to some of the statistics sites and
determine what the most frequently occurring "prints" actually are.

For user agents, there are many statistic sites; e.g.:

http://www.thecounter.com/stats/
http://www.upsdell.com/BrowserNews/stat.htm

FWICT, the most frequently occurring general User Agent is one of these:

Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1)

I'd then find out what the typical (American English) windows I.E.
browser puts out for charsets and encodings, and use that.

(I do not believe that the content type */* is used by many browsers; nor
does gzip,deflate appear for HTTP_ACCEPT_ENCODING)

2. I agree that TOR would be the logical place to incorporate an
optional sanitizing routine that makes all browsers look the same. It is
likely that some folks will complain that it'll break certain features -
fine, they don't have to use it. But for most TOR browser usage, 
it'd work fine.


If doing this in TOR is not practical or too far off, TOR could at least
officially recommend the replacement signatures that most users could
apply using our own devices (e.g. tweaking Polipo, using Privoxy,
Proxomitron, etc.).
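As an illustration of the privoxy route, a user.action fragment like this would
force a single User-Agent and forge the referrer for every site (the UA string
is the common MSIE one listed above; other headers can be handled similarly if
your privoxy version supports the corresponding actions):

{ +hide-user-agent{Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)} +hide-referrer{forge} }
/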

It seems to me that if we wanted to approach TOR on this:

a. the first step would be to determine what the browser headers should be.

b. the second step would be to code and test a patch for TOR that
replaces individual headers with the standard headers, and deletes
extraneous stuff.

c. Present the recommendations and code patch.


If you are in contact with JonDos, you might ask them why they chose the 
signatures they did.


HTH

(p.s. Suggest you retitle this topic to "browser fingerprints" or 
"browser signatures". "footprints" typically refer to the size and 
overhead of an application)









Re: Torbutton 1.2.0RC4 Error

2008-07-03 Thread 7v5w7go9ub0o

Matthieu Dalissier wrote:

Hi there,

I don't know if it is the right place to post this.

A few days ago I upgraded my Torbutton to 1.2.0RC4 with FF3.0 and 
now can't toggle away from the Tor state.
The plugin is in its default configuration; when I click on the Torbutton, an 
error message with the following text appears:


Please file bug report! Error applying Non-Tor settings: [Exception... 
"Component returned failure code: 0x8000 (NS_ERROR_UNEXPECTED) 
[nsIPrefBranch2.setBoolPref]"  nsresult: "0x8000 
(NS_ERROR_UNEXPECTED)"  location: "JS frame :: 
chrome://torbutton/content/torbutton.js :: torbutton_update_status :: 
line 986"  data: no]


Ditto all of the above.

(Sorry - didn't report it because I have a pretty non-standard setup.)


Re: Password Authentication Required

2008-06-26 Thread 7v5w7go9ub0o

Kyle Williams wrote:

On Thu, Jun 26, 2008 at 4:30 PM, <[EMAIL PROTECTED]> wrote:


On Thu, Jun 26, 2008 at 02:30:30PM -0400, [EMAIL PROTECTED] wrote
0.9K bytes in 29 lines about:
: 'til you get a better reply, my newbie guess is that this sounds like a
: vidalia connection response. i.e. the hashed password in vidalia doesn't
: match the hashed pw in torrc.

This is correct.  Vidalia, by default, sets a random password for Tor
control-port authentication when it starts up Tor. You can set this
password to be static through Vidalia in the "Settings | Advanced"
button.

--
Andrew




So is Vidalia setting a random password every time it starts, or just
the first time it starts?

If every time, then why would it prompt the user for the password when it
already knows it?

If first time, then why doesn't it read the password from the torrc or
vidalia.conf file and use that?

I remember having a discussion with Roger about this and, if I recall
correctly, he wanted it to be able to handle authentication to the
controlport without user interaction.  Less pop ups == better user
experience was the argument.

Just curious.

Thank you,
-- Kyle



Do what Kyle Williams said; alternatively, edit both the TOR and Vidalia 
configuration files.


Vidalia will have something like this:

ControlPassword=johnsmith
UseRandomPassword=false

where the password is johnsmith. You have to hash that, so do the following:

tor  --hash-password johnsmith

it'll tell you that the hash is:

 16:751C69A9B10D7F4260B04E0D07D7EBCB760EDCEBADD40CDAF40F1FB095

so put the following line into torrc: (one line, ignore the wrap)


hashedControlPassword 
16:751C69A9B10D7F4260B04E0D07D7EBCB760EDCEBADD40CDAF40F1FB095



HTH

(thanks!! for running a TOR relay  :-) )






Re: OnionCat -- An IP-Transparent TOR Hidden Service Connector

2008-06-26 Thread 7v5w7go9ub0o

F. Fox wrote:


scar wrote:

F. Fox @ 2008/06/26 02:39:

7v5w7go9ub0o wrote:
(snip)

This actually creates another question (not to be argumentative :-) ).
Given that there is no exit node, would an OnionCat to OnionCat
connection over TOR need to be encrypted? Is it plain-text anywhere
along the line?

(snip)
No, it wouldn't need extra encryption - a hidden-service connection has
end-to-end encryption by its very nature.


unless the nodes in the circuit were all using compromised ssh keys due
to that recent debian bug, or other unknown future bugs.  in this case,
extra encryption might be the saving grace.




True enough - the only downside to extra layers of encryption is the
computational burden; with modern machines, it can't hurt to provide
"your own" layer. =:oD


The overall goal is to distribute all data and "processing" to the 
host/home computer, and make the next laptop (2 lb. Asus eee) a 
throw-away thin client (sigh... I'd like to take it with me to Canada, 
and if some overzealous border guard wants to look at, copy, or 
confiscate it, go ahead; there will be no financial or personal 
information onboard to be compromised through sloppy handling by govt. 
bureaucrats).


Part of this equation is that the laptop is an underpowered "sub".

So the communications overhead placed on the little laptop becomes a
consideration. FWIW, I plan on comparing a few alternatives for
responsiveness:

1. NX/SSH direct with port-knocking (using
http://www.cipherdyne.org/fwknop/ to unhide the otherwise
firewall-hidden SSH service port).  This is the current setup.

2. NX/SSH via OnionCat with port-knocking;

3. NX via OnionCat without encryption beyond that provided by TOR 
hidden-service to hidden-service; fwknop hiding the SSH service port 
from script-kiddies cruising the OC nodes (Bernhard's concern).


Obviously, I'm hoping that alternative 2 proves viable; SCAR's
suggestion of a MITM possibility with alt. 3, though it appears remote, 
is quite possible and indeed disconcerting.


Thanks again for the feedback to this newbie!!



Re: Password Authentication Required

2008-06-26 Thread 7v5w7go9ub0o

john smith wrote:

Greetings!

Had to shut down my Tor relay [0.2.0.28-rc running on ubuntu 7.10
gutsy] earlier this evening.

When I re-started my relay it seemed to connect without any problems
then it suddenly shut down.

When I attempted to re-start it again I was presented with a 'Password
Authentication Required - Please enter your control password (not the
hash)' prompt.

I've also attempted to install 0.2.1.2-alpha only to be presented with
the same 'Password Authentication Required' prompt.

Could anyone please explain why this has happened & advise what I can
do to solve this problem.


'til you get a better reply, my newbie guess is that this sounds like a 
vidalia connection response. i.e. the hashed password in vidalia doesn't 
match the hashed pw in torrc.


Try starting it without vidalia (i.e. don't use vidalia to start it).

Once it's up and going, then bring up vidalia.


HTH


Re: OnionCat -- An IP-Transparent TOR Hidden Service Connector

2008-06-25 Thread 7v5w7go9ub0o

F. Fox wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

7v5w7go9ub0o wrote:
(snip)

1. Connecting via TOR would be an extra, minor security option to
conceal the fact that my home is running a VNC server - eavesdropping
kids at the hotspot may try to make it a hacking "prize".


You should know that VNC is considered an insecure protocol; the wise
thing to do, is to allow it only to run over a secure tunneling protocol
(e.g., SSH, or a VPN program).


Thanks! Good point.

My present setup (NX) uses SSH to connect client to host; it tunnels its
NX protocol within SSH.

(FWIW, because some hotspots limit one to 80/443, my host has sshd
listening on 443, and I connect encrypted to it. (I presume that only
the most sophisticated DPI could discern that I'm using SSH instead of
HTTPS :-) ))
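For the record, the usual way to wrap a plain VNC display in SSH is a local
port forward; a sketch, where 5900/5901 are the standard VNC ports,
home.example is a placeholder, and -p 443 matches the setup above:

ssh -p 443 -L 5901:localhost:5900 user@home.example
# then point the VNC viewer at localhost:5901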



This is not only because many variations of VNC don't provide their own
encryption (remember, exit nodes can sniff - and they can see WAY too
much if you're using plain VNC!), but also because such a protocol would
strengthen the authentication required to get in.


This actually creates another question (not to be argumentative :-) ).

Given that there is no exit node, would an OnionCat to OnionCat
connection over TOR need to be encrypted? Is it plain-text anywhere
along the line?

(This would be a consideration: given that SSH is TCP and TOR is TCP, I
might get the TCP-over-TCP tunnel ("TCP meltdown") timing conflict, so it
might be good to send the NX/VNC protocol unencrypted)



Thanks in Advance


Re: OnionCat -- An IP-Transparent TOR Hidden Service Connector

2008-06-25 Thread 7v5w7go9ub0o

Dave Page wrote:

On Wed, Jun 25, 2008 at 09:16:12AM -0400, 7v5w7go9ub0o wrote:

Bernhard Fischer wrote:

On Tuesday 24 June 2008, 7v5w7go9ub0o wrote:



My hope is to use OnionCat on my laptop to VNC via TOR to my home
computer using nomachine NX. Is that kind of use possible with OC?


1. Connecting via TOR would be an extra, minor security option to 
conceal the fact that my home is running a VNC server - eavesdropping 
kids at the hotspot may try to make it a hacking "prize".


If you are connecting using NX, the only port you need to access is SSH
- all NX traffic is tunneled over that. Of course, you should never use
the default NX SSH keys over the Internet.


Thanks for replying! I've set up new keys; NX works great! :-)

So if I were using NX/SSH on the non-standard port of 443, and if my
server box looked like this:

eth0  Link encap:Ethernet  HWaddr 00:A0:A8:B4:45:74
  inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
  inet6 addr: fe80::2a0:ccff:fe7a:4574/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:8493 errors:0 dropped:0 overruns:0 frame:0
  TX packets:6762 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:7177839 (6.8 Mb)  TX bytes:1668147 (1.5 Mb)
  Interrupt:16 Base address:0x6000

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:89151 errors:0 dropped:0 overruns:0 frame:0
  TX packets:89151 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:76689713 (73.1 Mb)  TX bytes:76689713 (73.1 Mb)

tun0  Link encap:UNSPEC  HWaddr
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
  inet6 addr: fd87:d87e:eb43:e20e:a09d:5e14:fabb:edf3/48
Scope:Global
  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:500
  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

1. What address would I put in my SSH config to have it listen on OC?

config looks like this now:

#Port 22
Port 443

#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
===
so would it be, e.g.:

ListenAddress fd87:d87e:eb43:e20e:a09d:5e14:fabb:edf3

or perhaps:

ListenAddress fe80::2a0:ccff:fe7a:4574

or perhaps:

ListenAddress 0.0.0.0

or ???
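My current guess, for the record: OnionCat hands out addresses from
fd87:d87e:eb43::/48 on tun0, so that is the address sshd would need to bind for
connections arriving over OC; something like the sketch below (keeping 443 and
adding the LAN address is just one possible arrangement):

Port 443
AddressFamily any
# OnionCat (tun0) address, for connections arriving via TOR
ListenAddress fd87:d87e:eb43:e20e:a09d:5e14:fabb:edf3
# LAN address, for normal local access
ListenAddress 192.168.1.4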





You should be able to connect to your machine over SSH via Tor, and then
connect out from that machine normally.


Right you are.

I tried to connect out last night and couldn't get anywhere.  I tried
again today, and it works fine.

e.g.

Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
192.168.1.0 *   255.255.255.0   U 0  00 eth0
loopback*   255.0.0.0   U 0  00 lo
default 192.168.1.1 0.0.0.0 UG0  00 eth0


Thanks In Advance



Re: OnionCat -- An IP-Transparent TOR Hidden Service Connector

2008-06-25 Thread 7v5w7go9ub0o

Bernhard Fischer wrote:

On Tuesday 24 June 2008, 7v5w7go9ub0o wrote:


My hope is to use OnionCat on my laptop to VNC via TOR to my home
computer using nomachine NX. Is that kind of use possible with OC?

Thanks again.


Yes, this should work. But why would you like to do this? TOR's hidden 
services are not very fast even though no exit nodes are required.

And if you connect to your own computer, anonymity is not an issue.
If you only want encryption, I suggest using ssh or openvpn.


Thank you for your quick reply!

This would be used only from open WiFi hotspots while I'm on the road. 
I'm typically gone for two to three weeks, and the computer would be 
at home unattended.


1. Connecting via TOR would be an extra, minor security option to 
conceal the fact that my home is running a VNC server - eavesdropping 
kids at the hotspot may try to make it a hacking "prize".


2. Out of general principle, I see no reason to record my home IPA in 
the various hotspot logs. I wish to come and go quietly without "signing 
in".


Question:

Using the "phone home" example above, is there any way that my home 
computer could go out to the general WAN and access non-TOR, non-OC 
resources? IIUC, the TUN device means that my computers can only connect 
with each other!? i.e. can I also somehow connect out on eth0 while 
maintaining an OC/TUN connection?





Re: OnionCat -- An IP-Transparent TOR Hidden Service Connector

2008-06-24 Thread 7v5w7go9ub0o

Bernhard Fischer wrote:
OnionCat creates a transparent IPv6 layer on top of TOR's hidden services. It 
transmits any kind of IP-based data transparently through the TOR network on 
a location hidden basis.  You can think of it as a point-to-multipoint VPN 
between hidden services.


See

http://www.abenteuerland.at/onioncat/

for further information.

Bernhard


Thank you for creating this interesting tool!

It installed easily and works well (well... after I compiled in IPv6 
support, prompted by an OnionCat diagnostic) :-)


My hope is to use OnionCat on my laptop to VNC via TOR to my home 
computer using nomachine NX. Is that kind of use possible with OC?


Thanks again.