Re: [tor-dev] OnionShare bug that's possibly caused by an upstream v3 onion bug

2018-11-25 Thread Ivan Markin
On 2018-11-25 05:30, Micah Lee wrote:
> I've been working on a major OnionShare release that, among other
> things, will use v3 onion services by default. But it appears that
> either something in stem or in Tor deals with v3 onions differently
> than v2 onions, and causes a critical bug in OnionShare. It took a lot
> of work to track down exactly how to reproduce this bug, and I haven't
> opened an upstream issue for either stem or tor because I feel like I
> don't understand it enough yet.

Hi Micah and all,

Thanks for the heads-up!

I write here only to confirm that I can reproduce the issue without
stem (using bulb) [1]. So the underlying issue seems to be in
little-t-tor, not in stem.
Or, yes, maybe it's not even a bug (though it seems weird to me).

[1] https://github.com/nogoegst/onion-abort-issue

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] generate relay fingerprint without tor given the datadir/keys folder?

2017-02-03 Thread Ivan Markin
On Fri, Feb 03, 2017 at 04:12:00PM +, nusenu wrote:
> Hi,
> 
> given the files within the datadir/keys folder (without the
> datadir/fingerprint file), is there an easy way to generate the relay
> fingerprint? (using openssl?)
> 
> According to the spec [1] the fingerprint is the SHA1 hash of the public
> key. (I assume RSA pubkey)
> According to the tor man page [2] the RSA public key should be in
> keys/secret_id_key.
> 
> openssl rsa -in secret_id_key -pubout| ..? |sha1sum

Not as messy as I thought it would be, though:
$ openssl rsa -in secret_id_key -outform DER -RSAPublicKey_out | sha1

On GNU/Linux sha1 is probably sha1sum.
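
For reference, the same computation as a small Go sketch (a sketch only:
it assumes secret_id_key is a PKCS#1 "RSA PRIVATE KEY" PEM file, which is
what tor writes, and the path is an assumption):

package main

import (
	"crypto/sha1"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the relay identity key (the path is an assumption).
	raw, err := os.ReadFile("keys/secret_id_key")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in secret_id_key")
	}
	priv, err := x509.ParsePKCS1PrivateKey(block.Bytes)
	if err != nil {
		panic(err)
	}
	// DER-encode only the public half (PKCS#1 RSAPublicKey), matching
	// `openssl rsa -outform DER -RSAPublicKey_out`, then SHA-1 it:
	// per the spec, that is the relay fingerprint.
	der := x509.MarshalPKCS1PublicKey(&priv.PublicKey)
	digest := sha1.Sum(der)
	fmt.Printf("%X\n", digest[:])
}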

Happy hacking
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] generate relay fingerprint without tor given the datadir/keys folder?

2017-02-03 Thread Ivan Markin
-$ go get https://github.com/nogoegst/whatonion
+$ go get github.com/nogoegst/whatonion

Whoops, sorry.
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] generate relay fingerprint without tor given the datadir/keys folder?

2017-02-03 Thread Ivan Markin
On Fri, Feb 03, 2017 at 04:12:00PM +, nusenu wrote:
> Hi,
> 
> given the files within the datadir/keys folder (without the
> datadir/fingerprint file), is there an easy way to generate the relay
> fingerprint? (using openssl?)

I was sure it would be a mess to do it via the openssl utility. Some time
ago I wrote a tool for showing onion addresses for private key files.
Now I've pushed a feature to it that displays the relay fingerprint (because
a v2 onion address is just a truncated fingerprint, base32-encoded...).
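
To make that relationship concrete, a tiny Go sketch (assuming the
20-byte SHA-1 fingerprint has already been computed as for a relay):

package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

// onionAddressV2 derives a v2 onion address: the first 80 bits
// (10 bytes) of the SHA-1 key digest, base32-encoded and lowercased.
// 80 bits = 16 base32 characters, exactly the v2 address length.
func onionAddressV2(fingerprint [20]byte) string {
	return strings.ToLower(base32.StdEncoding.EncodeToString(fingerprint[:10])) + ".onion"
}

func main() {
	var fp [20]byte // assume the real 20-byte fingerprint goes here
	fmt.Println(onionAddressV2(fp))
}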

$ go get https://github.com/nogoegst/whatonion

$ whatonion -fp /path/to/secret_onion_key

Hope it helps, enjoy!
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Directory structure of prop224 onion services

2017-01-31 Thread Ivan Markin
On Wed, Feb 01, 2017 at 09:36:54AM +1100, teor wrote:
> 
> > On 1 Feb 2017, at 01:36, David Goulet  wrote:
> > 
> > On 31 Jan (09:02:35), teor wrote:
> >> 
> >>> On 27 Jan 2017, at 01:58, David Goulet  wrote:
> > However, there is kind of an issue rising from this. Imagine that v4 changes
> > the onion address format because new crypto. We'll end up with a problem 
> > where
> > the how to extract the version from the address is actually version
> > specific... A solution to that is that "whatever size/encoding the address 
> > is,
> > version will ALWAYS be the last 1 byte."
> > 
> > Thoughts?
> 
> I think it is a good idea to make the version the last byte of the
> address.

Sure, if it stays there it's effectively a version suffix (label). Another
version will then imply its own address length, encoding, checksum, etc.
 
> But I also think that a version file is a good idea to make it easy for
> applications to discover the on-disk version.
> 
> Otherwise, the algorithm would have to be something like:
> * look for a hostname file
> * read the first line
> * find the address in that line
> * if the address is N characters long, version 2
>   * do we promise we will never have addresses this long in future versions?
> * base32 decode that line
>   * do we promise addresses will always be base32?
> * read the last byte
>   * do we promise addresses will always have the version in the last byte?

This entire idea of doing something so that some apps can detect the onion
service version "just by looking at disk" looks like feature creep to me.
Why should we bother at all when we have an onion address that is
self-descriptive?*
It's really not a big deal to parse the onion address from a 'hostname'
file, especially for an app that wants to know the protocol version (what
for?).
Personally I believe that there are no(t so many) apps that would
mess with onion services without using an onion-related library such
as stem, which will make it as simple as one function call over the
onion address.

[*] Yeah, for that there should always be Base32 and a version byte at the
end.
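
For illustration, that "one function call" could look like the Go sketch
below (assuming the prop224 encoding from the sibling thread -
base32(pubkey || checksum || version), 35 bytes total - and skipping
checksum verification):

package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

// parseOnionV3 decodes a v3 onion address and returns the pubkey and
// the version byte -- relying on the promises discussed above:
// always base32, version always in the last byte.
func parseOnionV3(addr string) (pubkey []byte, version byte, err error) {
	raw, err := base32.StdEncoding.DecodeString(
		strings.ToUpper(strings.TrimSuffix(addr, ".onion")))
	if err != nil {
		return nil, 0, err
	}
	if len(raw) != 35 { // 32 (key) + 2 (checksum) + 1 (version)
		return nil, 0, fmt.Errorf("unexpected length %d", len(raw))
	}
	return raw[:32], raw[34], nil
}

func main() {
	_, version, err := parseOnionV3(
		"pg6mmjiyjmcrsslvykfwnntlaru7p5svn6y2ymmju6nubxndf4pscryd.onion")
	if err != nil {
		panic(err)
	}
	fmt.Println("onion service version:", version)
}
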
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Proposal for the encoding of prop224 onion addresses

2017-01-31 Thread Ivan Markin
On Tue, Jan 31, 2017 at 02:54:50PM +0200, George Kadianakis wrote:
> I merged my prop224 onion encoding patch to torspec just now, after
> fixing the bug that Ivan mentioned above.

Thanks!

btw, it's not clear how the H() output should be truncated to form the
checksum. Should it be the first 2 bytes or the last 2 bytes?
It should be specified in the definition of CHECKSUM (because the digest
length obviously is not 2 bytes):

- CHECKSUM = H(".onion checksum" || PUBKEY || VERSION)
+ CHECKSUM = H(".onion checksum" || PUBKEY || VERSION)[:2]


Also, it would be worthwhile to include examples with correct checksums.
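
A sketch of the whole derivation with the [:2] reading, in Go (assuming
H() is SHA3-256 and using golang.org/x/crypto/sha3):

package main

import (
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/sha3"
)

// onionAddressV3 encodes base32(PUBKEY || CHECKSUM || VERSION), where
// CHECKSUM = H(".onion checksum" || PUBKEY || VERSION)[:2] -- i.e. the
// *first* two bytes of the digest, per the suggested definition.
func onionAddressV3(pubkey [32]byte, version byte) string {
	h := sha3.New256()
	h.Write([]byte(".onion checksum"))
	h.Write(pubkey[:])
	h.Write([]byte{version})
	checksum := h.Sum(nil)[:2]

	raw := append(append(pubkey[:], checksum...), version) // 35 bytes
	return strings.ToLower(base32.StdEncoding.EncodeToString(raw)) + ".onion"
}

func main() {
	var pubkey [32]byte // assume a real ed25519 public key here
	fmt.Println(onionAddressV3(pubkey, 0x03))
}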

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Proposal for the encoding of prop224 onion addresses

2017-01-30 Thread Ivan Markin
On Tue, Jan 24, 2017 at 02:27:43PM +0200, George Kadianakis wrote:
> And given the above, here is the new microproposal:
> 
>   onion_address = base32(pubkey || checksum || version)
>   checksum = SHA3(".onion checksum" || pubkey || version)
> 
>   where:
>pubkey is 32 bytes ed25519 pubkey
>version is one byte (default value for prop224: '\x03')
>checksum hash is truncated to two bytes
> 
>   Here are a few example addresses (with broken checksum):
> 
>l5satjgud6gucryazcyvyvhuxhr74u6ygigiuyixe3a6ysis67ororad.onion
>btojiu7nu5y5iwut64eufevogqdw4wmqzugnoluw232r4t3ecsfv37ad.onion
>vckjr6bpchiahzhmtzslnl477hdfvwhzw7dmymz3s5lp64mwf6wfeqad.onion
>   
>   Checksum strength: The checksum has a false negative rate of 1/65536.
> 
>   Address handling: Clients handling onion addresses first parse the
>   version field, then extract pubkey, then verify checksum.
> 
> Let me know how you feel about this one. If people like it I will
> transcribe it to prop224.


FYI, I've implemented derivation and verification of v3 onion addresses
(https://github.com/nogoegst/onionutil/blob/master/address.go).
Some test vectors I got:

private key
onion address

33a7e5c16e0308a3e6a0e7f4a621b3caad9ed1acdb3f78369b1377c5e605027879bcc625184b05194975c28b66b66b0469f7f6556fb1ac3189a79b40dda32f1f
pg6mmjiyjmcrsslvykfwnntlaru7p5svn6y2ymmju6nubxndf4pscryd

62a70904f219a788f3c3c46b64c7bc6e800fed54079f2bb88c4fe3800fe2264593f6ad7b54b6391d2b78147a0b2e808e143780de07f1bda6ee7f052d2e9da67b
sp3k262uwy4r2k3ycr5awluarykdpag6a7y33jxop4cs2lu5uz5sseqd

8d31e643f3693944817172030bab236a818d4a1d1ecbd7b8ce3ccb005dfb15fbb8391d2003bb3bd285b035ac8eb30c80c4e2a29bb7a2f0ce0df8743c37ec3593
xa4r2iadxm55fbnqgwwi5mymqdcofiu3w6rpbtqn7b2dyn7mgwj64jyd

a7f82fdf8f93a299e947f302313971b6759b8140d86468ead9cc960474c274b5f2ba31b35974d6a5214360cc3098fc69cf0a51d9944672a8904c97cba06c3945
6k5ddm2zotlkkikdmdgdbgh4nhhquuozsrdhfkeqjsl4xidmhfc6ntqd

ba85d39f1e45ca1627a4d5e28fb891fa810669feec96a146551c87109376f01b07ec065de1daa2b12da5fc2d8b8ae516b23d4a2cbe00edc11c87636c2f3d2129
a7wamxpb3krlclnf7qwyxcxfc2zd2srmxyao3qi4q5rwylz5eeu35xqd
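
For completeness, a decode-and-verify sketch in Go that can be pointed at
the vectors above (assuming SHA3-256 for H() and the first-two-bytes
checksum truncation from the sibling thread):

package main

import (
	"bytes"
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/sha3"
)

// verifyOnionV3 re-derives the checksum of a v3 onion address and
// checks the version byte, mirroring the client-side address handling
// described in the proposal.
func verifyOnionV3(addr string) error {
	raw, err := base32.StdEncoding.DecodeString(
		strings.ToUpper(strings.TrimSuffix(addr, ".onion")))
	if err != nil {
		return err
	}
	if len(raw) != 35 {
		return fmt.Errorf("bad length %d", len(raw))
	}
	pubkey, checksum, version := raw[:32], raw[32:34], raw[34]
	if version != 0x03 {
		return fmt.Errorf("unknown version %d", version)
	}
	h := sha3.New256()
	h.Write([]byte(".onion checksum"))
	h.Write(pubkey)
	h.Write([]byte{version})
	if !bytes.Equal(h.Sum(nil)[:2], checksum) {
		return fmt.Errorf("checksum mismatch")
	}
	return nil
}

func main() {
	err := verifyOnionV3("pg6mmjiyjmcrsslvykfwnntlaru7p5svn6y2ymmju6nubxndf4pscryd.onion")
	fmt.Println(err) // <nil> if the address is well-formed
}
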
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Directory structure of prop224 onion services

2017-01-30 Thread Ivan Markin
On Tue, Jan 31, 2017 at 09:02:35AM +1100, teor wrote:
> How does an application tell the difference between a v2 and v3
> directory?
> 
> What's the supported method, that we will continue to support in
> future, regardless of key or algorithm changes?

I guess by looking at the address and checking its validity, and/or by
looking at the version field in v3 (see the recent discussion on onion
address encoding). So if there are crypto changes, there will be
version field changes as well.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Proposal for the encoding of prop224 onion addresses

2017-01-30 Thread Ivan Markin
Hi,

George Kadianakis wrote:
> I made a torspec branch that alters prop224 accordingly:
>   
> https://gitweb.torproject.org/user/asn/torspec.git/commit/?h=prop224-onion-address&id=50ffab9903880acf55fe387f4d509ecb2aa17f95

It seems that the SHA3 digest length is missing for the onion address
generation. I guess (?) that it is supposed to be SHA3-256, but it
definitely should be specified here.
I think it's just a typo since there is a definition of H() above.

Thanks,
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [RFC] Proposal for the encoding of prop224 onion addresses

2017-01-23 Thread Ivan Markin
Hi George,

George Kadianakis:
>   What should we do in Tor? My suggestion is to use '\x98' as the default
>   version value which prefixes all addresses with 't' (as in Tor).  Check
>   the examples I cited above.
> 
>   An alternative is to turn the scheme to:
> onion_address = base32(pubkey + checksum + version)
>   where the version byte is at the end with no effect at usability.
> 
>   A heavier alternative would be to have two bytes of version so that we
>   can just prefix them all with 'tor'...

Yes, it is definitely a good idea to introduce a version octet.
Though it seems pretty redundant to me to prefix onion addresses with
't'/'tor'. I think that the version octet should increment as you
described above.
I also think that the version should be placed at the end of the address.
This would make addresses more distinguishable from each other.


> [D2] Checksum strength:
> 
>   In the suggested scheme we use a hash-based checksum of two bytes (16 
> bits).
>   This means that in case of an address typo, we have 1/65536 probability
>   to not detect the error (false negative). It also means that after 256
>   typos we will have 50% probability to miss an error (happy birthday!).
> 
>   I feel like the above numbers are pretty good given the small checksum 
> size.
> 
>   The alternative would be to make the checksum four bytes (like in
>   Bitcoin).  This would _greatly_ increase the strength of our checksum 
> but
>   it would also increase our address length by 4 base32 characters (and
>   also force us to remove leading padding from base32 output). This is how
>   these 60-character addresses look like:

Is that necessary? Two bytes seem to be more than enough for typo-level
errors.

> [D3] Do we like base32???
> 
>   In this proposal I suggest we keep the base32 encoding since we've been
>   using it for a while; but this is the perfect time to switch if we feel
>   the need to.
> 
>   For example, Bitcoin is using base58 which is much more compact than
>   base32, and also has much better UX properties than base64:
>  https://en.bitcoin.it/wiki/Base58Check_encoding#Background

I personally consider both base64 and base58 to have poor UX, and agree
with Linda. Mostly it's because they are case-sensitive - this makes
them too hard to type in. Also, base58 has a non-integer bit capacity per
character, which makes implementations way more complicated and error-prone
(we've seen enough bugs even in b32 and b64 implementations).

---

I had an idea recently that having variable-length, flexible addresses in
a fashion similar to TLVs in the OTR protocol would be nice. In that case
there are no length constraints at all, so we may use keys of
different types/sizes (pq?), embed authentication data, etc, etc.

Type:   1 byte
Length: 1 byte (=up to 255 bytes= 2040 bits)
Value:  Length bytes

0x01 0x20 [0x01..0xff] 0x33 0x02 [0x11 0x99] ".onion"
  T    T       T         T    T       T
  |    |       |         |    |       +-- two-byte checksum
  |    |       |         |    +---------- length of the checksum
  |    |       |         +--------------- checksum type
  |    |       +------------------------- ed25519 pk
  |    +--------------------------------- size of pk (32 bytes)
  +-------------------------------------- prop224 identity key type

So, its length is now 1+1+32 + 1+1+2 = 38 bytes = 61 base32 chars with one
(1) unused bit.
E.g.:
obdczrndtadzdhb6iyemnxf7f4i6x7yojnunarlrvt2virtmrecmwgx5golqe.onion

Despite using more bytes (type/length), it provides freedom for future
adjustments (e.g. another checksum/key algo). Also, these TLVs are
commutative, so changing their order has no effect (maybe it should have,
like in DER?). As a side effect, plain old onion addresses can be encoded
here as well (even with a checksum).
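
A Go sketch of encoding such an address (the 0x01/0x33 type codes and the
checksum value are the made-up ones from the example above):

package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

// tlv appends one Type-Length-Value record to buf.
func tlv(buf []byte, typ byte, value []byte) []byte {
	return append(append(buf, typ, byte(len(value))), value...)
}

func main() {
	var pubkey [32]byte            // hypothetical ed25519 pk (type 0x01)
	checksum := []byte{0x11, 0x99} // hypothetical checksum (type 0x33)

	var raw []byte
	raw = tlv(raw, 0x01, pubkey[:]) // prop224 identity key type
	raw = tlv(raw, 0x33, checksum)  // checksum type

	// 1+1+32 + 1+1+2 = 38 bytes -> 61 unpadded base32 chars.
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	fmt.Println(strings.ToLower(enc.EncodeToString(raw)) + ".onion")
}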

I'm not sure whether it's as reasonable as it seems to me.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [Proposal] A simple way to make Tor-Browser-Bundle more portable and secure

2016-10-30 Thread Ivan Markin
Yawning Angel:
> Having to rebuild the browser when the libc needs to be updated seems
> terrible as well.

Why is it terrible?
Using static linking drastically reduces overall *complexity*
(~1/security). If you do use libc code in your stuff, then it's a part of
that stuff. If there is a bug in libc - just rebuild your broken
software. It either works or it doesn't. Dynamic linking leaves it in a
superposition state.

I consider a browser that takes >30 minutes to build way more terrible.

From
https://wayback.archive.org/web/20090525150626/http://blog.garbe.us/2008/02/08/01_Static_linking/
:

> I prefer static linking:

> Executing statically linked executables is much faster, because there
> are no expensive shared object lookups during exec().
> 
> Statically linked executables are portable, long lasting and fail
> safe to ABI changes -- they will run on the same architecture even in
> 10 years time. Never expect errors like
> /lib/ssa/libstdc++.so.6:version 'GLIBCXX_3.4.4' not found again.
> 
> Statically linked executables use less disk space. Most executables
> use only a small subset of the functions provided by a static library
> -- so there is absolutely no reason to link complete static libraries
> into a static executable (e.g. spoken for a hello_world.c you only
> need to link vprintf statically into the executable, not the whole
> static libc!). The contrary is true for dynamic libraries -- you
> always use the whole library, regardless what functions you are
> using.
> 
> Statically linked executables consume less memory because their
> binary size is smaller and they only map the functions they depend on
> into memory (contrary to dynamic libs).
> 
> The reason why dynamic linking has been invented was not to decrease
> the general executable sizes or to save memory consumption, or to
> speed up the exec() -- but to allow changing code during runtime --
> and that's the real purpose of dynamic linking, we shouldn't forget
> that.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)

2016-09-29 Thread Ivan Markin
teor:
> Submit a patch on the ticket that changes the interval to 5 seconds, and
> see if it gets accepted before the code freeze:
> https://trac.torproject.org/projects/tor/ticket/20082
> 
> It would help to come up with a reasoned argument why 5 seconds is better
> than 30 seconds, and why it can't possibly be any worse under any
> circumstances.

Hmm, okay. I've created this ticket and submitted a patch there. :)
I've just lost track of the point of this discussion and of what steps
should be taken next with this delay.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)

2016-09-27 Thread Ivan Markin
Hi tor-dev@,

Ivan Markin:
> IMO an onion service should publish its first descriptor instantly. If
> something happens afterwards and one has to fix the descriptor - deal
> with it with backoff/delay to prevent DoS on HSDirs.
> I think that most of the ephemeral services are not going to use more
> than one descriptor. Moreover, they are going to use just one
> introduction point. So it's not a big deal if one of the published IPs
> fails since a client is going to use one of the rest.
> Also note the reachability issue I mentioned.
> 
> teor:
> > It would be nice to have this change in 0.2.9 for Single Onion
> > Services and I think also for HSs with OnionBalance

Can we actually have this in 0.2.9? If yes, how exactly should we do it?

This issue gets more and more annoying, mostly because of the
(un-)reachability problem.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] v2 to v3 onion service transition

2016-09-14 Thread Ivan Markin
David Goulet:
> The thing that worries me about the approach of:
> "Publishes somewhere their v3 address and cross-cert*."
> 
> ... is the amount of more traffic and complexity we had to the network for
> such a thing. I for sure don't want Tor to maintain some sorts of registry
> here just for a transition period that ultimately will die. Anyway, Tor
> shouldn't have to provide infrastructure for the actual network to work well.

No-no-no! By saying "somewhere" I meant "manually and out-of-band" (e.g.
on the site itself, via OTR chat, wall paintings, etc). There is no
mechanism to notify a Tor user (hopefully!) about such a change. Tor just
provides a transport and nothing else.

> What if the operator has a torrc option that says "Please use this v2 HS and
> cross certify it with the v3.". Out of my head (just an example):
> 
> HiddenServiceDir /my/service/one
> HiddenServicePort ...
> HiddenServiceLinkedWithV3 1
> 
> HiddenServiceDir /my/service/two
> HiddenServicePort ...
> HiddenServiceLinkedWithV3 0 /* Would be off by default to avoid 
> linkability. */

Personally I'm absolutely against any new torrc options. It's hard to
find this file, edit it, and restart tor afterwards (okay-okay, I'm biased
towards Control and stateless tor here).
It also introduces
LengthyStringsThatAreTooHardToReadAndWhatIsToSayAboutRemembering.

> We should also consider if it's really Tor's job to do this. Maybe it's OK to
> leave this job to the operators to deal with the v2 <-> v3 advertisement by
> themselves?
> 
> My guts tell me that I would like to have v3 tied to v2 as little as possible
> really but I also want current .onion operator to be able to provide maximum
> security for their users _especially_ when a .onion is very difficult to give
> around in some harsh political context.

Agreed. To make what I mean clearer:

I meant a userspace tool (maybe embedded into little-t-tor, or into TBB
for verification) that takes a v2 private key and a v3 address and creates
a v2->v3 cross-certification (a signature):

$ onionxcert /path/to/v2-private-key v3-address.onion
base32-encoded-rsa-signature

It also takes the v3 private key and signs this cross-certification:

$ onionxcert /path/to/v3-private-key base32-encoded-rsa-signature
base32-encoded-cross-cert

Afterwards, the operator publishes (as described above) this document of
1024 + 512 = 1536 bits (~308 base32 chars) along with the v3 onion address.
It could even be "human-readable" (copypastable):

llamanymityx4fi3l6x2gyzmtmgxjyqyorj9qsb5r543izcwymle.onion
base32-encoded-cross-cert-2gyzmtmgxjyqyorj9qsb5r543izcwy
mh2gyzmtmgxjyqyorj9qsb5r543izcwyml2gyzmtmgxjyqyorj9qsb5r
543izcwyml2gyzmtmgxjyqyorj9qsb5r543izcwyml2gyzmtmgxjyqyo
rj9qsb5r543izcwyml2gyzmtmgxjyqyorj9qsb5r543izcwyml2gyzmt
mgxjyqyorj9qsb5r543izcwymlml2gyzmtmgxjyqyorj9qsb5r543izc
wymlml2gyzmtmgxjyqyorj9qsb

End user verifies it like this:

$ onionxcert -v2 grapelookcorewwwi -v3
llamanymityx4fi3l6x2gyzmtmgxjyqyorj9qsb5r543izcwymle
base32-encoded-cross-cert
OK.

[Yes, it requires an HSDir fetch to get full RSA key].

Then it can also be included into the v2 descriptor if the operator wishes
so (torrc option? :/ modified private key file?).

After all this stuff has happened, we can make a transparent connection over
the verified v3 onion service that looks to the end user like it's still the
v2 address. At some point users update their address books and get happy.
We can also perform a funny v2 trick - publish "alias" v2 descriptors
without any intropoints, thus running no actual v2 service.
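
onionxcert above is hypothetical, but the two signing steps could be
sketched in Go roughly like this (hashing the v3 address with SHA-256
before the RSA signature is my assumption, not part of any spec):

package main

import (
	"crypto"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// crossCert sketches the flow above: the v2 (RSA1024) identity key
// signs the v3 address, then the v3 (ed25519) key countersigns that
// RSA signature. 128 + 64 bytes = 1536 bits, ~308 base32 chars.
func crossCert(v2Key *rsa.PrivateKey, v3Key ed25519.PrivateKey, v3Addr string) ([]byte, error) {
	digest := sha256.Sum256([]byte(v3Addr))
	rsaSig, err := rsa.SignPKCS1v15(rand.Reader, v2Key, crypto.SHA256, digest[:])
	if err != nil {
		return nil, err
	}
	return append(rsaSig, ed25519.Sign(v3Key, rsaSig)...), nil
}

func main() {
	v2Key, _ := rsa.GenerateKey(rand.Reader, 1024) // stand-ins for real keys
	_, v3Key, _ := ed25519.GenerateKey(rand.Reader)
	cert, err := crossCert(v2Key, v3Key, "v3-address.onion")
	if err != nil {
		panic(err)
	}
	fmt.Printf("cross-cert: %d bytes\n", len(cert))
}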


Thoughts, comments?

> ... that only informed power user will be able to understand what the
> hell is going on (but that we can maybe fix with good documentation,
> blog post and good practices guide).

It's better not to break. :)

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] v2 to v3 onion service transition (was: "old style" hidden services after Prop224)

2016-09-13 Thread Ivan Markin
Forking this thread to discuss onion service transition path.

David Goulet:
> The question arise now. Someone running a .onion upgrades her tor that
> supports v3, should we allow v2 to continue running or transition it to v3 or
> make them both happy together...? We haven't discuss this in depth and thus we
> need to come to a decision before we end up implementating this (which is
> _soon_). I personally could think that we probably want to offer a transition
> path and thus have maybe a torrc option that controls that behavior meaning
> allowing v2 for which we enable by default at first and then a subsequent Tor
> release will disable it so the user would have to explicitely set it to
> continue running v2 .onion and then finally rip off v2 entirely in an other
> release thus offering a deprecation path.

We can add arbitrary fields into descriptors. So we can build up kind-of
"aliases". What comes to mind first:

Onion Service Operator
  Publishes v2 descriptor with v3 cross-certification.
  Publishes somewhere their v3 address and cross-cert*.
v2-only client
  Uses v2 service.
v3-compatible client
  Takes v3 address from a descriptor for requested v2 address.
  Makes a connection to v3 address that looks like a connection
  to v2 for the end user. There should be no v3->v2 downgrade
  option. One-way ticket.
v3-only client
  Uses v3 service.

Also, there should not be any torrc options, I think. There is no
behavior to control so there is no need to make this transition even
more sophisticated.

* There should be a simple tool to generate and verify
cross-certifications. Like signify(1) from OpenBSD. Or even simpler.
Probably something that even built into TBB.

I don't know whether such a transparent-connection thing is secure or not.
It seems to be as secure as v2 services.
Thoughts?

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] "old style" hidden services after Prop224

2016-09-13 Thread Ivan Markin
I agree with both s7r and Lunar here. Just want to add some bits:

> I disagree with your approach, for comparison's sake, let's say v2 is IPv4
> and v3 is IPv6. When IPV6 was introduced, IPv4 was kept around (and still
> is to this day, although IPv6 is arguably a much better solution in a lot
> of areas). Expecting _everyone_ to just switch to IPv6 or get cut off is a
> bit of a pipe dream.

The main goal of the Tor Project is privacy and security _for everyone by
default_. If Tor is going to end up like PGP... it's a nightmare. ["Hey,
are you using the new onion or the old onion?", "Hm, I don't know. Maybe
it's better to go with the normal Internet? At least it works.".]
If you don't care much about security/privacy/anonymity and are looking for
a "crypto-identified network" with NAT traversal - Tor is not _the_
network you're looking for. And yes, IPv4 is still around since it's
"just technology" and isn't tied to security. One can use outdated
technology (I do, a lot) but not _outdated security_ ["outdated
security" can hardly be called security - a white horse is not a horse].
So ISPs can use IPv4 for any reason until the Sun becomes a supernova and
may not care about CGNAT and all this crazy stuff - it doesn't _threaten_
anyone* except their budget.

> Tor hidden services are a bit "special" because it's hard to poll their
> owners on their intentions. Some hidden service operators have gone to
> great lengths to advertise their .onion URLs (v2-style), some have even
> generated vanity addresses (like Facebook). Forcing a switch to v3 at some
> point presents a very interesting opportunity for phishing because suddenly
> a service known and trusted at some address (as opaque as it is) would need
> to move to an even more opaque address, with no way to determine if the two
> are really related, run by the same operator, etc. If I were a LE agency, I
> would immediately grab v3 hidden services, proxy content to existing v2
> services and advertise my v3 URL everywhere, then happily monitor traffic.

It's not going to happen. You have your v2 private key and your new v3
private key. Now you cross-certify them** and teach your users to use the
new address.
If you've spent zillions of core-hours on generating a nice-looking
address on top of RSA1024 (which can be factored now [1]) and truncated (!)
SHA-1 (someone can probably afford collisions now [2]) - congratulations.

NSA:
> Attacks always get better; they never get worse.

Also, as you've mentioned, one shouldn't rely on secret keys alone. It
shouldn't be an apocalypse if you lose/compromise your key.

> All I'm saying is don't remove the v2 services, even if you choose to no
> longer support them. Some operators (like my company) may choose to
> continue to patch the v2 areas if required and release the patches to the
> community at large. Forcing us out altogether would make us drop Tor and
> start using an alternative network or expending the additional effort to
> make our services network-agnostic (so no more good PR for Tor).

No problem, just run your personal Tor network in your corporate
environment - Tor is free software. It's not as hard as you may think.
And you're always welcome to contribute back. Also it's good since you
may find out some edge-cases or complicated bugs that could harm The Tor
Network at some point.
Just for the record, probably a good choice for your use case here is to
stick with cjdns [3]. It provides IPv6 "cryptoaddresses", mesh
networking and other cool stuff. You don't have to have authorities and
don't have to use OnionCat. It just works. But it's different from Tor
in many ways. The major difference: it's not anonymous.


* Yes, it does break the end-to-end principle and makes the Internet less
fault-resistant, etc, etc. Anyway, the Internet is already broken. Deal
with it.
** It definitely needs a proposal. :) Maybe I missed one? Something like
xxx-rsa1024-rend-id-migration.txt.

[1] https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf

https://media.ccc.de/v/32c3-7288-logjam_diffie-hellman_discrete_logs_the_nsa_and_you
[2] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[3] https://github.com/cjdelisle/cjdns
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] "old style" hidden services after Prop224

2016-09-12 Thread Ivan Markin
Razvan Dragomirescu:
> Thank you Ivan! I still dont see a very good reason to _replace_ the 
> current HS system with the one in Prop224 and not run the two in
> parallel.

For me it's because it would make the overall system more complex and thus
error-prone and accordingly less secure. It's like keeping RC4, SHA1, and
3DES in TLS and being vulnerable to downgrade attacks and all kinds of
stuff like Sweet32 and LogJam (export-grade onions, haha).

> Why not let the client decide what format and security
> features it wants for its services?

It's like dealing with plethora of ciphers and hashes in GnuPG:

https://moxie.org/blog/gpg-and-me/:
> It’s up to the user whether the underlying cipher is SERPENT or IDEA
> or TwoFish. The GnuPG man page is over sixteen thousand words long;
> for comparison, the novel Fahrenheit 451 is only 40k words.

When a system is complex in that way, someone is going to make huge
mistake(s). If crypto is bad, just put it into a museum.

So I don't see _any_ reason to maintain an outdated and less secure system
while we have a better option (once we have deployed it).

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] "old style" hidden services after Prop224

2016-09-12 Thread Ivan Markin
Hi Razvan,

Razvan Dragomirescu:
> I've developed against the current hidden service infrastructure and it
> appears to work like a charm, but I'm a bit worried about Prop224. That
> will break both the OnionBalance-like service re-registration that I'm
> using _and_ the OnionCat HS to IP6 mapping. I know that efforts are in
> place to upgrade the two in view of Prop224 but I'm wondering if there's
> any good reason to drop support for "old style" hidden services once
> Prop224 is fully deployed.

No worries, prop224 is not going to break OnionBalance-like
re-registration - it's just going to make it more complicated. One will
have to perform cross-certification trickery in order to reassemble the
intropoints of another onion service. We want to avoid this plain
"re-registration" since anyone can do it (for details see #15951 [1]).
The way out is to add a feature to little-t-tor and to rewrite tools
like OnionBalance, avant, etc. to fetch the intropoint list from the
backend services directly (via ControlPort or a special onion address),
thus avoiding posting useless descriptors to HSDirs only to fetch them
from HSDirs again.

Yes, in the case of the OnionCat onion<->IPv6 mapping we've got a problem.
It's just because the address length is 80 bits for legacy, 256 bits for
prop224, and <128 bits for IPv6. So one has to use something additional
(like the DHT in cjdns) to "resolve" a short IPv6 address into the larger
Ed25519 key. Apparently IPv6 is good, but its addresses are not large
enough to be used as public keys. IMO we need something better* for this.
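
For the record, the legacy mapping is trivial precisely because
48 + 80 = 128; a Go sketch (assuming OnionCat's fd87:d87e:eb43::/48
prefix, as far as I remember):

package main

import (
	"encoding/base32"
	"fmt"
	"net"
	"strings"
)

// onionToIPv6 maps an 80-bit v2 onion address into IPv6 the OnionCat
// way: 48-bit prefix + 80-bit onion ID = 128 bits. A 256-bit prop224
// key simply cannot fit, hence the need for an extra resolution step.
func onionToIPv6(v2addr string) (net.IP, error) {
	raw, err := base32.StdEncoding.DecodeString(
		strings.ToUpper(strings.TrimSuffix(v2addr, ".onion")))
	if err != nil {
		return nil, err
	}
	prefix := net.IP{0xfd, 0x87, 0xd8, 0x7e, 0xeb, 0x43}
	return append(prefix, raw...), nil // 6 + 10 bytes = 128 bits
}

func main() {
	ip, err := onionToIPv6("facebookcorewwwi.onion")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}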

Also, you'll likely have issues with the migration from RSA1024 to Ed25519
on your smartcards. Most (Java) cards I know have a built-in RSA engine,
and any additional crypto may not fit in or may be slow.

So my two cents is to migrate to prop224 as soon as possible and make
everyone secure (RSA1024 and SHA1 are bad).

* Maybe just hostnames with variable length?
[1] https://trac.torproject.org/projects/tor/ticket/15951
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)

2016-09-08 Thread Ivan Markin
teor:
>>  * Can we set it back to 5s thus avoiding issues that can arise after
>> removing the delay?
> 
> Let's base the delay on the amount of time it takes for a HS descriptor to 
> stabilise.
> This is the situation we're trying to prevent:
> * the HS opens all its intro point circuits
> * it sends its descriptor
> * one of the intro points fails
> * it sends another descriptor
> 
> If this hardly ever happens in the first 30 seconds, we likely don't need any 
> delay at all.
> But how could we measure how frequent this is, and how long it takes?

IMO an onion service should publish its first descriptor instantly. If
something happens afterwards and one has to fix the descriptor - deal
with it with backoff/delay to prevent DoS on HSDirs.
I think that most of the ephemeral services are not going to use more
than one descriptor. Moreover, they are going to use just one
introduction point. So it's not a big deal if one of the published IPs
fails since a client is going to use one of the rest.
Also note the reachability issue I mentioned.

> It would be nice to have this change in 0.2.9 for Single Onion
> Services and I think also for HSs with OnionBalance

Same here. Most of the stuff that uses ADD_ONION is meant to set up onion
services instantly but has to wait 'until the descriptor gets published'.
30s is too much.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Reducing initial onion descriptor upload delay (down to 0s?)

2016-09-07 Thread Ivan Markin
Hi tor-dev@!

Moving the discussion on the future of rendinitialpostdelay from ticket
#20082 [1] here.

Transcribing the issue:
> At the moment descriptor is getting posted at 
> MIN_REND_INITIAL_POST_DELAY (30) seconds after onion service 
> initialization. For the use case of real-time one-time services
> (like OnionShare, etc) one has to wait for 30 seconds until this
> onion service can be reached. Besides, if a client tries to reach
> the service before its descriptor is ever published, tor client gets 
> stuck preventing user from reaching this service after descriptor is 
> published. Like this: Could not pick one of the responsible hidden 
> service directories to fetch descriptors, because we already tried 
> them all unsuccessfully.


> It has jumped to 30s from 5s due to "load on authorities". 
> 11d89141ac0ae0ff371e8da79abe218474e7365c:
> 
> +  o Minor bugfixes (hidden services): +- Upload hidden service
> descriptors slightly less often, to reduce +  load on
> authorities.
> 
> "Load on authorities" is not the point anymore because we don't use
> V0 since 0.2.2.1-alpha. Thus I think it's safe to drop it back to at
> least 5s (3s?) for all services. Or even remove it at all?

The questions are:
  * Can we drop this delay? Why?
  * Can we set it back to 5s thus avoiding issues that can arise after
removing the delay?
  * Should we do something now or postpone it to prop224?

[1] https://trac.torproject.org/projects/tor/ticket/20082
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] [rfc] Usability Improvements for Atlas

2016-08-02 Thread Ivan Markin
Ivan Markin:
> Hi,
> 
> I've also recently improved Atlas, but it changes a major part of the user
> experience. So this is a kind of Request For Comments.
> 
> Most of the changes reflect my vision of how it should all look. The
> most notable changes are:
> * There is no super-wide list of relays that doesn't fit even into the Tor
> Browser window (panels with up-status indication instead)
> * Responsive design for narrow screens
> * Updated navigation bar with new version of Bootstrap
> * Vector icons for flags and other
> * Stretching of the graphs
> 
> You can review all the changes at [1] and deployed version at [2].
> 
> Please let me know if you have any ideas/suggestions.
> 
> 
> P.S. Some functions like list sorting, display range, detailed search,
> and maybe more are broken/gone now. I'm wondering whether these functions
> are useful.


Sorry, my onion service has been down for a while. For the sake of
simplicity I've deployed Atlas on GitHub [1]. The code is in the repo
there [2].

[1] https://nogoegst.github.io/atlas
[2] https://github.com/nogoegst/atlas
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Usability Improvements for Atlas (was Re: Globe is now retired)

2016-07-02 Thread Ivan Markin
Hi,

I've also recently improved Atlas, but it changes a major part of the user
experience. So this is a kind of Request For Comments.

Most of the changes reflect my vision of how it should all look. The
most notable changes are:
* There is no super-wide list of relays that doesn't fit even into the Tor
Browser window (panels with up-status indication instead)
* Responsive design for narrow screens
* Updated navigation bar with new version of Bootstrap
* Vector icons for flags and other
* Stretching of the graphs

You can review all the changes at [1] and deployed version at [2].

Please let me know if you have any ideas/suggestions.


P.S. Some functions like list sorting, display range, detailed search,
and maybe more are broken/gone now. I'm wondering whether these functions
are useful.


[1] http://hartwellnogoegst.onion/atlas/log/?h=new-design
[2] http://hartwellnogoegst.onion/deploy/atlas-new-design/

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Improving hidden services on mobile phones

2016-07-01 Thread Ivan Markin
Hi Chris,

Chris Ballinger:
> Glad to see more work on this! For a while I've been toying with the idea
> of making a one-button Android XMPP server app that uses Tor HS to solve
> the CGNAT reachability issue.

Thanks for your interest!
If you're building a messaging system based on Onion Services, please
have a look at Ricochet [1]. It would be absolutely awesome if someone
ported/implemented/improved it on Android!

* The problem with XMPP is that there is a central system (the server) to
which metadata is exposed in order for it to work. It's bad. :)


[1] https://github.com/ricochet-im/ricochet
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] is the consensus document unpredictable / unique?

2016-06-26 Thread Ivan Markin
s7r:
> And if the private key is on a smartcard, and the smartcard is plugged
> in the host all the time, what's the gain? I am not saying there isn't
> any, I just don't see it at this moment. One I can think of is that
> malware and/or someone hacking can't copy the private key and hijack the
> hidden service, but the risk remains in case someone physically seizes
> the server ("host").

Not necessarily. If you do a setup which drops power to the smartcard
in case of seizure* (i.e. disconnects it), then you're going to be safe™.
You have to have a PIN-protected card for this to work.

* A bit tricky, I know.
--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] getting reliable time-period without a clock

2016-06-20 Thread Ivan Markin
Hello Razvan,

Razvan Dragomirescu:
> I am working on a smartcard-based hidden service publishing solution and
> since I'm tying the hidden service descriptor to the physical smartcard, I
> want to make sure that the host is not asking the smartcard to generate
> hidden service descriptors in advance, to be used when the card is no
> longer inserted into the host/reader.

Just for the record, currently this is a problem that is going to be solved
by introducing shared randomness [1].

> The smartcard has no internal clock or time source and it's not supposed to
> trust the host it's inserted into, so I need an external trusted source
> that indicates the current time period. I'm not 100% familiar with the Tor
> protocol (minus the hidden service parts I've been reading about recently),
> so is there any way to get a feel of what the network thinks is the current
> time or the current time-period? An idea would be to fetch the Facebook
> hidden service descriptor or some other trusted 3rd party hidden service at
> a known address and see if the time period given to the smartcard is valid
> for that Facebook descriptor too. An operator could set up  one or more
> trusted hidden services to match against the time-period (inside the
> smartcard) before it signs a given descriptor.

Hmm, you seem to trust the untrusted host here, since you trust the tor
daemon running on the host for clock fetching.
Anyway, you're proposing to offload more tor logic onto the smartcard,
thus making it a trusted host. That seems unreasonable to me given the
tiny amount of resources it has. The only function of a smartcard is to
store private keys in a secure manner (never expose them, only use them).

I think that a possible solution to this is to have some trusted
air-gapped host with the smartcard that generates chunks of signed
descriptors. This trusted host can check whether the digest is legit. Then
you can transfer the descriptors to a "postman" machine which just uploads
them.
[ha-ha, ironically, I'm creating exactly such a setup right now. I'm
transferring signed digests via UART]


[1]
https://gitweb.torproject.org/torspec.git/tree/proposals/250-commit-reveal-consensus.txt
--
Healthy bulbs,
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2016-06-03 Thread Ivan Markin
Razvan Dragomirescu:
> Hey Evan, your hidden service appears to be down. Are there any mirrors of
> the code or can you bring it back online? My project is starting to take
> shape (took your advice and I'm using OpenPGP for now - may move to my own
> implementation in the future, but I want to create a small MVP ASAP).

Oops, should be fixed now. Thanks for noticing.

P.S. Please use direct email for this kind of issue, so as not to litter
the list.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] tor on GNU/Windows

2016-05-28 Thread Ivan Markin
Ian Goldberg:
> I had a crazy thought the other day: has anyone tried running the Linux
> version of tor (client or node) on the new GNU/Windows (or whatever
> they're officially calling their Linux compatability layer)?

Just for the record, the name of the compatibility layer is
"Windows Subsystem for Linux", or WSL. [1]

[1] https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2016-05-24 Thread Ivan Markin
Razvan Dragomirescu:
> Thanks Evan for the .onion links, I'll take a look. I'm still collecting
> data, testing hardware, etc. BTW, one of the cheapest options for this is
> http://www.ftsafe.com/product/epass/eJavaToken - $12 at
> http://javacardos.com/store/smartcard_eJavaToken.php . Unfortunately it has
> a bug that prevents OpenPGP from running (something to do with signature
> padding, I didn't look much into it). My plan is to write a very small
> JavaCard-based applet to load onto the card - that only does RSA key
> generation and signing, nothing else. Easy to write and easy to audit.

You can write it yourself, but a working solution is already there. It's
possible to flash a Java applet onto almost any common JavaCard (they're
pretty cheap). Have a look at the nice guide by the Subgraph team [1].
For the purpose of digest signing you can easily modify the applet to
have more than two signing keys (keep in mind that there are some card
limits).


[1] https://subgraph.com/sgos/documentation/smartcards/index.en.html
--
Have fun,
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Configuring Single Onion Services

2016-04-11 Thread Ivan Markin
Tim Wilson-Brown - teor:
> We tried adding NonAnonymous to the name, and it was unwieldy. And it
> also confuses the semantics: what if we have multiple types of
> SingleOnionMode?

If we do have multiple types of SingleOnionMode, we should specify the
type as a value of the NonAnonymousOnionServiceMode option (or similar).
Why not?

Sorry for my bias towards NonAnonymousOnions, but SingleOnion really
confuses me.

> Also, see my reply to David, where I explain that NonAnonymousMode
> applies to the entire tor instance, including things that are totally
> unrelated to Single Onion Services, like whether you can open a
> SOCKSPort or run Tor2Web.

That's why I propose to use "NonAnonymousOnionServiceMode 1" instead of
just "NonAnonymousMode 1".


> We could add a compilation option --enable-single-onion-mode instead
> of NonAnonymousMode, but I think making Single Onion Service
> operators compile their own tor is unnecessary.

Gosh, it would be really inconvenient and nontransparent. And error-prone.

-- 
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Configuring Single Onion Services

2016-04-11 Thread Ivan Markin
David Goulet:
> It's a bit weird to have to enable two options for one feature (single onion)
> BUT I like the double torrc option forcing the users to understand what's
> going on (also adding semantic to the config file).
> 
> Bikesheding: the name though could be a bit misleading. What if that tor
> process is also used as a client to "wget" stuff on the server for instance.
> Won't I be confused if NonAnonymousMode is _set_ not knowing it applies to
> what? Idea: "HiddenServiceNonAnonymousMode 1". Pretty explicit that it's for
> the service.

I don't think using a doubled option will force people to understand
what's happening. The most probable outcome is that the two-option
requirement will just look "strange". It's strange because it's vague.
I agree with David: something like "NonAnonymousOnionServiceMode 1"
should be enough. It looks pretty clear and simple.
[NB: a service cannot be Hidden and NonAnonymous at the same time :) ]

-- 
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Is it possible to leak huge load of data over onions?

2016-04-03 Thread Ivan Markin
NB: Sorry for breaking the threading. Replying to the right message.

dawuud:

> Alice and Bob can share lots of files and they can do so with their
> Tor onion services. They should be able to exchange files without
> requiring them to be online at the same time. Are you sure you've
> choosen the right model for file sharing?

I haven't chosen any storage model. I'm just wondering about the technical
capability of Tor to act as an _anonymous_ transport for this data.
"Will one be anonymous when transmitting a big amount of data?"
"What are the limits?"
"What steps should the source take to be safe?"

> If Alice and Bob share a confidential, authenticated communications
> channel then they can use that to exchange key material and secret
> connection information. That should be enough to bootstrap the
> exchange of large amounts of documents:

The Internet is not confidential. Quite the opposite.

> Anyone who hacks the storage servers she is operating gets to see
> some interesting and useful metadata such as the size of the files
> and what time they are read; not nearly as bad as a total loss in
> confidentiality.

Yes, but there are many more adversaries. Any AS near the endpoints
poses a big threat.

> No that's not necessarily correct; if the drives contain ciphertext
> and the key was not compromised then the situation would not be
> risky.

The source can easily fail via fingerprints, chemical
traces, the serial number of the hard drive (with proprietary firmware!),
the place of origin, and other 'physical' metadata. It's not "just
ciphertext" in a vacuum.

--
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Is it possible to leak huge load of data over onions?

2016-04-03 Thread Ivan Markin
Recently someone leaked an enormous amount of docs (2.6 TiB) to
journalists [1]. It's still hard to do such a thing even over the plain old
Internet. It's highly possible that these docs were transferred on a
physical hard drive, even though doing so is really *risky*.

Anyways, in the framework of anonymous whistleblowing, i.e. SecureDrop
and Tor specifically, it seems to be an interesting case. I'm wondering
about the following aspects:

o   Even if we use exit mode/non-anonymous onions (RSOS),
is such leaking reliable? The primary issue here
is the time of transmission. It's much longer than any
time period we have in Tor.

o   What is going to happen with the connection after
the HS republishes its descriptor? Long after?
[This one is probably fine if we are not using
 IPs, but...]

o   Most importantly, is transferring data on a >1 TiB
scale (or just transferring data for days) safe at
all? At the very least, the source should not change their
location/RP/circuits. Or do they need to pack all this stuff
into chunks and send the chunks separately? It's not
obvious how this can be done properly. So at what
point should the source stop the transmission
(size/time/etc), change location or the guard,
or pick a new RP?

--
[1] http://panamapapers.sueddeutsche.de/articles/56febff0a1bb8d3c3495adf4/
--
Happy hacking,
Ivan Markin
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-20 Thread Ivan Markin
grarpamp:
> Yes if you intend to patch tor to use a smartcard as a
> cryptographic coprocessor offloading anything of interest
> that needs signed / encrypted / decrypted to it. The card
> will need to remain plugged in for tor to function.

As I said before, the only thing that actually needs to be protected here
is the "main"/"frontend" .onion identity. For that purpose, all you need
to do is sign descriptors. And not lose the key.

grarpamp:
> However how is "pin" on swissbit enabled?
> If it goes from the host (say via ssh or keyboard or some
> device or app) through usb port through armory to swissbit,
> that is never secure.

No, it will be secure. An adversary could sniff your PIN and sign
whatever they want, true. But revealing the PIN != revealing the key.
In this case your identity key is still safe even if your PIN is
"compromised".

-- 
Ivan Markin



___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-18 Thread Ivan Markin
Razvan Dragomirescu:
> Ivan, if I understand
> https://onionbalance.readthedocs.org/en/latest/design.html#next-generation-onion-services-prop-224-compatibility
> correctly, the setup I've planned will no longer work once Tor switches to
> the next generation hidden services architecture, is this correct? Will
> there be any backwards compatibility or will old hidden services simply
> stop working at that point?

No, actually the setup will still work - but only after the code
base (of OB) is changed*. For now, one can sign an arbitrary set of IPs
with their key (you can test it with e.g. the Facebook HS) and this
descriptor will be valid [1].
Cross-certifications are just a mechanism for hardening this process. In
order to make a frontend descriptor valid, backend instances must "be
aware" of the frontend. So backend nodes certify the public key of the
frontend, and then they can be included into a frontend descriptor.
[using OB terminology]

[*] Also there is still only RSA crypto in the OB.

[1] https://trac.torproject.org/projects/tor/ticket/15951
-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 X    against HTML email & Microsoft
/ \   attachments! http://arc.pasp.de/



___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-18 Thread Ivan Markin
Razvan Dragomirescu:
> Thank you Ivan!
You're welcome!

> Ah, I understand now! That actually makes perfect sense for my application.
> If I understand it correctly, I can simply let Tor register the HS by
> itself (using a random HS name/key), then fetch the introduction points and
> keys and re-register them with a different HS name - this would make the
> service accessible both on the random name that the host has chosen
> (without talking to the card) and on the name that the card holds the
> private key to (registered out of band, directly by a script that looks
> like OnionBalance).

Yes, exactly.

>> If somebody already knows your
>> backend keys then certainly they know any of your data on this machine.
>>
>> No, not exactly :). There's still one thing they don't have access to -
> the smartcard! Even on a completely compromised backend machine, they still
> can't make the smartcard do something it doesn't want to do. In my project,
> it is the smartcard that drives the system - so a smartcard on one system
> can ask the host to connect it to a similar smartcard on a different system
> by giving it the HS name. The host establishes the connection, then the two
> smartcards establish their own encrypted channel over that connection. A
> compromised host can only deny service or redirect traffic somewhere else,
> but still can't make the smartcard accept injected traffic and can't
> extract the keys on it. I'm basically using Tor as a transport layer with
> NAT traversal and want to tie the HS names to smartcards so that I have a
> way to reach a _specific_ card/endpoint.

The question is what you are trying to protect. With a decryption key
you're protecting the actual content transmitted over the net, which is
already stored in plaintext. You may regard "onion" keys as "TLS
symmetric keys" because they are ephemeral and used "just for one
purpose". Keep in mind that these keys are disposable and there is no
threat when you lose them - just generate new ones.


Good randomness!
-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 X    against HTML email & Microsoft
/ \   attachments! http://arc.pasp.de/



___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Ivan Markin
Razvan Dragomirescu:
> Ivan, according to https://www.torproject.org/docs/hidden-services.html.en
> (maybe I misunderstood it), at Step 4, the client sends an _encrypted_
> packet to the hidden service, so the hidden service needs to be able to
> decrypt that packet. So the key on the card needs to be used both for
> signing the HS registration and for decrypting the packets during the
> initial handshake, isn't this correct?

Not exactly. The trick is that the keys are not the same. For more
details have a look at the specification [1]. There is a "permanent
key" (it "holds the name" and signs descriptors) and an "onion key" [2]
for each Introduction Point used to communicate with the HS. So the
"nameholder" ("permanent") key is used only for signing the descriptor,
which lists the IPs and their corresponding keys.
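
(Schematically, a v2 descriptor looks roughly like this - trimmed down
to the fields relevant here:

    rendezvous-service-descriptor <descriptor-id>
    version 2
    permanent-key
    <the RSA "nameholder" public key>
    introduction-points
    <encoded list of IPs, each carrying its own onion/service keys>
    signature
    <made with the permanent key>

Only the last field requires touching the permanent private key.)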

> As far as I could tell, there is no way to tell Tor to use a smartcard in
> any phase of the protocol, your OnionBalance tool simply handles the
> registration by itself (outside of Tor).

Yes, there is no support for SCs in little-t-tor itself.
What OB does is just combine some IPs (from the backend instances) into
a frontend instance descriptor, sign it, and then publish it via tor.

btw, OnionBalance is not my project [3].

> Regarding bandwidth, this is for an Internet of Things project, there's
> very little data going back and forth, I only plan to use the Tor network
> because it's a very good way of establishing point to point circuits in a
> decentralized manner. The alternative would be to use something like PubNub
>  or Amazon's new IoT service, but those would depend on PubNub/Amazon.

As I said before, it may not fit your purpose. :) I still don't think
that decrypting via SC is necessary. If somebody already knows your
backend keys, then certainly they know any of your data on this machine.
Maybe I missed something.


[1] https://gitweb.torproject.org/torspec.git/tree/rend-spec.txt#n63
[2] https://gitweb.torproject.org/torspec.git/tree/rend-spec.txt#n364
[3] https://github.com/donnchac/onionbalance
-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 Xagainst HTML email & Microsoft
/ \  attachments! http://arc.pasp.de/





Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Ivan Markin
Ken Keys:
>> The point is that one can't[*] extract a private key from a smartcard
>> and because of that even if machine is compromised your private key
>> stays safe.
> If the machine is going to use the HS key, the actual HS key has to be
> visible to it.

Nope. If the machine is going to use the HS key, it can ask the
smartcard to do so. Of course the private key is visible to
something/someone anyway - but in the case of smartcards, it is visible
to the smartcard only.

> An encrypted container holding a VM could use RSA-style
> public/private key encryption so that it never has to see the private
> key used to unlock it. You would still need to trust the VM, but the
> encrypted container would allow you to establish a chain of custody.

It's OK to unlock some encrypted block device/VM with an 'unpluggable'
key. But that does nothing to protect your HS's identity.

-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 Xagainst HTML email & Microsoft
/ \  attachments! http://arc.pasp.de/





Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Ivan Markin
Ken Keys:
> If the tor process is going to use the key, at some point the
> unencrypted key has to be visible to the machine running it. You would
> in any case have to trust the machine hosting the tor node. A more
> secure setup would be to run the tor node inside an encrypted VM and use
> your smartcard/dongle/whatever to unlock the VM.

The point is that one can't[*] extract a private key from a smartcard,
and because of that, even if the machine is compromised, your private
key stays safe.


[*] Not so easy, but possible.
-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 Xagainst HTML email & Microsoft
/ \  attachments! http://arc.pasp.de/





Re: [tor-dev] adding smartcard support to Tor

2015-10-17 Thread Ivan Markin
Razvan Dragomirescu:
> Thank you Ivan, I've taken a look but as far as I understand your project
> only signs the HiddenService descriptors from an OpenPGP card. It still
> requires each backend instance to have its own copy of the key (where it
> can be read by an attacker). My goal is to have the HS private key
> exclusively inside the smartcard and only sign/decrypt with it when needed
> but never reveal it.An attacker should not be able to steal the key and
> host his own HS at the same address - the address would be effectively tied
> to the smartcard - whoever owns the smartcard can sign HS descriptors and
> decrypt traffic with it, so he or she is the owner of the service.

Yes, it still requires plain keys on the backend instances for
decrypting traffic, sure. But you're not right about key "stealing"
(copying). The address of a HS is calculated from the key which signs
its descriptors, and this key resides on the smartcard. So it's already
the "the-address-would-be-effectively-tied-to-the-smartcard" situation
there.

I do not see any reason to decrypt traffic on a smartcard; if an
attacker can copy your backend key, there is no need for them to
decrypt anything - they already have access to the content on your
instance. Also, backend instances' keys are disposable - you can change
them seamlessly.
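
(For the record, deriving the address from the public signing key is
trivial - a minimal sketch in Python, assuming you already have the
DER-encoded (PKCS#1 RSAPublicKey) public key bytes:

    import base64
    import hashlib

    def onion_address(pubkey_der):
        # A v2 onion address is the first 80 bits (10 bytes) of the
        # SHA-1 digest of the DER-encoded public key, base32-encoded.
        digest = hashlib.sha1(pubkey_der).digest()
        return base64.b32encode(digest[:10]).decode().lower() + ".onion"

Whoever controls the matching private key - here, the smartcard -
controls the name.)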

P.S. Note the bandwidth issue if you decrypt all of the traffic on a
smartcard (half-duplex, etc.).

-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 Xagainst HTML email & Microsoft
/ \  attachments! http://arc.pasp.de/





Re: [tor-dev] adding smartcard support to Tor

2015-10-16 Thread Ivan Markin
Hello,
Razvan Dragomirescu:
> I am not sure if this has been discussed before or how hard it would be to
> implement, but I'm looking for a way to integrate a smartcard with Tor -
> essentially, I want to be able to host hidden service keys on the card. I'm
> trying to bind the hidden service to a hardware component (the smartcard)
> so that it can be securely hosted in a hostile environment as well as
> impossible to clone/move without physical access to the smartcard.

I'm not sure that this solution is 100% for your purposes. But recently
I've added OpenPGP smartcard support to do exactly this in OnionBalance
[1]+[2]. What it does is just sign a HS descriptor using an OpenPGP SC
(via the 'Signature' or 'Authentication' key). [It's still a pretty
dirty hack; there is not even any exception handling.] You can use it
by setting up a "manager/front" service with your smartcard in it via
OnionBalance and balancing to your actual HS. There is no bandwidth
limiting (see the OnionBalance design). You can set up OB and the
actual HS on the same machine for sure.
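
(Just to illustrate the idea of asking the card for a signature - a
sketch, not necessarily what [2] does, and the key id is a placeholder:

    import subprocess

    def card_sign(data, key_id="0xDEADBEEF"):
        # gpg relays the signing operation to the OpenPGP card;
        # the private key never leaves the card.
        return subprocess.run(
            ["gpg", "--detach-sign", "--local-user", key_id,
             "--output", "-"],
            input=data, stdout=subprocess.PIPE, check=True).stdout

The host only ever sees the resulting signature.)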

> I have Tor running on the USBArmory by InversePath (
> http://inversepath.com/usbarmory.html ) and have a microSD form factor card
> made by Swissbit (
> www.swissbit.com/products/security-products/overwiev/security-products-overview/
> ) up and running on it. I am a JavaCard developer myself  and I have
> developed embedded Linux firmwares before but I have never touched the Tor
> source.

There is a nice JavaCard applet by Joeri [3]. It's the same applet that
the Yubikey uses. You can find a well-written tutorial on producing
your own OpenPGP card at Subgraph [4].

> 
> Is there anyone that is willing to take on a side project doing this? Would
> it be just a matter of configuring OpenSSL to use the card (I haven't tried
> that yet)?

I'm not sure that it is worth implementing card support in little-t-tor
itself. As I said, all the logic is about HS descriptor signing. Python
and other languages that provide readability will then provide
security. I think/hope so.

[1] https://github.com/mark-in/onionbalance
[2] https://github.com/mark-in/openpgpycard
[3] http://sourceforge.net/projects/javacardopenpgp/
[4] https://subgraph.com/sgos/documentation/smartcards/index.en.html

Hope it helps.
-- 
Ivan Markin
/"\
\ /   ASCII Ribbon Campaign
 Xagainst HTML email & Microsoft
/ \  attachments! http://arc.pasp.de/


