Re: removing IP-address autodetection, Tor integration

2015-06-18 Thread Leif Ryge
On Thu, Jun 18, 2015 at 12:31:16PM -0700, Brian Warner wrote:
 [snip]

This all sounds great to me! But there are a few edge cases which shouldn't be
forgotten:

 * It could be desirable to connect to a grid (possibly of non-onion storage
   servers) using Tor to reach all of the servers *except* the user's own
   servers, which are reachable via their LAN or VPN.

 * It could be desirable to have a server listen on both an onion address and a
   LAN address.

 * It could be desirable to connect to some servers via different addresses
   than they are advertising (say, because you know their LAN addresses).

OK so maybe these are all variations on the same use case, which happens to be
how I want to use Tahoe :)

I think per-server connection preferences should be exposed via the
introducerless mode which you (Brian) mostly implemented long ago but left
commented out and which David made work in the truckee branch[1]. Speaking of
which, I really need to bring that up to date with the last 6 months or so of
Tahoe development... I'll try to work on that in the near future.

I'm looking forward to being able to use the I2P grid (which I believe is the
largest and longest-running public Tahoe grid) and the onion grid
simultaneously!

~leif

[1]: https://github.com/leif/tahoe-lafs/blob/truckee/NEWS.rst
___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: Introducer pubgrid down?

2014-08-09 Thread Leif Ryge
On Sat, Aug 09, 2014 at 03:27:43PM +0200, Ed Kapitein wrote:
 Ah, enjoy your holiday.
 We'll see what happens until Wednesday, most of my leases are 3-7-10
 and we are down to 9 storage nodes.
 so i guess there will be some rebalancing going on at the next check/repair.

Tahoe-LAFS does not currently do rebalancing. If you have a file with 3-7-10
encoding and repair it when there are 9 nodes available, one node will get two
shares of the file. If you repair it again when there are 10 servers, the 10th
server will *not* get a share of the file (unless it already had one) because
there are already 10 shares on the grid. So, you'll be left with two shares on
one node (until that node goes away).
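To make this concrete, here is a tiny Python sketch (not Tahoe code; real share placement is more involved, and the round-robin rule below is an assumption made for illustration) of why the 10th server ends up with nothing after a repair performed with only 9 servers:

```python
# Simplified model of repair for a file with 3-7-10 encoding.
# The key property it illustrates: repair only creates shares that
# are missing from the grid, and never moves existing shares.

N = 10  # total shares per file

def repair(placement, servers):
    """Upload any missing shares, round-robin over the servers."""
    placed = {s for shares in placement.values() for s in shares}
    missing = [s for s in range(N) if s not in placed]
    for i, share in enumerate(missing):
        server = servers[i % len(servers)]
        placement.setdefault(server, set()).add(share)
    return placement

# First repair with only 9 servers: one server gets two shares.
placement = repair({}, ["server%d" % i for i in range(9)])
doubled = [s for s, shares in placement.items() if len(shares) > 1]
print(doubled)  # ['server0']

# A 10th server appears; repairing again changes nothing, because
# all 10 shares already exist somewhere on the grid.
placement2 = repair(dict(placement), ["server%d" % i for i in range(10)])
print("server9" in placement2)  # False: the new server got no share
```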

The only two ways I'm aware of to do rebalancing currently are to ask the
operator of the node with multiple shares to delete one (make sure you have
enough elsewhere first!) or to use dawuud's introducerless branch (which is now
merged into my truckee[1] branch) to manually configure your gateway to connect
to all known servers except for the one with multiple shares (and then run a
repair).

HTH,
~leif

1: https://github.com/leif/tahoe-lafs/blob/truckee/NEWS.rst

 Again, enjoy your holiday and thanks for the support.
 
 Kind regards,
 Ed
 
 
 On 08/09/2014 02:56 PM, Paul Rabahy wrote:
 I noticed it was down, but didn't have a chance to fix it. I
 should be able to get it running again on Wednesday. I'm on
 vacation right now and don't have access to my main computer.
 
 On Saturday, August 9, 2014, Ed Kapitein e...@kapitein.org wrote:
 
 Hi,
 
 The introducer to the pubgrid seems down since yesterday.
 Has something changed, or is it just down?
 
 Kind regards,
 Ed
 


Re: 'pip install allmydata-tahoe' now works

2014-06-30 Thread Leif Ryge
Unfortunately (unless I'm missing something; I haven't investigated fully) the
statement "'pip install allmydata-tahoe' now works" is rather dangerously
misleading, as it implies that that is a safe command to run on an
internet-connected computer.

Recent versions of pip verify SSL certificates and won't download over
unencrypted HTTP unless you specifically tell them to. But, unless I'm mistaken,
"pip install allmydata-tahoe" will still run tahoe's "setup.py build", which
will brazenly download and execute unverified code.

If I am mistaken (and I hope I am!) someone should close
https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2055 (Building tahoe safely is
non-trivial).

~leif

On Mon, Jun 30, 2014 at 06:58:30AM -0700, Callme Whatiwant wrote:
 Huzzah!
 
 On Mon, Jun 23, 2014 at 12:47 PM, Brian Warner war...@lothar.com wrote:
  Just a heads up, the new Nevow-0.11.1 release a few days ago fixed
  tahoe's #2032, which means that you should now be able to install tahoe
  with just:
 
   pip install allmydata-tahoe
 
  That should grab all the necessary dependencies for you, including Twisted.
 
  Hooray for easier installations!
 
  cheers,
   -Brian
 
  #2032: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2032


Re: [Tails-dev] Tahoe-LAFS persistence

2014-06-01 Thread Leif Ryge
On Sun, Jun 01, 2014 at 11:11:29AM -0400, Greg Troxel wrote:
 David Stainton dstainton...@gmail.com writes:
 
  Since Tahoe-LAFS is not a POSIX-compliant filesystem...
  we cannot easily create a persistent volume that only
  stores data on a Tahoe grid. There is an ugly FUSE hack
  but it is extremely inefficient.
 
 This can be viewed as a bug in tahoe :-)
 But seriously, fixing the FUSE interface would be a great contribution.
 It's not clear to me how efficient the FUSE interface has to be before
 it isn't the limiting issue; tahoe is not a fast filesystem.

While Tahoe's native FUSE interface bitrotted long ago, there are two FUSE
interfaces which are currently usable:

 - Tahoe has an SFTP server which can be mounted with FUSE's sshfs, like any
   other SFTP server
 - the python-fs module (a general filesystem abstraction library) has a Tahoe
   client, using Tahoe's web API, and can expose it (like any python-fs object)
   via FUSE.

The big problem with using these FUSE mounts for many applications is that
Tahoe mutables don't provide random-access writes. So if you put, say, your
Firefox profile on a Tahoe-backed FUSE mount, every write+sync to Firefox's
places.sqlite etc. will involve re-uploading the whole thing, and Firefox will
only be usable for brief moments at a time, if at all (I think - I haven't
tried it).

 *
 *** The following section of this email is not about Tahoe+Tails. ***
 *** If you're just interested in that, skip to BACK TO THE NEAR FUTURE. ***
 *

In my opinion, this is not something that should be fixed by improving
Tahoe's current mutable files, but rather by replacing them since they have
several other shortcomings. Most importantly (imo):

 - They don't preserve history (each write overwrites the previous version, so
   anyone with a write capability can also destroy old data)
 - They aren't lockable (if you have uncoordinated writes to a file, you're
   gonna have a bad time, so you must be very careful sharing writecaps)
 - There is no asymmetric encryption (if you have a write capability, you can
   also read)
 - They aren't deduplicated at a file level, much less at a block level

My hand-wavey ideal solution to these problems (chisel) involves hashsplit
(BUP-style) asymmetrically-encrypted immutable files, references to which are
added to a directory which is a *decentralized add-only set*. So, there can
be write-only capabilities which can neither read nor delete data after they've
written it, and because the directory is an add-only set instead of an
append-only file there can be multiple writers without coordination. I've got a
rough idea about how to do this, and a little bit of code... hopefully I'll
find time to work on it more soon. I'm building this separately from Tahoe, but
intending to (optionally) use Tahoe immutable files underneath, and I'd like to
eventually be able to expose a FUSE interface that *does* allow random-access
writes to files. Probably Tahoe will need some performance improvements for
chisel to work well on top of it, though.
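To show what hashsplit buys you, here's a toy content-defined chunker in the spirit of BUP's approach. This is purely illustrative (chisel doesn't exist yet, and a real implementation would use a proper rolling checksum like Rabin or bupsplit with much larger targets), but it demonstrates the property that matters: a local edit only changes the chunks near the edit, so unchanged chunks deduplicate across versions of a file.

```python
import hashlib
import random

WINDOW = 16           # rolling-sum window (toy-sized)
MASK = (1 << 6) - 1   # boundary on 6 bits -> ~64-byte average chunks

def chunks(data: bytes):
    """Split data at positions where a rolling sum hits a magic value."""
    out, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= WINDOW:
            rolling -= data[i - WINDOW]  # keep a sliding 16-byte sum
        if (rolling & MASK) == MASK and i + 1 - start >= WINDOW:
            out.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

def chunk_ids(data: bytes):
    return [hashlib.sha256(c).hexdigest() for c in chunks(data)]

random.seed(0)
v1 = bytes(random.randrange(256) for _ in range(4096))
v2 = b"edited!" + v1   # simulate a small edit near the start

ids1, ids2 = chunk_ids(v1), chunk_ids(v2)
shared = set(ids1) & set(ids2)
# The chunker resynchronizes shortly after the edit, so nearly all
# chunks are shared between the two versions.
print(len(ids1), len(ids2), len(shared))
```

With a fixed-block scheme, prepending seven bytes would shift every block and nothing would deduplicate; here only the chunks around the edit change.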

Another undesirable thing about Tahoe's current mutables is that write caps
contain RSA private keys which are rather cumbersome to write down. If they
were ECC private keys they could be generated from memorable secrets (which is
potentially dangerous but quite convenient) or at least shorter secrets (because
RSA requires much larger keys than ECC for a given security level).
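The size difference is what makes "generate from a memorable secret" even thinkable: an Ed25519-style signing key is just 32 secret bytes, so it can be derived deterministically from a passphrase with a KDF, whereas an RSA private key is on the order of a kilobyte and needs a prime-generation step. A minimal stdlib-only sketch (the passphrase and context string are made up, and, as noted above, deriving keys from human-chosen secrets is dangerous because low-entropy passphrases can be brute-forced):

```python
import hashlib

# Derive 32 bytes from a memorable secret with a deliberately slow KDF.
# 32 bytes is exactly an Ed25519 seed; an RSA-2048 private key cannot
# simply *be* a short derived secret like this.
passphrase = b"correct horse battery staple"   # example only!
salt = b"example-writecap-v1"                  # hypothetical context string

seed = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)
print(len(seed))  # 32
```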

But none of this is very relevant to the issue of using Tahoe for Tails
persistence in the immediate future, so...

 ***
 *** BACK TO THE NEAR FUTURE ***
 ***

  So there should be three options per persistent file-set:
  1. do not persist
  2. persist to local media
  3. persist to local media AND a Tahoe-LAFS grid
 
 Are you proposing to store the capabilities to access the persistent
 data on the local media (removable flash, I'm assuming)?   I've come
 into this thread somewhat late, but the security and usability
 properties are not entirely clear to me.

I think the current plan for Tails persistence on Tahoe is to persist to the
USB disk (as Tails already does) and run "tahoe backup" on a regular schedule
(and/or when the user manually triggers it, and/or triggered by some sort of
inotify-driven agent). So, yes, the user should store their root cap on Tails'
encrypted persistent partition, and also back it up elsewhere (on another usb
stick, or on paper). The Tails persistence setup tool should then have an
option to restore from an existing Tahoe root cap.

I guess the restore could also be done to a ramdisk on a Tails system without
USB persistence, and maybe something could even be done using unionfs to
combine a ramdisk with a fuse-mounted tahoe directory to avoid needing to
download the whole thing? I wonder how that would work.


Re: apparent serious integrity problem in build system - setuptools bug?

2014-03-21 Thread Leif Ryge
On Fri, Mar 21, 2014 at 08:01:48PM -0400, Greg Troxel wrote:
 
 Update: after spiffing up py-OpenSSL quite a bit more, so it passes its
 own tests, tahoe seems to work ok with it.  I'm still not sure, and
 will check further.
 
 My comments about auto-downloading stuff by default being a bug stand
 :-)

The auto-downloading issue is filed here:
https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2055

~leif


Re: Odd issue when uploading really large file

2013-12-08 Thread Leif Ryge
On Sun, Dec 08, 2013 at 08:53:36AM -0800, Jeff Tchang wrote:
 I just tahoe put a very large file but it doesn't seem to show up when I do
 tahoe ls.
 
 Am I missing something? I have set my redundancy to 1 of 1 and only have
 one storage node. The storage node's storage directory increased by 6.6GB
 which was the size of the file so it definitely transferred over to the
 other server.
 
 -Jeff

How did you put the file?

If you used the "tahoe put" command without a destination argument, you'll have
created an unlinked file (the read cap of which was printed to stdout). If that
is what happened and you still have the read capability that was printed by
"tahoe put", you can quickly add the file to a directory using the "tahoe ln"
command. If you don't have the read capability anymore, you can put the file
again (adding a destination argument such as tahoe: if you have a tahoe alias
called tahoe) and the file will be re-encoded but won't need to be re-uploaded
(as long as you don't change your convergence secret).
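The reason the second put needs no re-upload is convergent encryption. Here is a sketch of the idea (not Tahoe's exact construction; the function name and derivation are illustrative): the per-file key is derived from the file's content plus a per-client convergence secret, so encoding the same file twice with the same secret yields identical ciphertext and shares.

```python
import hashlib

def convergent_key(convergence_secret: bytes, plaintext: bytes) -> bytes:
    """Derive a content-addressed encryption key (illustrative scheme)."""
    content_hash = hashlib.sha256(plaintext).digest()
    return hashlib.sha256(convergence_secret + content_hash).digest()

secret = b"per-client convergence secret"
data = b"the same 6.6GB file, in spirit"

key1 = convergent_key(secret, data)            # first "tahoe put"
key2 = convergent_key(secret, data)            # second put: same key
key3 = convergent_key(b"other secret", data)   # new secret: different key

print(key1 == key2, key1 == key3)  # True False
```

Because key1 == key2, the second upload produces shares the grid already holds, so only the (cheap, local) re-encoding happens; changing the convergence secret changes the key and forces a full re-upload.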

~leif




Re: How can I run Tahoe-LAFS on one computer

2013-11-12 Thread Leif Ryge
On Tue, Nov 12, 2013 at 09:24:18AM +0100, Ed Kapitein wrote:
 On Tue, 2013-11-12 at 16:03 +0800, xiao_s_yuan wrote:
  I want to know can I put the client,storage,introducer on one computer
  with ubuntu,I tried to do this but find that only one .tahoe can
  exist under the /root folder,but every client or storage  must have a
  folder like .tahoe,so how can I run tahoe-LAFS on only one computer
 
 Hi xiao,
 
 I successfully did that once by using different users for the introducer
 and the storage server.
 So run the introducer as user1 and run the main tahoe process as user2.
 
 
 Kind regards,
 Ed

The tahoe create-{client,node,introducer} and {start,stop,restart} commands all
accept an optional directory argument which can be used instead of the default
~/.tahoe directory.

Here are two different scripts which each automate the creation of local grids
for testing purposes:

https://github.com/nejucomo/lafs-giab
https://github.com/leif/tahoe-lafs/blob/truckee/quickgrid.sh

~leif




Re: [tahoe-dev] Go package for Zfec

2013-11-02 Thread Leif Ryge
On Fri, Nov 01, 2013 at 11:22:49PM -0700, Simon Liu wrote:
 Hi,
 
 I've created a Go package for the zfec library and used it to port the
 zfec/zunfec command line tools (as an example).
 
 To install it run:
 go get gitorious.org/zfec/go-zfec.git
 
 To read documentation:
 http://godoc.org/git.gitorious.org/zfec/go-zfec.git
 
 The two repos are here:
 https://gitorious.org/zfec
 
 Regards,
 Simon

There is also a Go port (as opposed to Go bindings for the C implementation)
here: https://github.com/korvus81/jfec

(I have not used it; I just discovered it the other day while looking for a
link to zfec to send to this list.)

There is also a JavaScript implementation here:
https://github.com/richardkiss/js_zfec

It would be nice if there were a single place on the web with information about
all of these, perhaps https://tahoe-lafs.org/trac/zfec ?

Right now that page has an error about trac not having the darcs plugin.

~leif




Re: [tahoe-dev] zfec 1.4.24 standalone install quickstart?

2013-11-02 Thread Leif Ryge
On Sat, Oct 26, 2013 at 03:29:05AM +, Garonda Rodian wrote:
 I'm certain I'm making beginner mistakes, but in my defense, I think I'm 
 simply trying to follow the README.rst after downloading from
 https://pypi.python.org/pypi/zfec
 
 Is there some pyutil install I'm supposed to do first?
 [...]

Yes, you need pyutil, which you can find at https://pypi.python.org/pypi/pyutil
or have pip install for you - BUT be careful of older versions of pip, which
have a bad habit of downloading code over HTTP or unverified HTTPS.

I just tested and confirmed that zfec installs correctly (and safely, relying
on HTTPS and PyPI) into a new virtualenv (root not required) using these steps:

wget https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.10.1.tar.gz
tar xf virtualenv-1.10.1.tar.gz
./virtualenv-1.10.1/virtualenv.py ./zfec_venv
source ./zfec_venv/bin/activate
pip install --no-allow-insecure zfec

happy hacking,
~leif




Re: [tahoe-dev] One Grid to Rule Them All

2013-06-30 Thread Leif Ryge
On Mon, Jul 01, 2013 at 12:33:54AM -0400, Avi Freedman wrote:
 
   I personally want to be able to email or tweet or inscribe on papyrus a URL
   containing a read cap, and anyone who sees that and has Tahoe-LAFS version
   Glorious Future installed should have a reasonable chance to retrieve the
   content.
  
   Regards,
   Comrade Nathan
   Grid Universalist
 
 As a followup...
 
 I took a look at tinyproxy.
 
 The config file I set up was:
 
 http://38.118.79.85:/lafscluster1/file/URI%3ACHK%3Apr3tecemgw2t37fjo5gd7bo4bq%3Aj6ht6jhnetfdcjyjpwduns4rgqnxdq7slbrn4hjv5yiemt4skygq%3A3%3A10%3A10285/@@named=/tinyproxy.conf
 
 (which is served using the proxy to a local node)
 
 or
 
 http://198.186.190.244/tinyproxy.conf
 
 That's with a filter of:
 
 root@introducer1:~# more /etc/tinyproxy.filter 
 /file/*
 
 Then just whomp in a filecap and it should work.  The first question
 is whether there are other ways to get at the backend full web server
 functionality under /file/
 
 So I think that'd work as a way to expose files for download.
 I wanted to explictly prohibit dirs since tahoe has more web logic
 enabled about what to do with a directory.
 
 For running as a service (commercial or otherwise), if every cluster
 had their own proxy and every cluster only had one user (until accounting
 comes to LAFS), cost accounting should be doable with traffic accounting.
 
 In a federated way one could also have local web proxies to local
 tahoe processes and do DNS round robin or more sophisticated load
 balancing to make data available.
 
 Criticism and warnings as to the vulnerabilities of tinyproxy or the
 setup are invited and welcome, as are suggestions for alternative
 low-memory proxies.
 
 Avi

I haven't used it yet, but I think Comrade Nathan's own nginx-based
Tahoe-LAFS Restrictive Proxy Gateway is just what you're looking for:
https://bitbucket.org/nejucomo/lafs-rpg

~leif




Re: [tahoe-dev] Grid in a Box script.

2013-06-15 Thread Leif Ryge
On Sat, Jun 15, 2013 at 05:43:13PM +0200, Nathan wrote:
 Hello,
 
 In the spirit of release early/often here's a python script I just wrote:
 
 https://github.com/nejucomo/lafs-giab
 
 giab is Grid in a Box.  It creates an introducer, and a storage node,
 and configures the storage node to use that introducer, and also to use N =
 K = happy = 1.
 
 I use this when I want to do integration testing for other tools I'm
 working on which are clients of the webapi:  I want to quickly create a new
 empty grid to run the integration tests against.  When I'm done I can nuke
 that Grid in a Box directory.
 
 There is no packaging config or README yet.  The best docs are in the
 --help output.  Let me know if you find this useful.
 
 
 Regards,
 nejucomo

Nice!
I actually wrote something very similar a few days ago, but in bash:
https://github.com/leif/tahoe-lafs/blob/truckee/quickgrid.sh
I think yours is better :)

~leif

