[tahoe-dev] call for volunteers for volunteergrid2

2011-01-11 Thread Zooko O'Whielacronx
Folks:

I've heard from a few different people that they want a Tahoe-LAFS
grid to which they can contribute a server and from which they can
receive storage service in return.
There is one such grid in operation, called volunteergrid, but it is
not accepting new members at this time. Therefore, I'm creating a
mailing list named volunteergrid2-l to let such people
self-organize. I'm going to be the mailing list administrator, but
otherwise I don't intend to spend a lot of time organizing.

Here is a proposed constitution for volunteergrid2.

---
Welcome to volunteergrid2-l!

Rules (enforced by the community):
1. We have at most 20 members.
2. Respect one another's privacy and security. (Normally Tahoe-LAFS
makes it impossible to do otherwise, but accidents can happen, and
when they do, mutual respect for one another's privacy is the best
defense.)
3. Contribute more storage space than you use (remember that expansion
uses up more disk space by a factor of N/K -- see the example below the
rules -- and that old versions of files linger for about a month before
they are garbage-collected).
4. Offer good quality, durable, highly-available service -- commit to
keeping your server(s) running year after year. If a server dies, try
to rescue the data off of its hard drive. Use careful system
administration. Try to keep your servers up and reachable all the
time. Let your fellow volunteers know if there is a failure or
interruption of service.
5. Be nice! Kindness and good humor help groups of humans cooperate
more efficiently.
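
For example: with Tahoe-LAFS's default 3-of-10 encoding (K=3, N=10), the
expansion factor is N/K = 10/3, so uploading 1 GB of files consumes
roughly 3.3 GB of raw storage spread across the grid. Size your
contribution with that overhead (plus the month or so of lingering old
versions) in mind.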
---

If you want to be on the volunteergrid2-l mailing list, please send me
email! zo...@zooko.com

Regards,

Zooko
___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
http://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: [tahoe-dev] call for volunteers for volunteergrid2

2011-01-11 Thread Jody Harris
I'm in.

- Think carefully.




Re: [tahoe-dev] How do lease renewal and repair work?

2011-01-11 Thread Brian Warner
On 1/11/11 7:19 AM, Shawn Willden wrote:

 Specifically, I'm wondering what happens if a client running a
 deep-check --repair --add-lease tries to add a lease on an existing
 share and the storage server refuses the lease renewal?

That part of the code needs some work, on both sides. At present, the
storage server will never refuse a lease renewal, even if the server is
in readonly mode (server.py StorageServer.remote_add_lease and the
unused remote_renew_lease). And the client will ignore failures in the
remote_add_lease call (immutable.checker.Checker._add_lease_failed),
partly because older versions of the server didn't support
remote_add_lease, and we want repair to work anyway.

 Will the repairer assume that the unrenewed share needs to be placed
 somewhere else? Or will the client have to wait until the unrenewed
 share actually expires before the repairer will place another copy?

The current repairer won't notice a renewal-rejection. Since the share
will stick around until expiration actually kills it, the repairer won't
do anything special until it expires, at which point it'll create a new
share as usual.

We should change this, especially w.r.t. Accounting, since leases are
the basis of storage-usage calculation. Servers should reject lease
add/renewal requests when in readonly mode, or when the Account does not
allow the claiming of additional space. The upload-a-share and
add/renew-a-lease calls should be expanded to allow requesting a
specific duration on the lease (defaulting to one month, as before).
When repairing a file, the client should not be happy until all N shares
have a lease that will last at least as long as the client's configured
lease-duration value. We might need a "please tell me how long lease XYZ
will last" request. If a renewal request is rejected, and the existing
lease will expire too soon, the repairer should upload additional shares
to other servers.
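
To make the intent concrete, here's a rough sketch of the server-side
check (not the actual server.py code; the Account interface and names
like can_claim and LeaseRejected are made up for illustration):

  DEFAULT_LEASE_DURATION = 31*24*60*60  # one month, as before

  class LeaseRejected(Exception):
      pass

  def add_or_renew_lease(server, account, storage_index,
                         duration=DEFAULT_LEASE_DURATION):
      if server.readonly:
          raise LeaseRejected("server is readonly, not accepting leases")
      if not account.can_claim(duration):
          raise LeaseRejected("account may not claim additional space")
      # returning the expiration time would let the client check that the
      # lease lasts at least as long as its configured lease-duration value
      lease = server.leases.add_or_renew(storage_index, duration)
      return lease.expiration_time

The client-side repairer would then treat a rejection like "this share
will expire soon" and upload a replacement share elsewhere.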

 The motivation is that I'm thinking about how storage nodes can
 withdraw gracefully from the grid. If the storage servers can refuse
 to renew leases and if the repairer assumes that unrenewed shares need
 to be placed elsewhere, then it should be very simple to create a new
 storage server configuration flag "withdrawing", which tells the
 storage server to refuse new shares, and also to refuse lease
 renewals. Then, with lease expiration turned on, all of the shares
 it's holding will eventually expire, but all of the clients who own
 those shares will have ample opportunity to relocate them. When the
 last of the withdrawing server's shares expire, then it can be shut
 down.

That sounds like a great approach. Maybe "retired"/"retiring"? We've
also used "spin down" and "decommission" to describe this state in the
past. We've also kicked around the idea that storage servers should be
able to "abandon ship" on their own: upload their shares directly to
other servers (do the permuted-list thing on their own, remove
themselves from the result, find the best place to evacuate the share
to, upload the share, then delete their local copy). This could only
work for immutable shares, since the mutable write-enabler gets in the
way as usual, and it might interact weirdly with Accounting (the old
server would effectively be paying for the new share until the real
clients established new leases and took over ownership). But it'd
probably be more efficient: the share already exists, so no need to
re-encode the file.
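
As a sketch of how the evacuation might go for a single immutable share
(all of the helper names here are hypothetical, not real Tahoe-LAFS
APIs):

  def evacuate_immutable_share(grid, my_server_id, storage_index, share):
      # the usual permuted server list for this storage index...
      servers = grid.permuted_servers(storage_index)
      # ...minus ourselves, since we're the one leaving
      candidates = [s for s in servers if s.server_id != my_server_id]
      for target in candidates:
          if target.will_accept_share(storage_index, share.sharenum):
              target.upload_share(storage_index, share.sharenum, share.data)
              # only drop the local copy once the new copy is in place
              share.delete_local_copy()
              return target
      raise RuntimeError("no server would accept the evacuated share")

The important ordering constraint is that the local copy is deleted only
after the upload succeeds, so the number of live shares never drops
during the move.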

cheers,
 -Brian


Re: [tahoe-dev] [pycryptopp] #67: Use of uninitialised value in CryptoPP::Rijndael_Enc_AdvancedProcessBlocks

2011-01-11 Thread pycryptopp
#67: Use of uninitialised value in  CryptoPP::Rijndael_Enc_AdvancedProcessBlocks
--+-
 Reporter:  Nikratio  |  Owner:  Nikratio
 Type:  defect| Status:  new 
 Priority:  major |Version:  0.5.19  
   Resolution:|   Keywords:  
Launchpad Bug:|  
--+-

Comment (by Nikratio):

 Here you go:

 {{{
 $ valgrind python-dbg contrib/test.py
 ==19162== Memcheck, a memory error detector
 ==19162== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
 ==19162== Using Valgrind-3.6.0.SVN-Debian and LibVEX; rerun with -h for
 copyright info
 ==19162== Command: python-dbg contrib/test.py
 ==19162==
 ==19162== Use of uninitialised value of size 4
 ==19162==at 0x5121325:
 CryptoPP::Rijndael_Enc_AdvancedProcessBlocks(void*, unsigned int const*)
 (in /usr/lib/libcrypto++.so.8.0.0)
 ==19162==by 0x512151D:
 CryptoPP::Rijndael::Enc::AdvancedProcessBlocks(unsigned char const*,
 unsigned char const*, unsigned char*, unsigned int, unsigned int) const
 (in /usr/lib/libcrypto++.so.8.0.0)
 ==19162==by 0x50FC341:
 CryptoPP::CTR_ModePolicy::OperateKeystream(CryptoPP::KeystreamOperation,
 unsigned char*, unsigned char const*, unsigned int) (in
 /usr/lib/libcrypto++.so.8.0.0)
 ==19162==by 0x4E2405E:
 CryptoPP::CTR_ModePolicy::WriteKeystream(unsigned char*, unsigned int)
 (modes.h:151)
 ==19162==by 0x505648E:
 
 CryptoPP::AdditiveCipherTemplate<CryptoPP::AbstractPolicyHolder<CryptoPP::AdditiveCipherAbstractPolicy,
 CryptoPP::CTR_ModePolicy> >::ProcessData(unsigned char*, unsigned char
 const*, unsigned int) (in /usr/lib/libcrypto++.so.8.0.0)
 ==19162==by 0x4E23A5D: AES_process(AES*, _object*) (aesmodule.cpp:77)
 ==19162==by 0x80F92A8: call_function (ceval.c:3738)
 ==19162==by 0x80F4ACA: PyEval_EvalFrameEx (ceval.c:2412)
 ==19162==by 0x80F98F3: fast_function (ceval.c:3836)
 ==19162==by 0x80F964C: call_function (ceval.c:3771)
 ==19162==by 0x80F4ACA: PyEval_EvalFrameEx (ceval.c:2412)
 ==19162==by 0x80F7214: PyEval_EvalCodeEx (ceval.c:3000)
 ==19162==
 [19593 refs]
 ==19162==
 ==19162== HEAP SUMMARY:
 ==19162== in use at exit: 565,451 bytes in 5,895 blocks
 ==19162==   total heap usage: 51,971 allocs, 46,076 frees, 5,439,309 bytes
 allocated
 ==19162==
 ==19162== LEAK SUMMARY:
 ==19162==definitely lost: 0 bytes in 0 blocks
 ==19162==indirectly lost: 0 bytes in 0 blocks
 ==19162==  possibly lost: 544,863 bytes in 5,576 blocks
 ==19162==still reachable: 20,588 bytes in 319 blocks
 ==19162== suppressed: 0 bytes in 0 blocks
 ==19162== Rerun with --leak-check=full to see details of leaked memory
 ==19162==
 ==19162== For counts of detected and suppressed errors, rerun with: -v
 ==19162== Use --track-origins=yes to see where uninitialised values come
 from
 ==19162== ERROR SUMMARY: 2 errors from 1 contexts (suppressed: 50 from 11)
 }}}


 {{{
 $ cat contrib/test.py
 import hmac
 import pycryptopp
 import hashlib
 import struct

 def encrypt(buf, passphrase, nonce):

 key = hashlib.sha256(passphrase + nonce).digest()
 cipher = pycryptopp.cipher.aes.AES(key)
 hmac_ = hmac.new(key, digestmod=hashlib.sha256)

 hmac_.update(buf)
 buf = cipher.process(buf)
 hash_ = cipher.process(hmac_.digest())

 return b''.join(
 (struct.pack(b'B', len(nonce)),
 nonce, hash_, buf))

 encrypt('foobar', 'passphrase', 'nonce')
 }}}

-- 
Ticket URL: http://allmydata.org/trac/pycryptopp/ticket/67#comment:2
pycryptopp http://allmydata.org/trac/pycryptopp
Python bindings for the Crypto++ library


Re: [tahoe-dev] [tahoe-lafs] #1300: turn on garbage collection by default, offer obvious deep-repair-lease, warn about unset config

2011-01-11 Thread tahoe-lafs
#1300: turn on garbage collection by default, offer obvious deep-repair-lease,
warn about unset config
-+--
 Reporter:  zooko|   Owner:  nobody  
 Type:  enhancement  |  Status:  new 
 Priority:  major|   Milestone:  undecided   
Component:  unknown  | Version:  1.8.1   
   Resolution:   |Keywords:  leases repair usability defaults
Launchpad Bug:   |  
-+--

Comment (by warner):

 I dunno, I'd rather have Tahoe's defaults favor data-retention over data-
 expiration (preferring durability over write-availability). Turning on GC
 by default is going to unpleasantly surprise some people about 30 days
 after they upload their data, and they're going to think that Tahoe is not
 a reliable storage system. I think we should write and enable tools inside
 the client node to automate lease-renewal (like the "repair agent" idea
 that we've kicked around, #483 and/or #543) and give client-side users a
 chance to upgrade to those versions before we change the servers to
 automatically delete shares.

 But I certainly agree that the expiration control knobs could be more
 visible. Maybe we should encourage each grid to write up a welcome page
 that sets out the policies of that particular grid, and for grids which
 choose to use expiration, put instructions on that page (both periodic-
 deep-renew commands for clients, and enable-expiration settings for
 servers).
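
 For grids that do opt in to expiration, that welcome page could spell out
 both halves of the bargain, roughly like this (a sketch only -- the option
 names follow the lease-expiration settings in the Tahoe docs, so double-
 check them against your version before copying):

 {{{
 # server side, in tahoe.cfg -- turn on GC with a one-month lease lifetime:
 [storage]
 expire.enabled = true
 expire.mode = age
 expire.override_lease_duration = 31 days

 # client side -- run periodically (e.g. weekly from cron) to keep leases
 # fresh and repair anything that has drifted:
 tahoe deep-check --add-lease --repair tahoe:
 }}}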

-- 
Ticket URL: http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1300#comment:1
tahoe-lafs http://tahoe-lafs.org
secure decentralized storage


Re: [tahoe-dev] [tahoe-lafs] #1208: config should default to leaving 1G free

2011-01-11 Thread tahoe-lafs
#1208: config should default to leaving 1G free
--+-
 Reporter:  gdt   |   Owner:  somebody  
 Type:  enhancement   |  Status:  new   
 Priority:  major |   Milestone:  1.8.2 
Component:  code-storage  | Version:  1.8β  
   Resolution:|Keywords:  usability reliability defaults
Launchpad Bug:|  
--+-
Changes (by warner):

  * owner:  warner => somebody
  * milestone:  soon => 1.8.2


Comment:

 yeah, this is good for 1.8.2. Someone want to write up a patch?
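
 For reference, the change could be as small as shipping a different default
 for the existing reserved_space knob (a sketch; the option itself is already
 supported in tahoe.cfg):

 {{{
 [storage]
 # proposed default: never let the storage server consume the last 1G of disk
 reserved_space = 1G
 }}}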

-- 
Ticket URL: http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1208#comment:4
tahoe-lafs http://tahoe-lafs.org
secure decentralized storage


Re: [tahoe-dev] [tahoe-lafs] #1302: installing Python 3 breaks bin\tahoe on Windows

2011-01-11 Thread tahoe-lafs
#1302: installing Python 3 breaks bin\tahoe on Windows
+---
 Reporter:  davidsarah  |   Owner:  somebody 
 Type:  defect  |  Status:  new  
 Priority:  major   |   Milestone:  undecided
Component:  packaging   | Version:  1.8.1
   Resolution:  |Keywords:  windows regression setuptools
Launchpad Bug:  |  
+---

Comment (by davidsarah):

 This is a regression in Tahoe-LAFS v1.8.0 relative to v1.7.1, caused by
 the fix to #1074. Before v1.8.0, {{{bin\tahoe.exe}}} (built from
 [http://tahoe-lafs.org/trac/zetuptoolz/browser/trunk/launcher.c?rev=583#L186
 this source]) would have run the Python executable from
 {{{tahoe-script.py}}}'s shebang line, which is the one used for the Tahoe
 build/install, so it would have worked for the same reason as it does from
 a Cygwin prompt.

 So, my bad :-(

 Possible solutions (that don't regress #1074):
 a. Have
 [source:setuptools-0.6c16dev3.egg/setuptools/command/scriptsetup.py
 scriptsetup] associate .pyscript\shell\open\command with the current
 Python interpreter ({{{sys.executable}}} when scriptsetup is run) rather
 than Python.File.
 b. Make {{{tahoe.pyscript}}} work with any Python version, but use the
 Python executable from build time to run
 {{{support\Scripts\tahoe.pyscript}}}.
 c. Make {{{bin\tahoe}}} something other than a Python script (for example,
 a .bat or .cmd file).

 Note that a. has the property that if you run build/install with version X
 of Python, then all copies of Tahoe for the current user (since v1.8.0)
 will then use version X, rather than just the one you're
 building/installing. That differs from the behaviour on other operating
 systems or from a Cygwin prompt.
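
 For concreteness, option a. might look something like this (a sketch only,
 not the actual scriptsetup code; the per-user Software\Classes key path is
 an assumption):

 {{{
 # Point .pyscript\shell\open\command at the interpreter running the build
 # (sys.executable), under HKCU so no admin rights are needed.
 import sys, _winreg

 cmd = '"%s" "%%1" %%*' % sys.executable
 key = _winreg.CreateKey(_winreg.HKEY_CURRENT_USER,
                         r'Software\Classes\.pyscript\shell\open\command')
 try:
     _winreg.SetValue(key, '', _winreg.REG_SZ, cmd)
 finally:
     key.Close()
 }}}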

 I think b. is a bad idea; we eventually want to get rid of
 {{{tahoe.pyscript}}} (at least Brian and I do). Also, it imports
 pkg_resources, so pkg_resources would also have to work in Python 3, which
 is impractical/too much work.

 I tested c. with a {{{bin\tahoe.cmd}}} file containing
 {{{
 @C:\Python26\python.exe <full path to renamed tahoe.pyscript> %*
 }}}
 It worked once, and from then on failed silently. (I've seen this
 behaviour before with .cmd and .bat files on my system, and have never got
 to the bottom of it. Perhaps the Windows installation is just broken.)

 In summary, I'm not sure how to fix this yet.

-- 
Ticket URL: http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1302#comment:1
tahoe-lafs http://tahoe-lafs.org
secure decentralized storage