Re: [s3ql] Exception during handling of unmount exception

2013-12-16 Thread Nikolaus Rath
Nicola Cocchiaro  writes:
> With s3ql 2.5 and 2.6, if umount.s3ql fails due to (for example) the mount 
> point being in use, the following stack trace will be output:
[...]

Thanks for the report! Just committed a (slightly simpler) patch to
Mercurial.

Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


[s3ql] [ANNOUNCE] S3QL 2.7 has been released

2013-12-16 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 2.7.

Please note that starting with version 2.0, S3QL requires Python 3.3 or
newer. For older systems, the S3QL 1.x branch (which only requires
Python 2.7) will continue to be supported for the time being. However,
development concentrates on S3QL 2.x while the 1.x branch only receives
selected bugfixes. When possible, upgrading to S3QL 2.x is therefore
strongly recommended.

From the changelog:

2013-12-16, S3QL 2.7

  * Fixed another race condition that could lead to mount.s3ql
crashing with `ValueError: I/O operation on closed file`.

  * S3QL no longer generates warning messages for the first two times
that it has to resend a request to the storage backend. If there
is no success after the second try, messages are emitted as before.

  * S3QL now stores multiple copies of the master encryption key to
    allow recovery if the backend loses the object holding the
primary copy. To take advantage of this functionality for existing
file systems, change the file system passphrase with s3qladm.

  * Fixed problem with automatic cache size detection (mount.s3ql was
treating bytes as kilobytes). Thanks to gvorm...@gmail.com for the
patch!

  * Fixed "AttributeError in LegacyDecryptDecompressFilter" crash when
reading objects written by old S3QL versions.

  * Fixed a problem with umount.s3ql giving a strange error when
the mountpoint is still in use.


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (http://code.google.com/p/s3ql/issues/list).

The GPG checksum is attached to this message.


Enjoy,

   -Nikolaus


-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)

iGoEABECACoFAlKvz0IjGmh0dHA6Ly93d3cucmF0aC5vcmcvZ3BncG9saWN5Lmh0
bWwACgkQqa23+K5OQlwUwwCfY/y10RdkW8D7ByQ0eDHK6BVvXFkAn2RgFmcY+aNu
nHa+ie0j8O3r9gvi
=W6jo
-END PGP SIGNATURE-


signature.asc
Description: OpenPGP digital signature


Re: [s3ql] Re: Correct authfile provided, still *.s3ql commands ask me to enter credentials on terminal

2013-12-18 Thread Nikolaus Rath
On 12/18/2013 12:36 PM, Ray wrote:
> Found out:
> 
> the auth file will not work when behind a symlink.
> 
> it has to be the file itself... unfortunately

That's not intended behavior though. Could you file a bug?

Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] xml.etree.ElementTree.ParseError: no element found: line 1, column 0

2013-12-24 Thread Nikolaus Rath
Balázs  writes:
> Hi Niko,
>
> Happy holidays! 
>
> Things have been much better recently. I do have an ugly bug for you 
> (submitted as issue 447). I re-run the same bucket once more since I 
> submitted the original error, and got the same thing again :-( Well, not 
> the exact same error, just similar, and on the same bucket. Let me know 
> what to try :-)
[..]

Yeah, apologies for not responding sooner. I was a bit busy, and it
looked like this would be a rare case. I'll reply in the bug tracker.

Happy holidays,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] Re: Fwd: Extending S3QL

2013-12-30 Thread Nikolaus Rath
Hi Diogo,

Diogo Vieira  writes:
> Hello once again and I'm sorry to message you directly.
>
> I tried to reply to the mailing list but it seems to not have been
> published because I can't find it browsing through google groups web
> interface and I haven't had any answer since my email was sent. Can
> you please inform me if I actually sent the email to the correct
> mailing list? And, if possible, can you help me with the problem I
> stated?

The address was correct, but your email did not get through. Are you
sure you did not get an (automated) reply with more information? Did you
subscribe to the group? (Posting is only allowed by members).


>>>> I'm looking forward to extend (at least as just a prototype for now)
>>>> your s3ql project to add a custom backend (with a new service being
>>>> developed internally similar to S3). Since I didn't see any info in
>>>> the documentation about that I thought that maybe you could give me
>>>> some advice where to start. Would it be too much trouble to add
>>>> support to it?
>>> 
>>> If you're looking for documentation on the code, you'll have to look at
>>> the code itself, there should be plenty of docstrings and comments. To
>>> implement a backend, take a look at the AbstractBackend class in
>>> src/s3ql/backend/common.py. It describes all methods that you need to
>>> implement. For a simple example, take a look at the local backend in
>>> src/s3ql/backends/local.py, for something more complex look at
>>> src/s3ql/backends/s3c.py or swift.py.
>>> 
>>> The difficulty of adding a backend for your service is hard to judge
>>> without any information on your service. Is it documented somewhere?
>> 
>> I'm taking a look at this moment at the common.py and all the
>> backends files and it seems to be easily readable and
>> understandable. As for the documentation for the service there is
>> none at the moment because it's still a prototype, but the API tries
>> to emulate that of S3 closely so I guess it shouldn't be too
>> difficult.

In that case you may not have to do any coding at all, just use the
S3-compatible backend.
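For orientation, such a backend is essentially a key-value store. The sketch below uses simplified, hypothetical method names to show the overall shape; the real interface is defined by the AbstractBackend class in src/s3ql/backends/common.py and differs in its exact signatures:

```python
# Hypothetical sketch of a key-value storage backend, NOT the actual
# AbstractBackend interface -- see src/s3ql/backends/common.py for that.
class ToyBackend:
    def __init__(self, storage_url, login, password):
        self.objects = {}  # stand-in for the remote storage service

    def store(self, key, value):
        self.objects[key] = value

    def fetch(self, key):
        return self.objects[key]

    def delete(self, key):
        del self.objects[key]

    def list(self, prefix=''):
        return sorted(k for k in self.objects if k.startswith(prefix))

backend = ToyBackend('toy://bucket', 'user', 'secret')
backend.store('s3ql_metadata', b'...')
backend.store('s3ql_data_1', b'block')
print(backend.list('s3ql_data_'))  # ['s3ql_data_1']
```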

>> One thing I'm having difficulties with is understanding what should be the
>> workflow during development. Having written scripts in Python before I know
>> the syntax well, but I never really developed on open source software,
>> especially when it comes to more than just a script. For now what I did was
>> to download the source (which actually contains binaries)

I very much doubt that. The s3ql-2.7.tar.bz2 does not contain any
binaries. Are you sure you downloaded the right file?

>> from the website and started to work on the src folder. But how would
>> I actually iterate between development and testing? Imagine I start
>> to create a new backend. How would I test my new backend? I'm sorry
>> if I sound like a newbie, but in this matter I really am :)

Best way would be to add unit tests (see tests/t1_backends.py) and run
them with "py.test tests/". After that, try to use it in daily operation :-).
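A backend unit test in py.test style might follow this round-trip pattern; the class and key names here are illustrative stand-ins, not the fixtures actually used in tests/t1_backends.py:

```python
# Illustrative py.test-style round-trip test; the real S3QL tests in
# tests/t1_backends.py run comparable checks against every backend class.
class InMemoryBackend:
    def __init__(self):
        self.objects = {}

    def write(self, key, data):
        self.objects[key] = data

    def read(self, key):
        return self.objects[key]

def test_roundtrip():
    # Whatever goes in must come back out unchanged.
    backend = InMemoryBackend()
    payload = b'some test data'
    backend.write('test_key', payload)
    assert backend.read('test_key') == payload

test_roundtrip()
```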


Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: Fwd: Extending S3QL

2013-12-30 Thread Nikolaus Rath
Diogo Vieira  writes:
>>> One thing I'm having difficulties with is understanding what should be the
>>> workflow during development. Having written scripts in Python before I know
>>> the syntax well, but I never really developed on open source software,
>>> especially when it comes to more than just a script. For now what I did was
>>> to download the source (which actually contains binaries)
>> 
>> I very much doubt that. The s3ql-2.7.tar.bz2 does not contain any
>> binaries. Are you sure you downloaded the right file?
>
> Sorry if didn't make myself clear. I meant the python scripts in the
> bin folder.

Well, these are scripts, not binaries, as you said yourself :-).

> And I'm using the s3ql-1.17.tar.bz2. Should I use the other one (which
> I believe needs python 3)?

Yes, I strongly suggest to work on S3QL 2.x. The 1.x branch is
maintenance mode only, i.e. a dead end.

>>> from the website and started to work on the src folder. But how would
>>> I actually iterate between development and testing? Imagine I start
>>> to create a new backend. How would I test my new backend? I'm sorry
>>> if I sound like a newbie, but in this matter I really am :)
>> 
>> Best way would be to add unit tests (see tests/t1_backends.py) and run
>> them with "py.test tests/". After that, try to use it in daily operation :-).
>
> Okay, I'll take a look and add my own files. But if I try to literally
> run `py.test tests` in the root folder it doesn't work (maybe I got
> something wrong?).

Yes, you forgot to include the error message in your email.

> BTW, when developing, after I update the code should I run `python
> setup.py {build, Cython,build_ext}` and run the scripts in the bin
> folder, if I mean to use it interactively (that is running the code I
> changed)?

As long as you don't touch the *.pyx files, there is no need to run
setup.py. If you run the scripts in the bin/ folder, they'll pick up any
changes automatically (and so do the tests).

Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Extending S3QL

2013-12-30 Thread Nikolaus Rath
On 12/30/2013 10:06 AM, Diogo Vieira wrote:
>> Diogo Vieira  writes:
>>>>> One thing I'm having difficulties with is understanding what should be
>>>>> the workflow during development. Having written scripts in Python before
>>>>> I know the syntax well, but I never really developed on open source
>>>>> software, especially when it comes to more than just a script. For now
>>>>> what I did was to download the source (which actually contains binaries)

>>>> I very much doubt that. The s3ql-2.7.tar.bz2 does not contain any
>>>> binaries. Are you sure you downloaded the right file?
>>>
>>> Sorry if didn't make myself clear. I meant the python scripts in the
>>> bin folder.
>>
>> Well, these are scripts, not binaries, as you said yourself :-).
>>
>>> And I'm using the s3ql-1.17.tar.bz2. Should I use the other one (which
>>> I believe needs python 3)?
>>
>> Yes, I strongly suggest to work on S3QL 2.x. The 1.x branch is
>> maintenance mode only, i.e. a dead end.
>>
>>>>> from the website and started to work on the src folder. But how would
>>>>> I actually iterate between development and testing? Imagine I start
>>>>> to create a new backend. How would I test my new backend? I'm sorry
>>>>> if I sound like a newbie, but in this matter I really am :)

>>>> Best way would be to add unit tests (see tests/t1_backends.py) and run
>>>> them with "py.test tests/". After that, try to use it in daily
>>>> operation :-).
>>>
>>> Okay, I'll take a look and add my own files. but if I try to literally
>>> run `py.test tests` in the root folder it doesn't work (maybe I got
>>> something wrong?).
>>
>> Yes, you forgot to include the error message in your email.
> 
> Well, it is just a "command not found". Is there a package I need to have 
> installed?

Yes, you need to install pytest (http://pytest.org/).


Best,
Nikolaus

PS: Please don't send CC's to my personal address, I'm reading the list :-).

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] mkfs.s3ql error: Invalid Credentials or skewed system clock

2014-01-06 Thread Nikolaus Rath
On 01/06/2014 02:13 AM, Diogo Vieira wrote:
>> I am attempting to create an S3 filesystem and I am receiving the
>> following error:
>>
>> "Invalid credentials or skewed system clock." 
>>
>> Information about my setup:
>>
>> >> mkfs.s3ql --version
>> >> S3QL 1.12
>>
>> Not my real values but just to make sure I am not making a mistake
>> with parameters.
>>
>> Access Key ID: BA8AHD760HA1ZXDJUZPR
>> Secret Access Key: valDQ8yuckEKzKVgxeHZRewcWb8zHVrVlezLyzPH
>> Bucket Name: 4CB68BE999C642FB
>>
>> >> mkfs.s3ql s3://4CB68BE999C642FB
>> >> Enter backend login: BA8AHD760HA1ZXDJUZPR
>> >> Enter backend password: valDQ8yuckEKzKVgxeHZRewcWb8zHVrVlezLyzPH
>> >> Invalid credentials or skewed system clock.
>>
>> To rule out the skewed clock I ran:
>>
>> >>ntpdate -q pool.ntp.org 
>> server 216.129.110.30, stratum 2, offset -0.007092, delay 0.05629
>> server 72.8.140.222, stratum 2, offset -0.003913, delay 0.10594
>> server 165.123.132.61, stratum 2, offset -0.011243, delay 0.08008
>> server 173.248.148.27, stratum 2, offset -0.005386, delay 0.10223
>> 18 Oct 23:14:08 ntpdate[16126]: adjust time server 216.129.110.30
>> offset -0.007092 sec
>>
>> Anyone see any problems with what I am doing?
> 
> I have exactly the same problem. Can someone help me?

Do your credentials work when you use a different program, e.g. s3cmd or
http://timkay.com/aws/?


Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] mkfs.s3ql error: Invalid Credentials or skewed system clock

2014-01-06 Thread Nikolaus Rath
On 01/06/2014 09:36 AM, Diogo Vieira wrote:
> On Jan 6, 2014, at 5:30 PM, Nikolaus Rath  wrote:
> 
>> On 01/06/2014 02:13 AM, Diogo Vieira wrote:
>>>> I am attempting to create an S3 filesystem and I am receiving the
>>>> following error:
>>>>
>>>> "Invalid credentials or skewed system clock." 
>>>>
>>>> Information about my setup:
>>>>
>>>>>> mkfs.s3ql --version
>>>>>> S3QL 1.12
>>>>
>>>> Not my real values but just to make sure I am not making a mistake
>>>> with parameters.
>>>>
>>>> Access Key ID: BA8AHD760HA1ZXDJUZPR
>>>> Secret Access Key: valDQ8yuckEKzKVgxeHZRewcWb8zHVrVlezLyzPH
>>>> Bucket Name: 4CB68BE999C642FB
>>>>
>>>>>> mkfs.s3ql s3://4CB68BE999C642FB
>>>>>> Enter backend login: BA8AHD760HA1ZXDJUZPR
>>>>>> Enter backend password: valDQ8yuckEKzKVgxeHZRewcWb8zHVrVlezLyzPH
>>>>>> Invalid credentials or skewed system clock.
>>>>
>>>> To rule out the skewed clock I ran:
>>>>
>>>>>> ntpdate -q pool.ntp.org <http://pool.ntp.org/>
>>>> server 216.129.110.30, stratum 2, offset -0.007092, delay 0.05629
>>>> server 72.8.140.222, stratum 2, offset -0.003913, delay 0.10594
>>>> server 165.123.132.61, stratum 2, offset -0.011243, delay 0.08008
>>>> server 173.248.148.27, stratum 2, offset -0.005386, delay 0.10223
>>>> 18 Oct 23:14:08 ntpdate[16126]: adjust time server 216.129.110.30
>>>> offset -0.007092 sec
>>>>
>>>> Anyone see any problems with what I am doing?
>>>
>>> I have exactly the same problem. Can someone help me?
>>
>> Do your credentials work when you use a different program, e.g. s3cmd or
>> http://timkay.com/aws/?
> 
> It does, at least with s3cmd.

Hm. Does it work if you put the credentials in ~/.s3ql/authinfo2? Also,
please make 120% sure that you use exactly the same credentials and
bucket name as for s3cmd. It would be pretty weird if S3QL's S3
authentication failed for you but worked for everyone else...
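For reference, ~/.s3ql/authinfo2 is an INI-style file; an entry for an S3 bucket looks roughly like this (section name, bucket name, and credentials below are placeholders):

```ini
[s3]
storage-url: s3://mybucket
backend-login: BA8AHD760HA1ZXDJUZPR
backend-password: valDQ8yuckEKzKVgxeHZRewcWb8zHVrVlezLyzPH
```

Note that S3QL expects restrictive permissions on this file (e.g. chmod 600), since it contains plaintext credentials.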

Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] mkfs.s3ql error: Invalid Credentials or skewed system clock

2014-01-06 Thread Nikolaus Rath
On 01/06/2014 10:50 AM, Diogo Vieira wrote:
> Now with the correct bucket name it did not give me the wrong
> credentials error but instead kept bouncing around between servers
> ending with a "Too many chained redirections" traceback as noted below:
> 
> Using proxy 10.10.4.254:3128
> Using proxy 10.10.4.254:3128
> _do_request(): redirected to s3.amazonaws.com 
> Using proxy 10.10.4.254:3128
> _do_request(): redirected to eurotux-teste2.s3.amazonaws.com
> 
> Using proxy 10.10.4.254:3128
> _do_request(): redirected to s3.amazonaws.com 
> Using proxy 10.10.4.254:3128
> _do_request(): redirected to eurotux-teste2.s3.amazonaws.com
> 
[...]

Could you please file a bug at https://bitbucket.org/nikratio/s3ql/issues?

When doing that, could you check if you have the same problem if you
unset the http_proxy environment variable before calling
mkfs.s3ql/mount.s3ql? (assuming the proxy isn't mandatory).


> BTW, although I have the authfile in place it always asked for my
> credentials again, unless I specified its path in with the flag
> --authfile (I don't know if it intended).

You probably did not put it in the location where s3ql tries to read it
from by default.


Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] mkfs.s3ql error: Invalid Credentials or skewed system clock

2014-01-07 Thread Nikolaus Rath
On 01/07/2014 02:11 AM, Diogo Vieira wrote:
>>> BTW, although I have the authfile in place it always asked for my
>>> credentials again, unless I specified its path with the flag
>>> --authfile (I don't know if it is intended).
>>
>> You probably did not put it in the location where s3ql tries to read it
>> from by default.
>>
> Isn't the default ~/.s3ql/authfile2? That's the location where I put it. I've 
> set its permissions to 600, that should be ok, right?

No, it's ~/.s3ql/authinfo2. You can see that when running e.g.
mount.s3ql --help.


Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Extending S3QL

2014-01-07 Thread Nikolaus Rath
Hi Diogo,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

Diogo Vieira  writes:
> About extending S3QL, when I have my backend ready how am I supposed
> to let S3QL know about it (even on URL parsing, etc.)?
>
> I looked through some files and I believe I have to add it to
> s3ql/backends/__init__.py right? But do I need to do anything more?

No, this is all that's necessary. If you supply a storage URL starting
with foobar://, S3QL will attempt to load the s3ql.backends.foobar
module, instantiate it and hand it the rest of the storage URL.
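The dispatch described above amounts to deriving a module name from the URL scheme. A hedged sketch of that mapping (the actual lookup in s3ql/backends/__init__.py may differ in details):

```python
import importlib

# Map a storage URL like 'foobar://bucket/prefix' to a backend module name.
# Sketch only -- the real logic lives in s3ql/backends/__init__.py.
def backend_module_name(storage_url):
    scheme = storage_url.split('://', 1)[0]
    return 's3ql.backends.' + scheme

def load_backend(storage_url):
    # importlib.import_module raises ImportError if no such backend exists
    return importlib.import_module(backend_module_name(storage_url))

print(backend_module_name('foobar://bucket/prefix'))  # s3ql.backends.foobar
```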



Best,
Nikolaus

PS: Please remember the first paragraph when replying :-).

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL Homepage has moved

2014-01-11 Thread Nikolaus Rath
Dear all,

Please note that the S3QL homepage has moved from Google Code to
BitBucket. The new URL is

https://bitbucket.org/nikratio/s3ql/

This move was necessary because Google Code no longer allows offering
files for download.

As a positive side effect, the S3QL wiki is now finally an actual wiki:
it can be edited by everyone!


Happy new year,
-Nikolaus


-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: missing deltadump.c

2014-01-15 Thread Nikolaus Rath
Hi PA,

[Quoting fixed]

PA Nilsson  writes:
> On Wednesday, January 15, 2014 9:28:16 AM UTC+1, PA Nilsson wrote:
>>
>> I am trying to build s3ql from the 2.7 release (Ubuntu 13.10, 64-bit) but
>> get an error when I am running:
>>
>> python3 setup.py build_ext --inplace
>>
>> MANIFEST.in exists, compiling with developer options
>> running build_ext
>> building 's3ql.deltadump' extension
>> creating build
>> creating build/temp.linux-x86_64-3.3
>> creating build/temp.linux-x86_64-3.3/src
>> creating build/temp.linux-x86_64-3.3/src/s3ql
>> x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall 
>> -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat 
>> -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.3m 
>> -c src/s3ql/deltadump.c -o build/temp.linux-x86_64-3.3/src/s3ql/deltadump.o 
>> -Wall -Werror -Wextra
>> x86_64-linux-gnu-gcc: error: src/s3ql/deltadump.c: No such file or 
>> directory
>> x86_64-linux-gnu-gcc: fatal error: no input files
>> compilation terminated.
>> error: command 'x86_64-linux-gnu-gcc' failed with exit status 4
>>
>>
>> Shall the 'deltadump.c' be copied from somewhere and I am missing a dep?
>> I believe I have installed all the requirements stated in the installation 
>> instructions.
>
> Found the problem by looking at the dev-instructions here:
> http://www.rath.org/s3ql-docs/installation.html#development-version
>
> I was missing the
>
> python3 setup.py build_cython
>
> line which was not included in the install instructions in the 2.7
> release.

I don't know what you were looking at, but it was definitely not the
S3QL 2.7 release:

$ wget --quiet https://bitbucket.org/nikratio/s3ql/downloads/s3ql-2.7.tar.bz2 
https://bitbucket.org/nikratio/s3ql/downloads/s3ql-2.7.tar.bz2.asc
$ gpg --verify s3ql-2.7.tar.bz2.asc 
gpg: Signature made Mon 16 Dec 2013 08:12:50 PM PST using DSA key ID AE4E425C
gpg: Good signature from "Nikolaus Rath (born 1983-08-13 in Essen, Germany)"
gpg: aka "Nikolaus Rath "
gpg: Signature policy: http://www.rath.org/gpgpolicy.html
$ tar xjf s3ql-2.7.tar.bz2 
$ ls -l s3ql-2.7/src/s3ql/deltadump.c 
-rw-rw-r-- 1 nikratio nikratio 437331 Dec 16 20:12 s3ql-2.7/src/s3ql/deltadump.c
$ grep cython s3ql-2.7/doc/html/installation.html 
Version 0.17 or newer of the <a href="http://www.cython.org/">Cython</a> compiler.
python3 setup.py build_cython


In other words, the S3QL 2.7 release contains deltadump.c (so the cython
step is not necessary), and it contains the cython line in the
documentation.


You might want to check if you got the correct sources...


Best,
Nikolaus


-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] HTTP timeouts

2014-02-05 Thread Nikolaus Rath
Hi Nicola,

On 02/05/2014 10:52 AM, Nicola Cocchiaro wrote:
> Hi Nikolaus,
> 
> Are there timeouts for HTTP requests sent by S3ql? I took a look at the
> 2.7 code (backends/common.py), and the http.client.HTTPConnection
> constructor calls there (as well as HTTPSConnection ones) don't seem to
> specify a timeout, and it's my understanding from the Python docs that
> this would mean defaulting to the global default timeout. However the
> default timeout for the socket library (which this should fall back to)
> is None in the absence of a call to socket.setdefaulttimeout(), meaning
> HTTP requests could block indefinitely. Is there a timeout set elsewhere
> in code that I haven't been able to find and you can point me to?

No, S3QL does not set any timeouts explicitly. It will wait until the
server closes its side of the TCP connection, or (if the connection is
interrupted) until the kernel's TCP stack decides the connection has
been lost.
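The behavior described here can be seen directly in http.client: without a timeout argument, a connection falls back to the global socket default, which is None unless changed, so blocking calls may wait indefinitely. A caller who wants bounded waits could pass a timeout explicitly, e.g.:

```python
import socket
import http.client

# Without socket.setdefaulttimeout(), the global default is None,
# i.e. blocking network calls may wait indefinitely.
print(socket.getdefaulttimeout())  # None

# An explicit per-connection timeout bounds connect() and recv() calls.
# (Constructing the object does not open a connection yet.)
conn = http.client.HTTPSConnection('s3.amazonaws.com', timeout=30)
print(conn.timeout)  # 30
```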


Best,
Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Upgrade from S3QL 1.15 to 2.7

2014-02-25 Thread Nikolaus Rath
On 02/25/2014 01:06 PM, Dan Johansson wrote:
> Hi,
> 
> I am at last planning to upgrade S3QL on my Gentoo box from 1.15 to 2.7
> (latest one in portage at the moment).
> 
> Can I go directly from 1.15 to 2.7 (without losing any data) or do I
> have to do it in smaller steps (and what steps in case of yes)?

IIRC you should be able to do it in one step. If not, then s3ql 2.7's
s3qladm upgrade command will tell you which intermediate version to use.


Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Upgrade from S3QL 1.15 to 2.7

2014-02-26 Thread Nikolaus Rath
On 02/26/2014 01:16 PM, Dan Johansson wrote:
> On 25.02.2014 23:52, Nikolaus Rath wrote:
>> On 02/25/2014 01:06 PM, Dan Johansson wrote:
>>> Hi,
>>>
>>> I am at last planning to upgrade S3QL on my Gentoo box from 1.15 to 2.7
>>> (latest one in portage at the moment).
>>>
>>> Can I go directly from 1.15 to 2.7 (without losing any data) or do I
>>> have to do it in smaller steps (and what steps in case of yes)?
>>
>> IIRC you should be able to do it in one step. If not, then s3ql 2.7's
>> s3qladm upgrade command will tell you which intermediate version to use.
> 
> Ok, I went ahead and updated to 2.7 - now I am getting the following
> error message "Buckets with dots in the name cannot be accessed over
> SSL." when trying to run "fsck.s3ql --batch s3://abc.def.ghi".
> 
> This was working perfectly well with 1.15.
> 
> Any suggestion?

S3QL 1.15 did not verify the server's SSL certificate. This means
traffic was encrypted, but you couldn't be sure that you're actually
talking to the correct server rather than some mischievous
man-in-the-middle.

This issue was fixed in S3QL 2.7 (or, more precisely, in Python 3.x).
However, due to the way that Amazon has implemented SSL encryption,
any bucket with a dot in its name will appear to have an invalid
certificate (this is because AWS always supplies a certificate for
*.s3.amazonaws.com - but the * does not match dots).
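The matching rule can be illustrated with a simplified version of HTTPS hostname checking (per RFC 2818, a wildcard stands for a single DNS label only, so it cannot span a dot):

```python
# Simplified wildcard matching as used for HTTPS certificates:
# '*' matches exactly one DNS label, so it cannot span a dot.
def wildcard_matches(pattern, hostname):
    pat, host = pattern.split('.'), hostname.split('.')
    if len(pat) != len(host):
        return False
    return all(p == '*' or p == h for p, h in zip(pat, host))

print(wildcard_matches('*.s3.amazonaws.com', 'mybucket.s3.amazonaws.com'))
# True
print(wildcard_matches('*.s3.amazonaws.com', 'abc.def.ghi.s3.amazonaws.com'))
# False -- a bucket name with dots adds extra labels
```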

Is your file system itself encrypted? In that case my suggestion is to
just use --no-ssl. Your data will be just as secure (or insecure,
depending on your POV) as with S3QL 1.15, and as a side-effect
performance will increase (Amazon S3 servers are terribly slow when you
access them over SSL).


Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Upgrade from S3QL 1.15 to 2.7

2014-02-27 Thread Nikolaus Rath
On 02/27/2014 09:04 AM, Dan Johansson wrote:
> One question though, can I set the --no-ssl "globally" in some
> config-file? That way I do not have to update my scripts that implement
> s3ql.

No, that's not possible at this time. Sorry.

Best,
-Nikolaus

-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Mount.s3ql hangs after 403 responses

2014-03-01 Thread Nikolaus Rath
Nicola Cocchiaro  writes:
> Nikolaus,
>
> A while ago we had discussed an llfuse exception in a thread called 
> "strange llfuse exception" that was not being correctly formed due to a 
> Cython bug. With the workaround in llfuse 0.40 the issue with the exception 
> was resolved, but the same triggering event of Google Storage returning 403 
> intermittently still triggers an exception that results in bad behavior. 
> Specifically, after an AccessDenied exception (or two as in the logs 
> below), mount.s3ql hangs and with it all file system operations also hang 
> (e.g., accessing the file system extended attributes or running the 'ls' 
> and 'df' commands).
[...]
>
> This triggers another AccessDenied exception, but that's where the log 
> ends. At this point S3ql is no longer responsive, 'ls' and 'df' hang, and 
> any other file system operation also seems to hang. This also happened 
> consistently with all S3ql processes I had running on other boxes that 
> happened to be uploading data in the time window when Google intermittently 
> returned 403.

Thanks for the report. I'll try to reason out what's happening
here. Could you please report this issue at
https://bitbucket.org/nikratio/s3ql/issues to make sure it doesn't get
lost?

Also, if you can reproduce this issue (or encounter it again) and
mount.s3ql hangs, could you please try to obtain a Python stacktrace as
explained on
https://bitbucket.org/nikratio/s3ql/wiki/Providing%20Debugging%20Info?
That'd make it much easier to diagnose what's going on.


> Question is, is this due to the timing of those (repeated) exceptions and 
> how S3ql handles them, or a FUSE bug?

No, this is probably an S3QL bug.

Best,
Nikolaus



[s3ql] S3QL 2.8 Prerelease available - please test

2014-03-01 Thread Nikolaus Rath
Hello,

I have just uploaded a prerelease of S3QL 2.8 to
https://bitbucket.org/nikratio/s3ql/downloads
(s3ql-snapshot.tar.bz2). If you can spare a few minutes and have a
not-mission-critical system or some data to test with, it would be great
if you could give it a try.

This release replaces the HTTP interface code with a completely new
module, so I'd like to have as much testing as possible before releasing
it.

To test the new release, download and extract the tarball, run "setup.py
build_ext --inplace" and then run the commands from the "bin"
directory. Do not run "setup.py install".
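In other words, the test procedure boils down to the following (a sketch assuming a POSIX shell and the tarball name given above; the extracted directory name may vary):

```shell
tar xjf s3ql-snapshot.tar.bz2
cd s3ql-snapshot*/               # extracted directory name may differ
python3 setup.py build_ext --inplace
./bin/mount.s3ql --version       # run the commands from bin/, do not install
```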


If you are interested: the new http client module allows the use of HTTP
pipelining, which will speed up the upload of small objects as well as
removal of objects. In addition to that, it also gets rid of a rather
ugly hack that was necessary to support Expect-100 operation with the
standard Python http module. S3QL 2.8 uses the new module, but does not
yet use the additional features. These are expected to become available
in S3QL 2.9, once I have confidence that the new code works at least as
reliably as before.


Best,
-Nikolaus



Re: [s3ql] Can't fsck the file system after a clean mkfs: ObjectNotEncrypted Exception is thrown

2014-03-07 Thread Nikolaus Rath
Diogo Vieira  writes:
> Hi,
>
> I'm getting an ObjectNotEncrypted Exception when I fsck the file
> system I just created (without errors; and I checked that the files
> are created in the backend) like so:
>
> [dfv@host-10-10-6-23 s3ql-2.7]$ bin/mkfs.s3ql --force 
> eds://failing_eds_test/prefix/
> Starting new HTTP connection (1): 10.10.6.23
[...]

I think it would be appropriate to mention that you are not just using
S3QL, but that you are developing your own backend. This problem is
almost certainly due to a bug in your code. :-)


> Uncaught top-level exception:
> Traceback (most recent call last):
>   File "bin/fsck.s3ql", line 26, in 
> s3ql.fsck.main(sys.argv[1:])
>   File "/home/dfv/s3ql-2.7/src/s3ql/fsck.py", line 1109, in main
> elif backend.lookup('s3ql_metadata')['seq_no'] != param['seq_no']:
>   File "/home/dfv/s3ql-2.7/src/s3ql/backends/common.py", line 635, in lookup
> return self._unwrap_meta(metadata)
>   File "/home/dfv/s3ql-2.7/src/s3ql/backends/common.py", line 667, in 
> _unwrap_meta
> raise ObjectNotEncrypted()
> s3ql.backends.common.ObjectNotEncrypted
>
> Can someone please tell me what should I do?

I'd start with running the test suite and putting your code somewhere
online so that people can look at it.


Best,
-Nikolaus


Re: [s3ql] Possible to run S3QL on Debian Wheezy (Python 3.2)?

2014-03-11 Thread Nikolaus Rath
Tor Krill  writes:
> Hi all,
>
> I really would like to run S3Ql 2.x on Debian Wheezy. Is this
> possible?

Yes, you just need to install Python 3.3.

> Looking at the PPA for Ubuntu S3QL requires Python 3.3 but Wheezy only
> has Python 3.2. I investigated the possibilities to backport Python
> 3.3 to Wheezy but that seems like quite an undertaking.

Hmm. I think the last time I checked, it was enough to just recompile
the jessie packages for wheezy:

apt-get source python3.3/testing
apt-get build-dep python3.3
(cd python3*; dpkg-buildpackage -us -uc)

Have you tried that?

> So to summarize, is it possible to run S3QL 2.x with Python 3.2?  If
> not, what is the problem?

S3QL 2.x won't run on Python 3.2 without changes.

There are some missing modules (faulthandler, lzma), some exceptions
don't have names yet
(http://docs.python.org/3/whatsnew/3.3.html#pep-3151), "yield from" does
not exist, contextlib.ExitStack is missing, hmac.compare_digest is
missing, and probably some more things.
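For reference, the availability of those features can be probed at runtime. A small sketch (not part of S3QL) that reports what a given interpreter is missing:

```python
import contextlib
import hmac

# Probe for the Python 3.3 features mentioned above; under Python 3.2
# several items would be reported missing, under 3.3+ none.
missing = []

for mod in ("faulthandler", "lzma"):
    try:
        __import__(mod)
    except ImportError:
        missing.append(mod)

if not hasattr(contextlib, "ExitStack"):
    missing.append("contextlib.ExitStack")
if not hasattr(hmac, "compare_digest"):
    missing.append("hmac.compare_digest")

try:  # "yield from" is new syntax, so it has to be probed via compile()
    compile("def g(): yield from ()", "<probe>", "exec")
except SyntaxError:
    missing.append("'yield from' syntax")

print("missing:", missing if missing else "nothing")
```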


Best,
-Nikolaus



Re: Private message regarding: [s3ql] HTTP timeouts

2014-03-12 Thread Nikolaus Rath
Hi Nicola,

Nicola Cocchiaro  writes:
> The reason I originally asked was due to seeing some outbound connections 
> not completing but just hanging, until using umount.s3ql would let them 
> return with a TimeoutError (no less than 15 minutes later in all cases 
> seen). I was not able to dig much deeper at the time, but to experiment 
> more I put together a simple patch to add a configurable socket timeout to 
> all S3ql tools that may make use of it. I've attached it if you'd like to 
> consider it.

Thanks for the patch! I'm generally rather reluctant to add new
command-line options unless they are absolutely crucial. The problem is
that the number of possible configurations (and potential interaction
bug) goes up exponentially with every option.

In this case, I am not sure I fully understand in which situation this
option is intended to be used (so I'm pretty sure that a regular user
wouldn't know when to use it either, which is always a bad sign for a
command line argument). Could you give some additional details on that?

For example, if I'm not happy with the system timeout (which seems to be
15 minutes in your case), shouldn't this be adjusted on the os level as
well? And if not, is there really a need to make the timeout
configurable rather than having S3QL simply use an internal default?
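For what it's worth, an internal default along those lines needs only a single standard-library call (a sketch; the 300-second value is an arbitrary example, not anything S3QL uses):

```python
import socket

# Process-wide default: every socket created from now on raises
# socket.timeout after 300 s of blocking (connect/recv/send) instead of
# waiting for the OS-level TCP timeout.
socket.setdefaulttimeout(300)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.gettimeout())   # new sockets inherit the default
s.close()
```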

Best,
-Nikolaus



Re: [s3ql] Possible to run S3QL on Debian Wheezy (Python 3.2)?

2014-03-12 Thread Nikolaus Rath
Tor Krill  writes:
>> Tor Krill  writes:
>>
>> > Looking at the PPA for Ubuntu S3QL requires Python 3.3 but Wheezy only
>> > has Python 3.2. I investigated the possibilities to backport Python
>> > 3.3 to Wheezy but that seems like quite an undertaking.
>>
>> Hmm. I think the last time I checked, it was enough to just recompile
>> the jessy packages for wheezy:
>>
>> apt-get source python3.3/testing
>> apt-get build-dep python3.3
>> (cd python3*; dpkg-buildpackage -us -uc)
>>
>> Have you tried that?
>>
>
> I tried this and the build succeeds. The problem is that i need to pull in
> a whole bunch of packages from Jessie to be able to do the build. This in
> the end results in a package that is not installable on a plain Debian
> Wheezy system :(

I think you're doing something wrong. You mean the "apt-get build-dep
python3.3" step requires installing packages from Jessie? Which ones?


> "Selecting previously unselected package python3.3-minimal.
> dpkg: regarding python3.3-minimal_3.3.5~rc2-1_amd64.deb containing
> python3.3-minimal, pre-dependency problem:
> python3.3-minimal pre-depends on libc6 (>= 2.15)
>   libc6:amd64 is installed, but is version 2.13-38+deb7u1."

Clearly, this python3.3-minimal package was not built on a wheezy
system. Are you sure you did the "apt-get build-dep" and
"dpkg-buildpackage" on a wheezy system?


Best,
-Nikolaus



Re: Private message regarding: [s3ql] HTTP timeouts

2014-03-13 Thread Nikolaus Rath
Nicola Cocchiaro  writes:
> On Tuesday, March 11, 2014 8:06:42 PM UTC-7, Nikolaus Rath wrote:
>>
>> Hi Nicola, 
>>
>> Nicola Cocchiaro writes: 
>> > The reason I originally asked was due to seeing some outbound 
>> connections 
>> > not completing but just hanging, until using umount.s3ql would let them 
>> > return with a TimeoutError (no less than 15 minutes later in all cases 
>> > seen). I was not able to dig much deeper at the time, but to experiment 
>> > more I put together a simple patch to add a configurable socket timeout 
>> to 
>> > all S3ql tools that may make use of it. I've attached it if you'd like 
>> to 
>> > consider it. 
>>
>> Thanks for the patch! I'm generally rather reluctant to add new 
>> command-line options unless they are absolutely crucial. The problem is 
>> that the number of possible configurations (and potential interaction 
>> bug) goes up exponentially with every option. 
>>
>> In this case, I am not sure I fully understand in which situation this 
>> option is intended to be used (so I'm pretty sure that a regular user 
>> wouldn't know when to use it either, which is always a bad sign for a 
>> command line argument). Could you give some additional details on that? 
>>
>> For example, if I'm not happy with the system timeout (which seems to be 
>> 15 minutes in your case), shouldn't this be adjusted on the os level as 
>> well? And if not, is there really a need to make the timeout 
>> configurable rather than having S3QL simply use an internal default? 
>>
>
>
> The problem is that there was no apparent system timeout, or those 
> connections did not seem to be timing out on their own in the cases 
> observed; I did not have the means at the time to figure out why exactly 
> this would happen, but the root cause of all this was a temporary 
> malfunction on the Google Storage side. The 15 minutes come from a 
> different process which had its own timeout (15 minutes in fact) for 
> allowing S3ql to unmount; in response to the timeout firing it called 
> umount.s3ql again, and that in turn seemed to allow the connections to be 
> recognized as timed out (possibly in response to sending SIGTERM to the 
> mount.s3ql process? This was my theory after looking at the timing from a 
> number of logs but again, unfortunately I do not have 100% solid evidence 
> that this was the reason).

Hmm. I don't think this is likely. umount.s3ql does not send SIGTERM. It
sets a termination flag that is checked in the main file system loop,
so just calling umount.s3ql would not cause any pending socket
operations to terminate.
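The distinction can be illustrated with a toy main loop (a sketch, not S3QL's actual code): a flag set from outside is only noticed between iterations, so it cannot interrupt a call that is already blocked.

```python
import threading

terminate = threading.Event()   # the "please shut down" flag

def process_one_request():
    # Stand-in for real work; if this blocked on a socket, the flag
    # below would not be rechecked until the call returned.
    terminate.wait(0.01)

def main_loop():
    while not terminate.is_set():   # flag is only checked here
        process_one_request()

t = threading.Thread(target=main_loop)
t.start()
terminate.set()     # the equivalent of umount.s3ql requesting shutdown
t.join(timeout=5)
print("loop exited:", not t.is_alive())
```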

Is there any way to reproduce the problem you had? 

> A static, internal default may be enough and in fact it helped when I
> first tried it, but more advanced users may want to adapt the timeout
> to their own use case when relying on the OS doesn't seem to help like
> in the cases observed. I understand the reluctance to add more options
> and increase complexity for all users, but I thought I'd share this
> patch for consideration, perhaps as an extra option in a set of
> clearly marked "advanced" options.

Understood, thanks. I don't want to rule out such an option yet, but if
we add it, at the very least there should be some documentation
explaining what exactly the option does, and when it should be used. At
the moment, this seems rather unclear to me (see above).


Best,
-Nikolaus



Re: [s3ql] Possible to run S3QL on Debian Wheezy (Python 3.2)?

2014-03-13 Thread Nikolaus Rath
Tor Krill  writes:
> On Wed, Mar 12, 2014 at 4:55 PM, Nikolaus Rath  wrote:
>
>> >>
>> >> apt-get source python3.3/testing
>> >> apt-get build-dep python3.3
>> >> (cd python3*; dpkg-buildpackage -us -uc)
>> >>
>> >> Have you tried that?
>> >>
>> >
>> > I tried this and the build succeeds. The problem is that i need to pull
>> in
>> > a whole bunch of packages from Jessie to be able to do the build. This in
>> > the end results in a package that is not installable on a plain Debian
>> > Wheezy system :(
>>
>> I think you're doing something wrong. You mean the "apt-get build-dep
>> python3.3" step requires installing packages from Jessie? Which ones?
>>
>> > "Selecting previously unselected package python3.3-minimal.
>> > dpkg: regarding python3.3-minimal_3.3.5~rc2-1_amd64.deb containing
>> > python3.3-minimal, pre-dependency problem:
>> > python3.3-minimal pre-depends on libc6 (>= 2.15)
>> >   libc6:amd64 is installed, but is version 2.13-38+deb7u1."
>>
>> Clearly, this python3.3-minimal package was not built on a wheezy
>> system. Are you sure you did the "apt-get build-dep" and
>> "dpkg-buildpackage" on a wheezy system?
>
>
> The python3.3 depends on libmpdec-dev which in turn finally depends on
> libc6 giving the below problems.

This does not make sense. Either libmpdec-dev is not available in
wheezy, or it is available in wheezy and compatible with wheezy's
libc6. The above message looks as if you're trying to install a jessie
libmpdec-dev binary package in wheezy.

> apt-get build-dep python3.3 now says:
>
> E: Build-Depends dependency for python3.3 cannot be satisfied because
> candidate version of package gcc can't satisfy version requirements
>
> And trying to build the package anyway says:
>
> dpkg-checkbuilddeps: Unmet build dependencies: quilt autoconf sharutils
> libreadline6-dev libncursesw5-dev (>= 5.3) zlib1g-dev libbz2-dev
> liblzma-dev libgdbm-dev libdb-dev tk-dev blt-dev (>= 2.4z) libssl-dev
> libexpat1-dev libbluetooth-dev libsqlite3-dev libffi-dev (>= 3.0.5)
> python3:any gcc (>= 4:4.8) xvfb
>
> Where the dependency on gcc 4.8 seems like the big problem here.

Python 3.3 does not depend on GCC 4.8 for sure, so this could only
happen as a result to some Debian specific patch. I would try to just
build with an older gcc and see what happens (use dpkg-buildpackage -d).

> I would conclude this as it is no easy task to get python3.3 running
> on Debian Wheezy :(

It certainly was relatively straightforward a few weeks ago. I'm running
a wheezy system with Python 3.3 here. But it is possible that the Python
package in testing got some upgrades since then that makes it hard to
backport. Another idea would be to grab the testing python3.3 package
from snapshots.debian.org (try the last version before Python 3.3 became
the default python).


Best,
-Nikolaus



[s3ql] [ANNOUNCE] S3QL 2.8 has been released

2014-03-13 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 2.8.

Please note that starting with version 2.0, S3QL requires Python 3.3 or
newer. For older systems, the S3QL 1.x branch (which only requires
Python 2.7) will continue to be supported for the time being. However,
development concentrates on S3QL 2.x while the 1.x branch only receives
selected bugfixes. When possible, upgrading to S3QL 2.x is therefore
strongly recommended.

From the changelog:

  * Fixed various problems with using a proxy server.

  * Sending SIGUSR1 to mount.s3ql now generates a stacktrace
(debugging feature).

  * When passing --installed to the test runner, S3QL commands
are now loaded from $PATH instead of the package's bin/
directory.

  * The runtests.py script now comes with the correct shebang
(i.e., it can now be called as "./runtests.py" instead of
"python3 runtests.py").

  * S3QL now requires the python dugong module
(https://bitbucket.org/nikratio/python-dugong)

  * Fixed a file system hang when all upload threads encounter
unexpected backend problems.
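The SIGUSR1 stacktrace feature mentioned above is built on functionality in the standard library; a minimal sketch of the underlying mechanism (not S3QL's actual code):

```python
import faulthandler
import os
import signal

# Dump the traceback of every thread to stderr whenever SIGUSR1 arrives.
faulthandler.register(signal.SIGUSR1, all_threads=True)

# Trigger it once for demonstration; a real daemon would just keep
# running, and the operator would use "kill -USR1 <pid>".
os.kill(os.getpid(), signal.SIGUSR1)
```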


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Enjoy,

   -Nikolaus




Re: [s3ql] encryption + compression + deduplication?

2014-03-14 Thread Nikolaus Rath
On 03/14/2014 08:11 AM, aweber1nj wrote:
> Do all the features in the subject work well together?  That is, can I
> enable all three options and assume:
> 
>  1. All my data will be stored encrypted.
>  2. Data is compressed before storing.
>  3. Identical blocks (of original content, pre-compression and
> encryption I would assume) are only stored once (correctly
> de-duplicated)?

Yes, they all work together. The order is: (1) deduplication, (2)
compression, (3) encryption.

Best,
-Nikolaus



Re: [s3ql] encryption + compression + deduplication?

2014-03-14 Thread Nikolaus Rath
Hi AJ,

(Please quote properly, i.e. use a consistent quote sign and don't put
your text on top)

AJ Weber  writes:
>> On 3/14/2014 12:04 PM, Nikolaus Rath wrote:
>>> On 03/14/2014 08:11 AM, aweber1nj wrote:
>>>> Do all the features in the subject work well together?  That is, can I
>>>> enable all three options and assume:
>>>>
>>>>   1. All my data will be stored encrypted.
>>>>   2. Data is compressed before storing.
>>>>   3. Identical blocks (of original content, pre-compression and
>>>>  encryption I would assume) are only stored once (correctly
>>>>  de-duplicated)?
>>> Yes, they all work together. The order is: (1) deduplication, (2)
>>> compression, (3) encryption.
>
> That makes sense...so you check the block checksum against the local
> DB first, if it has a match, you add a reference to the same block for
> the "new file" and continue on.  If not, you add the checksum/block
> details to the DB, compress it, encrypt it and send it for storage?

Yes.
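That flow can be sketched in a few lines (illustrative only, not S3QL's actual code; the byte-reversal "cipher" is a placeholder for real encryption):

```python
import hashlib
import zlib

db = {}         # checksum -> stored object (stand-in for the local DB)
refcount = {}   # checksum -> number of references to the block

def encrypt(data: bytes) -> bytes:
    return data[::-1]   # placeholder only; not real cryptography

def put_block(data: bytes) -> str:
    """Store one block: deduplicate first, then compress, then encrypt."""
    key = hashlib.sha256(data).hexdigest()
    if key in db:               # known block: just add another reference
        refcount[key] += 1
    else:                       # new block: compress, encrypt, upload
        db[key] = encrypt(zlib.compress(data))
        refcount[key] = 1
    return key

a = put_block(b"some file content" * 64)
b = put_block(b"some file content" * 64)   # identical block, deduplicated
print(a == b, refcount[a], len(db))        # True 2 1
```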

Best,
-Nikolaus



Re: [s3ql] Question about backups and versioning

2014-03-14 Thread Nikolaus Rath
j k  writes:
> Hi,
>
> I'm running s3ql 2.x with Google Cloud Storage as my backend. I'm running a 
> cloud based storage front-end for users where their data is stored on the 
> s3ql mounted drive. I have two questions about versioning and s3ql:
>
> 1) Would enabling versioning allow me to recover files that were 
> accidentally deleted (user error, application error, malicious user, etc) ? 
> It would be fairly simple to restore the deleted object in the bucket with 
> versioning enabled, but I'm wondering how s3ql would handle having a data 
> block file suddenly re-appear. I'm guessing it would just ignore it and 
> there would be no way to re-incorporate it back into the meta-data.

Correct.

> 2) If versioning as described above is a sufficient mechanism for 
> recovering lost files with s3ql, would enabling versioning on an 
> in-production bucket affect my data in any way?

No. However, if you undelete storage objects, you'll get warnings from
fsck.s3ql.


A much better way to protect against accidental deletion is to
periodically run s3qlcp + s3qllock. s3qlcp creates a snapshot of a
directory tree, and s3qllock makes the snapshot immutable.
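A cron-style fragment for such periodic snapshots might look like this (paths and the naming scheme are examples; assumes the file system is mounted under /mnt/s3ql):

```shell
#!/bin/sh
# Create a dated, immutable snapshot of /mnt/s3ql/data (example paths).
snap="/mnt/s3ql/snapshots/$(date +%Y-%m-%d)"
s3qlcp /mnt/s3ql/data "$snap"   # cheap, deduplicated snapshot copy
s3qllock "$snap"                # make the snapshot tree immutable
```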


Best,
-Nikolaus



Re: [s3ql] limiting bandwidth?

2014-03-26 Thread Nikolaus Rath
Hi,

On 03/26/2014 06:23 AM, Balázs wrote:
> Hi Niko,
> 
> I was wondering if there is any way to limit / control the available
> bandwidth that is available to the s3ql backend for its upload to the
> cloud. The amount of data I have to move on a daily basis hangs into the
> production hours, and other services suffer. I was hoping I can somehow
> limit the upload total bandwidth, so that the syncs can run during the
> day and finish...

Not in S3QL itself, but the Linux IP stack is your friend :-).

As an alternative workaround, maybe it helps to use fewer compression
threads and lzma compression at a high level? That ought to slow things
down quite a bit...
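For example, a token bucket filter set up with tc can cap outbound traffic on the interface S3QL uploads through (interface name and rate are examples; requires root, and shapes all traffic on that interface, not just S3QL's):

```shell
# Limit egress on eth0 to ~2 Mbit/s with a token bucket filter.
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms

# ...and later remove the limit again:
tc qdisc del dev eth0 root
```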

Best,
-Nikolaus


Re: [s3ql] s3ql quotas

2014-03-27 Thread Nikolaus Rath
Hi PA,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

PA Nilsson  writes:
> A follow up question on this.
>
> When mounting a file system using an s3c backend, running 'df' will
> report that the filsystem has 1TB size. There is no such information
> coming from the backend, but is is easily made available.
>
> Can information on this somehow be propagated?

As far as I know, there is no such information from most
backends. Google Storage, Amazon S3, OpenStack et al all have
effectively unlimited storage. I believe only the local backend could
effectively report a capacity. Which backend do you have in mind?

But even if we got a number from the backend, it's not clear what we
should report to df. How do we take into account compression and
deduplication?


Best,
-Nikolaus



Re: [s3ql] pkg_resources.DistributionNotFound: llfuse>=0.39

2014-03-27 Thread Nikolaus Rath
srr p  writes:
> Hi,
> I am facing below  issue while mounting s3 bucket using mount.s3ql command.
>
> [root@localhost ~]# mount.s3ql s3://bimarian-test/ /mnt/
> Traceback (most recent call last):
>   File "/usr/local/bin/mount.s3ql", line 5, in 
> from pkg_resources import load_entry_point
>   File "", line 1565, in _find_and_load
>   File "", line 1532, in
> _find_and_load_unlocked
>   File
> "/usr/local/lib/python3.3/site-packages/setuptools-1.4.2-py3.3.egg/pkg_resources.py",
> line 2793, in 
>   File
> "/usr/local/lib/python3.3/site-packages/setuptools-1.4.2-py3.3.egg/pkg_resources.py",
> line 673, in require
>
>   File
> "/usr/local/lib/python3.3/site-packages/setuptools-1.4.2-py3.3.egg/pkg_resources.py",
> line 576, in resolve
> directories. The `full_env`, if supplied, should be an ``Environment``
> pkg_resources.DistributionNotFound: llfuse>=0.39


How did you install S3QL? It looks as if you did not install all the
prerequisites (in particular python-llfuse). See 
http://www.rath.org/s3ql-docs/installation.html


Best,
-Nikolaus



[s3ql] [ANNOUNCE] S3QL 2.8.1 has been released

2014-03-29 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 2.8.1

Please note that starting with version 2.0, S3QL requires Python 3.3 or
newer. For older systems, the S3QL 1.x branch (which only requires
Python 2.7) will continue to be supported for the time being. However,
development concentrates on S3QL 2.x while the 1.x branch only receives
selected bugfixes. When possible, upgrading to S3QL 2.x is therefore
strongly recommended.

From the changelog:

2014-03-29, S3QL 2.8.1

  * No changes in S3QL itself.

  * The S3QL 2.8 tarball accidentally included a copy of the Python
dugong module. This has been fixed in the 2.8.1 release.


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Enjoy,

   -Nikolaus




Re: [s3ql] CentOS 6.5 installation with failures - NameError: name 'xrange' is not defined

2014-04-01 Thread Nikolaus Rath
On 04/01/2014 07:45 AM, Randy Black wrote:
[...]
>  -- Download s3ql and install
> 
> cd /tmp; curl -O http://s3ql.googlecode.com/files/s3ql-1.14.tar.bz2
> tar -jxvf s3ql-1.14.tar.bz2; cd s3ql-1.14
> python3.3 ./setup.py install

If you have Python 3.3, you don't need to use the (outdated) S3QL 1.x
branch. Use the up-to-date 2.x branch instead (current version is 2.8),
and everything should be fine.


Best,
-Nikolaus



Re: [s3ql] s3ql quotas

2014-04-07 Thread Nikolaus Rath
On 04/07/2014 12:06 AM, PA Nilsson wrote:
> On Thursday, March 27, 2014 5:19:26 PM UTC+1, Nikolaus Rath wrote:
> 
> 
> PA Nilsson > writes:
> > A follow up question on this.
> >
> > When mounting a file system using an s3c backend, running 'df' will
> > report that the filesystem has 1 TB size. There is no such information
> > coming from the backend, but it is easily made available.
> >
> > Can information on this somehow be propagated?
> 
> As far as I know, there is no such information from most
> backends. Google Storage, Amazon S3, OpenStack et al all have
> effectively unlimited storage. I believe only the local backend could
> effectively report a capacity. Which backend do you have in mind?
> 
> But even if we got a number from the backend, it's not clear what we
> should report to df. How do we take into account compression and
> deduplication? 
> 
>  
> We are using a custom storage with a Amazon S3 like interface, so we
> will be using a slightly modified s3c backend.
> However, we will be using quotas in this implementation.
> But just as you say, it is very hard to account for compression and
> such, so in my opinion the only thing that can be reported is the actual
> capacity available. How that capacity is used should really not be of
> concern from the storage provider, it shall just report the capacity
> available on the disk.
> 
> Or am I missing something here?

Yes.  There is no way to retrieve an "actual capacity available" for the
majority of storage providers. It would only work for your custom
storage service.



Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Multiple file systems in one bucket?

2014-04-07 Thread Nikolaus Rath
On 04/07/2014 04:48 AM, Rich B wrote:
> Is it possible to keep several S3QL file systems in a single bucket?

Yes, just make sure to use distinct prefixes (and don't put one prefix
underneath the other).
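
For example (bucket and prefix names made up for illustration), two
independent file systems could use storage URLs like:

```
s3://mybucket/fs-one/
s3://mybucket/fs-two/       # fine: disjoint prefixes
s3://mybucket/fs-one/sub/   # avoid: nested underneath fs-one
```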

> If so, is it wise to do so?

That depends on your particular situation. In general, I see neither big
advantages nor big disadvantages.

> Another question, is there anything in S3QL that would prevent one from
> accidentally mounting a filesystem twice (on different computers)?

Yes. If you try it, mount.s3ql should refuse to do the mount unless you
have exceptionally bad luck (there is a small window where mount.s3ql
cannot detect the duplicate mount).

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Compile s3ql 2.8.1 on Ubuntu 13.04 fine but fail self test

2014-04-08 Thread Nikolaus Rath
Jim H  writes:
> Hi All,
> Follow the install document at 
> http://www.rath.org/s3ql-docs/installation.html on Ubuntu 13.04.  Compile 
> fine but fail self test, what should I do next?
>
> ...
> tests/t2_block_cache.py:114: cache_tests.test_destroy_deadlock FAILED
>
> === FAILURES 
> ===
> __ cache_tests.test_destroy_deadlock 
> ___
> Traceback (most recent call last):
>   File "/root/s3ql/s3ql-2.8.1/tests/t2_block_cache.py", line 163, in 
> test_destroy_deadlock
> self.cache.destroy()
>   File "_pytest.python", line 1009, in __exit__
> pytest.fail("DID NOT RAISE")
>   File "_pytest.runner", line 456, in fail
> raise Failed(msg=msg, pytrace=pytrace)
> Failed: DID NOT RAISE
>  Interrupted: stopping after 1 failures 
> 
> === 1 failed, 120 passed, 81 skipped in 8.26 seconds
> ===

I'm a bit at a loss here. I'll create Ubuntu 13.04 packages in the S3QL
PPA shortly though. This will either work (then I'd recommend you just
use those), or it will fail with the same error and I'll fix it :-).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] filenames

2014-04-09 Thread Nikolaus Rath
On 04/09/2014 05:22 AM, LogIN wrote:
> Hello,
> i am new to s3ql so could you please help me understand these 3 questions:
> 
> I would like to know the original name of the file that an S3 object
> like s3ql_data_5 belongs to.

Things aren't that simple. There is no 1:1 correspondence between files
and s3 objects. One file may be split among multiple S3 objects, and one
S3 object may hold contents for multiple files.

To figure out what parts of a file are stored in what objects, you need
to get the file's inode from the contents table, the inode's block_ids
from the inode_blocks table, and then the object id that holds each
block from the blocks table.
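
The lookup chain described above can be sketched against a simplified
metadata schema. Note that the table and column names here (contents,
inode_blocks, blocks, obj_id) are illustrative assumptions based on this
description, not S3QL's exact schema:

```python
import sqlite3

def objects_for_file(conn, parent_inode, name):
    """Return the backend object names that hold one file's data.

    Simplified sketch; the real S3QL schema differs (e.g. file names
    are stored indirectly).
    """
    # 1. Directory entry -> inode of the file.
    row = conn.execute(
        "SELECT inode FROM contents WHERE parent_inode=? AND name=?",
        (parent_inode, name)).fetchone()
    if row is None:
        return []
    # 2. inode -> block ids -> the object id holding each block.
    rows = conn.execute(
        "SELECT b.obj_id FROM inode_blocks ib "
        "JOIN blocks b ON b.id = ib.block_id "
        "WHERE ib.inode = ? ORDER BY ib.blockno", (row[0],)).fetchall()
    # Object N is stored in the backend as 's3ql_data_N'.
    return ['s3ql_data_%d' % r[0] for r in rows]
```

Because one object may appear for several inodes (deduplication) and one
inode may span several objects, this mapping is many-to-many in both
directions.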

> I would also like to know if I can read somewhere in the db whether a
> file is in the cache or on S3, or if the only way is to check whether
> it's on the local filesystem or not?

This information is not stored in the database. However, after you
unmount (or run s3qlctrl flushcache), everything that's in the database is
guaranteed to be stored in S3.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: filenames

2014-04-09 Thread Nikolaus Rath
Hi LogIN,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to. Thanks!

> thanks for answers!
>
> that's a pity it could be huge bandwidth saver..
>
> i'll need to menage somehow else...

I'm confused. How is any of your question (or my answers) related to
saving bandwidth?


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] dugong.UnsupportedResponse: No content-length and no chunked encoding

2014-04-13 Thread Nikolaus Rath
Adam Watkins  writes:
> Dear all,
>
> (Firstly, thank you very much - particularly to Nikolaus - for S3QL).
>
> I'm running S3QL 2.8.1 with GreenQloud's StorageQloud as an S3-compatible 
> backend. (Other version information is shown below.)

I don't know much about *Qloud. Is this a software you are running on
your own server, or is it a product and you're connecting to someone
else's server?

> Unfortunately, I'm struggling to get S3QL to work reliably in my setup. 
> Often the mount will become inaccessible ('Transport end point not 
> connected'), and I'm struggling to even get it to reliably unmount without 
> hanging and requiring a 'fusermount -u -z'.
>
> The only error that I've been able to detect in the mount.log is:
> dugong.UnsupportedResponse: No content-length and no chunked encoding.

Are you using a proxy server?

If not, this sounds like a bug in the server. Can you provide a traffic
dump (e.g. with wireshark) of the conversation between S3QL and the
server? (This is tricky unless you can turn off SSL). 



Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] AES key stored inside the data?

2014-04-13 Thread Nikolaus Rath
On 04/13/2014 01:46 PM, Adrin wrote:
> Hi,
> 
> I'm not sure if I understand it correctly, but is it true that you are
> storing the AES key along with the data? 

Yes.

> Then what's the point of encrypting the data in the first place?

The AES key itself is encrypted with a second AES key, that is not
stored anywhere (unless you put it in ~/.s3ql/authinfo).

The reason for having two separate keys is that it allows you to change
your passphrase. If you would encrypt all the data with the passphrase
directly, then in order to change the passphrase you'd have to download,
decrypt, encrypt, and re-upload your entire file system.


In contrast, if you have two keys (the "master" key that encrypts the
data, and the "passphrase" that is used to decrypt the master key) all
you need to do to change the passphrase is download, decrypt, re-encrypt
and re-upload the master key.
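
This "key wrapping" idea can be sketched in a toy form. The XOR below is
a stand-in for AES purely so the example is self-contained; the function
names and parameters are made up and bear no relation to S3QL's actual
on-disk format:

```python
import os, hashlib

def _xor(data, key):
    # Toy stand-in for a real cipher -- NOT how S3QL encrypts anything.
    return bytes(a ^ b for a, b in zip(data, key))

def derive_kek(passphrase, salt):
    # Stretch the passphrase into a 32-byte key-encryption key.
    return hashlib.pbkdf2_hmac('sha256', passphrase, salt, 100000)

def wrap(master_key, passphrase, salt):
    # What gets stored on the backend: the master key, encrypted.
    return _xor(master_key, derive_kek(passphrase, salt))

def unwrap(wrapped, passphrase, salt):
    return _xor(wrapped, derive_kek(passphrase, salt))

# All file data is encrypted under the random master key. Changing the
# passphrase only re-wraps these 32 bytes, not the data itself:
master = os.urandom(32)
salt = os.urandom(16)
stored = wrap(master, b'old passphrase', salt)
rewrapped = wrap(unwrap(stored, b'old passphrase', salt),
                 b'new passphrase', salt)
assert unwrap(rewrapped, b'new passphrase', salt) == master
```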


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] dugong.UnsupportedResponse: No content-length and no chunked encoding

2014-04-13 Thread Nikolaus Rath
On 04/13/2014 12:18 PM, Adam Watkins wrote:
> (I previously did try this with the Debian-provided s3ql, 1.11.1-3 in
> wheezy, as the author of that article presumably did. I upgraded to 2.8.1
> in the hope of solving this issue.)

That makes me suspicious. S3QL 1.11 does not use python-dugong, so you
must have encountered a different error.

Could you please post the relevant contents of mount.log when you
encounter the problems, for both S3QL 2.8.1 and S3QL 1.11? Please make
sure that you include the full traceback.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Test errors s3ql-2.8.1 on Centos 6.5 64bit

2014-04-13 Thread Nikolaus Rath
Cristian Manoni  writes:
> and now the test:
> # python3 runtests.py tests 
>
> after some succesful tests:
> tests/t2_block_cache.py:114: cache_tests.test_destroy_deadlock FAILED
>
> 
>  
> FAILURES 
> 
> ___ 
> cache_tests.test_destroy_deadlock 
> 
> Traceback (most recent call last):
>   File "/root/s3ql/s3ql-2.8.1/tests/t2_block_cache.py", line 163, in 
> test_destroy_deadlock
> self.cache.destroy()
>   File "_pytest.python", line 1009, in __exit__
> pytest.fail("DID NOT RAISE")
>   File "_pytest.runner", line 456, in fail
> raise Failed(msg=msg, pytrace=pytrace)
> Failed: DID NOT RAISE
> ! 
> Interrupted: stopping after 1 failures 
> !
>  1 failed, 104 
> passed, 97 skipped in 5.15 seconds 
> 
>
> What is missing or what is wrong?
> Can you help me?

Hmm. Does it by any chance help if you apply the following patch?

diff --git a/tests/t2_block_cache.py b/tests/t2_block_cache.py
--- a/tests/t2_block_cache.py
+++ b/tests/t2_block_cache.py
@@ -156,6 +156,7 @@
 
 # Shutdown threads
 llfuse.lock.release()
+time.sleep(10)
 try:
 with catch_logmsg('Unable to flush cache, no upload threads left 
alive',
   level=logging.ERROR):


Thanks,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] AES key stored inside the data?

2014-04-14 Thread Nikolaus Rath
On 04/14/2014 08:17 AM, Adrin wrote:
> 
> 
> On Monday, 14 April 2014 03:11:04 UTC+2, Nikolaus Rath wrote:
> On 04/13/2014 01:46 PM, Adrin wrote:
> > Hi,
> >
> > I'm not sure if I understand it correctly, but is it true that you are
> > storing the AES key along with the data?
> 
> Yes.
> 
> > Then what's the point of encrypting the data in the first place?
> 
> The AES key itself is encrypted with a second AES key, that is not
> stored anywhere (unless you put it in ~/.s3ql/authinfo).
> 
> The reason for having two separate keys is that it allows you to change
> your passphrase. If you would encrypt all the data with the passphrase
> directly, then in order to change the passphrase you'd have to download,
> decrypt, encrypt, and re-upload your entire file system.
> 
> 
> In contrast, if you have two keys (the "master" key that encrypts the
> data, and the "passphrase" that is used to decrypt the master key) all
> you need to do to change the passphrase is download, decrypt, re-encrypt
> and re-upload the master key.
> 
> 
> I came across this package because I wanted to protect my data not only
> from someone who's sniffing the network, but also the data storage
> provider itself. For this purpose, it doesn't make sense to give the
> encryption key to the provider.

Luckily, this is also not what S3QL does. Your provider is unable to
decrypt the data.


> With the current implementation, the
> only thing the provider needs to do is to brute-force my password.

Calling this the "only" thing is correct, but very misleading, because
no matter what you do, the only thing necessary to break any encryption
is to brute-force the password. This is not at all specific to the
"current implementation" in S3QL.

> In order to protect my data against that, I will need to have a passphrase
> with the entropy of 256bits or more, to have an equivalent encryption
> power to a proper AES 256. And all of this, makes the second layer of
> the encryption useless. 

I don't follow. Yes, your passphrase needs to have enough entropy, but
that holds no matter whether you use it to encrypt the master key, or to
encrypt the data directly. The use of the "second layer" is that it
allows you to change your password.

> I guess with the current status of the code, we can use it by only
> having a very strong passphrase, but it's using lots of extra non-useful
> space on the server, as well as doing useless computation.

I have no idea what you are talking about here. The "extra non-useful
space on the server" is less than 1 kB. The "useless computation" has to
be performed once when mount.s3ql starts, and probably takes less than
100 microseconds on a modern computer.


> Could we add a feature to the code to read the key from the client
> instead of the server?

Unless you make a much better case for this feature, I'm afraid the
answer is no. As far as I can see, there is absolutely no reason to do
this, and lots of reasons not to do it that way.


Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Test errors s3ql-2.8.1 on Centos 6.5 64bit

2014-04-14 Thread Nikolaus Rath
On 04/13/2014 11:51 PM, Cristian Manoni wrote:
> 
> 
> Hmm. Does it by any chance help if you apply the following patch? 
> 
> diff --git a/tests/t2_block_cache.py b/tests/t2_block_cache.py
> --- a/tests/t2_block_cache.py
> +++ b/tests/t2_block_cache.py
> @@ -156,6 +156,7 @@
>  
>  # Shutdown threads
>  llfuse.lock.release()
> +time.sleep(10)
>  try: 
> 
> 
>  
> we are not lucky
> 
> I had the same error:


Darn. Could you please file a bug at
https://bitbucket.org/nikratio/s3ql/issues? I'll see what I can do.

Could you check if any other tests fail as well? If you remove the
--exitfirst argument in tests/pytest.ini, py.test will continue with the
remaining tests even when encountering a failure.

If the other tests all pass, you can probably still safely use S3QL,
i.e. the bug is in the testcase rather than in the program itself.

Thanks!
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] dugong.UnsupportedResponse: No content-length and no chunked encoding

2014-04-14 Thread Nikolaus Rath
On 04/14/2014 10:06 AM, Adam Watkins wrote:
> This comes from another server running Debian 7.4 and using the S3QL
> 1.11 package:
> This server _isn't_ running on my home network, and I've no reason to
> believe that its connection is particularly unreliable.
> 
[]

Thanks! Could you please file this as a bug on
https://bitbucket.org/nikratio/s3ql/issues? I'll see what I can do.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] dugong.UnsupportedResponse: No content-length and no chunked encoding

2014-04-14 Thread Nikolaus Rath
On 04/14/2014 10:07 AM, Adam Watkins wrote:
> 
> 
> This comes from the server discussed in my first email, and is running
> S3QL 2.8.1:


Alright, I'll probably have to send you some extra code to get more
debugging information.

Could you please also file this as a (second, new) bug on
https://bitbucket.org/nikratio/s3ql/issues?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] dugong.UnsupportedResponse: No content-length and no chunked encoding

2014-04-14 Thread Nikolaus Rath
On 04/14/2014 10:07 AM, Adam Watkins wrote:
> 
> 
> This comes from the server discussed in my first email, and is running
> S3QL 2.8.1:
[...]

Could you apply the attached patches the python-dugong and S3QL, and
then try to reproduce the problem?

To apply the patches, do a 'patch -p1 < patchname.diff' in the directory
created when extracting the tarball.


Hopefully this will result in a more helpful mount.log.


Thanks!
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

diff --git a/dugong/__init__.py b/dugong/__init__.py
--- a/dugong/__init__.py
+++ b/dugong/__init__.py
@@ -845,6 +845,42 @@
 raise self._encoding
 else:
 raise RuntimeError('ooops, this should not be possible')
+
+def read_raw(self, size):
+'''Read *size* bytes of uninterpreted data
+
+This method may be used even after `UnsupportedResponse` or
+`InvalidResponse` has been raised. It reads raw data from the socket
+without attempting to interpret it. This is probably only useful for
+debugging purposes to take a look at the raw data received from the
+server. This method blocks if no data is available, and returns ``b''``
+if the connection has been closed.
+
+Calling this method will break the internal state and switch the socket
+to blocking operation. The connection has to be closed and reestablished
+afterwards.
+
+**Don't use this method unless you know exactly what you are doing**.
+'''
+
+self._sock.setblocking(True)
+
+buf = bytearray()
+rbuf = self._rbuf
+while len(buf) < size:
+len_ = min(size - len(buf), len(rbuf))
+if len_ < len(rbuf):
+buf += rbuf.d[rbuf.b:rbuf.b+len_]
+rbuf.b += len_
+elif len_ == 0:
+buf2 = self._sock.recv(size - len(buf))
+if not buf2:
+break
+buf += buf2
+else:
+buf += rbuf.exhaust()
+
+return buf
 
 def readinto(self, buf):
 '''placeholder, will be replaced dynamically'''
diff --git a/src/s3ql/backends/s3c.py b/src/s3ql/backends/s3c.py
--- a/src/s3ql/backends/s3c.py
+++ b/src/s3ql/backends/s3c.py
@@ -14,7 +14,8 @@
   ABCDocstMeta)
 from io import BytesIO
 from shutil import copyfileobj
-from dugong import HTTPConnection, is_temp_network_error, BodyFollowing, CaseInsensitiveDict
+from dugong import (HTTPConnection, is_temp_network_error, BodyFollowing, CaseInsensitiveDict,
+UnsupportedResponse)
 from base64 import b64encode, b64decode
 from email.utils import parsedate_tz, mktime_tz
 from urllib.parse import urlsplit
@@ -134,9 +135,14 @@
 '''
 
 if body is None:
-body = self.conn.read(2048)
-if body:
-self.conn.discard()
+try:
+body = self.conn.read(2048)
+if body:
+self.conn.discard()
+except UnsupportedResponse:
+log.warning('Unsupported response, trying to retrieve data from raw socket!')
+body = self.conn.read_raw(2048)
+self.conn.close()
 else:
 body = body[:2048]
 


Re: [s3ql] Test errors s3ql-2.8.1 on Centos 6.5 64bit

2014-04-15 Thread Nikolaus Rath
Cristian Manoni  writes:
>> Could you check if any other tests fail as well? If you remove the 
>> --exitfirst argument in tests/pytest.ini, py.test will continue with the 
>> remaining tests even when encountering a failure. 
>>
>>
> Test failed are:
> _
>  
> cache_tests.test_destroy_deadlock 
> _
> Traceback (most recent call last):
>   File "/root/s3ql/s3ql-2.8.1/tests/t2_block_cache.py", line 164, in 
> test_destroy_deadlock
> self.cache.destroy()
>   File "_pytest.python", line 1009, in __exit__
> pytest.fail("DID NOT RAISE")
>   File "_pytest.runner", line 456, in fail
> raise Failed(msg=msg, pytrace=pytrace)
> Failed: DID NOT RAISE
> _
>  
> cache_tests.test_expire_deadlock 
> __
> Traceback (most recent call last):
>   File "/root/s3ql/s3ql-2.8.1/tests/t2_block_cache.py", line 211, in 
> test_expire_deadlock
> self.cache.clear()
>   File "_pytest.python", line 1009, in __exit__
> pytest.fail("DID NOT RAISE")
>   File "_pytest.runner", line 456, in fail
> raise Failed(msg=msg, pytrace=pytrace)
> Failed: DID NOT RAISE
>
>
>> If the other tests all pass, you can probably still safely use S3QL, 
>> i.e. the bug is in the testcase rather than in the program itself. 
>>
>
> I confirm, it seems works
> I'll continue with tests on building s3ql rpm 

Please also file a bug at
https://bitbucket.org/nikratio/s3ql/issues for this.

Thanks!
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: Packaging S3QL 2.8.1 in Fedora

2014-04-15 Thread Nikolaus Rath
Marcin Skarbek  writes:
> Packages for Fedora 20 and Fedora Rawhide:
> http://copr.fedoraproject.org/coprs/mskarbek/s3ql/

Thanks for that! I have added the link to
https://bitbucket.org/nikratio/s3ql/wiki/installation_fedora, hope
that's ok.

Do you plan to keep these packages updated?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: AttributeError: 'NoneType' object has no attribute 'discard' when connecting to Swift

2014-04-15 Thread Nikolaus Rath
Marcin Skarbek  writes:
> Removing `self.` from line 138 in backends/swift.py allows me to connect 
> but mkfs.s3ql fails anyway.
> mkfs.s3ql --debug all output: http://ur1.ca/h36ng

It looks as if the ocs-pl.oktawave.com server silently closes the
connection when S3QL tries to upload a file (the last thing that goes
through the connection is the PUT
/v1/AUTH_7b99bb65-0e99-4733-a938-fbd7c813923a/tasknext/s3ql_passphrase
request).

Note that the server does not even respond with an error, so there isn't
much that S3QL can do.

Do you have access to the server logs? Maybe they contain some more
information..


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Requirements for compilation/use on CentOS 6

2014-04-16 Thread Nikolaus Rath
Hi Andy,

On 04/16/2014 12:55 PM, Andy wrote:
> Like a few others posting here recently, I'd really like to use s3ql
> on CentOS 6.  Amongst the list of requirements is the upgrade of
> sqlite from 3.6 to 3.7.  This one is a little troubling, since sqlite
> is pretty intrinsic to CentOS (RHEL etc.) - used by yum, for example.
> Changing the version of sqlite from the CentOS 6-supplied 3.6, to a
> 3rd party repo-provided 3.7, means your OS technically stops being
> CentOS.  This may be something I can live with on a home system, but I
> cannot recommend this to anyone at $work.  We have enough trouble with
> other things like php from the atomic repo, since our customers don't
> often understand the rationale behind repos like atomic (latest!!
> yeah! oops, why has stuff stopped working?), vs the sensible,
> conservative CentOS/RHEL/EPEL repos (major version stays unchanged,
> all apps keep working, backfixing of bugs, sleep easy at night...!).
> 
> Nikolaus, please don't take this as a complaint, since you obviously
> have good reasons for the s3ql code needing sqlite 3.7, as is your
> right :), but I would love to know if this is a 'hard' requirement, or
> would there be any way to use s3ql with sqlite left at 3.6?

If I remember correctly, the reason for the requirement is three SQLite
bugs that are otherwise triggered by S3QL:

http://www.sqlite.org/src/info/c39ff61c43
http://www.sqlite.org/src/info/51914f6acd2cb462
http://www.sqlite.org/cvstrac/tktview?tn=3992

You could try backporting these fixes to SQLite 3.6.


Alternatively, you could also try to link s3ql and apsw statically
against a new SQLite 3.7, and keep the system-wide dynamic SQLite 3.6
library untouched. This will require some adjustments to S3QL's
setup.py, for apsw it's already supported (you just have to figure out
the correct parameters for its setup.py).


Hope that helps,

-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] swiftks

2014-04-17 Thread Nikolaus Rath
Randy Black  writes:
> Doing something stooopid.   Please point it out:
>
> authfile;
>
> [swift]
> backend-login::
> backend-password:
> storage-url: swiftks://://  ## container 
> does exist, created with swift cli
>
>
> mkfs command;
>
> mkfs.s3ql --debug all --authfile .s3ql_auth --plain --cachedir /.s3ql_cache 
> swiftks://://
>
> it still prompts for creds??

Does it work if you temporarily put just one entry in your authfile, and
set the storage-url to swiftks:// (i.e., don't include host etc)?

> I get a 401 and then a 'discard'.  Note: I can see a valid token given by 
> my auth server on the backend while tailing the logs.

Please try this patch: 
https://bitbucket.org/nikratio/s3ql/commits/3f008146e9e192a0bb424bc5da99132fb1e3c349


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] s3ql vagrant swift

2014-04-18 Thread Nikolaus Rath
Randy Black  writes:
> I am using s3ql on a vagrant vm (VirtualBox) CentOS 6.5 attaching to a 
> swift cluster using keystone authentication.  The vm did not unmount
> clean.

What does that mean? How can you mount (or unmount) a vm (virtual
machine)?


>  I ran a fsck.s3ql on the storage url and it get;
>
>> Backend reports that file system is still mounted elsewhere. Either
>> the file system has not been unmounted cleanly or the data has not yet
>> propagated through the backend. In the latter case, waiting for a while
>> should fix the problem, in the former case you should try to run fsck
>> on the computer where the file system has been mounted most recently.
>> Enter "continue" to use the outdated data anyway:

This sounds as if the virtual machine running mount.s3ql crashed, and
you then ran fsck.s3ql on a different system or a new vm. Is that
correct?

Then this is expected behavior, you need to run fsck.s3ql on the same vm
where you last mounted the file system. This is where the data is.


> I deleted the metadata out of the swift container

Do you mean you manually deleted objects in the swift container? Why did
you do that? This is a recipe for disaster.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] credentials

2014-04-18 Thread Nikolaus Rath
Randy Black  writes:
> when specifying an auth file, why does the mounting process continue to 
> prompt for credentials?
>
> mount.s3ql --cachedir /root/.s3ql-cache --authfile /root/s3ql_auth 
> --allow-root swiftks://:// /s3ql


Probably because the storage URL in your auth file is subtly different
from the storage url that you pass to mkfs.

Please copy & paste (don't retype) both your command line and the
contents of your authinfo file without manually editing the storage url
(but don't forget to remove the passwords). If you're obfuscating the
URL, it's near impossible to help (for example, is there really a space
after  and before /s3ql?)
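
As an illustration only (hostname and credentials made up, following the
authfile format quoted elsewhere in this thread), an entry that matches a
mount command would look like this, with the storage-url identical,
character for character, to the one on the command line:

```
[swift]
storage-url: swiftks://auth.example.com/RegionOne:mycontainer
backend-login: tenant:user
backend-password: secret
```

and then: mount.s3ql --authfile /root/s3ql_auth
swiftks://auth.example.com/RegionOne:mycontainer /s3ql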

Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] file chunking / name preservation

2014-04-18 Thread Nikolaus Rath
Randy Black  writes:
> Any way to eliminate chunking and preserve the source filename?

Please clarify what exactly you mean, and what you want to achieve with
it.


Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] s3ql vagrant swift

2014-04-18 Thread Nikolaus Rath
Randy Black  writes:
> On Friday, April 18, 2014 12:13:14 PM UTC-5, Nikolaus Rath wrote:
>>
>> Randy Black > writes: 
>> > I am using s3ql on a vagrant vm (VirtualBox) CentOS 6.5 attaching to a 
>> > swift cluster using keystone authentication.  The vm did not unmount 
>> > clean. 
>>
>> What does that mean? How can you mount (or unmount) a vm (virtual 
>> machine)? 
>>
> Unmount the drive clean.

What drive?

> Vagrant manages your virtual machine, it's a 
> lightweight way to make and reproduce VMs for testing.  It is still a 
> VirtualBox vm or VMware if you choose to go with that hypervisor.  Either 
> way, what my brain said was unmount the drive, what my fingers said was 
> unmount, with no specification.

Sorry, I still have no idea what you actually did. You seem to use the
words vm, drive, and s3ql mountpoint interchangeably, although (to me)
they have very distinct meanings.

>> >  I ran a fsck.s3ql on the storage url and it get; 
>> > 
>> >> Backend reports that file system is still mounted elsewhere. Either 
>> >> the file system has not been unmounted cleanly or the data has not yet 
>> >> propagated through the backend. In the later case, waiting for a while 
>> >> should fix the problem, in the former case you should try to run fsck 
>> >> on the computer where the file system has been mounted most recently. 
>> >> Enter "continue" to use the outdated data anyway: 
>>
>> This sounds as if the virtual machine running mount.s3ql crashed, and 
>> you then ran fsck.s3ql on a different system or a new vm. Is that 
>> correct? 
>>
>> Then this is expected behavior: you need to run fsck.s3ql on the same vm 
>> where you last mounted the file system. This is where the data is. 
>
> it was the same vm, power cycles (crash) and fsck.s3ql reported back the 
> same message on multiple attempts.

Did you run it as the same user, with the same home directory, with the
same options?

Did you unmount the s3ql file system, or did you power cycle the vm? In
the latter case, it's quite possible that you lost the contents of the
~/.s3ql directory. In that case there is obviously nothing that s3ql can
do to restore the data.

>> > I deleted the metadata out of the swift container 
>>
>> Do you mean you manually deleted objects in the swift container? Why did 
>> you do that? This is a recipe for disaster.
>
> Agree, thats why I am using it in a test env now so I can understand and 
> avoid those issues in a prod/dev environment.

I still don't understand what gave you the idea to do this in the first place. 

Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] file chunking / name preservation

2014-04-18 Thread Nikolaus Rath
Randy Black  writes:
> On Friday, April 18, 2014 12:16:55 PM UTC-5, Nikolaus Rath wrote:
>>
>> Randy Black > writes: 
>> > Any way to eliminate chunking and preserve the source filename? 
>>
>> Please clarify what exactly you mean, and what you want to achieve with 
>> it. 
>>
> The end goal would be to be able to pull data from the cluster without 
> having to go through a file system interface.

This is not possible without reimplementing major parts of S3QL. There
is no 1:1 mapping between file names and storage objects. But even if
you had the mapping, you'd still need to write quite some code to do
decryption and decompression.
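To illustrate why there is no 1:1 mapping: conceptually, file contents are split into blocks that are deduplicated and stored under opaque object names, so one storage object may back several files. A toy model follows; the dictionaries and the `s3ql_data_%d` naming are illustrative assumptions, not S3QL's actual metadata schema:

```python
# Toy model, not S3QL's real metadata schema: files reference block
# ids, and each block id maps to one opaque storage object, so the
# object name reveals nothing about file names -- and two files can
# share the same objects through deduplication.
files = {
    "/photos/cat.jpg":  [17, 42],
    "/photos/copy.jpg": [17, 42],   # duplicate content, same blocks
    "/notes.txt":       [7],
}
objects = {block_id: "s3ql_data_%d" % block_id
           for ids in files.values() for block_id in ids}
```

Recovering `/photos/cat.jpg` therefore requires the block-id mapping from the metadata, plus decompression and decryption of each object, which is exactly the reimplementation effort described above.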


Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Extending S3QL

2014-04-22 Thread Nikolaus Rath
On 04/22/2014 08:33 AM, Diogo Vieira wrote:
> Hi Nikolaus,
> 
>>> Diogo Vieira  writes:
>>>> About extending S3QL, when I have my backend ready how am I supposed
>>>> to let S3QL know about it (even on URL parsing, etc.)?
>>>>
>>>> I looked through some files and I believe I have to add it to
>>>> s3ql/backends/__init__.py right? But do I need to do anything more?
>>>
>>> No, this is all that's necessary. If you supply a storage URL starting
>>> with foobar://, S3QL will attempt to load the s3ql.backends.foobar
>>> module, instantiate it and hand it the rest of the storage URL.
>>
>> Ok, thank you!
>>
>> Thank you very much,
>> Diogo Vieira
> 
> Sorry to bother once again about this but I installed the 2.8.1 version 
> on my machine (so far I was working with the 2.7 version in a folder in my home 
> directory)
> but even after I edited the __init__.py and added my custom backend to the 
> library's backends
> folder I still get "No such backend" when trying to mount a filesystem.
> 
> Already tried to remove the __pycache__ folders but it seems to have made no 
> effect. Am I missing
> something here?

The usual things: exact error message, the commands you entered, the
changes that you made to the code, etc.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Extending S3QL

2014-04-23 Thread Nikolaus Rath
Hi Diogo,

Let's keep the discussion on the list, please.

On 04/23/2014 02:19 AM, Diogo Vieira wrote:
>>> Sorry to bother once again about this but I installed the 2.8.1
>>> version on my machine (so far I was working with the 2.7 version in a
>>> folder on my home directory) but even after I edited the
>>> __init__.py and added my custom backend to the library's
>>> backends folder I still get "No such backend" when trying to mount
>>> a filesystem.
>>> 
>>> Already tried to remove the __pycache__ folders but it seems to
>>> have made no effect. Am I missing something here?
>> 
>> The usual things: exact error message, the commands you entered,
>> the changes that you made to the code, etc.
> 
> Well, the error message is the one I mentioned "No such backend:
> ". The command used was "mount.s3ql
> : " and the only change
> I made in the code was in
> /usr/lib64/python3.3/site-packages/s3ql/backends/__init__.py (added
> my backend name). The only thing I did then was to put my backend in
> /usr/lib64/python3.3/site-packages/s3ql/backends/.
> 
> So far I've been working with the source code downloaded from the
> website in a folder on my home directory and I was able to add it
> without any issues, even though it was version 2.7.

I don't believe this is due to the upgrade from 2.7. Can you confirm that
it still works if you install S3QL 2.7 in
/usr/lib64/python3.3/site-packages/ instead?

My guess is that, in addition to upgrading, you also changed your
backend, and now it cannot be imported anymore. What happens if you run
python3 -c 'import s3ql.backends.yourmodule'?
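For reference, the prefix-to-module dispatch that was described earlier in the thread (a foobar:// URL loads s3ql.backends.foobar) can be sketched roughly like this. The function below is hypothetical illustration, not S3QL's actual code, and its error handling is simplified:

```python
import importlib

def load_backend(storage_url, package="s3ql.backends"):
    """Resolve a storage URL prefix to a backend module.

    Sketch of the dispatch: 'foobar://rest' imports the module
    '<package>.foobar' and returns it together with 'rest'.
    """
    scheme, sep, rest = storage_url.partition("://")
    if not sep:
        raise ValueError("not a storage URL: %r" % storage_url)
    name = package + "." + scheme if package else scheme
    try:
        return importlib.import_module(name), rest
    except ImportError as exc:
        raise ValueError("No such backend: %s (%s)" % (scheme, exc))
```

Running `python3 -c 'import s3ql.backends.yourmodule'` exercises exactly the import step that this dispatch performs, which is why it isolates the failure.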


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Extending S3QL

2014-04-24 Thread Nikolaus Rath
On 04/24/2014 03:32 AM, Diogo Vieira wrote:
> On Apr 24, 2014, at 4:23 AM, Nikolaus Rath  wrote:
> 
>> Hi Diogo,
>>
>> Let's keep the discussion the list please.
> 
> Sorry, I just replied to the email (didn't notice it was sent just to you). 
> Isn't there an option to set the Reply-To in the mailing list?

There is, but it's not a good idea to do that.

http://woozle.org/~neale/papers/reply-to-still-harmful.html
http://www.unicom.com/pw/reply-to-harmful.html


> I don't know how could I forget to try that. It clearly shows where's the 
> problem:
> 
>   Traceback (most recent call last):
>   File "", line 1, in 
>   File "/usr/lib64/python3.3/site-packages/s3ql/backends/eds.py", line 
> 12, in 
>   from .common import (AbstractBackend, NoSuchObject, retry, 
> retry_generator, ...)
>   ImportError: cannot import name is_temp_network_error
> 
> With a quick look through the source code I believe you moved 
> is_temp_network_error to the dugong package right? Should I just change the 
> import from common to dugong?

Depends on what you use the method for. If I remember correctly
(unfortunately you still haven't provided the source of your custom
backend) you are using 'requests' to communicate with the backend
server. I don't know if requests raises the same exceptions, or maybe
even retries automatically on its own. In that case the version from
dugong may not work so well (because it is designed to handle the
exceptions that may be raised by dugong).
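For illustration, a transient-error predicate in the spirit of `is_temp_network_error` might look like the sketch below. The exception types listed are an assumption suited to plain-socket errors, not the classification that S3QL or dugong actually uses:

```python
import socket
import ssl

def is_temp_network_error(exc):
    """Heuristic sketch: True if *exc* looks like a transient network
    problem worth retrying.  The exception types here are assumptions,
    not S3QL's or dugong's actual list."""
    return isinstance(exc, (socket.timeout, socket.gaierror,
                            ConnectionResetError, ConnectionAbortedError,
                            BrokenPipeError, ssl.SSLEOFError))
```

A backend built on requests would need to map requests' own exception hierarchy (or rely on its internal retries) instead of the raw socket errors above.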

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL 1.18 has been released

2014-04-26 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new maintenance release of S3QL, version 1.18.

Please note that this is only a maintenance release. Development of S3QL
takes place in the 2.x series. The 1.x releases receive only selected
bugfixes and are only maintained for older systems that do not support
Python 3.3. For systems with Python 3.3 support, using the most recent
S3QL 2.x version is strongly recommended.

From the changelog:

2014-04-26, S3QL 1.18

  * Fixed a problem with mount.s3ql crashing with `KeyError` under
some circumstances.

  * Fixed a problem with mount.s3ql (incorrectly) reporting corrupted
data for compressed blocks of some specific sizes. Many thanks to
Balázs for extensive debugging of this problem.

  * s3qlrm now also works when running as root on a file system
mounted by a regular user.

Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

  »Time flies like an arrow, fruit flies like a Banana.«





Re: [s3ql] Data duplication/parity striping

2014-04-27 Thread Nikolaus Rath
On 04/27/2014 04:25 AM, mark.christopher.ca...@gmail.com wrote:
> Is there any intention in the future to include data duplication or
> parity striping features ala RAID-Z of ZFS?
> 
> So, if data is lost on the cloud storage provider (S3, GCS etc) it can
> be rebuilt?

No. It is expected that the cloud storage provider provides sufficient
redundancy.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Can't build S3QL version 2.8.1

2014-04-28 Thread Nikolaus Rath
Diogo Vieira  writes:
> I'm trying to build the 2.8.1 version of S3QL but it seems that the
> file deltadump.c is missing:
[...]
> Could this be a mistake while packaging or are the instructions on the
> wiki to install S3QL wrong (please note that I downloaded S3QL from
> https://bitbucket.org/nikratio/s3ql/downloads)?

I don't think so:

[0] nikratio@vostro:~/tmp$ wget --quiet 
https://bitbucket.org/nikratio/s3ql/downloads/s3ql-2.8.1.tar.{bz2,bz2.asc} 
[0] nikratio@vostro:~/tmp$ gpg s3ql-2.8.1.tar.bz2.asc 
gpg: Signature made Sat 29 Mar 2014 05:31:32 PM PDT
gpg: using RSA key 0xD113FCAC3C4E599F
gpg: Good signature from "Nikolaus Rath " [ultimate]
gpg: Signature policy: http://www.rath.org/gpgpolicy.html
[0] nikratio@vostro:~/tmp$ tar tjvf s3ql-2.8.1.tar.bz2 | grep deltadump
-rw-rw-r-- nikratio/nikratio 449109 2014-03-29 17:30 
s3ql-2.8.1/src/s3ql/deltadump.c
-rw-rw-r-- nikratio/nikratio  22066 2014-01-23 19:56 
s3ql-2.8.1/src/s3ql/deltadump.pyx


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



[s3ql] [ANNOUNCE] S3QL 1.18.1 has been released

2014-04-28 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new maintenance release of S3QL, version 1.18.1.

Please note that this is only a maintenance release. Development of S3QL
takes place in the 2.x series. The 1.x releases receive only selected
bugfixes and are only maintained for older systems that do not support
Python 3.3. For systems with Python 3.3 support, using the most recent
S3QL 2.x version is strongly recommended.

From the changelog:

2014-04-28, S3QL 1.18.1

  * No changes in S3QL itself.

  * The S3QL 1.18 tarball accidentally included a copy of the Python
dugong module. This has been fixed in the 1.18.1 release.

2014-04-26, S3QL 1.18

  * Fixed a problem with mount.s3ql crashing with `KeyError` under
some circumstances.

  * Fixed a problem with mount.s3ql (incorrectly) reporting corrupted
data for compressed blocks of some specific sizes. Many thanks to
Balázs for extensive debugging of this problem.

  * s3qlrm now also works when running as root on a file system
mounted by a regular user.


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

  »Time flies like an arrow, fruit flies like a Banana.«







Re: [s3ql] BlockManager instance was destroyed without calling destroy()

2014-05-01 Thread Nikolaus Rath
Brian Pribis  writes:
> Compiled s3ql 2.8.1 on ubuntu 14.04.  I get a fail on my tests like:
>
> Exception ignored in:  >
> Traceback (most recent call last):
>   File "/mnt/netshare/home/brian/s3ql-2.8.1/src/s3ql/block_cache.py", line 
> 923, in __del__
> raise RuntimeError("BlockManager instance was destroyed without calling 
> destroy()!")
> RuntimeError: BlockManager instance was destroyed without calling
> destroy()!

Please provide more context. Which test is failing?


> I'm not running the test as sudo (not sure if that makes a difference).  If 
> I run as sudo I get:
>
> Traceback (most recent call last):
>   File "/mnt/netshare/home/brian/s3ql-2.8.1/tests/t2_block_cache.py", line 
> 163, in test_destroy_deadlock
> self.cache.destroy()
>   File "_pytest.python", line 1009, in __exit__
> pytest.fail("DID NOT RAISE")
>   File "_pytest.runner", line 456, in fail
> raise Failed(msg=msg, pytrace=pytrace)
> Failed: DID NOT RAISE

This is a known problem when running tests as root. It will be fixed in
the next release. That said, I'd very much recommend to never ever run
tests as root.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: BlockManager instance was destroyed without calling destroy()

2014-05-02 Thread Nikolaus Rath
Hi Brian,

Please quote the email that you are actually responding to, and
don't top-post. Thanks!


Brian Pribis  writes:
>> Compiled s3ql 2.8.1 on ubuntu 14.04.  I get a fail on my tests like:
>>
>> Exception ignored in: > >
>> Traceback (most recent call last):
>>   File "/mnt/netshare/home/brian/s3ql-2.8.1/src/s3ql/block_cache.py", line 
>> 923, in __del__
>> raise RuntimeError("BlockManager instance was destroyed without 
>> calling destroy()!")
>> RuntimeError: BlockManager instance was destroyed without calling 
>> destroy()!
>
> Sorry, I could have been more clear.   That error happens at the END of the 
> tests.  I don't believe a test is failing per se.  Nothing is flagged as 
> failed, anyway (191 passed, 103 skipped in 63.60 seconds).  This looks like 
> it is something to do with cleanup.

Could you try to run the test files one by one, to see which one
produces the error at cleanup?

So instead of

# ./runtests.py tests/

do something like

# for file in tests/t?_*.py; do
echo Running ${file}
./runtests.py ${file}
sleep 5
done


Thanks!
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: BlockManager instance was destroyed without calling destroy()

2014-05-08 Thread Nikolaus Rath
Brian Pribis  writes:
>>
>>
>>> t2_block_cache.py  runs, and ends before seeing the error.  Then the next 
>> test starts.
>>
>>
>> === 12 passed, 1 skipped in 21.37 seconds 
>> ===
>> Exception ignored in: > >
>> Traceback (most recent call last):
>>   File "/mnt/netshare/home/brian/s3ql-2.8.1/src/s3ql/block_cache.py", line 
>> 923, in __del__
>> raise RuntimeError("BlockManager instance was destroyed without 
>> calling destroy()!")
>> RuntimeError: BlockManager instance was destroyed without calling 
>> destroy()!
>
> Did you need anything more from me?   

No, sorry. I've just been really busy (and probably will continue
to be for the next 2 weeks). Could you maybe report this as an issue on 
https://bitbucket.org/nikratio/s3ql/issues? That way I'm sure I won't
forget about it.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Detect that a bucket is mounted elsewhere?

2014-05-09 Thread Nikolaus Rath
On 05/09/2014 05:55 AM, andycr...@gmail.com wrote:
> Folks,
> 
> I am using s3ql-1.17 (on CentOS5 & 6)  and I have a question.
> From each system, it does fsck.s3ql to see if the bucket was previously
> formatted, and if not it does mkfs.s3ql.

This sounds dangerous. Can't you create the file system once ahead of
time? If not, why (and how) do you use fsck.s3ql for this purpose?
Wouldn't it be enough to always call mkfs.s3ql without --force?

> The issue is that if the bucket happened to be mounted on another
> system, the second one blocks forever in fsck.s3ql.

What do you mean by that? Do you mean it's waiting for input? In that
case, just redirect from /dev/null.

> Is there a way to detect that the bucket is mounted elsewhere, so I can
> avoid this?

There is no standalone program, but you should be able to just try to
mount or fsck it. If the file system is mounted elsewhere, this should
fail with an error message. Whether this is 100% reliable depends on
your backend, e.g. with Amazon S3 you are not guaranteed to always get
current data, so you can never be absolutely sure that the file system
is not mounted elsewhere.
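One way to script the "just try it" approach is to run fsck.s3ql non-interactively and inspect its result. The helper below is a sketch: the default command tuple and the idea of matching the "mounted elsewhere" message (quoted earlier in this archive) in the output are assumptions to verify against your fsck.s3ql version:

```python
import subprocess

def probe_filesystem(storage_url, cmd=("fsck.s3ql", "--batch")):
    """Run fsck non-interactively; return (success, combined output).

    stdin comes from /dev/null so the command can never block waiting
    for a "continue" prompt.  Sketch only -- check the exact exit codes
    and messages of your fsck.s3ql version before relying on this.
    """
    proc = subprocess.run([*cmd, storage_url], stdin=subprocess.DEVNULL,
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr
```

A caller could then treat a failure whose output mentions "mounted elsewhere" as the bucket being in use on another system, keeping in mind the eventual-consistency caveat above.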


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] possibility of a pre-compiled package of S3QL for MAC OSX?

2014-05-11 Thread Nikolaus Rath
Ovidiu Pacuraru  writes:
> Would that be possible? I'm kinda lost when it comes to getting this to run 
> on my Macbook. 
> I had previously gotten it to run on my Debian server and liked its 
> functionality...

Possible yes, but I don't have a Mac to do it. Maybe someone else on the
list can help, I know some people are using S3QL under OS X.


Best,
Nikolaus


-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] [2.8.x series on Ubuntu Precise?]

2014-05-19 Thread Nikolaus Rath
On 05/18/2014 06:41 PM, Serge Victor wrote:
> I wonder if you see any way to make s3ql 2.8.x packages for Precise
> available on the Nikolaus PPA, using this PPA that provides newer Python versions:
> 
> https://launchpad.net/~fkrull/+archive/deadsnakes
> 

That is technically possible. However, it would require everyone using
the S3QL PPA to also use the deadsnakes PPA, and it would pull in a
(potentially undesired) Python version. Therefore, I don't think this is
a good idea.

Could you use Python 3.3 from deadsnakes, and install S3QL from source?

> I was testing 1.18 and unfortunately it is not stable enough for
> production use; after a few hours it simply gets stuck and requires kill
> -9 :-( I am storing a few hundred gigs of data for backup purposes and
> s3ql would be really ideal, but unfortunately version 1.18 is not
> stable. Investigating does not make sense, as it is an old branch.

Well, S3QL 1.x doesn't receive new features, but that is because it *is*
considered stable. Requiring kill -9 sounds like a serious bug, so it
would be nice if you could report it on
https://bitbucket.org/nikratio/s3ql/issues. Please include the
information from
https://bitbucket.org/nikratio/s3ql/wiki/Providing%20Debugging%20Info.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] problem with pycryptopp on CentOS5 i686

2014-05-23 Thread Nikolaus Rath
On 05/23/2014 09:38 AM, Andy Cress wrote:
> With pycryptopp-0.5.29 and
> pycryptopp-0.6.0.1206569328141510525648634803928199668821045408958, I
> get an ImportError with ostream_insert during the 'test' on CentOS5 i386.
> 
> #  python2.7 ./setup.py test
> 
[...]
>   File "build/lib.linux-i686-2.7/pycryptopp/__init__.py", line 8, in
> 
> 
> import _pycryptopp
> 
> ImportError: build/lib.linux-i686-2.7/pycryptopp/_pycryptopp.so:
> undefined symbol:
> _ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_i
>  
> 
> This works with pycryptopp-0.6.0 on CentOS5 x86_64 and CentOS6 x86_64
> (also python27).
> 
> Is there something I can do to make this build/resolve?

No idea, sorry. I'd suggest to ask on the pycryptopp mailing list, since
apparently something went wrong when building that module.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Using python requests module within s3ql

2014-05-31 Thread Nikolaus Rath
On 05/31/2014 07:27 AM, Tor Krill wrote:
> Hi,
> 
> I'm currently experimenting with s3ql and a custom authentication. To do
> this i use the python requests module to do http requests. I know there
> most likely is a correct way to do this within s3ql but i haven't dug
> that deep into the code yet.

S3QL uses dugong instead of requests, see
http://pythonhosted.org/dugong/ for documentation.

> My problem is that when i import the requests module from my code it
> seems to clash with the local logging.py
> 
> A quick way to test this is just to fire up python3 in the src/s3ql
> folder and issue 
> 
> import requests

Don't do that. If you run python in src/s3ql, it cannot recognize that
the logging.py is actually part of the s3ql package.

If you want to be able to import the s3ql modules, run python in the
'src' folder (not 'src/s3ql').
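The clash happens because the interactive interpreter puts the current directory first on `sys.path`, so inside `src/s3ql` a bare `import logging` (which requests does internally) resolves to s3ql's own `logging.py` instead of the standard library. You can check which file would win like this:

```python
import importlib.util

# Show which file a bare 'import logging' would resolve to on the
# current sys.path.  Started inside src/s3ql, this would point at
# s3ql's logging.py; anywhere else it is the standard library package.
spec = importlib.util.find_spec("logging")
print(spec.origin)
```

Running this from the 'src' folder (or anywhere outside the package) prints the standard library path, confirming the shadowing is gone.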


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Using python requests module within s3ql

2014-06-02 Thread Nikolaus Rath
On 06/02/2014 02:04 AM, Tor Krill wrote:
> > My problem is that when i import the requests module from my code it
> > seems to clash with the local logging.py
> >
> > A quick way to test this is just to fire up python3 in the src/s3ql
> > folder and issue
> >
> > import requests
> 
> Don't do that. If you run python in src/s3ql, it cannot recognize that
> the logging.py is actually part of the s3ql package.
> 
> If you want to be able to import the s3ql modules, run python in the
> 'src' folder (not 'src/s3ql').
> 
> This was only meant as a way to narrow down our problem. We started with
> our "own" backend but as soon as we imported requests all hell broke lose :)

That should not happen. There's at least one other person who's using
requests with s3ql in some internal project.

Even if you're going to use dugong (which I definitely recommend,
requests is too high-level for this kind of application), could you send
me a testcase where hell breaks loose when importing requests? It may
indicate some other bug.


Best,
Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Force fsck to "continue"?

2014-06-15 Thread Nikolaus Rath
On 06/15/2014 08:21 AM, PA Nilsson wrote:
> Hi,
> 
> I am using a script to mount an s3ql filesystem.
> Before mounting the filesystem I am running:
> 
> fsck.s3ql  --batch $storage_url
> 
> If the file system was not unmounted cleanly this will fail and exit.

This should only happen if the file system was not unmounted cleanly
*and* you lost the local metadata. Otherwise it's a bug.

If you lost the local metadata, I really don't think you want to force
fsck to continue. Are you sure that's what you want?



Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Force fsck to "continue"?

2014-06-15 Thread Nikolaus Rath
PA Nilsson  writes:
> On Sunday, June 15, 2014 9:02:42 PM UTC+2, Nikolaus Rath wrote:
>>
>> On 06/15/2014 08:21 AM, PA Nilsson wrote: 
>> > Hi, 
>> > 
>> > I am using a script to mount an s3ql filesystem. 
>> > Before mounting the filesystem I am running: 
>> > 
>> > fsck.s3ql  --batch $storage_url 
>> > 
>> > If the file system was not unmounted cleanly this will fail and exit. 
>>
>> This should only happen if the file system was not unmounted cleanly 
>> *and* you lost the local metadata. Otherwise it's a bug. 
>>
>> If you lost the local metadata, I really don't think you want to force 
>> fsck to continue. Are you sure that's what you want? 
>
> The system lost power, so the file system was not unmounted.
> How do I know if I have lost the local metadata?

The fsck.s3ql message should say something about that. What message are
you getting when it "fails and exits"?

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Question regarding data consistency in case of network issues

2014-06-15 Thread Nikolaus Rath
Andrei  writes:
> Hi,
>
> I couldn't find much related to this in the docs/FAQ so I'm asking here.
>
> 1.) What should I expect in terms of data consistency issues in case the 
> underlying network connection is flaky or goes down completely for minutes 
> (and for example I may have a daemon running which uses an s3ql file system 
> for various data storage purposes)?

It should work just fine as long as your changes fit into the local
cache. Once that is exhausted, any attempt to make changes to the file
system will block until the network connection is back.

> 2.) Also, what is the intended behavior of s3ql in such cases? For this, I 
> mean things like: does it hang until the network connection is restored (in 
> case someone attempts to write to an s3ql file system) or does it write to 
> cache instead hoping the connection will be restored etc.?

The latter, until the cache is full. Then it falls back to the former.
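The write-back behaviour described here can be sketched with a bounded queue standing in for the local cache. This is a toy illustration only, not S3QL's actual code; the three-block cache size and the uploader thread are invented for the demo:

```python
import queue
import threading

# A bounded queue stands in for the local cache: put() succeeds while
# there is room, and blocks once the cache is full -- mirroring the
# behaviour described above.
cache = queue.Queue(maxsize=3)   # pretend the local cache holds 3 blocks

def uploader(network_up):
    # Drain the cache, but only while the "network" is up.
    while True:
        network_up.wait()
        try:
            cache.get(timeout=0.1)
        except queue.Empty:
            continue
        cache.task_done()

network_up = threading.Event()   # network starts out down
threading.Thread(target=uploader, args=(network_up,), daemon=True).start()

for i in range(3):
    cache.put(f'block-{i}')      # succeeds: fits into the local cache
print('cache full:', cache.full())

# A fourth write would now block; restore the network first so the
# uploader drains the cache and the put() can complete.
network_up.set()
cache.put('block-3')             # blocks briefly until space frees up
cache.join()                     # wait until everything is "uploaded"
print('all blocks uploaded')
```

Writes are absorbed by the cache as long as it has room; only when it is exhausted does the writer block on the backend, which is exactly the fallback described above.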


Best,
-Nikolaus



Re: [s3ql] Making an s3ql file system mount permanent

2014-06-15 Thread Nikolaus Rath
Andrei  writes:
> Hi,
>
> What is the recommended way to make an s3ql file system mount
> permanent?

Start mount.s3ql automatically at boot.

> For example, can I use autofs?

You can do that too.

> If yes, has this been tested by any chance?

I haven't heard of anyone using autofs, but there are plenty of people
starting mount.s3ql from their init scripts.

> Any possible issues?

Make sure that you wait for umount.s3ql to finish before you
reboot. For most init systems, you will have to adjust the timeout.
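A shutdown hook along these lines is one way to make sure the init system waits for the unmount. This is a sketch only; the `cmd` parameter, the 3600 s timeout, and the `/mnt/s3ql` mount point are illustrative assumptions, not S3QL defaults:

```python
import subprocess
import sys

def unmount_and_wait(mountpoint, cmd='umount.s3ql', timeout=3600):
    """Run the unmount command and wait for it to finish before the
    system is allowed to reboot. umount.s3ql itself blocks until all
    dirty cache data has been uploaded, so the timeout must be generous
    (3600 s here is an arbitrary example)."""
    try:
        subprocess.run([cmd, mountpoint], check=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        print(f'upload still in progress after {timeout}s', file=sys.stderr)
        return False
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        print(f'unmount failed: {exc}', file=sys.stderr)
        return False
    return True

if __name__ == '__main__':
    unmount_and_wait('/mnt/s3ql')
```

In an init script the same idea applies: call umount.s3ql synchronously in the shutdown path and raise the service-stop timeout so the init system does not kill it mid-upload.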


Best,
Nikolaus



Re: [s3ql] Force fsck to "continue"?

2014-06-16 Thread Nikolaus Rath
On 06/16/2014 10:27 AM, PA Nilsson wrote:
> 
> 
> On Sunday, June 15, 2014 11:01:12 PM UTC+2, Nikolaus Rath wrote:
> 
> PA Nilsson > writes:
> > On Sunday, June 15, 2014 9:02:42 PM UTC+2, Nikolaus Rath wrote:
> >>
> >> On 06/15/2014 08:21 AM, PA Nilsson wrote:
> >> > Hi,
> >> >
> >> > I am using a script to mount an s3ql filesystem.
> >> > Before mounting the filesystem I am running:
> >> >
> >> > fsck.s3ql  --batch $storage_url
> >> >
> >> > If the file system was not unmounted cleanly this will fail and
> exit.
> >>
> >> This should only happen if the file system was not unmounted cleanly
> >> *and* you lost the local metadata. Otherwise it's a bug.
> >>
> >> If you lost the local metadata, I really don't think you want to
> force
> >> fsck to continue. Are you sure that's what you want?
> >
> > The system lost power, so the file system was not unmounted.
> > How do I know if I have lost the local metadata?
> 
> The fsck.s3ql message should say something about that. What message are
> you getting when it "fails and exits"?
> 
> 
> Finally had time to reproduce this.
> The system was powercycled in the middle of running a backup with the FS
> mounted.
> Running fsck 
> 
> fsck.s3ql --ssl-ca-path ${capath} --cachedir ${s3ql_cachedir} --log
> $log_file --authfile ${auth_file}  $storage_url
> 
> "
> Starting fsck of x
> Ignoring locally cached metadata (outdated).
> Backend reports that file system is still mounted elsewhere. Either
> the file system has not been unmounted cleanly or the data has not yet
> propagated through the backend. In the later case, waiting for a while
> should fix the problem, in the former case you should try to run fsck
> on the computer where the file system has been mounted most recently.
> Enter "continue" to use the outdated data anyway:
> "
> 
> In this case, it is true that the file system was not cleanly unmounted,
> but what are my options here?

You should find out why you are losing your local metadata copy.

Is your $s3ql_cachedir on a journaling file system? What happened to
this file system on the power cycle? Did it lose data?

What are the contents of $s3ql_cachedir when you run fsck.s3ql?

Are you running fsck.s3ql with the same $s3ql_cachedir as mount.s3ql?
Are you *absolutely* sure about that?


Best,
-Nikolaus



Re: [s3ql] Force fsck to "continue"?

2014-06-17 Thread Nikolaus Rath
PA Nilsson  writes:
>> > fsck.s3ql --ssl-ca-path ${capath} --cachedir ${s3ql_cachedir} --log 
>> > $log_file --authfile ${auth_file}  $storage_url 
>> > 
>> > " 
>> > Starting fsck of x 
>> > Ignoring locally cached metadata (outdated). 
>> > Backend reports that file system is still mounted elsewhere. Either 
>> > the file system has not been unmounted cleanly or the data has not yet 
>> > propagated through the backend. In the later case, waiting for a while 
>> > should fix the problem, in the former case you should try to run fsck 
>> > on the computer where the file system has been mounted most recently. 
>> > Enter "continue" to use the outdated data anyway: 
>> > " 
>> > 
>> > In this case, it is true that the file system was not cleanly unmounted, 
>> > but what are my options here? 
>>
>> You should find out why you are losing your local metadata copy. 
>>
>> Is your $s3ql_cachedir on a journaling file system? What happened to 
>> this file system on the power cycle? Did it lose data? 
>>
>> What are the contents of $s3ql_cachedir when you run fsck.s3ql? 
>>
>> Are you running fsck.s3ql with the same $s3ql_cachedir as mount.s3ql? 
>> Are you *absolutely* sure about that? 
>>
>>
> I can only trigger this when the system is powered off during an actual 
> transfer of data. If I let the data transfer finish and then power cycle 
> with the fs mounted, the FS recovers when running fsck.
>
> The system is running on an ext4 filesystem. The filesystem does not seem 
> to have lost any data.
> The cachedir is read from the same config file and works otherwise, so yes, 
> I am sure about that. 
> Contents of cachedir when failing are:
> -rw-r--r-- 1 root root 0 Jun 16 13:06 mount.s3ql_crit.log
> -rw--- 1 root root 77824 Jun 17 07:21 
> s3c:=2F=2Fstorage.openproducts.com=2F3feae012-2dfc-4aee-972a-532f15e99009.db
> -rw-r--r-- 1 root root   217 Jun 17 07:21 
> s3c:=2F=2Fstorage.openproducts.com=2F3feae012-2dfc-4aee-972a-532f15e99009.params

There is something very wrong here. While mount.s3ql is running, there
will always be a directory ending in -cache in the cache directory. This
directory is only removed after mount.s3ql exits, so if you reboot the
computer, it *must* still be there.

Can you confirm that the directory exists while mount.s3ql is running?

What happens if, instead of rebooting, you just kill -9 the mount.s3ql
process? Does the -cache directory exist? Does fsck.s3ql work in that case?
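For an unattended setup, the experiment above suggests a simple diagnostic: check whether a leftover `-cache` directory survived the reboot before deciding how to run fsck. A minimal sketch (the helper name is invented; the `-cache` suffix is taken from the description above):

```python
import os

def find_stale_cache_dirs(cachedir):
    """Return any leftover '*-cache' directories in the S3QL cache
    directory. Per the explanation above, such a directory exists for
    the whole lifetime of mount.s3ql and is only removed on clean exit,
    so finding one after a reboot means the previous mount died hard --
    and *not* finding one after an unclean shutdown (as in this thread)
    points at the cache file system losing data."""
    return [os.path.join(cachedir, name)
            for name in os.listdir(cachedir)
            if name.endswith('-cache')
            and os.path.isdir(os.path.join(cachedir, name))]
```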


Best,
-Nikolaus


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-17 Thread Nikolaus Rath
Warren Daly  writes:
> I use Ubuntu 12.04 LTS 64 bit server. 
> Running python 2.7 so running s3ql-1.12 
> 

Ah, which one now? 1.12 or 1.17?

> I mount an S3 bucket using mount.s3ql All has been working fine for
> quite sometime. I need to perform some security updates on the server.
> So I unmounted the S3 bucket. Rebooted the server.
>
> When I try to mount the S3 bucket:
> mount.s3ql --allow-other s3:/// /production/
> It returns:
> File system damaged or not unmounted cleanly, run fsck!
>
> So I run fsck.s3ql. 
> 
> MainThread: [fsck] ..processed 605000 objects so far..
> MainThread: [fsck] Deleted spurious object 181363
> MainThread: [fsck] Deleted spurious object 181364
> MainThread: [fsck] Deleted spurious object 181365
> MainThread: [fsck] Deleted spurious object 181366
> MainThread: [fsck] Deleted spurious object 181367
> MainThread: [fsck] Deleted spurious object 181368
> MainThread: [fsck] Deleted spurious object 181369
> MainThread: [fsck] Deleted spurious object 181370
> MainThread: [fsck] Deleted spurious object 181371
> MainThread: [fsck] Deleted spurious object 181372
>
> It starts counting upwards from object 181363 saying Deleted spurious 
> object. So I killed the process. 
>
> *Is it safe to continue to run fsck and have it Deleted spurious
> objects? *

If the metadata is correct, yes. But that may not be the case for you.

> Is there a switch (I cannot find one in the man page, or help) to move to 
> lost&found or not to delete?

No, but I could create a quick patch if necessary.

> Please help. Any assistance appreciated. 
>
> When I run s3qladm download-metadata s3://x
> I see this:
>
>  No  NameDate   
>   0  s3ql_metadata_bak_0 2013-07-02 11:14:55
>   1  s3ql_metadata_bak_1 2013-07-01 11:14:14
>   2  s3ql_metadata_bak_102013-06-24 03:34:58
>   3  s3ql_metadata_bak_2 2013-07-01 03:20:06
>   4  s3ql_metadata_bak_3 2013-06-30 03:19:49
>   5  s3ql_metadata_bak_4 2013-06-29 03:13:52
>   6  s3ql_metadata_bak_5 2013-06-28 14:08:21
>   7  s3ql_metadata_bak_6 2013-06-28 03:36:40
>   8  s3ql_metadata_bak_7 2013-06-27 03:36:20
>   9  s3ql_metadata_bak_8 2013-06-26 03:35:40
>  10  s3ql_metadata_bak_9 2013-06-25 03:35:20
>
> *Why are the backups so old? Surely metadata refreshes at the default (24 
> hour) periods? *

It should, unless you pass a different option. What does your
~/.s3ql/mount.log file say? It should report whenever metadata is saved.

> So these will be pretty useless to me. But I have local metadata. The 
> server didn't crash was shutdown and restarted just fine.
>
> I have the the .db file and params file. I understand that fsck will use 
> this local metadata.

Did it say so? You unfortunately didn't include the full fsck.s3ql
output.


Best,
-Nikolaus



Re: [s3ql] Force fsck to "continue"?

2014-06-18 Thread Nikolaus Rath
PA Nilsson  writes:
> On Tuesday, June 17, 2014 10:16:23 PM UTC+2, Nikolaus Rath wrote:
>> PA Nilsson > writes: 
>> >> > fsck.s3ql --ssl-ca-path ${capath} --cachedir ${s3ql_cachedir} --log 
>> >> > $log_file --authfile ${auth_file}  $storage_url 
>> >> > 
>> >> > " 
>> >> > Starting fsck of x 
>> >> > Ignoring locally cached metadata (outdated). 
>> >> > Backend reports that file system is still mounted elsewhere. Either 
>> >> > the file system has not been unmounted cleanly or the data has not 
>> yet 
>> >> > propagated through the backend. In the later case, waiting for a 
>> while 
>> >> > should fix the problem, in the former case you should try to run fsck 
>> >> > on the computer where the file system has been mounted most recently. 
>> >> > Enter "continue" to use the outdated data anyway: 
>> >> > " 
>> >> > 
>> >> > In this case, it is true that the file system was not cleanly 
>> unmounted, 
>> >> > but what are my options here? 
>> >> 
>> >> You should find out why you are losing your local metadata copy. 
>> >> 
>> >> Is your $s3ql_cachedir on a journaling file system? What happened to 
>> >> this file system on the power cycle? Did it lose data? 
>> >> 
>> >> What are the contents of $s3ql_cachedir when you run fsck.s3ql? 
>> >> 
>> >> Are you running fsck.s3ql with the same $s3ql_cachedir as mount.s3ql? 
>> >> Are you *absolutely* sure about that? 
>> >> 
>> >> 
>> > I can only trigger this when the system is powered off during an actual 
>> > transfer of data. If I let the data transfer finish and then power cycle 
>> > with the fs mounted, the FS recovers when running fsck. 
>> > 
>> > The system is running on an ext4 filesystem. The filesystem does not 
>> seem 
>> > to have lost any data. 
>> > The cachedir is read from the same config file and works otherwise, so 
>> > yes, I am sure about that. 
>> > Contents of cachedir when failing are: 
>> > -rw-r--r-- 1 root root 0 Jun 16 13:06 mount.s3ql_crit.log 
>> > -rw--- 1 root root 77824 Jun 17 07:21 
>> > s3c:=storageurl.db 
>> > -rw-r--r-- 1 root root   217 Jun 17 07:21 
>> > s3c:=storageurl.params 
>>
>> There is something very wrong here. While mount.s3ql is running, there 
>> will always be a directory ending in -cache in the cache directory. This 
>> directory is only removed after mount.s3ql exits, so if you reboot the 
>> computer, it *must* still be there. 
>>
>> Can you confirm that the directory exists while mount.s3ql is running? 
>>
>> What happens if, instead of rebooting, you just kill -9 the mount.s3ql 
>> process? Does the -cache directory exist? Does fsck.s3ql work in that 
>> case? 
>>
> The -cache dir is there while the FS is mounted and only removed when
> mount.s3ql finishes.  After a kill -9, the -cache is still there.  Then,
> rebooting the system, the -cache is still there and fsck completes.
>
> However when closely monitoring the system, the -cache is created when the 
> FS is mounted but if the system is immediately reset, it is not there after 
> a reboot.
> So my thinking is that this is a problem that we have with our flash based 
> file system. The file is simply not yet written to flash.

It's a directory, not a file, and it is created when mount.s3ql
starts. If this directory (with its contents) disappears if you reboot
the system several minutes later, you have a real problem.

> This will be running on an "non maintained" system with no possibility for 
> user interaction.
> What is the drawback of always continuing the fsck operation?

You will lose any data that has not been written to the backend, and
you will lose all metadata updates since the last metadata upload -
which can imply that you lose some data even though the data itself has
already been written to the backend.

More importantly, though, you are ignoring a big problem with your flash
file system. Rebooting the system might affect recently written files,
but it should not result in the loss of an entire directory with its
contents that was created an arbitrary amount of time before the
reboot. In addition, it looks as if the .params file reverts to an
earlier state (this is the reason why fsck.s3ql thinks that the remote
metadata is newer).


Best,
-Nikolaus


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-18 Thread Nikolaus Rath
Hi Warren,

When replying to emails on this list, please do not put your reply
above the quoted text, and do not quote the entire message you're
answering to. This makes it unnecessarily hard for other readers to
understand the context of your email. Instead, please cut quoted parts
that are not relevant to your reply, and insert your responses right
after the points you're replying to (as I have done below). Thanks!

Warren Daly  writes:
> A general question, under normal operations, What does fsck mean when it 
> says "[fsck] Deleted spurious object"
> is it removing items on the S3 drive that are no longer in the local 
> metadata file?

Correct.

> I thought that the mount would always use the local metadata when running 
> "mount.s3ql --allow-other s3:/// /folder/"

No, it (always) uses the most recent metadata. If you have previously
mounted the system on a different computer (or using a different cache
directory), the remote metadata will be more recent. If the previous
mount was done on the same system using the same cache directory, the
local metadata will be up-to-date and it will use that.
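S3QL tracks metadata freshness with a sequence number (the `s3ql_seq_no_%d` objects visible in patches elsewhere in this thread). The rule described above can be sketched as follows; this is an illustration of the decision, not mount.s3ql's actual implementation:

```python
def pick_metadata(local_seq, remote_seq):
    """Use whichever metadata copy has the higher sequence number.
    On a tie the local copy is already up to date and is preferred
    (no download needed). Illustrative sketch only."""
    if local_seq is None:          # no cached metadata on this machine
        return 'remote'
    if local_seq >= remote_seq:
        return 'local'
    return 'remote'

# Previous mount was on this machine with this cache directory:
print(pick_metadata(26, 26))   # → local
# File system was last mounted elsewhere, remote copy is newer:
print(pick_metadata(25, 26))   # → remote
```

This is also why a `.params` file that reverts to an older state (as suspected earlier in this digest) makes the local copy look outdated even though it was written most recently.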

Best,
-Nikolaus

PS: Please remember the first paragraph when replying :-).



Re: [s3ql] Unlocking tree

2014-06-18 Thread Nikolaus Rath
Andrei  writes:
> Hi,
>
> Is there a way to unlock an immutable tree locked with s3qllock?

No. Immutable really means immutable, not "mutable with some extra
effort" :-).


> I would have thought this is done with the same command (with some
> switch) but apparently not. Also couldn't find some another way to do
> it. Is this possible? Or should I use some hack like copying with
> s3qlcp and then deleting the original tree?

That's the only way, yes.

Best,
Nikolaus



Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-18 Thread Nikolaus Rath
Warren Daly  writes:
> Ok I have a backup of the local metadata - the one I was using is 48Mb, but 
> the one I have from backup is 178Mb (.db) file.
> So I probably did lose those objects that fsck deleted ([fsck] Deleted 
> spurious object) 
>
> So I placed the backup of the .db and params file in the .s3ql folder. 
>
> When I run:
> mount.s3ql --allow-other s3:// /folder/
> I see this in the mount.log file:
> 2014-06-18 12:52:07.928 [2980] MainThread: [mount] Using 4 upload threads.
> 2014-06-18 12:52:08.669 [2980] MainThread: [mount] Ignoring locally cached 
> metadata (outdated).
> 2014-06-18 12:52:08.772 [2980] MainThread: [root] Backend reports that fs 
> is still mounted elsewhere, aborting.
>
> I know there is nothing else mounting, because I've just booted the 
> instance. Investigating now. 
> Also reading up on how to not ignore local metadata?

There is no easy way to do that. One way is to change fsck.py to pretend
that the remote sequence number is zero. The following change will do
that (make sure to revert it after use!):

diff --git a/src/s3ql/fsck.py b/src/s3ql/fsck.py
--- a/src/s3ql/fsck.py
+++ b/src/s3ql/fsck.py
@@ -1084,7 +1084,8 @@
 raise QuietError(str(exc))
 
 cachepath = get_backend_cachedir(options.storage_url, options.cachedir)
-seq_no = get_seq_no(backend)
+log.warning("**WARNING** IGNORING REMOTE METADATA")
+seq_no = 0
 db = None
 
 if os.path.exists(cachepath + '.params'):



Best,
-Nikolaus


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-18 Thread Nikolaus Rath
Warren Daly  writes:
 > When I run s3qladm download-metadata s3://x 
 > I see this: 
 > 
 >  No  NameDate   
 >   0  s3ql_metadata_bak_0 2013-07-02 11:14:55 
 >   1  s3ql_metadata_bak_1 2013-07-01 11:14:14 
 >   2  s3ql_metadata_bak_102013-06-24 03:34:58 
 >   3  s3ql_metadata_bak_2 2013-07-01 03:20:06 
 >   4  s3ql_metadata_bak_3 2013-06-30 03:19:49 
 >   5  s3ql_metadata_bak_4 2013-06-29 03:13:52 
 >   6  s3ql_metadata_bak_5 2013-06-28 14:08:21 
 >   7  s3ql_metadata_bak_6 2013-06-28 03:36:40 
 >   8  s3ql_metadata_bak_7 2013-06-27 03:36:20 
 >   9  s3ql_metadata_bak_8 2013-06-26 03:35:40 
 >  10  s3ql_metadata_bak_9 2013-06-25 03:35:20 
 > 
 > *Why are the backups so old? Surely metadata refreshes at the default 
 (24 
 > hour) periods? * 

 It should, unless you pass a different option. What does your 
 ~/.s3ql/mount.log file say? It should report whenever metadata is saved. 
>
> mount log:
> 2014-06-18 13:07:05.469 [3893] MainThread: [mount] Using 4 upload threads.
> 2014-06-18 13:07:06.476 [3893] MainThread: [mount] Ignoring locally cached 
> metadata (outdated).
> 2014-06-18 13:07:06.667 [3893] MainThread: [root] Backend reports that fs 
> is still mounted elsewhere, aborting.
[...]

What I meant is, look at the *old* entries in mount.log (or mount.log.1,
mount.log.2, etc.) to determine why the metadata backups were not
written.

Was the system ever restarted since 2013-07-02?

Was the file system ever unmounted since 2013-07-02?

How are you starting mount.s3ql? Are you using the
--metadata-upload-interval option?

Best,
-Nikolaus



Re: [s3ql] Force fsck to "continue"?

2014-06-20 Thread Nikolaus Rath
PA Nilsson  writes:
>> It's a directory, not a file, and it is created when mount.s3ql 
>> starts. If this directory (with its contents) disappears if you reboot 
>> the system several minutes later, you have a real problem. 
>>
> If I reboot several minutes later, the directory is there and everything 
> works. I need to force the reboot within seconds after the mount process, 
> or maybe even during it while the metadata is read/written from the
> server.

Ah, you didn't say that before. In that case there might be an easy
fix that can be implemented in S3QL. Stay tuned, I'll send you a patch
soonish.


Best,
-Nikolaus


Re: [s3ql] Metadata locations

2014-06-20 Thread Nikolaus Rath
Warren Daly  writes:
> If you unmount the S3 drive correctly and cleanly, and 
> Metadata-Upload-Thread finishes. Where is the metadata stored on the 
> backend?

In the "s3ql_metadata" object.

> (e.g can you see it using #s3qladm download-metadata)

In S3QL 1.12, no. It shows only the backups. In recent S3QL versions,
yes. For example (with S3QL 2.8.1):

$ s3qladm download-metadata s3://foobrazl
The following backups are available:
 No  NameDate   
  0  s3ql_metadata   2014-06-20 08:24:15
  1  s3ql_metadata_bak_0 2014-06-18 16:46:16
  2  s3ql_metadata_bak_1 2014-06-17 21:46:44
[...]

> Is there a working metadata folder on the backend *and* the backups seen by 
> using #s3qladm download-metadata ? (is there 2 location for metadata on the 
> backend?)

I don't understand the question. S3QL backends don't have folders;
they are simple key-value stores.

> If the S3 drive is not umounted cleanly and the Metadata-Upload-Thread does 
> not upload the metadata. What is best copy of metadata to use 
> A) the remote backup seen in #s3qladm download-metadata? (but why use this 
> if the last metadata upload was not successful?)
> B) the local metadata files?
> C) another location (I am not aware of)?

The local metadata, i.e. option B.


Best,
-Nikolaus



Re: [s3ql] File may lack data, moved to /lost+found

2014-06-20 Thread Nikolaus Rath
Warren Daly  writes:
> File may lack data, moved to /lost+found: /folder/readme.txt
> object 175 only exists in table but not in backend, deleting
> File may lack data, moved to /lost+found: /folder/file1
> object 176 only exists in table but not in backend, deleting
>
> so it is "deleting" or "moved" ?

The file /folder/readme.txt has been moved to lost+found, because some
of its contents may have been lost.

The storage object 175 should exist (it is referred to in the metadata),
but the backend doesn't have it. References to it in the metadata have
been deleted.

> Also when it says /lost+found I guess it means the local folder
> /lost+found

It means the /lost+found directory in the S3QL file system that you are
fsck'ing.


Best,
-Nikolaus



Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-20 Thread Nikolaus Rath
Warren Daly  writes:
>>
>> > Also reading up on how to not ignore local metadata? 
>>
>> There is no easy way to do that. One way is to change fsck.py to pretend 
>> that the remote sequence number is zero. The following change will do 
>> that (make sure to revert it after use!): 
>>
>> diff --git a/src/s3ql/fsck.py b/src/s3ql/fsck.py 
>> --- a/src/s3ql/fsck.py 
>> +++ b/src/s3ql/fsck.py 
>> @@ -1084,7 +1084,8 @@ 
>>  raise QuietError(str(exc)) 
>>   
>>  cachepath = get_backend_cachedir(options.storage_url, 
>> options.cachedir) 
>> -seq_no = get_seq_no(backend) 
>> +log.warning("**WARNING** IGNORING REMOTE METADATA") 
>> +seq_no = 0 
>>  db = None 
>>   
>>  if os.path.exists(cachepath + '.params'): 
>>
>  
> Well this is the log file before the reboot. it would appear that these 
> logs indicate that I should just use the remote metadata (and in turn allow 
> deletion of Spurious objects)?

I don't follow. Do you mean the logfile that you quoted further down,
where it says that mount.s3ql refused to update the remote metadata?

Here is how I think your situation may have come about:

 - You mounted the file system on system A a long time ago
 - You then tried to mount it on system B, but got an error message that
   the system is still mounted elsewhere
 - You then ran fsck.s3ql on system B and told it to continue despite the
   lack of up-to-date metadata.
 - Now everything seemed working on system B.
 - Ever since then, however, system A (where the file system was still
   mounted) did not upload its metadata, because it detected that a
   "newer" version of the metadata has been uploaded by system B. This
   is why you do not have any recent metadata backups.
 - Now that you unmounted system A, you have a problem. The local
   metadata copy appears older than the remote metadata, but it was
   actually modified more recently.

The correct course of action depends on which system you did the more
important changes on. If you only ran a brief fsck.s3ql on system B at some
indeterminate point in the past, you should use the local metadata from
system A (even though fsck.s3ql claims it's outdated). If you mounted the
file system on system B and stored important data there, you may want to
use the remote metadata (but in that case you will lose everything you did
on system A).

> The whole reason I have started this thread is anxiousness when it reports 
> deletion of so many "Spurious objects". To put things in perspective there 
> is about 360,000 files on the system. Fsck processes about 600,000 objects. 
> Then begins to delete "Spurious objects" one by one. I would prefer if it 
> moved them to Lost & Found or I could even move them to a specific
> folder.

Yeah, that's a planned feature:

https://bitbucket.org/nikratio/s3ql/issue/30/fscks3ql-should-save-orphaned-objects-to


Best,
-Nikolaus



Re: [s3ql] Metadata locations

2014-06-20 Thread Nikolaus Rath
On 06/20/2014 07:12 PM, Warren Daly wrote:
> 2014-06-16 12:16:29.444 [2342] Metadata-Upload-Thread: [mount] Remote
> metadata is newer than local (26 vs 25), refusing to overwrite!
> 
> So is the above log entry saying it cannot overwrite the s3ql Metadata
> object *or* the backups (that are listed using s3qladm)

If it doesn't write new metadata, there is little point in making a
backup (since the original is unchanged).


Best,
-Nikolaus



Re: [s3ql] Force fsck to "continue"?

2014-06-21 Thread Nikolaus Rath
Nikolaus Rath  writes:
> PA Nilsson  writes:
>>> It's a directory, not a file, and it is created when mount.s3ql 
>>> starts. If this directory (with its contents) disappears if you reboot 
>>> the system several minutes later, you have a real problem. 
>>>
>> If I reboot several minutes later, the directory is there and everything 
>> works. I need to force the reboot within seconds after the mount process, 
>> or maybe even during it while the metadata is read/written from the
>> server.
>
> Ah, you didn't say that before. In that case there might be an easy
> fix that can be implemented in S3QL. Stay tuned, I'll send you a patch
> soonish.

Does the following patch fix the problem?

(This is for S3QL 2.8, let me know if you are using an earlier version).

diff --git a/src/s3ql/mount.py b/src/s3ql/mount.py
--- a/src/s3ql/mount.py
+++ b/src/s3ql/mount.py
@@ -432,6 +432,8 @@
     backend['s3ql_seq_no_%d' % param['seq_no']] = b'Empty'
     with open(cachepath + '.params', 'wb') as fh:
         pickle.dump(param, fh, PICKLE_PROTOCOL)
+        fh.flush()
+        os.fsync(fh.fileno())
     param['needs_fsck'] = False
 


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [s3ql] Force fsck to "continue"?

2014-06-21 Thread Nikolaus Rath
PA Nilsson  writes:
> So my thinking is that this is a problem that we have with our flash based 
> file system. The file is simply not yet written to flash.
> This will be running on an "non maintained" system with no possibility for 
> user interaction.

If there is a high chance that the system will be power-cycled without a
proper shutdown, you may want to apply the following patch as well (in
addition to the patch from my other mail). It reduces the likelihood of
metadata corruption on power loss, but also reduces performance (so
this is not going to go into the official S3QL code):

diff --git a/src/s3ql/database.py b/src/s3ql/database.py
--- a/src/s3ql/database.py
+++ b/src/s3ql/database.py
@@ -32,7 +32,7 @@
     # locking_mode to EXCLUSIVE, otherwise we can't switch the locking
     # mode without first disabling WAL.
     'PRAGMA synchronous = OFF',
-    'PRAGMA journal_mode = OFF',
+    'PRAGMA journal_mode = NORMAL',
     #'PRAGMA synchronous = NORMAL',
     #'PRAGMA journal_mode = WAL',
 

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-21 Thread Nikolaus Rath
Warren Daly  writes:
> There is no system B. 
>
>  I've compiled a list of the entire situation. I would appreciate your
>  guidance.
[...]

Please stop sending the same email over and over again. Instead, provide
only the additional information requested, and properly quote the email
you're responding to. You're making it extremely hard for me to follow.


If there is no system B, then I have no idea how you could have possibly
ended up in this situation. Are you absolutely sure? Note that "system A
with a different --cachedir" would also count as a "system B".


Could you please zip all the mount.log* and fsck.log* files and put them
somewhere on the web? I'd like to see when this problem started appearing.


Best,
-Nikolaus
-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-21 Thread Nikolaus Rath
On 06/21/2014 05:38 PM, Warren Daly wrote:
> 
> Could you please zip all the mount.log* and fsck.log* files and put
> them
> somewhere on the web? I'd like to see when this problem started
> appearing.
> 
> http://www.invisibleagent.com/20mntfsk.tar 

$ wget http://www.invisibleagent.com/20mntfsk.tar
--2014-06-21 17:44:26--  http://www.invisibleagent.com/20mntfsk.tar
Resolving www.invisibleagent.com (www.invisibleagent.com)...
108.162.196.30, 108.162.197.30, 2400:cb00:2048:1::6ca2:c41e, ...
Connecting to www.invisibleagent.com
(www.invisibleagent.com)|108.162.196.30|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2014-06-21 17:44:27 ERROR 403: Forbidden.


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-21 Thread Nikolaus Rath
On 06/21/2014 06:03 PM, Warren Daly wrote:
> 
> (www.invisibleagent.com)|108.162.196.30|:80... connected.
> HTTP request sent, awaiting response... 403 Forbidden
> 2014-06-21 17:44:27 ERROR 403: Forbidden.
> 
> Please try again, .htaccess line updated

$ wget http://www.invisibleagent.com/20mntfsk.tar
--2014-06-21 18:20:26--  http://www.invisibleagent.com/20mntfsk.tar
Resolving www.invisibleagent.com (www.invisibleagent.com)...
108.162.197.30, 108.162.196.30, 2400:cb00:2048:1::6ca2:c41e, ...
Connecting to www.invisibleagent.com
(www.invisibleagent.com)|108.162.197.30|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2014-06-21 18:20:27 ERROR 404: Not Found.

Please test the link before sending the next email.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [s3ql] fsck.s3ql Deleted spurious object

2014-06-21 Thread Nikolaus Rath
On 06/21/2014 07:03 PM, Warren Daly wrote:
> 
> 
> Please test the link before sending the next email.
> 
> 
> Sorry
> 
> wget http://www.invisibleagent.com/mntfsk.tar

Too bad, these logs don't go back far enough. So there's no telling how
and when this problem began.

If you have only accessed the file system from one computer, the best
course of action to get a consistent file system again would be to use
the most-recent copy of the local metadata that you have, and force
fsck.s3ql to use that in favor of the remote metadata using the
following patch (note that this is slightly different from the patch I
posted before):

--- a/src/s3ql/fsck.py
+++ b/src/s3ql/fsck.py
@@ -1095,6 +1095,8 @@
     assert os.path.exists(cachepath + '.db')
     with open(cachepath + '.params', 'rb') as fh:
         param = pickle.load(fh)
+    log.warning('WARNING! Forcing use of local metadata!')
+    param['seq_no'] = seq_no+1
     if param['seq_no'] < seq_no:
         log.info('Ignoring locally cached metadata (outdated).')
         param = backend.lookup('s3ql_metadata')


To detect problems like this earlier, I'd recommend to use a tool like
logcheck to automatically scan the log files. That way, you get notified
if something unexpected happens (and hopefully before anything actually
breaks).


Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[s3ql] [ANNOUNCE] S3QL 2.9 has been released

2014-06-28 Thread Nikolaus Rath
Dear all,

I am pleased to announce a new release of S3QL, version 2.9.

The source code of the new release can be downloaded from
https://bitbucket.org/nikratio/s3ql/downloads

Please note that starting with version 2.0, S3QL requires Python 3.3 or
newer. For older systems, the S3QL 1.x branch (which only requires
Python 2.7) will continue to be supported for the time being. However,
development concentrates on S3QL 2.x while the 1.x branch only receives
selected bugfixes. When possible, upgrading to S3QL 2.x is therefore
strongly recommended.

From the changelog:

2014-06-28, S3QL 2.9

  * Fix crash when using swift backend and server uses an
authentication URL other than /v1.0.

  * Fixed two test failures when running unit tests as root.

  * Fixed problems when receiving an HTTP error without a well-formed
XML body from the remote server (this may happen e.g. due to
failure in a transparent proxy or load balancer).

  * S3QL now depends on the defusedxml Python module
(https://pypi.python.org/pypi/defusedxml/).

  * S3QL is no longer vulnerable to DOS attacks from malicious backend
servers. Previously, a malicious server could induce S3QL to
consume arbitrary amounts of memory by recursive XML entity
expansion.

  * S3QL now supports Google OAuth2 authentication. To use it, specify
'oauth2' as backend login, and a valid OAuth2 refresh token as
backend password. To obtain the refresh token, the (new)
s3ql_oauth_client command may be used.

  * S3QL now requires version 3.1 or newer of the dugong module.

  * In some cases, transmission errors when reading storage objects
from a backend may have been misreported as corrupted backend
objects. This has been fixed.

  * S3QL no longer crashes when data corruption occurs in the first
few bytes of an LZMA compressed storage object.

  * S3QL now honors the "Retry-After" HTTP header when receiving an
XML error from a storage server.

  * Fixed a crash that could occur when the remote server (or some
intermediate proxy) sends a non-XML error response.

  * mount.s3ql and fsck.s3ql now use different exit codes to indicate
different failure conditions.


Please report any bugs on the mailing list (s3ql@googlegroups.com) or
the issue tracker (https://bitbucket.org/nikratio/s3ql/issues).


Enjoy,

   -Nikolaus


-- 
Encrypted emails preferred.
PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

 »Time flies like an arrow, fruit flies like a Banana.«









Re: [s3ql] memory usage and --compress options

2014-07-01 Thread Nikolaus Rath
On 06/30/2014 10:22 PM, Brice Burgess wrote:
> I went ahead and used the --compress none flag to disable compression. I
> had to re-create the fileksystem to get it working (as it was previously
> mounted with LZMA compression, and thus rightly retains this setting
> upon remounting it).

It should not. What makes you think it retained the setting? Mounting
the (existing) file system with --compress <algorithm> should change the
compression of all new objects. There is no need to worry about
decompressing the old LZMA-compressed objects; that only takes a small
amount of memory.

> Can anyone foresee issues running with --compress none? Obviously traffic
> is increased; but memory (and CPU) is tight on these small VMs. Would
> bzip/zlib be a better option?

Yes. Both require orders of magnitude less memory than lzma, so I'd
definitely try one of those instead of not compressing at all.

Best,
-Nikolaus

-- 
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

