Re: [s3ql] Recovery using local cache and s3ql_data_ only
On Oct 12 2016, Nikhil Choudhary wrote:

>> The s3ql_passphrase object can be recovered from the master key, but
>> currently there is no code to do that.
>
> Not so good to hear, if this is the roadblock that it sounds like, I'm
> in for a restoration from scratch.

Try the attached patch, it adds a 'recover' option to s3qladm.

$ s3qladm recover local://bucket
Enter master key: KMWg kgYd S5ni K9VQ fgzO tZcB nWvI KA1I q/1d P/ii bMY=
Enter new encryption password:
Confirm new encryption password:

Make sure to try it on a test file system first, obviously!

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«

--
You received this message because you are subscribed to the Google Groups "s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

diff --git a/src/s3ql/adm.py b/src/s3ql/adm.py
--- a/src/s3ql/adm.py
+++ b/src/s3ql/adm.py
@@ -19,7 +19,9 @@
 from datetime import datetime as Datetime
 from getpass import getpass
 from contextlib import contextmanager
+from base64 import b64decode
 import os
+import re
 import shutil
 import functools
 import sys
@@ -48,6 +50,8 @@
                           parents=[pparser])
     subparsers.add_parser("clear", help="delete file system and all data",
                           parents=[pparser])
+    subparsers.add_parser("recover", help="Recover master key",
+                          parents=[pparser])
     subparsers.add_parser("download-metadata",
                           help="Interactively download metadata backups. "
                                "Use only if you know what you are doing.",
@@ -87,6 +91,10 @@
         with get_backend(options, raw=True) as backend:
             return clear(backend, options)

+    if options.action == 'recover':
+        with get_backend(options, raw=True) as backend:
+            return recover(backend, options)
+
     with get_backend(options) as backend:
         if options.action == 'upgrade':
             return upgrade(backend, get_backend_cachedir(options.storage_url,
@@ -167,6 +175,27 @@
     backend['s3ql_passphrase_bak3'] = data_pw
     backend.passphrase = data_pw

+def recover(backend, options):
+    print("Enter master key: ")
+    data_pw = sys.stdin.readline()
+    data_pw = re.sub(r'\s+', '', data_pw)
+    data_pw = b64decode(data_pw)
+    assert len(data_pw) == 32
+
+    if sys.stdin.isatty():
+        wrap_pw = getpass("Enter new encryption password: ")
+        if not wrap_pw == getpass("Confirm new encryption password: "):
+            raise QuietError("Passwords don't match")
+    else:
+        wrap_pw = sys.stdin.readline().rstrip()
+    wrap_pw = wrap_pw.encode('utf-8')
+
+    backend = ComprencBackend(wrap_pw, ('lzma', 2), backend)
+    backend['s3ql_passphrase'] = data_pw
+    backend['s3ql_passphrase_bak1'] = data_pw
+    backend['s3ql_passphrase_bak2'] = data_pw
+    backend['s3ql_passphrase_bak3'] = data_pw
+
 def clear(backend, options):
     print('I am about to delete all data in %s.' % backend,
           'This includes any S3QL file systems as well as any other stored objects.',
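[Editorial note] The core of the patch is the master-key handling: the key that s3qladm prints is base64, rendered in space-separated groups, and must decode to exactly 32 bytes. A minimal standalone sketch of that normalization step (the all-zero key below is a dummy for illustration, not a real master key):

```python
from base64 import b64decode
import re

def parse_master_key(text):
    """Normalize a master key as printed by s3qladm: strip the group
    spacing and newlines, base64-decode, and check the length."""
    raw = re.sub(r'\s+', '', text)
    key = b64decode(raw)
    if len(key) != 32:  # s3ql master keys are 256 bits
        raise ValueError('master key must decode to exactly 32 bytes')
    return key

# Dummy all-zero key in the same grouped rendering:
dummy = 'AAAA AAAA AAAA AAAA AAAA AAAA AAAA AAAA AAAA AAAA AAA='
assert parse_master_key(dummy) == b'\x00' * 32
```

The patch itself uses a bare `assert` for the length check; raising an explicit error, as above, gives a clearer message when the pasted key is truncated or mistyped.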
Re: [s3ql] Recovery using local cache and s3ql_data_ only
On Oct 12 2016, Nikhil Choudhary wrote:

>> On Oct 12, 2016, at 3:41 PM, Nikolaus Rath wrote:
>>
>> So the only objects that are lost are s3ql_metadata, s3ql_passphrase
>> and s3ql_seq_no, no?
>
> Hello Nikolaus, thanks for the quick response! This is correct.

>> As long as at least one s3ql_seq_no object is present, this should not
>> pose a problem.
>
> Does the particular object matter or is any one sufficient? I
> recovered s3ql_seq_no_[4,5,6,8,10,13]

Any one will do.

>> If the s3ql_metadata object is missing, fsck.s3ql can recreate it from
>> the .db file. You may have to comment out one or two lines in fsck.s3ql
>> to deal with the complete absence of existing metadata.
>
> Good to hear, I looked into the patches that you've posted previously
> for prioritizing local cache.

>> The s3ql_passphrase object can be recovered from the master key, but
>> currently there is no code to do that.
>
> Not so good to hear, if this is the roadblock that it sounds like, I'm
> in for a restoration from scratch.

I'll see if I can find a few minutes to send you a patch. It shouldn't
be too complicated.

>> Sounds like a terrible idea. What are you trying to achieve with this?
>
> I'm not a fan of convoluted setups either, they seem prone to
> breakage. That said, this is a way to get s3ql's dedupe, encryption,
> and caching capabilities working with ACD.

Yeah, but why don't you simply use S3 instead of ACD?

Best,
-Nikolaus
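[Editorial note] Since any one surviving s3ql_seq_no object is sufficient, a quick way to see what survived is to scan the bucket for matching object names. The helper below is a hypothetical sketch for a local:// backend, assuming the sequence number is visible in the stored file names (the actual on-disk layout of the local backend may differ):

```python
import os
import re

def surviving_seq_nos(bucket_dir):
    """Collect the numbers of all s3ql_seq_no_* objects found under
    bucket_dir, in ascending order."""
    nums = []
    for _root, _dirs, files in os.walk(bucket_dir):
        for name in files:
            m = re.search(r's3ql_seq_no_(\d+)', name)
            if m:
                nums.append(int(m.group(1)))
    return sorted(nums)

# With the objects recovered in this thread, this would report
# [4, 5, 6, 8, 10, 13]; recovery can proceed as long as the list
# is non-empty.
```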
Re: [s3ql] Recovery using local cache and s3ql_data_ only
Hello Nikolaus, thanks for the quick response!

> So the only objects that are lost are s3ql_metadata, s3ql_passphrase
> and s3ql_seq_no, no?

Correct.

> As long as at least one s3ql_seq_no object is present, this should not
> pose a problem.

Does the particular object matter or is any one sufficient? I
recovered s3ql_seq_no_[4,5,6,8,10,13]

> If the s3ql_metadata object is missing, fsck.s3ql can recreate it from
> the .db file. You may have to comment out one or two lines in fsck.s3ql
> to deal with the complete absence of existing metadata.

Good to hear, I looked into the patches that you've posted previously
for prioritizing local cache.

> The s3ql_passphrase object can be recovered from the master key, but
> currently there is no code to do that.

Not so good to hear, if this is the roadblock that it sounds like, I'm
in for a restoration from scratch.

> Sounds like a terrible idea. What are you trying to achieve with this?

I'm not a fan of convoluted setups either, they seem prone to
breakage. That said, this is a way to get s3ql's dedupe, encryption,
and caching capabilities working with ACD. Surprisingly, this test
configuration's performance is more than sufficient for my use:
personal video that doesn't change and will only be infrequently read
at 5-6 Mbit/s, with new files added perhaps once a week. The
user-facing functionality gap is that the file system is offline while
the local files are synced to ACD; otherwise I haven't run across any
issues (other than user stupidity). If this continues to work, I'm
interested in learning to code a bit more and taking a stab at adding a
real ACD backend to s3ql.

Thanks for s3ql, it really does look great.

-Nikhil