Re: [s3ql] Recovery using local cache and s3ql_data_ only
On Oct 12 2016, Nikhil Choudhary wrote:
>> The s3ql_passphrase object can be recovered from the master key, but
>> currently there is no code to do that.
>
> Not so good to hear, if this is the roadblock that it sounds like, I’m
> in for a restoration from scratch.

Try the attached patch, it adds a 'recover' action to s3qladm:

$ s3qladm recover local://bucket
Enter master key: KMWg kgYd S5ni K9VQ fgzO tZcB nWvI KA1I q/1d P/ii bMY=
Enter new encryption password:
Confirm new encryption password:

Make sure to try it on a test file system first, obviously!

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«

--
You received this message because you are subscribed to the Google Groups "s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email to s3ql+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

diff --git a/src/s3ql/adm.py b/src/s3ql/adm.py
--- a/src/s3ql/adm.py
+++ b/src/s3ql/adm.py
@@ -19,7 +19,9 @@
 from datetime import datetime as Datetime
 from getpass import getpass
 from contextlib import contextmanager
+from base64 import b64decode
 import os
+import re
 import shutil
 import functools
 import sys
@@ -48,6 +50,8 @@
                       parents=[pparser])
 subparsers.add_parser("clear", help="delete file system and all data",
                       parents=[pparser])
+subparsers.add_parser("recover", help="Recover master key",
+                      parents=[pparser])
 subparsers.add_parser("download-metadata",
                       help="Interactively download metadata backups. "
                            "Use only if you know what you are doing.",
@@ -87,6 +91,10 @@
     with get_backend(options, raw=True) as backend:
         return clear(backend, options)
 
+    if options.action == 'recover':
+        with get_backend(options, raw=True) as backend:
+            return recover(backend, options)
+
     with get_backend(options) as backend:
         if options.action == 'upgrade':
             return upgrade(backend, get_backend_cachedir(options.storage_url,
@@ -167,6 +175,27 @@
     backend['s3ql_passphrase_bak3'] = data_pw
     backend.passphrase = data_pw
 
+def recover(backend, options):
+    print("Enter master key: ")
+    data_pw = sys.stdin.readline()
+    data_pw = re.sub(r'\s+', '', data_pw)
+    data_pw = b64decode(data_pw)
+    assert len(data_pw) == 32
+
+    if sys.stdin.isatty():
+        wrap_pw = getpass("Enter new encryption password: ")
+        if not wrap_pw == getpass("Confirm new encryption password: "):
+            raise QuietError("Passwords don't match")
+    else:
+        wrap_pw = sys.stdin.readline().rstrip()
+    wrap_pw = wrap_pw.encode('utf-8')
+
+    backend = ComprencBackend(wrap_pw, ('lzma', 2), backend)
+    backend['s3ql_passphrase'] = data_pw
+    backend['s3ql_passphrase_bak1'] = data_pw
+    backend['s3ql_passphrase_bak2'] = data_pw
+    backend['s3ql_passphrase_bak3'] = data_pw
+
 def clear(backend, options):
     print('I am about to delete all data in %s.' % backend,
           'This includes any S3QL file systems as well as any other stored objects.',
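The key handling in the patch's recover() can be sketched on its own, for anyone who wants to verify a saved master key before running the tool. This is a hypothetical stand-alone helper mirroring the patch (name and error handling are mine, not part of s3qladm):

```python
import base64
import re

def parse_master_key(text):
    """Normalize and decode a master key as printed by s3qladm.

    The key is displayed in whitespace-separated base64 groups, so all
    whitespace is stripped before decoding. The decoded result must be
    the 32-byte raw key, as the patch's assert enforces.
    """
    cleaned = re.sub(r'\s+', '', text)
    key = base64.b64decode(cleaned)
    if len(key) != 32:
        raise ValueError('master key must decode to 32 bytes')
    return key
```

For example, the sample key shown in the session above ('KMWg kgYd ... bMY=') is 44 base64 characters with one padding byte, i.e. exactly 32 bytes once decoded, so it passes the length check.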
Re: [s3ql] Recovery using local cache and s3ql_data_ only
On Oct 12 2016, Nikhil Choudhary wrote:
>> On Oct 12, 2016, at 3:41 PM, Nikolaus Rath wrote:
>>
>> So the only objects that are lost are s3ql_metadata, s3ql_passphrase
>> and s3ql_seq_no, no?
>
> Hello Nikolaus, thanks for the quick response! This is correct.

>> As long as at least one s3ql_seq_no object is present, this should not
>> pose a problem.
>
> Does the particular object matter, or is any one sufficient? I
> recovered s3ql_seq_no_[4,5,6,8,10,13]

Any one will do.

>> If the s3ql_metadata object is missing, fsck.s3ql can recreate it from
>> the .db file. You may have to comment out one or two lines in fsck.s3ql
>> to deal with the complete absence of existing metadata.
>
> Good to hear, I looked into the patches that you’ve posted previously
> for prioritizing local cache.

>> The s3ql_passphrase object can be recovered from the master key, but
>> currently there is no code to do that.
>
> Not so good to hear, if this is the roadblock that it sounds like, I’m
> in for a restoration from scratch.

I'll see if I can find a few minutes to send you a patch. It shouldn't
be too complicated.

>> Sounds like a terrible idea. What are you trying to achieve with this?
>
> I’m not a fan of convoluted setups either, they seem prone to
> breakage. That said, this is a way to get s3ql’s dedupe, encryption,
> and caching capabilities working with ACD.

Yeah, but why don't you simply use S3 instead of ACD?

Best,
-Nikolaus
Re: [s3ql] Recovery using local cache and s3ql_data_ only
Hello Nikolaus, thanks for the quick response!

> So the only objects that are lost are s3ql_metadata, s3ql_passphrase
> and s3ql_seq_no, no?

Correct.

> As long as at least one s3ql_seq_no object is present, this should not
> pose a problem.

Does the particular object matter, or is any one sufficient? I
recovered s3ql_seq_no_[4,5,6,8,10,13]

> If the s3ql_metadata object is missing, fsck.s3ql can recreate it from
> the .db file. You may have to comment out one or two lines in fsck.s3ql
> to deal with the complete absence of existing metadata.

Good to hear, I looked into the patches that you’ve posted previously
for prioritizing local cache.

> The s3ql_passphrase object can be recovered from the master key, but
> currently there is no code to do that.

Not so good to hear, if this is the roadblock that it sounds like, I’m
in for a restoration from scratch.

> Sounds like a terrible idea. What are you trying to achieve with this?

I’m not a fan of convoluted setups either; they seem prone to
breakage. That said, this is a way to get s3ql’s dedupe, encryption,
and caching capabilities working with ACD. Surprisingly, this test
configuration’s performance is more than sufficient for my use:
personal video that doesn’t change and will only be infrequently read
at 5-6 Mbit, with new files added perhaps once a week. The user-facing
functionality gap is that the filesystem is offline while the local
files are synced to ACD; otherwise I haven’t run across any issues
(other than user stupidity). If this continues to work, I’m interested
in learning to code a bit more and taking a stab at adding a real ACD
backend to s3ql.

Thanks for s3ql, it really does look great.

-Nikhil
Re: [s3ql] Recovery using local cache and s3ql_data_ only
On Oct 12 2016, Nikhil Choudhary wrote:
> Hi folks,
>
> Is it possible to recover a filesystem with an up-to-date local cache
> (.db and .params files), s3ql_data_ files, and the encryption master
> key?

That depends on what you want to recover from.

> I’m using s3ql with the local storage backend (details below) - after
> cleanly unmounting the filesystem, I made a critical error and
> incorrectly deleted the s3ql files in the backend directory, except
> the s3ql_data_ directory. Unfortunately, extundelete and other
> utilities were only able to grab some of the older s3ql_metadata_bak
> and s3ql_seq_no files, not the most recent s3ql_metadata file or the
> passphrase file.

So the only objects that are lost are s3ql_metadata, s3ql_passphrase
and s3ql_seq_no, no?

As long as at least one s3ql_seq_no object is present, this should not
pose a problem.

If the s3ql_metadata object is missing, fsck.s3ql can recreate it from
the .db file. You may have to comment out one or two lines in fsck.s3ql
to deal with the complete absence of existing metadata.

The s3ql_passphrase object can be recovered from the master key, but
currently there is no code to do that.

> Background: I’m testing s3ql along with overlay and acd_cli’s FUSE
> mount for Amazon Cloud Drive. s3ql’s local storage backend points to
> an overlay mountpoint with acd_cli as a read-only layer and a local
> directory as the rw layer. Files are copied to the s3ql mountpoint,
> and periodically I unmount s3ql and acd, sync the local s3ql_data_
> directory to acd s3ql_data_, delete the local contents of s3ql_data_,
> and re-mount acd and s3ql.

Sounds like a terrible idea. What are you trying to achieve with this?

Best,
-Nikolaus
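As a quick sanity check before attempting this kind of recovery, the local backend directory can be scanned to see which object categories actually survived. A minimal sketch for the local backend only (the helper name and grouping are mine, not part of s3ql):

```python
import os

def surviving_objects(backend_dir):
    """Group surviving S3QL objects in a local-backend directory by kind.

    Returns a dict mapping 'seq_no', 'metadata', 'passphrase' and 'data'
    to sorted lists of matching names. Recovery needs at least one
    seq_no object; metadata can be rebuilt from the cached .db file,
    and the passphrase object from the master key.
    """
    kinds = {'seq_no': [], 'metadata': [], 'passphrase': [], 'data': []}
    prefixes = [('s3ql_seq_no', 'seq_no'),
                ('s3ql_metadata', 'metadata'),
                ('s3ql_passphrase', 'passphrase'),
                ('s3ql_data', 'data')]
    for name in os.listdir(backend_dir):
        # First matching prefix wins; unrelated files are ignored.
        for prefix, kind in prefixes:
            if name.startswith(prefix):
                kinds[kind].append(name)
                break
    return {k: sorted(v) for k, v in kinds.items()}
```

With the recovered files listed in the original post, this would report six seq_no objects and three metadata backups, but an empty 'passphrase' list, which is exactly the situation the recover patch addresses.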
[s3ql] Recovery using local cache and s3ql_data_ only
Hi folks,

Is it possible to recover a filesystem with an up-to-date local cache
(.db and .params files), s3ql_data_ files, and the encryption master
key?

I’m using s3ql with the local storage backend (details below) - after
cleanly unmounting the filesystem, I made a critical error and
incorrectly deleted the s3ql files in the backend directory, except
the s3ql_data_ directory. Unfortunately, extundelete and other
utilities were only able to grab some of the older s3ql_metadata_bak
and s3ql_seq_no files, not the most recent s3ql_metadata file or the
passphrase file.

The last unmount action:

Oct 12 09:47:07 mount.s3ql[14294]: mount.s3ql[14294:Metadata-Upload-Thread] s3ql.mount.run: File system unchanged, not uploading metadata.
Oct 12 09:47:08 mount.s3ql[14294]: mount.s3ql[14294:MainThread] s3ql.mount.main: FUSE main loop terminated.
Oct 12 09:47:08 mount.s3ql[14294]: mount.s3ql[14294:MainThread] s3ql.mount.unmount: Unmounting file system...
Oct 12 09:47:08 mount.s3ql[14294]: mount.s3ql[14294:MainThread] s3ql.mount.main: File system unchanged, not uploading metadata.
Oct 12 09:47:08 mount.s3ql[14294]: mount.s3ql[14294:MainThread] s3ql.mount.main: Cleaning up local metadata...
Oct 12 09:47:08 mount.s3ql[14294]: mount.s3ql[14294:MainThread] s3ql.mount.main: All done.

Cache dir:
20717568 Oct 12 09:47 local:///s3ql/s3ql-overlay/.db
     197 Oct 12 09:47 local:///s3ql/s3ql-overlay/.params
       0 Oct  9 15:42 mount.s3ql_crit.log

Recovered files:
 2628246 Oct 12 10:45 s3ql_metadata_bak_2
  257749 Oct 12 10:45 s3ql_metadata_bak_4
     650 Oct 12 10:44 s3ql_metadata_bak_8
     276 Oct 12 10:47 s3ql_seq_no_10
     276 Oct 12 10:47 s3ql_seq_no_13
     275 Oct 12 10:49 s3ql_seq_no_4
     338 Oct 12 10:48 s3ql_seq_no_5
     275 Oct 12 10:48 s3ql_seq_no_6
     275 Oct 12 10:48 s3ql_seq_no_8

It would save a few weeks of work to be able to restore access to the
filesystem - any and all help is appreciated!

Thanks,
Nikhil

Background: I’m testing s3ql along with overlay and acd_cli’s FUSE
mount for Amazon Cloud Drive. s3ql’s local storage backend points to
an overlay mountpoint with acd_cli as a read-only layer and a local
directory as the rw layer. Files are copied to the s3ql mountpoint,
and periodically I unmount s3ql and acd, sync the local s3ql_data_
directory to acd s3ql_data_, delete the local contents of s3ql_data_,
and re-mount acd and s3ql.

This setup is still in testing, so everything except file deletion is
scripted, and naturally that’s the portion I erred on - instead of
deleting just the local s3ql_data_ contents, I wiped out the entire
contents of the local s3ql directory. The s3ql_data_ files are
completely up to date on acd and safe, but the metadata, passphrase,
and other files in the root of the s3ql mount are gone.
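Since the mistake happened in the one unscripted step (deleting the local s3ql_data_ contents by hand), that step is the one most worth scripting defensively. A minimal sketch, with the directory layout assumed from the description above (helper name and checks are illustrative, not an s3ql API):

```python
import shutil
from pathlib import Path

def delete_data_contents(local_rw):
    """Remove only the *contents* of the local s3ql_data_ directory,
    never its siblings (s3ql_metadata, s3ql_passphrase, seq_no files).

    Refuses to run unless s3ql_data_ actually exists as a directory,
    so a typo in local_rw cannot wipe the backend root.
    """
    data_dir = Path(local_rw) / 's3ql_data_'
    if not data_dir.is_dir():
        raise RuntimeError('refusing to delete: %s is not a directory' % data_dir)
    for entry in data_dir.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)   # nested object directories
        else:
            entry.unlink()         # plain object files
```

The point of the sketch is simply that the deletion is scoped to one known subdirectory and guarded by an existence check, which is exactly what a manual `rm -rf` in the wrong directory lacks.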