I am trying to move data from a local s3ql file system to Amazon S3, so that I can
then mount the destination storage URL with s3ql and see the same data in the cloud
as I do in the local file system.
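(In other words, once the copy succeeds I expect to be able to do something like
this, with placeholder names for the bucket and mountpoint:
mount.s3ql s3://bucket/folder/prefix /path/to/new/mountpoint
and see the same files there.)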
Every time I run clone_fs.py I get the following output:
Enter backend login:
Enter backend password:
Copied 2 objects so far...Uncaught top-level exception:
Traceback (most recent call last):
  File "./clone_fs.py", line 103, in <module>
    main(sys.argv[1:])
  File "./clone_fs.py", line 77, in main
    metadata = src_backend.lookup(key)
  File "/home/phd/s3ql-1.19/src/s3ql/backends/local.py", line 70, in lookup
    raise ChecksumError('Invalid metadata')
ChecksumError: Invalid metadata
I'm not sure whether I'm using the wrong local storage URL or whether the problem is
on the destination side, but no matter what combination of parameters I use, I always
get the same result.
I've tried using:
local:///path/to/s3ql/mountpoint s3://bucket/folder/prefix
local:///path/to/s3ql/backend s3://bucket/folder/prefix
I've also tried the same two combinations after first running mkfs.s3ql against the
cloud destination URL.
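In each case the full invocation was of the form below (with my real paths and bucket
name in place of the placeholders), entering the backend login and password when
prompted:
./clone_fs.py local:///path/to/s3ql/backend s3://bucket/folder/prefix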
After running this, I can see the folder I wanted to write to on the cloud side, and
sometimes a file or two.
I've also tried unmounting the local s3ql file system with umount.s3ql to make sure
all the metadata was in sync, but that didn't help either.
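(That was just:
umount.s3ql /path/to/s3ql/mountpoint
with the same placeholder mountpoint as above.)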
Both my s3ql installation and the clone_fs.py script are from s3ql-1.19.
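If it helps narrow things down, the next thing I was going to try is patching
clone_fs.py around line 77 so that it prints which key is failing before re-raising.
Roughly like this; I'm assuming ChecksumError can be imported from
s3ql.backends.common, but I haven't checked that this is the right module:

from s3ql.backends.common import ChecksumError  # assumption: where ChecksumError is defined

# inside main(), replacing the plain lookup at line 77
try:
    metadata = src_backend.lookup(key)
except ChecksumError:
    # report the offending key so I can inspect that object file directly
    print('Invalid metadata for key: %r' % key)
    raise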
Any ideas?
-Nick Carboni