Re: [s3ql] Re: S3QL 3.3 performance

2021-04-15 Thread Nikolaus Rath
On Apr 15 2021, Grunthos  wrote:
> OK... I may have made a silly mistake: I have the bucket mounted elsewhere
> (for a copy) while running clone-fs. No writes are occurring, but I believe
> the metadata still gets uploaded periodically.
>
> Am I therefore now likely to have a corrupt clone? clone-fs doesn't appear to
> have 'rsync-like' features, so AFAICT I can't just unmount and re-run it to
> pick up the changes. Any thoughts?

If there were no changes, then the metadata has not changed and thus
will not be uploaded.

If there were minor changes (e.g. file atime but no file contents), then
running fsck.s3ql on the clone should clear the dirty flag without
finding any problems. After that, you're good to go.
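
If it helps, a minimal sketch of that step (the local path is made up; add
--authfile if your credentials/passphrase live somewhere other than the
default ~/.s3ql/authinfo2):

    # check the cloned copy; with only atime-style changes this should finish
    # without reporting problems and clear the dirty flag
    fsck.s3ql local:///data/s3ql-clone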

Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

 »Time flies like an arrow, fruit flies like a Banana.«



Re: [s3ql] Re: S3QL 3.3 performance

2021-04-15 Thread Grunthos
OK... I may have made a silly mistake: I have the bucket mounted elsewhere 
(for a copy) while running clone-fs. No writes are occurring, but I believe 
the metadata still gets uploaded periodically.

Am I therefore now likely to have a corrupt clone? clone-fs doesn't appear to 
have 'rsync-like' features, so AFAICT I can't just unmount and re-run it to 
pick up the changes. Any thoughts?

On Thursday, April 15, 2021 at 8:42:42 PM UTC+10 Grunthos wrote:

> On Thursday, April 15, 2021 at 5:28:56 AM UTC+10 niko...@rath.org wrote:
>
>> You can try to adjust the number of threads, but apart from that I 
>> think you fundamentally cannot do any better than this (neither with 
>> other tools nor with major code changes to S3QL). 
>>
>>
> Yep! Seems like you are correct: `clone-fs.py` (16 threads) downloads faster
> than `gsutil -m rsync...`. Still depressingly slow, though, which I assume is
> a limit imposed by my ISP and/or Google; the CPU is cruising at about 30%,
> and the link runs at the same speed with 8 or 16 threads. FWIW, it does seem
> that the fastest option at the moment is to clone the fs and then restore
> locally; the mount/rsync option is much slower. But I will know more once I
> have actually restored from the local copy.
>
>



Re: [s3ql] Re: S3QL 3.3 performance

2021-04-15 Thread Grunthos
On Thursday, April 15, 2021 at 5:28:56 AM UTC+10 niko...@rath.org wrote:

> You can try to adjust the number of threads, but apart from that I 
> think you fundamentally cannot do any better than this (neither with 
> other tools nor with major code changes to S3QL). 
>
>
Yep! Seems like you are correct: `clone-fs.py` (16 threads) downloads faster 
than `gsutil -m rsync...`. Still depressingly slow, though, which I assume is 
a limit imposed by my ISP and/or Google; the CPU is cruising at about 30%, 
and the link runs at the same speed with 8 or 16 threads. FWIW, it does seem 
that the fastest option at the moment is to clone the fs and then restore 
locally; the mount/rsync option is much slower. But I will know more once I 
have actually restored from the local copy.
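
Roughly what I mean by the two approaches, with made-up bucket names and paths 
(clone-fs.py lives in S3QL's contrib/ directory; check its --help for the 
exact spelling of the thread option):

    # Option 1: clone the raw S3QL objects to local storage, then mount the
    # local copy and restore from it
    python3 clone-fs.py --threads 16 gs://my-bucket/s3ql local:///data/s3ql-clone
    fsck.s3ql local:///data/s3ql-clone      # clears the dirty flag on the clone
    mount.s3ql local:///data/s3ql-clone /mnt/s3ql
    rsync -a /mnt/s3ql/ /srv/restore/
    umount.s3ql /mnt/s3ql

    # Option 2: mount the remote bucket directly and rsync from it
    # (much slower for me)
    mount.s3ql gs://my-bucket/s3ql /mnt/s3ql
    rsync -a /mnt/s3ql/ /srv/restore/
    umount.s3ql /mnt/s3ql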
