On Mon, 15 Sep 2025, at 15:53, '[email protected]' via s3ql wrote:
> On Monday, September 15, 2025 at 4:05:32 AM UTC-4 Nikolaus Rath wrote:
>> On Mon, 15 Sep 2025, at 02:36, '[email protected]' via s3ql wrote:
>>>> > Running fsck.s3ql --force on this filesystem takes about 4 days, is there
>>>> > a way to avoid doing this?
>>>> 
>>>> Which steps take the longest? Depending on that, --fast may or may not 
>>>> help.
> 
> The current fsck seems to be taking about the same time as the previous one, 
> the last one was using s3ql 5.3 with the --fast option, and the longest steps 
> were:
> 
> Checking DB integrity, approx 1 day

Nothing can be done here, unfortunately. This is just SQLite doing its 
thing. The only thing that might help is a faster CPU or putting the DB on a 
faster disk.

> Uploading metadata, approx 3 days

That's only after an unclean unmount, right? It should be much faster if you 
run fsck.s3ql --force on a cleanly unmounted filesystem.

The problem is that fsck.s3ql needs to re-upload the entire database (109 GB in 
your case), because the information about which parts were modified was lost 
when mount.s3ql crashed.

Is the upload time (3 days for 109 GB, maybe 50 GB after compression) 
consistent with your bandwidth?
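For a quick sanity check, here is the back-of-the-envelope arithmetic (assuming roughly 50 GB transferred over 3 days):

```python
# Effective upload rate for ~50 GB over 3 days (assumed figures).
gb_uploaded = 50                 # estimated size after compression, GB
seconds = 3 * 24 * 3600          # 3 days in seconds

bytes_per_s = gb_uploaded * 1e9 / seconds
mbit_per_s = bytes_per_s * 8 / 1e6
print(f"{bytes_per_s / 1e3:.0f} kB/s = {mbit_per_s:.1f} Mbit/s")
# prints "193 kB/s = 1.5 Mbit/s"
```

So if your uplink is much faster than ~1.5 Mbit/s, something other than bandwidth is the bottleneck.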

If bandwidth is the limiting factor, then in theory this could be sped up by 
calculating a checksum of each DB block, comparing it with the checksum of the 
block stored in remote storage, and only re-uploading if the checksum changed. 
That'd require someone to write the code, of course :-).
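As a rough illustration of the idea (not s3ql's actual upload path; the block size, the function, and the way remote checksums are obtained are all hypothetical), the comparison step could look something like this:

```python
import hashlib

def blocks_to_upload(db_path, remote_checksums, block_size=4 * 1024 * 1024):
    """Return (index, data) pairs for DB blocks whose checksum differs
    from the checksum recorded for the remote copy.

    `remote_checksums` maps block index -> hex digest; how those digests
    are stored/fetched from the backend is out of scope for this sketch.
    """
    changed = []
    with open(db_path, "rb") as f:
        idx = 0
        while True:
            data = f.read(block_size)
            if not data:
                break
            digest = hashlib.sha256(data).hexdigest()
            if remote_checksums.get(idx) != digest:
                # Block is new or modified since the last upload.
                changed.append((idx, data))
            idx += 1
    return changed
```

With most blocks unchanged after a crash, only the modified fraction of the 109 GB database would need to go over the wire.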

Best,
-Nikolaus

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/s3ql/55619450-2d24-4b92-a38f-6368a110fb4d%40app.fastmail.com.