Hey.

Not sure if this is valuable input for the devs, but here's a rough
real-world report about performance:

I'm copying (via send/receive) a large filesystem (~7TB) from one
HDD over to another.
Both devices are connected via USB3, and each btrfs sits on top of
dm-crypt.
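
For reference, the transfer is essentially a pipeline like this (the
paths and snapshot names here are just placeholders, not my real
ones):

    # read-only snapshot on the source, then stream it to the target;
    # both filesystems sit on dm-crypt devices
    btrfs subvolume snapshot -r /mnt/src/data /mnt/src/data.ro
    btrfs send /mnt/src/data.ro | btrfs receive /mnt/dst/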

It's already obvious that things are slower than under "normal"
circumstances, but from watching iotop for a while (and the best disk
IO measuring tool ever: the LEDs on the USB/SATA bridge) it seems
there are repeated periods when basically no IO reaches the disk.
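
For the record, what I'm watching there is roughly this (only
processes that actually do IO, refreshed every 5 seconds):

    # -o: only show processes/threads currently doing IO
    # -d 5: refresh every 5 seconds
    iotop -o -d 5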

There seems to be a repeating pattern like this:
- First, there is some heavy disk IO (200-250 M/s), mostly on the
btrfs send and receive processes.
- Then there are phases when send/receive don't seem to do anything,
and either btrfs-transaction or dmcrypt_write eats up all the IO (I
mean the IO% column shown in iotop; btrfs-transaction shows up far
less often and at a much lower IO% than dmcrypt_write, which usually
sits at 99%), while the total/actual disk read and write are
basically zero during that time.

It kinda feels as if a large buffer gets filled first, and once it is
full, dm-crypt starts encrypting it, during which there is no disk IO
(since the writes wait for the encryption).
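
If anyone wants me to check the writeback side of that theory, I
guess the relevant things to look at would be something like this
(just my assumption of where to look, I haven't dug into it yet):

    # watch how much dirty data is waiting to be written back
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

    # the writeback thresholds that control how much can pile up
    sysctl vm.dirty_background_ratio vm.dirty_ratio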

Not sure if this is something that could be optimised, or whether
it's even a non-issue that simply happens while many small files are
being read/written (the data consists of both many small files and
many big files), which might explain why the actual IO sometimes goes
up to >200 M/s (or at least >150 M/s) and sometimes caps out at
around 40-80 M/s.


Obviously, since I use dm-crypt and compression on both devices, it
may be a CPU issue, but it's an 8-core machine with an i7-3612QM CPU
@ 2.10GHz... not the fastest, but not the slowest either... and
looking at top/htop, it happens quite often that there is only very
little CPU utilisation, so it doesn't seem as if the CPU is the
limiting factor here.
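
In case it matters, this is how I'd check whether the crypto itself
could be the bottleneck (assuming the usual aes-xts setup; I haven't
re-run this on that box yet):

    # confirm the CPU exposes the AES-NI flag
    grep -m1 -o aes /proc/cpuinfo

    # measure raw cipher throughput in memory (no disk involved)
    cryptsetup benchmark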



HTH,
Chris.
