Rupert Gallagher [r...@protonmail.com] wrote:
> No, I am not using USB.

rsync between disks should be very fast. You're going from the SATA disk to the
NVMe one? NetBSD or FreeBSD or somebody made some speed improvements to NVMe
handling that would be worth reviewing; I can't remember the details right now.
Anyway, 10 GB/hour sounds extremely slow for an NVMe SSD, way too slow compared
to anything I have experienced in recent memory.
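
One thing that might help narrow it down: rsync can report its own transfer
rate while it runs. A rough sketch, with the paths below only as placeholders
for your actual source and destination:

rsync -a --progress /path/to/source/ /mnt/destination/

The trailing slashes mean "copy the contents of" rather than the directory
itself, and --progress prints per-file rates, so you can see whether it is slow
from the start or degrades over time.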

It might also be interesting to try cp between the filesystems, or tar,

such as: cp -r /usr/bin /mnt/usr/bin
or: tar cf - -C /usr/bin . | tar xpf - -C /mnt/usr/bin
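
To turn either of those into a number you can compare against the 10 GB/hour
figure, you could time the copy of a directory of known size. /usr/src and
/mnt/usr/src-test here are just example paths; use whatever sizeable tree you
have handy:

du -sh /usr/src
time cp -Rp /usr/src /mnt/usr/src-test

Dividing the size by the elapsed time gives a rough MB/s figure for the cp
path, which you can hold up against what rsync manages.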

Also, what speeds are you getting on the destination filesystem? Something like

dd count=1 bs=1G if=/dev/zero of=/mnt/test conv=fsync

should give you a rough idea of what a 1 GB write costs.
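
The same kind of check on the source side might be worth doing too, since the
copy can only go as fast as the slower of the two disks. A quick sketch,
reading an existing large file back (the path is only an example; anything
around a gigabyte will do):

dd bs=1m if=/path/to/some/large/file of=/dev/null

Keep in mind that a file already sitting in the buffer cache will read back
unrealistically fast, so pick something that hasn't been touched recently.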

Here's a 1 GB write on my Samsung 845DC Pro, which is one of my all-time
favorite SATA SSDs for reliability:

# dd count=1 bs=1G if=/dev/zero of=test conv=fsync 
1+0 records in
1+0 records out
1073741824 bytes transferred in 2.906 secs (369450372 bytes/sec)

Here's the same test on a Crucial M500:

# dd count=1 bs=1G if=/dev/zero of=test conv=fsync
1+0 records in
1+0 records out
1073741824 bytes transferred in 4.356 secs (246484472 bytes/sec)

It's not clear to me how much the buffer cache affects this, but I'm hoping
conv=fsync helps here. In a weird twist, tests like this run consistently
faster with conv=fsync than without, so my understanding isn't that great.
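
One way to take the buffer cache out of the picture entirely, at least for
reads, is to go through the raw character device, which does unbuffered I/O
straight to the disk. A sketch, where /dev/rsd1c is only a placeholder for
whichever raw device your destination disk actually is (check dmesg or
disklabel):

dd bs=1m count=1024 if=/dev/rsd1c of=/dev/null

That reads about 1 GB sequentially off the device itself, so it shows what the
hardware can do for sequential reads independent of the filesystem. I wouldn't
try the equivalent write test against the raw device of a disk with data on it,
obviously.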
