`copy-file/deduplicate, sparse files` tests fail on Asahi Linux. I do not have adequate grounding on this topic, but I ran a few empirical tests with changed inputs.
The original tests create a file in `/tmp` and fail. If the file is instead placed on disk, some (all?) tests pass. If the inputs are instead changed from 2*4096 to 2*16384, some tests pass. tmpfs on Apple Silicon has a block size of 16384, which I'm guessing corresponds to its 16 KiB RAM page size. I'm inferring that these tests have a hidden dependency on the block size of the filesystem that contains the tested files. It may be desirable to account for this dependency.
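
One way to account for it would be to derive the test input size from the filesystem that actually holds the file, rather than hard-coding 4096. A minimal sketch in Rust (the path and sizing below are illustrative assumptions, not the project's actual test code), reading `st_blksize` through `std::os::unix::fs::MetadataExt`:

```rust
// Minimal sketch (not the project's test code): size the test input from the
// block size of the filesystem that actually holds the file, instead of
// hard-coding 4096.
use std::fs::File;
use std::io::Write;
use std::os::unix::fs::MetadataExt;

fn main() -> std::io::Result<()> {
    // Hypothetical test file location; the real tests may use a tempdir helper.
    let path = "/tmp/sparse_test_input";
    let mut file = File::create(path)?;

    // st_blksize: preferred I/O block size of the containing filesystem,
    // e.g. 4096 on a typical ext4 /tmp, 16384 on Apple Silicon tmpfs.
    let block_size = file.metadata()?.blksize() as usize;

    // "Two blocks" of data, mirroring the original 2*4096 input but without
    // the hidden assumption that a block is 4 KiB.
    let data = vec![0xAAu8; 2 * block_size];
    file.write_all(&data)?;

    println!("block size = {block_size}, wrote {} bytes", data.len());
    Ok(())
}
```

With that kind of sizing, the same "two blocks" input would be 8192 bytes on an ext4 `/tmp` and 32768 bytes on Asahi's 16 KiB-page tmpfs.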
