On 10/9/21 01:24, Dan Mick wrote:
Ceph has been completely ported to build and run on ARM hardware
(architecture arm64/aarch64), but we're unable to test it due to lack of
hardware. We propose to purchase a significant number of ARM servers
(50+?) to install in our upstream Sepia test lab to
If there's intent to use this for performance comparisons between releases,
I would propose that you include rotational drive(s) as well. It will be
quite some time before everyone is running pure NVMe/SSD clusters, with the
storage costs associated with that type of workload, and this should be
- Roughly how large is the expanded untarred folder, and roughly how many
files?
- Also roughly, what cluster throughput and bandwidth do you see when
untarring the file? You could observe this from ceph status.
- Is the cluster running on the same client machine? HDD or SSD?
/Maged
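To expand on the ceph status suggestion above: the cluster-wide client throughput shows up in the "io" section of the output. The snippet below uses made-up sample output purely for illustration; on a real cluster you would just run "ceph status" (or "ceph -w" for a live stream) with a reachable monitor and keyring.

```shell
# Made-up sample of the relevant 'ceph status' lines, saved for illustration.
# On a live cluster, run:  ceph status   (or 'ceph -w' to stream updates)
cat > /tmp/ceph_status_sample.txt <<'EOF'
  io:
    client:   120 MiB/s rd, 35 MiB/s wr, 890 op/s rd, 400 op/s wr
EOF

# The 'client:' line is the one to watch while the untar is running.
grep 'client:' /tmp/ceph_status_sample.txt
```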
On
I was testing with appending mailbox files. But I would assume that getting
data from an MDS server that has almost everything in cache, instead of
reading it from different OSDs, is always faster.
>
> That is odd; I am running some game servers (ARK Survival) and the RBD
> mount
On Fri, Oct 8, 2021 at 17:21, Sean wrote:
> I don’t think this is possible, since CephFS is a network-mounted
> filesystem. The inotify feature requires the kernel to be aware of file
> system changes. If the kernel is unaware of changes in a tracked directory,
> which is the case for all network filesystems, it cannot deliver inotify
> events for that directory.
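The usual workaround when inotify cannot fire is to poll the directory and diff mtime snapshots. A minimal sketch in Python; it assumes only that os.listdir/os.stat work on the mount (which they do for CephFS), and all function names here are illustrative, not part of any Ceph API:

```python
import os
import time

def snapshot(path):
    """Map filename -> mtime for every entry in *path*."""
    state = {}
    for name in os.listdir(path):
        try:
            state[name] = os.stat(os.path.join(path, name)).st_mtime
        except FileNotFoundError:
            pass  # entry vanished between listdir() and stat()
    return state

def diff_snapshots(prev, cur):
    """Return inotify-like (event, name) pairs between two snapshots."""
    events = [("created", n) for n in sorted(cur.keys() - prev.keys())]
    events += [("deleted", n) for n in sorted(prev.keys() - cur.keys())]
    events += [("modified", n) for n in sorted(cur.keys() & prev.keys())
               if cur[n] != prev[n]]
    return events

def watch(path, interval=1.0):
    """Poll *path* forever, printing change events as they appear.

    A polling fallback for filesystems where inotify does not fire,
    e.g. changes made by other clients of a network mount.
    """
    prev = snapshot(path)
    while True:
        time.sleep(interval)
        cur = snapshot(path)
        for event in diff_snapshots(prev, cur):
            print(event)
        prev = cur
```

Polling is obviously coarser and more expensive than kernel notification, but unlike inotify it sees changes made by any CephFS client, not just the local one.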