Hi Reed,
To add to this comment by Weiwen:
On 28.05.21 13:03, 胡 玮文 wrote:
Have you tried just starting multiple rsync processes simultaneously to transfer
different directories? Distributed systems like Ceph often benefit from more
parallelism.
When I migrated from XFS on iSCSI (legacy system,
There is also a longstanding belief that using cpio saves you context switches
and avoids pushing the data itself through a pipe. YMMV.
> On May 28, 2021, at 7:26 AM, Reed Dier wrote:
>
> I had it on my list of things to possibly try, a tar in | tar out copy to see
> if it yielded different results.
>
> On its
I had it on my list of things to possibly try, a tar in | tar out copy to see
if it yielded different results.
On its face, it seems like cp -a is getting ever so slightly better speed, but
not a clear night-and-day difference.
I will definitely look into this and report back any findings.
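The tar in | tar out copy mentioned above can be sketched as follows (mountpoints hypothetical):

```shell
#!/bin/sh
# Hypothetical paths -- substitute the real mounts.
SRC=/mnt/src
DST=/mnt/dst

# One tar writes the tree to stdout, the other extracts it on the far side.
# -C changes directory first; -p preserves permissions on extraction.
tar -C "$SRC" -cf - . | tar -C "$DST" -xpf -
```

Compared with cp -a, this avoids per-file open/close overhead on the destination path lookup side, which is sometimes where the pipeline pattern wins.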
I guess I should probably have been more clear: this is one pool of many, so
the other OSDs aren't idle.
So I don't necessarily think that the PG bump would be the worst thing to try,
but it's definitely not as bad as I may have made it sound.
Thanks,
Reed
> On May 27, 2021, at 11:37 PM,
Hi Reed,
Have you tried just starting multiple rsync processes simultaneously to transfer
different directories? Distributed systems like Ceph often benefit from more
parallelism.
Weiwen Hu
> On May 28, 2021, at 03:54, Reed Dier wrote:
>
> Hoping someone may be able to help point out where my
On Thu, May 27, 2021 at 02:54:00PM -0500, Reed Dier wrote:
> Hoping someone may be able to help point out where my bottleneck(s) may be.
>
> I have an 80TB kRBD image on an EC8:2 pool, with an XFS filesystem on top of
> that.
> This was not an ideal scenario, rather it was a rescue mission to