They will upload from the same network segment as the one where the
cluster is located.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---------------------------------------------------

-----Original Message-----
From: Janne Johansson <icepic...@gmail.com> 
Sent: Friday, August 20, 2021 3:52 PM
To: Marc <m...@f1-outsourcing.eu>
Cc: Szabo, Istvan (Agoda) <istvan.sz...@agoda.com>; Ceph Users 
<ceph-users@ceph.io>
Subject: Re: [ceph-users] Re: Max object size GB or TB in a bucket


Den fre 20 aug. 2021 kl 10:45 skrev Marc <m...@f1-outsourcing.eu>:
>
> > > S3cmd chunks 15MB.
>
> There seems to be an s5cmd, which should be much much faster than s3cmd.

There are s4cmd, s5cmd, minio-mc and rclone, all of which have features that
make them "better" than s3cmd in various ways, at the expense of lacking other
options that s3cmd has, which you may or may not use.
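
For example, rclone can parallelize both across objects and within a single
multipart upload. A rough sketch (the remote name "ceph" and the paths are
placeholders, not from this thread):

    # upload 16 objects at a time, 8 multipart chunks in flight per object
    rclone copy /data ceph:mybucket --transfers 16 \
        --s3-upload-concurrency 8 --s3-chunk-size 64M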

One can tune s3cmd a bit with multipart_chunk_size_mb (I use 256 MB if I am
network-wise close to the rgws) and send_chunk / recv_chunk, which I have at
262144, but if you need parallelism at the network layer, other S3 clients are
probably better.
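
For reference, those knobs live in ~/.s3cfg and, with the values mentioned
above, would look roughly like this (tune to your own network):

    # ~/.s3cfg excerpt
    multipart_chunk_size_mb = 256
    send_chunk = 262144
    recv_chunk = 262144

multipart_chunk_size_mb can also be overridden per invocation with
--multipart-chunk-size-mb.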

--
May the most significant bit of your life be positive.
