There is an rgw_max_put_size option, which defaults to 5GB and limits the size of a single PUT request. But in that case the HTTP response would be 400 EntityTooLarge, not 416. For multipart uploads there is also rgw_multipart_part_upload_limit, which defaults to 10000 parts and produces a 416 InvalidRange error when exceeded. By default, though, s3cmd does multipart uploads with 15MB parts, so your 11GB object should only need ~750 parts.
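
If you do ever need to raise those limits, a ceph.conf sketch like the following should do it (the section name client.rgw.gateway1 and the values are illustrative, not taken from your setup; restart radosgw afterwards). Alternatively, s3cmd can simply be told to use bigger parts so fewer are needed:

  # ceph.conf on the radosgw host -- option names are real, values are examples
  [client.rgw.gateway1]
  rgw max put size = 10737418240            # 10 GiB instead of the 5 GiB default
  rgw multipart part upload limit = 20000   # double the default 10000 parts

  # client-side alternative: larger multipart parts (s3cmd option)
  s3cmd put --multipart-chunk-size-mb=100 CentOS-7-x86_64-Everything-1810.iso s3://mybucket/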

Are you able to upload smaller objects successfully? These InvalidRange errors can also result from failures to create any rados pools that didn't exist already. If that's what you're hitting, you'd get the same InvalidRange errors for smaller object uploads, and you'd also see messages like this in your radosgw log:

> rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
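
To rule that out, it's worth checking whether the rgw data pools were actually created and how close you are to the per-OSD PG cap that message mentions. Roughly (the pool names assume a default zone, and the config query assumes you run it on a mon host; adjust to your cluster):

  # with default placement you'd expect pools like default.rgw.buckets.data
  ceph osd pool ls detail

  # the cap referred to by the log message above (run on a mon host)
  ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd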

On 3/7/19 12:21 PM, Jan Kasprzak wrote:
        Hello, Ceph users,

does radosgw have an upper limit on object size? I tried to upload
an 11GB file using s3cmd, but it failed with an InvalidRange error:

$ s3cmd put --verbose centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso s3://mybucket/
INFO: No cache file found, creating it.
INFO: Compiling list of local files...
INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time...
INFO: Summary: 1 local files to upload
WARNING: CentOS-7-x86_64-Everything-1810.iso: Owner username not known. Storing UID=108 instead.
WARNING: CentOS-7-x86_64-Everything-1810.iso: Owner groupname not known. Storing GID=108 instead.
ERROR: S3 error: 416 (InvalidRange)

$ ls -lh centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso
-rw-r--r--. 1 108 108 11G Nov 26 15:28 centos/7/isos/x86_64/CentOS-7-x86_64-Everything-1810.iso

Thanks for any hint on how to increase the limit.

-Yenya
