[ceph-users] Re: radosgw - limit maximum file size

2022-12-09 Thread Boris Behrens
Hi Eric, am I reading it correct, that *rgw_max_put_size *only limits files, that are not uploaded as multipart? My understanding would be, with these default values, that someone can upload a 5TB file in 1 500MB multipart objects. But I want to limit the maximum file size, so no one can uplo

[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2022-12-09 Thread Burkhard Linke
Hi, I would like to add a data point. I rebooted one of our client machines into kernel 5.4.0-135-generic (the latest Ubuntu 20.04 non-HWE kernel) and performed the same test (copying a large file within CephFS). Both the source and target files stay completely in the cache: # fincore bar   RES   PA ...
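
For anyone who wants to repeat the check, a rough sketch of the test described above; the mount point and file names are placeholders, not taken from the thread:

cd /mnt/cephfs                            # placeholder CephFS mount point
cp large_file bar                         # copy a large file within the same CephFS mount
fincore large_file bar                    # util-linux: shows resident page-cache size per file
grep -E '^(Cached|Dirty)' /proc/meminfo   # overall page-cache / dirty-page picture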

[ceph-users] Re: radosgw - limit maximum file size

2022-12-09 Thread Eric Goirand
Hello Boris, I think you may be looking for these RGW daemon parameters: # ceph config help rgw_max_put_size  rgw_max_put_size - Max size (in bytes) of regular (non multi-part) object upload. (size, advanced)  Default: 5368709120  Can update at runtime: true  Services: [rgw]  # ceph config ...
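
Since the option reports "Can update at runtime: true", a hedged sketch of adjusting it through the config database; the 100 GiB value is purely illustrative:

# 100 GiB in bytes: 100 * 1024^3 = 107374182400
ceph config set client.rgw rgw_max_put_size 107374182400
ceph config dump | grep rgw_max_put_size   # confirm the value is stored for the rgw clients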

[ceph-users] radosgw - limit maximum file size

2022-12-09 Thread Boris Behrens
Hi, is it possible to somehow limit the maximum file/object size? I've read that I can limit the size of multipart objects and the number of multipart objects, but I would like to limit the size of each object in the index to 100GB. I haven't found a config or quota value that would fit. Cheers ...
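
There does not appear to be a single "maximum total object size" option, so a cap can only be approximated from the two existing limits. A hedged sketch with illustrative values:

# Plain PUTs and individual parts stay at the 5 GiB default; allowing at most
# 20 parts caps an assembled multipart object at roughly 20 x 5 GiB = 100 GiB.
ceph config set client.rgw rgw_max_put_size 5368709120
ceph config set client.rgw rgw_multipart_part_upload_limit 20

Note that many S3 clients default to much smaller parts (awscli uses 8 MiB), so a low part limit also shrinks the practical maximum object size for those clients unless they raise their part size.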

[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2022-12-09 Thread Burkhard Linke
Hi, On 07.12.22 11:58, Stefan Kooman wrote: On 5/13/22 09:38, Xiubo Li wrote: On 5/12/22 12:06 AM, Stefan Kooman wrote: Hi List, We have quite a few Linux kernel clients for CephFS. One of our customers has been running mainline kernels (CentOS 7 elrepo) for the past two years. They started ...

[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-12-09 Thread Adrien Georget
Hi, We were also affected by this bug when we deployed a new Pacific cluster. Any news about the release of this fix for Ceph Pacific? It looks done for Quincy but not for Pacific. https://github.com/ceph/ceph/pull/47292 Regards, Adrien On 05/10/2022 at 13:21, Anh Phan Tuan wrote: It s ...
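
Until a Pacific backport is released, one possible workaround (a sketch only; file name, service_id, device filters and the size are placeholders) is to pin the DB size explicitly in the OSD service spec instead of relying on the automatic sizing:

cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: osd_explicit_db_size
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  block_db_size: 64424509440   # 60 GiB, in bytes
EOF
ceph orch apply -i osd-spec.yaml --dry-run   # preview what cephadm would create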

[ceph-users] Set async+rdma in Ceph cluster, then stuck

2022-12-09 Thread Mitsumasa KONDO
Hi, I am trying to enable RDMA in a Ceph cluster, but after setting the config it gets stuck... # ceph --version ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable) # ceph config set global ms_type async+rdma # ceph -s 2022-12-09T17:53:04.954+0900 7f85b55b7700 -1 Infiniband verify_pre ...
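
In case it helps with debugging, a heavily hedged sketch of settings that often accompany the messenger switch; the device name mlx5_0 is a placeholder and none of this comes from the thread itself:

# Restrict RDMA to the cluster network first and name the IB device explicitly:
ceph config set global ms_cluster_type async+rdma
ceph config set global ms_async_rdma_device_name mlx5_0
# RDMA registers (pins) memory, so the daemons generally need an unlimited memlock
# limit, e.g. via a systemd drop-in for the OSD units:
#   [Service]
#   LimitMEMLOCK=infinity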

[ceph-users] Re: octopus rbd cluster just stopped out of nowhere (>20k slow ops)

2022-12-09 Thread Boris Behrens
Hello all, @Alex: I am not sure what to look for in /sys/block/<device>/device. There are a lot of files. Is there anything I should check in particular? > You have sysfs access in /sys/block/<device>/device - this will show a lot of settings. You can go to this directory on CentOS vs. Ubuntu, and see if a ...
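
For concreteness, the kind of queue settings under that directory worth diffing between the CentOS and Ubuntu hosts (sdX is a placeholder for the OSD's backing device):

cat /sys/block/sdX/queue/scheduler     # active I/O scheduler
cat /sys/block/sdX/queue/nr_requests   # block-layer queue depth
cat /sys/block/sdX/queue/write_cache   # write back vs. write through
cat /sys/block/sdX/device/queue_depth  # device (SCSI) queue depth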