[ceph-users] Re: radosgw - limit maximum file size

2022-12-09 Thread Boris Behrens
Hi Eric,

Am I reading it correctly that *rgw_max_put_size* only limits files that
are not uploaded as multipart?
My understanding is that, with these default values, someone could upload
a 5TB file as 10,000 500MB multipart parts.

But I want to limit the maximum file size so that no one can upload a file
larger than 100GB, no matter how they size the multipart upload. Having
1000 99GB files is fine for me.
I want to mitigate this RGW bug [1], which currently causes a lot of pain
on our side (a random customer seems to have lost all their rados objects
from a bucket because the GC went nuts [2]).
[1]: https://tracker.ceph.com/issues/53585
[2]:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5XSUELNB64VTKRYRN6TXB5CU7VITPBVP/
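
For what it's worth, the only lever I can see so far is to combine the two
settings Eric mentions, assuming the effective ceiling of a multipart object
is roughly max part size times max part count. A minimal sketch (the
client.rgw config target and the concrete numbers are illustrative only, not
something I have tested):

# keep single PUTs / individual parts at the 5 GiB default
ceph config set client.rgw rgw_max_put_size 5368709120
# allow at most 20 parts, so a multipart object tops out around 20 x 5 GiB = 100 GiB
ceph config set client.rgw rgw_multipart_part_upload_limit 20

The obvious downside is that clients uploading with small part sizes would hit
the 20-part limit long before 100GB, so this is more an illustration of how the
limits interact than a real solution.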

On Fri, Dec 9, 2022 at 11:45 AM, Eric Goirand wrote:

> Hello Boris,
>
> I think you may be looking for these RGW daemon parameters :
>
> # ceph config help rgw_max_put_size
> rgw_max_put_size - Max size (in bytes) of regular (non multi-part) object
> upload.
>   (size, advanced)
>   Default: 5368709120
>   Can update at runtime: true
>   Services: [rgw]
>
> # ceph config help rgw_multipart_part_upload_limit
> rgw_multipart_part_upload_limit - Max number of parts in multipart upload
>   (int, advanced)
>   Default: 10000
>   Can update at runtime: true
>   Services: [rgw]
>
> *rgw_max_put_size* is set in bytes.
>
> Regards,
> Eric.
>
> On Fri, Dec 9, 2022 at 11:24 AM Boris Behrens  wrote:
>
>> Hi,
>> is it possible to somehow limit the maximum file/object size?
>>
>> I've read that I can limit the size of multipart objects and the number of
>> multipart objects, but I would like to limit the size of each object in
>> the index to 100GB.
>>
>> I haven't found a config or quota value that would fit.
>>
>> Cheers
>>  Boris
>>
>> --
>> This time, as an exception, the self-help group "UTF-8 problems" meets in
>> the large hall.
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>

-- 
This time, as an exception, the self-help group "UTF-8 problems" meets in
the large hall.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2022-12-09 Thread Burkhard Linke

Hi,


I would like to add a data point. I rebooted one of our client machines
into kernel 5.4.0-135-generic (the latest Ubuntu 20.04 non-HWE kernel) and
performed the same test (copying a large file within CephFS).

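For reference, this is roughly the test, with the mount point and file size
as placeholders:

# create a ~10G file on CephFS, copy it within the same mount, then check page cache residency
dd if=/dev/urandom of=/mnt/cephfs/foo bs=1M count=10240
cp /mnt/cephfs/foo /mnt/cephfs/bar
fincore /mnt/cephfs/foo /mnt/cephfs/bar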

Both the source and target files stay in cache completely:

# fincore bar
  RES   PAGES SIZE FILE
  10G 2621353  10G bar

They also stay there for some time until the cap is eventually revoked 
by the MDS or the local cache is flushed. This is the expected behavior.


Regards,
Burkhard

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] radosgw - limit maximum file size

2022-12-09 Thread Boris Behrens
Hi,
is it possible to somehow limit the maximum file/object size?

I've read that I can limit the size of multipart objects and the number of
multipart objects, but I would like to limit the size of each object in the
index to 100GB.

I haven't found a config or quota value that would fit.

Cheers
 Boris

-- 
This time, as an exception, the self-help group "UTF-8 problems" meets in
the large hall.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-12-09 Thread Adrien Georget

Hi,

We were also affected by this bug when we deployed a new Pacific cluster.
Any news about the release of this fix for Ceph Pacific? It looks done
for Quincy but not for Pacific.


https://github.com/ceph/ceph/pull/47292

Regards,
Adrien

On 05/10/2022 at 13:21, Anh Phan Tuan wrote:

It seems the 17.2.4 release has fixed this.

ceph-volume: fix fast device alloc size on mulitple device (pr#47293,
Arthur Outhenin-Chalandre)


Bug #56031: batch compute a lower size than what it should be for blockdb
with multiple fast device - ceph-volume - Ceph


Regards,
Anh Phan

On Fri, Sep 16, 2022 at 2:34 AM Christophe BAILLON  wrote:


Hi

The problem is still present in version 17.2.3,
thanks for the trick to work around it...

Regards

- Original Message -

From: "Anh Phan Tuan" 
To: "Calhoun, Patrick" 
Cc: "Arthur Outhenin-Chalandre" ,
"ceph-users" 
Sent: Thursday, August 11, 2022 10:14:17
Subject: [ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD
Hi Patrick,

I also ran into this bug when deploying a new cluster around the time of the
16.2.7 release.

The bug is in the way Ceph computes the per-slot db size from the db disks.

Instead of: slot db size = size of db disk / number of slots per disk,
Ceph calculated: slot db size = size of one db disk / total number of slots
needed (i.e. the number of OSDs being prepared at that time).

In your case, with 2 db disks, that makes the db size only 50% of the correct
value. In my case, with 4 db disks per host, it makes the db size only 25% of
the correct value.

This bug happens even when you deploy with the batch command.
Back then I worked around it by still using the batch command, but only
deploying the OSDs that belong to one db disk at a time; in that case
ceph-volume calculated the correct size.

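To make that concrete with the numbers from your original message below
(24 hdd, 2 ssd of 1.44 TB each, db_slots: 12); pairing them up like this is
an assumption on my part:

expected: slot db size = 1.44 TB / 12 slots per ssd ~ 120 GB per OSD
buggy:    slot db size = 1.44 TB / 24 total slots   ~  60 GB per OSD (the 60GB / 50% you report)

And a sketch of the one-db-disk-at-a-time workaround with plain ceph-volume
(device names are placeholders; check the computed sizes with --report first,
then run again without it to actually create the OSDs):

ceph-volume lvm batch --report /dev/sd[a-l] --db-devices /dev/nvme0n1
ceph-volume lvm batch --report /dev/sd[m-x] --db-devices /dev/nvme1n1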
Cheers,
Anh Phan



On Sat, Jul 30, 2022 at 12:31 AM Calhoun, Patrick wrote:

Thanks, Arthur,

I think you are right about that bug looking very similar to what I've
observed. I'll try to remember to update the list once the fix is merged
and released and I get a chance to test it.

I'm hoping somebody can comment on what Ceph's current best practices are
for sizing WAL/DB volumes, considering rocksdb levels and compaction.

-Patrick


From: Arthur Outhenin-Chalandre 
Sent: Friday, July 29, 2022 2:11 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

Hi Patrick,

On 7/28/22 16:22, Calhoun, Patrick wrote:

In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd
like to have "ceph orch" allocate WAL and DB on the ssd devices.

I use the following service spec:
spec:
  data_devices:
    rotational: 1
    size: '14T:'
  db_devices:
    rotational: 0
    size: '1T:'
  db_slots: 12

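(For context, that snippet lives in a full OSD service spec along these lines;
the service_id and placement are generic stand-ins here, the device filters
are the part that matters:)

service_type: osd
service_id: hdd_with_db_on_ssd
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
    size: '14T:'
  db_devices:
    rotational: 0
    size: '1T:'
  db_slots: 12

(applied with something like "ceph orch apply -i osd_spec.yaml")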
This results in each OSD having a 60GB volume for WAL/DB, which equates
to 50% total usage in the VG on each ssd, and 50% free.

I honestly don't know what size to expect, but exactly 50% of capacity
makes me suspect this is due to a bug:
https://tracker.ceph.com/issues/54541
(In fact, I had run into this bug when specifying block_db_size rather
than db_slots.)

Questions:
  Am I being bitten by that bug?
  Is there a better approach, in general, to my situation?
  Are DB sizes still governed by the rocksdb tiering? (I thought that
  this was mostly resolved by https://github.com/ceph/ceph/pull/29687 )
  If I provision a DB/WAL logical volume of 61GB, is that effectively
  a 30GB database, and 30GB of extra room for compaction?

I don't use cephadm, but it may be related to this regression:
https://tracker.ceph.com/issues/56031. At least the symptoms look very
similar...

Cheers,

--
Arthur Outhenin-Chalandre

--
Christophe BAILLON
Mobile :: +336 16 400 522
Work :: https://eyona.com
Twitter :: https://twitter.com/ctof




___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: octopus rbd cluster just stopped out of nowhere (>20k slow ops)

2022-12-09 Thread Boris Behrens
Hello together,

@Alex: I am not sure what to look for in /sys/block//device.
There are a lot of files. Is there anything I should check in particular?

> You have sysfs access in /sys/block//device - this will show a lot
> of settings.  You can go to this directory on CentOS vs. Ubuntu, and see if
> any setting is different?
>
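
(In case it helps, this is roughly how I would dump and compare those settings
on the two hosts; the device name and output file names are placeholders:)

# on each host, dump the block device settings into a per-host file
grep -H . /sys/block/sda/queue/* /sys/block/sda/device/* 2>/dev/null | sort > sysblock-$(hostname).txt
# then compare the two files
diff sysblock-centos.txt sysblock-ubuntu.txt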

@Matthias: yes, the kernel is an old one (3.10.0-1160.76.1.el7.x86_64).
The await values are not significantly different (roughly 0.2 to 3 for
reads and 0.1 to 0.4 for writes).

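For comparison, assuming those are iostat's r_await/w_await values (in
milliseconds), they can be watched with something like this; the device name
is a placeholder:

iostat -dx sda 1 10
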
> I guess Centos7 has a rather old kernel. What are the kernel versions on
> these hosts?
>
> I have seen a drastic increase in iostat %util numbers on a Ceph cluster
> on Ubuntu hosts, after an Ubuntu upgrade 18.04 => 20.04 => 22.04
> (upgrading Ceph along with it).  iostat %util has been high ever since, but
> iostat latency values dropped considerably. As the cluster seemed
> slightly faster overall after these upgrades, I did not worry much about
> increased %util numbers.
>


@Anthony: Thanks for the link. Very nice read.

> https://brooker.co.za/blog/2014/07/04/iostat-pct.html
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io