[ceph-users] Re: Ceph User Survey 2022 - Comments on the Documentation

2022-08-24 Thread Robert Sander
On 23.08.22 at 18:40, John Zachary Dover wrote: "correct documentation, not the mix of outdated and correct descriptions and examples" This is also my biggest issue with the documentation. There is a nice chapter on how to deploy the cluster with cephadm and do some operations on it. But

[ceph-users] ceph.conf

2022-08-24 Thread Loreth.Andreas
Dear Madam or Sir, We have an HPC based on CentOS 7 and a Ceph Storage Cluster in version 15.2.16 Octopus (stable). The Ceph Storage Cluster provides us with CephFS. We had mounted CephFS using the kernel driver with the noacl mount option, but then realised that we need ACLs for rootless Docker.
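For illustration, the remount we are considering would look roughly like the sketch below; the monitor address, client name, and secretfile path are placeholders for our setup, and the kernel must be built with CONFIG_CEPH_FS_POSIX_ACL for the acl option to take effect:

    umount /mnt/cephfs
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=hpc,secretfile=/etc/ceph/hpc.secret,acl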

[ceph-users] Ceph Leadership Team Meeting Minutes (2022-08-24)

2022-08-24 Thread Ernesto Puerta
Hi Cephers, These are the topics covered in today's meeting: - *Container vulnerabilities*: in the last Ceph User + Dev Monthly meeting, Gaurav Sitlani raised a question about the vulnerabilities reported by quay.io

[ceph-users] Re: ceph.conf

2022-08-24 Thread Burkhard Linke
Hi, On 8/24/22 15:16, loreth.andr...@mh-hannover.de wrote: Dear Madam and Sir, We have an HPC based on Centos 7 and a Ceph Storage Cluster in version 15.2.16 octopus (stable). The Ceph Storage Cluster provides us with CephFS. We had mounted CephFS using the Kernel Driver with the noacl attribu

[ceph-users] Benefits of dockerized ceph?

2022-08-24 Thread Boris
Hi, I was just asked if we can switch to dockerized Ceph, because it is easier to update. Last time I tried to use ceph orch, I failed really hard to get the rgw daemon running as I would like to (IP/port/zonegroup and so on). Also, I never really felt comfortable running production workload in
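For context, what I was trying to express is roughly this kind of cephadm service spec; the realm/zone/host names, network, and port below are only placeholder assumptions, not a tested configuration:

    # rgw.yaml -- pin the rgw daemon to a host, network, and frontend port
    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - rgw-host1
    networks:
      - 192.168.1.0/24
    spec:
      rgw_realm: myrealm
      rgw_zone: myzone
      rgw_frontend_port: 8080

applied with: ceph orch apply -i rgw.yaml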

[ceph-users] radosgw-admin hangs

2022-08-24 Thread Magdy Tawfik
Hi All, I have a cluster with 5 MON & 3 MGR, 12 OSD + RGW nodes. It was working OK with no issue. I have moved 4 physical machines to VMs and redeployed the mgr/mon daemons. Since that time, when trying to access the radosgw-admin tool, it hangs with no response at all until I kill it; nothing gets out at all w
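To illustrate, a hang like this usually means the tool cannot reach the mons or OSDs, so a first step is to confirm the cluster answers at all and to rerun with messenger logging enabled to see where it blocks (the debug levels below are just a common starting point, not a prescription):

    ceph -s
    radosgw-admin user list --debug-ms=1 --debug-rgw=20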

[ceph-users] Re: Benefits of dockerized ceph?

2022-08-24 Thread Satish Patel
Hi, I believe the only advantage of running dockerized Ceph is that it isolates the binaries from the OS and, as you said, makes upgrades easier. In my case I am running the OSD/MON roles on the same servers, so it provides greater isolation when I want to upgrade a component. cephadm uses containers to deploy Ceph clusters in production.
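That is also where the easier updates come from: with cephadm the whole cluster rolls through a staged upgrade with two commands, roughly like this (the version number is only an example):

    ceph orch upgrade start --ceph-version 16.2.10
    ceph orch upgrade status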

[ceph-users] Re: Benefits of dockerized ceph?

2022-08-24 Thread William Edwards
> On 24 Aug 2022 at 22:08, Boris wrote: > > Hi, > I was just asked if we can switch to dockerized Ceph, because it is easier to > update. > > Last time I tried to use ceph orch, I failed really hard to get the rgw daemon > running as I would like to (IP/port/zonegroup

[ceph-users] Re: radosgw-admin hangs

2022-08-24 Thread Boris
Hi Magdy, maybe this helps. https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/6J5KZ7ELC7EWUS6YMKOSJ3E3JRNTHKBQ/ Cheers Boris > On 24.08.2022 at 22:09, Magdy Tawfik wrote: > > Hi All > > I have a cluster with 5 MON & 3 MGR, 12 OSD + RGW nodes > was working OK with no issue >

[ceph-users] Re: Benefits of dockerized ceph?

2022-08-24 Thread Boris
Ah great. Might have missed it. Will go through the ML archive then. Cheers Boris > On 24.08.2022 at 22:20, William Edwards wrote: > > > There was a very long discussion about this on the mailing list not too long > ago…

[ceph-users] Fwd: Erasure coded pools and reading ranges of objects.

2022-08-24 Thread Teja A
Hello. I have been kicking the tires with Ceph using the librados API and observed some peculiar object access patterns when reading a portion of the object (as opposed to the whole object). First I want to offer some background. My use case requires that I use erasure coded pools and store large
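To make the access pattern concrete, the ranged reads are issued roughly as in the sketch below via the Python rados binding (pool and object names here are made up; the C librados rados_read() call takes the same length/offset pair):

    import rados

    # connect using the standard config file
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('ecpool')  # an erasure coded data pool
        try:
            # read 4 KiB at offset 1 MiB instead of fetching the whole object
            data = ioctx.read('big-object', length=4096, offset=1048576)
            print(len(data), 'bytes read')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

On an erasure coded pool even a small ranged read has to be served from chunk-sized reads on several OSDs, which may be the peculiarity being observed.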