[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-22 Thread Josh Durgin
On Wed., Jun. 22, 2022, 15:44 Yuri Weinstein wrote:
>
> We did not get approvals for dashboard and rook, but we also did not get
> disapproval :)
>
> Josh, David it's ready for publishing assuming you agree.

Sounds ready to me!

> On Wed, Jun 22, 2022 at 3:26 PM Neha Ojha wrote:
>
>> On Wed

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-22 Thread Travis Nielsen
OK, let's declare Rook signed off. This is the first time sign-off has been
requested for Rook; I'll try to pay more attention going forward... :)

Rook has some daily tests in the Rook repo running against the following
tags.

quay.io/ceph/daemon-base:latest-

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-22 Thread Laura Flores
I did not see any Dashboard failures, and the Rook one is known and being
looked into. I cannot approve for Dashboard or Rook, but I can at least
offer that piece of information.

- Laura

On Wed, Jun 22, 2022 at 5:43 PM Yuri Weinstein wrote:
>
> We did not get approvals for dashboard and rook,

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-22 Thread Neha Ojha
On Wed, Jun 22, 2022 at 11:44 AM Laura Flores wrote:
>
> Here is the summary of RADOS failures. Everything looks good and normal to
> me! I will leave it to Neha to give final approval though.

Thanks Laura. These runs look good. We encountered
https://tracker.ceph.com/issues/56101 while upgrading

[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-22 Thread Laura Flores
Here is the summary of RADOS failures. Everything looks good and normal to
me! I will leave it to Neha to give final approval though.

https://tracker.ceph.com/issues/55974#note-1

Failures:
1. https://tracker.ceph.com/issues/52321
2. https://tracker.ceph.com/issues/56000
3. https://tra

[ceph-users] Inconsistent PGs after upgrade to Pacific

2022-06-22 Thread Pascal Ehlert
Hi all,

I am currently battling inconsistent PGs after a far-reaching mistake during
the upgrade from Octopus to Pacific. While otherwise following the guide, I
restarted the Ceph MDS daemons (which started the Pacific daemons) without
previously reducing the ranks to 1 (from 2). This res
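For reference, the rank reduction from the upgrade guide that was skipped
here looks roughly like this (the filesystem name "cephfs" is a placeholder):

    # shrink the filesystem to a single active MDS before restarting daemons
    ceph fs set cephfs max_mds 1
    # wait until only rank 0 is shown as active before proceeding
    ceph status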

[ceph-users] Ceph Stretch Cluster - df pool size (Max Avail)

2022-06-22 Thread Kilian Ries
Hi,

I'm running a ceph stretch cluster with two datacenters. Each of the
datacenters has 3x OSD nodes (6x in total) and 2x monitors. A third monitor
is deployed as an arbiter node in a third datacenter. Each OSD node has 6x
SSDs with 1.8 TB storage - that gives me a total of about 63 TB storage
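For context, a back-of-the-envelope capacity calculation; the size=4
replication (two copies per datacenter) is the stretch-mode default and an
assumption here, since the pool settings aren't shown:

    6 nodes x 6 SSDs x 1.8 TB = 64.8 TB raw (~63 TB as reported)
    usable at size=4          = 64.8 TB / 4 ≈ 16.2 TB

"MAX AVAIL" in ceph df will be lower still, since it is derived from the
fullest OSD and the full ratio (default 0.95), not from raw capacity.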

[ceph-users] use ceph rbd for windows cluster "scsi-3 persistent reservation"

2022-06-22 Thread farhad kh
I need a block storage disk that is shared between two Windows servers. The
servers are active/standby (server certification). Only one server can write
at a time, but both servers can read the created files. And if the first
server shuts down, the second server can edit the files or create a new file
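Native RBD does not speak SCSI-3 persistent reservations, so the usual
suggestion is an iSCSI gateway in front of the image. A rough sketch with
ceph-iscsi's gwcli follows; the pool, image, and IQN names are made up, and
PGR behavior across multiple gateways has version-dependent limitations that
are worth checking in the ceph-iscsi docs first:

    $ gwcli
    /> cd /disks
    /disks> create pool=rbd image=wsfc-disk01 size=100G
    /disks> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:wsfc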

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Peter Lieven
> On 22.06.2022 at 14:28, Ilya Dryomov wrote:
>
> On Wed, Jun 22, 2022 at 11:14 AM Peter Lieven wrote:
>>
>> Sent from my iPhone
>>
>>> On 22.06.2022 at 10:35, Ilya Dryomov wrote:
>>>
>>> On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
>>>> Hi,

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Ilya Dryomov
On Wed, Jun 22, 2022 at 11:14 AM Peter Lieven wrote:
>
> Sent from my iPhone
>
>> On 22.06.2022 at 10:35, Ilya Dryomov wrote:
>>
>> On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
>>>
>>> Hi,
>>>
>>> we noticed that some of our long running VMs (1 year without migrati

[ceph-users] Re: ceph-container: docker restart, mon's unable to join

2022-06-22 Thread Kilian Ries
Had the time to debug it a little bit further today and I think I found a
solution ;) The last log line I saw after container start was "Existing mon,
trying to rejoin cluster..."

https://github.com/ceph/ceph-container/blob/main/src/daemon/start_mon.sh#L154

So I modified the script and added
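The actual patch is cut off above; for anyone hitting the same symptom, one
standard manual recovery for a mon that won't rejoin (see the
troubleshooting-mon docs) is to re-inject the current monmap. The mon id
"mon-a" is a placeholder:

    # fetch the cluster's current monmap
    ceph mon getmap -o /tmp/monmap
    # with the stuck mon stopped, inject the map into its store
    ceph-mon -i mon-a --inject-monmap /tmp/monmap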

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Peter Lieven
> On 22.06.2022 at 12:52, Janne Johansson wrote:
>
>> I found relatively large allocations in the qemu smaps and checked the
>> contents. It contained several hundred repetitions of osd and pool names.
>> We use the default builds on Ubuntu 20.04. Is there a special memory
>> allocator

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Janne Johansson
> I found relatively large allocations in the qemu smaps and checked the
> contents. It contained several hundred repetitions of osd and pool names.
> We use the default builds on Ubuntu 20.04. Is there a special memory
> allocator in place that might not clean up properly?

I think the promise
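Ceph is normally built against tcmalloc, which keeps freed memory in
per-thread caches rather than returning it to the OS immediately; whether
that applies to a given librbd can be checked from its link dependencies.
The library path below is the Ubuntu 20.04 default, adjust as needed:

    # check whether librbd pulls in tcmalloc on this host
    ldd /usr/lib/x86_64-linux-gnu/librbd.so.1 | grep -i tcmalloc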

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Peter Lieven
Sent from my iPhone

> On 22.06.2022 at 10:35, Ilya Dryomov wrote:
>
> On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
>>
>> Hi,
>>
>> we noticed that some of our long running VMs (1 year without migration)
>> seem to have a very slow memory leak. Taking a dump of the leake

[ceph-users] Best value for "mds_cache_memory_limit" for large (more than 10 Po) cephfs

2022-06-22 Thread Arnaud M
Hello to everyone,

I have a ceph cluster currently serving cephfs. The size of the ceph
filesystem is around 1 PB. 1 active MDS and 1 standby-replay. I do not have
a lot of cephfs clients for now (5), but it may increase to 20 or 30. Here
is some output:

Rank | State | Daemon
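For reference, the limit is applied through the config database; a sketch
with a hypothetical 16 GiB value - the right number depends mostly on the
MDS host's RAM and the number of clients/caps, not on raw filesystem size:

    # set the MDS cache target to 16 GiB (value is in bytes)
    ceph config set mds mds_cache_memory_limit 17179869184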

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-06-22 Thread Ilya Dryomov
On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
>
> Hi,
>
> we noticed that some of our long running VMs (1 year without migration)
> seem to have a very slow memory leak. Taking a dump of the leaked memory
> revealed that it seemed to contain osd and pool information so we
> concluded that
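A rough sketch of the kind of inspection described above - locate a large
anonymous mapping in the qemu process and dump it for a look; the pid 12345
and the address range are placeholders:

    # list mappings with RSS above ~100 MB for qemu pid 12345
    awk '/^[0-9a-f]+-/ {range=$1} /^Rss:/ && $2 > 102400 {print range, $2 " kB"}' \
        /proc/12345/smaps

    # dump one suspicious range with gdb and grep it for osd/pool names
    gdb -p 12345 -batch -ex "dump memory /tmp/region.bin 0x7f0000000000 0x7f0004000000"
    strings /tmp/region.bin | grep -Ei 'osd|pool' | sort | uniq -c | sort -rn | head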