[ceph-users] Re: RBD mirrored image usage

2022-10-10 Thread Josef Johansson
Hi, No, you must stop the image on the primary site (A) and make the image on the non-primary site (B) primary. It's possible to clone a snapshot, though. See https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/P6BHPUZEMSCK4NJY5BZSYOB5XBWVT424/
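For reference, a minimal demote/promote sketch of the failover described above (pool and image names are placeholders; the exact flow depends on whether journal- or snapshot-based mirroring is in use):

  rbd mirror image demote rbd/image01            # on site A, the current primary
  rbd mirror image promote rbd/image01           # on site B, once it has caught up
  rbd mirror image promote --force rbd/image01   # on site B only if A is unreachable (risks split-brain)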

[ceph-users] RBD mirrored image usage

2022-10-10 Thread Aristide Bekroundjo
Good morning, I have a concern about an RBD mirrored image. I have two clusters, A and B, under 16.2.10, and I have implemented one-way mirrored RBD (A to B). When client01 writes data to the image on cluster A, it is successfully mirrored to the image under cluster B. My issue is that I want
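A quick way to confirm that the one-way mirroring is healthy before touching the image (pool and image names are placeholders):

  rbd mirror pool status rbd --verbose    # per-image state and description for the pool
  rbd mirror image status rbd/image01     # state of a single mirrored image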

[ceph-users] Re: LVM osds loose connection to disk

2022-10-10 Thread Frank Schilder
Hi Igor. The problem of OSD crashes was resolved after migrating just a little bit of the metadata pool to other disks (we decided to evacuate the small OSDs onto larger disks to make space). Therefore, I don't think it's an LVM or disk issue. The cluster is working perfectly now after

[ceph-users] Re: multisite replication issue with Quincy

2022-10-10 Thread Jane Zhu (BLOOMBERG/ 120 PARK)
Are there any suggestions/tips on how we can debug this type of multisite/replication issue? From: At: 10/04/22 19:08:56 UTC-4:00 To: ceph-users@ceph.io Subject: [ceph-users] Re: multisite replication issue with Quincy We are able to consistently reproduce the replication issue now. The
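A few standard starting points for inspecting multisite sync state on the secondary zone (the bucket name is a placeholder):

  radosgw-admin sync status                           # overall metadata/data sync state
  radosgw-admin bucket sync status --bucket=mybucket  # per-bucket sync progress
  radosgw-admin sync error list                       # recorded sync errors, if any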

[ceph-users] Re: crush hierarchy backwards and upmaps ...

2022-10-10 Thread Dan van der Ster
Hi, Here's a similar bug: https://tracker.ceph.com/issues/47361 Back then, upmap would generate mappings that invalidate the crush rule. I don't know if that is still the case, but indeed you'll want to correct your rule. Something else you can do before applying the new crush map is use
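To see which upmap exceptions currently exist, and to drop one that conflicts with the corrected rule (the PG id below is a placeholder), a sketch:

  ceph osd dump | grep pg_upmap_items    # list current upmap exceptions
  ceph osd rm-pg-upmap-items 2.1a        # remove the exception for one PG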

[ceph-users] crush hierarchy backwards and upmaps ...

2022-10-10 Thread Christopher Durham
Hello, I am using pacific 16.2.10 on Rocky 8.6 Linux. After setting upmap_max_deviation to 1 on the ceph balancer in ceph-mgr, I achieved a near-perfect balance of PGs and space on my OSDs. This is great. However, I started getting the following errors in my ceph-mon logs, every three minutes,
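For reference, the balancer setting mentioned above is applied via the mgr config (a sketch; the value 1 is the one from the post):

  ceph config set mgr mgr/balancer/upmap_max_deviation 1
  ceph balancer status    # confirm mode and active state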

[ceph-users] Re: MDS Performance and PG/PGP value

2022-10-10 Thread Patrick Donnelly
Hello Yoann, On Fri, Oct 7, 2022 at 10:51 AM Yoann Moulin wrote: > > Hello, > > >> Is 256 a good value in our case? We have 80 TB of data with more than 300M files. > > You want at least as many PGs so that each of the OSDs hosts a portion of the OMAP data. You want to spread out OMAP
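If the metadata pool does need more PGs, the change is a single pool setting (the pool name below is an assumption; the pg_autoscaler may otherwise manage this):

  ceph osd pool set cephfs_metadata pg_num 256
  ceph osd pool get cephfs_metadata pg_num     # verify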

[ceph-users] Re: mgr/prometheus module port 9283 binds only with IPv6 ?

2022-10-10 Thread Ackermann, Christoph
Hello all, setting "ceph config set mgr mgr/prometheus/server_addr 0.0.0.0" as described in the manual config documentation and restarting all manager daemons solved the problem so far. :-) Thanks and best regards, Christoph Ackermann On Mon, 10 Oct 2022 at 16:25, Ackermann,
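For completeness, the fix described above as commands (the restart method depends on the deployment; `ceph orch restart` is the cephadm way):

  ceph config set mgr mgr/prometheus/server_addr 0.0.0.0
  ceph orch restart mgr    # or restart the mgr daemons by your usual method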

[ceph-users] Re: mgr/prometheus module port 9283 binds only with IPv6 ?

2022-10-10 Thread Ackermann, Christoph
Oh, see this... mgr advanced mgr/prometheus/server_addr localhost BANG! On Mon, 10 Oct 2022 at 16:24, Ackermann, Christoph < c.ackerm...@infoserve.de> wrote: > Well, we have a well-running Ceph base system I've pimped this morning by using the cephadm method for

[ceph-users] Re: mgr/prometheus module port 9283 binds only with IPv6 ?

2022-10-10 Thread Ackermann, Christoph
Well, we have a well-running Ceph base system I've pimped this morning by using the cephadm method for the monitoring add-on: https://docs.ceph.com/en/quincy/cephadm/services/monitoring/#deploying-monitoring-with-cephadm All three managers can be accessed via their IPv4 addresses from other hosts. The

[ceph-users] Re: mgr/prometheus module port 9283 binds only with IPv6 ?

2022-10-10 Thread Matt Vandermeulen
That output suggests that the mgr is configured to only listen on the loopback address. I don't think that's a default... does a `ceph config dump | grep mgr` suggest it's been configured that way? On 2022-10-10 10:56, Ackermann, Christoph wrote: Hello list member after subsequent

[ceph-users] Re: mgr/prometheus module port 9283 binds only with IPv6 ?

2022-10-10 Thread Konstantin Shalygin
Hi, did you set an IPv4 address for "mgr/prometheus//server_addr" in the config? k > On 10 Oct 2022, at 16:56, Ackermann, Christoph wrote: > > Hello list member > > after subsequent installation of Ceph (17.2.4) monitoring stuff we got this error: The mgr/prometheus module at

[ceph-users] mgr/prometheus module port 9283 binds only with IPv6 ?

2022-10-10 Thread Ackermann, Christoph
Hello list members, after subsequent installation of the Ceph (17.2.4) monitoring stuff we got this error: The mgr/prometheus module at ceph1n020.int.infoserve.de:9283 is unreachable. (and also for the second prometheus module). The Prometheus module is activated indeed... [root@ceph1n020 ~]# ss -ant
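Two quick checks for whether the module is listening on IPv4 (the hostname is the one from the post; 9283 is the default port):

  ss -antlp | grep 9283                                             # which address/family is bound
  curl -s http://ceph1n020.int.infoserve.de:9283/metrics | head     # can it be reached over IPv4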

[ceph-users] Re: Inherited CEPH nightmare

2022-10-10 Thread Janne Johansson
> osd_memory_target = 2147483648 > > Based on some reading, I'm starting to understand a little about what can be tweaked. For example, I think the osd_memory_target looks low. I also think the DB/WAL should be on dedicated disks or partitions, but have no idea what procedure
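For reference, the current value can be inspected and raised at runtime (the 4 GiB figure below is only an illustration, not a sizing recommendation for this cluster):

  ceph config get osd osd_memory_target
  ceph config set osd osd_memory_target 4294967296    # 4 GiB, applies to all OSDs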