[ceph-users] Clean prometheus files in /var/lib/ceph

2022-11-24 Thread Mevludin Blazevic
Hi all, on my ceph admin machine, a lot of large files are produced by Prometheus, e.g.: ./var/lib/ceph/8c774934-1535-11ec-973e-525400130e4f/prometheus.cephadm/data/wal/00026165 ./var/lib/ceph/8c774934-1535-11ec-973e-525400130e4f/prometheus.cephadm/data/wal/00026166 ./var/lib/ceph/8c774934-153
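
A minimal sketch for sizing the problem first, assuming a cephadm deployment (the FSID below is copied from the paths above). Prometheus compacts and truncates its WAL on its own, so comparing data/ with data/wal/ shows whether retention or a stuck compaction is to blame:

    # How much space the Prometheus TSDB and its write-ahead log take up:
    du -sh /var/lib/ceph/8c774934-1535-11ec-973e-525400130e4f/prometheus.cephadm/data
    du -sh /var/lib/ceph/8c774934-1535-11ec-973e-525400130e4f/prometheus.cephadm/data/wal

    # Confirm which host runs the cephadm-managed Prometheus daemon:
    ceph orch ps --daemon-type prometheus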

[ceph-users] pacific: ceph-mon services stopped after OSDs are out/down

2022-12-06 Thread Mevludin Blazevic
Hi all, I'm running Pacific with cephadm. After installation, ceph automatically provisioned 5 ceph monitor nodes across the cluster. After a few OSDs crashed due to a hardware-related issue with the SAS interface, 3 monitor services are stopped and won't restart again. Is it related to the OS
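
A first-pass triage sketch for this situation, assuming a cephadm-managed cluster; <fsid> and <host> are placeholders:

    # Overall health and which monitors are still in quorum:
    ceph -s
    ceph health detail

    # What cephadm knows about the mon daemons (state, last refresh, host):
    ceph orch ps --daemon-type mon

    # On an affected host, inspect the systemd unit of a stopped mon:
    systemctl status 'ceph-<fsid>@mon.<host>.service'
    journalctl -u 'ceph-<fsid>@mon.<host>.service' --since "1 hour ago"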

[ceph-users] Re: pacific: ceph-mon services stopped after OSDs are out/down

2022-12-13 Thread Mevludin Blazevic
. But without any logs or more details it's just guessing. Regards, Eugen Quoting Mevludin Blazevic: Hi all, I'm running Pacific with cephadm. After installation, ceph automatically provisioned 5 ceph monitor nodes across the cluster. After a few OSDs crashed due to a hardware
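
For reference, the kind of logs being asked for here can be collected on the affected host like this (FSID and hostname are placeholders; cephadm must be present on the host):

    # Daemon log of the containerized mon, as wrapped by cephadm:
    cephadm logs --fsid <fsid> --name mon.<host>

    # cephadm's own operation log on that host:
    less /var/log/ceph/cephadm.log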

[ceph-users] Re: pacific: ceph-mon services stopped after OSDs are out/down

2022-12-13 Thread Mevludin Blazevic
long to the ceph user. Can you check ls -l /var/lib/ceph/FSID/mon.sparci-store1/? Compare the keyring file with the ones on the working mon nodes. Quoting Mevludin Blazevic: Hi Eugen, I assume the mon db is stored on the "OS disk". I could not find any error-related lines in cephad
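
Spelled out, the check suggested above looks roughly like this; the container user in cephadm images is normally uid/gid 167 ("ceph"), but treat that as an assumption and verify against a working mon host:

    # Ownership and contents of the mon directory on the broken host:
    ls -ln /var/lib/ceph/<fsid>/mon.sparci-store1/

    # Compare the keyring with one copied over from a working mon host:
    diff /var/lib/ceph/<fsid>/mon.sparci-store1/keyring /tmp/keyring.from-working-mon

    # If ownership is wrong, hand it back to the in-container ceph user:
    chown -R 167:167 /var/lib/ceph/<fsid>/mon.sparci-store1/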

[ceph-users] Re: pacific: ceph-mon services stopped after OSDs are out/down

2022-12-13 Thread Mevludin Blazevic
g file with the ones on the working mon nodes. Quoting Mevludin Blazevic: Hi Eugen, I assume the mon db is stored on the "OS disk". I could not find any error-related lines in cephadm.log, here is what journalctl -xe tells me: Dec 13 11:24:21 sparci-store1 ceph-8c774934-1535-11e

[ceph-users] mds stuck in standby, not one active

2022-12-13 Thread Mevludin Blazevic
Hi all, in Ceph Pacific 16.2.5, the MDS failover function is not working. The one host with the active MDS had to be rebooted, and after that the standby daemons did not jump in. The fs was not accessible; instead, all MDS daemons have remained in standby until now. Also the cluster remains in Ceph Error du
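
A hedged sketch of the usual checks when standbys refuse to take over; <fsname> is a placeholder for the file system name:

    # Ranks, standbys and whether the file system is marked joinable:
    ceph fs status
    ceph fs dump | grep -E 'max_mds|standby|joinable'

    # If the FS was left non-joinable (e.g. by an interrupted upgrade step),
    # standbys will not be promoted until it is joinable again:
    ceph fs set <fsname> joinable true

    # As a stronger nudge, fail rank 0 so a standby gets picked up:
    ceph mds fail <fsname>:0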

[ceph-users] Re: mds stuck in standby, not one active

2022-12-13 Thread Mevludin Blazevic
ndby seq 1 join_fscid=1 addr [v2:192.168.50.133:1a90/49cb4e4,v1:192.168.50.133:1a91/49cb4e4] compat {c=[1],r=[1],i=[1]}] dumped fsmap epoch 60 On 13.12.2022 at 20:11, Patrick Donnelly wrote: On Tue, Dec 13, 2022 at 2:02 PM Mevludin Blazevic wrote: Hi all, in Ceph Pacific 16.2.5, the MD

[ceph-users] Purge OSD does not delete the OSD deamon

2022-12-14 Thread Mevludin Blazevic
Hi all, while trying to perform an update from Ceph Pacific to the current patch version, errors occur due to failed OSD daemons which are still present and installed on some Ceph hosts, although I purged the corresponding OSDs using the GUI. I am using a Red Hat environment; what is the safe wa
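
A sketch of the orchestrator-level cleanup for such leftovers, assuming the OSD itself is already purged and only the daemon entry lingers; <id> is a placeholder:

    # Which osd daemons cephadm still tracks, and on which hosts:
    ceph orch ps --daemon-type osd

    # Whether a removal is still queued or stuck:
    ceph orch osd rm status

    # Remove the stale daemon entry explicitly:
    ceph orch daemon rm osd.<id> --force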

[ceph-users] Re: Purge OSD does not delete the OSD deamon

2022-12-14 Thread Mevludin Blazevic
the safe way to tell ceph to also delete specific daemon IDs (not OSD IDs)? Regards, Mevludin -- Mevludin Blazevic, M.Sc. University of Koblenz
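
If the orchestrator refuses because only a stale directory and systemd unit are left on the host, the host-level fallback is cephadm itself (run on the affected host; FSID and ID are placeholders):

    # What cephadm still sees locally on this host:
    cephadm ls

    # Remove the stale containerized daemon and its systemd unit:
    cephadm rm-daemon --name osd.<id> --fsid <fsid> --force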

[ceph-users] Re: Purge OSD does not delete the OSD deamon

2022-12-14 Thread Mevludin Blazevic
Update: it was removed from the dashboard after 6 minutes. On 14.12.2022 at 12:11, Stefan Kooman wrote: On 12/14/22 11:40, Mevludin Blazevic wrote: Hi, the strange thing is that on 2 different hosts, an OSD daemon with the same ID is present, by doing ls on /var/lib/ceph/FSID, e.g. I am afraid

[ceph-users] Re: mds stuck in standby, not one active

2022-12-15 Thread Mevludin Blazevic
remove these daemons, or what could be the preferred workaround? Regards, Mevludin On 13.12.2022 at 20:32, Patrick Donnelly wrote: On Tue, Dec 13, 2022 at 2:21 PM Mevludin Blazevic wrote: Hi, thanks for the quick response! CEPH STATUS: cluster: id: 8c774934-1535-11ec-973e

[ceph-users] Re: mds stuck in standby, not one active

2022-12-15 Thread Mevludin Blazevic
ue, but it seems none of the running standby daemons is responding. On 15.12.2022 at 19:08, Patrick Donnelly wrote: On Thu, Dec 15, 2022 at 7:24 AM Mevludin Blazevic wrote: Hi, while upgrading to Ceph Pacific 16.2.7, the upgrade process got stuck exactly at the MDS daemons. Before that, I had tried t
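
For the upgrade-stuck-at-MDS part, a sketch of the commonly documented sequence; <fsname> is a placeholder and the max_mds step only applies if more than one rank is active:

    # Where the upgrade currently is and whether it reports an error:
    ceph orch upgrade status

    # Shrink CephFS to a single active MDS before the MDS daemons are upgraded:
    ceph fs set <fsname> max_mds 1
    ceph status          # wait until only rank 0 remains active

    # Pause/resume if the orchestrator needs a nudge after the MDS state is fixed:
    ceph orch upgrade pause
    ceph orch upgrade resume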

[ceph-users] Re: pacific: ceph-mon services stopped after OSDs are out/down

2022-12-22 Thread Mevludin Blazevic
rting the other MONs did resolve it, have you tried that? [1] https://tracker.ceph.com/issues/52760 Quoting Mevludin Blazevic: It's very strange. The keyring of the ceph monitor is the same as on one of the working monitor hosts. The failed mon and the working mons also have the same
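
Restarting the monitors through the orchestrator, as suggested in [1], would look roughly like this; restart one mon at a time and wait for quorum to recover in between:

    # Exact daemon names of the mons:
    ceph orch ps --daemon-type mon

    # Restart a single mon and check quorum before moving to the next:
    ceph orch daemon restart mon.<host>
    ceph quorum_status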

[ceph-users] Re: Ceph All-SSD Cluster & Wal/DB Separation

2023-01-02 Thread Mevludin Blazevic
Hi all, I have a similar question regarding a cluster configuration consisting of HDDs, SSDs and NVMes. Let's say I set up an OSD configuration in a YAML file like this: service_type: osd service_id: osd_spec_default placement: host_pattern: '*' spec: data_devices: model: HDD-Model-XY db_devi
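
A sketch of a complete spec along those lines, with the drive roles split by filter; the model strings are placeholders, and rotational/size filters work just as well as model filters:

    # osd_spec.yaml (hypothetical device models -- adjust the filters to the hardware)
    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1          # HDDs carry the data
      db_devices:
        model: SSD-Model-XY    # SSDs take block.db
      wal_devices:
        model: NVME-Model-Z    # NVMe takes block.wal

    # Preview what cephadm would create, then apply:
    ceph orch apply -i osd_spec.yaml --dry-run
    ceph orch apply -i osd_spec.yaml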

[ceph-users] osd_memory_target values

2023-01-16 Thread Mevludin Blazevic
Hi all, for a Ceph cluster with 256 GB of RAM per node, I would like to increase the osd_memory_target from the default 4 GB up to 12 GB. Through the Ceph dashboard, different scopes are offered for setting the new value (global, mon, ..., osd). Is there any difference between them? From my point of view, I
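
On the command line those dashboard scopes correspond to sections of the config database; a sketch, with 12 GB written out in bytes (12 * 1024^3 = 12884901888) and a per-host mask shown as an option:

    # Apply to all OSDs (the "osd" section is the usual choice; "global" would
    # also push the value at every other daemon type, which is not needed here):
    ceph config set osd osd_memory_target 12884901888

    # Or only to the OSDs of one host, via a config mask:
    ceph config set osd/host:<hostname> osd_memory_target 12884901888

    # Verify what a given OSD actually resolves:
    ceph config get osd.0 osd_memory_target

    # cephadm can alternatively autotune the target from the host's total RAM:
    ceph config set osd osd_memory_target_autotune true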

[ceph-users] Ceph Host offline after performing dnf upgrade on RHEL 8.7 host

2023-05-08 Thread Mevludin Blazevic
Hi all, after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of our Ceph hosts, I get a Ceph warning describing that cephadm "Can't communicate with remote host `...`, possibly because python3 is not installed there: [Errno 12] Cannot allocate memory", although Python3 is instal
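
A sketch of the checks cephadm offers for this state; <host> is a placeholder. The Errno 12 may just as well indicate memory pressure on the node running the active mgr as a problem on the upgraded host:

    # Re-run cephadm's host checks (python3, container engine, time sync, ...):
    ceph cephadm check-host <host>

    # Current orchestrator view of all hosts:
    ceph orch host ls

    # If the active mgr itself is wedged after the OS update, fail over to a standby:
    ceph mgr fail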

[ceph-users] Re: Ceph Host offline after performing dnf upgrade on RHEL 8.7 host

2023-05-08 Thread Mevludin Blazevic
Ok, the host seems to be online again, but it took quite a long time. On 08.05.2023 at 13:22, Mevludin Blazevic wrote: Hi all, after I performed a minor RHEL package upgrade (8.7 -> 8.7) on one of our Ceph hosts, I get a Ceph warning describing that cephadm "Can't communicat

[ceph-users] Pacific: Drain hosts does not remove mgr daemon

2024-01-31 Thread Mevludin Blazevic
Hi all, after performing "ceph orch host drain" on one of our hosts with only the mgr container left, I see that another mgr daemon is indeed deployed on another host, but the "old" one does not get removed by the drain command. The same happens if I edit the mgr service via the UI to define d
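
A sketch of the usual workaround: the active mgr generally cannot remove itself, so it has to be failed over to a standby before the stale daemon can be dropped; daemon names and placement are placeholders:

    # What is still scheduled on the drained host:
    ceph orch ps <host>

    # If the leftover daemon is the active mgr, hand control to a standby first:
    ceph mgr fail

    # Then remove the stale daemon, or re-place the mgr service without that host:
    ceph orch daemon rm mgr.<host>.<suffix>
    ceph orch apply mgr --placement="host1 host2 host3"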

[ceph-users] Error while adding Ceph/RBD for Cloudstack/KVM: pool not found

2021-09-23 Thread Mevludin Blazevic
1    1  25 MiB  320  ... sparci-ec   2   32 0 B    0  ... sparci-rbd  3   32    19 B    1  ... Have I missed some extra installation steps needed on the ceph machines? Cheers Mevludin -- Mevludin Blazevic University of Koblenz-Landau Computin
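
One common cause of a client-side "pool not found" is the pool missing the rbd application tag, or the CloudStack cephx user lacking caps on it; a sketch against the sparci-rbd pool listed above (the client name is illustrative):

    # Tag and initialize the pool for RBD use:
    ceph osd pool application enable sparci-rbd rbd
    rbd pool init sparci-rbd

    # A cephx user CloudStack could connect with:
    ceph auth get-or-create client.cloudstack mon 'profile rbd' osd 'profile rbd pool=sparci-rbd'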

[ceph-users] Ceph performance optimization with SSDs

2021-10-22 Thread Mevludin Blazevic
Dear Ceph users, I have a small Ceph cluster where each host consists of a small number of SSDs and a larger number of HDDs. Is there a way to use the SSDs for performance optimization, such as putting OSD journals on the SSDs and/or using the SSDs for caching? Best regards, Mevludin -- Mevludin
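
With BlueStore, the rough equivalent of the old journal-on-SSD layout is placing block.db (and optionally block.wal) on the SSDs; a manual per-OSD sketch with hypothetical device names, which a cephadm drive group spec can express cluster-wide as well:

    # One HDD-backed OSD whose RocksDB metadata lives on an SSD partition or LV:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1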

[ceph-users] Re: Ceph performance optimization with SSDs

2021-10-22 Thread Mevludin Blazevic
be a better and more stable option, although it is unlikely that you will be able to automate this with the Ceph toolset. Best regards, Z On Fri, Oct 22, 2021 at 12:30 PM Mevludin Blazevic <mblaze...@uni-koblenz.de> wrote: Dear Ceph users, I have a small Ceph cluster wher

[ceph-users] RBD and Ceph FS for private cloud

2022-11-02 Thread Mevludin Blazevic
Hi all, I am planning to set up an RBD pool on my Ceph cluster for virtual machines created in my CloudStack environment. In parallel, a CephFS pool should be used as secondary storage for VM snapshots, ISOs, etc. Are there any performance issues when using both RBD and CephFS, or is it bett
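
For completeness, a sketch of creating the two workloads side by side -- they coexist as separate pools on the same OSDs, so any contention is for the same disks and network rather than between the protocols; pool names and PG count are illustrative:

    # RBD pool for CloudStack primary storage:
    ceph osd pool create cloudstack-rbd 128
    ceph osd pool application enable cloudstack-rbd rbd
    rbd pool init cloudstack-rbd

    # CephFS for the secondary storage (pools and MDS via the orchestrator):
    ceph fs volume create secondary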