[ceph-users] Ceph Octopus rbd images stuck in trash

2023-01-11 Thread Jeff Welling
Hello there, I'm running Ceph 15.2.17 (Octopus) on Debian Buster. I'm starting an upgrade, but I'm seeing a problem and wanted to ask how best to proceed, in case I make things worse by mucking with it without asking experts. I've moved an rbd image to the trash without clearing the
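A minimal sketch of the trash commands involved, assuming a pool named "rbd" (the pool name and image id below are placeholders):

    rbd trash ls rbd                       # list trashed images and their ids
    rbd -p rbd trash restore <image-id>    # move an image back out of the trash
    rbd -p rbd trash rm <image-id>         # permanently delete a trashed image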

[ceph-users] Ceph Octopus RGW 15.2.17 - files not available in rados while still in bucket index

2022-08-21 Thread Boris Behrens
Cheers everybody, I had this issue some time ago, and we thought it was fixed, but it seems to be happening again. We have files, uploaded by one of our customers, that are only available in the bucket index, but not in rados. At first we thought this might be a bug (https://tracker.ceph.com/issues/54528)
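A rough sketch of how the bucket index can be compared with what actually exists in rados (bucket, object and pool names are placeholders; default.rgw.buckets.data is only the default data pool name):

    radosgw-admin object stat --bucket=<bucket> --object=<object>   # what RGW's index/head metadata says
    rados -p default.rgw.buckets.data ls | grep <object>            # is a matching rados object present?
    radosgw-admin bucket check --bucket=<bucket>                    # consistency check of the bucket index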

[ceph-users] Ceph Octopus RGW - files vanished from rados while still in bucket index

2022-06-13 Thread Boris Behrens
Hi everybody, are there other ways for rados objects to get removed, other than "rados -p POOL rm OBJECT"? We have a customer who has objects in the bucket index but can't download them. After checking, it seems like the rados object is gone. The cluster is running Ceph Octopus 15.2.16
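Besides a manual "rados rm", RGW itself removes rados objects through garbage collection and lifecycle processing, so those are worth ruling out; a sketch (the bucket name is a placeholder):

    radosgw-admin gc list --include-all      # objects queued or processed by garbage collection
    radosgw-admin lc list                    # lifecycle processing status per bucket
    radosgw-admin bi list --bucket=<bucket>  # raw bucket index entries for comparison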

[ceph-users] Ceph Octopus on 'buster' - upgrades

2022-05-04 Thread Luke Hall
Hi, Looking to take our Octopus Ceph up to Pacific in the coming days. All the machines (physical - osd, mon, admin, meta) are running Debian 'buster' and the setup was originally done with ceph-deploy (~2016). Previously I've been able to upgrade the core OS, keeping the ceph packages at the
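For a package-based (non-cephadm) cluster the usual rolling-upgrade order is mons, then mgrs, then OSDs, then MDS/RGW. A rough per-node sketch, assuming the Pacific apt repository is already configured (repo setup and exact timing are left out):

    ceph osd set noout                     # avoid rebalancing while daemons restart
    apt update && apt dist-upgrade         # pull in the Pacific packages
    systemctl restart ceph-mon.target      # mons first, one node at a time
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target      # then OSDs, waiting for HEALTH_OK between nodes
    ceph osd unset noout
    ceph osd require-osd-release pacific   # only once every daemon is running Pacific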

[ceph-users] Ceph octopus v15.2.15-20220216 status

2022-04-21 Thread Dmitry Kvashnin
Does the v15.2.15-20220216 container include backports published since the release of v15.2.15-20211027? I'm interested in BACKPORT #53392 https://tracker.ceph.com/issues/53392, which was merged into the ceph:octopus branch on February 10th.

[ceph-users] ceph octopus lost RGW daemon, unable to add back due to HEALTH WARN

2021-07-21 Thread Ernesto O. Jacobs
I'm running an 11-node Ceph cluster running Octopus (15.2.8). I mainly run this as an RGW cluster, so I had 8 RGW daemons on 8 nodes. Currently I have 1 PG degraded and some misplaced objects, as I added a temporary node. Today I tried to expand the RGW daemons from 8 to 10; this didn't work as one
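A sketch of the orchestrator commands typically used to see why the extra daemons were not deployed (nothing here is specific to this cluster):

    ceph health detail               # what the HEALTH_WARN is actually about
    ceph orch ls rgw                 # the RGW service spec and how many daemons it expects
    ceph orch ps --daemon-type rgw   # which RGW daemons are actually running, and where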

[ceph-users] Ceph Octopus - How to customize the Grafana configuration

2021-06-10 Thread Ralph Soika
Hello, I have installed and bootstrapped a Ceph manager node via cephadm with the options: --initial-dashboard-user admin --initial-dashboard-password [PASSWORD] --dashboard-password-noupdate Everything works fine. I also have the Grafana board to monitor my cluster. But the access to
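For the dashboard-to-Grafana wiring specifically, a small sketch of the settings that are usually adjusted (the URL is a placeholder; these only control how the dashboard reaches Grafana, not Grafana's own configuration files):

    ceph dashboard set-grafana-api-url https://<grafana-host>:3000
    ceph dashboard set-grafana-api-ssl-verify False   # if Grafana uses a self-signed certificate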

[ceph-users] Ceph Octopus 15.2.11 - rbd diff --from-snap lists all objects

2021-05-12 Thread David Herselman
Hi, Has something changed with 'rbd diff' in Octopus, or have I hit a bug? I am no longer able to obtain the list of objects that have changed between two snapshots of an image; it always lists all allocated regions of the RBD image. This behaviour, however, only occurs when I add the
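For reference, a sketch of the two invocations being compared; the assumption that --whole-object is the flag in question is mine, since the excerpt is cut off (pool, image and snapshot names are placeholders):

    rbd diff --from-snap snap1 rbd/image@snap2                   # per-extent list of changed ranges
    rbd diff --from-snap snap1 rbd/image@snap2 --whole-object    # reports whole objects containing changes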

[ceph-users] ceph octopus mysterious OSD crash

2021-03-18 Thread Philip Brown
I've been banging on my ceph octopus test cluster for a few days now. 8 nodes. Each node has 2 SSDs and 8 HDDs. They were all autoprovisioned so that each HDD gets an LVM slice of an SSD as a db partition. service_type: osd service_id: osd_spec_default placement: host_pattern: '*'
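The spec quoted above is cut off; a typical drivegroup spec that puts data on the HDDs and the DB on the SSDs looks roughly like this (the rotational filters are an assumption about how the original spec selected devices):

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0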

[ceph-users] (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Redundancy, all PGs degraded, undersized, not scrubbed in time

2020-11-10 Thread seffyroff
I've inherited a Ceph Octopus cluster that seems like it needs urgent maintenance before data loss begins to happen. I'm the guy with the most Ceph experience on hand and that's not saying much. I'm experiencing most of the ops and repair tasks for the first time here. Ceph health output looks
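A sketch of the usual first-look commands for a degraded/undersized cluster like this:

    ceph health detail        # which PGs are degraded/undersized and why
    ceph osd tree             # are any OSDs down or out?
    ceph osd df               # per-OSD utilisation and PG counts
    ceph osd pool ls detail   # pool size/min_size, i.e. how much redundancy is expected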

[ceph-users] Ceph Octopus and Snapshot Schedules

2020-10-22 Thread Adam Boyhan
Hey all. I was wondering if Ceph Octopus is capable of automating/managing snapshot creation/retention and then replication? I've seen some notes about it, but can't seem to find anything solid. Open to suggestions as well. Appreciate any input!
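Octopus does ship snapshot-based RBD mirroring with built-in scheduling; a minimal sketch, with pool/image names and the interval as placeholders and the peer bootstrap step omitted:

    rbd mirror image enable rbd/myimage snapshot              # per-image snapshot-based mirroring
    rbd mirror snapshot schedule add --pool rbd --image myimage 1h
    rbd mirror snapshot schedule ls --recursive               # verify the schedules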

[ceph-users] ceph octopus centos7, containers, cephadm

2020-10-20 Thread Marc Roos
I am running Nautilus on centos7. Does Octopus run similarly to Nautilus, i.e.: - runs on el7/centos7 - runs without containers by default - runs without cephadm by default

[ceph-users] Ceph Octopus

2020-10-19 Thread Amudhan P
Hi, I have installed a Ceph Octopus cluster using cephadm with a single network; now I want to add a second network and configure it as the cluster address. How do I configure Ceph to use the second network as the cluster network? Amudhan
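A sketch of how the cluster network is normally set on a running cluster (the subnet is a placeholder, and the OSDs only pick the setting up after a restart):

    ceph config set global cluster_network 192.168.100.0/24
    ceph config get osd cluster_network     # verify the value the OSDs will see
    # then restart the OSD daemons, e.g. ceph orch daemon restart osd.<id>, one at a time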

[ceph-users] [Ceph Octopus 15.2.3 ] MDS crashed suddently and failed to replay journal after restarting

2020-10-05 Thread carlimeunier
Hello, the MDS process crashed suddenly. After trying to restart it, it failed to replay the journal and started restarting continually. Just to summarize, here is what happened: 1/ The cluster is up and running with 3 nodes (mon and mds on the same nodes) and 3 OSDs. 2/ After a few days, 2
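The usual first steps from the CephFS disaster-recovery procedure, as a heavily hedged sketch because some of these are destructive (the filesystem name is a placeholder; always export the journal before modifying anything):

    cephfs-journal-tool --rank=<fs_name>:0 journal inspect            # is the journal actually damaged?
    cephfs-journal-tool --rank=<fs_name>:0 journal export backup.bin  # back it up first
    cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary
    cephfs-journal-tool --rank=<fs_name>:0 journal reset              # destructive, last resort only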

[ceph-users] [Ceph Octopus 15.2.3 ] MDS crashed suddenly

2020-07-20 Thread carlimeunier
Hi, I made a fresh install of Ceph Octopus 15.2.3 recently. After a few days, the 2 standby MDS suddenly crashed with a segmentation fault. I tried to restart them but they do not start. Here is the error: -20> 2020-07-17T13:50:27.888+ 7fc8c6c51700 10 monclient: _renew_subs -19>
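A sketch of how to pull the full crash report out of the cluster, which is usually what a tracker ticket needs (the crash id is a placeholder):

    ceph crash ls               # list recent daemon crashes recorded by the mgr crash module
    ceph crash info <crash-id>  # full backtrace and metadata for one crash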

[ceph-users] ceph octopus OSDs won't start with docker

2020-05-07 Thread Sean Johnson
I have a seemingly strange situation. I have three OSDs that I created with Ceph Octopus using the `ceph orch daemon add :device` command. All three were added and everything was great. Then I rebooted the host. Now the daemons won't start via Docker. When I attempt to run the `docker` command
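A sketch of the usual places to look when cephadm-managed OSDs do not come back after a reboot (the OSD id and cluster fsid are placeholders):

    cephadm ls                                      # what cephadm thinks is deployed on this host
    systemctl status ceph-<fsid>@osd.<id>.service   # the systemd unit that wraps the container
    cephadm logs --name osd.<id>                    # journal/container logs for that daemon
    ceph orch daemon restart osd.<id>               # ask the orchestrator to restart it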