[ceph-users] Re: Uninstall ceph rgw

2024-03-05 Thread Robert Sander
as created. They usually have "rgw" in their name. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlottenburg - HRB 220009 B Geschäftsführer: Peer Hei

[ceph-users] Re: PGs with status active+clean+laggy

2024-03-05 Thread Robert Sander
Hi, On 3/5/24 13:05, ricardom...@soujmv.com wrote: I have a ceph quincy cluster with 5 nodes currently. But only 3 with SSDs. Do not mix HDDs and SSDs in the same pool. Regards -- Robert Sander

[ceph-users] Re: OSD does not die when disk has failures

2024-03-19 Thread Robert Sander
Hi, On 3/19/24 13:00, Igor Fedotov wrote: translating EIO to upper layers rather than crashing an OSD is a valid default behavior. One can alter this by setting the bluestore_fail_eio parameter to true. What benefit lies in this behavior when in the end client IO stalls? Regards -- Robert Sander

[ceph-users] Re: Upgrading from Reef v18.2.1 to v18.2.2

2024-03-21 Thread Robert Sander
Hi, On 3/21/24 14:50, Michael Worsham wrote: Now that Reef v18.2.2 has come out, is there a set of instructions on how to upgrade to the latest version via using Cephadm? Yes, there is: https://docs.ceph.com/en/reef/cephadm/upgrade/ Regards -- Robert Sander
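The linked documentation boils down to a couple of orchestrator commands; a minimal sketch (exact flags should be checked against the docs for your release):

```shell
# Confirm the cluster is healthy before starting the upgrade
ceph -s
# Start a cephadm-managed upgrade to the target release
ceph orch upgrade start --ceph-version 18.2.2
# Watch the progress
ceph orch upgrade status
```

cephadm upgrades the daemons in a safe order (mgr, mon, osd, ...) and pauses automatically if the cluster becomes unhealthy.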

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-25 Thread Robert Sander
pany running Debian since before then you have user IDs and group IDs in the range 500 - 1000. Regards -- Robert Sander

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Robert Sander
ilover and the NFS client cannot be "load balanced" to another backend NFS server. There is no use in configuring an ingress service currently without failover. The NFS clients have to remount the NFS share in case their current NFS server dies anyway. Regards -- Robert Sander

[ceph-users] Re: ceph recipe for nfs exports

2024-04-25 Thread Robert Sander
concept of "pseudo path" This is an NFSv4 concept. It allows mounting a virtual root of the NFS server and accessing all exports below it without having to mount each one separately. Regards -- Robert Sander

[ceph-users] Re: Add node-exporter using ceph orch

2024-04-26 Thread Robert Sander
and its placement strategy. What does your node-exporter service look like? ceph orch ls node-exporter --export Regards -- Robert Sander

[ceph-users] Re: Add node-exporter using ceph orch

2024-04-26 Thread Robert Sander
: '*' If you apply this YAML code the orchestrator should deploy one node-exporter daemon to each host of the cluster. Regards -- Robert Sander
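The quoted spec fragment ending in `host_pattern: '*'` presumably looked roughly like this (a sketch; the service name is assumed):

```shell
# Apply a node-exporter spec that places one daemon on every host
ceph orch apply -i - <<EOF
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
EOF
```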

[ceph-users] Ceph Squid released?

2024-04-28 Thread Robert Sander
Hi, https://www.linuxfoundation.org/press/introducing-ceph-squid-the-future-of-storage-today Does the LF know more than the mailing list? Regards -- Robert Sander

[ceph-users] Re: Ceph Squid released?

2024-04-29 Thread Robert Sander
members and tiers and to sound the marketing drums a bit. :) The Ubuntu 24.04 release notes also claim that this release comes with Ceph Squid: https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890 Regards -- Robert Sander

[ceph-users] Re: Ceph Squid released?

2024-04-29 Thread Robert Sander
On 4/29/24 09:36, Alwin Antreich wrote: Who knows. I don't see any packages on download.ceph.com for Squid. Ubuntu has them: https://packages.ubuntu.com/noble/ceph Regards -- Robert Sander

[ceph-users] Re: ceph recipe for nfs exports

2024-04-29 Thread Robert Sander
e to write to the CephFS at first. Set squash to "no_root_squash" to be able to write as root to the NFS share. Create a directory and change its permissions to someone else. Regards -- Robert Sander
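A sketch of creating such an export with squash disabled (the cluster id, pseudo path, and filesystem name here are hypothetical):

```shell
# Export a CephFS path; no_root_squash lets root on the client write as root
ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /data \
    --fsname myfs --squash no_root_squash
```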

[ceph-users] Re: Remove failed OSD

2024-05-04 Thread Robert Sander
per https://docs.ceph.com/en/reef/cephadm/services/osd/#remove-an-osd This will make sure that the OSD is not needed any more (data is drained etc). Regards -- Robert Sander

[ceph-users] MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Robert Sander
, ceph::buffer::v15_2_0::list&, int)+0x290) [0x5614ac87ff90] 13: (MDSContext::complete(int)+0x5f) [0x5614aca41f4f] 14: (MDSIOContextBase::complete(int)+0x534) [0x5614aca426e4] 15: (Finisher::finisher_thread_entry()+0x18d) [0x7f1930b7884d] 16: /lib64/libpthread.so.0(+0x81ca)

[ceph-users] Re: MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Robert Sander
Hi, would an update to 18.2 help? Regards -- Robert Sander

[ceph-users] Re: MDS crash in interval_set: FAILED ceph_assert(p->first <= start)

2024-05-10 Thread Robert Sander
On 5/9/24 07:22, Xiubo Li wrote: We are discussing the same issue in slack thread https://ceph-storage.slack.com/archives/C04LVQMHM9B/p1715189877518529. Why is there a discussion about a bug off-list on a proprietary platform? Regards -- Robert Sander

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Robert Sander
t them to noout and will try to move other services away from the host if possible. Regards -- Robert Sander

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Robert Sander
On 5/16/24 17:50, Robert Sander wrote: cephadm osd activate HOST would re-activate the OSDs. Small but important typo: It's ceph cephadm osd activate HOST Regards -- Robert Sander

[ceph-users] Re: We are using ceph octopus environment. For client can we use ceph quincy?

2024-05-29 Thread Robert Sander
On 5/27/24 09:28, s.dhivagar@gmail.com wrote: We are using ceph octopus environment. For client can we use ceph quincy? Yes. -- Robert Sander

[ceph-users] Re: Rebalance OSDs after adding disks?

2024-05-29 Thread Robert Sander
On 5/30/24 08:53, tpDev Tester wrote: Can someone please point me to the docs how I can expand the capacity of the pool without such problems. Please show the output of ceph status ceph df ceph osd df tree ceph osd crush rule dump ceph osd pool ls detail Regards -- Robert Sander

[ceph-users] How to setup NVMeoF?

2024-05-30 Thread Robert Sander
available? Regards -- Robert Sander

[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Robert Sander
Hi, On 5/30/24 11:58, Robert Sander wrote: I am trying to follow the documentation at https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an NVMe over Fabric service. It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image quay.io/ceph/nvmeof:0.0.2

[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Robert Sander
3:59:49.678809906+00:00", grpc_status:12, grpc_message:"Method not found!"}" Is this not production ready? Why is it in the documentation for a released Ceph version? Regards -- Robert Sander

[ceph-users] How to create custom container that exposes a listening port?

2024-05-31 Thread Robert Sander
ces/#extra-container-arguments Regards -- Robert Sander

[ceph-users] Re: How to create custom container that exposes a listening port?

2024-05-31 Thread Robert Sander
On 5/31/24 16:07, Robert Sander wrote: extra_container_args: - "--publish 8080/tcp" Never mind, in the custom container service specification it's "args", not "extra_container_args". Regards -- Robert Sander
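For reference, a custom container spec passing the publish option via `args` might look like this (the service id and image are made up for illustration):

```shell
ceph orch apply -i - <<EOF
service_type: container
service_id: mycontainer                   # hypothetical name
placement:
  count: 1
spec:
  image: docker.io/library/nginx:latest   # hypothetical image
  args:
    - "--publish=8080:80"
EOF
```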

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Robert Sander
multiple block devices and for the orchestrator they are completely separate. Regards -- Robert Sander

[ceph-users] Re: Update OS with clean install

2024-06-04 Thread Robert Sander
to do these: * Set host in maintenance mode * Reinstall host with newer OS * Configure host with correct settings (for example cephadm user SSH key etc.) * Unset maintenance mode for the host * For OSD hosts run ceph cephadm osd activate Regards -- Robert Sander
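The steps above can be sketched as the following command sequence (host name "node01" is hypothetical):

```shell
# Put the host into maintenance mode before the reinstall
ceph orch host maintenance enter node01
# ... reinstall the OS, recreate the cephadm SSH user and key ...
ceph orch host maintenance exit node01
# Re-activate the existing OSDs on the reinstalled host
ceph cephadm osd activate node01
```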

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Robert Sander
partition table or logical volume signatures. Regards -- Robert Sander

[ceph-users] Re: Ceph crash :-(

2024-06-13 Thread Robert Sander
would not use Ceph packages shipped from a distribution but always the ones from download.ceph.com or even better the container images that come with the orchestrator. Which version do your other Ceph nodes run on? Regards -- Robert Sander

[ceph-users] Re: Ceph crash :-(

2024-06-13 Thread Robert Sander
pgrade the Ceph packages. download.ceph.com has packages for Ubuntu 22.04 and nothing for 24.04. Therefore I would assume Ubuntu 24.04 is not a supported platform for Ceph (unless you use the cephadm orchestrator and containers). BTW: Please keep the discussion on the mailing list. Regards -- Robert Sander

[ceph-users] Re: Slow down RGW updates via orchestrator

2024-06-26 Thread Robert Sander
Hi, On 6/26/24 11:49, Boris wrote: Is there a way to only update 1 daemon at a time? You can use the feature "staggered upgrade": https://docs.ceph.com/en/reef/cephadm/upgrade/#staggered-upgrade Regards -- Robert Sander
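Per the linked staggered-upgrade documentation, the upgrade can be restricted by daemon type and count; a sketch (image tag assumed):

```shell
# Upgrade only the RGW daemons, at most one at a time
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2 \
    --daemon-types rgw --limit 1
```

Repeating the command with other `--daemon-types` (or `--hosts`/`--services`) walks the rest of the cluster through the upgrade in stages.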

[ceph-users] Re: cannot delete service by ceph orchestrator

2024-06-29 Thread Robert Sander
create any new OSDs. Regards -- Robert Sander

[ceph-users] use of db_slots in DriveGroup specification?

2024-07-10 Thread Robert Sander
/thread/6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC/#6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC Shouldn't db_slots make that easier? Is this a bug in the orchestrator? Regards -- Robert Sander

[ceph-users] Re: Use of db_slots in DriveGroup specification?

2024-07-11 Thread Robert Sander
Hi, On 7/11/24 09:01, Eugen Block wrote: apparently, db_slots is still not implemented. I just tried it on a test cluster with 18.2.2: I am thinking about a PR to correct the documentation. Regards -- Robert Sander

[ceph-users] Re: cephadm for Ubuntu 24.04

2024-07-12 Thread Robert Sander
uggest to use Ubuntu 22.04 LTS as the base operating system. You can use cephadm on top of that without issues. Regards -- Robert Sander

[ceph-users] Re: Cephadm has a small wart

2024-07-19 Thread Robert Sander
sed on CentOS 8. When you execute "cephadm shell" it starts a container with that image for you. Regards -- Robert Sander

[ceph-users] Re: How to specify id on newly created OSD with Ceph Orchestrator

2024-07-22 Thread Robert Sander
On 7/23/24 08:24, Iztok Gregori wrote: Am I missing something obvious or with Ceph orchestrator is there no way to specify an id during the OSD creation? Why would you want to do that? A new OSD always gets the lowest available ID. Regards -- Robert Sander

[ceph-users] Re: Bluestore issue using 18.2.2

2024-08-05 Thread Robert Sander
Hi Marianne, is there anything in the kernel logs of the VMs and the hosts where the VMs are running with regard to the VM storage? Regards -- Robert Sander

[ceph-users] Re: Pull failed on cluster upgrade

2024-08-05 Thread Robert Sander
On 05.08.24 18:38, Nicola Mori wrote: docker.io/snack14/ceph-wizard This is not an official container image. The images from the Ceph project are on quay.io/ceph/ceph. Regards -- Robert Sander

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Robert Sander
rge number of nodes (more than 10) and a proportional number of OSDs. Mixed HDDs and SSDs in one pool is not good practice as a pool should have OSDs of the same speed. Kindest Regards -- Robert Sander

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Robert Sander
On 11.11.20 at 13:05, Hans van den Bogert wrote: > And also the erasure coded profile, so an example on my cluster would be: > > k=2 > m=1 With this profile you can only lose one OSD at a time, which is really not that redundant. Regards -- Robert Sander

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-17 Thread Robert Sander
ot=default k=2 m=2 You need k+m=4 independent hosts for the EC parts, but your CRUSH map only shows two hosts. This is why all your PGs are undersized and degraded. Regards -- Robert Sander
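The profile under discussion would be created roughly like this (profile and pool names are made up; the pg numbers are a placeholder):

```shell
# A k=2, m=2 profile with host failure domain requires at least 4 hosts,
# because each of the k+m chunks must land on a different host
ceph osd erasure-code-profile set ec-2-2 k=2 m=2 \
    crush-failure-domain=host crush-root=default
ceph osd pool create ecpool 32 32 erasure ec-2-2
```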

[ceph-users] Re: Ceph on ARM ?

2020-11-24 Thread Robert Sander
com.tw/ Regards -- Robert Sander

[ceph-users] Re: Clearing contents of OSDs without removing them?

2020-12-19 Thread Robert Sander
ls also removes the objects and you can start anew. Regards -- Robert Sander

[ceph-users] bluefs_buffered_io=false performance regression

2021-01-11 Thread Robert Sander
[truncated benchmark results table] Regards -- Robert Sander

[ceph-users] Re: bluefs_buffered_io=false performance regression

2021-01-11 Thread Robert Sander
Hi Marc and Dan, thanks for your quick responses assuring me that we did nothing totally wrong. Regards -- Robert Sander

[ceph-users] Python API mon_comand()

2021-01-15 Thread Robert Sander
"name":"rbd","id":1,"stats":{"stored":27410520278,"objects":6781,"kb_used":80382849,"bytes_used":82312036566,"percent_used":0.1416085809469223,"max_avail":166317473792}},{"name":"cephfs_data",

[ceph-users] Re: Large rbd

2021-01-21 Thread Robert Sander
nked together using lvm or somesuch? What are the tradeoffs? IMHO there are no tradeoffs, there could even be benefits creating a volume group with multiple physical volumes on RBD as the requests can be better parallelized (i.e. virtio-single SCSI controller for qemu). Regards -- Robert Sander

[ceph-users] Re: Unable to use ceph command

2021-01-29 Thread Robert Sander
(error connecting to the cluster) This issue is mostly caused by not having a readable ceph.conf and ceph.client.admin.keyring file in /etc/ceph for the user that starts the ceph command. Regards -- Robert Sander

[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-04 Thread Robert Sander
Hi, On 04.02.21 at 12:10, Frank Schilder wrote: > Going to 2+2 EC will not really help On such a small cluster you cannot even use EC because there are not enough independent hosts. As a rule of thumb there should be k+m+1 hosts in a cluster AFAIK. Regards -- Robert Sander

[ceph-users] Re: firewall config for ceph fs client

2021-02-10 Thread Robert Sander
in the cluster. You need ports 3300 and 6789 for the MONs on their IPs and any dynamic port starting at 6800 used by the OSDs. The MDS also uses a port above 6800. Regards -- Robert Sander

[ceph-users] Re: firewall config for ceph fs client

2021-02-10 Thread Robert Sander
On 10.02.21 at 15:54, Frank Schilder wrote: > Which ports are the clients using - if any? All clients only have outgoing connections and do not listen on any ports themselves. The Ceph cluster will not initiate a connection to the client. Kindest Regards -- Robert Sander

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
0G bonded interfaces in the cluster network? I would assume that you would want to go at least 2x 25G here. Regards -- Robert Sander

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
On 10.03.21 at 20:44, Ignazio Cassano wrote: > 1 small ssd is for operations system and 1 is for mon. Make that a RAID1 set of SSDs and be happier. ;) Regards -- Robert Sander

[ceph-users] Re: How big an OSD disk could be?

2021-03-12 Thread Robert Sander
On 12.03.21 at 18:30, huxia...@horebdata.cn wrote: > Any other aspects on the limits of bigger capacity hard disk drives? Recovery will take longer, increasing the risk of another failure during that time. Regards -- Robert Sander

[ceph-users] Re: lvm fix for reseated reseated device

2021-03-15 Thread Robert Sander
ready rebooted the box so I won't be able to > test immediately.) My experience with LVM is that only a reboot helps in this situation. Regards -- Robert Sander

[ceph-users] OpenSSL security update for Octopus container?

2021-03-26 Thread Robert Sander
check docker.io/ceph/ceph:v15" but it tells me that the containers do not need to be upgraded. How will this security fix of OpenSSL be deployed in a timely manner to users of the Ceph container images? Regards -- Robert Sander

[ceph-users] Re: Is metadata on SSD or bluestore cache better?

2021-04-05 Thread Robert Sander
B volumes and one OSD on each SSD. HDD-only OSDs are quite slow. If you do not have enough SSDs for them go with an SSD-only cephfs metadata pool. Regards -- Robert Sander

[ceph-users] Pacific unable to configure NFS-Ganesha

2021-04-05 Thread Robert Sander
pected condition which prevented it from fulfilling the request.", "request_id": "e89b8519-352f-4e44-a364-6e6faf9dc533"} '] I have no r

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-05 Thread Robert Sander
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed > to start datalog_rados service ((5) Input/output error > bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed > to init services (ret=(5) Input/output error) I see the same issues on a

[ceph-users] Re: Pacific unable to configure NFS-Ganesha

2021-04-05 Thread Robert Sander
Hi, I forgot to mention that CephFS is enabled and working. Regards -- Robert Sander

[ceph-users] Re: Problem using advanced OSD layout in octopus

2021-04-06 Thread Robert Sander
Hi, The DB device needs to be empty for an automatic OSD service. The service will then create N db slots using logical volumes and not partitions. Regards -- Robert Sander

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-12 Thread Robert Sander
So when you have a Ceph cluster with Rados-Gateways you should not upgrade to Pacific currently. Regards -- Robert Sander

[ceph-users] Re: cephadm custom mgr modules

2021-04-12 Thread Robert Sander
Hi, this is one of the use cases mentioned in Tim Serong's talk: https://youtu.be/pPZsN_urpqw Containers are great for deploying a fixed state of a software project (a release), but not so much for the development of plugins etc. Regards -- Robert Sander

[ceph-users] ceph orch upgrade fails when pulling container image

2021-04-21 Thread Robert Sander
Hi, # docker pull ceph/ceph:v16.2.1 Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit How do I update a Ceph cluster in this situation? Regards -- Robert Sander

[ceph-users] Re: ceph orch upgrade fails when pulling container image

2021-04-21 Thread Robert Sander
Hi, On 21.04.21 at 10:14, Robert Sander wrote: > How do I update a Ceph cluster in this situation? I learned that I need to create an account on the website hub.docker.com to be able to download Ceph container images in the future. With the credentials I need to run "docker login"

[ceph-users] After upgrade to 15.2.11 no access to cluster any more

2021-04-22 Thread Robert Sander
ied (error connecting to the cluster) What should I do? Regards -- Robert Sander

[ceph-users] Re: After upgrade to 15.2.11 no access to cluster any more

2021-04-22 Thread Robert Sander
On 22.04.21 at 09:07, Robert Sander wrote: > What should I do? I should also upgrade the CLI client which still was at 15.2.8 (Ubuntu 20.04) because a "ceph orch upgrade" run only updates the software inside the containers. Regards -- Robert Sander

[ceph-users] Download-Mirror eu.ceph.com misses Debian Release file

2021-04-22 Thread Robert Sander
Hi, to whomever it may concern: The mirror server eu.ceph.com does not carry the Release files for 15.2.11 in https://eu.ceph.com/debian-15.2.11/dists/*/ and 16.2.1 in https://eu.ceph.com/debian-16.2.1/dists/*/ Regards -- Robert Sander

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
h map. It looks like the OSD is the failure zone, and not the host. If it were the host, the failure of any number of OSDs in a single host would not bring PGs down. For the default redundancy rule and pool size 3 you need three separate hosts. Regards -- Robert Sander

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
the mds suffer when only 4% of the osd goes > down (in the same node). I need to modify the crush map? With an unmodified crush map and the default placement rule this should not happen. Can you please show the output of "ceph osd crush rule dump"? Regards -- Robert Sander

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
ill lead to data loss or at least intermediate unavailability. The situation is now that all copies (resp. EC chunks) for a PG are stored on OSDs of the same host. These PGs will be unavailable if the host is down. Regards -- Robert Sander

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-06 Thread Robert Sander
On 06.05.21 at 17:18, Sage Weil wrote: > I hit the same issue. This was a bug in 16.2.0 that wasn't completely > fixed, but I think we have it this time. Kicking off a 16.2.3 build > now to resolve the problem. Great. I also hit that today. Thanks for fixing it quickly. Regards -- Robert Sander

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-07 Thread Robert Sander
I had success with stopping the "looping" mgr container via "systemctl stop" on the node. Cephadm then switches to another MGR to continue the upgrade. After that I just started the stopped mgr container and the upgrade continued. Regards -- Robert Sander

[ceph-users] Re: Failover with 2 nodes

2021-06-15 Thread Robert Sander
On 15.06.21 15:16, nORKy wrote: > Why is there no failover ?? Because one MON out of two is not a majority and cannot form a quorum. Regards -- Robert Sander

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Robert Sander
could theoretically RAID0 multiple disks and then put an OSD on top of that but this would create very large OSDs which are not good for recovering data. Recovering such a "beast" would just take too long. Regards -- Robert Sander

[ceph-users] Re: pacific installation at ubuntu 20.04

2021-06-24 Thread Robert Sander
ssing between these two steps. The first creates /etc/apt/sources.list.d/ceph.list and the second installs packages, but the repo list was never updated. Regards -- Robert Sander

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-26 Thread Robert Sander
lding and hosting for open source projects is solved with the openSUSE build service: https://build.opensuse.org/ But I think what Sage meant was e.g. different versions of GCC on the distributions and not being able to use all the latest features needed for compiling Ceph. Regards -- Robert Sander

[ceph-users] Unhandled exception from module 'devicehealth' while running on mgr.al111: 'NoneType' object has no attribute 'get'

2021-06-30 Thread Robert Sander
30 16:07:09 al111 bash[171790]: File "/usr/share/ceph/mgr/devicehealth/module.py", line 33, in get_ata_wear_level Jun 30 16:07:09 al111 bash[171790]: if page.get("number") != 7: Jun 30 16:07:09 al111 bash[171790]: AttributeError: 'NoneType' object has no attribute '

[ceph-users] RocksDB resharding does not work

2021-07-08 Thread Robert Sander
8 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+ 7efc32db4080 -1 ** ERROR: osd init failed: (5) Input/output error How do I correct the issue? Regards -- Robert Sander

[ceph-users] Re: Size of cluster

2021-08-09 Thread Robert Sander
have 3 nodes with each 5x 12TB (60TB) and 2 nodes with each 4x 18TB (72TB) the maximum usable capacity will not be the sum of all disks. Remember that Ceph tries to evenly distribute the data. Regards -- Robert Sander

[ceph-users] Re: Ceph Pacific mon is not starting after host reboot

2021-08-10 Thread Robert Sander
daemons (outside of osds I believe) from offline hosts. Sorry for maybe being rude but how on earth does one come up with the idea to automatically remove components from a cluster where just one node is currently rebooting without any operator interference? Regards -- Robert Sander

[ceph-users] Re: How to safely turn off a ceph cluster

2021-08-11 Thread Robert Sander
h cluster? ceph osd set noout and after the cluster has been booted again and every OSD joined: ceph osd unset noout Regards -- Robert Sander

[ceph-users] Re: A simple erasure-coding question about redundance

2021-08-27 Thread Robert Sander
heavy. Regards -- Robert Sander

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
of block devices with the same size distribution in each node you will get an even data distribution. If you have a node with 4 3TB drives and one with 4 6TB drives Ceph cannot use the 6TB drives efficiently. Regards -- Robert Sander

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
w the data distribution among the OSDs. Are all of these HDDs? Are these HDDs equipped with RocksDB on SSD? HDD only will have abysmal performance.

[ceph-users] Re: Performance optimization

2021-09-07 Thread Robert Sander
ll be faster, to write it to just one ssd, instead of writing it to the disk directly. Usually one SSD carries the WAL and RocksDB of four to five HDD-OSDs.
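As a back-of-the-envelope helper for the four-to-five HDDs per SSD ratio mentioned above (the function name and default ratio are illustrative, not a Ceph tool):

```shell
# How many DB/WAL SSDs are needed for a given number of HDD OSDs,
# assuming each SSD serves "ratio" HDDs (default 5, per the advice above).
ssds_needed() {
  local hdds=$1 ratio=${2:-5}
  # ceiling division: round up so no HDD is left without an SSD partition
  echo $(( (hdds + ratio - 1) / ratio ))
}

ssds_needed 12   # prints 3
```

Keep in mind that if such a shared SSD fails, all OSDs using it fail with it, so the ratio is also a blast-radius decision.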

[ceph-users] Re: SSDs/HDDs in ceph Octopus

2021-09-10 Thread Robert Sander
Pools should have a uniform class of storage.
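The usual way to enforce a uniform storage class per pool is a device-class CRUSH rule; a sketch (the rule and pool names here are examples):

```shell
# One CRUSH rule per device class, failure domain "host":
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd

# Assign each pool to the rule matching its intended class:
ceph osd pool set fast-pool crush_rule replicated-ssd
ceph osd pool set bulk-pool crush_rule replicated-hdd
```

With rules like these, SSD and HDD OSDs can live in the same cluster without any pool mixing the two classes.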

[ceph-users] Re: Ignore Ethernet interface

2021-09-13 Thread Robert Sander
this. The Linux kernel will happily answer ARP requests on any interface for the IPs it has configured anywhere. That means you have a constant ARP flapping in your network. Make the three interfaces bonded and configure all three IPs on the bonded interface.
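A bonding setup like the one suggested here could look as follows with netplan (interface names, bond mode, and addresses are placeholders; 802.3ad needs matching switch configuration, otherwise active-backup is a safe fallback):

```yaml
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2, eno3]
      parameters:
        mode: 802.3ad
      addresses:
        - 192.0.2.10/24
        - 198.51.100.10/24
        - 203.0.113.10/24
```

With all addresses on bond0 there is only one interface answering ARP, so the flapping described above goes away.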

[ceph-users] Re: Ignore Ethernet interface

2021-09-14 Thread Robert Sander
work as the same IP subnet cannot span multiple broadcast domains.

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
Hi, I had to run ceph fs set cephfs max_mds 1 ceph fs set cephfs allow_standby_replay false and stop all MDS and NFS containers and start one after the other again to clear this issue.
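The recovery steps from this reply as a command sketch (the filesystem name cephfs and a cephadm-managed deployment are assumptions):

```shell
# Go back to a single active MDS and disable standby-replay:
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false

# Then restart the MDS (and NFS) daemons one after the other,
# e.g. with "ceph orch daemon restart <daemon-name>" for each
# daemon listed by "ceph orch ps".
```

Restarting the daemons sequentially rather than all at once gives each standby a chance to take over cleanly.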

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
I just run ceph orch upgrade start Why does the orchestrator not run the necessary steps?

[ceph-users] Re: Cluster downtime due to unsynchronized clocks

2021-09-23 Thread Robert Sander
use chrony or ntpd.

[ceph-users] Re: How you loadbalance your rgw endpoints?

2021-09-27 Thread Robert Sander
s with the number of clients (kubernetes nodes) Nice hack. But why not establish a DNS name that points to 127.0.0.1? Why the hassle with iptables?

[ceph-users] Re: Local NTP servers on monitor node's.

2021-12-07 Thread Robert Sander
On 2021-12-08 at 02:34, mhnx wrote: - Sometimes NTP servers can respond but systemd-timesyncd can not sync the time without manual help. Just my 2¢: Do not use systemd-timesyncd.

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Robert Sander
implementations, it will simplify the user experience for those heavily relying on NFS exports. This change is introduced in a point release? After upgrading a cluster all NFS shares have to be configured again and in the meantime NFS services do not work. Not so great IMHO.

[ceph-users] CEPHADM_STRAY_DAEMON with iSCSI service

2021-12-08 Thread Robert Sander

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Robert Sander
".nfs". Why has the feature to configure a specific cephx key been removed? What key is now used by nfs-ganesha to access the CephFS?
