[ceph-users] Re: unable to map device with krbd on el7 with ceph nautilus

2021-07-23 Thread Konstantin Shalygin
EL7 client is still compatible with Nautilus. > On 24 Jul 2021, at 00:58, cek+c...@deepunix.net wrote: > > Is that because the kernel module is too old?

[ceph-users] unable to map device with krbd on el7 with ceph nautilus

2021-07-23 Thread cek+ceph
Hi. I've followed the installation guide and got nautilus 14.2.22 running on el7 via https://download.ceph.com/rpm-nautilus/el7/x86_64/ yum repo. I'm now trying to map a device on an el7 and getting extremely weird errors: # rbd info test1/blk1 --name client.testing-rw rbd image 'blk1':
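
A common cause of failed krbd maps on EL7 is that the stock 3.10 kernel does not support all of the image features Nautilus enables by default. A minimal sketch of a check and workaround, reusing the pool/image/client names from the post above (which features actually need disabling depends on the rbd info output and the exact kernel version):

    # list the enabled image features; old EL7 kernels handle only a subset
    rbd info test1/blk1 --name client.testing-rw
    # disable the features the old krbd client cannot handle (adjust to what is enabled)
    rbd feature disable test1/blk1 object-map fast-diff deep-flatten --name client.testing-rw
    # retry the mapping and check kernel-side errors if it still fails
    rbd map test1/blk1 --name client.testing-rw
    dmesg | tail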

[ceph-users] OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-07-23 Thread Dave Piper
Hi all, We've got a containerized test cluster with 3 OSDs and ~ 220GiB of data. Shortly after upgrading from nautilus -> octopus, 2 of the 3 OSDs have started flapping. I've also got alarms about the MDS being damaged, which we've seen elsewhere and have a recovery process for, but I'm unable

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Gargano Andrea
Hi Dimitri, Thank you, I'll retry and I'll let you know on Monday. Andrea From: Dimitri Savineau Sent: Friday, July 23, 2021 5:35:22 PM To: Gargano Andrea Cc: ceph-users@ceph.io Subject: Re: [ceph-users]

[ceph-users] Re: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents available 113 GiB"

2021-07-23 Thread Igor Fedotov
Hi Dave, The following log line indicates that the allocator has just completed loading information about free disk blocks into memory, and it looks perfectly fine: _open_alloc loaded 132 GiB in 2930776 extents available 113 GiB The subsequent rocksdb shutdown looks weird without any other log
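
If the shutdown really happens without any preceding error, raising the BlueStore/BlueFS debug levels on the flapping OSDs before the next restart should capture more context. A sketch, assuming osd.1 is one of the flapping OSDs (adjust the id and revert the levels afterwards):

    ceph config set osd.1 debug_bluestore 20
    ceph config set osd.1 debug_bluefs 20
    ceph config set osd.1 debug_rocksdb 5
    # after capturing a crash, drop back to the defaults
    ceph config rm osd.1 debug_bluestore
    ceph config rm osd.1 debug_bluefs
    ceph config rm osd.1 debug_rocksdb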

[ceph-users] Re: Luminous won't fully recover

2021-07-23 Thread DHilsbos
Sean; These lines look bad: 14 scrub errors Reduced data availability: 2 pgs inactive Possible data damage: 8 pgs inconsistent osd.95 (root=default,host=hqosd8) is down I suspect you ran into a hardware issue with one or more drives in some of the servers that did not go offline. osd.95 is
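
For the inconsistent PGs, a typical first pass looks roughly like the following (a sketch only; the PG id is a placeholder, and the underlying cause, e.g. a failing drive, should be understood before repairing):

    ceph health detail                                      # lists the inconsistent PG ids
    rados list-inconsistent-obj <pgid> --format=json-pretty # shows which objects/shards disagree
    ceph pg repair <pgid>                                   # repair once the cause is clear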

[ceph-users] Re: Is there any way to obtain the maximum number of node failure in ceph without data loss?

2021-07-23 Thread Josh Baergen
Hi Jerry, In general, your CRUSH rules should define the behaviour you're looking for. Based on what you've stated about your configuration, after failing a single node or an OSD on a single node, then you should still be able to tolerate two more failures in the system without losing data (or
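
To make the arithmetic explicit (a rough sketch, assuming the CRUSH rule places at most one shard per host and using the size=11 / min_size=8 values from the original post):

    shards per PG:                   k + m = 8 + 3 = 11, at most 1 per host
    host failures without data loss: m = 3 (each failed host costs a PG at most 1 shard)
    host failures with PGs active:   size - min_size = 11 - 8 = 3, though at exactly 8
                                     remaining shards a PG has no redundancy left until
                                     recovery completes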

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-23 Thread Robert W. Eckert
Sorry for so many replies; this time ceph config set mon auth_expose_insecure_global_id_reclaim false seems to have stuck and I can access the Ceph drive from Windows now. -Original Message- From: Robert W. Eckert Sent: Friday, July 23, 2021 2:30 PM To: Konstantin Shalygin ; Lucian
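
A quick way to confirm the value actually persisted (a sketch; the option name is the one from the message above):

    ceph config get mon auth_expose_insecure_global_id_reclaim   # should now report false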

[ceph-users] Re: ceph-Dokan on windows 10 not working after upgrade to pacific

2021-07-23 Thread Robert W. Eckert
I am seeing the same thing; I think the build is pointing to the default branch, which is still 15.x From: Konstantin Shalygin Sent: Thursday, July 22, 2021 1:41 AM To: Lucian Petrut Cc: Robert W. Eckert ; ceph-users@ceph.io Subject: Re: [ceph-users] ceph-Dokan on windows 10 not working after

[ceph-users] Luminous won't fully recover

2021-07-23 Thread Shain Miley
We recently had a few Ceph nodes go offline which required a reboot. I have been able to get the cluster back to the state listed below; however, it does not seem like it will progress past the point of 23473/287823588 objects misplaced. Yesterday it was about 13% of the data that was

[ceph-users] OSD failed to load OSD map for epoch

2021-07-23 Thread Johan Hattne
Dear all; We have a 3-node cluster that has two OSDs on separate nodes, each with wal on NVMe. It's been running fine for quite some time, albeit under very light load. This week, we moved from package-based Octopus to container-based ditto (15.2.13, all on Debian stable). Within a few

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Ignazio Cassano
Thanks, Ignazio On Fri, 23 Jul 2021 at 18:29, Dimitri Savineau wrote: > It's probably better to create another thread for this instead of asking > on an existing one. > > Anyway, even if the documentation says `cluster_network` [1] then both > options work fine (with and without the

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Dimitri Savineau
It's probably better to create another thread for this instead of asking on an existing one. Anyway, even if the documentation says `cluster_network` [1] then both options work fine (with and without the underscore). And I'm pretty sure this applies to all config options. [1]
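
For reference, both spellings resolve to the same option, and the value can also be set in the central config database instead of ceph.conf. A sketch with a placeholder subnet:

    # ceph.conf, either form is accepted
    [global]
    cluster network = 192.168.100.0/24
    # or
    cluster_network = 192.168.100.0/24

    # or, equivalently, via the monitors' config store
    ceph config set global cluster_network 192.168.100.0/24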

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Ignazio Cassano
Hello, I want to ask if the correct config in ceph.conf for the cluster network is: cluster network = Or cluster_network = Thanks On Fri, 23 Jul 2021 at 17:36, Dimitri Savineau wrote: > Hi, > > This looks similar to https://tracker.ceph.com/issues/46687 > > Since you want to use hdd devices to

[ceph-users] Re: [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Dimitri Savineau
Hi, This looks similar to https://tracker.ceph.com/issues/46687 Since you want to use hdd devices for bluestore data and ssd devices for bluestore db, I would suggest using the rotational [1] filter instead of dealing with the size filter. --- service_type: osd service_id: osd_spec_default placement:
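
A sketch of the kind of drive group spec being suggested here, using the rotational filter to put data on the HDDs and the DB on the SSDs (the service_id, placement and file name are placeholders; it would be applied with ceph orch apply -i osd_spec.yml):

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1    # spinning disks take the bluestore data
    db_devices:
      rotational: 0    # SSDs take the bluestore db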

[ceph-users] [ceph] [pacific] cephadm cannot create OSD

2021-07-23 Thread Gargano Andrea
Hi all, we are trying to install ceph on ubuntu 20.04 but we are not able to create OSDs. Entering the cephadm shell we can see the following: root@tst2-ceph01:/# ceph -s cluster: id: 8b937a98-eb86-11eb-8509-c5c80111fd98 health: HEALTH_ERR Module 'cephadm' has failed: No

[ceph-users] Re: Where to find ceph.conf?

2021-07-23 Thread mabi
Oh I see, so I would simply create three totally new MON nodes on the new network and then add/integrate the OSD and MDS nodes? I have 3 dedicated OSD nodes and 2 MDS nodes doing just that. So this means that with cephadm there is a way to "import" or "adopt" OSD and MDS nodes? Is there any

[ceph-users] Re: Where to find ceph.conf?

2021-07-23 Thread mabi
Thank you Eugen for your answer. I missed the part about the monmap thingy and my previous thread somehow drifted off-topic. Regarding the monmap, is there any documentation somewhere on how to modify it for changing the IP addresses of all nodes? ‐‐‐ Original Message ‐‐‐ On Friday,
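
The documented procedure ("changing a monitor's IP address, the messy way") comes down to editing the monmap offline with monmaptool and injecting it back. A very rough sketch, assuming a mon id of "a" and a placeholder address; with a containerized deployment the commands need to run in the daemon's container context (e.g. via cephadm shell), and the mons should be stopped while the map is edited:

    ceph-mon -i a --extract-monmap /tmp/monmap          # export the map from a stopped mon
    monmaptool --print /tmp/monmap                      # inspect the current entries
    monmaptool --rm a /tmp/monmap                       # drop the entry with the old address
    monmaptool --add a 192.168.50.11:6789 /tmp/monmap   # re-add it with the new address
    ceph-mon -i a --inject-monmap /tmp/monmap           # write the edited map back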

[ceph-users] Where to find ceph.conf?

2021-07-23 Thread mabi
Hello, I need to change the IP addresses and domain name of my whole 8-node Octopus Ceph cluster and I was told on this list that for that purpose I need to manually adapt the ceph.conf file. Unfortunately there is no ceph.conf file on any of my nodes. So where can I find this file? Note

[ceph-users] Re: Files listed in radosgw BI but is not available in ceph

2021-07-23 Thread Boris Behrens
Hi Dan, hi Rafael, we found the issue. It was a cleanup script that didn't work correctly. Basically it removed files via rados and the bucket index didn't update. Thanks a lot for your help. (will also close the bug on the ceph tracker) On Fri, 23 Jul 2021 at 01:16, Rafael

[ceph-users] Re: Can't clear UPGRADE_REDEPLOY_DAEMON after fix

2021-07-23 Thread Arnaud MARTEL
OK. I found the answer (based on a previous discussion) and I was able to clear this warning using the following command: ceph orch restart mgr Arnaud - Original Message - From: "arnaud martel" To: "ceph-users" Sent: Thursday, 22 July 2021 16:20:43 Subject: [ceph-users] Can't clear
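
To confirm the mgr daemons came back and the warning cleared after the restart (a sketch):

    ceph orch ps --daemon-type mgr
    ceph health detail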

[ceph-users] Re: Cephadm: How to remove a stray daemon ghost

2021-07-23 Thread Kai Stian Olstad
On 22.07.2021 13:56, Kai Stian Olstad wrote: Hi I have a warning that says "1 stray daemon(s) not managed by cephadm" What I did is the following. I have 3 nodes that the mon should run on, but because of a bug in 16.2.4 I couldn't run it on them since they are in a different subnet. But this was

[ceph-users] Is there any way to obtain the maximum number of node failure in ceph without data loss?

2021-07-23 Thread Jerry Lee
Hello, I would like to know the maximum number of node failures for an EC8+3 pool in a 12-node cluster with 3 OSDs in each node. The size and min_size of the EC8+3 pool are configured as 11 and 8, and OSDs of each PG are selected by host. When there is no node failure, the maximum number of node

[ceph-users] Re: Limiting subuser to his bucket

2021-07-23 Thread Seena Fallah
If you are using S3 you can try to use a bucket policy: https://docs.ceph.com/en/latest/radosgw/bucketpolicy/ On Wed, Jul 21, 2021 at 6:28 PM Rok Jaklič wrote: > Hi, > > is it possible to limit access of the subuser so that he sees (read, write) > only "his" bucket? And also be able to create a
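
A minimal example of the kind of policy that grants one S3 user access to a single bucket (the uid and bucket name are placeholders; radosgw subusers are a Swift-side concept, so the principal here is a regular S3 user):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam:::user/testuser"]},
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::testbucket", "arn:aws:s3:::testbucket/*"]
      }]
    }

    # applied, for example, with s3cmd
    s3cmd setpolicy policy.json s3://testbucket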

[ceph-users] Re: Where to find ceph.conf?

2021-07-23 Thread Eugen Block
Hi, you can find the ceph.conf here: /var/lib/ceph/7bdffde0-623f-11eb-b3db-fa163e672db2/mon.ses7-host1/config If you edit that file and restart the container you'll see the changes. But as I wrote in your other thread, this won't be enough to migrate MONs to a different IP address, you
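
With a cephadm deployment the restart can also be driven through the orchestrator rather than by handling the container directly; a sketch, reusing the mon name from the path above:

    ceph orch daemon restart mon.ses7-host1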