[ceph-users] ceph orch device ls extents

2022-07-08 Thread Curt
Hello,

Ran into an interesting error today, and I'm not sure of the best way to fix it.
When I run 'ceph orch device ls', I get the following error on every HDD:
"Insufficient space (<10 extents) on vgs, LVM detected, locked".
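For reference, I believe the extent counts that check looks at are the ones
reported by the standard LVM tools, e.g. (nothing cluster-specific assumed):

$ sudo vgs -o vg_name,vg_size,vg_free,vg_extent_count,vg_free_count
$ sudo vgdisplay ceph-efb83a91-3c7b-4329-babc-017b0a00e95a | grep -i PE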

Here's the output of ceph-volume lvm list, in case it helps:
====== osd.0 =======

  [block]       /dev/ceph-efb83a91-3c7b-4329-babc-017b0a00e95a/osd-block-b017780d-38f9-4da7-b9df-2da66e1aa0fd

      block device              /dev/ceph-efb83a91-3c7b-4329-babc-017b0a00e95a/osd-block-b017780d-38f9-4da7-b9df-2da66e1aa0fd
      block uuid                8kIdfD-kQSh-Mhe4-zRIL-b1Pf-PTaC-CVosbE
      cephx lockbox secret
      cluster fsid              1684fe88-aae0-11ec-9593-df430e3982a0
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  b017780d-38f9-4da7-b9df-2da66e1aa0fd
      osd id                    0
      osdspec affinity          dashboard-admin-1648152609405
      type                      block
      vdo                       0
      devices                   /dev/sdb

====== osd.10 ======

  [block]       /dev/ceph-a0e85035-cfe2-4070-b58a-a88ec964794c/osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a

      block device              /dev/ceph-a0e85035-cfe2-4070-b58a-a88ec964794c/osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a
      block uuid                gvvrMV-O98L-P6Sl-dnJT-NVwM-P85e-Reqql4
      cephx lockbox secret
      cluster fsid              1684fe88-aae0-11ec-9593-df430e3982a0
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  3c353f8c-ab0f-4589-9e98-4f840e86341a
      osd id                    10
      osdspec affinity          dashboard-admin-1648152609405
      type                      block
      vdo                       0
      devices                   /dev/sdh

====== osd.12 ======

lvdisplay
  --- Logical volume ---
  LV Path                /dev/ceph-a0e85035-cfe2-4070-b58a-a88ec964794c/osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a
  LV Name                osd-block-3c353f8c-ab0f-4589-9e98-4f840e86341a
  VG Name                ceph-a0e85035-cfe2-4070-b58a-a88ec964794c
  LV UUID                gvvrMV-O98L-P6Sl-dnJT-NVwM-P85e-Reqql4
  LV Write Access        read/write
  LV Creation host, time hyperion02, 2022-03-24 20:12:17 +
  LV Status              available
  # open                 24
  LV Size                <1.82 TiB
  Current LE             476932
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

Let me know if you need any other information.

Thanks,
Curt
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: runaway mon DB

2022-07-08 Thread ceph
Do you have mon_compact_on_start set to true, and have you tried a mon restart?
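A rough sketch of what I mean (the mon name is a placeholder, and the orch
command assumes a cephadm-managed cluster):

$ ceph config set mon mon_compact_on_start true
$ ceph orch daemon restart mon.<hostname>
$ ceph tell mon.<hostname> compact   # compaction without a restart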

Just a guess
Hth
Mehmet

On 27 June 2022 16:46:26 MESZ, Wyll Ingersoll wrote:
>
>Running Ceph Pacific 16.2.7
>
>We have a very large cluster with 3 monitors. One of the monitor DBs is more
>than 2x the size of the other 2 and is growing constantly (store.db fills up)
>and eventually fills up the /var partition on that server. The monitor in
>question is not the leader. The cluster itself is quite full, but currently
>we cannot remove any data due to its current mission requirements, so it is
>constantly in a state of rebalance and bumping up against the "toofull" limits.
>
>How can we keep the monitor DB from growing so fast?
>Why is it only happening on a secondary monitor and not the primary?
>Can we force a monitor to compact its DB while the system is actively
>repairing?
>
>Thanks,
>  Wyllys Ingersoll
>
>___
>ceph-users mailing list -- ceph-users@ceph.io
>To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Can't set up Basic Ceph Client

2022-07-08 Thread Jean-Marc FONTANA

Hello folks,

We're operating a small Ceph test cluster made of 5 VMs: 1
monitor/manager, 3 OSDs and 1 RADOS gateway used as S3 external storage
for ownCloud. This works almost fine.

We're planning to use RBD too and get a block device for a Linux server.
In order to do that, we installed the ceph-common packages and created
ceph.conf and ceph.keyring as explained in Basic Ceph Client Setup — Ceph
Documentation (https://docs.ceph.com/en/pacific/cephadm/client-setup/).
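
For reference, the setup described there boils down to two small files;
this is roughly what we created (the fsid, monitor address and key below
are placeholders, not our real values):

# /etc/ceph/ceph.conf
[global]
fsid = <cluster fsid>
mon_host = <monitor ip>:6789

# /etc/ceph/ceph.keyring
[client.admin]
key = <output of 'ceph auth get-key client.admin' on the cluster>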


This does not work.

Ceph seems to be installed:

$ dpkg -l | grep ceph-common
ii  ceph-common          16.2.9-1~bpo11+1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  python3-ceph-common  16.2.9-1~bpo11+1  all    Python 3 utility libraries for Ceph


$ ceph -v
ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific 
(stable)


But when using commands that interact with the cluster, we get this message:

$ ceph -s
2022-07-08T15:51:24.965+0200 7f773b7fe700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
[errno 13] RADOS permission denied (error connecting to the cluster)

We tried to insert these lines in ceph.conf

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

as suggested in an older forum post, but we still get the error message,
slightly different:

$ ceph -s
2022-07-08T15:51:24.965+0200 7f773b7fe700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
[errno 13] RADOS permission denied (error connecting to the cluster)
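
One thing we could probably still try is pointing the client at both files
explicitly, to rule out a path problem:

$ ceph -s --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.keyring --name client.admin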

Does anyone have an idea?

The cluster was installed with ceph-deploy on Nautilus 14.2.21 on Debian
10.9, then upgraded to Octopus 15.2.16 with ceph-deploy. Then Debian was
upgraded to 11.3 and ceph-deploy couldn't go further. The last cluster
upgrade was to Pacific 16.2.9, made with Debian apt-get.

If you need more information, ask us; we would be grateful for some help.

JM

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs mounting multiple filesystems

2022-07-08 Thread Robert Reihs
Dear Burkhard,
Thanks for the help; it also works when specified in the mount option.
Best
Robert

On Fri, Jul 8, 2022 at 11:39 AM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:

> Hi,
>
> On 08.07.22 11:34, Robert Reihs wrote:
> > Hi,
> > I am very new to the ceph world, and working on setting up a cluster. We
> > have two cephfs filesystems (slow and fast), everything is running and
> > showing up in the dashboard. I can mount one of the filesystems (it mounts
> > it as default). How can I specify the filesystem in the mount command?
> > Ceph Version: ceph version 17.2.1
> > (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy (stable)
> > The cluster is set up with IPv6.
> >
> > mount -t ceph [IPV6:Node1]:6789,[IPV6:Node2]:6789,[IPV6:Node3]:6789:/
> > /mnt/ceph-fs-fast -o name=admin,secret=SECRET==
>
>
> We use the 'mds_namespace' option in fstab to select the filesystem:
>
>
> mon-1,mon-2,mon-3:/    /mountpoint    ceph
> rw,name=volumes,secretfile=/etc/ceph/ceph.client.volumes.key,mds_namespace=bcf_fs,_netdev,defaults,auto,...
>    0    0
>
>
> Regards,
>
> Burkhard
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
Robert Reihs
Jakobsweg 22
8046 Stattegg
AUSTRIA

mobile: +43 (664) 51 035 90
robert.re...@gmail.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs mounting multiple filesystems

2022-07-08 Thread Burkhard Linke

Hi,

On 08.07.22 11:34, Robert Reihs wrote:

Hi,
I am very new to the ceph world, and working on setting up a cluster. We
have two cephfs filesystems (slow and fast), everything is running and
showing up in the dashboard. I can mount one of the filesystems (it mounts
it as default). How can I specify the filesystem in the mount command?
Ceph Version: ceph version 17.2.1
(ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy (stable)
The cluster is set up with IPv6.

mount -t ceph [IPV6:Node1]:6789,[IPV6:Node2]:6789,[IPV6:Node3]:6789:/
/mnt/ceph-fs-fast -o name=admin,secret=SECRET==



We use the 'mds_namespace' option in fstab to select the filesystem:


mon-1,mon-2,mon-3:/    /mountpoint    ceph 
rw,name=volumes,secretfile=/etc/ceph/ceph.client.volumes.key,mds_namespace=bcf_fs,_netdev,defaults,auto,... 
  0    0
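
The same option can also be passed directly with -o when mounting by hand;
a sketch with placeholder values (mount point, client name and secret file
are just examples):

mount -t ceph mon-1,mon-2,mon-3:/ /mnt/cephfs-fast -o name=volumes,secretfile=/etc/ceph/ceph.client.volumes.key,mds_namespace=bcf_fs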



Regards,

Burkhard


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cephfs mounting multiple filesystems

2022-07-08 Thread Robert Reihs
Hi,
I am very new to the ceph world, and working on setting up a cluster. We
have two cephfs filesystems (slow and fast), everything is running and
showing up in the dashboard. I can mount one of the filesystems (it mounts
it as default). How can I specify the filesystem in the mount command?
Ceph Version: ceph version 17.2.1
(ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy (stable)
The cluster is set up with IPv6.

mount -t ceph [IPV6:Node1]:6789,[IPV6:Node2]:6789,[IPV6:Node3]:6789:/
/mnt/ceph-fs-fast -o name=admin,secret=SECRET==

Thanks for your help.
Best
Robert
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: MDS daemons failing

2022-07-08 Thread Venky Shankar
On Fri, Jul 8, 2022 at 1:58 PM Santhosh Alugubelly
 wrote:
>
> Hello ceph community,
>
> Our Ceph cluster is on version 16.2.4 and is around a year old. All of
> a sudden the MDS daemons went into the failed state, and it is also
> showing that 1 filesystem is in a degraded state. We tried to restart
> the daemons; they came into the running state and then within 30
> seconds went into a failed state again. Currently we are not seeing
> any client IO on the cluster.
>
> We tried to restart the hosts, but the MDS daemons again go into a failed state.

Could you share MDS logs?
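
On a cephadm deployment, something along these lines should collect them
(the daemon name below is only an example):

$ cephadm ls | grep mds
$ cephadm logs --name mds.<fs_name>.<host>.<id> > mds.log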

>
> Here is the current health message
>
> 3 failed cephadm daemon(s)
> 1 filesystem is degraded
> insufficient standby MDS daemons available
> 1 MDSs behind on trimming
> 1 filesystem is online with fewer MDS than max_mds
>
>
>
> Our cluster overview
> No of MDS - 3
> No of MONs - 5
> No of MGRs - 5
> No of OSDs - 18 (on 9 hosts)
>
> If any additional details are required about the cluster configuration,
> I'll try to provide them. Any help would be appreciated.
>
>
> Thanks,
>
> Santhosh.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
Cheers,
Venky

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] MDS daemons failing

2022-07-08 Thread Santhosh Alugubelly
Hello ceph community,

Our Ceph cluster is on version 16.2.4 and is around a year old. All of
a sudden the MDS daemons went into the failed state, and it is also
showing that 1 filesystem is in a degraded state. We tried to restart
the daemons; they came into the running state and then within 30
seconds went into a failed state again. Currently we are not seeing
any client IO on the cluster.

We tried to restart the hosts, but the MDS daemons again go into a failed state.

Here is the current health message

3 failed cephadm daemon(s)
1 filesystem is degraded
insufficient standby MDS daemons available
1 MDSs behind on trimming
1 filesystem is online with fewer MDS than max_mds



Our cluster overview
No of MDS - 3
No of MONs - 5
No of MGRs - 5
No of OSDs - 18 (on 9 hosts)

If any additional details are required about the cluster configuration,
I'll try to provide them. Any help would be appreciated.


Thanks,

Santhosh.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Status occurring several times a day: CEPHADM_REFRESH_FAILED

2022-07-08 Thread E Taka
Hi,

since updating to 17.2.1 we get the following message 5–10 times per day:

[WARN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
host cephXX `cephadm gather-facts` failed: Unable to reach remote
host cephXX.

(cephXX is not always the same node).

This status is cleared after one or two minutes.

When this happens, the ceph.conf and ceph.client.admin.keyring files are
not present for a short time on the MGR node.
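
In case it helps with diagnosing, I assume the reachability probe can also
be triggered by hand against a single host, e.g.:

$ ceph cephadm check-host cephXX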

Do you have an idea what I can do about this?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io