On Thu, 20 Oct 2022 at 18:57, Wyll Ingersoll wrote:
> What network does radosgw use when it reads/writes the objects to the cluster?
Everything in Ceph EXCEPT OSD<->OSD traffic uses the public network.
Anything that isn't backfill or replication between OSDs always uses
the public network.
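For reference, a minimal sketch of how the two networks are typically declared (the subnets below are placeholders, not from this thread):
```
# clients, MONs, MGRs, MDSs and radosgw all talk over public_network;
# only OSD<->OSD replication/backfill uses cluster_network.
ceph config set global public_network 10.0.1.0/24    # placeholder subnet
ceph config set global cluster_network 10.0.2.0/24   # placeholder subnet
```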
Folks,
I have deployed a 15-node OSD cluster using cephadm and encountered a duplicate
OSD on one of the nodes, and I am not sure how to clean that up.
root@datastorn1:~# ceph health
HEALTH_WARN 1 failed cephadm daemon(s); 1 pool(s) have no replicas
configured
osd.3 is duplicated on two nodes, I would li
Hi,
it looks like the OSDs haven't been cleaned up after removing them. Do
you see the osd directory in /var/lib/ceph//osd.3 on datastorn4?
Just remove the osd.3 directory, then cephadm won't try to activate it.
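A minimal sketch of that cleanup on the affected host, assuming the usual cephadm layout where the cluster fsid sits between /var/lib/ceph/ and the daemon directory:
```
# on datastorn4 (<fsid> is a placeholder for the cluster fsid)
ls /var/lib/ceph/<fsid>/osd.3      # confirm the leftover directory is there
rm -rf /var/lib/ceph/<fsid>/osd.3  # remove it so cephadm stops trying to activate it
```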
Zitat von Satish Patel :
Folks,
I have deployed a 15-node OSD cluster using
CC'ed David
Maybe Ilya can tag someone from DevOps additionally
Thanks,
k
> On 20 Oct 2022, at 20:07, Goutham Pacha Ravi wrote:
>
> +1
> The OpenStack community is interested in this as well. We're trying to move
> all our ubuntu testing to Ubuntu Jammy/22.04 [1]; and we consume packages
> fr
Hi,
I have a problem fully utilizing some disks with a cephadm service spec. The
host has the following disks:
4 SSD 900GB
32 HDD 10TB
I would like to use the SSDs as DB devices and the HDDs as block devices: 8
HDDs per SSD, so the available size for each DB would be about 111GB
(900GB/8).
The
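A hedged sketch of an OSD service spec for that layout, assuming the standard cephadm drivegroup fields (the host name is a placeholder):
```
cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: hdd-with-ssd-db
placement:
  hosts:
    - <hostname>          # placeholder
spec:
  data_devices:
    rotational: 1         # the 32 HDDs
  db_devices:
    rotational: 0         # the 4 SSDs
  block_db_size: '111G'   # ~900GB / 8 HDDs per SSD
EOF
ceph orch apply -i osd_spec.yaml
```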
Hi Edward,
On Wed, 19 Oct 2022 at 21:27, Edward R Huyer wrote:
>
> I recently set up scheduled snapshots on my CephFS filesystem, and ever since
> the cluster has been intermittently going into HEALTH_WARN with an
> MDS_CLIENT_LATE_RELEASE notification.
>
> Specifically:
>
> [WARN] MDS_CLIENT_L
On 21/10/2022 19:39, Rishabh Dave wrote:
Hi Edward,
On Wed, 19 Oct 2022 at 21:27, Edward R Huyer wrote:
I recently set up scheduled snapshots on my CephFS filesystem, and ever since
the cluster has been intermittently going into HEALTH_WARN with an
MDS_CLIENT_LATE_RELEASE notification.
Sp
On 21.10.22 at 13:38, Christian wrote:
The spec I used does not fully utilize the SSDs though. Instead of 1/8th of
the SSD, it uses about 28GB, so 1/32nd of the SSD.
This is a bug in certain versions of ceph-volume:
https://tracker.ceph.com/issues/56031
It should be fixed in the latest rele
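To check what ceph-volume actually allocated, a hedged look at the DB logical volumes on the host (assuming LVM-backed OSDs):
```
# DB LVs around 28G instead of the expected ~111G indicate the bug is present
lvs -o lv_name,vg_name,lv_size
ceph versions    # check whether the cluster already runs a release with the fix
```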
Great, thank you both for the confirmation!
-----Original Message-----
From: Xiubo Li
Sent: Friday, October 21, 2022 8:43 AM
To: Rishabh Dave ; Edward R Huyer
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: MDS_CLIENT_LATE_RELEASE after setting up
scheduled CephFS snapshots
On 21/10/202
Hi Eugen,
I have deleted the osd.3 directory from the datastorn4 node as you mentioned,
but I am still seeing that duplicate OSD in the ps output.
root@datastorn1:~# ceph orch ps | grep osd.3
osd.3    datastorn4    stopped    5m ago    3w    -    42.6G
osd.3
Do you still see it with 'cephadm ls' on that node? If yes, you could
try 'cephadm rm-daemon --name osd.3'. Or you could try it with the
orchestrator: ceph orch daemon rm…
I don't have the exact command at the moment, you should check the docs.
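From memory, a hedged sketch of those commands (flags should be verified against the docs, as noted above):
```
cephadm ls | grep osd.3                        # does the node still list the daemon?
cephadm rm-daemon --name osd.3 --fsid <fsid>   # remove it locally; <fsid> is a placeholder
# or via the orchestrator from an admin node:
ceph orch daemon rm osd.3 --force
```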
Zitat von Satish Patel :
Hi Eugen,
I have delected os
Hi Eugen,
My error cleared up by itself. Looks like it took some time, but now I am not
seeing any errors and the output is very clean. Thank you so much.
On Fri, Oct 21, 2022 at 1:46 PM Eugen Block wrote:
> Do you still see it with 'cephadm ls' on that node? If yes, you could
> try 'cephadm rm-da
On Fri, Oct 21, 2022 at 12:48 PM Konstantin Shalygin wrote:
>
> CC'ed David
Hi Konstantin,
David has decided to pursue something else and is no longer working on
Ceph [1].
>
> Maybe Ilya can tag someone from DevOps additionally
I think Dan answered this question yesterday [2]:
> there are no
Thank you Ilya!
> On 21 Oct 2022, at 21:02, Ilya Dryomov wrote:
>
> On Fri, Oct 21, 2022 at 12:48 PM Konstantin Shalygin wrote:
>>
>> CC'ed David
>
> Hi Konstantin,
>
> David has decided to pursue something else and is no longer working on
> Ceph [1].
>
>>
>> Maybe Ilya can tag someone fro
In a situation where you have, say, 3 active MDS daemons (and 3 standbys),
you have 3 ranks: 0, 1, 2.
In your filesystem you have three directories at the root level [/a, /b, /c]
you pin:
/a to rank 0
/b to rank 1
/c to rank 2
and you need to upgrade your Ceph version. When it becomes time to reduce
max_mds t
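For illustration, a hedged sketch of that setup, assuming the filesystem is mounted at /mnt/cephfs and using <fsname> as a placeholder:
```
# pin each top-level directory to a rank via the ceph.dir.pin xattr
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/a
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/b
setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/c
# the pre-upgrade step in question: collapse to a single active MDS
ceph fs set <fsname> max_mds 1
```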
In my experience it just falls back to behaving like it's un-pinned.
For my use case I do the following:
/ pinned to rank 0
/env1 to rank 1
/env2 to rank 2
/env3 to rank 3
If I do an upgrade it will collapse to a single rank; all access/IO continues
after what would be a normal failover type of inte
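A hedged way to watch that behaviour during an upgrade (same placeholder conventions as above):
```
ceph fs status                              # shows the active ranks collapsing and returning
ceph fs get <fsname> | grep max_mds
getfattr -n ceph.dir.pin /mnt/cephfs/env1   # pins are stored as xattrs and remain set throughout
```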
IIRC cephadm refreshes its daemons within 15 minutes, at least that
was my last impression. So sometimes you have to be patient. :-)
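If waiting isn't desirable, a refresh can usually be requested explicitly; a hedged one-liner:
```
ceph orch ps --refresh    # ask the orchestrator to refresh its daemon inventory now
```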
Zitat von Satish Patel :
Hi Eugen,
My error cleared up by itself. Looks like it took some time, but now I am not
seeing any errors and the output is very clean. Th
Hi folks,
I'm trying to install Ceph on GCE VMs (Debian/Ubuntu) with PD-SSDs using a
ceph-ansible image. A clean installation has worked fine, but when I purged
the Ceph cluster and tried to re-install, I saw the error:
```
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
blues