Ernesto Puerta wrote:
Hi Oliver,
This issue has already been discussed in this mailing list ([1] and [2]). There
you can find the details and steps to deal with it.
Kind Regards,
Ernesto
On Mon, Aug 16, 2021 at 12:51 PM Oliver Weinmann wrote:
Dear All,
after a very smooth upgrade from
Hi,
we had one failed OSD in our cluster that we have replaced. Since then the
cluster is behaving very strangely and some ceph commands like ceph crash or
ceph orch are stuck.
Cluster health:
[root@gedasvl98 ~]# ceph -s
cluster:
id: ec9e031a-cd10-11eb-a3c3-005056b7db1f
Dear All,
after a very smooth upgrade from 16.2.4 to 16.2.5 (CentOS 8 Stream), we are no
longer able to access the dashboard. The dashboard was accessible before the
upgrade. I googled and found a command to change the listening IP for the
dashboard, but I wonder why the upgrade should have
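For reference, the change usually meant here is a mgr config option plus a module restart; a rough sketch (<dashboard-ip> is a placeholder):
ceph config set mgr mgr/dashboard/server_addr <dashboard-ip>
ceph mgr module disable dashboard
ceph mgr module enable dashboard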
On 15.06.2021 at 12:01, Oliver Weinmann wrote:
Dear All,
I have deployed the latest CEPH Pacific release in my lab and started to check
out the new "stable" NFS Ganesha features. First of all, I'm a bit confused
about which method to actually use to deploy the NFS cluster:
cephadm or ceph nfs cluster create?
I used "nfs cluster create"
,
Sebastian
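For reference, a rough sketch of that workflow; the cluster name "mynfs" and the placement are placeholders, and the exact syntax differs between Pacific point releases (older builds still expect a type argument, e.g. "ceph nfs cluster create cephfs mynfs"):
ceph nfs cluster create mynfs "1 gedasvl98"   # mgr nfs module; cephadm then deploys the ganesha daemon(s)
ceph nfs cluster ls
ceph nfs cluster info mynfs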
On 11.03.21 at 22:10, Oliver Weinmann wrote:
Hi,
On my 3-node Octopus 15.2.5 test cluster, which I haven't used for quite
a while, I noticed that it shows some errors:
[root@gedasvl02 ~]# ceph health detail
INFO:cephadm:Inferring fsid d0920c36-2368-11eb-a5de-005056b703af
INFO:cephadm:Inferring config
Dear All,
A question that probably has been asked by many other users before. I want to
do a POC. For the POC I can use old decommissioned hardware. Currently I have 3
x IBM X3550 M5 with:
1 Dualport 10G NIC
Intel(R) Xeon(R) CPU E5-2637 v3 @ 3.50GHz
64GB RAM
the other two have a slower CPU
Could it be a problem that I'm running a mix of x86 and arm? I use
vagrant / virtualbox for the mons. Currently I only have two odroid hc-2
devices available.
On 11.01.2021 at 23:48, Oliver Weinmann wrote:
Hi again,
it took me some time but I figured out that on ubuntu focal there is a
more
command:
/usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
c3a567be-08e0-4802-8b08-07d6891de485
any more clues?
On 08.01.2021 at 10:03, kefu chai wrote:
Oliver Weinmann wrote on Friday, January 8, 2021, at 04:30:
Ok, I replaced the whole file
ii  ceph  14.2.15-3~bpo10+1  armhf  distributed storage and file system
On 07.01.2021 at 13:07, kefu chai wrote:
Oliver Weinmann <oliver.weinm...@me.com> wrote on Thursday, January 7, 2021, at 16:32:
Hi,
thanks for the quick reply. I will test it.
Hi,
thanks for the quick reply. I will test it. Do I have to recompile ceph
in order to test it?
On 07.01.2021 at 02:13, kefu chai wrote:
On Thursday, January 7, 2021, Oliver Weinmann <oliver.weinm...@me.com> wrote:
Hi,
I have a similar if not the same issue. I run armbian buster on my
odroid hc2, which is the same as an xu4, and I get the following error
when trying to build a cluster with ceph-ansible:
TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds]
active+undersized+degraded+remapped+backfill_wait
1 active+undersized+degraded+remapped+backfilling
io:
recovery: 105 MiB/s, 25 objects/s
Many thanks for your help. This was an excellent "Recovery training" :)
On 01.12.2020 at 11:50, Stefan Kooman wrote:
On 2020-12-01 10:21
/ceph/d0920c36-2368-11eb-a5de-005056b703af/ceph-osd.0.log
Yesterday, when I deleted the failed OSD and recreated it, there were lots
of messages in the log file:
https://pastebin.com/5hH27pdR
Cheers,
Oliver
On 01.12.2020 at 09:22, Stefan Kooman wrote:
On 2020-11-30 15:55, Oliver Weinmann
Hi,
I'm still evaluating Ceph 15.2.5 in a lab, so the problem is not really hurting
me, but I want to understand it and hopefully fix it. It is good practice. To
test the resilience of the cluster I try to break it by doing all kinds of
things. Today I powered off (clean shutdown) one osd
Today I played with a Samba gateway and CephFS. I couldn't get previous
versions displayed on a Windows client and found very little info on the net
about how to accomplish this. It seems that I need a VFS module called
ceph_snapshots. It's not included in the latest Samba version on CentOS 8. by
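In case it helps, a minimal sketch of the share section this would need, assuming CephFS is kernel-mounted at /mnt/cephfs on the gateway, the Samba build actually ships vfs_ceph_snapshots (4.11+), and the share name and path are placeholders; the module maps the CephFS .snap directories to the Windows "Previous Versions" tab:
[cephfs]
    path = /mnt/cephfs/share
    read only = no
    vfs objects = ceph_snapshots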
I just recently installed my first ceph cluster using cephadm and used
the following config file for the bootstrap:
cat /root/ceph.conf
[global]
public network = 192.168.30.0/24
cluster network = 192.168.41.0/24

Then:
cephadm bootstrap -c /root/ceph.conf
Hi,
on my freshly deployed cephadm bootstrap node I can no longer run the ceph
command; it is just stuck:
[root@gedasvl02 ~]# ceph orch dev ls
INFO:cephadm:Inferring fsid c7879f24-1f90-11eb-8ba2-005056b703af
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
[root@gedasvl02 ~]#
Hi,
I set up a small 3-node cluster as a POC. I bootstrapped the cluster with
separate networks for the frontend (public network 192.168.30.0/24) and the
backend (cluster network 192.168.41.0/24).
1st small question:
After the bootstrap, I realized that I had mixed up the cluster and public
networks.
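A rough sketch of how I would expect to swap them after the fact, using the generic config options and the two subnets from above (adjust to whichever network is really meant to be public; the MONs have to already sit on the intended public network, and the OSDs pick the change up after a restart):
ceph config set global public_network 192.168.41.0/24
ceph config set global cluster_network 192.168.30.0/24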