On 5/16/24 17:50, Robert Sander wrote:
cephadm osd activate HOST
would re-activate the OSDs.
Small but important typo: It's
ceph cephadm osd activate HOST
Regards
--
Robert Sander
m to noout and will
try to move other services away from the host if possible.
Regards
--
Robert Sander
On 5/9/24 07:22, Xiubo Li wrote:
We are discussing the same issue in the Slack thread
https://ceph-storage.slack.com/archives/C04LVQMHM9B/p1715189877518529.
Why is there a discussion about a bug off-list on a proprietary platform?
Regards
--
Robert Sander
Hi,
would an update to 18.2 help?
Regards
--
Robert Sander
::v15_2_0::list&, int)+0x290) [0x5614ac87ff90]
13: (MDSContext::complete(int)+0x5f) [0x5614aca41f4f]
14: (MDSIOContextBase::complete(int)+0x534) [0x5614aca426e4]
15: (Finisher::finisher_thread_entry()+0x18d) [0x7f1930b7884d]
16: /lib64/libpthread.so.0(+0x81ca) [0x7f192fac81c
https://docs.ceph.com/en/reef/cephadm/services/osd/#remove-an-osd
This makes sure that the OSD is no longer needed (its data is drained
first, etc.).
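For reference, a minimal sketch of that procedure (OSD id 23 is hypothetical); the orchestrator drains the OSD before removing it:
ceph orch osd rm 23
ceph orch osd rm status   # watch the drain progress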
Regards
--
Robert Sander
e to write to the CephFS at first.
Set squash to "no_root_squash" to be able to write as root to the NFS
share. Create a directory and change its ownership to the desired user.
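A sketch of the relevant part of an export spec (ids and paths are hypothetical), as it could be applied with "ceph nfs export apply":
{
  "export_id": 1,
  "path": "/",
  "pseudo": "/cephfs",
  "access_type": "RW",
  "squash": "no_root_squash",
  "protocols": [4],
  "fsal": {"name": "CEPH", "fs_name": "cephfs"}
}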
Regards
--
Robert Sander
On 4/29/24 09:36, Alwin Antreich wrote:
Who knows. I don't see any packages on download.ceph.com for Squid.
Ubuntu has them: https://packages.ubuntu.com/noble/ceph
Regards
--
Robert Sander
members and tiers and to
sound the marketing drums a bit. :)
The Ubuntu 24.04 release notes also claim that this release comes with
Ceph Squid:
https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890
Regards
--
Robert Sander
Hi,
https://www.linuxfoundation.org/press/introducing-ceph-squid-the-future-of-storage-today
Does the LF know more than the mailing list?
Regards
--
Robert Sander
host_pattern: '*'
If you apply this YAML, the orchestrator should deploy one
node-exporter daemon to each host of the cluster.
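For reference, a minimal complete spec would look like this (assuming the default service name):
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'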
Regards
--
Robert Sander
and its placement strategy.
What does your node-exporter service look like?
ceph orch ls node-exporter --export
Regards
--
Robert Sander
A "pseudo path" is an NFSv4 concept. It allows mounting a virtual root of the NFS server
and accessing all exports below it without having to mount each one separately.
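For example (server name hypothetical), mounting the pseudo root makes all exports visible below it:
mount -t nfs4 nfs.example.com:/ /mnt/nfs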
Regards
--
Robert Sander
and the NFS client cannot be "load balanced" to another backend
NFS server.
There is currently no point in configuring an ingress service without failover.
The NFS clients have to remount the NFS share anyway if their current
NFS server dies.
Regards
--
Robert Sander
running Debian since before
then you have user IDs and group IDs in the range 500 - 1000.
Regards
--
Robert Sander
Hi,
On 3/21/24 14:50, Michael Worsham wrote:
Now that Reef v18.2.2 has come out, is there a set of instructions on how to
upgrade to the latest version using cephadm?
Yes, there is: https://docs.ceph.com/en/reef/cephadm/upgrade/
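In short, the documented procedure boils down to (version taken from the thread):
ceph orch upgrade start --ceph-version 18.2.2
ceph orch upgrade status   # check progress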
Regards
--
Robert Sander
Hi,
On 3/19/24 13:00, Igor Fedotov wrote:
translating EIO to upper layers rather than crashing an OSD is a valid
default behavior. One can alter this by setting the bluestore_fail_eio
parameter to true.
What benefit lies in this behavior when, in the end, client IO stalls?
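For reference, a sketch of how that parameter would be set via the config database (verify the option name against your release):
ceph config set osd bluestore_fail_eio true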
Regards
--
Robert Sander
Hi,
On 3/5/24 13:05, ricardom...@soujmv.com wrote:
I have a ceph quincy cluster with 5 nodes currently. But only 3 with
SSDs.
Do not mix HDDs and SSDs in the same pool.
Regards
--
Robert Sander
as created. They usually have "rgw" in their name.
Regards
--
Robert Sander
Regards
--
Robert Sander
reads=500"
rgw_crypt_require_ssl = false
ceph config assimilate-conf may be of help here.
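For example, importing an existing ceph.conf into the config database:
ceph config assimilate-conf -i /etc/ceph/ceph.conf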
Regards
--
Robert Sander
You can adjust settings with "ceph config" or the
Configuration tab of the Dashboard.
Regards
--
Robert Sander
https://docs.ceph.com/en/reef/cephadm/install/#deployment-in-an-isolated-environment
Regards
--
Robert Sander
, Password)
Regards
--
Robert Sander
and other
cloud technologies.
If you run a classical file service on top of CephFS you usually do not
need subvolumes but can go with normal quotas on directories.
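A minimal sketch of such a directory quota (path and limits hypothetical):
setfattr -n ceph.quota.max_bytes -v 1099511627776 /mnt/cephfs/share1   # 1 TiB
setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/share1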
Regards
--
Robert Sander
t of PGs, to not run into statistical
edge cases.
Regards
--
Robert Sander
e the SSDs for the OSDs' RocksDB?
Where do you plan to store the metadata pools for CephFS? They should be
stored on fast media.
Regards
--
Robert Sander
question: should I have a
designated pool for S3 storage, or can/should I use the same
cephfs_data_replicated/erasure pool?
No, S3 needs its own pools. It cannot re-use CephFS pools.
Regards
--
Robert Sander
not update
/etc/ceph/ceph.conf.
Only when I run "ceph mgr fail" again does the new MGR update
/etc/ceph/ceph.conf on the hosts labeled with _admin.
Regards
--
Robert Sander
was at "*",
so all hosts. I have set that to "label:_admin".
It still does not put ceph.conf into /etc/ceph when adding the label _admin.
Regards
--
Robert Sander
cephtest23:/etc/ceph/ceph.client.admin.keyring
2024-01-18T11:47:08.212303+0100 mgr.cephtest32.ybltym [INF] Updating
cephtest23:/var/lib/ceph/ba37db20-2b13-11eb-b8a9-871ba11409f6/config/ceph.client.admin.keyring
Regards
--
Robert Sander
Both files are placed into the /var/lib/ceph//config directory.
Has something changed?
¹:
https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels
Regards
--
Robert Sander
with the cluster network.
Regards
--
Robert Sander
It is used to determine the public
network.
Regards
--
Robert Sander
Hi Eugen,
the release info is current only in the latest branch of the
documentation: https://docs.ceph.com/en/latest/releases/
Regards
--
Robert Sander
--
Robert Sander
Hi,
On 22.12.23 11:41, Albert Shih wrote:
for n in 1..100:
  take the OSDs on server n offline
  uninstall Docker on server n
  install Podman on server n
  redeploy on server n
end
Yep, that's basically the procedure.
But first try it on a test cluster.
Regards
--
Robert Sander
On 21.12.23 22:27, Anthony D'Atri wrote:
It's been claimed to me that almost nobody uses podman in production, but I
have no empirical data.
I even converted clusters from Docker to podman while they stayed online
thanks to "ceph orch redeploy".
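A sketch of the per-daemon variant (daemon name hypothetical); repeated for each daemon once podman is installed, the orchestrator recreates the container under the new runtime:
ceph orch daemon redeploy mgr.cephtest01.abcdef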
Regards
--
Robert Sander
Hi,
On 21.12.23 15:13, Nico Schottelius wrote:
I would strongly recommend k8s+rook for new clusters, also allows
running Alpine Linux as the host OS.
Why would I want to learn Kubernetes before I can deploy a new Ceph
cluster when I have no need for K8s at all?
Regards
--
Robert Sander
Everything needed for the Ceph containers is provided by podman.
Regards
--
Robert Sander
... CentOS thing... what distro appears to be the most
straightforward to use with Ceph? I was going to try and deploy it on Rocky 9.
Any distribution with a recent systemd, podman, LVM2, and time
synchronization is viable. I prefer Debian; others prefer RPM-based distributions.
Regards
--
Robert Sander
On 12/5/23 10:06, duluxoz wrote:
I'm confused - doesn't k4 m2 mean that you can lose any 2 out of the 6
OSDs?
Yes, but OSDs are not a good failure zone.
The host is the smallest failure zone that is practicable and safe
against data loss.
Regards
--
Robert Sander
r that you risk to lose data.
Erasure coding is possible with a cluster size of 10 nodes or more.
With smaller clusters you have to go with replicated pools.
Regards
--
Robert Sander
s the UUID of the Ceph cluster, $OSDID is the OSD id.
Regards
--
Robert Sander
to the list.
Regards
--
Robert Sander
or NTP)
- LVM2 for provisioning storage devices
Regards
--
Robert Sander
to
the pool? Can you help me? Clusters 1 and 2 are working. I want to view my data
on them and then transfer it to another place. How can I do this? I have
never used Ceph before.
Please send the output of:
ceph -s
ceph health detail
ceph osd df tree
Regards
--
Robert Sander
object is stored on the
OSD data partition and without it nobody knows where each object is. The
data is lost.
Regards
--
Robert Sander
https://docs.ceph.com/en/latest/man/8/monmaptool/
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap
This way the remaining MON will be the only one in the map and will have
quorum and the cluster will work again.
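A rough sketch of that procedure (MON ids and paths hypothetical; stop the MON first):
ceph-mon -i mon1 --extract-monmap /tmp/monmap
monmaptool /tmp/monmap --rm mon2 --rm mon3
ceph-mon -i mon1 --inject-monmap /tmp/monmap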
Regards
--
Robert Sander
-store-failures
Regards
--
Robert Sander
reasons.
Regards
--
Robert Sander
the service specifications with "ceph orch ls --export"
and import the YAML file with "ceph orch apply -i …".
This does not cover the hosts in the cluster.
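For example, as a simple spec backup and restore:
ceph orch ls --export > specs.yaml
ceph orch apply -i specs.yaml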
Regards
--
Robert Sander
documentation was a little bit clearer on this topic.
Regards
--
Robert Sander
Questions:
Is dual-stack networking with IPv4 and IPv6 now supported or not?
From which version on is it considered stable?
Are OSDs now able to register themselves with two IP addresses in the
cluster map? MONs too?
Regards
--
Robert Sander
on the Proxmox forum at
https://forum.proxmox.com/ as they distribute their own Ceph packages.
Regards
--
Robert Sander
All OSDs will immediately know
about the new MONs.
The same goes when removing an old MON.
After that you have to update the ceph.conf on each host to make the
change "reboot safe".
There is no need to restart any other component, including OSDs.
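A minimal sketch of the relevant ceph.conf part (addresses hypothetical, fsid taken from the thread):
[global]
fsid = ba37db20-2b13-11eb-b8a9-871ba11409f6
mon_host = 192.168.1.11,192.168.1.12,192.168.1.13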
Regards
--
Robert Sander
How do I add a dashboard to the cephadm-managed Grafana that shows the
values from smartctl_exporter? Where do I get such a dashboard?
How do I add alerts to the cephadm-managed Alertmanager? Where do I get
useful alert definitions for smartctl_exporter metrics?
Regards
--
Robert Sander
I have a cluster where SMART data is available from the disks (tested
with smartctl and visible in the Ceph dashboard), but even with an
enabled diskprediction_local module no health and lifetime info is shown.
Regards
--
Robert Sander
processes must have used a lock that
blocked new cephadm commands.
Regards
--
Robert Sander
phmon01 [DBG] _kick_serve_loop
The container for crash.cephmon01 does not get restarted.
It looks like the service loop does not get executed.
Can we see what jobs are in this queue and why they do not get executed?
Regards
--
Robert Sander
the image from quay.io.
Regards
--
Robert Sander
is that there is no information on which OSD cephadm tries to
upgrade next. There is no failure reported. It seems to just sit there
and wait for something.
Regards
--
Robert Sander
42503) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)": 56,
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (sta
On 7/27/23 13:27, Eugen Block wrote:
[2] https://github.com/ceph/ceph/pull/47011
This PR implements the 204 HTTP code that I see in my test cluster.
I wonder why in the same situation the other cluster returns a 500 here.
Regards
--
Robert Sander
mgr handle_mgr_map I am now activating
We have a test cluster also running version 17.2.6 where
this does not happen. In this test cluster the passive MGRs return HTTP
code 204 when the Alertmanager requests /api/prometheus_receiver.
What is ha
not shine a good light on the project.
Regards
--
Robert Sander
would still be working in site A since there are still 2 OSDs, even without mon
quorum.
The site without MON quorum will stop working completely.
Regards
--
Robert Sander
internally?
No. When creating the OSD ceph-volume looks at
/sys/class/block/DEVICE/queue/rotational
to determine if it's an HDD (file contains 1) or not (file contains 0).
If you need to distinguish between SSD and NVMe you can manually assign
another device class to the OSDs.
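For example (device and OSD id hypothetical):
cat /sys/class/block/sda/queue/rotational   # 1 = HDD, 0 = SSD/NVMe
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class nvme osd.12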
Regards
--
Robert Sander
/rbd/iscsi-initiators/
AFAIK VMware uses these in VMFS.
Regards
--
Robert Sander
Hi,
a cluster has ms_bind_msgr1 set to false in the config database.
Newly created MONs still listen on port 6789 and add themselves to the
monmap as providing messenger v1.
How do I change that?
Shouldn't the MONs use the config for ms_bind_msgr1?
Regards
--
Robert Sander
want to pay with performance penalties.
I understand this use case. But this would still mean that the client
encrypts the data; in your case the CephFS mount, or with S3 the
rados-gateway.
Regards
--
Robert Sander
On 23.05.23 08:42, huxia...@horebdata.cn wrote:
Indeed, the question is on server-side encryption with keys managed by ceph on
a per-user basis
What kind of security do you want to achieve with encryption keys stored
on the server side?
Regards
--
Robert Sander
On 26.04.23 13:24, Thomas Hukkelberg wrote:
[WRN] OSD_TOO_MANY_REPAIRS: Too many repaired reads on 1 OSDs
osd.34 had 9936 reads repaired
Are there any messages in the kernel log that indicate this device has
read errors? Have you considered replacing the disk?
Regards
--
Robert Sander
-UUID is the logical volume for the old OSD you
could have run this:
ceph orch osd rm 23
replace the faulty HDD
ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=ceph-UUID/osd-db-UUID
This will reuse the existing logical volume for the OSD DB.
Regards
--
Robert Sander
On 18.04.23 06:12, Lokendra Rathour wrote:
but if I try mounting from a normal Linux machine with connectivity
enabled between Ceph mon nodes, it gives the error as stated before.
Have you installed ceph-common on the "normal Linux machine"?
Regards
--
Robert Sander
On 14.04.23 12:17, Lokendra Rathour wrote:
*mount: /mnt/image: mount point does not exist.*
Have you created the mount point?
Regards
--
Robert Sander
Add "unmanaged: true" to the specification. After that:
ceph orch apply -i osd.yml
Or you could just remove the specification with "ceph orch rm NAME".
The OSD service will be removed but the OSD will remain.
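A sketch of where the flag goes in an OSD spec (service id and device filters hypothetical):
service_type: osd
service_id: default
unmanaged: true
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true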
Regards
--
Robert Sander
orchestrator?
Which version?
Have you tried
ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/nvme0
as shown on https://docs.ceph.com/en/quincy/cephadm/services/osd/ ?
Regards
--
Robert Sander
the
device class and assign it to the pool charlotte.rgw.buckets.data.
After that the autoscaler will be able to work again.
Regards
--
Robert Sander
On 27.03.23 16:34, Pat Vaughan wrote:
Yes, all the OSDs are using the SSD device class.
Do you have multiple CRUSH rules by chance?
Are all pools using the same CRUSH rule?
Regards
--
Robert Sander
multiple device classes.
Regards
--
Robert Sander
On 14.03.23 15:22, b...@nocloud.ch wrote:
Ah, OK. It was not clear to me that skipping minor versions when doing a major
upgrade was supported.
You can even skip one major version when doing an upgrade.
Regards
--
Robert Sander
On 14.03.23 14:21, bbk wrote:
# ceph orch upgrade start --ceph-version 17.2.0
I would never recommend updating to a .0 release.
Why not go directly to the latest 17.2.5?
Regards
--
Robert Sander
these CLI tools are available.
Regards
--
Robert Sander
as this is the recommended installation method.
Regards
--
Robert Sander
just label one of the cluster hosts with _admin:
ceph orch host label add hostname _admin
https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels
https://docs.ceph.com/en/quincy/cephadm/operations/#client-keyrings-and-configs
Regards
--
Robert Sander
is a really bad idea outside of a disaster
scenario where the other two copies are completely lost to a fire.
Regards
--
Robert Sander
On 28.02.23 16:31, Marc wrote:
Does anyone know of an S3-compatible interface that I can just run, which reads/writes
files from a local file system and not from object storage?
Have a look at MinIO:
https://min.io/product/overview#architecture
Regards
--
Robert Sander
What would the process look like to get development started in this direction?
Regards
--
Robert Sander
": {}
}
}
"s3cmd ls s3://testbucket/" shows nothing.
"s3cmd rb s3://testbucket/" removes the bucket but the RADOS
objects of the S3 objects remain in the data pool.
Regards
--
Robert Sander
As a result, the bucket
is empty when listed via S3. A bucket removal succeeds but leaves
all the RADOS objects in the index and data pools.
Why is there no operation to rebuild the index for a bucket based on the
existing RADOS objects in the data pool?
Regards
--
Robert Sander
Hi,
There is an operation "radosgw-admin bi purge" that removes all bucket
index objects for one bucket in the rados gateway.
What is the undo operation for this?
After this operation the bucket cannot be listed or removed any more.
Regards
--
Robert Sander
has to
ask what to do if a file has been changed on both sides.
Regards
--
Robert Sander
fails.
To increase fault tolerance you need to streamline your processes and
replace a failed node immediately, before the next one fails. In such
small clusters each consecutive failure can lead to data loss.
Best would be to add more nodes.
Regards
--
Robert Sander
ph daemons.
Regards
--
Robert Sander
multiple 4 MB
sized RADOS objects by the rados-gateway.
This is why you see many more RADOS objects than S3 objects.
Regards
--
Robert Sander
the cluster.
Please show the output of "ceph versions".
Regards
--
Robert Sander
ized setup.
Regards
--
Robert Sander
but the running MON from it. Then this MON will only see
itself as active in the cluster and form the quorum.
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-mon
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-mons/#removing-monitors
Regards
--
Robert Sander
Hi,
you can also use SRV records in DNS to publish the IPs of the MONs.
Read https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
for more info.
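A sketch of matching zone entries (names and addresses hypothetical; the default SRV service name is ceph-mon):
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
mon1.example.com. 3600 IN A 192.168.1.11
mon2.example.com. 3600 IN A 192.168.1.12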
Regards
--
Robert Sander