as created. They usually have "rgw" in their name.
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
Hi,
On 3/5/24 13:05, ricardom...@soujmv.com wrote:
I have a ceph quincy cluster with 5 nodes currently. But only 3 with
SSDs.
Do not mix HDDs and SSDs in the same pool.
Regards
--
Robert Sander
Hi,
On 3/19/24 13:00, Igor Fedotov wrote:
translating EIO to upper layers rather than crashing an OSD is a valid
default behavior. One can alter this by setting bluestore_fail_eio
parameter to true.
What benefit lies in this behavior when in the end client IO stalls?
Regards
--
Robert Sander
Hi,
On 3/21/24 14:50, Michael Worsham wrote:
Now that Reef v18.2.2 has come out, is there a set of instructions on how to
upgrade to the latest version via using Cephadm?
Yes, there is: https://docs.ceph.com/en/reef/cephadm/upgrade/
Regards
--
Robert Sander
pany running Debian since before
then you have user IDs and group IDs in the range 500 - 1000.
Regards
--
Robert Sander
failover and the NFS client cannot be "load balanced" to another backend
NFS server.
There is currently no point in configuring an ingress service without failover.
The NFS clients have to remount the NFS share anyway if their current
NFS server dies.
Regards
--
Robert Sander
concept of "pseudo path"
This is an NFSv4 concept. It allows mounting a virtual root of the NFS server
and accessing all exports below it without having to mount each one separately.
Regards
--
Robert Sander
and its placement strategy.
What does your node-exporter service look like?
ceph orch ls node-exporter --export
Regards
--
Robert Sander
: '*'
If you apply this YAML code the orchestrator should deploy one
node-exporter daemon to each host of the cluster.
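For reference, a minimal service specification of this kind could look like the
following sketch (the file name is just an example; apply it with
"ceph orch apply -i node-exporter.yaml"):

```yaml
# node-exporter.yaml - sketch of a cephadm service spec
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'   # deploy one node-exporter daemon on every cluster host
```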
Regards
--
Robert Sander
Hi,
https://www.linuxfoundation.org/press/introducing-ceph-squid-the-future-of-storage-today
Does the LF know more than the mailing list?
Regards
--
Robert Sander
members and tiers and to
sound the marketing drums a bit. :)
The Ubuntu 24.04 release notes also claim that this release comes with
Ceph Squid:
https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890
Regards
--
Robert Sander
On 4/29/24 09:36, Alwin Antreich wrote:
Who knows. I don't see any packages on download.ceph.com
<http://download.ceph.com> for Squid.
Ubuntu has them: https://packages.ubuntu.com/noble/ceph
Regards
--
Robert Sander
e to write to the CephFS at first.
Set squash to "no_root_squash" to be able to write as root to the NFS
share. Create a directory and change its permissions to someone else.
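As a sketch (the cluster id "mynfs" and the pseudo path "/data" are made-up
placeholders), the squash setting of an existing export can be changed by
dumping and re-applying its definition:

```shell
# Dump the current export definition to a file, change the line
#   "squash": "no_root_squash"
# in it, then re-apply the edited definition.
ceph nfs export info mynfs /data > export.json
ceph nfs export apply mynfs -i export.json
```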
Regards
--
Robert Sander
per https://docs.ceph.com/en/reef/cephadm/services/osd/#remove-an-osd
This will make sure that the OSD is no longer needed (its data is drained,
etc.).
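A sketch of that workflow with the orchestrator (the OSD id 12 is a
placeholder):

```shell
# Drain and remove OSD 12; --zap wipes the device afterwards so it can
# be reused. Then watch the progress of the data migration.
ceph orch osd rm 12 --zap
ceph orch osd rm status
```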
Regards
--
Robert Sander
,
ceph::buffer::v15_2_0::list&, int)+0x290) [0x5614ac87ff90]
13: (MDSContext::complete(int)+0x5f) [0x5614aca41f4f]
14: (MDSIOContextBase::complete(int)+0x534) [0x5614aca426e4]
15: (Finisher::finisher_thread_entry()+0x18d) [0x7f1930b7884d]
16: /lib64/libpthread.so.0(+0x81ca)
Hi,
would an update to 18.2 help?
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
On 5/9/24 07:22, Xiubo Li wrote:
We are disscussing the same issue in slack thread
https://ceph-storage.slack.com/archives/C04LVQMHM9B/p1715189877518529.
Why is there a discussion about a bug off-list on a proprietary platform?
Regards
--
Robert Sander
set them to noout and will
try to move other services away from the host if possible.
Regards
--
Robert Sander
On 5/16/24 17:50, Robert Sander wrote:
cephadm osd activate HOST
would re-activate the OSDs.
Small but important typo: It's
ceph cephadm osd activate HOST
Regards
--
Robert Sander
On 5/27/24 09:28, s.dhivagar@gmail.com wrote:
We are using ceph octopus environment. For client can we use ceph quincy?
Yes.
--
Robert Sander
On 5/30/24 08:53, tpDev Tester wrote:
Can someone please point me to the docs how I can expand the capacity of
the pool without such problems.
Please show the output of
ceph status
ceph df
ceph osd df tree
ceph osd crush rule dump
ceph osd pool ls detail
Regards
--
Robert Sander
available?
Regards
--
Robert Sander
Hi,
On 5/30/24 11:58, Robert Sander wrote:
I am trying to follow the documentation at
https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an
NVMe over Fabric service.
It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image
quay.io/ceph/nvmeof:0.0.2
3:59:49.678809906+00:00", grpc_status:12, grpc_message:"Method not
found!"}"
Is this not production ready?
Why is it in the documentation for a released Ceph version?
Regards
--
Robert Sander
ces/#extra-container-arguments
Regards
--
Robert Sander
On 5/31/24 16:07, Robert Sander wrote:
extra_container_args:
- "--publish 8080/tcp"
Never mind, in the custom container service specification it's "args",
not "extra_container_args".
Regards
--
Robert Sander
multiple block
devices and for the orchestrator they are completely separate.
Regards
--
Robert Sander
to do these:
* Set host in maintenance mode
* Reinstall host with newer OS
* Configure host with correct settings (for example cephadm user SSH key etc.)
* Unset maintenance mode for the host
* For OSD hosts run ceph cephadm osd activate
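The steps above roughly map to these commands (the hostname "host01" is a
placeholder):

```shell
# Put the host into maintenance mode (stops its daemons, sets noout)
ceph orch host maintenance enter host01
# ... reinstall the OS, restore settings and the cephadm SSH key ...
ceph orch host maintenance exit host01
# Re-create the systemd units for the existing OSDs on that host
ceph cephadm osd activate host01
```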
Regards
--
Robert Sander
partition
table or logical volume signatures.
Regards
--
Robert Sander
would not use Ceph packages shipped from a distribution but always the
ones from download.ceph.com or even better the container images that
come with the orchestrator.
Which version do your other Ceph nodes run on?
Regards
--
Robert Sander
upgrade the Ceph
packages.
download.ceph.com has packages for Ubuntu 22.04 and nothing for 24.04.
Therefore I would assume Ubuntu 24.04 is not a supported platform for
Ceph (unless you use the cephadm orchestrator and container).
BTW: Please keep the discussion on the mailing list.
Regards
--
Robert Sander
Hi,
On 6/26/24 11:49, Boris wrote:
Is there a way to only update 1 daemon at a time?
You can use the feature "staggered upgrade":
https://docs.ceph.com/en/reef/cephadm/upgrade/#staggered-upgrade
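A rough sketch of such an upgrade (the image tag is an example):

```shell
# Upgrade only the mgr daemons first, then the monitors, limiting the
# number of daemons upgraded in one run; repeat for the other types.
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2 --daemon-types mgr
ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2 --daemon-types mon --limit 1
ceph orch upgrade status
```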
Regards
--
Robert Sander
create any new OSDs.
Regards
--
Robert Sander
/thread/6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC/#6EVOYOHS3BTTNLKBRGLPTZ76HPNLP6FC
Shouldn't db_slots make that easier?
Is this a bug in the orchestrator?
Regards
--
Robert Sander
Hi,
On 7/11/24 09:01, Eugen Block wrote:
apparently, db_slots is still not implemented. I just tried it on a test
cluster with 18.2.2:
I am thinking about a PR to correct the documentation.
Regards
--
Robert Sander
I suggest using Ubuntu 22.04 LTS as the base operating system.
You can use cephadm on top of that without issues.
Regards
--
Robert Sander
based on CentOS 8.
When you execute "cephadm shell" it starts a container with that image
for you.
Regards
--
Robert Sander
On 7/23/24 08:24, Iztok Gregori wrote:
Am I missing something obvious, or is there no way to specify an ID
during OSD creation with the Ceph orchestrator?
Why would you want to do that?
A new OSD always gets the lowest available ID.
Regards
--
Robert Sander
Hi Marianne,
is there anything in the kernel logs of the VMs and the hosts where the
VMs are running with regard to the VM storage?
Regards
--
Robert Sander
On 05.08.24 18:38, Nicola Mori wrote:
docker.io/snack14/ceph-wizard
This is not an official container image.
The images from the Ceph project are on quay.io/ceph/ceph.
Regards
--
Robert Sander
large
number of nodes (more than 10) and a proportional number of OSDs.
Mixing HDDs and SSDs in one pool is not good practice, as a pool should
have OSDs of the same speed.
Kindest Regards
--
Robert Sander
Am 11.11.20 um 13:05 schrieb Hans van den Bogert:
> And also the erasure coded profile, so an example on my cluster would be:
>
> k=2
> m=1
With this profile you can only lose one OSD at a time, which is really
not that redundant.
Regards
--
Robert Sander
ot=default
k=2
m=2
You need k+m=4 independent hosts for the EC parts, but your CRUSH map
only shows two hosts. This is why all your PGs are undersized and degraded.
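For reference, a profile like the one quoted is created like this (the
profile and pool names are placeholders):

```shell
# EC profile with k=2 data and m=2 coding chunks, one chunk per host:
ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host
ceph osd pool create ecpool 32 erasure ec-2-2
```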
Regards
--
Robert Sander
com.tw/
Regards
--
Robert Sander
ls also
removes the objects and you can start new.
Regards
--
Robert Sander
0,88676 0,00338191
true rand 30,1007 82474194304 4194304 1095,92
273 25,5066 313 213 0,05719 0,99140 0,00325295
Regards
--
Robert Sander
Hi Marc and Dan,
thanks for your quick responses assuring me that we did nothing totally
wrong.
Regards
--
Robert Sander
t;:"rbd","id":1,"stats":{"stored":27410520278,"objects":6781,"kb_used":80382849,"bytes_used":82312036566,"percent_used":0.1416085809469223,"max_avail":166317473792}},{"name":"cephfs_data",
nked together using lvm or somesuch? What are the tradeoffs?
IMHO there are no tradeoffs; there could even be benefits in creating a
volume group with multiple physical volumes on RBD, as the requests can
be better parallelized (i.e. virtio-single SCSI controller for qemu).
Regards
--
Robert Sander
(error connecting to the cluster)
This issue is mostly caused by not having a readable ceph.conf and
ceph.client.admin.keyring file in /etc/ceph for the user that starts the
ceph command.
Regards
--
Robert Sander
Hi,
Am 04.02.21 um 12:10 schrieb Frank Schilder:
> Going to 2+2 EC will not really help
On such a small cluster you cannot even use EC because there are not
enough independent hosts. As a rule of thumb there should be k+m+1 hosts
in a cluster AFAIK.
Regards
--
Robert Sander
in the cluster.
You need ports 3300 and 6789 for the MONs on their IPs and any dynamic
port starting at 6800 used by the OSDs. The MDS also uses a port above 6800.
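As a sketch, assuming the hosts use firewalld (other firewalls differ), the
MON ports and the default 6800-7300 daemon port range could be opened like
this:

```shell
# Open the MON ports (msgr2 and legacy) and the dynamic port range
# used by OSD and MDS daemons, then reload the firewall rules.
firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp
firewall-cmd --permanent --add-port=6800-7300/tcp
firewall-cmd --reload
```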
Regards
--
Robert Sander
Am 10.02.21 um 15:54 schrieb Frank Schilder:
> Which ports are the clients using - if any?
All clients only have outgoing connections and do not listen to any
ports themselves.
The Ceph cluster will not initiate a connection to the client.
Kindest Regards
--
Robert Sander
0G
bonded interfaces in the cluster network? I would assume that you would
want to go at least 2x 25G here.
Regards
--
Robert Sander
Am 10.03.21 um 20:44 schrieb Ignazio Cassano:
> 1 small ssd is for operations system and 1 is for mon.
Make that a RAID1 set of SSDs and be happier. ;)
Regards
--
Robert Sander
Am 12.03.21 um 18:30 schrieb huxia...@horebdata.cn:
> Any other aspects on the limits of bigger capacity hard disk drives?
Recovery will take longer, increasing the risk of another failure
during that time.
Regards
--
Robert Sander
ready rebooted the box so I won't be able to
> test immediately.)
My experience with LVM is that only a reboot helps in this situation.
Regards
--
Robert Sander
check docker.io/ceph/ceph:v15" but it
tells me that the containers do not need to be upgraded.
How will this security fix of OpenSSL be deployed in a timely manner to
users of the Ceph container images?
Regards
--
Robert Sander
B
volumes and one OSD on each SSD.
HDD-only OSDs are quite slow. If you do not have enough SSDs for them, go
with an SSD-only CephFS metadata pool.
Regards
--
Robert Sander
pected condition which
prevented it from fulfilling the request.", "request_id":
"e89b8519-352f-4e44-a364-6e6faf9dc533"}
']
I have no r
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed
> to start datalog_rados service ((5) Input/output error
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed
> to init services (ret=(5) Input/output error)
I see the same issues on a
Hi,
I forgot to mention that CephFS is enabled and working.
Regards
--
Robert Sander
Hi,
The DB device needs to be empty for an automatic OSD service. The service will
then create N db slots using logical volumes and not partitions.
Regards
--
Robert Sander
So when you have a Ceph cluster with Rados-Gateways you should not
upgrade to Pacific currently.
Regards
--
Robert Sander
Hi,
this is one of the use cases mentioned in Tim Serong's talk:
https://youtu.be/pPZsN_urpqw
Containers are great for deploying a fixed state of a software project (a
release), but not so much for the development of plugins etc.
Regards
--
Robert Sander
Hi,
# docker pull ceph/ceph:v16.2.1
Error response from daemon: toomanyrequests: You have reached your pull
rate limit. You may increase the limit by authenticating and upgrading:
https://www.docker.com/increase-rate-limit
How do I update a Ceph cluster in this situation?
Regards
--
Robert Sander
Hi,
Am 21.04.21 um 10:14 schrieb Robert Sander:
> How do I update a Ceph cluster in this situation?
I learned that I need to create an account on the website hub.docker.com
to be able to download Ceph container images in the future.
With the credentials I need to run "docker login".
ied (error connecting to the cluster)
What should I do?
Regards
--
Robert Sander
Am 22.04.21 um 09:07 schrieb Robert Sander:
> What should I do?
I should also upgrade the CLI client which still was at 15.2.8 (Ubuntu
20.04) because a "ceph orch upgrade" run only updates the software
inside the containers.
Regards
--
Robert Sander
Hi,
to whomever it may concern:
The mirror server eu.ceph.com does not carry the Release files for
15.2.11 in https://eu.ceph.com/debian-15.2.11/dists/*/ and 16.2.1 in
https://eu.ceph.com/debian-16.2.1/dists/*/
Regards
--
Robert Sander
h map. It looks like the
OSD is the failure zone, and not the host. If it were the host, the
failure of any number of OSDs in a single host would not bring PGs down.
For the default redundancy rule and pool size 3 you need three separate
hosts.
Regards
--
Robert Sander
the mds suffer when only 4% of the osd goes
> down (in the same node). I need to modify the crush map?
With an unmodified crush map and the default placement rule this should
not happen.
Can you please show the output of "ceph osd crush rule dump"?
Regards
--
Robert Sander
ill lead to data loss or at least intermediate
unavailability.
The situation now is that all copies (or EC chunks) of a PG are
stored on OSDs of the same host. These PGs will be unavailable if the
host is down.
Regards
--
Robert Sander
Am 06.05.21 um 17:18 schrieb Sage Weil:
> I hit the same issue. This was a bug in 16.2.0 that wasn't completely
> fixed, but I think we have it this time. Kicking of a 16.2.3 build
> now to resolve the problem.
Great. I also hit that today. Thanks for fixing it quickly.
Regards
I had success with stopping the "looping" mgr container via "systemctl
stop" on the node. Cephadm then switches to another MGR to continue the
upgrade. After that I just started the stopped mgr container and the
upgrade continued.
Regards
--
Robert Sander
On 15.06.21 15:16, nORKy wrote:
> Why is there no failover ??
Because only one MON out of two is not in the majority to build a quorum.
Regards
--
Robert Sander
could theoretically RAID0 multiple disks and then put an OSD on top
of that, but this would create very large OSDs, which are not good for
recovering data. Recovering such a "beast" would just take too long.
Regards
--
Robert Sander
ssing between these two steps.
The first creates /etc/apt/sources.list.d/ceph.list and the second
installs packages, but the repo list was never updated.
Regards
--
Robert Sander
lding and hosting for open source projects
is solved with the openSUSE build service:
https://build.opensuse.org/
But I think what Sage meant was e.g. different versions of GCC on the
distributions and not being able to use all the latest features needed
for compiling Ceph.
Regards
--
Robert Sander
30 16:07:09 al111 bash[171790]: File
"/usr/share/ceph/mgr/devicehealth/module.py", line 33, in get_ata_wear_level
Jun 30 16:07:09 al111 bash[171790]: if page.get("number") != 7:
Jun 30 16:07:09 al111 bash[171790]: AttributeError: 'NoneType' object has no
attribute '
8 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+
7efc32db4080 -1 ** ERROR: osd init failed: (5) Input/output error
How do I correct the issue?
Regards
--
Robert Sander
have 3 nodes with 5x 12TB each (60TB) and 2 nodes with 4x 18TB each
(72TB), the maximum usable capacity will not be the sum of all
disks. Remember that Ceph tries to evenly distribute the data.
Regards
--
Robert Sander
daemons (outside of osds I believe) from offline hosts.
Sorry for maybe being rude but how on earth does one come up with the
idea to automatically remove components from a cluster where just one
node is currently rebooting without any operator interference?
Regards
--
Robert Sander
h cluster?
ceph osd set noout
and after the cluster has been booted again and every OSD joined:
ceph osd unset noout
Regards
--
Robert Sander
heavy.
Regards
--
Robert Sander
of block devices with the same size
distribution in each node you will get an even data distribution.
If you have a node with 4 3TB drives and one with 4 6TB drives Ceph
cannot use the 6TB drives efficiently.
Regards
--
Robert Sander
w the data distribution among the OSDs.
Are all of these HDDs? Are these HDDs equipped with RocksDB on SSD?
HDD-only OSDs will have abysmal performance.
Regards
--
Robert Sander
ll be faster, to write it to just one ssd, instead of
writing it to the disk directly.
Usually one SSD carries the WAL and RocksDB of four to five HDD-OSDs.
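A drive group specification along these lines (the service id is a made-up
example) lets the orchestrator place WAL and RocksDB for the HDD OSDs on the
SSDs automatically:

```yaml
# Sketch of an OSD service spec; apply with "ceph orch apply -i osd.yaml"
service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1      # HDDs become OSD data devices
  db_devices:
    rotational: 0      # SSDs carry WAL and RocksDB
```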
Regards
--
Robert Sander
Pools should have a uniform class of storage.
Regards
--
Robert Sander
this. The Linux kernel will happily answer ARP requests on any
interface for the IPs it has configured anywhere. That means you get
constant ARP flapping in your network.
Make the three interfaces bonded and configure all three IPs on the
bonded interface.
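As a sketch, assuming an Ubuntu host managed with netplan (interface names
and IP addresses are placeholders), such a bond could look like:

```yaml
# /etc/netplan/01-bond.yaml - three NICs bonded, all IPs on the bond
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2, eno3]
      parameters:
        mode: 802.3ad   # LACP; requires matching switch configuration
      addresses:
        - 192.0.2.10/24
        - 198.51.100.10/24
        - 203.0.113.10/24
```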
Regards
--
Robert Sander
work as the same IP subnet cannot span multiple
broadcast domains.
Regards
--
Robert Sander
Regards
--
Robert Sander
Hi,
I had to run
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
and stop all MDS and NFS containers and start one after the other again
to clear this issue.
Regards
--
Robert Sander
I just run
ceph orch upgrade start
Why does the orchestrator not run the necessary steps?
Regards
--
Robert Sander
use chrony or ntpd.
Regards
--
Robert Sander
s with the number of clients
(kubernetes nodes)
Nice hack. But why not establish a DNS name that points to 127.0.0.1?
Why the hassle with iptables?
Regards
--
Robert Sander
Am 08.12.21 um 02:34 schrieb mhnx:
- Sometimes NTP servers can respond but systemd-timesyncd can not sync
the time without manual help.
Just my 2¢: Do not use systemd-timesyncd.
Regards
--
Robert Sander
implementations, it will
simplify the user experience for those heavily relying on NFS exports.
This change is introduced in a point release?
After upgrading a cluster all NFS shares have to be configured again and
in the meantime NFS services do not work. Not so great IMHO.
Regards
--
Robert Sander
".nfs".
Why has the feature to configure a specific cephx key been removed?
What key is now used by nfs-ganesha to access the CephFS?
Regards
--
Robert Sander