Gmail marks too many messages on this mailing list as spam.
On Wed, Nov 16, 2022 at 11:01 AM Kai Stian Olstad <
ceph+l...@olstad.com> wrote:
> On 16.11.2022 00:25, Daniel Brunner wrote:
> > are my mails not getting through?
> >
> > is anyone receiving my emails?
>
You can check this yourself
Ceph: 17.2.5, dockerized with Ubuntu 20.04
Hi all,
I'm trying to enable Centralized Logging in the Dashboard as described in
https://docs.ceph.com/en/quincy/cephadm/services/monitoring/#cephadm-monitoring-centralized-logs
Logging into files is enabled:
ceph config set global log_to_file true
cep
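The next documented switch is presumably:
ceph config set global mon_cluster_log_to_file true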
> need to log in to the
> grafana using your grafana username and password. If you do that and
> refresh the dashboard,
> I think the Loki page should be visible from the Daemon Logs page.
>
> Regards,
> Nizam
>
> On Wed, Nov 16, 2022 at 7:31 PM E Taka <0eta...@gmail.com> wr
Ubuntu 20.04, Ceph 17.2.5, dockerized
Hello all,
this is frequently asked, but the answers I found are either old or do not
cover an extra WAL/DB device. Given an OSD that is located on an HDD, whose
WAL/DB is located on an SSD that is shared by all OSDs of the host. The OSD
is in, up and running.
Some information is missing to give a helpful answer.
How do you back up? (Files? RBD via Ceph? Block device with qemu-img?)
Which device driver do you use (Virtio? SATA?)?
In our production we use Virtio RBD and the hypervisor's standard cache mode.
The disks are snapshotted before the backup with 'q
Hi,
when removing some OSD with the command `ceph orch osd rm X`, the
rebalancing starts very fast, but after a while it almost stalls with a
very low recovery rate:
Dec 15 18:47:17 … : cluster [DBG] pgmap v125312: 3361 pgs: 13
active+clean+scrubbing+deep, 4 active+remapped+backfilling, 3344
ac
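A sketch of commands to inspect the removal and, on Quincy's mclock
scheduler, prioritize recovery (the profile name is from the docs):
ceph orch osd rm status      # draining progress per OSD
ceph -s                      # overall recovery/backfill rate
ceph config set osd osd_mclock_profile high_recovery_ops   # favor recovery over client I/O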
> Hi Erich,
> You can reference following link:
> https://www.suse.com/support/kb/doc/?id=19693
>
> Thanks,
> Liang Zheng
>
>
E Taka <0eta...@gmail.com> wrote on Fri, Dec 16, 2022 at 01:52:
>
>> Hi,
>>
>> when removing some OSD with the command `ceph orch
Ceph 17.2.5, dockerized, Ubuntu 20.04, OSD on HDD with WAL/DB on SSD.
Hi all,
old topic, but the problem still exists. I tested it extensively,
with osd_op_queue set either to mclock_scheduler (and profile set to high
recovery) or wpq and the well-known options (sleep_time, max_backfill) from
htt
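For completeness, the knobs meant above, as a sketch (values are examples,
not recommendations; osd_op_queue only takes effect after an OSD restart):
# wpq path:
ceph config set osd osd_op_queue wpq
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_sleep_hdd 0
# mclock path:
ceph config set osd osd_mclock_profile high_recovery_ops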
We had a similar problem, and it was a (visible) logfile. It is easy to
find with the ncdu utility (`ncdu -x /var`). There's no need for a reboot;
you can get rid of it by restarting the monitor with `ceph orch daemon
restart mon.NODENAME`. You may also lower the debug level.
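A sketch of lowering the monitor's verbosity (option names are the standard
mon debug settings, adjust to taste):
ceph config set mon debug_mon 1/5
ceph config set mon mon_cluster_log_file_level info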
On Thu, Jan 12, 202
You have to wait until the rebalancing has finished.
On Tue, Jan 10, 2023 at 5:14 PM Wyll Ingersoll <
wyllys.ingers...@keepertech.com> wrote:
> Running ceph-pacific 16.2.9 using ceph orchestrator.
>
> We made a mistake adding a disk to the cluster and immediately issued a
> command to remove it
Ceph 17.2.5:
Hi,
I'm looking for a reasonable and useful MDS configuration for a CephFS
(~100TB) that will be heavily used in the future (no experience with it so far).
For example, does it make a difference to increase the
mds_cache_memory_limit or the number of MDS instances?
The hardware does not set any limit
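To make the question concrete, these are the two knobs meant above (values
are only examples, <fs_name> is a placeholder):
ceph config set mds mds_cache_memory_limit 17179869184   # 16 GiB per MDS daemon
ceph fs set <fs_name> max_mds 2                          # number of active ranks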
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
> Web: https://croit.io/ | YouTube: https://goo.gl/PGE1Bx
>
>
>
> On 15 Jan 2023, at 09:26, E Taka <0eta...@gmail.com> wrote
> (I have set mid-point to 0.5) and that seems more than enough. What is
> really important is to pin directories. We use manual pinning over all
> ranks and it works like a charm. If you don't pin, the MDSes will not work
> very well. I had a thread on that 2-3 months ago.
>
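For reference, manual pinning is done with an extended attribute on a
mounted CephFS directory; a minimal sketch (paths and ranks are examples):
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects   # pin subtree to rank 0
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/home       # pin subtree to rank 1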
Hi,
I wonder whether it is possible to define a host pattern that includes the
host names ceph01…ceph19, but no other hosts, especially not ceph00. That
means this pattern is wrong: ceph[01][0-9], since it includes ceph00.
Not really a problem, but it seems that the “host-pattern” is a regex that
> wantNum=$1
> if [[ $wantNum =~ ^ceph(0[2-9]|1[0-9])$ ]] ; then
> wantNum=${BASH_REMATCH[1]}
>
> Which gives me the number, if it is in the range 02-19
>
> Dunno, if that helps :)
>
> Ciao, Uli
>
> > On 27. Jan 2023, at 18:17, E Taka <0eta...@gmail.com> w
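For what it's worth, a regex matching ceph01…ceph19 but not ceph00 would
presumably be ^ceph(0[1-9]|1[0-9])$; a quick bash check:
for h in ceph00 ceph01 ceph09 ceph10 ceph19 ceph20; do
    [[ $h =~ ^ceph(0[1-9]|1[0-9])$ ]] && echo "$h matches"
done
# prints ceph01, ceph09, ceph10 and ceph19 only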
I'm using a dockerized Ceph 17.2.6 under Ubuntu 22.04.
Presumably I'm missing a very basic thing, since this seems a very simple
question: how can I call cephfs-top in my environment? It is not included
in the Docker Image which is accessed by "cephadm shell".
And calling the version found in the
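A possible route, assuming the documented stats module and the default
client.fstop user (the package name may differ per distribution):
ceph mgr module enable stats
ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
cephfs-top   # run on the host, e.g. from the cephfs-top package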
Dockerized Ceph 17.2.6 on Ubuntu 22.04
The CephFS filesystem has a size of 180TB; only 66TB are used.
When running an `ls -lR`, the output stops and all accesses to the directory
stall. ceph health says:
# ceph health detail
HEALTH_WARN 1 clients failing to respond to capability release; 1 MDSs
r
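A sketch for locating and, if need be, evicting the client that holds the
capabilities (rank 0 and the placeholders are assumptions):
ceph tell mds.<fsname>:0 session ls                   # find the client id
ceph tell mds.<fsname>:0 client evict id=<client-id>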
Same problem here with Ceph 17.2.6 on Ubuntu 22.04 and Clients Debian 11,
Kernel 6.0.12-1~bpo11+1.
We are still looking for a solution. For the time being we have the
Orchestrator restart the MDS daemons by removing/adding labels on the
servers. We use
multiple MDS and have many CPU cores and memory. The
This small (Bash) wrapper around the "ceph daemon" command, especially the
auto-completion with the TAB key, is quite helpful, IMHO:
https://github.com/test-erik/ceph-daemon-wrapper
On Fri, Dec 1, 2023 at 3:03 PM Phong Tran Thanh <
tranphong...@gmail.com> wrote:
> It works!!!
>
> Thanks Ka
Hello,
in our cluster we have one node with SSDs, which are in use, but we cannot
see them in "ceph orch device ls". Everything else looks OK. For better
understanding, the disk name is /dev/sda, it's osd.138:
~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda    8:0  0   7T  0 disk
~# wipefs /d
We had the same problem. It turned out that one disk was slowly dying. It
was easy to identify by the commands (in your case):
ceph pg dump | grep -F 6.78
ceph pg dump | grep -F 6.60
…
This command shows the OSDs of a PG in square brackets. If there is always
the same number, then you've found th
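A sketch of checking the suspect disk once one OSD number keeps repeating
(osd.N and /dev/sdX are placeholders):
ceph device ls-by-daemon osd.N   # map the OSD to its physical device
smartctl -a /dev/sdX             # on the host: look for reallocated/pending sectors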
0 0 0 0
> 0 0 0 0 0
> active+clean 2024-01-25T08:19:04.148941+0200 0'0
> 107546:161331 [11,24,35] 11 [11,24,35] 11
> 0'0 2024-01-25T08:19:04.148837+02
Hi,
The requirements are actually not high: 1. there should be a generally
known address for access. 2. it should be possible to reboot or shut down a
server without the RGW connections being down the entire time. A downtime
of a few seconds is OK.
Constant load balancing would be nice, but is no
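A sketch of what presumably fits these requirements: cephadm's ingress
service (keepalived + haproxy) in front of the RGW daemons; the service
names, virtual IP and ports below are examples:
cat > rgw-ingress.yaml <<'EOF'
service_type: ingress
service_id: rgw.default
placement:
  count: 2
spec:
  backend_service: rgw.default
  virtual_ip: 10.10.0.100/16
  frontend_port: 8080
  monitor_port: 1967
EOF
ceph orch apply -i rgw-ingress.yaml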
Hi all,
we have a serious problem with CephFS. A few days ago, the CephFS file
systems became inaccessible, with the message MDS_DAMAGE: 1 mds daemon
damaged
The cephfs-journal-tool tells us: "Overall journal integrity: OK"
The usual attempts with redeploy were unfortunately not successful.
Aft
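For context, a hedged sketch of the inspection and recovery commands (the
rank cephfs:0 is a placeholder; `ceph mds repaired` only clears the damaged
flag and should be run after the underlying damage is dealt with):
cephfs-journal-tool --rank=cephfs:0 journal inspect
ceph mds repaired cephfs:0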
Hello,
I want to install Ceph Octopus on Ubuntu 20.04. The nodes have 2
network interfaces: 192.168.1.0/24 for the cluster network, and a
10.10.0.0/16 is the public network. When I bootstrap with cephadm, which
network do I use? That means, do I use cephadm bootstrap --mon-ip
192.168.1.1 or do
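My reading of the docs is that --mon-ip must be on the public network and
the cluster network is set separately; a sketch (addresses are examples):
cephadm bootstrap --mon-ip 10.10.0.1                   # address on the public network
ceph config set global cluster_network 192.168.1.0/24  # dedicated cluster network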
, but that may be my fault…
On Fri, Nov 13, 2020 at 9:40 PM Stefan Kooman wrote:
> On 2020-11-13 21:19, E Taka wrote:
> > Hello,
> >
> > I want to install Ceph Octopus on Ubuntu 20.04. The nodes have 2
> > network interfaces: 192.168.1.0/24 for the cluster n
Hello,
I'm having difficulties with setting up the web certificates for the
Dashboard on hostnames ceph*01..n*.domain.tld.
I set the keys and crt with ceph config-key. ceph config-key get
mgr/dashboard/crt shows the correct certificate;
the same applies to mgr/dashboard/key and mgr/cephadm/grafana_key
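As an aside, the dashboard module also accepts per-host certificates
directly; a sketch (file names are examples):
ceph dashboard set-ssl-certificate ceph01 -i ceph01.crt
ceph dashboard set-ssl-certificate-key ceph01 -i ceph01.key
ceph mgr module disable dashboard && ceph mgr module enable dashboard  # restart to pick them up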
FYI: I've found a solution for the Grafana Certificate. Just run the
following commands:
1.
ceph config-key set mgr/cephadm/grafana_crt -i
ceph config-key set mgr/cephadm/grafana_key -i
2.
ceph orch redeploy grafana
3.
ceph config set mgr mgr/dashboard/GRAFANA_API_URL
https://ceph01.domain.t
Hi,
if the host fails, to which the grafana-api-url points (in the example
below ceph01.hostxyz.tld:3000), Ceph Dashboard can't Display Grafana Data:
# ceph dashboard get-grafana-api-url
https://ceph01.hostxyz.tld:3000
Is it possible to automagically switch to another host?
Thanks, Erich
Hi,
this is documented with many links to other documents, which
unfortunately only confused me. In our 6-Node-Ceph-Cluster (Pacific)
the Dashboard tells me that I should "provide the URL to the API of
Prometheus' Alertmanager". We only use Grafana and Prometheus which
are deployed by cephadm. We
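A sketch of what is presumably the missing piece: deploying Alertmanager
with cephadm and pointing the dashboard at it (host and ports are examples):
ceph orch apply alertmanager
ceph dashboard set-alertmanager-api-host 'http://ceph01:9093'
ceph dashboard set-prometheus-api-host 'http://ceph01:9095'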
If I understand the documentation for the placements in "ceph orch
apply" correctly, I can place the daemons by number or on specific
hosts. But what I want is:
"Start 3 mgr services, and one of it should be started on node ceph01."
How I can achieve this?
Thanks!
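The closest approximations I know of are naming the hosts explicitly or
steering by label (host names are examples):
ceph orch apply mgr --placement="ceph01 ceph02 ceph03"
# or:
ceph orch host label add ceph01 mgr
ceph orch apply mgr --placement="label:mgr"
With label:mgr the count follows the number of labeled hosts.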
Hi,
the dashboard warns me about a node, that it is filling up fast.
Actually, on this node there is a large directory (now 12 GB):
/var/lib/ceph/$FSID/removed/
Is this directory or its content needed? Can I remove the content, or
is there a Ceph command for purging it?
Thanks!
Hi,
we have enabled Cluster → Monitoring in the Dashboard. Some of the
regularly shown messages are not really useful for us (packet drops
in OVS) and we want to suppress them. Creating a silence does not
help, because the messages still appear, but in blue instead of red
color.
Is there a way t
>
> [1] https://tracker.ceph.com/issues/51987
>
> ____
> From: E Taka <0eta...@gmail.com>
> Sent: Friday, July 30, 2021 12:39 PM
> To: ceph-users
> Subject: [ceph-users] Dashboard Montitoring: really suppress messages
>
> Hi,
>
>
A few hours ago we had the same problem, also with Ubuntu 20.04, and
there is a coincidence in time with the latest Docker update, which
was triggered by Puppet. In the end, all the containers came back up
without a reboot. Thanks for the hint.
Note to myself: change the package parameter for the
Hello all,
a year ago we started with a 3-node-Cluster for Ceph with 21 HDD and 3
SSD, which we installed with Cephadm, configuring the disks with
`ceph orch apply osd --all-available-devices`
Over the time the usage grew quite significantly: now we have another
5 nodes with 8-12 HDD and 1-2 SSD
One can find questions about this topic in the WWW, but most of them
for older versions of Ceph. So I ask specifically for the current
version:
· Pacific 16.2.5
· 7 nodes (with many cores and RAM) with 111 OSD
· all OSD included by: ceph orch apply osd --all-available-devices
· bucket created in th
Hi,
thanks for the answers. My goal was to speed up the S3 interface, and
not only a single program. This was successful with this method:
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#block-and-block-db
However, one major disadvantage was that Cephadm considered the O
The Dashboard URLs can be very confusing, especially since SSL certificates
require an FQDN, while Ceph itself recommends the short names. Not
to mention that ceph mgr services has shown IP addresses instead of
names since 16.2.5 or so.
Please read https://docs.ceph.com/en/pacific/mgr/dashboard
see https://docs.ceph.com/en/pacific/cephfs/multimds/
If I understand it, do this:
ceph fs set <fs_name> max_mds 2
ceph fs set <fs_name> standby_count_wanted 1
ceph orch apply mds 3
On Sun, Oct 24, 2021 at 9:52 AM huxia...@horebdata.cn <
huxia...@horebdata.cn> wrote:
> Dear Cephers,
>
> When setting up mul
Hi Yuri,
I faced the same problem that recently only IP addresses are listed in
`ceph mgr services`
As an easy workaround I installed a lighttpd for just one CGI script:
#!/bin/bash
DASHBOARD=$(ceph mgr services | jq '.dash
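A minimal reconstruction of such a redirect CGI (the jq filter and the 302
response are assumptions, not necessarily the original script):
#!/bin/bash
# ask the active mgr for the current dashboard URL and redirect there
DASHBOARD=$(ceph mgr services | jq -r '.dashboard')
echo "Status: 302 Found"
echo "Location: $DASHBOARD"
echo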
Hi Jonas,
I'm impressed, Thanks!
I have a question about the usage: do I have to turn off the automatic
balancing feature (ceph balancer off)? Do the upmap balancer and your
customizations get in each other's way, or can I run your script from time
to time?
Thanks
Erich
On Mon, Oct 25, 2021 at
Dashboard → Filesystems → (filesystem name) → Directories
fails on a particular file system with error "500 - Internal Server Error".
The log shows:
Jan 16 11:22:18 ceph00 bash[96786]: File
"/usr/share/ceph/mgr/dashboard/services/cephfs.py", line 57, in opendir
Jan 16 11:22:18 ceph00 bash[96
Hello Ernesto,
the commands worked without any problems, with Ubuntu's 20.04 Ceph packages
and inside "cephadm shell". I tried all 55k directories of the filesystem.
Best,
Erich
On Mon, Jan 17, 2022 at 9:10 PM Ernesto Puerta <
epuer...@redhat.com> wrote:
> Hi E Taka,
>
libcephfs is "cephfs.OSError: opendir
> failed: Permission denied [Errno 13]", it could be that the mgr doesn't
> have rights (ceph auth) to access the filesystem. Could you check the mds
> logs for any trace when the Dashboard error appears?
>
> Kind Regards,
>
Hi,
this sounds like a FAQ, but I haven't found a clear answer:
How can I identify the most active RBD images, where "most active" means
the images with many I/O operations?
Thanks
Erich
(Background of the question is that we have many virtual machines running
that we can't monitor directly, a
That's exactly what I was looking for, thanks!
On Sun, Feb 13, 2022 at 1:32 PM Marc wrote:
> >
> > this sounds like a FAQ, but I haven't found a clear answer:
> >
> > How can I identify the most active RBD images, where "most active" means
> > the images with many I/O operations?
> >
>
> rb
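Presumably the truncated answer refers to the rbd perf commands (available
via the rbd_support mgr module since Nautilus; <pool> is a placeholder):
rbd perf image iotop --pool <pool>    # interactive, sortable per-image I/O
rbd perf image iostat --pool <pool>   # periodic per-image statistics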
Hi,
is there a reason that the current Pacific container of NFS-Ganesha does
not provide the current version, Ganesha 4?
If yes: will Quincy use Ganesha 4?
If not: what's the recommended way to use Ganesha 4 together with Ceph
Pacific?
Thanks
Erich
Hello all,
how important is it to use the same Linux kernel version on all Hosts?
The background is that new hosts are installed with the current Ubuntu
Server 22.04, while the older ones run Ubuntu 20.04.
In other words: may I disable this check:
ceph cephadm config-check disable kernel_version
Version: Pacific 16.2.9
Hi,
when clicking in the Dashboard at "Object Gateway" submenus, for example
"Daemons", the Dashboard gets an HTTP error 500. The logs says about this:
requests.exceptions.SSLError: HTTPSConnectionPool(host='10.149.12.179',
port=8000): Max retries exceeded with url: /admi
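If the 500 is only about certificate verification of the RGW admin
endpoint, one possible workaround (it disables verification, so use with
care):
ceph dashboard set-rgw-api-ssl-verify False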
the
> host’s IP for that? But I’m just guessing here. Not sure if there’s a
> way to reset it with a ceph command or if removing and adding the host
> again would be necessary. There was a thread about that just this week.
>
>
> Zitat von E Taka <0eta...@gmail.com>:
>
Hi,
since updating to 17.2.1 we get 5 – 10 times per day the message:
[WARN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
host cephXX `cephadm gather-facts` failed: Unable to reach remote
host cephXX.
(cephXX is not always the same node).
This status is cleared after one o
Yes, "_admin" is set. After some restarting and redeploying the problem
seemed to disappear.
Thanks anyway.
Erich
On Fri, Jul 8, 2022 at 2:18 PM Adam King wrote:
> Hello,
>
> Does the MGR node have an "_admin" label on it?
>
> Thanks,
> - Adam Kin
Ceph 17.2.3 (dockerized in Ubuntu 20.04)
The subject says it. The MDS process always crashes after evicting. ceph -w
shows:
2022-09-22T13:26:23.305527+0200 mds.ksz-cephfs2.ceph00.kqjdwe [INF]
Evicting (and blocklisting) client session 5181680 (
10.149.12.21:0/3369570791)
2022-09-22T13:26:35.72931
Hi,
since the last update from 17.2.3 to version 17.2.4, the
mgr/cephadm/grafana_crt
setting is ignored. The output of
ceph config-key get mgr/cephadm/grafana_crt
ceph config-key get mgr/cephadm/grafana_key
ceph dashboard get-grafana-frontend-api-url
is correct.
Grafana and the Dashboard are r
> ceph config-key dump | grep grafana | grep crt
>
> I hope that helps,
> Redo.
>
>
>
> On Wed, Oct 5, 2022 at 3:19 PM E Taka <0eta...@gmail.com> wrote:
>
> > Hi,
> >
> > since the last update from 17.2.3 to version 17.2.4, the
> > mgr/cephadm
(17.2.4, 3 replicated, Container install)
Hello,
since much of the information found on the WWW or in books is outdated, I
want to ask which procedure is recommended to repair a damaged PG with status
active+clean+inconsistent for Ceph Quincy.
IMHO, the best process for a pool with 3 replicas it woul
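A sketch of the sequence I would consider standard for this (pgid is a
placeholder; list-inconsistent-obj shows which replica differs before
anything is touched):
rados list-inconsistent-obj <pgid> --format=json-pretty
ceph pg repair <pgid>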
> "primary": false,
>"errors": [],
>"size": 2780496,
>"omap_digest": "0x",
>"data_digest": "0x11e1764c"
> },
> {
>"osd&qu
Question 1) makes me wonder too.
This results in errors:
2022-10-25T11:20:00.000109+0200 mon.ceph00 [INF] overall HEALTH_OK
2022-10-25T11:21:05.422793+0200 mon.ceph00 [WRN] Health check failed:
failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
2022-10-25T11:22:06.037456+0200 mon.ceph00