Hi,
great that you found a solution. Maybe that also helps to get rid of
the cache-tier entirely?
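For reference, the usual sequence for retiring a writeback cache tier looks
roughly like this (a sketch only; base_pool is a placeholder for the backing
pool, cache_pool as named below):

ceph osd tier cache-mode cache_pool proxy
rados -p cache_pool cache-flush-evict-all
ceph osd tier remove-overlay base_pool
ceph osd tier remove base_pool cache_pool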
Quoting Cedric:
Hello,
Sorry for the late reply. So yes, we finally found a solution, which was to
split the cache_pool onto dedicated OSDs. This cleared the slow ops and
allowed the cluster to serve clients again after 5 days of lockdown;
fortunately, the majority of VMs resumed well, thanks to
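One way to do such a split is to give the cache pool its own CRUSH rule bound
to the dedicated OSDs' device class. A sketch, assuming the dedicated OSDs
carry the ssd device class (cache_rule is a placeholder name):

ceph osd crush rule create-replicated cache_rule default host ssd
ceph osd pool set cache_pool crush_rule cache_rule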
Found this mention in the CLT minutes posted this morning [1] of a discussion
on ceph-dev [2] about dropping Ubuntu Focal builds for the squid release, and
beginning builds of quincy for Jammy to facilitate quincy->squid upgrades.
> there was a consensus to drop support for ubuntu focal and
Hi folks,
Today we discussed:
- [casey] on dropping ubuntu focal support for squid
- Discussion thread:
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/ONAWOAE7MPMT7CP6KH7Y4NGWIP5SZ7XR/
- Quincy doesn't build jammy packages, so quincy->squid upgrade tests
I have it working on my machines; the global configuration for me looks like this:
[global]
fsid = fe3a7cb0-69ca-11eb-8d45-c86000d08867
mon_host = [v2:192.168.2.142:3300/0,v1:192.168.2.142:6789/0],[v2:192.168.2.141:3300/0,v1:192.168.2.141:6789/0]
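With that in place, mounting CephFS from the Windows side is, as far as I
know, just the following (a sketch, assuming the ceph-dokan client from the
Windows installer, with ceph.conf and the keyring in the default
C:\ProgramData\ceph\ location):

ceph-dokan.exe -l x

which maps the filesystem to drive X.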
Maybe this [2] helps, where one specific mountpoint is excluded:
mountpoint !~ "/mnt.*"
[2] https://alex.dzyoba.com/blog/prometheus-alerts/
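As a concrete illustration, an alert rule with such an exclusion could look
like the following sketch (the alert name and threshold are made up, and the
node_filesystem_* metrics assume node_exporter):

groups:
  - name: filesystem
    rules:
      - alert: FilesystemAlmostFull
        expr: node_filesystem_avail_bytes{mountpoint!~"/mnt.*"} / node_filesystem_size_bytes{mountpoint!~"/mnt.*"} < 0.10
        for: 5m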
Quoting Eugen Block:
Hi,
let me refer you to my response to a similar question [1]. I don't
have a working example of how to exclude some mountpoints, but it
should be possible to modify existing rules.
Regards,
Eugen
[1]
Hi,
I’d double-check that port 3300 is accessible (e.g. using telnet, which can
be installed as an optional Windows feature). Make sure that the cluster is
using the default port and not a custom one; also be aware that the v1
protocol uses 6789 by default.
Increasing the messenger log level to 10 might
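For concreteness, a quick sketch (the monitor IP is borrowed from the cluster
details elsewhere in this thread; debug_ms is the messenger debug option
referred to above):

telnet 192.168.1.10 3300

and, to raise the messenger log level on the client, in its ceph.conf:

[global]
debug_ms = 10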
Hi,
I'm debating with myself whether I should
1. Stop both OSDs 223 and 269, or
2. Stop just one of them.
I understand your struggle; I think I would stop them both, just to
rule out replication of corrupted data.
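If it helps, a sketch of how that could look on a systemd-managed
(non-cephadm) deployment; setting noout first avoids rebalancing while the
OSDs are down:

ceph osd set noout
systemctl stop ceph-osd@223    (on its host)
systemctl stop ceph-osd@269    (on its host)

With cephadm, "ceph orch daemon stop osd.223" and "ceph orch daemon stop
osd.269" instead; remember "ceph osd unset noout" afterwards.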
Quoting Kai Stian Olstad:
Hi Eugen, thank you for the reply.
The OSD was drained
Hi All,
I'm looking for some pointers/help as to why I can't get my Win10 PC
to connect to our Ceph Cluster's CephFS Service. Details are as follows:
Ceph Cluster:
- IP Addresses: 192.168.1.10, 192.168.1.11, 192.168.1.12
- Each node above is a monitor & an MDS
- Firewall Ports: open (ie