[ceph-users] Re: Status of 18.2.3

2024-05-23 Thread Sake Ceph
ck.com/archives/C054Q1NUBQT/p1711041666180929 > > Regards > YuriW > > On Thu, May 23, 2024 at 6:22 AM Sake Ceph wrote: > > > > I was wondering what happened to the release of 18.2.3? Validation started > > on April 13th and as far as I know there have been

[ceph-users] Status of 18.2.3

2024-05-23 Thread Sake Ceph
really need some fixes of this release. Kind regards, Sake

[ceph-users] Re: Stuck in replay?

2024-04-22 Thread Sake Ceph
s it will have enough RAM to complete the replay? > > > > On 4/22/24 11:37 AM, Sake Ceph wrote: > >> Just a question: is it possible to block or disable all clients? Just > >> to prevent load on the system. > >> > >> Kind regards, &g

[ceph-users] Re: Stuck in replay?

2024-04-22 Thread Sake Ceph
Just a question: is it possible to block or disable all clients? Just to prevent load on the system. Kind regards, Sake > Op 22-04-2024 20:33 CEST schreef Erich Weiler : > > > I also see this from 'ceph health detail': > > # ceph health detail > HEALTH_WARN 1 filesystem is degraded; 1
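A minimal sketch of one way to keep client load off the filesystem while the MDS is replaying, assuming a recent release that supports refuse_client_session; <fsname> is a placeholder for the degraded filesystem:

    # check which ranks are still in up:replay
    ceph fs status <fsname>
    # refuse new client sessions while the MDS catches up (existing mounts will block)
    ceph fs set <fsname> refuse_client_session true
    # re-admit clients once the rank is active again
    ceph fs set <fsname> refuse_client_session false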

[ceph-users] Re: Mysterious Space-Eating Monster

2024-04-19 Thread Sake Ceph
Hi Matthew, Cephadm doesn't clean up old container images, at least with Quincy. After an upgrade we run the following commands: sudo podman system prune -a -f sudo podman volume prune -f But if someone has better advice, please tell us. Kind regards, Sake > Op 19-04-2024 10:24 CEST
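For reference, a sketch of that cleanup as run on each host, with podman system df added to show what will be reclaimed (assumes a podman-based cephadm deployment; needed images are re-pulled on demand):

    # show current image/volume disk usage
    sudo podman system df
    # remove all unused container images
    sudo podman system prune -a -f
    # remove unused volumes
    sudo podman volume prune -f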

[ceph-users] Re: TLS 1.2 for dashboard

2024-01-25 Thread Sake Ceph
I would say drop it for the Squid release, or if you keep it in Squid but plan to disable it in a minor release later, please make a note in the release notes when the option is removed. Just my 2 cents :) Best regards, Sake

[ceph-users] Re: TLS 1.2 for dashboard

2024-01-25 Thread Sake Ceph
t from dashboard because of security > reasons. (But so far we are planning to keep it as it is at least for the > older releases) > > Regards, > Nizam > > > On Thu, Jan 25, 2024, 19:41 Sake Ceph wrote: > > After upgrading to 17.2.7 our load balancers can't check the sta

[ceph-users] TLS 1.2 for dashboard

2024-01-25 Thread Sake Ceph
After upgrading to 17.2.7 our load balancers can't check the status of the manager nodes for the dashboard. After some troubleshooting I noticed only TLS 1.3 is available for the dashboard. Looking at the source (quincy), the TLS config was changed from 1.2 to 1.3. Searching in the tracker I
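A quick way to reproduce what the load balancer health check sees, assuming the dashboard listens on port 8443 on the active manager (mgr-host is a placeholder):

    # fails once the dashboard only offers TLS 1.3
    openssl s_client -connect mgr-host:8443 -tls1_2 </dev/null
    # succeeds against the 17.2.7 dashboard
    openssl s_client -connect mgr-host:8443 -tls1_3 </dev/null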

[ceph-users] Cephfs error state with one bad file

2024-01-02 Thread Sake Ceph
s/41/8f82507a0737c611720ed224bcc8b7a24fda01 rm: cannot remove '/mnt/shared_disk-app1/shared/data/repositories/11271/objects/41/8f82507a0737c611720ed224bcc8b7a24fda01': Input/output error Best regards, Sake
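A hedged sketch of how to check whether the MDS has recorded damage behind that I/O error; <fsname> and <path-in-fs> are placeholders, and whether a repair scrub is appropriate depends on what damage ls actually reports:

    # list metadata damage recorded by rank 0 of the filesystem
    ceph tell mds.<fsname>:0 damage ls
    # scrub the affected subtree (path is relative to the CephFS root)
    ceph tell mds.<fsname>:0 scrub start <path-in-fs> recursive,repair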

[ceph-users] Re: MDS subtree pinning

2023-12-31 Thread Sake Ceph
CET schreef Sake Ceph : > > > Hi! > > As I'm reading through the documentation about subtree pinning, I was > wondering if the following is possible. > > We've got the following directory structure. > / > /app1 > /app2 > /app3 > /app4 > > Ca

[ceph-users] MDS subtree pinning

2023-12-22 Thread Sake Ceph
to rank 3? I would like to load balance the subfolders of /app1 to 2 (or 3) MDS servers. Best regards, Sake
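A sketch of explicit export pins for that layout, assuming the filesystem is mounted at /mnt/cephfs and the attr package is installed; distributed ephemeral pinning is one way to spread the subfolders of /app1 over the active ranks:

    # pin /app1 to rank 0 and /app2 to rank 1
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/app1
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/app2
    # spread the immediate children of /app1 across all active MDS ranks
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/app1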

[ceph-users] Re: FS down - mds degraded

2023-12-21 Thread Sake Ceph
That wasn't really clear in the docs :( > Op 21-12-2023 17:26 CET schreef Patrick Donnelly : > > > On Thu, Dec 21, 2023 at 3:05 AM Sake Ceph wrote: > > > > Hi David > > > > Reducing max_mds didn't work. So I executed a fs reset: > > ceph fs set

[ceph-users] Re: FS down - mds degraded

2023-12-21 Thread Sake Ceph
reset atlassian-prod --yes-i-really-mean-it This brought the fs back online and the servers/applications are working again. Question: can I increase max_mds and activate standby_replay again? Will collect logs, maybe we can pinpoint the cause. Best regards, Sake
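For the question above, a sketch of turning multi-active and standby-replay back on, to be run only once ceph fs status shows the filesystem healthy again (values are examples, not recommendations):

    ceph fs set atlassian-prod max_mds 2
    ceph fs set atlassian-prod allow_standby_replay true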

[ceph-users] FS down - mds degraded

2023-12-20 Thread Sake Ceph
up:resolve seq 571 join_fscid=2 addr [v2:10.233.127.18:6800/3627858294,v1:10.233.127.18:6801/3627858294] compat {c=[1],r=[1],i=[7ff]}] Best regards, Sake

[ceph-users] Re: Stretch mode size

2023-11-15 Thread Sake Ceph
Don't forget that with stretch mode, OSDs only communicate with mons in the same DC and the tiebreaker only communicates with the other mons (to prevent split-brain scenarios). A little late as a response, but I wanted you to know this :)

[ceph-users] Re: Help needed with Grafana password

2023-11-10 Thread Sake Ceph
sword doesn't seem to be applied > > (don't know why yet). But since it's an "initial" password you can > > choose something simple like "admin", and during the first login you > > are asked to change it anyway. And then you can choose your more >

[ceph-users] Re: Stretch mode size

2023-11-09 Thread Sake Ceph
I believe they are working on, or want to work on, the ability to revert from a stretched cluster, for the reason you mention: if the other datacenter is totally burned down, you may want to switch to a single-datacenter setup for the time being. Best regards, Sake > Op 09-11-2023 11:18 CET schreef

[ceph-users] Re: Help needed with Grafana password

2023-11-09 Thread Sake Ceph
I tried everything at this point, even waited an hour, still no luck. I got it working once by accident, but with a placeholder for a password. Tried with the correct password, nothing, and trying again with the placeholder didn't work anymore. So I thought I'd switch the manager, maybe something

[ceph-users] Re: Help needed with Grafana password

2023-11-09 Thread Sake Ceph
with 'find / -name *grafana*'. > Op 09-11-2023 09:53 CET schreef Eugen Block : > > > What doesn't work exactly? For me it did... > > Zitat von Sake Ceph : > > > To bad, that doesn't work :( > >> Op 09-11-2023 09:07 CET schreef Sake Ceph : > >> > &g

[ceph-users] Re: Help needed with Grafana password

2023-11-09 Thread Sake Ceph
Too bad, that doesn't work :( > Op 09-11-2023 09:07 CET schreef Sake Ceph : > > > Hi, > > Well, to get promtail working with Loki, you need to set up a password in > Grafana. > But promtail wasn't working with the 17.2.6 release, the URL was set to > container

[ceph-users] Re: Help needed with Grafana password

2023-11-09 Thread Sake Ceph
o with Loki though. > > Eugen > > Zitat von Sake Ceph : > > > I configured a password for Grafana because I want to use Loki. I > > used the spec parameter initial_admin_password and this works fine for a > > staging environment, where I never tried to us
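A sketch of the spec-based approach referred to here; note that initial_admin_password only applies to a freshly initialized Grafana, so an environment where the password was already changed inside Grafana needs its Grafana data removed or the daemon redeployed before the spec takes effect (file name and password are placeholders):

    # grafana.yaml
    service_type: grafana
    placement:
      count: 1
    spec:
      initial_admin_password: <choose-a-password>

    ceph orch apply -i grafana.yaml
    ceph orch redeploy grafana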

[ceph-users] Help needed with Grafana password

2023-11-08 Thread Sake Ceph
a credentials error on the environment where I tried to use Grafana with Loki in the past (with 17.2.6 of Ceph/cephadm). I changed the password within Grafana in the past, but how can I overwrite this now? Or is there a way to clean up all Grafana files? Best regards, Sake

[ceph-users] Help needed with Grafana password

2023-11-08 Thread Sake Ceph
regards, Sake

[ceph-users] Re: MDS cache is too large and crashes

2023-07-24 Thread Sake Ceph
Thank you Patrick for responding and fixing the issue! Good to know the issue is known and being worked on :-) > Op 21-07-2023 15:59 CEST schreef Patrick Donnelly : > > > Hello Sake, > > On Fri, Jul 21, 2023 at 3:43 AM Sake Ceph wrote: > > > > At 01:27 this morn

[ceph-users] MDS cache is too large and crashes

2023-07-21 Thread Sake Ceph
(15GB/9GB); 0 inodes in use by clients, 0 stray files === Full health status === [WARN] MDS_CACHE_OVERSIZED: 1 MDSs report oversized cache mds.atlassian-prod.mds4.qlvypn(mds.0): MDS cache is too large (15GB/9GB); 0 inodes in use by clients, 0 stray files Best regards, Sake
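For reference, the warning compares against mds_cache_memory_limit; a hedged example of inspecting and raising it per MDS (the value is in bytes and only an example, and raising the limit does not address the underlying bug discussed in this thread):

    # show the current limit
    ceph config get mds mds_cache_memory_limit
    # raise it to 16 GiB per MDS daemon
    ceph config set mds mds_cache_memory_limit 17179869184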

[ceph-users] Cephadm fails to deploy loki with promtail correctly

2023-07-11 Thread Sake Ceph
someone know a workaround to set the correct URL for the time being? Best regards, Sake