[ceph-users] NFS - HA and Ingress completion note?

2023-10-17 Thread andreas
NFS - HA and Ingress: [ https://docs.ceph.com/en/latest/mgr/nfs/#ingress ] Referring to Note#2, is NFS high-availability functionality considered complete (and stable)?
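For context, the cephadm-managed setup that note refers to is usually created along these lines (a minimal sketch; the cluster name, placement hosts and virtual IP below are made-up placeholders):

  # create an NFS cluster (nfs-ganesha) plus an ingress service
  # (haproxy/keepalived) that owns the virtual IP
  ceph nfs cluster create mynfs "nfs1,nfs2" --ingress --virtual-ip 192.0.2.10/24

  # show the virtual IP and the backend daemons behind the ingress
  ceph nfs cluster info mynfs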

[ceph-users] Duplicate help statements in Prometheus metrics in 16.2.13

2023-06-05 Thread Andreas Haupt
repaired in a pool Count # TYPE ceph_pg_objects_repaired counter ceph_pg_objects_repaired{poolid="32"} 0.0 [...] This annoys our exporter_exporter service so it rejects the export of ceph metrics. Is this a known issue? Will this be fixed in the next update? Cheers, Andreas -- |
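A quick way to confirm the duplication from the mgr's metrics endpoint (sketch; "mgr-host" is a placeholder, 9283 is the default port of the prometheus mgr module):

  # count HELP lines for the affected metric; anything above 1 means the
  # exposition format is invalid and strict parsers will reject the scrape
  curl -s http://mgr-host:9283/metrics | grep -c '^# HELP ceph_pg_objects_repaired'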

[ceph-users] RBD snapshot mirror syncs all snapshots

2023-04-12 Thread Andreas Teuchert
to avoid having all snapshots being synced? We only need the latest version of the image on the destination cluster and the snapshots add around 200% disk space overhead on average. Best regards, Andreas

[ceph-users] Re: Ceph rbd clients surrender exclusive lock in critical situation

2023-01-19 Thread Andreas Teuchert
, Andreas On 19.01.23 12:50, Frank Schilder wrote: Hi Ilya, thanks for the info, it did help. I agree, it's the orchestration layer's responsibility to handle things right. I have a case open already with support and it looks like there is indeed a bug on that side. I was mainly after a way

[ceph-users] Re: Cannot create snapshots if RBD image is mapped with -oexclusive

2022-12-08 Thread Andreas Teuchert
the lock and with "-oexclusive" the RBD client is not going to release it. So this is not a bug. Best regards, Andreas On 30.11.22 12:58, Andreas Teuchert wrote: Hello, creating snapshots of RBD images that are mapped with -oexclusive seems not to be possible: # rbd map -oexclusiv
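A minimal illustration of the behaviour described above (pool/image names are hypothetical):

  # mapped with -o exclusive: the client holds the exclusive lock permanently,
  # so snapshot creation cannot acquire it and fails
  rbd map -o exclusive rbd/img1
  rbd snap create rbd/img1@s1     # expected to fail while the lock is held

  # mapped normally: the lock is handed over on request and the snapshot works
  rbd unmap rbd/img1
  rbd map rbd/img1
  rbd snap create rbd/img1@s1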

[ceph-users] Cannot create snapshots if RBD image is mapped with -oexclusive

2022-11-30 Thread Andreas Teuchert
not to mention this. Is this on purpose or a bug? Ceph version is 17.2.5, RBD client is Ubuntu 22.04 with kernel 5.15.0-52-generic. Best regards, Andreas

[ceph-users] Re: MGR failures and pg autoscaler

2022-10-25 Thread Andreas Haupt
o some missing python modules ... Something suspicious in the output of "ceph crash ls" ? Cheers, Andreas -- | Andreas Haupt| E-Mail: andreas.ha...@desy.de | DESY Zeuthen| WWW:http://www-zeuthen.desy.de/~ahaupt | Platanenallee 6 | Phone: +49/
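For reference, the crash module commands mentioned above (the crash ID is a placeholder):

  # list all crash reports known to the mgr
  ceph crash ls

  # show the backtrace and metadata of one report
  ceph crash info <crash-id>

  # acknowledge old reports so they stop raising RECENT_CRASH warnings
  ceph crash archive-all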

[ceph-users] Autoscaler stopped working after upgrade Octopus -> Pacific

2022-10-11 Thread Andreas Haupt
ce class only in Pacific in order to get a functional autoscaler? Thanks, Andreas -- | Andreas Haupt| E-Mail: andreas.ha...@desy.de | DESY Zeuthen| WWW:http://www-zeuthen.desy.de/~ahaupt | Platanenallee 6 | Phone: +49/33762/7-7359 | D-15738 Zeuthen
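The usual starting point for checking the autoscaler after such an upgrade (a generic sketch, not specific to this cluster):

  # per-pool view of target ratios, pg numbers and autoscale mode
  ceph osd pool autoscale-status

  # the autoscaler only acts on pools whose mode is "on"
  ceph osd pool set <pool> pg_autoscale_mode on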

[ceph-users] tcmu-runner not in EPEL-8

2022-02-18 Thread Andreas Haupt
ally no problem compiling it on our own. But it would be much more convenient to have it in EPEL-8, as probably no one will run productive iSCSI gateways under Fedora ;-) Cheers, Andreas -- | Andreas Haupt| E-Mail: andreas.ha...@desy.de | DESY Zeuthen| WWW:http:/

[ceph-users] How to troubleshoot monitor node

2022-01-10 Thread Andreas Feile
Hi all, I've set up a 6-node ceph cluster to learn how ceph works and what I can do with it. However, I'm new to ceph, so if the answer to one of my questions is RTFM, point me to the right place. My problem is this: The cluster consists of 3 mons and 3 osds. Even though the dashboard shows
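A few generic first steps for looking at monitor health (a sketch; it assumes a package-based systemd deployment, where the unit is named ceph-mon@<hostname>):

  # overall cluster state and monitor quorum
  ceph -s
  ceph quorum_status --format json-pretty

  # logs of the local monitor daemon
  journalctl -u ceph-mon@$(hostname -s) --since "1 hour ago"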

[ceph-users] Huge headaches with NFS and ingress HA failover

2021-07-21 Thread Andreas Weisker
y too much time to switch to some other solution. Best regards, Andreas

[ceph-users] Nautilus CentOS-7 rpm dependencies

2021-05-31 Thread Andreas Haupt
Dear all, ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies that cannot be resolved here (not part of CentOS & EPEL 7): python3-cherrypy python3-routes python3-jwt Does anybody know where they are expected to come from? Thanks, Andreas -- | Andreas Haupt

[ceph-users] Re: mon db growing. over 500Gb

2021-03-11 Thread Andreas John
o I shut them down again. >> >> Any idea what is going on? Or how can I shrink back down the db?

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Andreas John
reasonably sized). I might be totally wrong, though. If you just do it because you don't want to re-create (or modify) the OSDs, it's not worth the effort IMHO. rgds, derjohn On 02.03.21 10:48, Norman.Kern wrote: > On 2021/3/2 5:09 AM, Andreas John wrote: >> Hello, >>

[ceph-users] Best practices for OSD on bcache

2021-03-01 Thread Andreas John
one have any best practices for it? Thanks. -- Andreas John net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach Geschaeftsfuehr
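For readers who do want to try it, the basic plumbing looks roughly like this (a sketch assuming one NVMe cache partition in front of one HDD; device names are placeholders):

  # create a bcache device: -B backing (slow) disk, -C cache (fast) device
  make-bcache -C /dev/nvme0n1p1 -B /dev/sdb

  # optional: writeback caching instead of the default writethrough
  echo writeback > /sys/block/bcache0/bcache/cache_mode

  # put the OSD on top of the resulting bcache device
  ceph-volume lvm create --data /dev/bcache0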

[ceph-users] Re: 10G stackabe lacp switches

2021-02-16 Thread Andreas John
have linux bonding with mode slb, but in my experience that didn't work very well with COTS switches, maybe due to ARP learning issues. (We ended up buying Juniper QFX-5100 with MLAG support). Best Regards, Andreas P.S. I didn't try out the setup from above yet. If anyone did already or will do

[ceph-users] Re: How to reset an OSD

2021-01-13 Thread Andreas John
wrote: > failed: (22) Invalid argument -- Andreas John net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832 Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net Facebook: https://www.facebook.com/netlabdotnet Twitter: https://t

[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Andreas John
Hello Alwin, do you know if it makes a difference to disable "all green computing" in the BIOS vs. setting the governor to "performance" in the OS? If not, I think I will have some service cycles to set our proxmox-ceph nodes correctly. Best Regards, Andreas On 1
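Independent of the BIOS question, the OS side can be checked and set like this (sketch; cpupower comes from the kernel tools package):

  # show the current governor of every core
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

  # switch all cores to the performance governor
  cpupower frequency-set -g performance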

[ceph-users] Re: Ceph test cluster, how to estimate performance.

2020-10-13 Thread Andreas John
apable to deliver well >> above 50 KIOPS. Difference is magnitude. Any info is more welcome. >> Daniel Mezentsev, founder >> (+1) 604 313 8592. >> Soleks Data Group. >> Shaping the clouds.
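A common way to get a first-order estimate directly against RADOS (a sketch; "testpool" is a throwaway pool created just for the benchmark):

  # 30 s of 4 KiB writes with 16 threads, keeping the objects for a read pass
  rados bench -p testpool 30 write -b 4096 -t 16 --no-cleanup

  # random reads against the objects written above
  rados bench -p testpool 30 rand -t 16

  # remove the benchmark objects afterwards
  rados -p testpool cleanup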

[ceph-users] Re: multiple OSD crash, unfound objects

2020-10-10 Thread Andreas John
>> cluster >>>> to query possible locations."  I'm not sure how long "some time" might >>>> take, but it hasn't changed after several hours. >>>> >>>> My questions are: >>>> >>>> * Is there a way to for
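The commands usually involved when chasing unfound objects (a sketch; the PG id is a placeholder, and mark_unfound_lost discards data, so it is a last resort):

  ceph health detail                 # which PGs report unfound objects
  ceph pg <pgid> list_unfound        # the objects themselves
  ceph pg <pgid> query               # peering state, probed/queried OSDs

  # last resort once all candidate OSDs have been probed:
  ceph pg <pgid> mark_unfound_lost revert   # or: delete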

[ceph-users] Re: Massive Mon DB Size with noout on 14.2.11

2020-10-02 Thread Andreas John

[ceph-users] Massive Mon DB Size with noout on 14.2.11

2020-10-02 Thread Andreas John
on db size increased drastically. We have 14.2.11, 10 OSD @ 2TB and cephfs in use. Is this a known issue? Should we avoid noout? TIA, derjohn -- Andreas John net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832 Tel: +49 69 8570033-1 | Fax:
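Two quick checks that are often useful here (a sketch; the monitor id is a placeholder and the store path assumes a package-based install):

  # size of the monitor's RocksDB store on disk
  du -sh /var/lib/ceph/mon/ceph-<id>/store.db

  # ask the monitor to compact its store
  ceph tell mon.<id> compact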

[ceph-users] Doing minor version update of Ceph cluster with ceph-ansible and rolling-update playbook

2020-09-28 Thread andreas . elvers+lists . ceph . io
. Is this assumption correct? The documentation (https://docs.ceph.com/projects/ceph-ansible/en/latest/day-2/upgrade.html) is short on this. Thanks! - Andreas
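For reference, the playbook in question is normally invoked roughly like this (a sketch; the inventory path is a placeholder and the confirmation variable follows ceph-ansible's conventions):

  # from the ceph-ansible checkout matching the installed release
  ansible-playbook -i hosts \
      infrastructure-playbooks/rolling_update.yml \
      -e ireallymeanit=yes    # skips the interactive confirmation prompt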

[ceph-users] Re: Remove separate WAL device from OSD

2020-09-22 Thread Andreas John
> not clear to me if this can only move a WAL device or if it can be > used to remove it ... > > Regards, > Michael
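For what it's worth, migrating a separate WAL back onto the main device is usually done with ceph-bluestore-tool while the OSD is stopped (a sketch only; the OSD id is a placeholder and LVM tags may still need cleaning up afterwards):

  systemctl stop ceph-osd@<id>

  # move the BlueFS contents of block.wal onto the main block device
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-<id> \
      --devs-source /var/lib/ceph/osd/ceph-<id>/block.wal \
      --dev-target /var/lib/ceph/osd/ceph-<id>/block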

[ceph-users] Re: Unknown PGs after osd move

2020-09-22 Thread Andreas John
Hello, On 22.09.20 20:45, Nico Schottelius wrote: > Hello, > > after having moved 4 ssds to another host (+ the ceph tell hanging issue > - see previous mail), we ran into 241 unknown pgs: You mean that you re-seated the OSDs into another chassis/host? Is the crush map aware of that? I

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-22 Thread Andreas John
Hello, https://docs.ceph.com/en/latest/rados/operations/erasure-code/ but, you could probably manually intervene, if you want an erasure coded pool. rgds, j. On 22.09.20 14:55, René Bartsch wrote: > On Tuesday, 22.09.2020 at 14:43 +0200, Andreas John wrote: >> Hello, >>

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-22 Thread Andreas John
ph cluster? > Does Proxmox support snapshots, backups and thin provisioning with RBD- > VM images? > > Regards, > > Renne

[ceph-users] Many scrub errors after update to 14.2.10

2020-08-06 Thread Andreas Haupt
9986 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes. Aug 6 08:28:44 krake08 ceph-osd: 2020-08-06 08:28:44.477 7fb6b2b9d700 -1 log_channel(cluster) log [ERR] : 12.38 repair 1 errors, 1 fixed Thanks in advance, Andreas -- | Andreas Haupt| E-Mail: andreas.ha...@desy.de | DESY Zeu
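The inconsistency mentioned in the log can be inspected and repaired per PG (a sketch; 12.38 is the PG from the log line above):

  # list which objects/shards a deep scrub flagged as inconsistent
  rados list-inconsistent-obj 12.38 --format=json-pretty

  # ask the primary OSD to repair the PG
  ceph pg repair 12.38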

[ceph-users] Nautilus to Octopus Upgrade mds without downtime

2020-05-27 Thread Andreas Schiefer
Hello, if I understand correctly: if we upgrade from a running nautilus cluster to octopus we have downtime on an MDS update. Is this correct? Mit freundlichen Grüßen / Kind regards Andreas Schiefer Leiter Systemadministration / Head of systemadministration --- HOME OF LOYALTY CRM
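The documented upgrade path reduces the file system to a single active MDS for the duration of the daemon restarts, which typically means a brief failover rather than an extended outage (a sketch; the fs name and rank count are placeholders):

  # before upgrading the MDS daemons
  ceph fs set <fs_name> max_mds 1

  # ...upgrade and restart the MDS daemons one at a time...

  # afterwards, restore the previous number of active MDS ranks
  ceph fs set <fs_name> max_mds <previous_value>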

[ceph-users] Re: missing amqp-exchange on bucket-notification with AMQP endpoint

2020-04-22 Thread Andreas Unterkircher
0 7f5aab2af700 1 handler->ERRORHANDLER: err_no=-2003 new_err_no=-2003 2020-04-23T07:02:17.745+0200 7f5aab2af700 2 req 1 0s http status=405 2020-04-23T07:02:17.745+0200 7f5aab2af700 1 == req done req=0x7f5aab2a6d50 op status=0 http_status=405 latency=0s == Best Regards, Andreas

[ceph-users] Re: missing amqp-exchange on bucket-notification with AMQP endpoint

2020-04-20 Thread Andreas Unterkircher
ocumentation is wrong, or is it? Cheers, Andreas [1] https://docs.ceph.com/docs/master/radosgw/notifications/#create-a-topic [2] Index: ceph-15.2.1/src/rgw/rgw_common.cc === --- ceph-15.2.1.orig/src/rgw/rgw_common.cc +++ ceph-15.2.1/

[ceph-users] missing amqp-exchange on bucket-notification with AMQP endpoint

2020-04-20 Thread Andreas Unterkircher
ere else? Best Regards, Andreas

[ceph-users] Re: RGW do not show up in 'ceph status'

2020-02-24 Thread Andreas Haupt
Sorry for the noise - problem was introduced by a missing iptables rule :-( On Fri, 2020-02-21 at 09:04 +0100, Andreas Haupt wrote: > Dear all, > > we recently added two additional RGWs to our CEPH cluster (version > 14.2.7). They work flawlessly, however they do not show up in 'c

[ceph-users] Re: RGW do not show up in 'ceph status'

2020-02-21 Thread Andreas Haupt
On Fri, 2020-02-21 at 15:19 +0700, Konstantin Shalygin wrote: > On 2/21/20 3:04 PM, Andreas Haupt wrote: > > As you can see, only the first, old RGW (ceph-s3) is listed. Is there > > any place where the RGWs need to get "announced"? Any idea, how to > > debug th

[ceph-users] RGW do not show up in 'ceph status'

2020-02-21 Thread Andreas Haupt
to get "announced"? Any idea, how to debug this? Thanks, Andreas -- | Andreas Haupt| E-Mail: andreas.ha...@desy.de | DESY Zeuthen| WWW:http://www-zeuthen.desy.de/~ahaupt | Platanenallee 6 | Phone: +49/33762/7-7359 | D-15738 Zeuthen | Fax:+

[ceph-users] Re: osd is immediately down and uses full CPU.

2020-02-02 Thread Andreas John
>> The cluster is used for VM image storage and object storage. >> And I have a bucket which has more than 20 million objects. >> >> Now, I have a problem that cluster blocks operation. >> >> Suddenly cluster blocked operations, then VMs can't read disk. >>

[ceph-users] Re: Getting rid of trim_object Snap .... not in clones

2020-02-01 Thread Andreas John
Hello, answering to myself in case someone else stumbles upon this thread in the future. I was able to remove the unexpected snap, here is the recipe: How to remove the unexpected snapshots: 1.) Stop the OSD ceph-osd -i 14 --flush-journal  ...  flushed journal /var/lib/ceph/osd/ceph-14/journal
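The remaining steps of the recipe are cut off in this preview; the tool generally used for this kind of per-object surgery is ceph-objectstore-tool, roughly along these lines (a sketch only, with placeholder object/clone IDs, run while the OSD is stopped and ideally tested against a copy first):

  ceph-objectstore-tool \
      --data-path /var/lib/ceph/osd/ceph-14 \
      --journal-path /var/lib/ceph/osd/ceph-14/journal \
      '<object>' remove-clone-metadata <clone-id>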

[ceph-users] Re: Getting rid of trim_object Snap .... not in clones

2020-02-01 Thread Andreas John
:20, Andreas John wrote: > Hello, > > for those stumbling upon a similar issue: I was able to mitigate the > issue, by setting > > > === 8< === > > [osd.14] > osd_pg_max_concurrent_snap_trims = 0 > > = > > > in ceph.conf. You don't need to rest

[ceph-users] Getting rid of trim_object Snap .... not in clones

2020-02-01 Thread Andreas John
it correctly that in PG 7.374 there is with rbd prefix 59cb9c679e2a9e3 an object that ends with ..3096, which has a snap ID 29c44 ... ? What does the part A29AAB74__7 mean? I was not able to find in the docs how the directory / filename is structured. Best Regards, j. On 31.01.20 16:04, Andreas John w

[ceph-users] Getting rid of trim_object Snap .... not in clones

2020-01-31 Thread Andreas John
Hello, in my cluster one OSD after the other dies until I recognized that it was simply an "abort" in the daemon, probably caused by 2020-01-31 15:54:42.535930 7faf8f716700 -1 log_channel(cluster) log [ERR] : trim_object Snap 29c44 not in clones Close to this msg I get a stacktrace:  ceph