[ceph-users] Re: Cannot recreate monitor in upgrade from pacific to quincy (leveldb -> rocksdb)

2024-02-02 Thread Mark Schouten
mon_warn_on_insecure_global_id_reclaim true root@proxmox01:~# ceph config get mon mon_warn_on_insecure_global_id_reclaim_allowed true — Mark Schouten CTO, Tuxis B.V. +31 318 200208 / m...@tuxis.nl -- Original Message -- From "Eugen Block" To ceph-users@ceph.io Date 02/02/2024, 08:30:45 Subject [ceph
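For reference, a minimal sketch of how these two settings are usually handled once all clients have been patched for the global_id reclaim issue (the cluster state is an assumption; the option names are standard Ceph options):

  # Stop allowing clients to reclaim global_ids insecurely (only once all clients are patched)
  ceph config set mon auth_allow_insecure_global_id_reclaim false
  # The related health warnings can then be re-enabled or silenced explicitly
  ceph config set mon mon_warn_on_insecure_global_id_reclaim true
  ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed true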

[ceph-users] Re: Cannot recreate monitor in upgrade from pacific to quincy (leveldb -> rocksdb)

2024-01-31 Thread Mark Schouten
confirm that ancient (2017) leveldb database mons should just accept ‘mon.$hostname’ names for mons, as well as ‘mon.$id’ ? — Mark Schouten CTO, Tuxis B.V. +31 318 200208 / m...@tuxis.nl -- Original Message -- From "Eugen Block" To ceph-users@ceph.io Date 31/01/2024, 13:02:04 Sub
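A quick way to see which names the monitors are actually registered under, regardless of the store backend, is the monmap itself (a sketch; output layout varies slightly per release):

  # Names and addresses as recorded in the monmap
  ceph mon dump
  # Per-daemon metadata (hostname, addresses, and similar)
  ceph mon metadata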

[ceph-users] Cannot recreate monitor in upgrade from pacific to quincy (leveldb -> rocksdb)

2024-01-31 Thread Mark Schouten
proxmox03", "public_addrs": { "addrvec": [ { "type": "v2", "addr": "10.10.10.3:3300", "nonce": 0 },
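For completeness, the generic (non-cephadm) sequence for dropping and recreating a single monitor is roughly the following; the mon name proxmox03 comes from the snippet above, the paths assume the default layout, and a freshly created mon gets a rocksdb store:

  ceph mon remove proxmox03
  rm -rf /var/lib/ceph/mon/ceph-proxmox03
  # fetch the current monmap and mon keyring, then rebuild the store
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  ceph-mon --mkfs -i proxmox03 --monmap /tmp/monmap --keyring /tmp/mon.keyring
  chown -R ceph:ceph /var/lib/ceph/mon/ceph-proxmox03
  systemctl start ceph-mon@proxmox03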

[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-28 Thread Mark Schouten
Hi, I just destroyed the filestore osd and added it as a bluestore osd. Worked fine. — Mark Schouten, CTO Tuxis B.V. m...@tuxis.nl / +31 318 200208 -- Original Message -- From "Jan Pekař - Imatic" To m...@tuxis.nl; ceph-users@ceph.io Date 2/25/2023 4:14:54 PM Subject
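A sketch of the usual destroy-and-recreate sequence for converting one OSD in place (OSD id and device name are placeholders; this keeps the existing id and cephx key):

  ID=12
  ceph osd out $ID                       # optional: drain the OSD first instead of rebuilding from peers
  ceph osd safe-to-destroy $ID           # wait until this reports the OSD can be removed without data loss
  systemctl stop ceph-osd@$ID
  ceph osd destroy $ID --yes-i-really-mean-it
  ceph-volume lvm zap --destroy /dev/sdX
  ceph-volume lvm create --bluestore --data /dev/sdX --osd-id $ID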

[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-07 Thread Mark Schouten
Hi, Thanks. Someone told me that we could just destroy the FileStore OSD’s and recreate them as BlueStore, even though the cluster is partially upgraded. So I guess I’ll just do that. (Unless someone here tells me that that’s a terrible idea :)) — Mark Schouten, CTO Tuxis B.V. m...@tuxis.nl

[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-06 Thread Mark Schouten
to work around this is welcome :) — Mark Schouten, CTO Tuxis B.V. m...@tuxis.nl / +31 318 200208 -- Original Message -- From "Jan Pekař - Imatic" To ceph-users@ceph.io Date 1/12/2023 5:53:02 PM Subject [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrad

[ceph-users] Re: how to upgrade host os under ceph

2022-10-26 Thread Mark Schouten
Hi Simon, You can just dist-upgrade the underlying OS. Assuming that you installed the packages from https://download.ceph.com/debian-octopus/, just change bionic to focal in all apt-sources, and dist-upgrade away. — Mark Schouten, CTO Tuxis B.V. m...@tuxis.nl -- Original Message
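Concretely, that boils down to something like the following on each node (release names per this thread; setting noout keeps the OSD restarts around the reboot from triggering rebalancing):

  ceph osd set noout
  sed -i 's/bionic/focal/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
  apt update && apt dist-upgrade
  # reboot if a new kernel was installed, then:
  ceph osd unset noout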

[ceph-users] Cluster downtime due to unsynchronized clocks

2021-09-23 Thread Mark Schouten
because one monitor has incorrect time. Thanks! -- Mark Schouten CTO, Tuxis B.V. | https://www.tuxis.nl/ <mailto:m...@tuxis.nl> | +31 318 200208 ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
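For reference, a few commands that show how far the monitor clocks are apart and where each host gets its time from (the last line depends on what the mon hosts actually run):

  ceph time-sync-status          # skew between monitors as the mons measure it
  ceph health detail             # lists MON_CLOCK_SKEW with the offending mon
  chronyc tracking               # or: timedatectl status / ntpq -p on each mon host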

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-07-08 Thread Mark Schouten
Hi, On 15-05-2021 at 22:17, Mark Schouten wrote: Ok, so that helped for one of the MDS'es. Trying to deactivate another mds, it started to release inos and dns'es, until it was almost done. When it had about 50 left, a client started to complain and be blacklisted until I restarted

[ceph-users] Re: MDS stuck in up:stopping state

2021-05-27 Thread Mark Schouten
On Thu, May 27, 2021 at 12:38:07PM +0200, Mark Schouten wrote: > On Thu, May 27, 2021 at 06:25:44AM +, Martin Rasmus Lundquist Hansen > wrote: > > After scaling the number of MDS daemons down, we now have a daemon stuck in > > the > > "up:stopping" state.

[ceph-users] Re: MDS stuck in up:stopping state

2021-05-27 Thread Mark Schouten
it? I have no clients, and it still does not want to stop rank1. Funny thing is, while trying to fix this by restarting mdses, I sometimes see a list of clients popping up in the dashboard, even though no clients are connected.. -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http

[ceph-users] Re: [Spam] 回复: MDS stuck in up:stopping state

2021-05-27 Thread Mark Schouten
On Thu, May 27, 2021 at 10:37:33AM +0200, Mark Schouten wrote: > On Thu, May 27, 2021 at 07:02:16AM +, 胡 玮文 wrote: > > You may hit https://tracker.ceph.com/issues/50112, which we failed to find > > the root cause yet. I resolved this by restart rank 0. (I have only 2 > >
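The workaround referenced here (restarting rank 0) can be done without touching the other daemons; a sketch, with the filesystem name as a placeholder:

  ceph fs status                 # shows which daemon currently holds rank 0
  ceph mds fail cephfs:0         # a standby takes over rank 0; the stuck rank often finishes stopping afterwards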

[ceph-users] Re: [Spam] 回复: MDS stuck in up:stopping state

2021-05-27 Thread Mark Schouten
| active | osdnode05 | Reqs:0 /s | 2760k | 2760k |
| 1 | stopping | osdnode06 | | 10 | 11 |
+--+--+---+---+---+---+
-- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...

[ceph-users] Force processing of num_strays in mds

2021-05-18 Thread Mark Schouten
and takes a lot of time, while not necessarily fixing the num_strays. How do I force the mds'es to process those strays so that clients do not get 'incorrect' errors? -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...@tuxis.nl
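The stray counters can be watched directly on the active MDS while a tree walk (such as the find -ls mentioned elsewhere in these threads) touches the affected entries; a sketch, with the daemon id as a placeholder:

  ceph daemon mds.<id> perf dump mds_cache | grep -i stray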

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-05-15 Thread Mark Schouten
On Fri, May 14, 2021 at 09:12:07PM +0200, Mark Schouten wrote: > It seems (documentation was no longer available, so it took some > searching) that I needed to run ceph mds deactivate $fs:$rank for every > MDS I wanted to deactivate. Ok, so that helped for one of the MDS'es. Trying to d

[ceph-users] Re: "No space left on device" when deleting a file

2021-05-14 Thread Mark Schouten
On Tue, May 11, 2021 at 02:55:05PM +0200, Mark Schouten wrote: > On Tue, May 11, 2021 at 09:53:10AM +0200, Mark Schouten wrote: > > This helped me too. However, should I see num_strays decrease again? > > I'm running a `find -ls` over my CephFS tree.. > > This helps, the

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-05-14 Thread Mark Schouten
On Mon, May 10, 2021 at 10:46:45PM +0200, Mark Schouten wrote: > I still have three active ranks. Do I simply restart two of the MDS'es > and force max_mds to one daemon, or is there a nicer way to move two > mds'es from active to standby? It seems (documentation was no longer availab
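Spelled out, the Luminous-era sequence for going back to a single active MDS looks roughly like this (filesystem name assumed to be cephfs; on Nautilus and newer, lowering max_mds alone stops the extra ranks and the deactivate command no longer exists):

  ceph fs set cephfs max_mds 1
  ceph mds deactivate cephfs:2     # highest rank first
  ceph mds deactivate cephfs:1
  ceph fs status                   # watch the ranks move through stopping and disappear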

[ceph-users] Re: "No space left on device" when deleting a file

2021-05-11 Thread Mark Schouten
On Tue, May 11, 2021 at 09:53:10AM +0200, Mark Schouten wrote: > This helped me too. However, should I see num_strays decrease again? > I'm running a `find -ls` over my CephFS tree.. This helps, the amount of stray files is slowly decreasing. But given the number of files in the cluster,
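The "No space left on device" on unlink is the classic symptom of a stray directory fragment hitting its entry cap; whether that was the case here is not shown in the snippet, but the commonly suggested mitigation while the strays drain looks like this:

  ceph daemon mds.<id> perf dump mds_cache | grep num_strays
  # raise the per-fragment entry cap from its default of 100000 (Mimic+ config database syntax)
  ceph config set mds mds_bal_fragment_size_max 200000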

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-05-11 Thread Mark Schouten
rectories are directories anyone has ever actively put pinning on... -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...@tuxis.nl ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an emai

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-05-11 Thread Mark Schouten
AFAIK. How can I check that? -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...@tuxis.nl ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
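Pinning is recorded as a virtual extended attribute on the directory, so it can be checked (and cleared) from any CephFS mount; the path here is a placeholder:

  getfattr -n ceph.dir.pin /mnt/cephfs/some/dir       # reports the rank the directory is pinned to, if any
  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/some/dir # -1 clears an explicit pin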

[ceph-users] Re: "No space left on device" when deleting a file

2021-05-11 Thread Mark Schouten
e num_strays decrease again? I'm running a `find -ls` over my CephFS tree.. -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...@tuxis.nl ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe

[ceph-users] Re: Upgrade tips from Luminous to Nautilus?

2021-05-10 Thread Mark Schouten
On Thu, Apr 29, 2021 at 10:58:15AM +0200, Mark Schouten wrote: > We've done our fair share of Ceph cluster upgrades since Hammer, and > have not seen much problems with them. I'm now at the point that I have > to upgrade a rather large cluster running Luminous and I would like to > hea

[ceph-users] Upgrade tips from Luminous to Nautilus?

2021-04-29 Thread Mark Schouten
upgrade all Ceph packages on the monitor-nodes and restart mons and then mgrs. After that, I would upgrade all Ceph packages on the OSD nodes and restart all the OSD's. Then, after that, the MDSes and RGWs. Restarting the OSD's will probably take a while. If anyone has a hint on what I should
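A condensed sketch of the usual Luminous-to-Nautilus order (mons first, then mgrs, OSDs, and finally MDS/RGW; the finishing commands come from the Nautilus upgrade notes):

  ceph osd set noout
  # on each node, after upgrading the packages:
  systemctl restart ceph-mon.target
  systemctl restart ceph-mgr.target
  systemctl restart ceph-osd.target
  systemctl restart ceph-mds.target ceph-radosgw.target
  # once every daemon reports 14.2.x:
  ceph osd require-osd-release nautilus
  ceph mon enable-msgr2
  ceph osd unset noout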

[ceph-users] Re: Unable to delete versioned bucket

2021-04-29 Thread Mark Schouten
On Sat, Apr 24, 2021 at 06:06:04PM +0200, Mark Schouten wrote: > Using the following command: > s3cmd setlifecycle lifecycle.xml s3://syslog_tuxis_net > > That gave no error, and I see in s3browser that it's active. > > The RGW does not seem to kick in yet, bu
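To double-check whether the policy actually reached the RGW, and to trigger a lifecycle pass manually while testing (bucket name taken from the quoted command):

  s3cmd getlifecycle s3://syslog_tuxis_net
  radosgw-admin lc list          # per-bucket lifecycle processing status
  radosgw-admin lc process       # forces a lifecycle run instead of waiting for the work window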

[ceph-users] Re: Unable to delete versioned bucket

2021-04-24 Thread Mark Schouten
RGW does not seem to kick in yet, but I'll keep an eye on that. -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...@tuxis.nl ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Unable to delete versioned bucket

2021-04-23 Thread Mark Schouten
marker either. So I'm stuck with that bucket which I would like to remove without abusing radosgw-admin. This cluster is running 12.2.13 with civetweb rgw's behind a haproxy setup. All is working fine, except for this versioned bucket. Can anyone point me in the right direction to remove this bucket
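Without radosgw-admin, a versioned bucket can only be deleted once every object version and delete marker has been removed; a sketch using the awscli S3 API (endpoint, bucket, and key names are placeholders):

  aws --endpoint-url https://rgw.example.net s3api list-object-versions --bucket mybucket
  aws --endpoint-url https://rgw.example.net s3api delete-object \
      --bucket mybucket --key path/to/object --version-id <VersionId>
  # once no versions or delete markers remain:
  aws --endpoint-url https://rgw.example.net s3api delete-bucket --bucket mybucket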

[ceph-users] CephFS max_file_size

2020-12-11 Thread Mark Schouten
Hi, There is a default limit of 1TiB for the max_file_size in CephFS. I altered that to 2TiB, but I now got a request for storing a file up to 7TiB. I'd expect the limit to be there for a reason, but what is the risk of setting that value to say 10TiB? -- Mark Schouten Tuxis, Ede, https
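For what it's worth, the cap itself is a per-filesystem setting and raising it is a one-liner (fs name assumed to be cephfs; 10 TiB expressed in bytes). The limit mainly bounds how many backing objects the MDS may have to probe when handling operations on a very large, possibly sparse file, such as deleting or recovering it.

  ceph fs set cephfs max_file_size 10995116277760
  ceph fs get cephfs | grep max_file_size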

[ceph-users] Re: OSD takes almost two hours to boot from Luminous -> Nautilus

2020-08-19 Thread Mark Schouten
in at a more convenient time? -- Mark Schouten | Tuxis B.V. KvK: 74698818 | http://www.tuxis.nl/ T: +31 318 200208 | i...@tuxis.nl ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] OSD takes almost two hours to boot from Luminous -> Nautilus

2020-08-19 Thread Mark Schouten
stats"  ? Thanks! -- Mark Schouten Tuxis, Ede, https://www.tuxis.nl T: +31 318 200208    ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Upgrade procedure on Ubuntu Bionic with stock packages

2019-08-28 Thread Mark Schouten
Cool, thanks! -- Mark Schouten Tuxis, Ede, https://www.tuxis.nl T: +31 318 200208 - Original Message - From: James Page (james.p...@canonical.com) Date: 28-08-2019 11:02 To: Mark Schouten (m...@tuxis.nl) Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Upgrade procedure

[ceph-users] Upgrade procedure on Ubuntu Bionic with stock packages

2019-08-28 Thread Mark Schouten
proceed? Thanks, -- Mark Schouten Tuxis, Ede, https://www.tuxis.nl T: +31 318 200208    ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
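Regardless of the packaging in use, ceph versions is a quick way to confirm after each restart which daemons have actually picked up the new binaries:

  ceph versions        # per daemon type, grouped by running release
  ceph osd versions    # just the OSDs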