mon_warn_on_insecure_global_id_reclaim
true
root@proxmox01:~# ceph config get mon mon_warn_on_insecure_global_id_reclaim_allowed
true
—
Mark Schouten
CTO, Tuxis B.V.
+31 318 200208 / m...@tuxis.nl
-- Original Message --
From "Eugen Block"
To ceph-users@ceph.io
Date 02/02/2024, 08:30:45
Subject [ceph
confirm that ancient (2017) leveldb database mons should just
accept ‘mon.$hostname’ names for mons, as well as ‘mon.$id’?
—
Mark Schouten
CTO, Tuxis B.V.
+31 318 200208 / m...@tuxis.nl
-- Original Message --
From "Eugen Block"
To ceph-users@ceph.io
Date 31/01/2024, 13:02:04
Sub
proxmox03",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "10.10.10.3:3300",
"nonce": 0
},
Hi,
I just destroyed the filestore osd and added it as a bluestore osd.
Worked fine.
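For the archive: a hypothetical sketch of the destroy-and-recreate sequence for converting a FileStore OSD to BlueStore while keeping its OSD id (osd.12 and /dev/sdc are placeholder values, not what was actually run). The commands are printed rather than executed here, since they need a live cluster:

```shell
# Placeholders: pick the real OSD id and data device on your node.
osd_id=12
dev=/dev/sdc
# Print the rough sequence; on a real node you would run these one by one.
cat <<EOF
ceph osd out osd.${osd_id}
systemctl stop ceph-osd@${osd_id}
ceph osd destroy ${osd_id} --yes-i-really-mean-it
ceph-volume lvm create --bluestore --osd-id ${osd_id} --data ${dev}
EOF
```

Reusing the OSD id via `ceph osd destroy` (rather than `ceph osd purge`) keeps the CRUSH position and avoids an extra rebalance.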
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl / +31 318 200208
-- Original Message --
From "Jan Pekař - Imatic"
To m...@tuxis.nl; ceph-users@ceph.io
Date 2/25/2023 4:14:54 PM
Subject
Hi,
Thanks. Someone told me that we could just destroy the FileStore OSD’s
and recreate them as BlueStore, even though the cluster is partially
upgraded. So I guess I’ll just do that. (Unless someone here tells me
that that’s a terrible idea :))
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
to work around this is welcome
:)
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl / +31 318 200208
-- Original Message --
From "Jan Pekař - Imatic"
To ceph-users@ceph.io
Date 1/12/2023 5:53:02 PM
Subject [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper
upgrade
Hi Simon,
You can just dist-upgrade the underlying OS. Assuming that you installed
the packages from https://download.ceph.com/debian-octopus/, just change
bionic to focal in all apt-sources, and dist-upgrade away.
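The apt-sources rewrite can be sketched as below, demonstrated on a scratch file so it runs anywhere; on a real node the file would be something like /etc/apt/sources.list.d/ceph.list (the exact path is an assumption), followed by `apt update && apt dist-upgrade`:

```shell
# Demonstrate the bionic -> focal rewrite on a temporary copy.
src=$(mktemp)
echo "deb https://download.ceph.com/debian-octopus/ bionic main" > "$src"
sed -i 's/bionic/focal/g' "$src"
cat "$src"   # prints: deb https://download.ceph.com/debian-octopus/ focal main
rm -f "$src"
```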
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
-- Original Message
ecause one monitor has
incorrect time.
Thanks!
--
Mark Schouten
CTO, Tuxis B.V. | https://www.tuxis.nl/
<mailto:m...@tuxis.nl> | +31 318 200208
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi,
On 15-05-2021 at 22:17, Mark Schouten wrote:
Ok, so that helped for one of the MDS'es. Trying to deactivate another
mds, it started to release inos and dns'es, until it was almost done.
When it had a 50-ish left, a client started to complain and be
blacklisted until I restarted
On Thu, May 27, 2021 at 12:38:07PM +0200, Mark Schouten wrote:
> On Thu, May 27, 2021 at 06:25:44AM +, Martin Rasmus Lundquist Hansen
> wrote:
> > After scaling the number of MDS daemons down, we now have a daemon stuck in
> > the
> > "up:stopping" state.
it?
I have no clients, and it still does not want to stop rank1. Funny
thing is, while trying to fix this by restarting mdses, I sometimes see
a list of clients popping up in the dashboard, even though no clients
are connected..
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http
On Thu, May 27, 2021 at 10:37:33AM +0200, Mark Schouten wrote:
> On Thu, May 27, 2021 at 07:02:16AM +, 胡 玮文 wrote:
> > You may hit https://tracker.ceph.com/issues/50112, which we failed to find
> > the root cause yet. I resolved this by restart rank 0. (I have only 2
> >
| active | osdnode05 | Reqs:0 /s | 2760k | 2760k |
| 1 | stopping | osdnode06 | | 10 | 11 |
+--+--+---+---+---+---+
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...
and takes a lot of time, while not necessarily fixing
the num_strays.
How do I force the mds'es to process those strays so that clients do not
get 'incorrect' errors?
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
On Fri, May 14, 2021 at 09:12:07PM +0200, Mark Schouten wrote:
> It seems (documentation was no longer available, so it took some
> searching) that I needed to run ceph mds deactivate $fs:$rank for every
> MDS I wanted to deactivate.
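A sketch of that step, for the record: `ceph mds deactivate <fs>:<rank>` existed on older releases; on Nautilus and later, ranks are stopped by lowering `max_mds` instead. The fs name "cephfs" is a placeholder, and the commands are printed rather than executed since they need a live cluster:

```shell
# Print the modern equivalent of deactivating extra MDS ranks.
cat <<'EOF'
ceph fs set cephfs max_mds 1
ceph fs status cephfs     # ranks > 0 should move to up:stopping
EOF
```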
Ok, so that helped for one of the MDS'es. Trying to d
On Tue, May 11, 2021 at 02:55:05PM +0200, Mark Schouten wrote:
> On Tue, May 11, 2021 at 09:53:10AM +0200, Mark Schouten wrote:
> > This helped me too. However, should I see num_strays decrease again?
> > I'm running a `find -ls` over my CephFS tree..
>
> This helps, the
On Mon, May 10, 2021 at 10:46:45PM +0200, Mark Schouten wrote:
> I still have three active ranks. Do I simply restart two of the MDS'es
> and force max_mds to one daemon, or is there a nicer way to move two
> mds'es from active to standby?
It seems (documentation was no longer availab
On Tue, May 11, 2021 at 09:53:10AM +0200, Mark Schouten wrote:
> This helped me too. However, should I see num_strays decrease again?
> I'm running a `find -ls` over my CephFS tree..
This helps, the amount of stray files is slowly decreasing. But given
the number of files in the cluster,
rectories are directories anyone has ever actively put pinning on...
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
AFAIK. How can I check that?
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
e num_strays decrease again?
I'm running a `find -ls` over my CephFS tree..
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
On Thu, Apr 29, 2021 at 10:58:15AM +0200, Mark Schouten wrote:
> We've done our fair share of Ceph cluster upgrades since Hammer, and
> have not seen much problems with them. I'm now at the point that I have
> to upgrade a rather large cluster running Luminous and I would like to
> hea
upgrade all Ceph packages on the
monitor-nodes and restart mons and then mgrs.
After that, I would upgrade all Ceph packages on the OSD nodes and
restart all the OSD's. Then, after that, the MDSes and RGWs. Restarting
the OSD's will probably take a while.
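That order can be written out as a sketch (systemd target names as used by the Debian/Ubuntu packages; run per node, per role, waiting for HEALTH_OK in between). Printed rather than executed here:

```shell
# Print the restart order described above: mons, mgrs, OSDs, then MDS/RGW.
cat <<'EOF'
systemctl restart ceph-mon.target      # 1. monitor nodes, one by one
systemctl restart ceph-mgr.target      # 2. managers
systemctl restart ceph-osd.target      # 3. OSD nodes, one at a time
systemctl restart ceph-mds.target      # 4. metadata servers
systemctl restart ceph-radosgw.target  # 5. rados gateways
EOF
```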
If anyone has a hint on what I should
On Sat, Apr 24, 2021 at 06:06:04PM +0200, Mark Schouten wrote:
> Using the following command:
> s3cmd setlifecycle lifecycle.xml s3://syslog_tuxis_net
>
> That gave no error, and I see in s3browser that it's active.
>
> The RGW does not seem to kick in yet, bu
GW does not seem to kick in yet, but I'll keep an eye on that.
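For reference, a minimal lifecycle.xml of the sort `s3cmd setlifecycle` expects might look like the following (the rule ID and 30-day expiration are assumptions, not the values actually used):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-old-objects</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```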
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
marker either.
So I'm stuck with that bucket which I would like to remove without
abusing radosgw-admin.
This cluster is running 12.2.13 with civetweb rgw's behind a haproxy
setup. All is working fine, except for this versioning bucket. Can
anyone point me in the right direction to remove this bucket?
Hi,
There is a default limit of 1TiB for the max_file_size in CephFS. I altered
that to 2TiB, but I now got a request for storing a file up to 7TiB.
I'd expect the limit to be there for a reason, but what is the risk of setting
that value to say 10TiB?
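Since `max_file_size` is specified in bytes, the change can be sketched as follows (the fs name "cephfs" is a placeholder; the `ceph fs set` command needs a live cluster, so it is only printed here):

```shell
# Compute 10 TiB in bytes and print the command that would apply it.
bytes=$(( 10 * 1024 * 1024 * 1024 * 1024 ))
echo "$bytes"                                   # 10995116277760
echo "ceph fs set cephfs max_file_size $bytes"
```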
--
Mark Schouten
Tuxis, Ede, https
in
at a more convenient time?
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl
stats" ?
Thanks!
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
Cool, thanks!
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
-- Original Message --
From: James Page (james.p...@canonical.com)
Date: 28-08-2019 11:02
To: Mark Schouten (m...@tuxis.nl)
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Upgrade procedure
proceed?
Thanks,
--
Mark Schouten
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208