There is also Flatcar Linux, which is another drop-in replacement for CoreOS:
https://www.flatcar-linux.org/
On Thu, Jan 14, 2021 at 8:31 AM Jonathan Sélea wrote:
> I have used CEPH together with Debian for many years and it has worked
> great for my use case too.
> In the end it all comes down
Hello,
I see, do you know how much I should increase it, and how? I haven't found
much documentation about it :(
> On 2021. Jan 14., at 22:36, dhils...@performair.com wrote:
Thank you guys, so we should probably go Ubuntu-based then, as it has good
driver support and the LTS sounds like a working solution.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e:
I have used CEPH together with Debian for many years and it has worked
great for my use case too.
In the end it all comes down to what you find most convenient and what works
best. CEPH is still CEPH regardless of which distro is running
below :)
--
Jonathan Sélea
Website:
What is not clear to me is what will be available for container images. I
run Ceph in containers. I also wonder whether IBM will decide to close off
Ceph and other projects, similar to what happened with CentOS.
On Thu, Jan 14, 2021, 7:01 AM Szabo, Istvan (Agoda)
wrote:
> Thank you guys, so we should probably go Ubuntu
Istvan;
What version of Ceph are you running? Another email chain indicates you're
running on CentOS 8, which suggests Octopus (15).
We're running multisite replicated radosgw on Nautilus. I don't see the long
running times you're describing, though we only have ~35k objects.
I
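In case it helps with comparing notes, this is how I'd check which release a
cluster is actually running; just the standard commands:

  # Report which Ceph version each daemon type is running cluster-wide
  ceph versions
  # Version of the local ceph binary only
  ceph -v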
Hi,
thanks to derJohn I was set on the right path. Our setup is a bit more
difficult as we're using more than one OSD per drive. The setup is done
using Rook, therefore I used the docs at
https://github.com/rook/rook/blob/master/Documentation/ceph-osd-mgmt.md#remove-an-osd
But we ended up to
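For anyone doing this without Rook, the underlying manual procedure is roughly
the following. A sketch only, assuming osd.5 is the OSD to remove and a
Luminous-or-newer cluster (for `ceph osd purge`):

  # Mark the OSD out and wait for the data to rebalance away from it
  ceph osd out 5
  # Check that removing it will not put data at risk (Luminous+)
  ceph osd safe-to-destroy osd.5
  # After stopping the daemon, remove it from CRUSH, auth, and the OSD map
  ceph osd purge 5 --yes-i-really-mean-it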
Hello,
we at croit use Ceph on Debian and deploy all our clusters with it.
It works like a charm, and I personally have had quite good experience with
it for ~20 years. It is a fantastic, solid OS for servers.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail:
Hi,
Just curious how you guys are moving forward with this CentOS 8 change.
We just finished installing our full multisite cluster, and it looks like we
need to change the operating system.
So I'm curious: if you are using CentOS 8 with Ceph, where are you going to
move?
Thank you
UPDATE: Finally got back the master sync command output:
radosgw-admin sync status
          realm 5fd28798-9195-44ac-b48d-ef3e95caee48 (realm)
      zonegroup 31a5ea05-c87a-436d-9ca0-ccfcbad481e3 (data)
           zone 9213182a-14ba-48ad-bde9-289a1c0c0de8 (hkg)
  metadata sync no sync (zone is
Hello,
I have a 3-DC Octopus multisite setup with a bucket sync policy applied.
I have 2 buckets where I've set the shard count to 24,000 on one and 9,000 on
the other, because they want to use 1 bucket but with a huge number of objects
(2,400,000,000 and 900,000,000), and in the multisite case we need to pre-shard
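For anyone following along, the pre-sharding itself is done with
radosgw-admin. A minimal sketch, assuming a hypothetical bucket name
"bigbucket", and remembering that dynamic resharding is not supported with
multisite on Octopus, so the shard count has to be set manually:

  # Check the current index shard count (bucket name is hypothetical)
  radosgw-admin bucket stats --bucket=bigbucket
  # Reshard the bucket index to the target shard count (offline operation)
  radosgw-admin bucket reshard --bucket=bigbucket --num-shards=24000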
One of our providers (CloudLinux) released a 1:1 binary-compatible
Red Hat fork due to the changes with CentOS 8.
Could be worth looking at:
https://almalinux.org/
In our case we're using ceph on debian 10.
--
David Majchrzak
CTO
Oderland Webbhotell AB
Östra Hamngatan 50B, 411 09 Göteborg,
There is a command `ceph pg getmap`.
It produces a binary file. Is there any utility to decode it?
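In case it is useful, one possible approach; this is an assumption on my part
rather than something I have verified, since it depends on PGMap being one of
the types ceph-dencoder knows how to decode:

  # Write the binary PG map to a file
  ceph pg getmap -o /tmp/pgmap.bin
  # Try decoding it with ceph-dencoder (assumes PGMap is a registered type)
  ceph-dencoder type PGMap import /tmp/pgmap.bin decode dump_json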
Hello fellow CEPH-users,
we are currently investigating latency spikes in our CEPH (14.2.11) prod
cluster, usually occurring under heavy load.
TL;DR: Do you have an idea where to investigate kv commit latency
spikes on a CEPH cluster with an LSI 9300-8i HBA and all-SSD devices (Intel,
Micron)
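As a starting point, the BlueStore kv latency counters can be read per OSD
over the admin socket; a minimal sketch, assuming osd.0 and shell access on
the host running it:

  # Dump perf counters and pull out the kv commit latency fields
  ceph daemon osd.0 perf dump | grep -A 3 '"kv_commit_lat"'
  # Optionally reset all counters to measure a fresh window under load
  ceph daemon osd.0 perf reset all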
Hello,
thank you very much. Your debugging helped me a lot in finding a solution for
my own problem with Keystone and radosgw.
Regards, Stefan
- Original Message -
From: "Mika Saari"
To: "ceph-users"
Sent: Friday, 8 January, 2021 08:02:31
Subject: [ceph-users] Re: Ceph RadosGW & OpenStack