Hello Dan,
I am using Nautilus with a slightly outdated version, 14.2.16, and I
don't remember playing with upmaps in the past.
Following your suggestion, I removed a bunch of upmaps (the "longer"
lines) and after a while I verified that all PGs are properly mapped.
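(For anyone hitting this later: a minimal sketch of the kind of commands involved, assuming the standard ceph CLI; the PG ID below is only a placeholder.)

ceph osd dump | grep pg_upmap_items     # list the existing upmap entries
ceph osd rm-pg-upmap-items 2.3f         # remove the upmap entry for one PG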
Thanks!
Hi,
On 5/30/21 8:45 PM, mhnx wrote:
Hello Samuel. Thanks for the answer.
Yes, the Intel S4510 series is a good choice, but it's expensive.
I have 21 servers and data distribution is quite good.
At power loss I don't think I'll lose data. All the VMs use the same
image and the rest is cookie.
In thi
One could use an enterprise NVMe SSD (with PLP) as DB/WAL for those consumer SSDs.
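(For illustration only, assuming a plain ceph-volume based deployment; the device paths below are placeholders:)

ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1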
huxia...@horebdata.cn
From: mj
Date: 2021-06-04 11:23
To: ceph-users
Subject: [ceph-users] Re: SSD recommendations for RBD and VM's
Hi,
On 5/30/21 8:45 PM, mhnx wrote:
> Hello Samuel. Thanks for the answer.
>
> Ye
I wonder about what happens when an OSD comes back from a power loss:
all the data gets scrubbed and there are 2 other copies.
PLP is mostly important for block storage; Ceph should easily recover
from that situation.
That's why I don't understand why I should pay more for PLP and other
protections.
In my use case %90 o
Hi,
On 6/4/21 12:57 PM, mhnx wrote:
I wonder about what happens when an OSD comes back from a power loss:
all the data gets scrubbed and there are 2 other copies.
PLP is mostly important for block storage; Ceph should easily recover
from that situation.
That's why I don't understand why I should pay more for PLP and o
Hi
It seems that with command like this
aws --profile=my-user-tenant1 --endpoint=$HOST_S3_API --region="" iam
create-role --role-name="tenant2\$TemporaryRole"
--assume-role-policy-document file://json/trust-policy-assume-role.json
I can create a role in another tenant.
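For reference, the trust policy document itself is not shown above; a typical assume-role trust policy passed to that flag looks roughly like this (the principal ARN is only an example):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": [ "arn:aws:iam::tenant1:user/my-user" ] },
      "Action": [ "sts:AssumeRole" ]
    }
  ]
}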
The executing user has the following roles:
On Fri, Jun 4, 2021 at 5:06 PM Daniel Iwan wrote:
> Hi
>
> It seems that with command like this
>
> aws --profile=my-user-tenant1 --endpoint=$HOST_S3_API --region="" iam
> create-role --role-name="tenant2\$TemporaryRole"
> --assume-role-policy-document file://json/trust-policy-assume-role.json
>
"Plus how I understand it also: using SSDs with PLP also reduces latency,
as the SSDs don't need to flush after each write."
I didn't know that but it makes sense. I should dig into this.
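(If you want to measure the effect yourself, the usual quick check is a single-threaded sync 4k write test with fio; the device path is a placeholder and the test overwrites data on that device:)

fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based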
Thanks.
mj wrote on Fri, 4 Jun 2021 at 14:24:
>
> Hi,
>
> On 6/4/21 12:57 PM, mhnx wrote:
> > I wo
Hello. I have an erasure pool and I didn't turn on compression at the beginning.
Now I'm writing a new type of very small data and the overhead is becoming an issue.
I'm thinking of turning on compression on the pool, but in most
filesystems it will affect only the new data. What is the behavior in
Ceph?
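(For reference, enabling it per pool is just a couple of settings; the pool name and algorithm below are only examples:)

ceph osd pool set mypool compression_algorithm snappy
ceph osd pool set mypool compression_mode aggressive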
Hello,
I need to upgrade the OS that our Ceph cluster is running on to support new
versions of Ceph.
Has anyone devised a model for how you handle this?
Do you just:
Install some new nodes with the new OS
Install the old version of Ceph on the new nodes
Add those nodes/osds to the cluster
Remo
Hey Drew,
we have changed the OS multiple times in the lifetime of our ceph
cluster. In general, you can proceed the same way as a regular update,
starting with the mons/mgrs and then migrating the OSDs.
Cheers,
Nico
Drew Weaver writes:
> Hello,
>
> I need to upgrade the OS that our Ceph cl
Hi Drew,
I performed the upgrade from Nautilus (bare-metal deployment) -> Octopus
(podman containerization) and RHEL-7 -> RHEL-8.
Everything was done in-place. My sequence was:
ceph osd noout/norebalance
shutdown/disable running services
perform full OS upgrade
install necessary software like podm
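(A sketch of the flag handling around those steps; the exact service handling on each node depends on the deployment:)

ceph osd set noout
ceph osd set norebalance
systemctl stop ceph.target        # stop the Ceph services on the node being upgraded
# ... full OS upgrade, reinstall/containerize Ceph ...
systemctl start ceph.target
ceph osd unset norebalance
ceph osd unset noout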
Hello Drew,
our whole deployment and management solution is built on just replacing the
OS whenever there is an update. We at croit.io even provide Debian and SUSE
based OS images and you can switch between them per host at any time. No problem.
Just go and reinstall a node, install Ceph and the service
Do you use RBD images in containers that reside on OSD nodes? Does this
cause any problems? I used to have kernel-mounted CephFS on an OSD node; after a
specific Luminous release this was giving me problems.
> -----Original Message-----
> From: Eneko Lacunza
> Sent: Friday, 4 June 2021 15:
Hi,
We operate a few Ceph hyperconverged clusters with Proxmox, which
provides a custom Ceph package repository. They do great work, and
deployment is a breeze.
So, even though currently we rely on Proxmox packages/distribution and
not upstream, we have a number of other projects deployed
> On 4 Jun 2021, at 21:51, Eneko Lacunza wrote:
>
> Hi,
>
> We operate a few Ceph hyperconverged clusters with Proxmox, which provides a
> custom Ceph package repository. They do great work, and deployment is a
> breeze.
>
> So, even though currently we rely on Proxmox packages/distribution and no
Hi,
I managed to build a Ceph cluster with the help of the cephadm tool. It works like
a charm.
I have a problem that I'm still not able to fix:
I know that the zabbix-sender executable is not integrated into the cephadm image
of ceph-mgr pulled and started by podman, because of this choice:
https://g
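(A commonly suggested workaround, sketched under the assumption that a Zabbix package repository is reachable from the build environment, is to layer zabbix-sender on top of the upstream image and point cephadm at the resulting custom image when deploying the mgr; the base image tag below is only an example:)

FROM quay.io/ceph/ceph:v16
RUN dnf install -y zabbix-sender && dnf clean all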
Hi,
Is there a way to connect the pool that I created in my Nautilus Ceph setup to
Proxmox? Or do I need a totally different Ceph install?
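(For reference, Proxmox can talk to an external Ceph cluster directly; roughly, you add an RBD storage entry to /etc/pve/storage.cfg pointing at the external monitors and copy the client keyring to /etc/pve/priv/ceph/<storage-id>.keyring. Storage ID, monitor addresses and pool name below are placeholders:)

rbd: external-ceph
        monhost 192.168.1.11 192.168.1.12 192.168.1.13
        pool mypool
        content images
        username admin
        krbd 0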
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com