On 8/18/21 9:22 PM, 신희원 / Student / Dept. of Computer Science and Engineering wrote:
Hi,
I measured the performance of ceph-osd and crimson-osd with the same single-core
affinity.
I checked IOPS and latency with rados bench write, and crimson-osd has roughly 3x
lower performance than ceph-osd (ceph-osd (BlueStore): 228 IOPS,
crimson-osd (AlienStore): 73 IOPS).
On Wed, 18 Aug 2021 at 21:49, Francesco Piraneo G. wrote:
>
> On 17.08.21 16:34, Marc wrote:
>
> > ceph-volume lvm zap --destroy /dev/sdb
> > ceph-volume lvm create --data /dev/sdb --dmcrypt
> >
> > systemctl enable ceph-osd@0
>
>
> Hi Marc,
>
> it worked! Thank you very much!
>
> I have some
Hi,
I measured the performance of ceph-osd and crimson-osd with the same single-core
affinity.
I checked IOPS and latency with rados bench write, and crimson-osd has roughly 3x
lower performance than ceph-osd (ceph-osd (BlueStore): 228 IOPS,
crimson-osd (AlienStore): 73 IOPS).
-> " $ rados bench -p rbd 1
On 17.08.21 16:34, Marc wrote:
ceph-volume lvm zap --destroy /dev/sdb
ceph-volume lvm create --data /dev/sdb --dmcrypt
systemctl enable ceph-osd@0
Hi Marc,
it worked! Thank you very much!
I have some questions:
1. ceph-volume already enables and runs ceph-osd, so I'm not required to
ru
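(Side note: a quick way to confirm that ceph-volume already enabled and started the OSD unit, assuming OSD id 0 as in the example above, is something along these lines.)
$ systemctl is-enabled ceph-osd@0   # should report "enabled"
$ systemctl is-active ceph-osd@0    # should report "active"
$ ceph osd tree                     # the new OSD should be listed as "up"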
Yes, as far as I know it is a stable feature.
Feel free
On 17 August 2021 11:02:11 MESZ, zp_8483 wrote:
>Hi all,
>
>Can we enable the rbd-mirror feature in a production environment? If not, are there
>any known issues?
>
>Thanks,
>
>Zhen
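(For context, enabling mirroring generally comes down to a couple of commands; the pool name rbd and image name img1 below are placeholders, and snapshot-based mirroring is only one of the two available modes.)
$ rbd mirror pool enable rbd image            # per-image mirroring for the pool
$ rbd mirror image enable rbd/img1 snapshot   # mirror a single image in snapshot mode
$ rbd mirror pool status rbd                  # check mirroring health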
On 18/08/2021 21.26, Torkil Svensgaard wrote:
Did I miss something obvious?
Restarting the rbd-mirror daemons was the thing I missed. All good now.
Thanks,
Torkil
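(For anyone hitting the same symptom: the restart itself is a one-liner; the command below assumes a cephadm-managed deployment, otherwise it is the ceph-rbd-mirror@ systemd unit on the mirror host.)
$ ceph orch restart rbd-mirror   # restart all rbd-mirror daemons managed by cephadm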
On 18/08/2021 14.30, Ilya Dryomov wrote:
On Wed, Aug 18, 2021 at 12:40 PM Torkil Svensgaard
wrote:
Hi
I a
Hi Ilya
Ah, thanks. I misunderstood that part. However, I can't get it to work,
data still goes to the wrong pool.
I did this, which seemed to stick.
# ceph config set global rbd_default_data_pool rbd_data
# ceph config dump | grep rbd_default
global    advanced    rbd_default_data_
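(Worth noting: rbd_default_data_pool only affects images created after it is set; existing images keep their data pool. A per-image override can also be given at creation time. The image name img1 below is a placeholder.)
$ rbd create --size 10G --data-pool rbd_data rbd/img1   # explicit data pool for this image
$ rbd info rbd/img1 | grep data_pool                    # verify which data pool is in use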
Hey Boris,
On 18/08/2021 08:49, Boris Behrens wrote:
I've set up the realm, the first zonegroup with its zone and a sync user in the
master setup, and committed.
Then I've pulled the period on the 2nd setup and added a 2nd zonegroup
with a zone and committed.
Now I can create users in the master setup, b
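(For readers following along, the usual sequence on a secondary site looks roughly like the lines below; the URLs, keys and zone/zonegroup names are placeholders for the sync user and names created on the master.)
$ radosgw-admin realm pull --url=http://master-rgw:8080 --access-key=<key> --secret=<secret>
$ radosgw-admin period pull --url=http://master-rgw:8080 --access-key=<key> --secret=<secret>
$ radosgw-admin zonegroup create --rgw-zonegroup=zg2 --endpoints=http://second-rgw:8080
$ radosgw-admin zone create --rgw-zonegroup=zg2 --rgw-zone=zone2 --endpoints=http://second-rgw:8080
$ radosgw-admin period update --commit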
Hi Luís,
On 18.08.2021 at 19:02, Luis Henriques wrote:
> Sebastian Knust writes:
>
>> Hi,
>>
>> I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving
>> (with
>> mv) a large directory (mail server backup, so a few million small files)
>> within
>> the cluster takes multiple
Sebastian Knust writes:
> Hi,
>
> I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving (with
> mv) a large directory (mail server backup, so a few million small files)
> within
> the cluster takes multiple days, even though both source and destination share
> the same (default
Hi,
I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving
(with mv) a large directory (mail server backup, so a few million small
files) within the cluster takes multiple days, even though both source
and destination share the same (default) file layout and - at least on
the
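(For anyone debugging a similarly slow mv: the layouts of source and destination directories can be compared with getfattr; /mnt/cephfs is a placeholder for the actual mount point, and a directory that simply inherits the default layout reports "No such attribute".)
$ getfattr -n ceph.dir.layout /mnt/cephfs/source_dir
$ getfattr -n ceph.dir.layout /mnt/cephfs/destination_dir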
On Tue, Aug 17, 2021 at 9:56 AM Daniel Persson wrote:
>
> Hi again.
>
> I've now solved my issue with help from people in this group. Thank you for
> helping out.
> I thought the process was a bit complicated so I created a short video
> describing the process.
>
> https://youtu.be/Ds4Wvvo79-M
>
>
On Wed, Aug 18, 2021 at 12:40 PM Torkil Svensgaard wrote:
>
> Hi
>
> I am looking at one-way mirroring from cluster A to cluster B.
>
> As per [1] I have configured two pools for RBD on cluster B:
>
> 1) Pool rbd_data using default EC 2+2
> 2) Pool rbd using replica 2
>
> I have a peer relationsh
Hi
I am looking at one-way mirroring from cluster A to cluster B.
As per [1] I have configured two pools for RBD on cluster B:
1) Pool rbd_data using default EC 2+2
2) Pool rbd using replica 2
I have a peer relationship set up so when I enable mirroring on an image
in cluster A it will be re
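(For reference, a pool layout like the one described can be created roughly as follows; the PG counts and EC profile name are placeholders, and allow_ec_overwrites must be set before RBD can put data on the EC pool. Images would then be created with --data-pool rbd_data as mentioned elsewhere in the thread.)
$ ceph osd pool create rbd_data 64 64 erasure ec-2-2-profile
$ ceph osd pool set rbd_data allow_ec_overwrites true
$ ceph osd pool create rbd 64 64 replicated
$ ceph osd pool set rbd size 2
$ rbd pool init rbd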
> Setting up cephadm was pretty straightforward and doing the upgrade was
> also "easy". But I was not fond of it at all, as I felt that I lost
> control.
> I had set up a couple of machines with different hardware profiles to
> run
> various services on each, and when I put hosts into the cluster
Hi,
" but have a global namespace where all buckets and users are uniqe."
You mean manage multiple cluster from 1 "master" cluster but ono sync? So 1
realm, multiple dc BUT no sync?
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co
Hi,
We are also running with replica 2, but we have a copy of the master data; just be
careful with replica 2.
Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---
Hi Everyone.
I thought I'd put in my 5 cents as I believe this is an exciting topic. I'm
also a newbie, only running a cluster for about a year. I did some research
before that and also have created a couple of videos on the topic. One of
them was upgrading a cluster using cephadm.
- AB
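(For completeness, the cephadm upgrade itself boils down to a couple of commands; the version number below is only an example.)
$ ceph orch upgrade start --ceph-version 16.2.5
$ ceph orch upgrade status   # watch progress
$ ceph orch upgrade stop     # abort if needed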
On Wed, 18 Aug 2021 at 08:41, Christian Rohmann wrote:
>
> On 17/08/2021 13:37, Janne Johansson wrote:
> > Don't forget that v4 auth bakes in the client's idea of what the
> > hostname of the endpoint was, so it's not only about changing headers.
> > If you are not using v2 auth, you will not be abl