uld gradually be increasing towards 256.
>
> [1] https://docs.ceph.com/en/latest/releases/nautilus/
>
> On Sat, 9 Mar 2024 at 09:39, Michel Niyoyita wrote:
>
>> Hello team,
>>
>> I have increased my volumes pool which was 128 PGs to 256 PGs , the
>> activity sta
Hello team,
I have increased my volumes pool from 128 PGs to 256 PGs. The activity
started yesterday at 5 PM. It started with 5.733% of objects misplaced;
after 4 to 5 hours it had reached 5.022%, but after that it went back
to the initial 5.733%. Kindly help me solve this issue.
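(For anyone checking progress on a pg_num change, a rough way to see where the
pool stands, assuming the pool is called "volumes", would be something like:

ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num   # pgp_num must also reach 256 before remapping finishes
ceph -s                             # the progress section shows the remapping estimate
ceph osd df tree                    # PGs per OSD in the PGS column

If pgp_num is still at 128, "ceph osd pool set volumes pgp_num 256" is what
actually triggers the data movement.)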
setup
> successfully you should be able to configure more pools to be mirrored
> manually as described in the docs [1].
>
> [1] https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#pool-configuration
>
> Zitat von Michel Niyoyita :
>
> > Thank you Eugen , all errors have been s
> So the error you reported first is now resolved? What does the mirror
> daemon log?
>
> Zitat von Michel Niyoyita :
>
> > I have configured it as follow :
> >
> > ceph_rbd_mirror_configure: true
> > ceph_rbd_mirror_mode: "pool"
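(As a reference, a minimal hand-made check of a pool-mode setup like that,
assuming the mirrored pool is named "images", might look roughly like:

rbd mirror pool enable images pool         # on both clusters
rbd mirror pool info images                # shows the mode and registered peers
rbd mirror pool status images --verbose    # per-image state once the rbd-mirror daemon is up

The peer bootstrap itself is what ceph-ansible does for you, so these commands
are only for verifying the result.)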
confirms what you pasted (daemon
> health: ERROR). You need to fix that first.
>
> Zitat von Michel Niyoyita :
>
> > Thanks Eugen,
> >
> > On my prod cluster (as I named it), this is the output of the following
> > command checking the status: rbd mirror pool status images
e same value as the user
> > ({{ ceph_rbd_mirror_local_user }}) keyring secret from the primary
> > cluster.
>
> [1]
> https://docs.ceph.com/projects/ceph-ansible/en/latest/rbdmirror/index.html
>
> Zitat von Michel Niyoyita :
>
> > Hello team,
> >
> > I
Hello team,
I have two clusters in a testing environment, deployed using ceph-ansible
and running on Ubuntu 20.04 with the Ceph Pacific release. I am testing
mirroring between the two clusters, in pool mode. Our production cluster is
the backend storage for OpenStack. This is how I configured the
Thank you very much, Sir; now it works.
Michel
On Fri, Feb 2, 2024 at 11:55 AM Eugen Block wrote:
> Have you tried to enable it?
>
> # ceph dashboard ac-user-enable admin
>
> Zitat von Michel Niyoyita :
>
> > Hello team,
> >
> > I failed to login to my cep
Hello team,
I failed to log in to my Ceph dashboard, which is running the Pacific
release and was deployed using ceph-ansible. I have set the admin password
using the following command: "ceph dashboard ac-user-set-password admin -i
ceph-dash-pass", where ceph-dash-pass is the file holding the real password. I am
And, as said before, the cluster is still in a warning state with PGs not
deep-scrubbed in time. I hope this can be ignored, and that I can set the
two flags "noout" and "nobackfill" and then reboot.
Thank you again, Sir
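(A rough sequence for a single-host reboot under those assumptions would be:

ceph osd set noout
ceph osd set nobackfill    # optional; noout alone is usually enough for a short reboot
# ... reboot the host and wait for its OSDs to rejoin ...
ceph osd unset nobackfill
ceph osd unset noout

The not-deep-scrubbed warnings should clear on their own once scrubbing catches up.)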
On Thu, 1 Feb 2024, 16:11 Michel Niyoyita, wrote:
> Thank you very much Janne.
>
>
thy to begin with (not
> counting "not scrubbed in time" warnings, they don't matter in this
> case.)
>
>
>
> Den tors 1 feb. 2024 kl 12:21 skrev Michel Niyoyita :
> >
> > Thanks Very much Wesley,
> >
> > We have decided to restart one host among thr
kedIn <http://www.linkedin.com/in/wesleydillingham>
>>
>>
>> On Tue, Jan 30, 2024 at 10:18 AM Michel Niyoyita
>> wrote:
>>
>>> I tried that on one of my pool (pool id 3) but the number of pgs not
>>> deep-scrubbed in time increased also from 55
you are looking to
> undertake:
> https://ceph.io/en/news/blog/2019/new-in-nautilus-pg-merging-and-autotuning/
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
>
> On Tue, Jan 30,
st 256.
>
> with the ultimate target of around 100-200 PGs per OSD which "ceph osd df
> tree" will show you in the PGs column.
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
>
n only give
> general guidelines. Make changes, run benchmarks, re-evaluate. Take the
> time for it. The better you know your cluster and your users, the better
> the end result will be.
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109
AM Janne Johansson
> wrote:
> >
> > Den mån 29 jan. 2024 kl 12:58 skrev Michel Niyoyita :
> > >
> > > Thank you Frank ,
> > >
> > > All disks are HDDs . Would like to know if I can increase the number
> of PGs
> > > live in producti
On Mon, Jan 29, 2024 at 2:09 PM Michel Niyoyita wrote:
> Thank you Janne ,
>
> no need of setting some flags like ceph osd set nodeep-scrub ???
>
> Thank you
>
> On Mon, Jan 29, 2024 at 2:04 PM Janne Johansson
> wrote:
>
>> Den mån 29 jan. 2024 kl 12:58 skrev
Thank you Janne,
Is there no need to set some flags, like "ceph osd set nodeep-scrub"?
Thank you
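(If you do want to pause deep scrubs while the backfill runs, a sketch would be:

ceph osd set nodeep-scrub
# ... let the recovery/backfill finish ...
ceph osd unset nodeep-scrub

The flag only prevents new deep scrubs from being scheduled; it does not cancel
ones already running.)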
On Mon, Jan 29, 2024 at 2:04 PM Janne Johansson wrote:
> Den mån 29 jan. 2024 kl 12:58 skrev Michel Niyoyita :
> >
> > Thank you Frank ,
> >
> > All disks are HDDs .
th disk performance.
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Michel Niyoyita
> Sent: Monday, January 29, 2024 7:42 AM
> To: E Taka
> Cc: ceph-users
>
emove OSD 22, just to
> be sure about this: ceph orch osd rm osd.22
>
> If this does not help, just add it again.
>
> Am Fr., 26. Jan. 2024 um 08:05 Uhr schrieb Michel Niyoyita <
> mico...@gmail.com>:
>
>> It seems that are different OSDs as shown here . how have y
SDs of a PG in square brackets. If there is always
> the same number, then you've found the OSD which causes the slow scrubs.
>
> Am Fr., 26. Jan. 2024 um 07:45 Uhr schrieb Michel Niyoyita <
> mico...@gmail.com>:
>
>> Hello team,
>>
>> I have a cluster in production composed by
Hello team,
I have a cluster in production composed of 3 OSD servers with 20 disks
each, deployed using ceph-ansible on Ubuntu, and the version is Pacific.
These days it is in WARN state, caused by PGs which are not deep-scrubbed in
time. I tried to deep-scrub some PGs manually, but it seems
Hello Zac,
If possible also include the installation guide.
Regards
On Sat, 2 Dec 2023, 13:30 Zac Dover, wrote:
> The Upstream Documentation Team is writing a Beginner's Guide. If you're
> reading this email, then you are invited to contribute to it.
>
> I have a firm idea of what I want the
Hello Team ,
I am trying to build a Ceph cluster with 3 nodes running Ubuntu 20.04,
configured using ceph-ansible. Because it is a testing cluster, the OSD
servers are the same nodes that will run as monitors. During installation I
am facing the following error: TASK [ceph-mon : ceph monitor mkfs with
"frontend_config#0": "beast endpoint=
10.10.110.196:8080",
"frontend_type#0": "beast",
"hostname": "ceph-mon1",
"id": "ceph-mon1.rgw0",
h-osd2
[client.rgw.ceph-osd3]
rgw_dns_name = ceph-osd3
# Please do not change this file directly since it is managed by Ansible
and will be overwritten
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster network = 10.10.110.128/26
fsid = cb0caedc-eb5b-42d1-a34f
8c6ce3ff5cd94f01e711af894)
pacific (stable)": 6
},
"overall": {
"ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894)
pacific (stable)": 60
}
}
root@ceph-mon1:~#
Best Regards
Michel
On Mon, Feb 6, 2023 at 2:57 PM Robert Sander
> in Pacific it's not applied anymore. I'll need to check how it is
> determined now.
>
> Zitat von Michel Niyoyita :
>
> > Hello Eugen,
> >
> > Thanks for your reply ,
> >
> > I am trying the shared command but no output .
> >
> > root@ceph-mon1
Hello team,
I have a Ceph cluster deployed using ceph-ansible, running on Ubuntu 20.04,
which has 6 hosts: 3 hosts for OSDs and 3 hosts used as monitors and
managers. I have deployed RGW on all those hosts and the RGW load balancer
on top of them. For testing purposes, I have switched off one
Hello team,
I have deployed a Ceph Pacific cluster using ceph-ansible, running on
Ubuntu 20.04, with 3 OSD hosts and 3 mons; on each OSD host we have 20
OSDs. I am integrating Swift in the cluster, but I fail to find the policy
and cannot upload objects to the container. I have deployed
Thank you very much Anthony and Eugen. I followed your instructions and
now it works fine; the classes are hdd and ssd, and we now have 60 OSDs, up
from 48.
Thanks again
Michel
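(For anyone landing here later, a quick way to confirm the device classes after
such a fix, assuming the default class names, is something like:

ceph osd crush class ls              # should list hdd and ssd
ceph osd stat                        # e.g. "60 osds: 60 up, 60 in"
ceph osd crush tree --show-shadow    # per-class shadow hierarchy used by CRUSH rules

)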
On Mon, 9 Jan 2023, 17:00 Anthony D'Atri, wrote:
> For anyone finding this thread down the road: I wrote to the poster
>
Hello team
I have an issue with a Ceph deployment using ceph-ansible. We have two
categories of disks, HDD and SSD, but while deploying Ceph only the HDDs
appear; no SSDs appear. The cluster is running on Ubuntu 20.04, and
unfortunately no errors appear. Did I miss something in the configuration?
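(One hedged way to narrow this down before touching the playbooks is to ask
ceph-volume what it can see on each OSD host, e.g.:

ceph-volume inventory            # lists all disks and whether they are available
lsblk -o NAME,ROTA,SIZE,TYPE     # ROTA=0 marks the SSDs

If the SSDs show up as unavailable there, for example because of old partitions
or LVM signatures, ceph-ansible may simply skip them without an error.)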
Hello team,
I have deployed a Ceph cluster in production. The cluster is composed of
two types of disks, HDD and SSD, and was deployed using ceph-ansible.
Unfortunately, after deployment only the HDD disks appear, without the
SSDs. I would like to restart the deployment from scratch, but I miss
Dear team,
Kindly help with this; I am completely blocked.
Best Regards
Michel
On Thu, Jan 5, 2023 at 2:45 PM Michel Niyoyita wrote:
> Dear team,
>
> I have deployed the ceph cluster in production using ceph-ansible on
> ubuntu OS 20.04 it consists of 3 monitors and 3 osds (eac
rry I don't use the dashboard, I can't help you on that part.
>
> Cheers,
>
> --
> Arthur Outhenin-Chalandre
>
> On 5/9/22 10:14, Michel Niyoyita wrote:
> > Dear Arthur,
> >
> > Thanks for the recommendations, it works. I changed the download URL and
> > it
Hello Arthur ,
What can you recommend to resolve the issue?
Best Regards
Michel
On Sun, May 8, 2022 at 2:49 PM Arthur Outhenin-Chalandre <
arthur.outhenin-chalan...@cern.ch> wrote:
> Hello Michel,
>
> On 5/7/22 11:34, Michel Niyoyita wrote:
> > Hello team,
> &g
Hello team,
I am trying to build a Ceph cluster (Pacific release) using the
ceph-ansible tool. I am facing the issue below while downloading the
Grafana packages for the dashboard.
TASK [ceph-grafana : download ceph grafana dashboards]
Hello team,
I am testing my Ceph Pacific cluster using VMs, integrated with OpenStack.
Suddenly one of the hosts turned off and failed. I built another host with
the same number of OSDs as the first one and redeployed the cluster.
Unfortunately the cluster is still up with 2 hosts
Hello team
I have a problem which I would like the team to help me with.
I have a Ceph cluster with health OK running in a testing environment, with
3 nodes of 4 OSDs each, 3 mons and 2 managers, deployed using ceph-ansible.
The purpose of the cluster is to work as the storage backend for OpenStack.
0.19530 osd.6 down 0 1.0
I tried to restart the ones that are down, but it failed.
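(On a ceph-ansible, non-containerized install, a hedged first step for a down
OSD such as osd.6 would be:

systemctl status ceph-osd@6        # see why the unit stopped
journalctl -u ceph-osd@6 -n 100    # last log lines, look for the actual error
systemctl restart ceph-osd@6

If the daemon keeps crashing, the journal output is what the list will need in
order to help further.)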
On Tue, Feb 22, 2022 at 4:42 PM wrote:
> What does
>
> ‘ceph osd tree’ show?
>
> How many OSD’s should you have 7 or 10?
>
> On 22 Feb 2022, at 14:40, Michel Niyoyita wrote:
e other 3? Was
> the data fully drained off these first?
>
> I see you have 11 Pool’s what are these setup as, type and min/max size?
>
> > On 22 Feb 2022, at 14:15, Michel Niyoyita wrote:
> >
> > Dear Ceph Users,
> >
> > Kindly help me to repair my cluster is down from
+
blocklist 10.10.29.156:6808/902968 expires 2022-02-23T04:31:08.540586+
blocklist 10.10.29.156:6802/875379 expires 2022-02-23T03:48:05.982757+
blocklist 10.10.29.156:0/3014798211 expires 2022-02-22T15:08:25.370745+
Regards
On Tue, Feb 22, 2022 at 4:15 PM Michel Niyoyita wrote
Dear Ceph Users,
Kindly help me repair my cluster; it has been down since yesterday and up to
now I have not been able to bring it back up and running. Below are some findings:
id: 6ad86187-2738-42d8-8eec-48b2a43c298f
health: HEALTH_ERR
mons are allowing insecure global_id reclaim
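(For the insecure global_id reclaim warning specifically, the usual path,
assuming all clients are already patched, is roughly:

ceph health detail | grep AUTH_INSECURE_GLOBAL_ID                        # shows which clients still reclaim insecurely
ceph config set mon auth_allow_insecure_global_id_reclaim false          # only once no old clients remain

That warning is separate from whatever actually brought the cluster down, so
the OSD/PG state is the part to fix first.)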
orting + prometheus with
> grafana for metrics collection.
>
> Istvan Szabo
> Senior Infrastructure Engineer
> ---
> Agoda Services Co., Ltd.
> e: istvan.sz...@agoda.com
> ---
>
>
Hello team,
I would like to monitor my Ceph cluster using one of the monitoring tools;
does someone have advice on that?
Michel
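(A minimal sketch of the built-in option, assuming a release with the mgr
modules available, would be:

ceph mgr module enable prometheus    # exposes metrics, on port 9283 by default
ceph mgr services                    # shows the URL Prometheus should scrape

Then point an external Prometheus + Grafana at that endpoint, or enable the
dashboard module for the bundled graphs.)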
isk space on that node.
>
> Regards,
> Eugen
>
>
> Zitat von Michel Niyoyita :
>
> > Dear Team ,
> >
> > I have a warning on my cluster which I deployed using Ansible on ubuntu
> > 20.04 and with pacific ceph version , which says :
> >
> > ro
Dear Team ,
I have a warning on my cluster, which I deployed using Ansible on Ubuntu
20.04 with the Pacific Ceph release. It says:
root@ceph-mon1:~# ceph health detail
HEALTH_WARN mon ceph-mon1 is low on available space
[WRN] MON_DISK_LOW: mon ceph-mon1 is low on available space
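(A hedged way to confirm what the monitor is complaining about, assuming the
default data path, is:

df -h /var/lib/ceph/mon               # the warning fires when free space here drops below ~30%
du -sh /var/lib/ceph/mon/*/store.db   # size of the mon's RocksDB store

Usually the fix is freeing space on that partition, for example old logs,
rather than tuning mon_data_avail_warn.)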
Dear Team,
I would like to deploy a Ceph cluster in production which has 3 mon nodes
and 3 OSD nodes, and 2 of the mon nodes will also run mgrs. I would like to
ask whether I can deploy such a cluster using a VM (virtual machine) as the
Ceph deployment node.
Your input would be highly appreciated.
ix your cluster, you should focus on OSD.
> A cluster can run without big troubles with 2 monitors for few days (if
> not years…).
>
> -
> Etienne Menguy
> etienne.men...@croit.io
>
>
>
>
> On 29 Oct 2021, at 14:08, Michel Niyoyita wrote:
>
> Hello team
>
''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
On Fri, Oct 29, 2021 at 12:37 PM Etienne Menguy
wrote:
> Have you tried to restart one of the OSD that seems to block PG recover?
>
> I don’t think increasing PG can help.
> -
> Etienne Menguy
>
t;
> -
> Etienne Menguy
> etienne.men...@croit.io
>
>
>
>
> On 29 Oct 2021, at 11:30, Michel Niyoyita wrote:
>
> Dear Etienne
>
> Is there a way to force the mon to rejoin the quorum? I tried to restart
> it but nothing changed. I guess it is the cause If
"ondisk_log_start": "2833'3782663",
"created": 37,
"last_epoch_clean": 2629,
"parent": "0.0",
"parent_split_bits": 0,
"last_scrub&quo
active+clean+inconsistent
io:
client: 2.0 KiB/s rd, 88 KiB/s wr, 2 op/s rd, 12 op/s wr
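(For the active+clean+inconsistent PG, a rough way to locate and repair it,
assuming the scrub errors are ordinary checksum mismatches, would be:

ceph health detail | grep inconsistent                      # gives the PG id, e.g. 5.1a
rados list-inconsistent-obj <pgid> --format=json-pretty     # shows which object/OSD is bad
ceph pg repair <pgid>

)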
On Fri, Oct 29, 2021 at 10:09 AM Etienne Menguy
wrote:
> Hi,
>
> Please share “ceph -s” output.
>
> -
> Etienne Menguy
> etienne.men...@croit.io
>
>
>
>
> On 29
Hello team
I am running a Ceph cluster with 3 monitors and 4 OSD nodes running 3 OSDs
each. I deployed my cluster using Ansible, with Ubuntu 20.04 as the OS, and
the Ceph version is Octopus. Yesterday my server which hosts the OSD nodes
restarted because of a power issue, and to come back to its status
Dear Sir Sage Weil ,
Thank you for everything you have done on the entire project (Ceph); we
thank you for this brilliant project you have invented and all the work you
have spent on it.
We will truly miss your contribution.
I wish you success in your future endeavors.
Best regards
Michel
On
Dear team
I want to build two different clusters: one for the primary site and the
second for the DR site. I would like to ask if these two clusters can
communicate (synchronize) with each other, so that data written to the
primary site is synchronized to the DR site and, if we ever have trouble
with the primary site, the DR
and size = 3, those are reasonable values for replicated
> pools.
> If you shared your 'ceph osd tree' and your rulesets (and profiles if
> you intend to use EC) it would help getting a better overview of your
> cluster.
>
>
> Zitat von Michel Niyoyita :
>
> > Hello Team
> &
Hello Team
I am new to Ceph. I am going to deploy Ceph in production for the first
time, integrated with OpenStack. Below are my ceph.conf configuration and
my ansible inventory setup.
Please, if I missed something important, let me know and advise on the
changes I have to make. I
Hello team
I am running a Ceph cluster, Pacific version, deployed using ansible. I
would like to add more OSDs, but it fails once it reaches the mon
installation, with this fatal error:
msg: |-
The conditional check 'groups.get(monitoring_group_name, []) | length >
0' failed. The error was:
Hello Eugen
Thank you very much for your guidance and support; now everything is
working fine, and RGW has replaced Swift as I wanted.
Michel
On Thu, 9 Sep 2021, 13:59 Michel Niyoyita, wrote:
> Hello Eugen,
>
> Are there other config done on the OpenStack side except creating
>
Hello Mike
Where can we find a list of ambassadors and their respective regions? I
ask in order to know whether our region has someone who represents us.
Thank you
On Fri, 17 Sep 2021, 19:25 Mike Perez, wrote:
> Hi everyone,
>
> We first introduced the Ceph Community Ambassador program in Ceph
> Month
Hello team ,
I am replacing Swift with the Ceph RADOS Gateway, and I have successfully
created containers through the OpenStack and Ceph CLI. But when trying to
create one through the Horizon dashboard I get errors: *Error: * Unable to
fetch the policy details. Unable to get the Swift service info
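(Those two Horizon errors typically mean the object-store service and endpoints
are not registered in Keystone. A hedged sketch, reusing the ceph-osd3:8080 URL
from later in this thread, would be:

openstack service create --name swift --description "RGW object store" object-store
openstack endpoint create --region RegionOne object-store public http://ceph-osd3:8080/swift/v1
openstack endpoint create --region RegionOne object-store internal http://ceph-osd3:8080/swift/v1
openstack endpoint create --region RegionOne object-store admin http://ceph-osd3:8080/swift/v1

)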
ell for sure, but yes, I believe you need the openstack-swift
> package (with dependencies). What errors do you get? The more
> information you share the better people can help.
>
>
> Zitat von Michel Niyoyita :
>
> > I tried to install "sudo yum -y install python-sw
continued support.
Micheal
On Thu, Sep 2, 2021 at 9:14 AM Eugen Block wrote:
> I only configured the endpoints for the clients to directly access the
> RGWs, but you'll probably need to install the openstack-swift package.
> Or have you done that already?
>
>
> Zitat von M
Hi Eugen
Below is another error I am getting when I try one interface.
(kolla-open) [stack@kolla-open kolla]$ openstack endpoint create
--publicurl http://ceph-osd3:8080/swift/v1 --region RegionOne
usage: openstack endpoint create [-h] [-f {json,shell,table,value,yaml}]
e/ <
> https://docs.ceph.com/en/latest/radosgw/keystone/> .
> - Use keystone as auth for RGW
> - Create service and register your RGW as swift
>
> Étienne
>
> > On 27 Aug 2021, at 15:47, Michel Niyoyita wrote:
> >
> > Hello ,
> >
> > I have configured RGW
Hello ,
I have configured RGW in my Ceph cluster deployed using ceph-ansible,
created a subuser to access the created containers, and would like to
replace Swift with RGW on the OpenStack side. Can anyone help with the
configuration to be done on the OpenStack side in order to integrate those
services? I
Hi all ,
I am going to deploy a Ceph cluster in production with a replica size of 2.
Is there any drawback on the service side? I am going to change the
default (3) to 2.
Please advise.
Regards.
Michel
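(For what it's worth, the relevant knobs per pool, shown here only as a sketch,
are:

ceph osd pool set <pool> size 2
ceph osd pool set <pool> min_size 1    # risky: a single surviving copy still accepts writes

The usual advice on this list is to stay at size 3 / min_size 2, since size 2
leaves no redundancy during any disk failure or maintenance.)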
Dear Ceph users,
I am going to deploy Ceph in production, and I am going to deploy 3
monitors on 3 different hosts to form a quorum. Is there any problem if I
deploy 2 managers on the same hosts where I deployed the monitors? Is it
mandatory for them to be separate?
Kindly advise.
Best regards
Dear Ceph users,
I would like to ask if Ceph object storage (RGW) is compatible only with
Amazon S3 and OpenStack Swift. Is there any other way it can be used apart
from those 2 services?
Kindly help me to understand, because in the training the offering is for
S3 and Swift only.
Best Regards
ible (version victoria for
>> now)
>> * cephadm for ceph (started with v15.2.8, now I'm on v16.2.5)
>>
>> the installation was very smooth when I did.
>> --
>> *De :* Michel Niyoyita [mico...@gmail.com]
>> *Envoyé :* mardi 13 ju
hadm for ceph (started with v15.2.8, now I'm on v16.2.5)
>
> the installation was very smooth when I did.
> ------
> *De :* Michel Niyoyita [mico...@gmail.com]
> *Envoyé :* mardi 13 juillet 2021 17:02
> *À :* DESBUREAUX Sylvain INNOV/NET
> *Cc :* Nathan
in /etc/kolla/config/nova:
>
> .
> ├── ceph.client.cinder.keyring
> ├── ceph.client.nova.keyring
> └── ceph.conf
>
> and for glance in /etc/kolla/config/glance:
>
> .
> ├── ceph.client.glance.keyring
> └── ceph.conf
>
> I've also an override for volumes as I've got seve
13 Jul 2021 at 14:08, Michel Niyoyita wrote:
>
>> Hello Nathan,
>>
>> We are using an existing cluster and we have already created pools and
>> users.
>>
>> Regards
>>
>> On Tue, Jul 13, 2021 at 3:03 PM Nathan Harper
>> wrote:
>>
>>>
021 at 13:41, Michel Niyoyita wrote:
>
>> Dear Ceph users,
>>
>> I am trying to integrate openstack with ceph but I am facing some issues
>> on
>> glance and cinder . I am using Kolla ansible to deploy openstack . please
>> I
>> need your advises
Dear Ceph users,
I am trying to integrate OpenStack with Ceph, but I am facing some issues
with Glance and Cinder. I am using kolla-ansible to deploy OpenStack.
Please, I need your advice; find below the errors I am getting.
WARNING glance_store.driver
Dear Ceph users,
I would like to ask if it is possible to deploy Ceph Octopus on CentOS 7.
I am waiting for your reply.
Michel
Dear Ceph Users,
Can anyone help with guidance on how I can integrate Ceph with OpenStack,
especially RGW?
Regards
Michel
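(For the RGW part specifically, the usual approach is to let RGW authenticate
against Keystone. A sketch of the ceph.conf options involved, with all values
here being placeholders, looks like:

[client.rgw.ceph-osd3]
rgw_keystone_url = http://<keystone-host>:5000
rgw_keystone_api_version = 3
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = <secret>
rgw_keystone_admin_project = service
rgw_keystone_admin_domain = default
rgw_keystone_accepted_roles = admin,member
rgw_swift_account_in_url = true

Then register RGW as the object-store (swift) service in Keystone, as described
at https://docs.ceph.com/en/latest/radosgw/keystone/ .)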
Dear all ,
Is it possible to configure and run iSCSI when deploying Ceph using
ceph-ansible on Ubuntu 18.04? Please help me find out, and if possible
provide helpful links on that.
Best Regards
Michel
Dear Ceph user,
I want to configure Ceph in our production environment using Ubuntu.
Could anyone who has used it help with the tutorial they used?
Best regards.
ations.
> in meantime you can use this link:
>
> https://computingforgeeks.com/how-to-deploy-ceph-storage-cluster-on-ubuntu-18-04-lts/
>
> On Sun, Apr 4, 2021 at 12:54 PM Michel Niyoyita wrote:
>
>> Dear Ceph users,
>>
>> Kindly help on how I can deploy ceph on ubu
Dear Ceph users,
Kindly help with how I can deploy Ceph on Ubuntu 18.04 LTS. I am learning
Ceph from scratch; your input is highly appreciated.
Regards
mico...@gmail.com