[ceph-users] Re: ceph-ansible installation error

2024-08-30 Thread Milan Kupcevic
On 8/30/24 12:38, Tim Holloway wrote: I believe that the original Ansible installation process is deprecated. This would be bad news, as I repeatedly hear from admins running large storage deployments that they prefer to stay away from containers. Milan -- Milan Kupcevic Research

[ceph-users] Re: ceph-ansible installation error

2024-08-30 Thread Milan Kupcevic
https://docs.ceph.com/projects/ceph-ansible/en/latest/ Milan -- Milan Kupcevic Research Computing Lead Storage Engineer Harvard University HUIT, University Research Computing On 8/30/24 10:53, Michel Niyoyita wrote: Dear team, I am configuring a ceph cluster using ceph-ansible, Ubuntu OS 20.04
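For reference, the usual ceph-ansible invocation looks roughly like the following (a minimal sketch assuming a standard checkout; the branch name and inventory file are illustrative, so pick the stable-* branch that matches your target Ceph release per the docs linked above):

    git clone https://github.com/ceph/ceph-ansible.git
    cd ceph-ansible
    git checkout stable-7.0        # illustrative; use the branch for your Ceph release
    cp site.yml.sample site.yml    # bare-metal playbook (site-container.yml.sample for containers)
    ansible-playbook -i hosts site.yml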

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-29 Thread Milan Kupcevic
chose Ubuntu and not this Rocky Linux or CentOS 8 Stream? Not hard to guess: a well-established distribution with a stable, predictable roadmap vs. a new, uncertain Rocky experiment vs. CentOS 8 with a dubious future. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-18 Thread Milan Kupcevic
On 3/18/21 2:36 AM, Lars Täuber wrote: > I vote for an SSH orchestrator for a bare metal installation too! +1 Cephadm with a no-containers option would do. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE Harvard University FAS Research Comput

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Milan Kupcevic
containerizing everything brings any benefits except the > collocation of services. > +1 To a man with a Docker, everything looks like a container. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE Harvard University

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Milan Kupcevic
On 3/17/21 1:26 PM, Matthew H wrote: > There should not be any performance difference between an un-containerized > version and a containerized one. > That is right. Let us choose which one fits our setup better. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Pro

[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Milan Kupcevic
https://download.ceph.com/debian-nautilus/dists/bionic/ Planning to run Ceph 15.2 Octopus on Ubuntu 20.04 Focal Fossa: https://download.ceph.com/debian-octopus/dists/focal/ Regards, Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NES
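For anyone setting this up, adding the Octopus repository on Focal looks roughly like this (a sketch following the install docs of that era; verify the key and suite name against the current documentation):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-octopus/ focal main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt update && sudo apt install ceph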

[ceph-users] Re: Proper solution of slow_ops

2021-02-11 Thread Milan Kupcevic
a restart. Type: String. Valid Choices: low, high. Default: low > >> On Feb 9, 2021, at 4:42 AM, Milan Kupcevic <mailto:milan_kupce...@harvard.edu> wrote: >> >> On 2/9/21 7:29 AM, Michal Strnad wrote: >>> >>> we are looking for a pr

[ceph-users] Re: Proper solution of slow_ops

2021-02-09 Thread Milan Kupcevic
osd osd_op_queue_cut_off high -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE Harvard University FAS Research Computing
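For the record, the setting referenced above is applied and verified like this (a sketch; per the option documentation quoted in the previous message, the new value only takes effect after the OSDs restart):

    ceph config set osd osd_op_queue_cut_off high
    ceph config get osd osd_op_queue_cut_off      # confirm the stored value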

[ceph-users] Re: osd crash: Caught signal (Aborted) thread_name:tp_osd_tp

2020-11-24 Thread Milan Kupcevic
nt object. > > But I would greatly appreciate if we dissect this case for a bit > > > On 11/24/2020 9:55 AM, Milan Kupcevic wrote: >> Hello, >> >> Three OSD daemons crash at the same time while processing the same >> object located in an rbd ec4+2 pool lea

[ceph-users] osd crash: Caught signal (Aborted) thread_name:tp_osd_tp

2020-11-23 Thread Milan Kupcevic
Please take a look at the attached log file. Ceph status reports: Reduced data availability: 1 pg inactive, 1 pg down. Any hints on how to get this placement group back online would be greatly appreciated. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE H
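For readers hitting a similar "pg down" state, the usual first diagnostic steps look like this (a sketch; the pg id 4.2f5 is purely illustrative):

    ceph health detail              # names the affected pg and why it is down
    ceph pg dump_stuck inactive     # lists stuck placement groups
    ceph pg 4.2f5 query             # shows peering state and any blocking OSDs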

[ceph-users] Re: The confusing output of ceph df command

2020-09-10 Thread Milan Kupcevic
USED [...] 2.1 PiB 1.2 PiB. It is hard to know what the cluster space usage is and how much free space is actually available. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE Harvard University FAS Research Computing
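The short explanation, as a sketch (the multipliers below are illustrative of how the columns relate): ceph df reports STORED as the logical data written by clients and USED as the raw capacity consumed after replication or erasure-coding overhead, so the two columns differ by roughly the pool's redundancy factor.

    ceph df detail
    # For a 3x replicated pool: USED is about 3 * STORED.
    # For an ec4+2 pool:        USED is about 1.5 * STORED (6 chunks / 4 data).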

[ceph-users] Re: Sizing your MON storage with a large cluster

2020-06-14 Thread Milan Kupcevic
mons mon01,mon02,mon03,mon04,mon05 are using a lot of disk space
mon.mon02 is 126 GiB >= mon_data_size_warn (15 GiB)
mon.mon03 is 126 GiB >= mon_data_size_warn (15 GiB)
mon.mon04 is 126 GiB >= mon_data_size_warn (15 GiB)
mon.mon05 is 127 GiB >= mon_data_size_warn (15 GiB)
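Two common remedies, as a sketch (the 150 GiB threshold is an illustrative value; mon_data_size_warn takes bytes, and mon stores typically grow while PGs are not active+clean and shrink again once the cluster is healthy):

    ceph tell mon.mon02 compact                            # compact one monitor's store
    ceph config set mon mon_data_size_warn 161061273600    # raise threshold to 150 GiB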