[ceph-users] Re: download.ceph.com repository changes

2019-09-24 Thread David Turner
Regarding a testing/cutting-edge repo: the non-LTS versions of Ceph have been removed because very few people ever used or tested them. The majority of people using the testing repo would be people needing a bug fix ASAP. Very few people would actually use this regularly and its

[ceph-users] Re: OSD rebalancing issue - should drives be distributed equally over all nodes

2019-09-24 Thread Reed Dier
Hi Thomas, How does your crush map/tree look? If your crush failure domain is by host, then your 96x 8T disks will only be as useful as your 1.6T disks, because the smallest failure domain is your limiting factor. So you can either redistribute your disks to be 16x8T+32x1.6T per host, or you could
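Reed's point about the failure domain can be checked directly on the cluster; a hedged sketch (the OSD id and host name below are placeholders, not from the thread):

```shell
# Show the CRUSH hierarchy with per-host weights; with a host failure
# domain, capacity beyond the smallest host cannot be used evenly.
ceph osd crush tree

# Inspect the rule; a "chooseleaf ... type host" step means host is
# the failure domain.
ceph osd crush rule dump replicated_rule

# Example redistribution step: move one OSD (osd.12, hypothetical)
# under another host bucket (node2, hypothetical).
ceph osd crush move osd.12 host=node2
```

These are commands against a live cluster; `ceph osd crush move` changes data placement and will trigger rebalancing, so it should be done gradually.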

[ceph-users] Re: configuration of Ceph-ISCSI gateway

2019-09-24 Thread Mike Christie
On 09/24/2019 01:08 PM, Gesiel Galvão Bernardes wrote: > Hi everyone, > > I'm configurating ISCSI gateway in Ceph Mimic (13.2.6) using ceph manual: > > https://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/ > > But i stopped in this problem: In manual says: > "Set the client’s CHAP username to
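The step that usually fails here is the CHAP `auth` command, whose syntax changed between ceph-iscsi releases; a hedged sketch of both forms (the target and initiator IQNs are placeholders, not from the thread):

```shell
# Inside an interactive gwcli session, positioned at the client node:
#   cd /iscsi-targets/iqn.2003-01.com.example:target/hosts/iqn.1994-05.com.example:client
#
# Form shown in the Mimic-era docs (older ceph-iscsi):
#   auth chap=myiscsiusername/myiscsipassword
#
# Form expected by newer ceph-iscsi releases:
#   auth username=myiscsiusername password=myiscsipassword

# Verify the resulting configuration non-interactively:
gwcli ls /iscsi-targets
```

If the documented `auth chap=...` form is rejected, trying the split `username=.../password=...` form is a reasonable next step, since the installed ceph-iscsi package may be newer than the Mimic documentation.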

[ceph-users] Re: download.ceph.com repository changes

2019-09-24 Thread Ken Dreyer
On Tue, Sep 17, 2019 at 8:03 AM Sasha Litvak wrote: > > * I am bothered with a quality of the releases of a very complex system that > can bring down a whole house and keep it down for a while. While I wish the > QA would be perfect, I wonder if it would be practical to release new > packages to

[ceph-users] configuration of Ceph-ISCSI gateway

2019-09-24 Thread Gesiel Galvão Bernardes
Hi everyone, I'm configuring an iSCSI gateway in Ceph Mimic (13.2.6) following the Ceph manual: https://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/ But I'm stuck on this problem: the manual says: "Set the client’s CHAP username to myiscsiusername and password to myiscsipassword: >

[ceph-users] Re: RGW orphaned shadow objects

2019-09-24 Thread EDH - Manuel Rios Fernandez
My radosgw-admin orphans find generated 64+ shards and it shows a lot of _shadow_, _multipart and other undefined object types. Waiting for someone to clarify what to do with the output. Regards From: P. O. Sent: Tuesday, 24 September 2019 11:26 To: ceph-users@ceph.io
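For reference, the Luminous/Mimic-era orphan-scan workflow looks roughly like this (pool and job names are the ones from the thread; note the tool only reports candidates, it does not delete anything):

```shell
# Run the scan against the RGW data pool under a named job.
radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans-find-1

# List known scan jobs.
radosgw-admin orphans list-jobs

# Remove the scan's own bookkeeping state once done with the results.
radosgw-admin orphans finish --job-id=orphans-find-1
```

Deciding whether the listed `_shadow_`/`_multipart_` objects are truly orphaned (versus parts of live multipart uploads) still requires manual cross-checking before deleting anything.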

[ceph-users] Re: Health error: 1 MDSs report slow metadata IOs, 1 MDSs report slow requests

2019-09-24 Thread Robert LeBlanc
On Tue, Sep 24, 2019 at 4:56 AM Thomas Schneider <74cmo...@gmail.com> wrote: > > Can you please advise how to fix this (manually)? > My cluster is not getting healthy since 14 days now. > >> Reduced data availability: 33 pgs inactive, 32 pgs peering > >> Degraded data

[ceph-users] Re: Nautilus : ceph dashboard ssl not working

2019-09-24 Thread Volker Theile
https://www.thegeekdiary.com/centos-rhel-67-why-the-files-in-tmp-directory-gets-deleted-periodically/ Am 24.09.19 um 14:53 schrieb Lenz Grimmer: > On 9/24/19 1:37 PM, Miha Verlic wrote: > >> I've got slightly different problem. After a few days of running fine, >> dashboard stops working because
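If the dashboard stops serving TLS because its certificate files lived under /tmp (which, as the linked article explains, is cleaned periodically on CentOS/RHEL), one workaround is to load the certificate into the cluster so no /tmp file is needed at runtime; a sketch for Nautilus (file paths hypothetical):

```shell
# Store certificate and key in the mgr/dashboard config store
# instead of referencing files that tmpwatch/systemd-tmpfiles may delete.
ceph dashboard set-ssl-certificate -i /etc/ceph/dashboard.crt
ceph dashboard set-ssl-certificate-key -i /etc/ceph/dashboard.key

# Restart the dashboard module so the new certificate is picked up.
ceph mgr module disable dashboard
ceph mgr module enable dashboard
```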

[ceph-users] Re: How to reduce or control memory usage during recovery?

2019-09-24 Thread Robert LeBlanc
On Tue, Sep 24, 2019 at 12:27 AM Amudhan P wrote: > > memory usage was high even when backfills is set to "1". Memory usage will not decrease by adding more backfills. EC is very CPU and RAM intensive during recovery as it has to rebuild the shards. I don't know if reducing stripe size or object
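The knobs usually involved here are the per-OSD memory target and the recovery/backfill throttles; a hedged sketch for Mimic/Nautilus (the values are illustrative, not recommendations from the thread):

```shell
# Cap the bluestore cache autotuner per OSD (bytes); note an OSD can
# still exceed this transiently during EC recovery.
ceph config set osd osd_memory_target 2147483648

# Throttle concurrent recovery work per OSD.
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
```

As the reply notes, lowering backfills limits concurrency but not the per-shard rebuild cost, so EC recovery can stay memory-hungry regardless.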


[ceph-users] Re: Health error: 1 MDSs report slow metadata IOs, 1 MDSs report slow requests

2019-09-24 Thread Thomas Schneider
Can you please advise how to fix this (manually)? My cluster has not been healthy for 14 days now. On 24.09.2019 at 13:35, Burkhard Linke wrote: > Hi, > > > you need to fix the non active PGs first. They are also probably the > reason for the blocked requests. > > > Regards, > > Burkhard >

[ceph-users] Re: Health error: 1 MDSs report slow metadata IOs, 1 MDSs report slow requests

2019-09-24 Thread Burkhard Linke
Hi, you need to fix the non active PGs first. They are also probably the reason for the blocked requests. Regards, Burkhard On 9/24/19 1:30 PM, Thomas wrote: Hi, ceph health reports 1 MDSs report slow metadata IOs 1 MDSs report slow requests This is the complete output of ceph -s:
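To find the inactive/peering PGs Burkhard refers to, something like the following (the PG id in the last command is hypothetical):

```shell
# Summarize the cluster's problems, including which PGs are inactive.
ceph health detail

# List PGs stuck inactive or peering and the OSDs they map to.
ceph pg dump_stuck inactive
ceph pg dump_stuck peering

# Query one problem PG to see why peering is blocked
# (e.g. waiting on a down OSD).
ceph pg 2.1f query
```

The `query` output usually names the OSDs the PG is waiting for, which points at what to fix first.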

[ceph-users] Re: rados bench performance in nautilus

2019-09-24 Thread Maged Mokhtar
On 24/09/2019 10:25, Marc Roos wrote: > The intent of this change is to increase iops on bluestore, it was implemented in 14.2.4 but it is a > general bluestore issue not specific to Nautilus. I am confused. Is it not like this that an increase in iops on bluestore = increase in overall

[ceph-users] Health error: 1 MDSs report slow metadata IOs, 1 MDSs report slow requests

2019-09-24 Thread Thomas
Hi, ceph health reports 1 MDSs report slow metadata IOs 1 MDSs report slow requests This is the complete output of ceph -s: root@ld3955:~# ceph -s cluster: id: 6b1b5117-6e08-4843-93d6-2da3cf8a6bae health: HEALTH_ERR 1 MDSs report slow metadata IOs 1 MDSs

[ceph-users] RGW orphaned shadow objects

2019-09-24 Thread P. O.
Hi All, I have a question about "orphaned" objects in the default.rgw.buckets.data pool. A few days ago I ran "radosgw-admin orphans find ..." [dc-1 root@mon-1 tmp]$ radosgw-admin orphans list-jobs [ "orphans-find-1" ] Today I checked the result. I listed the orphaned objects with: $# for i in

[ceph-users] Re: unsubscribe

2019-09-24 Thread Wesley Peng
As the signature shows, please send an email to ceph-users-le...@ceph.io to unsubscribe. hou guanghua wrote: To unsubscribe send an email to ceph-users-le...@ceph.io ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email

[ceph-users] Re: rados bench performance in nautilus

2019-09-24 Thread Marc Roos
> The intent of this change is to increase iops on bluestore, it was implemented in 14.2.4 but it is a > general bluestore issue not specific to Nautilus. I am confused. Is it not the case that an increase in iops on bluestore means an increase in overall iops? It is specific to Nautilus,

[ceph-users] unsubscribe

2019-09-24 Thread hou guanghua
___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to reduce or control memory usage during recovery?

2019-09-24 Thread Amudhan P
Memory usage was high even when backfills was set to "1". On Mon, Sep 23, 2019 at 8:54 PM Robert LeBlanc wrote: > On Fri, Sep 20, 2019 at 5:41 AM Amudhan P wrote: > > I have already set "mon osd memory target to 1Gb" and I have set > max-backfill from 1 to 8. > > Reducing the number of
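When a memory cap seems to be ignored, it is worth confirming that the running OSDs actually picked the setting up, since a value set under the wrong section (e.g. `mon` instead of `osd`) silently has no effect; a sketch (osd.0 is hypothetical, and `ceph daemon` must run on that OSD's host):

```shell
# What the config database thinks the OSD should use:
ceph config get osd.0 osd_memory_target

# What the running daemon actually has:
ceph daemon osd.0 config show | grep osd_memory_target
```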