Re: [ceph-users] Consumer-grade SSD in Ceph

2019-12-20 Thread Matthew H
Hi Sinan, I would not recommend using 860 EVO or Crucial MX500 SSDs in a Ceph cluster, as those are consumer-grade drives, not enterprise ones. Performance and durability will be issues. If feasible, I would simply go NVMe, as it sounds like you will be using this disk to store the journ

Re: [ceph-users] rebalancing ceph cluster

2019-06-25 Thread Matthew H
If you are running Luminous or newer, you can simply enable the balancer module [1]. [1] http://docs.ceph.com/docs/luminous/mgr/balancer/ From: ceph-users on behalf of Robert LeBlanc Sent: Tuesday, June 25, 2019 5:22 PM To: jinguk.k...@ungleich.ch Cc: ceph-us
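A minimal sketch of what enabling the balancer looks like on Luminous or newer (mode choice and the min-compat-client step depend on your clients; commands as in the upstream balancer docs):

  ceph mgr module enable balancer
  ceph osd set-require-min-compat-client luminous   # needed before upmap mode can be used
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status                              # see what the module is doing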

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-05 Thread Matthew H
From: Christian Rice Sent: Tuesday, March 5, 2019 2:07 PM To: Matthew H; ceph-users Subject: Re: radosgw sync falling behind regularly Matthew, first of all, let me say we very much appreciate your help! So I don’t think we turned dynamic resharding on, nor did

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Matthew H
Christian, Can you provide your zonegroup and zone configurations for all 3 RGW sites? (Run the commands for each site, please.) Thanks, From: Christian Rice Sent: Monday, March 4, 2019 5:34 PM To: Matthew H; ceph-users Subject: Re: radosgw sync falling behind
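For reference, a sketch of the commands that dump that configuration on each site (run on a node with radosgw-admin access to that zone):

  radosgw-admin zonegroup get
  radosgw-admin zone get
  radosgw-admin period get
  radosgw-admin sync status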

Re: [ceph-users] Problems creating a balancer plan

2019-03-02 Thread Matthew H
Hi Massimo! What version of Ceph is in use? Thanks, From: ceph-users on behalf of Massimo Sgaravatto Sent: Friday, March 1, 2019 1:24 PM To: Ceph Users Subject: [ceph-users] Problems creating a balancer plan Hi I already used the balancer in my ceph luminous
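A quick way to answer that, as a sketch (ceph versions reports what every running daemon is on, available since Luminous):

  ceph --version     # local binaries
  ceph versions      # versions of all running mons/mgrs/osds/etc.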

Re: [ceph-users] rbd unmap fails with error: rbd: sysfs write failed rbd: unmap failed: (16) Device or resource busy

2019-03-02 Thread Matthew H
You can force an rbd unmap with the command below: rbd unmap -o force $DEV If it still doesn't unmap, then you have pending IO blocking you. As Ilya mentioned, for good measure you should also check to see if LVM is in use on this RBD volume. If it is, then that could be blocking you from unmap
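A sketch of that sequence; the device path is illustrative:

  DEV=/dev/rbd0              # illustrative device
  rbd unmap -o force $DEV
  # if it still returns EBUSY, look for something holding the device, e.g. LVM
  lsblk $DEV                 # anything stacked on top of the rbd device?
  dmsetup ls                 # device-mapper (LVM) targets that may still reference it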

Re: [ceph-users] PG Calculations Issue

2019-03-01 Thread Matthew H
I believe the question was in regards to which formula to use. There are two different formulas, here [1] and here [2]. The difference is the additional steps used to calculate the appropriate PG counts for a pool. In Nautilus, though, this is mostly moot as the mgr service now has a module to au
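In Nautilus that module can be turned on roughly like this (a sketch; the pool name is illustrative):

  ceph mgr module enable pg_autoscaler
  ceph osd pool set mypool pg_autoscale_mode on    # or "warn" for recommendations only
  ceph osd pool autoscale-status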

Re: [ceph-users] rbd space usage

2019-02-28 Thread Matthew H
It looks like he used 'rbd map' to map his volume. If so, then yes, just run fstrim on the device. If it's an instance with a Cinder volume, or a Nova ephemeral disk (on Ceph), then you have to use virtio-scsi to run discard in your instance. From: ceph-users on behalf
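A sketch of both cases; the mount point and image name are illustrative, and the Glance properties follow the usual OpenStack convention for enabling virtio-scsi:

  # krbd-mapped volume: trim the mounted filesystem directly
  fstrim -v /mnt/rbdvol

  # OpenStack guest: attach disks via virtio-scsi so the guest can issue discards
  openstack image set --property hw_scsi_model=virtio-scsi \
                      --property hw_disk_bus=scsi myimage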

Re: [ceph-users] rbd space usage

2019-02-28 Thread Matthew H
I think the command you are looking for is 'rbd du', for example: rbd du rbd/myimagename From: ceph-users on behalf of solarflow99 Sent: Thursday, February 28, 2019 5:31 PM To: Jack Cc: Ceph Users Subject: Re: [ceph-users] rbd space usage yes, but: # rbd showmappe

Re: [ceph-users] Fwd: Re: Blocked ops after change from filestore on HDD to bluestore on SDD

2019-02-28 Thread Matthew H
Could you send your ceph.conf file over please? Are you setting any tunables for OSD or Bluestore currently? From: ceph-users on behalf of Uwe Sauter Sent: Thursday, February 28, 2019 8:33 AM To: Marc Roos; ceph-users; vitalif Subject: Re: [ceph-users] Fwd: Re:

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-28 Thread Matthew H
Is fstrim or discard enabled for these SSDs? If so, how did you enable it? I've seen similar issues with poor controllers on SSDs; they tend to block I/O when trim kicks off. Thanks, From: ceph-users on behalf of Paul Emmerich Sent: Friday, February 22, 201
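For reference, a few ways to check whether discard/trim is in play (a sketch; device name illustrative):

  lsblk --discard /dev/sda        # non-zero DISC-GRAN/DISC-MAX means discard is supported
  systemctl status fstrim.timer   # periodic fstrim on many distros
  grep discard /proc/mounts       # filesystems mounted with the online discard option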

Re: [ceph-users] Blocked ops after change from filestore on HDD to bluestore on SDD

2019-02-28 Thread Matthew H
Have you made any changes to your ceph.conf? If so, would you mind copying them into this thread? From: ceph-users on behalf of Vitaliy Filippov Sent: Wednesday, February 27, 2019 4:21 PM To: Ceph Users Subject: Re: [ceph-users] Blocked ops after change from fi

Re: [ceph-users] [Ceph-community] How does ceph use the STS service?

2019-02-28 Thread Matthew H
This feature is in the Nautilus release. The first release (14.1.0) of Nautilus is available from download.ceph.com as of last Friday. From: ceph-users on behalf of admin Sent: Thursday, February 28, 2019 4:22 AM To: Pritha Srivastava; Sage Weil; ceph-us...@ce
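A sketch of the rgw-side settings used to try STS on Nautilus, following the upstream STS docs; the section name and key value are illustrative:

  [client.rgw.myhost]
  rgw sts key = abcdefghijklmnop      # 16-character key used to encrypt the session token
  rgw s3 auth use sts = true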

Re: [ceph-users] Multi-Site Cluster RGW Sync issues

2019-02-27 Thread Matthew H
Hey Ben, Could you include the following? radosgw-admin mdlog list Thanks, From: ceph-users on behalf of Benjamin.Zieglmeier Sent: Tuesday, February 26, 2019 9:33 AM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Multi-Site Cluster RGW Sync issues
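Alongside the mdlog output, the overall sync state is usually worth grabbing too (a sketch):

  radosgw-admin mdlog list
  radosgw-admin sync status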

Re: [ceph-users] radosgw sync falling behind regularly

2019-02-27 Thread Matthew H
Hey Christian, I'm making a wild guess, but assuming this is 12.2.8. If so, is it possible for you to upgrade to 12.2.11? There have been rgw multisite bug fixes for metadata syncing and data syncing (both separate issues) that you could be hitting. Thanks, F

Re: [ceph-users] New OSD with weight 0, rebalance still happen...

2018-11-23 Thread Matthew H
Greetings, You need to set the following configuration option under [osd] in your ceph.conf file for your new OSDs. [osd] osd_crush_initial_weight = 0 This will ensure your new OSDs come up with a crush weight of 0, thus preventing the automatic rebalance that you see occurring. Good luck, _
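Spelled out as a sketch, including how you would bring the OSD in afterwards (OSD id and weight are illustrative; weight is normally the device size in TiB):

  # ceph.conf on the OSD hosts, before creating the new OSDs
  [osd]
  osd_crush_initial_weight = 0

  # later, ramp the new OSD in, e.g. a ~2 TB disk:
  ceph osd crush reweight osd.12 1.8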

Re: [ceph-users] Ceph backfill problem

2018-09-20 Thread Matthew H
Without knowing more about the underlying hardware, you are likely reaching some type of IO resource constraint. Are your journals colocated or non-colocated? How fast are your backend OSD storage devices? You may also want to look at setting the norebalance flag. Good luck! > On Sep 20, 2018,
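Setting and clearing that flag, as a sketch:

  ceph osd set norebalance      # pause rebalancing while you investigate
  ceph osd unset norebalance    # resume once the cluster has settled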

Re: [ceph-users] ceph-ansible

2018-09-20 Thread Matthew H
Set up a Python virtual environment and install the required notario package version. You'll also want to install Ansible into that virtual environment, along with netaddr. On Sep 20, 2018, at 18:04, solarflow99 <solarflo...@gmail.com> wrote: oh, was that all it was... git clone https:
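A sketch of that setup; the exact versions are illustrative and should come from the requirements.txt of the ceph-ansible branch in use:

  python3 -m venv ~/ceph-ansible-venv
  source ~/ceph-ansible-venv/bin/activate
  pip install --upgrade pip
  pip install "ansible>=2.6,<2.7" netaddr notario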