Hi Sinan,
I would not recommend using 860 EVO or Crucial MX500 SSDs in a Ceph cluster,
as those are consumer-grade drives rather than enterprise ones.
Performance and durability will be issues. If feasible, I would simply go NVMe,
as it sounds like you will be using this disk to store the journal.
If you are running Luminous or newer, you can simply enable the balancer module
[1].
[1]
http://docs.ceph.com/docs/luminous/mgr/balancer/
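If it helps, here is a minimal sketch of turning it on in upmap mode (this assumes all of your clients are Luminous or newer; adjust the mode to your environment):
ceph mgr module enable balancer
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status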
From: ceph-users on behalf of Robert LeBlanc
Sent: Tuesday, June 25, 2019 5:22 PM
To: jinguk.k...@ungleich.ch
Cc: ceph-users
From: Christian Rice
Sent: Tuesday, March 5, 2019 2:07 PM
To: Matthew H; ceph-users
Subject: Re: radosgw sync falling behind regularly
Matthew, first of all, let me say we very much appreciate your help!
So I don’t think we turned dynamic resharding on, nor did
Christian,
Can you provide your zonegroup and zones configurations for all 3 rgw sites?
(run the commands for each site please)
Thanks,
From: Christian Rice
Sent: Monday, March 4, 2019 5:34 PM
To: Matthew H; ceph-users
Subject: Re: radosgw sync falling behind
Hi Massimo!
What version of Ceph is in use?
Thanks,
From: ceph-users on behalf of Massimo Sgaravatto
Sent: Friday, March 1, 2019 1:24 PM
To: Ceph Users
Subject: [ceph-users] Problems creating a balancer plan
Hi,
I already used the balancer in my Ceph Luminous cluster
You can force an rbd unmap with the command below:
rbd unmap -o force $DEV
If it still doesn't unmap, then you have pending IO blocking you.
As Ilya mentioned, for good measure you should also check whether LVM is in
use on this RBD volume. If it is, then that could be blocking you from
unmapping.
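A quick way to check, assuming the volume is mapped as /dev/rbd0 (the device and VG names here are just examples):
lsblk /dev/rbd0
pvs | grep rbd0
vgchange -an <vgname>
The first two show whether any LVM PVs/LVs sit on top of the device; deactivating the VG with vgchange should then let the unmap proceed.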
I believe the question was about which formula to use. There are two
different formulas, here [1] and here [2].
The difference is the additional steps used to calculate the appropriate PG
counts for a pool. In Nautilus, though, this is mostly moot, as the mgr service now
has a module to autoscale PG counts.
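Roughly, on Nautilus that looks like this (the pool name is just an example):
ceph mgr module enable pg_autoscaler
ceph osd pool set mypool pg_autoscale_mode on
ceph osd pool autoscale-status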
It looks like he used 'rbd map' to map his volume. If so, then yes, just run
fstrim on the device.
If it's an instance with a Cinder volume, or a Nova ephemeral disk (on Ceph), then you
have to use virtio-scsi to run discard in your instance.
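For the krbd case, something along these lines, assuming the filesystem is mounted at /mnt/data (a made-up mount point):
fstrim -v /mnt/data
For the libvirt/virtio-scsi case you also need discard='unmap' on the disk's driver element and a virtio-scsi controller, otherwise the guest's discards never reach the RBD image.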
From: ceph-users on behalf
I think the command you are looking for is 'rbd du'. For example:
rbd du rbd/myimagename
From: ceph-users on behalf of solarflow99
Sent: Thursday, February 28, 2019 5:31 PM
To: Jack
Cc: Ceph Users
Subject: Re: [ceph-users] rbd space usage
yes, but:
# rbd showmapped
Could you send your ceph.conf file over, please? Are you setting any tunables
for OSD or BlueStore currently?
From: ceph-users on behalf of Uwe Sauter
Sent: Thursday, February 28, 2019 8:33 AM
To: Marc Roos; ceph-users; vitalif
Subject: Re: [ceph-users] Fwd: Re:
Is fstrim or discard enabled for these SSDs? If so, how did you enable it?
I've seen similar issues with poor controllers on SSDs; they tend to block I/O
when trim kicks off.
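A few ways to check, as a sketch (the timer unit assumes a systemd-based distro):
lsblk --discard
systemctl status fstrim.timer
grep discard /etc/fstab
Non-zero DISC-GRAN/DISC-MAX in the lsblk output means the device advertises discard; the other two show whether periodic or mount-time trim is actually enabled.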
Thanks,
From: ceph-users on behalf of Paul Emmerich
Sent: Friday, February 22, 201
Have you made any changes to your ceph.conf? If so, would you mind copying them
into this thread?
From: ceph-users on behalf of Vitaliy Filippov
Sent: Wednesday, February 27, 2019 4:21 PM
To: Ceph Users
Subject: Re: [ceph-users] Blocked ops after change from fi
This feature is in the Nautilus release.
The first release (14.1.0) of Nautilus is available from download.ceph.com as
of last Friday.
From: ceph-users on behalf of admin
Sent: Thursday, February 28, 2019 4:22 AM
To: Pritha Srivastava; Sage Weil; ceph-us...@ce
Hey Ben,
Could you include the following?
radosgw-admin mdlog list
Thanks,
From: ceph-users on behalf of Benjamin.Zieglmeier
Sent: Tuesday, February 26, 2019 9:33 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Multi-Site Cluster RGW Sync issues
Hey Christian,
I'm making a wild guess, but I'm assuming this is 12.2.8. If so, is it possible
for you to upgrade to 12.2.11? There have been RGW multisite bug fixes for
metadata syncing and data syncing (both separate issues) that you could be
hitting.
Thanks,
Greetings,
You need to set the following configuration option under [osd] in your
ceph.conf file for your new OSDs.
[osd]
osd_crush_initial_weight = 0
This will ensure your new OSDs come up with a CRUSH weight of 0, thus preventing
the automatic rebalance that you see occurring.
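You can then bring the new OSDs in gradually; a hedged example, with the OSD id and weights as placeholders:
ceph osd crush reweight osd.12 0.5
ceph osd crush reweight osd.12 1.0
Repeat in steps up to the device's full weight so the backfill stays manageable.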
Good luck,
Without knowing more about the underlying hardware, you are likely reaching
some type of I/O resource constraint. Are your journals colocated or
non-colocated? How fast is your backend OSD storage device?
You may also want to look at setting the norebalance flag.
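For reference, the flag is toggled like this:
ceph osd set norebalance
ceph osd unset norebalance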
Good luck!
> On Sep 20, 2018,
Set up a Python virtual environment and install the required notario package
version. You'll also want to install ansible into that virtual environment,
along with netaddr.
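A minimal sketch, assuming Python 3 (the venv path is just an example; pin the exact versions ceph-ansible requires):
python3 -m venv ~/ceph-ansible-venv
source ~/ceph-ansible-venv/bin/activate
pip install ansible netaddr notario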
On Sep 20, 2018, at 18:04, solarflow99 <solarflo...@gmail.com> wrote:
oh, was that all it was... git clone https: