Hello,
We had a power failure, and after some trouble two of our OSDs started crashing
with this error:
"FAILED assert(last_e.version.version < e.version.version)"
I know which PG is problematic, and from searching the Ceph lists and the web I
saw that ultimately I should fix that PG using
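For the archive, the approach I keep seeing suggested involves
ceph-objectstore-tool: export the PG from the crashing OSD, then remove the
corrupt copy so the OSD can start and backfill from its peers. A rough sketch
of what I'm planning (the OSD id, paths, and PG id are placeholders for my
setup):

# stop the affected OSD first
$ systemctl stop ceph-osd@12
# export the PG as a backup before touching anything
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --journal-path /var/lib/ceph/osd/ceph-12/journal \
      --pgid 3.7f --op export --file /root/pg3.7f.export
# then remove the broken copy
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --journal-path /var/lib/ceph/osd/ceph-12/journal \
      --pgid 3.7f --op remove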
On Thu, Jul 27, 2017 at 02:08:59AM -0300, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
>
> https://wiki.ceph.com/Planning
>
> If you have work that you're doing that is feature work, significant
>
I'm dealing with a situation in which placement groups in an EC
pool are stuck. The EC pool is configured as 6+2 (pool 15) with a host
failure domain.
In this scenario, one of the nodes in the cluster was torn down and
recreated, with the OSDs being marked as lost and then being rebuilt
from
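What I've tried so far, with the caveat that the PG and OSD ids below are
examples from my notes rather than the real ones:

# inspect why peering is blocked
$ ceph pg 15.2a query
# tell the cluster the rebuilt OSDs' old data is really gone
$ ceph osd lost 7 --yes-i-really-mean-it
# give up on unfound objects; on EC pools I believe only 'delete' applies
$ ceph pg 15.2a mark_unfound_lost delete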
Hi
We have a cluster of 4 physical nodes running Jewel; our app talks S3 to
the cluster and, no doubt, makes heavy use of the S3 index. We've had several
big outages in the past that seem to have been caused by a deep scrub on one
of the PGs in the S3 index pool. Generally it starts with a deep scrub on one
such PG, then
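What we do as a stopgap when it happens (the values below are just what we
picked, not recommendations):

# stop new deep scrubs cluster-wide until the load subsides
$ ceph osd set nodeep-scrub
# throttle scrubbing so it competes less with client I/O
$ ceph tell osd.* injectargs '--osd-scrub-sleep 0.1'
# re-enable once things settle
$ ceph osd unset nodeep-scrub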
Hello Webert,
Thank you for your response.
I am not interested in the SSD cache tier pool at all, as that lives on the
Ceph Storage Cluster side and is somewhat well documented/understood.
My question regards enabling caching at the Ceph clients that talk to the
Ceph Storage Cluster.
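Concretely, assuming it's RBD clients we're talking about, is a [client]
section like the following (values are only examples) the right knob?

[client]
rbd cache = true
# 64 MB of client-side cache (example value)
rbd cache size = 67108864
# dirty-data limit that triggers write-back; 0 would mean write-through
rbd cache max dirty = 50331648
# stay in write-through until the guest issues its first flush
rbd cache writethrough until flush = true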
Hi Anish, in case you're still interested, we've been using CephFS in
production since Jewel 10.2.1.
I have a few similar clusters with some small setup variations. They're
not that big, but they're under heavy workload.
- 15~20 x 6TB HDD OSDs (5 per node), ~4 x 480GB SSD OSDs (2 per node, set
for
Sorry to take so long in replying.
I ended up evacuating the data and rebuilding using Luminous with BlueStore
OSDs. I still need to do my usual drive/host failure testing before going live. Of
course other things are burning right now and have my attention. Hopefully
I can finish that work in the next few
Hi,
Is there a librados API to clone objects?
I can see options in the radosgw API to copy objects, and in rbd to clone
images, but I'm not able to find similar options in the native librados
library to clone an object.
It would be good if you could point me to the right document if this is possible.
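For what it's worth, the closest thing I've found so far is the object copy
in the rados CLI rather than anything in librados itself (pool and object
names below are examples):

# server-side copy of one object within a pool
$ rados -p mypool cp srcobject dstobject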
Hi John,
Sorry for the delay; it took a bit of work to set up a Luminous test
environment. I'm sorry to have to report that the 12.1.1 RC version
also suffers from this problem - when two nodes open the same file for
read/write, and read from it, the performance is awful (under 1
Dear Mr. Kartzmareck,
Regarding the dictation problems on our pathologists' computer at your
facility, it has turned out that the cause apparently lies in the
installation of
Kaspersky Endpoint Security 10 on 4 July 2017.
The program's logs show that the
You could just use the "rbd du" command to calculate the real disk
usage of images / snapshots and compare that to the thin-provisioned
size of the images.
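For example (pool and image names are placeholders):

# per-image provisioned vs. actual usage; fast if the fast-diff feature is on
$ rbd du rbd/myimage
# or for every image in a pool
$ rbd du -p rbd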
On Mon, Jul 31, 2017 at 11:28 PM, Italo Santos wrote:
> Hello everyone,
>
> As we know, the OpenStack Ceph integration uses
On 01/08/17 12:41, Osama Hasebou wrote:
> Hi,
>
> What would be the best and most efficient way to handle maintenance on big
> Ceph clusters?
>
> Let's say that we have 3 copies of data and one of the servers needs to be
> maintained, and maintenance might take 1-2 days due to some unforeseen
> issues that come up.
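The usual pattern, as far as I know, is to keep the cluster from rebalancing
during the window, rather than letting it shuffle data for a node you know is
coming back:

# prevent OSDs from being marked out (and data from being re-replicated)
$ ceph osd set noout
# ... perform the maintenance, reboot the node ...
$ ceph osd unset noout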
Hi,
What would be the best and most efficient way to handle maintenance on big
Ceph clusters?
Let's say that we have 3 copies of data and one of the servers needs to be
maintained, and maintenance might take 1-2 days due to some unforeseen issues
that come up.
Hi,
I'm running into an issue with RGW running Civetweb behind an Apache
mod_proxy server.
The problem is that when AWS credentials and signatures are sent using the
query string, the host header calculated by RGW is something like this:
host:rgw.mydomain.local:7480
RGW thinks it's running on
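For reference, the relevant part of my Apache vhost currently looks roughly
like this (hostname and port are from my setup):

ProxyPreserveHost On
ProxyPass / http://rgw.mydomain.local:7480/
ProxyPassReverse / http://rgw.mydomain.local:7480/

and ceph.conf has rgw dns name = rgw.mydomain.local, so my guess is that the
mismatch comes from the port being appended to the calculated host header.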
Hi,
$ radosgw-admin metadata list user
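and if you then need the details of a single user (the uid here is just an
example):

$ radosgw-admin user info --uid=johndoe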
--
Jarek
--
Jarosław Owsiewski
2017-08-01 9:52 GMT+02:00 Diedrich Ehlerding <
diedrich.ehlerd...@ts.fujitsu.com>:
> Hello,
>
> according to the manpages of radosgw-admin, it is possible to
> suspend, resume, create, and remove a single radosgw user, but I
Hello,
according to the manpages of radosgw-admin, it is possible to
suspend, resume, create, and remove a single radosgw user, but I
haven't yet found a method to list all defined radosgw
users. Is that possible, and if so, how?
TIA,
Diedrich
--
Diedrich Ehlerding, Fujitsu