[ceph-users] FAILED assert(last_e.version.version < e.version.version) - Or: how to use ceph-kvstore-tool?

2017-08-01 Thread Ricardo J. Barberis
Hello, We had a power failure, and after some trouble 2 of our OSDs started crashing with this error: "FAILED assert(last_e.version.version < e.version.version)". I know which PG is problematic, and searching the ceph lists and the web I saw that ultimately I should fix that PG using
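
For readers who arrive via the subject line, a minimal sketch of inspecting an OSD's omap store with ceph-kvstore-tool, assuming a FileStore OSD with a LevelDB omap; the paths are placeholders, the backend argument varies by release (older versions omit it), and the OSD must be stopped first:

  # Stop the OSD and back up the store before touching it
  systemctl stop ceph-osd@12
  cp -a /var/lib/ceph/osd/ceph-12/current/omap /root/omap.backup

  # List keys (optionally by prefix) or fetch a single key for inspection
  ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-12/current/omap list
  ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-12/current/omap get <prefix> <key>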

Re: [ceph-users] Ceph Developers Monthly - August

2017-08-01 Thread Leonardo Vaz
On Thu, Jul 27, 2017 at 02:08:59AM -0300, Leonardo Vaz wrote: > Hey Cephers, > > This is just a friendly reminder that the next Ceph Developer Monthly > meeting is coming up: > > https://wiki.ceph.com/Planning > > If you have work that you're doing that is feature work, significant >

[ceph-users] EC Pool Stuck w/ holes in PG Mapping

2017-08-01 Thread Billy Olsen
I'm dealing with a situation in which the placement groups in an EC Pool are stuck. The EC Pool is configured as 6+2 (pool 15) with host failure domain. In this scenario, one of the nodes in the cluster was torn down and recreated, with the OSDs being marked as lost and then being rebuilt from
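
For context, a 6+2 profile with host failure domain is normally created along these lines (profile and pool names here are made up; pre-Luminous releases spell the option ruleset-failure-domain instead of crush-failure-domain). Note that every PG needs k+m = 8 distinct hosts to map all its shards, which is one way holes in the mapping appear:

  # Define the profile: 6 data chunks + 2 coding chunks, one shard per host
  ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get ec62

  # Create the pool against that profile (the PG count is deployment-specific)
  ceph osd pool create ecpool 256 256 erasure ec62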

[ceph-users] deep-scrub taking long time(possible leveldb corruption?)

2017-08-01 Thread Stanley Zhang
Hi, We have a 4-physical-node cluster running Jewel; our app talks S3 to the cluster and no doubt uses the S3 index heavily. We've had several big outages in the past that seem to be caused by a deep-scrub on one of the PGs in the S3 index pool. Generally it starts with a deep scrub on one such PG, then
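
A minimal sketch for investigating this kind of issue; the pool name is deployment-specific and the PG id below is hypothetical:

  # List the PGs backing the RGW index pool
  ceph pg ls-by-pool default.rgw.buckets.index

  # Trigger a deep scrub of one suspect PG during a quiet window
  ceph pg deep-scrub 15.1f

  # Throttle scrubbing cluster-wide while investigating
  ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'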

Re: [ceph-users] ceph and Fscache : can you kindly share your experiences?

2017-08-01 Thread Anish Gupta
Hello Webert, Thank you for your response. I am not interested in the SSD cache tier pool at all, as that is on the Ceph Storage Cluster server side and is somewhat well documented/understood. My question regards enabling caching at the ceph clients that talk to the Ceph Storage Cluster.
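
For anyone else exploring client-side caching: the CephFS kernel client can use the kernel's FS-Cache facility. A rough sketch, assuming a kernel built with CONFIG_CEPH_FSCACHE and Debian/Ubuntu paths (the mon address and secret file are placeholders):

  # FS-Cache is backed by the cachefilesd daemon and a local disk cache
  apt-get install cachefilesd
  echo 'RUN=yes' >> /etc/default/cachefilesd
  systemctl start cachefilesd

  # The 'fsc' mount option enables FS-Cache for this CephFS mount
  mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,fsc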

Re: [ceph-users] ceph and Fscache : can you kindly share your experiences?

2017-08-01 Thread Webert de Souza Lima
Hi Anish, in case you're still interested, we've been using cephfs in production since jewel 10.2.1. I have a few similar clusters with some small setup variations. They're not so big, but they're under heavy workload. - 15~20 x 6TB HDD OSDs (5 per node), ~4 x 480GB SSD OSDs (2 per node, set for

Re: [ceph-users] LevelDB corruption

2017-08-01 Thread Mazzystr
Sorry to take so long in replying. I ended up evacuating data and rebuilding using Luminous with BlueStore OSDs. I need to do my usual drive/host failure testing before going live. Of course other things are burning right now and have my attention. Hopefully I can finish that work in the next few

[ceph-users] Rados lib object clone api

2017-08-01 Thread Muthusamy Muthiah
Hi, Is there a librados API to clone objects? I can see options in the radosgw API to copy an object, and in rbd to clone images, but I'm not able to find similar options in the librados native library to clone an object. It would be good if you could point me to the right document if it is possible.
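
As far as I know librados offers no copy-on-write clone for plain RADOS objects; the closest built-in is a full object copy, while COW cloning exists one layer up in RBD via protected snapshots. A sketch with made-up pool, object, and image names:

  # Full copy of a single object (not COW) using the rados CLI
  rados -p mypool cp srcobject dstobject

  # RBD-level COW clone (requires format 2 images): snapshot, protect, clone
  rbd snap create mypool/myimage@snap1
  rbd snap protect mypool/myimage@snap1
  rbd clone mypool/myimage@snap1 mypool/myclone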

Re: [ceph-users] CephFS: concurrent access to the same file from multiple nodes

2017-08-01 Thread Andras Pataki
Hi John, Sorry for the delay, it took a bit of work to set up a luminous test environment. I'm sorry to have to report that the 12.1.1 RC version also suffers from this problem - when two nodes open the same file for read/write, and read from it, the performance is awful (under 1

[ceph-users] Problems with a pathology computer (Job: 116.152)

2017-08-01 Thread Steffen Weißgerber
Dear Mr. Kartzmareck, regarding the dictation problems on our pathologists' computer at your site, it has turned out that the cause apparently lies in the installation of Kaspersky Endpoint Security 10 on 2017-07-04. The program's logs show that the

Re: [ceph-users] Ceph - OpenStack space efficiency

2017-08-01 Thread Jason Dillaman
You could just use the "rbd du" command to calculate the real disk usage of images / snapshots and compare that to the thin-provisioned size of the images. On Mon, Jul 31, 2017 at 11:28 PM, Italo Santos wrote: > Hello everyone, > > As we know the Openstack ceph integration uses
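
A minimal example of that comparison (pool and image names are made up):

  # Allocated vs. provisioned size for every image and snapshot in a pool
  rbd du -p volumes

  # The same for a single image; 'rbd info' shows the provisioned size
  rbd du volumes/volume-1234
  rbd info volumes/volume-1234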

Re: [ceph-users] Ceph Maintenance

2017-08-01 Thread Richard Hesketh
On 01/08/17 12:41, Osama Hasebou wrote: > Hi, > > What would be the best possible and efficient way for big Ceph clusters when > maintenance needs to be performed? > > Let's say that we have 3 copies of data, and one of the servers needs to be > maintained, and maintenance might take 1-2 days

[ceph-users] Ceph Maintenance

2017-08-01 Thread Osama Hasebou
Hi, What would be the best possible and efficient way for big Ceph clusters when maintenance needs to be performed? Let's say that we have 3 copies of data, and one of the servers needs to be maintained, and maintenance might take 1-2 days due to some unforeseen issues that come up.
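
The usual pattern for planned maintenance is to stop the cluster from rebalancing while the host is down; a sketch (whether running 1-2 days on reduced redundancy is acceptable is a local policy decision):

  # Prevent the cluster from marking down OSDs 'out' and rebalancing
  ceph osd set noout

  # ... take the host down, do the maintenance, bring its OSDs back ...

  # Restore normal behaviour once the OSDs have rejoined
  ceph osd unset noout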

[ceph-users] Override SERVER_PORT and SERVER_PORT_SECURE and AWS4

2017-08-01 Thread Wido den Hollander
Hi, I'm running into an issue with RGW running Civetweb behind an Apache mod_proxy server. The problem is that when AWS credentials and signatures are sent using the Query String, the host header calculated by RGW is something like this: host:rgw.mydomain.local:7480 RGW thinks it's running on

Re: [ceph-users] RGW: how to get a list of defined radosgw users?

2017-08-01 Thread Jaroslaw Owsiewski
Hi, $ radosgw-admin metadata list user -- Jarek -- Jarosław Owsiewski 2017-08-01 9:52 GMT+02:00 Diedrich Ehlerding <diedrich.ehlerd...@ts.fujitsu.com>: > Hello, > > according to the manpages of radosgw-admin, it is possible to > suspend, resume, create, remove a single radosgw user, but I
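
For completeness, the listed user IDs can then be fed back in for details (the uid below is made up):

  # List all user IDs, then inspect a single user
  radosgw-admin metadata list user
  radosgw-admin user info --uid=johndoe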

[ceph-users] RGW: how to get a list of defined radosgw users?

2017-08-01 Thread Diedrich Ehlerding
Hello, according to the manpages of radosgw-admin, it is possible to suspend, resume, create, or remove a single radosgw user, but I haven't yet found a method to see a list of all defined radosgw users. Is that possible, and if so, how? TIA, Diedrich -- Diedrich Ehlerding, Fujitsu