[ceph-users] /var/lib/ceph/osd/ceph-xxx/current/meta shows "Structure needs cleaning"

2018-03-07 Thread 赵贺东
Hi All, every time after we activate an OSD, we get “Structure needs cleaning” in /var/lib/ceph/osd/ceph-xxx/current/meta:
  /var/lib/ceph/osd/ceph-xxx/current/meta # ls -l
  ls: reading directory .: Structure needs cleaning
  total 0
Could anyone say something about this error? Thank you!
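“Structure needs cleaning” is the EUCLEAN error from the local filesystem under the OSD (typically XFS for filestore), not from Ceph itself. A minimal checking sketch, assuming an XFS-backed filestore OSD and a hypothetical data device /dev/sdb1:

  # look for XFS corruption messages in the kernel log
  dmesg | grep -i xfs
  # stop the OSD and unmount its data directory before checking
  systemctl stop ceph-osd@xxx
  umount /var/lib/ceph/osd/ceph-xxx
  # dry-run check only; -n makes no changes to the filesystem
  xfs_repair -n /dev/sdb1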

Re: [ceph-users] Multipart Upload - POST fails

2018-03-07 Thread Ingo Reimann
No-one? -----Original message----- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Ingo Reimann Sent: Friday, 2 March 2018 14:15 To: ceph-users Subject: [ceph-users] Multipart Upload - POST fails Hi, we discovered a problem with our installation -

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-07 Thread shadow_lin
Hi David, Thanks for the info. Could I assume that if we use active/passive multipath with the rbd exclusive lock, then all targets which support rbd (via block) are safe? 2018-03-08 shadow_lin From: David Disseldorp Sent: 2018-03-08 08:47 Subject: Re: [ceph-users] iSCSI Multipath (Load

Re: [ceph-users] improve single job sequencial read performance.

2018-03-07 Thread Alex Gorbachev
On Wed, Mar 7, 2018 at 8:37 PM, Alex Gorbachev wrote: > On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius > wrote: >> Hi all, this issue has already been discussed in older threads and I've >> already tried most of the solutions proposed in

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-07 Thread David Disseldorp
Hi shadowlin, On Wed, 7 Mar 2018 23:24:42 +0800, shadow_lin wrote: > Is it safe to use active/active multipath if we use the SUSE kernel with > target_core_rbd? > Thanks. A cross-gateway failover race condition similar to what Mike described is currently possible with active/active target_core_rbd.

Re: [ceph-users] Don't use ceph mds set max_mds

2018-03-07 Thread Patrick Donnelly
On Wed, Mar 7, 2018 at 5:29 AM, John Spray wrote: > On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster wrote: >> Hi all, >> >> What is the purpose of >> >>ceph mds set max_mds >> >> ? >> >> We just used that by mistake on a cephfs cluster when

Re: [ceph-users] pg inconsistent

2018-03-07 Thread Brad Hubbard
On Thu, Mar 8, 2018 at 1:22 AM, Harald Staub wrote: > "ceph pg repair" leads to: > 5.7bd repair 2 errors, 0 fixed > > Only an empty list from: > rados list-inconsistent-obj 5.7bd --format=json-pretty > > Inspired by http://tracker.ceph.com/issues/12577 , I tried again with

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-07 Thread shadow_lin
Hi Christie, Is it safe to use active/passive multipath with krbd with exclusive lock for lio/tgt/scst/tcmu? Is it safe to use active/active multipath if we use the SUSE kernel with target_core_rbd? Thanks. 2018-03-07 shadowlin From: Mike Christie Sent: 2018-03-07 03:51
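For readers following the active/passive discussion: on the initiator side this usually maps to a failover path-grouping policy in dm-multipath. A rough sketch only (the LIO-ORG/product matching and the timing values are assumptions, not a validated configuration):

  # /etc/multipath.conf (sketch)
  devices {
      device {
          vendor "LIO-ORG"
          product ".*"
          path_grouping_policy "failover"   # active/passive: only one path in use at a time
          path_selector "queue-length 0"
          failback 60
          no_path_retry 12
      }
  }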

[ceph-users] pg inconsistent

2018-03-07 Thread Harald Staub
"ceph pg repair" leads to: 5.7bd repair 2 errors, 0 fixed Only an empty list from: rados list-inconsistent-obj 5.7bd --format=json-pretty Inspired by http://tracker.ceph.com/issues/12577 , I tried again with more verbose logging and searched the osd logs e.g. for "!=", "mismatch", could not

Re: [ceph-users] CephFS Client Capabilities questions

2018-03-07 Thread John Spray
On Wed, Mar 7, 2018 at 2:45 PM, Kenneth Waegeman wrote: > Hi all, > > I am playing with limiting client access to certain subdirectories of cephfs > running latest 12.2.4 and latest centos 7.4 kernel, both using kernel client > and fuse > > I am following

[ceph-users] CephFS Client Capabilities questions

2018-03-07 Thread Kenneth Waegeman
Hi all, I am playing with limiting client access to certain subdirectories of cephfs, running the latest 12.2.4 and the latest CentOS 7.4 kernel, using both the kernel client and fuse. I am following http://docs.ceph.com/docs/luminous/cephfs/client-auth/: "To completely restrict the client to the
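For reference, the linked page boils down to something like the following; a minimal sketch with a hypothetical client.foo restricted to /bar on a filesystem named cephfs:

  # luminous: have the cluster generate the restricted caps
  ceph fs authorize cephfs client.foo /bar rw
  # inspect what was actually granted
  ceph auth get client.foo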

[ceph-users] improve single job sequencial read performance.

2018-03-07 Thread Cassiano Pilipavicius
Hi all, this issue has already been discussed in older threads and I've already tried most of the solutions proposed in them. I have a small and old ceph cluster (started on hammer and upgraded up to luminous 12.2.2), connected through a single shared 1GbE link (I know this is not
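For anyone who wants to reproduce the measurement, a rough single-job sequential read test on a mapped RBD device (the /dev/rbd0 path and the numbers are assumptions):

  # one job, large sequential reads, direct I/O, 60 s
  fio --name=seqread --filename=/dev/rbd0 --rw=read --bs=4M \
      --iodepth=16 --ioengine=libaio --direct=1 --runtime=60 --time_based
  # raising client-side read-ahead (value in KB) often helps this workload
  echo 16384 > /sys/block/rbd0/queue/read_ahead_kb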

Re: [ceph-users] Don't use ceph mds set max_mds

2018-03-07 Thread John Spray
On Wed, Mar 7, 2018 at 2:02 PM, Dan van der Ster wrote: > On Wed, Mar 7, 2018 at 2:29 PM, John Spray wrote: >> On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster >> wrote: >>> Hi all, >>> >>> What is the purpose of >>> >>>ceph mds

Re: [ceph-users] Don't use ceph mds set max_mds

2018-03-07 Thread Dan van der Ster
On Wed, Mar 7, 2018 at 2:29 PM, John Spray wrote: > On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster wrote: >> Hi all, >> >> What is the purpose of >> >>ceph mds set max_mds >> >> ? >> >> We just used that by mistake on a cephfs cluster when

Re: [ceph-users] Don't use ceph mds set max_mds

2018-03-07 Thread John Spray
On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster wrote: > Hi all, > > What is the purpose of > >ceph mds set max_mds > > ? > > We just used that by mistake on a cephfs cluster when attempting to > decrease from 2 to 1 active mds's. > > The correct command to do this is

[ceph-users] Uneven pg distribution cause high fs_apply_latency on osds with more pgs

2018-03-07 Thread shadow_lin
Hi list, the Ceph version is jewel 10.2.10 and all OSDs are using filestore. The cluster has 96 OSDs and 1 pool with size=2 replication and 4096 PGs (based on the PG calculation method from the Ceph docs, targeting 100 PGs per OSD). The OSD with the highest PG count has 104 PGs, and there are 6 OSDs with above 100 PGs
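A quick way to see the per-OSD PG spread being described, and to rebalance it cautiously (assuming the test-reweight commands are available on this jewel release):

  # utilization and PG count (PGS column) per OSD
  ceph osd df tree
  # dry run: reweight by PG count, 110 = tolerate 10% above the mean
  ceph osd test-reweight-by-pg 110
  # apply only once the proposed changes look sane
  ceph osd reweight-by-pg 110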

[ceph-users] Journaling feature causes cluster to have slow requests and inconsistent PG

2018-03-07 Thread Alex Gorbachev
First noticed this problem in our ESXi/iSCSI cluster, but now I can replicate it in a lab with just Ubuntu: 1. Create an image with the journaling (and required exclusive-lock) feature 2. Mount the image, make a fs and write a large file to it: rbd-nbd map matte/scuttle2 /dev/nbd0 mkfs.xfs
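Spelling the reproduction out end to end (pool/image name taken from the post; the size, mount point and file size are hypothetical):

  # 1. image with exclusive-lock + journaling
  rbd create matte/scuttle2 --size 102400 \
      --image-feature exclusive-lock --image-feature journaling
  # 2. map via rbd-nbd, make a filesystem, write a large file
  rbd-nbd map matte/scuttle2          # prints e.g. /dev/nbd0
  mkfs.xfs /dev/nbd0
  mount /dev/nbd0 /mnt
  dd if=/dev/zero of=/mnt/bigfile bs=1M count=20480 oflag=direct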

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Fabian Grünbichler
On Wed, Mar 07, 2018 at 02:04:52PM +0100, Fabian Grünbichler wrote: > On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote: > > Hi, > > > > Since yesterday, the "ceph-luminous" repository does not contain any > > package for Debian Jessie. > > > > Is it expected ? > > AFAICT the packages

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Fabian Grünbichler
On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote: > Hi, > > Since yesterday, the "ceph-luminous" repository does not contain any > package for Debian Jessie. > > Is it expected ? AFAICT the packages are all there[2], but the Packages file only references the ceph-deploy package so apt
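The claim about the Packages index can be checked from anywhere with a plain HTTP client, e.g.:

  # list what the jessie index actually advertises
  curl -s https://download.ceph.com/debian-luminous/dists/jessie/main/binary-amd64/Packages \
    | grep '^Package:' | sort -u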

Re: [ceph-users] OSD crash during pg repair - recovery_info.ss.clone_snaps.end and other problems

2018-03-07 Thread Jan Pekař - Imatic
On 6.3.2018 22:28, Gregory Farnum wrote: On Sat, Mar 3, 2018 at 2:28 AM Jan Pekař - Imatic wrote: Hi all, I have a few problems on my cluster that are maybe linked together and have now caused an OSD to go down during pg repair. First few

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Sean Purdy
On Wed, 7 Mar 2018, Wei Jin said: > Same issue here. > Will Ceph community support Debian Jessie in the future? Seems odd to stop it right in the middle of minor point releases. Maybe it was an oversight? Jessie's still supported in Debian as oldstable and not even in LTS yet. Sean > On

Re: [ceph-users] No more Luminous packages for Debian Jessie ??

2018-03-07 Thread Wei Jin
Same issue here. Will the Ceph community support Debian Jessie in the future? On Mon, Mar 5, 2018 at 6:33 PM, Florent B wrote: > Jessie is no longer supported?? > https://download.ceph.com/debian-luminous/dists/jessie/main/binary-amd64/Packages > only contains the ceph-deploy package

[ceph-users] Don't use ceph mds set max_mds

2018-03-07 Thread Dan van der Ster
Hi all, What is the purpose of ceph mds set max_mds ? We just used that by mistake on a cephfs cluster when attempting to decrease from 2 to 1 active MDSs. The correct command to do this is of course ceph fs set max_mds. So, is `ceph mds set max_mds` useful for something? If not,
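For anyone landing here later, a sketch of the distinction being made, assuming a filesystem named cephfs:

  # the correct, per-filesystem setting (luminous)
  ceph fs set cephfs max_mds 1
  # when shrinking from 2 to 1, the extra rank also has to be deactivated
  ceph mds deactivate cephfs:1
  # the bare "ceph mds set max_mds <n>" form is the legacy variant this thread
  # warns against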

Re: [ceph-users] Why one crippled osd can slow down or block all request to the whole ceph cluster?

2018-03-07 Thread shadow_lin
What you said makes sense. I have encountered a few hardware-related issues that caused one OSD to behave abnormally and block all I/O of the whole cluster (all OSDs in one pool), which makes me think about how to avoid this situation. 2018-03-07 shadow_lin From: David Turner
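As a sketch of how one might spot and isolate such an OSD before it drags the pool down (osd.7 is a hypothetical id):

  # per-OSD commit/apply latency; an outlier is a good suspect
  ceph osd perf
  # on the host running the suspect OSD, look at ops stuck in flight
  ceph daemon osd.7 dump_ops_in_flight
  # take it out of data placement without stopping the daemon
  ceph osd out 7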

Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-07 Thread Max Cuttins
On 06/03/2018 16:23, David Turner wrote: That said, I do like the idea of being able to disable buckets, rbds, pools, etc. so that no client could access them. That is useful for much more than just data deletion and won't prevent people from deleting data prematurely. To me, if nobody
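On the "how hard should it be" side, there are already two guard rails worth noting; a sketch using a hypothetical pool named rbd:

  # per-pool flag: refuse deletion of this pool
  ceph osd pool set rbd nodelete true
  # cluster-wide: monitors refuse pool deletion unless explicitly allowed
  ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=false'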