Hi All,
Every time after we activate an OSD, we get “Structure needs cleaning” in
/var/lib/ceph/osd/ceph-xxx/current/meta.
/var/lib/ceph/osd/ceph-xxx/current/meta
# ls -l
ls: reading directory .: Structure needs cleaning
total 0
Could anyone say something about this error?
Thank you!
No-one?
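For what it's worth, "Structure needs cleaning" is the error string for EUCLEAN,
i.e. XFS has detected on-disk corruption under that OSD. A minimal check/repair
sketch, assuming the OSD data partition is XFS on /dev/sdb1 (device and OSD id
are illustrative):

# stop the affected OSD and unmount its data directory
systemctl stop ceph-osd@12
umount /var/lib/ceph/osd/ceph-12

# dry run first: report problems without modifying the filesystem
xfs_repair -n /dev/sdb1

# if corruption is confirmed, repair, remount and restart the OSD
xfs_repair /dev/sdb1
mount /dev/sdb1 /var/lib/ceph/osd/ceph-12
systemctl start ceph-osd@12

If the repair discards objects, a deep scrub afterwards should let the cluster
recover them from the other replicas.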
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Ingo Reimann
Sent: Friday, 2 March 2018 14:15
To: ceph-users
Subject: [ceph-users] Multipart Upload - POST fails
Hi,
we discovered a problem with our installation -
Hi David,
Thanks for the info.
Could I assume that if we use active/passive multipath with the rbd exclusive lock,
then all targets which support rbd (via block) are safe?
2018-03-08
shadow_lin
From: David Disseldorp
Sent: 2018-03-08 08:47
Subject: Re: [ceph-users] iSCSI Multipath (Load
On Wed, Mar 7, 2018 at 8:37 PM, Alex Gorbachev wrote:
> On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius
> wrote:
>> Hi all, this issue has already been discussed in older threads and I've
>> already tried most of the solutions proposed in
Hi shadowlin,
On Wed, 7 Mar 2018 23:24:42 +0800, shadow_lin wrote:
> Is it safe to use active/active multipath if using the SUSE kernel with
> target_core_rbd?
> Thanks.
A cross-gateway failover race-condition similar to what Mike described
is currently possible with active/active target_core_rbd.
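For what it's worth, a quick way to confirm that an image really has
exclusive-lock enabled before putting it behind multiple gateways (pool/image
names are illustrative):

# check whether the exclusive-lock feature is enabled on the image
rbd info rbd/disk01 | grep features

# enable it if it is missing
rbd feature enable rbd/disk01 exclusive-lock

# list the current watchers of the image
rbd status rbd/disk01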
On Wed, Mar 7, 2018 at 5:29 AM, John Spray wrote:
> On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster wrote:
>> Hi all,
>>
>> What is the purpose of
>>
>>ceph mds set max_mds
>>
>> ?
>>
>> We just used that by mistake on a cephfs cluster when
On Thu, Mar 8, 2018 at 1:22 AM, Harald Staub wrote:
> "ceph pg repair" leads to:
> 5.7bd repair 2 errors, 0 fixed
>
> Only an empty list from:
> rados list-inconsistent-obj 5.7bd --format=json-pretty
>
> Inspired by http://tracker.ceph.com/issues/12577 , I tried again with
Hi Christie,
Is it safe to use active/passive multipath with krbd with exclusive lock for
lio/tgt/scst/tcmu?
Is it safe to use active/active multipath if using the SUSE kernel with
target_core_rbd?
Thanks.
2018-03-07
shadowlin
From: Mike Christie
Sent: 2018-03-07 03:51
"ceph pg repair" leads to:
5.7bd repair 2 errors, 0 fixed
Only an empty list from:
rados list-inconsistent-obj 5.7bd --format=json-pretty
Inspired by http://tracker.ceph.com/issues/12577 , I tried again with
more verbose logging and searched the osd logs e.g. for "!=",
"mismatch", could not
On Wed, Mar 7, 2018 at 2:45 PM, Kenneth Waegeman
wrote:
> Hi all,
>
> I am playing with limiting client access to certain subdirectories of cephfs,
> running the latest 12.2.4 and the latest CentOS 7.4 kernel, using both the
> kernel client and fuse
>
> I am following
Hi all,
I am playing with limiting client access to certain subdirectories of
cephfs, running the latest 12.2.4 and the latest CentOS 7.4 kernel, using
both the kernel client and fuse.
I am following http://docs.ceph.com/docs/luminous/cephfs/client-auth/:
To completely restrict the client to the
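For reference, a minimal sketch of the approach described in that document on
luminous, assuming a filesystem named cephfs, a client named client.foo and a
directory /mydir (all names are illustrative):

# grant client.foo read/write access restricted to /mydir
ceph fs authorize cephfs client.foo /mydir rw

# on the client, mount only that subtree with the new key
mount -t ceph mon-host:/mydir /mnt/mydir -o name=foo,secretfile=/etc/ceph/foo.secret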
Hi all, this issue has already been discussed in older threads and I've
already tried most of the solutions proposed in them.
I have a small and old Ceph cluster (started in hammer and upgraded
up to luminous 12.2.2), connected through a single shared 1GbE link (I know
this is not
On Wed, Mar 7, 2018 at 2:02 PM, Dan van der Ster wrote:
> On Wed, Mar 7, 2018 at 2:29 PM, John Spray wrote:
>> On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster
>> wrote:
>>> Hi all,
>>>
>>> What is the purpose of
>>>
>>>ceph mds
On Wed, Mar 7, 2018 at 2:29 PM, John Spray wrote:
> On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster wrote:
>> Hi all,
>>
>> What is the purpose of
>>
>>ceph mds set max_mds
>>
>> ?
>>
>> We just used that by mistake on a cephfs cluster when
On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster wrote:
> Hi all,
>
> What is the purpose of
>
>ceph mds set max_mds
>
> ?
>
> We just used that by mistake on a cephfs cluster when attempting to
> decrease from 2 to 1 active mds's.
>
> The correct command to do this is
Hi list,
Ceph version is jewel 10.2.10 and all OSDs are using filestore.
The cluster has 96 OSDs and 1 pool with size=2 replication and 4096 PGs (based
on the PG calculation method from the Ceph docs, targeting 100 PGs per OSD).
The OSD with the most PGs has 104, and there are 6 OSDs with more than 100
PGs.
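As a side note, a quick way to check the per-OSD PG distribution and utilisation
(this should also be available on jewel, if I remember correctly):

# per-OSD usage and PG counts (see the PGS column)
ceph osd df

# the same view grouped by the CRUSH tree
ceph osd df tree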
First noticed this problem in our ESXi/iSCSI cluster, but now I can
replicate it in the lab with just Ubuntu:
1. Create an image with journaling (and required exclusive-lock) feature
2. Mount the image, make a fs and write a large file to it:
rbd-nbd map matte/scuttle2
/dev/nbd0
mkfs.xfs
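For completeness, a rough sketch of those reproduction steps (pool/image names
follow the example above; size and file name are illustrative, and the nbd
device is whatever rbd-nbd prints):

# 1. create an image with journaling (exclusive-lock is required by it);
#    size is in MB by default, so this is 10 GiB
rbd create --size 10240 --image-feature layering,exclusive-lock,journaling matte/scuttle2

# 2. map it, make a filesystem and write a large file
rbd-nbd map matte/scuttle2        # prints e.g. /dev/nbd0
mkfs.xfs /dev/nbd0
mount /dev/nbd0 /mnt
dd if=/dev/zero of=/mnt/bigfile bs=1M count=4096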
On Wed, Mar 07, 2018 at 02:04:52PM +0100, Fabian Grünbichler wrote:
> On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote:
> > Hi,
> >
> > Since yesterday, the "ceph-luminous" repository does not contain any
> > package for Debian Jessie.
> >
> > Is it expected ?
>
> AFAICT the packages
On Wed, Feb 28, 2018 at 10:24:50AM +0100, Florent B wrote:
> Hi,
>
> Since yesterday, the "ceph-luminous" repository does not contain any
> package for Debian Jessie.
>
> Is it expected ?
AFAICT the packages are all there[2], but the Packages file only
references the ceph-deploy package so apt
On 6.3.2018 22:28, Gregory Farnum wrote:
On Sat, Mar 3, 2018 at 2:28 AM Jan Pekař - Imatic wrote:
Hi all,
I have a few problems on my cluster that are maybe linked together and
have now caused an OSD to go down during pg repair.
First few
On Wed, 7 Mar 2018, Wei Jin said:
> Same issue here.
> Will the Ceph community support Debian Jessie in the future?
Seems odd to stop it right in the middle of minor point releases. Maybe it was
an oversight? Jessie's still supported in Debian as oldstable and not even in
LTS yet.
Sean
> On
Same issue here.
Will the Ceph community support Debian Jessie in the future?
On Mon, Mar 5, 2018 at 6:33 PM, Florent B wrote:
> Jessie is no longer supported?
> https://download.ceph.com/debian-luminous/dists/jessie/main/binary-amd64/Packages
> only contains ceph-deploy package
Hi all,
What is the purpose of
ceph mds set max_mds
?
We just used that by mistake on a cephfs cluster when attempting to
decrease from 2 to 1 active mds's.
The correct command to do this is of course
ceph fs set max_mds
So, is `ceph mds set max_mds` useful for something? If not,
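For the record, a sketch of the intended procedure on luminous, assuming the
filesystem is named cephfs (the name is illustrative):

# the fs-scoped setting is the one that controls the number of active MDS daemons
ceph fs set cephfs max_mds 1

# on luminous, if I recall correctly, rank 1 must then be deactivated explicitly
ceph mds deactivate cephfs:1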
What you said makes sense.
I have encountered a few hardware-related issues that caused one OSD to work
abnormally and block all IO of the whole cluster (all OSDs in one pool), which
makes me think about how to avoid this situation.
2018-03-07
shadow_lin
From: David Turner
On 06/03/2018 16:23, David Turner wrote:
That said, I do like the idea of being able to disable buckets, rbds,
pools, etc. so that no client could access them. That is useful for
much more than just data deletion and won't prevent people from
deleting data prematurely.
To me, if nobody