[ceph-users] QEMU/KVM client compatibility

2019-05-27 Thread Kevin Olbrich
Hi!

How can I determine which client compatibility level (luminous, mimic,
nautilus, etc.) is supported in QEMU/KVM?
Does it depend on the version of the Ceph packages on the system, or do I need
a recent version of QEMU/KVM?
Which component defines which client level is supported?
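In case it matters: so far I can only check which librbd the QEMU binary is
linked against and which Ceph client packages are installed, e.g. (a rough
sketch; the package names depend on the distribution):

```
# which librbd does QEMU load?
ldd "$(command -v qemu-system-x86_64)" | grep librbd

# which Ceph client packages are installed?
rpm -q librbd1 ceph-common 2>/dev/null || dpkg -l | grep -E 'librbd1|ceph-common'
```

But I am not sure whether that library version is what actually decides the
supported client compatibility level.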

Thank you very much!

Kind regards
Kevin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] assume_role() :http_code 400 error

2019-05-27 Thread Pritha Srivastava
Hello,

What is the value of rgw sts key in your ceph.conf file? It has to be 16
bytes in length, e.g. abcdefghijklmnop.
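For example, a minimal sketch of the relevant ceph.conf section (the section
name and key below are placeholders; rgw s3 auth use sts is assumed to be
needed as well so that S3 requests can use the STS-issued credentials):

```
[client.rgw.gateway1]
    # must be exactly 16 characters (16 bytes)
    rgw sts key = abcdefghijklmnop
    rgw s3 auth use sts = true
```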

Thanks,
Pritha

On Tue, May 28, 2019 at 9:16 AM Yuan Minghui  wrote:

> The log file says:
>
> invalid secret key.
>
>
>
> The key I put in is tom’s access key and secret key.
>
> And I am sure that tom’s key is correct.
>
>
>
> From: Yuan Minghui 
> Date: Tuesday, May 28, 2019, 11:35 AM
> To: Pritha Srivastava 
> Cc: "ceph-users@lists.ceph.com" 
> Subject: [ceph-users] assume_role() :http_code 400 error
>
>
>
> Hello Pritha:
>
> I reinstalled the latest Ceph version, 14.2.1, and when I use
> ‘assume_role()’, something goes wrong with http_code = 400.
>
> Do you know the reasons?
>
> Thanks a lot.
>
> yuan
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] assume_role() :http_code 400 error

2019-05-27 Thread Yuan Minghui
The log file says:

invalid secret key.

 

The key I put in is tom’s access key and secret key.

And I am sure that tom’s key is correct.

 

From: Yuan Minghui 
Date: Tuesday, May 28, 2019, 11:35 AM
To: Pritha Srivastava 
Cc: "ceph-users@lists.ceph.com" 
Subject: [ceph-users] assume_role() :http_code 400 error

 

Hello Pritha:

   I reinstalled the latest Ceph version, 14.2.1, and when I use ‘assume_role()’,
something goes wrong with http_code = 400.

Do you know the reasons?

Thanks a lot.

yuan

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] assume_role() :http_code 400 error

2019-05-27 Thread Yuan Minghui
Hello Pritha:

   I reinstalled the latest Ceph version, 14.2.1, and when I use ‘assume_role()’,
something goes wrong with http_code = 400.
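For reference, the failing call is an STS AssumeRole request against the RGW
endpoint, roughly equivalent to the following AWS CLI invocation (the role ARN,
session name and endpoint are placeholders for my actual values):

```
export AWS_ACCESS_KEY_ID='<toms access key>'
export AWS_SECRET_ACCESS_KEY='<toms secret key>'
aws sts assume-role \
    --role-arn "arn:aws:iam:::role/S3Access" \
    --role-session-name test-session \
    --endpoint-url http://rgw-host:8000
```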

Do you know the reasons?

Thanks a lot.

yuan

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Yan, Zheng
On Mon, May 27, 2019 at 6:54 PM Oliver Freyermuth
 wrote:
>
> Am 27.05.19 um 12:48 schrieb Oliver Freyermuth:
> > Am 27.05.19 um 11:57 schrieb Dan van der Ster:
> >> On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
> >>  wrote:
> >>>
> >>> Dear Dan,
> >>>
> >>> thanks for the quick reply!
> >>>
> >>> Am 27.05.19 um 11:44 schrieb Dan van der Ster:
>  Hi Oliver,
> 
>  We saw the same issue after upgrading to mimic.
> 
>  IIRC we could make the max_bytes xattr visible by touching an empty
>  file in the dir (thereby updating the dir inode).
> 
>  e.g. touch  /cephfs/user/freyermu/.quota; rm  
>  /cephfs/user/freyermu/.quota
> >>>
> >>> sadly, no, not even with sync's in between:
> >>> -
> >>> $ touch /cephfs/user/freyermu/.quota; sync; rm -f 
> >>> /cephfs/user/freyermu/.quota; sync; getfattr --absolute-names 
> >>> --only-values -n ceph.quota.max_bytes /cephfs/user/freyermu/
> >>> /cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
> >>> -
> >>> Also restarting the FUSE client after that does not change it. Maybe this 
> >>> requires the rest of the cluster to be upgraded to work?
> >>> I'm just guessing here, but maybe the MDS needs the file creation / 
> >>> update of the directory inode to "update" the way the quota attributes 
> >>> are exported. If something changed here with Mimic,
> >>> this would explain why the "touch" is needed. And this would also explain 
> >>> why this might only help if the MDS is upgraded to Mimic, too.
> >>>
> >>
> >> I think the relevant change which is causing this is the new_snaps in 
> >> mimic.
> >>
> >> Did you already enable them? `ceph fs set cephfs allow_new_snaps 1`
> >
> > Good point! We wanted to enable these anyways with Mimic.
> >
> > I've enabled it just now (since servers are still Luminous, that required 
> > "--yes-i-really-mean-it") but sadly, the max_bytes attribute is still not 
> > there
> > (also not after remounting on the client / using the file creation and 
> > deletion trick).
>
> That's interesting - it suddenly started to work for one directory after 
> creating a snapshot for one directory subtree on which we have quotas enabled,
> and removing that snapshot again.
> I can reproduce that for other directories.
> So it seems enabling snapshots and snapshotting once fixes it for that 
> directory tree.
>
> If that's the case, maybe this could be added to the upgrade notes?
>

The quota handling code changed in Mimic, so a Mimic client and a Luminous MDS
have a compatibility issue. There should be no issue once both the MDS and the
client are upgraded to Mimic.

Regards
Yan, Zheng

> Cheers,
> Oliver
>
> >
> > Cheers,
> >  Oliver
> >
> >>
> >> -- dan
> >>
> >>
> >>> We have scheduled the remaining parts of the upgrade for Wednesday, and 
> >>> worst case could survive until then without quota enforcement, but it's a 
> >>> really strange and unexpected incompatibility.
> >>>
> >>> Cheers,
> >>>  Oliver
> >>>
> 
>  Does that work?
> 
>  -- dan
> 
> 
>  On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
>   wrote:
> >
> > Dear Cephalopodians,
> >
> > in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
> > (13.2.5), we have upgraded the FUSE clients first (we took the chance 
> > during a time of low activity),
> > thinking that this should not cause any issues. All MDS+MON+OSDs are 
> > still on Luminous, 12.2.12.
> >
> > However, it seems quotas have stopped working - with a (FUSE) Mimic 
> > client (13.2.5), I see:
> > $ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
> > /cephfs/user/freyermu/
> > /cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
> >
> > A Luminous client (12.2.12) on the same cluster sees:
> > $ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
> > /cephfs/user/freyermu/
> > 5
> >
> > It does not seem as if the attribute has been renamed (e.g. 
> > https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py 
> > still references it, same for the docs),
> > and I have to assume the clients also do not enforce quota if they do 
> > not see it.
> >
> > Is this a known incompatibility between Mimic clients and a Luminous 
> > cluster?
> > The release notes of Mimic only mention that quota support was added to 
> > the kernel client, but nothing else quota related catches my eye.
> >
> > Cheers,
> >   Oliver
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>>
> >>>
> >>> --
> >>> Oliver Freyermuth
> >>> Universität Bonn
> >>> Physikalisches Institut, Raum 1.047
> >>> Nußallee 12
> >>> 53115 Bonn
> >>> --
> >>> Tel.: +49 228 73 2367
> >>> Fax:  +49 228 73 7869
> >>> --
> >>>
> >
> >
>

Re: [ceph-users] Luminous OSD: replace block.db partition

2019-05-27 Thread Yury Shevchuk
Hi Swami,

In Luminous you will have to delete and re-create the OSD with the
desired block.db size. Please follow this link for details:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034805.html
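In rough outline it comes down to something like this (only a sketch; the OSD
id, data device and new db partition below are placeholders, and you should let
the cluster handle the rebalance/backfill around it):

```
ceph osd out 12
systemctl stop ceph-osd@12
ceph osd purge 12 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdc
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p4
```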

-- Yury

PS [cross-posting to ceph-devel removed]

On Mon, May 27, 2019 at 05:37:02PM +0530, M Ranga Swami Reddy wrote:
> Hello - I have created an OSD with a 20G block.db, and now I want to change
> the block.db to 100G.
> Please let us know if there is a process for this.
> 
> PS: Ceph version 12.2.4 with bluestore backend.
> 
> Thanks
> Swami

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: Luminous OSD: replace block.db partition

2019-05-27 Thread M Ranga Swami Reddy
-- Forwarded message -
From: M Ranga Swami Reddy 
Date: Mon, May 27, 2019 at 5:37 PM
Subject: Luminous OSD: replace block.db partition
To: ceph-devel , ceph-users 


Hello - I have created an OSD with a 20G block.db, and now I want to change the
block.db to 100G.
Please let us know if there is a process for this.

PS: Ceph version 12.2.4 with bluestore backend.

Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] large omap object in usage_log_pool

2019-05-27 Thread shubjero
Thanks, Casey. This helped me understand the purpose of this pool. I
trimmed the usage logs, which reduced the number of keys stored in that
index significantly, and I may even disable the usage log entirely as I
don't believe we use it for anything.
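For anyone else cleaning this up, the trim and the disable look roughly like
this (the uid and dates are placeholders; a full trim of all users may
additionally require --yes-i-really-mean-it):

```
# trim usage entries for one user up to a given date
radosgw-admin usage trim --uid=someuser --start-date=2017-01-01 --end-date=2019-05-01

# disable usage logging entirely, in ceph.conf on the RGW nodes:
#   rgw enable usage log = false
```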

On Fri, May 24, 2019 at 3:51 PM Casey Bodley  wrote:
>
>
> On 5/24/19 1:15 PM, shubjero wrote:
> > Thanks for chiming in Konstantin!
> >
> > Wouldn't setting this value to 0 disable the sharding?
> >
> > Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/
> >
> > rgw override bucket index max shards
> > Description: Represents the number of shards for the bucket index
> > object, a value of zero indicates there is no sharding. It is not
> > recommended to set a value too large (e.g. thousand) as it increases
> > the cost for bucket listing. This variable should be set in the client
> > or global sections so that it is automatically applied to
> > radosgw-admin commands.
> > Type: Integer
> > Default: 0
> >
> > rgw dynamic resharding is enabled:
> > ceph daemon mon.controller1 config show | grep rgw_dynamic_resharding
> >  "rgw_dynamic_resharding": "true",
> >
> > I'd like to know more about the purpose of our .usage pool and the
> > 'usage_log_pool' in general as I cant find much about this component
> > of ceph.
>
> You can find docs for the usage log at
> http://docs.ceph.com/docs/master/radosgw/admin/#usage
>
> Unless trimmed, the usage log will continue to grow. If you aren't using
> it, I'd recommend turning it off and trimming it all.
>
> >
> > On Thu, May 23, 2019 at 11:24 PM Konstantin Shalygin  wrote:
> >> in the config.
> >> ```"rgw_override_bucket_index_max_shards": "8",```. Should this be
> >> increased?
> >>
> >> Should be decreased to default `0`, I think.
> >>
> >> Modern Ceph releases resolve large omaps automatically via bucket dynamic 
> >> resharding:
> >>
> >> ```
> >>
> >> {
> >>  "option": {
> >>  "name": "rgw_dynamic_resharding",
> >>  "type": "bool",
> >>  "level": "basic",
> >>  "desc": "Enable dynamic resharding",
> >>  "long_desc": "If true, RGW will dynamically increase the number of 
> >> shards in buckets that have a high number of objects per shard.",
> >>  "default": true,
> >>  "daemon_default": "",
> >>  "tags": [],
> >>  "services": [
> >>  "rgw"
> >>  ],
> >>  "see_also": [
> >>  "rgw_max_objs_per_shard"
> >>  ],
> >>  "min": "",
> >>  "max": ""
> >>  }
> >> }
> >> ```
> >>
> >> ```
> >>
> >> {
> >>  "option": {
> >>  "name": "rgw_max_objs_per_shard",
> >>  "type": "int64_t",
> >>  "level": "basic",
> >>  "desc": "Max objects per shard for dynamic resharding",
> >>  "long_desc": "This is the max number of objects per bucket index 
> >> shard that RGW will allow with dynamic resharding. RGW will trigger an 
> >> automatic reshard operation on the bucket if it exceeds this number.",
> >>  "default": 100000,
> >>  "daemon_default": "",
> >>  "tags": [],
> >>  "services": [
> >>  "rgw"
> >>  ],
> >>  "see_also": [
> >>  "rgw_dynamic_resharding"
> >>  ],
> >>  "min": "",
> >>  "max": ""
> >>  }
> >> }
> >> ```
> >>
> >>
> >> So when your bucket reaches the next 100k objects, RGW will reshard this
> >> bucket automatically.
> >>
> >> Some old buckets may not be sharded, like your ancient ones from Giant. You
> >> can check the fill status like this: `radosgw-admin bucket limit check | jq
> >> '.[]'`. If some buckets are not resharded you can reshard them by hand via
> >> `radosgw-admin reshard add ...`. Also, there may be some stale reshard
> >> instances (fixed in ~12.2.11); you can check them via `radosgw-admin
> >> reshard stale-instances list` and then remove them via `reshard stale-instances
> >> rm`.
> >>
> >>
> >>
> >> k
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Luminous OSD: replace block.db partition

2019-05-27 Thread M Ranga Swami Reddy
Hello - I have created an OSD with a 20G block.db, and now I want to change the
block.db to 100G.
Please let us know if there is a process for this.

PS: Ceph version 12.2.4 with bluestore backend.

Thanks
Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MDS hangs in "heartbeat_map" deadlock

2019-05-27 Thread Stefan Kooman
Quoting Stefan Kooman (ste...@bit.nl):
> Hi Patrick,
> 
> Quoting Stefan Kooman (ste...@bit.nl):
> > Quoting Stefan Kooman (ste...@bit.nl):
> > > Quoting Patrick Donnelly (pdonn...@redhat.com):
> > > > Thanks for the detailed notes. It looks like the MDS is stuck
> > > > somewhere it's not even outputting any log messages. If possible, it'd
> > > > be helpful to get a coredump (e.g. by sending SIGQUIT to the MDS) or,
> > > > if you're comfortable with gdb, a backtrace of any threads that look
> > > > suspicious (e.g. not waiting on a futex) including `info threads`.
> > 
> > Today the issue reappeared (after being absent for ~ 3 weeks). This time
> > the standby MDS could take over and would not get into a deadlock
> > itself. We made gdb traces again, which you can find over here:
> > 
> > https://8n1.org/14011/d444
> 
> We are still seeing these crashes occur ~ every 3 weeks or so. Have you
> found the time to look into the backtraces / gdb dumps?

We have not seen this issue at all for the past three months. We have
updated the cluster to 12.2.11 in the meantime, but we are not sure whether
that is related. Hopefully it stays away.

FYI,

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/    Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth

Am 27.05.19 um 12:48 schrieb Oliver Freyermuth:

Am 27.05.19 um 11:57 schrieb Dan van der Ster:

On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
 wrote:


Dear Dan,

thanks for the quick reply!

Am 27.05.19 um 11:44 schrieb Dan van der Ster:

Hi Oliver,

We saw the same issue after upgrading to mimic.

IIRC we could make the max_bytes xattr visible by touching an empty
file in the dir (thereby updating the dir inode).

e.g. touch  /cephfs/user/freyermu/.quota; rm  /cephfs/user/freyermu/.quota


sadly, no, not even with sync's in between:
-
$ touch /cephfs/user/freyermu/.quota; sync; rm -f /cephfs/user/freyermu/.quota; 
sync; getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
-
Also restarting the FUSE client after that does not change it. Maybe this 
requires the rest of the cluster to be upgraded to work?
I'm just guessing here, but maybe the MDS needs the file creation / update of the 
directory inode to "update" the way the quota attributes are exported. If 
something changed here with Mimic,
this would explain why the "touch" is needed. And this would also explain why 
this might only help if the MDS is upgraded to Mimic, too.



I think the relevant change which is causing this is the new_snaps in mimic.

Did you already enable them? `ceph fs set cephfs allow_new_snaps 1`


Good point! We wanted to enable these anyways with Mimic.

I've enabled it just now (since servers are still Luminous, that required 
"--yes-i-really-mean-it") but sadly, the max_bytes attribute is still not there
(also not after remounting on the client / using the file creation and deletion 
trick).


That's interesting - it suddenly started to work for one directory after 
creating a snapshot for one directory subtree on which we have quotas enabled,
and removing that snapshot again.
I can reproduce that for other directories.
So it seems enabling snapshots and snapshotting once fixes it for that 
directory tree.

If that's the case, maybe this could be added to the upgrade notes?
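In case it helps others hitting this, the workaround boils down to the
following (a sketch; the snapshot name is arbitrary and allow_new_snaps must
already be enabled):

```
mkdir /cephfs/user/freyermu/.snap/quota-refresh
rmdir /cephfs/user/freyermu/.snap/quota-refresh
getfattr --absolute-names --only-values -n ceph.quota.max_bytes /cephfs/user/freyermu/
```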

Cheers,
Oliver



Cheers,
 Oliver



-- dan



We have scheduled the remaining parts of the upgrade for Wednesday, and worst 
case could survive until then without quota enforcement, but it's a really 
strange and unexpected incompatibility.

Cheers,
 Oliver



Does that work?

-- dan


On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
 wrote:


Dear Cephalopodians,

in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
(13.2.5), we have upgraded the FUSE clients first (we took the chance during a 
time of low activity),
thinking that this should not cause any issues. All MDS+MON+OSDs are still on 
Luminous, 12.2.12.

However, it seems quotas have stopped working - with a (FUSE) Mimic client 
(13.2.5), I see:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute

A Luminous client (12.2.12) on the same cluster sees:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
5

It does not seem as if the attribute has been renamed (e.g. 
https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py still 
references it, same for the docs),
and I have to assume the clients also do not enforce quota if they do not see 
it.

Is this a known incompatibility between Mimic clients and a Luminous cluster?
The release notes of Mimic only mention that quota support was added to the 
kernel client, but nothing else quota related catches my eye.

Cheers,
  Oliver

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax:  +49 228 73 7869
--





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax:  +49 228 73 7869
--



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth

Am 27.05.19 um 11:57 schrieb Dan van der Ster:

On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
 wrote:


Dear Dan,

thanks for the quick reply!

Am 27.05.19 um 11:44 schrieb Dan van der Ster:

Hi Oliver,

We saw the same issue after upgrading to mimic.

IIRC we could make the max_bytes xattr visible by touching an empty
file in the dir (thereby updating the dir inode).

e.g. touch  /cephfs/user/freyermu/.quota; rm  /cephfs/user/freyermu/.quota


sadly, no, not even with sync's in between:
-
$ touch /cephfs/user/freyermu/.quota; sync; rm -f /cephfs/user/freyermu/.quota; 
sync; getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
-
Also restarting the FUSE client after that does not change it. Maybe this 
requires the rest of the cluster to be upgraded to work?
I'm just guessing here, but maybe the MDS needs the file creation / update of the 
directory inode to "update" the way the quota attributes are exported. If 
something changed here with Mimic,
this would explain why the "touch" is needed. And this would also explain why 
this might only help if the MDS is upgraded to Mimic, too.



I think the relevant change which is causing this is the new_snaps in mimic.

Did you already enable them? `ceph fs set cephfs allow_new_snaps 1`


Good point! We wanted to enable these anyways with Mimic.

I've enabled it just now (since servers are still Luminous, that required 
"--yes-i-really-mean-it") but sadly, the max_bytes attribute is still not there
(also not after remounting on the client / using the file creation and deletion 
trick).

Cheers,
Oliver



-- dan



We have scheduled the remaining parts of the upgrade for Wednesday, and worst 
case could survive until then without quota enforcement, but it's a really 
strange and unexpected incompatibility.

Cheers,
 Oliver



Does that work?

-- dan


On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
 wrote:


Dear Cephalopodians,

in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
(13.2.5), we have upgraded the FUSE clients first (we took the chance during a 
time of low activity),
thinking that this should not cause any issues. All MDS+MON+OSDs are still on 
Luminous, 12.2.12.

However, it seems quotas have stopped working - with a (FUSE) Mimic client 
(13.2.5), I see:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute

A Luminous client (12.2.12) on the same cluster sees:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
5

It does not seem as if the attribute has been renamed (e.g. 
https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py still 
references it, same for the docs),
and I have to assume the clients also do not enforce quota if they do not see 
it.

Is this a known incompatibility between Mimic clients and a Luminous cluster?
The release notes of Mimic only mention that quota support was added to the 
kernel client, but nothing else quota related catches my eye.

Cheers,
  Oliver

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax:  +49 228 73 7869
--




--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax:  +49 228 73 7869
--



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Mike Perez
Hi Peter,

Thanks for verifying this. September 17 is the new date. We moved it in order
to get a bigger room for the event after receiving good interest in it
during Cephalocon.

— Mike Perez (thingee)
On May 27, 2019, 2:56 AM -0700, Peter Wienemann wrote:
> Hi Mike,
>
> there is a date discrepancy between your announcement and Dan's
> initial announcement [0]. Which date is correct: September 16 or
> September 17?
>
> Peter
>
> [0]
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-April/034259.html
>
> On 27.05.19 11:22, Mike Perez wrote:
> > Hey everyone,
> >
> > Ceph CERN Day will be a full-day event dedicated to fostering Ceph's
> > research and non-profit user communities. The event is hosted by the
> > Ceph team from the CERN IT department.
> >
> > We invite this community to meet and discuss the status of the Ceph
> > project, recent improvements, and roadmap, and to share practical
> > experiences operating Ceph for their novel use-cases.
> >
> > We also invite potential speakers to submit an abstract on any of the
> > following topics:
> >
> > * Ceph use-cases for scientific and research applications
> > * Ceph deployments in academic or non-profit organizations
> > * Applications of CephFS or Object Storage for HPC
> > * Operational highlights, tools, procedures, or other tips you want to
> > share with the community
> >
> > The day will end with a cocktail reception.
> >
> > Visitors may be interested in combining their visit to CERN with the
> > CERN Open Days being held September 14-15.
> >
> > All event information for CFP, registration, accommodations can be
> > found on the CERN website:
> >
> > https://indico.cern.ch/event/765214/
> >
> > And thank you to Dan van der Ster for reaching out to organize this event!
> >
> > --
> > Mike Perez (thingee)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Dan van der Ster
Tuesday Sept 17 is indeed the correct day!

We had to move it by one day to get a bigger room... sorry for the confusion.

-- dan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Dan van der Ster
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
 wrote:
>
> Dear Dan,
>
> thanks for the quick reply!
>
> Am 27.05.19 um 11:44 schrieb Dan van der Ster:
> > Hi Oliver,
> >
> > We saw the same issue after upgrading to mimic.
> >
> > IIRC we could make the max_bytes xattr visible by touching an empty
> > file in the dir (thereby updating the dir inode).
> >
> > e.g. touch  /cephfs/user/freyermu/.quota; rm  /cephfs/user/freyermu/.quota
>
> sadly, no, not even with sync's in between:
> -
> $ touch /cephfs/user/freyermu/.quota; sync; rm -f 
> /cephfs/user/freyermu/.quota; sync; getfattr --absolute-names --only-values 
> -n ceph.quota.max_bytes /cephfs/user/freyermu/
> /cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
> -
> Also restarting the FUSE client after that does not change it. Maybe this 
> requires the rest of the cluster to be upgraded to work?
> I'm just guessing here, but maybe the MDS needs the file creation / update of 
> the directory inode to "update" the way the quota attributes are exported. If 
> something changed here with Mimic,
> this would explain why the "touch" is needed. And this would also explain why 
> this might only help if the MDS is upgraded to Mimic, too.
>

I think the relevant change which is causing this is the new_snaps in mimic.

Did you already enable them? `ceph fs set cephfs allow_new_snaps 1`

-- dan


> We have scheduled the remaining parts of the upgrade for Wednesday, and worst 
> case could survive until then without quota enforcement, but it's a really 
> strange and unexpected incompatibility.
>
> Cheers,
> Oliver
>
> >
> > Does that work?
> >
> > -- dan
> >
> >
> > On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
> >  wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
> >> (13.2.5), we have upgraded the FUSE clients first (we took the chance 
> >> during a time of low activity),
> >> thinking that this should not cause any issues. All MDS+MON+OSDs are still 
> >> on Luminous, 12.2.12.
> >>
> >> However, it seems quotas have stopped working - with a (FUSE) Mimic client 
> >> (13.2.5), I see:
> >> $ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
> >> /cephfs/user/freyermu/
> >> /cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
> >>
> >> A Luminous client (12.2.12) on the same cluster sees:
> >> $ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
> >> /cephfs/user/freyermu/
> >> 5
> >>
> >> It does not seem as if the attribute has been renamed (e.g. 
> >> https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py 
> >> still references it, same for the docs),
> >> and I have to assume the clients also do not enforce quota if they do not 
> >> see it.
> >>
> >> Is this a known incompatibility between Mimic clients and a Luminous 
> >> cluster?
> >> The release notes of Mimic only mention that quota support was added to 
> >> the kernel client, but nothing else quota related catches my eye.
> >>
> >> Cheers,
> >>  Oliver
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> --
> Oliver Freyermuth
> Universität Bonn
> Physikalisches Institut, Raum 1.047
> Nußallee 12
> 53115 Bonn
> --
> Tel.: +49 228 73 2367
> Fax:  +49 228 73 7869
> --
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Peter Wienemann
Hi Mike,

there is a date discrepancy between your announcement and Dan's
initial announcement [0]. Which date is correct: September 16 or
September 17?

Peter

[0]
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-April/034259.html

On 27.05.19 11:22, Mike Perez wrote:
> Hey everyone,
> 
> Ceph CERN Day will be a full-day event dedicated to fostering Ceph's
> research and non-profit user communities. The event is hosted by the
> Ceph team from the CERN IT department.
> 
> We invite this community to meet and discuss the status of the Ceph
> project, recent improvements, and roadmap, and to share practical
> experiences operating Ceph for their novel use-cases.
> 
> We also invite potential speakers to submit an abstract on any of the
> following topics:
> 
> * Ceph use-cases for scientific and research applications
> * Ceph deployments in academic or non-profit organizations
> * Applications of CephFS or Object Storage for HPC
> * Operational highlights, tools, procedures, or other tips you want to
> share with the community
> 
> The day will end with a cocktail reception.
> 
> Visitors may be interested in combining their visit to CERN with the
> CERN Open Days being held September 14-15.
> 
> All event information for CFP, registration, accommodations can be
> found on the CERN website:
> 
> https://indico.cern.ch/event/765214/
> 
> And thank you to Dan van der Ster for reaching out to organize this event!
> 
> --
> Mike Perez (thingee)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth

Dear Dan,

thanks for the quick reply!

Am 27.05.19 um 11:44 schrieb Dan van der Ster:

Hi Oliver,

We saw the same issue after upgrading to mimic.

IIRC we could make the max_bytes xattr visible by touching an empty
file in the dir (thereby updating the dir inode).

e.g. touch  /cephfs/user/freyermu/.quota; rm  /cephfs/user/freyermu/.quota


sadly, no, not even with syncs in between:
-
$ touch /cephfs/user/freyermu/.quota; sync; rm -f /cephfs/user/freyermu/.quota; 
sync; getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
-
Also restarting the FUSE client after that does not change it. Maybe this 
requires the rest of the cluster to be upgraded to work?
I'm just guessing here, but maybe the MDS needs the file creation / update of the 
directory inode to "update" the way the quota attributes are exported. If 
something changed here with Mimic,
this would explain why the "touch" is needed. And this would also explain why 
this might only help if the MDS is upgraded to Mimic, too.

We have scheduled the remaining parts of the upgrade for Wednesday, and worst 
case could survive until then without quota enforcement, but it's a really 
strange and unexpected incompatibility.

Cheers,
Oliver



Does that work?

-- dan


On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
 wrote:


Dear Cephalopodians,

in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
(13.2.5), we have upgraded the FUSE clients first (we took the chance during a 
time of low activity),
thinking that this should not cause any issues. All MDS+MON+OSDs are still on 
Luminous, 12.2.12.

However, it seems quotas have stopped working - with a (FUSE) Mimic client 
(13.2.5), I see:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute

A Luminous client (12.2.12) on the same cluster sees:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
5

It does not seem as if the attribute has been renamed (e.g. 
https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py still 
references it, same for the docs),
and I have to assume the clients also do not enforce quota if they do not see 
it.

Is this a known incompatibility between Mimic clients and a Luminous cluster?
The release notes of Mimic only mention that quota support was added to the 
kernel client, but nothing else quota related catches my eye.

Cheers,
 Oliver

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Oliver Freyermuth
Universität Bonn
Physikalisches Institut, Raum 1.047
Nußallee 12
53115 Bonn
--
Tel.: +49 228 73 2367
Fax:  +49 228 73 7869
--



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Dan van der Ster
Hi Oliver,

We saw the same issue after upgrading to mimic.

IIRC we could make the max_bytes xattr visible by touching an empty
file in the dir (thereby updating the dir inode).

e.g. touch  /cephfs/user/freyermu/.quota; rm  /cephfs/user/freyermu/.quota

Does that work?

-- dan


On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
 wrote:
>
> Dear Cephalopodians,
>
> in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
> (13.2.5), we have upgraded the FUSE clients first (we took the chance during 
> a time of low activity),
> thinking that this should not cause any issues. All MDS+MON+OSDs are still on 
> Luminous, 12.2.12.
>
> However, it seems quotas have stopped working - with a (FUSE) Mimic client 
> (13.2.5), I see:
> $ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
> /cephfs/user/freyermu/
> /cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute
>
> A Luminous client (12.2.12) on the same cluster sees:
> $ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
> /cephfs/user/freyermu/
> 5
>
> It does not seem as if the attribute has been renamed (e.g. 
> https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py still 
> references it, same for the docs),
> and I have to assume the clients also do not enforce quota if they do not see 
> it.
>
> Is this a known incompatibility between Mimic clients and a Luminous cluster?
> The release notes of Mimic only mention that quota support was added to the 
> kernel client, but nothing else quota related catches my eye.
>
> Cheers,
> Oliver
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-27 Thread Oliver Freyermuth

Dear Cephalopodians,

in the process of migrating a cluster from Luminous (12.2.12) to Mimic 
(13.2.5), we have upgraded the FUSE clients first (we took the chance during a 
time of low activity),
thinking that this should not cause any issues. All MDS+MON+OSDs are still on 
Luminous, 12.2.12.

However, it seems quotas have stopped working - with a (FUSE) Mimic client 
(13.2.5), I see:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
/cephfs/user/freyermu/: ceph.quota.max_bytes: No such attribute

A Luminous client (12.2.12) on the same cluster sees:
$ getfattr --absolute-names --only-values -n ceph.quota.max_bytes 
/cephfs/user/freyermu/
5

It does not seem as if the attribute has been renamed (e.g. 
https://github.com/ceph/ceph/blob/mimic/qa/tasks/cephfs/test_quota.py still 
references it, same for the docs),
and I have to assume the clients also do not enforce quota if they do not see 
it.

Is this a known incompatibility between Mimic clients and a Luminous cluster?
The release notes of Mimic only mention that quota support was added to the 
kernel client, but nothing else quota related catches my eye.
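For completeness: the quotas were set through the usual xattr interface,
i.e. something along the lines of (the size is just an example):

```
setfattr -n ceph.quota.max_bytes -v 1000000000000 /cephfs/user/freyermu/
```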

Cheers,
Oliver



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Multisite RGW

2019-05-27 Thread Matteo Dacrema
Hi all,

I’m planning to replace a Swift multi-region deployment with Ceph.
Right now Swift is deployed across 3 regions in Europe and the data is
replicated across those 3 regions.

Is it possible to configure Ceph to do the same?
I think I need to go with multiple zone groups within a single realm, right?
I also noticed that if I lose the master zone group the whole object storage
service might stop working. Is that right?
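For context, the single-realm / multi-zonegroup layout I have in mind would
involve objects created roughly like this (realm, zonegroup, zone names and
endpoints are placeholders; this is only a sketch of the first site, not a full
multisite setup):

```
radosgw-admin realm create --rgw-realm=europe --default
radosgw-admin zonegroup create --rgw-zonegroup=eu-west --endpoints=http://rgw-west:8000 --master --default
radosgw-admin zone create --rgw-zonegroup=eu-west --rgw-zone=eu-west-1 --endpoints=http://rgw-west:8000 --master --default
radosgw-admin period update --commit
```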

Can Ceph currently compete with Swift in terms of distributed multi-region 
object storage?

Another thing: in Swift I’m placing one replica per region. If I lose
one HDD in a region, Swift recovers the object by reading from the other regions.
Does Ceph act the same way, recovering from other regions?


Thank you
Regards
Matteo


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-05-27 Thread Mike Perez
Hey everyone,

Ceph CERN Day will be a full-day event dedicated to fostering Ceph's
research and non-profit user communities. The event is hosted by the
Ceph team from the CERN IT department.

We invite this community to meet and discuss the status of the Ceph
project, recent improvements, and roadmap, and to share practical
experiences operating Ceph for their novel use-cases.

We also invite potential speakers to submit an abstract on any of the
following topics:

* Ceph use-cases for scientific and research applications
* Ceph deployments in academic or non-profit organizations
* Applications of CephFS or Object Storage for HPC
* Operational highlights, tools, procedures, or other tips you want to
share with the community

The day will end with a cocktail reception.

Visitors may be interested in combining their visit to CERN with the
CERN Open Days being held September 14-15.

All event information for CFP, registration, accommodations can be
found on the CERN website:

https://indico.cern.ch/event/765214/

And thank you to Dan van der Ster for reaching out to organize this event!

--
Mike Perez (thingee)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] performance in a small cluster

2019-05-27 Thread Stefan Kooman
Quoting Robert Sander (r.san...@heinlein-support.de):
> Hi,
> 
> we have a small cluster at a customer's site with three nodes and 4 SSD-OSDs
> each.
> Connected with 10G the system is supposed to perform well.
> 
> rados bench shows ~450MB/s write and ~950MB/s read speeds with 4MB objects
> but only 20MB/s write and 95MB/s read with 4KB objects.
> 
> This is a little bit disappointing as the 4K performance is also seen in KVM
> VMs using RBD.
> 
> Is there anything we can do to improve performance with small objects /
> block sizes?

Josh gave a talk about this:
https://static.sched.com/hosted_files/cephalocon2019/10/Optimizing%20Small%20Ceph%20Clusters.pdf

TL;DR: 
- For small clusters use relatively more PGs than for large clusters
- Make sure your cluster is well balanced, and this script might
be useful:
https://github.com/JoshSalomon/Cephalocon-2019/blob/master/pool_pgs_osd.sh

Josh is also tuning the objecter_* attributes (if you have plenty of
CPU/Memory):

objecter_inflight_ops = 5120
objecter_inflight_op_bytes = 524288000 (512 * 1,024,000)
## You can multiply / divide both with the same factor
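If you want to try those, a minimal sketch of where they would go (assuming the
librbd clients, i.e. the KVM hosts, read their ceph.conf):

```
[client]
    objecter_inflight_ops = 5120
    objecter_inflight_op_bytes = 524288000
```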

Some more tuning tips in the presentation by Wido/Piotr that might be
useful:
https://static.sched.com/hosted_files/cephalocon2019/d6/ceph%20on%20nvme%20barcelona%202019.pdf

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/    Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-27 Thread Stefan Kooman
Quoting Robert Ruge (robert.r...@deakin.edu.au):
> Ceph newbie question.
> 
> I have a disparity between the free space that my cephfs file system
> is showing and what ceph df is showing.  As you can see below my
> cephfs file system says there is 9.5TB free however ceph df says there
> is 186TB which with replication size 3 should equate to 62TB free
> space.  I guess the basic question is how can I get cephfs to see and
> use all of the available space?  I recently changed my number of pg's
> on the cephfs_data pool from 2048 to 4096 and this gave me another 8TB
> so do I keep increasing the number of pg's or is there something else
> that I am missing? I have only been running ceph for ~6 months so I'm
> relatively new to it all and not being able to use all of the space is
> just plain bugging me.

My guess here is that you have a lot of small files in your cephfs, is that
right? Do you have HDD or SSD/NVMe?

Mohamad Gebai gave a talk about this at Cephalocon 2019:
https://static.sched.com/hosted_files/cephalocon2019/d2/cephalocon-2019-mohamad-gebai.pdf
for the slides and the recording:
https://www.youtube.com/watch?v=26FbUEbiUrw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=29&t=0s

TL;DR: there are bluestore_min_alloc_size_ssd and bluestore_min_alloc_size_hdd
options, which default to 16K for SSD and 64K for HDD, respectively. With lots
of small objects this can add up to *a lot* of overhead. You can change both to 4K:

bluestore min alloc size ssd = 4096
bluestore min alloc size hdd = 4096

You will have to rebuild _all_ of your OSDs though.
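To see whether this overhead is really what is eating the space, you can compare
allocated vs. stored bytes on an OSD before rebuilding anything (a sketch; run on
an OSD host and adjust the OSD id):

```
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
ceph daemon osd.0 perf dump | grep -E '"bluestore_(allocated|stored)"'
```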

Here is another thread about this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/thread.html#24801

Gr. Stefan


-- 
| BIT BV  http://www.bit.nl/    Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com