[ceph-users] Question about PGMonitor::waiting_for_finished_proposal

2017-05-31 Thread 许雪寒
Hi, everyone. 

Recently I have been reading the source code of the Monitor. I found that, in
the PGMonitor::prepare_pg_stats() method, a callback C_Stats is put into
PGMonitor::waiting_for_finished_proposal. I wonder: if a previous PGMap
incremental is in the Paxos propose/accept phase at the moment C_Stats is put
into PGMonitor::waiting_for_finished_proposal, would this C_Stats be called
when that PGMap incremental's Paxos procedure completes and
PaxosService::_active() is invoked? If so, there exists the possibility that an
MPGStats request gets responded to before its own update has gone through the
Paxos procedure.
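
To make the scenario concrete, this is the pattern I have in mind (a
simplified sketch of my understanding, not the actual Ceph source):

// simplified sketch: callbacks queued while an earlier proposal is still
// in flight would all fire when that proposal finishes
std::list<Context*> waiting_for_finished_proposal;

void wait_for_finished_proposal(Context *c) {
  waiting_for_finished_proposal.push_back(c);   // C_Stats is queued here
}

void _active() {                 // runs when the pending proposal commits
  for (Context *c : waiting_for_finished_proposal)
    c->complete(0);              // would this reply to the MPGStats sender?
  waiting_for_finished_proposal.clear();
}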

Is this right? Thank you:-)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph.conf and monitors

2017-05-31 Thread Curt
Hello all,

Had a recent issue with Ceph monitors and OSDs when connecting to the
second/third monitor.  I don't currently have any debug logs to paste, but
wanted to get feedback on my ceph.conf for the monitors.

This is the Giant release.

Here's the error from monB that stuck out: "osd_map(174373..174373 src has
173252..174373)...failed lossy con, dropping message".  If I understand
that correctly, the OSD had an older version of the map than the mon did?
The OSD logs show auth errors (decoding block, failed verifying auth
reply).  Changing the ceph.conf to point only to monA fixed the issue, but
it's only a temporary workaround.

Any suggestions on the cause or recommended fix for this?

Original conf (IPs changed):
mon_initial_members = monitorA
mon_host = 1.1.1.1 ,1.1.1.2, 1.1.1.3

Conf now:
mon_initial_members = monitorA
mon_host = 1.1.1.1

Does mon_initial_members need to list all the mons?  Any other config places I
should check?
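
For reference, this is what I would guess the conf should look like if all
mons belong in both settings (monitor hostnames here are placeholders):

mon_initial_members = monitorA, monitorB, monitorC
mon_host = 1.1.1.1,1.1.1.2,1.1.1.3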

Cheers,
Curt
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread David Turner
You are trying to use the kernel client to map the RBD in Jewel.  Jewel
RBDs have features enabled by default that require you to run kernel 4.9 or
newer.  You can disable the features that require the newer kernel, but
that's not ideal, as those new features are very nice to have.  You can
use rbd-fuse to mount them instead; it is up to date for your Ceph version.
I would probably go the rbd-fuse route in your position, unless upgrading
your kernel to 4.9 is an option.
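
If you go the feature-disable route, it would be something like this (pool
and image names taken from your output; check "rbd info pool1/pool1-img1"
first to see which features are actually enabled):

rbd feature disable pool1/pool1-img1 deep-flatten fast-diff object-map exclusive-lock
rbd map pool1-img1 -p pool1

Or for the rbd-fuse route (mountpoint is just an example):

rbd-fuse -p pool1 /mnt/rbd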

On Wed, May 31, 2017 at 9:36 AM Shambhu Rajak  wrote:

> Hi Cepher,
>
> I have created a pool and trying to create rbd image on the ceph client,
> while mapping the rbd image it fails as:
>
>
>
> ubuntu@shambhucephnode0:~$ sudo rbd map pool1-img1 -p pool1
>
> rbd: sysfs write failed
>
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
>
> rbd: map failed: (5) Input/output error
>
>
>
>
>
> so I checked the dmesg as suggested:
>
>
>
> ubuntu@shambhucephnode0:~$ dmesg | tail
>
> [788743.741818] libceph: mon2 10.186.210.243:6789 feature set mismatch,
> my 4a042a42 < server's 2004a042a42, missing 200
>
> [788743.746352] libceph: mon2 10.186.210.243:6789 socket error on read
>
> [788753.757934] libceph: mon2 10.186.210.243:6789 feature set mismatch,
> my 4a042a42 < server's 2004a042a42, missing 200
>
> [788753.777578] libceph: mon2 10.186.210.243:6789 socket error on read
>
> [788763.773857] libceph: mon0 10.186.210.241:6789 feature set mismatch,
> my 4a042a42 < server's 2004a042a42, missing 200
>
> [788763.780539] libceph: mon0 10.186.210.241:6789 socket error on read
>
> [788773.790371] libceph: mon1 10.186.210.242:6789 feature set mismatch,
> my 4a042a42 < server's 2004a042a42, missing 200
>
> [788773.811208] libceph: mon1 10.186.210.242:6789 socket error on read
>
> [788783.805987] libceph: mon1 10.186.210.242:6789 feature set mismatch,
> my 4a042a42 < server's 2004a042a42, missing 200
>
> [788783.826907] libceph: mon1 10.186.210.242:6789 socket error on read
>
>
>
> I am not sure what is going wrong here, my cluster health is HEALTH_OK
> though.
>
>
>
>
>
>
>
> My configuration details:
>
> Ceph version: ceph version 10.2.7
> (50e863e0f4bc8f4b9e31156de690d765af245185)
>
> OSD: 12 on 3 storage nodes
>
> Monitor : 3 running on the 3 osd nodes
>
>
>
> OS:
>
> No LSB modules are available.
>
> Distributor ID: Ubuntu
>
> Description:Ubuntu 14.04.5 LTS
>
> Release:14.04
>
> Codename:   trusty
>
>
>
> Ceph Client Kernel Version:
>
> Linux version 3.13.0-95-generic (buildd@lgw01-58) (gcc version 4.8.4
> (Ubuntu 4.8.4-2ubuntu1~14.04.3) )
>
>
>
> KRBD:
>
> ubuntu@shambhucephnode0:~$ /sbin/modinfo rbd
>
> filename:   /lib/modules/3.13.0-95-generic/kernel/drivers/block/rbd.ko
>
> license:GPL
>
> author: Jeff Garzik 
>
> description:rados block device
>
> author: Yehuda Sadeh 
>
> author: Sage Weil 
>
> author: Alex Elder 
>
> srcversion: 48BFBD5C3D31D799F01D218
>
> depends:libceph
>
> intree: Y
>
> vermagic:   3.13.0-95-generic SMP mod_unload modversions
>
> signer: Magrathea: Glacier signing key
>
> sig_key:51:D5:D7:73:F1:07:BA:1B:C0:9D:33:68:38:C4:3C:DE:74:9E:4E:05
>
> sig_hashalgo:   sha512
>
>
>
> Thanks,
>
> Shambhu Rajak
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw refuses upload when Content-Type missing from POST policy

2017-05-31 Thread Dave Holland
Hi,

I'm trying to get files into radosgw (Ubuntu Ceph package
10.2.3-0ubuntu0.16.04.2) using Fine Uploader
https://github.com/FineUploader/fine-uploader but I'm running into
difficulties in the case where the uploaded file has a filename
extension which the browser can't map to a MIME type (or, no extension
at all).

The radosgw replies with a 403 error, "Policy missing condition:
Content-Type". Examining the policy which the browser sends as part of
the multipart data confirms that there is no Content-Type. (The POST and
multipart do have Content-Type headers.) The same code and POST works
fine against AWS S3. Should radosgw require a Content-Type in the POST
policy when AWS S3 doesn't? It seems that for maximum compatibility, it
shouldn't.
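
For reference, this is roughly what the policy document would need to carry
to satisfy radosgw as it stands, as far as I can tell (bucket name and other
values are just illustrative):

{
  "expiration": "2017-06-01T12:00:00.000Z",
  "conditions": [
    {"bucket": "my-bucket"},
    ["starts-with", "$key", "uploads/"],
    ["starts-with", "$Content-Type", ""]
  ]
}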

The bucket's CORS policy allows "*", but it doesn't work when Content-Type is
listed explicitly either.

I put a radosgw debug=20 log of the successful OPTIONS call and failing
POST call here:
https://docs.google.com/document/d/1i3exJSil1xj14ZrDOF_oM9eZC238gnNVAsnaZ-Pkvzo/edit?usp=sharing

Happy to provide other debug info if necessary.

thanks,
Dave
-- 
** Dave Holland ** Systems Support -- Informatics Systems Group **
** 01223 496923 ** The Sanger Institute, Hinxton, Cambridge, UK **


-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Adding a new node to a small cluster (size = 2)

2017-05-31 Thread David Turner
How full is the cluster before adding the third node?  If it's over 65% I
would recommend adding 2 new nodes instead of 1.  The reason for that is
that if you lose one of the nodes, your cluster will try to backfill
back onto the remaining 2 nodes and end up way too full.  There is no rule or
recommendation about having a number of servers that is a multiple of your
replica count.  3 servers is fine for a replica size of 2.
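
As a rough back-of-the-envelope (assuming 3 equal nodes and data spread
evenly): at 65% raw utilisation, losing one node means the surviving 2 nodes
have to absorb its data, so each ends up around 3 x 65% / 2 = ~97.5% full,
which is past the default full ratio and will block writes.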

On Wed, May 31, 2017 at 11:42 AM Kevin Olbrich  wrote:

> Hi!
>
> A customer is running a small two node ceph cluster with 14 disks each.
> He has min_size 1 and size 2 and it is only used for backups.
>
> If we add a third member with 14 identical disks and keep size = 2,
> replicas should be distributed evenly, right?
> Or is an odd number of hosts inadvisable or not workable?
>
> Kind regards,
> Kevin
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Adding a new node to a small cluster (size = 2)

2017-05-31 Thread Kevin Olbrich
Hi!

A customer is running a small two node ceph cluster with 14 disks each.
He has min_size 1 and size 2 and it is only used for backups.

If we add a third member with 14 identical disks and keep size = 2,
replicas should be distributed evenly, right?
Or is an odd number of hosts inadvisable or not workable?

Kind regards,
Kevin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread Shambhu Rajak
Hi Cepher,
I have created a pool and am trying to create an rbd image on the ceph client;
while mapping the rbd image it fails as:

ubuntu@shambhucephnode0:~$ sudo rbd map pool1-img1 -p pool1
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (5) Input/output error


so I checked the dmesg as suggested:

ubuntu@shambhucephnode0:~$ dmesg | tail
[788743.741818] libceph: mon2 10.186.210.243:6789 feature set mismatch, my 
4a042a42 < server's 2004a042a42, missing 200
[788743.746352] libceph: mon2 10.186.210.243:6789 socket error on read
[788753.757934] libceph: mon2 10.186.210.243:6789 feature set mismatch, my 
4a042a42 < server's 2004a042a42, missing 200
[788753.777578] libceph: mon2 10.186.210.243:6789 socket error on read
[788763.773857] libceph: mon0 10.186.210.241:6789 feature set mismatch, my 
4a042a42 < server's 2004a042a42, missing 200
[788763.780539] libceph: mon0 10.186.210.241:6789 socket error on read
[788773.790371] libceph: mon1 10.186.210.242:6789 feature set mismatch, my 
4a042a42 < server's 2004a042a42, missing 200
[788773.811208] libceph: mon1 10.186.210.242:6789 socket error on read
[788783.805987] libceph: mon1 10.186.210.242:6789 feature set mismatch, my 
4a042a42 < server's 2004a042a42, missing 200
[788783.826907] libceph: mon1 10.186.210.242:6789 socket error on read

I am not sure what is going wrong here, my cluster health is HEALTH_OK though.



My configuration details:
Ceph version: ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)
OSD: 12 on 3 storage nodes
Monitor : 3 running on the 3 osd nodes

OS:
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 14.04.5 LTS
Release:14.04
Codename:   trusty

Ceph Client Kernel Version:
Linux version 3.13.0-95-generic (buildd@lgw01-58) (gcc version 4.8.4 (Ubuntu 
4.8.4-2ubuntu1~14.04.3) )

KRBD:
ubuntu@shambhucephnode0:~$ /sbin/modinfo rbd
filename:   /lib/modules/3.13.0-95-generic/kernel/drivers/block/rbd.ko
license:GPL
author: Jeff Garzik 
description:rados block device
author: Yehuda Sadeh 
author: Sage Weil 
author: Alex Elder 
srcversion: 48BFBD5C3D31D799F01D218
depends:libceph
intree: Y
vermagic:   3.13.0-95-generic SMP mod_unload modversions
signer: Magrathea: Glacier signing key
sig_key:51:D5:D7:73:F1:07:BA:1B:C0:9D:33:68:38:C4:3C:DE:74:9E:4E:05
sig_hashalgo:   sha512

Thanks,
Shambhu Rajak
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-31 Thread Mark Nelson



On 05/31/2017 05:21 AM, nokia ceph wrote:

+ ceph-devel ..

$ps -ef | grep 294
ceph 3539720   1 14 08:04 ?00:16:35 /usr/bin/ceph-osd -f
--cluster ceph --id 294 --setuser ceph --setgroup ceph

$gcore -o coredump-osd  3539720


(gdb) bt
#0  0x7f5ef68f56d5 in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1  0x7f5ef9cc45ab in ceph::logging::Log::entry() ()
#2  0x7f5ef68f1dc5 in start_thread () from /lib64/libpthread.so.0
#3  0x7f5ef57e173d in clone () from /lib64/libc.so.6


Can you do a thread apply all bt instead so we can see what the 
tp_osd_tp threads were doing?  That might be the bigger clue.
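
For example, something along these lines against the core you already grabbed
(file name and PID taken from your earlier commands):

gdb /usr/bin/ceph-osd coredump-osd.3539720
(gdb) thread apply all bt

or directly against the running process:

gdb -p 3539720 -batch -ex 'thread apply all bt' > osd-294-threads.txt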


Mark




*2017-05-31 10:11:51.064495 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064496 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064497 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064497 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064498 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
2017-05-31 10:11:51.957092 7f5eef88a700 -1 osd.294 1004 heartbeat_check:
no reply from 10.50.62.154:6858  osd.64 since
back 2017-05-31 09:58:53.016145 front 2017-05-31 09:58:53.016145 (cutoff
2017-05-31 10:11:31.957089)
2017-05-31 10:11:51.957114 7f5eef88a700 -1 osd.294 1004 heartbeat_check:
no reply from 10.50.62.154:6931  osd.82 since
back 2017-05-31 10:04:48.204500 front 2017-05-31 10:04:48.204500 (cutoff
2017-05-31 10:11:31.957089)
2017-05-31 10:11:51.957121 7f5eef88a700 -1 osd.294 1004 heartbeat_check:
no reply from 10.50.62.152:6929  osd.162 since
back 2017-05-31 09:57:37.451595 front 2017-05-31 09:57:37.451595 (cutoff
2017-05-31 10:11:31.957089)



Thanks
Jayaram

On Tue, May 30, 2017 at 7:33 PM, nokia ceph wrote:

Hello Mark,

Yes, this issue happens once the test/write has run for 60 secs,
which corresponds to the config value "threadpool_default_timeout = 60".
Do you require the down OSD's coredump to analyse the tp_osd_tp state?
Please specify which process dump you would require for the analysis.

Like ,
#gcore 

or using wallclock profiler, I'm not much aware how to use this tool.

Thanks
Jayaram

On Tue, May 30, 2017 at 6:57 PM, Mark Nelson wrote:

On 05/30/2017 05:07 AM, nokia ceph wrote:

Hello Mark,

I am able to reproduce this problem every time.


Ok, next question, does it happen 60s after starting the 200MB/s
load, or does it take a while?  Sounds like it's pretty random
across OSDs? I'm thinking we want to figure out what state the
tp_osd_tp threads are in when this is happening (maybe via a gdb
bt or using the wallclock profiler to gather several samples)
and then figure out via the logs what led to the chain of
events that put it in that state.

Mark


Env:-- 5 node, v12.0.3, EC 4+1 bluestore , RHEL 7.3 -
3.10.0-514.el7.x86_64

Tested with debug bluestore = 20...

From ceph watch
===

2017-05-30 08:57:33.510794 mon.0 [INF] pgmap v15649: 8192
pgs: 8192
active+clean; 774 GB data, 1359 GB used, 1854 TB / 1855 TB
avail; 35167
B/s rd, 77279 kB/s wr, 726 op/s
2017-05-30 08:57:35.134604 mon.0 [INF] pgmap v15650: 8192
pgs: 8192
active+clean; 774 GB data, 1359 GB used, 1854 TB / 1855 TB
avail; 19206
B/s rd, 63835 kB/s wr, 579 op/s
2017-05-30 08:57:36.295086 mon.0 [INF] pgmap v15651: 8192
pgs: 8192
active+clean; 774 GB data, 1359 GB used, 1854 TB / 1855 TB
avail; 10999
B/s rd, 44093 kB/s wr, 379 op/s
2017-05-30 08:57:37.228422 mon.0 [INF] osd.243 10.50.62.154:6895/1842853
failed (4 reporters from different host after 20.062159 >= grace 20.00)
2017-05-30 08:57:37.234308 mon.0 [INF] osd.19 10.50.62.152:6856/1940715
failed (4 reporters from different host after 20.000234 >= grace 20.00)
2017-05-30 08:57:37.368342 mon.0 [INF] pgmap v15652: 8192
pgs: 8192
active+clean; 774 GB data, 1360 GB 

Re: [ceph-users] Lumionous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-31 Thread nokia ceph
+ ceph-devel ..

$ps -ef | grep 294
ceph 3539720   1 14 08:04 ?00:16:35 /usr/bin/ceph-osd -f
--cluster ceph --id 294 --setuser ceph --setgroup ceph

$gcore -o coredump-osd  3539720


(gdb) bt
#0  0x7f5ef68f56d5 in pthread_cond_wait@@GLIBC_2.3.2 () from
/lib64/libpthread.so.0
#1  0x7f5ef9cc45ab in ceph::logging::Log::entry() ()
#2  0x7f5ef68f1dc5 in start_thread () from /lib64/libpthread.so.0
#3  0x7f5ef57e173d in clone () from /lib64/libc.so.6


*2017-05-31 10:11:51.064495 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064496 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064497 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064497 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
*2017-05-31 10:11:51.064498 7f5ef383b700  1 heartbeat_map is_healthy
'tp_osd_tp thread tp_osd_tp' had timed out after 60*
2017-05-31 10:11:51.957092 7f5eef88a700 -1 osd.294 1004 heartbeat_check: no
reply from 10.50.62.154:6858 osd.64 since back 2017-05-31 09:58:53.016145
front 2017-05-31 09:58:53.016145 (cutoff 2017-05-31 10:11:31.957089)
2017-05-31 10:11:51.957114 7f5eef88a700 -1 osd.294 1004 heartbeat_check: no
reply from 10.50.62.154:6931 osd.82 since back 2017-05-31 10:04:48.204500
front 2017-05-31 10:04:48.204500 (cutoff 2017-05-31 10:11:31.957089)
2017-05-31 10:11:51.957121 7f5eef88a700 -1 osd.294 1004 heartbeat_check: no
reply from 10.50.62.152:6929 osd.162 since back 2017-05-31 09:57:37.451595
front 2017-05-31 09:57:37.451595 (cutoff 2017-05-31 10:11:31.957089)



Thanks
Jayaram

On Tue, May 30, 2017 at 7:33 PM, nokia ceph 
wrote:

> Hello Mark,
>
> Yes, this issue happens once the test/write has run for 60 secs, which
> corresponds to the config value "threadpool_default_timeout = 60". Do you
> require the down OSD's coredump to analyse the tp_osd_tp state? Please
> specify which process dump you would require for the analysis.
>
> Like ,
> #gcore 
>
> or using wallclock profiler, I'm not much aware how to use this tool.
>
> Thanks
> Jayaram
>
> On Tue, May 30, 2017 at 6:57 PM, Mark Nelson  wrote:
>
>> On 05/30/2017 05:07 AM, nokia ceph wrote:
>>
>>> Hello Mark,
>>>
>>> I am able to reproduce this problem every time.
>>>
>>
>> Ok, next question, does it happen 60s after starting the 200MB/s load, or
>> does it take a while?  Sounds like it's pretty random across OSDs? I'm
>> thinking we want to figure out what state the tp_osd_tp threads are in when
>> this is happening (maybe via a gdb bt or using the wallclock profiler to
>> gather several samples) and then figure out via the logs what led to the
>> chain of events that put it in that state.
>>
>> Mark
>>
>>
>>> Env:-- 5 node, v12.0.3, EC 4+1 bluestore , RHEL 7.3 -
>>> 3.10.0-514.el7.x86_64
>>>
>>> Tested with debug bluestore = 20...
>>>
>>> From ceph watch
>>> ===
>>>
>>> 2017-05-30 08:57:33.510794 mon.0 [INF] pgmap v15649: 8192 pgs: 8192
>>> active+clean; 774 GB data, 1359 GB used, 1854 TB / 1855 TB avail; 35167
>>> B/s rd, 77279 kB/s wr, 726 op/s
>>> 2017-05-30 08:57:35.134604 mon.0 [INF] pgmap v15650: 8192 pgs: 8192
>>> active+clean; 774 GB data, 1359 GB used, 1854 TB / 1855 TB avail; 19206
>>> B/s rd, 63835 kB/s wr, 579 op/s
>>> 2017-05-30 08:57:36.295086 mon.0 [INF] pgmap v15651: 8192 pgs: 8192
>>> active+clean; 774 GB data, 1359 GB used, 1854 TB / 1855 TB avail; 10999
>>> B/s rd, 44093 kB/s wr, 379 op/s
>>> 2017-05-30 08:57:37.228422 mon.0 [INF] osd.243 10.50.62.154:6895/1842853
>>> failed (4 reporters from different host after 20.062159 >= grace 20.00)
>>> 2017-05-30 08:57:37.234308 mon.0 [INF] osd.19 10.50.62.152:6856/1940715
>>> failed (4 reporters from different host after 20.000234 >= grace 20.00)
>>> 2017-05-30 08:57:37.368342 mon.0 [INF] pgmap v15652: 8192 pgs: 8192
>>> active+clean; 774 GB data, 1360 GB used, 1854 TB / 1855 TB avail; 12628
>>> B/s rd, 50471 kB/s wr, 444 op/s
>>> 2017-05-30 08:57:37.506743 mon.0 [INF] osdmap e848: 340 osds: 338 up,
>>> 340 in
>>>
>>>
>>> From failed osd.19 log
>>> =
>>>
>>> ===
>>> 2017-05-30 08:57:04.155836 7f9d6c723700 10
>>> bluestore(/var/lib/ceph/osd/ceph-19) _omap_setkeys 1.1e85s4_head
>>> 4#1:a178head# = 0
>>> 2017-05-30 08:57:04.155840 7f9d6c723700 10
>>> bluestore(/var/lib/ceph/osd/ceph-19) _txc_calc_cost 0x7f9da2d1f180 cost
>>> 3542664 (2 ios * 150 + 542664 bytes)
>>> 2017-05-30 08:57:04.155841 7f9d6c723700 20
>>> bluestore(/var/lib/ceph/osd/ceph-19) _txc_write_nodes txc 0x7f9da2d1f180
>>> onodes 0x7f9da6651840 shared_blobs
>>> 2017-05-30 08:57:04.155843 7f9d6c723700 20
>>> bluestore.extentmap(0x7f9da6651930) update
>>> 4#1:a179feb3:::%2fc20%2fvx033%2fpot%2fchannel1%2fhls%2fv
>>> 

[ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Hello.

The documentation which I found proposes to create the Ceph client
for a rados gateway with very broad capabilities, namely
"mon allow rwx, osd allow rwx".

Are there any reasons for these very broad capabilities (allowing
this client to access and modify (even remove) all pools, all RBDs,
etc., even those in use by other Ceph clients)? I tried to restrict
the rights, and my rados gateway seems to work with the
capabilities "mon allow r, osd allow rwx pool=.rgw.root, allow rwx
pool=a.root, allow rwx pool=am.rgw.control [etc. for all the pools
which this gateway uses]".

Are there any reasons not to restrict the capabilities in this way?
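
In case it is useful, this is roughly how I set the restricted caps (the
client name is just an example, and the pool list is abbreviated):

ceph auth caps client.rgw.gateway1 \
    mon 'allow r' \
    osd 'allow rwx pool=.rgw.root, allow rwx pool=a.root, allow rwx pool=am.rgw.control'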

Thank you.
-- 
Diedrich Ehlerding, Fujitsu Technology Solutions GmbH, 
MIS ITST CE PS WST, Hildesheimer Str 25, D-30880 Laatzen
Fon +49 511 8489-1806, Fax -251806, Mobil +49 173 2464758
Company details: http://de.ts.fujitsu.com/imprint.html

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com