[ceph-users] Question about PGMonitor::waiting_for_finished_proposal

2017-05-31 Thread 许雪寒
Hi, everyone. Recently, I've been reading the Monitor source code. I found that, in the PGMonitor::prepare_pg_stats() method, a callback C_Stats is put into PGMonitor::waiting_for_finished_proposal. I wonder, if a previous PGMap incremental is in Paxos's propose/accept phase at the moment

[ceph-users] Ceph.conf and monitors

2017-05-31 Thread Curt
Hello all, I had a recent issue with Ceph monitors and OSDs when connecting to the second/third monitor. I don't currently have any debug logs to paste, but I wanted to get feedback on my ceph.conf for the monitors. This is the Giant release. Here's the error from monB that stuck out
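For context, a minimal monitor section of a Giant-era ceph.conf might look like the sketch below; the fsid, hostnames and addresses are placeholders, not values taken from the thread.

    [global]
    fsid = <cluster-fsid>
    mon_initial_members = monA, monB, monC
    mon_host = 192.0.2.11, 192.0.2.12, 192.0.2.13
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx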

Re: [ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread David Turner
You are trying to use the kernel client to map the RBD in Jewel. Jewel RBDs have options enabled that require kernel 4.9 or newer. You can disable the features that require the newer kernel, but that's not ideal, as those new features are very nice to have. You can use
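A common sequence for making a Jewel image mappable by an older kernel client is sketched below; the pool and image names are placeholders, and exactly which features must go depends on the kernel version.

    # Check which features the image currently has enabled
    rbd info rbd/myimage
    # Disable the features the kernel client cannot handle
    rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock
    # Mapping should then succeed on the older kernel
    sudo rbd map rbd/myimage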

[ceph-users] radosgw refuses upload when Content-Type missing from POST policy

2017-05-31 Thread Dave Holland
Hi, I'm trying to get files into radosgw (Ubuntu Ceph package 10.2.3-0ubuntu0.16.04.2) using Fine Uploader (https://github.com/FineUploader/fine-uploader), but I'm running into difficulties in the case where the uploaded file has a filename extension which the browser can't map to a MIME type (or,
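One workaround often suggested for S3-style browser uploads is a wildcard Content-Type condition in the POST policy, so the request still validates when the browser cannot determine a MIME type. A hypothetical policy document (the bucket name, expiry and key prefix are invented) might read:

    {
      "expiration": "2017-06-01T12:00:00Z",
      "conditions": [
        {"bucket": "uploads"},
        ["starts-with", "$key", "incoming/"],
        ["starts-with", "$Content-Type", ""]
      ]
    }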

Re: [ceph-users] Adding a new node to a small cluster (size = 2)

2017-05-31 Thread David Turner
How full is the cluster before adding the third node? If it's over 65%, I would recommend adding 2 new nodes instead of 1. The reason is that if you lose one of the nodes, your cluster will try to backfill back onto the remaining 2 nodes and be way too full. There is no rule or recommendation
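The arithmetic presumably behind that threshold: with three roughly equal nodes at average utilization U, losing one node forces the same data onto the remaining two, pushing them to about U x 3/2. At 65% that is roughly 97.5%, past the default 95% full ratio at which Ceph stops accepting writes, hence the suggestion to add two nodes rather than one.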

[ceph-users] Adding a new node to a small cluster (size = 2)

2017-05-31 Thread Kevin Olbrich
Hi! A customer is running a small two-node Ceph cluster with 14 disks each. He has min_size 1 and size 2, and it is only used for backups. If we add a third member with 14 identical disks and keep size = 2, the replicas should be distributed evenly, right? Or is an uneven count of hosts inadvisable
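Assuming the default replicated CRUSH rule ("chooseleaf firstn 0 type host"), the two copies of each PG land on two different hosts out of the three, so data should spread roughly evenly across all three nodes; something along these lines (names depend on the actual cluster) can confirm it once backfill settles.

    # Verify the rule separates replicas by host
    ceph osd crush rule dump
    # Check per-host and per-OSD utilization
    ceph osd df tree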

[ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread Shambhu Rajak
Hi Cephers, I have created a pool and am trying to create an RBD image on the Ceph client. Mapping the RBD image fails as follows: ubuntu@shambhucephnode0:~$ sudo rbd map pool1-img1 -p pool1 rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail" or so. rbd: map
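"rbd: sysfs write failed" from the kernel client is very often an unsupported-feature problem on Jewel images; a first diagnostic pass along these lines (pool and image names taken from the thread) is usually suggested:

    # List the features the image was created with
    rbd info pool1/pool1-img1
    # The kernel normally logs which feature bits it cannot support
    dmesg | tail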

Re: [ceph-users] Luminous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-31 Thread Mark Nelson
On 05/31/2017 05:21 AM, nokia ceph wrote: + ceph-devel .. $ ps -ef | grep 294 ceph 3539720 1 14 08:04 ? 00:16:35 /usr/bin/ceph-osd -f --cluster ceph --id 294 --setuser ceph --setgroup ceph $ gcore -o coredump-osd 3539720 (gdb) bt #0 0x7f5ef68f56d5 in

Re: [ceph-users] Luminous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-31 Thread nokia ceph
+ ceph-devel .. $ ps -ef | grep 294 ceph 3539720 1 14 08:04 ? 00:16:35 /usr/bin/ceph-osd -f --cluster ceph --id 294 --setuser ceph --setgroup ceph $ gcore -o coredump-osd 3539720 (gdb) bt #0 0x7f5ef68f56d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
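For readers following along, the capture sequence quoted above, spelled out (the PID comes from the ps output in the thread; paths assume a standard package install):

    # Dump a core of the running OSD without killing it
    gcore -o coredump-osd 3539720
    # Open the core against the matching binary
    gdb /usr/bin/ceph-osd coredump-osd.3539720
    # Inside gdb, collect backtraces of every thread, including tp_osd_tp
    (gdb) thread apply all bt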

[ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Hello. The documentation I found proposes creating the Ceph client for a RADOS gateway with very broad capabilities, namely "mon allow rwx, osd allow rwx". Are there any reasons for these very broad capabilities (allowing this client to access and modify (even remove) all pools, all
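For comparison, a more restricted keyring can limit the OSD cap to the RGW pools; a rough sketch follows (the client name is made up, and the pool list depends entirely on the zone configuration; these are Jewel-era default names):

    ceph auth get-or-create client.rgw.gw1 \
        mon 'allow rw' \
        osd 'allow rwx pool=.rgw.root, allow rwx pool=default.rgw.control, allow rwx pool=default.rgw.log, allow rwx pool=default.rgw.buckets.index, allow rwx pool=default.rgw.buckets.data'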