[ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-28 Thread zhu tong
Right, that was the one I used to calculate osd_pool_default_pg_num for our test cluster: 7 OSDs, 11 pools, osd_pool_default_pg_num calculated to be 256, but ceph status shows health HEALTH_WARN too many PGs per OSD (5818 > max 300) monmap e1: 1 mons at

Re: [ceph-users] Re: too many PGs per OSD (307 > max 300)

2016-07-28 Thread Christian Balzer
Hello, On Fri, 29 Jul 2016 03:18:10 + zhu tong wrote: > The same problem is confusing me recently too, trying to figure out the > relationship (an equation would be the best) among number of pools, OSD and > PG. > The pgcalc tool and the equation on that page are your best bet/friend.
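
For reference, a minimal sketch of the arithmetic behind that warning, assuming nothing beyond standard `ceph osd pool get` and `ceph osd ls` output (no values here come from the thread): the per-OSD PG count is the sum of pg_num times replica size over all pools, divided by the number of OSDs.

```
#!/bin/bash
# Approximate PGs per OSD across all pools (hypothetical helper, not from the thread).
total=0
for pool in $(ceph osd pool ls); do
    pg_num=$(ceph osd pool get "$pool" pg_num | awk '{print $2}')   # output is "pg_num: N"
    size=$(ceph osd pool get "$pool" size | awk '{print $2}')       # output is "size: N"
    total=$(( total + pg_num * size ))
done
osds=$(ceph osd ls | wc -l)
echo "approx PGs per OSD: $(( total / osds ))"
```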

Re: [ceph-users] [jewel][rgw] why is the usage log record date 16 hours later than the real operation time

2016-07-28 Thread Yehuda Sadeh-Weinraub
On Thu, Jul 28, 2016 at 5:53 PM, Leo Yu wrote: > hi all, > I want to get the usage of a user, so I use the command radosgw-admin usage show, but I cannot get the usage when I use --start-date unless I subtract 16 hours > > I have rgw on both ceph01 and

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-07-28 Thread Bill Sharer
Removing osd.4 and still getting the scrub problems removes its drive from consideration as the culprit. Try the same thing again for osd.16 and then osd.28. smartctl may not show anything out of sorts until the marginally bad sector or sectors finally go bad and get remapped. The only
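
A minimal sketch of that triage, assuming PG 0.223 from this thread and a generic device name; the acting set and disks on a real cluster will differ.

```
# Which OSDs hold the suspect PG?
ceph pg map 0.223                      # prints the up/acting set, e.g. acting [4,16,28]
# On each of those hosts, check the OSD's backing disk (replace /dev/sdX accordingly):
smartctl -a /dev/sdX | egrep -i 'reallocated|pending|uncorrect'
smartctl -t long /dev/sdX              # schedule an offline long self-test
```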

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-28 Thread Christian Balzer
On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: > Hi list, > > I just followed the placement group guide to set pg_num for the rbd pool. > How many other pools do you have, or is that the only pool? The numbers mentioned are for all pools, not per pool, something that isn't abundantly
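
As a worked illustration of how several individually reasonable pools add up (numbers are illustrative, not taken from Chengwei's cluster):

```
# One pool with pg_num=512 and size=3 on 20 OSDs already costs about 76 PGs per OSD:
echo $(( 512 * 3 / 20 ))    # -> 76
# Every additional pool adds its own pg_num*size/num_osds on top, so a handful of
# such pools is enough to cross the mon_pg_warn_max_per_osd default of 300.
```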

[ceph-users] too many PGs per OSD (307 > max 300)

2016-07-28 Thread Chengwei Yang
Hi list, I just followed the placement group guide to set pg_num for the rbd pool. " Less than 5 OSDs: set pg_num to 128; between 5 and 10 OSDs: set pg_num to 512; between 10 and 50 OSDs: set pg_num to 4096. If you have more than 50 OSDs, you need to understand the tradeoffs and how to

[ceph-users] Cmake and rpmbuild

2016-07-28 Thread Gerard Braad
Hi All, At the moment I am setting up CI pipelines for Ceph and ran into a small issue; I have some memory-constrained runners (2G). So, when performing a build using do-cmake all is fine... the build might take a long time, but after an hour or two I am greeted with a 'Build succeeded' message, I

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-07-28 Thread Christian Balzer
Hello, On Thu, 28 Jul 2016 14:46:58 +0200 c wrote: > Hello Ceph alikes :) > > i have a strange issue with one PG (0.223) combined with "deep-scrub". > > Always when ceph - or I manually - run a " ceph pg deep-scrub 0.223 ", > this leads to many "slow/block requests" so that nearly all of my

[ceph-users] [jewel][rgw] why is the usage log record date 16 hours later than the real operation time

2016-07-28 Thread Leo Yu
Hi all, I want to get the usage of a user, so I use the command radosgw-admin usage show, but I cannot get the usage when I use --start-date unless I subtract 16 hours. I have rgw on both ceph01 and ceph03 (civetweb, port 7480), and the ceph version is jewel 10.2.2. The time zone of ceph01 and ceph03
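
A minimal sketch of the query being described; the uid and dates are examples. The 16-hour shift is consistent with a time-zone mismatch between where the usage log records timestamps and how the --start-date argument is interpreted, which is an assumption, not something confirmed in the thread.

```
# The query as naively written in local time (uid/dates are examples):
radosgw-admin usage show --uid=testuser \
    --start-date="2016-07-28 00:00:00" --end-date="2016-07-29 00:00:00"
# Before comparing offsets, check what the hosts think local and UTC time are:
date; date -u
```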

Re: [ceph-users] ceph-fuse (jewel 10.2.2): No such file or directory issues

2016-07-28 Thread Goncalo Borges
Hi Greg, for now we have to wait and see if it appears again. If it does, then at least we can provide a strace and do any further debugging. We will update this thread when/if it appears again. Cheers G. From: Gregory Farnum [gfar...@redhat.com] Sent: 29

Re: [ceph-users] CephFS snapshot preferred behaviors

2016-07-28 Thread Alexandre Oliva
On Jul 25, 2016, Gregory Farnum wrote: > * Right now, we allow users to rename snapshots. (This is newish, so > you may not be aware of it if you've been using snapshots for a > while.) Is that an important ability to preserve? I recall wishing for it back in the early days

Re: [ceph-users] RocksDB compression

2016-07-28 Thread Somnath Roy
I am using snappy and it is working fine with Bluestore.. Thanks & Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson Sent: Thursday, July 28, 2016 2:03 PM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users]

Re: [ceph-users] RocksDB compression

2016-07-28 Thread Mark Nelson
Should work fine AFAIK, let us know if it doesn't. :) FWIW, the goal at the moment is to make the onode so dense that rocksdb compression isn't going to help after we are done optimizing it. Mark On 07/28/2016 03:37 PM, Garg, Pankaj wrote: Hi, Has anyone configured compression in RocksDB
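
For anyone wanting to try what Somnath describes, a hedged sketch of the relevant ceph.conf knob; note that bluestore_rocksdb_options is a single option string, so overriding it replaces the default tuning string rather than appending to it, and BlueStore itself was still experimental in Jewel.

```
[osd]
# Merge this with the full default option string for your version before using it.
bluestore_rocksdb_options = compression=kSnappyCompression
```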

Re: [ceph-users] ceph-fuse (jewel 10.2.2): No such file or directory issues

2016-07-28 Thread Gregory Farnum
On Wed, Jul 27, 2016 at 6:37 PM, Goncalo Borges wrote: > Hi Greg > > Thanks for replying. Answer inline. > > > >>> Dear cephfsers :-) >>> >>> We saw some weirdness in cephfs that we do not understand. >>> >>> We were helping a user who complained that her batch

[ceph-users] RocksDB compression

2016-07-28 Thread Garg, Pankaj
Hi, Has anyone configured compression in RocksDB for BlueStore? Does it work? Thanks Pankaj

Re: [ceph-users] blind buckets

2016-07-28 Thread Yehuda Sadeh-Weinraub
On Thu, Jul 28, 2016 at 12:11 PM, Tyler Bischel wrote: > Can I not update an existing placement target's index_type? I had tried to > update the default pool's index type: > > radosgw-admin zone get --rgw-zone=default > default-zone.json > > #replace index_type:0 to

Re: [ceph-users] blind buckets

2016-07-28 Thread Tyler Bischel
Can I not update an existing placement target's index_type? I tried to update the default pool's index type: radosgw-admin zone get --rgw-zone=default > default-zone.json #replace index_type:0 with index_type:1 in the default zone file, under the default-placement entry of the placement_pools

Re: [ceph-users] blind buckets

2016-07-28 Thread Yehuda Sadeh-Weinraub
In order to use indexless (blind) buckets, you need to create a new placement target, and then set the placement target's index_type param to 1. Yehuda On Tue, Jul 26, 2016 at 10:30 AM, Tyler Bischel wrote: > Hi there, > We are looking at using Ceph (Jewel) for a
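
A hedged sketch of that workflow on Jewel; the placement-target name and file names are examples and details may differ between point releases.

```
radosgw-admin zone get --rgw-zone=default > zone.json
# edit zone.json: under "placement_pools", add (or copy) an entry such as
# "indexless-placement" and set its "index_type" to 1
radosgw-admin zone set --rgw-zone=default --infile zone.json
radosgw-admin period update --commit
# buckets must then be created with that placement target to be indexless
```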

Re: [ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Brian Andrus
I'm not sure what mechanism is used, but perhaps the Admin Ops API could provide what you're looking for. http://docs.ceph.com/docs/master/radosgw/adminops/#get-usage I believe also that the usage log should be enabled for the gateway. On Thu, Jul 28, 2016 at 12:19 PM, Sean Redmond
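
A minimal sketch of the gateway-side prerequisites Brian mentions; the section name and uid are examples, and the gateway needs a restart after the config change.

```
# ceph.conf on the gateway host:
#   [client.rgw.gateway1]
#   rgw enable usage log = true
# and give the querying user the cap needed for GET /admin/usage:
radosgw-admin caps add --uid=admin-user --caps="usage=read"
```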

Re: [ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Sean Redmond
Hi, this seems pretty quick on a jewel cluster here, but I guess the key question is how large is large? Is it perhaps a large number of smaller files that is slowing this down? Is the bucket index sharded / on SSD? [root@korn ~]# time s3cmd du s3://seanbackup 1656225129419 29 objects

Re: [ceph-users] syslog broke my cluster

2016-07-28 Thread Sergio A. de Carvalho Jr.
We tracked the problem down to the following rsyslog configuration in our test cluster: *.* @@: $ActionExecOnlyWhenPreviousIsSuspended on & /var/log/failover.log $ActionExecOnlyWhenPreviousIsSuspended off It seems that the $ActionExecOnlyWhenPreviousIsSuspended directive doesn't work well with
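
The flattened snippet above, laid out as it would appear in rsyslog.conf; the remote target after `@@` is elided in the original and is left as a placeholder here.

```
*.* @@<remote-host>:<port>
$ActionExecOnlyWhenPreviousIsSuspended on
& /var/log/failover.log
$ActionExecOnlyWhenPreviousIsSuspended off
```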

Re: [ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Dan van der Ster
On Thu, Jul 28, 2016 at 5:33 PM, Abhishek Lekshmanan wrote: > > Dan van der Ster writes: > >> Hi, >> >> Does anyone know a fast way for S3 users to query their total bucket >> usage? 's3cmd du' takes a long time on large buckets (is it iterating >> over all the objects?).

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-07-28 Thread c
On 2016-07-28 15:26, Bill Sharer wrote: I suspect the data for one or more shards on this osd's underlying filesystem has a marginally bad sector or sectors. A read from the deep scrub may be causing the drive to perform repeated seeks and reads of the sector until it gets a good read from

Re: [ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Abhishek Lekshmanan
Dan van der Ster writes: > Hi, > > Does anyone know a fast way for S3 users to query their total bucket > usage? 's3cmd du' takes a long time on large buckets (is it iterating > over all the objects?). 'radosgw-admin bucket stats' seems to know the > bucket usage immediately, but I didn't find a

[ceph-users] rgw query bucket usage quickly

2016-07-28 Thread Dan van der Ster
Hi, Does anyone know a fast way for S3 users to query their total bucket usage? 's3cmd du' takes a long time on large buckets (is it iterating over all the objects?). 'radosgw-admin bucket stats' seems to know the bucket usage immediately, but I didn't find a way to expose that to end users.
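
For reference, the admin-side command being referred to; the bucket name is an example.

```
radosgw-admin bucket stats --bucket=mybucket
# the "usage" section reports size_kb and num_objects without walking the objects;
# exposing that to end users needs the Admin Ops API or a wrapper in front of it,
# since this is an admin command.
```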

Re: [ceph-users] Can't create bucket (ERROR: endpoints not configured for upstream zone)

2016-07-28 Thread Arvydas Opulskis
Hi, We solved it by running Micha's scripts, plus we needed to run the period update and commit commands (for some reason we had to do it as separate commands): radosgw-admin period update radosgw-admin period commit Btw, we added endpoints to the json file, but I am not sure these are needed. And I

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-07-28 Thread Bill Sharer
I suspect the data for one or more shards on this osd's underlying filesystem has a marginally bad sector or sectors. A read from the deep scrub may be causing the drive to perform repeated seeks and reads of the sector until it gets a good read from the filesystem. You might want to look at

Re: [ceph-users] osd wrongly marked as down

2016-07-28 Thread Goncalo Borges
Firewall or communication issues? From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of M Ranga Swami Reddy [swamire...@gmail.com] Sent: 28 July 2016 22:00 To: ceph-users Subject: [ceph-users] osd wrongly marked as down Hello - I use

[ceph-users] ONE pg deep-scrub blocks cluster

2016-07-28 Thread c
Hello Ceph alikes :) I have a strange issue with one PG (0.223) combined with "deep-scrub". Whenever ceph - or I manually - runs "ceph pg deep-scrub 0.223", this leads to many slow/blocked requests, so that nearly all of my VMs stop working for a while. This happens only to this one PG
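
Alongside the hardware checks discussed in the replies, a hedged sketch of Jewel-era knobs commonly used to soften scrub impact while investigating; the values are examples and the ioprio options only take effect with the CFQ scheduler.

```
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
# temporary stopgap while debugging (remember to unset it later):
ceph osd set nodeep-scrub
```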

[ceph-users] osd wrongly marked as down

2016-07-28 Thread M Ranga Swami Reddy
Hello - I use a 100+ OSD cluster; here, I am getting a few OSDs wrongly marked down for a few seconds, and recovery starts; after a few seconds these OSDs will be up again. Any hint will help here. Thanks Swami
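
A hedged triage sketch for this kind of flapping; the log path and grace value are examples.

```
ceph health detail | grep -i osd
grep "wrongly marked me down" /var/log/ceph/ceph-osd.*.log
# if the hiccups are short network or load spikes rather than real failures,
# the heartbeat grace can be raised, e.g.:
ceph tell osd.* injectargs '--osd_heartbeat_grace 30'
```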

[ceph-users] radosgw ignores rgw_frontends? (10.2.2)

2016-07-28 Thread Zoltan Arnold Nagy
Hi, I just did a test deployment using ceph-deploy rgw create after which I've added [client.rgw.c11n1] rgw_frontends = “civetweb port=80” to the config. Using show-config I can see that it’s there: root@c11n1:~# ceph --id rgw.c11n1 --show-config | grep civet debug_civetweb = 1/10
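
A minimal sketch for checking what the running daemon actually loaded, since --show-config does not query the live process; the admin-socket path is an example. The typographic quotes in the pasted config above are also worth ruling out as a parsing issue.

```
ceph daemon /var/run/ceph/ceph-client.rgw.c11n1.asok config show | grep rgw_frontends
# confirm the ceph.conf value uses plain ASCII quotes:
#   rgw_frontends = "civetweb port=80"
```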

Re: [ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
In addition, I tried `ceph auth rm`, which also failed. ``` # ceph auth rm client.chengwei Error EINVAL: ``` -- Thanks, Chengwei On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote: > Hi list, > > I'm learning ceph and following >

[ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
Hi list, I'm learning ceph and following http://docs.ceph.com/docs/master/rados/operations/user-management/ to try out ceph user management. I created a user `client.chengwei`, which looks like below. ``` exported keyring for client.chengwei [client.chengwei] key =
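
A hedged sketch of the Jewel-era commands for this; the caps shown are examples. On 10.2.x the removal subcommand is `del`, and caps are replaced wholesale by re-issuing `ceph auth caps` with the full set to keep, rather than by deleting a single cap.

```
# replace the user's caps with a new complete set:
ceph auth caps client.chengwei mon 'allow r' osd 'allow rw pool=rbd'
# or remove the user entirely:
ceph auth del client.chengwei
```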

[ceph-users] how to deploy a bluestore ceph cluster without ceph-deploy

2016-07-28 Thread m13913886148
Hello cephers, I deployed a ceph-10.2.2 cluster from source. Since I am deploying from source, I deploy it without ceph-deploy. How do I deploy a bluestore ceph cluster without ceph-deploy? There is no official online documentation. Where are the relevant

[ceph-users] how to deploy bluestore ceph without ceph-deploy

2016-07-28 Thread m13913886148
Hello cephers, I deployed a ceph-10.2.2 cluster from source. Since I am deploying from source, I deploy it without ceph-deploy. How do I deploy a bluestore ceph cluster without ceph-deploy? There is no official online documentation. Where are the relevant documents?
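
A hedged sketch for Jewel 10.2.x, where BlueStore was still experimental and had to be enabled explicitly; the device name is an example and the option names should be verified against the 10.2 documentation.

```
# ceph.conf:
#   [global]
#   enable experimental unrecoverable data corrupting features = bluestore rocksdb
#   osd objectstore = bluestore
ceph-disk prepare --bluestore /dev/sdb
ceph-disk activate /dev/sdb1
```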

Re: [ceph-users] How to hide the monitor IP on cephfs-mounted clients

2016-07-28 Thread gjprabu
Hi All, if anybody is facing a similar issue, please let us know how to hide, or avoid using, the cephfs monitor IP while mounting the partition. Regards Prabu GJ On Wed, 20 Jul 2016 13:03:31 +0530 gjprabu gjpr...@zohocorp.com wrote: Hi Team, We are using
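
For what it is worth, a hedged sketch: the client always has to reach the monitors, but the mount line can use resolvable names instead of raw IPs (host names, user and paths below are examples).

```
mount -t ceph mon1.example.com:6789,mon2.example.com:6789:/ /mnt/cephfs \
    -o name=cephfsuser,secretfile=/etc/ceph/cephfsuser.secret
```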