[ceph-users] Increasing pg_num

2016-05-15 Thread Chris Dunlop
Hi, I'm trying to understand the potential impact on an active cluster of increasing pg_num/pgp_num. The conventional wisdom, as gleaned from the mailing lists and general google fu, seems to be to increase pg_num followed by pgp_num, both in small increments, to the target size, using "osd max
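
For reference, a minimal sketch of the incremental approach described above (the pool name, step sizes, target of 2048 PGs, and the backfill throttle value are illustrative, not taken from the thread):

  # throttle backfill while the split runs (illustrative value)
  ceph tell osd.* injectargs '--osd-max-backfills 1'
  # raise pg_num in small steps, letting the cluster settle (ceph -s) between steps,
  # then follow each step with the matching pgp_num change
  ceph osd pool set rbd pg_num 1024
  ceph osd pool set rbd pgp_num 1024
  ceph osd pool set rbd pg_num 2048
  ceph osd pool set rbd pgp_num 2048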

[ceph-users] failing to respond to cache pressure

2016-05-15 Thread Andrus, Brian Contractor
So this 'production ready' CephFS for jewel seems a little not quite... Currently I have a single system mounting CephFS and merely scp-ing data to it. The CephFS mount has 168 TB used, 345 TB / 514 TB avail. Every so often, I get a HEALTH_WARN message of mds0: Client failing to respond to
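
A minimal diagnostic sketch for this kind of warning, assuming an MDS daemon named mds0 and jewel-era option names; the cache-size value is illustrative:

  ceph health detail                                      # show which client/MDS pair is being flagged
  ceph daemon mds.mds0 session ls                         # on the MDS host: inspect client sessions and caps
  ceph tell mds.0 injectargs '--mds_cache_size 200000'    # illustrative: raise the MDS inode cache limit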

[ceph-users] Pacemaker Resource Agents for Ceph by Andreas Kurz

2016-05-15 Thread Alex Gorbachev
Following a conversation with Sage in NYC, I would like to share links to the excellent resource agents for Pacemaker, developed by Andreas Kurz to present Ceph images to iSCSI and FC fabrics. We are using these as part of the Storcium solution, and these RAs have withstood quite a few beatings

[ceph-users] Help...my cluster has multiple rgw related pools after upgrading from H to J

2016-05-15 Thread 易明
Dear Cephers, it's my first time writing to the list; I hope my problem is described clearly. I have a cluster with 4 physical servers, with 3 mons spread across them and 4 OSDs per server, as well as one server acting as an rgw client. I just upgraded 3 servers from H to J, except the rgw server. After
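
A quick way to see which rgw-related pools exist after such an upgrade (nothing here is specific to the poster's cluster; pool names will vary):

  ceph osd lspools    # list all pools, including any .rgw.* / default.rgw.* pools created by radosgw
  ceph df             # per-pool usage, handy for spotting duplicated or unused rgw pools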

Re: [ceph-users] Help ... some osd does not want to start after a dumpling->firefly->hammer upgrade

2016-05-15 Thread Emmanuel Lacour
On 15/05/2016 at 15:35, Emmanuel Lacour wrote: > Dear ceph users, > > > I have clusters running debian wheezy with dumpling. > > I upgraded one cluster from dumpling to firefly, then to hammer without > problem. > > Then I upgraded a second cluster from dumpling to firefly without > problem,

Re: [ceph-users] Starting a cluster with one OSD node

2016-05-15 Thread Alex Gorbachev
> On Friday, May 13, 2016, Mike Jacobacci wrote: > Hello, > > I have a quick and probably dumb question… We would like to use Ceph > for our storage, I was thinking of a cluster with 3 Monitor and OSD > nodes. I was wondering if it was a bad idea to

Re: [ceph-users] How to remove a placement group?

2016-05-15 Thread Kostis Fardelas
There is the "ceph pg {pgid} mark_unfound_lost revert|delete" command, but you may also find it useful to utilize ceph-objectstore-tool to do the job. On 15 May 2016 at 20:22, Michael Kuriger wrote: > I would try: > > ceph pg repair 15.3b3 > > Michael
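
For reference, a rough sketch of the two approaches mentioned above (the pgid 15.3b3 comes from the thread; the OSD id and paths are illustrative, and the OSD must be stopped before ceph-objectstore-tool touches its store):

  # in-cluster: give up on unfound objects in the PG
  ceph pg 15.3b3 mark_unfound_lost revert
  # or, offline on the OSD host, operate on the PG copy with ceph-objectstore-tool
  systemctl stop ceph-osd@12
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --journal-path /var/lib/ceph/osd/ceph-12/journal \
      --pgid 15.3b3 --op remove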

Re: [ceph-users] How to remove a placement group?

2016-05-15 Thread Michael Kuriger
I would try: ceph pg repair 15.3b3 Michael Kuriger, Sr. Unix Systems Engineer • mk7...@yp.com • 818-649-7235 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Romero Junior Sent: Saturday, May 14, 2016 11:46 AM To:

Re: [ceph-users] reweight-by-utilization warning

2016-05-15 Thread Dan van der Ster
Hi Blair! (re-copying to the list) The good news is that the functionality of that Python script is now available natively in jewel and has been backported to hammer 0.96.7. Now you can use ceph osd test-reweight-by-(pg|utilization) in order to see how the weights would change if you were to
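
A minimal sketch of that dry-run workflow (the 120% threshold is the usual default and is illustrative here):

  ceph osd test-reweight-by-utilization 120   # report which OSD weights would change, without applying anything
  ceph osd test-reweight-by-pg 120            # same idea, but based on PG counts rather than utilization
  ceph osd reweight-by-utilization 120        # apply the reweight once the proposed changes look sane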

[ceph-users] reweight-by-utilization warning

2016-05-15 Thread Blair Bethwaite
Hi all, IMHO reweight-by-utilization should come with some sort of warning: it just suddenly reweights everything - no dry run, no confirmation, apparently no option to see what it's going to do. It also doesn't appear to consider pools and hence crush rulesets, which I imagine could result in it

[ceph-users] Help ... some osd does not want to start after a dumpling->firefly->hammer upgrade

2016-05-15 Thread Emmanuel Lacour
Dear ceph users, I have clusters running debian wheezy with dumpling. I upgraded one cluster from dumpling to firefly, then to hammer without problem. Then I upgraded a second cluster from dumpling to firefly without problem, though I forgot to restart 2 of the 10 OSDs, so they stayed on dumpling. I
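
A quick way to spot daemons left behind on an older release (commands are generic, not specific to this cluster):

  ceph tell osd.* version     # report the running version of every OSD
  ceph daemon osd.3 version   # or query a single daemon's admin socket on its host (id illustrative)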

[ceph-users] Configure Civetweb Infernalis

2016-05-15 Thread giannis androulidakis
Hey, I've been using the Infernalis Ceph version in a VM cluster, with 2 OSDs, 1 monitor and 1 gateway node. Is there a standard way to change the options of the web server the gateway is using? (Civetweb, according to the docs) For example, it is quite simple to change the default port (from
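
The usual way to change civetweb options is the rgw_frontends setting in ceph.conf on the gateway node; a minimal sketch (the section name and port are illustrative):

  [client.rgw.gateway]
  rgw_frontends = "civetweb port=8080"

  # then restart the gateway for the change to take effect
  systemctl restart ceph-radosgw@rgw.gateway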

Re: [ceph-users] Erasure pool performance expectations

2016-05-15 Thread Peter Kerdisle
Hey Nick, I've been playing around with the osd_tier_promote_max_bytes_sec setting but I'm not really seeing any changes. What would be expected when setting a max bytes value? I would expect that my OSDs would throttle themselves to this rate when doing promotions, but this doesn't seem to be
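
For reference, a sketch of how that setting is typically inspected and changed at runtime (the ~5 MB/s value is illustrative; whether the knob behaves as expected here is exactly what the thread is asking about):

  ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec             # on an OSD host: show the current value
  ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 5242880'   # illustrative: ~5 MB/s per OSD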