Re: [ceph-users] Altering crush-failure-domain

2019-03-04 Thread Kees Meijs
Thanks guys. Regards, Kees On 04-03-19 22:18, Smith, Eric wrote: > This will cause data migration. > > -Original Message- > From: ceph-users On Behalf Of Paul > Emmerich > Sent: Monday, March 4, 2019 2:32 PM > To: Kees Meijs > Cc: Ceph Users > Subject: Re: [ceph-users] Altering

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Christian Rice
sure thing. sv5-ceph-rgw1 zonegroup get { "id": "de6af748-1a2f-44a1-9d44-30799cf1313e", "name": "us", "api_name": "us", "is_master": "true", "endpoints": [ "http://sv5-ceph-rgw1.savagebeast.com:8080" ], "hostnames": [], "hostnames_s3website": [],

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Matthew H
Christian, Can you provide your zonegroup and zones configurations for all 3 rgw sites? (run the commands for each site please) Thanks, From: Christian Rice Sent: Monday, March 4, 2019 5:34 PM To: Matthew H; ceph-users Subject: Re: radosgw sync falling behind
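
For reference, the per-site dumps being requested can be produced with commands along these lines (a sketch; run them on an RGW node in each zone):

    radosgw-admin zonegroup get
    radosgw-admin zone get
    # the current period ties the zonegroup/zone definitions together
    radosgw-admin period get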

Re: [ceph-users] How to use STS Lite correctly?

2019-03-04 Thread myxingkong
Hello. I successfully created the role and attached the permission policy, but it still didn't work as expected. When I request the root path, it returns an HTTP 400 error: Request: POST / HTTP/1.1 Host: 192.168.199.81:8080 Accept-Encoding: identity Content-Length: 159 Content-Type:
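
For anyone reproducing this, the same AssumeRole request can also be issued with the AWS CLI against the RGW endpoint shown above; the role ARN here is an assumption based on the S3Access role discussed elsewhere in the thread, not something confirmed in it:

    aws sts assume-role \
      --endpoint-url http://192.168.199.81:8080 \
      --role-arn "arn:aws:iam:::role/application_abc/component_xyz/S3Access" \
      --role-session-name test-session \
      --duration-seconds 3600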

[ceph-users] 14.1.0, No dashboard module

2019-03-04 Thread Ashley Merrick
I have just spun up a small test environment to give the first RC a test run. Have managed to get a MON / MGR running fine on latest .dev packages on Ubuntu 18.04, however when I try to enable the dashboard I get the following error. ceph mgr module enable dashboard Error ENOENT: all mgr
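
One likely cause (an assumption, not confirmed in this thread) is that from 14.x onward the dashboard ships as a separate package, so it has to be installed on every mgr host before the module can be enabled, for example:

    sudo apt install ceph-mgr-dashboard
    # mgr id is usually the short hostname; adjust if yours differs
    sudo systemctl restart ceph-mgr@$(hostname -s)
    ceph mgr module enable dashboard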

Re: [ceph-users] Intel P4600 3.2TB U.2 form factor NVMe firmware problems causing dead disks

2019-03-04 Thread Kjetil Joergensen
Hi, If QDV10130 pre-dates feb/march 2018, you may have suffered the same firmware bug as existed on the DC S4600 series. I'm under NDA so I can't bitch and moan about specifics, but your symptoms sounds very familiar. It's entirely possible that there's *something* about bluestore that has

Re: [ceph-users] radosgw sync falling behind regularly

2019-03-04 Thread Christian Rice
So we upgraded everything from 12.2.8 to 12.2.11, and things have gone to hell. Lots of sync errors, like so: sudo radosgw-admin sync error list [ { "shard_id": 0, "entries": [ { "id": "1_1549348245.870945_5163821.1", "section":
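
To get a feel for how far behind each zone is, and to clear the error log once the underlying problem is fixed, something like the following can be run on an RGW node of the affected zone (a sketch; the bucket name is a placeholder):

    radosgw-admin sync status
    radosgw-admin bucket sync status --bucket=somebucket
    # once the root cause is resolved, old entries can be trimmed
    radosgw-admin sync error trim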

Re: [ceph-users] Altering crush-failure-domain

2019-03-04 Thread Smith, Eric
This will cause data migration. -Original Message- From: ceph-users On Behalf Of Paul Emmerich Sent: Monday, March 4, 2019 2:32 PM To: Kees Meijs Cc: Ceph Users Subject: Re: [ceph-users] Altering crush-failure-domain Yes, these parts of the profile are just used to create a crush

Re: [ceph-users] Altering crush-failure-domain

2019-03-04 Thread Paul Emmerich
Yes, these parts of the profile are just used to create a crush rule. You can change the crush rule like any other crush rule. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89
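
A minimal sketch of that change, assuming the pool keeps its existing k/m (which cannot change) and only the failure domain moves from host to osd; profile, rule and pool names are placeholders:

    # new profile with the same k/m, different failure domain
    ceph osd erasure-code-profile set ec42-osd k=4 m=2 crush-failure-domain=osd
    ceph osd crush rule create-erasure ec42-osd-rule ec42-osd
    # switching the pool to the new rule triggers the data migration mentioned above
    ceph osd pool set ecpool crush_rule ec42-osd-rule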

[ceph-users] Altering crush-failure-domain

2019-03-04 Thread Kees Meijs
Hi Cephers, Documentation on http://docs.ceph.com/docs/master/rados/operations/erasure-code/ states: > Choosing the right profile is important because it cannot be modified > after the pool is created: a new pool with a different profile needs > to be created and all objects from the previous

Re: [ceph-users] [Nfs-ganesha-devel] NFS-Ganesha CEPH_FSAL ceph.quota.max_bytes not enforced

2019-03-04 Thread David C
On Mon, Mar 4, 2019 at 5:53 PM Jeff Layton wrote: > > On Mon, 2019-03-04 at 17:26 +, David C wrote: > > Looks like you're right, Jeff. Just tried to write into the dir and am > > now getting the quota warning. So I guess it was the libcephfs cache > > as you say. That's fine for me, I don't

Re: [ceph-users] [Nfs-ganesha-devel] NFS-Ganesha CEPH_FSAL ceph.quota.max_bytes not enforced

2019-03-04 Thread David C
Looks like you're right, Jeff. Just tried to write into the dir and am now getting the quota warning. So I guess it was the libcephfs cache as you say. That's fine for me, I don't need the quotas to be too strict, just a failsafe really. Interestingly, if I create a new dir, set the same 100MB
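
For context, the quota in question is the CephFS directory quota, set through an extended attribute roughly as follows (path and size are examples only):

    # 100 MB limit, value in bytes
    setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/somedir
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir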

Re: [ceph-users] How to just delete PGs stuck incomplete on EC pool

2019-03-04 Thread Daniel K
Thanks for the suggestions. I've tried both -- setting osd_find_best_info_ignore_history_les = true and restarting all OSDs, as well as 'ceph osd force-create-pg' -- but both still show incomplete PG_AVAILABILITY Reduced data availability: 2 pgs inactive, 2 pgs incomplete pg 18.c is
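
For readers in the same situation, the two attempts described above look roughly like this; osd_find_best_info_ignore_history_les can cause data loss, so treat it as a sketch, not a recommendation:

    # ceph.conf on the hosts whose OSDs serve the incomplete PGs, then restart those OSDs
    [osd]
    osd_find_best_info_ignore_history_les = true

    # last resort: recreate the PG empty, accepting loss of its data
    ceph osd force-create-pg 18.c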

Re: [ceph-users] 13.2.4 odd memory leak?

2019-03-04 Thread Paul Emmerich
Bloated to ~4 GB per OSD and you are on HDDs? 13.2.3 backported the cache auto-tuning which targets 4 GB memory usage by default. See https://ceph.com/releases/13-2-4-mimic-released/ The bluestore_cache_* options are no longer needed. They are replaced by osd_memory_target, defaulting to 4GB.
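
If 4 GB per OSD is more than the hosts can afford, the target can be lowered, e.g. to 2 GB (value in bytes; the figure is only an example):

    # ceph.conf on the OSD hosts
    [osd]
    osd_memory_target = 2147483648

    # or adjust at runtime
    ceph tell osd.* injectargs '--osd_memory_target=2147483648'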

Re: [ceph-users] How to use STS Lite correctly?

2019-03-04 Thread Pritha Srivastava
There are two steps that have to be performed before calling AssumeRole: 1. A role named S3Access needs to be created to which it is mandatory to attach an assume role policy document. For example, radosgw-admin role create --role-name=S3Access --path=/application_abc/component_xyz/
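
Written out, the two steps look roughly like this; the user ARN, policy names and JSON documents are placeholders taken from the STS Lite documentation, not from this thread:

    # 1. create the role with an assume-role policy document
    radosgw-admin role create --role-name=S3Access \
      --path=/application_abc/component_xyz/ \
      --assume-role-policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}'

    # 2. attach a permission policy to the role
    radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy1 \
      --policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:*"],"Resource":"arn:aws:s3:::*"}]}'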

[ceph-users] 13.2.4 odd memory leak?

2019-03-04 Thread Steffen Winther Sørensen
List Members, patched a CentOS 7 based cluster from 13.2.2 to 13.2.4 last Monday; everything appeared to be working fine. Only this morning I found all OSDs in the cluster to be bloated in memory footprint, possibly after the weekend backup through MDS. Anyone else seeing a possible memory leak in

[ceph-users] How to use STS Lite correctly?

2019-03-04 Thread myxingkong
I want to use the STS service to generate temporary credentials for use by third-party clients. I configured STS lite based on the documentation. http://docs.ceph.com/docs/master/radosgw/STSLite/ This is my configuration file: [global] fsid = 42a7cae1-84d1-423e-93f4-04b0736c14aa
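
For comparison, the STS Lite documentation linked above boils down to a couple of rgw options in the gateway's section of ceph.conf (the section name and key below are placeholders, not real values):

    [client.rgw.gateway]
    rgw sts key = abcdefghijklmnop
    rgw s3 auth use sts = true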

Re: [ceph-users] ceph tracker login failed

2019-03-04 Thread M Ranga Swami Reddy
Fixed: use only the user id (swamireddy) instead of the full OpenID URL. On Thu, Feb 28, 2019 at 7:04 PM M Ranga Swami Reddy wrote: > > I tried to log in to the Ceph tracker - it is failing with the OpenID URL. > > I tried with my OpenID: > http://tracker.ceph.com/login > > my id:

Re: [ceph-users] Erasure coded pools and ceph failure domain setup

2019-03-04 Thread Hector Martin
On 02/03/2019 01:02, Ravi Patel wrote: Hello, My question is how CRUSH distributes chunks throughout the cluster with erasure coded pools. Currently, we have 4 OSD nodes with 36 drives (OSD daemons) per node. If we use crush-failure-domain=host, then we are necessarily limited to k=3,m=1, or
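
As a concrete illustration of the trade-off: with only 4 hosts and crush-failure-domain=host, k+m cannot exceed 4, whereas a profile placing chunks per OSD allows a wider stripe at the cost of several chunks landing on one host (profile and pool names below are placeholders):

    ceph osd erasure-code-profile set ec62-osd k=6 m=2 crush-failure-domain=osd
    ceph osd pool create ecpool 128 128 erasure ec62-osd
    # note: a single host failure can now take out multiple chunks of an object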