Re: [ceph-users] Snapshot cleanup performance impact on client I/O?

2017-07-02 Thread Gagandeep Arora
Hello Kenneth, To throttle the snapshot trimming transactions on the OSDs, set "osd snap trim sleep" to a value greater than 0 (the default):

[global]
osd snap trim sleep = 1

# the above will cause the OSD to sleep for 1 second before submitting the next batch of snap trimming transactions. Put it in your
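The same knob can also be applied at runtime without restarting the daemons; a minimal sketch using the standard ceph CLI (the 1-second value mirrors the example above, osd.0 is illustrative):

# apply to all running OSDs immediately, no restart needed:
ceph tell osd.* injectargs '--osd_snap_trim_sleep 1'

# confirm it took effect (run on the node hosting osd.0):
ceph daemon osd.0 config get osd_snap_trim_sleep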

Re: [ceph-users] Unable to start rgw after upgrade from hammer to jewel

2017-03-04 Thread Gagandeep Arora
esearch I commented out my custom settings:

#rgw zonegroup root pool = se.root
#rgw zone root pool = se.root

and after that rgw successfully started. Now the settings are placed in the default pool: .rgw.root

Saturday, 4 March 2017, 6:40 +05:00 from Gag
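If you hit the same thing, it is worth checking which root pool rgw actually resolved after the upgrade; a quick sketch using standard radosgw-admin/rados tooling (the zone name "default" is an assumption):

# dump the zone configuration rgw is using:
radosgw-admin zone get --rgw-zone=default

# list pools and look for the rgw root pool (.rgw.root on jewel):
rados lspools | grep -i rgw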

Re: [ceph-users] Unable to start rgw after upgrade from hammer to jewel

2017-03-03 Thread Gagandeep Arora
setfacl -m u:ceph:r /etc/ceph/ceph.client.radosgw.keyring

On Fri, Mar 3, 2017 at 5:57 PM Gagandeep Arora wrote:
> Hi all,
> Unable to start radosgw after upgrading hammer (0.94.10) to jewel (10.2.5).
> Please see the following log. Can someone help plea
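The root cause on a jewel upgrade is usually that the daemons now run as user ceph instead of root, so the keyring must be readable by that user. Two common fixes, sketched here (the keyring path is the one from this thread):

# grant the ceph user read access via an ACL, as suggested above:
setfacl -m u:ceph:r /etc/ceph/ceph.client.radosgw.keyring

# or transfer ownership to the ceph user outright:
chown ceph:ceph /etc/ceph/ceph.client.radosgw.keyring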

[ceph-users] Unable to start rgw after upgrade from hammer to jewel

2017-03-03 Thread Gagandeep Arora
Hi all, Unable to start radosgw after upgrading hammer (0.94.10) to jewel (10.2.5). Please see the following log. Can someone help please?

# cat cephprod-client.radosgw.gps-prod-1.log
2017-03-04 10:35:10.459830 7f24316189c0 0 set uid:gid to 167:167 (ceph:ceph)
2017-03-04 10:35:10.459883 7f24316189
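Since this cluster uses the non-default name cephprod, note that on EL7 the jewel systemd units read the cluster name from /etc/sysconfig/ceph; a sketch under that assumption (the unit instance name is inferred from the log file name above and may differ on your setup):

# tell the systemd units which cluster to use:
echo 'CLUSTER=cephprod' >> /etc/sysconfig/ceph

# start the gateway and watch its log:
systemctl start ceph-radosgw@radosgw.gps-prod-1
tail -f /var/log/ceph/cephprod-client.radosgw.gps-prod-1.log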

[ceph-users] OSDs down and out of cluster

2014-11-25 Thread Gagandeep Arora
Hello, We are running a 6-node ceph cluster, version 0.80.7, operating system CentOS 7. The OSDs on one node are not getting marked up and in. I have started/restarted the OSDs a couple of times with no luck. All the OSDs log the following message:

2014-11-25 08:36:04.150120 7f9ff676f700 0 -- 192.1
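For this kind of "restarts but never comes up/in" symptom, the usual first pass is to compare what the monitors think with what the node itself logs; a sketch using standard firefly-era commands (the osd id is illustrative):

# which OSDs do the monitors consider down/out, and on which host:
ceph osd tree
ceph -s

# follow one affected OSD's log while restarting it:
service ceph start osd.12
tail -f /var/log/ceph/ceph-osd.12.log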

[ceph-users] tgt rbd

2014-11-05 Thread Gagandeep Arora
Hello, I am running ceph firefly with the cluster name cephprod and trying to create a LUN with the following options:

tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
  --backing-store iscsi-spin/test-dr --bstype rbd \
  --bsopts="conf=/etc/ceph/cephprod.conf"

but it fails with the error
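When this fails, the first thing to rule out is a tgtd build that lacks the rbd backing store; the system-mode show below is a standard tgtadm call (the grep is just a convenience, output format varies by tgt version):

# confirm 'rbd' appears in the list of supported backing stores:
tgtadm --lld iscsi --mode system --op show | grep -i rbd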

[ceph-users] Emperor Upgrade: osds not starting

2014-01-16 Thread Gagandeep Arora
Hello, OSDs are not starting on any of the nodes after I upgraded ceph 0.67.4 to emperor 0.72.2. I tried to start an OSD; see the following verbose output. The same error comes up on all nodes when starting OSDs.

[root@ceph2 ~]# service ceph -v start osd.20
/usr/bin/ceph-conf -c /etc/ceph/ceph.conf -n
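The sysvinit script drives startup through ceph-conf lookups, so replaying them by hand often shows exactly where it stalls; a sketch (osd.20 is taken from the output above, and the queried key is just an example of what the script asks for):

# list the osd sections the init script will iterate over:
/usr/bin/ceph-conf -c /etc/ceph/ceph.conf -l osd

# look up a value for the failing daemon the same way the script does:
/usr/bin/ceph-conf -c /etc/ceph/ceph.conf -n osd.20 'host'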