Re: [ceph-users] MDS damaged

2017-10-26 Thread Ronny Aasen
if you were following this page: http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-pg/ then there are normally hours of troubleshooting in the following paragraph before finally admitting defeat and marking the object as lost: "It is possible that there are other locations
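For context, the "mark as lost" step that the troubleshooting page treats as a last resort comes down to two commands (a sketch only; the PG id `2.5` is a placeholder — substitute the PG reported by `ceph health detail`, and run this only against a live cluster after exhausting recovery options):

```shell
# Inspect which objects the PG considers unfound before giving up on them.
ceph pg 2.5 list_unfound

# Last resort: mark the unfound objects lost. "revert" rolls back to a
# previous version if one exists; "delete" forgets the object entirely.
ceph pg 2.5 mark_unfound_lost revert
```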

[ceph-users] crush optimize does not work

2017-10-26 Thread Stefan Priebe - Profihost AG
Hello, while trying to optimize a Ceph cluster running Jewel I get the following output: 2017-10-26 10:43:27,615 argv = optimize --crushmap /home/spriebe/ceph.report --out-path /home/spriebe/optimized.crush --pool 5 --pool=5 --choose-args=5 --replication-count=3 --pg-num=4096 --pgp-num=4096 --rule

Re: [ceph-users] s3 bucket permishions

2017-10-26 Thread Abhishek Lekshmanan
nigel davies writes: > I am following a guide at the mo. > > But I believe it's RGW users We have support for AWS-like bucket policies, http://docs.ceph.com/docs/master/radosgw/bucketpolicy/ Some permissions can also be controlled by ACLs > > On 25 Oct 2017 5:29 pm, "David Turner" wr
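As a sketch of the bucket-policy route mentioned above (bucket name `mybucket` and user id `otheruser` are placeholders; see the radosgw bucketpolicy docs linked in the reply for the supported subset):

```shell
# Write an AWS-style policy granting another radosgw user read access,
# then attach it to the bucket with s3cmd.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/otheruser"]},
    "Action": ["s3:ListBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}
EOF
s3cmd setpolicy policy.json s3://mybucket
```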

Re: [ceph-users] s3 bucket permishions

2017-10-26 Thread nigel davies
Thanks, I spotted this; when I run the example I get ERROR: S3 error: 400 (InvalidArgument) I found that bucket link will link my buckets to different users (which is what I am kind of after). But I also like to make sure, if a new user was added, they have no access to any buckets until I allow

Re: [ceph-users] Lots of reads on default.rgw.usage pool

2017-10-26 Thread Mark Schouten
Setting rgw_enable_usage_log is not even helping; I still get a lot of reads, caused by the calls in my previous email. Kind regards, --  Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ Mark Schouten | Tuxis Internet Engineering KvK: 61527076 | http://www.tuxis.nl/ T: 0
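For reference, the usage log is toggled per RGW instance in ceph.conf (a config sketch; the instance name is a placeholder, and the radosgw must be restarted for the change to take effect):

```ini
[client.rgw.gateway1]
rgw enable usage log = false
```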

Re: [ceph-users] MDS damaged

2017-10-26 Thread Daniel Davidson
I increased the logging of the mds to try and get some more information.  I think the relevant lines are: 2017-10-26 05:03:17.661683 7f1c598a6700  0 mds.0.cache.dir(607) _fetched missing object for [dir 607 ~mds0/stray7/ [2,head] auth v=108918871 cv=0/0 ap=1+0+0 state=1610645632 f(v1 m2017-10-

Re: [ceph-users] MDS damaged

2017-10-26 Thread Daniel Davidson
And at the risk of bombing the mailing list, I can also see that the stray7_head omapkey is not being recreated: rados -p igbhome_data listomapkeys 100. stray0_head stray1_head stray2_head stray3_head stray4_head stray5_head stray6_head stray8_head stray9_head On 10/26/2017 05:08 AM, D

Re: [ceph-users] ceph zstd not for bluestor due to performance reasons

2017-10-26 Thread Sage Weil
On Thu, 26 Oct 2017, Stefan Priebe - Profihost AG wrote: > Hi Sage, > > Am 25.10.2017 um 21:54 schrieb Sage Weil: > > On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote: > >> Hello, > >> > >> in the lumious release notes is stated that zstd is not supported by > >> bluestor due to performance

Re: [ceph-users] ceph zstd not for bluestor due to performance reasons

2017-10-26 Thread Sage Weil
On Thu, 26 Oct 2017, Haomai Wang wrote: > in our test, lz4 is better than snappy Let's switch the default then? sage ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
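The default being discussed is the BlueStore compressor, which is selectable in ceph.conf (a hedged config sketch; in Luminous the shipped default is snappy, and compression can also be set per pool):

```ini
[osd]
bluestore compression algorithm = lz4
bluestore compression mode = aggressive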

Re: [ceph-users] MDS damaged

2017-10-26 Thread John Spray
On Thu, Oct 26, 2017 at 12:40 PM, Daniel Davidson wrote: > And at the risk of bombing the mailing list, I can also see that the > stray7_head omapkey is not being recreated: > rados -p igbhome_data listomapkeys 100. > stray0_head > stray1_head > stray2_head > stray3_head > stray4_head > st

[ceph-users] Ceph Tech Talk Cancelled

2017-10-26 Thread Leonardo Vaz
Hey Cephers, Sorry for the short notice, but the Ceph Tech Talk for October (scheduled for today) has been canceled. Kindest regards, Leo -- Leonardo Vaz Ceph Community Manager Open Source and Standards Team

[ceph-users] Ceph Developers Monthly - November

2017-10-26 Thread Leonardo Vaz
Hey Cephers, This is just a friendly reminder that the next Ceph Developer Monthly meeting is coming up: http://wiki.ceph.com/Planning If you have work that you're doing that is feature work, significant backports, or anything you would like to discuss with the core team, please add it to the

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-26 Thread Russell Glaue
On Wed, Oct 25, 2017 at 7:09 PM, Maged Mokhtar wrote: > It depends on what stage you are in: > in production, probably the best thing is to setup a monitoring tool > (collectd/grahite/prometheus/grafana) to monitor both ceph stats as well > as resource load. This will, among other things, show yo

[ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread GiangCoi Mr
Hi all, I am installing Ceph Luminous on Fedora 26. I installed Ceph successfully, but when I install the ceph mon it errors: it doesn't find client.admin.keyring. How can I fix it? Thanks so much. Regards, GiangLT Sent from my iPhone

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread Alan Johnson
If using defaults try chmod +r /etc/ceph/ceph.client.admin.keyring -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of GiangCoi Mr Sent: Thursday, October 26, 2017 11:09 AM To: ceph-us...@ceph.com Subject: [ceph-users] Install Ceph on Fedora 26 H

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread GiangCoi Mr
Dear Alan Johnson, I installed with the command: ceph-deploy install ceph-node1 --no-adjust-repos. When the install succeeds, I run the command: ceph-deploy mon ceph-node1, and it errors because it didn't find the file ceph.client.admin.keyring. So how do I set permissions for this file? Sent from my iPhone > On Oct 26,

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread Denes Dolhay
Hi, Did you create a cluster first? ceph-deploy new {initial-monitor-node(s)} Cheers, Denes. On 10/26/2017 05:25 PM, GiangCoi Mr wrote: Dear Alan Johnson I install with command: ceph-deploy install ceph-node1 —no-adjust-repos. When install success, I run command: ceph-deploy mon ceph-nod

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread GiangCoi Mr
Hi Denes. I created with command: ceph-deploy new ceph-node1 Sent from my iPhone > On Oct 26, 2017, at 10:34 PM, Denes Dolhay wrote: > > Hi, > Did you to create a cluster first? > > ceph-deploy new {initial-monitor-node(s)} > > Cheers, > Denes. >> On 10/26/2017 05:25 PM, GiangCoi Mr wrote: >>

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread Denes Dolhay
Hi, If you ssh to ceph-node1, what are the rights, owner, group, content of /etc/ceph/ceph.client.admin.keyring ? [you should mask out the key, just show us that it is there] On 10/26/2017 05:41 PM, GiangCoi Mr wrote: Hi Denes. I created with command: ceph-deploy new ceph-node1 Sent from m
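The check Denes asks for, as commands (run on ceph-node1; the path is the default location for the admin keyring):

```shell
# Inspect owner, group, and mode of the admin keyring.
ls -l /etc/ceph/ceph.client.admin.keyring

# If it exists but is unreadable by your user, make it world-readable
# (fine for a test cluster; lock it down again in production).
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```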

Re: [ceph-users] MDS damaged

2017-10-26 Thread Daniel Davidson
Thanks John.  It has been up for a few hours now, and I am slowly adding more workload to it over time, just so I can see what is going on better. I was wondering, since this object is used to delete data, if there was a chance that deleting data from the system could cause it to be used and t

Re: [ceph-users] s3 bucket permishions

2017-10-26 Thread nigel davies
Thanks all for offering input. I believe I worked it out :D You can set permissions using s3cmd. On Thu, Oct 26, 2017 at 10:20 AM, nigel davies wrote: > Thanks i spotted this, when i run the example i get > ERROR: S3 error: 400 (InvalidArgument) > > I found that bucket link will, link my buckets
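One way to do this with s3cmd, for the record (a sketch; `mybucket` and `otheruser` are placeholders, and the grantee must be the user's canonical id in radosgw):

```shell
# Grant another user read access to a bucket via its ACL.
s3cmd setacl s3://mybucket --acl-grant=read:otheruser

# Grant full control over the bucket and all existing objects in it.
s3cmd setacl s3://mybucket --acl-grant=full_control:otheruser --recursive
```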

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-26 Thread Maged Mokhtar
I hope the firmware update fixes things for you. Regarding monitoring: if your tool is able to record disk busy %, IOPS, and throughput then you do not need to run atop. I still highly recommend you run the fio SSD test for sync writes: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-
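The sync-write test from the linked blog post is essentially a single fio invocation (destructive when pointed at a raw device, so use a scratch SSD; `/dev/sdX` is a placeholder):

```shell
# Measure single-threaded 4k sync-write IOPS -- the workload a Ceph
# journal puts on an SSD. Good journal SSDs sustain thousands of IOPS here.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
```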

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-26 Thread Gerhard W. Recher
Would be nice to see your output of: rados bench -p rbd 60 write --no-cleanup -t 56 -b 4096 -o 1M Total time run: 60.005452 Total writes made: 438295 Write size: 4096 Object size: 1048576 Bandwidth (MB/sec): 28.5322 Stddev Bandwidth: 0.514721 Max ban

Re: [ceph-users] Hammer to Jewel Upgrade - Extreme OSD Boot Time

2017-10-26 Thread Chris Jones
The long-running functionality appears to be related to clear_temp_objects() in OSD.cc, called from init(). What is this functionality intended to do? Is it required to run on every OSD startup? Are there any configuration settings that would help speed this up? ---

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-26 Thread Christian Wuerdig
Hm, not necessarily directly related to your performance problem, however: these SSDs have a listed endurance of 72 TB total data written - over a 5-year period that's 40 GB a day, or approx 0.04 DWPD. Given that you run the journal for each OSD on the same disk, that's effectively at most 0.02 DWPD (a
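The endurance arithmetic above, spelled out (assuming a 1 TB drive capacity, which is the assumption that makes ~40 GB/day come out to ~0.04 DWPD; adjust CAP_GB for the real model):

```shell
# Sanity-check the endurance figures: 72 TBW spread over a 5-year warranty.
TBW=72
YEARS=5
CAP_GB=1000
GB_PER_DAY=$(awk -v t="$TBW" -v y="$YEARS" 'BEGIN { printf "%.1f", t*1000/(y*365) }')
DWPD=$(awk -v g="$GB_PER_DAY" -v c="$CAP_GB" 'BEGIN { printf "%.2f", g/c }')
echo "${GB_PER_DAY} GB/day ~= ${DWPD} DWPD"
```

Halving that again for the co-located journal (every write lands twice on the same disk) gives the ~0.02 DWPD figure in the message.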

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread GiangCoi Mr
Hi Denes Dolhay, This is the error when I run the command: ceph-deploy mon create-initial [ceph_deploy.mon][INFO ] mon.ceph-node1 monitor has reached quorum! [ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum [ceph_deploy.mon][INFO ] Running gatherkeys... [ceph_deploy.gath