Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-14 Thread Adam Heczko
Hi folks, OTOH BTRFS warns users about the 'nobarrier' mount option [1]: 'Do not use device barriers. NOTE: Using this option greatly increases the chances of you experiencing data corruption during a power failure situation. This means full file-system corruption, and not just losing or corrupting
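For reference, this is the shape of the option being warned about — a sketch only, with an illustrative device and OSD mount point, not a recommendation:

    # Barriers disabled (what the btrfs wiki warns against); device and path are illustrative
    mount -t btrfs -o rw,noatime,nobarrier /dev/sdb1 /var/lib/ceph/osd/ceph-0

    # Default behaviour keeps barriers enabled
    mount -t btrfs -o rw,noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0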

[ceph-users] Cache tier full not evicting

2015-09-14 Thread deeepdish
Hi Everyone, I'm getting close to cracking my understanding of cache tiering and EC pools, but I'm stuck on one anomaly which I do not understand — I've spent hours reviewing the docs online and can't seem to pinpoint what I'm doing wrong. Referencing http://ceph.com/docs/master/rados/operations/cache-tiering/

Re: [ceph-users] Cache tier full not evicting

2015-09-14 Thread Nick Fisk
Have you set target_max_bytes? Otherwise those ratios are not relative to anything; they use target_max_bytes as the maximum, not the pool size. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of deeepdish Sent: 14 September 2015 16:27 To: ceph-users@lists.ceph.com
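For reference, a minimal sketch of the commands involved (pool name and sizes are illustrative):

    # The flush/evict ratios are evaluated against target_max_bytes (and/or
    # target_max_objects), so a cap must be set for them to have any effect.
    ceph osd pool set cachepool target_max_bytes 1099511627776   # 1 TiB cap
    ceph osd pool set cachepool cache_target_dirty_ratio 0.4     # start flushing at 40% of the cap
    ceph osd pool set cachepool cache_target_full_ratio 0.8      # start evicting at 80% of the cap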

[ceph-users] OSD refuses to start after problem with adding monitors

2015-09-14 Thread Chang, Fangzhe (Fangzhe)
Hi, I started a new Ceph cluster with a single instance, and later added two new OSDs on different machines using ceph-deploy. The OSD data directories reside on a separate disk from the conventional /var/local/ceph/osd- directory. Correspondingly, I changed the replication factor (size) to 3
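For reference, a sketch of the pool-level change implied here (pool name illustrative):

    # Raise the replication factor on each pool after adding the new OSD hosts
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2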

[ceph-users] Starting a Non-default Cluster at Machine Startup

2015-09-14 Thread John Cobley
Hi All, Hope someone can point me in the right direction here. I've been following the instructions for the manual deployment of a Ceph cluster here - http://docs.ceph.com/docs/master/install/manual-deployment/ . All is going OK; however, we are setting up our cluster with a non-default
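For reference, a sketch assuming a hypothetical cluster name "mycluster" (expecting /etc/ceph/mycluster.conf); the daemons themselves accept a --cluster argument, which is what the startup integration ultimately has to pass through:

    # Start daemons manually against a non-default cluster name
    ceph-mon --cluster mycluster -i node1
    ceph-osd --cluster mycluster -i 0

    # Client tools take the same option
    ceph --cluster mycluster -s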

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-14 Thread Gregory Farnum
It's been a while since I looked at this, but my recollection is that the FileStore will check if it should split on every object create, and will check if it should merge on every delete. It's conceivable it checks for both whenever the number of objects changes, though, which would make things
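For reference, the tunables in question and the commonly cited split point, as a sketch (osd.0 is illustrative):

    # Inspect the current thresholds via the admin socket
    ceph daemon osd.0 config get filestore_merge_threshold
    ceph daemon osd.0 config get filestore_split_multiple

    # A PG collection directory is usually described as splitting once it holds roughly
    #   filestore_split_multiple * abs(filestore_merge_threshold) * 16
    # objects, and merging when it falls below the merge threshold.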

Re: [ceph-users] Query about contribution regarding monitoring of Ceph Object Storage

2015-09-14 Thread Gregory Farnum
On Sat, Sep 12, 2015 at 6:13 AM, pragya jain wrote: > Hello all > > I am carrying out research in the area of cloud computing under Department > of CS, University of Delhi. I would like to contribute my research work > regarding monitoring of Ceph Object Storage to the Ceph

Re: [ceph-users] CephFS and caching

2015-09-14 Thread Gregory Farnum
On Thu, Sep 10, 2015 at 1:07 PM, Kyle Hutson wrote: > A 'rados -p cachepool ls' takes about 3 hours - not exactly useful. > > I'm intrigued that you say a single read may not promote it into the cache. > My understanding is that if you have an EC-backed pool the clients can't
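For reference, a sketch of the cache-pool settings that govern whether a single read promotes an object (pool name and values illustrative; min_read_recency_for_promote is a Hammer-era pool option):

    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool hit_set_count 4
    ceph osd pool set cachepool hit_set_period 1200
    # Higher recency values require an object to appear in more recent hit sets
    # before a read will promote it into the cache tier
    ceph osd pool set cachepool min_read_recency_for_promote 1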

Re: [ceph-users] Question on cephfs recovery tools

2015-09-14 Thread Shinobu Kinjo
Thank you for getting back to me with more detail. My understanding is that what you would like to know is: 1. How to recover broken metadata / data. 2. How to avoid the same condition next time. Regarding No. 2, the developers should have this responsibility, because you cannot do anything once

Re: [ceph-users] Question on cephfs recovery tools

2015-09-14 Thread Goncalo Borges
Hello John... Thank you for the replies. I do have some comments inline. Bear with me a bit while I give you some context; the questions will appear at the end. 1) I am currently running Ceph 9.0.3, which I installed to test the cephfs recovery tools. 2) I've created a situation where
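For reference, a sketch of the recovery tooling being tested here; exact invocations vary by release, and the data-pool name is illustrative:

    # Inspect and, if needed, repair the MDS journal
    cephfs-journal-tool journal inspect
    cephfs-journal-tool event recover_dentries summary
    cephfs-journal-tool journal reset

    # Rebuild metadata from the data pool (offline, with the MDS stopped)
    cephfs-data-scan scan_extents cephfs_data
    cephfs-data-scan scan_inodes cephfs_data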

Re: [ceph-users] rados bench seq throttling

2015-09-14 Thread Gregory Farnum
On Thu, Sep 10, 2015 at 1:02 PM, Deneau, Tom wrote: > Running 9.0.3 rados bench on a 9.0.3 cluster... > In the following experiments this cluster is only 2 osd nodes, 6 osds each > and a separate mon node (and a separate client running rados bench). > > I have two pools
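For reference, the usual shape of a seq test, as a sketch (pool name and durations illustrative):

    # seq reads need objects left behind by a prior write pass
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    # remove the benchmark objects afterwards
    rados -p testpool cleanup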

Re: [ceph-users] [SOLVED] Cache tier full not evicting

2015-09-14 Thread deeepdish
Thanks Nick. That did it! The cache cleans itself up now. > On Sep 14, 2015, at 11:49 , Nick Fisk wrote: > > Have you set the target_max_bytes? Otherwise those ratios are not relative to > anything, they use the target_max_bytes as a max, not the pool size. > > From:

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-14 Thread Christian Balzer
Hello, Firstly, thanks to Richard for getting back to us about this. On Mon, 14 Sep 2015 09:31:01 +0100 Nick Fisk wrote: > Are we sure nobarriers is safe? From what I understand barriers are > there to ensure correct ordering of writes, not just to make sure data > is flushed down to a

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-14 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Christian Balzer > Sent: 14 September 2015 09:43 > To: ceph-us...@ceph.com > Subject: Re: [ceph-users] XFS and nobarriers on Intel SSD > > > Hello, > > Firstly thanks to Richard on

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-14 Thread Jan Schermer
I looked into this just last week. Everybody seems to think it's safe to disable barriers if you have a non-volatile cache on the block device (be it controller, drive, or SAN array); all the documentation for major databases and distributions indicates you can disable them safely in this case.

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-14 Thread Nick Fisk
Are we sure nobarriers is safe? From what I understand, barriers are there to ensure correct ordering of writes, not just to make sure data is flushed down to a non-volatile medium. Although the Intel SSDs have power loss protection, is there not a risk that the Linux scheduler might be writing
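For context, a sketch of how one might check the drive-level volatile write cache that barriers exist to flush (device name illustrative, not a recommendation):

    # Report whether the drive's volatile write cache is enabled
    hdparm -W /dev/sdc
    # Disabling it outright (hdparm -W0) is the blunt alternative some admins use
    # instead of relying on nobarrier plus power-loss protection
    hdparm -W0 /dev/sdc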

Re: [ceph-users] SOLVED: CRUSH odd bucket affinity / persistence

2015-09-14 Thread Christian Balzer
Hello, looking at your example HW configuration below, I'd suggest you scour the ML archives for some (rather recent) discussions about node size, mixing SSD and HDD pools on the same node and the performance (or lack of it when it comes to cache tiers). Christian On Sun, 13 Sep 2015 16:58:42

Re: [ceph-users] Monitor segfault

2015-09-14 Thread Eino Tuominen
Hello, I'm pretty sure I did it just like you were trying to do. The cluster has since been upgraded a couple of times. Unfortunately I can't remember when I created that particular faulty rule. -- Eino Tuominen > Kefu Chai wrote on 14 Sep 2015 at 11:57: > > Eino, >