[ceph-users] cephfs toofull

2016-08-28 Thread gjprabu
Hi All, we are new to CephFS and we have 5 OSDs, each 3.3TB in size. Around 12 TB of data has been stored so far. Unfortunately osd5 went down, and while PGs are remapped+backfilling the error below appears even though we have around 2TB of free space. Kindly provide the solution to solve …
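A likely first step here is to confirm which OSD is tripping the backfill full ratio; backfill pauses as soon as a single target OSD crosses it, even when the cluster as a whole still has free space. A minimal sketch for a Jewel-era cluster follows; the ratio values are illustrative assumptions, not settings from the thread, and raising full ratios should only ever be a temporary measure while recovery completes.

    # Per-OSD utilization and the exact health message.
    ceph osd df
    ceph health detail

    # Backfill stops for PGs whose target OSD exceeds osd_backfill_full_ratio
    # (0.85 by default); raising it slightly can let the recovery finish.
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.92'

    # On pre-Luminous releases the cluster-wide nearfull/full ratios can also
    # be adjusted if an OSD is about to hit the hard full limit.
    ceph pg set_nearfull_ratio 0.90
    ceph pg set_full_ratio 0.96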

Re: [ceph-users] Filling up ceph past 75%

2016-08-28 Thread Christian Balzer
Hello, On Sun, 28 Aug 2016 21:23:41 -0500 Sean Sullivan wrote: > I've seen it in the past in the ML but I don't remember seeing it lately. > We recently had a Ceph engineer come out from RH and he mentioned he > hasn't seen this kind of disparity either, which made me jump on here to > double check …

Re: [ceph-users] Filling up ceph past 75%

2016-08-28 Thread Sean Sullivan
I've seen it in the past in the ML but I don't remember seeing it lately. We recently had a Ceph engineer come out from RH and he mentioned he hasn't seen this kind of disparity either, which made me jump on here to double check as I thought it was a well-known thing. So I'm not crazy, and the rou…

Re: [ceph-users] My first CEPH cluster

2016-08-28 Thread Christian Balzer
Hello, On Mon, 29 Aug 2016 09:46:53 +0800 Rob Gunther wrote: > I learned of Ceph only a few weeks ago and think the concept is really cool. > > I wanted to try my hand at it, but did not want to try setting it all up > with VM boxes. > > So I put together a system using 7 physical nodes, using little ARM-based computers …

Re: [ceph-users] Filling up ceph past 75%

2016-08-28 Thread Christian Balzer
Hello, On Sun, 28 Aug 2016 14:34:25 -0500 Sean Sullivan wrote: > I was curious if anyone has filled ceph storage beyond 75%. If you (re-)search the ML archives, you will find plenty of cases like this, albeit most of them involuntary. The same goes for uneven distribution. > Admittedly we > lost a single host …

[ceph-users] what does omap do?

2016-08-28 Thread 王海涛
Hello everyone, I'm using ceph-10.1.1. When I write data to an RBD image, I found that there are many omap operations. Especially when I use a small IO block size, e.g. 1K, to write 64MB of data into the RBD image, the omap data put through RocksDB can be almost 64MB, which is a big write a…
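A quick way to see what is actually being stored via omap is to inspect the objects behind the image directly with the rados tool. The sketch below assumes a pool named rbd and an image named test; the <id> placeholder stands for the image id shown in the block_name_prefix reported by rbd info.

    # Find the objects that back the image and its header object id.
    rbd info rbd/test            # note the block_name_prefix, e.g. rbd_data.<id>
    rados -p rbd ls | head

    # Dump the omap keys/values attached to a specific object, e.g. the header,
    # to see which component is generating the omap traffic.
    rados -p rbd listomapkeys rbd_header.<id>
    rados -p rbd listomapvals rbd_header.<id>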

[ceph-users] My first CEPH cluster

2016-08-28 Thread Rob Gunther
I learned of Ceph only a few weeks ago and think the concept is really cool. I wanted to try my hand at it, but did not want to try setting it all up with VM boxes. So I put together a system using 7 physical nodes, using little ARM-based computers. I got four Banana Pi…

Re: [ceph-users] CephFS Big Size File Problem

2016-08-28 Thread Yan, Zheng
On Sun, Aug 28, 2016 at 1:57 AM, Lazuardi Nasution wrote: > Hi, > > I have retried the test, but with the FUSE CephFS client. It seems everything is > OK. Any explanation? Is the kernel CephFS client less featured (more limited) > and/or less stable than the FUSE CephFS client, like on RBD? > No idea. I never…
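For anyone who wants to reproduce the comparison, the same filesystem can be mounted with both clients side by side. A minimal sketch, assuming a monitor at 10.0.0.1, the admin user, and illustrative mount points:

    # Kernel CephFS client (behaviour depends on the running kernel version).
    sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs-kernel \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE CephFS client for the same filesystem.
    sudo ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs-fuse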

Re: [ceph-users] Intel SSD (DC S3700) Power_Loss_Cap_Test failure

2016-08-28 Thread Christian Balzer
Hello, as a follow-up, conclusion, and dire warning to all who happen to encounter this failure mode: the server with the SSD whose power-loss capacitor had failed had a religious experience 2 days ago and needed a power cycle to revive it. Now in theory the data should have been safe, as the drive had m…
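For reference, the capacitor self-test result that gives this thread its subject is exposed as a vendor SMART attribute, so it can be checked and monitored with smartctl. A small sketch; the device path is an assumption:

    # Dump the vendor SMART attributes and look for the power-loss capacitor
    # self-test result on the Intel DC-series drive.
    sudo smartctl -A /dev/sda | grep -i power_loss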

[ceph-users] Filling up ceph past 75%

2016-08-28 Thread Sean Sullivan
I was curious if anyone has filled Ceph storage beyond 75%. Admittedly we lost a single host due to power failure and are down 1 host until the replacement parts arrive, but outside of that I am seeing a disparity between the most and least full OSDs: ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR M…
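The usual way to quantify and flatten this kind of disparity is the per-OSD listing shown above plus reweighting; a minimal sketch, where the threshold, OSD id, and weight are illustrative values rather than ones from the thread:

    # Per-OSD utilization; the VAR column shows each OSD's deviation from the
    # cluster average, which is where the disparity appears.
    ceph osd df

    # Automatically nudge down OSDs that are more than 10% above the average
    # utilization (the argument is a percentage threshold, default 120).
    ceph osd reweight-by-utilization 110

    # Or adjust a single outlier by hand.
    ceph osd reweight 42 0.85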

[ceph-users] creating rados S3 gateway

2016-08-28 Thread Andrus, Brian Contractor
All, I'm having trouble using ceph-deploy to create a RADOS gateway. I initially did it and it worked, but my default pg_num was too large, so it was complaining about that. To remedy, I stopped the ceph-radosgw service and deleted the pools that were created: default.rgw.log default.rgw.gc defau…
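One way out of this situation is to pre-create the default.rgw.* pools with a deliberately small pg_num (or lower the pool defaults in ceph.conf) before redeploying the gateway. A sketch under those assumptions; the pg_num of 8 and the hostname are illustrative:

    # Recreate the RGW pools with a small pg_num so the PG-per-OSD warning
    # does not come back (add the other default.rgw.* pools from the list).
    for pool in default.rgw.log default.rgw.gc; do
        ceph osd pool create "$pool" 8 8
    done

    # Alternatively, set small pool defaults in ceph.conf before redeploying:
    #   osd pool default pg num = 8
    #   osd pool default pgp num = 8

    # Then redeploy the gateway.
    ceph-deploy rgw create rgw-node1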