Re: [ceph-users] Problem about capacity when mounting with CephFS?

2013-07-15 Thread Ta Ba Tuan
Thanks Sage, I was worried about the capacity reported when mounting CephFS. But when the disks are full, will usage show as 50% or 100% used? On 07/16/2013 11:01 AM, Sage Weil wrote: On Tue, 16 Jul 2013, Ta Ba Tuan wrote: Hi everyone. I have 83 OSDs, and every OSD has the same 2TB (total capacity is 166...
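
As an aside (a sketch, not from the thread): df on a CephFS mount reports raw cluster totals, so usage climbs toward 100% as the raw space fills, replicas included; rados df and ceph -s show the same raw numbers broken down per pool and cluster-wide.

    df -h /ceph     # raw cluster size/used, replication not subtracted
    rados df        # per-pool usage in raw bytes (each replica counted)
    ceph -s         # cluster-wide used/available summary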

Re: [ceph-users] Problem about capacity when mounting with CephFS?

2013-07-15 Thread Sage Weil
On Tue, 16 Jul 2013, Ta Ba Tuan wrote: > Hi everyone. > > I have 83 OSDs, and every OSD has the same 2TB (total capacity is 166TB). > I'm using replica size 3 for the pools ('data', 'metadata'). > > But when mounting the Ceph filesystem from somewhere (using: mount -t ceph > Monitor_IP:/ /ceph -o name=admin,s...

Re: [ceph-users] Problems with tgt with ceph support

2013-07-15 Thread Dan Mick
Apologies; I don't really understand the results. They're labeled "RBD Device /dev/rbd/rbd/iscsi-image-part1 exported with tgt" and "TGT-RBD connector". Is the former nothing to do with tgt (i.e., just the kernel block device), and the latter is stgt/tgtd? Do you interpret them to say the fir...
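
For context, a rough sketch (image, target and LUN names are made up, and exact image naming for the rbd backing store may differ by stgt version) of the two export paths being compared: the first maps the image through the kernel rbd driver and hands the block device to tgtd with its default rdwr store, the second uses stgt's rbd backing-store type and talks to the cluster directly.

    # path 1: kernel rbd block device exported through tgt
    rbd map rbd/iscsi-image
    tgtadm --lld iscsi --mode target --op new --tid 1 \
        --targetname iqn.2013-07.example.com:kernel-rbd
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
        --backing-store /dev/rbd/rbd/iscsi-image

    # path 2: stgt's rbd backing store, no kernel mapping involved
    tgtadm --lld iscsi --mode target --op new --tid 2 \
        --targetname iqn.2013-07.example.com:bs-rbd
    tgtadm --lld iscsi --mode logicalunit --op new --tid 2 --lun 1 \
        --backing-store iscsi-image --bstype rbd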

[ceph-users] Problem about capacity when mounting with CephFS?

2013-07-15 Thread Ta Ba Tuan
Hi everyone. I have 83 OSDs, and every OSD has the same 2TB (total capacity is 166TB). I'm using replica size 3 for the pools ('data', 'metadata'). But when mounting the Ceph filesystem from somewhere (using: mount -t ceph Monitor_IP:/ /ceph -o name=admin,secret=xx"), the capacity summary shown is...
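
For reference, a minimal sketch of the mount described above (monitor address and key location are placeholders); the size df reports for the mount is the raw 166TB total, not a post-replication figure.

    # secretfile keeps the key off the command line
    mount -t ceph Monitor_IP:6789:/ /ceph -o name=admin,secretfile=/etc/ceph/admin.secret
    df -h /ceph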

[ceph-users] OSD recovery failed because of "leveldb: Corruption : checksum mismatch"

2013-07-15 Thread wanhai_zhu
Dear guys: I have a Ceph cluster used as the backend storage for KVM guests. The cluster has four nodes, and each node has three disks. The Ceph version is 0.61.4. Because of a power outage, the cluster was shut down uncleanly several days ago. When I restarted...
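
Not an answer from the thread, but the recovery path usually suggested when a single OSD's store is corrupted beyond repair and its PGs still have healthy replicas elsewhere is to discard and re-create that OSD and let backfill repopulate it; a hedged sketch, with osd.7 as a stand-in for the broken daemon:

    ceph osd out 7                          # stop sending it data
    ceph osd lost 7 --yes-i-really-mean-it  # only if replicas elsewhere are intact
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm 7
    # re-create the OSD on a freshly prepared disk, then watch recovery
    ceph -w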

[ceph-users] OSD crash upon pool creation

2013-07-15 Thread Andrey Korolyov
Hello, Using db2bb270e93ed44f9252d65d1d4c9b36875d0ea5 I observed some disaster-like behavior after a ``pool create'' command - every OSD daemon in the cluster died at least once (some crashed several times in a row after being brought back). Please take a look at the backtraces (almost identical) below...
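
A small, generic sketch (not from the original report) of reproducing and collecting what such a report needs: a scratch pool to trigger the crash, plus the matching assert backtraces from the OSD logs.

    ceph osd pool create crashtest 128 128
    ceph osd tree | grep down
    grep -A 40 'FAILED assert' /var/log/ceph/ceph-osd.*.log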

Re: [ceph-users] all OSDs crash on start

2013-07-15 Thread Vladislav Gorbunov
ruleset 3 is:

    rule iscsi {
            ruleset 3
            type replicated
            min_size 1
            max_size 10
            step take iscsi
            step chooseleaf firstn 0 type datacenter
            step chooseleaf firstn 0 type host
            step emit
    }

2013/7/16 Vladislav Gorbunov : > sorry, after I tried...
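
As an aside (not from the thread): a rule like this can be exercised offline with crushtool test mappings before pointing a live pool at it; map.txt below is a placeholder for the decompiled CRUSH map.

    crushtool -c map.txt -o map.bin                  # compile the edited map
    crushtool -i map.bin --test --rule 3 --num-rep 3 \
        --show-statistics --show-bad-mappings        # exercise ruleset 3

It may also be worth noting that stacking two chooseleaf steps is unusual; the more common two-level pattern is a 'step choose ... type datacenter' followed by 'step chooseleaf ... type host', since chooseleaf already descends to leaf devices.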

Re: [ceph-users] all OSDs crash on start

2013-07-15 Thread Vladislav Gorbunov
Sorry, after I tried to apply CRUSH ruleset 3 (iscsi) to pool iscsi: ceph osd pool set iscsi crush_ruleset 3 2013/7/16 Vladislav Gorbunov : >>Have you run this crush map through any test mappings yet? > Yes, it worked on the test cluster, and after applying the map to the main cluster. > The OSD servers went down after...

Re: [ceph-users] all OSDs crash on start

2013-07-15 Thread Vladislav Gorbunov
>Have you run this crush map through any test mappings yet? Yes, it worked on the test cluster, and after applying the map to the main cluster. The OSD servers went down after I tried to apply CRUSH ruleset 3 (iscsi) to pool iscsi: ceph osd pool set data crush_ruleset 3 2013/7/16 Gregory Farnum : > It's probably not th...

Re: [ceph-users] all OSDs crash on start

2013-07-15 Thread Gregory Farnum
It's probably not the same issue as that ticket, which was about the OSD handling a lack of output incorrectly. (It might be handling the output incorrectly in some other way, but hopefully not...) Have you run this crush map through any test mappings yet? -Greg Software Engineer #42 @ http://inkt...

Re: [ceph-users] Num of PGs

2013-07-15 Thread Gregory Farnum
On Mon, Jul 15, 2013 at 1:30 AM, Stefan Priebe - Profihost AG wrote: > On 15.07.2013 10:19, Sylvain Munaut wrote: >> Hi, >> >> I'm curious what the official recommendation would be when you >> have multiple pools. >> In total we have 21 pools, and that leads to around 12000 PGs for only 24 OSD...

Re: [ceph-users] Including pool_id in the crush hash ? FLAG_HASHPSPOOL ?

2013-07-15 Thread Gregory Farnum
On Mon, Jul 15, 2013 at 2:03 AM, Sylvain Munaut wrote: > Hi, > >>> I'd like the pool_id to be included in the hash used for the PG, to >>> try and improve the data distribution. (I have 10 pools.) >>> >>> I see that there is a flag named FLAG_HASHPSPOOL. Is it possible to >>> enable it on existing...

Re: [ceph-users] Lack of jessie debian repository

2013-07-15 Thread Gary Lowell
Hi Maciej - We will be adding Jessie eventually, but probably not before it is officially released. Current policy is to support the latest two releases of the distros. We would welcome a patch for the cookbooks if you would like to submit one. Thanks, Gary On Jul 13, 2013, at 6:08 AM, Maci...

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-15 Thread Dzianis Kahanovich
Tom Verdaat writes: >3. Is anybody doing this already and willing to share their experience? Relatively, yes. Before Ceph I used drbd+ocfs2 (with the o2cb stack); now both of those servers run inside VMs with the same ocfs2. It behaves much the same as over drbd, but just remember to turn the rbd cache off (R...
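
A hedged sketch of what turning the rbd cache off looks like on the client side (section placement is illustrative); a client-side writeback cache is unsafe under a shared-writer filesystem like OCFS2, which is why it has to be disabled here.

    # /etc/ceph/ceph.conf on the host running the VMs
    [client]
        rbd cache = false

If the images are attached through qemu/libvirt, cache='none' on the disk definition is the usual companion setting.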

Re: [ceph-users] Problems with tgt with ceph support

2013-07-15 Thread Toni F. [ackstorm]
Here are my results. The performance test was sequential write. On 15/07/13 10:12, Toni F. [ackstorm] wrote: I'm going to do a performance test with fio to see the difference. Regards On 12/07/13 18:15, Dan Mick wrote: Ceph performance is a very very complicated subject. How does that compare to oth...
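
For reproducibility, a sketch of the kind of sequential-write fio job being described (device path, block size and queue depth are placeholders; writing to a raw device is destructive):

    fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=4M \
        --ioengine=libaio --direct=1 --iodepth=16 --runtime=60 \
        --time_based --group_reporting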

Re: [ceph-users] Including pool_id in the crush hash ? FLAG_HASHPSPOOL ?

2013-07-15 Thread Sylvain Munaut
Hi, >> I'd like the pool_id to be included in the hash used for the PG, to >> try and improve the data distribution. (I have 10 pools.) >> >> I see that there is a flag named FLAG_HASHPSPOOL. Is it possible to >> enable it on an existing pool? > > Hmm, right now there is not. :( I've made a ticket: >...
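
As an aside (not from the thread): whether a pool carries the flag can be checked in the OSD map dump; on releases where it is set, 'hashpspool' appears in the pool's flags field.

    ceph osd dump | grep '^pool'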

Re: [ceph-users] Num of PGs

2013-07-15 Thread Stefan Priebe - Profihost AG
On 15.07.2013 10:19, Sylvain Munaut wrote: > Hi, > > I'm curious what the official recommendation would be when you > have multiple pools. > In total we have 21 pools, and that leads to around 12000 PGs for only 24 OSDs. > > The 'data' and 'metadata' pools are actually unused, and then we hav...

Re: [ceph-users] Num of PGs

2013-07-15 Thread Sylvain Munaut
Hi, I'm curious what the official recommendation would be when you have multiple pools. In total we have 21 pools, and that leads to around 12000 PGs for only 24 OSDs. The 'data' and 'metadata' pools are actually unused, and then we have 9 pools of 'rgw' metadata (.rgw, .rgw.control, .users.u...
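
For reference, the rule of thumb usually cited for sizing (not an official number from this thread) is on the order of 100 PGs per OSD in total, counting replicas, with that budget split across the pools that actually hold data; the pool name below is illustrative.

    # rough budget: 24 OSDs * 100 / 3 replicas  ~=  800 PGs across all pools
    ceph osd pool create volumes 512 512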

Re: [ceph-users] Problems with tgt with ceph support

2013-07-15 Thread Toni F. [ackstorm]
I'm going to do a performance test with fio to see the difference. Regards On 12/07/13 18:15, Dan Mick wrote: Ceph performance is a very very complicated subject. How does that compare to other access methods? Say, rbd import/export for an easy test? On Jul 12, 2013 8:22 AM, "Toni F. [acks...
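
Dan's suggested baseline is easy to time (pool and image names are placeholders): exporting the image to stdout measures librbd read throughput with no iSCSI layer in the path.

    time rbd export rbd/iscsi-image - > /dev/null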