Re: [ceph-users] Encryption

2013-10-01 Thread Nicolas Thomas
Ciao Gippa,


From http://ceph.com/releases/v0-61-cuttlefish-released/
* ceph-disk: dm-crypt support for OSD disks
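
I have not tried it myself, but judging by the release notes the rough shape of
it with ceph-disk would be something like this (a sketch only; the device path
is just an example):

  # prepare an OSD whose data partition is encrypted with dm-crypt
  # (keys land under /etc/ceph/dmcrypt-keys by default, if I recall correctly)
  ceph-disk prepare --dmcrypt /dev/sdb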

Hope this helps,


On 01/10/2013 08:57, Giuseppe 'Gippa' Paterno' wrote:
> Hi!
> Maybe an FAQ, but is encryption of data available (or will it be available)
> in Ceph at the storage level?
> Thanks,
> Giuseppe
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Best Regards,
  Nicolas Thomas EMEA Sales Engineer - Canonical
Planned absence: Nov 1st to Tue 12th
GPG FPR: D592 4185 F099 9031 6590 6292 492F C740 F03A 7EB9



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple kernel RBD clients failures

2013-10-01 Thread Travis Rhoden
Eric,

Yeah, your OSD weights are a little crazy...

For example, looking at one host from your output of "ceph osd tree"...

-3      31.5            host tca23
1       3.63                    osd.1   up      1
7       0.26                    osd.7   up      1
13      2.72                    osd.13  up      1
19      2.72                    osd.19  up      1
25      0.26                    osd.25  up      1
31      3.63                    osd.31  up      1
37      2.72                    osd.37  up      1
43      0.26                    osd.43  up      1
49      3.63                    osd.49  up      1
55      0.26                    osd.55  up      1
61      3.63                    osd.61  up      1
67      0.26                    osd.67  up      1
73      3.63                    osd.73  up      1
79      0.26                    osd.79  up      1
85      3.63                    osd.85  up      1

osd.7 is set to 0.26, with others set to > 3.  Under normal
circumstances, the rule of thumb would be to set weights equal to the
disk size in TB.  So, a 2TB disk would have a weight of 2, a 1.5TB
disk == 1.5, etc.

These weights control what proportion of data is directed to each OSD.
I'm guessing you do have very different size disks, though, as it
looks like the disks that are reporting near full all have relatively
small weights (OSD 43 is at 91%, weight = 0.26).  Is this really a
260GB disk?  A mix of HDDs and SSDs?  Or maybe just a small partition?
Either way, you probably have something wrong with the weights.  I'd
look into that.  Having a single pool made of disks of such varied
size may not be a good option, but I'm not sure if that's your setup
or not.
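
If a weight really is wrong for a given disk, it can be adjusted on the fly
without editing the CRUSH map by hand.  A sketch (the value is just an example
for a ~3TB disk, not a recommendation for osd.7):

  # set osd.7's CRUSH weight to roughly its capacity in TB
  ceph osd crush reweight osd.7 2.72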

To the best of my knowledge, Ceph halts IO operations when any disk
reaches the near full scenario (85% by default).  I'm not 100% certain
on that one, but I believe that is true.
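
For reference, my understanding is that the warning threshold is "mon osd
nearfull ratio" (0.85 by default) and that writes are actually blocked once an
OSD hits "mon osd full ratio" (0.95 by default).  Both can be changed at
runtime; a sketch, shown with the default values:

  ceph pg set_nearfull_ratio 0.85
  ceph pg set_full_ratio 0.95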

Hope that helps,

 - Travis

On Tue, Oct 1, 2013 at 2:51 AM, Yan, Zheng  wrote:
> On Mon, Sep 30, 2013 at 11:50 PM, Eric Eastman  wrote:
>>
>> Thank you for the reply
>>
>>
>>> -28 == -ENOSPC (No space left on device). I think it's due to the
>>> fact that some osds are near full.
>>>
>>> Yan, Zheng
>>
>>
>> I thought that may be the case, but I would expect that ceph health would
>> tell me I had full OSDs, but it is only saying they are near full:
>>
>>
 # ceph health detail
 HEALTH_WARN 9 near full osd(s)
 osd.9 is near full at 85%
 osd.29 is near full at 85%
 osd.43 is near full at 91%
 osd.45 is near full at 88%
 osd.47 is near full at 88%
 osd.55 is near full at 94%
 osd.59 is near full at 94%
 osd.67 is near full at 94%
 osd.83 is near full at 94%
>>
>>
>
> Are these OSDs' disks smaller than the other OSDs'?  If so, you need
> to lower these OSDs' weights.
>
> Regards
> Yan, Zheng
>
>> As I still have lots of space:
>>
>>
 # ceph df
 GLOBAL:
     SIZE    AVAIL   RAW USED   %RAW USED
     249T    118T    131T       52.60

 POOLS:
     NAME        ID   USED     %USED   OBJECTS
     data        0    0        0       0
     metadata    1    0        0       0
     rbd         2    8        0       1
     rbd-pool    3    67187G   26.30   17713336
>>
>>
>> And I setup lots of Placement Groups:
>>
>> # ceph osd dump | grep 'rep size' | grep rbd-pool
>> pool 3 'rbd-pool' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
>> pg_num 4500 pgp_num 4500 last_change 360 owner 0
>>
>> Why did the OSDs fill up long before I ran out of space?
>>
>
>
>
>
>> Thanks,
>>
>> Eric
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Weird behavior of PG distribution

2013-10-01 Thread Chen, Ching-Cheng (KFRM 1)
Found some weird behavior (or at least what looks weird) with ceph 0.67.3.

I have 5 servers.  The monitor runs on server 1, and servers 2 to 5 each run
one OSD (osd.0 - osd.3).

I did a 'ceph pg dump' and can see the PGs distributed more or less randomly
across all 4 OSDs, which is the expected behavior.

However, if I bring up one OSD on the same server that runs the monitor, it
seems all PGs have their primary OSD moved to this new OSD.  After I add a new
OSD (osd.4) to the server running the monitor, the 'ceph pg dump' command shows
the acting OSDs as [4,x] for all PGs.
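
For reference, another way to spot-check a single PG's mapping (the pgid below
is just an example):

  # show the up/acting OSD set for one PG
  ceph pg map 0.1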

Is this expected behavior??

Regards,

Chen

Ching-Cheng Chen
CREDIT SUISSE
Information Technology | MDS - New York, KVBB 41
One Madison Avenue | 10010 New York | United States
Phone +1 212 538 8031 | Mobile +1 732 216 7939
chingcheng.c...@credit-suisse.com | 
www.credit-suisse.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] trouble adding OSDs - which documentation to use

2013-10-01 Thread Jogi Hofmüller
Dear all,

I am back to managing the cluster before even starting to use it on a
test host.  First of all, a question regarding the docs:

Is this [1] outdated?  If not, why are the links to chef-* not working?
Is chef-* still recommended/used?

After adding a new OSD (with ceph-deploy version 1.2.6) and starting the
daemon after a reboot of the OSD server, it complains:

root@ceph-server1:~# service ceph start
=== osd.0 ===
No filesystem type defined!

I could not find anything in the docs on how to specify the fs-type.
How is mounting the data partition usually done?  It works if I mount it
via an entry in /etc/fstab (or by hand), but then I have to maintain that
entry myself.
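
For what it is worth, with ceph-disk/ceph-deploy prepared OSDs the mounting is
normally handled by the udev/ceph-disk activation path rather than fstab.  If
you want the sysvinit script to mount the partition itself, my understanding is
that it reads per-OSD entries from ceph.conf along these lines (a sketch only;
the device path is made up and the option names are taken from the osd config
reference):

[osd.0]
    host = ceph-server1
    devs = /dev/sdb1
    osd mkfs type = xfs
    osd mount options xfs = rw,noatime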

All this is done using ceph "dumpling" installed/deployed according to
the getting started info from [2].

[1]  http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
[2]  http://ceph.com/docs/master/start/quick-ceph-deploy/

Regards!
-- 
j.hofmüller

Optimism doesn't alter the laws of physics. - Subcommander T'Pol



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Weird behavior of PG distribution

2013-10-01 Thread Mike Dawson

Ching-Cheng,

Data placement is handled by CRUSH. Please examine the following:

ceph osd getcrushmap -o crushmap && crushtool -d crushmap -o 
crushmap.txt && cat crushmap.txt


That will show the topology and placement rules Ceph is using.
Pay close attention to the "step chooseleaf" lines inside the rule for 
each pool. Under certain configurations, I believe the placement that 
you describe is in fact the expected behavior.


Thanks,

Mike Dawson
Co-Founder, Cloudapt LLC


On 10/1/2013 10:46 AM, Chen, Ching-Cheng (KFRM 1) wrote:

Found some weird behavior (or at least what looks weird) with ceph 0.67.3.

I have 5 servers.  The monitor runs on server 1, and servers 2 to 5 each
run one OSD (osd.0 - osd.3).

I did a 'ceph pg dump' and can see the PGs distributed more or less
randomly across all 4 OSDs, which is the expected behavior.

However, if I bring up one OSD on the same server that runs the monitor, it
seems all PGs have their primary OSD moved to this new OSD.  After I add a
new OSD (osd.4) to the server running the monitor, the 'ceph pg dump'
command shows the acting OSDs as [4,x] for all PGs.

Is this expected behavior??

Regards,

Chen

Ching-Cheng Chen

*CREDIT SUISSE*

Information Technology | MDS - New York, KVBB 41

One Madison Avenue | 10010 New York | United States

Phone +1 212 538 8031 | Mobile +1 732 216 7939

chingcheng.c...@credit-suisse.com
 | www.credit-suisse.com








___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw S3 permissions

2013-10-01 Thread Fuchs, Andreas (SwissTXT)
Hi
According to http://ceph.com/docs/master/radosgw/s3/, radosgw supports ACLs,
but I cannot find out how to use them.
What we need is one key/secret with read-write permission and one with
read-only permission to a certain bucket.  Is this possible?  How?
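
A sketch of what we are after, expressed with generic S3 tooling (the bucket
name and the canonical user ID are placeholders, and I have not verified this
against radosgw):

  # create a second user, then grant it read-only access to an existing bucket
  radosgw-admin user create --uid=readonly --display-name="Read-only user"
  s3cmd setacl --acl-grant=read:<canonical-user-id> s3://mybucket --recursive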

Regards
Andi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Weird behavior of PG distribution

2013-10-01 Thread Chen, Ching-Cheng (KFRM 1)
Mike:

Thanks for the reply.

However, I ran the crushtool command, but the output doesn't give me any obvious
explanation for why osd.4 should be the primary OSD for all PGs.

All the rules have "step chooseleaf firstn 0 type host".  According to the Ceph
documentation, each PG should select two buckets of type host.  And all OSDs
have the same weight/type/etc.  Why would every PG choose osd.4 as its primary
OSD?
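
If it helps, crushtool can also replay a rule against the compiled map offline,
which should show whether the map itself always picks osd.4 first.  A sketch,
assuming the compiled map from the getcrushmap command is still in the file
'crushmap' (option names are from the current docs and may differ slightly in
0.67):

  # simulate 2-replica placements for ruleset 0 against the compiled map
  crushtool -i crushmap --test --rule 0 --num-rep 2 --show-statistics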

Here is the content of my crush map.


# begin crush map

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
host gbl10134201 {
id -2   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.4 weight 0.000
}
host gbl10134202 {
id -3   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.1 weight 0.000
}
host gbl10134203 {
id -4   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.2 weight 0.000
}
host gbl10134214 {
id -5   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.3 weight 0.000
}
host gbl10134215 {
id -6   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.0 weight 0.000
}
root default {
id -1   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item gbl10134201 weight 0.000
item gbl10134202 weight 0.000
item gbl10134203 weight 0.000
item gbl10134214 weight 0.000
item gbl10134215 weight 0.000
}

# rules
rule data {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
rule metadata {
ruleset 1
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
rule rbd {
ruleset 2
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map


Regards,

Chen



-Original Message-
From: Mike Dawson [mailto:mike.daw...@cloudapt.com] 
Sent: Tuesday, October 01, 2013 11:31 AM
To: Chen, Ching-Cheng (KFRM 1); ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Weird behavior of PG distribution

Ching-Cheng,

Data placement is handled by CRUSH. Please examine the following:

ceph osd getcrushmap -o crushmap && crushtool -d crushmap -o 
crushmap.txt && cat crushmap.txt

That will show the topology and placement rules Ceph is using.
Pay close attention to the "step chooseleaf" lines inside the rule for 
each pool. Under certain configurations, I believe the placement that 
you describe is in fact the expected behavior.

Thanks,

Mike Dawson
Co-Founder, Cloudapt LLC


On 10/1/2013 10:46 AM, Chen, Ching-Cheng (KFRM 1) wrote:
> Found some weird behavior (or at least what looks weird) with ceph 0.67.3.
>
> I have 5 servers.  The monitor runs on server 1, and servers 2 to 5 each
> run one OSD (osd.0 - osd.3).
>
> I did a 'ceph pg dump' and can see the PGs distributed more or less
> randomly across all 4 OSDs, which is the expected behavior.
>
> However, if I bring up one OSD on the same server that runs the monitor, it
> seems all PGs have their primary OSD moved to this new OSD.  After I add a
> new OSD (osd.4) to the server running the monitor, the 'ceph pg dump'
> command shows the acting OSDs as [4,x] for all PGs.
>
> Is this expected behavior??
>
> Regards,
>
> Chen
>
> Ching-Cheng Chen
>
> *CREDIT SUISSE*
>
> Information Technology | MDS - New York, KVBB 41
>
> One Madison Avenue | 10010 New York | United States
>
> Phone +1 212 538 8031 | Mobile +1 732 216 7939
>
> chingcheng.c...@credit-suisse.com
>  | www.credit-suisse.com
> 
>
>
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



Re: [ceph-users] Multiple kernel RBD clients failures

2013-10-01 Thread Eric Eastman

Hi Travis,

Both you and Yan saw the same thing, in that the drives in my test
system range from 300GB to 4TB.  I used ceph-deploy to create all the
OSDs, which I assume picked the weights of 0.26 for my 300GB drives,
and 3.63 for my 4TB drives.  All the OSDs that are reporting nearly full
are the 300GB drives.  As several of my OSDs reached 94% full, I assume
that is when the krbd driver reported no more space.  It would be
nice if "ceph health detail" reported those OSDs as full and not as near
full:


# ceph health detail
HEALTH_WARN 9 near full osd(s)
osd.9 is near full at 85%
osd.29 is near full at 85%
osd.43 is near full at 91%
osd.45 is near full at 88%
osd.47 is near full at 88%
osd.55 is near full at 94%
osd.59 is near full at 94%
osd.67 is near full at 94%
osd.83 is near full at 94%

I have put into my development notes that a 10x size difference in 
drives in a pool is a bad idea.  In my next test I will run with just 
the 3TB and 4TB drives.
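
If I do end up keeping mixed sizes, I understand there is also a knob that
shifts data off the most-utilized OSDs (untested on my side):

  # reweight OSDs whose utilization exceeds 120% of the cluster average
  ceph osd reweight-by-utilization 120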


Thank you for confirming what was going on.

Eric


Eric,

Yeah, your OSD weights are a little crazy...

For example, looking at one host from your output of "ceph osd tree"...



-3      31.5            host tca23
1       3.63                    osd.1   up      1
7       0.26                    osd.7   up      1
13      2.72                    osd.13  up      1
19      2.72                    osd.19  up      1
25      0.26                    osd.25  up      1
31      3.63                    osd.31  up      1
37      2.72                    osd.37  up      1
43      0.26                    osd.43  up      1
49      3.63                    osd.49  up      1
55      0.26                    osd.55  up      1
61      3.63                    osd.61  up      1
67      0.26                    osd.67  up      1
73      3.63                    osd.73  up      1
79      0.26                    osd.79  up      1
85      3.63                    osd.85  up      1


osd.7 is set to 0.26, with others set to > 3.  Under normal
circumstances, the rule of thumb would be to set weights equal to the
disk size in TB.  So, a 2TB disk would have a weight of 2, a 1.5TB
disk == 1.5, etc.

These weights control what proportion of data is directed to each OSD.
I'm guessing you do have very different size disks, though, as it
looks like the disks that are reporting near full all have relatively
small weights (OSD 43 is at 91%, weight = 0.26).  Is this really a
260GB disk?  A mix of HDDs and SSDs?  Or maybe just a small partition?
Either way, you probably have something wrong with the weights.  I'd
look into that.  Having a single pool made of disks of such varied
size may not be a good option, but I'm not sure if that's your setup
or not.

To the best of my knowledge, Ceph halts IO operations when any disk
reaches the near full scenario (85% by default).  I'm not 100% certain
on that one, but I believe that is true.

Hope that helps,

- Travis


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Weird behavior of PG distribution

2013-10-01 Thread Aaron Ten Clay
On Tue, Oct 1, 2013 at 10:11 AM, Chen, Ching-Cheng (KFRM 1) <
chingcheng.c...@credit-suisse.com> wrote:

> Mike:
>
> Thanks for the reply.
>
> However, I ran the crushtool command, but the output doesn't give me any
> obvious explanation for why osd.4 should be the primary OSD for all PGs.
>
> All the rules have "step chooseleaf firstn 0 type host".  According to
> the Ceph documentation, each PG should select two buckets of type host.
> And all OSDs have the same weight/type/etc.  Why would every PG choose
> osd.4 as its primary OSD?
>
> Here is the content of my crush map.
>
>
> 
> # begin crush map
>
> # devices
> device 0 osd.0
> device 1 osd.1
> device 2 osd.2
> device 3 osd.3
> device 4 osd.4
>
> # types
> type 0 osd
> type 1 host
> type 2 rack
> type 3 row
> type 4 room
> type 5 datacenter
> type 6 root
>
> # buckets
> host gbl10134201 {
> id -2   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> item osd.4 weight 0.000
> }
> host gbl10134202 {
> id -3   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> item osd.1 weight 0.000
> }
> host gbl10134203 {
> id -4   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> item osd.2 weight 0.000
> }
> host gbl10134214 {
> id -5   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> item osd.3 weight 0.000
> }
> host gbl10134215 {
> id -6   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> item osd.0 weight 0.000
> }
> root default {
> id -1   # do not change unnecessarily
> # weight 0.000
> alg straw
> hash 0  # rjenkins1
> item gbl10134201 weight 0.000
> item gbl10134202 weight 0.000
> item gbl10134203 weight 0.000
> item gbl10134214 weight 0.000
> item gbl10134215 weight 0.000
> }
>
> # rules
> rule data {
> ruleset 0
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
> rule metadata {
> ruleset 1
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
> rule rbd {
> ruleset 2
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
>
> # end crush map
>
>
> 
> Regards,
>
> Chen
>


You have a weight of 0 for all your OSDs... I bet that's confusing ceph
just a bit. I believe the baseline recommendation is to use the size of the
OSD in TiB as the starting weight for each OSD, and each level up the tree
you use the combined weight of everything below it (although with a single
OSD per host you don't need to worry about that yet.)

You might want to try e.g.

...
host gbl10134215 {
id -6   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.0 weight 3.630
}
root default {
id -1   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item gbl10134201 weight 0.000
item gbl10134202 weight 0.000
item gbl10134203 weight 0.000
item gbl10134214 weight 0.000
item gbl10134215 weight 3.630
}
...

(If you have 4TB disks)

Once you edit the crush map, you compile it with 'crushtool -c <plain-text map>
-o <compiled map>', then set that map active on the cluster with 'ceph osd
setcrushmap -i <compiled map>'.

You can read more at http://ceph.com/docs/master/rados/operations/crush-map/.
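
Putting the whole cycle together (filenames are arbitrary):

  ceph osd getcrushmap -o crushmap.bin       # grab the current compiled map
  crushtool -d crushmap.bin -o crushmap.txt  # decompile to text for editing
  # ... edit crushmap.txt, e.g. set the item weights shown above ...
  crushtool -c crushmap.txt -o crushmap.new  # recompile
  ceph osd setcrushmap -i crushmap.new       # inject the new map

If memory serves, 'ceph osd crush reweight osd.N <weight>' will also handle
simple weight changes without the decompile/recompile round trip.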

-Aaron
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Weird behavior of PG distribution

2013-10-01 Thread Chen, Ching-Cheng (KFRM 1)
Aaron:

Bingo!

All my 5 VMs are exactly the same setup, so I didn't bother with the weight
setting, thinking that with everything at 0.000 they would all be treated
equally.

After following your suggestion and putting in some numbers (I made them all
0.200), I got the expected behavior.


Really appreciated,

Ching-Cheng Chen
MDS - New York
+1 212 538 8031 (*106 8031)

From: Aaron Ten Clay [mailto:aaro...@aarontc.com]
Sent: Tuesday, October 01, 2013 2:53 PM
To: Chen, Ching-Cheng (KFRM 1)
Cc: Mike Dawson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Weird behavior of PG distribution


On Tue, Oct 1, 2013 at 10:11 AM, Chen, Ching-Cheng (KFRM 1)
<chingcheng.c...@credit-suisse.com> wrote:
Mike:

Thanks for the reply.

However, I ran the crushtool command, but the output doesn't give me any obvious
explanation for why osd.4 should be the primary OSD for all PGs.

All the rules have "step chooseleaf firstn 0 type host".  According to the Ceph
documentation, each PG should select two buckets of type host.  And all OSDs
have the same weight/type/etc.  Why would every PG choose osd.4 as its primary
OSD?

Here is the content of my crush map.


# begin crush map

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
host gbl10134201 {
id -2   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.4 weight 0.000
}
host gbl10134202 {
id -3   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.1 weight 0.000
}
host gbl10134203 {
id -4   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.2 weight 0.000
}
host gbl10134214 {
id -5   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.3 weight 0.000
}
host gbl10134215 {
id -6   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.0 weight 0.000
}
root default {
id -1   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item gbl10134201 weight 0.000
item gbl10134202 weight 0.000
item gbl10134203 weight 0.000
item gbl10134214 weight 0.000
item gbl10134215 weight 0.000
}

# rules
rule data {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
rule metadata {
ruleset 1
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
rule rbd {
ruleset 2
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map


Regards,

Chen

You have a weight of 0 for all your OSDs... I bet that's confusing ceph just a 
bit. I believe the baseline recommendation is to use the size of the OSD in TiB 
as the starting weight for each OSD, and each level up the tree you use the 
combined weight of everything below it (although with a single OSD per host you 
don't need to worry about that yet.)

You might want to try e.g.

...
host gbl10134215 {
id -6   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item osd.0 weight 3.630
}
root default {
id -1   # do not change unnecessarily
# weight 0.000
alg straw
hash 0  # rjenkins1
item gbl10134201 weight 0.000
item gbl10134202 weight 0.000
item gbl10134203 weight 0.000
item gbl10134214 weight 0.000
item gbl10134215 weight 3.630
}
...
(If you have 4TB disks)
Once you edit the crush map, you compile it with 'crushtool -c <plain-text map>
-o <compiled map>', then set that map active on the cluster with 'ceph osd
setcrushmap -i <compiled map>'.
You can read more at http://ceph.com/docs/master/rados/operations/crush-map/.

-Aaron



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Client Timeout on Rados Gateway

2013-10-01 Thread Gruher, Joseph R
Hello-

I've set up a rados gateway but I'm having trouble accessing it from clients.
I can access it using the rados command line just fine from any system in my
ceph deployment, including my monitors and OSDs, the gateway system, and even
the admin system I used to run ceph-deploy.  However, when I set up a client
outside the ceph nodes I get a timeout error, as shown in the output pasted
below.  I've turned off authentication for the moment to simplify things.
Systems are able to resolve names and reach each other via ping.  Any
thoughts on what could be the issue here or how to debug?
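
One quick check that would separate name resolution and ping from actual
monitor reachability (assuming the default mon port of 6789):

  # can the client open a TCP connection to each monitor?
  nc -vz 10.0.0.2 6789
  nc -vz 10.0.0.3 6789
  nc -vz 10.0.0.4 6789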

The failure:

ceph@cephclient01:/etc/ceph$ rados df
2013-10-01 19:57:07.488970 7fd381db0780 monclient(hunting): authenticate timed 
out after 30
2013-10-01 19:57:07.489174 7fd381db0780 librados: client.admin authentication 
error (110) Connection timed out
couldn't connect to cluster! error -110


ceph@cephclient01:/etc/ceph$ sudo rados df
2013-10-01 19:57:44.461273 7fb6712d5780 monclient(hunting): authenticate timed 
out after 30
2013-10-01 19:57:44.461440 7fb6712d5780 librados: client.admin authentication 
error (110) Connection timed out
couldn't connect to cluster! error -110
ceph@cephclient01:/etc/ceph$


Some details from the client:

ceph@cephclient01:/etc/ceph$ pwd
/etc/ceph


ceph@cephclient01:/etc/ceph$ ls
ceph.client.admin.keyring  ceph.conf  keyring.radosgw.gateway


ceph@cephclient01:/etc/ceph$ cat ceph.conf
[global]
fsid = a45e6e54-70ef-4470-91db-2152965deec5
mon_initial_members = cephtest02, cephtest03, cephtest04
mon_host = 10.0.0.2,10.0.0.3,10.0.0.4
osd_journal_size = 1024
filestore_xattr_use_omap = true
auth_cluster_required = none #cephx
auth_service_required = none #cephx
auth_client_required = none #cephx

[client.radosgw.gateway]
host = cephtest06
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_socket_path = /tmp/radosgw.sock
log_file = /var/log/ceph/radosgw.log


ceph@cephclient01:/etc/ceph$ ping cephtest06
PING cephtest06.jf.intel.com (10.23.37.175) 56(84) bytes of data.
64 bytes from cephtest06.jf.intel.com (10.23.37.175): icmp_req=1 ttl=64 
time=0.216 ms
64 bytes from cephtest06.jf.intel.com (10.23.37.175): icmp_req=2 ttl=64 
time=0.209 ms
^C
--- cephtest06.jf.intel.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.209/0.212/0.216/0.015 ms


ceph@cephclient01:/etc/ceph$ ping cephtest06.jf.intel.com
PING cephtest06.jf.intel.com (10.23.37.175) 56(84) bytes of data.
64 bytes from cephtest06.jf.intel.com (10.23.37.175): icmp_req=1 ttl=64 
time=0.223 ms
64 bytes from cephtest06.jf.intel.com (10.23.37.175): icmp_req=2 ttl=64 
time=0.242 ms
^C
--- cephtest06.jf.intel.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.223/0.232/0.242/0.017 ms


I did try putting the client on the 10.0.0.x network to see if that would 
affect behavior but that just seemed to introduce a new problem:

ceph@cephclient01:/etc/ceph$ rados df
2013-10-01 21:37:29.439410 7f60d2a43700 failed to decode message of type 59 v1: 
buffer::end_of_buffer
2013-10-01 21:37:29.439583 7f60d4a47700 monclient: hunting for new mon

ceph@cephclient01:/etc/ceph$ ceph -m 10.0.0.2 -s
2013-10-01 21:37:42.341480 7f61eacd5700 monclient: hunting for new mon
2013-10-01 21:37:45.341024 7f61eacd5700 monclient: hunting for new mon
2013-10-01 21:37:45.343274 7f61eacd5700 monclient: hunting for new mon

ceph@cephclient01:/etc/ceph$ ceph health
2013-10-01 21:39:52.833560 mon <- [health]
2013-10-01 21:39:52.834671 mon.0 -> 'unparseable JSON health' (-22)
ceph@cephclient01:/etc/ceph$
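
Side note: the decode failure and the "unparseable JSON health" message look to
me like they could come from a client/cluster version mismatch rather than
networking, so it may be worth comparing versions on the client and on a
monitor node:

  # run on the client and on a monitor node, respectively, and compare
  ceph --version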
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD: Newbie question regarding ceph-deploy odd create

2013-10-01 Thread Jogi Hofmüller
Hi Piers,

On 2013-09-27 22:59, Piers Dawson-Damer wrote:

> I'm trying to set up my first cluster (I have never manually
> bootstrapped a cluster)

I am about at the same stage here ;)

> Is ceph-deploy osd activate/prepare supposed to write specific entries
> for each OSD to the master ceph.conf file, along the lines
> of http://ceph.com/docs/master/rados/configuration/osd-config-ref/ ?

All I can say is that it does not do so.  I am still waiting for an answer to
a similar question I posed yesterday ...

I'll let you know if I get closer to solving these things ;)

Cheers!
-- 
j.hofmüller

Optimism doesn't alter the laws of physics. - Subcommander T'Pol



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com