Re: [ceph-users] Using RBD with LVM

2013-09-24 Thread Wido den Hollander

On 09/25/2013 02:00 AM, John-Paul Robinson wrote:

Hi,

I'm exploring a configuration with multiple Ceph block devices used with
LVM.  The goal is to provide a way to grow and shrink my file systems
while they are on line.

I've created three block devices:

$ sudo ./ceph-ls  | grep home
jpr-home-lvm-p01: 102400 MB
jpr-home-lvm-p02: 102400 MB
jpr-home-lvm-p03: 102400 MB

And have them mapped into my kernel (3.2.0-23-generic #36-Ubuntu SMP):

$ sudo rbd showmapped
id pool image            snap device
0  rbd  jpr-test-vol01   -    /dev/rbd0
1  rbd  jpr-home-lvm-p01 -    /dev/rbd1
2  rbd  jpr-home-lvm-p02 -    /dev/rbd2
3  rbd  jpr-home-lvm-p03 -    /dev/rbd3

In order to use them with LVM, I need to define them as physical
volumes.  But when I run this command I get an unexpected error:

$ sudo pvcreate /dev/rbd1
   Device /dev/rbd1 not found (or ignored by filtering).



Try this:

$ sudo pvcreate -vvv /dev/rbd1

It has something to do with LVM filtering the RBD devices out; you might 
need to add them manually in /etc/lvm/lvm.conf.


I've seen this before and fixed it, but I forgot what the root cause was.

Wido


I am able to use other RBD on this same machine to create file systems
directly and mount them:

$ df -h /mnt-test
Filesystem  Size  Used Avail Use% Mounted on
/dev/rbd0        50G  885M   47G   2% /mnt-test

Is there a reason that the /dev/rbd[1-2] devices can't be initialized as
physical volumes in LVM?

Thanks,

~jpr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Using RBD with LVM

2013-09-24 Thread Mandell Degerness
You need to add a line to /etc/lvm/lvm.conf:

types = [ "rbd", 1024 ]

It should be in the "devices" section of the file.
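
For context, a minimal sketch of what that would look like in /etc/lvm/lvm.conf
(everything else left at its defaults):

devices {
    # let LVM treat /dev/rbd* block devices as candidate PVs
    types = [ "rbd", 1024 ]
}

After that, pvcreate /dev/rbd1 should no longer be filtered out.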

On Tue, Sep 24, 2013 at 5:00 PM, John-Paul Robinson  wrote:
> Hi,
>
> I'm exploring a configuration with multiple Ceph block devices used with
> LVM.  The goal is to provide a way to grow and shrink my file systems
> while they are on line.
>
> I've created three block devices:
>
> $ sudo ./ceph-ls  | grep home
> jpr-home-lvm-p01: 102400 MB
> jpr-home-lvm-p02: 102400 MB
> jpr-home-lvm-p03: 102400 MB
>
> And have them mapped into my kernel (3.2.0-23-generic #36-Ubuntu SMP):
>
> $ sudo rbd showmapped
> id pool image            snap device
> 0  rbd  jpr-test-vol01   -    /dev/rbd0
> 1  rbd  jpr-home-lvm-p01 -    /dev/rbd1
> 2  rbd  jpr-home-lvm-p02 -    /dev/rbd2
> 3  rbd  jpr-home-lvm-p03 -    /dev/rbd3
>
> In order to use them with LVM, I need to define them as physical
> volumes.  But when I run this command I get an unexpected error:
>
> $ sudo pvcreate /dev/rbd1
>   Device /dev/rbd1 not found (or ignored by filtering).
>
> I am able to use other RBD on this same machine to create file systems
> directly and mount them:
>
> $ df -h /mnt-test
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/rbd0        50G  885M   47G   2% /mnt-test
>
> Is there a reason that the /dev/rbd[1-2] devices can't be initialized as
> physical volumes in LVM?
>
> Thanks,
>
> ~jpr
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Scaling RBD module

2013-09-24 Thread Somnath Roy
Hi Sage,
Thanks for your input. I will try those. Please see my response inline.

Thanks & Regards
Somnath

-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Tuesday, September 24, 2013 3:47 PM
To: Somnath Roy
Cc: Travis Rhoden; Josh Durgin; ceph-de...@vger.kernel.org; Anirban Ray; 
ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Scaling RBD module

Hi Somnath!

On Tue, 24 Sep 2013, Somnath Roy wrote:
>
> Hi Sage,
>
> We did quite a few experiment to see how ceph read performance can scale up.
> Here is the summary.
>
>
>
> 1.
>
> First we tried to see how far a single node cluster with one osd can
> scale up. We started with cuttlefish release and the entire osd file
> system is on the ssd. What we saw with 4K size object and with single
> rados client with dedicated 10G network, throughput can't go beyond a certain 
> point.

Are you using 'rados bench' to generate this load or something else?
We've noticed that individual rados bench commands do not scale beyond a point 
but have never looked into it; the problem may be in the bench code and not in 
librados or SimpleMessenger.

[Somnath] Yes, we use rados bench to generate the load and measure performance at 
the rados level.
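
For reference, the kind of invocation we use is roughly the following (pool name, 
run time and thread count are illustrative, not our exact numbers), with the 
corresponding seq read run afterwards:

$ rados bench -p testpool 60 write -t 16 -b 4096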

> We dig through the code and found out SimpleMessenger is opening
> single socket connection (per client)to talk to the osd. Also, we saw
> there is only one dispatcher Q (Dispatch thread)/ SimpleMessenger to
> carry these requests to OSD. We started adding more dispatcher threads
> in Dispatch Q, rearrange several locks in the Pipe.cc to identify the
> bottleneck. What we end up discovering is that there is bottleneck
> both in upstream as well as in the downstream at osd level and
> changing the locking scheme in io path  will affect lot of other codes (that 
> we don't even know).
>
> So, we stopped that activity and started workaround the upstream
> bottleneck by introducing more clients to the single OSD. What we saw
> single OSD is scaling with lot of cpu utilization. To produce ~40K
> iops (4K) it is taking almost 12 core of cpu.

Just to make sure I understand: the single OSD dispatch queue does not become a 
problem with multiple clients?

[Somnath] We saw that with a single client/single OSD, increasing the dispatch 
threads up to 3 gave some improvement, but not beyond 3.
This is what we were also wondering! But looking at the architecture, it seems 
that if the upstream bottleneck is removed, this might be the next bottleneck. 
The next IO request will not reach the OSD worker queue until OSD::ms_dispatch() 
completes, and there is a lot of stuff happening in that function.
At the top of the function it takes the OSD-level lock, so increasing the 
threads does not help, but I think rearranging the locks will help here.

Possibilities that come to mind:

- DispatchQueue is doing some funny stuff to keep individual clients'
messages ordered but to fairly process requests from multiple clients.
There could easily be a problem with the per-client queue portion of this.

- Pipe's use of MSG_MORE is making the TCP stream efficient... you might try 
setting 'ms tcp nodelay = false'.

[Somnath] I will try this.

- The message encode is happening in the thread that sends messages over the 
wire.  Maybe doing it in send_message() instead of writer() will keep that on a 
separate core than the thread that's shoveling data into the socket.

[Somnath] Are you suggesting we move the following code snippet from 
Pipe::writer() to SimpleMessenger::_send_message()?

// encode and copy out of *m
m->encode(connection_state->get_features(), !msgr->cct->_conf->ms_nocrc);

> Another point, I didn't see this single osd scale with the Dumpling
> release with the multiple clients !! Something changed..

What is it with dumpling?

[Somnath] We tried to compare, but there were a lot of changes, so we gave up :-( 
But I think that eventually, if we want to increase the overall throughput, we 
need to make each individual OSD efficient (both CPU and performance). So, we 
will definitely come back to this.

> 2.   After that, we setup a proper cluster with 3 high performing
> nodes and total 30 osds. Here also, we are seeing single rados bech
> client as well as rbd client instance is not scaling beyond a certain
> limit. It is not able to generate much load as node cpu utilization
> remains very low. But running multiple client instance the performance is 
> scaling till hit the cpu limit.
>
> So, it is pretty clear we are not able to saturate anything with
> single client and that's why the 'noshare' option was very helpful to
> measure the rbd performance benchmark. I have a single osd/single
> client level call grind  data attached here.

Something from perf that shows a call graph would be more helpful to identify 
where things are waiting.  We haven't done much optimizing at this level at 
all, so these results aren't entirely surprising.

[Somnath] We will get this.

> Now, I am doing the benchmark for radosgw and I think I am stuck w

[ceph-users] Using RBD with LVM

2013-09-24 Thread John-Paul Robinson
Hi,

I'm exploring a configuration with multiple Ceph block devices used with
LVM.  The goal is to provide a way to grow and shrink my file systems
while they are on line.

I've created three block devices:

$ sudo ./ceph-ls  | grep home
jpr-home-lvm-p01: 102400 MB
jpr-home-lvm-p02: 102400 MB
jpr-home-lvm-p03: 102400 MB

And have them mapped into my kernel (3.2.0-23-generic #36-Ubuntu SMP):

$ sudo rbd showmapped
id pool image            snap device
0  rbd  jpr-test-vol01   -    /dev/rbd0
1  rbd  jpr-home-lvm-p01 -    /dev/rbd1
2  rbd  jpr-home-lvm-p02 -    /dev/rbd2
3  rbd  jpr-home-lvm-p03 -    /dev/rbd3

In order to use them with LVM, I need to define them as physical
volumes.  But when I run this command I get an unexpected error:

$ sudo pvcreate /dev/rbd1
  Device /dev/rbd1 not found (or ignored by filtering).

I am able to use other RBD on this same machine to create file systems
directly and mount them:

$ df -h /mnt-test
Filesystem  Size  Used Avail Use% Mounted on
/dev/rbd0        50G  885M   47G   2% /mnt-test

Is there a reason that the /dev/rbd[1-2] devices can't be initialized as
physical volumes in LVM?

Thanks,

~jpr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Scaling RBD module

2013-09-24 Thread Sage Weil
Hi Somnath!

On Tue, 24 Sep 2013, Somnath Roy wrote:
> 
> Hi Sage,
> 
> We did quite a few experiment to see how ceph read performance can scale up.
> Here is the summary.
> 
>  
> 
> 1.
> 
> First we tried to see how far a single node cluster with one osd can scale
> up. We started with cuttlefish release and the entire osd file system is on
> the ssd. What we saw with 4K size object and with single rados client with
> dedicated 10G network, throughput can't go beyond a certain point.

Are you using 'rados bench' to generate this load or something else?  
We've noticed that individual rados bench commands do not scale beyond a 
point but have never looked into it; the problem may be in the bench code 
and not in librados or SimpleMessenger.

> We dig through the code and found out SimpleMessenger is opening single
> socket connection (per client)to talk to the osd. Also, we saw there is only
> one dispatcher Q (Dispatch thread)/ SimpleMessenger to carry these requests
> to OSD. We started adding more dispatcher threads in Dispatch Q, rearrange
> several locks in the Pipe.cc to identify the bottleneck. What we end up
> discovering is that there is bottleneck both in upstream as well as in the
> downstream at osd level and changing the locking scheme in io path  will
> affect lot of other codes (that we don't even know).
> 
> So, we stopped that activity and started workaround the upstream bottleneck
> by introducing more clients to the single OSD. What we saw single OSD is
> scaling with lot of cpu utilization. To produce ~40K iops (4K) it is taking
> almost 12 core of cpu.

Just to make sure I understand: the single OSD dispatch queue does not 
become a problem with multiple clients?

Possibilities that come to mind:

- DispatchQueue is doing some funny stuff to keep individual clients' 
messages ordered but to fairly process requests from multiple clients.  
There could easily be a problem with the per-client queue portion of this.

- Pipe's use of MSG_MORE is making the TCP stream efficient... you might 
try setting 'ms tcp nodelay = false' (a config sketch follows below).

- The message encode is happening in the thread that sends messages over 
the wire.  Maybe doing it in send_message() instead of writer() will keep 
that on a separate core than the thread that's shoveling data into the 
socket.
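
For the nodelay experiment mentioned above, the setting would presumably go into 
ceph.conf along these lines (the [global] placement is an assumption; it affects 
the userspace messenger on both OSDs and librados clients, and needs a daemon 
restart to take effect):

[global]
        ms tcp nodelay = false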

> Another point, I didn't see this single osd scale with the Dumpling release
> with the multiple clients !! Something changed..

What is it with dumpling?

> 2.   After that, we setup a proper cluster with 3 high performing nodes and
> total 30 osds. Here also, we are seeing single rados bech client as well as
> rbd client instance is not scaling beyond a certain limit. It is not able to
> generate much load as node cpu utilization remains very low. But running
> multiple client instance the performance is scaling till hit the cpu limit.
> 
> So, it is pretty clear we are not able to saturate anything with single
> client and that's why the 'noshare' option was very helpful to measure the
> rbd performance benchmark. I have a single osd/single client level call
> grind  data attached here.

Something from perf that shows a call graph would be more helpful to 
identify where things are waiting.  We haven't done much optimizing at 
this level at all, so these results aren't entirely surprising.

> Now, I am doing the benchmark for radosgw and I think I am stuck with
> similar bottleneck here. Could you please confirm that if radosgw also
> opening single client instance to the cluster?
>  

It is: each radosgw has a single librados client instance.

> If so, is there any similar option like 'noshare' in this case ? Here also,
> creating multiple radosgw instance on separate nodes the performance is
> scaling.

No, but

> BTW, is there a way to run multiple radosgw to a single node or it has to be
> one/node ?

yes.  You just need to make sure they have different fastcgi sockets they 
listen on and probably set up a separate web server in front of each one.
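
As a rough sketch (instance names, socket paths and keyring locations here are 
illustrative, not a canonical layout), two gateways on one host could be defined as:

[client.radosgw.gw1]
        host = gwhost
        rgw socket path = /var/run/ceph/radosgw.gw1.sock
        keyring = /etc/ceph/keyring.radosgw.gw1

[client.radosgw.gw2]
        host = gwhost
        rgw socket path = /var/run/ceph/radosgw.gw2.sock
        keyring = /etc/ceph/keyring.radosgw.gw2

with a separate fastcgi vhost pointing at each socket.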

I think the next step to understanding what is going on is getting the 
right profiling tools in place so we can see where the client threads are 
spending their (non-idle and idle) time...

sage


> 
>  
> 
> Thanks & Regards
> 
> Somnath
> 
>  
> 
>    
> 
>  
> 
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org
> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Sage Weil
> Sent: Tuesday, September 24, 2013 2:16 PM
> To: Travis Rhoden
> Cc: Josh Durgin; ceph-de...@vger.kernel.org; Anirban Ray;
> ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Scaling RBD module
> 
>  
> 
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
> 
> > This "noshare" option may have just helped me a ton -- I sure wish I
> 
> > would have asked similar questions sooner, because I have seen the
> 
> > same failure to scale.  =)
> 
> >
> 
> > One question -- when using the "noshare" option (or really, even
> 
> > with

Re: [ceph-users] Scaling RBD module

2013-09-24 Thread Somnath Roy
Hi Sage,

We did quite a few experiments to see how Ceph read performance can scale up. 
Here is the summary.



1.

First we tried to see how far a single-node cluster with one OSD can scale up. 
We started with the Cuttlefish release, with the entire OSD file system on the 
SSD. What we saw is that with 4K objects and a single rados client on a 
dedicated 10G network, throughput can't go beyond a certain point.

We dug through the code and found out that SimpleMessenger opens a single socket 
connection (per client) to talk to the OSD. Also, we saw there is only one 
dispatch queue (Dispatch thread) per SimpleMessenger to carry these requests to 
the OSD. We started adding more dispatcher threads to the dispatch queue and 
rearranged several locks in Pipe.cc to identify the bottleneck. What we ended up 
discovering is that there are bottlenecks both upstream and downstream at the OSD 
level, and changing the locking scheme in the IO path will affect a lot of other 
code (that we don't even know about).

So, we stopped that activity and started to work around the upstream bottleneck 
by introducing more clients to the single OSD. What we saw is that a single OSD 
does scale, with a lot of CPU utilization: to produce ~40K IOPS (4K) it is taking 
almost 12 cores of CPU.

Another point: I didn't see this single OSD scale with the Dumpling release 
with multiple clients! Something changed..



2.   After that, we set up a proper cluster with 3 high-performing nodes and a 
total of 30 OSDs. Here also, we are seeing that a single rados bench client, as 
well as a single rbd client instance, does not scale beyond a certain limit. It 
is not able to generate much load, as node CPU utilization remains very low. But 
running multiple client instances, the performance scales until we hit the CPU 
limit.



So, it is pretty clear we are not able to saturate anything with a single client, 
and that's why the 'noshare' option was very helpful for measuring the rbd 
performance benchmark. I have single-OSD/single-client callgrind data. The 
attachment is not going through the mailing list, I guess, and that's why I can't 
send it to you.



Now, I am doing the benchmark for radosgw and I think I am stuck with a similar 
bottleneck here. Could you please confirm whether radosgw also opens a single 
client instance to the cluster?

If so, is there any similar option like 'noshare' in this case? Here also, when 
creating multiple radosgw instances on separate nodes, the performance scales.

BTW, is there a way to run multiple radosgw instances on a single node, or does 
it have to be one per node?





Thanks & Regards

Somnath







-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Tuesday, September 24, 2013 2:16 PM
To: Travis Rhoden
Cc: Josh Durgin; ceph-de...@vger.kernel.org; 
Anirban Ray; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Scaling RBD module



On Tue, 24 Sep 2013, Travis Rhoden wrote:

> This "noshare" option may have just helped me a ton -- I sure wish I

> would have asked similar questions sooner, because I have seen the

> same failure to scale.  =)

>

> One question -- when using the "noshare" option (or really, even

> without it) are there any practical limits on the number of RBDs that

> can be mounted?  I have servers with ~100 RBDs on them each, and am

> wondering if I switch them all over to using "noshare" if anything is

> going to blow up, use a ton more memory, etc.  Even without noshare,

> are there any known limits to how many RBDs can be mapped?



With noshare each mapped image will appear as a separate client instance, which 
means it will have its own session with the monitors and its own TCP connections 
to the OSDs.  It may be a viable workaround for now but in general I would not 
recommend it.



I'm very curious what the scaling issue is with the shared client.  Do you have 
a working perf that can capture callgraph information on this machine?



sage



>

> Thanks!

>

>  - Travis

>

>

> On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy 
> mailto:somnath@sandisk.com>>

> wrote:

>   Thanks Josh !

>   I am able to successfully add this noshare option in the image

>   mapping now. Looking at dmesg output, I found that was indeed

>   the secret key problem. Block performance is scaling now.

>

>   Regards

>   Somnath

>

>   -Original Message-

>   From: 
> ceph-devel-ow...@vger.kernel.org

>   [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Josh

>   Durgin

>   Sent: Thursday, September 19, 2013 12:24 PM

>   To: Somnath Roy

>   Cc: Sage Weil; 
> ceph-de...@vger.kernel.org; Anirban Ray;

>   ceph-users@lists.ceph.com

>   Subject: Re: [ceph-users] Scaling RBD module

>

>   On 09/1

Re: [ceph-users] Scaling RBD module

2013-09-24 Thread Travis Rhoden
On Tue, Sep 24, 2013 at 5:16 PM, Sage Weil  wrote:
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
>> This "noshare" option may have just helped me a ton -- I sure wish I would
>> have asked similar questions sooner, because I have seen the same failure to
>> scale.  =)
>>
>> One question -- when using the "noshare" option (or really, even without it)
>> are there any practical limits on the number of RBDs that can be mounted?  I
>> have servers with ~100 RBDs on them each, and am wondering if I switch them
>> all over to using "noshare" if anything is going to blow up, use a ton more
>> memory, etc.  Even without noshare, are there any known limits to how many
>> RBDs can be mapped?
>
> With noshare each mapped image will appear as a separate client instance,
> which means it will have it's own session with teh monitors and own TCP
> connections to the OSDs.  It may be a viable workaround for now but in
> general I would not recommend it.

Good to know.  We are still playing with CephFS as our ultimate
solution, but in the meantime this may indeed be a good workaround for
me.

>
> I'm very curious what the scaling issue is with the shared client.  Do you
> have a working perf that can capture callgraph information on this
> machine?

Not currently, but I could certainly work on it.  The issue that we
see is basically what the OP showed -- that there seems to be a finite
amount of bandwidth that I can read/write from a machine, regardless
of how many RBDs are involved.  i.e., if I can get 1GB/sec writes on
one RBD when everything else is idle, running the same test on two
RBDs in parallel *from the same machine* ends up with the sum of the
two at ~1GB/sec, split fairly evenly. However, if I do the same thing
and run the same test on two RBDs, each hosted on a separate machine,
I definitely see increased bandwidth.  Monitoring network traffic and
the Ceph OSD nodes seems to imply that they are not overloaded --
there is more bandwidth to be had, the clients just aren't able to
push the data fast enough.  That's why I'm hoping creating a "new"
client for each RBD will improve things.

I'm not going to enable this everywhere just yet; we will try it out
on a few RBDs first, and perhaps enable it on some RBDs that are
particularly heavily loaded.
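
For reference, the sysfs mapping Josh described earlier in the thread would look
roughly like this on my hosts (monitor addresses, key, pool and image names below
are purely illustrative):

$ echo "10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789 name=admin,secret=<base64 key from 'ceph auth list'>,noshare rbd myimage" | sudo tee /sys/bus/rbd/add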

I'll work on the perf capture!
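
Presumably something along the lines of the following on the client host while the
test is running (the exact invocation is my guess, not a recipe from the Ceph docs):

$ sudo perf record -a -g -- sleep 60
$ sudo perf report --sort comm,dso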

Thanks for the feedback, as always.

 - Travis
>
> sage
>
>>
>> Thanks!
>>
>>  - Travis
>>
>>
>> On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy 
>> wrote:
>>   Thanks Josh !
>>   I am able to successfully add this noshare option in the image
>>   mapping now. Looking at dmesg output, I found that was indeed
>>   the secret key problem. Block performance is scaling now.
>>
>>   Regards
>>   Somnath
>>
>>   -Original Message-
>>   From: ceph-devel-ow...@vger.kernel.org
>>   [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Josh
>>   Durgin
>>   Sent: Thursday, September 19, 2013 12:24 PM
>>   To: Somnath Roy
>>   Cc: Sage Weil; ceph-de...@vger.kernel.org; Anirban Ray;
>>   ceph-users@lists.ceph.com
>>   Subject: Re: [ceph-users] Scaling RBD module
>>
>>   On 09/19/2013 12:04 PM, Somnath Roy wrote:
>>   > Hi Josh,
>>   > Thanks for the information. I am trying to add the following
>>   but hitting some permission issue.
>>   >
>>   > root@emsclient:/etc# echo
>>   :6789,:6789,:6789
>>   > name=admin,key=client.admin,noshare test_rbd ceph_block_test'
>>   >
>>   > /sys/bus/rbd/add
>>   > -bash: echo: write error: Operation not permitted
>>
>>   If you check dmesg, it will probably show an error trying to
>>   authenticate to the cluster.
>>
>>   Instead of key=client.admin, you can pass the base64 secret
>>   value as shown in 'ceph auth list' with the
>>   secret=X option.
>>
>>   BTW, there's a ticket for adding the noshare option to rbd map
>>   so using the sysfs interface like this is never necessary:
>>
>>   http://tracker.ceph.com/issues/6264
>>
>>   Josh
>>
>>   > Here is the contents of rbd directory..
>>   >
>>   > root@emsclient:/sys/bus/rbd# ll
>>   > total 0
>>   > drwxr-xr-x  4 root root0 Sep 19 11:59 ./
>>   > drwxr-xr-x 30 root root0 Sep 13 11:41 ../
>>   > --w---  1 root root 4096 Sep 19 11:59 add
>>   > drwxr-xr-x  2 root root0 Sep 19 12:03 devices/
>>   > drwxr-xr-x  2 root root0 Sep 19 12:03 drivers/
>>   > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
>>   > --w---  1 root root 4096 Sep 19 12:03 drivers_probe
>>   > --w---  1 root root 4096 Sep 19 12:03 remove
>>   > --w---  1 root root 4096 Sep 19 11:59 uevent
>>   >
>>   >
>>   > I checked even if I am logged in as root , I can't write
>>   anything on /sys.
>>   >
>>   > Here is the Ubuntu version I am using..
>>   >
>>   > root@emsclient:/etc# lsb_release -a
>>   > No 

Re: [ceph-users] Scaling RBD module

2013-09-24 Thread Sage Weil
On Tue, 24 Sep 2013, Travis Rhoden wrote:
> This "noshare" option may have just helped me a ton -- I sure wish I would
> have asked similar questions sooner, because I have seen the same failure to
> scale.  =)
> 
> One question -- when using the "noshare" option (or really, even without it)
> are there any practical limits on the number of RBDs that can be mounted?  I
> have servers with ~100 RBDs on them each, and am wondering if I switch them
> all over to using "noshare" if anything is going to blow up, use a ton more
> memory, etc.  Even without noshare, are there any known limits to how many
> RBDs can be mapped?

With noshare each mapped image will appear as a separate client instance, 
which means it will have its own session with the monitors and its own TCP 
connections to the OSDs.  It may be a viable workaround for now but in 
general I would not recommend it.

I'm very curious what the scaling issue is with the shared client.  Do you 
have a working perf that can capture callgraph information on this 
machine?

sage

> 
> Thanks!
> 
>  - Travis
> 
> 
> On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy 
> wrote:
>   Thanks Josh !
>   I am able to successfully add this noshare option in the image
>   mapping now. Looking at dmesg output, I found that was indeed
>   the secret key problem. Block performance is scaling now.
> 
>   Regards
>   Somnath
> 
>   -Original Message-
>   From: ceph-devel-ow...@vger.kernel.org
>   [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Josh
>   Durgin
>   Sent: Thursday, September 19, 2013 12:24 PM
>   To: Somnath Roy
>   Cc: Sage Weil; ceph-de...@vger.kernel.org; Anirban Ray;
>   ceph-users@lists.ceph.com
>   Subject: Re: [ceph-users] Scaling RBD module
> 
>   On 09/19/2013 12:04 PM, Somnath Roy wrote:
>   > Hi Josh,
>   > Thanks for the information. I am trying to add the following
>   but hitting some permission issue.
>   >
>   > root@emsclient:/etc# echo
>   :6789,:6789,:6789
>   > name=admin,key=client.admin,noshare test_rbd ceph_block_test'
>   >
>   > /sys/bus/rbd/add
>   > -bash: echo: write error: Operation not permitted
> 
>   If you check dmesg, it will probably show an error trying to
>   authenticate to the cluster.
> 
>   Instead of key=client.admin, you can pass the base64 secret
>   value as shown in 'ceph auth list' with the
>   secret=X option.
> 
>   BTW, there's a ticket for adding the noshare option to rbd map
>   so using the sysfs interface like this is never necessary:
> 
>   http://tracker.ceph.com/issues/6264
> 
>   Josh
> 
>   > Here is the contents of rbd directory..
>   >
>   > root@emsclient:/sys/bus/rbd# ll
>   > total 0
>   > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
>   > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
>   > --w---  1 root root 4096 Sep 19 11:59 add
>   > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
>   > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
>   > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
>   > --w---  1 root root 4096 Sep 19 12:03 drivers_probe
>   > --w---  1 root root 4096 Sep 19 12:03 remove
>   > --w---  1 root root 4096 Sep 19 11:59 uevent
>   >
>   >
>   > I checked even if I am logged in as root , I can't write
>   anything on /sys.
>   >
>   > Here is the Ubuntu version I am using..
>   >
>   > root@emsclient:/etc# lsb_release -a
>   > No LSB modules are available.
>   > Distributor ID: Ubuntu
>   > Description:    Ubuntu 13.04
>   > Release:        13.04
>   > Codename:       raring
>   >
>   > Here is the mount information
>   >
>   > root@emsclient:/etc# mount
>   > /dev/mapper/emsclient--vg-root on / type ext4
>   (rw,errors=remount-ro)
>   > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys
>   type
>   > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type
>   tmpfs (rw)
>   > none on /sys/fs/fuse/connections type fusectl (rw) none on
>   > /sys/kernel/debug type debugfs (rw) none on
>   /sys/kernel/security type
>   > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755)
>   devpts on
>   > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
>   > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
>   > none on /run/lock type tmpfs
>   (rw,noexec,nosuid,nodev,size=5242880)
>   > none on /run/shm type tmpfs (rw,nosuid,nodev) none on
>   /run/user type
>   > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
>   > /dev/sda1 on /boot type ext2 (rw)
>   > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
>   >
>   >
>   > Any idea what went wrong here ?
>   >
>   > Thanks & Regards
>   > Somnath
>   >
>  

Re: [ceph-users] Scaling RBD module

2013-09-24 Thread Travis Rhoden
This "noshare" option may have just helped me a ton -- I sure wish I would
have asked similar questions sooner, because I have seen the same failure
to scale.  =)

One question -- when using the "noshare" option (or really, even without
it) are there any practical limits on the number of RBDs that can be
mounted?  I have servers with ~100 RBDs on them each, and am wondering if I
switch them all over to using "noshare" if anything is going to blow up,
use a ton more memory, etc.  Even without noshare, are there any known
limits to how many RBDs can be mapped?

Thanks!

 - Travis


On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy wrote:

> Thanks Josh !
> I am able to successfully add this noshare option in the image mapping
> now. Looking at dmesg output, I found that was indeed the secret key
> problem. Block performance is scaling now.
>
> Regards
> Somnath
>
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:
> ceph-devel-ow...@vger.kernel.org] On Behalf Of Josh Durgin
> Sent: Thursday, September 19, 2013 12:24 PM
> To: Somnath Roy
> Cc: Sage Weil; ceph-de...@vger.kernel.org; Anirban Ray;
> ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Scaling RBD module
>
> On 09/19/2013 12:04 PM, Somnath Roy wrote:
> > Hi Josh,
> > Thanks for the information. I am trying to add the following but hitting
> some permission issue.
> >
> > root@emsclient:/etc# echo :6789,:6789,:6789
> > name=admin,key=client.admin,noshare test_rbd ceph_block_test' >
> > /sys/bus/rbd/add
> > -bash: echo: write error: Operation not permitted
>
> If you check dmesg, it will probably show an error trying to authenticate
> to the cluster.
>
> Instead of key=client.admin, you can pass the base64 secret value as shown
> in 'ceph auth list' with the secret=X option.
>
> BTW, there's a ticket for adding the noshare option to rbd map so using
> the sysfs interface like this is never necessary:
>
> http://tracker.ceph.com/issues/6264
>
> Josh
>
> > Here is the contents of rbd directory..
> >
> > root@emsclient:/sys/bus/rbd# ll
> > total 0
> > drwxr-xr-x  4 root root0 Sep 19 11:59 ./
> > drwxr-xr-x 30 root root0 Sep 13 11:41 ../
> > --w---  1 root root 4096 Sep 19 11:59 add
> > drwxr-xr-x  2 root root0 Sep 19 12:03 devices/
> > drwxr-xr-x  2 root root0 Sep 19 12:03 drivers/
> > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
> > --w---  1 root root 4096 Sep 19 12:03 drivers_probe
> > --w---  1 root root 4096 Sep 19 12:03 remove
> > --w---  1 root root 4096 Sep 19 11:59 uevent
> >
> >
> > I checked even if I am logged in as root , I can't write anything on
> /sys.
> >
> > Here is the Ubuntu version I am using..
> >
> > root@emsclient:/etc# lsb_release -a
> > No LSB modules are available.
> > Distributor ID: Ubuntu
> > Description:    Ubuntu 13.04
> > Release:        13.04
> > Codename:   raring
> >
> > Here is the mount information
> >
> > root@emsclient:/etc# mount
> > /dev/mapper/emsclient--vg-root on / type ext4 (rw,errors=remount-ro)
> > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type
> > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type tmpfs (rw)
> > none on /sys/fs/fuse/connections type fusectl (rw) none on
> > /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type
> > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on
> > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
> > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
> > none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
> > none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type
> > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
> > /dev/sda1 on /boot type ext2 (rw)
> > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
> >
> >
> > Any idea what went wrong here ?
> >
> > Thanks & Regards
> > Somnath
> >
> > -Original Message-
> > From: Josh Durgin [mailto:josh.dur...@inktank.com]
> > Sent: Wednesday, September 18, 2013 6:10 PM
> > To: Somnath Roy
> > Cc: Sage Weil; ceph-de...@vger.kernel.org; Anirban Ray;
> > ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Scaling RBD module
> >
> > On 09/17/2013 03:30 PM, Somnath Roy wrote:
> >> Hi,
> >> I am running Ceph on a 3 node cluster and each of my server node is
> running 10 OSDs, one for each disk. I have one admin node and all the nodes
> are connected with 2 X 10G network. One network is for cluster and other
> one configured as public network.
> >>
> >> Here is the status of my cluster.
> >>
> >> ~/fio_test# ceph -s
> >>
> >> cluster b2e0b4db-6342-490e-9c28-0aadf0188023
> >>  health HEALTH_WARN clock skew detected on mon. ,
> mon. 
> >>  monmap e1: 3 mons at {=xxx.xxx.xxx.xxx:6789/0,
> =xxx.xxx.xxx.xxx:6789/0,
> =xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2
> ,,
> >>  osdmap e391: 30 osds: 30 up, 30 in
> >>   pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912

[ceph-users] kvm rbd ceph and SGBD

2013-09-24 Thread zorg

Hi,

I want to use Ceph and KVM with RBD to host MySQL and Oracle databases (DBMS
workloads). I have already used KVM with iSCSI, but with a DBMS it suffers from
IO limitations.

Does anyone have good or bad experiences hosting a DBMS this way?


Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-24 Thread John Wilkins
Either one should work. For RHEL, CentOS, etc., use sysvinit.

I rewrote the ops doc, but it's in a wip branch right now. Here:
http://ceph.com/docs/wip-doc-quickstart/rados/operations/operating/

I still may make some edits to it, but follow the sysvinit section.


On Tue, Sep 24, 2013 at 10:08 AM, Snider, Tim  wrote:
> Is the form: auth cluster required = none or auth_cluster_required = none? 
> ("_"s as a word separator)
>
> -Original Message-
> From: John Wilkins [mailto:john.wilk...@inktank.com]
> Sent: Tuesday, September 24, 2013 11:43 AM
> To: Aronesty, Erik
> Cc: Snider, Tim; Gary Mazzaferro; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>
> From your pastie details, it looks like you are using "auth supported = 
> none".  That's pre 0.51, as noted in the documentation. Perhaps I should omit 
> the old usage or omit it entirely.
>
> It should look like this:
>
> auth cluster required = none
> auth service required = none
> auth client required = none
>
> not
>
> auth supported = none
>
> On Tue, Sep 24, 2013 at 8:00 AM, Aronesty, Erik 
>  wrote:
>> I did the same thing, restarted with upstart, and I still need to use
>> authentication.   Not sure why yet.   Maybe I didn’t change the /etc/ceph
>> configs on all the nodes….
>>
>>
>>
>> From: ceph-users-boun...@lists.ceph.com
>> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
>> Sent: Tuesday, September 24, 2013 9:15 AM
>> To: Gary Mazzaferro; John Wilkins
>> Cc: ceph-users@lists.ceph.com
>>
>>
>> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>>
>>
>>
>> Authentication works. I was interested in trying it without authentication.
>> I didn’t see the upstart link earlier.
>>
>> Is the plan to only use upstart and not service for Dumpling and beyond?
>>
>> Tim
>>
>>
>>
>> From: Gary Mazzaferro [mailto:ga...@oedata.com]
>> Sent: Tuesday, September 24, 2013 1:16 AM
>> To: John Wilkins
>> Cc: Snider, Tim; ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>>
>>
>>
>> Hi John
>>
>>
>>
>> Why ? do the 'service' scripts not work ? (sorry I don't have access
>> to the systems from my location) I used dumpling and ceph-deploy on debian.
>>
>>
>>
>> -gary
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Mon, Sep 23, 2013 at 11:25 PM, John Wilkins
>> 
>> wrote:
>>
>> I will update the Cephx docs. The usage in those docs for restarting
>> is for Debian/Ubuntu deployed with mkcephfs.  If you are using
>> Dumpling and deployed with ceph-deploy, you will need to use Upstart.
>> See
>> http://ceph.com/docs/master/rados/operations/operating/#running-ceph-w
>> ith-upstart for details. If you are using Ceph on RHEL, CentOS, etc.,
>> use sysvinit.
>>
>>
>> On Mon, Sep 23, 2013 at 3:21 PM, Gary Mazzaferro  wrote:
>>> Tim
>>>
>>> Did it work with authentication enabled  ?
>>>
>>> -gary
>>>
>>>
>>> On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim 
>>> wrote:

 I modified /etc/ceph.conf for no authentication and to specify both
 private and public networks. /etc/ceph/ceph.conf was distributed to
 all nodes in the cluster

 ceph was restarted on all nodes using  "service ceph -a restart".

 After that authentication is still required and no ports are open on
 the cluster facing (192.168.10.0) network.

 Details in  http://pastie.org/8349534.

 What am I missing something?



 Thanks,

 Tim


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>> --
>> John Wilkins
>> Senior Technical Writer
>> Intank
>> john.wilk...@inktank.com
>> (415) 425-9599
>> http://inktank.com
>>
>>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Intank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com



-- 
John Wilkins
Senior Technical Writer
Intank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-24 Thread Snider, Tim
Is the form "auth cluster required = none" or "auth_cluster_required = none" 
(i.e., with underscores as word separators)?

-Original Message-
From: John Wilkins [mailto:john.wilk...@inktank.com] 
Sent: Tuesday, September 24, 2013 11:43 AM
To: Aronesty, Erik
Cc: Snider, Tim; Gary Mazzaferro; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.

From your pastie details, it looks like you are using "auth supported = none".  
That's pre 0.51, as noted in the documentation. Perhaps I should omit the old 
usage or omit it entirely.

It should look like this:

auth cluster required = none
auth service required = none
auth client required = none

not

auth supported = none

On Tue, Sep 24, 2013 at 8:00 AM, Aronesty, Erik 
 wrote:
> I did the same thing, restarted with upstart, and I still need to use
> authentication.   Not sure why yet.   Maybe I didn’t change the /etc/ceph
> configs on all the nodes….
>
>
>
> From: ceph-users-boun...@lists.ceph.com 
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
> Sent: Tuesday, September 24, 2013 9:15 AM
> To: Gary Mazzaferro; John Wilkins
> Cc: ceph-users@lists.ceph.com
>
>
> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>
>
>
> Authentication works. I was interested in trying it without authentication.
> I didn’t see the upstart link earlier.
>
> Is the plan to only use upstart and not service for Dumpling and beyond?
>
> Tim
>
>
>
> From: Gary Mazzaferro [mailto:ga...@oedata.com]
> Sent: Tuesday, September 24, 2013 1:16 AM
> To: John Wilkins
> Cc: Snider, Tim; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>
>
>
> Hi John
>
>
>
> Why ? do the 'service' scripts not work ? (sorry I don't have access 
> to the systems from my location) I used dumpling and ceph-deploy on debian.
>
>
>
> -gary
>
>
>
>
>
>
>
>
>
> On Mon, Sep 23, 2013 at 11:25 PM, John Wilkins 
> 
> wrote:
>
> I will update the Cephx docs. The usage in those docs for restarting 
> is for Debian/Ubuntu deployed with mkcephfs.  If you are using 
> Dumpling and deployed with ceph-deploy, you will need to use Upstart.
> See
> http://ceph.com/docs/master/rados/operations/operating/#running-ceph-w
> ith-upstart for details. If you are using Ceph on RHEL, CentOS, etc., 
> use sysvinit.
>
>
> On Mon, Sep 23, 2013 at 3:21 PM, Gary Mazzaferro  wrote:
>> Tim
>>
>> Did it work with authentication enabled  ?
>>
>> -gary
>>
>>
>> On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim 
>> wrote:
>>>
>>> I modified /etc/ceph.conf for no authentication and to specify both 
>>> private and public networks. /etc/ceph/ceph.conf was distributed to 
>>> all nodes in the cluster
>>>
>>> ceph was restarted on all nodes using  "service ceph -a restart".
>>>
>>> After that authentication is still required and no ports are open on 
>>> the cluster facing (192.168.10.0) network.
>>>
>>> Details in  http://pastie.org/8349534.
>>>
>>> What am I missing something?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Tim
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> --
> John Wilkins
> Senior Technical Writer
> Intank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>
>



--
John Wilkins
Senior Technical Writer
Intank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] few port per ceph-osd

2013-09-24 Thread Gregory Farnum
On Sat, Sep 21, 2013 at 11:05 PM, yy-nm  wrote:
> On 2013/9/10 4:57, Samuel Just wrote:
>>
>> That's normal, each osd listens on a few different ports for different
>> reasons.
>> -Sam
>>
>> On Mon, Sep 9, 2013 at 12:27 AM, Timofey Koolin  wrote:
>>>
>>> I use ceph 0.67.2.
>>> When I start
>>> ceph-osd -i 0
>>> or
>>> ceph-osd -i 1
>>> it start one process, but it process open few tcp-ports, is it normal?
>>>
>>> netstat -nlp | grep ceph
>>> tcp0  0 10.11.0.73:6789 0.0.0.0:*
>>> LISTEN
>>> 1577/ceph-mon - mon
>>> tcp0  0 10.11.0.73:6800 0.0.0.0:*
>>> LISTEN
>>> 3649/ceph-osd - osd.0
>>> tcp0  0 10.11.0.73:6801 0.0.0.0:*
>>> LISTEN
>>> 3649/ceph-osd - osd.0
>>> tcp0  0 10.11.0.73:6802 0.0.0.0:*
>>> LISTEN
>>> 3649/ceph-osd - osd.0
>>> tcp0  0 10.11.0.73:6803 0.0.0.0:*
>>> LISTEN
>>> 3649/ceph-osd - osd.0
>>> tcp0  0 10.11.0.73:6804 0.0.0.0:*
>>> LISTEN
>>> 3764/ceph-osd - osd.1
>>> tcp0  0 10.11.0.73:6805 0.0.0.0:*
>>> LISTEN
>>> 3764/ceph-osd - osd.1
>>> tcp0  0 10.11.0.73:6808 0.0.0.0:*
>>> LISTEN
>>> 3764/ceph-osd - osd.1
>>> tcp0  0 10.11.0.73:6809 0.0.0.0:*
>>> LISTEN
>>> 3764/ceph-osd - osd.1
>>>
>>> --
>>> Blog: www.rekby.ru
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> four port for each osd? or it changes in 0.67.2
> it should be three like
> http://ceph.com/docs/master/rados/configuration/network-config-ref/#osd-ip-tables

Yes, that needs to be updated — Ceph has been heartbeating on both the
public and cluster addresses for several versions now.
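In practice that just means opening the whole OSD port range on both the public
and cluster interfaces, e.g. something like (interface names and the upper bound
of the range are illustrative):

iptables -A INPUT -i eth0 -p tcp -m multiport --dports 6800:7100 -j ACCEPT
iptables -A INPUT -i eth1 -p tcp -m multiport --dports 6800:7100 -j ACCEPT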
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Full OSD questions

2013-09-24 Thread Gregory Farnum
On Sun, Sep 22, 2013 at 5:25 AM, Gaylord Holder  wrote:
>
>
> On 09/22/2013 02:12 AM, yy-nm wrote:
>>
>> On 2013/9/10 6:38, Gaylord Holder wrote:
>>>
>>> Indeed, that pool was created with the default 8 pg_nums.
>>>
>>> 8 pg_num * 2T/OSD / 2 repl ~ 8TB which about how far I got.
>>>
>>> I bumped up the pg_num to 600 for that pool and nothing happened.
>>> I bumped up the pgp_num to 600 for that pool and ceph started shifting
>>> things around.
>>>
>>> Can you explain the difference between pg_num and pgp_num to me?
>>> I can't understand the distinction.
>>>
>>> Thank you for your help!
>>>
>>> -Gaylord
>>>
>>> On 09/09/2013 04:58 PM, Samuel Just wrote:

 This is usually caused by having too few pgs.  Each pool with a
 significant amount of data needs at least around 100pgs/osd.
 -Sam

 On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder
  wrote:
>
> I'm starting to load up my ceph cluster.
>
> I currently have 12 2TB drives (10 up and in, 2 defined but down and
> out).
>
> rados df
>
> says I have 8TB free, but I have 2 nearly full OSDs.
>
> I don't understand how/why these two disks are filled while the
> others are
> relatively empty.
>
> How do I tell ceph to spread the data around more, and why isn't it
> already
> doing it?
>
> Thank you for helping me understand this system better.
>
> Cheers,
> -Gaylord
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> well, pg_num as the total num of pgs, and pgp_num means the num of pgs
>> which are used now
>
>
> The reference
>
>
>> you can reference on
>> http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
>> the description of pgp_num
>
>
> simply says pgp_num is:
>
>> The total number of placement groups for placement purposes.
>
> Why is the number of placement groups different from the number of placement
> groups for placement purposes?
>
> When would you want them to be different?
>
> Thank you for helping me understand this.

This is for supporting the PG split/merge functionality (only split is
implemented right now). You can split your PGs in half in one stage
(but keep them located together to reduce the number of map overrides
required) and then let them rebalance across the cluster separately.
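So the two-stage bump you did is the intended sequence, roughly:

ceph osd pool set <pool> pg_num 600
ceph osd pool set <pool> pgp_num 600

where the first step splits the PGs and the second lets the new PGs be placed
independently across the cluster.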
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-osd leak

2013-09-24 Thread Gregory Farnum
On Sun, Sep 22, 2013 at 10:00 AM, Serge Slipchenko
 wrote:
> On Fri, Sep 20, 2013 at 11:44 PM, Gregory Farnum  wrote:
>>
>> [ Re-added the list — please keep emails on there so everybody can
>> benefit! ]
>>
>> On Fri, Sep 20, 2013 at 12:24 PM, Serge Slipchenko
>>  wrote:
>> >
>> >
>> >
>> > On Fri, Sep 20, 2013 at 5:59 PM, Gregory Farnum 
>> > wrote:
>> >>
>> >> On Fri, Sep 20, 2013 at 6:40 AM, Serge Slipchenko
>> >>  wrote:
>> >> > Hi,
>> >> >
>> >> > I'm using CephFS 0.67.3 as a backend for Hypertable and
>> >> > ElasticSearch.
>> >> > Active reading/writing to the cephfs causes uncontrolled OSD memory
>> >> > growth
>> >> > and at the final stage swapping and server unavailability.
>> >>
>> >> What kind of memory growth are you seeing?
>> >
>> > 10-20Gb
>> >
>> >>
>> >> > To keep the cluster in working condition I have to restart OSD's with
>> >> > excessive memory consumption.
>> >> > This is definitely wrong, but I hope it will help to understand
>> >> > problem.
>> >> >
>> >> > One of the nodes scrubbing, go series of faults from MON and OSD is
>> >> > restarted by the memory guard script.
>> >>
>> >> What makes you think a monitor is involved? The log below doesn't look
>> >> like a monitor unless you've done something strange with your config
>> >> (wrong ports).
>> >
>> > Yes, I am somewhat inaccurate. I mean 144.76.13.103  is also a monitor
>> > node.
>> >
>> >>
>> >> > 2013-09-20 10:54:39.901871 7f74374a0700  0 log [INF] : 5.e0 scrub ok
>> >> > 2013-09-20 10:56:50.563862 7f74374a0700  0 log [INF] : 1.27 scrub ok
>> >> > 2013-09-20 11:00:03.159553 7f742c826700  0 -- 5.9.136.227:6801/1389
>> >> > >>
>> >> > 5.9.136.227:6805/1510 pipe(0x97fcc80 sd=72 :6801 s=0 pgs=0 cs=0 l=0
>> >> > c=0x9889000).accept connect_seq 2 vs existing 1 stat
>> >> > e standby
>> >> > 2013-09-20 11:00:04.935305 7f7433685700  0 -- 5.9.136.227:6801/1389
>> >> > >>
>> >> > 144.76.13.103:6801/1771 pipe(0x963b000 sd=63 :56878 s=2 pgs=41599
>> >> > cs=553
>> >> > l=0
>> >> > c=0x9679160).fault with nothing to send, go
>> >> > ing to standby
>> >> > 2013-09-20 11:00:04.986654 7f742c725700  0 -- 5.9.136.227:0/1389 >>
>> >> > 144.76.13.103:6803/1771 pipe(0x9859780 sd=240 :0 s=1 pgs=0 cs=0 l=1
>> >> > c=0xb2b1b00).fault
>> >> > 2013-09-20 11:00:04.986662 7f7430157700  0 -- 5.9.136.227:0/1389 >>
>> >> > 144.76.13.103:6802/1771 pipe(0xbbf4780 sd=144 :0 s=1 pgs=0 cs=0 l=1
>> >> > c=0xa89b000).fault
>> >> > 2013-09-20 11:03:23.499091 7f7432379700  0 -- 5.9.136.227:6801/1389
>> >> > >>
>> >> > 144.76.13.103:6801/17989 pipe(0xb2d0500 sd=230 :6801 s=0 pgs=0 cs=0
>> >> > l=0
>> >> > c=0xa89b6e0).accept connect_seq 46 vs existing 0
>> >> >  state connecting
>> >> > 2013-09-20 11:03:23.499704 7f7432379700  0 -- 5.9.136.227:6801/1389
>> >> > >>
>> >> > 144.76.13.103:6801/17989 pipe(0xb2d0500 sd=230 :6801 s=1 pgs=2107
>> >> > cs=47
>> >> > l=0
>> >> > c=0xf247580).fault
>> >> > 2013-09-20 11:03:23.505559 7f7431369700  0 -- 5.9.136.227:6801/1389
>> >> > >>
>> >> > 144.76.13.103:6801/17989 pipe(0x9874c80 sd=230 :6801 s=0 pgs=0 cs=0
>> >> > l=0
>> >> > c=0xa89b000).accept connect_seq 1 vs existing 47
>> >> >  state connecting
>> >> > 2013-09-20 11:15:03.239657 7f742c826700  0 -- 5.9.136.227:6801/1389
>> >> > >>
>> >> > 5.9.136.227:6805/1510 pipe(0x97fcc80 sd=72 :6801 s=2 pgs=1297 cs=3
>> >> > l=0
>> >> > c=0x9855b00).fault with nothing to send, going to
>> >> >  standby
>> >> >
>> >> > A similar chain of events is repeated on different servers with
>> >> > regularity
>> >> > of 2 hours.
>> >> >
>> >> > It looks similar to the old bug http://tracker.ceph.com/issues/3883 ,
>> >> > but
>> >> > I'm using plain log files.
>> >>
>> >> Not if your issue is correlated with writes rather than scrubs. :)
>> >
>> > Could those problems be caused by slow network?
>> >
>> >>
>> >> > Is it anything well known or something new?
>> >>
>> >> Nobody's reported anything like it yet.
>> >> In addition to the above, we'll also need to know about your cluster.
>> >> How many nodes, what does each look like, what's your network look
>> >> like, what OS and where did you get your Ceph packages?
>> >
>> > I have 8 servers connected via 1Gb network, but for some servers actual
>> > speed is 100-200Mb.
>>
>> Well, yeah, that'll do it. 200Mb/s is only ~25MB/s, which is much
>> slower than your servers can write to disk. So your machines with
>> faster network are ingesting data and putting it on disk much more
>> quickly than they can replicate it to the servers with slower network
>> connections and the replication messages are just getting queued up in
>> RAM. Ceph is designed so you can make it work with async hardware but
>> making it work well with an async network is going to be more
>> challenging.
>
> Yes, it looks like servers that have 800Mb and higher connections never have
> memory problems.
>
>>
>> You can play around with a couple different things to try and make this
>> better:
>> 1) Make the weight of the nodes proportional to their bandwidth.
>
> Am I corre

Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-24 Thread John Wilkins
From your pastie details, it looks like you are using "auth supported
= none".  That's the pre-0.51 syntax, as noted in the documentation. Perhaps I
should mark the old usage as deprecated or omit it from the docs entirely.

It should look like this:

auth cluster required = none
auth service required = none
auth client required = none

not

auth supported = none
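
In ceph.conf those lines go under [global] on every node, followed by a restart
of all daemons; spaces or underscores in the option names are both accepted:

[global]
        auth cluster required = none
        auth service required = none
        auth client required = none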

On Tue, Sep 24, 2013 at 8:00 AM, Aronesty, Erik
 wrote:
> I did the same thing, restarted with upstart, and I still need to use
> authentication.   Not sure why yet.   Maybe I didn’t change the /etc/ceph
> configs on all the nodes….
>
>
>
> From: ceph-users-boun...@lists.ceph.com
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
> Sent: Tuesday, September 24, 2013 9:15 AM
> To: Gary Mazzaferro; John Wilkins
> Cc: ceph-users@lists.ceph.com
>
>
> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>
>
>
> Authentication works. I was interested in trying it without authentication.
> I didn’t see the upstart link earlier.
>
> Is the plan to only use upstart and not service for Dumpling and beyond?
>
> Tim
>
>
>
> From: Gary Mazzaferro [mailto:ga...@oedata.com]
> Sent: Tuesday, September 24, 2013 1:16 AM
> To: John Wilkins
> Cc: Snider, Tim; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.
>
>
>
> Hi John
>
>
>
> Why ? do the 'service' scripts not work ? (sorry I don't have access to the
> systems from my location) I used dumpling and ceph-deploy on debian.
>
>
>
> -gary
>
>
>
>
>
>
>
>
>
> On Mon, Sep 23, 2013 at 11:25 PM, John Wilkins 
> wrote:
>
> I will update the Cephx docs. The usage in those docs for restarting
> is for Debian/Ubuntu deployed with mkcephfs.  If you are using
> Dumpling and deployed with ceph-deploy, you will need to use Upstart.
> See
> http://ceph.com/docs/master/rados/operations/operating/#running-ceph-with-upstart
> for details. If you are using Ceph on RHEL, CentOS, etc., use
> sysvinit.
>
>
> On Mon, Sep 23, 2013 at 3:21 PM, Gary Mazzaferro  wrote:
>> Tim
>>
>> Did it work with authentication enabled  ?
>>
>> -gary
>>
>>
>> On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim 
>> wrote:
>>>
>>> I modified /etc/ceph.conf for no authentication and to specify both
>>> private and public networks. /etc/ceph/ceph.conf was distributed to all
>>> nodes in the cluster
>>>
>>> ceph was restarted on all nodes using  "service ceph -a restart".
>>>
>>> After that authentication is still required and no ports are open on the
>>> cluster facing (192.168.10.0) network.
>>>
>>> Details in  http://pastie.org/8349534.
>>>
>>> What am I missing something?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Tim
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> --
> John Wilkins
> Senior Technical Writer
> Intank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com
>
>



-- 
John Wilkins
Senior Technical Writer
Intank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] clients in cluster network?

2013-09-24 Thread Gregory Farnum
On Tue, Sep 24, 2013 at 1:14 AM, Kurt Bauer  wrote:
>
>
> John Wilkins schrieb:
>
> Clients use the public network. The cluster network is principally for
> OSD-to-OSD communication--heartbeats, replication, backfill, etc.
>
Hmm, well, I'm aware of this, but the question is whether it is nevertheless
possible, i.e. is it actively prohibited or "just" not recommended? And if
not recommended, what would/could the issues be?

To the Ceph daemons, a network is just an IP to bind to. It will not
take the "wrong" kind of traffic off of that IP, but if the IPs are on
the same underlying network (or are the same), or if the networks are
routable to each other, Ceph won't try and prevent anything like that.
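The binding itself just comes from the usual network options, e.g. (addresses are
illustrative):

[global]
        public network = 192.168.1.0/24
        cluster network = 192.168.2.0/24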
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] performance and disk usage of snapshots

2013-09-24 Thread Corin Langosch

Hi there,

Do snapshots have an impact on write performance? I assume that on each write all 
snapshots have to get updated (COW), so the more snapshots exist, the worse write 
performance will get?


Is there any way to see how much disk space a snapshot occupies? I assume that 
because of COW, snapshots start with 0 real disk usage and grow over time as the 
underlying objects change?


Corin

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how to set flag on pool

2013-09-24 Thread Corin Langosch

On 24.09.2013 12:24, Joao Eduardo Luis wrote:
I believe that at the moment you'll only be able to have that flag set on a 
pool at creation time, if 'osd pool default flag hashpspool = true' on your conf.


I just updated my config like this:

[osd]
  osd journal size = 100
  filestore xattr use omap = true
  auth cluster required = cephx
  auth service required = cephx
  auth client required = cephx
  osd min pg log entries = 1000
  osd max pg log entries = 3000
  osd pool default flag hashpspool = true
  cephx require signatures = true

Then I created a new pool using "ceph osd pool create test 8" but the flag is 
not set:


pool 26 'test' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 
8 pgp_num 8 last_change 434 owner 0




Also found issue #5614 [1], which hasn't seen any work as far as I can tell, 
that would allow what you're after.


Ok, that's bad news. I just searched for a method to copy a complete pool and 
found this issue http://tracker.ceph.com/issues/2666 It's in state resolved, but 
there's no command to copy a pool in the latest command line utils? So I wonder 
what's the best/ easiest way to copy all objects from one pool to another (I'll 
make sure there's no write activity on source while copy is in progress).
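
If the 'cppool' subcommand of the rados tool is what that issue added, would
something like the following be the intended way to do it (untested sketch)?

rados cppool <source-pool> <target-pool>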


Corin

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-24 Thread Aronesty, Erik
I did the same thing, restarted with upstart, and I still need to use 
authentication.   Not sure why yet.   Maybe I didn't change the /etc/ceph 
configs on all the nodes
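
For reference, this is roughly what I believe has to be in /etc/ceph/ceph.conf on
every node (and every daemon restarted) before cephx is really off; a sketch, not
verified here yet:

[global]
    auth cluster required = none
    auth service required = none
    auth client required = none

# on a ceph-deploy/Upstart install, restart on each node with something like:
sudo restart ceph-mon-all
sudo restart ceph-osd-all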

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
Sent: Tuesday, September 24, 2013 9:15 AM
To: Gary Mazzaferro; John Wilkins
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.

Authentication works. I was interested in trying it without authentication. I 
didn't see the upstart link earlier.
Is the plan to only use upstart and not service for Dumpling and beyond?
Tim

From: Gary Mazzaferro [mailto:ga...@oedata.com]
Sent: Tuesday, September 24, 2013 1:16 AM
To: John Wilkins
Cc: Snider, Tim; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.

Hi John

Why ? do the 'service' scripts not work ? (sorry I don't have access to the 
systems from my location) I used dumpling and ceph-deploy on debian.

-gary




On Mon, Sep 23, 2013 at 11:25 PM, John Wilkins 
<john.wilk...@inktank.com> wrote:
I will update the Cephx docs. The usage in those docs for restarting
is for Debian/Ubuntu deployed with mkcephfs.  If you are using
Dumpling and deployed with ceph-deploy, you will need to use Upstart.
See 
http://ceph.com/docs/master/rados/operations/operating/#running-ceph-with-upstart
for details. If you are using Ceph on RHEL, CentOS, etc., use
sysvinit.

On Mon, Sep 23, 2013 at 3:21 PM, Gary Mazzaferro 
<ga...@oedata.com> wrote:
> Tim
>
> Did it work with authentication enabled  ?
>
> -gary
>
>
> On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim 
> <tim.sni...@netapp.com> wrote:
>>
>> I modified /etc/ceph.conf for no authentication and to specify both
>> private and public networks. /etc/ceph/ceph.conf was distributed to all
>> nodes in the cluster
>>
>> ceph was restarted on all nodes using  "service ceph -a restart".
>>
>> After that authentication is still required and no ports are open on the
>> cluster facing (192.168.10.0) network.
>>
>> Details in  http://pastie.org/8349534.
>>
>> What am I missing?
>>
>>
>>
>> Thanks,
>>
>> Tim
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

--
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph / RadosGW deployment questions

2013-09-24 Thread Yehuda Sadeh
On Tue, Sep 24, 2013 at 12:46 AM, Guang  wrote:
> Hi ceph-users,
> I deployed a Ceph cluster (including RadosGW) with use of ceph-deploy on
> RHEL6.4, during the deployment, I have a couple of questions which need your
> help.
>
> 1. I followed the steps http://ceph.com/docs/master/install/rpm/ to deploy
> the RadosGW node, however, after the deployment, all requests failed with
> 500 returned. With some hints from
> http://irclogs.ceph.widodh.nl/index.php?date=2013-01-25, I changed the
> FastCgiExternalServer to FastCgiServer within rgw.conf. Is this change valid
> or I missed somewhere else which leads the need for this change?

In theory you could have either; however, the preferred mode of
installation is to have FastCgiExternalServer and to run
the radosgw manually.

>
> 2. It still does not work and the httpd has the following error log:
> [Mon Sep 23 07:34:32 2013] [crit] (98)Address already in use: FastCGI:
> can't create server "/var/www/s3gw.fcgi": bind() failed [/tmp/radosgw.sock]
> which indicates that radosgw is not started properly, so that I manually run
> "radosgw --rgw-socket-path=/tmp/radosgw.sock -c /etc/ceph/ceph.conf -n
> client.radosgw.gateway" to start a radosgw daemon and then the gateway
> starts working as expected.
> Did I miss anything this part?

That's one way to run the radosgw process. You still want to change
the apache conf to use the external server configuration; otherwise
apache will try to relaunch it.
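
Roughly, the combination would look like this (the paths are the ones from your
mail, not required values):

# rgw.conf (mod_fastcgi): only declare the socket, don't let apache spawn the process
FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock

# and run the gateway yourself (or via its init script):
radosgw --rgw-socket-path=/tmp/radosgw.sock -c /etc/ceph/ceph.conf -n client.radosgw.gateway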

>
> 3. When I was trying to run ceph admin-daemon command on the radosGW host,
> it failed because it does not have the corresponding  asok file, however, I
> am able to run the command on monitor host and found that the radosGW's
> information can be retrieved there.
>
> @monitor (monitor and gateway are deployed on different hosts).
> [xxx@startbart ceph]$ sudo ceph --admin-daemon
> /var/run/ceph/ceph-mon.startbart.asok config show | grep rgw
>   "rgw": "1\/5",
>   "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-startbart",
>   "rgw_enable_apis": "s3, swift, swift_auth, admin",
>   "rgw_cache_enabled": "true",
>   "rgw_cache_lru_size": "1",
>   "rgw_socket_path": "",
>   "rgw_host": "",
>   "rgw_port": "",
>   "rgw_dns_name": "",
>   "rgw_script_uri": "",
>   "rgw_request_uri": "",
>   "rgw_swift_url": "",
>   "rgw_swift_url_prefix": "swift",
>   "rgw_swift_auth_url": "",
>   "rgw_swift_auth_entry": "auth",
>   "rgw_keystone_url": "",
>   "rgw_keystone_admin_token": "",
>   "rgw_keystone_accepted_roles": "Member, admin",
>   "rgw_keystone_token_cache_size": "1",
>   "rgw_keystone_revocation_interval": "900",
>   "rgw_admin_entry": "admin",
>   "rgw_enforce_swift_acls": "true",
>   "rgw_swift_token_expiration": "86400",
>   "rgw_print_continue": "true",
>   "rgw_remote_addr_param": "REMOTE_ADDR",
>   "rgw_op_thread_timeout": "600",
>   "rgw_op_thread_suicide_timeout": "0",
>   "rgw_thread_pool_size": "100",
> Is this expected?

The ceph configuration is monolithic: you see the mon configuration
here, and there are some rgw defaults, but it doesn't reflect the
actual rgw configuration. There's an open issue about the gateway not
creating the admin socket by default; try adding an 'admin socket' config
line to your gateway ceph.conf.
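
Something along these lines in the gateway's section should do it (the socket path
is only an example):

[client.radosgw.gateway]
    admin socket = /var/run/ceph/radosgw.asok

After restarting radosgw you can then run
'ceph --admin-daemon /var/run/ceph/radosgw.asok config show' on the gateway host.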

>
> 4. cephx authentication. After reading through the cephx introduction, I got
> the feeling that cephx is for client to cluster authentication, so that each
> librados user will need to create a new key. However, this page
> http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx
> got me confused in terms of why should we create keys for mon and osd? And
> how does that fit into the authentication diagram? BTW, I found the keyrings
> under /var/lib/ceph/{role}/ for each role, are they being used when they talk
> to other roles?
>

cephx is a kerberos-like authentication, and each entity needs to have
a key. In a distributed system like ceph, there's no single 'server'
(as in client-server). There could be many thousands of such servers,
and we want to authenticate each and every one of them. That being said, when
a client gets a service ticket, it gets it for all services of the
same type, so it doesn't need to acquire a new ticket for each osd it
connects to.
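
As a concrete illustration (the entity names below are only examples): the keyrings
you found under /var/lib/ceph/ are exactly those per-entity keys, and the usual auth
commands let you inspect or add them:

# list every entity (mon., osd.N, client.admin, ...) together with its key and caps
ceph auth list

# create a key for a new librados user
ceph auth get-or-create client.myapp mon 'allow r' osd 'allow rw pool=mypool'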


Yehuda
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-24 Thread Snider, Tim
Authentication works. I was interested in trying it without authentication. I 
didn't see the upstart link earlier.
Is the plan to only use upstart and not service for Dumpling and beyond?
Tim

From: Gary Mazzaferro [mailto:ga...@oedata.com]
Sent: Tuesday, September 24, 2013 1:16 AM
To: John Wilkins
Cc: Snider, Tim; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph.conf changes and restarting ceph.

Hi John

Why ? do the 'service' scripts not work ? (sorry I don't have access to the 
systems from my location) I used dumpling and ceph-deploy on debian.

-gary




On Mon, Sep 23, 2013 at 11:25 PM, John Wilkins 
<john.wilk...@inktank.com> wrote:
I will update the Cephx docs. The usage in those docs for restarting
is for Debian/Ubuntu deployed with mkcephfs.  If you are using
Dumpling and deployed with ceph-deploy, you will need to use Upstart.
See 
http://ceph.com/docs/master/rados/operations/operating/#running-ceph-with-upstart
for details. If you are using Ceph on RHEL, CentOS, etc., use
sysvinit.

On Mon, Sep 23, 2013 at 3:21 PM, Gary Mazzaferro 
<ga...@oedata.com> wrote:
> Tim
>
> Did it work with authentication enabled  ?
>
> -gary
>
>
> On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim 
> <tim.sni...@netapp.com> wrote:
>>
>> I modified /etc/ceph.conf for no authentication and to specify both
>> private and public networks. /etc/ceph/ceph.conf was distributed to all
>> nodes in the cluster
>>
>> ceph was restarted on all nodes using  "service ceph -a restart".
>>
>> After that authentication is still required and no ports are open on the
>> cluster facing (192.168.10.0) network.
>>
>> Details in  http://pastie.org/8349534.
>>
>> What am I missing?
>>
>>
>>
>> Thanks,
>>
>> Tim
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


--
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Re ceph-deploy again

2013-09-24 Thread Alfredo Deza
On Tue, Sep 24, 2013 at 6:44 AM, bernhard glomm
wrote:

>
>
> *From: *bernhard glomm 
> *Subject: **Re: [ceph-users] ceph-deploy again*
> *Date: *September 24, 2013 11:47:00 AM GMT+02:00
> *To: *"Fuchs, Andreas (SwissTXT)" 
>
> Andi thnx,
>
> but as I said, ssh is not the problem.
> since the first command (nt.) that needs privileges to sync the clock
> works flawless.
> ssh is not the problem,
> sudo is not the problem,
> ceph-deploy doesn't want to use sudo anymore as it seems?
>
> bernhard
>
>
>   --
>   Bernhard Glomm
>   IT Administration
>   Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
>   Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
>   GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
>   Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> --
>
> On Sep 24, 2013, at 10:52 AM, "Fuchs, Andreas (SwissTXT)" <
> andreas.fu...@swisstxt.ch> wrote:
>
> Make sure that you:
> -  On the same host and as the same user you run ceph-deploy,
> -  ssh to the host you want to install to
>
>
Actually this answer is in line with what ceph-deploy expects,
especially the "same user you run ceph-deploy" part.

So it is a combination of using sudo (no need for ceph-deploy to use sudo)
and an ssh config that is changing the
user you are connecting with.

> there must be no password needed !
> -  sudo -i after you connected above
> Again to do this there must be no password needed!
>
> If this is successfully ceph-deploy will be able todo his work
> Otherwise follow the instructions to setup passwordless ssh and sudo
>
> Regards
> Andi
>
> *From:* ceph-users-boun...@lists.ceph.com [mailto:ceph-
> users-boun...@lists.ceph.com] *On Behalf Of *Bernhard Glomm
> *Sent:* Dienstag, 24. September 2013 09:28
> *To:* alfredo.d...@inktank.com
> *Cc:* ceph-us...@ceph.com
> *Subject:* Re: [ceph-users] ceph-deploy again
>
> Am 23.09.2013 21:56:56, schrieb Alfredo Deza:
>
> On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm <
> bernhard.gl...@ecologic.eu> wrote:
>
> Hi all,
>
> something with ceph-deploy doesn't work at all anymore.
> After an upgrade ceph-deploy failed to roll out a new monitor
> with "permission denied. are you root?"
> (obviously there shouldn't be a root login so I had another user
> for ceph-deploy before which worked perfectly, why not now?)
>
> ceph_deploy.install][DEBUG ] Purging host ping ...
> Traceback (most recent call last):
> E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission
> denied)
> E: Unable to lock the administration directory (/var/lib/dpkg/), are you
> root?
>
> Does this mean I have to let root log into my Cluster with a passwordless
> key?
> I would rather like to use another log in, like so far, if possible.
> Can you paste here the exact command you are running (and with what user) ?
> 
>
>
>
>  
>
>  well I used to run this script
>
>
>
> ##
>
> #!/bin/bash
> # initialize the ceph cluster
>
> # our systems
> ceph_osds="ping pong"
> ceph_mons="ping pong nuke36"
> options="-v"
>
> cd /tmp
>
> for i in $ceph_mons; do
> ssh $i "sudo service ntp stop && sudo ntpdate-debian && sudo service
> ntp start && date";echo -e "\n\n"
> done
>
>
> ceph-deploy $options purge $ceph_mons
> ceph-deploy $options purgedata $ceph_mons
>
> mkdir /etc/ceph
> cd /etc/ceph
>
> # install ceph
> ceph-deploy $options install --stable dumpling $ceph_mons
>
>
>
> # create cluster
> ceph-deploy $options new $ceph_mons
>
> # inject your extra configuration options here
> # switch on debugging
> echo -e "debug ms = 1
> debug mon = 20" >> /etc/ceph/ceph.conf
>
> # create the monitors
> ceph-deploy $options --overwrite-conf mon create $ceph_mons
>
> sleep 10
> # get the keys
> for host in $ceph_mons; do
> ceph-deploy $options gatherkeys $host
> done
>
> for host in $ceph_osds;do
> ceph-deploy disk zap $host:/dev/sdb
> ceph-deploy $options osd create $host:/dev/sdb
> done
>
> # check
> ceph status
>
> exit 0
>
>
> ##
>
>
> I ran this script as root
>
> with a .ss

Re: [ceph-users] ceph-deploy again

2013-09-24 Thread Alfredo Deza
On Tue, Sep 24, 2013 at 3:27 AM, Bernhard Glomm
wrote:

> Am 23.09.2013 21:56:56, schrieb Alfredo Deza:
>
>
>
>
> On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm <
> bernhard.gl...@ecologic.eu> wrote:
>
>> Hi all,
>>
>> something with ceph-deploy doesn't work at all anymore.
>> After an upgrade ceph-deploy failed to roll out a new monitor
>> with "permission denied. are you root?"
>> (obviously there shouldn't be a root login so I had another user
>> for ceph-deploy before which worked perfectly, why not now?)
>>
>> ceph_deploy.install][DEBUG ] Purging host ping ...
>> Traceback (most recent call last):
>> E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission
>> denied)
>> E: Unable to lock the administration directory (/var/lib/dpkg/), are you
>> root?
>>
>> Does this mean I have to let root log into my Cluster with a passwordless
>> key?
>> I would rather like to use another log in, like so far, if possible.
>>
>> Can you paste here the exact command you are running (and with what user)
> ?
>
>
>
>
>  well I used to run this script
>
>
> ##
>
> #!/bin/bash
> # initialize the ceph cluster
>
> # our systems
> ceph_osds="ping pong"
> ceph_mons="ping pong nuke36"
> options="-v"
>
> cd /tmp
>
> for i in $ceph_mons; do
> ssh $i "sudo service ntp stop && sudo ntpdate-debian && sudo service
> ntp start && date";echo -e "\n\n"
> done
>
> ceph-deploy $options purge $ceph_mons
> ceph-deploy $options purgedata $ceph_mons
>
> mkdir /etc/ceph
> cd /etc/ceph
>
> # install ceph
> ceph-deploy $options install --stable dumpling $ceph_mons
>
>
> # create cluster
> ceph-deploy $options new $ceph_mons
>
> # inject your extra configuration options here
> # switch on debugging
> echo -e "debug ms = 1
> debug mon = 20" >> /etc/ceph/ceph.conf
>
> # create the monitors
> ceph-deploy $options --overwrite-conf mon create $ceph_mons
>
> sleep 10
> # get the keys
> for host in $ceph_mons; do
> ceph-deploy $options gatherkeys $host
> done
>
> for host in $ceph_osds;do
> ceph-deploy disk zap $host:/dev/sdb
> ceph-deploy $options osd create $host:/dev/sdb
> done
>
> # check
> ceph status
>
> exit 0
>
>
> ##
>
>
> I ran this script as root
>

That was what I was afraid of. You should not use `sudo` nor execute
ceph-deploy as root if you are login in as
a non-root user via ssh config to the remote host.

This is because ceph-deploy will detect if you are root to avoid using sudo
in the remote host (sudo for root causes other issues).

The documentation was updated to reflect this:
http://ceph.com/docs/master/start/quick-start-preflight/#configure-ssh

You should've also seen in the logs that before connecting, ceph-deploy was
advertising if it was going to (or not) use sudo commands
on the remote host.
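
In other words, the expected setup is something like this (host names taken from
your script; the 'cephadmin' user name is just an example):

# ~/.ssh/config of the non-root user that runs ceph-deploy
# (that user needs passwordless ssh and passwordless sudo on the remote nodes)
Host ping pong nuke36
    User cephadmin

# then run ceph-deploy directly as that same non-root user, without sudo:
ceph-deploy install --stable dumpling ping pong nuke36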

> with a .ssh/config to switch
>
> to the user I can log into the cluster nodes.
>
> there is no problem with the ssh nor the sudo
>
> since the ntp commands in the beginning are working fine
>
>
>
>
>
>
>> The howto on ceph.com doesn't say anything about it,
>> the  changelog.Debian.gz isn't very helpful either and
>> another changelog isn't (provided nor a README)
>>
>> ceph-deploy is version 1.2.6
>> system is freshly installed raring
>>
>> got this both lines in my sources.list
>> deb http://192.168.242.91:3142/ceph.com/debian/ raring main
>> deb http://192.168.242.91:3142/ceph.com/packages/ceph-extras/debian/ raring main
>>
>> since this both didn't work
>> #deb
>> http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/
>> raring main
>> #deb http://gitbuilder.ceph.com/cdep-deb-raring-x86_64-basic/ref/master/
>> raring main
>> (couldn't find the python-pushy version ceph-deploy depends on)
>>
>> TIA
>>
>> Bernhard
>>
>
>
> --
>   --
>   Bernhard Glomm
>   IT Administration
>   Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
>   Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
>   GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
>   Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> --
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] where to put config and whats the correct syntax

2013-09-24 Thread Joao Eduardo Luis

On 09/23/2013 10:10 AM, Fuchs, Andreas (SwissTXT) wrote:

I'm following different threads here, mainly the poor radosgw performance one.
And what I see there are often recommendations to put a certain config into 
ceph.conf, but it's often unclear to me where exactly to put them.

- does it matter if I put a config for all OSD's in [global] or [osd] ?
   Example:
[osd]
osd max attr size = 655360

or should it be
[global]
osd max attr size = 655360



[global] is quite self-explanatory:  options will be applied to any 
daemon or subsystem, library or whatever that might use ceph.conf.  Not 
all options are used by all components though.  For instance, if you 
were to specify 'debug osd = 10' on [global], for all intents and 
purposes, only OSDs would make use of it;  otoh, if you were to specify 
'keyring = /tmp/foo' under [global], then any components using the 
keyring would use that option.


All options under [osd] however would be applied only to OSDs.  For 
instance, consider you wanted the global keyring to be on '/tmp/foo', 
and the OSDs keyring to be on '/tmp/foo.osd' (merely illustrative, don't 
take this seriously).  Then you'd have:


[global]
  keyring = /tmp/foo
[osd]
  keyring = /tmp/foo.osd

The same thing that happens for [osd] would be valid for other 
components: mon, mds, etc.


You could even have a global keyring, an osd-specific keyring, and then 
specific keyrings for just a couple of specific osds.  Once again, I 
don't even know if this is feasible, this is just an example.


[global]
  keyring = /tmp/foo
[osd]
  keyring = /tmp/foo.osd
[osd.0]
  keyring = /tmp/foo.osd.0
[osd.10]
  keyring = /tmp/foo.osd.10

Back to your question, in that particular case, adding it under [osd] or 
under [global] should bear the same effect, as that option is only used 
on the osd.  Having it under [osd] should be slightly more 
straightforward on who is being affected by it though.




- different syntax
   We saw recommendations to add
rgw enable ops log = false
   but also
rgw_enable_ops_log disabled

  which one is correct?


From ceph's point-of-view, having 'rgw enable ops log' or 
'rgw_enable_ops_log' or even (I think!) 'rgw-enable-ops-log' is pretty 
much the same thing.  What really matters here is that the option 
expects to be a 'key = value', so the latter would be incorrect. 
However, I would imagine that someone meant 'having rgw_enable_ops_log 
disabled' as having 'rgw_enable_ops_log = false', which in turn is 
equivalent to the first one.  From ceph.conf's PoV, the correct syntax 
would be


  rgw enable ops log = false
or
  rgw_enable_ops_log = false



  can it be added to [client.radosgw] and it is valid for both of our 
radosgw's? or does it need to be added to global or somewhere else?


According to what I stated before, it is my belief that having it under 
[client.radosgw] would affect as many instances as you may have.  Having 
it under [global] would only mean that, if any other component would 
happen to use the same option, that said option would be available to 
any component using it.




- is there a way to verify which config rules are applied?


I believe you can obtain it through the service's admin socket, using

  ceph --admin-daemon /var/run/ceph/foo.asok config show
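
For example, to check a single value on a specific OSD (assuming the default socket
path and the option from the earlier mail):

  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_max_attr_size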


Hope this helps.

  -Joao


--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Who is using Ceph?

2013-09-24 Thread Joao Eduardo Luis

On 09/20/2013 10:27 AM, Maciej Gałkiewicz wrote:

Hi guys

Do you have any list of companies that use Ceph in production?

regards



Inktank has a list of customers up on the site:

http://www.inktank.com/customers/

  -Joao

--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how to set flag on pool

2013-09-24 Thread Joao Eduardo Luis

On 09/24/2013 10:22 AM, Corin Langosch wrote:

Hi there,

I want to set the flag hashpspool on an existing pool. "ceph osd pool
set {pool-name} {field} {value}" does not seem to work. So I wonder how
I can set/ unset flags on pools?



I believe that at the moment you'll only be able to have that flag set 
on a pool at creation time, if 'osd pool default flag hashpspool = true' 
on your conf.
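
For what it's worth, a minimal sketch of where I'd put that default: the
pool-creation defaults are read by the monitors, so [global] (or [mon]) is the
safer place for it, rather than [osd]:

[global]
    osd pool default flag hashpspool = true

New pools created after the mons have been restarted with that setting should then
show the flag in 'ceph osd dump'.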


Also found issue #5614 [1], which hasn't seen any work as far as I can 
tell, that would allow what you're after.


I may be wrong on this one, so if anyone else has knowledge of it being 
otherwise (or having been otherwise in the past), please speak up! :-)


  -Joao


[1] - http://tracker.ceph.com/issues/5614


--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] how to set flag on pool

2013-09-24 Thread Corin Langosch

Hi there,

I want to set the flag hashpspool on an existing pool. "ceph osd pool set 
{pool-name} {field} {value}" does not seem to work. So I wonder how I can set/ 
unset flags on pools?


Corin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] clients in cluster network?

2013-09-24 Thread Kurt Bauer


John Wilkins schrieb:
> Clients use the public network. The cluster network is principally for
> OSD-to-OSD communication--heartbeats, replication, backfill, etc.
Hmm, well, I'm aware of this, but the question is, if it is nevertheless
possible, ie. is it actively prohibited or "just" not recommended? And
if not recommended, what the issues would/could be?

Thanks,
br,
Kurt

> On Mon, Sep 23, 2013 at 7:42 AM, Kurt Bauer  wrote:
>> Hi,
>>  just a short question to which I couldn't find an answer in the
>> documentation:
>> When I run a cluster with public and cluster network seperated, would it
>> still be possible to have clients accessing the cluster (ie. RBDs) from
>> within the cluster network?
>>
>> Thanks for your help,
>> best regards,
>> Kurt
>>
>>
>> --
>> Kurt Bauer 
>> Vienna University Computer Center - ACOnet - VIX
>> Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
>> Tel: ++43 1 4277 - 14070 (Fax: - 9140)  KB1970-RIPE
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>

-- 
Kurt Bauer 
Vienna University Computer Center - ACOnet - VIX
Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
Tel: ++43 1 4277 - 14070 (Fax: - 9140)  KB1970-RIPE


smime.p7s
Description: S/MIME Cryptographic Signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph / RadosGW deployment questions

2013-09-24 Thread Guang
Hi ceph-users,
I deployed a Ceph cluster (including RadosGW) with use of ceph-deploy on 
RHEL6.4, during the deployment, I have a couple of questions which need your 
help.

1. I followed the steps http://ceph.com/docs/master/install/rpm/ to deploy the 
RadosGW node, however, after the deployment, all requests failed with 500 
returned. With some hints from 
http://irclogs.ceph.widodh.nl/index.php?date=2013-01-25, I changed the 
FastCgiExternalServer to FastCgiServer within rgw.conf. Is this change valid or 
I missed somewhere else which leads the need for this change?

2. It still does not work and the httpd has the following error log:
[Mon Sep 23 07:34:32 2013] [crit] (98)Address already in use: FastCGI: 
can't create server "/var/www/s3gw.fcgi": bind() failed [/tmp/radosgw.sock]
which indicates that radosgw is not started properly, so that I manually run 
"radosgw --rgw-socket-path=/tmp/radosgw.sock -c /etc/ceph/ceph.conf -n 
client.radosgw.gateway" to start a radosgw daemon and then the gateway starts 
working as expected.
Did I miss anything this part?

3. When I was trying to run ceph admin-daemon command on the radosGW host, it 
failed because it does not have the corresponding  asok file, however, I am 
able to run the command on monitor host and found that the radosGW's 
information can be retrieved there.

@monitor (monitor and gateway are deployed on different hosts).
[xxx@startbart ceph]$ sudo ceph --admin-daemon 
/var/run/ceph/ceph-mon.startbart.asok config show | grep rgw
  "rgw": "1\/5",
  "rgw_data": "\/var\/lib\/ceph\/radosgw\/ceph-startbart",
  "rgw_enable_apis": "s3, swift, swift_auth, admin",
  "rgw_cache_enabled": "true",
  "rgw_cache_lru_size": "1",
  "rgw_socket_path": "",
  "rgw_host": "",
  "rgw_port": "",
  "rgw_dns_name": "",
  "rgw_script_uri": "",
  "rgw_request_uri": "",
  "rgw_swift_url": "",
  "rgw_swift_url_prefix": "swift",
  "rgw_swift_auth_url": "",
  "rgw_swift_auth_entry": "auth",
  "rgw_keystone_url": "",
  "rgw_keystone_admin_token": "",
  "rgw_keystone_accepted_roles": "Member, admin",
  "rgw_keystone_token_cache_size": "1",
  "rgw_keystone_revocation_interval": "900",
  "rgw_admin_entry": "admin",
  "rgw_enforce_swift_acls": "true",
  "rgw_swift_token_expiration": "86400",
  "rgw_print_continue": "true",
  "rgw_remote_addr_param": "REMOTE_ADDR",
  "rgw_op_thread_timeout": "600",
  "rgw_op_thread_suicide_timeout": "0",
  "rgw_thread_pool_size": "100",
Is this expected?

4. cephx authentication. After reading through the cephx introduction, I got 
the feeling that cephx is for client to cluster authentication, so that each 
librados user will need to create a new key. However, this page 
http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx got 
me confused in terms of why should we create keys for mon and osd? And how does 
that fit into the authentication diagram? BTW, I found the keyrings under 
/var/lib/ceph/{role}/ for each role, are they being used when they talk to other 
roles?

Thanks,
Guang 



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy again

2013-09-24 Thread Bernhard Glomm
Am 23.09.2013 21:56:56, schrieb Alfredo Deza:


> 
> On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm <bernhard.gl...@ecologic.eu> wrote:
> > Hi all,
> >
> > something with ceph-deploy doesn't work at all anymore.
> > After an upgrade ceph-deploy failed to roll out a new monitor
> > with "permission denied. are you root?"
> > (obviously there shouldn't be a root login so I had another user
> > for ceph-deploy before which worked perfectly, why not now?)
> >
> > ceph_deploy.install][DEBUG ] Purging host ping ...
> > Traceback (most recent call last):
> > E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
> > E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
> >
> > Does this mean I have to let root log into my Cluster with a passwordless key?
> > I would rather like to use another log in, like so far, if possible.
> >
> Can you paste here the exact command you are running (and with what user)?


  well I used to run this script 

##
#!/bin/bash
# initialize the ceph cluster

# our systems
ceph_osds="ping pong"
ceph_mons="ping pong nuke36"
options="-v"

cd /tmp

for i in $ceph_mons; do
    ssh $i "sudo service ntp stop && sudo ntpdate-debian && sudo service ntp 
start && date";echo -e "\n\n"
done

ceph-deploy $options purge $ceph_mons
ceph-deploy $options purgedata $ceph_mons

mkdir /etc/ceph
cd /etc/ceph

# install ceph
ceph-deploy $options install --stable dumpling $ceph_mons

# create cluster
ceph-deploy $options new $ceph_mons

# inject your extra configuration options here
# switch on debugging
echo -e "debug ms = 1
debug mon = 20" >> /etc/ceph/ceph.conf

# create the monitors
ceph-deploy $options --overwrite-conf mon create $ceph_mons

sleep 10
# get the keys
for host in $ceph_mons; do
    ceph-deploy $options gatherkeys $host
done

for host in $ceph_osds;do
    ceph-deploy disk zap $host:/dev/sdb
    ceph-deploy $options osd create $host:/dev/sdb
done

# check
ceph status

exit 0

##
I ran this script as root, with a .ssh/config to switch to the user I can log into 
the cluster nodes. There is no problem with the ssh nor the sudo, since the ntp 
commands in the beginning are working fine.



>  
> > > 
> > The howto on ceph.com doesn't say anything about it,
> > the changelog.Debian.gz isn't very helpful either and
> > another changelog isn't (provided nor a README)
> >
> > ceph-deploy is version 1.2.6
> > system is freshly installed raring
> >
> > got this both lines in my sources.list
> > deb http://192.168.242.91:3142/ceph.com/debian/ raring main
> > deb http://192.168.242.91:3142/ceph.com/packages/ceph-extras/debian/ raring main
> >
> > since this both didn't work
> > #deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/ raring main
> > #deb http://gitbuilder.ceph.com/cdep-deb-raring-x86_64-basic/ref/master/ raring main
> > (couldn't find the python-pushy version ceph-deploy depends on)
> >
> > TIA
> >
> > Bernhard

> 
> 
> > 



-- 
Bernhard Glomm
IT Administration
Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com