Re: [ceph-users] Scaling RBD module

2013-09-19 Thread Somnath Roy
Thanks Josh! I am now able to add the noshare option to the image mapping successfully. Looking at the dmesg output, I found that it was indeed the secret key problem. Block performance is scaling now. Regards, Somnath -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-de
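For reference, a minimal sketch of what the corrected mapping likely looked like. The monitor addresses below are placeholders, and the value passed via secret= is the base64 key returned by ceph auth get-key, not the client name (which is what the earlier attempt passed via key=):

  # get the base64 secret for the client (client.admin is just the client used in this thread)
  ceph auth get-key client.admin
  # map the image with its own client instance via the noshare option (run as root)
  echo '<mon1>:6789,<mon2>:6789,<mon3>:6789 name=admin,secret=<base64-key-from-above>,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add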

Re: [ceph-users] 10/100 network for Mons?

2013-09-19 Thread David Zafman
I believe the monitor network traffic is light enough that 10/100 network ports should be fine. David Zafman Senior Developer http://www.inktank.com On Sep 18, 2013, at 1:24 PM, Gandalf Corvotempesta wrote: > Hi to all. > Actually I'm building a test cluster with 3 OSD servers connected w

Re: [ceph-users] Scaling RBD module

2013-09-19 Thread Somnath Roy
Hi Josh, Thanks for the information. I am trying to add the following but hitting a permission issue. root@emsclient:/etc# echo :6789,:6789,:6789 name=admin,key=client.admin,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add -bash: echo: write error: Operation not permitted Here is the con

Re: [ceph-users] Scaling RBD module

2013-09-19 Thread Josh Durgin
On 09/19/2013 12:04 PM, Somnath Roy wrote: Hi Josh, Thanks for the information. I am trying to add the following but hitting a permission issue. root@emsclient:/etc# echo :6789,:6789,:6789 name=admin,key=client.admin,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add -bash: echo: write er

[ceph-users] monitor deployment during quick start

2013-09-19 Thread Gruher, Joseph R
Could someone give me a quick clarification on the quick start guide? On this page: http://ceph.com/docs/next/start/quick-ceph-deploy/. After I do "ceph-deploy new" to a system, is that system then a monitor from that point forward? Or do I then have to do "ceph-deploy mon create" to that
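For reference, a hedged sketch of the usual order of operations with ceph-deploy 1.2.x: "ceph-deploy new" only writes ceph.conf and the initial monitor keyring into the local working directory, and the host does not run a monitor until "ceph-deploy mon create" is issued for it (the hostname node1 below is illustrative):

  ceph-deploy new node1        # generate ceph.conf and ceph.mon.keyring locally; nothing runs on node1 yet
  ceph-deploy install node1    # install the ceph packages on node1
  ceph-deploy mon create node1 # deploy and start the monitor daemon on node1
  ceph-deploy gatherkeys node1 # collect the bootstrap keys once the monitor is up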

Re: [ceph-users] OSDMap problem: osd does not exist.

2013-09-19 Thread Yasuhiro Ohara
Hi Sage, Thank you for the response. So, it seems that the mon data can be removed and recovered later, only if the osdmap is saved (in binary) and incorporated at the time of the initial creation of the mon data (i.e., mon --mkfs)? I created the new osdmap with osdmaptool --createsimple, which provided a diffe
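For reference, a minimal sketch of saving the live cluster's osdmap in binary form so it can be reused later (whether and how ceph-mon --mkfs will accept it depends on the version, so treat this as an illustration rather than a recipe):

  # dump the current osdmap in its binary format
  ceph osd getmap -o osdmap.bin
  # inspect the saved map offline
  osdmaptool --print osdmap.bin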

Re: [ceph-users] ceph-deploy not including sudo?

2013-09-19 Thread Gruher, Joseph R
>-Original Message- >From: Alfredo Deza [mailto:alfredo.d...@inktank.com] > >Can you try running ceph-deploy *without* sudo ? > Ah, OK, sure. Without sudo I end up hung here again: ceph@cephtest01:~$ ceph-deploy install cephtest03 cephtest04 cephtest05 cephtest06 [cephtest03][INFO ] R
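For reference, ceph-deploy is meant to be run without sudo, as a regular user that has passwordless sudo rights on the remote nodes. A sketch of the usual sudoers entry on each target host, assuming the deploy user is named ceph (run as root on cephtest03 etc.):

  echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
  chmod 0440 /etc/sudoers.d/ceph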

Re: [ceph-users] ulimit max user processes (-u) and non-root ceph clients

2013-09-19 Thread Gregory Farnum
On Wed, Sep 18, 2013 at 11:43 PM, Dan Van Der Ster wrote: > > On Sep 18, 2013, at 11:50 PM, Gregory Farnum > wrote: > >> On Wed, Sep 18, 2013 at 6:33 AM, Dan Van Der Ster >> wrote: >>> Hi, >>> We just finished debugging a problem with RBD-backed Glance image creation >>> failures, and thought
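For reference, librados opens roughly a couple of threads per OSD connection, and those threads count against the per-user process limit, so a non-root client on a large cluster can trip the default nproc. One common way to raise it, assuming the client runs as the glance user (file name and values are illustrative):

  # /etc/security/limits.d/91-glance.conf
  glance  soft  nproc  65536
  glance  hard  nproc  65536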

Re: [ceph-users] PG distribution scattered

2013-09-19 Thread Gregory Farnum
It will not lose any of your data. But it will try and move pretty much all of it, which will probably send performance down the toilet. -Greg On Thursday, September 19, 2013, Mark Nelson wrote: > Honestly I don't remember, but I would be wary if it's not a test system. > :) > > Mark > > On 09/19

Re: [ceph-users] PG distribution scattered

2013-09-19 Thread Warren Wang
Good timing then. I just fired up the cluster 2 days ago. Thanks. -- Warren On Sep 19, 2013, at 12:34 PM, Gregory Farnum wrote: > It will not lose any of your data. But it will try and move pretty much all > of it, which will probably send performance down the toilet. > -Greg > > On Thursday

Re: [ceph-users] poor radosgw performance

2013-09-19 Thread Yehuda Sadeh
On Thu, Sep 19, 2013 at 8:52 AM, Matt Thompson wrote: > Hi All, > > We're trying to test swift API performance of swift itself (1.9.0) and > ceph's radosgw (0.67.3) using the following hardware configuration: > > Shared servers: > > * 1 server running keystone for authentication > * 1 server runni

Re: [ceph-users] Excessive mon memory usage in cuttlefish 0.61.8

2013-09-19 Thread Joao Eduardo Luis
On 09/19/2013 04:46 PM, Andrey Korolyov wrote: On Thu, Sep 19, 2013 at 1:00 PM, Joao Eduardo Luis wrote: On 09/18/2013 11:25 PM, Andrey Korolyov wrote: Hello, Just restarted one of my mons after a month of uptime, memory commit rose ten times higher than before: 13206 root 10 -10 12.8g

Re: [ceph-users] poor radosgw performance

2013-09-19 Thread Matt Thompson
Hi All, We're trying to test swift API performance of swift itself (1.9.0) and ceph's radosgw (0.67.3) using the following hardware configuration: Shared servers: * 1 server running keystone for authentication * 1 server running swift-proxy, a single MON, and radosgw + Apache / FastCGI Ceph: *
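For reference, one gateway-side knob that tends to matter in this kind of benchmark is the radosgw thread pool (default 100), which caps concurrent requests. A hedged ceph.conf sketch, assuming the gateway instance is named client.radosgw.gateway:

  [client.radosgw.gateway]
      rgw thread pool size = 512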

Re: [ceph-users] PG distribution scattered

2013-09-19 Thread Mark Nelson
Honestly I don't remember, but I would be wary if it's not a test system. :) Mark On 09/19/2013 11:28 AM, Warren Wang wrote: Is this safe to enable on a running cluster? -- Warren On Sep 19, 2013, at 9:43 AM, Mark Nelson wrote: On 09/19/2013 08:36 AM, Niklas Goerke wrote: Hi there I'm cu

Re: [ceph-users] PG distribution scattered

2013-09-19 Thread Warren Wang
Is this safe to enable on a running cluster? -- Warren On Sep 19, 2013, at 9:43 AM, Mark Nelson wrote: > On 09/19/2013 08:36 AM, Niklas Goerke wrote: >> Hi there >> >> I'm currently evaluating ceph and started filling my cluster for the >> first time. After filling it up to about 75%, it repor

Re: [ceph-users] Excessive mon memory usage in cuttlefish 0.61.8

2013-09-19 Thread Andrey Korolyov
On Thu, Sep 19, 2013 at 1:00 PM, Joao Eduardo Luis wrote: > On 09/18/2013 11:25 PM, Andrey Korolyov wrote: >> >> Hello, >> >> Just restarted one of my mons after a month of uptime, memory commit >> rose ten times higher than before: >> >> 13206 root 10 -10 12.8g 8.8g 107m S65 14.0 0:53.

Re: [ceph-users] Uploading large files to swift interface on radosgw

2013-09-19 Thread Yehuda Sadeh
Now you're hitting issue #6336 (it's a regression in dumpling that we'll fix soon). The current workaround is setting the following in your osd section: osd max attr size = Try a value of 10485760 (10M), which I think is large enough. Yehuda On Thu, Sep 19, 2013 at 7:30 AM, Gerd Jakobovitsch wrote:
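For reference, a sketch of where that workaround goes, with the caveat that the exact spelling and whether it can be injected at runtime depend on the version:

  # ceph.conf on the OSD hosts
  [osd]
      osd max attr size = 10485760

  # or push it to the running OSDs without a restart
  ceph tell osd.* injectargs '--osd-max-attr-size 10485760'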

Re: [ceph-users] Uploading large files to swift interface on radosgw

2013-09-19 Thread Gerd Jakobovitsch
Thank you very much, now it worked, with the value you suggested. Regards. On 09/19/2013 12:10 PM, Yehuda Sadeh wrote: Now you're hitting issue #6336 (it's a regression in dumpling that we'll fix soon). The current workaround is setting the following in your osd: osd max attr size = try a va

[ceph-users] Cluster stuck at 15% degraded

2013-09-19 Thread Greg Chavez
We have an 84-osd cluster with volumes and images pools for OpenStack. I was having trouble with full osds, so I increased the pg count from the default 128 to 2700. This balanced out the osds, but the cluster is stuck at 15% degraded: http://hastebin.com/wixarubebe.dos That's the output of ceph
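For reference, a couple of things worth checking here (an assumption about a likely cause, not a confirmed diagnosis): raising pg_num splits PGs, but data does not rebalance until pgp_num is raised to match, and the stuck PGs can be listed directly:

  ceph health detail
  ceph pg dump_stuck unclean
  # pgp_num usually has to follow pg_num before recovery can finish
  ceph osd pool get volumes pg_num
  ceph osd pool get volumes pgp_num
  ceph osd pool set volumes pgp_num 2700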

Re: [ceph-users] MONs numbers, hardware sizing and write ack

2013-09-19 Thread Joao Eduardo Luis
On 09/19/2013 10:03 AM, Gandalf Corvotempesta wrote: 2013/9/19 Joao Eduardo Luis : We have no benchmarks on that, that I am aware of. But the short and sweet answer should be "not really, highly unlikely". If anything, increasing the number of mons should increase the response time, although f

Re: [ceph-users] PG distribution scattered

2013-09-19 Thread Mark Nelson
On 09/19/2013 08:36 AM, Niklas Goerke wrote: Hi there I'm currently evaluating ceph and started filling my cluster for the first time. After filling it up to about 75%, it reported some OSDs being "near-full". After some evaluation I found that the PGs are not distributed evenly over all the osd

[ceph-users] PG distribution scattered

2013-09-19 Thread Niklas Goerke
Hi there I'm currently evaluating ceph and started filling my cluster for the first time. After filling it up to about 75%, it reported some OSDs being "near-full". After some evaluation I found that the PGs are not distributed evenly over all the osds. My Setup: * Two Hosts with 45 Disks ea
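For reference, a couple of commands that can help quantify and mitigate the imbalance (reweight-by-utilization only adjusts weights after the fact; it does not change how CRUSH distributes PGs in the first place):

  # per-OSD usage and PG statistics as the cluster sees them
  ceph pg dump osds
  # lower the weight of OSDs that are more than 20% above the mean utilization
  ceph osd reweight-by-utilization 120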

Re: [ceph-users] Impossible to Create Bucket on RadosGW?

2013-09-19 Thread Alexander Sidorenko
Georg Höllrigl writes: > > Hello, > > I'm horribly failing at creating a bucket on radosgw at ceph 0.67.2 > running on ubuntu 12.04. > > Right now I feel frustrated about radosgw-admin for being inconsistent > in its options. It's possible to list the buckets and also to delete > them but
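For reference, radosgw-admin can manage users and list or delete buckets, but bucket creation itself happens through the S3 or Swift API with a user's credentials. A minimal sketch, with illustrative names:

  # create a user and note the access/secret keys it prints
  radosgw-admin user create --uid=testuser --display-name="Test User"
  # then create the bucket through the API, e.g. with s3cmd pointed at the gateway
  s3cmd mb s3://mybucket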

Re: [ceph-users] OSDMap problem: osd does not exist.

2013-09-19 Thread Sage Weil
On Thu, 19 Sep 2013, Yasuhiro Ohara wrote: > > Hi Sage, > > Thanks, after thrashing it became a little bit better, > but not yet healthy. > > ceph -s: http://pastebin.com/vD28FJ4A > ceph osd dump: http://pastebin.com/37FLNxd7 > ceph pg dump: http://pastebin.com/pccdg20j > > (osd.0 and 1 are not

Re: [ceph-users] ceph-deploy not including sudo?

2013-09-19 Thread Alfredo Deza
On Wed, Sep 18, 2013 at 11:54 PM, Gruher, Joseph R wrote: > Using latest ceph-deploy: > > ceph@cephtest01:/my-cluster$ sudo ceph-deploy --version > > 1.2.6 > > > > I get this failure: > > > > ceph@cephtest01:/my-cluster$ sudo ceph-deploy install cephtest03 cephtest04 > cephtest05 cephtest06 How y

Re: [ceph-users] Objects get via s3api FastCGI "incomplete headers" and hanging up

2013-09-19 Thread Mihály Árva-Tóth
Hello, I wrote a PHP client script (using the latest AWS S3 API lib) and it did not solve my (hanging download) problem at all, so the problem does not seem to be in the libs3/s3 tool. Changing the MTU from 1500 to 9000 and back did not solve it either. Are there any apache (mpm-worker), fastcgi, rgw or librados tuning options to
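For reference, one setting that has caused hangs and truncated responses with stock (unpatched) apache/mod_fastcgi builds is the 100-continue handling; whether it applies here is an assumption, not a diagnosis:

  # ceph.conf on the gateway host, assuming the instance is named client.radosgw.gateway
  [client.radosgw.gateway]
      rgw print continue = false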

Re: [ceph-users] [SOLVED] Re: Ceph block storage and Openstack Cinder Scheduler issue

2013-09-19 Thread Darren Birkett
On 19 September 2013 11:51, Gavin wrote: > > Hi, > > Please excuse/disregard my previous email, I just needed a > clarification on my understanding of how this all fits together. > > I was kindly pointed in the right direction by a friendly gentleman > from Rackspace. Thanks Darren. :) > > The re

[ceph-users] [SOLVED] Re: Ceph block storage and Openstack Cinder Scheduler issue

2013-09-19 Thread Gavin
On 19 September 2013 11:57, Gavin wrote: > Hi there, > > Can someone possibly shed some light on an issue we are experiencing > with the way Cinder is scheduling Ceph volumes in our environment. > > We are running cinder-volume on each of our compute nodes, and they > are all configured to make u

Re: [ceph-users] Decrease radosgw logging level

2013-09-19 Thread Mihály Árva-Tóth
2013/9/17 Joao Eduardo Luis > On 09/13/2013 01:02 PM, Mihály Árva-Tóth wrote: > >> Hello, >> >> How can I decrease the logging level of radosgw? I uploaded 400k objects >> and my radosgw log grew to 2 GiB. Current settings: >> >> rgw_enable_usage_log = true >> rgw_usage_log_tick_interval
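For reference, a sketch of the usual way to quiet the gateway's debug logging while keeping the usage log that was deliberately enabled (the section name assumes the instance is called client.radosgw.gateway):

  [client.radosgw.gateway]
      debug rgw = 0
      debug ms = 0
      rgw enable usage log = true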

[ceph-users] Ceph block storage and Openstack Cinder Scheduler issue

2013-09-19 Thread Gavin
Hi there, Can someone possibly shed some light on an issue we are experiencing with the way Cinder is scheduling Ceph volumes in our environment. We are running cinder-volume on each of our compute nodes, and they are all configured to make use of our Ceph cluster. As far as we can tell the Cep
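For reference, the usual way to run cinder-volume against a shared Ceph backend on many nodes is to give every cinder-volume the same host value, so the scheduler sees a single service rather than picking one node per volume. Whether that is the fix arrived at above is an assumption; the cinder.conf sketch below uses illustrative values:

  [DEFAULT]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  # identical on every node running cinder-volume
  host = rbd-volumes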

Re: [ceph-users] MONs numbers, hardware sizing and write ack

2013-09-19 Thread Gandalf Corvotempesta
2013/9/19 Joao Eduardo Luis : > We have no benchmarks on that, that I am aware of. But the short and sweet > answer should be "not really, highly unlikely". > > If anything, increasing the number of mons should increase the response > time, although for such low numbers that should also be virtual

Re: [ceph-users] Excessive mon memory usage in cuttlefish 0.61.8

2013-09-19 Thread Joao Eduardo Luis
On 09/18/2013 11:25 PM, Andrey Korolyov wrote: Hello, Just restarted one of my mons after a month of uptime, memory commit rose ten times higher than before: 13206 root 10 -10 12.8g 8.8g 107m S65 14.0 0:53.97 ceph-mon a normal one looks like 30092 root 10 -10 4411m 790m 46m S
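For reference, a couple of hedged ways to see where the memory is going and to ask the allocator to return freed pages to the OS (the monitor id 0 is illustrative, and the heap commands require the daemons to be built with tcmalloc):

  ceph tell mon.0 heap stats
  ceph tell mon.0 heap release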

Re: [ceph-users] MONs numbers, hardware sizing and write ack

2013-09-19 Thread Joao Eduardo Luis
On 09/19/2013 09:17 AM, Gandalf Corvotempesta wrote: Hi to all, will increasing the total number of MONs available in a cluster, for example growing from 3 to 5, also decrease the hardware requirements (i.e. RAM and CPU) for each mon instance? We have no benchmarks on that, that I am aware of

[ceph-users] MONs numbers, hardware sizing and write ack

2013-09-19 Thread Gandalf Corvotempesta
Hi to all, will increasing the total number of MONs available in a cluster, for example growing from 3 to 5, also decrease the hardware requirements (i.e. RAM and CPU) for each mon instance? I'm asking this because our cluster will be made of 5 OSD servers and I can easily put one MON on each O

Re: [ceph-users] OSDMap problem: osd does not exist.

2013-09-19 Thread Yasuhiro Ohara
Hi Sage, Thanks, after thrashing it became a little bit better, but not yet healthy. ceph -s: http://pastebin.com/vD28FJ4A ceph osd dump: http://pastebin.com/37FLNxd7 ceph pg dump: http://pastebin.com/pccdg20j (osd.0 and 1 are not running. I issued some "osd in" commands. osd.4 are running but