Re: [ceph-users] add crush rule in one command

2013-07-25 Thread Rongze Zhu
On Fri, Jul 26, 2013 at 2:27 PM, Rongze Zhu wrote: > > > > On Fri, Jul 26, 2013 at 1:22 PM, Gregory Farnum wrote: > >> On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu >> wrote: >> > Hi folks, >> > >> > Recently, I use puppet to deploy Ceph and integrate Ceph with >> OpenStack. We >> > put computean

Re: [ceph-users] add crush rule in one command

2013-07-25 Thread Rongze Zhu
On Fri, Jul 26, 2013 at 1:22 PM, Gregory Farnum wrote: > On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu > wrote: > > Hi folks, > > > > Recently, I use puppet to deploy Ceph and integrate Ceph with OpenStack. > We > > put compute and storage together in the same cluster. So nova-compute and > > OSDs

Re: [ceph-users] What is this HEALTH_WARN indicating?

2013-07-25 Thread Gregory Farnum
On Thu, Jul 25, 2013 at 7:42 PM, Greg Chavez wrote: > Any idea how we tweak this? If I want to keep my ceph node root > volume at 85% used, that's my business, man. There are config options you can set. On the monitors they are "mon osd full ratio" and "mon osd nearfull ratio"; on the OSDs you m
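
For reference, the monitor-side thresholds named above can be set in ceph.conf; the values shown are the usual defaults (a sketch only, and raising them just delays the warning, it does not make a nearly-full OSD safe):

    [global]
        mon osd full ratio = 0.95      # cluster blocks writes above this
        mon osd nearfull ratio = 0.85  # HEALTH_WARN threshold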

Re: [ceph-users] add crush rule in one command

2013-07-25 Thread Gregory Farnum
On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu wrote: > Hi folks, > > Recently, I use puppet to deploy Ceph and integrate Ceph with OpenStack. We > put compute and storage together in the same cluster. So nova-compute and > OSDs will be in each server. We will create a local pool for each server, > an

[ceph-users] Upgrade from 0.61.4 to 0.61.6 mon failed. Upgrade to 0.61.7 mon still failed.

2013-07-25 Thread Keith Phua
Hi all, 2 days ago I upgraded one of my mons from 0.61.4 to 0.61.6. The mon failed to start. I checked the mailing list and found reports of mons failing after upgrading to 0.61.6, so I waited for the next release and upgraded the failed mon from 0.61.6 to 0.61.7. My mon still fails to start up.
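
A minimal sketch of how to capture more detail from a mon that will not start, assuming a sysvinit-era Cuttlefish install (the mon id "a" is illustrative):

    service ceph stop mon.a
    # run the monitor in the foreground with verbose logging to see why it aborts
    ceph-mon -i a -d --debug-mon 20 --debug-ms 1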

Re: [ceph-users] What is this HEALTH_WARN indicating?

2013-07-25 Thread Greg Chavez
Any idea how we tweak this? If I want to keep my ceph node root volume at 85% used, that's my business, man. Thanks. --Greg On Mon, Jul 8, 2013 at 4:27 PM, Mike Bryant wrote: > Run "ceph health detail" and it should give you more information. > (I'd guess an osd or mon has a full hard disk) >

[ceph-users] add crush rule in one command

2013-07-25 Thread Rongze Zhu
Hi folks, Recently I have been using puppet to deploy Ceph and integrate Ceph with OpenStack. We put compute and storage together in the same cluster, so nova-compute and OSDs will be on each server. We will create a local pool for each server, and each pool will only use that server's disks. Local pools will
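
For context, a sketch of the multi-step CRUSH workflow the thread wants to collapse into one command, assuming the standard ceph and crushtool binaries (the pool name and rule id are illustrative, not from the thread):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add a per-server bucket and a rule confined to it
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # then point a local pool at the new rule
    ceph osd pool create local-server1 128 128
    ceph osd pool set local-server1 crush_ruleset 3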

Re: [ceph-users] Mounting RBD or CephFS on Ceph-Node?

2013-07-25 Thread Josh Durgin
On 07/23/2013 06:09 AM, Oliver Schulz wrote: Dear Ceph Experts, I remember reading that, at least in the past, it wasn't recommended to mount Ceph storage on a Ceph cluster node. Given a recent kernel (3.8/3.9) and sufficient CPU and memory resources on the nodes, would it now be safe to * Mount R
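
The question concerns the kernel clients; a sketch of the two mounts being discussed, with illustrative names and addresses (the usual caution is about running these kernel clients on a host that also runs OSDs):

    # kernel RBD (rbd.ko)
    rbd map rbd/myimage --id admin
    mount /dev/rbd/rbd/myimage /mnt/rbd
    # kernel CephFS (ceph.ko)
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret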

Re: [ceph-users] Kernel's rbd in 3.10.1

2013-07-25 Thread Josh Durgin
On 07/24/2013 09:37 PM, Mikaël Cluseau wrote: Hi, I have a bug in the 3.10 kernel under Debian, whether with a self-compiled linux-stable from git (built with make-kpkg) or with Sid's package. I'm using format-2 images (ceph version 0.61.6 (59ddece17e36fef69ecf40e239aeffad33c9db35)) to make snapsho
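
As background, a sketch of the format-2 snapshot/clone workflow being exercised here (image names and sizes are illustrative; older rbd releases spell --image-format as --format):

    rbd create --image-format 2 --size 10240 rbd/base
    rbd snap create rbd/base@snap1
    rbd snap protect rbd/base@snap1
    rbd clone rbd/base@snap1 rbd/child
    rbd map rbd/child   # format-2 images need a recent kernel rbd client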

Re: [ceph-users] testing ceph - very slow write performances

2013-07-25 Thread Chris Hoy Poy
Yes, those drives are horrible, and you have them partitioned etc. Don't use mdadm for Ceph OSDs; in my experience it *does* impair performance and just doesn't play nice with OSDs. Ceph does its own block replication - though be careful, a size of "2" is not necessarily as "safe" as raid10
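
To illustrate the replication point, the per-pool settings involved look roughly like this ("rbd" is just the default pool name; the values are examples, not a recommendation):

    ceph osd pool set rbd size 2       # two copies; not equivalent to RAID10 durability
    ceph osd pool set rbd min_size 1   # writes still accepted with only one copy up
    ceph osd pool get rbd size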

Re: [ceph-users] ceph-deploy and bugs 5195/5205: mon.host1 does not exist in monmap, will attempt to join an existing cluster

2013-07-25 Thread Josh Holland
Hi Sage, On 25 July 2013 17:21, Sage Weil wrote: > I suspect the difference here is that the dns names you are specifying in > ceph-deploy new do not match. Aha, this could well be the problem. The current DNS names resolve to the address bound to an interface that is intended to be used mostly

Re: [ceph-users] ceph monitors stuck in a loop after install with ceph-deploy

2013-07-25 Thread Sage Weil
On Wed, 24 Jul 2013, pe...@2force.nl wrote: > On 2013-07-24 07:19, Sage Weil wrote: > > On Wed, 24 Jul 2013, Sébastien RICCIO wrote: > > > > > > Hi! While trying to install ceph using ceph-deploy, the monitor nodes are > > > stuck waiting on this process: > > > /usr/bin/python /usr/sbin/ceph-creat
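
A sketch of how to see what a stuck monitor thinks is happening, via its admin socket (the mon id and socket path are the usual defaults; adjust for the actual host):

    ceph --admin-daemon /var/run/ceph/ceph-mon.host1.asok mon_status
    ceph --admin-daemon /var/run/ceph/ceph-mon.host1.asok quorum_status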

Re: [ceph-users] v0.67-rc2 dumpling release candidate

2013-07-25 Thread Guido Winkelmann
On Wednesday, 24 July 2013 at 22:45:55, Sage Weil wrote: > Go forth and test! I just upgraded a 0.61.7 cluster to 0.67-rc2. I restarted the mons first and, as expected, they did not join a quorum with the remaining 0.61.7 mons, but after all of the mons were restarted there was no problem any more. One o
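
For anyone following the same rolling upgrade, quorum membership can be checked from any node while the mons are being restarted (a generic sketch, nothing release-specific):

    ceph mon stat        # quick one-line summary of mons and quorum
    ceph quorum_status   # JSON detail: quorum members vs. known mons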

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-25 Thread Gregory Farnum
On Thu, Jul 25, 2013 at 12:47 AM, Mostowiec Dominik wrote: > Hi > We found something else. > After osd.72 flapp, one PG '3.54d' was recovering long time. > > -- > ceph health details > HEALTH_WARN 1 pgs recovering; recovery 1/39821745 degraded (0.000%) > pg 3.54d is active+recovering, acting [72,1

Re: [ceph-users] [Xen-API] The vdi is not available

2013-07-25 Thread Sébastien RICCIO
mount.nfs 10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 -o soft,timeo=133,retrans=2147483647,tcp,noac this gives: mount -o doesn't exist. - Original Message - From: "Sébastien RICCIO" To: "Andr
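
A hedged guess at the syntax problem: mount.nfs expects the mount point before the options, so with an illustrative mount point the command would look like:

    mount -t nfs -o soft,timeo=133,retrans=2147483647,tcp,noac \
        10.254.253.9:/xen/9f9aa794-86c0-9c36-a99d-1e5fdc14a206 /mnt/nfs-sr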

Re: [ceph-users] ceph-deploy and bugs 5195/5205: mon.host1 does not exist in monmap, will attempt to join an existing cluster

2013-07-25 Thread Sage Weil
On Thu, 25 Jul 2013, Josh Holland wrote: > Hi List, > > I've been having issues getting mons deployed following the > ceph-deploy instructions here[0]. My steps were: > > $ ceph-deploy new host{1..3} > $ vi ceph.conf # Add in public network/cluster network details, as > well as change the mon I

Re: [ceph-users] A lot of pools?

2013-07-25 Thread Gregory Farnum
On Thursday, July 25, 2013, Dzianis Kahanovich wrote: > I think to make pool-per-user (primary for cephfs; for security, quota, > etc), > hundreds (or even more) of them. But I remember 2 facts: > 1) info in manual about slowdown on many pools; Yep, this is still a problem; pool-per-user isn't g

[ceph-users] testing ceph - very slow write performances

2013-07-25 Thread Sébastien RICCIO
Hi ceph-users, I'm currently evaluating Ceph for a project and I'm getting quite low write performance, so please, if you have time, read this post and give me some advice :) My test setup uses some free hardware we have lying around in our datacenter: three Ceph server nodes; on each one is r
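
Before digging into hardware, a quick baseline sketch with the stock benchmarking tools can help separate disk, network and Ceph overheads (the pool name and OSD id are illustrative):

    ceph osd pool create bench 128 128
    rados bench -p bench 60 write -t 16   # 60-second write test, 16 concurrent ops
    ceph tell osd.0 bench                 # raw write throughput of a single OSD's disk/journal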

Re: [ceph-users] ceph-deploy and bugs 5195/5205: mon.host1 does not exist in monmap, will attempt to join an existing cluster

2013-07-25 Thread Josh Holland
Links I forgot to include the first time: [0] http://ceph.com/docs/master/rados/deployment/ceph-deploy-install/ [1] http://tracker.ceph.com/issues/5195 [2] http://tracker.ceph.com/issues/5205 Apologies for the noise, Josh ___ ceph-users mailing list ceph

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread peter
On 2013-07-25 15:55, Joao Eduardo Luis wrote: On 07/25/2013 02:39 PM, pe...@2force.nl wrote: On 2013-07-25 15:21, Joao Eduardo Luis wrote: On 07/25/2013 11:20 AM, pe...@2force.nl wrote: On 2013-07-25 12:08, Wido den Hollander wrote: On 07/25/2013 12:01 PM, pe...@2force.nl wrote: On 2013-07-2

[ceph-users] ceph-deploy and bugs 5195/5205: mon.host1 does not exist in monmap, will attempt to join an existing cluster

2013-07-25 Thread Josh Holland
Hi List, I've been having issues getting mons deployed following the ceph-deploy instructions here[0]. My steps were: $ ceph-deploy new host{1..3} $ vi ceph.conf # Add in public network/cluster network details, as well as change the mon IPs to those on the correct interface $ ceph-deploy insta
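
A sketch of the ceph.conf fields that have to agree with the names passed to ceph-deploy new (the hostnames and addresses here are illustrative):

    [global]
    mon_initial_members = host1, host2, host3
    mon_host = 10.0.1.1, 10.0.1.2, 10.0.1.3   # addresses on the intended public network
    public_network = 10.0.1.0/24
    cluster_network = 10.0.2.0/24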

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread Joao Eduardo Luis
On 07/25/2013 02:39 PM, pe...@2force.nl wrote: On 2013-07-25 15:21, Joao Eduardo Luis wrote: On 07/25/2013 11:20 AM, pe...@2force.nl wrote: On 2013-07-25 12:08, Wido den Hollander wrote: On 07/25/2013 12:01 PM, pe...@2force.nl wrote: On 2013-07-25 11:52, Wido den Hollander wrote: On 07/25/20

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread peter
On 2013-07-25 15:21, Joao Eduardo Luis wrote: On 07/25/2013 11:20 AM, pe...@2force.nl wrote: On 2013-07-25 12:08, Wido den Hollander wrote: On 07/25/2013 12:01 PM, pe...@2force.nl wrote: On 2013-07-25 11:52, Wido den Hollander wrote: On 07/25/2013 11:46 AM, pe...@2force.nl wrote: Any news on

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread Joao Eduardo Luis
On 07/25/2013 11:20 AM, pe...@2force.nl wrote: On 2013-07-25 12:08, Wido den Hollander wrote: On 07/25/2013 12:01 PM, pe...@2force.nl wrote: On 2013-07-25 11:52, Wido den Hollander wrote: On 07/25/2013 11:46 AM, pe...@2force.nl wrote: Any news on this? I'm not sure if you guys received the li

[ceph-users] A lot of pools?

2013-07-25 Thread Dzianis Kahanovich
I am thinking of making a pool per user (primarily for CephFS; for security, quotas, etc.), hundreds (or even more) of them. But I remember 2 facts: 1) the note in the manual about a slowdown with many pools; 2) something in a later changelog about hashed pool IDs (?). How do things stand now with large numbers of pools? And how to avoid ser
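
For concreteness, the pool-per-user pattern under discussion looks roughly like this, with cephx caps restricting each client to its own pool (names and pg counts are illustrative):

    ceph osd pool create user-alice 64 64
    ceph auth get-or-create client.alice mon 'allow r' osd 'allow rw pool=user-alice'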

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread peter
On 2013-07-25 12:08, Wido den Hollander wrote: On 07/25/2013 12:01 PM, pe...@2force.nl wrote: On 2013-07-25 11:52, Wido den Hollander wrote: On 07/25/2013 11:46 AM, pe...@2force.nl wrote: Any news on this? I'm not sure if you guys received the link to the log and monitor files. One monitor and

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread Wido den Hollander
On 07/25/2013 12:01 PM, pe...@2force.nl wrote: On 2013-07-25 11:52, Wido den Hollander wrote: On 07/25/2013 11:46 AM, pe...@2force.nl wrote: Any news on this? I'm not sure if you guys received the link to the log and monitor files. One monitor and osd is still crashing with the error below. I

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread peter
On 2013-07-25 11:52, Wido den Hollander wrote: On 07/25/2013 11:46 AM, pe...@2force.nl wrote: Any news on this? I'm not sure if you guys received the link to the log and monitor files. One monitor and osd is still crashing with the error below. I think you are seeing this issue: http://track

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread Wido den Hollander
On 07/25/2013 11:46 AM, pe...@2force.nl wrote: Any news on this? I'm not sure if you guys received the link to the log and monitor files. One monitor and osd is still crashing with the error below. I think you are seeing this issue: http://tracker.ceph.com/issues/5737 You can try with new pack

Re: [ceph-users] v0.61.6 Cuttlefish update released

2013-07-25 Thread peter
Any news on this? I'm not sure if you guys received the link to the log and monitor files. One monitor and osd is still crashing with the error below. On 2013-07-24 09:57, pe...@2force.nl wrote: Hi Sage, I just had a 0.61.6 monitor crash and one osd. The mon and all osds restarted just fine a
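
If the crashing daemons need to be reported upstream, the usual request is a log with verbose debugging enabled, e.g. in ceph.conf (a sketch; these are the commonly requested levels):

    [mon]
        debug mon = 20
        debug paxos = 20
        debug ms = 1
    [osd]
        debug osd = 20
        debug ms = 1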

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-25 Thread Mostowiec Dominik
Hi We found something else. After osd.72 flapp, one PG '3.54d' was recovering long time. -- ceph health details HEALTH_WARN 1 pgs recovering; recovery 1/39821745 degraded (0.000%) pg 3.54d is active+recovering, acting [72,108,23] recovery 1/39821745 degraded (0.000%) -- Last flap down/up osd.72 w