Re: [ceph-users] ceph monitors stuck in a loop after install with ceph-deploy

2013-07-23 Thread Dan van der Ster
On Wednesday, July 24, 2013 at 7:19 AM, Sage Weil wrote: > On Wed, 24 Jul 2013, Sébastien RICCIO wrote: > > > > Hi! While trying to install ceph using ceph-deploy the monitor nodes are > > stuck waiting on this process: > > /usr/bin/python /usr/sbin/ceph-create-keys -i a (or b or c) > > > > I tr

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Eric Eastman
I still have much to learn about how ceph is built. The ceph-deploy list command is now working with my system using a cciss boot disk. ceph-deploy disk list ceph11 /dev/cciss/c0d0 : /dev/cciss/c0d0p1 other, ext4, mounted on / /dev/cciss/c0d0p2 other /dev/cciss/c0d0p5 swap, swap /dev/fd0 other

Re: [ceph-users] ceph monitors stuck in a loop after install with ceph-deploy

2013-07-23 Thread Sébastien RICCIO
Hi, Oh thanks, I'll try again with those removed. My mistake, sorry... I was trying to follow the conf file guide :) Cheers, Sébastien On 24.07.2013 07:19, Sage Weil wrote: It's the config file. You no longer need to (or should) enumerate the daemons in the config file; the sysvinit/upsta

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Sage Weil
On Wed, 24 Jul 2013, Eric Eastman wrote: > Still looks the same. I even tried doing a purge to make sure > everything was clean Sorry, I just pushed the branch... I forgot to mention it will take 10-15 minutes for it to build and update at gitbuilder.ceph.com. Keep an

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Eric Eastman
Still looks the same. I even tried doing a purge to make sure everything was clean. root@ceph00:~# ceph-deploy -v purge ceph11 Purging from cluster ceph hosts ceph11 Detecting platform for host ceph11 ... Distro Ubuntu codename raring Purging host ceph11 ... root@ceph0

Re: [ceph-users] ceph monitors stuck in a loop after install with ceph-deploy

2013-07-23 Thread Sage Weil
On Wed, 24 Jul 2013, Sébastien RICCIO wrote: > > Hi! While trying to install ceph using ceph-deploy the monitor nodes are > stuck waiting on this process: > /usr/bin/python /usr/sbin/ceph-create-keys -i a (or b or c) > > I tried to run the command manually and it loops on this: > connect to /va

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Sage Weil
On Wed, 24 Jul 2013, Eric Eastman wrote: > The software install works now. > > Still seeing a problem with ceph-deploy > > root@ceph00:~# ceph-deploy install --dev=wip-cuttlefish-ceph-disk ceph11 > OK > root@ceph00:~# ceph-deploy disk list ceph11 > disk list failed: Traceback (most recent call la

[ceph-users] ceph monitors stuck in a loop after install with ceph-deploy

2013-07-23 Thread Sébastien RICCIO
Hi! While trying to install ceph using ceph-deploy the monitor nodes are stuck waiting on this process: /usr/bin/python /usr/sbin/ceph-create-keys -i a (or b or c) I tried to run the command manually and it loops on this: connect to /var/run/ceph/ceph-mon.a.asok failed with (2) No such file
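
For reference, ceph-create-keys is waiting on the monitor's admin socket, so a quick first check is whether ceph-mon is actually running and has created it (a sketch, assuming the default /var/run/ceph paths and a mon id of 'a'):

    ls -l /var/run/ceph/ceph-mon.*.asok                           # the socket only exists while ceph-mon is running
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status  # ask the running monitor for its state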

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Eric Eastman
The software install works now. Still seeing a problem with ceph-deploy root@ceph00:~# ceph-deploy install --dev=wip-cuttlefish-ceph-disk ceph11 OK root@ceph00:~# ceph-deploy disk list ceph11 disk list failed: Traceback (most recent call last): File "/usr/sbin/ceph-disk", line 2298, in main

[ceph-users] v0.61.6 Cuttlefish update released

2013-07-23 Thread Sage Weil
There was a problem with the monitor daemons in v0.61.5 that would prevent them from restarting after some period of time. This release fixes the bug and works around the issue to allow affected monitors to restart. All v0.61.5 users are strongly recommended to upgrade. Thanks everyone who he
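
For those on packaged installs, a minimal sketch of a typical upgrade on Debian/Ubuntu with the sysvinit script (package names, service syntax and ordering depend on your distro and init setup):

    apt-get update && apt-get install ceph   # pull the v0.61.6 packages
    service ceph restart mon.a               # restart each monitor in turn, one at a time
    ceph -s                                  # confirm the monitors reform a quorum before moving on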

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Sage Weil
On Tue, 23 Jul 2013, Eric Eastman wrote: > I tried running: > > ceph-deploy install --dev=wip-cuttlefish-ceph-disk HOST > > To a clean system, and just after I ran: > > ceph-deploy install ceph11 (Which worked without error.) > > and in both cases, ceph-deploy failed with the output: > > > #

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Eric Eastman
I tried running: ceph-deploy install --dev=wip-cuttlefish-ceph-disk HOST To a clean system, and just after I ran: ceph-deploy install ceph11 (Which worked without error.) and in both cases, ceph-deploy failed with the output: #ceph-deploy install --dev=wip-cuttlefish-ceph-disk ceph11 OK WTr

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Eric Eastman
I will test it out and let you know. Eric On Tue, 23 Jul 2013, Eric Eastman wrote: I am seeing issues with ceph-deploy and ceph-disk, which it calls, if the storage devices are not generic sdx devices. On my older HP systems, ceph-deploy fails on the cciss devices, and I tried to use it wi

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Luke Jing Yuan
Dear all, There may be a small chance that the wip ceph-disk fails the journal part despite having managed to partition the disk properly. In my case I use the same disk for both data and journal on /dev/cciss/c0d1, though I am not sure if I was using the latest from the wip branch. In such case, a

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Sage Weil
On Tue, 23 Jul 2013, Eric Eastman wrote: > I am seeing issues with ceph-deploy and ceph-disk, which it calls, if > the storage devices are not generic sdx devices. On my older HP > systems, ceph-deploy fails on the cciss devices, and I tried to use it > with multipath dm devices, and that did n

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread johnu
The issue is, I can create the volume but I can attach it to an instance only if the instance is in shutdown state. If an instance is already in shutdown state and I attach a volume, and then restart the instance, it goes into "error state". The logs are attached. Jul 23 17:06:10 master 2013-07-23 17:06:10.

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Eric Eastman
I am seeing issues with ceph-deploy and ceph-disk, which it calls, if the storage devices are not generic sdx devices. On my older HP systems, ceph-deploy fails on the cciss devices, and I tried to use it with multipath dm devices, and that did not work at all. Logging is not verbose enough t
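
As noted above, ceph-deploy calls ceph-disk on the target host, so running ceph-disk there by hand can surface more detail than ceph-deploy's own logging (a sketch; the cciss path is just an example device):

    ceph-disk list                         # roughly what 'ceph-deploy disk list' runs remotely
    ceph-disk -v prepare /dev/cciss/c0d1   # verbose prepare against a non-sdX device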

Re: [ceph-users] Weird problem - maybe quorum related

2013-07-23 Thread Gregory Farnum
"ceph --admin-daemon mon_status" "ceph --admin-daemon quorum_status" from each monitor. These are good ones to know for debugging any sort of monitor issue as they let you see each monitor's perspective on its status and the total cluster status. -Greg Software Engineer #42 @ http://inktank.com

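Spelled out with the admin-socket argument (the preview above omits it), the commands look roughly like this, assuming the default socket paths and run on each monitor host:

    ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok mon_status
    ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok quorum_status
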
Re: [ceph-users] rgw bucket index

2013-07-23 Thread Craig Lewis
On 7/22/2013 11:50 PM, Yehuda Sadeh wrote: > 2. Bucket sharding > > Keep bucket index over multiple objects. A trivial implementation will > just set a constant number of shards per index. A (much much) more > complex implementation will adjust the number of shards on the fly > according to the num

Re: [ceph-users] Weird problem - maybe quorum related

2013-07-23 Thread Gregory Farnum
Can you get the quorum and related dumps out of the admin socket for each running monitor and see what they say? -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Jul 23, 2013 at 4:51 PM, Mandell Degerness wrote: > One of our tests last night failed in a weird way. We s

[ceph-users] Weird problem - maybe quorum related

2013-07-23 Thread Mandell Degerness
One of our tests last night failed in a weird way. We started with a three node cluster, with three monitors, expanded to a 5 node cluster with 5 monitors and dropped back to a 4 node cluster with three monitors. The sequence of events was: start 3 monitors (monitors 0, 1, 2) - monmap e1 add one

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread johnu
There is a hidden bug which I couldn't reproduce. I was using devstack for openstack and I enabled the syslog option for getting nova and cinder logs. After reboot, everything was fine. I was able to create volumes and I verified in rados. Another thing I noticed is, I don't have a cinder user as in de

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-23 Thread Studziński Krzysztof
> -Original Message- > From: Gregory Farnum [mailto:g...@inktank.com] > Sent: Wednesday, July 24, 2013 12:28 AM > To: Studziński Krzysztof; Yehuda Sadeh > Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com; Mostowiec > Dominik > Subject: Re: [ceph-users] Flapping osd / continuously r

Re: [ceph-users] Unclean PGs in active+degraded or active+remapped

2013-07-23 Thread Gregory Farnum
On Fri, Jul 19, 2013 at 3:44 PM, Pawel Veselov wrote: > Hi. > > I'm trying to understand the reason behind some of my unclean pgs, after > moving some OSDs around. Any help would be greatly appreciated. I'm sure we > are missing something, but can't quite figure out what. > > [root@ip-10-16-43-12

Re: [ceph-users] ceph-deploy mon create doesn't create keyrings

2013-07-23 Thread Gregory Farnum
On Mon, Jul 22, 2013 at 2:53 AM, wrote: > I am using RHEL6. > > From the ceph admin machine I executed: > > ceph-deploy install cephserverX > ceph-deploy new cephserverX > ceph-deploy mon create cephserverX > > is there a debug mode more verbose than -v that I can enable, in order to see > more

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Arf, no worries. Even after a quick dive into the logs, I haven't found anything (default log level). Sébastien Han, Cloud Engineer. "Always give 100%. Unless you're giving blood." Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70 Email : sebastien@enovance.com – Skype : han.sbastien Addr

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 3:20 PM, Studziński Krzysztof wrote: >> On Tue, Jul 23, 2013 at 2:50 PM, Studziński Krzysztof >> wrote: >> > Hi, >> > We've got some problem with our cluster - it continuously reports one >> osd as failed and after auto-rebooting everything seems to work fine for some >> time

Re: [ceph-users] Removing osd with zero data causes placement shift

2013-07-23 Thread Andrey Korolyov
Unfortunately this may cause the peering process to take twice as long as usual, and a production cluster will suffer badly from it. It would be wonderful if a solution to such problems and related ones (the non-idempotent-like behavior of the mon quorum which I reported about a week ago) could be presented in

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-23 Thread Studziński Krzysztof
> On Tue, Jul 23, 2013 at 2:50 PM, Studziński Krzysztof > wrote: > > Hi, > > We've got some problem with our cluster - it continuously reports one > osd as failed and after auto-rebooting everything seems to work fine for some > time (a few minutes). CPU util of this osd is max 8%, iostat is very low.

Re: [ceph-users] Removing osd with zero data causes placement shift

2013-07-23 Thread Gregory Farnum
Yeah, this is because right now when you mark an OSD out the weights of the buckets above it aren't changing. I guess conceivably we could set it up to do so, hrm... In any case, if this is inconvenient you can do something like unlink the OSD right after you mark it out; that should update the CRU
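
A minimal sketch of the workaround Greg describes, with osd.12 as a made-up example id (check which crush subcommands your release supports):

    ceph osd out osd.12            # mark it out; the bucket weights above it stay as they were
    ceph osd crush unlink osd.12   # the 'unlink' step; 'ceph osd crush remove osd.12' drops it from the map entirely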

Re: [ceph-users] Flapping osd / continuously reported as failed

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 2:50 PM, Studziński Krzysztof wrote: > Hi, > We've got some problem with our cluster - it continuously reports one osd > as failed and after auto-rebooting everything seems to work fine for some time (a few > minutes). CPU util of this osd is max 8%, iostat is very low. We tri

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 2:55 PM, Sebastien Han wrote: > > Hi Greg, > > Just tried the list watchers, on a rbd with the QEMU driver and I got: > > root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a > watcher=client.30882 cookie=1 > > I also tried with the kernel module but didn't se

[ceph-users] Removing osd with zero data causes placement shift

2013-07-23 Thread Andrey Korolyov
Hello, I had a couple of osds in down+out state and a completely clean cluster, but after running ``osd crush remove'' there was some data redistribution, with a shift proportional to the osd weight in the crushmap but lower than the 'osd out' amount of data replacement for an osd with the same weight appr

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Sebastien Han
Hi Greg, Just tried the list watchers on an rbd with the QEMU driver and I got: root@ceph:~# rados -p volumes listwatchers rbd_header.789c2ae8944a watcher=client.30882 cookie=1 I also tried with the kernel module but didn't see anything… No IP addresses anywhere… :/, any idea? Nice tip btw :) Sébastie

[ceph-users] Flapping osd / continuously reported as failed

2013-07-23 Thread Studziński Krzysztof
Hi, We've got some problem with our cluster - it continuously reports one osd as failed and after auto-rebooting everything seems to work fine for some time (a few minutes). CPU util of this osd is max 8%, iostat is very low. We tried to "ceph osd out" such a flapping osd, but after recovering this beh

[ceph-users] Osd crash and misplaced objects after rapid object deletion

2013-07-23 Thread Michael Lowe
On two different occasions I've had an osd crash and misplace objects when rapid object deletion has been triggered by discard/trim operations with the qemu rbd driver. Has anybody else had this kind of trouble? The objects are still on disk, just not in a place that the osd thinks is valid.

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 1:28 PM, Wido den Hollander wrote: > On 07/23/2013 09:09 PM, Gaylord Holder wrote: >> >> Is it possible to find out which machines are mapping an RBD? > > > No, that is stateless. You can use locking however, you can for example put > the hostname of the machine in the loc

Re: [ceph-users] ceph-deploy

2013-07-23 Thread Sébastien RICCIO
Hi, Same issue here about disk list, it doesn't list all available disks and outputs some bogus information. Also had some problems with ceph-deploy osd create, which I guess uses some code from osd prepare. Same thing about the temporary directory. After re-trying the command 3-4 times it fina

[ceph-users] ceph-deploy

2013-07-23 Thread Hariharan Thantry
I'm seeing quite a few errors with ceph-deploy that makes me wonder if the tool is stable. For instance, ceph-deploy disk list ; returns a partial set of disks, misses a few partitions and returns incorrect partitions (for XFS type that aren't listed by parted) ceph-deploy osd prepare :/dev/sd{a}

Re: [ceph-users] RBD Mapping

2013-07-23 Thread Wido den Hollander
On 07/23/2013 09:09 PM, Gaylord Holder wrote: Is it possible to find out which machines are mapping an RBD? No, that is stateless. You can use locking, however: you can for example put the hostname of the machine in the lock. But that's not mandatory in the protocol. Maybe you are able to l
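
A rough sketch of the locking convention Wido suggests (pool and image names are made up; rbd locks are advisory, so every client has to follow the same convention):

    rbd lock add volumes/myimage "$(hostname)"   # take a lock whose id records this host
    rbd lock list volumes/myimage                # later, see which host id holds the lock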

[ceph-users] RBD Mapping

2013-07-23 Thread Gaylord Holder
Is it possible to find out which machines are mapping an RBD? -Gaylord ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Errors on OSD creation

2013-07-23 Thread Hariharan Thantry
Following up, I kept the default "ceph" name of the cluster, and didn't muck with any defaults in the ceph.conf file (for auth settings). Using ceph-deploy to prepare an OSD resulted in the following error. It created a 1G journal file on the mount I had specified, and I do see a new partition bein

[ceph-users] using ceph-deploy with no authentication

2013-07-23 Thread Hariharan Thantry
Hi, I'm trying to set up a 3-node ceph cluster using ceph-deploy from an admin machine (VM box). Firstly, I did the following from the admin node: 1. ceph-deploy --cluster test new ceph-1 ceph-2 ceph-3 {3 monitors} 2. Edited the resultant test.conf file to put auth supported = none 3. Then did $ce

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Joao Eduardo Luis
On 07/23/2013 04:59 PM, pe...@2force.nl wrote: On 2013-07-23 17:31, Joao Eduardo Luis wrote: On 07/23/2013 04:19 PM, Dan van der Ster wrote: On Tuesday, July 23, 2013 at 4:46 PM, pe...@2force.nl wrote: On 2013-07-22 18:20, Joao Eduardo Luis wrote: On 07/22/2013 04:59 PM, pe...@2force.nl

Re: [ceph-users] Location of MONs

2013-07-23 Thread Alex Bligh
On 23 Jul 2013, at 17:16, Gregory Farnum wrote: >> And without wanting to sound daft having missed a salient configuration >> detail, but there's no way to release when it's written to the primary? > > Definitely not. Ceph's consistency guarantees and recovery mechanisms > are all built on top of a

Re: [ceph-users] Location of MONs

2013-07-23 Thread Matthew Walster
On 23 July 2013 17:16, Gregory Farnum wrote: > > And without wanting to sound daft having missed a salient configuration > > detail, but there's no way to release when it's written to the primary? > > Definitely not. > I thought as much -- and now you've explained why, I completely understand the

Re: [ceph-users] Location of MONs

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 9:12 AM, Matthew Walster wrote: > On 23 July 2013 17:07, Gregory Farnum wrote: >> >> If you have three osds that are >> separated by 5ms each and all hosting a PG, then your lower-bound >> latency for a write op is 10ms — 5 ms to send from the primary to the >> replicas,

Re: [ceph-users] Location of MONs

2013-07-23 Thread Matthew Walster
On 23 July 2013 17:07, Gregory Farnum wrote: > If you have three osds that are > separated by 5ms each and all hosting a PG, then your lower-bound > latency for a write op is 10ms — 5 ms to send from the primary to the > replicas, 5ms for them to ack back. And without wanting to sound daft ha

Re: [ceph-users] Location of MONs

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 9:04 AM, Matthew Walster wrote: > > That's fantastic, thanks. I'm assuming that 5ms is probably too much for the > OSDs -- do we have any idea/data as to the effect of latency on OSDs if they > were split over a similar distance? Or even a spread - 0.5ms, 1ms, 2ms etc. > Ob

Re: [ceph-users] Location of MONs

2013-07-23 Thread Matthew Walster
On 23 July 2013 16:59, Gregory Farnum wrote: > On Tue, Jul 23, 2013 at 8:54 AM, Matthew Walster > wrote: > > I've got a relatively small Ceph cluster I'm playing with at the moment, > and > > against advice, I'm running the MONs on the OSDs. > > > > Ideally, I'd like to have half the OSDs in a d

Re: [ceph-users] Getting a list of all defined monitors from a client

2013-07-23 Thread Sage Weil
On Tue, 23 Jul 2013, Gregory Farnum wrote: > On Tue, Jul 23, 2013 at 8:50 AM, Guido Winkelmann > wrote: > > Hi, > > > > How can I get a list of all defined monitors in a ceph cluster from a client > > when using the C API? > > > > I need to store the monitors for available ceph clusters in a datab

Re: [ceph-users] Location of MONs

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 8:54 AM, Matthew Walster wrote: > I've got a relatively small Ceph cluster I'm playing with at the moment, and > against advice, I'm running the MONs on the OSDs. > > Ideally, I'd like to have half the OSDs in a different facility and > therefore have one MON in each facili

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread peter
On 2013-07-23 17:31, Joao Eduardo Luis wrote: On 07/23/2013 04:19 PM, Dan van der Ster wrote: On Tuesday, July 23, 2013 at 4:46 PM, pe...@2force.nl wrote: On 2013-07-22 18:20, Joao Eduardo Luis wrote: On 07/22/2013 04:59 PM, pe...@2force.nl wrote: Hi Joao, I have sen

Re: [ceph-users] Getting a list of all defined monitors from a client

2013-07-23 Thread Gregory Farnum
On Tue, Jul 23, 2013 at 8:50 AM, Guido Winkelmann wrote: > Hi, > > How can I get a list of all defined monitors in a ceph cluster from a client > when using the C API? > > I need to store the monitors for available ceph clusters in a database, and I > would like to build this software so that a) t

[ceph-users] Location of MONs

2013-07-23 Thread Matthew Walster
I've got a relatively small Ceph cluster I'm playing with at the moment, and against advice, I'm running the MONs on the OSDs. Ideally, I'd like to have half the OSDs in a different facility and therefore have one MON in each facility. Which gives the problem of where to put that pesky third MON.

[ceph-users] Getting a list of all defined monitors from a client

2013-07-23 Thread Guido Winkelmann
Hi, How can I get a list of all defined monitors in a ceph cluster from a client when using the C API? I need to store the monitors for available ceph clusters in a database, and I would like to build this software so that a) the administrator has to enter only one of the monitor addresses and
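
Setting the C API aside, the same information can be sanity-checked from the CLI while developing against the database (a sketch; assumes an admin keyring is available on the node):

    ceph mon dump        # prints the current monmap: every defined monitor and its address
    ceph quorum_status   # shows which of those monitors are currently in quorum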

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread peter
On 2013-07-23 17:19, Dan van der Ster wrote: On Tuesday, July 23, 2013 at 4:46 PM, pe...@2force.nl wrote: On 2013-07-22 18:20, Joao Eduardo Luis wrote: On 07/22/2013 04:59 PM, pe...@2force.nl wrote: Hi Joao, I have sent you the link to the monitor files. I stopped one other monitor to have

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Joao Eduardo Luis
On 07/23/2013 04:19 PM, Dan van der Ster wrote: On Tuesday, July 23, 2013 at 4:46 PM, pe...@2force.nl wrote: On 2013-07-22 18:20, Joao Eduardo Luis wrote: On 07/22/2013 04:59 PM, pe...@2force.nl wrote: Hi Joao, I have sent you the link to the monitor files. I stopped

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Sage Weil
On Tue, 23 Jul 2013, Sage Weil wrote: > On Tue, 23 Jul 2013, Stefan Priebe - Profihost AG wrote: > > I had the same issue, reported some days ago. > > Yeah, it's in the tracker as bug #5704 and we're working on it right now. > Thanks! Joao just identified the bug. There is a workaround in wip-cuttle

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Dan van der Ster
On Tuesday, July 23, 2013 at 4:46 PM, pe...@2force.nl wrote: > On 2013-07-22 18:20, Joao Eduardo Luis wrote: > > On 07/22/2013 04:59 PM, pe...@2force.nl wrote: > > > Hi Joao, > > > > > > I have sent you the link to the monitor files. I stopped one other > > > monitor to h

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Sage Weil
On Tue, 23 Jul 2013, Stefan Priebe - Profihost AG wrote: > I had the same issue, reported some days ago. Yeah, it's in the tracker as bug #5704 and we're working on it right now. Thanks! sage > > Stefan > > On 23.07.2013 14:11, Piotr Lorek wrote: > > Hi, > > > > I have the same problem as Pet

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread peter
On 2013-07-22 18:20, Joao Eduardo Luis wrote: On 07/22/2013 04:59 PM, pe...@2force.nl wrote: Hi Joao, I have sent you the link to the monitor files. I stopped one other monitor to have a consistent tarball but now it won't start, crashing with the same error message. I hope there is a trick to

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread Sebastien Han
Can you send your ceph.conf too? Is /etc/ceph/ceph.conf present? Is the key of the user volume present too? Sébastien Han, Cloud Engineer. "Always give 100%. Unless you're giving blood." Phone : +33 (0)1 49 70 99 72 – Mobile : +33 (0)6 52 84 44 70 Email : sebastien@enovance.com – Skype : han.sbastien Ad
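
A quick way to run those checks on the client host (a sketch; the exact client name for the volumes user varies per setup):

    ls /etc/ceph/ceph.conf /etc/ceph/*.keyring   # is the conf, plus a keyring for this client, on the host?
    ceph auth list                               # does the cluster have a key defined for the volumes user?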

[ceph-users] Mounting RBD or CephFS on Ceph-Node?

2013-07-23 Thread Oliver Schulz
Dear Ceph Experts, I remember reading that, at least in the past, it wasn't recommended to mount Ceph storage on a Ceph cluster node. Given a recent kernel (3.8/3.9) and sufficient CPU and memory resources on the nodes, would it now be safe to * Mount RBD or CephFS on a Ceph cluster node? * Run a

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Stefan Priebe - Profihost AG
I had the same issue, reported some days ago. Stefan On 23.07.2013 14:11, Piotr Lorek wrote: > Hi, > > I have the same problem as Peter. I updated ceph from 0.61.4-1raring > to 0.61.5-1raring and all monitors started with no problems, but after > reboot they all fail. > > Some logs: > > 201

Re: [ceph-users] Monitor is unable to start after reboot: OSDMonitor::update_from_paxos(bool*) FAILED assert(latest_bl.length() != 0

2013-07-23 Thread Piotr Lorek
Hi, I have the same problem as Peter. I updated ceph from 0.61.4-1raring to 0.61.5-1raring and all monitors started with no problems, but after a reboot they all fail. Some logs: 2013-07-23 12:30:19.242684 7fd78e3927c0 0 ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979),

Re: [ceph-users] SSD recommendations for OSD journals

2013-07-23 Thread Wido den Hollander
On 07/23/2013 10:00 AM, Chris Hoy Poy wrote: +1 for sTec, I'm using sTec ZeusRAM devices (expensive, small capacity - 8GB) but if write latency is important.. I haven't deployed them in a Ceph environment, but I've been using them for ZFS ZILs for a couple of years. Amazingly fast SSDs, but

Re: [ceph-users] SSD recommendations for OSD journals

2013-07-23 Thread Chris Hoy Poy
+1 for sTec, I'm using sTec ZeusRAM devices (expensive, small capacity - 8GB) but if write latency is important.. 4x OSDs (3TB 7200rpm SATA) to each ZeusRAM for our scenario. \\Chris - Original Message - From: "Charles 'Boyo" To: "Mark Nelson" Cc: ceph-users@lists.ceph.com Sent: Tue

Re: [ceph-users] SSD recommendations for OSD journals

2013-07-23 Thread Oliver Fuckner
Barring the cost, sTec solutions have proven reliable for me. Check out the s1122 with 1.6 TB capacity and 90PB write endurance: http://www.stec-inc.com/products/s1120-pcie-accelerator/ Sounds expensive, how much do these cards cost? The smallest/cheapest should be big enough as a ceph journ