[ceph-users] Error DATE 1970

2015-04-01 Thread Jimmy Goffaux
English version: Hello, I found a strange behavior in Ceph. This behavior is visible on buckets (RGW) and pools (RBD). Pools: root@:~# qemu-img info rbd:pool/kibana2 image: rbd:pool/kibana2 file format: raw virtual size: 30G (32212254720 bytes) disk size: unavailable Snapshot list: ID
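(For reference: a 1970 date is simply what a zero Unix timestamp renders as, so the snapshot timestamps are most likely unset rather than wrong. A minimal sketch for comparing what RBD itself reports for the same image, using the pool/image name from the post:)
  rbd info pool/kibana2       # image format, size and features as RBD sees them
  rbd snap ls pool/kibana2    # snapshot IDs, names and sizes on the RBD side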

[ceph-users] Spurious MON re-elections

2015-04-01 Thread Sylvain Munaut
Hi, For some unknown reason, periodically, the master is kicked out and another one becomes leader. Then a couple of seconds later, the original master calls for re-election and becomes leader again. This also seems to cause some load even after the original master is back. Here's a couple of
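(A hedged sketch of commands that can help pin down when and why elections happen; the monitor name is a placeholder:)
  ceph quorum_status --format json-pretty    # current leader, quorum members, election epoch
  ceph daemon mon.<id> mon_status            # per-monitor view via the admin socket
  ceph health detail                         # clock skew between monitors is a common election trigger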

Re: [ceph-users] Establishing the Ceph Board

2015-04-01 Thread Milosz Tanski
Patrick, I'm not sure where you are at with forming this. I would love to be considered. Previously I've contributed commits (mostly CephFS kernel side) and I'd love to contribute more there. Usually the hardest part about these kinds of things is finding the time to participate. Since Ceph is a

[ceph-users] Cores/Memory/GHz recommendation for SSD based OSD servers

2015-04-01 Thread Sreenath BH
Hi all, we are considering building all SSD OSD servers for RBD pool. Couple of questions: Does Ceph have any recommendation for number of cores/memory/GHz per SSD drive, similar to what is usually followed for hard drives (1 core / 1 GB RAM / 1 GHz)? thanks, Sreenath

Re: [ceph-users] Cascading Failure of OSDs

2015-04-01 Thread Quentin Hartman
Right now we're just scraping the output of ifconfig: ifconfig p2p1 | grep -e 'RX\|TX' | grep packets | awk '{print $3}' It's clunky, but it works. I'm sure there's a cleaner way, but this was expedient. QH On Tue, Mar 31, 2015 at 5:05 PM, Francois Lafont flafdiv...@free.fr wrote: Hi,
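(For anyone wanting something less fragile than parsing ifconfig, a sketch that reads the kernel counters directly; the interface name p2p1 is taken from the post:)
  cat /sys/class/net/p2p1/statistics/rx_packets
  cat /sys/class/net/p2p1/statistics/tx_packets
  cat /sys/class/net/p2p1/statistics/rx_dropped    # drop counters, no awk needed
  ip -s link show p2p1                             # or everything at once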

Re: [ceph-users] Ceph and Openstack

2015-04-01 Thread Quentin Hartman
I am coincidentally going through the same process right now. The best reference I've found is this: http://ceph.com/docs/master/rbd/rbd-openstack/ When I did Firefly / Icehouse, this (seemingly) same guide Just Worked(tm), but now with Giant / Juno I'm running into similar trouble to that
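(A hedged sketch for verifying the service cephx keys from that guide actually work; the client and pool names follow the guide's conventions and may differ in your setup:)
  rbd -p images ls --id glance --keyring /etc/ceph/ceph.client.glance.keyring
  rbd -p volumes ls --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring
  ceph auth get client.glance      # confirm the caps match what the guide describes
  ceph auth get client.cinder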

Re: [ceph-users] Production Ceph :: PG data lost : Cluster PG incomplete, inactive, unclean

2015-04-01 Thread Craig Lewis
Both of those say they want to talk to osd.115. I see from the recovery_state, past_intervals that you have flapping OSDs. osd.140 will drop out, then come back. osd.115 will drop out, then come back. osd.80 will drop out, then come back. So really, you need to solve the OSD flapping. That
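(A sketch of the usual first steps against flapping while the root cause is investigated; the PG ID is a placeholder:)
  ceph osd set noout                            # stop CRUSH from rebalancing every time an OSD drops
  ceph osd set nodown                           # optional: mask the flapping itself while debugging
  ceph pg <pgid> query                          # inspect recovery_state / past_intervals for the stuck PG
  ceph osd unset noout; ceph osd unset nodown   # clear the flags once the flapping is fixed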

Re: [ceph-users] Ceph and Openstack

2015-04-01 Thread Erik McCormick
Can you both set Cinder and / or Glance logging to debug and provide some logs? There was an issue with the first Juno release of Glance in some vendor packages, so make sure you're fully updated to 2014.2.2 On Apr 1, 2015 7:12 PM, Quentin Hartman qhart...@direwolfdigital.com wrote: I am
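(A minimal sketch of the checks suggested above, assuming stock Ubuntu package names and log paths:)
  dpkg -l | grep -E 'glance|cinder'    # confirm you are on 2014.2.2
  # set debug = True under [DEFAULT] in /etc/cinder/cinder.conf and /etc/glance/glance-api.conf,
  # restart the services, then watch the logs:
  tail -f /var/log/glance/api.log /var/log/cinder/volume.log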

Re: [ceph-users] Production Ceph :: PG data lost : Cluster PG incomplete, inactive, unclean

2015-04-01 Thread Somnath Roy
Not sure whether it is relevant to your setup or not, but we saw OSDs flapping while rebalancing was going on with ~150 TB of data in a 6-node cluster. During root-causing we saw continuous dropping of packets in dmesg, and maybe because of that the OSD heartbeat responses were lost. As a
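(A sketch of checks along the same lines; the interface name is a placeholder and the heartbeat grace value is only an example:)
  dmesg | grep -i drop                                   # the dropped-packet messages mentioned above
  ethtool -S eth0 | grep -i -e drop -e discard           # NIC-level drop counters
  netstat -s | grep -i -e drop -e retrans                # protocol-level drops and retransmits
  ceph tell osd.* injectargs '--osd-heartbeat-grace 60'  # temporary: tolerate slower heartbeat replies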

Re: [ceph-users] Calamari Questions

2015-04-01 Thread Bruce McFarland
Quentin, I got the config page to come up by exiting Calamari, deleting the salt keys on the calamari master 'salt-key -D', then restarting Calamari on the master and accepting the salt keys on the master 'salt-key -A' after doing salt-minion and diamond service restart on the ceph nodes. Once
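(Roughly the same sequence as shell, assuming the stock Calamari/salt service names:)
  # on the Calamari master
  salt-key -D                   # delete all existing minion keys
  # on each Ceph node
  service salt-minion restart
  service diamond restart
  # back on the master
  salt-key -L                   # the nodes should reappear as unaccepted keys
  salt-key -A                   # accept them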

Re: [ceph-users] Cores/Memory/GHz recommendation for SSD based OSD servers

2015-04-01 Thread Christian Balzer
Hello, On Wed, 1 Apr 2015 18:40:10 +0530 Sreenath BH wrote: Hi all, we are considering building all SSD OSD servers for RBD pool. I'd advise you to spend significant time reading the various threads in this ML about SSD based pools. Both about the current shortcomings and limitations of

Re: [ceph-users] Re: One of three monitors can not be started

2015-04-01 Thread 张皓宇
I checked the cluster state; it has recovered to HEALTH_OK. I don't know why. Yesterday at 09:02 I started mon.computer06, but it could not be started; the log is in attachment 0902. At 16:38 I started mon.computer06 again, and it also got stuck with these processes: /usr/bin/ceph-mon -i
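(When a monitor hangs like this, a hedged first step is to run it in the foreground with extra logging to see where it stops; the monitor ID is taken from the post and the debug level is only an example:)
  ceph-mon -i computer06 -d --debug-mon 10    # foreground, logs to stderr
  ceph daemon mon.computer06 mon_status       # only answers once the daemon is actually up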

[ceph-users] Calamari Questions

2015-04-01 Thread Bruce McFarland
I've built the Calamari client, server, and diamond packages from source for trusty and centos and installed them on the trusty master. Installed diamond and salt packages on the storage nodes. I can connect to the calamari master, accept salt keys from the ceph nodes, but then Calamari reports 3

Re: [ceph-users] Production Ceph :: PG data lost : Cluster PG incomplete, inactive, unclean

2015-04-01 Thread Karan Singh
Any pointers to fix the incomplete PGs would be appreciated. I tried the following with no success: pg scrub, pg deep-scrub, pg repair, osd out / down / rm / in, osd lost. # ceph -s cluster 2bd3283d-67ef-4316-8b7e-d8f4747eae33 health HEALTH_WARN 7 pgs down; 20 pgs incomplete; 1 pgs recovering;
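(The commands above in their full form, for anyone following along; PG and OSD IDs are placeholders:)
  ceph health detail | grep incomplete            # list the incomplete PGs
  ceph pg <pgid> query                            # look at recovery_state and past_intervals
  ceph pg scrub <pgid>
  ceph pg deep-scrub <pgid>
  ceph pg repair <pgid>
  ceph osd lost <osd-id> --yes-i-really-mean-it   # last resort, accepts data loss on that OSD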

Re: [ceph-users] Radosgw authorization failed

2015-04-01 Thread Neville
On 31 Mar 2015, at 11:38, Neville neville.tay...@hotmail.co.uk wrote: Date: Mon, 30 Mar 2015 12:17:48 -0400 From: yeh...@redhat.com To: neville.tay...@hotmail.co.uk CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Radosgw authorization failed - Original

Re: [ceph-users] Radosgw authorization failed

2015-04-01 Thread Yehuda Sadeh-Weinraub
- Original Message - From: Neville neville.tay...@hotmail.co.uk To: Yehuda Sadeh-Weinraub yeh...@redhat.com Cc: ceph-users@lists.ceph.com Sent: Wednesday, April 1, 2015 11:45:09 AM Subject: Re: [ceph-users] Radosgw authorization failed On 31 Mar 2015, at 11:38, Neville

[ceph-users] Ceph and Openstack

2015-04-01 Thread Iain Geddes
All, Apologies for my ignorance but I don't seem to be able to search an archive. I've spent a lot of time trying but am having difficulty in integrating Ceph (Giant) into Openstack (Juno). I don't appear to be recording any errors anywhere, but simply don't seem to be writing to the cluster if
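(A hedged sketch for checking whether anything is actually landing in the cluster; the pool names follow the upstream rbd-openstack guide and may differ in your deployment:)
  rados df           # per-pool object counts; re-run after an upload to see if anything changed
  rbd -p images ls   # images Glance has actually written
  rbd -p volumes ls  # volumes Cinder has actually written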

Re: [ceph-users] Spurious MON re-elections

2015-04-01 Thread Gregory Farnum
On Wed, Apr 1, 2015 at 5:03 AM, Sylvain Munaut s.mun...@whatever-company.com wrote: Hi, For some unknown reason, periodically, the master is kicked out and another one becomes leader. Then a couple of seconds later, the original master calls for re-election and becomes leader again. This

Re: [ceph-users] Calamari Questions

2015-04-01 Thread Quentin Hartman
You should have a config page in calamari UI where you can accept osd nodes into the cluster as Calamari sees it. If you skipped the little first-setup window like I did, it's kind of a pain to find. QH On Wed, Apr 1, 2015 at 12:34 PM, Bruce McFarland bruce.mcfarl...@taec.toshiba.com wrote:

Re: [ceph-users] Establishing the Ceph Board

2015-04-01 Thread Oaters
Hey Milosz, I have to ask, did you mean to say that or was it a spell check fail? I almost choked on my coffee :-) O On 2 April 2015 at 00:05, Milosz Tanski mil...@adfin.com wrote: Patrick, I'm not sure where you are at with forming this. I would love to be considered. Previously I've