Got it going.
This helped: http://tracker.ceph.com/issues/5205
My ceph.conf has cluster and public addresses defined in global. I commented them out and mon.c started successfully.
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
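In case it helps anyone hitting the same assert, the change amounted to commenting out the address lines in [global]; a rough sketch (the addresses below are placeholders, not the real ones):
[global]
        # public addr = <public network IP>
        # cluster addr = <cluster network IP>
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx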
On Tue, Jun 25, 2013 at 02:24:35PM +0100, Joao Eduardo Luis wrote:
> (Re-adding the list for future reference)
>
> Wolfgang, from your log file:
>
> 2013-06-25 14:58:39.739392 7fa329698780 -1 common/config.cc: In
> function 'void md_config_t::set_val_or_die(const char*, const
> char*)' thread 7fa
Nope, same outcome.
[root@ceph3 mon]# ceph mon remove c
removed mon.c at 192.168.6.103:6789/0, there are now 2 monitors
[root@ceph3 mon]# mkdir tmp
[root@ceph3 mon]# ceph auth get mon. -o tmp/keyring
exported keyring for mon.
[root@ceph3 mon]# ceph mon getmap -o tmp/monmap
2013-06-26 13:51:26.640
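For reference, the rest of the usual re-add sequence from here (assuming the default mon data path and the mon.c address shown above) would be roughly:
ceph-mon -i c --mkfs --monmap tmp/monmap --keyring tmp/keyring
ceph mon add c 192.168.6.103:6789
service ceph start mon.c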
I've typically moved it off to a non-conflicting path in lieu of
deleting it outright, but either way should work. IIRC, I used something
like:
sudo mv /var/lib/ceph/mon/ceph-c /var/lib/ceph/mon/ceph-c-bak && sudo mkdir /var/lib/ceph/mon/ceph-c
- Mike
On 6/25/2013 11:08 PM, Darryl Bond wrote:
Thanks for your prompt response.
Given that my mon.c /var/lib/ceph/mon/ceph-c is currently populated,
should I delete its contents after removing the monitor and before
re-adding it?
Darryl
On 06/26/13 12:50, Mike Dawson wrote:
Darryl,
I've seen this issue a few times recently. I believe Joao
FYI. I get the same error with an osd too.
-11> 2013-06-25 16:00:37.604042 7f0751f1b700 1 -- 172.18.11.32:6802/1594
<== osd.1 172.18.11.30:0/10964 5300 osd_ping(ping e2200 stamp 2013-06-25
16:00:37.588367) v2 47+0+0 (3462129666 0 0) 0x4a0ce00 con 0x4a094a0
-10> 2013-06-25 16:00
Darryl,
I've seen this issue a few times recently. I believe Joao was looking
into it at one point, but I don't know if it has been resolved (Any news
Joao?). Others have run into it too. Look closely at:
http://tracker.ceph.com/issues/4999
http://irclogs.ceph.widodh.nl/index.php?date=2013-06
Looks like the same error I reported yesterday. Sage is looking at it?
-- Original --
From: "Darryl Bond";
Date: Wed, Jun 26, 2013 10:34 AM
To: "ceph-users@lists.ceph.com";
Subject: [ceph-users] One monitor won't start after upgrade from 6.1.3 to 6.1.4
Upgrading a cluster from 6.1.3 to 6.1.4 with 3 monitors. Cluster had
been successfully upgraded from bobtail to cuttlefish and then from
6.1.2 to 6.1.3. There have been no changes to ceph.conf.
Node mon.a upgrade, a,b,c monitors OK after upgrade
Node mon.b upgrade a,b monitors OK after upgrade (
On Mon, 17 Jun 2013, Sage Weil wrote:
> Hi Florian,
>
> If you can trigger this with logs, we're very eager to see what they say
> about this! The http://tracker.ceph.com/issues/5336 bug is open to track
> this issue.
Downgrading this bug until we hear back.
sage
>
> Thanks!
> sage
>
>
>
Some guesses are inline.
On Tue, Jun 25, 2013 at 4:06 PM, Wido den Hollander wrote:
> Hi,
>
> I'm not sure what happened, but on a Ceph cluster I noticed that the
> monitors (running 0.61) started filling up the disks, so they were restarted
> with:
>
> mon compact on start = true
>
> After a res
Hi,
I'm not sure what happened, but on a Ceph cluster I noticed that the
monitors (running 0.61) started filling up the disks, so they were
restarted with:
mon compact on start = true
After a restart the osdmap was empty, it showed:
osdmap e2: 0 osds: 0 up, 0 in
pgmap v624077: 15296
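For reference, that option normally goes in the [mon] section of ceph.conf; a minimal sketch:
[mon]
        mon compact on start = true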
Our next development release v0.65 is out, with a few big changes. First
and foremost, this release includes a complete revamp of the architecture
for the command line interface in order to lay the groundwork for our
ongoing REST management API work. The 'ceph' command line tool is now a
thin p
On Tue, 25 Jun 2013, joachim.t...@gad.de wrote:
> hi folks,
>
> I have a question concerning data replication using the crushmap.
>
> Is it possible to write a crushmap to achieve a 2 times 2 replication in the
> way that I have a pool replication in one data center and an overall replication
> of this
Hi,
at the moment I have a little problem when it comes to a remapping of
the Ceph file system. In VMs with large disks (4 TB each) the operating
system freezes. The freeze is always accompanied by the message
"[WRN] 1 slow requests". At the moment, bobtail is installed.
Does any
Sorry, I forgot to mention ceph osd set noout. Sébastien Han wrote a blog post
about it.
http://www.sebastien-han.fr/blog/2012/08/17/ceph-storage-node-maintenance/
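For completeness, the usual pattern around planned maintenance is roughly the following (osd.0 stands in for each OSD on the node being serviced):
ceph osd set noout
service ceph stop osd.0
# ... do the maintenance / reboot ...
service ceph start osd.0
ceph osd unset noout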
Dave Spano
Optogenics
- Original Message -
From: "Michael Lowe"
To: "Nigel Williams"
Cc: ceph-users@lists.ceph.c
(Re-adding the list for future reference)
Wolfgang, from your log file:
2013-06-25 14:58:39.739392 7fa329698780 -1 common/config.cc: In function
'void md_config_t::set_val_or_die(const char*, const char*)' thread
7fa329698780 time 2013-06-25 14:58:39.738501
common/config.cc: 621: FAILED asser
On 05/30/2013 11:06 PM, Sage Weil wrote:
> Hi everyone,
Hi again,
> I wanted to mention just a few things on this thread.
Thank you for taking the time.
> The first is obvious: we are extremely concerned about stability.
> However, Ceph is a big project with a wide range of use cases, and i
hi folks,
I have a question concerning data replication using the crushmap.
Is it possible to write a crushmap to achieve a 2 times 2 replication, in
the way that I have a pool replication in one data center and an overall
replication of this in the backup datacenter?
Best regards
Joachim
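For illustration only, one way such a 2x2 placement is often expressed is a rule that emits twice, once per data center. A sketch, assuming datacenter buckets named dc1 and dc2 already exist in the CRUSH hierarchy and the pool size is set to 4 (the ruleset number is arbitrary):
rule two_dc_2x2 {
        ruleset 3
        type replicated
        min_size 4
        max_size 4
        step take dc1
        step chooseleaf firstn 2 type host
        step emit
        step take dc2
        step chooseleaf firstn 2 type host
        step emit
}
The edited map would then be recompiled with crushtool and injected with 'ceph osd setcrushmap -i <file>'.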
On 06/25/2013 10:52 AM, Wolfgang Hennerbichler wrote:
On 06/25/2013 11:45 AM, Joao Eduardo Luis wrote:
On mon a I see:
# ceph --admin-daemon /run/ceph/ceph-mon.a.asok mon_status
{ "name": "a",
"rank": 0,
"state": "probing",
"election_epoch": 1,
"quorum": [],
"outside_quo
We have a working Ceph cluster with rados version 0.61.4, and we are trying
to use some example applications [1,2] to test direct upload to rados using
CORS. With a patched boto [3], we are able to get and set XML CORS on a
bucket; however, using one of the apps, Chrome gives us an access
control allow
On 06/25/2013 11:45 AM, Joao Eduardo Luis wrote:
>> On mon a I see:
>>
>> # ceph --admin-daemon /run/ceph/ceph-mon.a.asok mon_status
>> { "name": "a",
>>"rank": 0,
>>"state": "probing",
>>"election_epoch": 1,
>>"quorum": [],
>>"outside_quorum": [
>> "a"],
>>"ext
On 06/25/2013 09:06 AM, Wolfgang Hennerbichler wrote:
On 06/24/2013 07:50 PM, Gregory Farnum wrote:
On Mon, Jun 24, 2013 at 10:36 AM, Jeppesen, Nelson
wrote:
What do you mean ‘bring up the second monitor with enough information’?
Here are the basic steps I took. It fails on step 4. If I s
On 06/24/2013 07:50 PM, Gregory Farnum wrote:
> On Mon, Jun 24, 2013 at 10:36 AM, Jeppesen, Nelson
> wrote:
>> What do you mean ‘bring up the second monitor with enough information’?
>>
>>
>>
>> Here are the basic steps I took. It fails on step 4. If I skip step 4, I get
>> a number out of range
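For comparison, the manual add-a-monitor sequence from the docs is roughly the following (<id> and <ip:port> are placeholders, and this numbering may not line up with the steps referred to above):
mkdir -p /var/lib/ceph/mon/ceph-<id>
ceph auth get mon. -o /tmp/keyring
ceph mon getmap -o /tmp/monmap
ceph-mon -i <id> --mkfs --monmap /tmp/monmap --keyring /tmp/keyring
ceph mon add <id> <ip:port>
ceph-mon -i <id> --public-addr <ip:port>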