Hi,
Perhaps not a great analogy. At least in the case of the UK …
Perhaps not, I don't know the UK system.
I was merely trying to illustrate the difference between the number of
mons the system is configured with (the ones eligible to vote), and the
number of mons actually alive and able to vote.
On 6 Sep 2013, at 13:46, Jens Kristian Søgaard wrote:
> You created 7 mons in ceph. This is like having a parliament with 7 members.
>
> Whenever you want to do something, you need to convince a majority of
> parliament to vote yes. A majority would then be 4 members voting yes.
>
> If two members …
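To put numbers on the analogy: quorum needs a strict majority of the
configured mons, i.e. floor(N/2) + 1. A quick arithmetic sketch (plain
shell, nothing ceph-specific):

  # majority needed for quorum, per configured mon count
  for n in 3 5 7; do
    echo "configured mons: $n -> majority needed: $(( n / 2 + 1 ))"
  done
  # 7 configured mons -> 4 must be up; with only 3 up, no quorum forms.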
On 06/09/2013 12:12, Nigel Williams wrote:
On 06/09/2013, at 7:49 PM, "Bernhard Glomm" wrote:
Can I introduce the cluster network later on, after the cluster is deployed and
has started working?
(by editing ceph.conf, pushing it to the cluster members and restarting the
daemons?)
Thanks Bernhard for asking this question, I have the same question.
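For reference, a minimal sketch of that procedure (the subnet and hostnames
are assumptions, not from this thread):

  # 1) Add the cluster network to the [global] section of ceph.conf:
  #      [global]
  #      cluster network = 192.168.1.0/24
  # 2) Push the file to all cluster members and restart the daemons:
  ceph-deploy --overwrite-conf config push mon1 mon2 osd1 osd2
  service ceph restart    # run on each node (sysvinit-era releases)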
Hi Bernhard,
I thought 4 out of seven wouldn't be good because it's not an odd number...
but I guess after bringing the cluster up with 4 MONs I could have removed
one of them to get an odd number (well, or added one)
Think of it like this:
You created 7 mons in ceph. This is like having a parliament with 7 members.
Thnx a lot for making this clear!
I thought 4 out of seven wouldn't be good because it's not an odd number...
but I guess after bringing the cluster up with 4 MONs I could have removed
one of them to get an odd number (well, or added one)
thnx again
Bernhard
On 06.09.2013 13:26:37, … wrote:
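Removing or adding a mon to land on an odd count, as Bernhard describes, is
short work; a hedged sketch, with the mon name and hostname assumed:

  ceph mon stat                 # see which mons exist and who is in quorum
  service ceph stop mon.g       # on the host running the mon to be removed
  ceph mon remove g             # drop it from the monmap (name assumed)
  # ...or grow the quorum instead:
  ceph-deploy mon create newmonhost    # hostname assumed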
Hi,
In order to reach a quorum after reboot, you need to have more than half
of your mons running.
With 7 MONs, do I have to have at least 5 MONs running?
No. 4 is more than half of 7, so 4 would be a majority and thus would be
able to form a quorum.
4 would be more than half …
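Two commands make the configured-versus-alive distinction visible (both are
standard ceph CLI calls):

  ceph mon stat        # one line: monmap epoch, all mons, current quorum
  ceph quorum_status   # JSON detail: quorum members, leader, monmap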
On 06/09/2013, at 7:49 PM, "Bernhard Glomm" wrote:
> Can I introduce the cluster network later on, after the cluster is deployed
> and has started working?
> (by editing ceph.conf, pushing it to the cluster members and restarting the
> daemons?)
Thanks Bernhard for asking this question, I have the same question.
thnx Jens
> > I have my testcluster consisting of two OSDs that also host MONs plus
> > five more MONs (MONs one to five).
> Are you saying that you have a total of 7 mons?
yepp
> > down the … at last, not the other MON though (since - surprise - they
> > are in this test scenario just virtual instances residing on some
> > ceph rbds)
> > > > And a second question regarding ceph-deploy:
> > > > How do I specify a second NIC/address to be used for intercluster
> > > > communication?
> > > You will not be able to do something like this with ceph-deploy. This
> > > sounds like a very specific (or a bit more advanced)
> > > configuration.
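A common workaround is to let ceph-deploy write the initial conf, edit it by
hand, and only then create the mons; a sketch with assumed hostnames and
subnets:

  ceph-deploy new mon1 mon2 mon3    # writes ceph.conf in the current dir
  # edit the generated ceph.conf and add, under [global]:
  #   public network  = 10.0.0.0/24
  #   cluster network = 192.168.1.0/24
  ceph-deploy mon create mon1 mon2 mon3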
On Thu, Sep 5, 2013 at 12:38 PM, Gregory Farnum wrote:
> On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza wrote:
>> On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
>> wrote:
>>>
>>> Hi all,
>>>
>>> as a ceph newbie I got another question that was probably solved long ago.
>>> I have my testcluster consisting of two OSDs that also host MONs plus
>>> five more MONs (MONs one to five).
On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza wrote:
> On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
> wrote:
>>
>> Hi all,
>>
>> as a ceph newbie I got another question that was probably solved long ago.
>> I have my testcluster consisting of two OSDs that also host MONs
>> plus five more MONs (MONs one to five).
>
On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm
wrote:
>
> Hi all,
>
> as a ceph newbie I got another question that was probably solved long ago.
> I have my testcluster consisting of two OSDs that also host MONs
> plus five more MONs (MONs one to five).
> Now I want to reboot all instances, simulating a power failure.
Hi Bernhard,
I have my testcluster consisting of two OSDs that also host MONs plus
five more MONs (MONs one to five).
Are you saying that you have a total of 7 mons?
down the … at last, not the other MON though (since - surprise - they
are in this test scenario just virtual instances residing on some
ceph rbds)
Hi all,
as a ceph newbie I got another question that was probably solved long ago.
I have my testcluster consisting of two OSDs that also host MONs
plus five more MONs (MONs one to five).
Now I want to reboot all instances, simulating a power failure.
So I shut down the extra MONs,
then shut down the first OSD/MON …
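A sketch of how that test could be driven and observed (hostnames assumed;
run from a node that has the admin keyring):

  for m in mon5 mon4 mon3; do
    ssh "$m" poweroff                # simulate power loss on one mon host
    ceph quorum_status | head -n 3   # 4 of 7 still up: quorum holds
  done
  # Taking a 4th mon down leaves 3 of 7: no majority, and ceph commands
  # will hang until enough mons come back.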