Hi,
Thanks for your advice. Will try it out.
Best Regards,
/ST Wong
From: Maged Mokhtar [mailto:mmokh...@petasan.org]
Sent: Wednesday, February 14, 2018 4:20 PM
To: ST Wong (ITSC)
Cc: Luis Periquito; Kai Wagner; Ceph Users
Subject: Re: [ceph-users] Newbie question: stretch ceph cluster
Hi,

Thanks for your advice.

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Luis Periquito
Sent: Friday, February 09, 2018 11:34 PM
To: Kai Wagner
Cc: Ceph Users
Subject: Re: [ceph-users] Newbie question: stretch ceph cluster
Hi,

Thanks a lot.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Kai Wagner
Sent: Friday, February 09, 2018 11:00 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Newbie question: stretch ceph cluster
On Fri, Feb 9, 2018 at 2:59 PM, Kai Wagner wrote:
> Hi and welcome,
> …
Hi and welcome,

On 09.02.2018 15:46, ST Wong (ITSC) wrote:
> …
Hi, I'm new to CEPH and got a task to set up CEPH with a kind of DR feature.
We have two 10Gb-connected data centers on the same campus. I wonder if it's
possible to set up a CEPH cluster with the following components in each data
center:

3 x mon + mds + mgr
3 x OSD (replication factor = 2, between data centers)
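For what it's worth, the "replicated between data centers" part is usually expressed in the CRUSH map rather than in the pool size alone. A minimal sketch, assuming the hosts have been grouped under two `datacenter` buckets (the rule name and id here are made up):

```
rule replicated_across_dcs {
    id 1
    type replicated
    step take default
    step chooseleaf firstn 0 type datacenter
    step emit
}
```

With pool size 2 this places one copy per data center. Note that with 3 mons per site (6 total), neither site alone is a majority, so losing a whole site also loses the monitor quorum unless a tie-breaker mon runs at a third location.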
From: Scottix
Sent: Wednesday, October 02, 2013 10:37 AM
To: Andy Paluch
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Newbie question
I actually am looking for a similar answer. If 1 OSD = 1 HDD, in Dumpling
it will relocate the data for me after the timeout, which is great. If I
just want to replace the OSD with an unformatted new HDD, what is the
procedure?
One method that has worked for me is to remove it from the crush map.
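The crush-map route mentioned here is roughly the following sequence — a sketch only, with a made-up OSD id (`osd.5`) and a placeholder device, not a verbatim procedure:

```
# mark the failed OSD out, then stop its daemon on the OSD host
ceph osd out osd.5

# remove it from the CRUSH map, delete its auth key, deregister it
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5

# then prepare the replacement disk as a new OSD, e.g. with
# ceph-deploy osd prepare <host>:<new-device>
```

Once the new OSD comes up and in, CRUSH rebalances data onto it automatically.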
What happens when a drive goes bad in ceph and has to be replaced (at the
physical level)? In the RAID world you pop out the bad disk, stick a new one
in, and the controller takes care of getting it back into the system. From
what I've been reading so far, it's probably going to be a mess to do this …
Hi,

> Perhaps not a great analogy. At least in the case of the UK …

Perhaps not, I don't know the UK system.

I was merely trying to illustrate the difference between the number of
mons that the system is configured with (the ones eligible to vote), and
the number of mons actually alive and able to vote.
On 6 Sep 2013, at 13:46, Jens Kristian Søgaard wrote:
> You created 7 mons in ceph. This is like having a parliament with 7 members.
>
> Whenever you want to do something, you need to convince a majority of
> parliament to vote yes. A majority would then be 4 members voting yes.
>
> If two mem…
On 06/09/2013 12:12, Nigel Williams wrote:
> …
Hi Bernhard,

> I thought 4 out of seven wouldn't be good because it's not an odd number...
> but I guess after I would have brought up the cluster with 4 MONs I could
> have removed one of the MONs to reach that (well, or add one)

Think of it like this:

You created 7 mons in ceph. This is like having a parliament with 7 members.
Thnx a lot for making this clear!
I thought 4 out of seven wouldn't be good because it's not an odd number...
but I guess after I would have brought up the cluster with 4 MONs I could
have removed one of the MONs to reach that (well, or add one)
thnx again
Bernhard
On 06.09.2013 13:26:37, … wrote:
Hi,

> > In order to reach a quorum after reboot, you need to have more than half
> > of your mons running.
>
> with 7 MONs I have to have at least 5 MONs running?

No. 4 is more than half of 7, so 4 would be a majority and thus would be
able to form a quorum. 4 would be more than the half …
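The arithmetic being described is just "strictly more than half", i.e. floor(n/2) + 1 — a quick sketch:

```shell
# quorum needs strictly more than half of the configured mons
mons=7
majority=$(( mons / 2 + 1 ))
echo "With ${mons} mons configured, quorum needs ${majority} of them alive."
# → With 7 mons configured, quorum needs 4 of them alive.
```

The same formula shows why even counts buy nothing: 6 mons also need 4 alive to form a quorum.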
On 06/09/2013, at 7:49 PM, "Bernhard Glomm" wrote:
> Can I introduce the cluster network later on, after the cluster is deployed
> and started working?
> (by editing ceph.conf, push it to the cluster members and restart the
> daemons?)

Thanks Bernhard for asking this question, I have the same question.
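For reference, the cluster network is just two settings in the `[global]` section of ceph.conf — the subnets below are placeholders:

```
[global]
    public network  = 192.168.1.0/24
    cluster network = 192.168.2.0/24
```

After pushing the updated file to the cluster members, the OSD daemons need a restart before they start using the cluster network for replication and recovery traffic; the mons and clients stay on the public network.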
thnx Jens

> > I have my testcluster consisting two OSDs that also host MONs plus
> > one to five MONs.
> Are you saying that you have a total of 7 mons?

yepp

> > down the at last, not the other MON though (since - surprise - they
> > are in this test szenario just virtual instances residing on some
> > ceph rbds)
> > > > And a second question regarding ceph-deploy:
> > > > How do I specify a second NIC/address to be used as the intercluster
> > > > communication?
> > > You will not be able to do something like this with ceph-deploy. This
> > > sounds like a very specific (or a bit more advanced) configuration.
On Thu, Sep 5, 2013 at 12:38 PM, Gregory Farnum wrote:
> On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza wrote:
>> …
On Thu, Sep 5, 2013 at 9:31 AM, Alfredo Deza wrote:
> On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm wrote:
>> …
On Thu, Sep 5, 2013 at 11:42 AM, Bernhard Glomm wrote:
> …
Hi Bernhard,

> I have my testcluster consisting two OSDs that also host MONs plus
> one to five MONs.

Are you saying that you have a total of 7 mons?

> down the at last, not the other MON though (since - surprise - they
> are in this test szenario just virtual instances residing on some
> ceph rbds)
Hi all,

as a ceph newbie I got another question that is probably solved long ago.
I have my testcluster consisting of two OSDs that also host MONs,
plus one to five MONs.
Now I want to reboot all instances, simulating a power failure.
So I shut down the extra MONs,
then shut down the first OSD/MON …
On 04/01/2013 06:07 AM, Papaspyrou, Alexander wrote:
> …
Folks,

we are in the process of setting up a ceph cluster with about 40 OSDs
spread over 25 or so machines within our hosting provider's infrastructure.

Unfortunately, we have certain limitations from the provider side that we
cannot really overcome:

1- We only have one public network, no …