Are all 3 NICs in the same bond together?
I don't think bonding NICs of various speeds is a great idea.
How are you separating the Ceph traffic onto the individual NICs?
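For reference, the usual way to put Ceph traffic on dedicated NICs is the public_network / cluster_network split in ceph.conf; the subnets below are placeholders, not values from the poster's cluster:

```
# ceph.conf (illustrative subnets)
[global]
    public_network  = 10.0.1.0/24   # client and monitor traffic on one NIC
    cluster_network = 10.0.2.0/24   # OSD replication/backfill on another
```

With that split, monitor heartbeats ride the public network, so a backup job saturating the cluster network is less likely to cost you mon quorum.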
On Fri, Jul 6, 2018 at 11:10 AM Marcus Haarmann wrote:
Hi experts,
we have set up a Proxmox cluster in a minimal environment for some testing.
We have put some VMs on the cluster and encountered mon quorum problems
while backups are executed (possibly saturating either hard disk I/O or
network I/O).
Setup:
4 machines with Proxmox 5.2-2 (Ceph 12.
On 04/05/2013 12:38 PM, Jeff Anderson-Lee wrote:
> The point is I believe that you don't need a 3rd replica of everything,
> just a 3rd MON running somewhere else.
Bear in mind that you still need a physical machine somewhere in that
"somewhere else".
--
Dimitri Maziuk
Programmer/sysadmin
BioMa
On 04/05/2013 10:12 AM, Wido den Hollander wrote:
> Think about it this way. You have two racks and the network connection
> between them fails. If both racks keep operating because they can still
> reach that single monitor in their rack you will end up with data
> inconsistency.
Yes. In DRBD la
If, in the case above, you have a monitor per room (a, b) and one in a
third location outside of either (c), you would have the ability to
take down the entirety of either room and still maintain monitor
quorum. (a,c or b,c) The cluster would continue to work.
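The placement Greg describes can be checked with a short sketch; the room and monitor names below are illustrative, not taken from a real cluster:

```python
def has_quorum(up_mons, total_mons):
    """Paxos-style majority: strictly more than half the monitors must be up."""
    return up_mons > total_mons // 2

# One mon per room, plus one at a third site outside either room.
placement = {"mon.a": "room_a", "mon.b": "room_b", "mon.c": "third_site"}

for failed in ("room_a", "room_b"):
    survivors = [m for m, room in placement.items() if room != failed]
    # Losing either entire room leaves 2 of 3 monitors, which is a majority.
    print(f"{failed} down -> {survivors}, quorum: {has_quorum(len(survivors), len(placement))}")
```

Either room can go dark and the two remaining monitors (a,c or b,c) still form a majority, so the cluster keeps running.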
On 4/5/2013 7:57 AM, Wido den Hollander wrote:
You always need a majority of your monitors to be up. In this case you
lose 66% of your monitors, so mon.b can't get a majority.
With 3 monitors you need at least 2 to be up to have your cluster working.
That's kinda useless, isn't it? I'd've th
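Wido's arithmetic generalizes: with n monitors the quorum size is n//2 + 1, so the cluster tolerates the loss of n minus that many. A minimal sketch:

```python
def quorum_size(n_mons):
    """Smallest majority of n monitors (Paxos needs strictly more than half)."""
    return n_mons // 2 + 1

# With 3 monitors you need 2 up; losing 2 (66%) leaves 1, below quorum.
for n in (3, 5):
    print(f"{n} mons: need {quorum_size(n)} up, tolerate {n - quorum_size(n)} down")
```

This is also why even monitor counts buy nothing: 4 monitors need 3 up, tolerating one failure, the same as 3 monitors.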
Hello to all,
I've a Ceph cluster composed of 4 nodes in 2 different rooms.
room A : osd.1, osd.3, mon.a, mon.c
room B : osd.2, osd.4, mon.b
My crush rule is made to replicate across rooms.
So normally, if I shut down the whole room A, my cluster should stay usable.
... but, in fact, no.
When i
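For context, a CRUSH rule that places one replica per room typically looks like the sketch below (rule name and numbers are illustrative; exact syntax varies by Ceph release). Note that such a rule only governs OSD data placement; it does nothing for monitor quorum, which is the actual failure here:

```
rule replicated_across_rooms {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type room
    step emit
}
```

The data survives losing room A, but mon.a and mon.c going down with it leaves only 1 of 3 monitors, so quorum is lost regardless of the CRUSH rule.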