On 20 January 2017 at 01:15, Shyam wrote:
>
>
> On 01/17/2017 11:40 AM, Piotr Misiak wrote:
>
>>
>> On 17 Jan 2017 at 17:10, Jeff Darcy wrote:
>>
>>>
>>> Do you think that it is wise to run the rebalance process manually on
every brick with the actual commit
Just did a restorecon and am able to download it now.
- Original Message -
> From: "Cedric Lemarchand"
> To: "gluster-users"
> Sent: Monday, January 23, 2017 1:37:03 PM
> Subject: [Gluster-users] nfs-ganesha rsa.pub download give 403
>
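The restorecon fix above points at a mislabelled SELinux context rather than file permissions. A minimal sketch of diagnosing and fixing it, assuming a hypothetical path for the exported rsa.pub (adjust to wherever nfs-ganesha/httpd serves the file):

```shell
# A 403 from the web server despite correct ownership/permissions is often
# a wrong SELinux label. ls -Z shows the current context; restorecon resets
# it to the default defined by the loaded policy.
ls -Z /var/www/html/rsa.pub          # inspect the current SELinux label
restorecon -v /var/www/html/rsa.pub  # relabel per the policy, verbosely
```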
On 24/01/2017 6:33 PM, Alessandro Briosi wrote:
I'm in the process of creating a 3-server cluster, and will use gluster as
shared storage between the 3.
Exactly what I run - my three gluster nodes are also VM servers (Proxmox
cluster). I have 2 switches and each server has 4 ethernet ports.
> On 24 Jan 2017, at 13:48, Lindsay Mathieson
> wrote:
>
> On 24/01/2017 10:23 PM, Alessandro Briosi wrote:
>> Ok so having 2 bonds, 1 attached to each switch, would work. Though I still
>> cannot figure out how to make gluster use both links (or at least one with
>> active/passive).
On 24/01/2017 10:23 PM, Alessandro Briosi wrote:
Ok, I also am going to use Proxmox. Any advice on how to configure the
bricks?
I plan to have a 2 node replica. Would appreciate you sharing your
full setup :-)
Three node replica - preferred to two, as quorum works best with an odd
number of nodes.
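A minimal sketch of creating such a replica-3 volume, assuming hypothetical hostnames and brick paths (with three replicas, a majority of bricks remains reachable after any single node failure, so quorum stays decidable):

```shell
# Hypothetical names -- a sketch, not a verified setup.
gluster volume create vmstore replica 3 \
  node1:/data/brick1/vmstore \
  node2:/data/brick1/vmstore \
  node3:/data/brick1/vmstore
gluster volume start vmstore
```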
On 24/01/2017 10:23 PM, Alessandro Briosi wrote:
Ok so having 2 bonds, 1 attached to each switch, would work. Though I
still cannot figure out how to make gluster use both links (or at least one
with active/passive).
Should I work on RRDNS and keepalived? Or use some bonding of a bond
within the 2 switches?
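For the keepalived option, a minimal VRRP sketch for a floating IP that clients mount, assuming hypothetical interface names and addresses (each gluster node runs the same config with a different priority):

```
vrrp_instance GLUSTER_VIP {
    state BACKUP
    interface bond0          # assumed bond/NIC name on the storage network
    virtual_router_id 51
    priority 100             # lower on the other nodes, e.g. 90, 80
    advert_int 1
    virtual_ipaddress {
        192.168.1.250/24     # hypothetical floating IP clients connect to
    }
}
```

This only moves the mount target between nodes; it does not aggregate link bandwidth the way bonding would.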
On 24/01/2017 13:53, Cedric Lemarchand wrote:
It would work with traditional LACP if both switches are manageable and in the
same stack. If switches are dumb (i.e. only L2 or not stackable), I think there
is a Linux bonding mode that can do the work, but well, I would stay away from
such setups.
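For reference, the switch-independent option is active-backup (bonding mode 1): one NIC per switch, no switch cooperation needed, and traffic fails over if the active link or its switch dies (at the cost of using only one link at a time). A Debian/Proxmox-style /etc/network/interfaces sketch, with assumed NIC names and addresses:

```
auto bond0
iface bond0 inet static
    address 192.168.1.11/24        # assumed storage-network address
    bond-slaves eno1 eno2          # assumed NIC names, one per switch
    bond-mode active-backup        # mode 1: no switch support required
    bond-miimon 100                # check link state every 100 ms
    bond-primary eno1
```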