al to vet this process with your Red Hat support
> channel. It is difficult to provide authoritative advice on any downstream
> product based on upstream GlusterFS in this forum.
>
> Thanks,
> Vijay
>
> On Tue, Jul 10, 2018 at 10:42 PM Colin Coe wrote:
>
>> Hi all
Any thoughts on this process? Anything I'm missing or could do in a better
way?
Thanks
On Wed, Jul 4, 2018 at 8:11 AM Colin Coe wrote:
> Hi all
>
> We've been running community supported gluster for a few years and now
> we've bought support subscriptions for RHGS.
>
> We currently have a 3 node system (2 replicas plus quorum) in production
> hosting several volumes with a TB or so of data.
>
> I've logged a support ticket requesting the best
>>> 2) mkdir
>>>
>>> [Table residue: mkdir operation counts measured at latencies of 5ms,
>>> 10ms, 20ms, 50ms, 100ms and 200ms; the values did not survive
>>> extraction.]
Hi all
I've googled but can't find an answer to my question.
I have two data centers. Currently, I have a replica volume (count of 2 plus
an arbiter) in one data center, but it is used by both.
I want to change this to be a distributed replica across the two data
centers.
There is a 20Mbps pipe and approx
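For what it's worth, on a replica volume this conversion is normally done by adding a second replica set with `gluster volume add-brick`; a minimal sketch, assuming a volume named gv0 and hypothetical host and brick names:

```shell
# Add a second replica set (two data bricks plus an arbiter) hosted in
# the second data center; every name below is hypothetical.
gluster volume add-brick gv0 \
    dc2-srv1:/bricks/gv0/brick \
    dc2-srv2:/bricks/gv0/brick \
    dc2-arb:/bricks/gv0/arbiter

# Spread the existing files across both replica sets.
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
```

Bear in mind that AFR replication is synchronous and client-driven, so once subvolumes are split across sites, any client writing to the remote subvolume pays the WAN round trip on every write; over a 20Mbps inter-DC link, asynchronous geo-replication is usually suggested instead.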
Hi Kevin
Thanks for this. I did some more testing and found that adding a third,
brick-less node didn't really help. I think I need to upgrade to 3.8.x to
get this functionality.
I'll upgrade my test gluster environment and try again.
Thanks
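If it helps anyone searching the archives: from GlusterFS 3.8 onward, an existing replica 2 volume can be given a quorum-holding arbiter with a single add-brick; a sketch, with hypothetical volume and host names:

```shell
# Convert a replica-2 volume to two data copies plus one arbiter
# (needs GlusterFS >= 3.8); "gv0" and "arb-node" are hypothetical.
gluster volume add-brick gv0 replica 3 arbiter 1 \
    arb-node:/bricks/gv0/arbiter

# "Number of Bricks" should now read: 1 x (2 + 1) = 3
gluster volume info gv0
```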
On Fri, Jan 6, 2017 at 6:21 AM, Kevin Lemonnier wrote:
Ahh, that makes sense.
Can I add a quorum only node to V3.7.18?
Thanks
CC
On 5 Jan. 2017 4:02 pm, "Kevin Lemonnier" wrote:
> > I've configured two test gluster servers (RHEL7) running glusterfs
> 3.7.18.
> > [...]
> > Any ideas what I'm doing wrong?
>
> I'd say you need
Hi all
As the subject states, I'm doing glusterfs native client testing.
I've configured two test gluster servers (RHEL7) running glusterfs 3.7.18.
My test client is RHEL5.11 with the glusterfs-fuse RPM 3.7.18 installed.
The client has the following in /etc/fstab:
devfil01:/gv0 /share
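The fstab line above is cut off in the archive; for comparison, a typical glusterfs-fuse entry has roughly this shape (the mount options and the second server name are illustrative, not taken from the original message):

```shell
# Illustrative /etc/fstab entry; devfil02 is a hypothetical second
# server used only as a fallback for fetching the volfile:
#
#   devfil01:/gv0  /share  glusterfs  defaults,_netdev,backupvolfile-server=devfil02  0 0

# With that entry in place, the volume mounts with:
mount /share
# which is equivalent to the manual form:
#   mount -t glusterfs devfil01:/gv0 /share
```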
Hi all
I've got a bunch of old HP (formerly LeftHand) P4500 7.2TB (12 * 600GB)
units that are out of support and no longer being used. I was thinking
about repurposing them for Gluster.
Has anyone else attempted this? Any war stories?
Thanks
CC
DFS root.
Then create in DFS a share that points to both nodes.
Windows clients, using the DFS client, will then pick whichever of the
enabled nodes is available.
You can also use site costing to direct clients.
Regards,
Mathieu CHATEAU
http://www.lotp.fr
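On the Windows side, the namespace Mathieu describes could be set up with the DFSN PowerShell module (Server 2012 and later); a sketch only, with made-up namespace, server and share names:

```powershell
# Create a domain-based DFS namespace root (all names are hypothetical).
New-DfsnRoot -Path "\\example.com\files" -TargetPath "\\fs1\files" -Type DomainV2

# Publish one folder with both Gluster-backed SMB servers as targets, so
# the DFS client fails over to the surviving node when one is down.
New-DfsnFolder       -Path "\\example.com\files\data" -TargetPath "\\gluster1\data"
New-DfsnFolderTarget -Path "\\example.com\files\data" -TargetPath "\\gluster2\data"
```

Cost-based referral ordering ("site costing") is then a per-namespace setting that makes clients prefer targets in their own AD site.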
2015-08-17 5:54 GMT+02:00 Colin Coe:
Hi all
I've set up a two-node replicated system that is providing NFS and
SMB/CIFS services to clients (RHEL 5, 6 and 7 via NFS, and
WinXP, 7, 2008R2 and 2012R2 via SMB/CIFS).
I'm trying to create a DFS mount point for Windows nodes to use. I'm
doing this so that if I have to take down one of the Gluster nodes for