I have volumes set up like this:
gluster> volume info
Volume Name: machines0
Type: Distribute
Volume ID: f602dd45-ddab-4474-8308-d278768f1e00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster4:/data/brick1/machines0
Volume Name: group1
Type: Distribute
Volume ID:
Hi,
Is there any performance gain (or is it even possible) in bonding 2 x 1 Gb links?
regards
Steven
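A bond can be created from two 1 GbE NICs with iproute2; a minimal sketch follows. The interface names eth0/eth1, the bond name, and the address are assumptions, and the switch must be configured for LACP to match. Note that a single TCP stream (one client-to-brick connection) still tops out at ~1 Gb/s; the gain only shows up with multiple concurrent connections.

```shell
# Sketch: build an 802.3ad (LACP) bond from two 1 GbE NICs.
# eth0/eth1, bond0 and the address are assumed names/values.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

With balance-rr instead of 802.3ad a single stream can exceed 1 Gb/s, at the cost of possible packet reordering.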
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Hi,
As a part of my initial contribution to the organization, I have done the
following:
1. Installed gluster on 2 nodes on digitalocean
2. Specs of the nodes: 2 GB Memory / 40 GB Disk / NYC2 - Ubuntu 16.04.1
x64
3. Set up a volume and mounted it
4. Set up a client machine, and
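The steps above can be sketched roughly as the following commands; the hostnames node1/node2, the volume name gv0, and the brick paths are assumptions, not taken from the original message.

```shell
# On both droplets: install the server package
apt-get install -y glusterfs-server

# On node1: form the trusted storage pool
gluster peer probe node2

# Create and start a replicated volume (one brick per node;
# names and paths are assumed for illustration)
gluster volume create gv0 replica 2 \
  node1:/data/brick1/gv0 node2:/data/brick1/gv0
gluster volume start gv0

# On the client machine: mount the volume
mount -t glusterfs node1:/gv0 /mnt/gluster
```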
++Bhaskar
- Original Message -
From: "Ashish Pandey"
To: "Menaka Mohan"
Cc: "Gluster Users"
Sent: Monday, October 17, 2016 4:15:02 PM
Subject: Re: [Gluster-users] [Gluster-devel] Need help in understanding
Keeping Bhaskar in loop as he has done testing on glusterfs with iozone.
- Original Message -
From: "Menaka Mohan"
To: gluster-de...@gluster.org
Sent: Tuesday, October 11, 2016 1:18:13 AM
Subject: [Gluster-devel] Need help in understanding IOZone config file
On Tue, Oct 11, 2016 at 11:52 AM, Abeer Mahendroo wrote:
> Hi all.
>
> We had a strange issue with Gluster 3.8.4 under RHEL 7.2.
>
>
>
> Initially, the partition storing the Gluster bricks ran out of space. We
> tried recovering after expanding the underlying partition.
Hi all, I would like to set up 2 Gluster nodes
to use with a VMware server via NFS. With Gluster
I can only share a single volume over NFS, so
I would like to create one Gluster volume
with 3 disks per node. Is there a way to achieve
this directly with Gluster? In case the answer
is
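One way to do this is a single distributed volume built from all six disks (three bricks per node), which Gluster's built-in NFS server then exports as one share. A hedged sketch, with hostnames, volume name, and brick paths all assumed:

```shell
# Sketch: one distributed volume spanning 3 disks on each of 2 nodes.
# gl1/gl2, "vmstore" and the paths are assumed names.
gluster volume create vmstore \
  gl1:/data/disk1/brick gl1:/data/disk2/brick gl1:/data/disk3/brick \
  gl2:/data/disk1/brick gl2:/data/disk2/brick gl2:/data/disk3/brick
gluster volume start vmstore

# On the VMware side, mount over NFSv3 (Gluster's built-in NFS is v3):
mount -t nfs -o vers=3 gl1:/vmstore /mnt/vmstore
```

Note that a plain distribute volume has no redundancy; losing any disk loses the files placed on it.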
Hi all.
We had a strange issue with Gluster 3.8.4 under RHEL 7.2.
Initially, the partition storing the Gluster bricks ran out of space. We
tried recovering after expanding the underlying partition. Eventually we
decided to ‘reset’ Gluster and create the volume again from scratch. I tried
purging
I got the NFS mounts to work, but I can't remember what fixed them. It
might have something to do with name resolution on one of the gluster
servers; I changed too many things to remember :-)
NFS mounts are working beautifully now, thanks all.
Hi Vijay,
It is quite difficult to provide the exact instances, but below are the two
most commonly occurring cases.
1. Duplicate peer entries in the 'peer status' output
2. We lost sync between two boards because the gluster mount point was not
present on one of the boards.
Regards,
Abhisehk
On Mon, Oct
On 14 Oct 2016 17:37, "David Gossage"
wrote:
>
> Sorry to resurrect an old email, but did any resolution occur for this, or
> was a cause found? I just see this as a potential task I may need to also
> run through some day, and if there are pitfalls to watch for it would be
Hi Ankireddypalle,
On 16/10/16 11:10, Ankireddypalle Reddy wrote:
The encryption xlator is the last one before posix, and it’s here that
the data gets encrypted. When the data is read back, the encrypted
data is returned. Decryption is supposed to happen in the read callback,
which does not
>
>I see that network.ping-timeout on your setup is 15 seconds and that's
>too low. Could you reconfigure that to 30 seconds?
>
Yes, I can. I set it to 15 to be sure no browser would time out when trying
to load a website on a frozen VM during the timeout; 15 seemed pretty good
since
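Raising the timeout back to the 30-second default is a single volume-level option. A sketch, where the volume name "vm-store" is an assumption:

```shell
# Set network.ping-timeout to 30 s on the volume (name assumed)
gluster volume set vm-store network.ping-timeout 30

# Verify the new value
gluster volume get vm-store network.ping-timeout
```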
On Fri, Oct 14, 2016 at 10:37:03AM -0500, David Gossage wrote:
>Sorry to resurrect an old email but did any resolution occur for this or a
>cause found? I just see this as a potential task I may need to also run
>through some day and if there are pitfalls to watch for would be good