Hi Alvin,
Thanks for the dump output. It helped a bit.
For now, I recommend turning off the open-behind and read-ahead performance
translators to get rid of this situation, as I noticed hung FLUSH
operations from these translators.
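If it helps, disabling them is just two volume-set calls (this is only a sketch
of the usual commands; replace <VOLNAME> with your volume name):

    # turn off the two performance translators suspected of the hung FLUSH ops
    gluster volume set <VOLNAME> performance.open-behind off
    gluster volume set <VOLNAME> performance.read-ahead off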
-Amar
On Wed, Mar 29, 2017 at 6:56 AM, Alvin Starr wrote:
>
How can I ensure that each parity brick is stored on a different server?
On 30 Mar 2017 at 6:50 AM, "Ashish Pandey" wrote:
> Hi Terry,
>
> There is no constraint on the number of nodes for erasure coded volumes.
> However, there are some suggestions to keep in mind.
>
> If you have 4+2 configura
Terry,
It is (data/parity) >= 2. You can very well create a 4+2 or 8+4 volume.
Are you seeing any error message saying that you cannot create a 4+2 config? (4 = data
brick count and 2 = redundancy brick count)
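For reference, a 4+2 dispersed volume is usually created along these lines
(host names and brick paths below are only placeholders):

    # 4 data + 2 redundancy bricks; put each brick on a different server so
    # that losing any one server costs the set at most one brick
    gluster volume create dispvol disperse-data 4 redundancy 2 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 \
        server4:/bricks/b4 server5:/bricks/b5 server6:/bricks/b6
    gluster volume start dispvol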
Ashish
- Original Message -
From: "Terry McGuire"
To: gluster-users@gluster.org
Sent: F
Hi all,
Does anyone know which SSL protocol glusterfs uses? Does glusterfs support TLS?
Thanks.
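For what it's worth, Gluster's in-flight encryption is TLS (implemented with
OpenSSL). A minimal setup, assuming the default certificate locations and a
placeholder volume name, looks roughly like this:

    # put the certificate, key and CA bundle on every server and client:
    #   /etc/ssl/glusterfs.pem  /etc/ssl/glusterfs.key  /etc/ssl/glusterfs.ca
    gluster volume set myvol client.ssl on   # TLS on the client <-> brick I/O path
    gluster volume set myvol server.ssl on
    touch /var/lib/glusterd/secure-access    # opt the management (glusterd) path in to TLS
    # restart glusterd on each node after creating the secure-access file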
This issue is now fixed in 3.10.1.
On Tue, 21 Mar 2017 at 19:07, David Chin wrote:
> I'm facing the same issue as well. I'm running version 3.10.0-2 for
> both server and client.
>
> Works fine when the client and server are on the same machine.
>
>
> I did a telnet to the opened port relate
I can't answer all of these, but I think the only way to shard existing
files is to create a new volume with sharding enabled and copy the files
over into it.
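A rough sketch of that approach (volume name, replica layout, shard size and
mount points below are just examples):

    # new replica-3 volume with sharding enabled before any data is written
    gluster volume create shardvol replica 3 \
        srv1:/bricks/sv srv2:/bricks/sv srv3:/bricks/sv
    gluster volume set shardvol features.shard on
    gluster volume set shardvol features.shard-block-size 64MB
    gluster volume start shardvol
    # mount it and copy the existing files over from the old volume's mount
    mkdir -p /mnt/shardvol
    mount -t glusterfs srv1:/shardvol /mnt/shardvol
    cp -a /mnt/oldvol/. /mnt/shardvol/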
Cheers,
Laura B
On Friday, March 31, 2017, Alessandro Briosi wrote:
> Hi I need some advice.
>
> I'm currently on 3.8.10 and would like
Thanks Ashish, Cedric, for your comments.
I’m no longer concerned about my choice of 4 nodes to start, but I realize
that there’s an issue with my subvolume config options. It turns out only my 8+3
choice is permitted, as the 4+2 and 8+4 options violate the data/parity>2 rule.
So, 8+3 it is, as
Glusterfs 3.10.1 has been tagged.
Packages for the various distributions will be available in a few days,
and with that a more formal release announcement will be made.
For those who are itching to get a start,
- Tagged code: https://github.com/gluster/glusterfs/tree/v3.10.1
- Release notes:
ht
Hi all,
I have gluster 3.9. I have mTLS set up for both management traffic and
volumes. The gluster FUSE client successfully mounts the gluster volume.
However, I see the following error in the gluster server logs when a mount or
unmount happens on the gluster client. Is this a bug? Is this anythin
Hi I need some advice.
I'm currently on 3.8.10 and would like to know the following:
1. If I add an arbiter to an existing volume should I also run a rebalance?
2. If I had sharding enabled would adding the arbiter trigger the
corruption bug?
3. What's the procedure to enable sharding on an exist
Is it me? Or is it Gluster? I feel like there is (hopefully) a simple
setting that needs to be changed (from the Google searches I'm not the only
one). I've used GlusterFS on and off for years, and even with KVM it's always
been really slow. (It's been OK for generic file storage.)
I know with NFS t
On Thu, Mar 30, 2017 at 04:58:53AM -0700, Jeremiah Rothschild wrote:
> Well, at any rate, here you can see that both servers can talk on 49152/tcp:
Here is the list of ports that were explicitly opened for gfs:
24007/tcp
24008/tcp
24009/tcp
24010/tcp
49152/tcp
49153/tcp
111/tcp
111/udp
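For reference, on a firewalld-based host those can be opened with something
like the following (the default zone is assumed):

    firewall-cmd --permanent --add-port=24007-24010/tcp   # glusterd / management
    firewall-cmd --permanent --add-port=49152-49153/tcp   # brick ports
    firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp   # rpcbind
    firewall-cmd --reload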
> [root@il
On Thu, Mar 30, 2017 at 05:49:32AM -0400, Kotresh Hiremath Ravishankar wrote:
> Hi Jeremiah,
>
> I believe the bug ID is #1437244 and not #1327244.
Oops! You are correct.
> From the geo-rep logs, the master volume has failed with "Transport Endpoint
> Not Connected"
> ...
> [2017-03-30 07:40:5
Hi Jeremiah,
I believe the bug ID is #1437244 and not #1327244.
From the geo-rep logs, the master volume has failed with "Transport Endpoint
Not Connected"
...
[2017-03-30 07:40:57.150348] E [resource(/gv0/foo):234:errlog] Popen: command
"/usr/sbin/glusterfs --aux-gfid-mount --acl
--log-file=/
On 30/03/2017 08:35, Ashish Pandey wrote:
Good point Cedric!!
The only thing is that I would prefer to say "bricks" instead of
"nodes" in your statement:
"starting with 4 bricks (3+1) can only evolve by adding 4 bricks (3+1)"
Oh right, thanks for correcting me!
Cheers
--
On Thu, Mar 30, 2017 at 12:51:23AM -0400, Kotresh Hiremath Ravishankar wrote:
> Hi Jeremiah,
Hi Kotresh! Thanks for the follow-up!
> That's really strange. Please enable DEBUG logs for geo-replication as below
> and send us the logs under "/var/log/glusterfs/geo-replication//*.log" from
> mas