I also found that the Ubuntu PPAs maintained by the gluster team, when
unpacked, contain a patch in the debian/patches directory that addresses
these issues (but of course it'd be better to have it fixed upstream).
On 22/02/17 18:42, Shyam wrote:
> Optionally try patching the sources with this
Hi everyone!
We have a Gluster array of three servers supporting a large mail
server with about 10,000 e-mail accounts using the Maildir format.
This means lots of random small-file reads and writes.
Gluster's performance hasn't been great since we switched to
Hi Alessandro,
That will address the failover issue but it will not address configuring
the glusterfs client to connect to the brick over TLS. I would be happy to
be wrong. I was only able to get both by specifying that in the config
file. What's curious is why the config file doesn't handle
Il 24/02/2017 14:50, Joseph Lorenzini ha scritto:
> 1. I want the mount /etc/fstab to be able to fail over to any one of
> the three servers that I have. so if one server is down, the client
> can still mount from servers 2 and 3.
The backupvolfile-server option
should do the work or use the
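The failover behaviour described above can be sketched as an /etc/fstab entry; the server names, mount point, and volume name "myvol" here are assumptions:

```shell
# /etc/fstab — mount from server1, fall back to server2 if it is down.
# On GlusterFS 3.5 and later, backup-volfile-servers (plural) accepts a
# colon-separated list, e.g. backup-volfile-servers=server2:server3.
server1:/myvol  /mnt/mail  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
```

Note this only affects fetching the volume file at mount time; once mounted, the client talks to all bricks directly, so ordinary replication handles a server going down.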
Hi Mohammed,
You are right that mounting it this way will do the appropriate
replication. However, there are problems with that for my use case:
1. I want the /etc/fstab mount to be able to fail over to any one of the
three servers that I have, so if one server is down, the client can still
Hi Joseph,
I think there is a gap in the understanding of your problem. Let me try
to give a clearer picture of this.
First, a couple of clarifying points:
1) The client graph is an internally generated configuration file based on
your volume definition; that said, you don't need to create or edit your own. If
Hi Mohammed,
It's not a bug per se, it's a configuration and documentation issue. I
searched the Gluster documentation pretty thoroughly and I did not find
anything that discussed 1) the client's call graph and 2) how to
specifically configure a native glusterfs client to properly specify that
call
I have 3 storage servers that I would like to use as Gluster servers
for VM hosting and some Maildir hosting.
These 3 servers will be connected to a bunch of hypervisor servers.
I'll create a distributed replicated volume with SATA disks and ZFS
(to use SLOG) and another distributed replicated
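A distributed replicated layout over three servers could be created along these lines; the hostnames, ZFS-backed brick paths, and volume name are all assumptions for illustration:

```shell
# Six bricks with replica 3 yields a 2-way distribute across
# two 3-way replica sets (one brick per server per set).
gluster volume create vms replica 3 \
  server1:/tank/brick1 server2:/tank/brick1 server3:/tank/brick1 \
  server1:/tank/brick2 server2:/tank/brick2 server3:/tank/brick2
gluster volume start vms
```

The brick order matters: consecutive groups of "replica" bricks form one replica set, so spreading each set across all three servers keeps any single server failure survivable.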
It looks like it ended up in a split-brain kind of situation. To find
the root cause we need the logs from the first failure of volume start
or volume stop.
Or, to work around it, you can do a volume start force.
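The two steps suggested above could look like this; the volume name "myvol" is assumed, and the log path is the usual default on recent releases:

```shell
# Pull the glusterd log entries around the first failed start/stop
# so the root cause can be identified:
grep -i 'volume \(start\|stop\)' /var/log/glusterfs/glusterd.log | tail -n 50

# Workaround: force the volume to start despite the stale state
gluster volume start myvol force
```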
Regards
Rafi KC
On 02/24/2017 01:36 PM, Deepak Naidu wrote:
>
> I keep on
I keep getting this error when my config.transport is set to both tcp,rdma.
The volume doesn't start. I get the below error during volume start.
To get around this, I end up deleting the volume, then configuring either only
rdma or tcp. Maybe I am missing something, just trying to get the
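The sequence described in the report can be sketched as follows; the volume name, server names, and brick paths are assumptions:

```shell
# Create the volume with both transports — this is the configuration
# that reportedly fails at start time:
gluster volume create testvol transport tcp,rdma \
  server1:/bricks/b1 server2:/bricks/b1

gluster volume start testvol   # reported to fail with tcp,rdma

# The workaround from the report: delete and recreate with one transport
gluster volume stop testvol
gluster volume delete testvol
gluster volume create testvol transport tcp \
  server1:/bricks/b1 server2:/bricks/b1
gluster volume start testvol
```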