You say that accessing Gluster via NFS is actually faster than via the native (FUSE)
client?
Still, I would like to know why we can't use the kernel NFS server on the data
bricks. I understand we can't use it on the MDS, as it doesn't support pNFS.
Ondrej
Ok,
I agree Gluster fits better in certain scenarios, but you just can't expect the
same performance you get from a NAS-based solution. This is especially true
when you deal with lots of relatively small files.
Ondrej
-----Original Message-----
From: Rik Theys [mailto:rik.th...@esat.kuleuven.
> On Mar 7, 2018, at 4:39 AM, Atin Mukherjee wrote:
>
> Please run 'gluster v get all cluster.max-op-version' and whatever value it
> throws up should be used to bump up the cluster.op-version (gluster v set all
> cluster.op-version <value>). With that, if you restart the rejected peer I
> believe
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles the
recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices, not just the active one.
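A minimal sketch of what that all-active client access looks like in practice
(hypothetical volume name 'gv0' and hosts 'server1'/'server2'):

# the FUSE client fetches the volume layout and then talks to every brick
# itself, replicating writes client-side; backup-volfile-servers only changes
# where the volume description can be fetched from, not where data is written
mount -t glusterfs -o backup-volfile-servers=server2 server1:/gv0 /mnt/gv0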
3.10 is
Hello,
I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I land
Hi,
On 2018-03-07 16:35, Ondrej Valousek wrote:
Why do you need to replace your existing solution?
If you don't need to scale out for capacity reasons, an async
NFS server will always outperform GlusterFS.
The current solution is 8 years old and is reaching its end of life.
The reason
I happened to review the status of volume clients and realized they were
reporting a mix of different op-versions: 3.13 clients were still
connecting to the downgraded 3.12 server (likely a timing issue between
downgrading clients and mounting volumes). Remounting the reported
clients has resu
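One way to spot clients that are still connected with an old op-version
(a sketch, assuming a hypothetical volume name 'gv0'):

# list the clients currently connected to the volume's bricks
gluster volume status gv0 clients
# and check what the cluster itself is set to
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version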
Thanks for the feedback! I was able to fix my problem... it turns out
the standard Docker debian and php:cli images don't include a critical
package: apt-transport-https
So to test the GlusterFS client using Docker, start with a file like this:
# you can also use 'FROM debian'
FROM php:cli
# use up to
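A minimal sketch of how such a Dockerfile might continue; this is an
illustration, not the original poster's exact file:

FROM php:cli
# apt-transport-https is the package missing from the stock image; without it
# apt cannot fetch packages from an https:// repository such as the gluster one
RUN apt-get update && apt-get install -y apt-transport-https gnupg
# add the gluster apt repository for your distribution here if you need a newer
# client than the one Debian ships, then install the client
RUN apt-get update && apt-get install -y glusterfs-client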
Hi,
Why do you need to replace your existing solution?
If you don't need to scale out for capacity reasons, an async NFS
server will always outperform GlusterFS.
Ondrej
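For context, the 'async' behaviour being referred to is an export option of
the kernel NFS server; a minimal sketch with a made-up export path and network:

# 'async' lets the server acknowledge writes before they reach stable storage,
# which is where much of the small-file advantage comes from (at the risk of
# losing recent writes if the server crashes)
echo '/export/home 192.168.0.0/24(rw,async,no_subtree_check)' >> /etc/exports
exportfs -ra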
Hi,
We are looking into replacing our current storage solution and are
evaluating gluster for this purpose. Our current solution uses a SAN
with two servers attached that serve Samba and NFSv4. Clients connect to
those servers using NFS or SMB. All users' home directories live on this
server.
I w
2018-03-07 9:29 GMT-03:00 Shyam Ranganathan :
> On 03/05/2018 09:05 AM, Javier Romero wrote:
>>> I am about halfway through my own upgrade testing (using centOS7
>>> containers), and it is patterned around this [1], in case that helps.
>> Taking a look at this.
>>
>>
>
>
> Thanks for confirming the
Please run 'gluster v get all cluster.max-op-version' and whatever value
it throws up should be used to bump up the cluster.op-version (gluster v
set all cluster.op-version <value>). With that, if you restart the rejected
peer I believe the problem should go away; if it doesn't, I'd need to
investigate fu
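As a concrete sketch of that sequence (the value 31302 below is only an
example; use whatever max-op-version actually reports):

# find the highest op-version every peer and client can support
gluster v get all cluster.max-op-version
# bump the cluster to that value, e.g. if the command above reported 31302
gluster v set all cluster.op-version 31302
# then restart glusterd on the rejected peer (systemd-based distros)
systemctl restart glusterd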
On 03/05/2018 09:05 AM, Javier Romero wrote:
>> I am about halfway through my own upgrade testing (using centOS7
>> containers), and it is patterned around this [1], in case that helps.
> Taking a look at this.
>
>
Thanks for confirming the install of the bits.
On the upgrade front, I did find
Hi Guys,
I have a gluster volume with the following configuration.
~
Number of Bricks: 9
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 3 = 3
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 3 = 6
performance.flush-behind
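For reference, a tiered layout like that (a 2 x 3 distributed-replicate cold
tier plus a 1 x 3 replicated hot tier, 9 bricks in total) would typically be
built along these lines; host names and brick paths are made up:

# create and start the cold tier: 2 x 3 distributed-replicate
gluster volume create gv0 replica 3 host{1..6}:/bricks/cold/gv0
gluster volume start gv0
# attach a 1 x 3 replicated hot tier on faster storage
gluster volume tier gv0 attach replica 3 host{1..3}:/bricks/hot/gv0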