On 02/24/2017 11:47 AM, max.degr...@kpn.com wrote:
>
> The version on the server of this specific mount is 3.7.11. The client
> is running version 3.4.2.
>
It is always better to have everything on one version, all clients and
all servers. In this case there is a huge gap between the versions, 3.7
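For reference, confirming what each node actually runs is quick; these are
the standard commands:

  glusterfs --version   # on clients
  gluster --version     # on servers (the CLI prints its glusterfs version)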
On 02/23/2017 03:54 PM, xina towner wrote:
> Hi,
>
> We are using glusterfs with replica 2, with 16 server nodes and
> around 40 client nodes.
>
> It sometimes happens that they lose connectivity, and when we restart
> the node so it can come back online, the server kicks us off the
> server a
The version on the server of this specific mount is 3.7.11. The client is
running version 3.4.2.
There is more to it than that. This client is actually mounting volumes where
the other server is running 3.4.2 as well. What's your advice: update that
other server to 3.7.11 (or higher) first? Or start
Hi
I want to know how the layout of directories is stored. Is everything just
in trusted.glusterfs.dht, or does something else play a role here?
Another question is about how the layout is created: is it computed on the
client side or the server side? For example, is the trusted.glusterfs.dht
value set by the client
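As far as I know, each brick stores its hash range for a given directory in
the trusted.glusterfs.dht xattr, and the client-side DHT translator is what
assigns layouts (at mkdir time, or during a rebalance/fix-layout). You can
inspect it directly on a brick; a minimal sketch, assuming a hypothetical
brick path of /bricks/brick1:

  # run as root on a server, against the brick path, not the fuse mount
  getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/somedir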
On 02/23/2017 12:18 PM, max.degr...@kpn.com wrote:
>
> Hi,
>
>
>
> We have a 4-node glusterfs setup that seems to be running without any
> problems. We can’t find any issues with replication or anything else.
>
>
>
> We also have 4 machines running the glusterfs client. On all 4
> machines we se
Great effort. Kudos to the team.
Regards
Rafi KC
On 02/23/2017 07:12 PM, chris holcombe wrote:
> Hey Gluster Community!
>
> I wanted to announce that I have built support for ZFS bricks into the
> Gluster charm: https://github.com/cholcombe973/gluster-charm. If anyone
> wants to give it a spi
On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
> Hi all,
>
> I have a simple replicated volume with a replica count of 3. To ensure
> any file changes (create/delete/modify) are replicated to all bricks,
> I have this setting in my client configuration.
>
> volume gv0-replicate-0
> type clu
What are you looking at to determine the hit rate? l2_hits & l2_misses, or hits
& misses?
I’ve got a small cluster with 16G boxes, and I’m at about those numbers for
the l2arc. My bigger group with dedicated storage servers uses more, but I’d
like to do more analysis on it and see how effective
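For what it's worth, on ZFS on Linux the raw counters live in
/proc/spl/kstat/zfs/arcstats; a minimal sketch that derives the L2ARC hit
rate from l2_hits and l2_misses:

  awk '/^l2_hits/   {h=$3}
       /^l2_misses/ {m=$3}
       END {if (h+m) printf "l2arc hit rate: %.1f%%\n", 100*h/(h+m)}' \
      /proc/spl/kstat/zfs/arcstats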
Hi all,
I have a simple replicated volume with a replica count of 3. To ensure any
file changes (create/delete/modify) are replicated to all bricks, I have
this setting in my client configuration.
volume gv0-replicate-0
type cluster/replicate
subvolumes gv0-client-0 gv0-client-1 gv0-clie
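For reference, the completed stanza would presumably look like the sketch
below (the third subvolume name is an assumption based on the replica-3
naming). Note that AFR writes to all live subvolumes by default; what happens
when a brick is down is governed separately by quorum options such as
cluster.quorum-type.

  volume gv0-replicate-0
  type cluster/replicate
  subvolumes gv0-client-0 gv0-client-1 gv0-client-2
  end-volume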
Hey Gluster Community!
I wanted to announce that I have built support for ZFS bricks into the
Gluster charm: https://github.com/cholcombe973/gluster-charm. If anyone
wants to give it a spin and provide feedback I would be overjoyed :).
Thanks,
Chris
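For anyone wanting to try it, the starting point is presumably something like
the sketch below (the local-charm path and unit count are assumptions, not
taken from the repo):

  git clone https://github.com/cholcombe973/gluster-charm
  juju deploy ./gluster-charm -n 3   # hypothetical: 3 units from the local charm dir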
On 23/02/2017 9:49 PM, Gandalf Corvotempesta wrote:
Anyway, is it possible to use the same ZIL partition for multiple
bricks/ZFS vdevs?
I presume you mean slog rather than zil :)
The slog is per pool and applies to all vdevs in the pool.
--
Lindsay Mathieson
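Concretely, since the slog is attached per pool, sharing one SSD across
several pools means partitioning it and giving each pool its own partition;
a minimal sketch (pool names and device paths are hypothetical):

  zpool add tank1 log /dev/disk/by-id/ata-SomeSSD-part1
  zpool add tank2 log /dev/disk/by-id/ata-SomeSSD-part2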
2017-02-23 11:53 GMT+01:00 David Gossage :
> That's what my systems run, though each of my nodes only has ~3TB of storage.
> Current testing with ZFS used as VM storage suggests that l2arc sits
> almost wholly unused, though. I removed mine since it had such a low
> hit rate.
I have to use a
On Thu, Feb 23, 2017 at 3:57 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> How much RAM is suggested for gluster with ZFS (no dedup)?
>
> Are 16GB enough with an SSD L2ARC?
>
> 8gb for the arc, 8gb for gluster and OS.
>
That's what my systems run, though each of my nodes o
On 23/02/2017 7:57 PM, Gandalf Corvotempesta wrote:
How much RAM is suggested for gluster with ZFS (no dedup)?
Are 16GB enough with an SSD L2ARC?
8gb for the arc, 8gb for gluster and OS.
I have an 8GB limit on my ZFS arc. Gluster (server+client) only
seems to use about 3-4GB, so you sh
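For anyone wanting the same cap: on ZFS on Linux the ARC ceiling is the
zfs_arc_max module parameter; a minimal sketch for an 8GB (8 GiB) limit:

  # persistent, applied at module load (path is the usual convention)
  echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf
  # or live, without reloading the module
  echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max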
Hi,
We are using glusterfs with replica 2, with 16 server nodes and around
40 client nodes.
It sometimes happens that they lose connectivity, and when we restart the
node so it can come back online, the server kicks us off the server and we
are unable to log in using ssh, but the server responds
How much RAM is suggested for gluster with ZFS (no dedup)?
Are 16GB enough with an SSD L2ARC?
8gb for the arc, 8gb for gluster and OS.