It looks like this is to do with the stale port issue.
I think it's pretty clear from the output below that the digitalcorpora brick
process is shown by volume status as having the same TCP port as the public
volume brick on gluster-2, 49156, but it is actually listening on 49154. So
although the brick process is running, glusterd is advertising a stale port for it.
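A quick way to confirm this kind of mismatch (a sketch; the volume name is
taken from above, paths and hosts will differ on your setup):

  # port glusterd advertises for each brick
  gluster volume status digitalcorpora

  # ports the glusterfsd processes are actually bound to (run on gluster-2)
  ss -tlnp | grep glusterfsd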
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> Peculiar behaviour: if I kill the glusterfs brick daemon and restart
> glusterd then the brick becomes available - but one of my other
gluster version 3.10.6, replica 3 volume, daemon is present but does not
appear to be functioning
Peculiar behaviour: if I kill the glusterfs brick daemon and restart
glusterd then the brick becomes available - but one of my other volumes'
bricks on the same server goes down in the same way it's l
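For reference, that kill-and-restart sequence, spelled out as commands (the
PID is a placeholder and a systemd host is assumed):

  # find the PID of the brick process that looks stuck
  gluster volume status
  ps aux | grep glusterfsd

  # kill that brick process, then restart the management daemon
  kill <brick-pid>
  systemctl restart glusterd

  # 'gluster volume start <volname> force' can also respawn a missing brick process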
On Tue, Oct 24, 2017 at 11:04 AM, atris adam wrote:
> Thanks for the reply, that was very interesting to me.
> How can I get news about new glusterfs features?
>
The release notes generally contain information about new features. You can
also look up the github projects page [2] to understand what is being worked on.
I have 14,734 GFIDs that are different. All the different ones are only
on the brick that was live during the outage and concurrent file copy-
in. The brick that was down at that time has no GFIDs that are not also
on the up brick.
As the bricks are 10TB, the find is going to be a long-running process.
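One way to see which GFIDs exist on only one brick is to list the gfid
hardlinks under each brick's .glusterfs directory and compare the lists
(a rough sketch; brick paths and hostnames are placeholders):

  # run on each brick server
  cd /data/brick/.glusterfs
  find [0-9a-f][0-9a-f] -maxdepth 2 -type f -printf '%f\n' | sort > /tmp/gfids-$(hostname).txt

  # copy both lists to one machine, then show gfids unique to either side
  comm -3 /tmp/gfids-server1.txt /tmp/gfids-server2.txt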
Thanks for the reply, that was very interesting to me.
How can I get news about new glusterfs features?
On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur wrote:
>
> Halo replication [1] could be of interest here. This functionality is
> available since 3.11 and the current plan is to have it fully
Halo replication [1] could be of interest here. This functionality is
available since 3.11 and the current plan is to have it fully supported in
a 4.x release.
Note that Halo replication is built on existing synchronous replication in
Gluster and differs from the current geo-replication implementation.
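For anyone who wants to experiment with it, enabling Halo on a replica volume
looks roughly like this (option names as I recall them from the 3.11 release;
the volume name is a placeholder, so please check 'gluster volume set help'
before relying on them):

  gluster volume set myvol cluster.halo-enabled yes
  # only bricks within this latency (in ms) are written to synchronously
  gluster volume set myvol cluster.halo-max-latency 10
  # always keep at least this many copies written synchronously
  gluster volume set myvol cluster.halo-min-replicas 2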
I always used IP addresses instead of names when I added a peer. In the
gluster peer status output, I do see the IPs:
[root@DC-MTL-NAS-01 ~]# gluster peer status
Number of Peers: 2
Hostname: XXX.XXX.XXX.12
Uuid: ec1e10c1-0e38-4d2a-ab51-50fb0c67b6ee
State: Peer in Cluster (Connected)
Hostname: XXX.XXX.X
On 24/10/17 13:01, Alessandro Briosi wrote:
> I would set up a VPN (tinc could work well).
I, too, would recommend trying tinc for this; it can automatically route
traffic for nodes that don't have direct access to other nodes via the
nodes that do.
I have a publicly available setup of Gluster o
On 24/10/2017 12:45, atris adam wrote:
> Thanks for answering, but I have to set it up and test it myself and
> record the result. Can you guide me a little more? The problem is that only
> one valid IP exists for each data center, and each data center has 3
> servers. How should I configure the network in
Thanks for answering, but I have to set it up and test it myself and record
the result. Can you guide me a little more? The problem is that only one
valid IP exists for each data center, and each data center has 3 servers.
How should I configure the network so that the server bricks can see each
other to create a gluster volume?
Hi,
You can, but unless the two datacenters are very close, it'll be slow as
hell. I tried it myself and even a 10ms ping between the bricks is
horrible.
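To put a rough number on that: with synchronous replication every write has
to wait for at least one round trip to the remote bricks, so assuming a single
synchronous writer and a 10 ms RTT you are capped at about 1 / 0.010 s = 100
small writes per second, before any disk or server-side latency is added.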
On Tue, Oct 24, 2017 at 01:42:49PM +0330, atris adam wrote:
> Hi
>
> I have two data centers, each of them have 3 servers. This two data cente
Hi
I have two data centers, each of which has 3 servers. These two data centers
can see each other over the internet.
I want to create a distributed glusterfs volume with these 6 servers, but I
have only one valid IP in each data center. Is it possible to create a
glusterfs volume? Can anyone guide
Hi,
No, gluster doesn't support active-active geo-replication. It's not planned
for the near future. We will let you know when it is planned.
Thanks,
Kotresh HR
On Tue, Oct 24, 2017 at 11:19 AM, atris adam wrote:
> hi everybody,
>
> Has glusterfs released a feature named active-active geo-replication
Hi Jim,
Can you check whether the same hardlinks are present on both of the bricks
and whether both of them have a link count of 2?
If the link count is 2, then
"find <brickpath> -samefile
<brickpath>/.glusterfs/<first two chars of gfid>/<next two chars of gfid>/<full gfid>"
should give you the file path.
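As a concrete illustration (the brick path and gfid below are made-up examples):

  # link count of the gfid hardlink - expect 2 (the gfid link plus the real file)
  stat -c '%h %n' /data/brick/.glusterfs/fa/12/fa123456-7890-4abc-9def-001122334455

  # resolve that gfid back to the file's path on the brick
  find /data/brick -samefile /data/brick/.glusterfs/fa/12/fa123456-7890-4abc-9def-001122334455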
Regards,
Karthik
On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney wrote:
> I'm not so lucky. ALL of m