[Gluster-users] Arbiter Addition in Replicated environment

2016-12-06 Thread Atul Yadav
Hi Team,


Can we add an arbiter brick to a running 2-node replicated environment?

For example:
GlusterFS 2-node replication
Current GlusterFS storage size: 4 TB
After adding an arbiter brick to this environment, what will be the result?

#gluster volume add-brick test replica 3 arbiter 1 server3:/glusterfs/arbi
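
For reference, a minimal sketch of how the result could be verified after the
add-brick above (the volume name "test" and host "server3" are taken from that
command; the expected layout string is an assumption based on how arbiter
volumes are usually reported):

# gluster volume info test
(on success this should report Number of Bricks: 1 x (2 + 1) = 3)
# gluster volume heal test info
(self-heal then populates the arbiter brick with metadata only, so the usable
capacity should stay at 4 TB)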


Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Glusterfs readonly Issue

2016-11-14 Thread Atul Yadav
Dear Team,

In the event of a failure of master1, the GlusterFS home directory on master2
becomes a read-only filesystem.

If we manually shut down master2, there is no impact on the file system and
all I/O operations complete without any problem.

Can you please provide some guidance to isolate the problem?



# gluster peer status
Number of Peers: 2

Hostname: master1-ib.dbt.au
Uuid: a5608d66-a3c6-450e-a239-108668083ff2
State: Peer in Cluster (Connected)

Hostname: compute01-ib.dbt.au
Uuid: d2c47fc2-f673-4790-b368-d214a58c59f4
State: Peer in Cluster (Connected)



# gluster vol info home

Volume Name: home
Type: Replicate
Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
network.remote-dio: enable
cluster.quorum-type: auto
nfs.disable: on
performance.readdir-ahead: on
cluster.server-quorum-type: server
config.transport: tcp,rdma
network.ping-timeout: 10
cluster.server-quorum-ratio: 51%
cluster.enable-shared-storage: disable



# gluster vol heal home info
Brick master1-ib.dbt.au:/glusterfs/home/brick1
Status: Connected
Number of entries: 0

Brick master2-ib.dbt.au:/glusterfs/home/brick2
Status: Connected
Number of entries: 0


# gluster vol heal home info heal-failed
Gathering list of heal failed entries on volume home has been unsuccessful
on bricks that are down. Please check if all brick processes are running
[root@master2
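
For context, a hedged diagnostic sketch; the assumption (not confirmed above)
is that client-side quorum (cluster.quorum-type: auto) makes the surviving
replica read-only when master1, the first brick, goes down. Log file names are
typical defaults and may differ on this installation:

# gluster volume get home cluster.quorum-type
# gluster volume get home cluster.server-quorum-type
# gluster volume get home cluster.server-quorum-ratio
(shows the effective quorum settings on the volume)

# grep -i quorum /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# grep -iE "quorum|read-only" /var/log/glusterfs/<mount-point>.log
(the second file is the client mount log, named after the mount point)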


Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Peer behavior

2016-07-05 Thread Atul Yadav
Hi ,

After restarting the service, it entered a failed state.
[root@master1 ~]# /etc/init.d/glusterd restart
Stopping glusterd: [FAILED]
Starting glusterd: [FAILED]

Note: this behavior only happens over the RDMA network; with Ethernet
there is no issue.
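
For reference, a minimal sketch of how the failed start could be inspected
further (the log file name is the usual default for this release and may
differ on this installation):

# glusterd --debug
(runs glusterd in the foreground with debug logging so the startup failure is
printed to the terminal)
# tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
(shows the most recent glusterd log entries after a failed start)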

Thank you
Atul Yadav



On Tue, Jul 5, 2016 at 11:28 AM, Atin Mukherjee  wrote:

>
>
> On Tue, Jul 5, 2016 at 11:01 AM, Atul Yadav 
> wrote:
>
>> Hi All,
>>
>> The glusterfs environment details are given below:-
>>
>> [root@master1 ~]# cat /etc/redhat-release
>> CentOS release 6.7 (Final)
>> [root@master1 ~]# uname -r
>> 2.6.32-642.1.1.el6.x86_64
>> [root@master1 ~]# rpm -qa | grep -i gluster
>> glusterfs-rdma-3.8rc2-1.el6.x86_64
>> glusterfs-api-3.8rc2-1.el6.x86_64
>> glusterfs-3.8rc2-1.el6.x86_64
>> glusterfs-cli-3.8rc2-1.el6.x86_64
>> glusterfs-client-xlators-3.8rc2-1.el6.x86_64
>> glusterfs-server-3.8rc2-1.el6.x86_64
>> glusterfs-fuse-3.8rc2-1.el6.x86_64
>> glusterfs-libs-3.8rc2-1.el6.x86_64
>> [root@master1 ~]#
>>
>> Volume Name: home
>> Type: Replicate
>> Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
>> Status: Stopped
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: rdma
>> Bricks:
>> Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
>> Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
>> Options Reconfigured:
>> network.ping-timeout: 20
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> config.transport: rdma
>> cluster.server-quorum-type: server
>> cluster.quorum-type: fixed
>> cluster.quorum-count: 1
>> locks.mandatory-locking: off
>> cluster.enable-shared-storage: disable
>> cluster.server-quorum-ratio: 51%
>>
>> When only a single master node is up, the other nodes are still shown
>> as connected:
>> gluster pool list
>> UUID                                    Hostname                State
>> 89ccd72e-cb99-4b52-a2c0-388c99e5c7b3    master2-ib.dbt.au       Connected
>> d2c47fc2-f673-4790-b368-d214a58c59f4    compute01-ib.dbt.au     Connected
>> a5608d66-a3c6-450e-a239-108668083ff2    localhost               Connected
>> [root@master1 ~]#
>>
>>
>> Please advise us: is this normal behavior, or is it an issue?
>>
>
> First off, there is no master-slave configuration mode for the Gluster
> trusted storage pool, i.e. the peer list. Secondly, if master2 and compute01
> are still reflected as 'Connected' even though they are down, it means that
> localhost here did not receive the disconnect events for some reason. Could
> you restart the glusterd service on this node and check the output of
> gluster pool list again?
>
>
>
>>
>> Thank You
>> Atul Yadav
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Peer behavior

2016-07-04 Thread Atul Yadav
Hi All,

The glusterfs environment details are given below:-

[root@master1 ~]# cat /etc/redhat-release
CentOS release 6.7 (Final)
[root@master1 ~]# uname -r
2.6.32-642.1.1.el6.x86_64
[root@master1 ~]# rpm -qa | grep -i gluster
glusterfs-rdma-3.8rc2-1.el6.x86_64
glusterfs-api-3.8rc2-1.el6.x86_64
glusterfs-3.8rc2-1.el6.x86_64
glusterfs-cli-3.8rc2-1.el6.x86_64
glusterfs-client-xlators-3.8rc2-1.el6.x86_64
glusterfs-server-3.8rc2-1.el6.x86_64
glusterfs-fuse-3.8rc2-1.el6.x86_64
glusterfs-libs-3.8rc2-1.el6.x86_64
[root@master1 ~]#

Volume Name: home
Type: Replicate
Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f
Status: Stopped
Number of Bricks: 1 x 2 = 2
Transport-type: rdma
Bricks:
Brick1: master1-ib.dbt.au:/glusterfs/home/brick1
Brick2: master2-ib.dbt.au:/glusterfs/home/brick2
Options Reconfigured:
network.ping-timeout: 20
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
config.transport: rdma
cluster.server-quorum-type: server
cluster.quorum-type: fixed
cluster.quorum-count: 1
locks.mandatory-locking: off
cluster.enable-shared-storage: disable
cluster.server-quorum-ratio: 51%

When only a single master node is up, the other nodes are still shown
as connected:
gluster pool list
UUID                                    Hostname                State
89ccd72e-cb99-4b52-a2c0-388c99e5c7b3    master2-ib.dbt.au       Connected
d2c47fc2-f673-4790-b368-d214a58c59f4    compute01-ib.dbt.au     Connected
a5608d66-a3c6-450e-a239-108668083ff2    localhost               Connected
[root@master1 ~]#


Please advise us: is this normal behavior, or is it an issue?

Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] HPC with Glusterfs

2016-07-01 Thread Atul Yadav
Hi All,

We are using glusterfs in our HPC environment.

Infra details.
Master Node 2
Compute Node 10
Storage: glusterfs3.8rc2
Operating System: CentOS 6.7
Infiniband: RDMA

To achieve high availability between the two master nodes, glusterfs-server
is set up in replicated mode, and all 10 compute nodes act as GlusterFS
clients. The Gluster peers were created over IPoIB.

The HOME directory is on GlusterFS, replicated over RDMA.
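
For reference, a hedged example of how a compute node might mount the volume
over RDMA (the volume name "home" matches the earlier posts; the mount point
and hostname are illustrative):

# mount -t glusterfs -o transport=rdma master1-ib.dbt.au:/home /home

Or the equivalent /etc/fstab entry:
master1-ib.dbt.au:/home  /home  glusterfs  transport=rdma,_netdev  0 0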

Please share any fine-tuning recommendations for this environment.

Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS RDMA Support

2016-06-22 Thread Atul Yadav
Hi Team,

We installed GlusterFS 3.8 in our HPC environment.

While configuring RDMA on GlusterFS, an error occurs.

Is GlusterFS RDMA compatible with the OFED, Intel, and Mellanox drivers,
or is it only compatible with the operating system's InfiniBand driver?

Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS First Time

2016-02-13 Thread Atul Yadav
Thanks for the reply.

As per your guidance, the GlusterFS information is given below:
Volume Name: share
Type: Replicate
Volume ID: bd545058-0fd9-40d2-828b-7e60a4bae53c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: master1:/data/brick1/share
Brick2: master2:/data/brick2/share
Options Reconfigured:
performance.readdir-ahead: on
cluster.self-heal-daemon: enable
network.ping-timeout: 25
cluster.eager-lock: on

[root@master1 ~]# gluster pool list
UUID                                    Hostname    State
5a479842-8ee9-4160-b8c6-0802d633b80f    master2     Connected
5bbfaa4a-e7c5-46dd-9f5b-0a44f1a583e8    localhost   Connected


Host information is given below:-
192.168.10.103  master1.local   master1 #Fixed
192.168.10.104  master2.local   master2 #Fixed


The test cases are given below:
Case 1
While 20 MB files are being written continuously from the client side, one of
the GlusterFS servers (master1) is powered off.
Impact
Client I/O operations are held for 25 to 30 seconds, after which they continue
normally.

Case 2
The failed server is powered back up during the I/O operation on the client
side.
Impact
Client I/O operations are held for 25 to 30 seconds, after which they continue
normally.
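
As a side note, the 25 to 30 second pause lines up with the
network.ping-timeout value of 25 shown in the volume options above. A hedged
sketch of inspecting or adjusting it (the lower value is purely illustrative;
lowering it trades a shorter failover pause for more false disconnects on a
flaky network):

# gluster volume get share network.ping-timeout
# gluster volume set share network.ping-timeout 10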

Result:
There is no I/O loss during the failure event, but there is a difference in
data size between the two servers.
Master1
Size:   /dev/mapper/brick1-brick1   19912704   508320   19404384   3%   /data/brick1
Inodes: /dev/mapper/brick1-brick1    9961472     1556    9959916   1%   /data/brick1

Master2
Size:   /dev/mapper/brick2-brick2   19912704   522608   19390096   3%   /data/brick2
Inodes: /dev/mapper/brick2-brick2    9961472     1556    9959916   1%   /data/brick2

Client
Size:   master1.local:/share        19912704   522624   19390080   3%   /media
Inodes: master1.local:/share         9961472     1556    9959916   1%   /media



How can we match the data size on both servers, or is this normal behavior?
And will there be any impact on data integrity?
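
For reference, a hedged sketch of how the size difference could be checked;
the assumption (not confirmed above) is that files written while master1 was
down are still pending self-heal, which would account for the smaller brick:

# gluster volume heal share info
(lists entries still pending self-heal on either brick)
# df /data/brick1   (on master1)
# df /data/brick2   (on master2)
(re-run and compare once the pending-heal count reaches zero)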

Thank You
Atul Yadav
09980066464







On Fri, Feb 12, 2016 at 1:21 AM, Gmail  wrote:

> Find my answers inline.
>
> *— Bishoy*
>
> On Feb 11, 2016, at 11:42 AM, Atul Yadav  wrote:
>
> Hi Team,
>
>
> I am totally new to GlusterFS and am evaluating it for my requirements.
>
> I need your valuable input on achieving the requirements below:
> File locking
>
> Gluster uses DLM for locking.
>
> Performance
>
> It depends on your workload (small files, big files, etc.), the number of
> drives, and the kind of volume you create.
> I suggest you start with just a Distributed Replicated volume and from
> that point you can plan for the hardware and software configuration.
>
> High Availability
>
> I suggest replicating the bricks across the two nodes, as erasure coding
> with two nodes and a single drive on each one will not be of any benefit.
>
>
>
> Existing infra details are given below:
> CentOS 6.6
> glusterfs-server-3.7.8-1.el6.x86_64
> glusterfs-client-xlators-3.7.8-1.el6.x86_64
> GlusterFS servers: 2, each with independent 6 TB storage
> 24 GlusterFS clients
> Brick replication
>
>
> Thank You
> Atul Yadav
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS First Time

2016-02-11 Thread Atul Yadav
Hi Team,


I am totally new to GlusterFS and am evaluating it for my requirements.

I need your valuable input on achieving the requirements below:
File locking
Performance
High Availability


Existing infra details are given below:
CentOS 6.6
glusterfs-server-3.7.8-1.el6.x86_64
glusterfs-client-xlators-3.7.8-1.el6.x86_64
GlusterFS servers: 2, each with independent 6 TB storage
24 GlusterFS clients
Brick replication


Thank You
Atul Yadav
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users