Re: [Gluster-users] Monitoring tools for GlusterFS

2020-09-03 Thread Sachidananda Urs
On Fri, Sep 4, 2020 at 10:00 AM Artem Russakovskii 
wrote:

> Great, thanks. Thoughts on distributing updates via various repos for
> package managers? For example, I'd love to be able to update it via zypper
> on OpenSUSE.
>

Artem, I'm not planning to create deb/rpm/pkg packages. I will be glad to
help if any volunteers want to create packages.

-sac

>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-30 Thread Sachidananda Urs
On Sat, Aug 29, 2020 at 11:10 PM Artem Russakovskii 
wrote:

> Another small tweak: in your README, you have this:
> "curl -LO v1.0.3 gstatus (download)
> "
> This makes it impossible to easily copy-paste. You should just put
> the link in there, and wrap it in code formatting blocks.
>

Ack. PR: https://github.com/gluster/gstatus/pull/48 should fix the issue.
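For reference, a copy-paste-friendly README snippet could look something like the following. The URL and version are illustrative (modeled on the download link used for earlier releases); the actual text is whatever the PR ends up with:

```markdown
Download the gstatus binary (requires Python >= 3.6):

    curl -LO https://github.com/gluster/gstatus/releases/download/v1.0.3/gstatus
    chmod +x gstatus
```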






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-23 Thread Sachidananda Urs
On Sat, Aug 22, 2020 at 10:52 PM Artem Russakovskii 
wrote:

> The output currently has some whitespace issues.
>
> 1. The space shift under Cluster is different from that under Volumes,
> making the output look a bit inconsistent.
> 2. Can you please fix the tabulation when volume names vary in
> length? This output is shifted and looks messy as a result for me.
>

Artem, with longer volume names the column offset grows and users have to
scroll to the right. To overcome this, I have decided to print the volume
name on a row by itself. PR: https://github.com/gluster/gstatus/pull/44
fixes the issue.

The output looks like this:

root@master-node:/home/sac/work/gstatus# gstatus

Cluster:
         Status: Healthy              GlusterFS: 9dev
         Nodes: 3/3                   Volumes: 2/2

Volumes:

snap-1
         Replicate    Started (UP) - 2/2 Bricks Up
                      Capacity: (12.04% used) 5.00 GiB/40.00 GiB (used/total)
                      Snapshots: 2
                      Quota: On

very_very_long_long_name_to_test_the_gstatus_display
         Replicate    Started (UP) - 2/2 Bricks Up
                      Capacity: (12.04% used) 5.00 GiB/40.00 GiB (used/total)

root@master-node:/home/sac/work/gstatus#


-sac






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-21 Thread Sachidananda Urs
On Fri, Aug 21, 2020 at 4:07 AM Gilberto Nunes 
wrote:

> Hi Sachidananda!
> I am trying to use the latest release of gstatus, but when I cut off one
> of the nodes, I get timeout...
>

I tried to reproduce, but couldn't. How did you cut off the node? I killed
all the gluster processes on one of the nodes and I see the output below:
one of the bricks is shown as offline, and nodes are 2/3. Can you please
tell me the steps to reproduce the issue?
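For reference, this is roughly how I took the node down on my setup. The sketch below only prints the commands (dry-run) so nothing is executed by accident; run them as root on the node you want to cut off:

```shell
# Dry-run sketch: print the commands used to simulate a node failure.
# Remove the echo wrappers to actually execute them.
node_down() {
  echo "pkill gluster        # kills glusterd/glusterfsd/glusterfs"
  echo "pgrep -la gluster    # verify nothing gluster-related is left"
}
node_down
```

Note that pulling the network (as you did) makes the peers see a timeout rather than a clean disconnect, so the two methods may not behave identically; that difference might matter for reproducing your hang.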

root@master-node:/mnt/gluster/movies# gstatus -a

Cluster:
         Status: Degraded             GlusterFS: 9dev
         Nodes: 2/3                   Volumes: 1/1

Volumes:

snap-1   Replicate    Started (PARTIAL) - 1/2 Bricks Up
                      Capacity: (12.02% used) 5.00 GiB/40.00 GiB (used/total)
                      Self-Heal:
                         slave-1:/mnt/brick1/snapr1/r11 (7 File(s) to heal).
                      Snapshots: 2
                         Name:       snap_1_today_GMT-2020.08.15-15.39.10
                         Status:     Started
                         Created On: 2020-08-15 15:39:10 +
                         Name:       snap_2_today_GMT-2020.08.15-15.39.20
                         Status:     Stopped
                         Created On: 2020-08-15 15:39:20 +
                      Bricks:
                         Distribute Group 1:
                            slave-1:/mnt/brick1/snapr1/r11 (Online)
                            slave-2:/mnt/brick1/snapr2/r22 (Offline)
                      Quota: Off
                      Note: glusterd/glusterfsd is down in one or more nodes.
                            Sizes might not be accurate.

root@master-node:/mnt/gluster/movies#

>






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-20 Thread Sachidananda Urs
On Fri, Aug 14, 2020 at 10:04 AM Gilberto Nunes 
wrote:

> Hi
> Could you improve the output to show "Possibly undergoing heal" as well?
> gluster vol heal VMS info
> Brick gluster01:/DATA/vms
> Status: Connected
> Number of entries: 0
>
> Brick gluster02:/DATA/vms
> /images/100/vm-100-disk-0.raw - Possibly undergoing heal
> Status: Connected
> Number of entries: 1
>

Gilberto, the release 1.0.2 (
https://github.com/gluster/gstatus/releases/tag/v1.0.2) has included
self-heal status.
The output looks like this:

root@master-node:/home/sac/work/gstatus# gstatus

Cluster:
         Status: Healthy              GlusterFS: 9dev
         Nodes: 3/3                   Volumes: 1/1

Volumes:

snap-1   Replicate    Started (UP) - 2/2 Bricks Up
                      Capacity: (9.43% used) 4.00 GiB/40.00 GiB (used/total)
                      Self-Heal:
                         slave-1:/mnt/brick1/snapr1/r11 (13 File(s) to heal).
                      Snapshots: 2
                      Quota: On

Hope that helps.

-sac

>






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-14 Thread Sachidananda Urs
On Fri, Aug 14, 2020 at 10:04 AM Gilberto Nunes 
wrote:

> Hi
> Could you improve the output to show "Possibly undergoing heal" as well?
> gluster vol heal VMS info
> Brick gluster01:/DATA/vms
> Status: Connected
> Number of entries: 0
>
> Brick gluster02:/DATA/vms
> /images/100/vm-100-disk-0.raw - Possibly undergoing heal
> Status: Connected
> Number of entries: 1
>
>

We plan to add heal count. For example:

Self-Heal: 456 pending files.

Or something similar. If we list the files and the number of files is high,
it takes a long time and fills the screen, making it quite cumbersome.
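For the curious, a count like this can be derived from `gluster volume heal <volname> statistics heal-count'. The sketch below parses a captured sample of that output (the sample is fabricated here so the snippet is self-contained; this is an illustration of the idea, not gstatus's actual implementation):

```shell
# Sum the per-brick "Number of entries" lines from heal-count output.
sample_output='Gathering count of entries to be healed on volume snap-1 has been successful

Brick slave-1:/mnt/brick1/snapr1/r11
Number of entries: 7

Brick slave-2:/mnt/brick1/snapr2/r22
Number of entries: 6'

total=$(printf '%s\n' "$sample_output" |
        awk -F': ' '/^Number of entries/ {s += $2} END {print s}')
echo "Self-Heal: $total pending files."
```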

-sac

>






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-12 Thread Sachidananda Urs
On Thu, Aug 13, 2020 at 12:58 AM Strahil Nikolov 
wrote:

> I couldn't make it work  on C8...
> Maybe I was cloning the wrong branch.
>
> Details can be found at
> https://github.com/gluster/gstatus/issues/30#issuecomment-673041743


I have commented on the issue:
https://github.com/gluster/gstatus/issues/30#issuecomment-673238987
These are the steps:

 $ git clone https://github.com/gluster/gstatus.git
 $ cd gstatus
 $ VERSION=1.0.0 make gen-version
 # python3 setup.py install

`make gen-version' will create version.py.

Thanks,
sac






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-12 Thread Sachidananda Urs
On Sun, Aug 9, 2020 at 10:43 PM Gilberto Nunes 
wrote:

> How did you deploy it ? - git clone, ./gstatus.py, and python gstatus.py
> install then gstatus
>
> What is your gluster version ? Latest stable to Debian Buster (v8)
>
>
>
Hello Gilberto. I just made a 1.0.0 release.
The gstatus binary is available for download (requires Python >= 3.6) from:
https://github.com/gluster/gstatus/releases/tag/v1.0.0

You can find the complete documentation here:
https://github.com/gluster/gstatus/blob/master/README

Follow the steps below for a quick way to test it out:

# curl -LO https://github.com/gluster/gstatus/releases/download/v1.0.0/gstatus

# chmod +x gstatus

# ./gstatus -a
# ./gstatus --help

If you like what you see, you can move it to /usr/local/bin.

I would like to hear your feedback. Any feature requests/bugs/PRs are welcome.

-sac






[Gluster-users] Fwd: gluster-ansible: Upstream release and current status

2018-10-14 Thread Sachidananda URS
+ gluster-users

-- Forwarded message --
From: Sachidananda URS 
Date: Sun, Oct 14, 2018 at 7:52 PM
Subject: gluster-ansible: Upstream release and current status
To: Gluster Devel 


Hi,

There has been some activity in the gluster-ansible project.

This week we built upstream packages[1] for:
* gluster-ansible-features
* gluster-ansible-infra
* gluster-ansible

Highlights:

- gluster-ansible-infra documentation was grossly out-of-date
   * We did an overhaul of the documentation and fixed obvious bugs
   * Removed examples/ and added a playbooks/ directory with up-to-date
working playbooks

- Thanks to Sheersha and Nigel, we added our initial tests
- Reworked our end-to-end hyperconverged installation examples and cleanup
playbooks.
- Fixed a very nasty bug where mkfs was skipped when only thick volumes
were created.
- More improvements in hyperconverged deployments, fixed a bunch of bugs.

Call for help:
gluster-ansible is far from feature complete, but it is in a usable state for
deploying GlusterFS clusters.

* Patches and documentation are always welcome.
* Please use gluster-ansible for your deployments and report bugs,
improvements,
  feature requests, patches ...

-sac

[1] https://copr.fedorainfracloud.org/coprs/sac/gluster-ansible/builds/

Re: [Gluster-users] gdeploy, Centos7 & Ansible 2.3

2017-05-09 Thread Sachidananda URS
On Tue, May 9, 2017 at 4:19 PM, hvjunk  wrote:

> This looks much better, Thanks!
>

Thanks for reporting and helping in fixing this.

Cheers,
-sac

Re: [Gluster-users] gdeploy, Centos7 & Ansible 2.3

2017-05-09 Thread Sachidananda URS
Hi,

On Tue, May 9, 2017 at 2:54 PM, hvjunk <hvj...@gmail.com> wrote:

>
> On 09 May 2017, at 09:15 , Sachidananda URS <s...@redhat.com> wrote:
>
> Hi,
>
> On Mon, May 8, 2017 at 2:05 PM, hvjunk <hvj...@gmail.com> wrote:
>
>>
>> 
>>
>> [root@linked-clone-of-centos-linux ~]# gdeploy -c t.conf
>> ERROR! no action detected in task. This often indicates a misspelled
>> module name, or incorrect module path.
>>
>> The error appears to have been in '/tmp/tmpvhTM5i/pvcreate.yml': line 16,
>> column 5, but may
>> be elsewhere in the file depending on the exact syntax problem.
>>
>> The offending line appears to be:
>>
>>   # Create pv on all the disks
>>   - name: Create Physical Volume
>> ^ here
>>
>>
> That is strange. Can you please give me the following output?
>
> $ rpm -qa | grep deploy
> $ rpm -qa | grep ansible
>
>
> [root@linked-clone-of-centos-linux ~]# rpm -qa | egrep "deploy|ansible"
> gdeploy-2.0.2-6.noarch
> ansible-2.3.0.0-3.el7.noarch
>
>
I've fixed this issue. You can get the new rpm from:
https://copr-be.cloud.fedoraproject.org/results/sac/gdeploy/epel-7-x86_64/00549451-gdeploy/gdeploy-2.0.2-7.noarch.rpm


The root cause of the issue:

In ansible-2.3, Ansible no longer considers modules under
ansible/modules/extras/, which is where we used to keep the
gdeploy-related modules. The above release fixes the issue.
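For anyone stuck on the older gdeploy rpm, a possible stop-gap (my assumption, not a tested fix) is to point Ansible at the module directory explicitly via the ANSIBLE_LIBRARY environment variable. The path below is illustrative; check where your gdeploy package actually installs its modules:

```shell
# ANSIBLE_LIBRARY tells Ansible where to look for extra modules.
# The path here is an assumption -- adjust to your installation.
export ANSIBLE_LIBRARY=/usr/share/ansible/gdeploy/modules
# then run gdeploy as usual:
# gdeploy -c t.conf
```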

Can you please try now and let us know?

-sac

Re: [Gluster-users] gdeploy, Centos7 & Ansible 2.3

2017-05-09 Thread Sachidananda URS
Hi,

On Mon, May 8, 2017 at 2:05 PM, hvjunk  wrote:

>
> 
>
> [root@linked-clone-of-centos-linux ~]# gdeploy -c t.conf
> ERROR! no action detected in task. This often indicates a misspelled
> module name, or incorrect module path.
>
> The error appears to have been in '/tmp/tmpvhTM5i/pvcreate.yml': line 16,
> column 5, but may
> be elsewhere in the file depending on the exact syntax problem.
>
> The offending line appears to be:
>
>   # Create pv on all the disks
>   - name: Create Physical Volume
> ^ here
>
>
That is strange. Can you please give me the following output?

$ rpm -qa | grep deploy
$ rpm -qa | grep ansible

After your mail I tested on my machine and things seem to be working, so
I'm guessing there could be a very trivial setup/package issue here which
is causing this.

Once you confirm that you are using gdeploy-2.0.2-6 and ansible-2.2 or
higher, can you please perform the steps below to narrow down the error?

Maybe you can just trim your config file to contain just:

[hosts]
10.10.10.11
10.10.10.12
10.10.10.13

[backend-setup]
devices=/dev/sdb
mountpoints=/gluster/brick1
brick_dirs=/gluster/brick1/one
pools=pool1

And run the command:

$ gdeploy -c t.conf -k

The -k flag ensures that your temporary playbook directory is not deleted.
You will see a message like:

You can view the generated configuration files inside /tmp/tmp...

Please cd to that directory and run:

$ ansible-playbook -i ansible_hosts pvcreate.yml

Can you please let me know the results? We may have to take a closer look
at your setup.

-sac

[Gluster-users] gdeploy 2.0 is available!

2016-03-10 Thread Sachidananda URS
Hi,

I'm pleased to announce the 2.0 version of gdeploy[1]. RPMs can be
downloaded from:
http://download.gluster.org/pub/gluster/gdeploy/2.0/


gdeploy 2.0 adds a lot of new features, like:

* Creating multiple volumes.
* Multiple-section support, e.g. [service-1], [service-2] ..., to run
multiple services.
* A firewalld module to allow/block ports.

And a lot more.
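As a rough sketch of the multi-section idea (section and option names are modeled on the gdeploy examples; treat the details as illustrative rather than verified):

```ini
[hosts]
10.70.46.13
10.70.46.17

# Numbered sections let the same module run more than once.
[volume1]
action=create
volname=vol1
brick_dirs=/gluster/brick1/one

[volume2]
action=create
volname=vol2
brick_dirs=/gluster/brick2/two

[firewalld]
action=add
ports=24007/tcp,24008/tcp
```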

More details on gdeploy-2 can be found at:
https://github.com/gluster-deploy/gdeploy/blob/master/doc/gdeploy-2

A short introduction to gdeploy can be found at:
http://sacurs.blogspot.in/2015/09/gdeploy-short-introduction-gdeploy-is.html

gdeploy is backward compatible with version 1, some configuration examples
can be found here:
https://github.com/gluster/gdeploy/tree/1.1/examples


Please file bugs at bugzilla.redhat.com; if you like/dislike something, drop
us a mail. Pull requests are always welcome.

We look forward to adding more documentation, use cases, examples, and
write-ups. We will keep you posted.

-sac

[1]https://github.com/gluster/gdeploy/tree/2.0


[Gluster-users] gdeploy v.2 - roadmap

2016-01-07 Thread Sachidananda URS
Hi All,

We are making some changes to the gdeploy design.
With the new changes, it is very easy to develop modules without cluttering
the core functionality.

Documentation on the new design can be found at:

https://github.com/gluster/gdeploy/blob/master/doc/gdeploy-2

And development can be tracked at:

https://github.com/gluster/gdeploy/tree/2.0

Any comments and suggestions are welcome.

-sac

[Gluster-users] gdeploy 1.1 is available!

2015-11-10 Thread Sachidananda URS
Hi,

I'm pleased to announce the 1.1 version of gdeploy[1]. RPMs can be
downloaded from:
http://download.gluster.org/pub/gluster/gdeploy/1.1/



In this release we have:

* Patterns for hostnames in the configuration files.
* Backend setup config changes.
* Rerunning the config does not throw errors.
* Backend reset.
* Intuitive configuration file format (we support the old format too).
* Host-specific and group-specific configurations.

And support for the following GlusterFS features.

* Quota
* Snapshot
* Geo-replication
* Subscription manager
* Package install
* Firewalld
* Samba
* CTDB
* CIFS mount

Some sample configuration files can be found at:
https://github.com/gluster/gdeploy/tree/1.1/examples

Happy deploying!

-sac

[1] https://github.com/gluster/gdeploy/tree/1.1

[Gluster-users] gdeploy-1.0 released

2015-09-21 Thread Sachidananda URS
Hi All,

I'm pleased to announce the initial release of gdeploy[1]. RPMs can be
downloaded from:
http://download.gluster.org/pub/gluster/gdeploy/1.0/

gdeploy is a deployment tool that helps in:

* Setting up backends for GlusterFS.
* Creating a volume.
* Adding a brick to the volume.
* Removing a brick from a volume ...
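A minimal configuration sketch covering the backend-setup plus volume-creation flow (hostnames and option names here are illustrative; see the linked examples for real configurations):

```ini
[hosts]
server1.example.com
server2.example.com

[backend-setup]
devices=/dev/vdb
mountpoints=/gluster/brick1
brick_dirs=/gluster/brick1/b1

[volume]
action=create
volname=glustervol
replica=yes
replica_count=2
```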


A short introduction to gdeploy can be found here [2]
An example on how to use gdeploy can be found here [3]

Fork us on GitHub: https://github.com/gluster/gdeploy/. 1.0 is the stable
branch if you plan to run from source.

Any questions, suggestions, and pull request are welcome.

-sac

[1] https://github.com/gluster/gdeploy
[2]
http://sacurs.blogspot.in/2015/09/gdeploy-short-introduction-gdeploy-is.html
[3] http://sacurs.blogspot.in/2015/09/gdeploy-getting-started.html

Re: [Gluster-users] oVirt replacement?

2015-09-10 Thread Sachidananda URS
Hi Ryan,

On Wed, Sep 9, 2015 at 5:20 PM, Ryan Nix  wrote:

> At the Redhat conference in June one of the lead developers mentioned
> Gluster would be getting a new GUI management tool to replace oVirt. Has
> there been any formal announcement on this?
>
>

Currently we have gdeploy, which is a CLI for deployment and management of
Gluster clusters.
It is an Ansible-based tool which can be used to:

* Set up backend bricks.
* Create volumes.
* Mount the clients, and a bunch of other things.

The project is hosted at:
https://github.com/gluster/gdeploy/tree/1.0

I suggest using 1.0 branch, master is unstable and work in progress.

The tool is very easy to use. It relies on configuration files. Writing a
configuration file is explained
in the example configuration file:

https://github.com/gluster/gdeploy/blob/1.0/examples/gluster.conf.sample

A more detailed explanation of the fields can be found at:
https://github.com/gluster/gdeploy/blob/1.0/examples/README

`gdeploy --help' will print a help message on usage.

Please let us know if you have any questions. It would be awesome if you
could send us pull requests.

-sac

Re: [Gluster-users] oVirt replacement?

2015-09-10 Thread Sachidananda URS
On Fri, Sep 11, 2015 at 1:07 AM, Ryan Nix <ryan@gmail.com> wrote:

> Thanks. Seems like just about everything open source is using Ansible for
> deployment.
>


The flexibility of Ansible and the ease of bootstrapping make it a natural
choice for deployment.



>
> On Thu, Sep 10, 2015 at 12:50 PM, Sachidananda URS <s...@redhat.com>
> wrote:
>
>> Hi Ryan,
>>
>> On Wed, Sep 9, 2015 at 5:20 PM, Ryan Nix <ryan@gmail.com> wrote:
>>
>>> At the Redhat conference in June one of the lead developers mentioned
>>> Gluster would be getting a new GUI management tool to replace oVirt. Has
>>> there been any formal announcement on this?
>>>
>>>
>>
>> Currently we have gdeploy which is a CLI for deployment and management of
>> Gluster clusters.
>> This is an ansible based tool, which can be used to:
>>
>> * Set up backend bricks.
>> * Create volume.
>> * Mount the clients and bunch of other things.
>>
>> The project is hosted at:
>> https://github.com/gluster/gdeploy/tree/1.0
>>
>> I suggest using 1.0 branch, master is unstable and work in progress.
>>
>> The tool is very easy to use. It relies on configuration files. Writing a
>> configuration file is explained
>> in the example configuration file:
>>
>> https://github.com/gluster/gdeploy/blob/1.0/examples/gluster.conf.sample
>>
>> A more detailed explanation of the fields can be found at:
>> https://github.com/gluster/gdeploy/blob/1.0/examples/README
>>
>> `gdeploy --help' will print out help message on the usage.
>>
>> Please let us know if you have any questions. It would be much awesome if
>> you can send us
>> pull requests.
>>
>> -sac
>>
>
>

Re: [Gluster-users] 2 Node glusterfs quorum help

2015-02-09 Thread Sachidananda URS
Excuse my top posting:

Kaamesh,

There are two types of quorum implementations:

1)
cluster.server-quorum-type: server/none
cluster.server-quorum-ratio: (0-100%) default 50%

Whose behavior is to kill the glusterfsd daemons (server) if 
server-quorum-ratio is not met.

2) And we have quorum type:

cluster.quorum-type: fixed
cluster.quorum-count: 1

Whose behavior is to make the cluster read-only if the quorum is not met.
In your case, if you don't want the cluster to go down (without adding another
node to the cluster), disable the cluster.server-quorum-type: server option,
and use the `cluster.quorum-type: fixed' and `cluster.quorum-count: 1' options.

In your volume options I see that you are using both the above methods to 
achieve quorum.

ref: 
http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options

You can see some explanation in the above link.
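Concretely, for the volume in question (named gfsvolume in the posted volume info), the change would look roughly like the commands below. They are shown in dry-run form, since they need a live cluster; using `reset' to disable server-quorum-type is my assumption of the cleanest way to undo that option:

```shell
VOL=gfsvolume          # volume name taken from the posted volume info
run() { echo "+ $*"; } # dry-run: print the command instead of executing it

run gluster volume reset "$VOL" cluster.server-quorum-type
run gluster volume set "$VOL" cluster.quorum-type fixed
run gluster volume set "$VOL" cluster.quorum-count 1
```

Drop the `run' wrapper to execute the commands for real.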

-sac


- Original Message -
 From: ML mail mlnos...@yahoo.com
 To: gluster-users@gluster.org
 Sent: Monday, February 9, 2015 1:53:56 PM
 Subject: Re: [Gluster-users] 2 Node glusterfs quorum help
 
 This seems to be a workaround, isn't there another proper way with the
 configuration of the volume to achieve this? I would not like to have to
 setup a third fake server just in order to avoid that.
 
 
 
 On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan
 kaam...@novocraft.com wrote:
 
 
 It works! Thanks to Craig's suggestion, I set up a third server without a
 brick and added it to the trusted pool. Now it doesn't go down. Thanks a lot
 guys!
 
 Thank You Kindly,
 Kaamesh
 Bioinformatician
 Novocraft Technologies Sdn Bhd
 C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya
 Selangor Darul Ehsan
 Malaysia
 Mobile: +60176562635
 Ph: +60379600541
 Fax: +60379600540
 
 On Mon, Feb 9, 2015 at 2:19 AM,  prmari...@gmail.com  wrote:
 
 
 
 Quorum only applies when you have 3 or more bricks replicating each other.
 In other words it doesn't mean anything in a 2 node 2 brick cluster, so it
 shouldn't be set.
 
 In other words based on your settings it's acting correctly because it thinks
 that the online brick needs to have a minimum of one other brick it agrees
 with online.
 
 Sent from my BlackBerry 10 smartphone.
 From: Kaamesh Kamalaaharan
 Sent: Sunday, February 8, 2015 05:50
 To: gluster-users@gluster.org
 Subject: [Gluster-users] 2 Node glusterfs quorum help
 
 Hi guys. I have a 2 node replicated gluster setup with the quorum count set
 at 1 brick. By my understanding this means that the gluster will not go down
 when one brick is disconnected. This however proves false and when one brick
 is disconnected (i just pulled it off the network) the remaining brick goes
 down as well and i lose my mount points on the server.
 can anyone shed some light on whats wrong?
 
 my gfs config options are as following
 
 
 Volume Name: gfsvolume
 Type: Replicate
 Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: gfs1:/export/sda/brick
 Brick2: gfs2:/export/sda/brick
 Options Reconfigured:
 cluster.quorum-count: 1
 auth.allow: 172.*
 cluster.quorum-type: fixed
 performance.cache-size: 1914589184
 performance.cache-refresh-timeout: 60
 cluster.data-self-heal-algorithm: diff
 performance.write-behind-window-size: 4MB
 nfs.trusted-write: off
 nfs.addr-namelookup: off
 cluster.server-quorum-type: server
 performance.cache-max-file-size: 2MB
 network.frame-timeout: 90
 network.ping-timeout: 30
 performance.quick-read: off
 cluster.server-quorum-ratio: 50%
 
 
 Thank You Kindly,
 Kaamesh
 
 
 
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Does Glusterfs works with ipv6?

2015-01-23 Thread Sachidananda URS
Hi,

- Original Message -
 From: Олег Кузнецов smallo...@gmail.com
 To: gluster-users@gluster.org
 Sent: Friday, January 23, 2015 5:00:56 PM
 Subject: [Gluster-users] Does Glusterfs works with ipv6?
 
 Hi all!
 Does Glusterfs works with ipv6?
 
 I installed glusterfs 3.4.6-7 and created a volume and brick with the
 gluster CLI.
 
 But by default glusterfsd/glusterd doesn't listen on any IPv6 addresses,
 and I can't peer probe other servers over IPv6.
 
 How can I fix it?

GlusterFS support for ipv6 is not at its best yet.

Refer:

http://www.gluster.org/community/documentation/index.php/Features/IPv6

We have some open bugs listed below. If your bug is different from the ones
listed, please file it.

https://bugzilla.redhat.com/buglist.cgi?quicksearch=910836%20922801%201070685&list_id=3178783

-sac

 

Re: [Gluster-users] Glusterfs upgrade to 3.5 without downtime

2015-01-19 Thread Sachidananda URS
Hi,

- Original Message -
 From: Marco Marino marino@gmail.com
 To: gluster-users@gluster.org
 Sent: Monday, January 19, 2015 2:04:53 PM
 Subject: [Gluster-users] Glusterfs upgrade to 3.5 without downtime
 
 Hi. I'm using glusterfs (replica 2) as a storage backend for openstack vm
 instances (ephemeral disks). Each compute node mounts glusterfs in
 /mnt/instances but i need to upgrade glusterfs from 3.4 to 3.5 (
 http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5 ).
 This guide tells me that a downtime is required, but in my case downtime
 means:
 1) shutoff all vm instances
 2) umount glusterfs from all compute nodes
 3) upgrade glusterfs
 4) mount glusterfs from all compute nodes
 5) restart all vm instances
 
 
 Is there a solution without downtime (also using replication 2 factor)? I do
 not want to shut off all instances


It is possible to do this without downtime (since you are upgrading from 3.4
to 3.5 it should not be very difficult), but you have to be extremely careful.
Downtime is suggested because it is very easy to get into an unusable state
unless you are very careful.

Here is how you can do the upgrades:

On one of the nodes in the replica pair `pkill gluster' ... ensure none of the
gluster* processes are running. Upgrade that node to 3.5 and `service glusterd
start', ensure that all the processes are up.

Then heal the volume: `gluster volume heal volname full' ... and keep checking
the heal info to ensure that the data is fully healed. Once data is fully
healed... repeat the above process on another node.
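The per-node procedure above, sketched as a dry-run script (node and volume names are illustrative, and the steps are printed rather than executed):

```shell
# Print the rolling-upgrade steps for each node of the replica pair.
upgrade_node() {
  node=$1
  echo "[$node] pkill gluster                # ensure no gluster* process is running"
  echo "[$node] <upgrade glusterfs packages to 3.5>"
  echo "[$node] service glusterd start       # verify all processes come back up"
  echo "then:   gluster volume heal volname full   # poll heal info until clean"
}
upgrade_node server1
upgrade_node server2
```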

Later you can upgrade your clients at your convenience.

Hope it helps.

-sac


Re: [Gluster-users] Remove Replica Pair issue.

2014-07-14 Thread Sachidananda Urs

Hey Rob,

On 07/14/2014 07:59 PM, Robert Moss wrote:


Gluster Gurus,

I am having an issue removing a gluster replica pair. I have tried
numerous different iterations of the command and get back 'Incorrect
brick tx4-gluster1-10:/brick10 for volume gv0'

Here is the output of gluster volume info:

Volume Name: gv0

Type: Distributed-Replicate

Volume ID: 176c0a15-c107-4bdc-9b00-df7e5c841374

Status: Started

Number of Bricks: 10 x 2 = 20

Transport-type: tcp

Bricks:

Brick1: tx4-gluster1-1:/brick1/brick

Brick2: tx4-gluster1-2:/brick1/brick

Brick3: tx4-gluster1-3:/brick1/brick

Brick4: tx4-gluster1-4:/brick1/brick

Brick5: tx4-gluster1-5:/brick1/brick

Brick6: tx4-gluster1-6:/brick1/brick

Brick7: tx4-gluster1-7:/brick1/brick

Brick8: tx4-gluster1-8:/brick1/brick

Brick9: tx4-gluster1-9:/brick1/brick

Brick10: tx4-gluster1-10:/brick1/brick

Brick11: tx4-gluster1-1:/brick2/brick

Brick12: tx4-gluster1-2:/brick2/brick

Brick13: tx4-gluster1-3:/brick2/brick

Brick14: tx4-gluster1-4:/brick2/brick

Brick15: tx4-gluster1-5:/brick2/brick

Brick16: tx4-gluster1-6:/brick2/brick

Brick17: tx4-gluster1-7:/brick2/brick

Brick18: tx4-gluster1-8:/brick2/brick

Brick19: tx4-gluster1-9:/brick2/brick

Brick20: tx4-gluster1-10:/brick2/brick

Options Reconfigured:

nfs.disable: off

I would like to remove gluster1-10 from the gluster cluster.

Here is my command:

gluster volume remove-brick gv0 replica 2 tx4-gluster1-10:/brick1 
tx4-gluster1-10:/brick2 force




You are getting the command wrong here. You have to remove whole replica
pairs; in your case, one of the pairs is:



Brick9: tx4-gluster1-9:/brick1/brick

Brick10: tx4-gluster1-10:/brick1/brick

And your command would be:

gluster volume remove-brick gv0 replica 2 tx4-gluster1-9:/brick1/brick 
tx4-gluster1-10:/brick1/brick


And if I understand correctly, you want to remove the host gluster1-10,
which is part of two replica pairs, from the cluster. So you will end up
removing two replica pairs.


-sac


Here is the result:

volume remove-brick commit force: failed: Incorrect brick 
tx4-gluster1-10:/brick1 for volume gv0


I have tried a number of iterations here, using different names and 
trying tx4-gluster1-10 and 1-9 bricks. Nothing has worked so far.


This is a test setup so we don't care about data, we just need to learn.

We will probably not put the node back in after removal.

Any help would be greatly appreciated.

Thank you in advance.

Best regards,

Rob Moss




Re: [Gluster-users] A very special announcement from Gluster.org

2012-06-06 Thread Sachidananda URS
Hi Philip,

Did you try installing libssl from source to meet the dependency?

-sac

Sent from my iPhone

On 02-Jun-2012, at 13:57, Philip flip...@googlemail.com wrote:

 It is still not possible to install the 3.3 deb on a stable release of debian 
 because squeeze has no libssl1.0.0.
 
 2012/5/31 John Mark Walker johnm...@redhat.com
 Today, we’re announcing the next generation of GlusterFS, version 3.3. The 
 release has been a year in the making and marks several firsts: the first 
 post-acquisition release under Red Hat, our first major act as an 
 openly-governed projectand our first foray beyond NAS. We’ve also taken our 
 first steps towards merging big data and unstructured data storage, giving 
 users and developers new ways of managing their data scalability challenges.
 
 GlusterFS is an open source, fully distributed storage solution for the 
 world’s ever-increasing volume of unstructured data. It is a software-only, 
 highly available, scale-out, centrally managed storage pool that can be 
 backed by POSIX filesystems that support extended attributes, such as Ext3/4, 
 XFS, BTRFS and many more.
 
 This release provides many of the most commonly requested features including 
 proactive self-healing, quorum enforcement, and granular locking for 
 self-healing, as well as many additional bug fixes and enhancements.
 
 Some of the more noteworthy features include:
 
 - Unified File and Object storage – Blending OpenStack’s Object Storage API
   with GlusterFS provides simultaneous read and write access to data as files
   or as objects.
 - HDFS compatibility – Gives Hadoop administrators the ability to run
   MapReduce jobs on unstructured data on GlusterFS and access the data with
   well-known tools and shell scripts.
 - Proactive self-healing – GlusterFS volumes will now automatically restore
   file integrity after a replica recovers from failure.
 - Granular locking – Allows large files to be accessed even during
   self-healing, a feature that is particularly important for VM images.
 - Replication improvements – With quorum enforcement you can be confident
   that your data has been written in at least the configured number of places
   before the file operation returns, allowing a user-configurable adjustment
   to fault tolerance vs. performance.
 
 Visit http://www.gluster.org to download. Packages are available for most 
 distributions, including Fedora, Debian, RHEL, Ubuntu and CentOS.
 
 Get involved! Join us on #gluster on freenode, join our mailing list, ‘like’ 
 our Facebook page, follow us on Twitter, or check out our LinkedIn group.
 
 GlusterFS is an open source project sponsored by Red Hat®, who uses it in its 
 line of Red Hat Storage products.
 
 (this post published at 
 http://www.gluster.org/2012/05/introducing-glusterfs-3-3/ )
 
 


Re: [Gluster-users] 3.3 rdma in .deb package

2012-06-06 Thread Sachidananda URS
Hi Filipe,

I have built the RDMA packages; they can be found at: 
http://download.gluster.com/pub/gluster/glusterfs/LATEST/Debian/5.0.3/

-sac

- Original Message -
From: Filipe Roque flip.ro...@gmail.com
To: gluster-users@gluster.org
Sent: Wednesday, June 6, 2012 5:02:51 PM
Subject: Re: [Gluster-users] 3.3 rdma in .deb package

 RDMA is definitely still supported - although we may have simply rearranged 
 how the packaging is done.

 You should be able to use it as before.

But if the provided debian package doesn't include RDMA, how can I use
it as before?

Only option I see is compiling myself.

flip
-- 
Rádio Zero
www.radiozero.pt


Re: [Gluster-users] Is glusterd required on clients?

2012-06-06 Thread Sachidananda Urs

On Tuesday 22 May 2012 10:54 AM, Toby Corkindale wrote:
On Linux clients, using the FUSE method of mounting volumes, do you 
need glusterd to be running?


I don't *think* so, but want to check.

You don't need glusterd to be running on the clients. It is a management 
daemon, required only on the servers where you run operations through the 
gluster CLI.
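As an illustration, a client only needs the glusterfs FUSE client bits installed; a mount can be defined with no local glusterd at all. A sketch of an /etc/fstab entry, where `server1` and `gv0` are placeholder names for one of your servers and your volume:

```
# /etc/fstab on the client (server1 and gv0 are placeholder names):
server1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0  0
```

After adding the entry, `mount /mnt/gluster` (or `mount -a`) starts a glusterfs client process for that mount; glusterd is contacted only on the server side, to hand the client its volume file.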


-sac


Re: [Gluster-users] 3.3 rdma in .deb package

2012-06-06 Thread Sachidananda Urs

Hi Filipe,

On Wednesday 06 June 2012 05:02 PM, Filipe Roque wrote:

RDMA is definitely still supported - although we may have simply rearranged how 
the packaging is done.

You should be able to use it as before.

But if the provided debian package doesn't include RDMA, how can I use
it as before?

Only option I see is compiling myself.


Currently, as we are not doing extensive RDMA tests on Debian/Ubuntu 
machines, we have disabled RDMA in the packages.
However, as you mentioned, you will have to compile from source if you need 
RDMA support.
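For anyone who does need it, a rough outline of a source build follows. Tarball names and configure options vary between releases, so treat this as a sketch and check `./configure --help` on your release for the RDMA/ibverbs option; build prerequisites (autotools, flex, bison, libssl-dev, libibverbs-dev for RDMA) also go by different package names per distribution.

```
$ tar xzf glusterfs-3.3.0.tar.gz && cd glusterfs-3.3.0
$ ./configure        # check the summary it prints to confirm RDMA/ibverbs is enabled
$ make && sudo make install
```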


-sachidananda.


Re: [Gluster-users] GlusterFS 3.3 beta on Debian

2012-05-03 Thread Sachidananda Urs
Hi,

On Thu, May 3, 2012 at 10:49 AM, Toby Corkindale 
toby.corkind...@strategicdata.com.au wrote:


 The files are located in a directory that looks like they were built for
 Debian Lenny, here:

 http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3.0beta3/Debian/5.0.3/

 Note the 5.0.3 at the end of the path..


 However, when attempting to install the .deb file, it gives an error
 about package libssl1.0.0 being missing.

 That package is only available in the upcoming Debian version 7.0
 (Wheezy), and is not available for Debian 5.0 or 6.0 at all.

The packages are built for wheezy/sid:

debian5:~# cat /etc/debian_version
wheezy/sid
debian5:~# uname -a
Linux debian5.0.3 2.6.26-2-amd64 #1 SMP Wed Aug 19 22:33:18 UTC 2009 x86_64
GNU/Linux
debian5:~#

Can you see if libssl0.9.8 is available? That version should work.
Otherwise compiling from source should definitely work.

-sachidananda.