[Gluster-users] gluster native client failover testing

2017-01-04 Thread Colin Coe
Hi all

As the subject states, I'm doing glusterfs native client testing.

I've configured two test gluster servers (RHEL7) running glusterfs 3.7.18.

My test client is RHEL5.11 with the glusterfs-fuse RPM 3.7.18 installed.

The client has the following in /etc/fstab:
devfil01:/gv0   /share   glusterfs   defaults,backupvolfile-server=devfil02   0 0
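For reference, the equivalent manual mount (assuming the same servers and
mount point) would be something like:

mount -t glusterfs -o backupvolfile-server=devfil02 devfil01:/gv0 /share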

I used the following shell snippet to test gluster native client failover:
for I in `seq -w 1 1000`; do sleep 0.05; touch /share/test$I; echo $I; done
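A timestamped variant of the same loop, for example, makes it easier to see
how long I/O stalls during the failover (same mount point assumed):

for I in `seq -w 1 1000`; do sleep 0.05; touch /share/test$I && echo "$(date '+%T') $I"; done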

A hundred or so iterations in, I rebooted devfil01.  The snippet stopped
while devfil01 was down; it did not fail over to devfil02.

I've manually tested that devfil02 is working.

Any ideas what I'm doing wrong?

Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Performance testing striped 4 volume

2017-01-04 Thread Karan Sandha

Hi Zack,

Since the bricks have already been used before, gluster doesn't allow
creating a volume with the same brick path unless you append "force" to
the end of the command. As you are doing performance testing, I would
recommend cleaning the bricks and issuing the same command (a sketch of
the cleanup follows below).
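A minimal sketch of that cleanup, assuming the brick paths from your create
command and that the old data on them is disposable (run on each node, once
per brick):

# check whether the brick still carries the old volume's ID
getfattr -d -m . -e hex /gluster/ssd1/brick1 | grep volume-id
# strip the gluster xattrs and internal metadata, then retry the create
setfattr -x trusted.glusterfs.volume-id /gluster/ssd1/brick1
setfattr -x trusted.gfid /gluster/ssd1/brick1   # ignore an error if this xattr is absent
rm -rf /gluster/ssd1/brick1/.glusterfs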


Or, as a quick workaround, create the volume with new brick directories:

sudo gluster volume create gluster1 transport tcp cyan:/gluster/ssd1/brick1new \
green:/gluster/ssd1/brick2new red:/gluster/ssd1/brick3new pink:/gluster/ssd1/brick4new

For the time being, this will solve your problem.


Thanks & Regards

Karan Sandha


On 01/05/2017 05:53 AM, Zack Boll wrote:
In performance testing a striped 4 volume, I appeared to have crashed 
glusterfs using version 3.8.7 on Ubuntu 16.04.  I then stopped the 
volume and deleted it.  I am now having trouble creating a new volume; 
the output is below:


sudo gluster volume create gluster1 transport tcp 
cyan:/gluster/ssd1/brick1 green:/gluster/ssd1/brick2 
red:/gluster/ssd1/brick3 pink:/gluster/ssd1/brick4


volume create: gluster1: failed: Staging failed on green. Error: 
/gluster/ssd1/brick2 is already part of a volume
Staging failed on pink. Error: /gluster/ssd1/brick4 is already part of 
a volume
Staging failed on cyan. Error: /gluster/ssd1/brick1 is already part of 
a volume
Staging failed on red. Error: /gluster/ssd1/brick3 is already part of 
a volume


sudo gluster volume info
No volumes present

Any ideas on how to fix this?



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Performance testing striped 4 volume

2017-01-04 Thread Zack Boll
In performance testing a striped 4 volume, I appeared to have crashed
glusterfs using version 3.8.7 on Ubuntu 16.04.  I then stopped the volume
and deleted it.  I am now having trouble creating a new volume; the output
is below:

sudo gluster volume create gluster1 transport tcp cyan:/gluster/ssd1/brick1
green:/gluster/ssd1/brick2 red:/gluster/ssd1/brick3
pink:/gluster/ssd1/brick4

volume create: gluster1: failed: Staging failed on green. Error:
/gluster/ssd1/brick2 is already part of a volume
Staging failed on pink. Error: /gluster/ssd1/brick4 is already part of a
volume
Staging failed on cyan. Error: /gluster/ssd1/brick1 is already part of a
volume
Staging failed on red. Error: /gluster/ssd1/brick3 is already part of a
volume

sudo gluster volume info
No volumes present

Any ideas on how to fix this?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Cheers and some thoughts

2017-01-04 Thread Lindsay Mathieson
Hi all, just wanted to mention that since I had sole use of our cluster 
over the holidays and a complete set of backups :) I decided to test 
some alternate cluster software and do some stress testing.



Stress testing involved multiple soft and *hard* resets of individual 
servers and hard simultaneous resets of the entire cluster, where a hard 
reset is equivalent to a power outage.



Gluster (3.8.7) coped perfectly - no data loss, no maintenance required; 
each time it came up by itself with no hand holding and started healing 
nodes, which completed very quickly. VMs on gluster auto-started with 
no problems, and I/O load while healing was OK. I felt quite confident in it.



The alternate cluster fs - not so good. Many times running VMs were 
corrupted, and several times I lost the entire filesystem. Also, IOPS were 
atrocious (FUSE based). It's easy to claim HA when you exclude such things 
as power supply failures, dodgy network switches, etc.



I think Gluster's active/active, quorum-based design, where every node 
is a master, is a winner; active/passive systems where you have a SPOF 
master are difficult to manage for DR.



However :) Things I'd really like to see in Gluster:

- More flexible/easier management of servers and bricks (add/remove/replace)

- More flexible replication rules

One of the things I really *really* like with LizardFS is the powerful 
goal system and chunkservers. Nodes and disks can be trivially 
added/removed on the fly, and chunks will be shuffled, replicated or 
deleted to balance the system. Individual objects can have different 
goals (replication levels), which can also be changed on the fly, and the 
system will rebalance them. Objects can even be changed between simple 
replication and erasure-encoded storage.



I doubt this could be retrofitted onto the existing Gluster, but is there 
potential for this sort of thing in Gluster 4.0? I read the design docs 
and they look ambitious.



Cheers,


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Weekly Community Meeting - 20170104

2017-01-04 Thread Kaushal M
Good start to 2017. We had an active meeting this time.

The meeting minutes and weekly updates are available below, as well as
the links to the logs. The meeting agenda and updates have been
archived at
https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-01-04 .
Next week's meeting will be hosted by me, at the same time and place:
1200 UTC in #gluster-meeting on Freenode. The agenda is now open for
topics and updates at https://bit.ly/gluster-community-meetings .

See you all next week.

~kaushal

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-04/gluster_community_meeting_20170104.2017-01-04-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-04/gluster_community_meeting_20170104.2017-01-04-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-04/gluster_community_meeting_20170104.2017-01-04-12.00.log.html

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Action Items from last week


- Discuss participation in the meetings in January.
- Carryover to January

### Releases

#### GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2
    - New release GlusterD2 v4.0dev-4
    - More details on the work done at
https://www.gluster.org/pipermail/gluster-devel/2017-January/051805.html

#### GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Next release : 3.10.0
  - Target date: February 14, 2017
- Release tracker : https://github.com/gluster/glusterfs/milestone/1
- Updates:
  - Feature list frozen, link as above
  - Branching date: 17th Jan, 2017 (~4 weeks prior to the release date
of 14th Feb, 2017)
  - Feature readiness checkpoint will be done around 3rd/4th Jan, 2017
  - Call out: Feature specs need reviews and closure (will be sending
a mail regarding the same)
  - Reference mail:
http://www.gluster.org/pipermail/gluster-devel/2016-December/051674.html

#### GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.0
- Next release : 3.9.1
  - Release date : 20 January 2017
- Tracker bug :
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.1 (doesn't
exist)
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.9.0&maxdepth=2&hide_resolved=1
- Roadmap : https://www.gluster.org/community/roadmap/3.9/
- Updates:
  - _None_

#### GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.7
- Next release : 3.8.8
  - Release date : 10 January 2017
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.8
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.8.8&maxdepth=2&hide_resolved=1
- Updates:
  - _None_

#### GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.18
- Next release : 3.7.19
  - Release date : 30 December 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.19
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.7.19&maxdepth=2&hide_resolved=1
- Updates:
  - 3.7.18 was finally tagged a week late
  - Expect release/announcement later this week
  - Announcement done on 2016-12-13
  - https://www.gluster.org/pipermail/gluster-users/2016-December/029427.html
  - 3.7.19 should be tagged later this week (week of 20170101)

### Related projects and efforts

#### Community Infra

- The Gerrit OS upgrade didn't go through during the holidays. We'll
be scheduling it this month on an appropriate weekend.
- fstat also [shows branch
names](http://fstat.gluster.org/weeks/4/failure/82) now so you know if
a failure happens only in one specific branch. It only does this for
new jobs, not for old ones. (If there is demand, I'll add it for old
ones)
- When a NetBSD job is aborted, the machine will now be automatically
restarted.
- We've added an additional machine for NetBSD smoke since the queue
was getting quite long with just one machine.

#### Samba

- _None_

#### Ganesha

- _None_

#### Containers

- _None_

#### Testing

- _None_

#### Others

- aravindavk, Updates on Geo-replication
- https://www.gluster.org/pipermail/gluster-devel/2016-December/051636.html
- Top 5 regressions in
[December](https://www.gluster.org/pipermail/gluster-devel/2016-December/051792.html)


Meeting summary
---
* Rollcall  (kshlm, 12:00:38)

* STM and backports  (kshlm, 12:06:18)
  * ACTION: Need to find out when 3.9.1 is happening  (kshlm, 12:17:51)

* A common location for testing-tools  (kshlm, 12:18:20)
  * ACTION: shyam will file a bug to get arequal included in glusterfs
packages  (kshlm, 12:42:46)

* Developer workflow problems  (kshlm, 12:43:16)

Meeting ended at 13:10:03 UTC.




Action Items

* Need to find out when 3.9.1 is happening
* shyam will file a bug to get arequal included in glusterfs packages