Re: [Gluster-users] Problem adding replicated bricks on FreeBSD

2018-04-26 Thread Kaushal M
On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger 
wrote:

> Hi Folks,

> I'm trying to debug an issue that I've found while attempting to qualify
> GlusterFS for potential distributed storage projects on the FreeBSD-11.1
> server platform - using the existing package of GlusterFS v3.11.1_4

> The main issue I've encountered is that I cannot add new bricks while
> setting/increasing the replica count.

> If I create a replicated volume "poc" on two hosts, say s1:/gluster/1/poc
> and s2:/gluster/1/poc, the volume is created properly, shows replicated
> status, and files are written to both bricks.

> If I create a single volume: s1:/gluster/1/poc as a single / distributed
> brick, and then try to run

> gluster volume add-brick poc replica 2 s2:/gluster/1/poc

> it will always fail (sometimes after a pause, sometimes not). The only
> error I'm seeing on the server hosting the new brick, aside from the
> generic "Unable to add bricks" message, is like so:

> I [MSGID: 106578]
> [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management:
> replica-count is set 2
> I [MSGID: 106578]
> [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management:
> type is set 2, need to change it
> E [MSGID: 106054]
> [glusterd-utils.c:12974:glusterd_handle_replicate_brick_ops] 0-management:
> Failed to set extended attribute trusted.add-brick : Operation not
> supported [Operation not supported]

The log here indicates that the filesystem on the new brick being added
doesn't support setting xattrs.
Maybe check the new brick again?
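
For what it's worth, a quick way to manually verify xattr support on the
new brick's filesystem could be something like the following (FreeBSD's
setextattr/getextattr tools; the attribute name is just an example, and
I'm assuming Gluster's trusted.* xattrs land in the "system" extattr
namespace, so run it as root):

  touch /gluster/1/poc/xattr-test
  setextattr system trusted.test somevalue /gluster/1/poc/xattr-test
  getextattr system trusted.test /gluster/1/poc/xattr-test
  rm /gluster/1/poc/xattr-test

If setextattr fails with "Operation not supported", glusterd will too.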

> E [MSGID: 106074] [glusterd-brick-ops.c:2565:glusterd_op_add_brick]
> 0-glusterd: Unable to add bricks
> E [MSGID: 106123] [glusterd-mgmt.c:311:gd_mgmt_v3_commit_fn] 0-management:
> Add-brick commit failed.

> I was initially using ZFS and noted that ZFS on FreeBSD does not support
> xattr, so I reverted to using UFS as the storage type for the brick, and
> still encounter this behavior.

> I also recompiled the port (again, GlusterFS v3.11.1) with the patch from
> https://bugzilla.redhat.com/show_bug.cgi?id=1484246 as this deals
> specifically with xattr handling in FreeBSD.

> To recap - I'm able to create any type of volume (2 or 3-way replicated or
> distributed), but I'm unable to add replicated bricks to a volume.

> I was, however, able to add a second distributed brick ( gluster volume
> add-brick poc s2:/gluster/1/poc ) - so the issue seems specific to adding
> and/or changing the replica count while adding a new brick.

> Please let me know if there are any other issues in addition to bug
> #1452961 I should be aware of, or additional log or debug info I can
> provide.

> Best Regards,
> Mark
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] configure fails due to failure in locating libxml2-devel

2018-01-21 Thread Kaushal M
Did you run autogen.sh after installing libxml2-devel?
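
If not, a minimal sequence to try from a git checkout might be (assuming
the standard GlusterFS build steps):

  # Regenerate the configure script so it picks up the newly installed
  # libxml2-devel (its xml2-config / pkg-config data), then re-run it.
  ./autogen.sh
  ./configure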

On Mon, Jan 22, 2018 at 11:10 AM, Raghavendra G
 wrote:
> All,
>
> # ./configure
> 
> configure: error: libxml2 devel libraries not found
>
> # ls /usr/lib64/libxml2.so
> /usr/lib64/libxml2.so
>
> # ls /usr/include/libxml2/
> libxml
>
> # yum install libxml2-devel
> Package libxml2-devel-2.9.1-6.el7_2.3.x86_64 already installed and latest
> version
> Nothing to do
>
> Looks like the issue is very similar to one filed in:
> https://bugzilla.redhat.com/show_bug.cgi?id=64134
>
> Has anyone encountered this? How did you workaround this?
>
> regards,
> --
> Raghavendra G
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] [GD2] New release - GlusterD2 v4.0dev-10

2018-01-12 Thread Kaushal M
We have a new GD2 release!!

This has been a while coming. The last release happened around the
time of the Gluster Summit, and we have been working hard for the last
2 months.

There have been a lot of changes, most of them aimed at getting GD2 in
shape for release.  We also have new commands and CLIs implemented for
you to try.

The release is available from [1].
RPMs are available from the COPR repository at [2]. The RPMs require
the nightly builds of GlusterFS master, currently only available for
EL [3].
There is a quick start guide available at [4].

We're working on implementing more commands and we hope to have some
more preview releases before GlusterFS-4.0 lands.

Thanks!
GD2 developers

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-10
[2]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[3]: http://artifacts.ci.centos.org/gluster/nightly/master.repo
[4]: 
https://github.com/gluster/glusterd2/blob/v4.0dev-10/doc/quick-start-user-guide.md
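
For the curious, one possible way to get going on an EL7 machine could
be the following (the package and repo names are assumed from [2] and
[3]; adjust to taste):

  yum install -y yum-plugin-copr
  yum copr enable kshlm/glusterd2
  curl -o /etc/yum.repos.d/gluster-nightly-master.repo \
      http://artifacts.ci.centos.org/gluster/nightly/master.repo
  yum install -y glusterd2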
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-11-08

2017-11-08 Thread Kaushal M
!!REMINDER!!
Community meeting is back after 4 weeks off.
Today's community meeting is scheduled about 3 hours from now, at 1500 UTC.

Please add any topics you want to discuss and any updates you want to
share with the community into the meeting pad at
https://bit.ly/gluster-community-meetings

See you in #gluster-meeting!

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-05 Thread Kaushal M
On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil <ajneil.t...@gmail.com> wrote:
> Just so I am clear the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run upgrade script (new clients
> unable to connect at this point)
>
> start GD2 ( necessary or does the upgrade script do this?)
>
>
> I assume that once the cluster had been migrated to GD2 the glusterd startup
> script will be smart enough to start the correct version?
>

This should be the process, mostly.

The upgrade script needs GD2 to be running on all nodes before it can
begin migration.
But the nodes don't need to have formed a cluster; the script should
take care of forming the cluster.


> -Thanks
>
>
>
>
>
> On 3 November 2017 at 04:06, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <bu...@onholyground.com>
>> wrote:
>> > Will the various client packages (centos in my case) be able to
>> > automatically handle the upgrade vs new install decision, or will we be
>> > required to do something manually to determine that?
>>
>> We should be able to do this with CentOS (and other RPM based distros)
>> which have well split glusterfs packages currently.
>> At this moment, I don't know exactly how much can be handled
>> automatically, but I expect the amount of manual intervention to be
>> minimal.
>> The minimum amount of manual work needed would be enabling and
>> starting GD2 and running the migration script.
>>
>> >
>> > It’s a little unclear that things will continue without interruption
>> > because
>> > of the way you describe the change from GD1 to GD2, since it sounds like
>> > it
>> > stops GD1.
>>
>> With the described upgrade strategy, we can ensure continuous volume
>> access to clients during the whole process (provided volumes have been
>> setup with replication or ec).
>>
>> During the migration from GD1 to GD2, any existing clients still
>> retain access, and can continue to work without interruption.
>> This is possible because gluster keeps the management (glusterds) and
>> data (bricks and clients) parts separate.
>> So it is possible to interrupt the management parts, without
>> interrupting data access to existing clients.
>> Clients and the server side brick processes need GlusterD to start up.
>> But once they're running, they can run without GlusterD. GlusterD is
>> only required again if something goes wrong.
>> Stopping GD1 during the migration process will not lead to any
>> interruptions for existing clients.
>> The brick processes continue to run, and any connected clients
>> remain connected to the bricks.
>> Any new clients which try to mount the volumes during this migration
>> will fail, as a GlusterD will not be available (either GD1 or GD2).
>>
>> > Early days, obviously, but if you could clarify if that’s what
>> > we’re used to as a rolling upgrade or how it works, that would be
>> > appreciated.
>>
>> A Gluster rolling upgrade process allows data access to volumes
>> during the process, while upgrading the brick processes as well.
>> Rolling upgrades with uninterrupted access require that volumes have
>> redundancy (replicate or ec).
>> A rolling upgrade involves upgrading the servers belonging to a
>> redundancy set (replica set or ec set), one at a time:
>> - A server is picked from the redundancy set
>> - All Gluster processes are killed on the server, glusterd, bricks and
>> other daemons included.
>> - Gluster is upgraded and restarted on the server
>> - A heal is performed to heal new data onto the bricks.
>> - Move on to the next server after the heal finishes.
>>
>> Clients maintain uninterrupted access, because a full redundancy set
>> is never taken offline all at once.
>>
>> > Also clarification that we’ll be able to upgrade from 3.x
>> > (3.1x?) to 4.0, manually or automatically?
>>
>> Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe
>> gdeploy has playbooks to automate it.
>> At the end of this you will be left with a 4.0 cluster, but still be
>> running GD1.
>> Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
>> that automates this is planned only for 4.1.
>>
>> >
>> >
>> > 
>> > From: Kaushal M <kshlms...@gmail.com>
>> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-03 Thread Kaushal M
On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <bu...@onholyground.com> wrote:
> Will the various client packages (centos in my case) be able to
> automatically handle the upgrade vs new install decision, or will we be
> required to do something manually to determine that?

We should be able to do this with CentOS (and other RPM based distros)
which have well split glusterfs packages currently.
At this moment, I don't know exactly how much can be handled
automatically, but I expect the amount of manual intervention to be
minimal.
The minimum amount of manual work needed would be enabling and
starting GD2 and running the migration script.

>
> It’s a little unclear that things will continue without interruption because
> of the way you describe the change from GD1 to GD2, since it sounds like it
> stops GD1.

With the described upgrade strategy, we can ensure continuous volume
access to clients during the whole process (provided volumes have been
setup with replication or ec).

During the migration from GD1 to GD2, any existing clients still
retain access, and can continue to work without interruption.
This is possible because gluster keeps the management (glusterds) and
data (bricks and clients) parts separate.
So it is possible to interrupt the management parts, without
interrupting data access to existing clients.
Clients and the server side brick processes need GlusterD to start up.
But once they're running, they can run without GlusterD. GlusterD is
only required again if something goes wrong.
Stopping GD1 during the migration process will not lead to any
interruptions for existing clients.
The brick processes continue to run, and any connected clients
remain connected to the bricks.
Any new clients which try to mount the volumes during this migration
will fail, as a GlusterD will not be available (either GD1 or GD2).
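
To make the separation concrete, here's roughly what you'd observe on a
server (unit and mount names below are illustrative):

  systemctl stop glusterd     # stop the management daemon only
  pgrep -a glusterfsd         # brick processes are still running
  ls /mnt/myvol               # an existing client mount keeps working
  systemctl start glusterd    # management operations become available again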

> Early days, obviously, but if you could clarify if that’s what
> we’re used to as a rolling upgrade or how it works, that would be
> appreciated.

A Gluster rolling upgrade process allows data access to volumes
during the process, while upgrading the brick processes as well.
Rolling upgrades with uninterrupted access require that volumes have
redundancy (replicate or ec).
A rolling upgrade involves upgrading the servers belonging to a
redundancy set (replica set or ec set), one at a time:
- A server is picked from the redundancy set
- All Gluster processes are killed on the server, glusterd, bricks and
other daemons included.
- Gluster is upgraded and restarted on the server
- A heal is performed to heal new data onto the bricks.
- Move on to the next server after the heal finishes.

Clients maintain uninterrupted access, because a full redundancy set
is never taken offline all at once.
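
As a rough sketch, one iteration of this on a single server could look
like the following (the package manager, unit names and heal check will
vary by distro and setup; treat it as illustrative only):

  systemctl stop glusterd            # stop the management daemon
  pkill glusterfsd; pkill glusterfs  # kill bricks and other daemons
  yum update glusterfs-server        # upgrade to the new binaries
  systemctl start glusterd           # restart; bricks come back up
  gluster volume heal myvol info     # repeat until no entries remain,
                                     # then move to the next server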

> Also clarification that we’ll be able to upgrade from 3.x
> (3.1x?) to 4.0, manually or automatically?

Rolling upgrades from 3.1x to 4.0 are a manual process. But I believe
gdeploy has playbooks to automate it.
At the end of this you will be left with a 4.0 cluster, but still be
running GD1.
Upgrading from GD1 to GD2, in 4.0 will be a manual process. A script
that automates this is planned only for 4.1.

>
>
> 
> From: Kaushal M <kshlms...@gmail.com>
> Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
> Date: November 2, 2017 at 3:56:05 AM CDT
> To: gluster-users@gluster.org; Gluster Devel
>
> We're fast approaching the time for Gluster-4.0. And we would like to
> set out the expected upgrade strategy and try to polish it to be as
> user friendly as possible.
>
> We're getting this out here now, because there was quite a bit of
> concern and confusion regarding the upgrades between 3.x and 4.0+.
>
> ---
> ## Background
>
> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
> which is backwards incompatible with the GlusterD (GD1) in
> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
> established, rolling upgrades are not possible. This meant that
> upgrades from 3.x to 4.0 would require a volume downtime and possible
> client downtime.
>
> This was a cause of concern among many during the recently concluded
> Gluster Summit 2017.
>
> We would like to keep pains experienced by our users to a minimum, so
> we are trying to develop an upgrade strategy that avoids downtime as
> much as possible.
>
> ## (Expected) Upgrade strategy from 3.x to 4.0
>
> Gluster-4.0 will ship with both GD1 and GD2.
> For fresh installations, only GD2 will be installed and available by
> default.
> For existing installations (upgrades) GD1 will be installed and run by
> default. GD2 will also be installed simultaneously, but will not run
> automatically.
>
> GD1 will allow rolling upgrades, and allow properly setup Gluster
> volumes to be upgraded to 4.0 binaries, without downtime.

Re: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
On Thu, Nov 2, 2017 at 4:00 PM, Amudhan P <amudha...@gmail.com> wrote:
> if doing an upgrade from 3.10.1 to 4.0 or 4.1, will I be able to access
> volume without any challenge?
>
> I am asking this because 4.0 comes with DHT2?

Very short answer, yes. Your volumes will remain the same. And you
will continue to access them the same way.

RIO (as DHT2 is now known) developers in CC can provide more
information on this. But in short, RIO will not be replacing DHT. It
was renamed to make this clear.
Gluster 4.0 will continue to ship both DHT and RIO. All 3.x volumes
that exist will continue to use DHT, and continue to work as they
always have.
You will only be able to create new RIO volumes, and will not be able
to migrate DHT to RIO.

>
>
>
>
> On Thu, Nov 2, 2017 at 2:26 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> We're fast approaching the time for Gluster-4.0. And we would like to
>> set out the expected upgrade strategy and try to polish it to be as
>> user friendly as possible.
>>
>> We're getting this out here now, because there was quite a bit of
>> concern and confusion regarding the upgrades between 3.x and 4.0+.
>>
>> ---
>> ## Background
>>
>> Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> which is backwards incompatible with the GlusterD (GD1) in
>> GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
>> established, rolling upgrades are not possible. This meant that
>> upgrades from 3.x to 4.0 would require a volume downtime and possible
>> client downtime.
>>
>> This was a cause of concern among many during the recently concluded
>> Gluster Summit 2017.
>>
>> We would like to keep pains experienced by our users to a minimum, so
>> we are trying to develop an upgrade strategy that avoids downtime as
>> much as possible.
>>
>> ## (Expected) Upgrade strategy from 3.x to 4.0
>>
>> Gluster-4.0 will ship with both GD1 and GD2.
>> For fresh installations, only GD2 will be installed and available by
>> default.
>> For existing installations (upgrades) GD1 will be installed and run by
>> default. GD2 will also be installed simultaneously, but will not run
>> automatically.
>>
>> GD1 will allow rolling upgrades, and allow properly setup Gluster
>> volumes to be upgraded to 4.0 binaries, without downtime.
>>
>> Once the full pool is upgraded, and all bricks and other daemons are
>> running 4.0 binaries, migration to GD2 can happen.
>>
>> To migrate to GD2, all GD1 processes in the cluster need to be killed,
>> and GD2 started instead.
>> GD2 will not automatically form a cluster. A migration script will be
>> provided, which will form a new GD2 cluster from the existing GD1
>> cluster information, and migrate volume information from GD1 into GD2.
>>
>> Once migration is complete, GD2 will pick up the running brick and
>> other daemon processes and continue. This will only be possible if the
>> rolling upgrade with GD1 happened successfully and all the processes
>> are running with 4.0 binaries.
>>
>> During the whole migration process, the volume would still be online
>> for existing clients, who can continue to work. New clients will
>> not be able to connect during this time.
>>
>> After migration, existing clients will connect back to GD2 for
>> updates. GD2 listens on the same port as GD1 and provides the required
>> SunRPC programs.
>>
>> Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> versions, without volume downtime, will be possible.
>>
>> ### FAQ and additional info
>>
>>  Both GD1 and GD2? What?
>>
>> While both GD1 and GD2 will be shipped, the GD1 shipped will
>> essentially be the GD1 from the last 3.x series. It will not support
>> any of the newer storage or management features being planned for 4.0.
>> All new features will only be available from GD2.
>>
>>  How long will GD1 be shipped/maintained for?
>>
>> We plan to maintain GD1 in the 4.x series for at least a couple of
>> releases, at least 1 LTM release. Current plan is to maintain it till
>> 4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
>> then upgrade to newer releases.
>>
>>  Migration script
>>
>> The GD1 to GD2 migration script and the required features in GD2 are
>> being planned only for 4.1. This would technically mean most users
>> will only be able to migrate from 3.x to 4.1. But users can still
>> migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> improvements. They would only be missing any new features.

[Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+

2017-11-02 Thread Kaushal M
We're fast approaching the time for Gluster-4.0. And we would like to
set out the expected upgrade strategy and try to polish it to be as
user friendly as possible.

We're getting this out here now, because there was quite a bit of
concern and confusion regarding the upgrades between 3.x and 4.0+.

---
## Background

Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
which is backwards incompatible with the GlusterD (GD1) in
GlusterFS-3.1+.  As a hybrid cluster of GD1 and GD2 cannot be
established, rolling upgrades are not possible. This meant that
upgrades from 3.x to 4.0 would require a volume downtime and possible
client downtime.

This was a cause of concern among many during the recently concluded
Gluster Summit 2017.

We would like to keep pains experienced by our users to a minimum, so
we are trying to develop an upgrade strategy that avoids downtime as
much as possible.

## (Expected) Upgrade strategy from 3.x to 4.0

Gluster-4.0 will ship with both GD1 and GD2.
For fresh installations, only GD2 will be installed and available by default.
For existing installations (upgrades) GD1 will be installed and run by
default. GD2 will also be installed simultaneously, but will not run
automatically.

GD1 will allow rolling upgrades, and allow properly setup Gluster
volumes to be upgraded to 4.0 binaries, without downtime.

Once the full pool is upgraded, and all bricks and other daemons are
running 4.0 binaries, migration to GD2 can happen.

To migrate to GD2, all GD1 processes in the cluster need to be killed,
and GD2 started instead.
GD2 will not automatically form a cluster. A migration script will be
provided, which will form a new GD2 cluster from the existing GD1
cluster information, and migrate volume information from GD1 into GD2.
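
In practice, the switchover on each node might look roughly like this
(the migration script is still being planned, and the glusterd2 unit
name is an assumption):

  systemctl stop glusterd      # stop GD1 on every node
  systemctl enable glusterd2   # start GD2 instead
  systemctl start glusterd2
  # then, from any one node, run the planned migration script to form
  # the GD2 cluster and import volume information from GD1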

Once migration is complete, GD2 will pick up the running brick and
other daemon processes and continue. This will only be possible if the
rolling upgrade with GD1 happened successfully and all the processes
are running with 4.0 binaries.

During the whole migration process, the volume would still be online
for existing clients, who can continue to work. New clients will
not be able to connect during this time.

After migration, existing clients will connect back to GD2 for
updates. GD2 listens on the same port as GD1 and provides the required
SunRPC programs.

Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
versions, without volume downtime, will be possible.

### FAQ and additional info

 Both GD1 and GD2? What?

While both GD1 and GD2 will be shipped, the GD1 shipped will
essentially be the GD1 from the last 3.x series. It will not support
any of the newer storage or management features being planned for 4.0.
All new features will only be available from GD2.

 How long will GD1 be shipped/maintained for?

We plan to maintain GD1 in the 4.x series for at least a couple of
releases, at least 1 LTM release. Current plan is to maintain it till
4.2. Beyond 4.2, users will need to first upgrade from 3.x to 4.2, and
then upgrade to newer releases.

 Migration script

The GD1 to GD2 migration script and the required features in GD2 are
being planned only for 4.1. This would technically mean most users
will only be able to migrate from 3.x to 4.1. But users can still
migrate from 3.x to 4.0 with GD1 and get many bug fixes and
improvements. They would only be missing any new features. Users who
live on the edge should be able to do the migration manually in 4.0.

---

Please note that the document above gives the expected upgrade
strategy, and is not final, nor complete. More details will be added
and steps will be expanded upon, as we move forward.

To move forward, we need your participation. Please reply to this
thread with any comments you have. We will try to answer and solve any
questions or concerns. If there are good new ideas/suggestions, they
will be integrated. If you just like it as is, let us know anyway.

Thanks.

Kaushal and Gluster Developers.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-10-11

2017-10-11 Thread Kaushal M
We had a quick meeting today, with 2 main topics.

We have a new community issue tracker [1], which will be used to track
community initiatives. Amye will be sharing more information about
this in another email.

To better co-ordinate people travelling to the Gluster Community
Summit, a spreadsheet [2] has been set up to share information.

Apart from the above 2 topics, Shyam shared that he is on the lookout
for a partner to manage the 4.0 release.

For more information, meeting logs and minutes are available at the
links below. [3][4][5]

The meeting scheduled for 25 Oct is being skipped, as a lot of the
attendees will be travelling to the Gluster Summit at the time. The
next meeting is now scheduled for 8th Nov.

See you then.

~kaushal

[1]: https://github.com/gluster/community
[2]: 
https://docs.google.com/spreadsheets/d/1Jde-5XNc0q4a8bW8-OmLC2w_jiPg-e53ssR4wanIhFk/edit#gid=0
[3]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.html
[4]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.txt
[5]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-10-11/gluster_community_meeting_2017-10-11.2017-10-11-15.03.log.html
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Community Meeting 2017-08-30

2017-08-30 Thread Kaushal M
On Wed, Aug 30, 2017 at 3:07 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All,
>
> This is a reminder about today's meeting. The meeting will start later
> today at 1500UTC.
> Please add topics and updates to the meeting pad at
> https://bit.ly/gluster-community-meetings
>
> Thanks.

The meeting minutes and logs can be found at the links below. [1][2][3][4]

In this meeting we had updates on our docs site and the website.
The docs site is now accessible via the docs.gluster.org [5] address
and has improved search. There is still one more change in progress
that will complete the move to the docs.gluster.org domain. In the
longer term, the plan is to host the docs on our own infrastructure.
The new community website is currently being staged at [6]. This
combines our previous website, blog and blog aggregator (planet) into
a single Wordpress instance with consistent look and feel. Please
report any bugs you find in the website at [7].

The next meeting is scheduled for 13th September. Add your updates and
topics to the meeting pad at [8].

Thanks.

~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-08-30
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-30/community_meeting_2017-08-30.2017-08-30-15.02.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-30/community_meeting_2017-08-30.2017-08-30-15.02.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-30/community_meeting_2017-08-30.2017-08-30-15.02.log.html
[5]: http://docs.gluster.org
[6]: https://gluster.wpengine.com/
[7]: https://github.com/gluster/glusterweb/issues
[8]: https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-08-30

2017-08-30 Thread Kaushal M
Hi All,

This is a reminder about today's meeting. The meeting will start later
today at 1500UTC.
Please add topics and updates to the meeting pad at
https://bit.ly/gluster-community-meetings

Thanks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-08-{02,16}

2017-08-16 Thread Kaushal M
This is a combined update for the last two community meetings (because
I forgot to send out the update for the earlier meeting, my bad).

# Community Meeting 2017-08-02

There weren't any explicit topics of discussion, but we had updates on
action items and releases. The logs and minutes are available at
[1][2][3]. The minutes are also available at the end of this mail.

# Community Meeting 2017-08-16

Shyam called out 3.12rc0 and wanted to remind everyone to test it.

Without any other topics up for discussion, the meeting ended early.
Logs and minutes are available at [4][5][6].


The next meeting is scheduled for the 30th of August. Do attend, and
don't forget to add any topics/updates to the meeting pad at [7].

Thanks.

~kaushal

[1]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-02/community_meeting_2017-08-02.2017-08-02-15.06.html
[2]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-02/community_meeting_2017-08-02.2017-08-02-15.06.txt
[3]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-02/community_meeting_2017-08-02.2017-08-02-15.06.log.html
[4]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-16/community_meeting_2017-08-16.2017-08-16-15.01.html
[5]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-16/community_meeting_2017-08-16.2017-08-16-15.01.txt
[6]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-08-16/community_meeting_2017-08-16.2017-08-16-15.01.log.html
[7]: https://bit.ly/gluster-community-meetings


Meeting summary Community Meeting 2017-08-02
---
* ndevos will check with Eric Harney about the Openstack Gluster efforts
  (kshlm, 15:07:09)

* JoeJulian to invite Harsha to next meeting to discuss Minio  (kshlm,
  15:16:20)

* shyam will edit release pages and milestones to reflect 4.0 is STM
  (kshlm, 15:23:41)
  * LINK: https://www.gluster.org/community/release-schedule/# still
says LTM  (kkeithley, 15:26:47)

Meeting ended at 15:45:32 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (63)
* ndevos (13)
* kkeithley (10)
* JoeJulian (8)
* zodbot (4)
* amye (2)
* loadtheacc (1)
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//www.marudot.com//iCal Event Maker
X-WR-CALNAME:Gluster Community Meeting
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTAMP:20170816T155050Z
UID:20170816t155050z-1618495...@marudot.com
DTSTART;TZID="Etc/UTC":20170830T150000
DTEND;TZID="Etc/UTC":20170830T160000
SUMMARY:Gluster Community Meeting
URL:https%3A%2F%2Fbit.ly%2Fgluster-community-meetings
DESCRIPTION:The Gluster Community Meeting. Agenda and meeting pad is at https://bit.ly/gluster-community-meetings
LOCATION:#gluster-meeting on Freenode
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:Gluster Community Meeting
TRIGGER:-PT5M
END:VALARM
END:VEVENT
END:VCALENDAR
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] When can I start using a peer that was added to a large volume?

2017-08-02 Thread Kaushal M
On Wed, Aug 2, 2017 at 8:27 PM, Tom Cannaerts - INTRACTO <
tom.cannae...@intracto.com> wrote:

> I added a peer to a 50GB replica volume and initial replication seems to
> go rather slow. It's about 50GB but has a lot of small files and a lot of
> files in the same folder.
>
> What would happen if I try to access a file on the new peer? Will it just
> fail? Will gluster fetch it seamlessly from the replication partner? Or will
> the file just not be there?
>

If you have mounted the volume on the new peer, and are trying to access
the files through the GlusterFS mount, you should face no problems. You
won't get the best possible performance, owing to GlusterFS's general
behaviour with small files and the additional load of the ongoing heal,
but access will work.

You should never directly access files of a GlusterFS volume from the
backend brick directories.
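
In other words, something like this (the volume and mount point names
are made up for illustration):

  # Correct: go through a GlusterFS mount
  mount -t glusterfs newpeer:/myvol /mnt/myvol
  ls /mnt/myvol              # reads are served/healed transparently
  # Wrong: reading the brick directory directly on the server, e.g.
  #   ls /bricks/myvol       # bypasses replication and heal logic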


>
> Thanks,
>
> --
> With kind regards,
> Tom Cannaerts
>
>
> *Service and MaintenanceIntracto - digital agency*
>
> Zavelheide 15 - 2200 Herentals
> Tel: +32 14 28 29 29
> www.intracto.com
>
>
> Are you satisfied with this email?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterd daemon - restart

2017-08-02 Thread Kaushal M
On Wed, Aug 2, 2017 at 5:07 PM, Mark Connor  wrote:
> Can the glusterd daemon be restarted on all storage nodes without causing
> any disruption to data being served or the cluster in general? I am running
> gluster 3.2 using distributed replica 2 volumes with fuse clients.

Yes, in general. Any clients already connected will still continue to work.

What can be a problem is new clients, because glusterd may sometimes
not pick up already running bricks. But most of these issues have been
fixed in recent versions of GlusterFS.
So, since you're using GlusterFS-3.2, which is really-really-really
old, you may face this issue.

You should try to run a more recent and supported version of GlusterFS
if possible. 3.2 is not supported and hasn't been updated in over 5
years.
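
For example, a restart plus a quick sanity check on each node could look
like this (systemd shown; a 3.2-era system will likely use an init
script, e.g. "service glusterd restart"):

  systemctl restart glusterd
  pgrep -a glusterfsd          # brick processes were never touched
  gluster volume info          # management is responding again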

> Regards,
> Mark
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] [Update] GD2 - what's been happening

2017-08-02 Thread Kaushal M
Hello!
We're restarting regular GD2 updates. This is the first one, and I
expect to send these out every other week.

In the last month, we've identified a few core areas that we need to
focus on. With solutions in place for these, we believe we're ready to
start deeper integration with glusterfs, which will require changes in
the rest of the code base.

As of release v4.0dev-7, GD2 provides the following features that
would be required for writing/developing Gluster management commands:

- A transaction framework to orchestrate actions across the cluster
- A ReST framework to add new ReST endpoints and handlers
- A central store based on etcd, to store cluster information
- An auto-forming and auto-scaling etcd cluster

Using these features, we can currently form and run basic GlusterFS
clusters and volumes. While this works, it is not very usable yet, nor
is it ready for further integration with the rest of GlusterFS.

We've identified and begun working on 3 specific features, that will
make GD2 more usable, and become ready for integration.

- CLI and ReST client packages [1][2][3]
  Aravinda has begun working on creating a CLI application for GD2
that talks to GD2 using ReST. As a related effort, he's also creating
a GD2 rest-client Go package. With this available, users will be more
easily able to form and use a GD2 cluster. The client package will
also help us write tests using the end-to-end test framework.

- Volume set [3][4][5]
  Prashanth has been working on implementing the volume set
functionality. This is necessary because it allows volumes to be
customized after creation. Xlator options will be read directly from
the xlators, instead of having a mapping table in GD2. This
means that xlator developers will not need any changes in GD2 to add
new options to their xlator. What this also means is that we will
require some changes to the xlator options table to add some
information that used to be available in the GD options table. We will
be detailing the required changes soon. (For a GD1 comparison, see the
example after this list.)

- Volgen [6]
  I've been working on getting a volgen package and framework ready.
We had a very ambitious design earlier [7] involving a dynamic graph
generator with dependency resolution. Work was done on this long back
[8], but was stopped as it turned out to be too complex. The new plan
is much simpler, with graph order described using template files,
and different template files for different volume types.
While this will not be as flexible to the end-user, it is much easier
for developers to add new xlators to graphs. As with volume set,
xlator developers will not need to change anything in GD2. But there
will be changes necessary in the xlators themselves to make the
xlators ready to be automatically picked up and used by GD2. We are
still figuring out the changes needed and finalizing the design. I
will update the wiki with more details on the design, and share
details on the changes required.
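
Coming back to volume set: for reference, GD2's behaviour is meant to
line up with what the GD1 CLI exposes today (the option name below is
purely an example):

  gluster volume set myvol performance.cache-size 256MB
  gluster volume get myvol performance.cache-size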

In addition to the above, we've had bug fixes to our store and etcd
packages that make cluster scaling more reliable. Aravinda has also
started initial work on a geo-replication plugin [9], which will help
us develop our plugin infrastructure and serve as a demo/example of a
GD2 plugin for developers.

This concludes the updates. Thanks for reading.

~kaushal

[1] https://github.com/gluster/glusterd2/pull/334
[2] https://github.com/gluster/glusterd2/pull/337
[3] https://github.com/gluster/glusterd2/pull/335
[4] https://github.com/gluster/glusterd2/pull/339
[5] https://github.com/gluster/glusterd2/pull/345
[6] https://github.com/gluster/glusterd2/pull/351
[7] 
https://github.com/gluster/glusterd2/wiki/Flexible-Volgen-(Old)#systemd-units-style-1
[8] https://github.com/kshlm/glusterd2-volgen/tree/volgen-systemd-style
[9] https://github.com/gluster/glusterd2/pull/349
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Community Meeting 2017-07-19

2017-07-24 Thread Kaushal M
On Wed, Jul 19, 2017 at 8:08 PM, Kaushal M <kshlms...@gmail.com> wrote:
> This is a (late) reminder about today's meeting. The meeting begins
> about 20 minutes from now.
>
> The meeting notepad is at https://bit.ly/gluster-community-meetings
> and currently has no topics for discussion. If you have anything to be
> discussed please add it to the pad.
>
> ~kaushal

Apologies for the late update.

The last community meeting happened with good participation. I hope to
see the trend continuing.

The meeting minutes and logs are available at the links below.

The next meeting is scheduled for 2nd August. The meeting notepad is
at [4] for your updates and topics for discussion.

See you at the next meeting.

~kaushal

[1]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-19/community_meeting_2017-07-19.2017-07-19-15.02.html
[2]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-19/community_meeting_2017-07-19.2017-07-19-15.02.txt
[3]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-19/community_meeting_2017-07-19.2017-07-19-15.02.log.html
[4]: https://bit.ly/gluster-community-meetings

Meeting summary
---
* Should we build 3.12 packages for old distros  (kshlm, 15:06:23)
  * AGREED: 4.0 will drop support for EL6 and other old distros. Will
see what can be done if and when someone wants to do it anyway.
(kshlm, 15:21:48)

* Is 4.0 LTM or STM?  (kshlm, 15:24:04)
  * AGREED: 4.0 is STM. Will take call on 4.1 and beyond later.  (kshlm,
15:38:54)
  * ACTION: shyam will edit release pages and milestones to reflect 4.0
is STM.  (kshlm, 15:39:59)

Meeting ended at 16:02:12 UTC.




Action Items

* shyam will edit release pages and milestones to reflect 4.0 is STM.




Action Items, by person
---
* shyam
  * shyam will edit release pages and milestones to reflect 4.0 is STM.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (91)
* bulde (30)
* ndevos (27)
* amye (22)
* shyam (20)
* nigelb (17)
* Snowman (16)
* kkeithley (13)
* vbellur (8)
* zodbot (3)
* jstrunk (3)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-07-19

2017-07-19 Thread Kaushal M
This is a (late) reminder about today's meeting. The meeting begins
about 20 minutes from now.

The meeting notepad is at https://bit.ly/gluster-community-meetings
and currently has no topics for discussion. If you have anything to be
discussed please add it to the pad.

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-07-05 Minutes

2017-07-07 Thread Kaushal M
Hi all,

The meeting minutes and logs for the community meeting held on
Wednesday are available at the links below. [1][2][3][4]

We had a good showing this meeting. Thank you everyone who attended
this meeting.

Our next meeting will be on 19th July. Everyone is welcome to attend.
The meeting note pad is available at [5] to add your topics for
discussion.

Thanks,
Kaushal

[1]: Minutes: 
https://meetbot-raw.fedoraproject.org/gluster-meeting/2017-07-05/gluster_community_meeting_2017-07-05.2017-07-05-15.02.html
[2]: Minutes (text):
https://meetbot-raw.fedoraproject.org/gluster-meeting/2017-07-05/gluster_community_meeting_2017-07-05.2017-07-05-15.02.txt
[3]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-07-05/gluster_community_meeting_2017-07-05.2017-07-05-15.02.log.html
[4]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-07-05
[5]: https://bit.ly/gluster-community-meetings

Meeting summary
---
* Experimental features  (kshlm, 15:06:16)

* Test cases contribution from community  (kshlm, 15:11:40)
  * ACTION: The person leading the application specific tests cases
should start the survey to collect info on applications used with
gluster  (kshlm, 15:39:13)
  * ACTION: kshlm to find out who that person is  (kshlm, 15:39:26)

* ndevos will check with Eric Harney about the Openstack Gluster efforts
  (kshlm, 15:45:17)
  * ACTION: ndevos will check with Eric Harney about the Openstack
Gluster efforts  (kshlm, 15:46:23)

* nigelb will document kkeithley's build process for glusterfs packages
  (kshlm, 15:47:21)
  * ACTION: nigelb will document the walkthrough given by kkeithley on
building packages  (kshlm, 15:48:58)
  * 3.11.1 was tagged. But there hasn't been a release announcement that
I've seen yet.  (kshlm, 15:50:58)
  * LINK:
http://lists.gluster.org/pipermail/gluster-users/2017-June/031618.html
(amye, 15:51:56)
  * 3.11.1 was announced.  (kshlm, 15:52:32)
  * LINK:
http://lists.gluster.org/pipermail/gluster-users/2017-June/031400.html
is the last post I saw on this  (amye, 15:59:42)

Meeting ended at 16:07:53 UTC.




Action Items

* The person leading the application specific tests cases should start
  the survey to collect info on applications used with gluster
* kshlm to find out who that person is
* ndevos will check with Eric Harney about the Openstack Gluster efforts
* nigelb will document the walkthrough given by kkeithley on building
  packages




Action Items, by person
---
* kkeithley
  * nigelb will document the walkthrough given by kkeithley on building
packages
* kshlm
  * kshlm to find out who that person is
* ndevos
  * ndevos will check with Eric Harney about the Openstack Gluster
efforts
* **UNASSIGNED**
  * The person leading the application specific tests cases should start
the survey to collect info on applications used with gluster




People Present (lines said)
---
* kshlm (95)
* ndevos (30)
* amye (18)
* kkeithley (15)
* zodbot (3)
* vbellur (3)
* jstrunk (2)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] [New Release] GlusterD2 v4.0dev-7

2017-07-05 Thread Kaushal M
After nearly 3 months, we have another preview release for GlusterD-2.0.

The highlights for this release are,
- GD2 now uses an auto scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet.)
- An end-to-end functional testing framework is now available.
- RPMs are available for Fedora >= 25 and EL7.

This release still doesn't provide a CLI. The HTTP ReST API is the
only access method right now.
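
Until the CLI lands, that means talking to GD2 over raw HTTP, e.g. with
curl (the port and endpoint paths below are assumptions based on GD2
defaults, not taken from the release notes):

  curl http://127.0.0.1:24007/v1/peers     # list peers in the cluster
  curl http://127.0.0.1:24007/v1/volumes   # list volumes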

Prebuilt binaries are available from [1]. RPMs have been built in
Fedora Copr and available at [2]. A Docker image is also available
from [3].

Try this release out and let us know if you face any problems at [4].

The GD2 development team is re-organizing and kicking off development
again. So regular updates can be expected.

Cheers,
Kaushal and the GD2 developers.

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-7
[2]: https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/
[3]: https://hub.docker.com/r/gluster/glusterd2-test/
[4]: https://github.com/gluster/glusterd2/issues
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Community Meeting 2017-06-07

2017-06-07 Thread Kaushal M
Today's meeting didn't happen due to low turnout. The next meeting is
on 2017-06-21.

~kaushal

On Wed, Jun 7, 2017 at 6:06 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Just a reminder. The community meeting is scheduled to happen in about
> 2.5 hours. Please add topics you want to discuss and any updates you
> have to the meeting notepad at [1].
>
> ~kaushal
>
> [1]: https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Community Meeting 2017-06-07

2017-06-07 Thread Kaushal M
Just a reminder. The community meeting is scheduled to happen in about
2.5 hours. Please add topics you want to discuss and any updates you
have to the meeting notepad at [1].

~kaushal

[1]: https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-05-24

2017-05-24 Thread Kaushal M
Very poor turnout today, just 3 attendees including me.

But, we actually did have a discussion and came out with a couple of AIs.

The logs and minutes are available at the links below.

Archive: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-05-24
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-24/gluster_community_meeting_2017-05-24.2017-05-24-15.03.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-24/gluster_community_meeting_2017-05-24.2017-05-24-15.03.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-24/gluster_community_meeting_2017-05-24.2017-05-24-15.03.log.html


The next meeting will be held on 7-June. The meeting pad is at
https://bit.ly/gluster-community-meetings as always for your updates
and topics.

~kaushal

Meeting summary
---
* Roll Call  (kshlm, 15:08:19)

* Openstack Cinder glusterfs support has been removed  (kshlm, 15:17:35)
  * LINK:
https://wiki.openstack.org/wiki/ThirdPartySystems/RedHat_GlusterFS_CI
shows BharatK and deepakcs  (JoeJulian, 15:30:46)
  * LINK:

https://github.com/openstack/cinder/commit/16e93ccd4f3a6d62ed9d277f03b64bccc63ae060
(kshlm, 15:38:52)
  * ACTION: ndevos will check with Eric Harney about the Openstack
Gluster efforts  (kshlm, 15:39:49)
  * ACTION: JoeJulian will share his conversations with Eric Harney
(kshlm, 15:40:24)

Meeting ended at 15:42:15 UTC.




Action Items

* ndevos will check with Eric Harney about the Openstack Gluster efforts
* JoeJulian will share his conversations with Eric Harney




Action Items, by person
---
* JoeJulian
  * JoeJulian will share his conversations with Eric Harney
* ndevos
  * ndevos will check with Eric Harney about the Openstack Gluster
efforts
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (37)
* ndevos (31)
* JoeJulian (30)
* zodbot (6)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting 2017-05-10

2017-05-18 Thread Kaushal M
Once again, I couldn't send out this mail quickly enough. Sorry for that.

The meeting minutes and logs for this meeting are available at the links below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.log.html

The meeting pad has been archived at
https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-05-10
.

The next meeting is on 24th May. The meeting pad is available at
https://bit.ly/gluster-community-meetings to add updates and topics
for discussion.

~kaushal

==
#gluster-meeting: Gluster Community Meeting 2017-05-10
==


Meeting started by kshlm at 15:00:50 UTC. The full logs are available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-10/gluster_community_meeting_2017-05-10.2017-05-10-15.00.log.html
.



Meeting summary
---
* Roll Call  (kshlm, 15:05:28)

* Github issues  (kshlm, 15:10:06)

* Coverity progress  (kshlm, 15:23:51)

* Good build?  (kshlm, 15:30:41)
  * LINK:

https://software.intel.com/en-us/articles/intel-c-compiler-170-for-linux-release-notes-for-intel-parallel-studio-xe-2017
(kkeithley, 15:38:55)

* External Monitoring of Gluster performance / metrics  (kshlm,
  15:40:32)

* What is the status on getting gluster-block into Fedora?  (kshlm,
  15:53:31)

Meeting ended at 16:04:13 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (85)
* JoeJulian (21)
* kkeithley (21)
* jdarcy (17)
* BatS9 (12)
* amye (12)
* vbellur (8)
* zodbot (5)
* sanoj (5)
* ndevos (4)
* rafi (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community meeting 2017-05-10

2017-05-10 Thread Kaushal M
Hi all,

Today's meeting is scheduled to happen in 6 hours at 1500UTC. The
meeting pad is at https://bit.ly/gluster-community-meetings . Please
add your updates and topics for discussion.

I had forgotten to send out the meeting minutes and logs for the last
meeting which happened on 2017-04-26. There wasn't a lot of discussion
in the meeting, but the logs and minutes are available at the links
below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-26/gluster_community_meeting_2017-04-26.2017-04-26-15.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-26/gluster_community_meeting_2017-04-26.2017-04-26-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-26/gluster_community_meeting_2017-04-26.2017-04-26-15.00.log.html

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-05 Thread Kaushal M
On Thu, May 4, 2017 at 6:40 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, May 4, 2017 at 4:38 PM, Niels de Vos <nde...@redhat.com> wrote:
>> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>>> > <pkara...@redhat.com> wrote:
>>> > >
>>> > >
>>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam <srang...@redhat.com> wrote:
>>> > >>
>>> > >> Hi,
>>> > >>
>>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>>> > >>
>>> > >> We have ~4weeks to release of 3.11, and a week to backport features 
>>> > >> that
>>> > >> slipped the branching date (May-5th).
>>> > >>
>>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. 
>>> > >> Request
>>> > >> that any bug that is determined as a blocker for the release be noted
>>> > as a
>>> > >> "blocks" against this bug.
>>> > >>
>>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>>> > >> identified that should prevent the release, need to be tracked against
>>> > this
>>> > >> tracker bug.
>>> > >>
>>> > >> We are not building beta1 packages, and will build out RC0 packages 
>>> > >> once
>>> > >> we cross the backport dates. Hence, folks interested in testing this
>>> > out can
>>> > >> either build from the code or wait for (about) a week longer for the
>>> > >> packages (and initial release notes).
>>> > >>
>>> > >> Features tracked as slipped and expected to be backported by 5th May
>>> > are,
>>> > >>
>>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>>> > >>
>>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>>> > >>   - Needs a +2 on https://review.gluster.org/13762
>>> > >>
>>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>>> > >> dirents #174 (@skoduri)
>>> > >>
>>> > >> 4) Halo - Initial version (@pranith)
>>> > >
>>> > >
>>> > > I merged the patch on master. Will send out the port on Thursday. I have
>>> > to
>>> > > leave like right now to catch train and am on leave tomorrow, so will be
>>> > > back on Thursday and get the port done. Will also try to get the other
>>> > > patches fb guys mentioned post that preferably by 5th itself.
>>> >
>>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>>> > patch. This shouldn't have happened.
>>> > The IPv6 patch is currently stalled because it depends on an internal
>>> > FB library. The IPv6 bits that made it in pull this dependency.
>>> > This would have lead to a -2 on the HALO patch by me, but as I wasn't
>>> > aware of it, the patch was merged.
>>> >
>>> > The IPV6 changes are in rpcsvh.{c,h} and configure.ac, and don't seem
>>> > to affect anything HALO. So they should be easily removable and should
>>> > be removed.
>>> >
>>>
>>> As per the configure.ac the macro is enabled only when we are building
>>> gluster with "--with-fb-extras", which I don't think we do anywhere, so
>>> didn't think they are important at the moment. Sorry for the confusion
>>> caused because of this. Thanks to Kaushal for the patch. I will backport
>>> that one as well when I do the 3.11 backport of HALO. So will wait for the
>>> backport until Kaushal's patch is merged.
>>
>> Note that there have been discussions about preventing special vendor
>> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
>> is not acceptable. Someone was interested in providing a "site.h"
>> configuration file that different vendors can use to fine-tune certain
>> things that are too detailed for ./configure options.
>>
>> We should remove the --with

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-04 Thread Kaushal M
On Thu, May 4, 2017 at 4:38 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>> > <pkara...@redhat.com> wrote:
>> > >
>> > >
>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam <srang...@redhat.com> wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>> > >>
>> > >> We have ~4weeks to release of 3.11, and a week to backport features that
>> > >> slipped the branching date (May-5th).
>> > >>
>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> > >> that any bug that is determined as a blocker for the release be noted
>> > as a
>> > >> "blocks" against this bug.
>> > >>
>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>> > >> identified that should prevent the release, need to be tracked against
>> > this
>> > >> tracker bug.
>> > >>
>> > >> We are not building beta1 packages, and will build out RC0 packages once
>> > >> we cross the backport dates. Hence, folks interested in testing this
>> > out can
>> > >> either build from the code or wait for (about) a week longer for the
>> > >> packages (and initial release notes).
>> > >>
>> > >> Features tracked as slipped and expected to be backported by 5th May
>> > are,
>> > >>
>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>> > >>
>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>> > >>   - Needs a +2 on https://review.gluster.org/13762
>> > >>
>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>> > >> dirents #174 (@skoduri)
>> > >>
>> > >> 4) Halo - Initial version (@pranith)
>> > >
>> > >
>> > > I merged the patch on master. Will send out the port on Thursday. I have
>> > to
>> > > leave like right now to catch train and am on leave tomorrow, so will be
>> > > back on Thursday and get the port done. Will also try to get the other
>> > > patches fb guys mentioned post that preferably by 5th itself.
>> >
>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>> > patch. This shouldn't have happened.
>> > The IPv6 patch is currently stalled because it depends on an internal
>> > FB library. The IPv6 bits that made it in pull this dependency.
>> > This would have led to a -2 on the HALO patch by me, but as I wasn't
>> > aware of it, the patch was merged.
>> >
>> > The IPv6 changes are in rpcsvc.{c,h} and configure.ac, and don't seem
>> > to affect anything HALO. So they should be easily removable and should
>> > be removed.
>> >
>>
>> As per configure.ac the macro is enabled only when we are building
>> gluster with "--with-fb-extras", which I don't think we do anywhere, so I
>> didn't think it was important at the moment. Sorry for the confusion
>> caused because of this. Thanks to Kaushal for the patch. I will backport
>> that one as well when I do the 3.11 backport of HALO. So I will hold the
>> backport until Kaushal's patch is merged.
>
> Note that there have been discussions about preventing special vendor
> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
> is not acceptable. Someone was interested in providing a "site.h"
> configuration file that different vendors can use to fine-tune certain
> things that are too detailed for ./configure options.
>
> We should remove the --with-fb-extras as well, especially because it is
> not useful for anyone that does not have access to the forked fbtirpc
> library.
>
> Kaushal mentioned he'll update the patch that removed the IPv6 default
> define, to also remove the --with-fb-extras and related bits.

The patch removing IPV6 and fbextras is at
https://review.gluster.org/17174 waiting for regression tests to run.

I've merged the Selinux backports, https://review.gluster.org/17159
and https:

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-03 Thread Kaushal M
On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:
>>
>> Hi,
>>
>> Release 3.11 for gluster has been branched [1] and tagged [2].
>>
>> We have ~4weeks to release of 3.11, and a week to backport features that
>> slipped the branching date (May-5th).
>>
>> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> that any bug that is determined as a blocker for the release be noted as a
>> "blocks" against this bug.
>>
>> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> weeks need not be reflected against the blocker, *only* blocker bugs
>> identified that should prevent the release, need to be tracked against this
>> tracker bug.
>>
>> We are not building beta1 packages, and will build out RC0 packages once
>> we cross the backport dates. Hence, folks interested in testing this out can
>> either build from the code or wait for (about) a week longer for the
>> packages (and initial release notes).
>>
>> Features tracked as slipped and expected to be backported by 5th May are,
>>
>> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>>
>> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>>   - Needs a +2 on https://review.gluster.org/13762
>>
>> 3) Enhance handleops readdirplus operation to return handles along with
>> dirents #174 (@skoduri)
>>
>> 4) Halo - Initial version (@pranith)
>
>
> I merged the patch on master. Will send out the port on Thursday. I have to
> leave like right now to catch a train and am on leave tomorrow, so will be
> back on Thursday and get the port done. Will also try to get the other
> patches fb guys mentioned post that preferably by 5th itself.

Niels found that the HALO patch has pulled in a little bit of the IPv6
patch. This shouldn't have happened.
The IPv6 patch is currently stalled because it depends on an internal
FB library. The IPv6 bits that made it in pull this dependency.
This would have led to a -2 on the HALO patch by me, but as I wasn't
aware of it, the patch was merged.

The IPv6 changes are in rpcsvc.{c,h} and configure.ac, and don't seem
to affect anything HALO. So they should be easily removable and should
be removed.

>
>>
>>
>> Thanks,
>> Kaushal, Shyam
>>
>> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
>>
>> [2] Tag for 3.11.0beta1 :
>> https://github.com/gluster/glusterfs/tree/v3.11.0beta1
>>
>> [3] Tracker BZ for 3.11.0 blockers:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-20 Thread Kaushal M
On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:
> On 02/28/2017 10:17 AM, Shyam wrote:
>>
>> Hi,
>>
>> With release 3.10 shipped [1], it is time to set the dates for release
>> 3.11 (and subsequently 4.0).
>>
>> This mail has the following sections, so please read or revisit as needed,
>>   - Release 3.11 dates (the schedule)
>>   - 3.11 focus areas
>
>
> Pinging the list on the above 2 items.
>
>> *Release 3.11 dates:*
>> Based on our release schedule [2], 3.11 would be 3 months from the 3.10
>> release and would be a Short Term Maintenance (STM) release.
>>
>> This puts 3.11 schedule as (working from the release date backwards):
>> - Release: May 30th, 2017
>> - Branching: April 27th, 2017
>
>
> Branching is about 2 weeks away; other than the initial set of overflow
> features from 3.10, nothing else has been raised on the lists and in github
> as requests for 3.11.
>
> So, a reminder to folks who are working on features, to raise the relevant
> github issue for the same, and post it to devel list for consideration in
> 3.11 (also this helps tracking and ensuring we are waiting for the right
> things at the time of branching).
>
>>
>> *3.11 focus areas:*
>> As maintainers of gluster, we want to harden testing around the various
>> gluster features in this release. Towards this the focus area for this
>> release are,
>>
>> 1) Testing improvements in Gluster
>>   - Primary focus would be to get automated test cases to determine
>> release health, rather than repeating a manual exercise every 3 months
>>   - Further, we would also attempt to focus on maturing Glusto[7] for
>> this, and other needs (as much as possible)
>>
>> 2) Merge all (or as much as possible) Facebook patches into master, and
>> hence into release 3.11
>>   - Facebook has (as announced earlier [3]) started posting their
>> patches mainline, and this needs some attention to make it into master
>>
>
> Further to the above, we are also considering the following features for
> this release, request feature owners to let us know if these are actively
> being worked on and if these will make the branching dates. (calling out
> folks that I think are the current feature owners for the same)
>
> 1) Halo - Initial Cut (@pranith)
> 2) IPv6 support (@kaushal)

This is under review at https://review.gluster.org/16228 . The patch
mostly looks fine.

The only issue is that it currently depends and links with an internal
FB fork of tirpc (mainly for some helper functions and utilities).
This makes it hard for the community to actually use and test the IPv6
features/fixes introduced by the change.

If the change were refactored to use publicly available versions of
tirpc or ntirpc, I'd be OK with it being merged. I did try it out myself.
While I was able to build it against available versions of tirpc, I
wasn't able to get it working correctly.
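
For reference, this is roughly what I tried (the package name and
configure variables below are assumptions for a Fedora/CentOS-like
system; the patch may well need more than this):

    # install the publicly available tirpc headers and library
    yum install libtirpc-devel
    # rebuild the patched tree against it instead of the FB fork
    ./autogen.sh
    ./configure CPPFLAGS="-I/usr/include/tirpc" LIBS="-ltirpc"
    make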

> 3) Negative lookup (@poornima)
> 4) Parallel Readdirp - More changes to default settings. (@poornima, @du)
>
>
>> [1] 3.10 release announcement:
>> http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html
>>
>> [2] Gluster release schedule:
>> https://www.gluster.org/community/release-schedule/
>>
>> [3] Mail regarding facebook patches:
>> http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html
>>
>> [4] Release scope: https://github.com/gluster/glusterfs/projects/1
>>
>> [5] glusterfs github issues: https://github.com/gluster/glusterfs/issues
>>
>> [6] github issues for features and major fixes:
>> https://hackmd.io/s/BkgH8sdtg#
>>
>> [7] Glusto tests: https://github.com/gluster/glusto-tests
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it possible to have more than one volume?

2017-03-09 Thread Kaushal M
On Thu, Mar 9, 2017 at 6:39 PM, Tahereh Fattahi <t28.fatt...@gmail.com> wrote:
> Thank you
> So one distributed file system has one volume with many servers and many
> bricks, is that correct?

Each volume is an individual distributed file-system. It can be
composed of many bricks, which can be spread over many servers.
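
As a quick sketch (host names, brick paths and volume names here are
just examples), two volumes sharing the same servers are created,
started and mounted completely independently:

    # two independent volumes on the same pool of servers
    gluster volume create vol1 replica 2 s1:/bricks/b1/vol1 s2:/bricks/b1/vol1
    gluster volume create vol2 s1:/bricks/b2/vol2 s2:/bricks/b2/vol2
    gluster volume start vol1
    gluster volume start vol2
    # each volume is mounted on its own; there is no link between them
    mount -t glusterfs s1:/vol1 /mnt/vol1
    mount -t glusterfs s1:/vol2 /mnt/vol2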

>
> On Thu, Mar 9, 2017 at 4:26 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> On Thu, Mar 9, 2017 at 6:15 PM, Tahereh Fattahi <t28.fatt...@gmail.com>
>> wrote:
>> > Hi
>> > Is it possible to have more than one volume? (I know the difference between
>> > brick and volume, and in this question I mean volume)
>> > If yes, how should I link these volumes to each other?
>>
>> You can have more than one volume in your GlusterFS pool. The volumes
>> are completely independent of each other, there is no linking between
>> volumes.
>>
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it possible to have more than one volume?

2017-03-09 Thread Kaushal M
On Thu, Mar 9, 2017 at 6:15 PM, Tahereh Fattahi  wrote:
> Hi
> Is it possible to have more than one volume? (I know the difference between
> brick and volume, and in this question I mean volume)
> If yes, how should I link these volumes to each other?

You can have more than one volume in your GlusterFS pool. The volumes
are completely independent of each other, there is no linking between
volumes.

>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Community Meeting 2017-03-01

2017-03-06 Thread Kaushal M
 carefully going forward

#### Others

- _None_

## Announcements

### New announcements

- Reminder: Community cage outage on 14th and 15th March

### Regular announcements

- If you're attending any event/conference please add the event and
yourselves to Gluster attendance of events:
http://www.gluster.org/events (replaces
https://public.pad.fsfe.org/p/gluster-events)
- Put (even minor) interesting topics/updates on
https://bit.ly/gluster-community-meetings

---
* Roll call  (kshlm, 15:00:30)

* Discuss backport tracking via gerrit Change-ID  (kshlm, 15:06:03)
  * ACTION: shyam to notify devel list about the backport whine job and
gather feedback  (kshlm, 15:19:14)
  * ACTION: nigelb will implement the backports whine job after feedback
is obtained  (kshlm, 15:19:33)
  * ACTION: amye  to work on revised maintainers draft with vbellur to
get out for next maintainer's meeting. We'll approve it 'formally'
there, see how it works for 3.11.  (kshlm, 15:54:54)

Meeting ended at 15:57:32 UTC.




Action Items

* shyam to notify devel list about the backport whine job and gather
  feedback
* nigelb will implement the backports whine job after feedback is
  obtained
* amye  to work on revised maintainers draft with vbellur to get out for
  next maintainer's meeting. We'll approve it 'formally' there, see how
  it works for 3.11.




Action Items, by person
---
* amye
  * amye  to work on revised maintainers draft with vbellur to get out
for next maintainer's meeting. We'll approve it 'formally' there,
see how it works for 3.11.
* nigelb
  * nigelb will implement the backports whine job after feedback is
obtained
* shyam
  * shyam to notify devel list about the backport whine job and gather
feedback
* vbellur
  * amye  to work on revised maintainers draft with vbellur to get out
for next maintainer's meeting. We'll approve it 'formally' there,
see how it works for 3.11.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (77)
* nigelb (55)
* shyam (35)
* amye (27)
* vbellur (22)
* zodbot (3)
* sankarshan (1)
* atinm (1)

On Wed, Mar 1, 2017 at 12:32 PM, Kaushal M <kshlms...@gmail.com> wrote:
> This is a reminder about todays meeting. The meeting will start at
> 1500UTC in #gluster-meeting on Freenode.
>
> The meeting notepad is available at
> https://bit.ly/gluster-community-meetings for you to add your updates
> and topics for discussion.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Can I do SSL with Gluster v3.4.2 ?

2017-02-15 Thread Kaushal M
On Thu, Feb 16, 2017 at 3:48 AM, dev  wrote:
> I'm trying to set up SSL transport with glusterfs following the guide
> here: http://blog.gluster.org/author/zbyszek/
>
> I've copied the resulting ca, pem and key files to my server
> (to /etc/ssl) as well as a copy on my gluster client. The link
> above does not explain the proper mount options for mounting the
> volume on the client however.
>
> I've tried searching for the correct options to add to the mount
> command, however nothing has turned up yet. I have found some
> options to place in a volume file such as:
>
>option transport.socket.ssl-enabled on
>option transport tcp
>option direct-io-mode disable
>option transport.socket.ssl-own-cert /etc/ssl/glusterfs.pem
>option transport.socket.ssl-private-key /etc/ssl/glusterfs.key
>option transport.socket.ssl-ca-list /etc/ssl/glusterfs.ca
>
> but mounting with:
>
>glusterfs -f /etc/gluster-pm-vol /mnt/ib-data/hydra
>
> Only gives an error in the logfile such as:
>...
>[socket.c:3594:socket_init] 0-pm1-dump: could not load our cert
>...
>
> I've started to investigate ACL on server, but attempting to
> set auth.ssl-allow results in an error as well.
>
>   # gluster volume info
>   Volume Name: pm1-dump
>   ...
>   client.ssl: on
>   ...
>
> # gluster volume set pm1-dump auth.ssl-allow foo
> volume set: failed: option : auth.ssl-allow does not exist
> Did you mean auth.allow?
>
> # gluster --version
> glusterfs 3.4.2 built on Jan 14 2014 18:05:37
>
>
> Is this version too old (ubuntu 14.04) to use SSL on or am I missing
> something?

This version is just too old. You can get up-to-date packages for
Ubuntu from the Gluster community PPA at https://launchpad.net/~gluster .
I suggest you use glusterfs-3.8, which is the latest version to have
packages for trusty.
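
Once on a recent version, the rough steps for SSL look like this
(volume name taken from your mail; the server name and client CNs are
assumptions, so check the documentation of the version you install
for the details):

    # on every server and client, place the files the SSL code expects:
    #   /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key, /etc/ssl/glusterfs.ca
    gluster volume set pm1-dump client.ssl on
    gluster volume set pm1-dump server.ssl on
    # auth.ssl-allow exists on recent versions and takes certificate CNs
    gluster volume set pm1-dump auth.ssl-allow 'client1,client2'
    # no special mount options are needed; the client picks SSL up from the volfile
    mount -t glusterfs server1:/pm1-dump /mnt/ib-data/hydra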

>
> Thanks in advance
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster Community Meeting 2017-02-01

2017-02-14 Thread Kaushal M
On Wed, Feb 1, 2017 at 12:37 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> Adding URL for meeting pad.
>
> On Tue, Jan 31, 2017 at 12:41 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> Hi all,
>>
>> This is a reminder for tomorrow's community meeting. The meeting is
>> scheduled to be held in #gluster-meeting on Freenode at 1500UTC.
>>
>> Please add your updates for the last two weeks and any topics for
>> discussion into the meeting pad at [1].
>>
>> ~kaushal
>>
>
>
> [1]
> https://hackmd.io/CwBghmCsCcDGBmBaApgDlqxwBMYDMiqkqS0wxAjAEbLIDssVIQA=?both#

Here are the meeting notes from the Community meeting on 2017-02-01.
I should have sent this out 2 weeks back, but in between all the
travelling, I forgot. Sorry about that.

We had quite a lively discussion on two topics. jdarcy started a
discussion around policies for reverting patches, and shyam started a
discussion around the 3.10 release and the non responsiveness of
contributors. More details can be found in the meeting minutes and
logs below.

The next meeting is on 15th February at 1500 UTC. Remember to add your
updates to the meeting notepad at
https://bit.ly/gluster-community-meetings

Cheers!
~kaushal

- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-02-01/community_meeting_2017-02-01.2017-02-01-15.00.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-02-01/community_meeting_2017-02-01.2017-02-01-15.00.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-02-01/community_meeting_2017-02-01.2017-02-01-15.00.log.html

## Topics of Discussion

The meeting is an open floor, open for discussion of any topic entered below.

- Policy on reverting patches that are found to cause spurious test
failures (jdarcy)
- One change got merged (driven by downstream pressure) that
caused test failures for all future patches, and held up merges
- Two proposals for addressing such issues
- Revert said patches
- Also revoke commit permissions for habitual offenders
- Reverting bad patches is good.
- Infra support is required for making this easier
- jdarcy will reach out to nigelb to make this possible
- Revoking commit permissions needs to have a much larger
discussion only on that
- JoeJulian suggested a gfapi based test harness for per xlator testing
- 3.10 TODO nag in the community meeting
- There are a few TODOs that contributors are not responding to,
and we need some urgency here
- Reviews, release-notes, backport requests, among possibly others
- Just want to shout out in the meeting on this!
- Swapping reviews could help
- But most reviews happen within a component, so reviews
generally cannot be swapped
- We should encourage cross component reviews
- Contributors need to be educated that doing this is important
- Most of the discussions in IRC and mailing-lists don't seem
to reach contributors
- Probably due to a lot of noise
- Maintainers need to take up the responsibility of educating
their teams that upstream contribution isn't only submitting patches
- Immediate actions for 3.10
- Someone in Bangalore will physically call on developers to
get stuff done



### Next edition's meeting host

- kshlm

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Action Items from the last meeting

- rastar will start a conversation on gluster-devel about 3.10 blocker
  test bugs.
- [update] sent today on feb 1st

### Releases

#### GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2
  - 
https://lists.gluster.org/pipermail/gluster-devel/2017-February/052016.html
  - Volfile fetch supported now

#### GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Next release : 3.10.0
- Target date: February 21, 2017 (this has slipped to accommodate
brick multiplexing feature)
- Release tracker : https://github.com/gluster/glusterfs/milestone/1
- Updates:
  - In the process of preparing release-notes to release beta1
  - beta1 target date is currently 1st Feb, 2017, and we are tracking
a day by day slip on this
  - Expect a possible 1 more day delay as we still are not getting
healthy responses from contributors on various activities
  - Please try and prioritize 3.10 work

#### GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.1
- Next release : 3.9.2
  - Release date : 20 February 2017 (if 3.10 hasn't shipped by then)
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.2
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.9.2&hide_resolved=1
- Roadm

[Gluster-users] GlusterD2 v4.0dev-5

2017-02-01 Thread Kaushal M
We have a new development release of GD2.

GD2 now supports volfile fetch and portmap requests, so clients are
finally able to mount volumes using the mount command. Portmap doesn't
work reliably yet, so there might be failures.

GD2 was refactored to clean up the main function and standardize the
various servers it runs.

More details about the release and downloads can be found at [1].

We also have a docker image ready for testing [2]. More information on
how to use this image can be found at [3].
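
As a minimal sketch of trying it out (the image name comes from [2];
everything else is assumed, and [3] has the authoritative steps):

    docker pull gluster/glusterd2-test
    docker run -d --name gd2 gluster/glusterd2-test
    docker logs gd2    # check that glusterd2 started cleanly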

Cheers!

~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-5
[2]: 
https://hub.docker.com/r/gluster/glusterd2-test/builds/bqecolrgfsx8damioi3uyas/
[3]: https://github.com/gluster/glusterd2/wiki/Testing-releases
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Community Meeting 2017-02-01

2017-01-31 Thread Kaushal M
Hi all,

This is a reminder for tomorrow's community meeting. The meeting is
scheduled to be held in #gluster-meeting on Freenode at 1500UTC.

Please add your updates for the last two weeks and any topics for
discussion into the meeting pad at [1].

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing GlusterFS-3.7.20

2017-01-31 Thread Kaushal M
GlusterFS-3.7.20 has been released. This is a regular bug fix release
for GlusterFS-3.7, and is currently the last planned release of
GlusterFS-3.7. GlusterFS-3.10 is expected next month, and
GlusterFS-3.7 [enters EOL][5] once it is
released. The community will be notified of any changes to the EOL schedule.

The release-notes for GlusterFS-3.7.20 can be read [here][1].

The release tarball and [community provided packages][2] can be obtained
from [download.gluster.org][3]. The CentOS [Storage SIG][4] packages
have been built and should be available soon from the
`centos-gluster37` repository.
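
For CentOS users, installation is then roughly (the release package
name below follows the Storage SIG conventions and is an assumption):

    yum install centos-release-gluster37   # enables the centos-gluster37 repo
    yum install glusterfs-server
    gluster --version                      # should now report 3.7.20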

[1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.20.md
[2]: https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
[3]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.20/
[4]: https://wiki.centos.org/SpecialInterestGroup/Storage
[5]: https://www.gluster.org/community/release-schedule/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Compile xlator separately

2017-01-30 Thread Kaushal M
On Thu, Jan 26, 2017 at 9:20 PM, David Spisla 
wrote:

> Hello Gluster Community,
>
> I want to make some small changes to the read-only xlator. For this I want
> to re-compile the .so file separately.
>
> I use the source from gluster 3.8.8 and the makefile according to this
> tutorial:
>
> https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/translator-development.md#this-time-for-real
>
> But this tutorial seems to be obsolete, because I had to make some small
> changes to re-compile the read-only.so. This is my makefile:
>
> # Change these to match your source code.
> TARGET  = read-only.so
> OBJECTS = read-only.o
>
> # Change these to match your environment.
> GLFS_SRC = /srv/glusterfs-3.8.8
> GLFS_LIB = /usr/lib64
> HOST_OS  = GF_LINUX_HOST_OS
>
> # You shouldn't need to change anything below here.
> CFLAGS  = -fPIC -Wall -O0 -g \
>           -DHAVE_CONFIG_H -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE \
>           -D$(HOST_OS) -I$(GLFS_SRC) -I$(GLFS_SRC)/contrib/uuid \
>           -I$(GLFS_SRC)/libglusterfs/src
> LDFLAGS = -shared -nostartfiles -L$(GLFS_LIB)
> LIBS    = -lpthread
>
> $(TARGET): $(OBJECTS)
> 	$(CC) $(LDFLAGS) -o $(TARGET) $(OBJECTS) $(LIBS)
>
> You see I removed the -lglusterfs from LIBS, because the compiler can not
> find this library. Is there another path actually?
> I also removed the first $(OBJECTS), because the compiler gave me error
> messages.
>
> What is the best way to compile a xlator manually?
>

Wouldn't doing `make -C xlators/features/read-only` suffice for you?
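
Roughly, assuming a configured 3.8.8 source tree and an RPM-style
install (the installed-xlator path below is an assumption; adjust the
version directory to match your system):

    # in the glusterfs-3.8.8 source tree; do one full build first
    ./autogen.sh && ./configure && make
    # afterwards, rebuild just this one xlator while iterating
    make -C xlators/features/read-only
    # libtool puts the freshly built object under .libs/;
    # copy it over the installed one
    cp xlators/features/read-only/src/.libs/read-only.so \
       /usr/lib64/glusterfs/3.8.8/xlator/features/read-only.so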


>
>
> One more question: Does glusterd bind those feature-xlators dynamically to
> one volume? Because in the volfiles I cannot see an entry for them.
>

For a translator to become part of the volume graph, it needs to be added
in glusterd's volgen code. Once this is done, glusterd will add the
translator to volumes when required.
Glusterd right now cannot dynamically pick up any translator and add it to
a volume graph. We are working on glusterd2, the next version of glusterd,
which will be able to dynamically pick up and insert translators into a
volume graph.
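
That said, you can see which translators glusterd did put into a
volume's graph by reading the volfiles it generates (the path below is
assumed for a default install; exact file names vary with volume name
and transport):

    # every "volume ... type ... end-volume" block is one translator
    less /var/lib/glusterd/vols/myvol/trusted-myvol.tcp-fuse.vol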


>
> Thank you for your attention!
>
>
>
> David Spisla
> Software Developer
> david.spi...@iternity.com
> www.iTernity.com
> Tel: +49 761-590 34 841
>
> iTernity GmbH
> Heinrich-von-Stephan-Str. 21
> 79100 Freiburg – Germany
> ---
> You can reach our technical support at +49 761-387 36 66
> ---
> Managing Director: Ralf Steinemann
> Registered at the district court of Freiburg: HRB No. 701332
> VAT ID: DE-24266431
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Community Meeting 2017-01-18

2017-01-18 Thread Kaushal M
On Wed, Jan 18, 2017 at 9:43 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All,
>
> This meeting was the first following our new schedule - 1500UTC on
> Wednesday once every 2 weeks.
>
> This week we had one major discussion on fixing up and improving our
> regression test suite. More information is available below and in
> the meeting logs.
> There was also a small discussion on conference attendance at
> DevConf.cz 2017, FOSDEM 2017, FAST'17 and Scale15x, all of which will
> have some Gluster community members in attendance. In particular, if
> you don't already know, Gluster will have a stand at FOSDEM, and will
> be a part of the software defined storage devroom.
>
> The next meeting will be on Feb 1 2017, 2 weeks from now, at 1500UTC.
> I've attached a calendar invite to make it easier to remember the
> meeting. The meeting pad[1] is open for discussion topics and updates.
>
> See you all later.
>
> Thanks,
> Kaushal
>
> [1] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-01-18
> [2] Minutes: 
> https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.html
> [3] Minutes (text):
> https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.txt
> [4] Log: 
> https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.log.html
> [5] https://bit.ly/gluster-community-meetings
>
> ---
>
> ## Topics of Discussion
>
> The meeting is an open floor, open for discussion of any topic entered below.
>
> - Regression test suite hardening
> - or Changes to testing
> - Discussions happened on the topic of reducing test runtime
> - [vbellur] next steps regarding this is not clear
> - [rastar] nigelb is working on reducing test runtime
> - [rastar] fstat.gluster.org being updated to get stats by release
> - [jdarcy & rastar] test parallelization is being worked on,
> possibly using containers
> - [jdarcy] I have a list of bad tests
> - https://github.com/gluster/glusterfs/wiki/Test-Clean-Up has more details
> - [vbellur] Test-Clean-up looks good
> - [rastar] additional proposals
> - blocker test bugs for 3.10
> - stop merges to modules with known bad tests over a watermark
> - [vbellur] Need to discuss blocker test bugs for 3.10
> - [rastar] will start a conversation
> - Upcoming conference attendance
> - Lots of people in FOSDEM. Stand + Storage Devroom
> - Lots of people in DevConf
> - Some going to FAST
> - At least one person at Scale
>
> ### Next edition's meeting host
>
> - kshlm
>
> ## Updates
>
>> NOTE : Updates will not be discussed during meetings. Any important or 
>> noteworthy update will be announced at the end of the meeting
>
> ### Action Items from last meeting
>
> - None
>
> ### Releases
>
> #### GlusterFS 4.0
>
> - Tracker bug :
> https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
> - Roadmap : https://www.gluster.org/community/roadmap/4.0/
> - Updates:
>   - GD2
>   - ppai finished exploring/prototyping SunRPC for Go.
>   - ppai is exploring multiplexing services onto a single port
>   - kshlm is preparing his presentation for FOSDEM.
>
> #### GlusterFS 3.10
>
> - Maintainers : shyam, kkeithley, rtalur
> - Next release : 3.10.0
> - Target date: February 14, 2017
> - Release tracker : https://github.com/gluster/glusterfs/milestone/1
> - Updates:
>   - Branching to be done today (i.e 18th Jan)
>   - Some exceptions are noted for features that will be backported and
> land in 3.10 post-branching, mostly within a week of branching
>   - gfapi statedump support
>   - Brick multiplexing
>   - Trash can directory creation
>   - DHT rebalance estimation
>   - some remaining storhaug work
>
> #### GlusterFS 3.9
>
> - Maintainers : pranithk, aravindavk, dblack
> - Current release : 3.9.1
> - Release date: 20 December 2016
> - actual release date 17 Jan 2017
> - Next release : 3.9.2
>   - Release date : 20 Feb 2017 if 3.10 hasn't shipped by then.
> - Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.2
> - Open bugs : 
> https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.9.1&hide_resolved=1
> - Updates:
>   - 3.9.1 has been tagged
>   - Packages have been built and are on d.g.o
>   - LATEST symlink has been moved
>   - CentOS Storage SIG packages will be available soon
>   - Release announcement
> https://lists.gluster.org/pipermail/gluster-devel/2017-January/051931.ht

[Gluster-users] Community Meeting 2017-01-18

2017-01-18 Thread Kaushal M
Hi All,

This meeting was the first following our new schedule - 1500UTC on
Wednesday once every 2 weeks.

This week we had one major discussion on fixing up and improving our
regression test suite. More information is available below and in
the meeting logs.
There was also a small discussion on conference attendance at
DevConf.cz 2017, FOSDEM 2017, FAST'17 and Scale15x, all of which will
have some Gluster community members in attendance. In particular, if
you don't already know, Gluster will have a stand at FOSDEM, and will
be a part of the software defined storage devroom.

The next meeting will be on Feb 1 2017, 2 weeks from now, at 1500UTC.
I've attached a calendar invite to make it easier to remember the
meeting. The meeting pad[1] is open for discussion topics and updates.

See you all later.

Thanks,
Kaushal

[1] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2017-01-18
[2] Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.html
[3] Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.txt
[4] Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-01-18/gluster_community_meeting_2017-01-18.2017-01-18-15.00.log.html
[5] https://bit.ly/gluster-community-meetings

---

## Topics of Discussion

The meeting is an open floor, open for discussion of any topic entered below.

- Regression test suite hardening
- or Changes to testing
- Discussions happened on the topic of reducing test runtime
- [vbellur] next steps regarding this is not clear
- [rastar] nigelb is working on reducing test runtime
- [rastar] fstat.gluster.org being updated to get stats by release
- [jdarcy & rastar] test parallelization is being worked on,
possibly using containers
- [jdarcy] I have a list of bad tests
- https://github.com/gluster/glusterfs/wiki/Test-Clean-Up has more details
- [vbellur] Test-Clean-up looks good
- [rastar] additional proposals
- blocker test bugs for 3.10
- stop merges to modules with known bad tests over a watermark
- [vbellur] Need to discuss blocker test bugs for 3.10
- [rastar] will start a conversation
- Upcoming conference attendance
- Lots of people in FOSDEM. Stand + Storage Devroom
- Lots of people in DevConf
- Some going to FAST
- At least one person at Scale

### Next edition's meeting host

- kshlm

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Action Items from last meeting

- None

### Releases

#### GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2
  - ppai finished exploring/prototyping SunRPC for Go.
  - ppai is exploring multiplexing services onto a single port
  - kshlm is preparing his presentation for FOSDEM.

#### GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Next release : 3.10.0
- Target date: February 14, 2017
- Release tracker : https://github.com/gluster/glusterfs/milestone/1
- Updates:
  - Branching to be done today (i.e 18th Jan)
  - Some exceptions are noted for features that will be backported and
land in 3.10 post-branching, mostly within a week of branching
  - gfapi statedump support
  - Brick multiplexing
  - Trash can directory creation
  - DHT rebalance estimation
  - some remaining storhaug work

#### GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.1
- Release date: 20 December 2016
- actual release date 17 Jan 2017
- Next release : 3.9.2
  - Release date : 20 Feb 2017 if 3.10 hasn't shipped by then.
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.2
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.9.1&hide_resolved=1
- Updates:
  - 3.9.1 has been tagged
  - Packages have been built and are on d.g.o
  - LATEST symlink has been moved
  - CentOS Storage SIG packages will be available soon
  - Release announcement
https://lists.gluster.org/pipermail/gluster-devel/2017-January/051931.html

#### GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.8
- Next release : 3.8.9
  - Release date : 10 February 2017
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.8
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.8&hide_resolved=1
- Updates:
  - GlusterFS-3.8.8 has been released
  - 
https://lists.gluster.org/pipermail/gluster-users/2017-January/029695.html
  - http://blog.nixpanic.net/2017/01/gluster-388-ltm-update.html

#### GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.19
- Next release : 3.7.20
  - Release date : 30 January 2017

Re: [Gluster-users] When to use striped volumes?

2017-01-17 Thread Kaushal M
On Tue, Jan 17, 2017 at 12:52 PM, Dave Fan  wrote:
> Hello everyone,
>
> We are trying to set up a Gluster-based storage for best performance. On the
> official Gluster website, it says:
>
> Striped – Striped volumes stripes data across bricks in the volume. For best
> results, you should use striped volumes only in high concurrency
> environments accessing very large files.
>
> Is there a rule-of-thumb on what size qualifies as "very large files" here?

Joe Julian has an excellent post on this at
https://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ .

>
> Many thanks,
> Dave
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Weekly Community meeting 2017-01-18

2017-01-16 Thread Kaushal M
Hi everyone,

If you haven't already heard, we will be moving to a new schedule for
the weekly community meeting, starting with tomorrow's meeting.

The meeting will be held at 1500UTC tomorrow, in #gluster-meeting on Freenode.

Add your updates and topics for discussion at
https://bit.ly/gluster-community-meetings

See you all tomorrow.

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] New schedule for community meetings - 1500UTC every alternate Wednesday.

2017-01-16 Thread Kaushal M
Hi All,

We recently changed the community meeting format to make it more
lively and are pleased with the nature of discussions happening since
the change. In order to foster more participation for our meetings, we
will be trying out a bi-weekly cadence and move the meeting to 1500UTC
on alternate Wednesdays. The meeting will continue to happen in
#gluster-meeting on Freenode.


We intend to make the new schedule effective from Jan 18. The next
community meeting will be held in #gluster-meeting on Freenode, at
1500UTC on Jan 18.

If you are a regular attendee of the community meetings and will be
inconvenienced by the altered schedule, please let us know.

Thanks,
Kaushal and Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

2017-01-14 Thread Kaushal M
Thanks for doing this, Daryl. The help is appreciated.

On Sat, Jan 14, 2017 at 12:37 AM, Daryl lee <daryl...@ece.ucsb.edu> wrote:
> Thanks everyone for the information.   I'm happy to help provide test repo 
> feedback on an ongoing basis to help things along, which I'll do right now.
>
> Niels,
> If you require more or less information please let me know, happy to help.  
> Thanks for doing the builds!
>
> I deployed GlusterFS v3.8.8 successfully to 5 servers running CentOS Linux 
> release 7.3.1611 (Core); here are the results.
>
> 2 GlusterFS clients deployed the following packages:
> ---
> glusterfs                 x86_64  3.8.8-1.el7  centos-gluster38-test  509 k
> glusterfs-api             x86_64  3.8.8-1.el7  centos-gluster38-test   89 k
> glusterfs-client-xlators  x86_64  3.8.8-1.el7  centos-gluster38-test  781 k
> glusterfs-fuse            x86_64  3.8.8-1.el7  centos-gluster38-test  133 k
> glusterfs-libs            x86_64  3.8.8-1.el7  centos-gluster38-test  378 k
>
> Tests:
> ---
> Package DOWNLOAD/UPDATE/CLEANUP from repo  - SUCCESS
> Basic FUSE mount RW test to remote GlusterFS volume - SUCCESS
> Boot and basic functionality test of libvirt gfapi based KVM Virtual Machine 
> - SUCCESS
>
>
> 3 GlusterFS Brick/Volume servers running REPLICA 3 ARBITER 1 updated the 
> following packages:
> ---
> glusterfs                 x86_64  3.8.8-1.el7   centos-gluster38-test  509 k
> glusterfs-api             x86_64  3.8.8-1.el7   centos-gluster38-test   89 k
> glusterfs-cli             x86_64  3.8.8-1.el7   centos-gluster38-test  182 k
> glusterfs-client-xlators  x86_64  3.8.8-1.el7   centos-gluster38-test  781 k
> glusterfs-fuse            x86_64  3.8.8-1.el7   centos-gluster38-test  133 k
> glusterfs-libs            x86_64  3.8.8-1.el7   centos-gluster38-test  378 k
> glusterfs-server          x86_64  3.8.8-1.el7   centos-gluster38-test  1.4 M
> userspace-rcu             x86_64  0.7.16-3.el7  centos-gluster38-test   72 k
>
> Tests:
> ---
> Package DOWNLOAD/UPDATE/CLEANUP from repo - SUCCESS w/ warnings
> * warning while updating glusterfs-server-3.8.8-1.el7.x86_64: the existing
> gluster .vol files were saved with an .rpmsave suffix. This is expected.
> Bricks on all 3 servers started - SUCCESS
> Self Healing Daemon on all 3 servers started - SUCCESS
> Bitrot Daemon on all 3 servers started - SUCCESS
> Scrubber Daemon on all 3 servers started - SUCCESS
> First replica self healing - success
> Second replica self healing - success
> Arbiter replica self healing - success
>
>
> -Daryl
>
>
> -Original Message-
> From: Kaushal M [mailto:kshlms...@gmail.com]
> Sent: Friday, January 13, 2017 5:03 AM
> To: Daryl lee
> Cc: Pavel Szalbot; gluster-users; Niels de Vos
> Subject: Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 
> versions (3.8.5). Is it still being used and can I help?
>
> Packages for 3.7, 3.8 and 3.9 are being built for the Storage SIG.
> Niels is very punctual about building them. The packages first land in the 
> respective testing repositories. If someone verifies that the packages are 
> okay, and gives Niels a heads-up, he pushes the packages to be signed and 
> added to the release repositories.
>
> The only issue is that Niels doesn't get enough (or any) verifications,
> and the packages linger in testing.

Re: [Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

2017-01-13 Thread Kaushal M
Packages for 3.7, 3.8 and 3.9 are being built for the Storage SIG.
Niels is very punctual about building them. The packages first land in
the respective testing repositories. If someone verifies that the
packages are okay, and gives Niels a heads-up, he pushes the packages
to be signed and added to the release repositories.

The only issue is that Niels doesn't get enough (or any)
verifications. And the packages linger in testing.
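
If anyone wants to help with that, the rough flow is (repo name shown
for the 3.8 stream; substitute the stream you actually run):

    # pull the candidate packages from the testing repo
    yum --enablerepo=centos-gluster38-test update 'glusterfs*'
    # smoke test: daemon up, volumes healthy, a mount that can read/write
    systemctl status glusterd
    gluster volume status
    # then report the result back to the list (or directly to Niels)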

On Fri, Jan 13, 2017 at 3:31 PM, Pavel Szalbot  wrote:
> Hi, you can install 3.8.7 from centos-gluster38-test using:
>
> yum --enablerepo=centos-gluster38-test install glusterfs
>
> I am not sure how QA works for CentOS Storage SIG, but 3.8.7 works the same as
> 3.8.5 for me - libvirt gfapi is unfortunately broken, no other problems
> detected.
>
> Btw 3.9 is short term maintenance release
> (https://lists.centos.org/pipermail/centos-devel/2016-September/015197.html).
>
>
> -ps
>
> On Fri, Jan 13, 2017 at 1:18 AM, Daryl lee  wrote:
>>
>> Hey Gluster Community,
>>
>> According to the community packages list I get the impression that 3.8
>> would be released to the CentOS Storage SIG Repo, but this seems to have
>> stopped with 3.8.5 and 3.9 is still missing all together.   However, 3.7 is
>> still being updated and is at 3.7.8 so I am confused why the other two
>> versions have stopped.
>>
>>
>>
>> I did some looking through the past posts to this list and found a
>> conversation about 3.9 on the CentOS repo last year, but it looks like it's
>> still not up yet; possibly due to a lack of community involvement in the
>> testing and reporting back to whoever the maintainer is (which we don't know
>> yet). I might be in a position to help, since I have a test environment that
>> mirrors my production environment setup and that I would use for testing the
>> patch anyway; I might as well provide some good to the community. At this
>> point I know to do "yum install --enablerepo=centos-gluster38-test
>> glusterfs-server", but I'm not sure who to tell if it works or not, and what
>> kind of info they are looking for. If someone wanted to give me a little
>> guidance that would be awesome, especially if it will save me from having to
>> switch to manually downloading packages.
>>
>>
>>
>> I guess the basic question is do we expect releases to resume for 3.8 on
>> the CentOS Storage SIG repo or should I be looking to move to manual
>> patching for 3.8.  Additionally, if the person who does the releases to the
>> CentOS Storage SIG is waiting for someone to tell them it looks fine,  who
>> should I contact to do so?
>>
>>
>>
>>
>>
>>
>>
>> Thanks!
>>
>>
>>
>> Daryl
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Weekly Community Meeting - 20170111

2017-01-11 Thread Kaushal M
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.8
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.8&hide_resolved=1
- Updates:
  - None

#### GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.19
- Next release : 3.7.20?
  - Release date : 30 Jan 2017?
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.19
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.7.19&hide_resolved=1
- Updates:
  - 3.7.19 released
  - https://www.gluster.org/pipermail/gluster-devel/2017-January/051882.html

### Related projects and efforts

#### Community Infra

- Gerrit OS upgrade on 21st Jan.

#### Testing

- https://github.com/gluster/glusterfs/wiki/Test-Clean-Up


Meeting summary
---
* Roll call  (kshlm, 12:02:44)

* Is 3.7.20  required?  (kshlm, 12:08:32)
  * AGREED: 3.7.20 will be the (hopefully) last release of release-3.7
(kshlm, 12:10:54)

* Testing discussion update:
  https://github.com/gluster/glusterfs/wiki/Test-Clean-Up  (kshlm,
  12:12:43)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2017-January/051859.html
(kshlm, 12:13:30)

* Do we need a wiki page for meeting minutes?  (kshlm, 12:23:04)
  * AGREED: Meetings need to email minutes/notes to the mailing list.
Optionally add notes to wiki.  (kshlm, 12:41:27)
  * Weekly Community Meeting will do both.  (kshlm, 12:41:43)

* Discuss participation in the meetings  (kshlm, 12:41:57)

Meeting ended at 13:08:07 UTC.


People Present (lines said)
---
* kshlm (88)
* ndevos (47)
* nigelb (27)
* shyam (18)
* kkeithley (11)
* sankarshan (10)
* jdarcy (7)
* Saravanakmr (6)
* zodbot (3)
* skoduri (1)
* anoopcs (1)
* partner (1)

On Wed, Jan 11, 2017 at 3:27 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Today's meeting is due to start in 2 hours from now at 1200UTC in
> #gluster-meeting on Freenode.
>
> The meeting agenda and update pad is at [1]. Add updates and topics
> for discussion here.
>
> See you all in 2 hours.
>
> ~kaushal
>
> [1]: https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.7.19 released

2017-01-11 Thread Kaushal M
On Wed, Jan 11, 2017 at 3:43 PM, Kaushal M <kshlms...@gmail.com> wrote:
> GlusterFS 3.7.19 is a regular bug fix release for GlusterFS-3.7. The
> release-notes for this release can be read here[1].
>
> The release tarball and community provided packages[2] can be obtained
> from download.gluster.org[3]. The CentOS Storage SIG[4] packages have
> been built and should be available soon from the centos-gluster37
> repository.
>
> A reminder to everyone, GlusterFS-3.7 is scheduled[5] to be EOLed with
> the release of GlusterFS-3.10, which should happen sometime in
> February 2017.
>
> ~kaushal
>

The links have been corrected. Thanks Niels for noticing this.

 [1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.19.md
 [2]: https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
 [3]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.19/
 [4]: https://wiki.centos.org/SpecialInterestGroup/Storage
 [5]: https://www.gluster.org/community/release-schedule/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS 3.7.19 released

2017-01-11 Thread Kaushal M
GlusterFS 3.7.19 is a regular bug fix release for GlusterFS-3.7. The
release-notes for this release can be read here[1].

The release tarball and community provided packages[2] can be obtained
from download.gluster.org[3]. The CentOS Storage SIG[4] packages have
been built and should be available soon from the centos-gluster37
repository.

A reminder to everyone, GlusterFS-3.7 is scheduled[5] to be EOLed with
the release of GlusterFS-3.10, which should happen sometime in
February 2017.

~kaushal

[1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md
[2]: https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
[3]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.18/
[4]: https://wiki.centos.org/SpecialInterestGroup/Storage
[5]: https://www.gluster.org/community/release-schedule/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Weekly Community Meeting - 20170104

2017-01-04 Thread Kaushal M
* shyam will file a bug to get arequal included in glusterfs packages




Action Items, by person
---
* shyam
  * shyam will file a bug to get arequal included in glusterfs packages
* **UNASSIGNED**
  * Need to find out when 3.9.1 is happening




People Present (lines said)
---
* nigelb (92)
* kshlm (60)
* ndevos (39)
* shyam (29)
* rastar (15)
* jdarcy (12)
* atinmu (5)
* Saravanakmr (4)
* zodbot (3)
* sankarshan (1)

On Tue, Jan 3, 2017 at 12:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Happy New Year everyone!
>
> This is a reminder for the resumption of the weekly community meeting
> after nearly a month of not being held. I hope everyone enjoyed their
> holidays, and are now ready to get this restarted.
>
> The meeting agenda and updates document is at [1] as always. I expect
> there are a lot of updates to add after this long break, so make sure
> to add your updates and topics to this before the meeting starts.
>
> The meeting will start, as always, at 1200UTC in #gluster-meeting on Freenode.
>
> See you all tomorrow!
>
> ~kaushal
>
> [1] https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting - 20170104

2017-01-02 Thread Kaushal M
Happy New Year everyone!

This is a reminder for the resumption of the weekly community meeting
after nearly a month of not being held. I hope everyone enjoyed their
holidays, and are now ready to get this restarted.

The meeting agenda and updates document is at [1] as always. I expect
there are a lot of updates to add after this long break, so make sure
to add your updates and topics to this before the meeting starts.

The meeting will start, as always, at 1200UTC in #gluster-meeting on Freenode.

See you all tomorrow!

~kaushal

[1] https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly community meeting - 2016-12-14

2016-12-14 Thread Kaushal M
Hi all,

The community meeting wasn't held this week either, because of a lack
of volunteers (to host the meeting) and a lack of attendance.

Considering this, we (kkeithley and I) have decided to tentatively
cancel the remaining meetings for the year (on 21 and 28 December). If
anyone wants the meetings to happen, please feel free to host.

See you all in the new year for the next meeting on 4th January.

Thanks,
Kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing GlusterFS-3.7.18

2016-12-12 Thread Kaushal M
Hi all,

GlusterFS-3.7.18 has been released. This is a regular bug fix release.
This release fixes 13 bugs. The release-notes can be found at [1].

Packages have been built at the CentOS Storage SIG [2] and
download.gluster.org [3]. The tarball can be downloaded from [3].

The next release might be delayed (more than normal) due to it being
the end of the year. The tracker for 3.7.19 is at [4], mark any bugs
that need to be fixed as dependencies.

See you all in the new year.

~kaushal

[1] 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md
[2] https://wiki.centos.org/SpecialInterestGroup/Storage
[3] https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.18/
[4] https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.19
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Cancelled: Weekly Community Meeting 2016-12-07

2016-12-07 Thread Kaushal M
Hi All,

This week's meeting has been cancelled. There was neither enough
attendance (is it the holidays already?) nor any topics for
discussion.

The next meeting is still on track for next week. The meeting pad [1]
will be carried over to the next week. Please add updates and topics
to it.

Thanks,
Kaushal

[1]: https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Weekly Community Meeting 2016-11-30

2016-12-01 Thread Kaushal M
On Thu, Dec 1, 2016 at 10:33 AM, Amye Scavarda <a...@redhat.com> wrote:
> Heyo!
>
> On Wed, Nov 30, 2016 at 5:15 AM, Kaushal M <kshlms...@gmail.com> wrote:
>> Hi all,
>>
>> This week's meeting was a short one, with just one topic discussed
>> and few attendees. This left us wondering about what led to the low
>> attendance. We moved on quickly though, so if anyone has any ideas
>> about this, please let us know.
>
> My guess is that the holidays are upon us, and we're not in a great
> place to try and push a lot of discussion for the next six weeks. This
> is normal and expected, we'll do what we can around this.
>
> I think we can keep the same format but change two things:
> Post the agenda at the top of the meeting instead of the link
> Ask for a new meeting host at the end of the meeting

By posting the agenda at the top, do you mean posting the agenda into
the channel once the meeting starts? That can be done.

After yesterday's meeting, I did some changes to the meeting template
[1] based on the feedback, which include your current suggestions.
The open-floor topics of discussion come first, followed by the
selection of new host. I'll be following this order in the next
meeting.

[1]: https://github.com/gluster/glusterfs/wiki/Community-meeting-template

>
> Reasons:
> * Agenda wakes everyone up and keeps us on track
> * New Meeting Host doesn't get enough time for volunteers if it's at
> the top of the hour
>
> Thoughts?
> - amye
>
>>
>> The meeting agenda and updates for the week are archived at [1]. The
>> meeting minutes and logs are available at [2],[3] and [4].
>>
>> Next week's meeting agenda is at [5]. Please add your updates to
>> this before the next meeting. Also, everyone (and I mean EVERYONE) is
>> welcome to add their own topics for discussion.
>>
>> Thanks,
>> Kaushal
>>
>> [1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-30
>> [2]: Minutes: 
>> https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.html
>> [3]: Minutes (text):
>> https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.txt
>> [4]: Log: 
>> https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.log.html
>> [5]: https://bit.ly/gluster-community-meetings
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting 2016-11-30

2016-11-30 Thread Kaushal M
Hi all,

This week's meeting was a short one, with just one topic discussed
and few attendees. This left us wondering about what led to the low
attendance. We moved on quickly though, so if anyone has any ideas
about this, please let us know.

The meeting agenda and updates for the week are archived at [1]. The
meeting minutes and logs are available at [2],[3] and [4].

Next week's meeting agenda is at [5]. Please add your updates to
this before the next meeting. Also, everyone (and I mean EVERYONE) is
welcome to add their own topics for discussion.

Thanks,
Kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-30
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-30/weekly_community_meeting_2016-11-30.2016-11-30-12.02.log.html
[5]: https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] not to reconnect between client and server because of race condition

2016-11-24 Thread Kaushal M
On Fri, Nov 25, 2016 at 12:03 PM, songxin  wrote:
> Hi Atin
> I found a problem: the client (glusterfs) will not try to
> reconnect to the server (glusterfsd) after a disconnect.
> Actually, it seems to be caused by a race condition.
>
>
> Precondition
>
> The glusterfs version is 3.7.6.
> I created a replicated volume using two nodes, A node and B node. One brick
> is on A node and another brick is on B node.
> A node ip:10.32.1.144
> B node ip:10.32.0.48
>
>
> The phenomenon is following.
>
> Firstly, the client (glusterfs) on A board disconnects from the server
> (glusterfsd) on B board. The log is as follows:
> ...
> readv on 10.32.0.48:49309 failed (No data available)
> ...
>
> And then the client (glusterfs) on A board disconnects from the server
> (glusterfsd) on A board. The log is as follows:
> ...
> readv on 10.32.1.144:49391 failed (Connection reset by peer)
> ...
>
> After that, all operations on the mount point show "Transport endpoint is
> not connected" until the client reconnects with the server (glusterfsd) on B
> board.
>
>
> The client log follows, and I have highlighted the important lines.
> ...
> [2016-10-31 04:06:03.626047] W [socket.c:588:__socket_rwv]
> 0-c_glusterfs-client-9: readv on 10.32.1.144:49391 failed (Connection reset
> by peer)
> [2016-10-31 04:06:03.627345] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58] (
>  -->
> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90] (
>  -->
> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10] (
>  -->
> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
> (
>  -->
> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808] )
>
> 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
>
> op(FINODELK(30)) called at 2016-10-31 04:06:03.626033 (xid=0x7f5e)
>
> [2016-10-31 04:06:03.627395] E [MSGID: 114031]
> [client-rpc-fops.c:1673:client3_3_finodelk_cbk] 0-c_glusterfs-client-9:
> remote operation failed [Transport endpoint is not connected]
>
> [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
> 0-c_glusterfs-client-9: not connected (priv->connected = 0)
>
> [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
> GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
> (c_glusterfs-client-9)
>
> [2016-10-31 04:06:03.628466] E [MSGID: 114031]
> [client-rpc-fops.c:1673:client3_3_finodelk_cbk] 0-c_glusterfs-client-9:
> remote operation failed [Transport endpoint is not connected]
> [2016-10-31 04:06:03.628475] I [MSGID: 108019]
> [afr-lk-common.c:1086:afr_lock_blocking] 0-c_glusterfs-replicate-0: unable
> to lock on even one child
>
> [2016-10-31 04:06:03.628539] I [MSGID: 108019]
> [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
> 0-c_glusterfs-replicate-0: Blocking inodelks failed.
>
> [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
> 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
> connected)
> [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58] (-->
> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90] (-->
> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10] (-->
> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808] )
> 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
> op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
>
> [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
> 0-c_glusterfs-client-9: changing port to 49391 (from 0)
>
> [2016-10-31 04:06:03.629210] W [MSGID: 114031]
> [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-9: remote
> operation failed. Path:
> /loadmodules_norepl/CXC1725605_P93A001/cello/emasviews
> (b0e5a94e-a432-4dce-b86f-a551555780a2) [Transport endpoint is not connected]
> [2016-10-31 04:06:03.629266] I [socket.c:3308:socket_submit_request]
> 0-c_glusterfs-client-9: not connected (priv->connected = 255)
> [2016-10-31 04:06:03.629277] I [MSGID: 109063]
> [dht-layout.c:702:dht_layout_normalize] 0-c_glusterfs-dht: Found anomalies
> in /loadmodules_norepl/CXC1725605_P93A001/cello/emasviews (gfid =
> b0e5a94e-a432-4dce-b86f-a551555780a2). Holes=1 overlaps=0
> [2016-10-31 04:06:03.629293] W [rpc-clnt.c:1586:rpc_clnt_submit]
> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f62 Program:
> GlusterFS 3.3, ProgVers: 330, Proc: 41) to 

[Gluster-users] Fwd: Weekly Community Meeting - 2016-11-23

2016-11-23 Thread Kaushal M
(Forgot the gluster-users list)


-- Forwarded message --
From: Kaushal M <kshlms...@gmail.com>
Date: Wed, Nov 23, 2016 at 6:50 PM
Subject: Re: Weekly Community Meeting - 2016-11-23
To: Gluster Devel <gluster-de...@gluster.org>


On Tue, Nov 22, 2016 at 6:26 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All,
>
> This is a reminder to add your status updates and topics to the
> meeting agenda at [1].
> Ensure you do this before the meeting tomorrow.
>
> Thanks,
> Kaushal
>
> [1]: https://bit.ly/gluster-community-meetings

Thank you everyone who attended today's meeting.

4 topics were discussed today, the major one among them being the
release of 3.9 and the beginning of the 3.10 cycle. More information can
be found in the meeting minutes and logs at [1][2][3][4].

The agenda for next week's meeting is available at [5]. Please add your
updates and topics for discussion to the agenda. Everyone is welcome
to add their own topics for discussion.

I'll be hosting next week's meeting, at the same time and same place.

Thanks.
~kaushal

[1] https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-23
[2] Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-23/weekly_community_meeting_2016-11-23.2016-11-23-12.01.html
[3] Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-23/weekly_community_meeting_2016-11-23.2016-11-23-12.01.txt
[4] Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-23/weekly_community_meeting_2016-11-23.2016-11-23-12.01.log.html
[5] https://bit.ly/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meetings - Feedback on new meeting format

2016-11-17 Thread Kaushal M
Hi All,

We have been following a new format for the weekly community meetings
for the past 4 weeks.

The new format is just a massive Open floor for everyone to discuss a
topic of their choice. The old boring weekly updates have been
relegated to just being notes in the meeting agenda. The meetings are
being captured into the wiki [1][2][3], and give a good picture of what's
been happening in the community in the past week.

We trialed the format for 3 weeks (we actually did an extra week, and
will follow it next week as well). We'd now like to hear feedback
about this from the community. It'll be good if your feedback covers
the following:
1. What did you like or not like about the new format?
2. What could be done better?
3. Should we continue with the format?

---
I'll begin with my feedback.

This has resulted in several good changes,
a. Meetings are now livelier, with more people speaking up and
making themselves heard.
b. Each topic in the open floor gets a lot more time for discussion.
c. Developers are sending out weekly updates of works they are doing,
and linking those mails in the meeting agenda.

Though the response and attendance for the initial 2 meetings were
good, they dropped for the last 2. This week in particular didn't have a
lot of updates added to the meeting agenda. It seems like interest has
dropped already.

We could probably do a better job of collecting updates to make it
easier for people to add their updates, but the current format of
adding updates to etherpad(/hackmd) is simple enough. I'd like to know
if there is anything else preventing people from providing updates.

I vote we continue with the new format.
---

Everyone please provide your feedback by replying to this mail. We'll
be going over the feedback in the next meeting.

Thanks.
~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-16
[2]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
[3]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-02
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Weekly Community Meeting - 2016-11-16

2016-11-16 Thread Kaushal M
On Wed, Nov 16, 2016 at 2:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
> I forgot to send this reminder out earlier. Please add your updates
> and any topics of discussion to
> https://public.pad.fsfe.org/p/gluster-community-meetings .
>
> The meeting starts in ~3 hours from now.
>
> ~kaushal

Hi All,

Not a lot of updates or topics of discussion this week. Seems like
everyone's already lost interest in just over 3 weeks.

We did discuss a lot about what a release means for us. You can read
about it more in the minutes and logs. This discussion will also come
up on the mailing lists soon.
We also discussed replacements for FSFE etherpad. As a result, next
week we'll be trying out hackmd[1].

The logs for today's meeting are available at [2], [3] and [4].

Nigel will be your host for the next meeting. Add your updates and topics at [1].
See you all then.

Cheers,
Kaushal

[1]: https://hackmd.io/CwBghmCsCcDGBmBaApgDlqxwBMYDMiqkqS0wxAjAEbLIDssVIQA=?both
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-16/weekly_community_meeting_2016-11-16.2016-11-16-12.02.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-16/weekly_community_meeting_2016-11-16.2016-11-16-12.02.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-16/weekly_community_meeting_2016-11-16.2016-11-16-12.02.log.html

## Topics of Discussion

### Next weeks meeting host

- nigelb

### Open floor

- FSFE etherpad replacement
- Try hackmd.io for next meeting
- [nigelb] What is a release?
- Tagging?
- Blog announcement?
- What constitutes a release?
- Just tag+tarball
- Packages, docs, upgrade guides also
- [nigelb] release is not 'when our work is done' but 'when users can
consume our work'
- _Discussion will be continued on mailing lists_

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Releases

 GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2 - 
https://www.gluster.org/pipermail/gluster-devel/2016-November/051532.html

 GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.0rc2
- Next release : 3.9.0
  - Release date : End of Sept 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.0
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.9.0&hide_resolved=1
- Roadmap : https://www.gluster.org/community/roadmap/3.9/
- Updates:
  - Release has been tagged. Announcement pending.

 GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.5
- Next release : 3.8.6
  - Release date : 10 November 2016 - probably Friday 18
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.6
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.6&hide_resolved=1
- Updates:
  - 3.8.6 has been delayed a little due to maintainer travelling
  - a little more delay for the 3.9 release ([ndevos] building
packages and setting up CentOS Storage SIG repo)

 GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.17
- Next release : 3.7.18
  - Release date : 30 November 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.18
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.7.18&hide_resolved=1
- Updates:
  - None this week.
  - Still on track for 3.7.18.

### Related projects and efforts

 Community Infra

- Now that 3.9 is released, we'll begin work on improving Gerrit,
including upgrading the host VM to Centos 7.
- http://fstat.gluster.org/ is live!
- [nigelb] Progress on dbench test issues - seems to be a write-ahead bug

 Samba

- _None_

 Ganesha

- _None_

 Containers

- _None_

 Testing

- _None_

 Others

- [atinm] Updates on GlusterD-1.0
https://www.gluster.org/pipermail/gluster-devel/2016-November/051529.html


### Action Items from last week

- Saravanakmr will take up the task for finding and collecting all
gluster etherpads on FSFE etherpad.
- Mail sent. 
https://www.gluster.org/pipermail/gluster-devel/2016-November/051450.html
- kshlm will ask for feedback on the trial run of the new meeting format
- Not done.
- kshlm to send out an email about Etherpad+Wiki after it's done for
the first time.
- Not done.

## Announcements

### New announcements
_None_

### Regular announcements

- If you're attending any event/conference please add the event and
yourselves to Gluster attendance of events:
http://www.gluster.org/events (replaces
https://public.pad.fsfe.org/p/gluster-events)
- Put (even minor) interesting topics on
https://public.pad.fsfe.org/p/gluster-weekly-news
- Remember to add your updates to the next meetings agenda.

[Gluster-users] Last week in GD2 - 2016-11-16

2016-11-16 Thread Kaushal M
Hi All,

There was one big change in GD2 in the past week.

Prashanth completed embedding etcd into GD2 [1]. There is still a
little work remaining to store/restore etcd configuration on GD2
restarts. Once that is done, we'll do a new release of GD2.

I've continued working on volgen at [2]. I can now build a dependency
graph for a brick. This is not the final graph though, as it needs to be
linearized to get the volume graph. I'll be working on this next.

In addition, the test program now outputs a Graphviz dotfile of the
created graph, which makes it easier to visualize the graph during
development. A sample generated graph can be viewed at [3].

I'm still planning the hangout on volgen, but it will be done (sometime soon).

Thanks,
Kaushal

[1]: https://github.com/gluster/glusterd2/pull/148
[2]: https://github.com/kshlm/glusterd2-volgen
[3]: 
https://github.com/kshlm/glusterd2-volgen/blob/volgen-systemd-style/brick-graph.svg
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting - 2016-11-16

2016-11-16 Thread Kaushal M
I forgot to send this reminder out earlier. Please add your updates
and any topics of discussion to
https://public.pad.fsfe.org/p/gluster-community-meetings .

The meeting starts in ~3 hours from now.

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gfid generation

2016-11-15 Thread Kaushal M
On Tue, Nov 15, 2016 at 11:33 PM, Ankireddypalle Reddy
 wrote:
> Pranith,
>
>  Thanks for getting back on this. I am trying to see how
> gfid can be generated programmatically. Given a file name, how do we
> generate the gfid for it? I was reading some email threads where it was
> mentioned that gfid is generated based upon parent directory gfid and the
> file name. Given a same parent gfid and file name do we always end up with
> the same gfid.

You're probably confusing the hash generated for the elastic hashing
algorithm in DHT with the UUID. That hash is a combination of the
parent directory's layout and a hash of the file name.

I always thought that the GFID was a UUID, which was randomly
generated. (The random UUID might be being modified a little to allow
some leeway with directory listing, IIRC.)
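As a quick illustration of the difference (a hypothetical example, not
Gluster code): an idempotent name-to-id mapping would behave like a
name-based (version 5) UUID, while a randomly generated GFID behaves
like a version 4 UUID:

# python -c 'import uuid; print(uuid.uuid4())'
# python -c 'import uuid; print(uuid.uuid5(uuid.UUID("b0e5a94e-a432-4dce-b86f-a551555780a2"), "somefile"))'

The first command prints a different UUID on every run; the second
always prints the same UUID for the same parent UUID and name.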

Adding gluster-devel to get more eyes on this.

>
>
>
> Thanks and Regards,
>
> ram
>
>
>
> From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> Sent: Tuesday, November 15, 2016 12:58 PM
> To: Ankireddypalle Reddy
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] gfid generation
>
>
>
> Sorry, I didn't understand the question. Are you asking, given a file on
> gluster, how to get the gfid of the file?
>
> #getfattr -d -m. -e hex /path/to/file shows it
>
>
>
> On Fri, Nov 11, 2016 at 9:47 PM, Ankireddypalle Reddy 
> wrote:
>
> Hi,
>
> Is the mapping from file name to gfid an idempotent operation? If
> so, please point me to the function that does this.
>
>
>
> Thanks and Regards,
>
> Ram
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
>
> Pranith
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How to backup GlusterFS metadata in /etc, /var etc.?

2016-11-06 Thread Kaushal M
On Sun, Nov 6, 2016 at 3:19 PM, Marius Bergmann  wrote:
> Hi!
>
> I'm running a very tiny setup of just two servers, which each host
> several gluster bricks in a n=2 replica configuration. Now I want to
> reinstall the operating system on one of the hosts without having to
> resync all of the volumes afterwards. I found that there's some metadata
> outside of the volume bricks, e.g. in /etc/glusterfs, /var/lib/glusterd etc.
>
> Can I/do I have to take a (cold) backup of that metadata before
> reinstalling the host OS, so that there's only a small delta to be
> synced once the node is up again? If yes, which directories should be
> backed up?
>

/var/lib/glusterd holds everything you need, you'll be okay backing up
and restoring it. /etc/glusterfs is installed by the glusterfs
packages and doesn't have anything that changes.
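A minimal sketch of what that could look like, assuming a
systemd-based distro (adjust paths and service handling to your
setup):

# systemctl stop glusterd
# tar czf /root/glusterd-backup.tar.gz /var/lib/glusterd
... reinstall the OS and the glusterfs packages ...
# tar xzf /root/glusterd-backup.tar.gz -C /
# systemctl start glusterd

Stopping glusterd first makes sure the store isn't being modified
while you copy it.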

> Kind regards,
> Marius
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly community meeting - Nov 02, 2016

2016-11-02 Thread Kaushal M
Hi all,

This week we discussed 3 topics (again!).

1. The FSFE etherpad service (where we host the meeting agenda and a
lot of other gluster docs) will be shut down, so we need to find an
alternative. We decided to make use of etherpads as temporary tools of
live collaboration, and of GitHub wikis to permanently record the
documents. Our first task is to identify and record known/live
etherpads. Nigel and I will start working on this, with our first
target being the community meeting agenda.

2. Nigel has created a centos-ci job for Glusto, which is nearly ready for use.

3. We continued discussion on 'Recognizing contributors', and we
couldn't arrive at a conclusion again. This discussion will continue
on the mailing lists.

I've pasted the meeting minutes below for easy reference. This will
also be available from a github wiki soon.

I'll be hosting next week's meeting. See you all next week, same time &
same place.

~kaushal

## Logs

- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-02/weekly_community_meeting_nov_2,_2016.2016-11-02-12.00.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-02/weekly_community_meeting_nov_2,_2016.2016-11-02-12.00.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-02/weekly_community_meeting_nov_2,_2016.2016-11-02-12.00.log.html

## Topics of Discussion

### Host for next week (DON'T FORGET)

- Next weeks meeting moderator: kshlm
- kkeithley will host the Nov 16 meeting

### Open Floor

- Recognising community contributors(Manikandan Selvaganesh)
 - https://glusterfs.biterg.io/app/kibana#/dashboard/Overview
 - [nigelb] Are we clear on what is being asked for? What is the
problem we are trying to solve?
 - People want to be rewarded for their contributions
 - As a reward for their work make them maintainers
 - Need to measure quality, not quantity.
   - Super tough
 - nigelb will be taking discussion to mailing list
- Sending up EasyFix bugs(assigned with a owner) on mailing list so
that new contributors can get started(May be?) (Manikandan
Selvaganesh)
 - Skipped again
- Glusto Job on Centos CI
 - https://ci.centos.org/job/gluster_glusto
   - Job setup, needs tweaking to get it 100% functional.
 - Leverage learning from 3.9 test day into some tests.
- FSFE pad service is about to be decomissioned. Need to find an alternative.
 - https://wiki.fsfe.org/Teams/System-Hackers/Decommissioning-Pad
 - Pads will turn read-only on Dec 1. And will be deleted Feb 1.
 - Options
   - beta.etherpad.org
   - https://pad.riseup.net/
   - https://github.com/ether/etherpad-lite/wiki/Sites-that-run-Etherpad-Lite
 - Etherpad for live editing documents + Github wiki for permanent records.
 - First find and preserve existing pads. Pick alternative later.

## Updates

### GlusterFS 4.0

- GD2 (kshlm)
 - Volgen-2.0 design and prototype started. More info at
https://www.gluster.org/pipermail/gluster-devel/2016-October/051297.html
- Multiplexing
 - https://www.gluster.org/pipermail/gluster-devel/2016-November/051364.html

### GlusterFS 3.9

- _None_

### GlusterFS 3.8

- _None_


### GlusterFS 3.7

- Release planned for later today. Got delayed because of holidays in India.

### GlusterFS 3.6

- _None_

### Other initiatives and related projects

 Infrastructure

- Ongoing work on getting Glusto jobs on Centos CI
- Network issue between Centos CI and review.gluster.org

 NFS Ganesha

- ganesha 2.4.1 and (lib)ntirpc-1.4.3) were released. packages in
Fedora Updates-Testing (soon Updates) and Ubuntu Launchpad. Other
distributions soon.
- now working on 2.5, emphasis on performance and memory consumption

 Samba

- Regression caused due to https://review.gluster.org/#/c/15332/ is
being tracked by upstream bug
https://bugzilla.samba.org/show_bug.cgi?id=12404 so that the fix from
master gets backported to release branches.

 Heketi/Containers

- We have a new repo under Gluster org in Github for anything and
everything related to Gluster as storage in containers. Refer to
https://github.com/gluster/container-storage

 Testing/Glusto

- https://ci.centos.org/job/gluster_glusto

### Last weeks AIs
- rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
that test starts.
 - _No updates_
- atinm to poke 3.9 release mgrs to finish and release
 - _No updates_
- obnox to starting discussion of Samba memory solutions
 - _No updates_
- jdarcy to discuss dbench smoke test failures on email
 - _No updates_

### Other Updates

- amye sent out the monthly newsletter
 - https://www.gluster.org/pipermail/gluster-users/2016-November/028920.html
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly community meeting - 26-Oct-2016

2016-10-26 Thread Kaushal M
Hi all!

This week's meeting was GD.

We got rid of the regular updates, and just had an open floor. This
had the intended effect of more conversations.

We discussed 3 main topics today,
- How do we recognize contributors and their contributions [manikandan]
- What's happening with memory management in GlusterFS [post-factum]
- An update on Glusto/Testing initiative [ShwethaHP]

The details about these discussion can be found in the meeting logs
and minutes linked below.

Apart from that, an important announcement for this week is the 'Test
day' for release-3.9 happening tomorrow. I expect good participation
in this test day.

Thank you everyone who attended. Let's all meet next week same place, same time.

Cheers,
Kaushal

# Meeting Logs
- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-26/gluster_community_weekly_meeting_26-oct-2016.2016-10-26-12.00.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-26/gluster_community_weekly_meeting_26-oct-2016.2016-10-26-12.00.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-26/gluster_community_weekly_meeting_26-oct-2016.2016-10-26-12.00.log.html

# Topics
## Host for next week
Next weeks meeting moderator: kshlm

## Open Floor

- Recognising community contributors(Manikandan Selvaganesh)
 - Are more stats (commits, bugs, mails) good?
 - Bitergia exists with this info
http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/
 - [obnox] Stats are not the best for recognition
   - If it is required they should be thorough, fine-grained
 - [sankarshan] Summary of discussions till now
   - coverage of stats on bitergia is not complete or enough
   - there are well identified additional ascpects to be considered
for recognition
   - a method of recognition will be useful
 - Everyone agrees we need to find a better method for recognizing
 - This topic will be discussed again next week.

- Sending up EasyFix bugs(assigned with a owner) on mailing list so
that new contributors can get started(May be?) (Manikandan
Selvaganesh)
 - Not discussed. Carrying forward to next week.

- Memory management (memory pools, jemalloc, FUSE client leaks etc) //
post-factum
 - [post-factum] mempools hurt more than they help
 - *I missed most of the rest of the discussion. Sorry. Please refer
to the logs. - kshlm*
 - obnox will try to follow up on his AI about starting discussions
around memory management solutions for Gluster+Samba

- Glusto-Tests: Libs/Tests // shwethaHP
 - https://github.com/gluster/glusto-tests
 - Almost all of the core libraries to manage volumes and IO have been added.
 - We are now in a position to begin writing tests using Glusto.
 - A CI job should be up for Glusto tests. Needs to be confirmed with
loadtheacc and nigelb
 - ShwethaHP will continue work on adding BVT tests to the community test suite.


# Updates

## GlusterFS 4.0
- [Shyam] No progress to report on DHT2, IOW no work ongoing on this
front at the moment
- [kshlm] GD2 -
https://www.gluster.org/pipermail/gluster-devel/2016-October/051274.html
- [jdarcy] Multiplexing -
https://www.gluster.org/pipermail/gluster-devel/2016-October/051281.html

## GlusterFS 3.9
- Aravinda has called for a test day to help speedup verification of 3.9
 - https://www.gluster.org/pipermail/gluster-devel/2016-October/051262.html
- An RC2 release is planned before test day.

## GlusterFS 3.8
- *No updates this week*

## GlusterFS 3.7
- Sent out 1 week remaining reminder
 - https://www.gluster.org/pipermail/gluster-devel/2016-October/051254.html
- Still on track for 30th.
- 6 changes merged since 3.7.16
- 2 changes still on review added since last release
https://review.gluster.org/#/q/project:glusterfs+branch:release-3.7+status:open


## GlusterFS 3.6
- *No updates this week*


## Other initiatives and related projects

### Infrastructure
- *No updates this week*

### NFS Ganesha
- [jiffin] 
https://www.gluster.org/pipermail/gluster-devel/2016-October/051282.html

### Samba
- [Anoop] Major regression was discovered with a recent change in
libgfapi(https://review.gluster.org/#/c/15332/) Following that
corresponding changes were made in VFS module for GlusterFS in
Samba(https://git.samba.org/?p=samba.git;a=c
ommit;h=06281e8f1b912540a8cc2a79497b074dbe559d53) so as to allocate
and free memory for resolved_path in glfs_realpath API properly.
 - gluster-devel ML thread:
https://www.gluster.org/pipermail/gluster-devel/2016-October/051231.html
- Almost all performance improvement md-cache patches have been merged upstream.
- Samba new upstream stable versions available now for download: v4.5.1, v4.4.7

### Heketi/Containers
- [rastar] We are exploring deployment models for Gluster in
Kubernetes. Currently leaning towards DaemonSets. See here:
http://kubernetes.io/docs/admin/daemons/

### Testing/Glusto
- Updates will be provided from next week


##Last weeks AIs
- rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
that test starts.
 - In 

Re: [Gluster-users] Epel Repo Link not accessible

2016-10-25 Thread Kaushal M
On Mon, Oct 24, 2016 at 7:59 PM, aparna  wrote:
> Hi All,
>
> Just wondering if someone can help me. I was trying to access the below link
> :
>
> Link:
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
>
> But didn't find anything. Looking forward for your response.

The repository on download.gluster.org has been discontinued and the
CentOS Storage SIG [1] is the official replacement.
You should use the Storage SIG packages from now on.

[1] https://wiki.centos.org/SpecialInterestGroup/Storage
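On CentOS 7, that boils down to something like the following (the
release package name is the one the Storage SIG wiki above points to;
check there if it has changed):

# yum install centos-release-gluster37
# yum install glusterfs-server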

>
> Thanks
> Aparna
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] New style community meetings - No more status updates

2016-10-25 Thread Kaushal M
On Fri, Oct 21, 2016 at 11:46 AM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Oct 20, 2016 at 8:09 PM, Amye Scavarda <a...@redhat.com> wrote:
>>
>>
>> On Thu, Oct 20, 2016 at 7:06 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> Hi All,
>>>
>>> Our weekly community meetings have become mainly one hour of status
>>> updates. This just drains the life out of the meeting, and doesn't
>>> encourage new attendees to speak up.
>>>
>>> Let's try and change this. For the next meeting lets try skipping
>>> updates all together and instead just dive into the 'Open floor' part
>>> of the meeting.
>>>
>>> Let's have the updates to the regular topics be provided by the
>>> regular owners before the meeting. This could either be through
>>> sending out emails to the mailing lists, or updates entered into the
>>> meeting etherpad[1]. As the host, I'll make sure to link to these
>>> updates when the meeting begins, and in the meeting minutes. People
>>> can view these updates later in their own time. People who need to
>>> provide updates on AIs, just update the etherpad[1]. It will be
>>> visible from there.
>>>
>>> Now let's move on to why I addressed this mail to this large and specific
>>> set of people. The people who have been directly addressed are the
>>> owners of the regular topics. You all are expected, before the next
>>> meeting, to either,
>>>  - Send out an update on the status for the topic you are responsible
>>> for to the mailing lists, and then link to it on the etherpad
>>>  - or, provide your updates directly in the etherpad.
>>> Please make sure you do this without fail.
>>> If you do have anything to discuss, add it to the "Open floor" section.
>>> Also, if I've missed out anyone in the addressed list, please make
>>> sure they get this message too.
>>>
>>> Anyone else who wants to share their updates, add it to the 'Other
>>> updates' section.
>>>
>>> Everyone else, go ahead and add anything you want to ask to the "Open
>>> floor" section. Ensure to have your name with the topic you add
>>> (etherpad colours are not reliable), and attend the meeting next week.
>>> When your topic comes up, you'll have the floor.
>>>
>>> I hope that this new format helps make our meetings more colourful and
>>> lively.
>>>
>>> As always, our community meetings will be held every Wednesday at
>>> 1200UTC in #gluster-meeting on Freenode.
>>> See you all there.
>>>
>>> ~kaushal
>>>
>>> [1]: https://public.pad.fsfe.org/p/gluster-community-meetings
>>
>>
>> I really like this idea and am all in favor of color + liveliness.
>> Let's give this new format three weeks or so, and we'll review around
>> November 9th to see if we like this experiment.
>> Fair?
>> -- amye
>
> Sounds good to me.
>

Okay. We have one more day until the meeting, but I've yet to see any
updates from any of you.
Please ensure that you do this before the meeting tomorrow.

>>
>> --
>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume with only one node

2016-10-25 Thread Kaushal M
On Tue, Oct 25, 2016 at 1:38 PM, Maxence Sartiaux  wrote:
> Hello,
>
> I need to migrate an old 2-node cluster to a proxmox cluster with
> replicated gluster storage between those two (and a third arbiter node).
>
> I'd like to create a volume with a single node, migrate the data on this
> volume from the old server and then reinstall the second server and add the
> second brick to the volume.
>
> I found no information about creating a replicated volume with a single
> node. Is it possible?

This is possible.
You first create a single brick volume on the first server. Note that
this is not a replica volume right now.
# gluster volume create <volname> <server1>:<brick-path>

After migration and re-installing the 2nd server, you add the new
server to the GlusterFS trusted pool.
# gluster peer probe <server2>

And add a replica brick to the volume you created earlier:
# gluster volume add-brick <volname> replica 2 <server2>:<brick-path>

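Putting that together with hypothetical server names s1 and s2, and a
brick path of /data/brick1:

# gluster volume create gv0 s1:/data/brick1
... migrate the data onto the volume ...
# gluster peer probe s2
# gluster volume add-brick gv0 replica 2 s2:/data/brick1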

>
> Also, is it possible to add a third arbiter node to an existing 2-replica
> volume?

I'm not sure this is possible yet. Krutika or Pranith can answer this.

>
> Thank you.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] New style community meetings - No more status updates

2016-10-21 Thread Kaushal M
On Thu, Oct 20, 2016 at 8:09 PM, Amye Scavarda <a...@redhat.com> wrote:
>
>
> On Thu, Oct 20, 2016 at 7:06 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> Hi All,
>>
>> Our weekly community meetings have become mainly one hour of status
>> updates. This just drains the life out of the meeting, and doesn't
>> encourage new attendees to speak up.
>>
>> Let's try and change this. For the next meeting lets try skipping
>> updates all together and instead just dive into the 'Open floor' part
>> of the meeting.
>>
>> Let's have the updates to the regular topics be provided by the
>> regular owners before the meeting. This could either be through
>> sending out emails to the mailing lists, or updates entered into the
>> meeting etherpad[1]. As the host, I'll make sure to link to these
>> updates when the meeting begins, and in the meeting minutes. People
>> can view these updates later in their own time. People who need to
>> provide updates on AIs, just update the etherpad[1]. It will be
>> visible from there.
>>
>> Now let's move on to why I addressed this mail to this large and specific
>> set of people. The people who have been directly addressed are the
>> owners of the regular topics. You all are expected, before the next
>> meeting, to either,
>>  - Send out an update on the status for the topic you are responsible
>> for to the mailing lists, and then link to it on the etherpad
>>  - or, provide your updates directly in the etherpad.
>> Please make sure you do this without fail.
>> If you do have anything to discuss, add it to the "Open floor" section.
>> Also, if I've missed out anyone in the addressed list, please make
>> sure they get this message too.
>>
>> Anyone else who wants to share their updates, add it to the 'Other
>> updates' section.
>>
>> Everyone else, go ahead and add anything you want to ask to the "Open
>> floor" section. Ensure to have your name with the topic you add
>> (etherpad colours are not reliable), and attend the meeting next week.
>> When your topic comes up, you'll have the floor.
>>
>> I hope that this new format helps make our meetings more colourful and
>> lively.
>>
>> As always, our community meetings will be held every Wednesday at
>> 1200UTC in #gluster-meeting on Freenode.
>> See you all there.
>>
>> ~kaushal
>>
>> [1]: https://public.pad.fsfe.org/p/gluster-community-meetings
>
>
> I really like this idea and am all in favor of color + liveliness.
> Let's give this new format three weeks or so, and we'll review around
> November 9th to see if we like this experiment.
> Fair?
> -- amye

Sounds good to me.

>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] New style community meetings - No more status updates

2016-10-20 Thread Kaushal M
Hi All,

Our weekly community meetings have become mainly one hour of status
updates. This just drains the life out of the meeting, and doesn't
encourage new attendees to speak up.

Let's try and change this. For the next meeting lets try skipping
updates all together and instead just dive into the 'Open floor' part
of the meeting.

Let's have the updates to the regular topics be provided by the
regular owners before the meeting. This could either be through
sending out emails to the mailing lists, or updates entered into the
meeting etherpad[1]. As the host, I'll make sure to link to these
updates when the meeting begins, and in the meeting minutes. People
can view these updates later in their own time. People who need to
provide updates on AIs, just update the etherpad[1]. It will be
visible from there.

Now let's move on to why I addressed this mail to this large and specific
set of people. The people who have been directly addressed are the
owners of the regular topics. You all are expected, before the next
meeting, to either,
 - Send out an update on the status for the topic you are responsible
for to the mailing lists, and then link to it on the etherpad
 - or, provide your updates directly in the etherpad.
Please make sure you do this without fail.
If you do have anything to discuss, add it to the "Open floor" section.
Also, if I've missed out anyone in the addressed list, please make
sure they get this message too.

Anyone else who wants to share their updates, add it to the 'Other
updates' section.

Everyone else, go ahead and add anything you want to ask to the "Open
floor" section. Ensure to have your name with the topic you add
(etherpad colours are not reliable), and attend the meeting next week.
When your topic comes up, you'll have the floor.

I hope that this new format helps make our meetings more colourful and lively.

As always, our community meetings will be held every Wednesday at
1200UTC in #gluster-meeting on Freenode.
See you all there.

~kaushal

[1]: https://public.pad.fsfe.org/p/gluster-community-meetings
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GoGFAPI - Go bindings to libgfapi - Now under a Gluster project

2016-10-20 Thread Kaushal M
Hi All,

The GoGFAPI Go package is now a Gluster project [1]!

I'd created the github.com/kshlm/gogfapi/gfapi package over 3 years
ago, as a project for learning Go.
Since then, the project has been moving slowly, and found some users
and contributors. There are still TODOs left to be done[2], issues to
be fixed [3] and the package also needs to be updated to support the
newer APIs introduced into libgfapi since then.

With this move to being a Gluster project, I hope to revive activity
around this project and actually get it to completion and a production
ready state. So please go ahead and use the package, test it, file
issues, pick up TODOs/issues to fix, write tests, etc. Any sort of
contribution is welcome.
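Getting started should be as simple as the following (assuming a
working Go setup and the glusterfs-api development headers installed,
since the package uses cgo):

# go get github.com/gluster/gogfapi/gfapi
# go test github.com/gluster/gogfapi/gfapi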

Let's try to make this package better together.

Cheers!
~kaushal

[1]: https://github.com/gluster/gogfapi
[2]: https://github.com/gluster/gogfapi/blob/master/TODO.md
[3]: https://github.com/gluster/gogfapi/issues
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS-3.7.16 released

2016-10-16 Thread Kaushal M
Apologies for the very (I mean very) late announcement.

GlusterFS-3.7.16 has been released. The release-notes for this release
can be viewed at [1].

Storage-SIG packages have been built and are available from the
centos-gluster37-test repository right now, and will be available from
the release repository soon. Packages for other distros should be
available soon as well.
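If you'd like to test the CentOS packages before they reach the
release repository, something along these lines should work on CentOS
7 (repo names as mentioned above):

# yum install centos-release-gluster37
# yum --enablerepo=centos-gluster37-test update 'glusterfs*'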

Thanks.
~kaushal

[1] 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.16.md
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Kaushal M
On Wed, Sep 28, 2016 at 10:38 PM, Ben Werthmann  wrote:
> These are interesting projects:
> https://github.com/prashanthpai/antbird
> https://github.com/kshlm/gogfapi
>
> Are there plans for an official go gfapi client library?

I hope to make the gogfapi package official someday. I've not
gotten around to it yet, and don't know when I can.

>
> On Wed, Sep 28, 2016 at 12:16 PM, John Mark Walker 
> wrote:
>>
>> No - gluster-swift adds the swift API on top of GlusterFS. It doesn't
>> require Swift itself.
>>
>> This project is 4 years old now - how do people not know this?
>>
>> -JM
>>
>>
>>
>> On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta
>>  wrote:
>>>
>>> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
>>> > There's gluster-swift[1]. It works with both Swift API and S3 API[2]
>>> > (using Swift).
>>> >
>>> > [1]: https://github.com/prashanthpai/docker-gluster-swift
>>> > [2]:
>>> > https://github.com/gluster/gluster-swift/blob/master/doc/markdown/s3.md
>>>
>>> I wasn't aware of S3 support on Swift.
>>> Anyway, Swift has some requirements like the whole keyring stack,
>>> proxies and so on from OpenStack; I prefer something smaller
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] An Update on GlusterD-2.0

2016-09-30 Thread Kaushal M
On Thu, Sep 29, 2016 at 10:10 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> On 09/22/2016 07:28 AM, Kaushal M wrote:
>>
>> The first preview/dev release of GlusterD-2.0 is available now. A
>> prebuilt binary is available for download from the release-page[1].
>>
>> This is just a preview of what has been happening in GD2, to give
>> users a taste of how GD2 is evolving.
>>
>> GD2 can now form a cluster, list peers, create/delete, (pseudo)
>> start/stop, and list volumes. Most of these will undergo changes and
>> be refined as we progress.
>>
>> More information on how to test this release can be found on the release
>> page.
>>
>> We'll be providing periodic (hopefully fortnightly) updates on the
>> changes happening in GD2 from now on.
>>
>
>
> Thank you for posting this, Kaushal!
>
> I was trying to add a peer using the gluster/gluster-centos docker
> containers and I encountered the following error:
>
> INFO[17533] New member added to the cluster   New member
> =ETCD_172.17.0.3 member Id =b197797611650d60
> INFO[17533] ETCD_NAME ETCD_NAME=ETCD_172.17.0.3
> INFO[17533] ETCD_INITIAL_CLUSTER
> ETCD_INITIAL_CLUSTER=default=http://172.17.0.4:2380,ETCD_172.17.0.3=http://172.17.0.3:2380
> INFO[17533] ETCD_INITIAL_CLUSTER_STATE"existing"
> ERRO[17540] Failed to add peer into the etcd storeerror=client: etcd
> cluster is unavailable or misconfigured peer/node=172.17.0.3
> ERRO[21635] Failed to add member into etcd clustererror=client: etcd
> cluster is unavailable or misconfigured member=172.17.0.3

These 2 errors are from the etcd client, and mean that GD2 cannot
connect to the etcd server.
As the errors indicate, it could be because the etcd daemon isn't
running successfully,
or because etcd hasn't successfully connected to its cluster. There
could be more information in the etcd log under
`GD2WORKDIR/log/etcd.log`.

I've faced this issue intermittently, but I've never bothered checking
what caused it yet. I just nuke everything and start again.

>
> What should be done to overcome this error?
>
> Also noticed that there is a minor change in the actual response to /version
> when compared with what is documented in the API guide. We would need to
> change that.

Will do it. The whole ReST document is in need of a recheck.

>
> -Vijay
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] An Update on GlusterD-2.0

2016-09-22 Thread Kaushal M
The first preview/dev release of GlusterD-2.0 is available now. A
prebuilt binary is available for download from the release-page[1].

This is just a preview of what has been happening in GD2, to give
users a taste of how GD2 is evolving.

GD2 can now form a cluster, list peers, create/delete, (pseudo)
start/stop, and list volumes. Most of these will undergo changes and
be refined as we progress.

More information on how to test this release can be found on the release page.
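As a quick smoke test once the daemon is running, you can poke the
ReST API with curl (host and port below are placeholders; the release
page documents the actual endpoints):

# curl http://<gd2-host>:<port>/version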

We'll be providing periodic (hopefully fortnightly) updates on the
changes happening in GD2 from now on.

We'll also be providing periodic dev builds for people to test.
Currently builds are only available for Linux on amd64. Vagrant and
docker releases are planned to make it easier to test GD2.

Thanks,
Kaushal


[1] https://github.com/gluster/glusterd2/releases/tag/v4.0dev-1
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting - 21-Sep-2016

2016-09-21 Thread Kaushal M
This week's meeting started slow, but snowballed into quite an active
meeting. Thank you to all who attended!

The meeting logs for the meeting are available at the links below, and
the minutes have been pasted at the end.

- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.log.html

Next week's meeting will be hosted by Samikshan. See you all next week,
same place, same time.

Cheers,
Kaushal


Meeting summary
---
* Roll Call  (kshlm, 12:00:06)

* Next weeks host  (kshlm, 12:08:57)
  * samikshan is next weeks host  (kshlm, 12:10:19)

* Project Infrastructure  (kshlm, 12:10:26)

* GlusterFS-4.0  (kshlm, 12:15:50)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-September/050928.html
(kshlm, 12:18:55)

* GlusterFS-3.9  (kshlm, 12:21:30)

* GlusterFS-3.8  (kshlm, 12:27:05)

* GlusterFS-3.7  (kshlm, 12:29:45)

* NFS Ganesha  (kshlm, 12:34:27)

* Samba  (kshlm, 12:37:34)

* Last weeks AIs  (kshlm, 12:39:14)

* rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
  that test starts.  (kshlm, 12:39:26)
  * ACTION: rastar_afk/ndevos/jdarcy to improve cleanup to control the
processes that test starts.  (kshlm, 12:40:27)

* RC tagging to be done by this week for 3.9 by aravindavk.  (kshlm,
  12:41:47)

* RC tagging to be done by this week for 3.9 by aravindavk/pranithk
  (kshlm, 12:42:19)
  * ACTION: RC tagging to be done by this week for 3.9 by
aravindavk/pranithk  (kshlm, 12:42:34)

* jdarcy will bug amye regarding a public announcement for Gluster
  Summit talks  (kshlm, 12:42:39)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-September/050888.html
(kshlm, 12:43:27)

* Open floor  (kshlm, 12:43:42)

* RHEL5 build issues  (kshlm, 12:43:58)
  * LINK:
https://www.gluster.org/pipermail/gluster-infra/2016-September/002821.html
(kshlm, 13:01:52)

* Updates on documentation  (kshlm, 13:02:06)
  * LINK: https://rajeshjoseph.gitbooks.io/test-guide/content/
(rjoseph, 13:03:51)
  * LINK: https://github.com/rajeshjoseph/doctest   (rjoseph, 13:06:23)

* Announcements  (kshlm, 13:08:02)

Meeting ended at 13:08:26 UTC.




Action Items

* rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
  that test starts.
* RC tagging to be done by this week for 3.9 by aravindavk/pranithk




Action Items, by person
---
* aravindavk
  * RC tagging to be done by this week for 3.9 by aravindavk/pranithk
* ndevos
  * rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
that test starts.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (156)
* nigelb (42)
* ndevos (27)
* kkeithley (22)
* misc (20)
* rjoseph (20)
* aravindavk (11)
* samikshan (4)
* zodbot (4)
* amye (4)
* ankitraj (1)
* Klas (1)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] RPC program not available (req 1298437 330)

2016-09-17 Thread Kaushal M
This happens when the portmap request and RPC reconfigure fail on a
client. The client should connect to glusterd to get the brick port,
and then reconnect to the brick using the port. But this fails (for
some reason), leaving the client connected to glusterd instead of the
brick. When the client now tries to do a data IO operation, it sends
the RPCs to GlusterD instead of the brick, leading to the error logs
you are seeing.

The error seen just indicates that GlusterD doesn't support the IO or
FOP RPCs for the brick.

Try remounting the client. That should reconnect the client to the
bricks correctly.
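For a FUSE mount that is just the following (mount point and volfile
server are placeholders):

# umount /mnt/glusterfs
# mount -t glusterfs <server>:/<volname> /mnt/glusterfs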

On Sat, Sep 17, 2016 at 2:07 AM, Vijay Bellur  wrote:
> Have you checked the brick logs to see if there's anything unusual there?
>
> Regards,
> Vijay
>
> On Thu, Sep 15, 2016 at 5:24 PM, Danny Lee  wrote:
>> Hi,
>>
>> Environment:
>> Gluster Version: 3.8.3
>> Operating System: CentOS Linux 7 (Core)
>> Kernel: Linux 3.10.0-327.28.3.el7.x86_64
>> Architecture: x86-64
>> Replicated 3-Node Volume
>> ~400GB of around a million files
>>
>> Description of Problem:
>> One of the brick dies.  The only suspect log I see is in the
>> etc-glusterfs-glusterd.vol.log (shown below).  Trying to get an idea of why
>> the brick died and how it could be prevented in the future.
>>
>> During this time, I was forcing replication (find . | xargs stat on the
>> mount).  There were some services starting up as well that were using the
>> gluster mount.
>>
>> [2016-09-13 20:01:50.033369] W [socket.c:590:__socket_rwv] 0-management:
>> readv on /var/run/gluster/cfc57a83cf9864900aa08380be93.socket failed (No
>> data available)
>> [2016-09-13 20:01:50.033830] I [MSGID: 106005]
>> [glusterd-handler.c:5050:__glusterd_brick_rpc_notify] 0-management: Brick
>> 172.17.32.28:/usr/local/volname/local-data/mirrored-data has disconnected
>> from glusterd.
>> [2016-09-13 20:01:50.121316] W [rpcsvc.c:265:rpcsvc_program_actor]
>> 0-rpc-service: RPC program not available (req 1298437 330) for
>> 172.17.32.28:49146
>> [2016-09-13 20:01:50.121339] E [rpcsvc.c:560:rpcsvc_check_and_reply_error]
>> 0-rpcsvc: rpc actor failed to complete successfully
>> [2016-09-13 20:01:50.121383] W [rpcsvc.c:265:rpcsvc_program_actor]
>> 0-rpc-service: RPC program not available (req 1298437 330) for
>> 172.17.32.28:49146
>> [2016-09-13 20:01:50.121392] E [rpcsvc.c:560:rpcsvc_check_and_reply_error]
>> 0-rpcsvc: rpc actor failed to complete successfully
>> The message "I [MSGID: 106005]
>> [glusterd-handler.c:5050:__glusterd_brick_rpc_notify] 0-management: Brick
>> 172.17.32.28:/usr/local/volname/local-data/mirrored-data has disconnected
>> from glusterd." repeated 34 times between [2016-09-13 20:01:50.033830] and
>> [2016-09-13 20:03:40.010862]
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly community meeting - 7-Sept-2016

2016-09-08 Thread Kaushal M
The meeting minutes are here slightly late, because I forgot to
`#startmeeting` the meeting. The minutes and logs can be obtained at
the links below,

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-07/gluster-meeting.2016-09-07-17.30.html
Minutes(text): 
https://meetbot-raw.fedoraproject.org/gluster-meeting/2016-09-07/gluster-meeting.2016-09-07-17.30.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-07/gluster-meeting.2016-09-07-17.30.log.html


We had another active meeting this week. A big thank you to all the attendees.

Next week's meeting will be hosted by Samikshan (irc: samikshan), at the
same time and same place.
See you all next week.

~kaushal

Meeting summary
---
* Roll Call  (kshlm, 17:30:58)

* Next weeks host  (kshlm, 17:34:53)
  * samikshan is next week's host  (kshlm, 17:38:18)

* GlusterFS-4.0  (kshlm, 17:38:52)

* GlusterFS-3.9  (kshlm, 17:41:41)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-September/050741.html
(kshlm, 17:45:39)
  * ACTION: aravindavk to update the lists with the target dates for the
3.9 release  (kshlm, 17:48:06)

* GlusterFS-3.8  (kshlm, 17:49:39)

* GlusterFS-3.7  (kshlm, 17:51:56)

* Project Infrastructure  (kshlm, 18:00:02)
  * LINK:
http://review.gluster.org/#/q/I6f57b5e8ea174dd9e3056aff5da685e497894ccf
(ndevos, 18:12:09)

* NFS-Ganesha  (kshlm, 18:15:59)

* Samba  (kshlm, 18:21:55)

* Last weeks AIs  (kshlm, 18:26:49)

* improve cleanup to control the processes that test starts  (kshlm,
  18:26:57)
  * ACTION: rastar_afk/ndevos/jdarcy to  improve cleanup to control the
processes that test starts  (kshlm, 18:29:49)

* Open Floor  (kshlm, 18:30:07)
  * LINK:

http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1369124
(ndevos, 18:37:46)

Meeting ended at 18:39:27 UTC.




Action Items

* aravindavk to update the lists with the target dates for the 3.9
  release
* rastar_afk/ndevos/jdarcy to  improve cleanup to control the processes
  that test starts




Action Items, by person
---
* aravindavk
  * aravindavk to update the lists with the target dates for the 3.9
release
* ndevos
  * rastar_afk/ndevos/jdarcy to  improve cleanup to control the
processes that test starts
* rastar
  * rastar_afk/ndevos/jdarcy to  improve cleanup to control the
processes that test starts
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (143)
* nigelb (48)
* ndevos (26)
* kkeithley (20)
* obnox (13)
* post-factum (8)
* jkroon (6)
* aravindavk (6)
* samikshan (4)
* ** (3)
* jiffin (2)
* anoopcs (1)
* misc (1)
* zodbot (1)
* msvbhat (1)
* mchangir (1)
* ira (1)
* rastar (1)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] yum errors

2016-09-05 Thread Kaushal M
On Mon, Sep 5, 2016 at 9:26 PM, Dj Merrill  wrote:
> A few days ago we started getting errors from the Gluster yum repo:
>
> http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-7/x86_64/repodata/repomd.xml:
> [Errno 14] HTTP Error 404 - Not Found
>
> Looking into this we found a readme file in that directory indicating:
>
> RPMs for RHEL, CentOS, and other RHEL Clones are available from the
> CentOS Storage SIG.
>
> See
>   https://wiki.centos.org/SpecialInterestGroup/Storage
>
>
> Apparently I missed an announcement that the repo location was changing,
> so I'm playing catch-up at the moment.
>
>
> Following down through the docs on that link, I find the Centos Storage
> SIG repo has 3.7.13, and the Storage testing repo has 3.7.15.
>
> What is a typical timeframe for releases to transition from the testing
> repo to the normal repo?

Releases transition from testing to the main repo after someone
provides an ACK that they aren't really broken.
Ideally that would be within a couple of days, but we forgot about
3.7.14 and it languished in testing for almost the whole month.
We'll try to get 3.7.15 pushed into the main repo today, and you
should have it available tomorrow.

>
> Will this be the standard repo location going forward, replacing the repo
> on download.gluster.org?

Yup. CentOS packages will be provided only through the CentOS Storage
SIG from now on.
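
For anyone catching up, the SIG route is roughly the following (the
centos-release-gluster37 package name is the one mentioned elsewhere in
these threads; adjust it for the release you want):

    # enable the CentOS Storage SIG repo, then install gluster from it
    yum install centos-release-gluster37
    yum install glusterfs-server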

>
> Thanks,
>
> -Dj
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS 3.7.15 released

2016-09-01 Thread Kaushal M
GlusterFS 3.7.15 has been released. This is a regular scheduled
release for GlusterFS-3.7 and includes 26 bug fixes since 3.7.14.
The release-notes can be read at [1].

## Downloads

The tarball can be downloaded from [2].

### Packages

Binary packages have been built and are in the process of being made
available as updates.

The CentOS Storage SIG packages have been built and will become
available in the centos-gluster37-test repository (from the
centos-release-gluster37 package) shortly.
These will be made available in the release repository after some more testing.

Packages for Fedora 23 are queued for testing in Fedora Koji/Bodhi.
They will appear first via dnf in the Updates-Testing repo, then in
the Updates repo.

Packages for Fedora 24, 25, 26; Debian wheezy, jessie, and stretch,
are available now on [2].

Packages for Ubuntu Trusty, Wily, and Xenial are available now in Launchpad.

Packages for SuSE are available now in the SuSE build system.

See the READMEs in the respective subdirs at [2] for more details on
how to obtain them.

## Next release

GlusterFS-3.7.16 will be the next release for GlusterFS-3.7, and is
currently targeted for release on 30th September 2016.
The tracker bug[3] for GlusterFS-3.7.16 has been created. Bugs that
need to be included in 3.7.16 need to be marked as dependencies of
this bug.



[1]: 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.15.md
[2]: https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.15/
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.16
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting 24/Aug/2016 - Minutes

2016-08-24 Thread Kaushal M
Thanks once again to all the attendees of today's meeting. We've been
having good meeting attendance lately; let's keep it going.

The minutes and logs for today's meeting are available from the links below,
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-24/weekly_community_meeting_24aug2015.2016-08-24-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-24/weekly_community_meeting_24aug2015.2016-08-24-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-24/weekly_community_meeting_24aug2015.2016-08-24-12.00.log.html

Let's meet again next week, at the same time, in #gluster-meeting.
Rastar is hosting.

~kaushal

(No minutes here this week. meetbot.fedoraproject.org isn't opening)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster inside containers

2016-08-17 Thread Kaushal M
On Wed, Aug 17, 2016 at 5:18 PM, Humble Devassy Chirammal
 wrote:
> Hi Zach,
>
>>
> Option 1. 3 Gluster nodes, one large volume, divided up into subdirs (1 for
> each website), mounting the respective subdirs into their containers & using
> ACLs & LXD’s u/g id maps (mixed feelings about security here)
>>
>
> Which version of GlusterFS is in use here? The gluster sub-directory
> support patch is available upstream, however I don't think it's in a good
> state to consume yet. If the subdirectory mount is performed we have to
> take enough care to make sure the mounts are isolated between multiple
> users, i.e. security is a concern here.

A correction here. Sub-directory mount support hasn't been merged yet.
It's still a patch under review.
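
For context, the syntax proposed in that patch looks roughly like the
following (tentative, since the patch is still under review; the volume
and host names are placeholders):

    # mount only a sub-directory of a volume (proposed syntax)
    mount -t glusterfs server1:/myvol/website1 /mnt/website1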

>
>>
> Option 2. 3 Gluster nodes, website-specifc bricks on each, creating
> website-specific volumes, then mounting those respective volumes into their
> containers. Example:
> gnode-1
> - /data/website1/brick1
> - /data/website2/brick1
> gnode-2
> - /data/website1/brick2
> - /data/website2/brick2
> gnode-3
> - /data/website1/brick3
> - /data/website2/brick3
>>
>
> Yes, this looks to be an ideal or more consumable approach to me.
>
>>
>
> Option 3. 3 Gluster nodes, every website get’s their own mini “Gluster
> Cluster” via LXD containers on the Gluster nodes. Example:
> gnode-1
> - gcontainer-website1
>   - /data/brick1
> - gcontainer-website2
>   - /data/brick1
> gnode-2
> - gcontainer-website1
>   - /data/brick2
> - gcontainer-website2
>   - /data/brick2
> gnode-3
> - gcontainer-website1
>   - /data/brick3
> - gcontainer-website2
>   - /data/brick3
>>
>
> This is very difficult or complex to achieve and maintain.
>
> In short,  I would vote for option 2.
>
> Also, to be on the safe side, you may need to take snapshots of the volumes
> or configure a backup for these volumes to avoid a single point of failure.
>
> Please let me know if you need any details.
>
> --Humble
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-16 Thread Kaushal M
Okay. Here's another proposal from me.

# GlusterFS Release process
An overview of the GlusterFS release process

The GlusterFS release process has recently been updated and documented
for the first time. In this presentation, I'll be giving an
overview of the whole release process, including release types, release
schedules, patch acceptance criteria and the release procedure.

Kaushal
kshlms...@gmail.com
Process & Infrastructure

On Mon, Aug 15, 2016 at 5:30 AM, Amye Scavarda <a...@redhat.com> wrote:
> Kaushal,
>
> That's probably best. We'll be able to track similar proposals here.
> - amye
>
> On Sat, Aug 13, 2016 at 6:30 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> How do we submit proposals now? Do we just reply here?
>>
>>
>> On 13 Aug 2016 03:49, "Amye Scavarda" <a...@redhat.com> wrote:
>>
>> GlusterFS for Users
>> "GlusterFS for users" introduces you to GlusterFS, its terminology,
>> its features, and how to manage your GlusterFS cluster.
>>
>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>> can create large, distributed storage solutions for media streaming, data
>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>> and open source software.
>>
>> This session is more intended for users/admins.
>> Scope of this session :
>>
>> * What is Glusterfs
>> * Glusterfs terminologies
>> * Easy steps to get started with glusterfs
>> * Volume topologies
>> * Access protocols
>> * Various features from a user perspective:
>> Replication, Data distribution, Geo-replication, Bit rot detection,
>> data tiering, Snapshot, Encryption, containerized glusterfs
>> * Various configuration files
>> * Various logs and their locations
>> * Various custom profiles for specific use-cases
>> * Collecting statedump and its usage
>> * A few common problems like:
>>1) replacing a faulty brick
>>2) resolving split-brain
>>3) peer disconnect issue
>>
>> Bipin Kunal
>> bku...@redhat.com
>> User Perspectives
>>
>> On Fri, Aug 12, 2016 at 3:18 PM, Amye Scavarda <a...@redhat.com> wrote:
>>>
>>> Demo : Quickly setup GlusterFS cluster
>>> This demo will let you understand how to set up a GlusterFS cluster and
>>> how to exploit its features.
>>>
>>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>>> can create large, distributed storage solutions for media streaming, data
>>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>>> and open source software.
>>>
>>> This demo is intended for new users who want to set up a GlusterFS
>>> cluster.
>>>
>>> This demo will let you understand how to set up a GlusterFS cluster and
>>> how to exploit its features.
>>>
>>> Scope of this session :
>>>
>>> 1) Install GlusterFS packages
>>> 2) Create a trusted storage pool
>>> 3) Create a GlusterFS volume
>>> 4) Access GlusterFS volume using various protocols
>>>a) FUSE b) NFS c) CIFS d) NFS-ganesha
>>> 5) Using Snapshot
>>> 6) Creating geo-rep session
>>> 7) Adding/removing/replacing bricks
>>> 8) Bit-rot detection and correction
>>>
>>> Bipin Kunal
>>> bku...@redhat.com
>>> User Perspectives
>>>
>>> On Fri, Aug 12, 2016 at 3:17 PM, Amye Scavarda <a...@redhat.com> wrote:
>>>>
>>>> An Update on GlusterD-2.0
>>>> An update on what's been happening in GlusterD-2.0 since the last
>>>> summit.
>>>>
>>>> Discussion around GlusterD-2.0 was initially started at the last Gluster
>>>> Development summit. Since then we've had many followup discussions, and
>>>> officially started working on GD2. In this talk I'll be providing an update
>>>> on what has been done, what we're doing and what needs to be done.
>>>>
>>>> Kaushal
>>>> kshlms...@gmail.com
>>>> Future Gluster Features
>>>>
>>>>
>>>> On Fri, Aug 12, 2016 at 3:16 PM, Amye Scavarda <a...@redhat.com> wrote:
>>>>>
>>>>> Challenges with Gluster and Persistent Memory
>>>>>
>>>>> A discussion of the difficulties posed by persistent memory with
>>>>> Gluster and short- and long-term steps to address them.
>>>>>
>>>>> Persistent memory will significantly improve storage performance.

Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-13 Thread Kaushal M
How do we submit proposals now? Do we just reply here?

On 13 Aug 2016 03:49, "Amye Scavarda"  wrote:

GlusterFS for Users
"GlusterFS for users" introduces you to GlusterFS, its terminology,
its features, and how to manage your GlusterFS cluster.

GlusterFS is a scalable network filesystem. Using commodity hardware, you
can create large, distributed storage solutions for media streaming, data
analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
and open source software.

This session is more intended for users/admins.
Scope of this session :

* What is Glusterfs
* Glusterfs terminologies
* Easy steps to get started with glusterfs
* Volume topologies
* Access protocols
* Various features from a user perspective:
Replication, Data distribution, Geo-replication, Bit rot detection,
data tiering, Snapshot, Encryption, containerized glusterfs
* Various configuration files
* Various logs and their locations
* Various custom profiles for specific use-cases
* Collecting statedump and its usage
* A few common problems like:
   1) replacing a faulty brick
   2) resolving split-brain
   3) peer disconnect issue

Bipin Kunal
bku...@redhat.com
User Perspectives

On Fri, Aug 12, 2016 at 3:18 PM, Amye Scavarda  wrote:

> Demo : Quickly setup GlusterFS cluster
> This demo will let you understand how to set up a GlusterFS cluster and
> how to exploit its features.
>
> GlusterFS is a scalable network filesystem. Using commodity hardware, you
> can create large, distributed storage solutions for media streaming, data
> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
> and open source software.
>
> This demo is intended for new users who want to set up a GlusterFS
> cluster.
>
> This demo will let you understand how to set up a GlusterFS cluster and
> how to exploit its features.
>
> Scope of this session :
>
> 1) Install GlusterFS packages
> 2) Create a trusted storage pool
> 3) Create a GlusterFS volume
> 4) Access GlusterFS volume using various protocols
>a) FUSE b) NFS c) CIFS d) NFS-ganesha
> 5) Using Snapshot
> 6) Creating geo-rep session
> 7) Adding/removing/replacing bricks
> 8) Bit-rot detection and correction
>
> Bipin Kunal
> bku...@redhat.com
> User Perspectives
>
> On Fri, Aug 12, 2016 at 3:17 PM, Amye Scavarda  wrote:
>
>> An Update on GlusterD-2.0
>> An update on what's been happening in GlusterD-2.0 since the last summit.
>>
>> Discussion around GlusterD-2.0 was initially started at the last Gluster
>> Development summit. Since then we've had many followup discussions, and
>> officially started working on GD2. In this talk I'll be providing an update
>> on what has been done, what we're doing and what needs to be done.
>>
>> Kaushal
>> kshlms...@gmail.com
>> Future Gluster Features
>>
>>
>> On Fri, Aug 12, 2016 at 3:16 PM, Amye Scavarda  wrote:
>>
>>> Challenges with Gluster and Persistent Memory
>>>
>>> A discussion of the difficulties posed by persistent memory with Gluster
>>> and short- and long-term steps to address them.
>>>
>>> Persistent memory will significantly improve storage performance. But
>>> these benefits may be hard to realize in Gluster. Gains are mitigated by
>>> costly network overhead and its deep software layers. It is also likely that
>>> the high costs of persistent memory will limit deployments. This talk shall
>>> discuss short- and long-term steps to take on those problems. Possible
>>> strategies include better incorporating high-speed networks such as
>>> InfiniBand, client-side caching of metadata, and centralizing DHT's
>>> layouts. The talk will include discussion and results from a range of
>>> experiments in software and hardware.
>>>
>>> Presenters:
>>> Dan Lambright, Rafi Parambil dlamb...@redhat.com
>>> Future Gluster Features
>>>
>>> On Fri, Aug 12, 2016 at 3:15 PM, Amye Scavarda  wrote:
>>>


 On Fri, Aug 12, 2016 at 12:48 PM, Vijay Bellur 
 wrote:

> Hey All,
>
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are
> looking to have talks and discussions related to the following themes in
> the summit:
>
> 1. Gluster.Next - focusing on features shaping the future of Gluster
>
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other
> ecosystems
>
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
>
> 4. Stability & Performance - focusing on current improvements to
> reduce our technical debt backlog
>
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
>
> If you have a talk/discussion proposal that can be part of these

Re: [Gluster-users] [Gluster-devel] Talks and topics that need presenters (WAS: CFP for Gluster Developer Summit)

2016-08-13 Thread Kaushal M
I was thinking of doing one on the changes to the release process. I will
do that now.

On 14 Aug 2016 02:58, "Niels de Vos"  wrote:

> On Sun, Aug 14, 2016 at 01:31:58AM +0530, Mohammed Rafi K C wrote:
> >
> >
> > On 08/13/2016 01:29 PM, Niels de Vos wrote:
> > > In addition to Vijays request to submit talks, I would like to see some
> > > very specific topics presented/demo'd. Anyone attending the Summit and
> > > willing to take these on is very much encouraged to do so. To do so,
> > > reply to this (or Vijays) email with your name and a description of the
> > > topic.
> > >
> > > If others would like to see other topics, please add them to the list.
> > >
> > > Many thanks,
> > > Niels
> > >
> > >
> > > Practical Glusto example
> > >  - show how to install Glusto and dependencies
> > >  - write a simple new test-case from scratch (copy/paste example?)
> > >  - run the new test-case (in the development environment?)
> > >
> > > Debugging (large) production deployments
> > >  - tools that can be used for debugging on non-development systems
> > >  - filtering logs and other data to identify problems
> > >  - coming up with the root cause of the problem
> > >  - reporting a useful bug so that developers can fix it
> > >
> > > Making troubleshooting easier
> > >  - statedumps, how code tracks allocations, how to read the dumps
> > >  - io-stats, meta and other xlators
> > >  - useful, actionable log messages
> >
> > I was planning to give a talk on the .meta directory and how it can be
> > used to debug a live file system. I could also talk about the statedump,
> > io-stats, etc.
>
> Great, thanks! I guess Amye can mark this as a proposal :)
>
> Niels
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Weekly Community meeting - 10/Aug/2016

2016-08-11 Thread Kaushal M
Hi All,

The meeting minutes and logs for this weeks meeting are available at
the links below.
- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-10/weekly_community_meeting_10aug2016.2016-08-10-12.00.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-10/weekly_community_meeting_10aug2016.2016-08-10-12.00.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-08-10/weekly_community_meeting_10aug2016.2016-08-10-12.00.log.html

We had a very lively meeting this time, with good participation.
I hope next week's meeting is the same. The next meeting is, as
always, at 1200 UTC next Wednesday in #gluster-meeting. See you all
there, and thank you for attending today's meeting.

~kaushal

Meeting summary
---
* Roll call  (kshlm, 12:00:48)

* Next weeks meeting host  (kshlm, 12:04:07)
  * rafi hosts the next meeting  (kshlm, 12:06:23)

* GlusterFS-4.0  (kshlm, 12:06:42)

* GlusterFS-3.9  (kshlm, 12:14:50)
  * ACTION: pranithk/aravindavk/dblack to send out a reminder about the
feature deadline for 3.9  (kshlm, 12:20:33)

* GlusterFS 3.8  (kshlm, 12:21:36)

* GlusterFS-3.7  (kshlm, 12:30:01)

* GlusterFS-3.6  (kshlm, 12:38:27)

* NFS-Ganesha  (kshlm, 12:44:12)

* Samba  (kshlm, 12:49:45)

* Community Infrastructure  (kshlm, 12:54:05)

* Open Floor  (kshlm, 13:02:22)

* Glusto - libraries have been ported by the QE Automation Team and just
  need your +1s on Glusto to begin configuring upstream and make
  available  (kshlm, 13:02:47)

* Need some more reviews for
  https://github.com/gluster/glusterdocs/pull/139  (kshlm, 13:17:05)

Meeting ended at 13:19:53 UTC.




Action Items

* pranithk/aravindavk/dblack to send out a reminder about the feature
  deadline for 3.9




Action Items, by person
---
* **UNASSIGNED**
  * pranithk/aravindavk/dblack to send out a reminder about the feature
deadline for 3.9




People Present (lines said)
---
* kshlm (129)
* ndevos (46)
* obnox (34)
* kkeithley (31)
* nigelb (22)
* ira (20)
* post-factum (17)
* loadtheacc (13)
* msvbhat (10)
* skoduri (8)
* rafi (6)
* anoopcs (3)
* glusterbot (3)
* zodbot (3)
* ankitraj (1)
* misc (1)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS-3.7.14 released

2016-08-03 Thread Kaushal M
On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Hi,
>
> May I ask if multi-threaded self-heal for distributed disperse volumes is
> implemented in this release?

AFAIK, not yet. It's not available on the master branch yet.
Pranith can give a better answer.

>
> Thanks,
> Serkan
>
> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage
> <dgoss...@carouselchecks.com> wrote:
>> On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson
>> <lindsay.mathie...@gmail.com> wrote:
>>>
>>> On 2/08/2016 5:07 PM, Kaushal M wrote:
>>>>
>>>> GlusterFS-3.7.14 has been released. This is a regular minor release.
>>>> The release-notes are available at
>>>>
>>>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md
>>>
>>>
>>> Thanks Kaushal, I'll check it out
>>>
>>
>> So far on my test box its working as expected.  At least the issues that
>> prevented it from running as before have disappeared.  Will need to see how
>> my test VM behaves after a few days.
>>
>>
>>
>>> --
>>> Lindsay Mathieson
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Data encryption

2016-07-28 Thread Kaushal M
On Tue, Jul 26, 2016 at 3:47 PM, Paul Warren  wrote:
> Hi,
>
> New to the list. I am trying to set up data encryption; I currently have SSL
> encryption up and running thanks to the help of kshlm. When I enable the
> option features.encryption on, I unmount and try to remount my client and
> get the following error.
>
> [2016-07-26 09:47:17.792417] E [crypt.c:4307:master_set_master_vol_key]
> 0-data-crypt: FATAL: can not open file with master key
> [2016-07-26 09:47:17.792448] E [MSGID: 101019] [xlator.c:428:xlator_init]
> 0-data-crypt: Initialization of volume 'data-crypt' failed, review your
> volfile again
>
>
> We are running Centos 6.7 and glusterfs-3.7.11-1.el6.x86_64
> with the client being centos 7.2 using glusterfs to mount the share.

I hope you are using the client packages provided by the community and
not the ones that come in the CentOS repo.
The client packages that come with the CentOS repo are repackaged
versions of the RHGS product. They might not be completely compatible
with the community packages.

>
> I've done some googling looking for an answer but I can't seem to find much
> regarding how data encryption works / errors etc. I would have assumed I
> just need to generate a key for the client. But I can't find much info about
> this.
>
> I was following
> http://www.gluster.org/community/documentation/index.php/Features/disk-encryption
> - but this doesn't exist any more.


The feature spec page at [1] has the information required. Section 6.2
has information on generating the key, and section 7 shows how to
enable and use encryption.
Let us know if this works, and if you have suggestions for improvement.

[1]: 
https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.5/Disk%20Encryption.md
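
As a rough sketch of what sections 6.2 and 7 describe (the key
generation command and paths here are illustrative; check the spec for
the exact key format expected), using your volume "data":

    # generate a 256-bit hex master key, readable only by root
    openssl rand -hex 32 > /etc/glusterfs/master.key
    chmod 600 /etc/glusterfs/master.key
    # point the volume at the key (the file must exist on each client),
    # then enable encryption
    gluster volume set data encryption.master-key /etc/glusterfs/master.key
    gluster volume set data features.encryption on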

>
> Thanks
> Paul
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting - 27/Jul/2016

2016-07-27 Thread Kaushal M
We had a pretty well-attended and lively meeting today; thanks to
everyone who came.

The meeting minutes and logs are available at the links below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-27/weekly_community_meeting_27jul2016.2016-07-27-12.02.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-27/weekly_community_meeting_27jul2016.2016-07-27-12.02.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-27/weekly_community_meeting_27jul2016.2016-07-27-12.02.log.html

Next week's meeting will be held at the same time.

Thanks all.

~kaushal


Meeting summary
---
* Roll call  (kshlm, 12:03:19)

* Next weeks meeting host  (kshlm, 12:05:30)
  * ankitraj is next weeks meeting host.  (kshlm, 12:08:31)

* GlusterFS 4.0  (kshlm, 12:08:55)

* GlusterFS-3.9  (kshlm, 12:14:49)

* GlusterFS-3.8  (kshlm, 12:19:33)
  * ACTION: kshlm to send PR for release process document update
(kshlm, 12:22:28)
  * LINK: http://review.gluster.org/14945   (ndevos, 12:24:16)
  * do not backport unneeded changes, save your time for working on
other things and do not risk instability of the 3.8 release :)
(ndevos, 12:25:42)

* GlusterFS-3.7  (kshlm, 12:27:07)

* GlusterFS 3.6  (kshlm, 12:35:19)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-July/050256.html
(kshlm, 12:37:50)

* NFS Ganesha  (kshlm, 12:38:22)

* Samba  (kshlm, 12:44:44)

* Community Infrastructure  (kshlm, 12:53:22)

* Last weeks AIs  (kshlm, 12:57:40)
  * LINK: https://ci.centos.org/view/Gluster/job/gluster_coreutils/
(anoopcs, 13:01:24)
  * LINK: https://github.com/gluster/glusterfs-coreutils   (anoopcs,
13:02:55)

Meeting ended at 13:13:09 UTC.




Action Items

* kshlm to send PR for release process document update




Action Items, by person
---
* kshlm
  * kshlm to send PR for release process document update
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (131)
* ndevos (40)
* pranithk1 (22)
* anoopcs (16)
* hagarth (11)
* paul98 (10)
* kkeithley (9)
* rjoseph (7)
* ankitraj (6)
* jdarcy (6)
* jiffin (4)
* zodbot (3)
* nigelb (2)
* samikshan (1)
* misc (1)
* karthik_ (1)
* ramky (1)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Need a way to display and flush gluster cache ?

2016-07-26 Thread Kaushal M
On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai  wrote:
> +1 to option (2), which is similar to echoing into /proc/sys/vm/drop_caches
>
>  -Prashanth Pai
>
> - Original Message -
>> From: "Mohammed Rafi K C" 
>> To: "gluster-users" , "Gluster Devel" 
>> 
>> Sent: Tuesday, 26 July, 2016 10:44:15 AM
>> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
>>
>> Hi,
>>
>> The Gluster stack has its own caching mechanism, mostly on the client side.
>> But there is no concrete method to see how much memory is being consumed by
>> gluster for caching, and if needed there is no way to flush the cache memory.
>>
>> So my first question is: do we need to implement these two features
>> for the gluster cache?
>>
>>
>> If so I would like to discuss some of our thoughts towards it.
>>
>> (If you are not interested in implementation discussion, you can skip
>> this part :)
>>
>> 1) Implement a virtual xattr on root, and on doing setxattr, flush all
>> the cache, and for getxattr we can print the aggregated cache size.
>>
>> 2) Currently the gluster native client supports a .meta virtual directory
>> to get metadata information, analogous to proc. We can implement a
>> virtual file inside the .meta directory to read the cache size. Also we
>> can flush the cache using a special write into the file (similar to
>> echoing into a proc file). This approach may be difficult to implement in
>> other clients.

+1 for making use of the meta-xlator. We should be making more use of it.
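
For illustration, option 2 could end up looking something like this
from a mount point (the file name is purely hypothetical, since this is
only a proposal):

    # read the aggregated cache size (hypothetical .meta entry)
    cat /mnt/gluster/.meta/cache-size
    # flush the caches with a special write, proc-style (hypothetical)
    echo 1 > /mnt/gluster/.meta/cache-size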

>>
>> 3) A CLI command to display and flush the data, with IP and port as
>> arguments. GlusterD would need to send the op to clients from the connected
>> client list. But this approach would be difficult to implement for
>> libgfapi-based clients. To me, it doesn't seem to be a good option.
>>
>> Your suggestions and comments are most welcome.
>>
>> Thanks to Talur and Poornima for their suggestions.
>>
>> Regards
>>
>> Rafi KC
>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS-3.7.13 released!

2016-07-20 Thread Kaushal M
Apologies for the late announcement.

GlusterFS-3.7.13 has been released. This release fixes 2 serious
libgfapi bugs and several other bugs. The release notes can be found
at [1].

The source tarball and prebuilt packages can be downloaded from [2].

Please report any bugs found using [3].

Thanks,
Kaushal

[1] 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.13.md
[2] https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.13/
[3] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.7.13
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Trouble rolling back to 3.7.11 on Ubuntu

2016-07-20 Thread Kaushal M
I'm assuming you already have liburcu >= 0.7 installed, because it is
needed to build gluster.
How did you install it? Was it via source or a package?
Could you also run `ldd` on the glusterd xlator and share the
results? The command should be `ldd
/usr/lib/glusterfs/3.7.11/xlator/mgmt/glusterd.so`.
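
For reference, a quick way to check what liburcu the dynamic linker can
find is something like:

    # list the liburcu libraries known to the dynamic linker
    ldconfig -p | grep liburcu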

On Wed, Jul 20, 2016 at 1:36 PM, tommy.yard...@baesystems.com
 wrote:
> Yes sorry, I am already running it with sudo
>
> Thanks,
> Tommy
>
> -Original Message-
> From: Lindsay Mathieson [mailto:lindsay.mathie...@gmail.com]
> Sent: 20 July 2016 09:04
> To: Yardley, Tommy (UK Guildford)
> Cc: Atin Mukherjee; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Trouble rolling back to 3.7.11 on Ubuntu
>
> On 20 July 2016 at 18:01, tommy.yard...@baesystems.com 
>  wrote:
>> looks like an error I saw on the mailing list previously which
>> suggested running ``` ldconfig``` which I have – doesn’t seem to fix things.
>
>
> Did you try "sudo ldconfig" ?
>
> --
> Lindsay
> Please consider the environment before printing this email. This message 
> should be regarded as confidential. If you have received this email in error 
> please notify the sender and destroy it immediately. Statements of intent 
> shall only become binding when confirmed in hard copy by an authorised 
> signatory. The contents of this email may relate to dealings with other 
> companies under the control of BAE Systems Applied Intelligence Limited, 
> details of which can be found at 
> http://www.baesystems.com/Businesses/index.htm.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Weekly community meeting - 06/Jul/2016

2016-07-06 Thread Kaushal M
Today's meeting didn't go according to the agenda, as we initially had
low attendance. Overall attendance was low as well, owing to a holiday
in Bangalore.

The meeting minutes and logs for the meeting are available at the links below,
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-06/weekly_community_meeting_06jul2016.2016-07-06-12.09.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-06/weekly_community_meeting_06jul2016.2016-07-06-12.09.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-06/weekly_community_meeting_06jul2016.2016-07-06-12.09.log.html

Next week's meeting will be held at the same time again.

See you all next week.

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12/3.8.qemu/proxmox testing

2016-07-04 Thread Kaushal M
On Mon, Jul 4, 2016 at 10:46 AM, Kaushal M <kshlms...@gmail.com> wrote:
> On Mon, Jul 4, 2016 at 9:47 AM, Dmitry Melekhov <d...@belkam.com> wrote:
>> 01.07.2016 07:31, Lindsay Mathieson пишет:
>>>
>>> Started a new thread for this to get away from the somewhat panicky
>>> subject line ...
>>>
>>> Some more test results. I built pve-qemu-kvm against gluster 3.8 and
>>> installed it, which I hoped would remove any libglusterfs version
>>> issues.
>>>
>>> Unfortunately it made no difference - same problems emerged.
>>>
>> Hello!
>>
>> I guess there is a problem on the server side, because in CentOS 7 libgfapi
>> is dynamically linked,
>> and, thus, is automatically upgraded. But we have the same problem.
>>
>
> Thanks for the updates guys. This does indicate something has changed
> in libgfapi with the latest update.
> We are still trying to identify the cause, and will keep you updated on this.
>

An update on this: we are tracking this issue on Bugzilla [1].
I've added some of the observations made so far to the bug, and am
copying them here.

```
With qemu-img at least the hangs happen when creating qcow2 images.
The command doesn't hang when creating raw images.

When creating a qcow2 image, qemu-img appears to be reloading the
glusterfs graph several times. This can be observed in the attached
log where qemu-img is run against glusterfs-3.7.11.

With glusterfs-3.7.12, this doesn't happen, as an early writev failure
happens on the brick transport with an EFAULT (Bad address) errno (see
the attached log). No further actions happen after this, and the qemu-img
command hangs till the RPC ping-timeout happens and then fails.
```
~kaushal
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1352482

>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12/3.8.qemu/proxmox testing

2016-07-03 Thread Kaushal M
On Mon, Jul 4, 2016 at 9:47 AM, Dmitry Melekhov  wrote:
> 01.07.2016 07:31, Lindsay Mathieson пишет:
>>
>> Started a new thread for this to get away from the somewhat panicky
>> subject line ...
>>
>> Some more test results. I built pve-qemu-kvm against gluster 3.8 and
>> installed it, which I hoped would remove any libglusterfs version
>> issues.
>>
>> Unfortunately it made no difference - same problems emerged.
>>
> Hello!
>
> I guess there is a problem on the server side, because in CentOS 7 libgfapi
> is dynamically linked,
> and, thus, is automatically upgraded. But we have the same problem.
>

Thanks for the updates guys. This does indicate something has changed
in libgfapi with the latest update.
We are still trying to identify the cause, and will keep you updated on this.

>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier  wrote:
>>
>> Replicated the problem with 3.7.12 *and* 3.8.0 :(
>>
>
> Yeah, I tried 3.8 when it came out too and I had to use the fuse mount point
> to get the VMs to work. I just assumed proxmox wasn't compatible yet with 3.8 
> (since
> the menu were a bit wonky anyway) but I guess it was the same bug.
>

I was able to reproduce the hang as well against 3.7.12.

I tested by installing the pve-qemu-kvm package from the Proxmox
repositories in a Debian Jessie container, as the default Debian qemu
packages don't link with glusterfs.
I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.

I tried to create an image on a simple 1 brick gluster volume using qemu-img.
The qemu-img command succeeded against a 3.7.11 volume, but hung
against 3.7.12, finally timing out and failing after the ping-timeout.
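
For anyone who wants to reproduce this, the test was essentially the
following (the host and volume names are placeholders):

    # hangs on 3.7.12, succeeds on 3.7.11
    qemu-img create -f qcow2 gluster://server1/testvol/test.qcow2 1G
    # raw images are unaffected
    qemu-img create -f raw gluster://server1/testvol/test.raw 1G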

We can at least be happy that this issue isn't due to any bugs in AFR.

I was testing this with Raghavendra, and we are wondering if this is
a result of changes to libglusterfs and libgfapi that were
introduced in 3.7.12 and 3.8.
Any app linking with libgfapi also needs to link with libglusterfs.
While we have some sort of versioning for libgfapi, we don't have any
for libglusterfs.
This has caused problems before (I cannot find any links for this
right now though).
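
One quick way to see which gluster libraries an application actually
pulls in is something like this (illustrative; the binary path may
differ on your system):

    ldd /usr/bin/qemu-img | grep -E 'libgfapi|libglusterfs'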

The pve-qemu-kvm package was last built or updated in January this
year [1], and I think it was built against glusterfs-3.5.2, which is
the latest version of glusterfs in the Proxmox sources [2].
Maybe the pve-qemu-kvm package needs a rebuild.

We'll continue to try to figure out what the actual issue is though.

~kaushal

> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Disappearance of glusterfs-3.7.11-2.el6.x86_64 and dependencies

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 4:33 PM, Milos Kurtes  wrote:
> Hi,
>
> yesterday and the day before the package was there, but now it is not.
>
> http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm:
> [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not
> Found"

To get the latest 3.7 packages you should be using
http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo
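
If it helps, switching over is roughly the following (the repo file
name is the one historically shipped at that location, so treat the
exact path as illustrative):

    # replace the old repo definition with the 3.7 one
    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
      http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/glusterfs-epel.repo
    yum clean metadata
    yum update glusterfs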

>
> The package is still in the yum list obtained from the repository.
>
> What happened?
>
> When will be available again?
>
> Milos Kurtes, System Administrator, Alison Co.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 12:29 PM, Kevin Lemonnier  wrote:
>>Glad that for both of you, things are back to normal. Could one of you
>>help us find what is the problem you are facing with libgfapi, if you have
>>any spare test machines. Otherwise we need to understand proxmox etc which
>>may take a bit more time.
>
> Sure, I have my test cluster working now using NFS but I can create other VMs
> using the lib to test if needed. What would you need? Unfortunately creating
> a VM on gluster through the lib doesn't work and I don't know how to get the 
> logs of that,
> the only error I get is this in proxmox logs :
>
> Jun 29 13:26:25 s2.name pvedaemon[2803]: create failed - unable to create 
> image: got lock timeout - aborting command
> Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485296] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-3: server 
> 172.16.0.2:49153 has not responded in the last 42 seconds, disconnecting.
> Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485407] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-4: server 
> 172.16.0.3:49153 has not responded in the last 42 seconds, disconnecting.
> Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485443] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-5: server 
> 172.16.0.50:49153 has not responded in the last 42 seconds, disconnecting.
>

I need some quick info from you guys: which packages are you using? Are
you using any of the packages built by the community (i.e. from
download.gluster.org, Launchpad, the CentOS Storage SIG, etc.)?
We are wondering if the issues you are facing are the same as those
fixed by https://review.gluster.org/14822 .
The packages that we have built contain this patch, so if you
are facing problems with these packages, we can be sure it's a new
issue.
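
For reference, one way to check where an installed package came from
(illustrative commands; the exact fields vary by distro):

    # RPM-based systems
    rpm -qi glusterfs-server | grep -E 'Vendor|Build'
    # Debian/Ubuntu
    apt-cache policy glusterfs-server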

>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly community meeting - 29/Jun/2016

2016-06-29 Thread Kaushal M
The meeting minutes for today's meeting are available at the following links,

- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-29/weekly_community_meeting_-_29jun2016.2016-06-29-12.00.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-29/weekly_community_meeting_-_29jun2016.2016-06-29-12.00.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-29/weekly_community_meeting_-_29jun2016.2016-06-29-12.00.log.html

Next week's meeting is scheduled at the same time.

Thank you to all who attended today's meeting.

~kaushal



Meeting summary
---
* Rollcall  (kshlm, 12:01:02)

* Next week's meeting host  (kshlm, 12:04:28)
  * kkeithley is next weeks meeting host  (kshlm, 12:05:38)

* GlusterFS 4.0  (kshlm, 12:05:46)
  * LINK: https://app.wercker.com/#applications/560a6c0971f137d02f0ad371
(kshlm, 12:10:56)
  * ACTION: kshlm to setup GD2 CI on centos-ci  (kshlm, 12:14:47)

* GlusterFS-3.9  (kshlm, 12:21:29)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-June/049923.html
(kshlm, 12:22:31)

* GlusterFS-3.8  (kshlm, 12:26:27)
  * component maintainers may merge changes, as long as they stick to
the patch acceptance criteria mentioned in the release process
(ndevos, 12:30:18)

* GlusterFS-3.7  (kshlm, 12:34:02)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
(kshlm, 12:35:20)
  * LINK: https://www.gluster.org/community/release-schedule/   (ndevos,
12:39:35)
  * AGREED: 3.7.13 to be released on schedule for 30th  (kshlm,
12:48:53)

* GlusterFS-3.6  (kshlm, 12:50:01)

* NFS-Ganesha  (kshlm, 12:51:57)

* Last weeks AIs.  (kshlm, 13:04:01)

Meeting ended at 13:10:02 UTC.




Action Items

* kshlm to setup GD2 CI on centos-ci




Action Items, by person
---
* kshlm
  * kshlm to setup GD2 CI on centos-ci
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (114)
* ndevos (58)
* kkeithley (17)
* asengupt (17)
* rjoseph (12)
* atinm (11)
* msvbhat (5)
* skoduri (3)
* post-factum (3)
* zodbot (3)
* karthik___ (1)
* poornimag (1)
* glusterbot (1)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Need urgent help to downgrade 3.7.12 to 3.7.11

2016-06-29 Thread Kaushal M
You need to edit `/var/lib/glusterd/glusterd.info` manually on all
the nodes to reduce the op-version. Set it to the version you want,
and then start glusterd.
Make sure you edit the file on all nodes before starting even one glusterd.
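
A minimal sketch of the procedure, assuming op-version 30710 (3.7.11)
is the target:

    # on every node, with glusterd stopped:
    systemctl stop glusterd
    sed -i 's/^operating-version=.*/operating-version=30710/' \
        /var/lib/glusterd/glusterd.info
    # only after the file is edited on ALL nodes:
    systemctl start glusterd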

On Wed, Jun 29, 2016 at 6:03 PM, Lindsay Mathieson
 wrote:
> My 3.7.12 upgrade has rendered my vm cluster unusable and I urgently need to
> downgrade to a working version.
>
>
> I can't set the op-version back from 30712 to 30710, I get error:
>
> gluster v set all  op-version 30710
> volume set: failed: Required op-version (30710) should not be equal or lower
> than current cluster op-version (30712)
>
>
> If I revert the gluster packages to 3.7.11 then the gluster servcie refuses
> to start.
>
>
> BTW, current settings are:
>
>
> Volume Name: datastore4
> Type: Replicate
> Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
> Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
> Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
> Options Reconfigured:
> cluster.background-self-heal-count: 16
> cluster.self-heal-window-size: 1024
> performance.readdir-ahead: on
> features.shard: on
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> nfs.disable: on
> nfs.addr-namelookup: off
> nfs.enable-ino32: off
> performance.strict-write-ordering: off
> performance.stat-prefetch: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> features.shard-block-size: 64MB
> cluster.locking-scheme: full
>
>
>
>
> --
> Lindsay Mathieson
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-29 Thread Kaushal M
Hi Lindsay,

Can you share the glusterd log and the glfsheal log for the volume
from the system on which you ran the heal command?
This will help us understand why the volfile fetch failed.

The files will be `/var/log/glusterfs/etc-glusterfs-glusterd.vol.log`
and `/var/log/glusterfs/glfsheal-<volname>.log`.



On Wed, Jun 29, 2016 at 2:26 PM,   wrote:
> Yes, but I hadn't restarted the servers either, so the clients (qemu/gfapi)
> were still 3.7.11 until then.
>
>
>
> Still have same problems after reverting the settings.
>
>
>
> Waiting for heal to finish before I revert to 3.7.11
>
>
>
> Any advice on the best way to use apt for that?
>
>
>
> Sent from my Windows 10 phone
>
>
>
> From: Kevin Lemonnier
> Sent: Wednesday, 29 June 2016 6:49 PM
> To: gluster-users@gluster.org
> Subject: Re: [Gluster-users] 3.7.12 disaster
>
>
>
>> cluster.shd-max-threads:4
>
>> cluster.locking-scheme:granular
>
>
>
> So you had no problems before setting that ? I'm currently re-installing my
> test
>
> servers, as you can imagine really really hoping 3.7.12 fixes the corruption
> problem,
>
> I hope there isn't a new horrible bug ..
>
>
>
> --
>
> Kevin Lemonnier
>
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

