[Gluster-devel] freebsd-smoke failures

2016-04-01 Thread Jeff Darcy
I've seen a lot of patches blocked lately by this:

> BD xlator requested but required lvm2 development library not found.

It doesn't happen all the time, so there must be something about
certain patches that triggers it.  Any thoughts?
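(For anyone hitting the same error in a local build: it comes from configure
when the BD xlator is requested on a machine that lacks the lvm2 development
headers.  A possible local workaround, assuming the usual configure switch,
is to leave the BD xlator disabled:

    ./autogen.sh
    ./configure --disable-bd-xlator

That doesn't explain why the smoke jobs only fail intermittently, though.)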
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Reminder: adding source files

2016-04-01 Thread Jeff Darcy
If you add a file to the project, please remember to add it to the
appropriate Makefile.am as well.  Failure to do so *will not show up* in
our standard smoke/regression tests (those do "make install" straight
from the source tree, where the new file is still present), but it will
prevent RPMs (and probably their equivalents on other distros/platforms)
from building.  To see why, let's review how building real packages works.

 * First there's autogen and configure, which operate in your original
   source directory, turning .am and .in files into real
   platform-specific makefiles and scripts.

 * Then there's "make dist-gzip" and friends, which package up *only
   files specified in makefiles* into a tarball.

 * The rpmbuild "prep" stage unpacks this tarball and applies patches,
   typically in ~/rpmbuild/BUILD.

 * The rpmbuild "compile" stage does what you'd expect within BUILD.

 * The rpmbuild "install" stage does a "make install" in BUILD, which
   populates BUILDROOT.
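For reference, here is roughly what that flow looks like on the command
line (a sketch only; the spec file name and rpmbuild layout are generic
examples, not our exact packaging scripts):

    ./autogen.sh && ./configure
    make dist-gzip                      # tarball contains only files named in the makefiles
    cp glusterfs-*.tar.gz ~/rpmbuild/SOURCES/
    cp glusterfs.spec ~/rpmbuild/SPECS/
    rpmbuild -ba ~/rpmbuild/SPECS/glusterfs.spec   # prep, compile, install into BUILDROOT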

There's more, but those are the key parts.  With that in mind, we can
look at various options for adding new source files.

 * xxx_SOURCES go into the dist tarball and become available in BUILD
   for the compilation phase.  xxx_HEADERS do likewise, and also get
   populated into BUILDROOT during the install phase.

 * noinst_HEADERS inhibits propagation into BUILDROOT (making headers
   act like sources), and is generally what you'd want for headers that
   are not specifically intended to be in -devel packages.

 * nodist_xxx_SOURCES can be used to specify files that are needed to
   build xxx, but will not be in the dist tarball.  This is what you'd
   use for generated files.

 * Anything not specified in any of these ways never even makes it to
   the dist tarball.
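A cheap way to catch the most common mistake before pushing is to build a
dist tarball locally and make sure the new file is actually in it (a
sketch; foo.c is a placeholder for whatever file you added):

    ./autogen.sh && ./configure
    make dist-gzip
    tar -tzf glusterfs-*.tar.gz | grep foo.c \
        || echo "foo.c missing from the tarball -- check the Makefile.am"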

There are different rules for other things like Python scripts, but I
don't remember those very well.  Also, I might have gotten some of the
above slightly wrong; corrections from people who know this stuff even
better than I do (Kaleb?) are welcome.  The most important point is
this:

"git add" is not enough!
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Package Matrix, tentative

2016-04-01 Thread Kaleb S. KEITHLEY
Hi,

With the imminent release of 3.8 in a few weeks, here's a summary of the
Linux packages that are
tentatively planned going forward.

Note that 3.5 will reach end-of-life (EOL) when 3.8 is released, and no
further releases will be
made on the release-3.5 branch.

(I haven't included NetBSD or FreeBSD here, only because they're not
Linux and we have little control
over them.)

An X means packages are planned to be in the repository.
A — means we have no plans to build the version for the repository.
d.g.o means packages will (also) be provided on https://download.gluster.org
DNF/YUM means the packages are included in the Fedora updates or
updates-testing repos.
 


                        3.8       3.7        3.6        3.5
CentOS Storage SIG¹
  el5                   —         d.g.o      d.g.o      d.g.o
  el6                   X         X, d.g.o   X, d.g.o   d.g.o
  el7                   X         X, d.g.o   X, d.g.o   d.g.o

Fedora
  F22                   —         d.g.o      DNF/YUM    d.g.o
  F23                   d.g.o     DNF/YUM    d.g.o      d.g.o
  F24                   DNF/YUM   d.g.o      d.g.o      d.g.o
  F25                   DNF/YUM   d.g.o      d.g.o      d.g.o

Ubuntu Launchpad²
  Precise (12.04 LTS)   —         X          X          X
  Trusty (14.04 LTS)    X         X          X          X
  Wily (15.10)          X         X          X          X
  Xenial (16.04 LTS)    X         X          X          —

Debian
  Wheezy (7)            —         d.g.o      d.g.o      d.g.o
  Jessie (8)            d.g.o     d.g.o      d.g.o      d.g.o
  Stretch (9)           d.g.o     d.g.o      d.g.o      —

SuSE Build System³
  OpenSuSE13            X         X          X          X
  Leap 42.1             X         X          —          —
  SLES11                —         —          X          X
  SLES12                X         X          X          —


¹ https://wiki.centos.org/SpecialInterestGroup/Storage
² https://launchpad.net/~gluster
³ https://build.opensuse.org/project/subprojects/home:kkeithleatredhat

-- Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] WORM/Retention Feature: 01-04-2016

2016-04-01 Thread Karthik Subrahmanya
Hi all,

This week's status:

- Tested the program with different volume configurations
- Exploring the gluster test framework
- Writing a regression test for the feature


Plan for next week:

- Completing the regression test
- Handling the issue with the smoke regression


Current work:

POC: http://review.gluster.org/#/c/13429/
Spec: http://review.gluster.org/13538
Feature page: 
http://www.gluster.org/community/documentation/index.php/Features/gluster_compliance_archive

Your valuable suggestions, reviews, and wish lists are most welcome.

Thanks & Regards,
Karthik Subrahmanya
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Requesting for help with gluster test framework

2016-04-01 Thread Karthik Subrahmanya


- Original Message -
From: "Prasanna Kalever" 
To: "Karthik Subrahmanya" 
Cc: "Joseph Fernandes" , "Raghavendra Talur" 
, "Vijaikumar Mallikarjuna" , 
"gluster-devel" 
Sent: Friday, April 1, 2016 5:52:47 PM
Subject: Re: [Gluster-devel] Requesting for help with gluster test framework

On Fri, Apr 1, 2016 at 5:37 PM, Karthik Subrahmanya  wrote:
>
> Hi all,
>
> As I am trying to write a test for the WORM translator
> which I am working on right now, I am facing some issues
> while executing the test framework.
> I followed the steps in
> https://github.com/gluster/glusterfs/blob/master/tests/README.md
>
>
> [Issue #1]
> While running the run-tests.sh
>
> ... GlusterFS Test Framework ...
>
>
> ==
> Running tests in file ./tests/basic/0symbol-check.t
> [11:48:09] ./tests/basic/0symbol-check.t .. Dubious, test returned 1 (wstat 
> 256, 0x100)
> No subtests run
> [11:48:09]
>
> Test Summary Report
> ---
> ./tests/basic/0symbol-check.t (Wstat: 256 Tests: 0 Failed: 0)
>   Non-zero exit status: 1
>   Parse errors: No plan found in TAP output
> Files=1, Tests=0,  0 wallclock secs ( 0.01 usr +  0.00 sys =  0.01 CPU)
> Result: FAIL
> End of test ./tests/basic/0symbol-check.t
> ==
>
>
> Run complete
> 1 test(s) failed
> ./tests/basic/0symbol-check.t
> 0 test(s) generated core
>
> Slowest 10 tests:
> ./tests/basic/0symbol-check.t  -  1
> Result is 1
>
>
>
> [Issue #2]
> While running a single .t file using "prove -vf"
>
> tests/features/worm.t ..
> Aborting.
> Aborting.
>
> env.rc not found
> env.rc not found
>
> Please correct the problem and try again.
> Please correct the problem and try again.
>
> Dubious, test returned 1 (wstat 256, 0x100)
> No subtests run
>
> Test Summary Report
> ---
> tests/features/worm.t (Wstat: 256 Tests: 0 Failed: 0)
>   Non-zero exit status: 1
>   Parse errors: No plan found in TAP output
> Files=1, Tests=0,  0 wallclock secs ( 0.02 usr +  0.01 sys =  0.03 CPU)
> Result: FAIL
>

This is due to a lack of configuration; run ./autogen.sh && ./configure
and then try to run the tests again.

Thank you for the prompt reply and suggestion. It's working now.

--
Prasanna

>
> It would be awesome if someone can guide me with this.
>
> Thanks & Regards,
> Karthik Subrahmanya
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Requesting for help with gluster test framework

2016-04-01 Thread Prasanna Kalever
On Fri, Apr 1, 2016 at 5:37 PM, Karthik Subrahmanya  wrote:
>
> Hi all,
>
> As I am trying to write a test for the WORM translator
> which I am working on right now, I am facing some issues
> while executing the test framework.
> I followed the steps in
> https://github.com/gluster/glusterfs/blob/master/tests/README.md
>
>
> [Issue #1]
> While running the run-tests.sh
>
> ... GlusterFS Test Framework ...
>
>
> ==
> Running tests in file ./tests/basic/0symbol-check.t
> [11:48:09] ./tests/basic/0symbol-check.t .. Dubious, test returned 1 (wstat 
> 256, 0x100)
> No subtests run
> [11:48:09]
>
> Test Summary Report
> ---
> ./tests/basic/0symbol-check.t (Wstat: 256 Tests: 0 Failed: 0)
>   Non-zero exit status: 1
>   Parse errors: No plan found in TAP output
> Files=1, Tests=0,  0 wallclock secs ( 0.01 usr +  0.00 sys =  0.01 CPU)
> Result: FAIL
> End of test ./tests/basic/0symbol-check.t
> ==
>
>
> Run complete
> 1 test(s) failed
> ./tests/basic/0symbol-check.t
> 0 test(s) generated core
>
> Slowest 10 tests:
> ./tests/basic/0symbol-check.t  -  1
> Result is 1
>
>
>
> [Issue #2]
> While running a single .t file using "prove -vf"
>
> tests/features/worm.t ..
> Aborting.
> Aborting.
>
> env.rc not found
> env.rc not found
>
> Please correct the problem and try again.
> Please correct the problem and try again.
>
> Dubious, test returned 1 (wstat 256, 0x100)
> No subtests run
>
> Test Summary Report
> ---
> tests/features/worm.t (Wstat: 256 Tests: 0 Failed: 0)
>   Non-zero exit status: 1
>   Parse errors: No plan found in TAP output
> Files=1, Tests=0,  0 wallclock secs ( 0.02 usr +  0.01 sys =  0.03 CPU)
> Result: FAIL
>

This is due to a lack of configuration; run ./autogen.sh && ./configure
and then try to run the tests again.
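
In other words, starting from a fresh checkout (a minimal sketch; the
test framework expects glusterfs built and installed from the same tree,
and the tests are normally run as root):

    ./autogen.sh
    ./configure
    make && make install
    prove -vf tests/features/worm.t    # or ./run-tests.sh for the full suite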

--
Prasanna

>
> It would be awesome if someone can guide me with this.
>
> Thanks & Regards,
> Karthik Subrahmanya
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Requesting for help with gluster test framework

2016-04-01 Thread Karthik Subrahmanya
Hi all,

As I am trying to write a test for the WORM translator
which I am working on right now, I am facing some issues
while executing the test framework.
I followed the steps in
https://github.com/gluster/glusterfs/blob/master/tests/README.md


[Issue #1]
While running the run-tests.sh

... GlusterFS Test Framework ...


==
Running tests in file ./tests/basic/0symbol-check.t
[11:48:09] ./tests/basic/0symbol-check.t .. Dubious, test returned 1 (wstat 
256, 0x100)
No subtests run 
[11:48:09]

Test Summary Report
---
./tests/basic/0symbol-check.t (Wstat: 256 Tests: 0 Failed: 0)
  Non-zero exit status: 1
  Parse errors: No plan found in TAP output
Files=1, Tests=0,  0 wallclock secs ( 0.01 usr +  0.00 sys =  0.01 CPU)
Result: FAIL
End of test ./tests/basic/0symbol-check.t
==


Run complete
1 test(s) failed 
./tests/basic/0symbol-check.t
0 test(s) generated core 

Slowest 10 tests: 
./tests/basic/0symbol-check.t  -  1
Result is 1



[Issue #2]
While running a single .t file using "prove -vf"

tests/features/worm.t .. 
Aborting.
Aborting.

env.rc not found
env.rc not found

Please correct the problem and try again.
Please correct the problem and try again.

Dubious, test returned 1 (wstat 256, 0x100)
No subtests run 

Test Summary Report
---
tests/features/worm.t (Wstat: 256 Tests: 0 Failed: 0)
  Non-zero exit status: 1
  Parse errors: No plan found in TAP output
Files=1, Tests=0,  0 wallclock secs ( 0.02 usr +  0.01 sys =  0.03 CPU)
Result: FAIL


It would be awesome if someone can guide me with this.

Thanks & Regards,
Karthik Subrahmanya
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] "Tiering Performance Enhancements" is at risk for 3.8

2016-04-01 Thread Dan Lambright


- Original Message -
> From: "Niels de Vos" 
> To: "Dan Lambright" , "Joseph Fernandes" 
> 
> Cc: gluster-devel@gluster.org
> Sent: Friday, April 1, 2016 4:08:25 AM
> Subject: "Tiering Performance Enhancements" is at risk for 3.8
> 
> Hi,
> 
> the feature labelled "Tiering Performance Enhancements" did not receive
> any status updates by pull request to the 3.8 roadmap. We have now moved
> this feature to the new "at risk" category on the page. If there is
> still an intention to include this feature with 3.8, we encourage you
> send an update for the roadmap soon. This can easily be done by clicking
> the "edit this page" link on the bottom of the roadmap:
> 
>   https://www.gluster.org/community/roadmap/3.8/
> 
> If there is no update within a week, we'll move the feature to the next
> release.

This is related to EC as a cold tier; we can rename the feature to clarify that.

One fix from Pranith helps; I'll confirm with him that it will be in 3.8.

Another fix from me is waiting on results from Manoj.

Once I hear back, I'll update the page.

> 
> Thanks,
> Jiffin and Niels
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] "Samba and NFS-Ganesha support for tiered volumes" is at risk for 3.8

2016-04-01 Thread Dan Lambright


- Original Message -
> From: "Niels de Vos" 
> To: "Dan Lambright" , "Joseph Fernandes" 
> 
> Cc: gluster-devel@gluster.org
> Sent: Friday, April 1, 2016 4:09:11 AM
> Subject: "Samba and NFS-Ganesha support for tiered volumes" is at risk for 3.8
> 
> Hi,
> 
> the feature labelled "Samba and NFS-Ganesha support for tiered volumes"
> did not receive any status updates by pull request to the 3.8 roadmap.
> We have now moved this feature to the new "at risk" category on the
> page. If there is still an intention to include this feature with 3.8,
> we encourage you send an update for the roadmap soon. This can easily be
> done by clicking the "edit this page" link on the bottom of the roadmap:
> 
>   https://www.gluster.org/community/roadmap/3.8/
> 
> If there is no update within a week, we'll move the feature to the next
> release.

I do not foresee completion within the 3.8 timeframe. There is work underway,
but it will take time.

> 
> Thanks,
> Jiffin and Niels
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread ABHISHEK PALIWAL
Please check the cmd-history file in the attached logs of Board A, in which
we run remove-brick at 11:56:48, because at that time the brick was still
not online after waiting for 1 minute.

So we need to identify why the brick was not online around that time.

If the brick were online we would not remove it; and when we do remove the
brick, the volume becomes distributed, which also causes problems in our case.
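
A minimal sketch of the kind of wait-for-online check described above (the
volume and brick names are taken from the attached logs, purely as examples):

    # Poll "gluster volume status" for up to 60 seconds, breaking out as
    # soon as the brick shows up as online (Online column = Y).
    for i in $(seq 1 60); do
        gluster volume status c_glusterfs | grep '/opt/lvmdir/c2/brick' | grep -q ' Y ' && break
        sleep 1
    done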

Regards,
Abhishek

On Fri, Apr 1, 2016 at 4:08 PM, Atin Mukherjee  wrote:

>
>
> On 04/01/2016 03:57 PM, ABHISHEK PALIWAL wrote:
> >
> >
> > On Fri, Apr 1, 2016 at 3:39 PM, Atin Mukherjee  > > wrote:
> >
> >
> >
> > On 04/01/2016 03:06 PM, ABHISHEK PALIWAL wrote:
> > >
> > > On Fri, Apr 1, 2016 at 2:59 PM, Atin Mukherjee <
> amukh...@redhat.com 
> > > >> wrote:
> > >
> > >
> > >
> > > On 04/01/2016 02:55 PM, ABHISHEK PALIWAL wrote:
> > > > Hi Atin,
> > > >
> > > > Thanks for reply.
> > > >
> > > > Could you please help me to identify the error log in the
> respective
> > > > brick log file. I tried but not able to identified where the
> problem is
> > > > occuring.
> > > >
> > > > I am attaching the brick log file which is not coming online
> even after
> > > > waiting for 1 minute.
> > > What time did you reboot B? Could you also attach glusterd log
> file
> > > (complete log) for board B?
> > > >
> > >
> > > it is hard to say at what time we rebooted the B board because we
> are
> > > continuously rebooting the board B.
> > >
> > > Here I am attaching the glusterd and glsuterfs log for board B.
> > I can see the last restart of glusterd was at 13:11:27 and as per the
> > brick log there was no restart. This doesn't look like a reboot of
> the
> > board as in that case brick process should have also died and
> restarted.
> > Brick log indicates that the process is still running and there is no
> > interruption.
> >
> > In brick log file at 13:11:27 we have the logs showing reboot of board
> >
> > [2016-03-31 13:11:27.408512] I [MSGID: 115036]
> > [server.c:552:server_rpc_notify] 0-c_glusterfs-server: disconnecting
> > connection from
> > 002500-10939-2016/03/31-11:56:52:528771-c_glusterfs-client-11-0-0
> > [2016-03-31 13:11:27.408603] I [MSGID: 101055]
> > [client_t.c:419:gf_client_unref] 0-c_glusterfs-server: Shutting down
> > connection
> > 002500-10939-2016/03/31-11:56:52:528771-c_glusterfs-client-11-0-0
> FYI..this log indicates client disconnection.
> >
> > >
> > > > Regards,
> > > > Abhishek
> > > >
> > > > On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee <
> amukh...@redhat.com 
> > >
> > > > 
> >  > > >
> > > >
> > > >
> > > > On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > > > > Hi,
> > > > >
> > > > >
> > > > > I have the setup of two boards A and B with two bricks
> in
> > > replica mode.
> > > > >
> > > > > There is one test scenario
> > > > >
> > > > > 1. A acts as an active board and having the glusterfs
> > mount
> > > point on it.
> > > > > 2. B acts as Passive board.
> > > > > 3. We are repetitively rebooting the B board (In this
> time
> > > period peer
> > > > > status on A board will be "peer in cluster
> (Disconnected)"
> > > and brick is
> > > > > not present in "gluster volume status") and when Board
> B
> > > comes up,
> > > > > starts the gluster daemon.
> > > > > 4. if Gluster daemon starts successfully it will make
> > "peer in
> > > > > cluster(Connected)"
> > > > > 5. At the same with the immediate effect "gluster
> volume
> > > status" command
> > > > > should show the brick is available in online.
> > > > >
> > > > >
> > > > > But in my case sometime step 5 takes immediate
> reflection
> > > sometime 10-15
> > > > > second and sometime doesn't show brick is online even
> > after
> > > the 1minute.
> > > > >
> > > > > Could you please confirm why this type of unpredictable
> > > behavior is
> > > > > occuring. It should be reflect with immediate effect in
> > > "gluster volume
> > > > > status" command.
> > > > When glusterd restarts bricks processes are brought up
> > > asynchronously
> > > > 

Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread Atin Mukherjee


On 04/01/2016 03:57 PM, ABHISHEK PALIWAL wrote:
> 
> 
> On Fri, Apr 1, 2016 at 3:39 PM, Atin Mukherjee  > wrote:
> 
> 
> 
> On 04/01/2016 03:06 PM, ABHISHEK PALIWAL wrote:
> >
> > On Fri, Apr 1, 2016 at 2:59 PM, Atin Mukherjee  
> > >> wrote:
> >
> >
> >
> > On 04/01/2016 02:55 PM, ABHISHEK PALIWAL wrote:
> > > Hi Atin,
> > >
> > > Thanks for reply.
> > >
> > > Could you please help me to identify the error log in the 
> respective
> > > brick log file. I tried but not able to identified where the 
> problem is
> > > occuring.
> > >
> > > I am attaching the brick log file which is not coming online even 
> after
> > > waiting for 1 minute.
> > What time did you reboot B? Could you also attach glusterd log file
> > (complete log) for board B?
> > >
> >
> > it is hard to say at what time we rebooted the B board because we are
> > continuously rebooting the board B.
> >
> > Here I am attaching the glusterd and glsuterfs log for board B.
> I can see the last restart of glusterd was at 13:11:27 and as per the
> brick log there was no restart. This doesn't look like a reboot of the
> board as in that case brick process should have also died and restarted.
> Brick log indicates that the process is still running and there is no
> interruption.
> 
> In brick log file at 13:11:27 we have the logs showing reboot of board
> 
> [2016-03-31 13:11:27.408512] I [MSGID: 115036]
> [server.c:552:server_rpc_notify] 0-c_glusterfs-server: disconnecting
> connection from
> 002500-10939-2016/03/31-11:56:52:528771-c_glusterfs-client-11-0-0
> [2016-03-31 13:11:27.408603] I [MSGID: 101055]
> [client_t.c:419:gf_client_unref] 0-c_glusterfs-server: Shutting down
> connection
> 002500-10939-2016/03/31-11:56:52:528771-c_glusterfs-client-11-0-0
FYI, this log indicates a client disconnection.
> 
> >
> > > Regards,
> > > Abhishek
> > >
> > > On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee 
> mailto:amukh...@redhat.com>
> >
> > > 
>  > >
> > >
> > >
> > > On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > > > Hi,
> > > >
> > > >
> > > > I have the setup of two boards A and B with two bricks in
> > replica mode.
> > > >
> > > > There is one test scenario
> > > >
> > > > 1. A acts as an active board and having the glusterfs
> mount
> > point on it.
> > > > 2. B acts as Passive board.
> > > > 3. We are repetitively rebooting the B board (In this time
> > period peer
> > > > status on A board will be "peer in cluster (Disconnected)"
> > and brick is
> > > > not present in "gluster volume status") and when Board B
> > comes up,
> > > > starts the gluster daemon.
> > > > 4. if Gluster daemon starts successfully it will make
> "peer in
> > > > cluster(Connected)"
> > > > 5. At the same with the immediate effect "gluster volume
> > status" command
> > > > should show the brick is available in online.
> > > >
> > > >
> > > > But in my case sometime step 5 takes immediate reflection
> > sometime 10-15
> > > > second and sometime doesn't show brick is online even
> after
> > the 1minute.
> > > >
> > > > Could you please confirm why this type of unpredictable
> > behavior is
> > > > occuring. It should be reflect with immediate effect in
> > "gluster volume
> > > > status" command.
> > > When glusterd restarts bricks processes are brought up
> > asynchronously
> > > and hence you may not see the brick processes reflecting in
> > gluster
> > > volume status output immediately after restart. 5-10
> seconds is an
> > > accepted time frame. However if it doesn't come back online
> > post that
> > > then probably brick fails to start in that case.
> > >
> > > Please check the respective brick log file and if you
> can find
> > any error
> > > logs in it.
> > > >
> > > >
> > > > --
> > > > Regards
> > > > Abhishek Paliwal
> > > >
> > > >
> > > > ___
> >  

[Gluster-devel] Update on 3.7.10

2016-04-01 Thread Kaushal M
So I've just finished tagging 3.7.10 in the repository.

I've done basic usability, upgrade, and performance tests with FUSE clients.
I tested I/O using dbench, iozone and perf-test [1], and it completed
without issues.
I performed a rolling upgrade from 3.7.9 with I/O in progress, and was
able to upgrade cleanly without interrupting I/O.

I used the perf-test script to measure performance changes between 3.7.9
and 3.7.10. I ran the test on a 2x2 replicated volume. The storage
pool was created using VMs, so please don't read too much into the numbers.

Results for 3.7.9 and 3.7.10:

Test                     3.7.9      3.7.10
emptyfiles_create        515.51     569.40
emptyfiles_delete        255.93     312.65
smallfiles_create        1435.28    1297.20
smallfiles_rewrite       1907.45    1418.71
smallfiles_read          315.60     327.18
smallfiles_reread        185.74     163.53
smallfiles_delete        363.33     384.45
largefile_create         68.90      10.23
largefile_rewrite        90.86      7.46
largefile_read           6.90       5.33
largefile_reread         0.21       0.23
largefile_delete         0.43       1.65
directory_crawl_create   637.64     612.95
directory_crawl          19.85      28.92
directory_recrawl        23.11      20.64
metadata_modify          1220.35    1091.86
directory_crawl_delete   259.46     231.43

Smallfile/largefile creates and rewrites, and metadata modify seem to
have improved significantly.
The changes are significant enough that I hope they are actual
improvements rather than changes due to VM resource availability. I'd
love it if someone could do the test on actual physical hardware.
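
For anyone who wants to repeat the basic I/O checks on physical hardware,
a minimal sketch against a FUSE mount (server, volume, and mount point
names are placeholders; the numbers above came from the scripts in [1]):

    mount -t glusterfs server1:/testvol /mnt/testvol
    dbench -D /mnt/testvol 10                     # 10 dbench clients
    iozone -a -g 1G -f /mnt/testvol/iozone.tmp    # automatic mode, files up to 1G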

I hope to announce the release early next week, hopefully on Monday,
once the packages have been built.

Maintainers may now restart merging changes on the release-3.7 branch.

Thanks,
Kaushal

[1] https://github.com/avati/perf-test
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread ABHISHEK PALIWAL
On Fri, Apr 1, 2016 at 3:39 PM, Atin Mukherjee  wrote:

>
>
> On 04/01/2016 03:06 PM, ABHISHEK PALIWAL wrote:
> >
> > On Fri, Apr 1, 2016 at 2:59 PM, Atin Mukherjee  > > wrote:
> >
> >
> >
> > On 04/01/2016 02:55 PM, ABHISHEK PALIWAL wrote:
> > > Hi Atin,
> > >
> > > Thanks for reply.
> > >
> > > Could you please help me to identify the error log in the
> respective
> > > brick log file. I tried but not able to identified where the
> problem is
> > > occuring.
> > >
> > > I am attaching the brick log file which is not coming online even
> after
> > > waiting for 1 minute.
> > What time did you reboot B? Could you also attach glusterd log file
> > (complete log) for board B?
> > >
> >
> > it is hard to say at what time we rebooted the B board because we are
> > continuously rebooting the board B.
> >
> > Here I am attaching the glusterd and glsuterfs log for board B.
> I can see the last restart of glusterd was at 13:11:27 and as per the
> brick log there was no restart. This doesn't look like a reboot of the
> board as in that case brick process should have also died and restarted.
> Brick log indicates that the process is still running and there is no
> interruption.
>
In the brick log file at 13:11:27 we have logs showing the reboot of the board:

[2016-03-31 13:11:27.408512] I [MSGID: 115036]
[server.c:552:server_rpc_notify] 0-c_glusterfs-server: disconnecting
connection from
002500-10939-2016/03/31-11:56:52:528771-c_glusterfs-client-11-0-0
[2016-03-31 13:11:27.408603] I [MSGID: 101055]
[client_t.c:419:gf_client_unref] 0-c_glusterfs-server: Shutting down
connection
002500-10939-2016/03/31-11:56:52:528771-c_glusterfs-client-11-0-0

> >
> > > Regards,
> > > Abhishek
> > >
> > > On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee <
> amukh...@redhat.com 
> > > >> wrote:
> > >
> > >
> > >
> > > On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > > > Hi,
> > > >
> > > >
> > > > I have the setup of two boards A and B with two bricks in
> > replica mode.
> > > >
> > > > There is one test scenario
> > > >
> > > > 1. A acts as an active board and having the glusterfs mount
> > point on it.
> > > > 2. B acts as Passive board.
> > > > 3. We are repetitively rebooting the B board (In this time
> > period peer
> > > > status on A board will be "peer in cluster (Disconnected)"
> > and brick is
> > > > not present in "gluster volume status") and when Board B
> > comes up,
> > > > starts the gluster daemon.
> > > > 4. if Gluster daemon starts successfully it will make "peer
> in
> > > > cluster(Connected)"
> > > > 5. At the same with the immediate effect "gluster volume
> > status" command
> > > > should show the brick is available in online.
> > > >
> > > >
> > > > But in my case sometime step 5 takes immediate reflection
> > sometime 10-15
> > > > second and sometime doesn't show brick is online even after
> > the 1minute.
> > > >
> > > > Could you please confirm why this type of unpredictable
> > behavior is
> > > > occuring. It should be reflect with immediate effect in
> > "gluster volume
> > > > status" command.
> > > When glusterd restarts bricks processes are brought up
> > asynchronously
> > > and hence you may not see the brick processes reflecting in
> > gluster
> > > volume status output immediately after restart. 5-10 seconds
> is an
> > > accepted time frame. However if it doesn't come back online
> > post that
> > > then probably brick fails to start in that case.
> > >
> > > Please check the respective brick log file and if you can find
> > any error
> > > logs in it.
> > > >
> > > >
> > > > --
> > > > Regards
> > > > Abhishek Paliwal
> > > >
> > > >
> > > > ___
> > > > Gluster-devel mailing list
> > > > Gluster-devel@gluster.org 
> >  >>
> > > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > > >
> > >
> > >
> > >
> > >
> > > --
> > >
> > >
> > >
> > >
> > > Regards
> > > Abhishek Paliwal
> >
> >
>



-- 




Regards
Abhishek Paliwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread Atin Mukherjee


On 04/01/2016 03:06 PM, ABHISHEK PALIWAL wrote:
> 
> On Fri, Apr 1, 2016 at 2:59 PM, Atin Mukherjee  > wrote:
> 
> 
> 
> On 04/01/2016 02:55 PM, ABHISHEK PALIWAL wrote:
> > Hi Atin,
> >
> > Thanks for reply.
> >
> > Could you please help me to identify the error log in the respective
> > brick log file. I tried but not able to identified where the problem is
> > occuring.
> >
> > I am attaching the brick log file which is not coming online even after
> > waiting for 1 minute.
> What time did you reboot B? Could you also attach glusterd log file
> (complete log) for board B?
> >
> 
> it is hard to say at what time we rebooted the B board because we are
> continuously rebooting the board B.
> 
> Here I am attaching the glusterd and glsuterfs log for board B.
I can see that the last restart of glusterd was at 13:11:27, and as per the
brick log there was no restart. This doesn't look like a reboot of the
board, as in that case the brick process should also have died and restarted.
The brick log indicates that the process is still running and there was no
interruption.
> 
> > Regards,
> > Abhishek
> >
> > On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee  
> > >> wrote:
> >
> >
> >
> > On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > > Hi,
> > >
> > >
> > > I have the setup of two boards A and B with two bricks in
> replica mode.
> > >
> > > There is one test scenario
> > >
> > > 1. A acts as an active board and having the glusterfs mount
> point on it.
> > > 2. B acts as Passive board.
> > > 3. We are repetitively rebooting the B board (In this time
> period peer
> > > status on A board will be "peer in cluster (Disconnected)"
> and brick is
> > > not present in "gluster volume status") and when Board B
> comes up,
> > > starts the gluster daemon.
> > > 4. if Gluster daemon starts successfully it will make "peer in
> > > cluster(Connected)"
> > > 5. At the same with the immediate effect "gluster volume
> status" command
> > > should show the brick is available in online.
> > >
> > >
> > > But in my case sometime step 5 takes immediate reflection
> sometime 10-15
> > > second and sometime doesn't show brick is online even after
> the 1minute.
> > >
> > > Could you please confirm why this type of unpredictable
> behavior is
> > > occuring. It should be reflect with immediate effect in
> "gluster volume
> > > status" command.
> > When glusterd restarts bricks processes are brought up
> asynchronously
> > and hence you may not see the brick processes reflecting in
> gluster
> > volume status output immediately after restart. 5-10 seconds is an
> > accepted time frame. However if it doesn't come back online
> post that
> > then probably brick fails to start in that case.
> >
> > Please check the respective brick log file and if you can find
> any error
> > logs in it.
> > >
> > >
> > > --
> > > Regards
> > > Abhishek Paliwal
> > >
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org 
> >
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Stop merging changes - Not all smoke tests are reporting status to gerrit

2016-04-01 Thread Kaushal M
The reporting has been fixed now. Jenkins will be reporting back the
results of the following smoke jobs,
- smoke (linux smoke)
- netbsd6-smoke
- freebsd-smoke
- gluster-devrpms (rpms on fedora)
- gluster-devrpms-el6
- gluster-devrpms-el7
- compare-bug-version-and-git-branch

As an additional check, maintainers should look at the comments posted by
Jenkins and ensure that all 7 jobs have reported back, rather than relying
on the Smoke+1 flag alone.

On Fri, Apr 1, 2016 at 12:29 PM, Kaushal M  wrote:
> Hi All,
>
> There has been a recent change which has caused failures to build
> RPMs. This change was also unknowingly backported to release-3.7,
> because the failures were not reported back to gerrit.
>
> Rpmbuild results aren't being reported back to gerrit since we brought
> in the new flags for voting. None of us seemed to notice it since
> then.
>
> So I'll be taking some time to fix this issue. Till then please don't
> merge any changes on any of the branches.
>
> I'll update the list once this has been fixed.
>
> ~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread ABHISHEK PALIWAL
On Fri, Apr 1, 2016 at 2:59 PM, Atin Mukherjee  wrote:

>
>
> On 04/01/2016 02:55 PM, ABHISHEK PALIWAL wrote:
> > Hi Atin,
> >
> > Thanks for reply.
> >
> > Could you please help me to identify the error log in the respective
> > brick log file. I tried but not able to identified where the problem is
> > occuring.
> >
> > I am attaching the brick log file which is not coming online even after
> > waiting for 1 minute.
> What time did you reboot B? Could you also attach glusterd log file
> (complete log) for board B?
> >
>
It is hard to say at what time we rebooted board B, because we are
rebooting it continuously.

Here I am attaching the glusterd and glusterfs logs for board B.

> > Regards,
> > Abhishek
> >
> > On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee  > > wrote:
> >
> >
> >
> > On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > > Hi,
> > >
> > >
> > > I have the setup of two boards A and B with two bricks in replica
> mode.
> > >
> > > There is one test scenario
> > >
> > > 1. A acts as an active board and having the glusterfs mount point
> on it.
> > > 2. B acts as Passive board.
> > > 3. We are repetitively rebooting the B board (In this time period
> peer
> > > status on A board will be "peer in cluster (Disconnected)" and
> brick is
> > > not present in "gluster volume status") and when Board B comes up,
> > > starts the gluster daemon.
> > > 4. if Gluster daemon starts successfully it will make "peer in
> > > cluster(Connected)"
> > > 5. At the same with the immediate effect "gluster volume status"
> command
> > > should show the brick is available in online.
> > >
> > >
> > > But in my case sometime step 5 takes immediate reflection sometime
> 10-15
> > > second and sometime doesn't show brick is online even after the
> 1minute.
> > >
> > > Could you please confirm why this type of unpredictable behavior is
> > > occuring. It should be reflect with immediate effect in "gluster
> volume
> > > status" command.
> > When glusterd restarts bricks processes are brought up asynchronously
> > and hence you may not see the brick processes reflecting in gluster
> > volume status output immediately after restart. 5-10 seconds is an
> > accepted time frame. However if it doesn't come back online post that
> > then probably brick fails to start in that case.
> >
> > Please check the respective brick log file and if you can find any
> error
> > logs in it.
> > >
> > >
> > > --
> > > Regards
> > > Abhishek Paliwal
> > >
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org 
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
>


Board B logs.tar.gz
Description: GNU Zip compressed data
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread Atin Mukherjee


On 04/01/2016 02:55 PM, ABHISHEK PALIWAL wrote:
> Hi Atin,
> 
> Thanks for reply.
> 
> Could you please help me to identify the error log in the respective
> brick log file. I tried but not able to identified where the problem is
> occuring.
> 
> I am attaching the brick log file which is not coming online even after
> waiting for 1 minute.
What time did you reboot B? Could you also attach glusterd log file
(complete log) for board B?
> 
> Regards,
> Abhishek
> 
> On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee  > wrote:
> 
> 
> 
> On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > Hi,
> >
> >
> > I have the setup of two boards A and B with two bricks in replica mode.
> >
> > There is one test scenario
> >
> > 1. A acts as an active board and having the glusterfs mount point on it.
> > 2. B acts as Passive board.
> > 3. We are repetitively rebooting the B board (In this time period peer
> > status on A board will be "peer in cluster (Disconnected)" and brick is
> > not present in "gluster volume status") and when Board B comes up,
> > starts the gluster daemon.
> > 4. if Gluster daemon starts successfully it will make "peer in
> > cluster(Connected)"
> > 5. At the same with the immediate effect "gluster volume status" command
> > should show the brick is available in online.
> >
> >
> > But in my case sometime step 5 takes immediate reflection sometime 10-15
> > second and sometime doesn't show brick is online even after the 1minute.
> >
> > Could you please confirm why this type of unpredictable behavior is
> > occuring. It should be reflect with immediate effect in "gluster volume
> > status" command.
> When glusterd restarts bricks processes are brought up asynchronously
> and hence you may not see the brick processes reflecting in gluster
> volume status output immediately after restart. 5-10 seconds is an
> accepted time frame. However if it doesn't come back online post that
> then probably brick fails to start in that case.
> 
> Please check the respective brick log file and if you can find any error
> logs in it.
> >
> >
> > --
> > Regards
> > Abhishek Paliwal
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org 
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> 
> 
> 
> 
> -- 
> 
> 
> 
> 
> Regards
> Abhishek Paliwal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread ABHISHEK PALIWAL
Hi Atin,

Thanks for reply.

Could you please help me identify the error log in the respective brick
log file? I tried, but was not able to identify where the problem is
occurring.

I am attaching the log file of the brick which is not coming online even
after waiting for 1 minute.

Regards,
Abhishek

On Fri, Apr 1, 2016 at 1:04 PM, Atin Mukherjee  wrote:

>
>
> On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> > Hi,
> >
> >
> > I have the setup of two boards A and B with two bricks in replica mode.
> >
> > There is one test scenario
> >
> > 1. A acts as an active board and having the glusterfs mount point on it.
> > 2. B acts as Passive board.
> > 3. We are repetitively rebooting the B board (In this time period peer
> > status on A board will be "peer in cluster (Disconnected)" and brick is
> > not present in "gluster volume status") and when Board B comes up,
> > starts the gluster daemon.
> > 4. if Gluster daemon starts successfully it will make "peer in
> > cluster(Connected)"
> > 5. At the same with the immediate effect "gluster volume status" command
> > should show the brick is available in online.
> >
> >
> > But in my case sometime step 5 takes immediate reflection sometime 10-15
> > second and sometime doesn't show brick is online even after the 1minute.
> >
> > Could you please confirm why this type of unpredictable behavior is
> > occuring. It should be reflect with immediate effect in "gluster volume
> > status" command.
> When glusterd restarts bricks processes are brought up asynchronously
> and hence you may not see the brick processes reflecting in gluster
> volume status output immediately after restart. 5-10 seconds is an
> accepted time frame. However if it doesn't come back online post that
> then probably brick fails to start in that case.
>
> Please check the respective brick log file and if you can find any error
> logs in it.
> >
> >
> > --
> > Regards
> > Abhishek Paliwal
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
>



-- 




Regards
Abhishek Paliwal
[2016-03-31 11:48:33.233242] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.6 (args: /usr/sbin/glusterfsd -s 10.32.1.144 --volfile-id c_glusterfs.10.32.1.144.opt-lvmdir-c2-brick -p /system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid -S /var/run/gluster/697c0e4a16ebc734cd06fd9150723005.socket --brick-name /opt/lvmdir/c2/brick -l /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log --xlator-option *-posix.glusterd-uuid=2d576ff8-0cea-4f75-9e34-a5674fbf7256 --brick-port 49301 --xlator-option c_glusterfs-server.listen-port=49301)
[2016-03-31 11:48:33.252639] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-03-31 11:48:33.269339] I [graph.c:269:gf_add_cmdline_options] 0-c_glusterfs-server: adding option 'listen-port' for volume 'c_glusterfs-server' with value '49301'
[2016-03-31 11:48:33.269390] I [graph.c:269:gf_add_cmdline_options] 0-c_glusterfs-posix: adding option 'glusterd-uuid' for volume 'c_glusterfs-posix' with value '2d576ff8-0cea-4f75-9e34-a5674fbf7256'
[2016-03-31 11:48:33.269816] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-03-31 11:48:33.269882] I [MSGID: 115034] [server.c:403:_check_for_auth_option] 0-/opt/lvmdir/c2/brick: skip format check for non-addr auth option auth.login./opt/lvmdir/c2/brick.allow
[2016-03-31 11:48:33.269917] I [MSGID: 115034] [server.c:403:_check_for_auth_option] 0-/opt/lvmdir/c2/brick: skip format check for non-addr auth option auth.login.229f077d-bff9-4066-bc5e-88caa83bdd14.password
[2016-03-31 11:48:33.271801] I [rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2016-03-31 11:48:33.271984] W [MSGID: 101002] [options.c:957:xl_opt_validate] 0-c_glusterfs-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2016-03-31 11:48:33.275928] I [trash.c:2366:init] 0-c_glusterfs-trash: no option specified for 'eliminate', using NULL
[2016-03-31 11:48:33.278025] W [graph.c:357:_log_if_unknown_option] 0-c_glusterfs-server: option 'rpc-auth.auth-glusterfs' is not recognized
[2016-03-31 11:48:33.278142] W [graph.c:357:_log_if_unknown_option] 0-c_glusterfs-server: option 'rpc-auth.auth-unix' is not recognized
[2016-03-31 11:48:33.278175] W [graph.c:357:_log_if_unknown_option] 0-c_glusterfs-server: option 'rpc-auth.auth-null' is not recognized
[2016-03-31 11:48:33.278229] W [graph.c:357:_log_if_unknown_option] 0-c_glusterfs-quota: option 'timeout' is not recognized
[2016-03-31 11:48:33.278261] W [graph.c:357:_log_if_unknown_option] 0-c_glusterfs-marker: option 'quota-version' is not recognized
[2016-03-31 11:48:33.278303] W [graph.c:357:_log_if_unknown_

[Gluster-devel] HELP! Many features on the 3.8 roadmap are missing basic details

2016-04-01 Thread Niels de Vos
Hi,

You might have seen that we have moved some features to a new "at risk"
category on the 3.8 roadmap. These are the features for which no details
have been provided on the feature page yet. However, many other features
are missing basic details such as a summary or links to bugs and patches.
In order to get the release done, we rely on assistance from the feature
owners to provide the needed information.

  https://www.gluster.org/community/roadmap/3.8/

Please open the roadmap, locate a feature that you are working on, and
see if all details are there. In case something is missing, you should
scroll to the bottom of the page, click the "edit this page on GitHub"
link, and create a pull request.

Thanks,
Jiffin and Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] "Samba and NFS-Ganesha support for tiered volumes" is at risk for 3.8

2016-04-01 Thread Niels de Vos
Hi,

the feature labelled "Samba and NFS-Ganesha support for tiered volumes"
did not receive any status updates by pull request to the 3.8 roadmap.
We have now moved this feature to the new "at risk" category on the
page. If there is still an intention to include this feature with 3.8,
we encourage you to send an update for the roadmap soon. This can easily be
done by clicking the "edit this page" link on the bottom of the roadmap:

  https://www.gluster.org/community/roadmap/3.8/

If there is no update within a week, we'll move the feature to the next
release.

Thanks,
Jiffin and Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] "Tiering Performance Enhancements" is at risk for 3.8

2016-04-01 Thread Niels de Vos
Hi,

the feature labelled "Tiering Performance Enhancements" did not receive
any status updates by pull request to the 3.8 roadmap. We have now moved
this feature to the new "at risk" category on the page. If there is
still an intention to include this feature with 3.8, we encourage you to
send an update for the roadmap soon. This can easily be done by clicking
the "edit this page" link on the bottom of the roadmap:

  https://www.gluster.org/community/roadmap/3.8/

If there is no update within a week, we'll move the feature to the next
release.

Thanks,
Jiffin and Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] "Converged HA for NFS-Ganesha and Samba" is at risk for 3.8

2016-04-01 Thread Niels de Vos
Hi,

the feature labelled "Converged HA for NFS-Ganesha and Samba" did not
receive any status updates by pull request to the 3.8 roadmap. We have
now moved this feature to the new "at risk" category on the page. If
there is still an intention to include this feature with 3.8, we
encourage you to send an update for the roadmap soon. This can easily be
done by clicking the "edit this page" link on the bottom of the roadmap:

  https://www.gluster.org/community/roadmap/3.8/

If there is no update within a week, we'll move the feature to the next
release.

Thanks,
Jiffin and Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] "Quota Enhancements" is at risk for 3.8

2016-04-01 Thread Niels de Vos
Hi,

the feature labelled "Quota Enhancements" did not receive any status
updates by pull request to the 3.8 roadmap.  We have now moved this
feature to the new "at risk" category on the page. If there is still an
intention to include this feature with 3.8, we encourage you to send an
update for the roadmap soon. This can easily be done by clicking the
"edit this page" link on the bottom of the roadmap:

  https://www.gluster.org/community/roadmap/3.8/

If there is no update within a week, we'll move the feature to the next
release.

Thanks,
Jiffin and Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Brick is Offline

2016-04-01 Thread Atin Mukherjee


On 04/01/2016 12:10 PM, ABHISHEK PALIWAL wrote:
> Hi,
> 
> 
> I have the setup of two boards A and B with two bricks in replica mode.
> 
> There is one test scenario
> 
> 1. A acts as an active board and having the glusterfs mount point on it.
> 2. B acts as Passive board.
> 3. We are repetitively rebooting the B board (In this time period peer
> status on A board will be "peer in cluster (Disconnected)" and brick is
> not present in "gluster volume status") and when Board B comes up,
> starts the gluster daemon.
> 4. if Gluster daemon starts successfully it will make "peer in
> cluster(Connected)"
> 5. At the same with the immediate effect "gluster volume status" command
> should show the brick is available in online.
> 
> 
> But in my case sometime step 5 takes immediate reflection sometime 10-15
> second and sometime doesn't show brick is online even after the 1minute.
> 
> Could you please confirm why this type of unpredictable behavior is
> occuring. It should be reflect with immediate effect in "gluster volume
> status" command.
When glusterd restarts, brick processes are brought up asynchronously,
and hence you may not see the brick processes reflected in the gluster
volume status output immediately after the restart. 5-10 seconds is an
accepted time frame. However, if a brick doesn't come back online after
that, then it has probably failed to start.

Please check the respective brick log file and see if you can find any
error entries in it.
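
A quick way to scan a brick log for error-level entries (a sketch; the log
path below is the one from your volume, used here as an example):

    # glusterfs log lines carry the severity after the timestamp: I, W, E, C.
    grep ' E ' /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log | tail -20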
>  
> 
> -- 
> Regards
> Abhishek Paliwal
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.

2016-04-01 Thread Pranith Kumar Karampuri



On 04/01/2016 12:24 PM, Kaushal M wrote:

In the time I was waiting for https://review.gluster.org/13861/ , a
change was merged in (which I didn't know of) and has broken building
RPMs.

The offending change is 3d34c49  (cluster/ec: Rebalance hangs during
rename) by Ashish.
The same change had earlier also broken building RPMs on master.

For now, to proceed with 3.7.10, I'm going to revert the offending
change. Please make sure this change is merged in for the next
release.


Sorry about this. I thought the commit we are going to use for tagging
had already been decided, so I started merging once the build system gave
positive results for the patch I merged. After that, Kotresh's patch is
required for the release.
I will refrain from merging patches between the tagging announcement and
the release in future.


Pranith


~kaushal

On Thu, Mar 31, 2016 at 8:28 PM, Kotresh Hiremath Ravishankar
 wrote:

Point noted, will keep informed from next time!

Thanks and Regards,
Kotresh H R

- Original Message -

From: "Kaushal M" 
To: "Kotresh Hiremath Ravishankar" 
Cc: "Aravinda" , "Gluster Devel" 
, maintain...@gluster.org
Sent: Thursday, March 31, 2016 7:32:58 PM
Subject: Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged 
at 2200PDT 30th March.

This is a really hard to hit issue, that requires a lot of things to
be in place for it to happen.
But it is an unexpected data loss issue.

I'll wait tonight for the change to be merged, though I really don't like it.

You could have informed me on this thread earlier.
Please, in the future, keep release-managers/maintainers updated about
any critical changes.

The only reason this is getting merged now, is because of the Jenkins
migration which got completed surprisingly quickly.

On Thu, Mar 31, 2016 at 7:08 PM, Kotresh Hiremath Ravishankar
 wrote:

Kaushal,

I just replied to Aravinda's mail. Anyway pasting the snippet if someone
misses that.

 "In the scenario mentioned by aravinda below, when an unlink comes on a
 entry, in changelog xlator, it's 'loc->pargfid'
 was getting modified to "/". So consequence is that , when it hits
 posix, the 'loc->pargfid' would be pointing
 to "/" instead of actual parent. This is not so terrible yet, as we are
 saved by posix. Posix checks
 for "loc->path" first, only if it's not filled, it will use
 "pargfid/bname" combination. So only for
 clients like self-heal who does not populate 'loc->path' and the same
 basename exists on root, the
 unlink happens on root instead of actual path."

Thanks and Regards,
Kotresh H R

- Original Message -

From: "Kaushal M" 
To: "Aravinda" 
Cc: "Gluster Devel" , maintain...@gluster.org,
"Kotresh Hiremath Ravishankar"

Sent: Thursday, March 31, 2016 6:56:18 PM
Subject: Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be
tagged at 2200PDT 30th March.

Kotresh, Could you please provide the details?

On Thu, Mar 31, 2016 at 6:43 PM, Aravinda  wrote:

Hi Kaushal,

We have a Changelog bug which can lead to data loss if Glusterfind is
enabled(To be specific,  when changelog.capture-del-path and
changelog.changelog options enabled on a replica volume).

http://review.gluster.org/#/c/13861/

This is very corner case. but good to go with the release. We tried to
merge
this before the merge window for 3.7.10, but regressions not yet
complete
:(

Do you think we should wait for this patch?

@Kotresh can provide more details about this issue.

regards
Aravinda


On 03/31/2016 01:29 PM, Kaushal M wrote:

The last change for 3.7.10 has been merged now. Commit 2cd5b75 will be
used for the release. I'll be preparing release-notes, and tagging the
release soon.

After running verification tests and checking for any perf
improvements, I'll make be making the release tarball.

Regards,
Kaushal

On Wed, Mar 30, 2016 at 7:00 PM, Kaushal M  wrote:

Hi all,

I'll be taking over the release duties for 3.7.10. Vijay is busy and
could not get the time to do a scheduled release.

The .10 release has been scheduled for tagging on 30th (ie. today).
In the interests of providing some heads up to developers wishing to
get changes merged,
I'll be waiting till 10PM PDT, 30th March. (0500UTC/1030IST 31st
March), to tag the release.

So you have ~15 hours to get any changes required merged.

Thanks,
Kaushal

___
maintainers mailing list
maintain...@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel