[Gluster-devel] Gluster IPv6 bugfixes (Bug 1117886)

2015-06-13 Thread Nithin Kumar Dabilpuram

 

Hi,
Can I contribute to this bug fix? I worked on Gluster IPv6 functionality 
bugs in 3.3.2 at my previous organization and was able to successfully bring 
up Gluster on IPv6 link-local addresses as well.
Please find my work-in-progress patch attached. I'll raise a Gerrit review once 
testing is done. I was able to create volumes with 3 peers and add bricks. 
I'll continue testing other basic functionality and see what needs to be 
modified. Any other suggestions?

Brief info about the patch: I'm using the transport.address-family option 
in the /etc/glusterfs/glusterd.vol file and propagating it to the server 
and client volfiles and their translators.
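For illustration, a minimal glusterd.vol along these lines (this is the stock
management volume definition; the only addition the patch acts on is the
transport.address-family line):

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.address-family inet6
end-volume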
This way, when the user specifies transport.address-family inet6 in the 
glusterd.vol file, all glusterd servers open AF_INET6 sockets, and the same 
information is stored in glusterd_volinfo and used when generating the 
volume config files.

Thanks,
Nithin

   

patch
Description: Binary data
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Coreutils

2015-06-13 Thread M S Vishwanath Bhat
On 12 June 2015 at 23:59, chris holcombe chris.holco...@canonical.com
wrote:

 Yeah I have this repo but it's basically empty:
 https://github.com/cholcombe973/GlusterUtils


AFAIK the plan is to collaborate through a git repo under the
github.com/gluster account. But anything that works should be good...

And the choice of language seems to be Python.

Best Regards,
Vishwanath



 On 06/12/2015 11:27 AM, Craig Cabrey wrote:

 Chris,

 That sounds good to me.

 I already have started on implementation, just to get familiar with the
 codebase and GFAPI.

 Is there a public repo that we can use for collaboration?

 Craig

  On Jun 12, 2015, at 10:46 AM, chris holcombe 
 chris.holco...@canonical.com wrote:

 Craig,

 I was actually planning on building the same tool set.  I would like to
 work with you also on this if that's ok.

 -Chris

 On 06/12/2015 10:43 AM, Jeff Darcy wrote:

 Hi everyone,

 This summer I am an intern at Facebook working on the Gluster team.
 Part of
 my project for the summer includes developing a set of coreutils that
 utilizes the Gluster C API natively.

 This project is similar in nature to the NFS coreutils that some of
 you may
 have heard about from the other Facebook engineers at the Gluster
 summit
 recently. I just wanted to reach out to the Gluster community to gather
 ideas, potential features, feedback, and direction.

 The initial set of utilities that I am developing includes the
 following:

 * cat
 * mkdir
 * put (read from stdin and write to a file)
 * mv
 * ls
 * rm
 * tail

 Again, any feedback will be welcome.

 Hi, Craig, and welcome to the project.  :)

 There seems to be some overlap with a proposal Raghavendra Talur sent
 out a couple of days ago.


  https://docs.google.com/document/d/1yuRLRbdccx_0V0UDAxqWbz4g983q5inuINHgM1YO040/edit?usp=sharing

 This seems like an excellent opportunity to collaborate.  Ideally, I think
 it would be useful to have both an FTP-client-like shell and a set of
 standalone one-shot commands, based on as much common code as possible.
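
 As a rough sketch of the one-shot flavor, a gfapi-backed cat could be as
 small as this (assuming the libgfapi-python bindings; the host, volume and
 tool shape here are purely illustrative, not the actual implementation):

#!/usr/bin/env python
# Illustrative one-shot 'cat' over libgfapi (not the actual tool).
import sys
from gluster import gfapi   # libgfapi-python bindings (assumption)

vol = gfapi.Volume('server1', 'gv0')   # example host and volume name
vol.mount()                            # virtual mount, no FUSE involved
try:
    with vol.fopen(sys.argv[1], 'r') as f:
        sys.stdout.write(f.read())
finally:
    vol.umount()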

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org

 http://www.gluster.org/mailman/listinfo/gluster-devel


 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Vijay Bellur

On 06/13/2015 01:15 PM, Raghavendra Talur wrote:

If it is a crash of glusterd when you do rebalance start, it is because
of FORTIFY_FAIL in libc.
Here is the patch that Susant has already sent:
http://review.gluster.org/#/c/11090/



Thanks, have merged this patch.

-Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Atin Mukherjee
Sent from Samsung Galaxy S4
On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:

 Hi All
Rebalance is not working in a single-node cluster environment (the current
test framework). I am getting an error in the test below; it seems rebalance
has not been migrated to the current cluster test framework.
Could you pinpoint which test case fails and what you see in the logs?

 cleanup;
 TEST launch_cluster 2;
 TEST $CLI_1 peer probe $H2;

 EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers

 $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
 EXPECT 'Created' volinfo_field $V0 'Status';

 $CLI_1 volume start $V0
 EXPECT 'Started' volinfo_field $V0 'Status';

 #Mount FUSE
 TEST glusterfs -s $H1 --volfile-id=$V0 $M0;

 TEST mkdir $M0/dir{1..4};
 TEST touch $M0/dir{1..4}/files{1..4};

 TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1

 TEST $CLI_1 volume rebalance $V0  start

 EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0

 $CLI_2 volume status $V0
 EXPECT 'Started' volinfo_field $V0 'Status';

 cleanup;

 Regards
 Anand.N



 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Anand Nekkunti

Hi All
   Rebalance is not working in a single-node cluster environment (the 
current test framework). I am getting an error in the test below; it seems 
rebalance has not been migrated to the current cluster test framework.


cleanup;
TEST launch_cluster 2;
TEST $CLI_1 peer probe $H2;

EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers

$CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
EXPECT 'Created' volinfo_field $V0 'Status';

$CLI_1 volume start $V0
EXPECT 'Started' volinfo_field $V0 'Status';

#Mount FUSE
TEST glusterfs -s $H1 --volfile-id=$V0 $M0;

TEST mkdir $M0/dir{1..4};
TEST touch $M0/dir{1..4}/files{1..4};

TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1

TEST $CLI_1 volume rebalance $V0  start

EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0

$CLI_2 volume status $V0
EXPECT 'Started' volinfo_field $V0 'Status';

cleanup;

Regards
Anand.N




1229139.t
Description: Perl program
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Niels de Vos
On Sat, Jun 13, 2015 at 01:15:04PM +0530, Raghavendra Talur wrote:
 On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee atin.mukherje...@gmail.com
 wrote:
 
  Sent from Samsung Galaxy S4
  On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:
  
   Hi All
  Rebalance is not working in single node cluster environment ( current
  test frame work ).  I am getting error in below test , it seems re-balance
  is not migrated to  current cluster test framework.
  Could you pin point which test case fails and what log do you see?
  
   cleanup;
   TEST launch_cluster 2;
   TEST $CLI_1 peer probe $H2;
  
   EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
  
   $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
   EXPECT 'Created' volinfo_field $V0 'Status';
  
   $CLI_1 volume start $V0
   EXPECT 'Started' volinfo_field $V0 'Status';
  
   #Mount FUSE
   TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
  
   TEST mkdir $M0/dir{1..4};
   TEST touch $M0/dir{1..4}/files{1..4};
  
   TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
  
   TEST $CLI_1 volume rebalance $V0  start
  
   EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
  
   $CLI_2 volume status $V0
   EXPECT 'Started' volinfo_field $V0 'Status';
  
   cleanup;
  
   Regards
   Anand.N
  
  
  
   ___
   Gluster-devel mailing list
   Gluster-devel@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-devel
  
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-devel
 
 
 If it is a crash of glusterd when you do rebalance start, it is because of
 FORTIFY_FAIL in libc.
 Here is the patch that Susant has already sent:
 http://review.gluster.org/#/c/11090/
 
 You can verify that it is the same crash by checking the core in gdb; a
 SIGABRT would be raised
 after strncpy.

Sounds like we should use _FORTIFY_SOURCE for running our regression
tests? Patches for build.sh or one of the other scripts are welcome!

You can get them here:
https://github.com/gluster/glusterfs-patch-acceptance-tests/

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Raghavendra Talur
On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee atin.mukherje...@gmail.com
wrote:

 Sent from Samsung Galaxy S4
 On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:
 
  Hi All
 Rebalance is not working in single node cluster environment ( current
 test frame work ).  I am getting error in below test , it seems re-balance
 is not migrated to  current cluster test framework.
 Could you pin point which test case fails and what log do you see?
 
  cleanup;
  TEST launch_cluster 2;
  TEST $CLI_1 peer probe $H2;
 
  EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
 
  $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
  EXPECT 'Created' volinfo_field $V0 'Status';
 
  $CLI_1 volume start $V0
  EXPECT 'Started' volinfo_field $V0 'Status';
 
  #Mount FUSE
  TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
 
  TEST mkdir $M0/dir{1..4};
  TEST touch $M0/dir{1..4}/files{1..4};
 
  TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
 
  TEST $CLI_1 volume rebalance $V0  start
 
  EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
 
  $CLI_2 volume status $V0
  EXPECT 'Started' volinfo_field $V0 'Status';
 
  cleanup;
 
  Regards
  Anand.N
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-devel
 

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel


If it is a crash of glusterd when you do rebalance start, it is because of
FORTIFY_FAIL in libc.
Here is the patch that Susant has already sent:
http://review.gluster.org/#/c/11090/

You can verify that it is the same crash by checking the core in gdb; a
SIGABRT would be raised after strncpy.
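
The backtrace would look roughly like this (illustrative output, not from an
actual core):

$ gdb /usr/sbin/glusterd /path/to/core
(gdb) bt
#0  raise ()
#1  abort ()
#2  __libc_message ()
#3  __fortify_fail ()
#4  __chk_fail ()
#5  __strncpy_chk ()
#6  ... frame that called the fortified strncpy ...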

-- 
Raghavendra Talur
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Anand Nekkunti


On 06/13/2015 02:27 PM, Atin Mukherjee wrote:


Sent from Samsung Galaxy S4
On 13 Jun 2015 13:15, Raghavendra Talur raghavendra.ta...@gmail.com 
mailto:raghavendra.ta...@gmail.com wrote:




 On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee 
atin.mukherje...@gmail.com mailto:atin.mukherje...@gmail.com wrote:


 Sent from Samsung Galaxy S4
 On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com 
mailto:anekk...@redhat.com wrote:

 
  Hi All
 Rebalance is not working in single node cluster environment ( 
current test frame work ).  I am getting error in below test , it 
seems re-balance is not migrated to  current cluster test framework.

 Could you pin point which test case fails and what log do you see?
 
  cleanup;
  TEST launch_cluster 2;
  TEST $CLI_1 peer probe $H2;
 
  EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
 
  $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
  EXPECT 'Created' volinfo_field $V0 'Status';
 
  $CLI_1 volume start $V0
  EXPECT 'Started' volinfo_field $V0 'Status';
 
  #Mount FUSE
  TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
 
  TEST mkdir $M0/dir{1..4};
  TEST touch $M0/dir{1..4}/files{1..4};
 
  TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
 
  TEST $CLI_1 volume rebalance $V0  start
 
  EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
 
  $CLI_2 volume status $V0
  EXPECT 'Started' volinfo_field $V0 'Status';
 
  cleanup;
 
  Regards
  Anand.N
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org mailto:Gluster-devel@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-devel
 


 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org mailto:Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel


 If it is a crash of glusterd when you do rebalance start, it is 
because of FORTIFY_FAIL in libc.
 Here is the patch that Susant has already sent: 
http://review.gluster.org/#/c/11090/


 You can verify that it is the same crash by checking the core in 
gdb; a SIGABRT would be raised

 after strncpy.



glusterd is not crashing, but I am getting the rebalance status as 'failed' 
in my test case. It happens only in the test framework (i.e., in a simulated 
cluster environment on a single node).

RCA:
1. We always pass localhost as the volfile server for the rebalance 
xlator.
2. Rebalance processes overwrite each other's Unix sockets and log files 
(all rebalance processes create a socket with the same name).
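
To illustrate point 2 (the paths below are hypothetical, just to show the
shape of the collision):

# Every rebalance daemon derives its socket name from the volume id alone,
# so the simulated nodes clobber the same file:
#   node 1: <rundir>/gluster-rebalance-<volume-id>.sock
#   node 2: <rundir>/gluster-rebalance-<volume-id>.sock   <-- same path
# Adding a per-glusterd component (e.g. each node's working directory) to
# the socket path would keep them apart:
#   node 1: <workdir-1>/gluster-rebalance-<volume-id>.sock
#   node 2: <workdir-2>/gluster-rebalance-<volume-id>.sock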


I will send a patch for this.

Regards
Anand.N



AFAIR Anand tried it in mainline and that fix was already in place.  I 
think this is something different.

 --
 Raghavendra Talur




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Atin Mukherjee
Sent from Samsung Galaxy S4
On 13 Jun 2015 14:42, Anand Nekkunti anekk...@redhat.com wrote:


 On 06/13/2015 02:27 PM, Atin Mukherjee wrote:

 Sent from Samsung Galaxy S4
 On 13 Jun 2015 13:15, Raghavendra Talur raghavendra.ta...@gmail.com
wrote:
 
 
 
  On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee 
atin.mukherje...@gmail.com wrote:
 
  Sent from Samsung Galaxy S4
  On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:
  
   Hi All
  Rebalance is not working in single node cluster environment (
current test frame work ).  I am getting error in below test , it seems
re-balance is not migrated to  current cluster test framework.
  Could you pin point which test case fails and what log do you see?
  
   cleanup;
   TEST launch_cluster 2;
   TEST $CLI_1 peer probe $H2;
  
   EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
  
   $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
   EXPECT 'Created' volinfo_field $V0 'Status';
  
   $CLI_1 volume start $V0
   EXPECT 'Started' volinfo_field $V0 'Status';
  
   #Mount FUSE
   TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
  
   TEST mkdir $M0/dir{1..4};
   TEST touch $M0/dir{1..4}/files{1..4};
  
   TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
  
   TEST $CLI_1 volume rebalance $V0  start
  
   EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
  
   $CLI_2 volume status $V0
   EXPECT 'Started' volinfo_field $V0 'Status';
  
   cleanup;
  
   Regards
   Anand.N
  
  
  
   ___
   Gluster-devel mailing list
   Gluster-devel@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-devel
  
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-devel
 
 
  If it is a crash of glusterd when you do rebalance start, it is
because of FORTIFY_FAIL in libc.
  Here is the patch that Susant has already sent:
http://review.gluster.org/#/c/11090/
 
  You can verify that it is the same crash by checking the core in gdb;
a SIGABRT would be raised
  after strncpy.


 glusterd  is not crashing, but I am getting rebalance status as fail  in
my test case. It is happening in test frame work ( any simulated cluster
environment in same node ) only.
 RCA:
   1. we are passing always localhost as volfile server for rebalance
xlator .
   2.Rebalance processes are  overwriting  unix socket and log files each
other (All rebalance processes are creating socket with same name) .

 I will send patch for this
I thought we had already reached an agreement on this yesterday. IIRC, the
same is true for all the other daemons. As of now we don't have any tests
which invoke daemons using cluster.rc.

 Regards
 Anand.N

 
 AFAIR Anand tried it in mainline and that fix was already in place.  I
think this is something different.
  --
  Raghavendra Talur
 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Anand Nekkunti


On 06/13/2015 04:50 PM, Atin Mukherjee wrote:



 Sent from Samsung Galaxy S4 On 13 Jun 2015 14:42, Anand Nekkunti
 anekk...@redhat.com mailto:anekk...@redhat.com wrote:


 On 06/13/2015 02:27 PM, Atin Mukherjee wrote:

 Sent from Samsung Galaxy S4 On 13 Jun 2015 13:15, Raghavendra
 Talur raghavendra.ta...@gmail.com
 mailto:raghavendra.ta...@gmail.com wrote:



 On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee
 atin.mukherje...@gmail.com
 mailto:atin.mukherje...@gmail.com wrote:

 Sent from Samsung Galaxy S4 On 13 Jun 2015 12:58, Anand
 Nekkunti anekk...@redhat.com mailto:anekk...@redhat.com
 wrote:

 Hi All Rebalance is not working in single node cluster
 environment ( current test frame work ). I am getting
 error in below test , it seems re-balance is not migrated
 to  current cluster test framework.
 Could you pin point which test case fails and what log do you
 see?

 cleanup; TEST launch_cluster 2; TEST $CLI_1 peer probe
 $H2;

 EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers

 $CLI_1 volume create $V0 $H1:$B1/$V0 $H2:$B2/$V0 EXPECT
 'Created' volinfo_field $V0 'Status';

 $CLI_1 volume start $V0 EXPECT 'Started' volinfo_field $V0
 'Status';

 #Mount FUSE TEST glusterfs -s $H1 --volfile-id=$V0 $M0;

 TEST mkdir $M0/dir{1..4}; TEST touch
 $M0/dir{1..4}/files{1..4};

 TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1
 $H2:$B2/${V0}1

 TEST $CLI_1 volume rebalance $V0  start

 EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field
 $V0

 $CLI_2 volume status $V0 EXPECT 'Started' volinfo_field $V0
 'Status';

 cleanup;

 Regards Anand.N



 ___
 Gluster-devel mailing list Gluster-devel@gluster.org
 mailto:Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel



 ___ Gluster-devel
 mailing list Gluster-devel@gluster.org
 mailto:Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel


 If it is a crash of glusterd when you do rebalance start, it is
 because of FORTIFY_FAIL in libc. Here is the patch that Susant
 has already sent: http://review.gluster.org/#/c/11090/

 You can verify that it is the same crash by checking the core
 in gdb; a SIGABRT would be raised after strncpy.


 glusterd  is not crashing, but I am getting rebalance status as
 fail  in my test case. It is happening in test frame work ( any
 simulated cluster environment in same node ) only. RCA: 1. we are
 passing always localhost as volfile server for rebalance xlator
 . 2.Rebalance processes are  overwriting  unix socket and log files
 each other (All rebalance processes are creating socket with same
 name) .

 I will send patch for this
 I thought we were already in an agreement for this yesterday. IIRC,
 the same is true for all other daemons. As of now we dont have any
 tests which invoke daemons using cluster.rc

 Yes, yesterday we found that the volfile server was the problem. I 
modified the volfile server, but I was still getting a rebalance status of 
'failed'. Initially I thought there was some problem in the rebalance 
process; later I found that rebalance was not able to send its response to 
glusterd after completing, because the Unix socket file was clobbered and 
all the rebalance daemons were writing logs into the same log file.
 I think there is no issue with the other daemons, which use the SVC 
framework.


Patch: http://review.gluster.org/#/c/11210/ - this patch enables writing 
test cases for rebalance in a cluster environment.
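
With that patch in, the attached test can be run like any other regression
test (the path below assumes you save the attached 1229139.t under
tests/bugs/glusterd/ in a built source tree):

prove -vf tests/bugs/glusterd/1229139.t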






 Regards Anand.N


 AFAIR Anand tried it in mainline and that fix was already in
 place.  I think this is something different.
 -- Raghavendra Talur






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Raghavendra Talur
On Sat, Jun 13, 2015 at 1:36 PM, Niels de Vos nde...@redhat.com wrote:

 On Sat, Jun 13, 2015 at 01:15:04PM +0530, Raghavendra Talur wrote:
  On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee 
 atin.mukherje...@gmail.com
  wrote:
 
   Sent from Samsung Galaxy S4
   On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:
   
Hi All
   Rebalance is not working in single node cluster environment (
 current
   test frame work ).  I am getting error in below test , it seems
 re-balance
   is not migrated to  current cluster test framework.
   Could you pin point which test case fails and what log do you see?
   
cleanup;
TEST launch_cluster 2;
TEST $CLI_1 peer probe $H2;
   
EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
   
$CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
EXPECT 'Created' volinfo_field $V0 'Status';
   
$CLI_1 volume start $V0
EXPECT 'Started' volinfo_field $V0 'Status';
   
#Mount FUSE
TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
   
TEST mkdir $M0/dir{1..4};
TEST touch $M0/dir{1..4}/files{1..4};
   
TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
   
TEST $CLI_1 volume rebalance $V0  start
   
EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
   
$CLI_2 volume status $V0
EXPECT 'Started' volinfo_field $V0 'Status';
   
cleanup;
   
Regards
Anand.N
   
   
   
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
   
  
   ___
   Gluster-devel mailing list
   Gluster-devel@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-devel
  
  
  If it is a crash of glusterd when you do rebalance start, it is because
 of
  FORTIFY_FAIL in libc.
  Here is the patch that Susant has already sent:
  http://review.gluster.org/#/c/11090/
 
  You can verify that it is the same crash by checking the core in gdb; a
  SIGABRT would be raised
  after strncpy.

 Sounds like we should use _FORTIFY_SOURCE for running our regression
 tests? Patches for build.sh or one of the other scripts are welcome!

 You can get them here:
 https://github.com/gluster/glusterfs-patch-acceptance-tests/

 Thanks,
 Niels


Yes, Kaushal and Vijay also agreed to have our regression runs use this flag.

I have discovered a problem, though. For glibc to detect these possible
overflows, we need -D_FORTIFY_SOURCE at level 2 and the -O optimization
flag at a minimum of 1, with 2 recommended.
Read this for more info:
https://gcc.gnu.org/ml/gcc-patches/2004-09/msg02055.html

I'm not sure whether having -O2 will make debugging other cores more
difficult.

If nobody objects to -O2, I think I have created the pull request correctly.
Please merge:
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/1
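
For reference, the change boils down to something like this in the build
step (a sketch of the pull request's effect, assuming build.sh lets CFLAGS
flow through to configure):

# Build with fortification enabled; -D_FORTIFY_SOURCE=2 needs at least -O1,
# with -O2 recommended.
export CFLAGS="-g -O2 -D_FORTIFY_SOURCE=2"
./autogen.sh
./configure
make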


-- 
Raghavendra Talur
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Atin Mukherjee
Sent from Samsung Galaxy S4
On 13 Jun 2015 14:11, Raghavendra Talur raghavendra.ta...@gmail.com
wrote:



 On Sat, Jun 13, 2015 at 1:36 PM, Niels de Vos nde...@redhat.com wrote:

 On Sat, Jun 13, 2015 at 01:15:04PM +0530, Raghavendra Talur wrote:
  On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee 
atin.mukherje...@gmail.com
  wrote:
 
   Sent from Samsung Galaxy S4
   On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:
   
Hi All
   Rebalance is not working in single node cluster environment (
current
   test frame work ).  I am getting error in below test , it seems
re-balance
   is not migrated to  current cluster test framework.
   Could you pin point which test case fails and what log do you see?
   
cleanup;
TEST launch_cluster 2;
TEST $CLI_1 peer probe $H2;
   
EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
   
$CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
EXPECT 'Created' volinfo_field $V0 'Status';
   
$CLI_1 volume start $V0
EXPECT 'Started' volinfo_field $V0 'Status';
   
#Mount FUSE
TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
   
TEST mkdir $M0/dir{1..4};
TEST touch $M0/dir{1..4}/files{1..4};
   
TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
   
TEST $CLI_1 volume rebalance $V0  start
   
EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
   
$CLI_2 volume status $V0
EXPECT 'Started' volinfo_field $V0 'Status';
   
cleanup;
   
Regards
Anand.N
   
   
   
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
   
  
   ___
   Gluster-devel mailing list
   Gluster-devel@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-devel
  
  
  If it is a crash of glusterd when you do rebalance start, it is
because of
  FORTIFY_FAIL in libc.
  Here is the patch that Susant has already sent:
  http://review.gluster.org/#/c/11090/
 
  You can verify that it is the same crash by checking the core in gdb; a
  SIGABRT would be raised
  after strncpy.

 Sounds like we should use _FORTIFY_SOURCE for running our regression
 tests? Patches for build.sh or one of the other scripts are welcome!

 You can get them here:
 https://github.com/gluster/glusterfs-patch-acceptance-tests/

 Thanks,
 Niels


 Yes, Kaushal and Vijay also agreed to have our regression use this flag.

 I have discovered a problem though. For glibc to detect these possible
overflows,
 we need to have -D_FORTIFY_SOURCE at level 2 and -O optimization flag at
minimum of
 1 with 2 as recommended.
 Read this for more info:
https://gcc.gnu.org/ml/gcc-patches/2004-09/msg02055.html


 Not sure if having -O2 will lead to debugging other cores difficult.


 If nobody objects to O2, I think I have created a pull request correctly.
 Please merge.
 https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/1
I feel we should try to maintain uniformity everywhere as far as
compilation flags are concerned.


 --
 Raghavendra Talur

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Rebalance is not working in single node cluster environment.

2015-06-13 Thread Atin Mukherjee
Sent from Samsung Galaxy S4
On 13 Jun 2015 13:15, Raghavendra Talur raghavendra.ta...@gmail.com
wrote:



 On Sat, Jun 13, 2015 at 1:00 PM, Atin Mukherjee 
atin.mukherje...@gmail.com wrote:

 Sent from Samsung Galaxy S4
 On 13 Jun 2015 12:58, Anand Nekkunti anekk...@redhat.com wrote:
 
  Hi All
 Rebalance is not working in single node cluster environment (
current test frame work ).  I am getting error in below test , it seems
re-balance is not migrated to  current cluster test framework.
 Could you pin point which test case fails and what log do you see?
 
  cleanup;
  TEST launch_cluster 2;
  TEST $CLI_1 peer probe $H2;
 
  EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
 
  $CLI_1 volume create $V0 $H1:$B1/$V0  $H2:$B2/$V0
  EXPECT 'Created' volinfo_field $V0 'Status';
 
  $CLI_1 volume start $V0
  EXPECT 'Started' volinfo_field $V0 'Status';
 
  #Mount FUSE
  TEST glusterfs -s $H1 --volfile-id=$V0 $M0;
 
  TEST mkdir $M0/dir{1..4};
  TEST touch $M0/dir{1..4}/files{1..4};
 
  TEST $CLI_1 volume add-brick $V0 $H1:$B1/${V0}1 $H2:$B2/${V0}1
 
  TEST $CLI_1 volume rebalance $V0  start
 
  EXPECT_WITHIN 60 completed CLI_1_rebalance_status_field $V0
 
  $CLI_2 volume status $V0
  EXPECT 'Started' volinfo_field $V0 'Status';
 
  cleanup;
 
  Regards
  Anand.N
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-devel
 


 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel


 If it is a crash of glusterd when you do rebalance start, it is because
of FORTIFY_FAIL in libc.
 Here is the patch that Susant has already sent:
http://review.gluster.org/#/c/11090/

 You can verify that it is the same crash by checking the core in gdb; a
SIGABRT would be raised
 after strncpy.

AFAIR Anand tried it in mainline and that fix was already in place.  I
think this is something different.
 --
 Raghavendra Talur

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel