Re: [Gluster-devel] Regression: ./tests/basic/quota.t fails

2016-03-08 Thread Vijaikumar Mallikarjuna
Test number 82 is an umount operation on the NFS mount; I am not sure why the
umount failed.

EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $N0
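
For context, EXPECT_WITHIN keeps re-running the check until its output matches
the expected value or the timeout expires. A simplified sketch of that helper
(the real one lives in tests/include.rc and differs in details):

# Simplified sketch only: poll the given check once a second until it prints
# the expected value or the timeout expires.
expect_within_sketch () {
    local timeout="$1" expected="$2"; shift 2
    local i
    for ((i = 0; i < timeout; i++)); do
        if [ "$("$@")" = "$expected" ]; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# The check above expects force_umount to eventually report "Y"; a persistent
# mismatch within $UMOUNT_TIMEOUT is what makes test 82 fail.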



Thanks,
Vijay



On Wed, Mar 9, 2016 at 12:14 PM, Poornima Gurusiddaiah 
wrote:

> Hi,
>
> I see the below test failing for an unrelated patch:
>
> ./tests/basic/quota.t.
> Failed test : 82
>
> Regression: 
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15045/consoleFull
>
> Can you please take a look into it?
>
> Regards,
> Poornima
>
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] sub-directory geo-replication, snapshot features

2016-03-08 Thread Pranith Kumar Karampuri



On 03/09/2016 10:40 AM, Kaushal M wrote:

On Tue, Mar 8, 2016 at 11:58 PM, Atin Mukherjee  wrote:


On 03/08/2016 07:32 PM, Pranith Kumar Karampuri wrote:

hi,
  Late last week I sent a solution for how to achieve
subdirectory-mount support with access-controls
(http://www.gluster.org/pipermail/gluster-devel/2016-March/048537.html).
What follows here is a short description of how other features of
gluster volumes are implemented for sub-directories.

Please note that the sub-directories are not allowed to be accessed by
normal mounts i.e. top-level volume mounts. All access to the
sub-directories goes only through sub-directory mounts.

Is this acceptable? If I have a, b, c sub-directories in the volume and I
mount the same volume at /mnt, do you mean I won't be able to access /mnt/a or
/mnt/b, and can only access them using sub-directory mounts? Or are you
talking about some specific case here?

1) Geo-replication:
The direction in which we are going is to allow geo-replicating just
some sub-directories and not all of the volume based on options. When
these options are set, server xlators populate extra information in the
frames/xdata to write changelog for the fops coming from their
sub-directory mounts. changelog xlator on seeing this will only
geo-replicate the files/directories that are in the changelog. Thus only
the sub-directories are geo-replicated. There is also a suggestion from
Vijay and Aravinda to have separate domains for operations inside
sub-directories for changelogs.

2) Sub-directory snapshots using lvm
Every time a sub-directory needs to be created, our idea is that the admin
needs to execute a subvolume creation command which creates a mount to an
empty snapshot at the given sub-directory name. All these directories can be
modified in parallel and we can take individual snapshots of each of the
directories. We will be providing a detailed list of commands to do the same
once they are fleshed out. At the moment these are the directions we are
taking to increase granularity from volume to sub-directory for the main
features.

We use hardlinks to the `.glusterfs` directory on bricks. So wouldn't
having multiple filesystems inside a brick break the brick?


You are right. I think we will have to do full separation, where it will 
be more like multiple tenants :-/.




Also, I'd prefer if sub-directory mounts and sub-directory snapshots
remained separate, and not tied with each other. This mail gives the
feeling that they will be tied together.


With the above point, I don't think it will be.




Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] sub-directory geo-replication, snapshot features

2016-03-08 Thread Aravinda


regards
Aravinda

On 03/08/2016 07:32 PM, Pranith Kumar Karampuri wrote:

hi,
 Late last week I sent a solution for how to achieve 
subdirectory-mount support with access-controls 
(http://www.gluster.org/pipermail/gluster-devel/2016-March/048537.html). 
What follows here is a short description of how other features of 
gluster volumes are implemented for sub-directories.


Please note that the sub-directories are not allowed to be accessed by 
normal mounts i.e. top-level volume mounts. All access to the 
sub-directories goes only through sub-directory mounts.


1) Geo-replication:
The direction in which we are going is to allow geo-replicating just 
some sub-directories and not all of the volume based on options. When 
these options are set, server xlators populate extra information in 
the frames/xdata to write changelog for the fops coming from their 
sub-directory mounts. changelog xlator on seeing this will only 
geo-replicate the files/directories that are in the changelog. Thus 
only the sub-directories are geo-replicated. There is also a 
suggestion from Vijay and Aravinda to have separate domains for 
operations inside sub-directories for changelogs.
We can additionally record subdir/client info in the changelog to
differentiate I/O belonging to each subdir, instead of having separate
domains for changelogs.


Just a note: geo-replication expects the target to be a Gluster volume, not an
arbitrary directory. If subdir1 is to be replicated to remote site A and
subdir2 to remote site B, then we need two geo-rep sessions from the master to
two remote volumes, one in site A and one in site B respectively (with the
subdir filter set accordingly).
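
Concretely, that would be two independent sessions, something like the hedged
sketch below (the session-setup commands exist today; the per-session subdir
filter is only a proposal, so that option is purely hypothetical here):

# Two independent geo-rep sessions, one per remote site. Volume names are
# placeholders.
gluster volume geo-replication mastervol siteA::backupvol-a create push-pem
gluster volume geo-replication mastervol siteB::backupvol-b create push-pem

# Hypothetical per-session filter, if/when the proposed feature lands:
# gluster volume geo-replication mastervol siteA::backupvol-a config subdir-filter /subdir1
# gluster volume geo-replication mastervol siteB::backupvol-b config subdir-filter /subdir2

gluster volume geo-replication mastervol siteA::backupvol-a start
gluster volume geo-replication mastervol siteB::backupvol-b start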


2) Sub-directory snapshots using lvm
Every time a sub-directory needs to be created, our idea is that the admin
needs to execute a subvolume creation command which creates a mount to an
empty snapshot at the given sub-directory name. All these directories can be
modified in parallel and we can take individual snapshots of each of the
directories. We will be providing a detailed list of commands to do the same
once they are fleshed out. At the moment these are the directions we are
taking to increase granularity from volume to sub-directory for the main
features.


Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Using geo-replication as backup solution using gluster volume snapshot!

2016-03-08 Thread Aravinda

Awesome idea!

Now geo-rep can be used to take daily/monthly backups. With Gluster 3.7.9, we
can run geo-replication whenever required instead of running it all the time.

http://review.gluster.org/13510 (geo-rep: Script to Schedule
Geo-replication)

Full Backup:
------------
1. Create Geo-rep session.
2. Run Schedule Geo-rep script
3. Take Gluster Volume Snapshot at Slave side.

Incremental (Daily):
--------------------
1. Run Schedule Geo-rep script
2. Take Gluster Volume Snapshot at Slave side

Note: Delete old snapshots regularly.

Restore:
--------
Depending on the snapshot you want to recover, clone the snapshot to create a
new volume and then establish geo-replication from the cloned volume to a new
volume wherever required.

If we need to restore a specific file/directory, just mount the snapshot and
copy the data to the required location.
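
As a rough sketch, the daily cycle could look like the following (volume and
host names are placeholders, and the scheduler invocation is an assumption
based on the patch above; check the script's own help for its real options):

# One-time: create the geo-rep session (it only needs to run on demand).
MASTER=mastervol
SLAVEHOST=slavehost
SLAVEVOL=slavevol
gluster volume geo-replication $MASTER $SLAVEHOST::$SLAVEVOL create push-pem

# Daily: the scheduler script starts geo-rep, waits for it to sync, and stops
# it again (hypothetical script path; see http://review.gluster.org/13510).
python schedule_georep.py $MASTER $SLAVEHOST $SLAVEVOL

# Then snapshot the slave volume so today's state is preserved.
gluster snapshot create nightly-$(date +%F) $SLAVEVOL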

regards
Aravinda

On 03/07/2016 05:13 PM, Kotresh Hiremath Ravishankar wrote:

Added gluster-users.

Thanks and Regards,
Kotresh H R

- Original Message -

From: "Kotresh Hiremath Ravishankar" 
To: "Gluster Devel" 
Sent: Monday, March 7, 2016 3:03:08 PM
Subject: [Gluster-devel] Using geo-replication as backup solution using gluster 
volume snapshot!

Hi All,

Here is the idea: we can use geo-replication as a backup solution using gluster
volume snapshots on the slave side. One of the drawbacks of geo-replication is
that it is a continuous asynchronous replication and would not help in getting
last week's or yesterday's data. So if we take gluster snapshots at the slave
end, we can use those snapshots to get last week's or yesterday's data, making
it a candidate for a backup solution. The limitation is that the snapshots at
the slave end can't be restored, as that would break the running
geo-replication. They can be mounted, though, which gives us access to the data
as of when the snapshots were taken. It's just a naive idea.
Any suggestions and use cases are worth discussing :)


Thanks and Regards,
Kotresh H R

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] CentOS Regression generated core by .tests/basic/tier/tier-file-create.t

2016-03-08 Thread Pranith Kumar Karampuri
Sorry for the delay in responding. I am looking at this core. Will 
update with my findings/patches.


Pranith

On 03/08/2016 12:29 PM, Kotresh Hiremath Ravishankar wrote:

Hi All,

The regression run generated a core for the patch below.

https://build.gluster.org/job/rackspace-regression-2GB-triggered/18859/console

From the initial analysis, it is a tiered setup where an ec sub-volume is the
cold tier and afr is the hot tier. The crash happened during lookup: the
lookup was wound to the cold tier, and since the file is not present there,
dht issued a discover onto the hot tier; while serializing the dictionary, it
found that the 'data' had already been freed for the key 'trusted.ec.size'.

(gdb) bt
#0  0x7fe059df9772 in memcpy () from ./lib64/libc.so.6
#1  0x7fe05b209902 in dict_serialize_lk (this=0x7fe04809f7dc, buf=0x7fe0480a2b7c 
"") at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:2533
#2  0x7fe05b20a182 in dict_allocate_and_serialize (this=0x7fe04809f7dc, 
buf=0x7fe04ef6bb08, length=0x7fe04ef6bb00) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/dict.c:2780
#3  0x7fe04e3492de in client3_3_lookup (frame=0x7fe0480a22dc, 
this=0x7fe048008c00, data=0x7fe04ef6bbe0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client-rpc-fops.c:3368
#4  0x7fe04e32c8c8 in client_lookup (frame=0x7fe0480a22dc, 
this=0x7fe048008c00, loc=0x7fe0480a4354, xdata=0x7fe04809f7dc) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client.c:417
#5  0x7fe04dbdaf5f in afr_lookup_do (frame=0x7fe04809f6dc, 
this=0x7fe048029e00, err=0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/afr/src/afr-common.c:2422
#6  0x7fe04dbdb4bb in afr_lookup (frame=0x7fe04809f6dc, 
this=0x7fe048029e00, loc=0x7fe03c0082f4, xattr_req=0x7fe03c00810c) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/afr/src/afr-common.c:2532
#7  0x7fe04de3c2b8 in dht_lookup (frame=0x7fe0480a0a3c, 
this=0x7fe04802c580, loc=0x7fe03c0082f4, xattr_req=0x7fe03c00810c) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:2429
#8  0x7fe04d91f07e in dht_lookup_everywhere (frame=0x7fe03c0081ec, 
this=0x7fe04802d450, loc=0x7fe03c0082f4) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:1803
#9  0x7fe04d920953 in dht_lookup_cbk (frame=0x7fe03c0081ec, 
cookie=0x7fe03c00902c, this=0x7fe04802d450, op_ret=-1, op_errno=2, inode=0x0, 
stbuf=0x0, xattr=0x0, postparent=0x0)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:2056
#10 0x7fe04de35b94 in dht_lookup_everywhere_done (frame=0x7fe03c00902c, 
this=0x7fe0480288a0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:1338
#11 0x7fe04de38281 in dht_lookup_everywhere_cbk (frame=0x7fe03c00902c, 
cookie=0x7fe04809ed2c, this=0x7fe0480288a0, op_ret=-1, op_errno=2, inode=0x0, 
buf=0x0, xattr=0x0, postparent=0x0)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/dht/src/dht-common.c:1768
#12 0x7fe05b27 in default_lookup_cbk (frame=0x7fe04809ed2c, 
cookie=0x7fe048099ddc, this=0x7fe048027590, op_ret=-1, op_errno=2, inode=0x0, 
buf=0x0, xdata=0x0, postparent=0x0) at defaults.c:1188
#13 0x7fe04e0a4861 in ec_manager_lookup (fop=0x7fe048099ddc, state=-5) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-generic.c:864
#14 0x7fe04e0a0b3a in __ec_manager (fop=0x7fe048099ddc, error=2) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-common.c:2098
#15 0x7fe04e09c912 in ec_resume (fop=0x7fe048099ddc, error=0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-common.c:289
#16 0x7fe04e09caf8 in ec_complete (fop=0x7fe048099ddc) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-common.c:362
#17 0x7fe04e0a41a8 in ec_lookup_cbk (frame=0x7fe04800107c, cookie=0x5, 
this=0x7fe048027590, op_ret=-1, op_errno=2, inode=0x7fe03c00152c, 
buf=0x7fe04ef6c860, xdata=0x0, postparent=0x7fe04ef6c7f0)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/cluster/ec/src/ec-generic.c:758
#18 0x7fe04e348239 in client3_3_lookup_cbk (req=0x7fe04809dd4c, 
iov=0x7fe04809dd8c, count=1, myframe=0x7fe04809964c)
 at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/client/src/client-rpc-fops.c:3028
#19 0x7fe05afd83e6 in rpc_clnt_handle_reply (clnt=0x7fe048066350, 
pollin=0x7fe0480018f0) at 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:759
#20 
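
For anyone else poking at this core, something along these lines pulls up the
offending pair straight from the serialization frame (a hedged sketch: the
binary/core paths and the dict.c local names are assumptions, so check what
"info locals" really reports in frame 1 first):

# Hedged sketch: paths and local names (pair, pair->value) are assumptions.
gdb /path/to/sbin/glusterfs /path/to/core \
    -ex 'frame 1' \
    -ex 'info locals' \
    -ex 'print *pair' \
    -ex 'print *pair->value' \
    -ex 'x/16xb pair->value->data' \
    -ex 'quit'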

Re: [Gluster-devel] Review request for leases patches

2016-03-08 Thread Soumya Koduri

Hi Poornima,

On 03/07/2016 11:24 AM, Poornima Gurusiddaiah wrote:

Hi All,

Here is the link to feature page: http://review.gluster.org/#/c/11980/

Patches can be found @:
http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:leases


This link displays only one patch [1]. Probably other patches are not 
marked under topic:leases. Please verify the same.


Also, please confirm whether the patch list is complete enough to be consumed
by an application, or whether there are still pending patches (apart from the
open items mentioned in the design doc) being worked on.


Thanks,
Soumya

[1] http://review.gluster.org/11721





Regards,
Poornima


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




[Gluster-devel] sub-directory geo-replication, snapshot features

2016-03-08 Thread Pranith Kumar Karampuri

hi,
 Late last week I sent a solution for how to achieve 
subdirectory-mount support with access-controls 
(http://www.gluster.org/pipermail/gluster-devel/2016-March/048537.html). 
What follows here is a short description of how other features of 
gluster volumes are implemented for sub-directories.


Please note that the sub-directories are not allowed to be accessed by 
normal mounts i.e. top-level volume mounts. All access to the 
sub-directories goes only through sub-directory mounts.


1) Geo-replication:
The direction in which we are going is to allow geo-replicating just 
some sub-directories and not all of the volume based on options. When 
these options are set, server xlators populate extra information in the 
frames/xdata to write changelog for the fops coming from their 
sub-directory mounts. changelog xlator on seeing this will only 
geo-replicate the files/directories that are in the changelog. Thus only 
the sub-directories are geo-replicated. There is also a suggestion from 
Vijay and Aravinda to have separate domains for operations inside 
sub-directories for changelogs.


2) Sub-directory snapshots using lvm
Every time a sub-directory needs to be created, our idea is that the admin
needs to execute a subvolume creation command which creates a mount to an
empty snapshot at the given sub-directory name. All these directories can be
modified in parallel and we can take individual snapshots of each of the
directories. We will be providing a detailed list of commands to do the same
once they are fleshed out. At the moment these are the directions we are
taking to increase granularity from volume to sub-directory for the main
features.
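
As a rough illustration of the idea (a hedged sketch using LVM thin volumes;
the names, sizes and paths are made up, and the eventual gluster-side commands
for this do not exist yet):

# One thin LV per sub-directory, mounted under the brick, so each
# sub-directory can be snapshotted independently. Illustration only.
vgcreate gluster_vg /dev/sdb
lvcreate -L 100G -T gluster_vg/thinpool
lvcreate -V 10G -T gluster_vg/thinpool -n subdir_a
mkfs.xfs -i size=512 /dev/gluster_vg/subdir_a
mount /dev/gluster_vg/subdir_a /bricks/brick1/a

# Later, an independent snapshot of just that sub-directory:
lvcreate -s -n subdir_a_snap1 gluster_vg/subdir_a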


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-08 Thread Dan Lambright


- Original Message -
> From: "Niels de Vos" 
> To: "Krutika Dhananjay" , "Vijay Bellur" 
> 
> Cc: "Dan Lambright" , "Gluster Devel" 
> , "Pranith Karampuri"
> , "gluster-infra" , "RHGS 
> tiering mailing list"
> 
> Sent: Tuesday, March 8, 2016 8:37:14 AM
> Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core 
> on Linux
> 
> On Tue, Mar 08, 2016 at 07:00:07PM +0530, Krutika Dhananjay wrote:
> > I did talk to the author and he is going to look into the issue.
> > 3.7.9 is round the corner and we certainly don't want bad tests to block
> > patches that need to go in.
> 
> 3.7.9 has been delayed already. There is hardly a good reason for
> changes to delay the release even more. 3.7.10 will be done in a few
> weeks at the end of March; anything that is not a regression fix should
> probably not be included at this point anymore.
> 
> Vijay is the release manager for 3.7.9 and .10; you'll need to come up with
> extremely strong points for getting more patches included.

I've no desire to disrupt downstream releases, and if we are indeed out of
time for 3.7.9, then that's life.
That said, obviously a test failure is a failure, and once it's masked it
tends to be forgotten.
I suggest we do not mask this test upstream (does it happen there?) and give
some urgency to chasing it down.


> 
> Niels
> 
> 
> > 
> > -Krutika
> > 
> > On Tue, Mar 8, 2016 at 6:50 PM, Dan Lambright  wrote:
> > 
> > >
> > >
> > > - Original Message -
> > > > From: "Krutika Dhananjay" 
> > > > To: "Pranith Karampuri" 
> > > > Cc: "gluster-infra" , "Gluster Devel" <
> > > gluster-devel@gluster.org>, "RHGS tiering mailing
> > > > list" , "Dan Lambright" 
> > > > Sent: Tuesday, March 8, 2016 12:15:20 AM
> > > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > > > dumping
> > > core on Linux
> > > >
> > > > It has been failing rather frequently.
> > > > Have reported a bug at
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1315560
> > > > For now, have moved it to bad tests here:
> > > > http://review.gluster.org/#/c/13632/1
> > > >
> > >
> > >
> > > Masking tests is a bad habit. It would be better to fix the problem, and
> > > it looks like a real bug.
> > > The author of the test should help chase this down.
> > >
> > > > -Krutika
> > > >
> > > > On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay 
> > > > wrote:
> > > >
> > > > > +Pranith
> > > > >
> > > > > -Krutika
> > > > >
> > > > >
> > > > > On Sat, Mar 5, 2016 at 11:34 PM, Dan Lambright 
> > > > > wrote:
> > > > >
> > > > >>
> > > > >>
> > > > >> - Original Message -
> > > > >> > From: "Dan Lambright" 
> > > > >> > To: "Shyam" 
> > > > >> > Cc: "Krutika Dhananjay" , "Gluster Devel" <
> > > > >> gluster-devel@gluster.org>, "Rafi Kavungal Chundattu
> > > > >> > Parambil" , "Nithya Balachandran" <
> > > > >> nbala...@redhat.com>, "Joseph Fernandes"
> > > > >> > , "gluster-infra" 
> > > > >> > Sent: Friday, March 4, 2016 9:51:18 AM
> > > > >> > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > > > >> dumping core on Linux
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >> > - Original Message -
> > > > >> > > From: "Shyam" 
> > > > >> > > To: "Krutika Dhananjay" , "Gluster Devel"
> > > > >> > > , "Rafi Kavungal Chundattu
> > > > >> > > Parambil" , "Nithya Balachandran"
> > > > >> > > , "Joseph Fernandes"
> > > > >> > > , "Dan Lambright" 
> > > > >> > > Cc: "gluster-infra" 
> > > > >> > > Sent: Friday, March 4, 2016 9:45:17 AM
> > > > >> > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > > > >> dumping
> > > > >> > > core on Linux
> > > > >> > >
> > > > >> > > Facing the same problem in the following runs as well,
> > > > >> > >
> > > > >> > > 1)
> > > > >> > >
> > > > >>
> > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/18767/console
> > > > >> > > 2)
> > > https://build.gluster.org/job/regression-test-burn-in/546/console
> > > > >> > > 3)
> > > https://build.gluster.org/job/regression-test-burn-in/547/console
> > > > >> > > 4)
> > > https://build.gluster.org/job/regression-test-burn-in/549/console
> > > > >> > >
> > > > >> > > Last successful burn-in was: 545 (but do not see the test having
> > > been
> > > > >> > > run here, so this is inconclusive)
> > > > >> > >
> > > > >> > > burn-in test 544 is hung on the 

Re: [Gluster-devel] [Gluster-users] Arbiter brick size estimation

2016-03-08 Thread Ravishankar N

On 03/05/2016 03:45 PM, Oleksandr Natalenko wrote:

In order to estimate the GlusterFS arbiter brick size, I've deployed a test
setup with a replica 3 arbiter 1 volume within one node. Each brick is located
on a separate HDD (XFS with inode size == 512), using GlusterFS v3.7.6 plus
the memleak patches. Volume options are kept at their defaults.

Here is the script that creates files and folders in mounted volume: [1]

The script creates 1M files of random size (between 1 and 32768 bytes) and
some number of folders. After running it I've got 1036637 folders, so in
total it is 2036637 files and folders.

The initial used space on each brick is 42M. After running the script I've got:

replica brick 1 and 2: 19867168 kbytes == 19G
arbiter brick: 1872308 kbytes == 1.8G

The number of inodes on each brick is 3139091. So here goes the estimation.

Dividing arbiter used space by files+folders we get:

(1872308 - 42000)/2036637 == 899 bytes per file or folder

Dividing arbiter used space by inodes we get:

(1872308 - 42000)/3139091 == 583 bytes per inode

Not sure about what calculation is correct.


I think the first one is right because you still haven't used up all the
inodes (2036637 used vs. the maximum permissible 3139091). But again, this is
an approximation, because not all files would be 899 bytes. For example, if
there are a thousand files present in a directory, then du of the directory
would be more than du of a single file, because the directory will take some
disk space to store the dentries.
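
If it helps, a comparable per-entry number can be pulled straight off an
arbiter brick with something like this (a rough sketch; the brick path is a
placeholder and the empty-brick baseline is ignored):

# Rough per-entry overhead on an arbiter brick; .glusterfs is excluded from
# the entry count since it mostly holds hardlinks/symlinks for the same entries.
BRICK=/bricks/arbiter1
used_kb=$(du -sk "$BRICK" | awk '{print $1}')
entries=$(find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -print | wc -l)
echo "approx $(( used_kb * 1024 / entries )) bytes per file/directory"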



  I guess we should consider the one that accounts for inodes, because of the
.glusterfs/ folder data.

Nevertheless, in contrast, documentation [2] says it should be 4096 bytes per
file. Am I wrong with my calculations?


The 4KB is a conservative estimate considering the fact that though the 
arbiter brick does not store data, it still keeps a copy of both user 
and gluster xattrs. For example, if the application sets a lot of 
xattrs, it can consume a data block if they cannot be accommodated on 
the inode itself.  Also there is the .glusterfs folder like you said 
which would take up some space. Here is what I tried on an XFS brick:

[root@ravi4 brick]# touch file

[root@ravi4 brick]# ls -l file
-rw-r--r-- 1 root root 0 Mar  8 12:54 file

[root@ravi4 brick]# du file
0       file
[root@ravi4 brick]# for i in {1..100}
> do
> setfattr -n user.value$i -v value$i file
> done

[root@ravi4 brick]# ll -l file
-rw-r--r-- 1 root root 0 Mar  8 12:54 file

[root@ravi4 brick]# du -h file
4.0K    file
Hope this helps,
Ravi



Pranith?

[1] http://termbin.com/ka9x
[2] 
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-08 Thread Krutika Dhananjay
I did talk to the author and he is going to look into the issue.
3.7.9 is round the corner and we certainly don't want bad tests to block
patches that need to go in.

-Krutika

On Tue, Mar 8, 2016 at 6:50 PM, Dan Lambright  wrote:

>
>
> - Original Message -
> > From: "Krutika Dhananjay" 
> > To: "Pranith Karampuri" 
> > Cc: "gluster-infra" , "Gluster Devel" <
> gluster-devel@gluster.org>, "RHGS tiering mailing
> > list" , "Dan Lambright" 
> > Sent: Tuesday, March 8, 2016 12:15:20 AM
> > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t dumping
> core on Linux
> >
> > It has been failing rather frequently.
> > Have reported a bug at
> https://bugzilla.redhat.com/show_bug.cgi?id=1315560
> > For now, have moved it to bad tests here:
> > http://review.gluster.org/#/c/13632/1
> >
>
>
> Masking tests is a bad habit. It would be better to fix the problem, and
> it looks like a real bug.
> The author of the test should help chase this down.
>
> > -Krutika
> >
> > On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay 
> > wrote:
> >
> > > +Pranith
> > >
> > > -Krutika
> > >
> > >
> > > On Sat, Mar 5, 2016 at 11:34 PM, Dan Lambright 
> > > wrote:
> > >
> > >>
> > >>
> > >> - Original Message -
> > >> > From: "Dan Lambright" 
> > >> > To: "Shyam" 
> > >> > Cc: "Krutika Dhananjay" , "Gluster Devel" <
> > >> gluster-devel@gluster.org>, "Rafi Kavungal Chundattu
> > >> > Parambil" , "Nithya Balachandran" <
> > >> nbala...@redhat.com>, "Joseph Fernandes"
> > >> > , "gluster-infra" 
> > >> > Sent: Friday, March 4, 2016 9:51:18 AM
> > >> > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > >> dumping core on Linux
> > >> >
> > >> >
> > >> >
> > >> > - Original Message -
> > >> > > From: "Shyam" 
> > >> > > To: "Krutika Dhananjay" , "Gluster Devel"
> > >> > > , "Rafi Kavungal Chundattu
> > >> > > Parambil" , "Nithya Balachandran"
> > >> > > , "Joseph Fernandes"
> > >> > > , "Dan Lambright" 
> > >> > > Cc: "gluster-infra" 
> > >> > > Sent: Friday, March 4, 2016 9:45:17 AM
> > >> > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > >> dumping
> > >> > > core on Linux
> > >> > >
> > >> > > Facing the same problem in the following runs as well,
> > >> > >
> > >> > > 1)
> > >> > >
> > >>
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18767/console
> > >> > > 2)
> https://build.gluster.org/job/regression-test-burn-in/546/console
> > >> > > 3)
> https://build.gluster.org/job/regression-test-burn-in/547/console
> > >> > > 4)
> https://build.gluster.org/job/regression-test-burn-in/549/console
> > >> > >
> > >> > > Last successful burn-in was: 545 (but do not see the test having
> been
> > >> > > run here, so this is inconclusive)
> > >> > >
> > >> > > burn-in test 544 is hung on the same test here,
> > >> > > https://build.gluster.org/job/regression-test-burn-in/544/console
> > >> > >
> > >> > > (and at this point I am stopping the hunt for when this last
> > >> succeeded :) )
> > >> > >
> > >> > > Let's know if anyone is taking a peek at the cores.
> > >> >
> > >> > hm. Not familiar with this test. Written by Pranith? I'll look.
> > >>
> > >> We are doing lookup everywhere, and building up a dict of the extended
> > >> attributes of a file as we traverse each sub volume across the hot and
> > >> cold
> > >> tiers. The length field of one of the EC keys is corrupted.
> > >>
> > >> Not clear why this is happening.. I see no tiering relationship as of
> > >> yet, its possible the file is being demoted in parallel to the
> foreground
> > >> script operation.
> > >>
> > >> The test runs fine on my machines.  Does this reproduce consistently
> on
> > >> one of the Jenkins machines? If so, getting onto it would be the next
> > >> step.
> > >> I think that would be preferable to masking this test case.
> > >>
> > >>
> > >> >
> > >> > >
> > >> > > Thanks,
> > >> > > Shyam
> > >> > >
> > >> > >
> > >> > >
> > >> > > On 03/04/2016 07:40 AM, Krutika Dhananjay wrote:
> > >> > > > Could someone from tiering dev team please take a look?
> > >> > > >
> > >> > > >
> > >>
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18793/console
> > >> > > >
> > >> > > > -Krutika
> > >> > > >
> > >> > > >
> > >> > > > ___
> > >> > > > Gluster-infra mailing list
> > >> > > > gluster-in...@gluster.org
> > >> > > > http://www.gluster.org/mailman/listinfo/gluster-infra
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>

Re: [Gluster-devel] [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-08 Thread Dan Lambright


- Original Message -
> From: "Krutika Dhananjay" 
> To: "Pranith Karampuri" 
> Cc: "gluster-infra" , "Gluster Devel" 
> , "RHGS tiering mailing
> list" , "Dan Lambright" 
> Sent: Tuesday, March 8, 2016 12:15:20 AM
> Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core 
> on Linux
> 
> It has been failing rather frequently.
> Have reported a bug at https://bugzilla.redhat.com/show_bug.cgi?id=1315560
> For now, have moved it to bad tests here:
> http://review.gluster.org/#/c/13632/1
> 


Masking tests is a bad habit. It would be better to fix the problem, and it 
looks like a real bug.
The author of the test should help chase this down.

> -Krutika
> 
> On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay 
> wrote:
> 
> > +Pranith
> >
> > -Krutika
> >
> >
> > On Sat, Mar 5, 2016 at 11:34 PM, Dan Lambright 
> > wrote:
> >
> >>
> >>
> >> - Original Message -
> >> > From: "Dan Lambright" 
> >> > To: "Shyam" 
> >> > Cc: "Krutika Dhananjay" , "Gluster Devel" <
> >> gluster-devel@gluster.org>, "Rafi Kavungal Chundattu
> >> > Parambil" , "Nithya Balachandran" <
> >> nbala...@redhat.com>, "Joseph Fernandes"
> >> > , "gluster-infra" 
> >> > Sent: Friday, March 4, 2016 9:51:18 AM
> >> > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> >> dumping core on Linux
> >> >
> >> >
> >> >
> >> > - Original Message -
> >> > > From: "Shyam" 
> >> > > To: "Krutika Dhananjay" , "Gluster Devel"
> >> > > , "Rafi Kavungal Chundattu
> >> > > Parambil" , "Nithya Balachandran"
> >> > > , "Joseph Fernandes"
> >> > > , "Dan Lambright" 
> >> > > Cc: "gluster-infra" 
> >> > > Sent: Friday, March 4, 2016 9:45:17 AM
> >> > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> >> dumping
> >> > > core on Linux
> >> > >
> >> > > Facing the same problem in the following runs as well,
> >> > >
> >> > > 1)
> >> > >
> >> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18767/console
> >> > > 2) https://build.gluster.org/job/regression-test-burn-in/546/console
> >> > > 3) https://build.gluster.org/job/regression-test-burn-in/547/console
> >> > > 4) https://build.gluster.org/job/regression-test-burn-in/549/console
> >> > >
> >> > > Last successful burn-in was: 545 (but do not see the test having been
> >> > > run here, so this is inconclusive)
> >> > >
> >> > > burn-in test 544 is hung on the same test here,
> >> > > https://build.gluster.org/job/regression-test-burn-in/544/console
> >> > >
> >> > > (and at this point I am stopping the hunt for when this last
> >> succeeded :) )
> >> > >
> >> > > Let's know if anyone is taking a peek at the cores.
> >> >
> >> > hm. Not familiar with this test. Written by Pranith? I'll look.
> >>
> >> We are doing lookup everywhere, and building up a dict of the extended
> >> attributes of a file as we traverse each sub volume across the hot and
> >> cold
> >> tiers. The length field of one of the EC keys is corrupted.
> >>
> >> Not clear why this is happening.. I see no tiering relationship as of
> >> yet, its possible the file is being demoted in parallel to the foreground
> >> script operation.
> >>
> >> The test runs fine on my machines.  Does this reproduce consistently on
> >> one of the Jenkins machines? If so, getting onto it would be the next
> >> step.
> >> I think that would be preferable to masking this test case.
> >>
> >>
> >> >
> >> > >
> >> > > Thanks,
> >> > > Shyam
> >> > >
> >> > >
> >> > >
> >> > > On 03/04/2016 07:40 AM, Krutika Dhananjay wrote:
> >> > > > Could someone from tiering dev team please take a look?
> >> > > >
> >> > > >
> >> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18793/console
> >> > > >
> >> > > > -Krutika
> >> > > >
> >> > > >
> >> > > > ___
> >> > > > Gluster-infra mailing list
> >> > > > gluster-in...@gluster.org
> >> > > > http://www.gluster.org/mailman/listinfo/gluster-infra
> >> > > >
> >> > >
> >> >
> >>
> >
> >
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 8th March, 2016

2016-03-08 Thread Gaurav Garg
Hi All,

Following are the meeting minutes for today's Gluster community bug triage
meeting.


Meeting ended Tue Mar  8 12:58:43 2016 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.html

Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.txt

Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-03-08/gluster_bug_triage.2016-03-08-12.01.log.html


Meeting summary
---
* Roll call  (ggarg_, 12:02:27)
  * ACTION: kkeithley_ will come up with a proposal to reduce the number
of bugs against "mainline" in NEW state  (ggarg, 12:06:49)
  * LINK:
https://ci.centos.org/view/Gluster/job/gluster_libgfapi-python/
(ndevos, 12:08:31)
  * LINK:

https://ci.centos.org/view/Gluster/job/gluster_libgfapi-python/buildTimeTrend
(ndevos, 12:09:42)
  * ACTION: ndevos to continue work on  proposing  some test-cases for
minimal libgfapi test  (ggarg, 12:11:15)
  * ACTION: Manikandan and Nandaja will update on bug automation
(ggarg, 12:13:09)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage   (ggarg,
12:14:02)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1315422   (rafi,
12:18:41)
  * LINK: http://ur1.ca/om4jt   (rafi, 12:46:38)

Meeting ended at 12:58:43 UTC.



Action Items

* kkeithley_ will come up with a proposal to reduce the number of bugs
  against "mainline" in NEW state
* ndevos to continue work on  proposing  some test-cases for minimal
  libgfapi test
* Manikandan and Nandaja will update on bug automation



Action Items, by person
---
* Manikandan
  * Manikandan and Nandaja will update on bug automation
* ndevos
  * ndevos to continue work on  proposing  some test-cases for minimal
libgfapi test

  * kkeithley_ will come up with a proposal to reduce the number of bugs
against "mainline" in NEW state




People Present (lines said)
---
* ggarg (54)
* rafi (37)
* ndevos (15)
* obnox (15)
* ira (15)
* jiffin (13)
* Manikandan (9)
* Saravanakmr (6)
* glusterbot (6)
* zodbot (3)
* ggarg_ (3)
* hgowtham (1)


Thanks,

Regards,
Gaurav

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 90 minutes)

2016-03-08 Thread Gaurav Garg
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
  (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thanks,

Regards,
Gaurav
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Code-Review+2 and Verified+1 cause multiple retriggers on Jenkins

2016-03-08 Thread Saravanakumar Arumugam

Hi Raghavendra,

Can we have this documented (with an example workflow) here:
http://gluster.readthedocs.org/en/latest/Developer-guide/Development-Workflow/

Thanks,
Saravana

On 03/07/2016 10:57 AM, Raghavendra Talur wrote:



On Fri, Mar 4, 2016 at 6:13 PM, Raghavendra Talur > wrote:




On Thu, Feb 4, 2016 at 7:13 PM, Niels de Vos > wrote:

On Thu, Feb 04, 2016 at 04:15:16PM +0530, Raghavendra Talur wrote:
> On Thu, Feb 4, 2016 at 4:13 PM, Niels de Vos
> wrote:
>
> > On Thu, Feb 04, 2016 at 03:34:05PM +0530, Raghavendra Talur wrote:
> > > Hi,
> > >
> > > We recently changed the jenkins builds to be triggered on the following
> > > triggers.
> > >
> > > 1. Verified+1
> > > 2. Code-review+2
> > > 3. recheck (netbsd|centos|smoke)
> > >
> > > There is a bug in 1 and 2.
> > >
> > > Multiple triggers of 1 or 2 would result in re-runs even when not
> > > intended.
> > >
> > > I would like to replace 1 and 2 with a comment "run-all-regression" or
> > > something like that.
> > > Thoughts?
> >
> > Maybe starting regressions on Code-Review +1 (or +2) only?
> >
>
> Multiple code-reviews would do multiple triggers. Won't work.

How can we make this work, without the need of providing magic
comments?


I investigated but couldn't find a way to make it work. Discussed
with Kaushal and we feel it should be ok to go with a "check all"
comment for initial regression run and deprecate Code-Review+2 and
Verified+1 triggers.

I would like to go ahead and do it as the build queue is
increasing again just because of Code-Review+2's given just before
a patch is merged; they don't serve any purpose.


I have for now just removed trigger for code-review.
Trigger for verified+1 remains as is.
No new trigger on comments have been added.


Niels





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [master] FAILED NetBSD regression: quota.t

2016-03-08 Thread Manikandan Selvaganesh
Hi Milind,

These are the two tests that failed in NetBSD:
TEST $CLI volume stop $V0;
EXPECT "1" get_aux

We are not sure why the volume stop failed. Since the volume stop failed, it
is expected that the next test case would also fail. Could you please
retrigger and check?
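
(For retriggering, a Gerrit comment along the lines of the one below should
kick off just the NetBSD regression again, assuming the "recheck" comment
triggers mentioned elsewhere on this list are still active:)

recheck netbsd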

--
Thanks & Regards,
Manikandan Selvaganesh.

- Original Message -
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14776/
> 
> ==
> Running tests in file ./tests/basic/quota.t
> [06:08:03] ./tests/basic/quota.t ..
> not ok 75
> not ok 76
> Failed 2/76 subtests
> [06:08:03]
> 
> Test Summary Report
> ---
> ./tests/basic/quota.t (Wstat: 0 Tests: 76 Failed: 2)
>   Failed tests:  75-76
> Files=1, Tests=76, 432 wallclock secs ( 0.06 usr  0.01 sys +  6.89 cusr  8.00
> csys = 14.96 CPU)
> Result: FAIL
> End of test ./tests/basic/quota.t
> ==
> 
> Please advise.
> 
> --
> Milind
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-08 Thread Mohammed Rafi K C
Hi Milind,

There was an issue with this test case caused by a parallel cherry-pick.
More info at [1].

It is already fixed, and rebasing your patch will work.


[1] :
http://nongnu.13855.n7.nabble.com/mainline-regression-test-is-broken-td209079.html

Regards!
Rafi


On 03/08/2016 01:56 PM, Mohammed Rafi K C wrote:
> HI Du,
>
> I will take a look.
>
> Milind,
>
> Can you please provide a link for the failed case.
>
> Rafi
>
> On 03/08/2016 12:59 PM, Raghavendra Gowdappa wrote:
>> +rafi.
>>
>> Rafi, can you have an initial analysis on this?
>>
>> regards,
>> Raghavendra.
>>
>> - Original Message -
>>> From: "Milind Changire" 
>>> To: gluster-devel@gluster.org
>>> Sent: Tuesday, March 8, 2016 12:53:27 PM
>>> Subject: [Gluster-devel] [master] FAILED: 
>>> bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>>
>>> ==
>>> Running tests in file
>>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>> [07:27:48]
>>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>> ..
>>> not ok 11 Got "1" instead of "0"
>>> not ok 14 Got "1" instead of "0"
>>> not ok 15 Got "1" instead of "0"
>>> not ok 16 Got "1" instead of "0"
>>> Failed 4/16 subtests
>>> [07:27:48]
>>>
>>> Test Summary Report
>>> ---
>>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>> (Wstat: 0 Tests: 16 Failed: 4)
>>>   Failed tests:  11, 14-16
>>> Files=1, Tests=16, 23 wallclock secs ( 0.02 usr  0.00 sys +  1.13 cusr  0.39
>>> csys =  1.54 CPU)
>>> Result: FAIL
>>> End of test
>>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>> ==
>>>
>>> Please advise.
>>>
>>> --
>>> Milind
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [master] FAILED NetBSD regression: quota.t

2016-03-08 Thread Milind Changire
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/14776/

==
Running tests in file ./tests/basic/quota.t
[06:08:03] ./tests/basic/quota.t .. 
not ok 75 
not ok 76 
Failed 2/76 subtests 
[06:08:03]

Test Summary Report
---
./tests/basic/quota.t (Wstat: 0 Tests: 76 Failed: 2)
  Failed tests:  75-76
Files=1, Tests=76, 432 wallclock secs ( 0.06 usr  0.01 sys +  6.89 cusr  8.00 
csys = 14.96 CPU)
Result: FAIL
End of test ./tests/basic/quota.t
==

Please advise.

--
Milind

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-08 Thread Milind Changire
oops! how did I miss that :)
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18683/


--
Milind


- Original Message -
From: "Mohammed Rafi K C" 
To: "Raghavendra Gowdappa" , "Milind Changire" 

Cc: gluster-devel@gluster.org
Sent: Tuesday, March 8, 2016 1:56:51 PM
Subject: Re: [Gluster-devel] [master] FAILED: 
bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

HI Du,

I will take a look.

Milind,

Can you please provide a link for the failed case.

Rafi

On 03/08/2016 12:59 PM, Raghavendra Gowdappa wrote:
> +rafi.
>
> Rafi, can you have an initial analysis on this?
>
> regards,
> Raghavendra.
>
> - Original Message -
>> From: "Milind Changire" 
>> To: gluster-devel@gluster.org
>> Sent: Tuesday, March 8, 2016 12:53:27 PM
>> Subject: [Gluster-devel] [master] FAILED: 
>> bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>
>> ==
>> Running tests in file
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> [07:27:48]
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> ..
>> not ok 11 Got "1" instead of "0"
>> not ok 14 Got "1" instead of "0"
>> not ok 15 Got "1" instead of "0"
>> not ok 16 Got "1" instead of "0"
>> Failed 4/16 subtests
>> [07:27:48]
>>
>> Test Summary Report
>> ---
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> (Wstat: 0 Tests: 16 Failed: 4)
>>   Failed tests:  11, 14-16
>> Files=1, Tests=16, 23 wallclock secs ( 0.02 usr  0.00 sys +  1.13 cusr  0.39
>> csys =  1.54 CPU)
>> Result: FAIL
>> End of test
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> ==
>>
>> Please advise.
>>
>> --
>> Milind
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [master] FAILED: bug-1303028-Rebalance-glusterd-rpc-connection-issue.t

2016-03-08 Thread Mohammed Rafi K C
HI Du,

I will take a look.

Milind,

Can you please provide a link for the failed case.

Rafi

On 03/08/2016 12:59 PM, Raghavendra Gowdappa wrote:
> +rafi.
>
> Rafi, can you have an initial analysis on this?
>
> regards,
> Raghavendra.
>
> - Original Message -
>> From: "Milind Changire" 
>> To: gluster-devel@gluster.org
>> Sent: Tuesday, March 8, 2016 12:53:27 PM
>> Subject: [Gluster-devel] [master] FAILED: 
>> bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>>
>> ==
>> Running tests in file
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> [07:27:48]
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> ..
>> not ok 11 Got "1" instead of "0"
>> not ok 14 Got "1" instead of "0"
>> not ok 15 Got "1" instead of "0"
>> not ok 16 Got "1" instead of "0"
>> Failed 4/16 subtests
>> [07:27:48]
>>
>> Test Summary Report
>> ---
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> (Wstat: 0 Tests: 16 Failed: 4)
>>   Failed tests:  11, 14-16
>> Files=1, Tests=16, 23 wallclock secs ( 0.02 usr  0.00 sys +  1.13 cusr  0.39
>> csys =  1.54 CPU)
>> Result: FAIL
>> End of test
>> ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>> ==
>>
>> Please advise.
>>
>> --
>> Milind
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel