Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Atin Mukherjee


On 05/02/2015 09:08 AM, Atin Mukherjee wrote:
> 
> 
> On 05/02/2015 08:54 AM, Emmanuel Dreyfus wrote:
>> Pranith Kumar Karampuri  wrote:
>>
>>> Seems like glusterd failure from the looks of it: +glusterd folks.
>>>
>>> Running tests in file ./tests/basic/cdc.t
>>> volume delete: patchy: failed: Another transaction is in progress for
>>> patchy. Please try again after sometime.
>>> [18:16:40] ./tests/basic/cdc.t ..
>>> not ok 52
>>
>> This is a volume stop that fails. The logs say a lock is held by a UUID
>> which happens to be the volume's own UUID.
>>
>> I tried git bisect and it seems to be related to
>> http://review.gluster.org/9918 but I am not completely sure (I may have
>> botched my git bisect).
> 
> I'm looking into this.
Looking at the logs, here are the findings:

- The gluster volume stop timed out at the CLI, which is why
cmd_history.log did not capture it.
- glusterd acquired the volume lock in volume stop but somehow never
released it, so the subsequent gluster v delete failed saying another
transaction is in progress.
- For the gluster volume stop transaction I could see glusterd_nfssvc_stop
being triggered, but after that glusterd logged nothing for almost two
minutes. The catch here is that by this time volinfo->status should have
been marked as stopped and persisted on disk, yet gluster v info did not
reflect that.
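
A quick way to double-check the stale-lock theory from the archived logs
(a sketch, assuming the default glusterd log and state paths on the slave):

    # An acquire with no matching release around the stop transaction
    # points at the leaked volume lock.
    grep -in 'lock' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 40
    # Local peer UUID, to compare against the reported lock owner:
    grep UUID /var/lib/glusterd/glusterd.info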

Is this reproducible on NetBSD every time? If yes, I would need a VM to
debug it further. I am guessing that the other failure, from
tests/geo-rep/georep-setup.t, has the same root cause. Is it a new
regression failure?

~Atin

-- 
~Atin


Re: [Gluster-devel] spurious failures in tests/basic/afr/sparse-file-self-heal.t

2015-05-01 Thread Krishnan Parthasarathi

> If glusterd itself fails to come up, of course the test will fail :-). Is it
> still happening?
Pranith,

Did you get a chance to see glusterd logs and find why glusterd didn't come up?
Please paste the relevant logs in this thread.



Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Atin Mukherjee


On 05/02/2015 08:54 AM, Emmanuel Dreyfus wrote:
> Pranith Kumar Karampuri  wrote:
> 
>> Seems like glusterd failure from the looks of it: +glusterd folks.
>>
>> Running tests in file ./tests/basic/cdc.t
>> volume delete: patchy: failed: Another transaction is in progress for
>> patchy. Please try again after sometime.
>> [18:16:40] ./tests/basic/cdc.t ..
>> not ok 52
> 
> This is a volume stop that fails. The logs say a lock is held by a UUID
> which happens to be the volume's own UUID.
> 
> I tried git bisect and it seems to be related to
> http://review.gluster.org/9918 but I am not completely sure (I may have
> botched my git bisect).

I'm looking into this.

-- 
~Atin


Re: [Gluster-devel] spurious failures in tests/basic/afr/sparse-file-self-heal.t

2015-05-01 Thread Vijay Bellur

On 05/02/2015 08:17 AM, Pranith Kumar Karampuri wrote:

> hi,
>   As per the etherpad:
> https://public.pad.fsfe.org/p/gluster-spurious-failures
>
>   * tests/basic/afr/sparse-file-self-heal.t (Wstat: 0 Tests: 64 Failed: 35)
>
>   * Failed tests:  1-6, 11, 20-30, 33-34, 36, 41, 50-61, 64
>
>   * Happens in master (Mon 30th March - git commit id
> 3feaf1648528ff39e23748ac9004a77595460c9d)
>
>   * (hasn't yet been added to BZs)
>
> If glusterd itself fails to come up, of course the test will fail :-).
> Is it still happening?



We have not been actively curating this list for the last few days, and
I am not certain whether this failure happens anymore.


Investigating why a regression run fails for our patches, and fixing
those failures (even when they are unrelated to our patch), should be
the most effective way forward.


-Vijay




Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Emmanuel Dreyfus
Pranith Kumar Karampuri  wrote:

> Seems like glusterd failure from the looks of it: +glusterd folks.
> 
> Running tests in file ./tests/basic/cdc.t
> volume delete: patchy: failed: Another transaction is in progress for
> patchy. Please try again after sometime.
> [18:16:40] ./tests/basic/cdc.t ..
> not ok 52

This is a volume stop that fails. The logs say a lock is held by a UUID
which happens to be the volume's own UUID.

I tried git bisect and it seems to be related to
http://review.gluster.org/9918 but I am not completely sure (I may have
botched my git bisect).

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


[Gluster-devel] spurious failures in tests/basic/afr/sparse-file-self-heal.t

2015-05-01 Thread Pranith Kumar Karampuri

hi,
 As per the etherpad: 
https://public.pad.fsfe.org/p/gluster-spurious-failures


 * tests/basic/afr/sparse-file-self-heal.t (Wstat: 0 Tests: 64 Failed: 35)

 * Failed tests:  1-6, 11, 20-30, 33-34, 36, 41, 50-61, 64

 * Happens in master (Mon 30th March - git commit id
   3feaf1648528ff39e23748ac9004a77595460c9d)

 * (hasn't yet been added to BZs)

If glusterd itself fails to come up, of course the test will fail :-). 
Is it still happening?


Pranith




Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Pranith Kumar Karampuri

Seems like glusterd failure from the looks of it: +glusterd folks.

Running tests in file ./tests/basic/cdc.t
volume delete: patchy: failed: Another transaction is in progress for patchy. 
Please try again after sometime.
[18:16:40] ./tests/basic/cdc.t ..
not ok 52
not ok 53 Got "Started" instead of "Stopped"
not ok 54
not ok 55
Failed 4/55 subtests
[18:16:40]

Pranith

On 05/02/2015 01:23 AM, Emmanuel Dreyfus wrote:

> Justin Clift wrote:
>
>>> They are archived, in /archives/logs/ on the regressions VM. It's just
>>> that you have to get them through sftp.
>>
>> Is it easy to add web access for them?
>
> It was really easy:
> http://nbslave76.cloud.gluster.org/archives/logs/glusterfs-logs-20150501182952.tgz
>
> Now the script in Jenkins needs tweaking to give the URL instead of
> host:/path.
> I am going offline; feel free to beat me to fixing this.





Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-01 Thread Pranith Kumar Karampuri


On 05/01/2015 10:05 PM, Nithya Balachandran wrote:

> Hi,
>
> Can you point me to a Jenkins run with this failure?

I don't have one. But it is very easy to re-create. Just run the
following in your workspace:

while prove -rfv tests/basic/fops-sanity.t; do :; done

At least on my machine this failed within 5-10 minutes. Very consistent
failure :-)
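
If it helps, a variant that stops at the first failure and snapshots the
logs from that run (a sketch; /var/log/glusterfs is the default log
directory, adjust if your workspace logs elsewhere):

    # Loop until prove fails, then archive the logs from the failing run.
    while prove -rfv tests/basic/fops-sanity.t; do :; done; \
    tar czf /tmp/fops-sanity-logs-$(date +%s).tgz /var/log/glusterfs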


Pranith


Regards,
Nithya



- Original Message -

From: "Pranith Kumar Karampuri" 
To: "Shyam" , "Raghavendra Gowdappa" , 
"Nithya Balachandran"
, "Susant Palai" 
Cc: "Gluster Devel" 
Sent: Friday, 1 May, 2015 5:07:12 PM
Subject: spurious regression failures for ./tests/basic/fops-sanity.t

hi,
  I see the following logs when the failure happens:
[2015-05-01 10:37:44.157477] E
[dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht:
(null): failed to get the 'linkto' xattr No data available
[2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]
0-glusterfs-fuse: 25: READ => -1 (No data available)

Then the program fails with following message:
read failed: No data available
read returning junk
fd based file operation 1 failed
read failed: No data available
read returning junk
fstat failed : No data available
fd based file operation 2 failed
read failed: No data available
read returning junk
dup fd based file operation failed
not ok 10

Could you let us know when this can happen and post a patch which will
fix it? Please let us know who is going to fix it.

Pranith





Re: [Gluster-devel] Gluster Slogans Revisited

2015-05-01 Thread Benjamin Turner
I liked:

Gluster: Redefine storage.
Gluster: Software-Defined Storage. Redefined
Gluster: RAID G
Gluster: RAISE (redundant array of inexpensive storage equipment)
Gluster: Software {re}defined storage+

And suggested:

Gluster {DS|FS|RAISE|RAIDG}: Software Defined Storage Redefined (some
combination of lines 38-42)

My thinking is:

<new name>: <tagline>

Gluster * - This is the "change GlusterFS to GlusterDS" idea we were
already discussing. Maybe we could even come up with a longer acronym,
like RAIDG or RAISE, or something more definitive of what we are. I
think instead of just using "Gluster:" we come up with the new way to
refer to GlusterFS and use this as a way to push that as well.

Tagline - Whatever cool saying gets people excited to check out
GlusterDS/GlusterRAIDG/whatever.

So ex:

Gluster RAISE: Software Defined Storage Redefined
Gluster DS: Software defined storage defined your way
Gluster RAIDG: Storage from the ground up

Just my $0.02

-b


On Fri, May 1, 2015 at 2:51 PM, Tom Callaway  wrote:

> Hello Gluster Ants!
>
> Thanks for all the slogan suggestions that you've provided. I've made an
> etherpad page which collected them all, along with some additional
> suggestions made by Red Hat's Brand team:
>
> https://public.pad.fsfe.org/p/gluster-slogans
>
> Feel free to discuss them (either here or on the etherpad). If you like
> a particular slogan, feel free to put a + next to it on the etherpad.
>
> Before we can pick a new slogan, it needs to be cleared by Red Hat
> Legal, this is a small formality to make sure that we're not infringing
> someone else's trademark or doing anything that would cause Red Hat
> undue risk. We don't want to waste their time by having them clear every
> possible suggestion, so your feedback is very helpful to allow us to
> narrow down the list. At the end of the day, barring legal clearance,
> the slogan selection is up to the community.
>
> Thanks!
>
> ~tom
>
> ==
> Red Hat


Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Emmanuel Dreyfus
Justin Clift  wrote:

> > They are archived, in /archives/logs/ on the regressions VM. It's just
> > that you have to get them through sftp.
> 
> Is it easy to add web access for them? 

It was really easy:
http://nbslave76.cloud.gluster.org/archives/logs/glusterfs-logs-20150501182952.tgz

Now the script in Jenkins needs tweaking to give the URL instead of
host:/path.
I am going offline; feel free to beat me to fixing this.

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


[Gluster-devel] Gluster Slogans Revisited

2015-05-01 Thread Tom Callaway
Hello Gluster Ants!

Thanks for all the slogan suggestions that you've provided. I've made an
etherpad page which collected them all, along with some additional
suggestions made by Red Hat's Brand team:

https://public.pad.fsfe.org/p/gluster-slogans

Feel free to discuss them (either here or on the etherpad). If you like
a particular slogan, feel free to put a + next to it on the etherpad.

Before we can pick a new slogan, it needs to be cleared by Red Hat
Legal, this is a small formality to make sure that we're not infringing
someone else's trademark or doing anything that would cause Red Hat
undue risk. We don't want to waste their time by having them clear every
possible suggestion, so your feedback is very helpful to allow us to
narrow down the list. At the end of the day, barring legal clearance,
the slogan selection is up to the community.

Thanks!

~tom

==
Red Hat


Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Emmanuel Dreyfus
Justin Clift  wrote:

> Is it easy to add web access for them? (eg nginx or whatever)

NetBSD has a built-in simple web server but I have never set it up. I
will look at it once I have investigated the cdc.t regression.
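
For reference, a sketch of what enabling it might look like; the rc.conf
knobs here are assumptions about bozohttpd's rc.d script, so check
httpd(8) on the slave before copying:

    # Serve /archives over HTTP with NetBSD's bundled bozohttpd.
    echo 'httpd=YES' >> /etc/rc.conf
    echo 'httpd_wwwdir=/archives' >> /etc/rc.conf
    /etc/rc.d/httpd start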

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Justin Clift
On 1 May 2015, at 16:08, Emmanuel Dreyfus  wrote:
> Pranith Kumar Karampuri  wrote:
> 
>>  I was not able to re-create the glupy failure. I see that NetBSD
>> is not archiving logs like the Linux regression. Do you mind adding that
>> one? I think Kaushal and Vijay did this for the Linux regressions, so CC them.
> 
> They are archived, in /archives/logs/ on the regressions VM. It's just
> that you have to get them through sftp.

Is it easy to add web access for them? (eg nginx or whatever)

We have the nginx rule for the CentOS ones around somewhere if it'd help?

+ Justin


> -- 
> Emmanuel Dreyfus
> http://hcpnet.free.fr/pubz
> m...@netbsd.org

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



Re: [Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-01 Thread Nithya Balachandran
Hi,

Can you point me to a Jenkins run with this failure?

Regards,
Nithya



- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Shyam" , "Raghavendra Gowdappa" 
> , "Nithya Balachandran"
> , "Susant Palai" 
> Cc: "Gluster Devel" 
> Sent: Friday, 1 May, 2015 5:07:12 PM
> Subject: spurious regression failures for ./tests/basic/fops-sanity.t
> 
> hi,
>  I see the following logs when the failure happens:
> [2015-05-01 10:37:44.157477] E
> [dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht:
> (null): failed to get the 'linkto' xattr No data available
> [2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk]
> 0-glusterfs-fuse: 25: READ => -1 (No data available)
> 
> Then the program fails with following message:
> read failed: No data available
> read returning junk
> fd based file operation 1 failed
> read failed: No data available
> read returning junk
> fstat failed : No data available
> fd based file operation 2 failed
> read failed: No data available
> read returning junk
> dup fd based file operation failed
> not ok 10
> 
> Could you let us know when this can happen and post a patch which will
> fix it? Please let us know who is going to fix it.
> 
> Pranith
> 


Re: [Gluster-devel] netbsd regression logs

2015-05-01 Thread Emmanuel Dreyfus
Pranith Kumar Karampuri  wrote:

>   I was not able to re-create the glupy failure. I see that NetBSD
> is not archiving logs like the Linux regression. Do you mind adding that
> one? I think Kaushal and Vijay did this for the Linux regressions, so CC them.

They are archived, in /archives/logs/ on the regressions VM. It's just
that you have to get them through sftp.
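
Until that happens, a fetch looks something like this (a sketch; the
user name is a placeholder, use whatever credentials you already have on
the slave):

    # Grab one archived log bundle from the NetBSD slave over sftp;
    # it is downloaded into the current directory.
    sftp <user>@nbslave76.cloud.gluster.org:/archives/logs/glusterfs-logs-<timestamp>.tgz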

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


[Gluster-devel] netbsd regression logs

2015-05-01 Thread Pranith Kumar Karampuri

hi Emmanuel,
 I was not able to re-create the glupy failure. I see that NetBSD
is not archiving logs like the Linux regression. Do you mind adding that
one? I think Kaushal and Vijay did this for the Linux regressions, so CC them.


Pranith


[Gluster-devel] Spurious test failure in tests/bugs/distribute/bug-1122443.t

2015-05-01 Thread Pranith Kumar Karampuri

hi,
  Found the reason for this too:
ok 8
not ok 9 Got "in" instead of "completed"
FAILED COMMAND: completed remove_brick_status_completed_field patchy 
pranithk-laptop:/d/backends/patchy0
volume remove-brick commit: failed: use 'force' option as migration is 
in progress

not ok 10
FAILED COMMAND: gluster --mode=script --wignore volume remove-brick 
patchy pranithk-laptop:/d/backends/patchy0 commit

ok 11
ok 12
Failed 2/12 subtests

Test Summary Report
---
tests/bugs/distribute/bug-1122443.t (Wstat: 0 Tests: 12 Failed: 2)
  Failed tests:  9-10

Here is the fix:
http://review.gluster.org/10487
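
For context, the usual cure for this class of race in the test framework
is to wait for the status field to turn "completed" before committing. A
sketch of the pattern (not the literal patch; $V0, $H0, $B0 and
$REBALANCE_TIMEOUT are the framework's standard variables):

    # Wait for migration to finish instead of committing while it is
    # still "in progress", then commit without needing 'force'.
    EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" remove_brick_status_completed_field \
        $V0 "$H0:$B0/${V0}0"
    TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}0 commit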

Pranith


Re: [Gluster-devel] Configuration Error during gerrit login

2015-05-01 Thread Justin Clift
I'm hoping this is mostly due to bugs in the older version of Gerrit +
the GitHub plugin we're using.

We'll upgrade in a few weeks, and see how it goes then... ;)

+ Justin


On 1 May 2015, at 03:38, Gaurav Garg  wrote:
> Hi,
> 
> I have also hit the same problem many times; I fixed it in the following way:
> 
> 
> 1. Go to https://github.com/settings/applications and revoke the 
> authorization for 'Gerrit Instance for Gluster Community'
> 2. Clean up all cookies for github and review.gluster.org
> 3. Go to https://review.gluster.org/ and sign in again. You'll be asked to
> sign in to GitHub again and provide authorization.
> 
> 
> - Original Message -
> From: "Vijay Bellur" 
> To: "Gluster Devel" 
> Sent: Friday, May 1, 2015 12:31:38 AM
> Subject: [Gluster-devel] Configuration Error during gerrit login
> 
> Ran into "Configuration Error" several times today. The error message 
> states:
> 
> "The HTTP server did not provide the username in the GITHUB_USERheader 
> when it forwarded the request to Gerrit Code Review..."
> 
> Switching browsers was useful for me to overcome the problem. Annoying 
> for sure, but we seem to have a workaround :).
> 
> HTH,
> Vijay

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



[Gluster-devel] spurious regression failures for ./tests/basic/fops-sanity.t

2015-05-01 Thread Pranith Kumar Karampuri

hi,
I see the following logs when the failure happens:
[2015-05-01 10:37:44.157477] E 
[dht-helper.c:900:dht_migration_complete_check_task] 0-patchy-dht: 
(null): failed to get the 'linkto' xattr No data available
[2015-05-01 10:37:44.157504] W [fuse-bridge.c:2190:fuse_readv_cbk] 
0-glusterfs-fuse: 25: READ => -1 (No data available)


Then the program fails with following message:
read failed: No data available
read returning junk
fd based file operation 1 failed
read failed: No data available
read returning junk
fstat failed : No data available
fd based file operation 2 failed
read failed: No data available
read returning junk
dup fd based file operation failed
not ok 10

Could you let us know when this can happen and post a patch which will 
fix it? Please let us know who is going to fix it.
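
For whoever picks this up, the xattr in question can be inspected
straight on the backend bricks (a sketch; the brick path is the
regression default and the file name is a placeholder):

    # A file under migration should carry trusted.glusterfs.dht.linkto on
    # the source brick; its absence mid-migration matches the ENODATA above.
    getfattr -d -m . -e hex /d/backends/patchy0/<file>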


Pranith


Re: [Gluster-devel] Issue with el5 builds for 3.7.0beta1

2015-05-01 Thread Niels de Vos
On Fri, May 01, 2015 at 02:02:10PM +0530, Lalatendu Mohanty wrote:
> On 05/01/2015 12:34 PM, Humble Devassy Chirammal wrote:
> >Hi All,
> >
> >
> >GlusterFS 3.7 beta1 RPMs for RHEL, CentOS (except el5) and Fedora are
> >available at download.gluster.org [1].
> >
> >[1]http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/
> >
> >
> >el5 rpms will be available soon.
> >
> >
> 
> Scratch build [1] for el5 is failing because of missing dependencies:
> libuuid-devel and userspace-rcu-devel.
> 
> DEBUG util.py:388:  Error: No Package found for libuuid-devel
> DEBUG util.py:388:  Error: No Package found for userspace-rcu-devel >= 0.7
> 
> I remember seeing some discussion around making userspace-rcu-devel
> available in EPEL 5, but I am not sure what we decided.

It is available in the updates-testing repository for EPEL-5:


https://admin.fedoraproject.org/updates/FEDORA-EPEL-2015-1134/userspace-rcu-0.7.7-1.el5

After people test it and give positive karma on that page, the build
will get pushed automatically to the updates repository. 3+ karma is
required, or someone could ask the maintainer to push it manually.
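
For anyone willing to help test on an el5 box, something like this
should pull it in (a sketch; it assumes the epel-release package is
installed, which ships the epel-testing repo definition):

    # Install from updates-testing, then leave karma on the Bodhi page above.
    yum --enablerepo=epel-testing install userspace-rcu userspace-rcu-devel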

HTH,
Niels

> 
> [1] http://koji.fedoraproject.org/koji/taskinfo?taskID=9615033
> 
> -Lala
> >On Wed, Apr 29, 2015 at 4:39 PM, Vijay Bellur wrote:
> >
> >Hi All,
> >
> >Just pushed tag v3.7.0beta1 to glusterfs.git. A tarball of
> >3.7.0beta1 is now available at [1]. RPM and other packages will
> >appear in download.gluster.org when
> >the respective packages are ready.
> >
> >Important features available in beta1 include:
> >
> >- bitrot
> >- tiering
> >- inode quotas
> >- sharding
> >- glusterfind
> >- multi-threaded epoll
> >- trash
> >- netgroups style authentication for nfs exports
> >- snapshot scheduling
> >- cli support for NFS Ganesha
> >- Cloning volumes from snapshots
> >
> >I suspect that I might have missed a few from the list here.
> >Please chime in with your favorite feature if I have missed
> >including it here :).
> >
> >List of known bugs for 3.7.0 is being tracked at [2]. Testing
> >feedback and patches would be very welcome!
> >
> >Thanks,
> >Vijay
> >
> >[1]
> >http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/
> >
> >[2] https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.0
> >
> >
> >





[Gluster-devel] Issue with el5 builds for 3.7.0beta1

2015-05-01 Thread Lalatendu Mohanty

On 05/01/2015 12:34 PM, Humble Devassy Chirammal wrote:

Hi All,


GlusterFS 3.7 beta1 RPMs for RHEL, CentOS (except el5) and Fedora
are available at download.gluster.org [1].


[1]http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/ 



el5 rpms will be available soon.




Scratch build [1] for el5 is failing because of missing dependencies:
libuuid-devel and userspace-rcu-devel.


DEBUG util.py:388:  Error: No Package found for libuuid-devel
DEBUG util.py:388:  Error: No Package found for userspace-rcu-devel >= 0.7

I remember seeing some discussion around making userspace-rcu-devel
available in EPEL 5, but I am not sure what we decided.

[1] http://koji.fedoraproject.org/koji/taskinfo?taskID=9615033

-Lala
On Wed, Apr 29, 2015 at 4:39 PM, Vijay Bellur wrote:


Hi All,

Just pushed tag v3.7.0beta1 to glusterfs.git. A tarball of
3.7.0beta1 is now available at [1]. RPM and other packages will
appear in download.gluster.org when
the respective packages are ready.

Important features available in beta1 include:

- bitrot
- tiering
- inode quotas
- sharding
- glusterfind
- multi-threaded epoll
- trash
- netgroups style authentication for nfs exports
- snapshot scheduling
- cli support for NFS Ganesha
- Cloning volumes from snapshots

I suspect that I might have missed a few from the list here.
Please chime in with your favorite feature if I have missed
including it here :).

List of known bugs for 3.7.0 is being tracked at [2]. Testing
feedback and patches would be very welcome!

Thanks,
Vijay

[1]
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/

[2] https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.0





Re: [Gluster-devel] Rebalance improvement design

2015-05-01 Thread Ravishankar N
I sent a fix but abandoned it
since Susant (CC'ed) has already sent one:
http://review.gluster.org/#/c/10459/

I think it needs re-submission, but more review-eyes are welcome.
-Ravi

On 05/01/2015 12:18 PM, Benjamin Turner wrote:

There was a segfault on gqas001, have a look when you get a sec:

Core was generated by `/usr/sbin/glusterfs -s localhost --volfile-id 
rebalance/testvol --xlator-option'.

Program terminated with signal 11, Segmentation fault.
#0  gf_defrag_get_entry (this=0x7f26f8011180, defrag=0x7f26f8031ef0, 
loc=0x7f26f4dbbfd0, migrate_data=0x7f2707874be8) at dht-rebalance.c:2032

2032   GF_FREE (tmp_container->parent_loc);
(gdb) bt
#0  gf_defrag_get_entry (this=0x7f26f8011180, defrag=0x7f26f8031ef0, 
loc=0x7f26f4dbbfd0, migrate_data=0x7f2707874be8) at dht-rebalance.c:2032
#1  gf_defrag_process_dir (this=0x7f26f8011180, defrag=0x7f26f8031ef0, 
loc=0x7f26f4dbbfd0, migrate_data=0x7f2707874be8) at dht-rebalance.c:2207
#2  0x7f26fdae1eb8 in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbbfd0, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2299
#3  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbc200, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#4  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbc430, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#5  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbc660, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#6  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbc890, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#7  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbcac0, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#8  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbccf0, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#9  0x7f26fdae1f4b in gf_defrag_fix_layout (this=0x7f26f8011180, 
defrag=0x7f26f8031ef0, loc=0x7f26f4dbcf60, fix_layout=0x7f2707874b5c, 
migrate_data=0x7f2707874be8)

at dht-rebalance.c:2416
#10 0x7f26fdae2524 in gf_defrag_start_crawl (data=0x7f26f8011180) 
at dht-rebalance.c:2599
#11 0x7f2709024f62 in synctask_wrap (old_task=<value optimized out>) at syncop.c:375
#12 0x003648c438f0 in ?? () from /lib64/libc-2.12.so
#13 0x0000000000000000 in ?? ()
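
To dig further, opening the core in the crashing frame and dumping the
container should show whether it was partially initialized when it was
freed (a sketch; the core path is a placeholder, point it at whatever
gqas001 saved):

    # Re-open the core, land in the crashing frame, and dump the container.
    gdb /usr/sbin/glusterfs /path/to/core \
        -ex 'frame 0' \
        -ex 'print *tmp_container' \
        -ex 'bt full' -ex 'quit'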


On Fri, May 1, 2015 at 12:53 AM, Benjamin Turner wrote:


Ok, I have all my data created and I just started the rebalance.
One thing to note: in the client log I see the following spamming:


[root@gqac006 ~]# cat /var/log/glusterfs/gluster-mount-.log | wc -l
394042

[2015-05-01 00:47:55.591150] I [MSGID: 109036]
[dht-common.c:6478:dht_log_new_layout_for_dir_selfheal]
0-testvol-dht: Setting layout of

/file_dstdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_05/d_001/d_000/d_004/d_006

with [Subvol_name: testvol-replicate-0, Err: -1 , Start: 0 , Stop:
2141429669 ], [Subvol_name: testvol-replicate-1, Err: -1 , Start:
2141429670 , Stop: 4294967295 ],
[2015-05-01 00:47:55.596147] I
[dht-selfheal.c:1587:dht_selfheal_layout_new_directory]
0-testvol-dht: chunk size = 0xffffffff / 19920276 = 0xd7
[2015-05-01 00:47:55.596177] I
[dht-selfheal.c:1626:dht_selfheal_layout_new_directory]
0-testvol-dht: assigning range size 0x7fa39fa6 to testvol-replicate-1
[2015-05-01 00:47:55.596189] I
[dht-selfheal.c:1626:dht_selfheal_layout_new_directory]
0-testvol-dht: assigning range size 0x7fa39fa6 to testvol-replicate-0
[2015-05-01 00:47:55.597081] I [MSGID: 109036]
[dht-common.c:6478:dht_log_new_layout_for_dir_selfheal]
0-testvol-dht: Setting layout of

/file_dstdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_05/d_001/d_000/d_004/d_005

with [Subvol_name: testvol-replicate-0, Err: -1 , Start:
2141429670 , Stop: 4294967295 ], [Subvol_name:
testvol-replicate-1, Err: -1 , Start: 0 , Stop: 2141429669 ],
[2015-05-01 00:47:55.601853] I
[dht-selfheal.c:1587:dht_selfheal_layout_new_directory]
0-testvol-dht: chunk size = 0xffffffff / 19920276 = 0xd7
[2015-05-01 00:47:55.601882] I
[dht-selfheal.c:1626:dht_selfheal_layout_new_directory]
0-testvol-dht: assigning range size 0x7fa39fa6 to testvol-replicate-1
[2015-05-01 0

Re: [Gluster-devel] GlusterFS 3.7.0beta1 released

2015-05-01 Thread Humble Devassy Chirammal
Hi All,


GlusterFS 3.7 beta1 RPMs for RHEL, CentOS (except el5) and Fedora are
available at download.gluster.org [1].

[1]http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/

el5 rpms will be available soon.


--Humble


On Wed, Apr 29, 2015 at 4:39 PM, Vijay Bellur  wrote:

> Hi All,
>
> Just pushed tag v3.7.0beta1 to glusterfs.git. A tarball of 3.7.0beta1 is
> now available at [1]. RPM and other packages will appear in
> download.gluster.org when the respective packages are ready.
>
> Important features available in beta1 include:
>
> - bitrot
> - tiering
> - inode quotas
> - sharding
> - glusterfind
> - multi-threaded epoll
> - trash
> - netgroups style authentication for nfs exports
> - snapshot scheduling
> - cli support for NFS Ganesha
> - Cloning volumes from snapshots
>
> I suspect that I might have missed a few from the list here. Please chime
> in with your favorite feature if I have missed including it here :).
>
> List of known bugs for 3.7.0 is being tracked at [2]. Testing feedback and
> patches would be very welcome!
>
> Thanks,
> Vijay
>
> [1]
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.7.0beta1/
>
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.0
>
>
>