Re: [Gluster-devel] spurious regression failures again!

2014-07-18 Thread Varun Shastry

Hi,

Created a bug for this. Please use it when submitting fixes, if required.
https://bugzilla.redhat.com/show_bug.cgi?id=1121014

Thanks
Varun Shastry


On Tuesday 15 July 2014 09:34 PM, Pranith Kumar Karampuri wrote:


On 07/15/2014 09:24 PM, Joseph Fernandes wrote:

Hi Pranith,

Could you please share the links to the console output of the failures?

Added them inline. Thanks for the reminder :-)

Pranith


Regards,
Joe

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Gluster Devel" , "Varun Shastry" 


Sent: Tuesday, July 15, 2014 8:52:44 PM
Subject: [Gluster-devel] spurious regression failures again!

hi,
  We have 4 tests failing once in a while, causing problems:

1) tests/bugs/bug-1087198.t - Author: Varun
http://build.gluster.org/job/rackspace-regression-2GB-triggered/379/consoleFull

2) tests/basic/mgmt_v3-locks.t - Author: Avra
http://build.gluster.org/job/rackspace-regression-2GB-triggered/375/consoleFull

3) tests/basic/fops-sanity.t - Author: Pranith
http://build.gluster.org/job/rackspace-regression-2GB-triggered/383/consoleFull

Please take a look at them and post updates.

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Glusterfs Help needed

2014-06-27 Thread Varun Shastry


On Tuesday 24 June 2014 04:45 PM, Chandrahasa S wrote:

Dear All,

I am building GlusterFS on shared storage.

I have a disk array with 2 SAS controllers: one controller is connected
to node A and the other to node B.

Can I create a GlusterFS volume between these two nodes (A & B) without
replication, but with data readable and writable on both nodes (for
better performance)? In case node A fails, the data should be
accessible from node B.


Hi Chandrahasa,

I think only the Erasure Coding feature (which is not *yet* merged, but
is under review) can provide failure tolerance without using replication.
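
To illustrate roughly (the disperse/redundancy create syntax below is
from the proposal under review, so treat it as an assumption until the
feature is merged): an erasure-coded volume with redundancy 1 survives
the loss of any one brick without keeping full replicas.

    # Hypothetical dispersed-volume creation: 2 data + 1 redundancy
    # pieces, so any single brick (or node) can fail.
    gluster volume create shared disperse 3 redundancy 1 \
            nodeA:/bricks/b1 nodeB:/bricks/b1 nodeC:/bricks/b1
    gluster volume start shared

Note that this layout needs at least three bricks, so with only two
nodes you would either add a third node or place two bricks on one node
(losing tolerance to that node's failure).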


- Varun Shastry



Please suggest.

Regards,
Chandrahasa S
Tata Consultancy Services
Data Center- ( Non STPI)
2nd Pokharan Road,
Subash Nagar,
Mumbai - 400601, Maharashtra
India
Ph:- +91 22 677-81825
Buzz:- 4221825
Mailto: chandrahas...@tcs.com
Website: http://www.tcs.com

Experience certainty.   IT Services | Business Solutions | Consulting




From: jenk...@build.gluster.org (Gluster Build System)
To: gluster-us...@gluster.org, gluster-devel@gluster.org
Date: 06/24/2014 03:46 PM
Subject: [Gluster-users] glusterfs-3.5.1 released
Sent by: gluster-users-boun...@gluster.org

SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1.tar.gz


This release is made off jenkins-release-73

-- Gluster Build System



___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Awareness of the disk space available on nodes

2014-06-27 Thread Varun Shastry


On Friday 27 June 2014 05:37 PM, Nux! wrote:

Thanks, this is great news!
It was quite bad to hit disk limits with Gluster oblivious to them, and
rebalancing is slow and taxing.
Any estimate of which version this feature will land in?


If everything goes according to plan, I think the 3.6 release should
ship with this patch.


- Varun Shastry



Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


- Original Message -

From: "Varun Shastry" 
To: "Nux!" , gluster-devel@gluster.org
Sent: Friday, 27 June, 2014 12:41:46 PM
Subject: Re: [Gluster-devel] Awareness of the disk space available on nodes


On Friday 27 June 2014 04:02 PM, Nux! wrote:

Hi,

In 3.3/3.4 we had a problem with distributed volumes: when a brick/node
would fill up, GlusterFS would simply fail to write any more files to
it, instead of trying to write them to another node.
I was told this is not a bug, and that it happens because of the
hashing used to distribute the files (with no checks being done on
disk space).
My question is: has any of the above changed in 3.5?

The patch addressing this is up for review at
http://review.gluster.org/#/c/8093/. I don't think any other patch
addressing this issue (even partially) has been merged in 3.5.

- Varun Shastry


Thanks
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Awareness of the disk space available on nodes

2014-06-27 Thread Varun Shastry


On Friday 27 June 2014 04:02 PM, Nux! wrote:

Hi,

In 3.3/3.4 we had a problem with distributed volumes: when a brick/node
would fill up, GlusterFS would simply fail to write any more files to
it, instead of trying to write them to another node.
I was told this is not a bug, and that it happens because of the
hashing used to distribute the files (with no checks being done on
disk space).
My question is: has any of the above changed in 3.5?


The patch addressing this is up for review at
http://review.gluster.org/#/c/8093/. I don't think any other patch
addressing this issue (even partially) has been merged in 3.5.
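
As a hedged aside (the option name below is the existing DHT knob as I
recall it, not the patch above): DHT does already have a threshold
option that steers new creates away from nearly-full bricks, e.g.

    # Existing workaround, not a full fix: once a brick drops below
    # 10% free space, DHT places new files on other subvolumes and
    # leaves a pointer (linkto) file on the hashed brick.
    gluster volume set myvol cluster.min-free-disk 10%

The patch under review is meant to address the awareness problem more
generally.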


- Varun Shastry



Thanks
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] quota tests and usage of sleep

2014-06-16 Thread Varun Shastry



> - Original Message -
>> hi,
>>   Could you guys remove 'sleep' from the quota tests authored by you
>> guys, if it can be done? They are leading to spurious failures.



I don't see how the sleep itself can cause the failures. But for the
script bug-1087198.t, which I authored, the sleep is part of the test:
I can reduce it to a smaller value, but we need the test to wait for a
small amount of time.
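
To sketch the alternative (assuming the EXPECT_WITHIN helper from
tests/include.rc; the "usage" function, the awk column, and the values
below are illustrative, not from the actual test):

    # Poll for the expected quota usage instead of sleeping for the
    # worst case.
    usage () {
            gluster volume quota $V0 list /dir | awk '{print $4}'
    }

    # old form: fixed wait, single check
    # sleep 10
    # EXPECT "10.0MB" usage

    # new form: re-check periodically, pass as soon as the value
    # matches, fail only after the 20-second timeout
    EXPECT_WITHIN 20 "10.0MB" usage

That keeps the "wait a small amount of time" semantics while removing
the fixed sleep.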


- Varun Shastry


>> I will be sending out a patch removing 'sleep' in other tests.
>>
>> Pranith
>>






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-08 Thread Varun Shastry

Hi,

On Wednesday 07 May 2014 10:52 PM, Jeff Darcy wrote:

Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...

A few.

The design creates a new type of daemon: snapview-server.

* Where is it started?  One server (selected how) or all?

All the servers in the cluster.


* How do clients find it?  Are we dynamically changing the client
   side graph to add new protocol/client instances pointing to new
   snapview-servers, or is snapview-client using RPC directly?  Are
   the snapview-server ports managed through the glusterd portmapper
   interface, or patched in some other way?
We add a protocol/client instance on the client side, which connects to
a protocol/server instance on the snapview-server daemon.

So, for a call that goes to .snaps, the flow would look like:

snapview-client -> protocol/client -> protocol/server -> snapview-server


Yes, the snapview-server ports are managed through the glusterd
portmapper interface.
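
(As an assumption on my part about how this surfaces: since
snapview-server registers with the glusterd portmapper the same way
brick processes do, its port should become discoverable next to the
brick ports, e.g. through

    gluster volume status <volname>

though the exact row the daemon occupies in that output is not
finalized.)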



* Since a snap volume will refer to multiple bricks, we'll need
   more brick daemons as well.  How are *those* managed?

Brick processes associated with the snapshot will be started.
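
(A hedged sketch of the resulting flow, using the snapshot CLI from the
in-progress volume-snapshot feature; the exact commands are from that
proposal and may change:

    # take a snapshot of volume "myvol"; glusterd creates the snap volume
    gluster snapshot create snap01 myvol

    # a user later browsing /mnt/myvol/.snaps/snap01/ is served by
    # snapview-server, which talks to snap01's brick processes
)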

- Varun Shastry


* How does snapview-server manage user credentials for connecting
   to snap bricks?  What if multiple users try to use the same
   snapshot at the same time?  How does any of this interact with
   on-wire or on-disk encryption?

I'm sure I'll come up with more later.  Also, next time it might
be nice to use the upstream feature proposal template *as it was
designed* to make sure that questions like these get addressed
where the whole community can participate in a timely fashion.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Varun Shastry

Hi Sobhan,

On Wednesday 07 May 2014 09:12 PM, Sobhan Samantaray wrote:

I think it's a good idea to include auto-removal of snapshots, based on
time or space thresholds, as mentioned in the link below.

http://www.howtogeek.com/110138/how-to-back-up-your-linux-system-with-back-in-time/



I think this feature is already implemented (partially?) as part of the 
snapshot feature.


The feature proposed here concentrates only on user serviceability of
the snapshots that have been taken.


- Varun Shastry



- Original Message -
From: "Anand Subramanian" 
To: "Paul Cuzner" 
Cc: gluster-devel@gluster.org, "gluster-users" , "Anand Avati" 
Sent: Wednesday, May 7, 2014 7:50:30 PM
Subject: Re: [Gluster-users] User-serviceable snapshots design

Hi Paul, that is definitely doable and a very nice suggestion. It is just that 
we probably won't be able to get to that in the immediate code drop (what we 
like to call phase-1 of the feature). But yes, let us try to implement what you 
suggest for phase-2. Soon :-)

Regards,
Anand

On 05/06/2014 07:27 AM, Paul Cuzner wrote:



Just one question, relating to how you apply a filter to the snapshot
view from a user's perspective.

In the "considerations" section, it states: "We plan to introduce a
configurable option to limit the number of snapshots visible under the
USS feature."
Would it not be possible to use the metadata from the snapshots to form
a tree hierarchy when the number of snapshots present exceeds a given
threshold, effectively organising the snaps by time? I think this would
work better from an end-user workflow perspective.

i.e.

.snaps
 \/ Today
    +-- snap01_20140503_0800
    +-- snap02_20140503_1400
  > Last 7 days
  > 7-21 days
  > 21-60 days
  > 60-180 days
  > 180+ days







From: "Anand Subramanian" 
To: gluster-de...@nongnu.org , "gluster-users" 
Cc: "Anand Avati" 
Sent: Saturday, 3 May, 2014 2:35:26 AM
Subject: [Gluster-users] User-serviceable snapshots design

Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...

We have a basic implementation up now; reviews and upstream commit
should follow very soon over the next week.

Cheers,
Anand

___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel