Re: [Gluster-devel] Review-request: Readdirp (ls -l) Performance Improvement

2020-05-27 Thread RAFI KC

Result for a single ls on a dir with 10k directories inside (16*3 volume)

Configuration              Plain volume    Parallel-readdir    Proposed Solution
Single Dir ls (Seconds)    -               135                 32.744


It shows a 321% improvement.

Regards
Rafi KC

On 27/05/20 11:22 am, RAFI KC wrote:


Hi All,

I have been working on a POC to improve readdirp performance. At the end of
the experiment, the results are promising: overall there is a 104% improvement
for a full filesystem crawl compared to the existing solution. Here are the
short test numbers. The tests were carried out on a 16*3 setup with 1.5
million dentries (both files and directories). The system also contains some
empty directories. *In the results, the proposed solution is 287% faster than
the plain volume and 104% faster than the parallel-readdir based solution.*


Configuration               Plain volume    Parallel-readdir    Proposed Solution
FS Crawl Time in Seconds    16497.523       8717.872            4261.401
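
(As a sanity check, these percentages follow from the crawl times above:
16497.523 / 4261.401 is roughly 3.87, i.e. about 287% faster than the plain
volume, and 8717.872 / 4261.401 is roughly 2.05, i.e. about the quoted 104%
improvement over parallel-readdir.)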

In short, the basic idea behind the proposal is efficient management of the
readdir buffer in Gluster, along with prefetching dentries for an intelligent
switch-over to the next buffer. The detailed problem description, design
description and results are available in the doc:
https://docs.google.com/document/d/10z4T5Sd_-wCFrmDrzyQtlWOGLang1_g17wO8VUxSiJ8/edit
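
To illustrate just the double-buffer/prefetch idea (not the actual xlator
code; the chunk size, structures and synchronous refill here are simplifying
assumptions, the real proposal would fetch the next chunk in the background),
here is a minimal standalone C sketch:

/* Illustrative sketch only: double-buffered dentry fetching. One chunk is
 * served while the other is being (re)filled, so the switch-over to the
 * next buffer finds it already populated. Here the refill is synchronous
 * to keep the demo short; the proposal would prefetch in the background. */
#include <dirent.h>
#include <stdio.h>

#define CHUNK 4                         /* dentries per buffer (tiny, for demo) */

struct chunk {
    char names[CHUNK][256];
    int  count;
};

/* Fill one buffer with up to CHUNK entries from the directory stream. */
static void fill_chunk(DIR *dir, struct chunk *c)
{
    struct dirent *de;

    c->count = 0;
    while (c->count < CHUNK && (de = readdir(dir)) != NULL) {
        snprintf(c->names[c->count], sizeof(c->names[0]), "%s", de->d_name);
        c->count++;
    }
}

int main(void)
{
    DIR *dir = opendir(".");
    if (!dir)
        return 1;

    struct chunk bufs[2];
    int cur = 0;

    fill_chunk(dir, &bufs[cur]);         /* initial fill */
    fill_chunk(dir, &bufs[1 - cur]);     /* "prefetch" the next chunk */

    while (bufs[cur].count > 0) {
        for (int i = 0; i < bufs[cur].count; i++)
            printf("%s\n", bufs[cur].names[i]);

        cur = 1 - cur;                   /* switch-over: next buffer already full */
        fill_chunk(dir, &bufs[1 - cur]); /* refill the buffer we just drained */
    }

    closedir(dir);
    return 0;
}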




https://review.gluster.org/24469

https://review.gluster.org/24470


Regards

Rafi KC


___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel




[Gluster-devel] Review-request: Readdirp (ls -l) Performance Improvement

2020-05-26 Thread RAFI KC

Hi All,

I have been working on a POC to improve readdirp performance. At the end of
the experiment, the results are promising: overall there is a 104% improvement
for a full filesystem crawl compared to the existing solution. Here are the
short test numbers. The tests were carried out on a 16*3 setup with 1.5
million dentries (both files and directories). The system also contains some
empty directories. *In the results, the proposed solution is 287% faster than
the plain volume and 104% faster than the parallel-readdir based solution.*


Configuration               Plain volume    Parallel-readdir    Proposed Solution
FS Crawl Time in Seconds    16497.523       8717.872            4261.401

In short, the basic idea behind the proposal is efficient management of the
readdir buffer in Gluster, along with prefetching dentries for an intelligent
switch-over to the next buffer. The detailed problem description, design
description and results are available in the doc:
https://docs.google.com/document/d/10z4T5Sd_-wCFrmDrzyQtlWOGLang1_g17wO8VUxSiJ8/edit




https://review.gluster.org/24469

https://review.gluster.org/24470


Regards

Rafi KC

___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Review request

2017-10-09 Thread Hari Gowtham
Hi,

I would be happy if I can get more reviews for
https://review.gluster.org/#/c/17137/28

-- 
Regards,
Hari Gowtham.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for several Gluster/NFS changes

2017-07-04 Thread Niels de Vos
Hello,

I'd like to have some reviews for the following changes:

nfs: make nfs3_call_state_t refcounted
- https://review.gluster.org/17696

nfs/nlm: unref fds in nlm_client_free()
- https://review.gluster.org/17697

nfs/nlm: handle reconnect for non-NLM4_LOCK requests
- https://review.gluster.org/17698

nfs/nlm: use refcounting for nfs3_call_state_t
- https://review.gluster.org/17699

nfs/nlm: keep track of the call-state and frame for notifications
- https://review.gluster.org/17700


These prevent some unfortunate use-after-free issues in certain (un)lock
situations that cthon04 can expose.

Thanks!
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request - #17105

2017-05-16 Thread Raghavendra G
program/GF-DUMP: Shield ping processing from traffic to Glusterfs Program

Since the poller thread bears the brunt of execution till the request is handed
over to io-threads, the poller thread experiences lock contention(s) in the
control flow till io-threads, which slows it down. This delay invariably
affects reading ping requests from the network and responding to them,
resulting in increased ping latencies, which sometimes results in a
ping-timer-expiry on the client, leading to a disconnect of the transport. So,
this patch aims to free up the poller thread from executing code of the
Glusterfs Program.

We do this by making:
* the Glusterfs Program register itself, asking rpcsvc to execute its actors
in its own threads.
* the GF-DUMP Program register itself, asking rpcsvc to _NOT_ execute its
actors in its own threads. Otherwise the program's own threads become a
bottleneck in processing ping traffic. This means that the poller thread reads
a ping packet, invokes its actor and hands the response msg to the transport
queue.
Change-Id: I526268c10bdd5ef93f322a4f95385137550a6a49

Signed-off-by: Raghavendra G 
BUG: 1421938 

Patch: https://review.gluster.org/#/c/17105/

Note that there is only one thread per program. So, I am wondering whether
this thread can become a performance bottleneck for the Glusterfs program. Your
comments are welcome.
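
To illustrate the registration idea (not the actual rpcsvc API; all struct,
field and function names below are made up), here is a tiny standalone sketch
of routing actors either to the poller thread or to a program-owned thread:

/* Illustrative sketch only: a per-program flag decides whether an actor runs
 * inline in the "poller" (GF-DUMP/ping) or is handed off to the program's
 * own thread (Glusterfs fops). */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    const char *name;
    int         own_thread;                /* 1: actors run in a program thread */
    void      (*actor)(const char *req);
} program_t;

struct job {
    program_t  *prog;
    const char *req;
    pthread_t   tid;
};

static void ping_actor(const char *req) { printf("[poller] %s -> PONG\n", req); }
static void fop_actor(const char *req)  { printf("[worker] %s -> done\n", req); }

static void *program_worker(void *arg)
{
    struct job *j = arg;
    j->prog->actor(j->req);                /* heavy fop work, off the poller */
    return NULL;
}

/* Called by the "poller" for every fully read request. */
static void dispatch(struct job *j)
{
    if (j->prog->own_thread)
        pthread_create(&j->tid, NULL, program_worker, j);  /* hand off */
    else
        j->prog->actor(j->req);            /* ping path: answer inline */
}

int main(void)
{
    program_t gf_dump   = { "GF-DUMP",   0, ping_actor };
    program_t glusterfs = { "GlusterFS", 1, fop_actor  };

    struct job ping = { .prog = &gf_dump,   .req = "NULL/ping" };
    struct job fop  = { .prog = &glusterfs, .req = "writev"    };

    dispatch(&ping);                       /* handled inline, no contention */
    dispatch(&fop);                        /* queued to the program's thread */

    if (fop.prog->own_thread)
        pthread_join(fop.tid, NULL);
    return 0;
}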

regards,
-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request - patch #15036

2017-05-11 Thread Amar Tumballi
I have just 1 comment. Once you answer it, it's good to go.

On Fri, May 12, 2017 at 9:48 AM, Raghavendra G 
wrote:

> I'll wait for a day on this. If there are no reviews, I'll assume that as
> a +1 and will go ahead and merge it. If anyone needs more time, please let
> me know and I can wait.
>
> On Thu, May 11, 2017 at 12:22 PM, Raghavendra Gowdappa <
> rgowd...@redhat.com> wrote:
>
>> All,
>>
>> Reviews are requested on [1]. Impact is non-trivial as it introduces more
>> concurrency in execution wrt processing of messages read from network.
>>
>> All tests are passed, though gerrit is not reflecting the last smoke
>> which was successful.
>>
>> For reference, below is the verbatim copy of commit msg:
>>
>> 
>>
>> event/epoll: Add back socket for polling of events immediately after
>> reading the entire rpc message from the wire. Currently the socket is added
>> back for future events after higher layers (rpc, xlators etc) have processed
>> the message. If message processing involves significant delay (as in writev
>> replies processed by Erasure Coding), performance takes a hit. Hence this
>> patch modifies transport/socket to add back the socket for polling of
>> events immediately after reading the entire rpc message, but before
>> notification to higher layers.
>>
>> credits: Thanks to "Kotresh Hiremath Ravishankar" 
>> for assistance in fixing a regression in bitrot caused by this patch.
>>
>> BUG: 1448364
>> 
>>
>> @Nigel,
>>
>> Is there a way to override -1 from smoke, as last instance of it is
>> successful?
>>
>> [1] https://review.gluster.org/#/c/15036/
>>
>> regards,
>> Raghavendra
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Raghavendra G
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request - patch #15036

2017-05-11 Thread Raghavendra G
I'll wait for a day on this. If there are no reviews, I'll assume that as a
+1 and will go ahead and merge it. If anyone needs more time, please let me
know and I can wait.

On Thu, May 11, 2017 at 12:22 PM, Raghavendra Gowdappa 
wrote:

> All,
>
> Reviews are requested on [1]. Impact is non-trivial as it introduces more
> concurrency in execution wrt processing of messages read from network.
>
> All tests are passed, though gerrit is not reflecting the last smoke which
> was successful.
>
> For reference, below is the verbatim copy of commit msg:
>
> 
>
> event/epoll: Add back socket for polling of events immediately after
> reading the entire rpc message from the wire. Currently the socket is added
> back for future events after higher layers (rpc, xlators etc) have processed
> the message. If message processing involves significant delay (as in writev
> replies processed by Erasure Coding), performance takes a hit. Hence this
> patch modifies transport/socket to add back the socket for polling of
> events immediately after reading the entire rpc message, but before
> notification to higher layers.
>
> credits: Thanks to "Kotresh Hiremath Ravishankar" 
> for assistance in fixing a regression in bitrot caused by this patch.
>
> BUG: 1448364
> 
>
> @Nigel,
>
> Is there a way to override -1 from smoke, as last instance of it is
> successful?
>
> [1] https://review.gluster.org/#/c/15036/
>
> regards,
> Raghavendra
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request - patch #15036

2017-05-10 Thread Raghavendra Gowdappa
All,

Reviews are requested on [1]. Impact is non-trivial as it introduces more
concurrency in execution with respect to processing of messages read from the
network.

All tests have passed, though Gerrit is not reflecting the last smoke run,
which was successful.

For reference, below is the verbatim copy of commit msg:



event/epoll: Add back socket for polling of events immediately after reading
the entire rpc message from the wire

Currently the socket is added back for future events after higher layers (rpc,
xlators etc) have processed the message. If message processing involves
significant delay (as in writev replies processed by Erasure Coding),
performance takes a hit. Hence this patch modifies transport/socket to add back
the socket for polling of events immediately after reading the entire rpc
message, but before notification to higher layers.

credits: Thanks to "Kotresh Hiremath Ravishankar"  for
assistance in fixing a regression in bitrot caused by this patch.

BUG: 1448364


@Nigel,

Is there a way to override the -1 from smoke, as the last instance of it was successful?

[1] https://review.gluster.org/#/c/15036/

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for 3.9 patches

2016-12-13 Thread Poornima Gurusiddaiah
Hi, 

Below are some of the backported patches that are important for 3.9, please 
review the same: 

http://review.gluster.org/#/c/15890/ (afr,dht,ec: Replace 
GF_EVENT_CHILD_MODIFIED with event SOME_DESCENDENT_DOWN/UP) 
http://review.gluster.org/#/c/15933/ , http://review.gluster.org/#/c/15935/ 
(libglusterfs: Fix a read hang) 
http://review.gluster.org/#/c/15959/ (afr: Fix the EIO that can occur in 
afr_inode_refresh as a result) 
http://review.gluster.org/#/c/15960/ (tests: Fix one of the md-cache test 
cases) 
http://review.gluster.org/#/c/16022/ (dht/md-cache: Filter invalidate if the 
file is made a linkto file) 

Thank You. 

Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request - change pid file location to /var/run/gluster

2016-11-14 Thread Atin Mukherjee
Patch has been reviewed with some comments.

On Thu, Oct 27, 2016 at 11:56 AM, Atin Mukherjee 
wrote:

> Saravana,
>
> Thank you for working on this. We'll be considering this patch for 3.10.
>
> On Thu, Oct 27, 2016 at 11:54 AM, Saravanakumar Arumugam <
> sarum...@redhat.com> wrote:
>
>> Hi,
>>
>> I have refreshed this patch addressing review comments (originally
>> authored by Gaurav) which moves brick pid files from /var/lib/glusterd/* to
>> /var/run/gluster.
>>
>> It will be great if you can review this:
>> http://review.gluster.org/#/c/13580/
>>
>> Thank you
>>
>> Regards,
>> Saravana
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 

~ Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request: Data corruption in write ordering of rebalance and application writes

2016-11-05 Thread Karthik Subrahmanya
Hi all,

Requesting for review of [1].

Bug: Lack of atomicity b/w read-src and write-dst of rebalance process [2]

Description & proposed solution:
Currently the rebalance process does:
1. read (src)
2. write (dst)
To make sure that src and dst are identical, we need to make 1 and 2 atomic.
Otherwise, with parallel writes happening to the same region during rebalance,
writes on dst can go out of order (relative to src) and dst can end up different
from src, which is basically corruption [2]. To make this atomic, we need to:
* lock (src) the region of the file being read, before 1
* unlock (src) the region of the file being read, after 2
and make sure that this lock blocks new writes from the application (till an
unlock is issued). Combining this with the approach that application writes are
serially written to src first and then to dst, we have the solution.

[1] http://review.gluster.org/#/c/15698/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1376757
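
For illustration only, here is a tiny standalone sketch of that ordering, with
in-memory buffers and a plain mutex standing in for the byte-range lock on src
(the names and structure are made up; this is not the rebalance code):

/* Illustrative sketch only (not the rebalance code): the proposed ordering,
 * modelled with in-memory buffers and a mutex standing in for the byte-range
 * lock on src. Whichever order the threads run in, src and dst end up equal. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char src[64] = "old data old data old data";
static char dst[64];
static pthread_mutex_t region_lock = PTHREAD_MUTEX_INITIALIZER;

/* Rebalance: lock (src) -> 1. read (src) -> 2. write (dst) -> unlock (src). */
static void *rebalance_migrate(void *arg)
{
    char buf[64];

    (void)arg;
    pthread_mutex_lock(&region_lock);       /* lock the region before step 1 */
    memcpy(buf, src, sizeof(buf));          /* 1. read (src)                 */
    memcpy(dst, buf, sizeof(buf));          /* 2. write (dst)                */
    pthread_mutex_unlock(&region_lock);     /* unlock only after step 2      */
    return NULL;
}

/* Application write: blocked while the region is locked; src first, then dst. */
static void *app_write(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&region_lock);
    strcpy(src, "new data from application");
    strcpy(dst, "new data from application");
    pthread_mutex_unlock(&region_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, rebalance_migrate, NULL);
    pthread_create(&t2, NULL, app_write, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("src: %s\ndst: %s\n", src, dst); /* always identical */
    return 0;
}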

Thanks & Regards,
Karthik
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request - change pid file location to /var/run/gluster

2016-10-26 Thread Atin Mukherjee
Saravana,

Thank you for working on this. We'll be considering this patch for 3.10.

On Thu, Oct 27, 2016 at 11:54 AM, Saravanakumar Arumugam <
sarum...@redhat.com> wrote:

> Hi,
>
> I have refreshed this patch addressing review comments (originally
> authored by Gaurav) which moves brick pid files from /var/lib/glusterd/* to
> /var/run/gluster.
>
> It will be great if you can review this:
> http://review.gluster.org/#/c/13580/
>
> Thank you
>
> Regards,
> Saravana
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 

~ Atin (atinm)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request - change pid file location to /var/run/gluster

2016-10-26 Thread Saravanakumar Arumugam

Hi,

I have refreshed this patch addressing review comments (originally 
authored by Gaurav) which moves brick pid files from /var/lib/glusterd/* 
to /var/run/gluster.


It will be great if you can review this:
http://review.gluster.org/#/c/13580/

Thank you

Regards,
Saravana

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-23 Thread Soumya Koduri



On 09/23/2016 11:48 AM, Poornima Gurusiddaiah wrote:



- Original Message -

From: "Niels de Vos" 
To: "Raghavendra Gowdappa" 
Cc: "Gluster Devel" 
Sent: Wednesday, September 21, 2016 3:52:39 AM
Subject: Re: [Gluster-devel] review request - Change the way client uuid is 
built

On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:

Hi all,

[1] might have implications across different components in the stack. Your
reviews are requested.



rpc : Change the way client uuid is built

Problem:
Today the main users of client uuid are protocol layers, locks, leases.
Protocolo layers requires each client uuid to be unique, even across
connects and disconnects. Locks and leases on the server side also use
the same client uid which changes across graph switches and across
file migrations. Which makes the graph switch and file migration
tedious for locks and leases.
As of today lock migration across graph switch is client driven,
i.e. when a graph switches, the client reassociates all the locks(which
were associated with the old graph client uid) with the new graphs
client uid. This means flood of fops to get and set locks for each fd.
Also file migration across bricks becomes even more difficult as
client uuid for the same client, is different on the other brick.

The exact set of issues exists for leases as well.

Hence the solution:
Make the migration of locks and leases during graph switch and migration,
server driven instead of client driven. This can be achieved by changing
the format of client uuid.

Client uuid currently:
%s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
count/reconnect count)

Proposed Client uuid:
"CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
-  CTX_ID: This will be constant per client.
-  GRAPH_ID, PID, HOST, PC_NAME(protocol client name), RECON_NO(setvolume
count)
remains the same.

With this, the first part of the client uuid, CTX_ID+GRAPH_ID remains
constant across file migration, thus the migration is made easier.

Locks and leases store only the first part CTX_ID+GRAPH_ID as their
client identification. This means, when the new graph connects,


Can we assume that CTX_ID+GRAPH_ID shall be unique across clients all
the time? If not, wouldn't we get into issues of clientB's locks/leases
not conflicting with locks/leases of clientA's?



the locks and leases xlator should walk through their database
to update the client id, to have new GRAPH_ID. Thus the graph switch
is made server driven and saves a lot of network traffic.


What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
applications? This would be important for NFS-Ganesha failover where one
NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
to an other NFS-Ganesha server.


Sharing it across multiple gfapi applications is currently not supported.
Do you mean, setting the CTX_ID+GRAPH_ID at the init of the other client,
or during replay of locks during the failover?
If it's the former, we need an API in gfapi to take the CTX_ID+GRAPH_ID as
an argument, among other things.

Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
would allow us to add a configuration option to NFS-Ganesha and have the
whole NFS-Ganesha cluster use the same locking/leases.

Ah, ok. The whole cluster will have the same CTX_ID(+GRAPH_ID?), but then
the cleanup logic will not work, as the disconnect cleanup happens as soon as
one of the NFS-Ganesha servers disconnects?


Yes. If we have a uniform ID (CTX_ID+GRAPH_ID?) across clients, we should
keep locks/leases as long as even one client is connected, and not clean
them up as part of fd cleanup during disconnects.


Thanks,
Soumya



This patch doesn't eliminate the migration that is required during graph switch,
it still is necessary, but it can be server driven instead of client driven.


Thanks,
Niels




Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
BUG: 1369028
Signed-off-by: Poornima G 
Signed-off-by: Susant Palai 



[1] http://review.gluster.org/#/c/13901/10/

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-22 Thread Poornima Gurusiddaiah


- Original Message -
> From: "Niels de Vos" 
> To: "Raghavendra Gowdappa" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, September 21, 2016 3:52:39 AM
> Subject: Re: [Gluster-devel] review request - Change the way client uuid is 
> built
> 
> On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> > Hi all,
> > 
> > [1] might have implications across different components in the stack. Your
> > reviews are requested.
> > 
> > 
> > 
> > rpc : Change the way client uuid is built
> > 
> > Problem:
> > Today the main users of client uuid are protocol layers, locks, leases.
> > Protocol layers require each client uuid to be unique, even across
> > connects and disconnects. Locks and leases on the server side also use
> > the same client uid which changes across graph switches and across
> > file migrations. Which makes the graph switch and file migration
> > tedious for locks and leases.
> > As of today lock migration across graph switch is client driven,
> > i.e. when a graph switches, the client reassociates all the locks(which
> > were associated with the old graph client uid) with the new graphs
> > client uid. This means flood of fops to get and set locks for each fd.
> > Also file migration across bricks becomes even more difficult as
> > client uuid for the same client, is different on the other brick.
> > 
> > The exact set of issues exists for leases as well.
> > 
> > Hence the solution:
> > Make the migration of locks and leases during graph switch and migration,
> > server driven instead of client driven. This can be achieved by changing
> > the format of client uuid.
> > 
> > Client uuid currently:
> > %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
> > count/reconnect count)
> > 
> > Proposed Client uuid:
> > "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> > -  CTX_ID: This will be constant per client.
> > -  GRAPH_ID, PID, HOST, PC_NAME(protocol client name), RECON_NO(setvolume
> > count)
> > remains the same.
> > 
> > With this, the first part of the client uuid, CTX_ID+GRAPH_ID remains
> > constant across file migration, thus the migration is made easier.
> > 
> > Locks and leases store only the first part CTX_ID+GRAPH_ID as their
> > client identification. This means, when the new graph connects,
> > the locks and leases xlator should walk through their database
> > to update the client id, to have new GRAPH_ID. Thus the graph switch
> > is made server driven and saves a lot of network traffic.
> 
> What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
> applications? This would be important for NFS-Ganesha failover where one
> NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
> to an other NFS-Ganesha server.
> 
Sharing it across multiple gfapi applications is currently not supported.
Do you mean, setting the CTX_ID+GRAPH_ID at the init of the other client,
or during replay of locks during the failover?
If it's the former, we need an API in gfapi to take the CTX_ID+GRAPH_ID as
an argument, among other things.
> Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
> would allow us to add a configuration option to NFS-Ganesha and have the
> whole NFS-Ganesha cluster use the same locking/leases.
Ah, ok. The whole cluster will have the same CTX_ID(+GRAPH_ID?), but then
the cleanup logic will not work, as the disconnect cleanup happens as soon as
one of the NFS-Ganesha servers disconnects?

This patch doesn't eliminate the migration that is required during graph switch,
it still is necessary, but it can be server driven instead of client driven.
> 
> Thanks,
> Niels
> 
> 
> > 
> > Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> > BUG: 1369028
> > Signed-off-by: Poornima G 
> > Signed-off-by: Susant Palai 
> > 
> > 
> > 
> > [1] http://review.gluster.org/#/c/13901/10/
> > 
> > regards,
> > Raghavendra
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-22 Thread Raghavendra Gowdappa
+Poornima

- Original Message -
> From: "Niels de Vos" 
> To: "Raghavendra Gowdappa" 
> Cc: "Gluster Devel" 
> Sent: Wednesday, September 21, 2016 1:22:39 PM
> Subject: Re: [Gluster-devel] review request - Change the way client uuid is 
> built
> 
> On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> > Hi all,
> > 
> > [1] might have implications across different components in the stack. Your
> > reviews are requested.
> > 
> > 
> > 
> > rpc : Change the way client uuid is built
> > 
> > Problem:
> > Today the main users of client uuid are protocol layers, locks, leases.
> > Protocol layers require each client uuid to be unique, even across
> > connects and disconnects. Locks and leases on the server side also use
> > the same client uid which changes across graph switches and across
> > file migrations. Which makes the graph switch and file migration
> > tedious for locks and leases.
> > As of today lock migration across graph switch is client driven,
> > i.e. when a graph switches, the client reassociates all the locks(which
> > were associated with the old graph client uid) with the new graphs
> > client uid. This means flood of fops to get and set locks for each fd.
> > Also file migration across bricks becomes even more difficult as
> > client uuid for the same client, is different on the other brick.
> > 
> > The exact set of issues exists for leases as well.
> > 
> > Hence the solution:
> > Make the migration of locks and leases during graph switch and migration,
> > server driven instead of client driven. This can be achieved by changing
> > the format of client uuid.
> > 
> > Client uuid currently:
> > %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
> > count/reconnect count)
> > 
> > Proposed Client uuid:
> > "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> > -  CTX_ID: This will be constant per client.
> > -  GRAPH_ID, PID, HOST, PC_NAME(protocol client name), RECON_NO(setvolume
> > count)
> > remains the same.
> > 
> > With this, the first part of the client uuid, CTX_ID+GRAPH_ID remains
> > constant across file migration, thus the migration is made easier.
> > 
> > Locks and leases store only the first part CTX_ID+GRAPH_ID as their
> > client identification. This means, when the new graph connects,
> > the locks and leases xlator should walk through their database
> > to update the client id, to have new GRAPH_ID. Thus the graph switch
> > is made server driven and saves a lot of network traffic.
> 
> What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
> applications? This would be important for NFS-Ganesha failover where one
> NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
> to an other NFS-Ganesha server.
> 
> Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
> would allow us to add a configuration option to NFS-Ganesha and have the
> whole NFS-Ganesha cluster use the same locking/leases.
> 
> Thanks,
> Niels
> 
> 
> > 
> > Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> > BUG: 1369028
> > Signed-off-by: Poornima G 
> > Signed-off-by: Susant Palai 
> > 
> > 
> > 
> > [1] http://review.gluster.org/#/c/13901/10/
> > 
> > regards,
> > Raghavendra
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] review request - Change the way client uuid is built

2016-09-21 Thread Niels de Vos
On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> Hi all,
> 
> [1] might have implications across different components in the stack. Your 
> reviews are requested.
> 
> 
> 
> rpc : Change the way client uuid is built
> 
> Problem:
> Today the main users of client uuid are protocol layers, locks, leases.
> Protocol layers require each client uuid to be unique, even across
> connects and disconnects. Locks and leases on the server side also use
> the same client uid which changes across graph switches and across
> file migrations. Which makes the graph switch and file migration
> tedious for locks and leases.
> As of today lock migration across graph switch is client driven,
> i.e. when a graph switches, the client reassociates all the locks(which
> were associated with the old graph client uid) with the new graphs
> client uid. This means flood of fops to get and set locks for each fd.
> Also file migration across bricks becomes even more difficult as
> client uuid for the same client, is different on the other brick.
> 
> The exact set of issues exists for leases as well.
> 
> Hence the solution:
> Make the migration of locks and leases during graph switch and migration,
> server driven instead of client driven. This can be achieved by changing
> the format of client uuid.
> 
> Client uuid currently:
> %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume 
> count/reconnect count)
> 
> Proposed Client uuid:
> "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> -  CTX_ID: This will be constant per client.
> -  GRAPH_ID, PID, HOST, PC_NAME(protocol client name), RECON_NO(setvolume 
> count)
> remains the same.
> 
> With this, the first part of the client uuid, CTX_ID+GRAPH_ID remains
> constant across file migration, thus the migration is made easier.
> 
> Locks and leases store only the first part CTX_ID+GRAPH_ID as their
> client identification. This means, when the new graph connects,
> the locks and leases xlator should walk through their database
> to update the client id, to have new GRAPH_ID. Thus the graph switch
> is made server driven and saves a lot of network traffic.

What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
applications? This would be important for NFS-Ganesha failover where one
NFS-Ganesha process is stopped, and the NFS-Clients (by virtual-ip) move
to an other NFS-Ganesha server.

Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
would allow us to add a configuration option to NFS-Ganesha and have the
whole NFS-Ganesha cluster use the same locking/leases.

Thanks,
Niels


> 
> Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> BUG: 1369028
> Signed-off-by: Poornima G 
> Signed-off-by: Susant Palai 
> 
> 
> 
> [1] http://review.gluster.org/#/c/13901/10/
> 
> regards,
> Raghavendra
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] review request - Change the way client uuid is built

2016-09-20 Thread Raghavendra Gowdappa
Hi all,

[1] might have implications across different components in the stack. Your 
reviews are requested.



rpc : Change the way client uuid is built

Problem:
Today the main users of the client uuid are the protocol layers, locks and leases.
Protocol layers require each client uuid to be unique, even across
connects and disconnects. Locks and leases on the server side also use
the same client uid, which changes across graph switches and across
file migrations, making graph switches and file migration
tedious for locks and leases.
As of today, lock migration across a graph switch is client driven,
i.e. when a graph switches, the client reassociates all the locks (which
were associated with the old graph's client uid) with the new graph's
client uid. This means a flood of fops to get and set locks for each fd.
Also, file migration across bricks becomes even more difficult as the
client uuid for the same client is different on the other brick.

The exact same set of issues exists for leases as well.

Hence the solution:
Make the migration of locks and leases during graph switches and file migration
server driven instead of client driven. This can be achieved by changing
the format of the client uuid.

Client uuid currently:
%s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume count/reconnect
count)

Proposed client uuid:
"CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
-  CTX_ID: This will be constant per client.
-  GRAPH_ID, PID, HOST, PC_NAME (protocol client name) and RECON_NO (setvolume count)
remain the same.

With this, the first part of the client uuid, CTX_ID+GRAPH_ID, remains
constant across file migration, thus the migration is made easier.

Locks and leases store only the first part, CTX_ID+GRAPH_ID, as their
client identification. This means that, when the new graph connects,
the locks and leases xlators should walk through their databases
to update the client id to the new GRAPH_ID. Thus the graph switch
is made server driven and saves a lot of network traffic.

Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
BUG: 1369028
Signed-off-by: Poornima G 
Signed-off-by: Susant Palai 



[1] http://review.gluster.org/#/c/13901/10/
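
For illustration only, a small standalone sketch of building the proposed
client uuid and splitting off the stable CTX_ID+GRAPH_ID prefix that the
locks/leases xlators would store (the example values and helper logic are
made up; only the format string comes from the patch):

/* Illustrative sketch only: shows building the proposed client uuid and
 * extracting the stable CTX_ID+GRAPH_ID prefix used as the client identity. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *ctx_id   = "9f2c7a1e-0000-4000-8000-0123456789ab"; /* constant per client */
    int         graph_id = 2;
    int         pid      = 12345;
    const char *host     = "client-host";
    const char *pc_name  = "testvol-client-0";
    const char *recon_no = "0";
    char client_uid[256];

    snprintf(client_uid, sizeof(client_uid),
             "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s",
             ctx_id, graph_id, pid, host, pc_name, recon_no);

    /* The stable part is everything before "-PID:"; locks/leases would key
     * their tables on this, and update the GRAPH_ID in place on a graph
     * switch instead of the client re-acquiring every lock. */
    char stable[256];
    const char *cut = strstr(client_uid, "-PID:");
    size_t len = cut ? (size_t)(cut - client_uid) : strlen(client_uid);

    snprintf(stable, sizeof(stable), "%.*s", (int)len, client_uid);

    printf("full   : %s\n", client_uid);
    printf("stable : %s\n", stable);
    return 0;
}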

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for 3.9 patches

2016-09-18 Thread Xavier Hernandez

Hi Poornima,

On 19/09/16 07:01, Poornima Gurusiddaiah wrote:

Hi All,

There are 3 more patches that we need for enabling md-cache invalidation in 3.9.
Request your help with the reviews:

http://review.gluster.org/#/c/15378/   - afr: Implement IPC fop
http://review.gluster.org/#/c/15387/   - ec: Implement IPC fop


The patch is OK for me. My only concern is whether it shouldn't reference a bug
instead of having 'rfc' as the topic.


Xavi


http://review.gluster.org/#/c/15398/   - mdc/upcall/afr: Reduce the window of 
stale read


Thanks,
Poornima

- Original Message -

From: "Poornima Gurusiddaiah" 
To: "Gluster Devel" , "Raghavendra Gowdappa" 
, "Rajesh Joseph"
, "Raghavendra Talur" , "Soumya Koduri" 
, "Niels de Vos"
, "Anoop Chirayath Manjiyil Sajan" 
Sent: Tuesday, August 30, 2016 5:13:36 AM
Subject: Re: [Gluster-devel] Review request for 3.9 patches

Hi,

Few more patches, have addressed the review comments, could you please review
these patches:

http://review.gluster.org/15002   md-cache: Register the list of xattrs with
cache-invalidation
http://review.gluster.org/15300   dht, md-cache, upcall: Add invalidation of
IATT when the layout changes
http://review.gluster.org/15324   md-cache: Process all the cache
invalidation flags
http://review.gluster.org/15313   upcall: Mark the clients as accessed even
on readdir entries
http://review.gluster.org/15193   io-stats: Add stats for upcall
notifications

Regards,
Poornima

- Original Message -


From: "Poornima Gurusiddaiah" 
To: "Gluster Devel" , "Raghavendra Gowdappa"
, "Rajesh Joseph" , "Raghavendra
Talur" , "Soumya Koduri" , "Niels de
Vos" , "Anoop Chirayath Manjiyil Sajan"

Sent: Thursday, August 25, 2016 5:22:43 AM
Subject: Review request for 3.9 patches



Hi,



There are few patches that are part of the effort of integrating md-cache
with upcall.
Hope to take these patches for 3.9, it would be great if you can review
these
patches:



upcall patches:
http://review.gluster.org/#/c/15313/
http://review.gluster.org/#/c/15301/



md-cache patches:
http://review.gluster.org/#/c/15002/
http://review.gluster.org/#/c/15045/
http://review.gluster.org/#/c/15185/
http://review.gluster.org/#/c/15224/
http://review.gluster.org/#/c/15225/
http://review.gluster.org/#/c/15300/
http://review.gluster.org/#/c/15314/



Thanks,
Poornima

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for 3.9 patches

2016-09-18 Thread Niels de Vos
On Mon, Sep 19, 2016 at 01:01:13AM -0400, Poornima Gurusiddaiah wrote:
> Hi All,
> 
> There are 3 more patches that we need for enabling md-cache invalidation in 
> 3.9.
> Request your help with the reviews:
> 
> http://review.gluster.org/#/c/15378/   - afr: Implement IPC fop
> http://review.gluster.org/#/c/15387/   - ec: Implement IPC fop
> http://review.gluster.org/#/c/15398/   - mdc/upcall/afr: Reduce the window of 
> stale read

There is no patch for invalidation of xattrs through libgfapi yet, I
think? Are you planning on getting that done too? I would appreciate it
if you can at least file a bug for it.

Thanks,
Niels


> 
> 
> Thanks,
> Poornima
> 
> - Original Message -
> > From: "Poornima Gurusiddaiah" 
> > To: "Gluster Devel" , "Raghavendra Gowdappa" 
> > , "Rajesh Joseph"
> > , "Raghavendra Talur" , "Soumya 
> > Koduri" , "Niels de Vos"
> > , "Anoop Chirayath Manjiyil Sajan" 
> > Sent: Tuesday, August 30, 2016 5:13:36 AM
> > Subject: Re: [Gluster-devel] Review request for 3.9 patches
> > 
> > Hi,
> > 
> > Few more patches, have addressed the review comments, could you please 
> > review
> > these patches:
> > 
> > http://review.gluster.org/15002   md-cache: Register the list of xattrs with
> > cache-invalidation
> > http://review.gluster.org/15300   dht, md-cache, upcall: Add invalidation of
> > IATT when the layout changes
> > http://review.gluster.org/15324   md-cache: Process all the cache
> > invalidation flags
> > http://review.gluster.org/15313   upcall: Mark the clients as accessed even
> > on readdir entries
> > http://review.gluster.org/15193   io-stats: Add stats for upcall
> > notifications
> > 
> > Regards,
> > Poornima
> > 
> > - Original Message -
> > 
> > > From: "Poornima Gurusiddaiah" 
> > > To: "Gluster Devel" , "Raghavendra Gowdappa"
> > > , "Rajesh Joseph" , "Raghavendra
> > > Talur" , "Soumya Koduri" , "Niels 
> > > de
> > > Vos" , "Anoop Chirayath Manjiyil Sajan"
> > > 
> > > Sent: Thursday, August 25, 2016 5:22:43 AM
> > > Subject: Review request for 3.9 patches
> > 
> > > Hi,
> > 
> > > There are few patches that are part of the effort of integrating md-cache
> > > with upcall.
> > > Hope to take these patches for 3.9, it would be great if you can review
> > > these
> > > patches:
> > 
> > > upcall patches:
> > > http://review.gluster.org/#/c/15313/
> > > http://review.gluster.org/#/c/15301/
> > 
> > > md-cache patches:
> > > http://review.gluster.org/#/c/15002/
> > > http://review.gluster.org/#/c/15045/
> > > http://review.gluster.org/#/c/15185/
> > > http://review.gluster.org/#/c/15224/
> > > http://review.gluster.org/#/c/15225/
> > > http://review.gluster.org/#/c/15300/
> > > http://review.gluster.org/#/c/15314/
> > 
> > > Thanks,
> > > Poornima
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for 3.9 patches

2016-09-18 Thread Poornima Gurusiddaiah
Hi All,

There are 3 more patches that we need for enabling md-cache invalidation in 3.9.
Request your help with the reviews:

http://review.gluster.org/#/c/15378/   - afr: Implement IPC fop
http://review.gluster.org/#/c/15387/   - ec: Implement IPC fop
http://review.gluster.org/#/c/15398/   - mdc/upcall/afr: Reduce the window of 
stale read


Thanks,
Poornima

- Original Message -
> From: "Poornima Gurusiddaiah" 
> To: "Gluster Devel" , "Raghavendra Gowdappa" 
> , "Rajesh Joseph"
> , "Raghavendra Talur" , "Soumya 
> Koduri" , "Niels de Vos"
> , "Anoop Chirayath Manjiyil Sajan" 
> Sent: Tuesday, August 30, 2016 5:13:36 AM
> Subject: Re: [Gluster-devel] Review request for 3.9 patches
> 
> Hi,
> 
> Few more patches, have addressed the review comments, could you please review
> these patches:
> 
> http://review.gluster.org/15002   md-cache: Register the list of xattrs with
> cache-invalidation
> http://review.gluster.org/15300   dht, md-cache, upcall: Add invalidation of
> IATT when the layout changes
> http://review.gluster.org/15324   md-cache: Process all the cache
> invalidation flags
> http://review.gluster.org/15313   upcall: Mark the clients as accessed even
> on readdir entries
> http://review.gluster.org/15193   io-stats: Add stats for upcall
> notifications
> 
> Regards,
> Poornima
> 
> - Original Message -
> 
> > From: "Poornima Gurusiddaiah" 
> > To: "Gluster Devel" , "Raghavendra Gowdappa"
> > , "Rajesh Joseph" , "Raghavendra
> > Talur" , "Soumya Koduri" , "Niels de
> > Vos" , "Anoop Chirayath Manjiyil Sajan"
> > 
> > Sent: Thursday, August 25, 2016 5:22:43 AM
> > Subject: Review request for 3.9 patches
> 
> > Hi,
> 
> > There are few patches that are part of the effort of integrating md-cache
> > with upcall.
> > Hope to take these patches for 3.9, it would be great if you can review
> > these
> > patches:
> 
> > upcall patches:
> > http://review.gluster.org/#/c/15313/
> > http://review.gluster.org/#/c/15301/
> 
> > md-cache patches:
> > http://review.gluster.org/#/c/15002/
> > http://review.gluster.org/#/c/15045/
> > http://review.gluster.org/#/c/15185/
> > http://review.gluster.org/#/c/15224/
> > http://review.gluster.org/#/c/15225/
> > http://review.gluster.org/#/c/15300/
> > http://review.gluster.org/#/c/15314/
> 
> > Thanks,
> > Poornima
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request: tier as a service.

2016-09-16 Thread Hari Gowtham
Hi Manikandan,

Thanks for the review. Will work on addressing them ASAP.
Please feel free to review the remaining files when you find time.

- Original Message -
> From: "Manikandan Selvaganesh" 
> To: "Niels de Vos" 
> Cc: "Hari Gowtham" , "gluster-devel" 
> , "Kaushal Madappa"
> 
> Sent: Friday, September 16, 2016 1:37:09 PM
> Subject: Re: [Gluster-devel] Review request: tier as a service.
> 
> Hi Hari,
> 
> I have done a very initial review for some of the files. I have just
> reviewed the code flow without having much idea on the actual
> functionality. Please
> feel free to address it when you have time(since most of them are coverity,
> indentation
> and memory issues related).
> 
> I will also review the remaining files when I get time.
> 
> --
> Thanks & Regards,
> Manikandan Selvaganesh.
> 
> On Thu, Sep 15, 2016 at 2:22 PM, Niels de Vos  wrote:
> 
> > On Thu, Sep 15, 2016 at 02:50:09AM -0400, Hari Gowtham wrote:
> > > Hi,
> > >
> > > I would be happy to get reviews for this patch
> > > http://review.gluster.org/#/c/13365/
> > >
> > > more details can be found here about the changes:
> > > https://docs.google.com/document/d/1_iyjiwTLnBJlCiUgjAWnpnPD801h5LN
> > xLhHmN7zmk1o/edit?usp=sharing
> >
> > Please send this as a document for the glusterfs-specs repository (uses
> > Gerrit just like the glusterfs sources). See the README.md on
> > https://github.com/gluster/glusterfs-specs/blob/master/README.md for
> > some more details.
> >
> > Thanks,
> > Niels
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> 

-- 
Regards, 
Hari. 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request: tier as a service.

2016-09-16 Thread Manikandan Selvaganesh
Hi Hari,

I have done a very initial review of some of the files. I have just reviewed
the code flow, without having much idea about the actual functionality. Please
feel free to address the comments when you have time (since most of them are
Coverity, indentation and memory related issues).

I will also review the remaining files when I get time.

--
Thanks & Regards,
Manikandan Selvaganesh.

On Thu, Sep 15, 2016 at 2:22 PM, Niels de Vos  wrote:

> On Thu, Sep 15, 2016 at 02:50:09AM -0400, Hari Gowtham wrote:
> > Hi,
> >
> > I would be happy to get reviews for this patch
> > http://review.gluster.org/#/c/13365/
> >
> > more details can be found here about the changes:
> > https://docs.google.com/document/d/1_iyjiwTLnBJlCiUgjAWnpnPD801h5LN
> xLhHmN7zmk1o/edit?usp=sharing
>
> Please send this as a document for the glusterfs-specs repository (uses
> Gerrit just like the glusterfs sources). See the README.md on
> https://github.com/gluster/glusterfs-specs/blob/master/README.md for
> some more details.
>
> Thanks,
> Niels
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request: tier as a service.

2016-09-15 Thread Niels de Vos
On Thu, Sep 15, 2016 at 02:50:09AM -0400, Hari Gowtham wrote:
> Hi,
> 
> I would be happy to get reviews for this patch
> http://review.gluster.org/#/c/13365/
> 
> more details can be found here about the changes:
> https://docs.google.com/document/d/1_iyjiwTLnBJlCiUgjAWnpnPD801h5LNxLhHmN7zmk1o/edit?usp=sharing

Please send this as a document for the glusterfs-specs repository (uses
Gerrit just like the glusterfs sources). See the README.md on
https://github.com/gluster/glusterfs-specs/blob/master/README.md for
some more details.

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request: tier as a service.

2016-09-14 Thread Hari Gowtham
Hi,

I would be happy to get reviews for this patch
http://review.gluster.org/#/c/13365/

more details can be found here about the changes:
https://docs.google.com/document/d/1_iyjiwTLnBJlCiUgjAWnpnPD801h5LNxLhHmN7zmk1o/edit?usp=sharing


-- 
Regards, 
Hari. 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for lock migration patches

2016-09-14 Thread Susant Palai
+Poornima, Talur

- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Susant Palai" 
> Cc: "Raghavendra Gowdappa" , "gluster-devel" 
> 
> Sent: Wednesday, 14 September, 2016 8:13:25 PM
> Subject: Re: [Gluster-devel] Review request for lock migration patches
> 
> Could you get the reviews from one of Poornima/Raghavendra Talur once?
> 
> On Wed, Sep 14, 2016 at 6:12 PM, Susant Palai  wrote:
> 
> > Hi,
> >   It would be nice to get the patches in 3.9. The reviews are pending for
> > a long time. Requesting reviews.
> >
> > Thanks,
> > Susant
> >
> >
> > - Original Message -
> > > From: "Susant Palai" 
> > > To: "Raghavendra Gowdappa" , "Pranith Kumar
> > Karampuri" 
> > > Cc: "gluster-devel" 
> > > Sent: Wednesday, 7 September, 2016 9:54:04 AM
> > > Subject: Re: [Gluster-devel] Review request for lock migration patches
> > >
> > > Gentle reminder for reviews.
> > >
> > > Thanks,
> > > Susant
> > >
> > > - Original Message -
> > > > From: "Susant Palai" 
> > > > To: "Raghavendra Gowdappa" , "Pranith Kumar
> > Karampuri"
> > > > 
> > > > Cc: "gluster-devel" 
> > > > Sent: Tuesday, 30 August, 2016 3:19:13 PM
> > > > Subject: [Gluster-devel] Review request for lock migration patches
> > > >
> > > > Hi,
> > > >
> > > > There are few patches targeted for lock migration. Requesting for
> > review.
> > > > 1. http://review.gluster.org/#/c/13901/
> > > > 2. http://review.gluster.org/#/c/14286/
> > > > 3. http://review.gluster.org/#/c/14492/
> > > > 4. http://review.gluster.org/#/c/15076/
> > > >
> > > >
> > > > Thanks,
> > > > Susant~
> > > >
> > > > ___
> > > > Gluster-devel mailing list
> > > > Gluster-devel@gluster.org
> > > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> >
> 
> 
> 
> --
> Pranith
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for lock migration patches

2016-09-14 Thread Pranith Kumar Karampuri
Could you get the reviews from one of Poornima/Raghavendra Talur once?

On Wed, Sep 14, 2016 at 6:12 PM, Susant Palai  wrote:

> Hi,
>   It would be nice to get the patches in 3.9. The reviews are pending for
> a long time. Requesting reviews.
>
> Thanks,
> Susant
>
>
> - Original Message -
> > From: "Susant Palai" 
> > To: "Raghavendra Gowdappa" , "Pranith Kumar
> Karampuri" 
> > Cc: "gluster-devel" 
> > Sent: Wednesday, 7 September, 2016 9:54:04 AM
> > Subject: Re: [Gluster-devel] Review request for lock migration patches
> >
> > Gentle reminder for reviews.
> >
> > Thanks,
> > Susant
> >
> > - Original Message -
> > > From: "Susant Palai" 
> > > To: "Raghavendra Gowdappa" , "Pranith Kumar
> Karampuri"
> > > 
> > > Cc: "gluster-devel" 
> > > Sent: Tuesday, 30 August, 2016 3:19:13 PM
> > > Subject: [Gluster-devel] Review request for lock migration patches
> > >
> > > Hi,
> > >
> > > There are few patches targeted for lock migration. Requesting for
> review.
> > > 1. http://review.gluster.org/#/c/13901/
> > > 2. http://review.gluster.org/#/c/14286/
> > > 3. http://review.gluster.org/#/c/14492/
> > > 4. http://review.gluster.org/#/c/15076/
> > >
> > >
> > > Thanks,
> > > Susant~
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for lock migration patches

2016-09-14 Thread Susant Palai
Hi,
  It would be nice to get the patches in 3.9. The reviews are pending for a 
long time. Requesting reviews.

Thanks,
Susant
  

- Original Message -
> From: "Susant Palai" 
> To: "Raghavendra Gowdappa" , "Pranith Kumar Karampuri" 
> 
> Cc: "gluster-devel" 
> Sent: Wednesday, 7 September, 2016 9:54:04 AM
> Subject: Re: [Gluster-devel] Review request for lock migration patches
> 
> Gentle reminder for reviews.
> 
> Thanks,
> Susant
> 
> - Original Message -
> > From: "Susant Palai" 
> > To: "Raghavendra Gowdappa" , "Pranith Kumar Karampuri"
> > 
> > Cc: "gluster-devel" 
> > Sent: Tuesday, 30 August, 2016 3:19:13 PM
> > Subject: [Gluster-devel] Review request for lock migration patches
> > 
> > Hi,
> > 
> > There are few patches targeted for lock migration. Requesting for review.
> > 1. http://review.gluster.org/#/c/13901/
> > 2. http://review.gluster.org/#/c/14286/
> > 3. http://review.gluster.org/#/c/14492/
> > 4. http://review.gluster.org/#/c/15076/
> > 
> > 
> > Thanks,
> > Susant~
> > 
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for EC - set/unset dirty flag for data/metadata update

2016-09-07 Thread Ashish Pandey
Hi, 

Please review the following patch for EC- 
http://review.gluster.org/#/c/13733/ 

Ashish 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for lock migration patches

2016-09-06 Thread Susant Palai
Gentle reminder for reviews.

Thanks,
Susant

- Original Message -
> From: "Susant Palai" 
> To: "Raghavendra Gowdappa" , "Pranith Kumar Karampuri" 
> 
> Cc: "gluster-devel" 
> Sent: Tuesday, 30 August, 2016 3:19:13 PM
> Subject: [Gluster-devel] Review request for lock migration patches
> 
> Hi,
> 
> There are few patches targeted for lock migration. Requesting for review.
> 1. http://review.gluster.org/#/c/13901/
> 2. http://review.gluster.org/#/c/14286/
> 3. http://review.gluster.org/#/c/14492/
> 4. http://review.gluster.org/#/c/15076/
> 
> 
> Thanks,
> Susant~
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for lock migration patches

2016-08-30 Thread Susant Palai
Hi,

There are a few patches targeted for lock migration. Requesting review.
1. http://review.gluster.org/#/c/13901/
2. http://review.gluster.org/#/c/14286/
3. http://review.gluster.org/#/c/14492/
4. http://review.gluster.org/#/c/15076/


Thanks,
Susant~

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for 3.9 patches

2016-08-30 Thread Poornima Gurusiddaiah
Hi,

A few more patches; I have addressed the review comments. Could you please
review these patches:

http://review.gluster.org/15002   md-cache: Register the list of xattrs with 
cache-invalidation
http://review.gluster.org/15300   dht, md-cache, upcall: Add invalidation of 
IATT when the layout changes
http://review.gluster.org/15324   md-cache: Process all the cache invalidation 
flags
http://review.gluster.org/15313   upcall: Mark the clients as accessed even on 
readdir entries
http://review.gluster.org/15193   io-stats: Add stats for upcall notifications

Regards,
Poornima

- Original Message - 

> From: "Poornima Gurusiddaiah" 
> To: "Gluster Devel" , "Raghavendra Gowdappa"
> , "Rajesh Joseph" , "Raghavendra
> Talur" , "Soumya Koduri" , "Niels de
> Vos" , "Anoop Chirayath Manjiyil Sajan"
> 
> Sent: Thursday, August 25, 2016 5:22:43 AM
> Subject: Review request for 3.9 patches

> Hi,

> There are few patches that are part of the effort of integrating md-cache
> with upcall.
> Hope to take these patches for 3.9, it would be great if you can review these
> patches:

> upcall patches:
> http://review.gluster.org/#/c/15313/
> http://review.gluster.org/#/c/15301/

> md-cache patches:
> http://review.gluster.org/#/c/15002/
> http://review.gluster.org/#/c/15045/
> http://review.gluster.org/#/c/15185/
> http://review.gluster.org/#/c/15224/
> http://review.gluster.org/#/c/15225/
> http://review.gluster.org/#/c/15300/
> http://review.gluster.org/#/c/15314/

> Thanks,
> Poornima
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for 3.9 patches

2016-08-25 Thread Poornima Gurusiddaiah
Hi, 

There are a few patches that are part of the effort of integrating md-cache
with upcall. I hope to take these patches for 3.9; it would be great if you can
review these patches:

upcall patches: 
http://review.gluster.org/#/c/15313/ 
http://review.gluster.org/#/c/15301/ 

md-cache patches: 
http://review.gluster.org/#/c/15002/ 
http://review.gluster.org/#/c/15045/ 
http://review.gluster.org/#/c/15185/ 
http://review.gluster.org/#/c/15224/ 
http://review.gluster.org/#/c/15225/ 
http://review.gluster.org/#/c/15300/ 
http://review.gluster.org/#/c/15314/ 

Thanks, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for md-cache enhancements

2016-07-27 Thread Niels de Vos
On Wed, Jul 27, 2016 at 06:20:48AM -0400, Poornima Gurusiddaiah wrote:
> Hi, 
> 
> Here is a patch http://review.gluster.org/#/c/15002 that lets md-cache
> inform upcall about the list of xattrs, that it is interested, in
> receiving invalidations. Could you please take a look at this. There
> are dependent patches on this, which i will work on based on the
> review comments. 

Please make sure to add it as a feature for 3.9. You should be able to
send a pull request through GitHub on the bottom of the planning page:
  https://www.gluster.org/community/roadmap/3.9/

Thanks,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request for md-cache enhancements

2016-07-27 Thread Poornima Gurusiddaiah
Hi, 

Here is a patch, http://review.gluster.org/#/c/15002, that lets md-cache inform
upcall about the list of xattrs for which it is interested in receiving
invalidations. Could you please take a look at it? There are patches dependent
on this, which I will work on based on the review comments.

Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for md-cache(performance xlator) patch

2016-07-17 Thread Raghavendra Gowdappa
I have some comments on the patch. Please go through them. Once you have acked
them, I can merge the patch.

- Original Message -
> From: "Poornima Gurusiddaiah" 
> To: "Raghavendra Gowdappa" , "Gluster Devel" 
> 
> Sent: Monday, July 18, 2016 9:45:57 AM
> Subject: Review request for md-cache(performance xlator) patch
> 
> Hi Raghavendra,
> 
> http://review.gluster.org/#/c/12951/ is a patch that enables
> cache-invalidation for md-cache. Have got ack from Niels and Zhou Zhengping.
> Can you please review and merge the patch, as you are the maintainer. There
> are other dependent patches on this, which i will be sending once this is
> out of the way.
> 
> Thanks,
> Poornima
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for md-cache(performance xlator) patch

2016-07-17 Thread Poornima Gurusiddaiah
Hi Raghavendra, 

http://review.gluster.org/#/c/12951/ is a patch that enables cache-invalidation
for md-cache. It has acks from Niels and Zhou Zhengping. Can you please review
and merge the patch, as you are the maintainer? There are other patches
dependent on this, which I will send once this one is out of the way.

Thanks, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request for http://review.gluster.org/#/c/14815/

2016-06-27 Thread Karthik Subrahmanya
Hi,

I have added an implementation for the posix_do_futimes function, which is
incomplete in the current code. I modelled it on the existing implementation
of the posix_do_utimes function. Please review the patch and correct me if I
am wrong anywhere.
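
For anyone unfamiliar with the call involved, here is a tiny standalone
illustration of the underlying difference the patch is about: futimes() acts
on an already-open fd, while utimes() (which posix_do_utimes builds on) acts
on a path. This is not the xlator patch itself; it assumes a Linux/glibc
environment, and the helper and file name below are invented for the example.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* Set atime/mtime on an open fd; the fd-based counterpart of utimes(path, tv). */
static int
set_times_by_fd (int fd, time_t atime_sec, time_t mtime_sec)
{
        struct timeval tv[2] = {
                { .tv_sec = atime_sec, .tv_usec = 0 },  /* access time       */
                { .tv_sec = mtime_sec, .tv_usec = 0 },  /* modification time */
        };

        return futimes (fd, tv);
}

int
main (void)
{
        int fd = open ("/tmp/futimes-demo", O_CREAT | O_RDWR, 0644);

        if (fd < 0 || set_times_by_fd (fd, 0, 0) < 0)
                perror ("futimes demo");
        if (fd >= 0)
                close (fd);
        return 0;
}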

Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1350406
Patch: http://review.gluster.org/#/c/14815/

Thanks & Regards,
Karthik
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] review request - quota information mismatch which glusterfs on zfs environment

2016-05-31 Thread Sungsik Park
Hi all,

The 'quota information mismatch' problem occurs when glusterfs runs on top of
a ZFS underlying file system.

I am requesting reviews of the code commit that solves this problem.

* Red Hat Bugzilla - Bug 1341355 - quota information mismatch which
glusterfs on zfs environment


* for review: http://review.gluster.org/#/c/14593/

thanks.

-- 
Sungsik, Park [Corazy Park]
Software Development Engineer
Email: corazy.p...@gmail.com

This email may be confidential and protected by legal privilege. If you are
not the intended recipient, disclosure, copying, distribution and use are
prohibited; please notify us immediately and delete this copy from your system.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] review request

2016-05-30 Thread Susant Palai
Hi,
  Requesting reviews for http://review.gluster.org/#/c/14251 and 
http://review.gluster.org/#/c/14252.

Thanks,
Susant~
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request - Gluster Eventing Feature

2016-05-24 Thread Aravinda

Hi,

I submitted a patch for the new feature "Gluster Eventing". Please review
the patch.


Patch:   http://review.gluster.org/14248
Design:  http://review.gluster.org/13115
Blog: http://aravindavk.in/blog/10-mins-intro-to-gluster-eventing
Demo:https://www.youtube.com/watch?v=urzong5sKqc

Thanks

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request for leases patches

2016-03-08 Thread Soumya Koduri

Hi Poornima,

On 03/07/2016 11:24 AM, Poornima Gurusiddaiah wrote:

> Hi All,
>
> Here is the link to feature page: http://review.gluster.org/#/c/11980/
>
> Patches can be found @:
> http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:leases


This link displays only one patch [1]. Probably other patches are not 
marked under topic:leases. Please verify the same.


Also, please confirm whether the list is complete enough to be consumed by an
application, or whether there are still pending patches (apart from the open
items mentioned in the design doc) being worked on.


Thanks,
Soumya

[1] http://review.gluster.org/11721





> Regards,
> Poornima


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for leases patches

2016-03-06 Thread Poornima Gurusiddaiah
Hi All, 

Here is the link to feature page: http://review.gluster.org/#/c/11980/ 

Patches can be found @: 
http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:leases
 

Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request for md-cache improvement patches

2016-03-01 Thread Poornima Gurusiddaiah
Hi, 

Here are the improvements proposed for md-cache (an illustrative tuning example follows the list): 
- Integrate it with upcall cache-invalidation so that we can increase the cache 
timeout to be more than 1 sec. 
- Enable md-cache to cache xattrs. 
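
To give an idea of what this enables, below is roughly the kind of tuning that
becomes possible once both pieces are in place. The option names are the ones
this work eventually exposed in later GlusterFS releases, so treat the exact
names and values as illustrative rather than as part of these patches:

# enable upcall-based invalidation and a longer md-cache timeout
gluster volume set <volname> features.cache-invalidation on
gluster volume set <volname> features.cache-invalidation-timeout 600
gluster volume set <volname> performance.cache-invalidation on
gluster volume set <volname> performance.md-cache-timeout 600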

The doc explaining the same in detail can be found at 
http://review.gluster.org/#/c/13408/ 

Here are the patches implementing the same: 
md-cache: http://review.gluster.org/#/c/12951/ 
http://review.gluster.org/#/c/13406/ 
Upcall: http://review.gluster.org/#/c/12995/ 
http://review.gluster.org/#/c/12996/ 

Please review the same. 

Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request

2016-02-17 Thread Atin Mukherjee


On 02/17/2016 04:15 PM, Niels de Vos wrote:
> On Wed, Feb 17, 2016 at 02:38:44PM +0530, Atin Mukherjee wrote:
>> I've few patches which need some review attention, will appreciate if
>> some of you can have a look at it.
>>
Here you go!
>> http://review.gluster.org/#/c/13258/ - glusterd: use string comparison for 
>> realpath checks in glusterd_is_brickpath_available
>> http://review.gluster.org/#/c/13222/ - glusterd: display usr.* options in 
>> volume get
>> http://review.gluster.org/#/c/13272/ - glusterd: volume get should pick 
>> options from priv->opts too
>> http://review.gluster.org/#/c/13323/ - glusterd: set 
>> decommission_is_in_progress flag for inprogress remove-brick op on glusterd 
>> restart
>> http://review.gluster.org/#/c/13390/ - this is merged now, so ignore :)
> 
> It really helps if you list the components/subjects of the changes :)
> 
> Niels
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request

2016-02-17 Thread Niels de Vos
On Wed, Feb 17, 2016 at 02:38:44PM +0530, Atin Mukherjee wrote:
> I've few patches which need some review attention, will appreciate if
> some of you can have a look at it.
> 
> http://review.gluster.org/#/c/13258/
> http://review.gluster.org/#/c/13222/
> http://review.gluster.org/#/c/13272/
> http://review.gluster.org/#/c/13323/
> http://review.gluster.org/#/c/13390/

It really helps if you list the components/subjects of the changes :)

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request

2016-02-17 Thread Atin Mukherjee
I have a few patches which need some review attention; I would appreciate it
if some of you could have a look at them.

http://review.gluster.org/#/c/13258/
http://review.gluster.org/#/c/13222/
http://review.gluster.org/#/c/13272/
http://review.gluster.org/#/c/13323/
http://review.gluster.org/#/c/13390/

Thanks,
Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request

2016-01-04 Thread Atin Mukherjee


On 01/04/2016 12:08 PM, Prasanna Kumar Kalever wrote:
> Giving one more try in the new year :)
> 
> http://review.gluster.org/#/c/12709/
> http://review.gluster.org/#/c/12963/
Managed to review this one :)
> http://review.gluster.org/#/c/13002/
> 
> 
> Happy New year 2016!
>  
> Thanks,
> -Prasanna
> 
> 
> - Original Message -
>> On Wednesday, December 23, 2015 10:07:53 PM, Atin Mukherjee wrote:
>>> I'd try to take up few of them in coming few days which are related
>>> glusterd. Appreciate your patience!
>>
>> Thanks for taking them Atin
>>
>> -Prasanna
>>
>>>
>>> -Atin
>>> Sent from one plus one
>>> On Dec 21, 2015 6:34 PM, "Prasanna Kumar Kalever" 
>>> wrote:
>>>
>>>> Hello Everyone,
>>>>
>>>>
>>>> It would be helpful if someone can review:
>>>>
>>>> http://review.gluster.org/#/c/12709/
>>>> http://review.gluster.org/#/c/12963/
>>>> http://review.gluster.org/#/c/13002/
>>>>
>>>>
>>>> Thanks,
>>>> -Prasanna
>>>>
>>>>
>>>> ___
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request

2016-01-03 Thread Prasanna Kumar Kalever
Giving one more try in the new year :)

http://review.gluster.org/#/c/12709/
http://review.gluster.org/#/c/12963/
http://review.gluster.org/#/c/13002/


Happy New year 2016!
 
Thanks,
-Prasanna


- Original Message -
> On Wednesday, December 23, 2015 10:07:53 PM, Atin Mukherjee wrote:
> > I'd try to take up few of them in coming few days which are related
> > glusterd. Appreciate your patience!
> 
> Thanks for taking them Atin
> 
> -Prasanna
> 
> > 
> > -Atin
> > Sent from one plus one
> > On Dec 21, 2015 6:34 PM, "Prasanna Kumar Kalever" 
> > wrote:
> > 
> > > Hello Everyone,
> > >
> > >
> > > It would be helpful if someone can review:
> > >
> > > http://review.gluster.org/#/c/12709/
> > > http://review.gluster.org/#/c/12963/
> > > http://review.gluster.org/#/c/13002/
> > >
> > >
> > > Thanks,
> > > -Prasanna
> > >
> > >
> > > ___
> > > Gluster-devel mailing list
> > > Gluster-devel@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request

2015-12-23 Thread Prasanna Kumar Kalever
On Wednesday, December 23, 2015 10:07:53 PM, Atin Mukherjee wrote:
> I'd try to take up few of them in coming few days which are related
> glusterd. Appreciate your patience!

Thanks for taking them Atin

-Prasanna

> 
> -Atin
> Sent from one plus one
> On Dec 21, 2015 6:34 PM, "Prasanna Kumar Kalever" 
> wrote:
> 
> > Hello Everyone,
> >
> >
> > It would be helpful if someone can review:
> >
> > http://review.gluster.org/#/c/12709/
> > http://review.gluster.org/#/c/12963/
> > http://review.gluster.org/#/c/13002/
> >
> >
> > Thanks,
> > -Prasanna
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Review request

2015-12-23 Thread Atin Mukherjee
I'll try to take up a few of them, the glusterd-related ones, in the coming
few days. Appreciate your patience!

-Atin
Sent from one plus one
On Dec 21, 2015 6:34 PM, "Prasanna Kumar Kalever" 
wrote:

> Hello Everyone,
>
>
> It would be helpful if someone can review:
>
> http://review.gluster.org/#/c/12709/
> http://review.gluster.org/#/c/12963/
> http://review.gluster.org/#/c/13002/
>
>
> Thanks,
> -Prasanna
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Review request

2015-12-21 Thread Prasanna Kumar Kalever
Hello Everyone,


It would be helpful if someone can review:

http://review.gluster.org/#/c/12709/
http://review.gluster.org/#/c/12963/
http://review.gluster.org/#/c/13002/


Thanks,
-Prasanna


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Review request] write-behind to retry failed syncs

2015-11-18 Thread Raghavendra Gowdappa
For ease of access, I am posting the summary from the commit message below:

1. When sync fails, the cached-write is still preserved unless there
   is a flush/fsync waiting on it.
2. When a sync fails and there is a flush/fsync waiting on the
   cached-write, the cache is thrown away and no further retries will
   be made. In other words flush/fsync act as barriers for all the
   previous writes. All previous writes are either successfully
   synced to the backend or forgotten in case of an error. Without such a
   barrier fop (especially flush, which is issued prior to a close), we would
   end up retrying forever even after the fd is closed.
3. If a fop is waiting on cached-write and syncing to backend fails,
   the waiting fop is failed.
4. sync failures when no fop is waiting are ignored and are not
   propagated to application.
5. The effect of repeated sync failures is that, there will be no
   cache for future writes and they cannot be written behind.

The above algorithm handles transient errors (EDQUOT, ENOSPC, ENOTCONN).
Handling of non-transient errors is slightly different, as below (a small
illustrative sketch of the combined decision logic follows this list):
1. Throw away the write-buffer, so that cache is freed. This means no
   retries are made for non-transient errors. Also, since cache is
   freed, future writes can be written-behind.
2. Retain the request till an fsync or flush. This means all future
   operations to failed regions will fail till an fsync/flush. This is
   a conservative error handling to force application to know that a
   written-behind write has failed and take remedial action like
   rollback to last fsync and retrying all the writes from that point.
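
To make the combined decision logic above concrete, here is a small standalone
C sketch. The struct and function names (cached_write, on_sync_failure) are
invented purely for illustration and are not the write-behind translator's
actual data structures or API:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct cached_write {
        bool has_waiting_fop;   /* a flush/fsync or other fop waits on this write */
        bool keep_cache;        /* cached data is retained for a later retry      */
        bool region_failed;     /* ops on this region must fail until fsync/flush */
};

static bool
is_transient (int op_errno)
{
        return op_errno == EDQUOT || op_errno == ENOSPC || op_errno == ENOTCONN;
}

/* Apply the rules above to one failed sync. Returns the errno reported to a
 * waiting fop, or 0 if the failure is hidden from the application. */
static int
on_sync_failure (struct cached_write *w, int op_errno)
{
        if (is_transient (op_errno)) {
                if (w->has_waiting_fop) {
                        /* flush/fsync acts as a barrier: throw the cache away,
                         * make no further retries and fail the waiting fop */
                        w->keep_cache = false;
                        return op_errno;
                }
                /* nobody is waiting: keep the cache and retry later;
                 * the error is not propagated to the application */
                w->keep_cache = true;
                return 0;
        }

        /* non-transient: free the buffer (no retries) so future writes can be
         * written behind, but remember the failure so operations on the failed
         * region keep failing until the next fsync/flush */
        w->keep_cache = false;
        w->region_failed = true;
        return 0;
}

int
main (void)
{
        struct cached_write w = { .has_waiting_fop = false };
        int reported = on_sync_failure (&w, ENOSPC);

        printf ("ENOSPC with no waiter: reported=%d keep_cache=%d\n",
                reported, w.keep_cache);
        return 0;
}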
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [Review request] write-behind to retry failed syncs

2015-11-18 Thread Raghavendra Gowdappa
Hi all,

[1] adds retry logic to failed syncs (to backend). It would be helpful if you 
can comment on:

1. Interface
2. Design
3. Implementation

[1] review.gluster.org/#/c/12594/7

regards,
Raghavendra.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request

2015-10-14 Thread Atin Mukherjee
I have a couple of patches which need review attention:

- http://review.gluster.org/#/c/12171/
- http://review.gluster.org/#/c/12329/

Raghavendra T - http://review.gluster.org/#/c/12328/ has got a +2 from
Jeff, could you merge it?

Thanks,
Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for Gluster IPv6 bugfix (Bug 1117886)

2015-09-09 Thread Atin Mukherjee


On 09/10/2015 06:30 AM, nithin dabilpuram wrote:
> Hi ,
> 
> Can this code be reviewed ? It is related to transport inet6
> 
> http://review.gluster.org/#/c/11988/
I've reviewed it and logged some comments.

~Atin
> 
> thanks
> Nithin
> 
> On Wed, Jun 17, 2015 at 9:06 AM, Nithin Kumar Dabilpuram
> mailto:nithind1...@yahoo.in>> wrote:
> 
> Sure Richard.
>  
> -Nithin
> 
> 
> 
> On Tuesday, 16 June 2015 1:23 AM, Richard Wareing  > wrote:
> 
> 
> Hey Nithin,
> 
> We have IPv6 going as well (v3.4.x & v3.6.x), so I might be able to
> help out here and perhaps combine our efforts.  We did something
> similar here, however we also tackled the NFS side of the house,
> which required a bunch of changes due to how port registration w/
> portmapper changed in IPv6 vs IPv4.  You effectively have to use
> "libtirpc" to do all the port registrations with IPv6.
> 
> We can offer up our patches for this work and hopefully things can
> be combined such that end-users can simply do "vol set 
> transport-address-family " and voila they have whatever
> support they desire.
> 
> I'll see if we can get this posted to bug 1117886 this week.
> 
> Richard
> 
> 
> 
> 
> *From:* gluster-devel-boun...@gluster.org [gluster-devel-boun...@gluster.org]
> on behalf of Nithin Kumar Dabilpuram [nithind1...@yahoo.in]
> *Sent:* Saturday, June 13, 2015 9:12 PM
> *To:* gluster-devel@gluster.org 
> *Subject:* [Gluster-devel] Gluster IPv6 bugfixes (Bug 1117886)
> 
> 
> 
> 
> Hi,
> 
> Can I contribute to this bug fix ? I've worked on Gluster IPv6
> functionality bugs in 3.3.2 in my past organization and was able to
> successfully bring up gluster on IPv6 link local addresses as well.
> 
> Please find my work in progress patch. I'll raise gerrit review once
> testing is done. I was successfully able to create volumes with 3
> peers and add bricks. I'll continue testing other basic
> functionality and see what needs to be modified. Any other suggestions ?
> 
> Brief info about the patch:
> Here I'm trying to use "transport.address-family" option in
> /etc/glusterfs/glusterd.vol file and then propagate the same to
> server and client vol files and their translators.
> 
> In this way when user mentions "transport.address-family inet6" in
> its glusterd.vol file, all glusterd servers open AF_INET6 sockets
> and then the same information is stored in glusterd_volinfo and used
> when generating vol config files.
>  
> -thanks
> Nithin
> 
> 
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for Gluster IPv6 bugfix (Bug 1117886)

2015-09-09 Thread nithin dabilpuram
Hi,

Can this code be reviewed? It is related to transport inet6:

http://review.gluster.org/#/c/11988/

thanks
Nithin

On Wed, Jun 17, 2015 at 9:06 AM, Nithin Kumar Dabilpuram <
nithind1...@yahoo.in> wrote:

> Sure Richard.
>
> -Nithin
>
>
>
> On Tuesday, 16 June 2015 1:23 AM, Richard Wareing  wrote:
>
>
> Hey Nithin,
>
> We have IPv6 going as well (v3.4.x & v3.6.x), so I might be able to help
> out here and perhaps combine our efforts.  We did something similar here,
> however we also tackled the NFS side of the house, which required a bunch
> of changes due to how port registration w/ portmapper changed in IPv6 vs
> IPv4.  You effectively have to use "libtirpc" to do all the port
> registrations with IPv6.
>
> We can offer up our patches for this work and hopefully things can be
> combined such that end-users can simply do "vol set 
> transport-address-family " and voila they have whatever support
> they desire.
>
> I'll see if we can get this posted to bug 1117886 this week.
>
> Richard
>
>
>
> --
> *From:* gluster-devel-boun...@gluster.org [
> gluster-devel-boun...@gluster.org] on behalf of Nithin Kumar Dabilpuram [
> nithind1...@yahoo.in]
> *Sent:* Saturday, June 13, 2015 9:12 PM
> *To:* gluster-devel@gluster.org
> *Subject:* [Gluster-devel] Gluster IPv6 bugfixes (Bug 1117886)
>
>
>
>
> Hi ,
>
> Can I contribute to this bug fix ? I've worked on Gluster IPv6
> functionality bugs in 3.3.2 in my past organization and was able to
> successfully bring up gluster on IPv6 link local addresses as well.
>
> Please find my work in progress patch. I'll raise gerrit review once
> testing is done. I was successfully able to create volumes with 3 peers and
> add bricks. I'll continue testing other basic functionality and see what
> needs to be modified. Any other suggestions ?
>
> Brief info about the patch:
> Here I'm trying to use "transport.address-family" option in
> /etc/glusterfs/glusterd.vol file and then propagate the same to server and
> client vol files and their translators.
>
> In this way when user mentions "transport.address-family inet6" in its
> glusterd.vol file, all glusterd servers open AF_INET6 sockets and then the
> same information is stored in glusterd_volinfo and used when generating vol
> config files.
>
> -thanks
> Nithin
>
>
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
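
For reference, the glusterd.vol usage described in the quoted message looks
roughly like the stanza below; apart from transport.address-family, the options
shown are only illustrative and may differ from a stock installation:

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.address-family inet6
end-volume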
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for patch on run_tests.sh

2015-09-04 Thread Raghavendra Talur
Jeff and Manu have already reviewed http://review.gluster.org/#/c/12093/.
I would like a few more reviews before I merge this.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for http://review.gluster.org/#/c/10262

2015-08-11 Thread Atin Mukherjee
Gentle reminder! Since 3.7.3 is already out, I am willing to take this in
for 3.7.4.

On 07/24/2015 09:28 AM, Atin Mukherjee wrote:
> Folks,
> 
> Currently in our vme table we have few option names which are redundant
> across different translators. For eg: cache-size, this option is same
> across io-cache and quick-read xlator. Now if an user wants to have two
> different values set, we don't have a mechanism for it. What I have done
> here is to use unique names for these redundant options.
> 
> Reviews is highly appreciated and I do think this will be a good
> candidate for 3.7.3.
> 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for http://review.gluster.org/#/c/10262

2015-07-23 Thread Atin Mukherjee
Folks,

Currently in our vme table we have a few option names which are redundant
across different translators. For example cache-size: this option is the same
across the io-cache and quick-read xlators. If a user wants to set two
different values for them, we don't have a mechanism for it. What I have done
here is to use unique names for these redundant options, as illustrated below.
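
As a purely hypothetical illustration (the option names below are invented for
this example and are not taken from the patch), unique names would allow the
two caches to be tuned independently:

gluster volume set <volname> performance.io-cache-size 64MB
gluster volume set <volname> performance.quick-read-cache-size 32MB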

Reviews are highly appreciated, and I do think this will be a good
candidate for 3.7.3.
-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for patch: libglusterfs/syncop: Add xdata to all syncop calls

2015-04-01 Thread Jeff Darcy
> Is it always ok to consider xdata to be valid, even if op_ret < 0?

I would say yes, simply because it's useful.  For example, xdata might
contain extra information explaining the error, or suggesting a next
step.  In NSR, when a non-coordinator receives a request it returns
EREMOTE.  It would be handy for it to return something in xdata to
identify which brick *is* the coordinator, so the client doesn't have
to guess.  It might also be useful to return a transaction ID, even
on error, so that a subsequent retry can be detected as such.  If
xdata is discarded on error, then both the "regardless of error" and
"only on error" use cases aren't satisfied.

> If yes, I will have to update the syncop_*_cbk calls to ref xdata
> if they exist, irrespective of op_ret.
> 
> Also, it can be used in really cool ways, like we can have a
> key called glusterfs.error_origin_xlator set to this->name of the xlator
> where error originated and master xlators (fuse and gfapi) can log / make
> use of it etc.

Great minds think alike.  ;)  There might even be cases where it
would be useful to capture tracebacks or statistics to send back
with an error.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Review request for patch: libglusterfs/syncop: Add xdata to all syncop calls

2015-04-01 Thread Raghavendra Talur

On Tuesday 31 March 2015 09:36 PM, Raghavendra Talur wrote:

> Hi,
>
> I have sent updated patch which adds xdata support to all syncop calls.
> It adds xdata in both request and response path of syncop.
>
> Considering that this patch has changes in many files,
> I request a quick review and merge to avoid rebase issues.
>
> Patch link http://review.gluster.org/#/c/9859/
> Bug Id: https://bugzilla.redhat.com/show_bug.cgi?id=1158621
>
> Thanks,
> Raghavendra Talur


Question regarding validity of xdata when op_ret < 0.

In this patch set, the syncop_*_cbk callbacks have this form:


int
syncop_rmdir_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                  int op_ret, int op_errno, struct iatt *preparent,
                  struct iatt *postparent, dict_t *xdata)
{
        struct syncargs *args = NULL;

        args = cookie;

        args->op_ret   = op_ret;
        args->op_errno = op_errno;

        if (op_ret >= 0) {
                if (xdata)
                        args->xdata = dict_ref (xdata);
        }

        __wake (args);

        return 0;
}


whereas the call-stub has it like this:

call_stub_t *
fop_rmdir_cbk_stub (call_frame_t *frame, fop_rmdir_cbk_t fn,
                    int32_t op_ret, int32_t op_errno,
                    struct iatt *preparent, struct iatt *postparent,
                    dict_t *xdata)
{
        call_stub_t *stub = NULL;

        GF_VALIDATE_OR_GOTO ("call-stub", frame, out);

        stub = stub_new (frame, 0, GF_FOP_RMDIR);
        GF_VALIDATE_OR_GOTO ("call-stub", stub, out);

        stub->fn_cbk.rmdir = fn;
        stub->args_cbk.op_ret = op_ret;
        stub->args_cbk.op_errno = op_errno;
        if (preparent)
                stub->args_cbk.preparent = *preparent;
        if (postparent)
                stub->args_cbk.postparent = *postparent;
        if (xdata)
                stub->args_cbk.xdata = dict_ref (xdata);
out:
        return stub;
}


The difference is in when xdata is considered to be valid:
the call-stub considers it valid irrespective of the op_ret value.

Is it always ok to consider xdata to be valid, even if op_ret < 0?

If yes, I will have to update the syncop_*_cbk callbacks to ref xdata
whenever it exists, irrespective of op_ret.
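
For illustration, a minimal sketch (not a final patch) of how such an updated
callback could look, mirroring the call-stub behaviour shown above:

int
syncop_rmdir_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                  int op_ret, int op_errno, struct iatt *preparent,
                  struct iatt *postparent, dict_t *xdata)
{
        struct syncargs *args = cookie;

        args->op_ret   = op_ret;
        args->op_errno = op_errno;

        /* take the ref whenever xdata is present, even when op_ret < 0,
         * so that error details carried in xdata reach the caller */
        if (xdata)
                args->xdata = dict_ref (xdata);

        __wake (args);

        return 0;
}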

Also, it can be used in really cool ways: for example, we could have a key
called glusterfs.error_origin_xlator set to the this->name of the xlator
where the error originated, and the master xlators (fuse and gfapi) could log
it or otherwise make use of it.

Thanks,
Raghavendra Talur



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for patch: libglusterfs/syncop: Add xdata to all syncop calls

2015-03-31 Thread Raghavendra Talur

Hi,

I have sent an updated patch which adds xdata support to all syncop calls.
It adds xdata in both the request and response paths of syncop.

Considering that this patch has changes in many files,
I request a quick review and merge to avoid rebase issues.

Patch link http://review.gluster.org/#/c/9859/
Bug Id: https://bugzilla.redhat.com/show_bug.cgi?id=1158621

Thanks,
Raghavendra Talur

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for http://review.gluster.com/#/c/8244/

2014-08-25 Thread Krishnan Parthasarathi
All,

This patch was part of the glusterfsiostat GSOC effort[1].
The GSOC project was successfully completed, thanks to Vipul.
This is a first for the Gluster community. It would be great
if we could merge this into our repository and mark the successful
completion of our first GSOC proposal!

[1] - 
http://www.google-melange.com/gsoc/project/details/google/gsoc2014/vipulnayyar/56868772
[2] - https://forge.gluster.org/glusterfsiostat

cheers,
KP 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for http://review.gluster.org/#/c/8285/

2014-07-30 Thread Krishnan Parthasarathi
All,

This patch is useful for debugging network-related issues in GlusterFS,
especially in mount processes. Any review help is greatly appreciated.

~KP
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel