Re: [Gluster-devel] crash in tests/bugs/core/bug-1432542-mpx-restart-crash.t

2017-07-13 Thread Nigel Babu
>
> Maybe we need to upgrade the version of gdb on these machines, because it
> didn't seem able to get the backtrace but the version I'm using (7.12.50)
> did just fine.  The first few frames of thread 1 above the signal handler
> look like this.
>

I'd like to move our regression infra to CentOS 7 in the near future :)

-- 
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] crash in tests/bugs/core/bug-1432542-mpx-restart-crash.t

2017-07-13 Thread Jeff Darcy



On Thu, Jul 13, 2017, at 01:10 PM, Pranith Kumar Karampuri wrote:
> I just observed that 
> https://build.gluster.org/job/centos6-regression/5433/consoleFull failed 
> because of this .t failure.
Maybe we need to upgrade the version of gdb on these machines, because it 
didn't seem able to get the backtrace but the version I'm using (7.12.50) did 
just fine.  The first few frames of thread 1 above the signal handler look like 
this.
#8  0x7f9b642d8872 in fini_db (_conn_node=0x7f9b528d2e50)
    at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/gfdb/gfdb_data_store.c:326
#9  0x7f9b644ffb8e in notify (this=0x7f9b524795e0, event=9, data=0x7f9b5247bb20)
    at /home/jenkins/root/workspace/centos6-regression/xlators/features/changetimerecorder/src/changetimerecorder.c:2313
#10 0x7f9b7280c04f in xlator_notify (xl=0x7f9b524795e0, event=9, data=0x7f9b5247bb20)
    at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/xlator.c:566
#11 0x7f9b728cf1ec in default_notify (this=0x7f9b5247bb20, event=9, data=0x7f9b5247cf90)
    at defaults.c:3151
#12 0x7f9b5fded58d in notify (this=0x7f9b5247bb20, event=9, data=0x7f9b5247cf90)
    at /home/jenkins/root/workspace/centos6-regression/xlators/features/changelog/src/changelog.c:2307

Based on that, I think I'd start looking at changetimerecorder/libgfdb first.  
IIRC we've had some init/fini races there before.
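
To make that race class concrete, the defensive fini that usually fixes these looks something like the sketch below -- purely illustrative, with invented names (ctr_priv_example_t, db_initialized) and an assumed fini_db() signature; it is not the actual changetimerecorder code.

/* Hypothetical sketch -- not the real changetimerecorder code.  It
 * illustrates the init/fini race class: fini must tolerate running
 * before init completed, or running twice during graph teardown. */

extern int fini_db (void *conn_node);   /* assumed signature for this sketch */

typedef struct {
        void *db_conn;          /* connection node handed to fini_db()  */
        int   db_initialized;   /* set only after init fully succeeded  */
} ctr_priv_example_t;

static int
ctr_fini_db_safe (ctr_priv_example_t *priv)
{
        int ret = 0;

        /* If init never completed (or fini already ran), there is nothing
         * to tear down -- bail out instead of touching a stale handle. */
        if (!priv || !priv->db_initialized || !priv->db_conn)
                return 0;

        ret = fini_db (priv->db_conn);
        if (ret == 0) {
                priv->db_conn        = NULL;
                priv->db_initialized = 0;
        }
        return ret;
}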
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] crash in tests/bugs/core/bug-1432542-mpx-restart-crash.t

2017-07-13 Thread Pranith Kumar Karampuri
I just observed that
https://build.gluster.org/job/centos6-regression/5433/consoleFull failed
because of this .t failure.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-13 Thread Ankireddypalle Reddy
Thanks Pranith. We are waiting for a downtime window on our production setup. Will
update you once we are able to apply this there.

Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, July 13, 2017 4:13 AM
To: Ankireddypalle Reddy
Cc: Sanoj Unnikrishnan; Gluster Devel (gluster-devel@gluster.org); 
gluster-us...@gluster.org
Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost

Ram,
  I sent https://review.gluster.org/17765 to fix the possibility in bulk 
removexattr. But I am not sure if this is indeed the reason for this issue.

On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy 
> wrote:
Thanks for the swift turn around. Will try this out and let you know.

Thanks and Regards,
Ram
From: Pranith Kumar Karampuri 
[mailto:pkara...@redhat.com]
Sent: Monday, July 10, 2017 8:31 AM
To: Sanoj Unnikrishnan
Cc: Ankireddypalle Reddy; Gluster Devel 
(gluster-devel@gluster.org); 
gluster-us...@gluster.org

Subject: Re: [Gluster-devel] gfid and volume-id extended attributes lost

Ram,
  If you see it again, you can use this. I am going to send out a patch for 
the code path which can lead to removal of gfid/volume-id tomorrow.

On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan 
> wrote:
Please use the systemtap script
(https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check
which process is invoking the removexattr calls.
It prints the pid, tid and arguments of all removexattr calls.
I have checked for these fops at the protocol/client and posix translators.

To run the script:
1) install systemtap and dependencies.
2) install glusterfs-debuginfo
3) change the path of the translator in the systemtap script to appropriate 
values for your system
(change "/usr/lib64/glusterfs/3.12dev/xlator/protocol/client.so" and 
"/usr/lib64/glusterfs/3.12dev/xlator/storage/posix.so")
4) run the script as follows
#stap -v fop_trace.stp

The output would look like this; additionally, arguments will also be dumped if
glusterfs-debuginfo is installed (I had not done that here).
pid-958: 0 glusterfsd(3893):->posix_setxattr
pid-958:47 glusterfsd(3893):<-posix_setxattr
pid-966: 0 glusterfsd(5033):->posix_setxattr
pid-966:57 glusterfsd(5033):<-posix_setxattr
pid-1423: 0 glusterfs(1431):->client_setxattr
pid-1423:37 glusterfs(1431):<-client_setxattr
pid-1423: 0 glusterfs(1431):->client_setxattr
pid-1423:41 glusterfs(1431):<-client_setxattr
Regards,
Sanoj



On Mon, Jul 10, 2017 at 2:56 PM, Sanoj Unnikrishnan 
> wrote:
@Pranith, yes. We can get the pid on every removexattr call and also print the
backtrace of the glusterfsd process when the removexattr is triggered.
I will write the script and reply back.

On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri 
> wrote:
Ram,
   As per the code, self-heal was the only candidate which *can* do it. 
Could you check logs of self-heal daemon and the mount to check if there are 
any metadata heals on root?
+Sanoj
Sanoj,
   Is there any systemtap script we can use to detect which process is 
removing these xattrs?

On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy 
> wrote:
We lost the attributes on all the bricks on servers glusterfs2 and glusterfs3 
again.

[root@glusterfs2 Log_Files]# gluster volume info

Volume Name: StoragePool
Type: Distributed-Disperse
Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
Status: Started
Number of Bricks: 20 x (2 + 1) = 60
Transport-type: tcp
Bricks:
Brick1: glusterfs1sds:/ws/disk1/ws_brick
Brick2: glusterfs2sds:/ws/disk1/ws_brick
Brick3: glusterfs3sds:/ws/disk1/ws_brick
Brick4: glusterfs1sds:/ws/disk2/ws_brick
Brick5: glusterfs2sds:/ws/disk2/ws_brick
Brick6: glusterfs3sds:/ws/disk2/ws_brick
Brick7: glusterfs1sds:/ws/disk3/ws_brick
Brick8: glusterfs2sds:/ws/disk3/ws_brick
Brick9: glusterfs3sds:/ws/disk3/ws_brick
Brick10: glusterfs1sds:/ws/disk4/ws_brick
Brick11: glusterfs2sds:/ws/disk4/ws_brick
Brick12: glusterfs3sds:/ws/disk4/ws_brick
Brick13: glusterfs1sds:/ws/disk5/ws_brick
Brick14: glusterfs2sds:/ws/disk5/ws_brick
Brick15: glusterfs3sds:/ws/disk5/ws_brick
Brick16: glusterfs1sds:/ws/disk6/ws_brick
Brick17: glusterfs2sds:/ws/disk6/ws_brick
Brick18: glusterfs3sds:/ws/disk6/ws_brick
Brick19: glusterfs1sds:/ws/disk7/ws_brick
Brick20: glusterfs2sds:/ws/disk7/ws_brick
Brick21: glusterfs3sds:/ws/disk7/ws_brick
Brick22: glusterfs1sds:/ws/disk8/ws_brick
Brick23: glusterfs2sds:/ws/disk8/ws_brick
Brick24: glusterfs3sds:/ws/disk8/ws_brick
Brick25: glusterfs4sds.commvault.com:/ws/disk1/ws_brick
Brick26: glusterfs5sds.commvault.com:/ws/disk1/ws_brick
Brick27: 

[Gluster-devel] Coverity covscan for 2017-07-13-f367671d (master branch)

2017-07-13 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-07-13-f367671d
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] http://bugs.cloud.gluster.org is functional again

2017-07-13 Thread Niels de Vos
On Thu, Jul 13, 2017 at 10:12:32AM +0200, Niels de Vos wrote:
> For anyone that helps with the Bug Triaging, the failure (it was not
> updating its json data file) of http://bugs.cloud.gluster.org/ [0] has
> been annoying. I finally took the time to test a one-line change [1],
> and it seems to be working.
> 
> Please send patches in case something is broken :-)

Hmm, it looks like the "Review Status" column is not correct. It should
display the number of patches that have been M(erged), A(bandoned) and
N(ew). This should get retrieved/calculated through
get_reviews_from_bug() and get_review_status() in gluster-bugs.py [2].

If anyone feels like looking into this, create a GitHub issue and send a
patch.

Thanks!
Niels


> Cheers,
> Niels
> 
> 
> 0. https://github.com/gluster/gluster-bugs-webui/issues/9
> 1. https://github.com/gluster/gluster-bugs-webui/pull/10/files

2. https://github.com/gluster/gluster-bugs-webui/blob/master/gluster-bugs.py#L41



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] gfid and volume-id extended attributes lost

2017-07-13 Thread Pranith Kumar Karampuri
Ram,
  I sent https://review.gluster.org/17765 to fix the possibility in
bulk removexattr. But I am not sure if this is indeed the reason for this
issue.

On Mon, Jul 10, 2017 at 6:30 PM, Ankireddypalle Reddy 
wrote:

> Thanks for the swift turn around. Will try this out and let you know.
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Monday, July 10, 2017 8:31 AM
> *To:* Sanoj Unnikrishnan
> *Cc:* Ankireddypalle Reddy; Gluster Devel (gluster-devel@gluster.org);
> gluster-us...@gluster.org
>
> *Subject:* Re: [Gluster-devel] gfid and volume-id extended attributes lost
>
>
>
> Ram,
>
>   If you see it again, you can use this. I am going to send out a
> patch for the code path which can lead to removal of gfid/volume-id
> tomorrow.
>
>
>
> On Mon, Jul 10, 2017 at 5:19 PM, Sanoj Unnikrishnan 
> wrote:
>
> Please use the systemtap script
> (https://paste.fedoraproject.org/paste/EGDa0ErwX0LV3y-gBYpfNA) to check
> which process is invoking the removexattr calls.
> It prints the pid, tid and arguments of all removexattr calls.
>
> I have checked for these fops at the protocol/client and posix translators.
>
>
> To run the script ..
>
> 1) install systemtap and dependencies.
> 2) install glusterfs-debuginfo
>
> 3) change the path of the translator in the systemtap script to
> appropriate values for your system
>
> (change "/usr/lib64/glusterfs/3.12dev/xlator/protocol/client.so" and
> "/usr/lib64/glusterfs/3.12dev/xlator/storage/posix.so")
>
> 4) run the script as follows
>
> #stap -v fop_trace.stp
>
> The output would look like this; additionally, arguments will also be
> dumped if glusterfs-debuginfo is installed (I had not done that here).
> pid-958: 0 glusterfsd(3893):->posix_setxattr
> pid-958:47 glusterfsd(3893):<-posix_setxattr
> pid-966: 0 glusterfsd(5033):->posix_setxattr
> pid-966:57 glusterfsd(5033):<-posix_setxattr
> pid-1423: 0 glusterfs(1431):->client_setxattr
> pid-1423:37 glusterfs(1431):<-client_setxattr
> pid-1423: 0 glusterfs(1431):->client_setxattr
> pid-1423:41 glusterfs(1431):<-client_setxattr
>
> Regards,
>
> Sanoj
>
>
>
>
>
>
>
> On Mon, Jul 10, 2017 at 2:56 PM, Sanoj Unnikrishnan 
> wrote:
>
> @Pranith, yes. We can get the pid on every removexattr call and also
> print the backtrace of the glusterfsd process when the removexattr is
> triggered.
>
> I will write the script and reply back.
>
>
>
> On Sat, Jul 8, 2017 at 7:06 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
> Ram,
>
>As per the code, self-heal was the only candidate which *can* do
> it. Could you check logs of self-heal daemon and the mount to check if
> there are any metadata heals on root?
>
> +Sanoj
>
> Sanoj,
>
>Is there any systemtap script we can use to detect which process is
> removing these xattrs?
>
>
>
> On Sat, Jul 8, 2017 at 2:58 AM, Ankireddypalle Reddy 
> wrote:
>
> We lost the attributes on all the bricks on servers glusterfs2 and
> glusterfs3 again.
>
>
>
> [root@glusterfs2 Log_Files]# gluster volume info
>
>
>
> Volume Name: StoragePool
>
> Type: Distributed-Disperse
>
> Volume ID: 149e976f-4e21-451c-bf0f-f5691208531f
>
> Status: Started
>
> Number of Bricks: 20 x (2 + 1) = 60
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: glusterfs1sds:/ws/disk1/ws_brick
>
> Brick2: glusterfs2sds:/ws/disk1/ws_brick
>
> Brick3: glusterfs3sds:/ws/disk1/ws_brick
>
> Brick4: glusterfs1sds:/ws/disk2/ws_brick
>
> Brick5: glusterfs2sds:/ws/disk2/ws_brick
>
> Brick6: glusterfs3sds:/ws/disk2/ws_brick
>
> Brick7: glusterfs1sds:/ws/disk3/ws_brick
>
> Brick8: glusterfs2sds:/ws/disk3/ws_brick
>
> Brick9: glusterfs3sds:/ws/disk3/ws_brick
>
> Brick10: glusterfs1sds:/ws/disk4/ws_brick
>
> Brick11: glusterfs2sds:/ws/disk4/ws_brick
>
> Brick12: glusterfs3sds:/ws/disk4/ws_brick
>
> Brick13: glusterfs1sds:/ws/disk5/ws_brick
>
> Brick14: glusterfs2sds:/ws/disk5/ws_brick
>
> Brick15: glusterfs3sds:/ws/disk5/ws_brick
>
> Brick16: glusterfs1sds:/ws/disk6/ws_brick
>
> Brick17: glusterfs2sds:/ws/disk6/ws_brick
>
> Brick18: glusterfs3sds:/ws/disk6/ws_brick
>
> Brick19: glusterfs1sds:/ws/disk7/ws_brick
>
> Brick20: glusterfs2sds:/ws/disk7/ws_brick
>
> Brick21: glusterfs3sds:/ws/disk7/ws_brick
>
> Brick22: glusterfs1sds:/ws/disk8/ws_brick
>
> Brick23: glusterfs2sds:/ws/disk8/ws_brick
>
> Brick24: glusterfs3sds:/ws/disk8/ws_brick
>
> Brick25: glusterfs4sds.commvault.com:/ws/disk1/ws_brick
>
> Brick26: glusterfs5sds.commvault.com:/ws/disk1/ws_brick
>
> Brick27: glusterfs6sds.commvault.com:/ws/disk1/ws_brick
>
> Brick28: glusterfs4sds.commvault.com:/ws/disk10/ws_brick
>
> Brick29: glusterfs5sds.commvault.com:/ws/disk10/ws_brick
>
> Brick30: glusterfs6sds.commvault.com:/ws/disk10/ws_brick
>
> Brick31: glusterfs4sds.commvault.com:/ws/disk11/ws_brick
>
> Brick32: glusterfs5sds.commvault.com:/ws/disk11/ws_brick
>
> Brick33: 

[Gluster-devel] http://bugs.cloud.gluster.org is functional again

2017-07-13 Thread Niels de Vos
For anyone that helps with the Bug Triaging, the failure (it was not
updating its json data file) of http://bugs.cloud.gluster.org/ [0] has
been annoying. I finally took the time to test a one-line change [1],
and it seems to be working.

Please send patches in case something is broken :-)

Cheers,
Niels


0. https://github.com/gluster/gluster-bugs-webui/issues/9
1. https://github.com/gluster/gluster-bugs-webui/pull/10/files


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Taehwa Lee
The issue is almost the same as mine.

It looks better than my suggestion.


But it is planned for Gluster 4.0, isn't it?

Can I follow and develop this issue for 3.10 and master?


I want to know what you need and what I can do.


-
Taehwa Lee
Gluesys Co.,Ltd.
alghost@gmail.com
+82-10-3420-6114, +82-70-8785-6591
-

> 2017. 7. 13. 4:18 PM, Amar Tumballi wrote:
> 
> Just by looking at the need, it looked like more of 
> https://github.com/gluster/glusterfs/issues/236 
> 
> 
> Will the above change in posix itself be better?
> 
> -Amar
> 
> On Thu, Jul 13, 2017 at 12:04 PM, Taehwa Lee  > wrote:
> I know that when the free capacity of a brick is less than min-free-disk,
> DHT creates a link file.
> 
> But when the free capacity of all subvols is less than min-free-disk,
> it just searches for the best subvol and does the fop on that subvol.
> 
> I think a restriction is needed as an option.
> 
> 
> So I suggest that when the free capacity of all subvols is less than
> min-free-disk, we reject create and mknod fops (using min-free-disk).
> 
> 
> Like the code below (of course, the function
> dht_free_disk_available_subvol would also need to be modified):
> 
> avail_subvol = dht_free_disk_available_subvol (this, subvol, 
> local);
> 
> if (!avail_subvol) {
> gf_msg_debug (this->name, 0,
> "failed to create %s on %s",
> loc->path, subvol->name);
> 
> DHT_STACK_UNWIND (mknod, frame, -1, ENOSPC,
> NULL, NULL, NULL, NULL, NULL);
> 
> goto out;
> }
> else if (avail_subvol != subvol) {
> local->params = dict_ref (params);
> local->rdev = rdev;
> local->mode = mode;
> local->umask = umask;
> local->cached_subvol = avail_subvol;
> local->hashed_subvol = subvol;
> 
> gf_msg_debug (this->name, 0,
>   "creating %s on %s (link at %s)", 
> loc->path,
>   avail_subvol->name, subvol->name);
> 
> dht_linkfile_create (frame,
>  dht_mknod_linkfile_create_cbk,
>  this, avail_subvol, subvol, loc);
> 
> goto out;
> }
> 
> 
> 
> - Regards, Taehwa Lee -
> 
> -
> 이 태 화
> Taehwa Lee
> Gluesys Co.,Ltd.
> alghost@gmail.com 
> 010-3420-6114, 070-8785-6591
> -
> 
>> 2017. 7. 13. 3:22 PM, Nithya Balachandran wrote:
>> 
>> 
>> 
>> On 13 July 2017 at 11:46, Pranith Kumar Karampuri > > wrote:
>> 
>> 
>> On Thu, Jul 13, 2017 at 10:11 AM, Taehwa Lee > > wrote:
>> Thank you for response quickly
>> 
>> 
>> I went through dht_get_du_info before I start developing this.
>> 
>> at that time, I think that this functionality should be independent module.
>> 
>> 
>> so, I will move this into DHT without new statfs.
>> 
>> 
>> Can you provide the details of your usecase? I ask because we already have 
>> dht redirecting creates to other subvols if a particular brick's usage 
>> crosses a certain value.
>> 
>>  
>> Let's hear from dht folks also what they think about this change before you
>> make modifications. I included some of the dht folks to the thread.
>>  
>> 
>> and then, will suggest it on gerrit ! 
>> 
>> 
>> Thank you so much.
>> 
>> 
>> -
>> Taehwa Lee
>> Gluesys Co.,Ltd.
>> alghost@gmail.com 
>> +82-10-3420-6114, +82-70-8785-6591
>> -
>> 
>>> 2017. 7. 13. 1:06 PM, Pranith Kumar Karampuri wrote:
>>> 
>>> hey,
>>>   I went through the patch. I see that statfs is always wound for 
>>> create fop. So number of network operations increase and performance will 
>>> be less even in normal case. I think similar functionality is in DHT, may 
>>> be you should take a look at that?
>>> 
>>> Check dht_get_du_info() which is used by dht_mknod(). It keeps refreshing 
>>> this info every X seconds. I will let DHT guys comment a bit more about 
>>> this. One more thing to check is if we can have just one implementation 
>>> that satisfied everyone's requirements. i.e. move out this functionality 
>>> from DHT to this xlator or, move the 

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Amar Tumballi
Just by looking at the need, it looked like more of
https://github.com/gluster/glusterfs/issues/236

Will the above change in posix itself be better?

-Amar

On Thu, Jul 13, 2017 at 12:04 PM, Taehwa Lee  wrote:

> I know that when the free capacity of a brick is less than min-free-disk,
> DHT creates a link file.
>
> But when the free capacity of all subvols is less than min-free-disk,
> it just searches for the best subvol and does the fop on that subvol.
>
> I think a restriction is needed as an option.
>
>
> So I suggest that when the free capacity of all subvols is less than
> min-free-disk, we reject create and mknod fops (using min-free-disk).
>
>
> Like the code below (of course, the function
> dht_free_disk_available_subvol would also need to be modified):
>
> avail_subvol = dht_free_disk_available_subvol (this, subvol, local);
>
> if (!avail_subvol) {
>         gf_msg_debug (this->name, 0,
>                       "failed to create %s on %s",
>                       loc->path, subvol->name);
>
>         DHT_STACK_UNWIND (mknod, frame, -1, ENOSPC,
>                           NULL, NULL, NULL, NULL, NULL);
>
>         goto out;
> } else if (avail_subvol != subvol) {
>         local->params = dict_ref (params);
>         local->rdev = rdev;
>         local->mode = mode;
>         local->umask = umask;
>         local->cached_subvol = avail_subvol;
>         local->hashed_subvol = subvol;
>
>         gf_msg_debug (this->name, 0,
>                       "creating %s on %s (link at %s)",
>                       loc->path, avail_subvol->name, subvol->name);
>
>         dht_linkfile_create (frame,
>                              dht_mknod_linkfile_create_cbk,
>                              this, avail_subvol, subvol, loc);
>
>         goto out;
> }
>
>
>
> - Regards, Taehwa Lee -
>
> -
> 이 태 화
> Taehwa Lee
> Gluesys Co.,Ltd.
> alghost@gmail.com
> 010-3420-6114, 070-8785-6591
> -
>
> 2017. 7. 13. 3:22 PM, Nithya Balachandran wrote:
>
>
>
> On 13 July 2017 at 11:46, Pranith Kumar Karampuri 
> wrote:
>
>>
>>
>> On Thu, Jul 13, 2017 at 10:11 AM, Taehwa Lee 
>> wrote:
>>
>>> Thank you for response quickly
>>>
>>>
>>> I went through dht_get_du_info before I start developing this.
>>>
>>> at that time, I think that this functionality should be independent
>>> module.
>>>
>>>
>>> so, I will move this into DHT without new statfs.
>>>
>>
>>
> Can you provide the details of your usecase? I ask because we already have
> dht redirecting creates to other subvols if a particular brick's usage
> crosses a certain value.
>
>
>
>> Let's hear from dht folks also what they think about this change before
>> you make modifications. I included some of the dht folks to the thread.
>>
>>
>>>
>>> and then, will suggest it on gerrit !
>>>
>>>
>>> Thank you so much.
>>>
>>>
>>> -
>>> Taehwa Lee
>>> Gluesys Co.,Ltd.
>>> alghost@gmail.com
>>> +82-10-3420-6114, +82-70-8785-6591
>>> -
>>>
>>> 2017. 7. 13. 1:06 PM, Pranith Kumar Karampuri wrote:
>>>
>>> hey,
>>>   I went through the patch. I see that statfs is always wound for
>>> create fop. So number of network operations increase and performance will
>>> be less even in normal case. I think similar functionality is in DHT, may
>>> be you should take a look at that?
>>>
>>> Check dht_get_du_info() which is used by dht_mknod(). It keeps
>>> refreshing this info every X seconds. I will let DHT guys comment a bit
>>> more about this. One more thing to check is if we can have just one
>>> implementation that satisfied everyone's requirements. i.e. move out this
>>> functionality from DHT to this xlator or, move the functionality of this
>>> xlator into DHT.
>>>
>>>
>>> On Thu, Jul 13, 2017 at 8:18 AM, Taehwa Lee 
>>> wrote:
>>>
 Hi all,

 I’ve been developing a xlator that create is rejected when used
 capacity of a volume higher than threshold.


 the reason why I’m doing is that I got problems when LV is used fully.

 this patch is in the middle of develop.

 just I want to know whether my approach is pretty correct to satisfy my
 requirement.

 so, when you guys have a little spare time, please review my patch and
 tell me WHATEVER you’re thinking.


 and If you guys think that it is useful for glusterfs, I’m gonna do
 process to merge into glusterfs.


 thanks in advance


Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Taehwa Lee
I know that when the free capacity of a brick is less than min-free-disk,
DHT creates a link file.

But when the free capacity of all subvols is less than min-free-disk,
it just searches for the best subvol and does the fop on that subvol.

I think a restriction is needed as an option.


So I suggest that when the free capacity of all subvols is less than
min-free-disk, we reject create and mknod fops (using min-free-disk).


Like the code below (of course, the function
dht_free_disk_available_subvol would also need to be modified):

        avail_subvol = dht_free_disk_available_subvol (this, subvol, local);

        if (!avail_subvol) {
                gf_msg_debug (this->name, 0,
                              "failed to create %s on %s",
                              loc->path, subvol->name);

                DHT_STACK_UNWIND (mknod, frame, -1, ENOSPC,
                                  NULL, NULL, NULL, NULL, NULL);

                goto out;
        } else if (avail_subvol != subvol) {
                local->params = dict_ref (params);
                local->rdev = rdev;
                local->mode = mode;
                local->umask = umask;
                local->cached_subvol = avail_subvol;
                local->hashed_subvol = subvol;

                gf_msg_debug (this->name, 0,
                              "creating %s on %s (link at %s)",
                              loc->path, avail_subvol->name, subvol->name);

                dht_linkfile_create (frame,
                                     dht_mknod_linkfile_create_cbk,
                                     this, avail_subvol, subvol, loc);

                goto out;
        }
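
In rough sketch form (not an actual patch; dht_subvol_has_space() and the selection policy below are placeholders for the real min-free-disk check), the idea is that the subvol-selection helper would return NULL when every subvol is below min-free-disk, so the callers unwind with ENOSPC exactly as in the !avail_subvol branch above:

/* Illustrative only -- assumed to live alongside the DHT code
 * (dht-common.h provides dht_conf_t and xlator_t).  The real code
 * also prefers the subvol with the most free space; this sketch
 * just takes the first one that has room. */
#include "dht-common.h"

int dht_subvol_has_space (dht_conf_t *conf, xlator_t *subvol); /* hypothetical helper */

static xlator_t *
dht_pick_subvol_or_enospc (xlator_t *this, xlator_t *hashed)
{
        dht_conf_t *conf = this->private;
        xlator_t   *best = NULL;
        int         i    = 0;

        for (i = 0; i < conf->subvolume_cnt; i++) {
                xlator_t *sv = conf->subvolumes[i];

                if (!dht_subvol_has_space (conf, sv))
                        continue;               /* below min-free-disk */

                if (sv == hashed)
                        return hashed;          /* hashed subvol still has room */

                if (!best)
                        best = sv;              /* remember a usable fallback */
        }

        /* NULL means "no subvol has space"; the caller can then unwind
         * with ENOSPC instead of forcing the create onto a full brick. */
        return best;
}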



- Regards, Taehwa Lee -

-
이 태 화
Taehwa Lee
Gluesys Co.,Ltd.
alghost@gmail.com
010-3420-6114, 070-8785-6591
-

> 2017. 7. 13. 3:22 PM, Nithya Balachandran wrote:
> 
> 
> 
> On 13 July 2017 at 11:46, Pranith Kumar Karampuri  > wrote:
> 
> 
> On Thu, Jul 13, 2017 at 10:11 AM, Taehwa Lee  > wrote:
> Thank you for response quickly
> 
> 
> I went through dht_get_du_info before I start developing this.
> 
> at that time, I think that this functionality should be independent module.
> 
> 
> so, I will move this into DHT without new statfs.
> 
> 
> Can you provide the details of your usecase? I ask because we already have 
> dht redirecting creates to other subvols if a particular brick's usage 
> crosses a certain value.
> 
>  
> Let's hear from dht folks also what they think about this change before you
> make modifications. I included some of the dht folks to the thread.
>  
> 
> and then, will suggest it on gerrit ! 
> 
> 
> Thank you so much.
> 
> 
> -
> Taehwa Lee
> Gluesys Co.,Ltd.
> alghost@gmail.com 
> +82-10-3420-6114, +82-70-8785-6591
> -
> 
>> 2017. 7. 13. 1:06 PM, Pranith Kumar Karampuri wrote:
>> 
>> hey,
>>   I went through the patch. I see that statfs is always wound for create 
>> fop. So number of network operations increase and performance will be less 
>> even in normal case. I think similar functionality is in DHT, may be you 
>> should take a look at that?
>> 
>> Check dht_get_du_info() which is used by dht_mknod(). It keeps refreshing 
>> this info every X seconds. I will let DHT guys comment a bit more about 
>> this. One more thing to check is if we can have just one implementation that 
>> satisfied everyone's requirements. i.e. move out this functionality from DHT 
>> to this xlator or, move the functionality of this xlator into DHT.
>> 
>> 
>> On Thu, Jul 13, 2017 at 8:18 AM, Taehwa Lee > > wrote:
>> Hi all, 
>> 
>> I’ve been developing a xlator that create is rejected when used capacity of 
>> a volume higher than threshold.
>> 
>> 
>> the reason why I’m doing is that I got problems when LV is used fully.
>> 
>> this patch is in the middle of develop.
>> 
>> just I want to know whether my approach is pretty correct to satisfy my 
>> requirement.
>> 
>> so, when you guys have a little spare time, please review my patch and tell 
>> me WHATEVER you’re thinking.
>> 
>> 
>> and If you guys think that it is useful for glusterfs, I’m gonna do process 
>> to merge into glusterfs.
>> 
>> 
>> thanks in advance
>> 
>> 
>> 
>> 
>> 
>> -
>> Taehwa Lee
>> Gluesys Co.,Ltd.
>> alghost@gmail.com 
>> +82-10-3420-6114, +82-70-8785-6591
>> -
>> 
>> 
>> 

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Nithya Balachandran
On 13 July 2017 at 11:46, Pranith Kumar Karampuri 
wrote:

>
>
> On Thu, Jul 13, 2017 at 10:11 AM, Taehwa Lee 
> wrote:
>
>> Thank you for response quickly
>>
>>
>> I went through dht_get_du_info before I start developing this.
>>
>> at that time, I think that this functionality should be independent
>> module.
>>
>>
>> so, I will move this into DHT without new statfs.
>>
>
>
Can you provide the details of your use case? I ask because we already have
dht redirecting creates to other subvols if a particular brick's usage
crosses a certain value.
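
For context, the mechanism being referred to works roughly like this: DHT keeps per-subvol disk-usage figures that it refreshes from statfs only periodically, and compares them against min-free-disk when placing new files. The sketch below shows that refresh-and-compare pattern in isolation; the type and function names are invented for illustration and are not the actual dht_get_du_info() code.

/* Illustrative sketch of a "refresh disk-usage info at most every N
 * seconds, then compare against min-free-disk" pattern, roughly what
 * DHT's dht_get_du_info() provides.  All names here are invented. */

#include <sys/statvfs.h>
#include <time.h>

typedef struct {
        const char *brick_path;    /* local brick export path              */
        double      percent_free;  /* cached percentage of free blocks     */
        time_t      last_refresh;  /* when the cache was last filled       */
} du_cache_t;

/* Refresh the cached usage if it is older than refresh_interval seconds. */
static int
du_cache_refresh (du_cache_t *du, int refresh_interval)
{
        struct statvfs buf;
        time_t         now = time (NULL);

        if (du->last_refresh && (now - du->last_refresh) < refresh_interval)
                return 0;                        /* cache is still fresh */

        if (statvfs (du->brick_path, &buf) != 0)
                return -1;

        du->percent_free = (double) buf.f_bavail * 100.0 / (double) buf.f_blocks;
        du->last_refresh = now;
        return 0;
}

/* Should a new file land on this brick, or be redirected elsewhere? */
static int
du_cache_has_space (du_cache_t *du, double min_free_disk_percent)
{
        return du->percent_free >= min_free_disk_percent;
}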



> Let's here from dht folks also what they think about this change before
> you make modifications. I included some of the dht folks to the thread.
>
>
>>
>> and then, will suggest it on gerrit !
>>
>>
>> Thank you so much.
>>
>>
>> -
>> Taehwa Lee
>> Gluesys Co.,Ltd.
>> alghost@gmail.com
>> +82-10-3420-6114, +82-70-8785-6591
>> -
>>
>> 2017. 7. 13. 1:06 PM, Pranith Kumar Karampuri wrote:
>>
>> hey,
>>   I went through the patch. I see that statfs is always wound for
>> create fop. So number of network operations increase and performance will
>> be less even in normal case. I think similar functionality is in DHT, may
>> be you should take a look at that?
>>
>> Check dht_get_du_info() which is used by dht_mknod(). It keeps refreshing
>> this info every X seconds. I will let DHT guys comment a bit more about
>> this. One more thing to check is if we can have just one implementation
>> that satisfied everyone's requirements. i.e. move out this functionality
>> from DHT to this xlator or, move the functionality of this xlator into DHT.
>>
>>
>> On Thu, Jul 13, 2017 at 8:18 AM, Taehwa Lee 
>> wrote:
>>
>>> Hi all,
>>>
>>> I’ve been developing a xlator that create is rejected when used capacity
>>> of a volume higher than threshold.
>>>
>>>
>>> the reason why I’m doing is that I got problems when LV is used fully.
>>>
>>> this patch is in the middle of develop.
>>>
>>> just I want to know whether my approach is pretty correct to satisfy my
>>> requirement.
>>>
>>> so, when you guys have a little spare time, please review my patch and
>>> tell me WHATEVER you’re thinking.
>>>
>>>
>>> and If you guys think that it is useful for glusterfs, I’m gonna do
>>> process to merge into glusterfs.
>>>
>>>
>>> thanks in advance
>>>
>>>
>>>
>>>
>>>
>>> -
>>> Taehwa Lee
>>> Gluesys Co.,Ltd.
>>> alghost@gmail.com
>>> +82-10-3420-6114, +82-70-8785-6591
>>> -
>>>
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>>
>
>
> --
> Pranith
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Pranith Kumar Karampuri
On Thu, Jul 13, 2017 at 10:11 AM, Taehwa Lee  wrote:

> Thank you for the quick response.
>
>
> I went through dht_get_du_info before I started developing this.
>
> At that time, I thought that this functionality should be an independent module.
>
>
> So, I will move this into DHT without a new statfs.
>

Let's hear from dht folks also what they think about this change before you
make modifications. I included some of the dht folks to the thread.


>
> and then, will suggest it on gerrit !
>
>
> Thank you so much.
>
>
> -
> Taehwa Lee
> Gluesys Co.,Ltd.
> alghost@gmail.com
> +82-10-3420-6114, +82-70-8785-6591
> -
>
> 2017. 7. 13. 1:06 PM, Pranith Kumar Karampuri wrote:
>
> hey,
>   I went through the patch. I see that statfs is always wound for the
> create fop, so the number of network operations increases and performance
> will be lower even in the normal case. I think similar functionality is in
> DHT; maybe you should take a look at that?
>
> Check dht_get_du_info(), which is used by dht_mknod(). It keeps refreshing
> this info every X seconds. I will let the DHT guys comment a bit more about
> this. One more thing to check is whether we can have just one implementation
> that satisfies everyone's requirements, i.e. move this functionality out of
> DHT into this xlator, or move the functionality of this xlator into DHT.
>
>
> On Thu, Jul 13, 2017 at 8:18 AM, Taehwa Lee  wrote:
>
>> Hi all,
>>
>> I’ve been developing a xlator that create is rejected when used capacity
>> of a volume higher than threshold.
>>
>>
>> the reason why I’m doing is that I got problems when LV is used fully.
>>
>> this patch is in the middle of develop.
>>
>> just I want to know whether my approach is pretty correct to satisfy my
>> requirement.
>>
>> so, when you guys have a little spare time, please review my patch and
>> tell me WHATEVER you’re thinking.
>>
>>
>> and If you guys think that it is useful for glusterfs, I’m gonna do
>> process to merge into glusterfs.
>>
>>
>> thanks in advance
>>
>>
>>
>>
>>
>> -
>> Taehwa Lee
>> Gluesys Co.,Ltd.
>> alghost@gmail.com
>> +82-10-3420-6114, +82-70-8785-6591
>> -
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Pranith
>
>
>


-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel