Re: [Gluster-devel] Selfheal on mount process (disperse)

2017-11-15 Thread Xavi Hernandez
Hi, On Wed, Nov 15, 2017 at 6:19 AM, jayakrishnan mm wrote: > Hi, > > Glusterfs ver 3.7.10 > Volume : disperse (4+2) > Client on separate machine. > 1 brick offline. > The error happens about 60 seconds after the write starts. When checking the > online brick's >

Re: [Gluster-devel] glusterd crashes on /tests/bugs/replicate/bug-884328.t

2017-12-15 Thread Xavi Hernandez
I've uploaded a patch to fix this problem: https://review.gluster.org/19040 On Fri, Dec 15, 2017 at 11:33 AM, Xavi Hernandez <jaher...@redhat.com> wrote: > I've checked the size of 'gluster volume set help' on current master and > it's 51176 bytes. Only 24 bytes below the size o

Re: [Gluster-devel] glusterd crashes on /tests/bugs/replicate/bug-884328.t

2017-12-15 Thread Xavi Hernandez
act it's a very trivial change), so I'm not sure why it may or may not crash. I'll analyze it. Anyway, that function needs a patch because there's no space limit check before writing to the buffer. Xavi > On Fri, Dec 15, 2017 at 2:23 PM, Xavi Hernandez <jaher...@redhat.com> > wrote:

Re: [Gluster-devel] glusterd crashes on /tests/bugs/replicate/bug-884328.t

2017-12-15 Thread Xavi Hernandez
. I'll send a patch to fix the problem. Xavi On Fri, Dec 15, 2017 at 10:05 AM, Xavi Hernandez <jaher...@redhat.com> wrote: > On Fri, Dec 15, 2017 at 9:57 AM, Atin Mukherjee <amukh...@redhat.com> > wrote: > >> But why doesn't it crash every time if this is the RCA? N

[Gluster-devel] Message id's for components

2017-12-12 Thread Xavi Hernandez
Hi, I've uploaded a patch [1] to change how ranges of message id's are reserved for components and how message id's are defined inside a component. The old method was error prone because adding a new component required defining some macros based on previous macros (in fact there was already an
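
For illustration, a minimal sketch of a range-based scheme of this kind (all names below are hypothetical, not the actual GlusterFS macros): each component reserves a fixed segment of the global message-id space and defines its own ids as offsets inside it, so adding a component no longer requires chaining macros off the previous one.

    /* Hypothetical sketch of range-based message ids; not the real macros. */
    #define MSGID_SEGMENT_SIZE  1000
    #define MSGID_BASE(segment) (100000 + (segment) * MSGID_SEGMENT_SIZE)

    /* Component "foo" owns segment 7; its ids are simple offsets inside it. */
    enum foo_msgid {
        FOO_MSG_START = MSGID_BASE(7),
        FOO_MSG_NO_MEMORY,
        FOO_MSG_INVALID_ARG,
        FOO_MSG_END
    };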

[Gluster-devel] About GF_ASSERT() macro

2017-11-03 Thread Xavi Hernandez
Hi all, I've seen that GF_ASSERT() macro is defined in different ways depending on if we are building in debug mode or not. In debug mode, it's an alias of assert(), but in non-debug mode it simply logs an error message and continues. I think that an assert should be a critical check that
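
For reference, a minimal sketch of such a dual-mode assert macro (a generic illustration, not the actual GlusterFS definition): in debug builds it aborts, in release builds it only logs and continues, which is exactly the difference being discussed.

    #include <assert.h>
    #include <stdio.h>

    #ifdef DEBUG
    #define MY_ASSERT(cond) assert(cond)     /* debug build: abort on failure */
    #else
    #define MY_ASSERT(cond)                                                \
        do {                                                               \
            if (!(cond))                                                   \
                fprintf(stderr, "Assertion failed: %s\n", #cond);          \
        } while (0)                      /* release build: log and continue */
    #endif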

[Gluster-devel] Feature proposal: xlator to optimize heal and rebalance operations

2017-11-02 Thread Xavi Hernandez
Hi all, I've created a new GitHub issue [1] to discuss an idea to optimize self-heal and rebalance operations by not requiring to take a lock during data operations. Any thoughts will be welcome. Regards, Xavi [1] https://github.com/gluster/glusterfs/issues/347

Re: [Gluster-devel] Gluster Summit Discussion: Time taken for regression tests

2017-11-08 Thread Xavi Hernandez
One thing we could do with some tests I know is to remove some of them. EC currently runs the same test on multiple volume configurations (2+1, 3+1, 4+1, 3+2, 4+2, 4+3 and 8+4). I think we could reduce it to two common configurations (2+1 and 4+2) and one or two special configurations (3+1 and/or

[Gluster-devel] String manipulation

2017-11-02 Thread Xavi Hernandez
Hi all, several times I've seen issues with the way strings are handled in many parts of the code. Sometimes it's because of an incorrect use of some functions, like strncat(). Other times it's a missing error-condition check, or a failure to allocate the right amount of memory,
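
As a hedged illustration of the kind of misuse mentioned above (hypothetical code, not a quote from the tree): the size argument of strncat() limits how many bytes are appended, not the total size of the destination, so passing the buffer size can still overflow; a single bounded snprintf() with a checked return value avoids both the overflow and the silent truncation.

    #include <stdio.h>
    #include <string.h>

    void build_name(const char *user_input)
    {
        char buf[16];

        /* Unsafe pattern (shown commented out): sizeof(buf) limits the bytes
         * appended, not the total, so the prefix plus input can overflow buf.
         *
         *   strcpy(buf, "prefix-");
         *   strncat(buf, user_input, sizeof(buf));
         */

        /* Safer: one bounded write, with the result checked for truncation. */
        int ret = snprintf(buf, sizeof(buf), "prefix-%s", user_input);
        if (ret < 0 || (size_t)ret >= sizeof(buf)) {
            /* handle the error or truncation explicitly */
        }
    }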

[Gluster-devel] Proposal for a transaction framework for Gluster

2017-10-26 Thread Xavi Hernandez
Hi, I've opened a github issue [1] to discuss the implementation of a transaction framework that should provide a level of abstraction for xlators that currently use inodelk/entrylk, simplifying its coding and improving performance. Feel free to provide your thoughts. Thanks, Xavi [1]

Re: [Gluster-devel] Simulating some kind of "virtual file"

2018-01-09 Thread Xavi Hernandez
Hi David, adding gluster-devel again. On Tue, Jan 9, 2018 at 4:15 PM, David Spisla <david.spi...@iternity.com> wrote: > Hello Xavi, > > > > *From:* Xavi Hernandez [mailto:jaher...@redhat.com] > *Sent:* Tuesday, January 9, 2018 09:48 > *To:* David Spisla <

Re: [Gluster-devel] Simulating some kind of "virtual file"

2018-01-11 Thread Xavi Hernandez
Hi David, On Wed, Jan 10, 2018 at 3:24 PM, David Spisla <david.spi...@iternity.com> wrote: > Hello Amar, Xavi > > > > *From:* Amar Tumballi [mailto:atumb...@redhat.com] > *Sent:* Wednesday, January 10, 2018 14:16 > *To:* Xavi Hernandez <jaher...@redhat.co

Re: [Gluster-devel] Simulating some kind of "virtual file"

2018-01-10 Thread Xavi Hernandez
Hi David, On Wed, Jan 10, 2018 at 1:42 PM, David Spisla wrote: > > *[David Spisla] I tried this:* > > *char *new_path = malloc(1+len_path-5);* > > *memcpy(new_path, loc->path, len_path-5);* > > *new_path[strlen(new_path)] = '\0';* > > *loc->name = new_path + (len_path
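
For comparison, a corrected sketch of the quoted snippet (the variable names and the 5-byte suffix come from the email; the error handling is an assumption, and in GlusterFS code the allocation would normally go through the GF_MALLOC/GF_FREE wrappers): the original calls strlen() on a buffer that is not yet NUL-terminated, which is undefined behaviour, so the terminator has to be written at a known offset instead.

    /* fragment reusing the email's variables; needs <stdlib.h> and <string.h> */
    size_t copy_len = len_path - 5;          /* strip the 5-character suffix */
    char *new_path = malloc(copy_len + 1);
    if (new_path == NULL)
        return -1;                           /* assumed error convention */
    memcpy(new_path, loc->path, copy_len);
    new_path[copy_len] = '\0';               /* terminate at a known offset */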

Re: [Gluster-devel] Simulating some kind of "virtual file"

2018-01-09 Thread Xavi Hernandez
Hi David, On Tue, Jan 9, 2018 at 9:09 AM, David Spisla wrote: > Dear Gluster Devels, > > at the moment I do some Xlator stuff and I want to know if there is a way > to simulate the existing of a file to the client. It should be a kind of > "virtual file". Here are more

Re: [Gluster-devel] Regression tests time

2018-01-24 Thread Xavi Hernandez
On Wed, Jan 24, 2018 at 3:11 PM, Jeff Darcy <j...@pl.atyp.us> wrote: > > > > On Tue, Jan 23, 2018, at 12:58 PM, Xavi Hernandez wrote: > > I've made some experiments [1] with the time that centos regression takes > to complete. After some changes the time taken t

Re: [Gluster-devel] Regression tests time

2018-01-25 Thread Xavi Hernandez
On Thu, Jan 25, 2018 at 3:03 PM, Jeff Darcy <j...@pl.atyp.us> wrote: > > > > On Wed, Jan 24, 2018, at 9:37 AM, Xavi Hernandez wrote: > > That happens when we use arbitrary delays. If we use an explicit check, it > will work on all systems. > > > You're argui

[Gluster-devel] Race in protocol/client and RPC

2018-01-29 Thread Xavi Hernandez
Hi all, I've identified a race in the RPC layer that caused some spurious disconnections and CHILD_DOWN notifications. The problem happens when protocol/client reconfigures a connection to move from glusterd to glusterfsd. This is done by calling rpc_clnt_reconfig() followed by

Re: [Gluster-devel] Race in protocol/client and RPC

2018-02-01 Thread Xavi Hernandez
I've more time. Xavi On Mon, Jan 29, 2018 at 11:07 PM, Xavi Hernandez <jaher...@redhat.com> wrote: > Hi all, > > I've identified a race in RPC layer that caused some spurious > disconnections and CHILD_DOWN notifications. > > The problem happens when protocol/client r

Re: [Gluster-devel] Race in protocol/client and RPC

2018-02-01 Thread Xavi Hernandez
On Thu, Feb 1, 2018 at 2:48 PM, Shyam Ranganathan <srang...@redhat.com> wrote: > On 02/01/2018 08:25 AM, Xavi Hernandez wrote: > > After having tried several things, it seems that it will be complex to > > solve these races. All attempts to fix them have caused failures in

Re: [Gluster-devel] Regression tests time

2018-01-27 Thread Xavi Hernandez
have to identify another race (probably in RPC also) that is generating unexpected disconnections (or incorrect reconnections). Xavi Regards, Amar On Thu, Jan 25, 2018 at 8:07 PM, Xavi Hernandez <jaher...@redhat.com> wrote: > On Thu, Jan 25, 2018 at 3:03 PM, Jeff Darcy <j...@pl.atyp.us&

[Gluster-devel] gNFS service management from glusterd

2018-02-21 Thread Xavi Hernandez
Hi all, currently glusterd sends a SIGKILL to stop gNFS, while all other services are first stopped with a SIGTERM signal (this can be seen in the glusterd_svc_stop() function of the mgmt/glusterd xlator). The question is why gNFS cannot be stopped with SIGTERM like all other services. Using SIGKILL blindly
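
The usual argument for SIGTERM is that it gives the service a chance to clean up (flush data, unregister, release locks) before exiting, with SIGKILL only as a last resort. A hedged sketch of that generic stop sequence (illustrative only, not the glusterd_svc_stop() code):

    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int stop_service(pid_t pid, int grace_seconds)
    {
        if (kill(pid, SIGTERM) != 0)
            return (errno == ESRCH) ? 0 : -1;  /* already gone, or a real error */

        for (int i = 0; i < grace_seconds; i++) {
            sleep(1);                          /* give it time to exit cleanly */
            if (kill(pid, 0) != 0 && errno == ESRCH)
                return 0;                      /* exited on its own */
        }
        return kill(pid, SIGKILL);             /* still alive: force it */
    }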

Re: [Gluster-devel] [features/locks] Fetching lock info in lookup

2018-06-20 Thread Xavi Hernandez
On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa wrote: > Krutika, > > This patch doesn't seem to be getting counts per domain, like number of > inodelks or entrylks acquired in a domain "xyz". Am I right? If per domain > stats are not available, passing interested domains in xdata_req would

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Xavi Hernandez
On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote: > > > On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee > wrote: > >> I just went through the nightly regression report of brick mux runs and >> here's what I can summarize. >> >> >>

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Xavi Hernandez
> wrote: >> >>> >>> >>> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez >>> wrote: >>> >>>> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee >>>> wrote: >>>> >>>>> >>>>> >>&

Re: [Gluster-devel] [Gluster-infra] bug-1432542-mpx-restart-crash.t failing

2018-07-09 Thread Xavi Hernandez
On Mon, Jul 9, 2018 at 11:14 AM Karthik Subrahmanya wrote: > Hi Deepshikha, > > Are you looking into this failure? I can still see this happening for all > the regression runs. > I've executed the failing script on my laptop and all tests finish relatively fast. What seems to take time is the

[Gluster-devel] Regression tests time

2018-01-23 Thread Xavi Hernandez
Hi, I've made some experiments [1] with the time that centos regression takes to complete. After some changes, the time taken to run a full regression has dropped between 2.5 and 3.5 hours (depending on the run time of 2 tests, see below). Basically the changes are related to delays manually

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-16 Thread Xavi Hernandez
On Tue, Mar 13, 2018 at 2:37 AM, Shyam Ranganathan wrote: > Hi, > > As we wind down on 4.0 activities (waiting on docs to hit the site, and > packages to be available in CentOS repositories before announcing the > release), it is time to start preparing for the 4.1 release.

Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Branched and further dates

2018-10-11 Thread Xavi Hernandez
On Wed, Oct 10, 2018 at 10:03 PM Shyam Ranganathan wrote: > On 09/26/2018 10:21 AM, Shyam Ranganathan wrote: > > 3. Upgrade testing > > - Need *volunteers* to do the upgrade testing as stated in the 4.1 > > upgrade guide [3] to note any differences or changes to the same > > - Explicit call

[Gluster-devel] Gluster performance updates

2018-10-01 Thread Xavi Hernandez
Hi, this is an update on some work done regarding performance and consistency during the last few weeks. We'll try to build a complete list of all known issues and track them through this email thread. Please let me know of any performance issue not included in this email so that we can build

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Xavi Hernandez
On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal wrote: > Hello folks, > > Distributed-regression job[1] is now a part of Gluster's > nightly-master build pipeline. The following are the issues we have > resolved since we started working on this: > > 1) Collecting gluster logs from servers.

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Xavi Hernandez
On Thu, Oct 4, 2018 at 9:47 AM Amar Tumballi wrote: > > > On Thu, Oct 4, 2018 at 12:54 PM Xavi Hernandez > wrote: > >> On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal < >> dkhan...@redhat.com> wrote: >> >>> Hello folks, >>> >

[Gluster-devel] Gluster performance improvements

2018-09-25 Thread Xavi Hernandez
Hi, we are starting to design the next cache implementation for gluster that should provide much better latencies, increasing performance. The document [1] with the high level approach will be used as a starting point to design the final architecture. Any comments will be highly appreciated so

Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2019-01-01 Thread Xavi Hernandez
performance reduction of 70% or more. > > We need to determine what causes the fluctuations on the brick side and > avoid them. > > This scenario is very similar to a smallfile/metadata workload, so this > is probably one important cause of its bad performance. > > What kind of i

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-15 Thread Xavi Hernandez
On Mon, Jan 14, 2019 at 11:08 AM Ashish Pandey wrote: > > I downloaded logs of regression runs 1077 and 1073 and tried to > investigate it. > In both regression ec/bug-1236065.t is hanging on TEST 70 which is trying > to get the online brick count > > I can see that in mount/bricks and glusterd

[Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-21 Thread Xavi Hernandez
Hi, I've done some tracing of the latency that the network layer introduces in gluster. I've made the analysis as part of the pgbench performance issue (in particular the initialization and scaling phase), so I decided to look at READV for this particular workload, but I think the results can be

Re: [Gluster-devel] Performance improvements

2019-01-26 Thread Xavi Hernandez
On Fri, 25 Jan 2019, 08:53 Vijay Bellur Thank you for the detailed update, Xavi! This looks very interesting. > > On Thu, Jan 24, 2019 at 7:50 AM Xavi Hernandez > wrote: > >> Hi all, >> >> I've just updated a patch [1] that implements a new thread pool based on

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Xavi Hernandez
Hi Raghavendra, On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa wrote: > All, > > Glusterfs cleans up POSIX locks held on an fd when the client/mount > through which those locks are held disconnects from bricks/server. This > helps Glusterfs to not run into a stale lock problem later (For

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Xavi Hernandez
On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa wrote: > > > On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez > wrote: > >> Hi Raghavendra, >> >> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa >> wrote: >> >>> All, >

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Xavi Hernandez
On Wed, Mar 27, 2019 at 11:54 AM Raghavendra Gowdappa wrote: > > > On Wed, Mar 27, 2019 at 4:22 PM Raghavendra Gowdappa > wrote: > >> >> >> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez >> wrote: >> >>> Hi Raghavendra, >>> >&

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Xavi Hernandez
On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri wrote: > > > On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez > wrote: > >> On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa < >> rgowd...@redhat.com> wrote: >> >>> >>> >&

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Xavi Hernandez
On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri wrote: > > > On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez > wrote: > >> On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri < >> pkara...@redhat.com> wrote: >> >>> >>> >

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-27 Thread Xavi Hernandez
On Wed, 27 Mar 2019, 18:26 Pranith Kumar Karampuri, wrote: > > > On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez > wrote: > >> On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri < >> pkara...@redhat.com> wrote: >> >>> >>> >

[Gluster-devel] Issue with posix locks

2019-03-29 Thread Xavi Hernandez
Hi all, there is one potential problem with posix locks when used in a replicated or dispersed volume. Some background: Posix locks allow any process to lock a region of a file multiple times, but a single unlock on a given region will release all previous locks. Locked regions can be different
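
A small illustration of the semantics described above, using plain fcntl() record locks (error checking omitted): two overlapping locks taken by the same process are merged by the kernel, and one unlock covering the range drops everything.

    #include <fcntl.h>
    #include <unistd.h>

    static int region_lock(int fd, short type, off_t start, off_t len)
    {
        struct flock fl = {
            .l_type = type,          /* F_WRLCK to lock, F_UNLCK to unlock */
            .l_whence = SEEK_SET,
            .l_start = start,
            .l_len = len,
        };
        return fcntl(fd, F_SETLK, &fl);
    }

    /* usage:
     *   region_lock(fd, F_WRLCK, 0, 100);    lock [0, 100)
     *   region_lock(fd, F_WRLCK, 50, 100);   lock [50, 150), overlaps the first
     *   region_lock(fd, F_UNLCK, 0, 150);    a single unlock releases both
     */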

Re: [Gluster-devel] [Gluster-users] POSIX locks and disconnections between clients and bricks

2019-03-28 Thread Xavi Hernandez
On Thu, Mar 28, 2019 at 3:05 AM Raghavendra Gowdappa wrote: > > > On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez > wrote: > >> On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri < >> pkara...@redhat.com> wrote: >> >>> >>> >

Re: [Gluster-devel] Issue with posix locks

2019-04-01 Thread Xavi Hernandez
On Mon, Apr 1, 2019 at 10:15 AM Soumya Koduri wrote: > > > On 4/1/19 10:02 AM, Pranith Kumar Karampuri wrote: > > > > > > On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri > <mailto:skod...@redhat.com>> wrote: > > > > > > > >

Re: [Gluster-devel] Issue with posix locks

2019-04-01 Thread Xavi Hernandez
On Sun, Mar 31, 2019 at 7:59 PM Soumya Koduri wrote: > > > On 3/29/19 11:55 PM, Xavi Hernandez wrote: > > Hi all, > > > > there is one potential problem with posix locks when used in a > > replicated or dispersed volume. > > > > Some background:

Re: [Gluster-devel] Hello, I have a question about the erasure code translator, hope someone give me some advice, thank you!

2019-04-08 Thread Xavi Hernandez
Hi, On Mon, Apr 8, 2019 at 8:50 AM PSC <1173701...@qq.com> wrote: > Hi, I am a storage software coder who is interested in Gluster. I am > trying to improve the read/write performance of it. > > > I noticed that gluster is using Vandermonde matrix in erasure code > encoding and decoding process.
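
As background for the question, a hedged sketch of Vandermonde-based Reed-Solomon encoding in general terms (not necessarily EC's exact matrix layout): k data words are combined into k+m coded fragments, any k of which are enough to rebuild the data.

    % k data words d_1..d_k encoded into k+m fragments c_1..c_{k+m}
    c_i = \sum_{j=1}^{k} \alpha_i^{\,j-1} d_j , \qquad i = 1, \ldots, k+m
    % all arithmetic is done in a Galois field such as GF(2^8); decoding
    % inverts the k x k Vandermonde submatrix formed by the k surviving rows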

Re: [Gluster-devel] I/O performance

2019-02-13 Thread Xavi Hernandez
On Wed, Feb 13, 2019 at 11:34 AM Xavi Hernandez wrote: > On Tue, Feb 12, 2019 at 1:30 AM Vijay Bellur wrote: > >> >> >> On Tue, Feb 5, 2019 at 10:57 PM Xavi Hernandez >> wrote: >> >>> On Wed, Feb 6, 2019 at 7:00 AM Poornima Gurusiddaiah < >

Re: [Gluster-devel] I/O performance

2019-02-05 Thread Xavi Hernandez
On Wed, Feb 6, 2019 at 7:00 AM Poornima Gurusiddaiah wrote: > > > On Tue, Feb 5, 2019, 10:53 PM Xavi Hernandez >> On Fri, Feb 1, 2019 at 1:51 PM Xavi Hernandez >> wrote: >> >>> On Fri, Feb 1, 2019 at 1:25 PM Poornima Gurusiddaiah < >>> pgur

[Gluster-devel] Performance improvements

2019-01-24 Thread Xavi Hernandez
Hi all, I've just updated a patch [1] that implements a new thread pool based on a wait-free queue provided by userspace-rcu library. The patch also includes an auto scaling mechanism that only keeps running the needed amount of threads for the current workload. This new approach has some
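
For those not familiar with userspace-rcu, a hedged sketch of the queue part only (the auto-scaling and dispatch logic of the patch is omitted, and the job structure here is made up): producers enqueue work items without locks, and a worker pops them off the other end.

    #include <stdlib.h>
    #include <urcu/compiler.h>
    #include <urcu/wfcqueue.h>

    struct job {
        struct cds_wfcq_node node;
        void (*fn)(void *);
        void *arg;
    };

    static struct cds_wfcq_head q_head;
    static struct cds_wfcq_tail q_tail;

    void pool_init(void)
    {
        cds_wfcq_init(&q_head, &q_tail);
    }

    void pool_submit(void (*fn)(void *), void *arg)
    {
        struct job *j = malloc(sizeof(*j));
        cds_wfcq_node_init(&j->node);
        j->fn = fn;
        j->arg = arg;
        cds_wfcq_enqueue(&q_head, &q_tail, &j->node);   /* wait-free enqueue */
    }

    void pool_run_one(void)
    {
        /* note: with several concurrent workers the dequeue side must be
         * serialized, as described in the wfcqueue documentation */
        struct cds_wfcq_node *n = cds_wfcq_dequeue_blocking(&q_head, &q_tail);
        if (n != NULL) {
            struct job *j = caa_container_of(n, struct job, node);
            j->fn(j->arg);
            free(j);
        }
    }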

Re: [Gluster-devel] I/O performance

2019-01-31 Thread Xavi Hernandez
On Fri, Feb 1, 2019 at 7:54 AM Vijay Bellur wrote: > > > On Thu, Jan 31, 2019 at 10:01 AM Xavi Hernandez > wrote: > >> Hi, >> >> I've been doing some tests with the global thread pool [1], and I've >> observed one important thing: >> >>

Re: [Gluster-devel] I/O performance

2019-02-01 Thread Xavi Hernandez
On Fri, Feb 1, 2019 at 1:25 PM Poornima Gurusiddaiah wrote: > Can the threads be categorised to do certain kinds of fops? > Could be, but creating multiple thread groups for different tasks is generally bad because many times you end up with lots of idle threads which waste resources and could

Re: [Gluster-devel] Performance improvements

2019-01-31 Thread Xavi Hernandez
On Sun, Jan 27, 2019 at 8:03 AM Xavi Hernandez wrote: > On Fri, 25 Jan 2019, 08:53 Vijay Bellur >> Thank you for the detailed update, Xavi! This looks very interesting. >> >> On Thu, Jan 24, 2019 at 7:50 AM Xavi Hernandez >> wrote: >> >>> Hi

[Gluster-devel] I/O performance

2019-01-31 Thread Xavi Hernandez
Hi, I've been doing some tests with the global thread pool [1], and I've observed one important thing: Since this new thread pool has very low contention (apparently), it exposes other problems when the number of threads grows. What I've seen is that some workloads use all available threads on

Re: [Gluster-devel] I/O performance

2019-02-05 Thread Xavi Hernandez
On Fri, Feb 1, 2019 at 1:51 PM Xavi Hernandez wrote: > On Fri, Feb 1, 2019 at 1:25 PM Poornima Gurusiddaiah > wrote: > >> Can the threads be categorised to do certain kinds of fops? >> > > Could be, but creating multiple thread groups for different tasks is > ge

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-11 Thread Xavi Hernandez
On Thu, Apr 11, 2019 at 11:28 AM Xavi Hernandez wrote: > On Wed, Apr 10, 2019 at 7:25 PM Xavi Hernandez > wrote: > >> On Wed, Apr 10, 2019 at 4:01 PM Atin Mukherjee >> wrote: >> >>> And now for last 15 days: >>> >>> >>> https://

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-11 Thread Xavi Hernandez
On Wed, Apr 10, 2019 at 7:25 PM Xavi Hernandez wrote: > On Wed, Apr 10, 2019 at 4:01 PM Atin Mukherjee > wrote: > >> And now for last 15 days: >> >> >> https://fstat.gluster.org/summary?start_date=2019-03-25_date=2019-04-10 >> >> ./tests/bitrot/bug

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-15 Thread Xavi Hernandez
On Mon, Apr 15, 2019 at 11:08 AM Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > > > On Thu, Apr 11, 2019 at 2:59 PM Xavi Hernandez > wrote: > >> On Wed, Apr 10, 2019 at 7:25 PM Xavi Hernandez >> wrote: >> >>> On Wed, Apr 10, 2019 at 4:0

[Gluster-devel] Possible issues with shared threads

2019-04-12 Thread Xavi Hernandez
Hi, I've found some issues with memory accounting and I've written a patch [1] to fix them. However, during the tests I've found another problem: in a brick-multiplexed environment, posix tries to start a single janitor thread shared by all posix xlator instances. However, there are two issues:
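
A hedged sketch of the "start it exactly once" part of the problem (hypothetical names, not the actual posix xlator code): with many instances living in one brick-multiplexed process, pthread_once() guarantees a single shared janitor thread no matter how many instances initialize concurrently.

    #include <pthread.h>

    static pthread_once_t janitor_once = PTHREAD_ONCE_INIT;
    static pthread_t janitor_thread;

    static void *janitor_loop(void *arg)
    {
        /* periodic cleanup shared by every instance in the process */
        return NULL;
    }

    static void janitor_start(void)
    {
        pthread_create(&janitor_thread, NULL, janitor_loop, NULL);
    }

    void instance_init(void)
    {
        pthread_once(&janitor_once, janitor_start);  /* runs janitor_start once */
    }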

Re: [Gluster-devel] Should we enable features.locks-notify.contention by default ?

2019-05-30 Thread Xavi Hernandez
omething ? > --- > Ashish > > -- > *From: *"Amar Tumballi Suryanarayan" > *To: *"Xavi Hernandez" > *Cc: *"gluster-devel" > *Sent: *Thursday, May 30, 2019 12:04:43 PM > *Subject: *Re: [Gluster-devel] Should we enable >

[Gluster-devel] Should we enable features.locks-notify.contention by default ?

2019-05-30 Thread Xavi Hernandez
Hi all, a patch [1] was added some time ago to send upcall notifications from the locks xlator to the current owner of a granted lock when another client tries to acquire the same lock (inodelk or entrylk). This makes it possible to use eager-locking on the client side, which improves performance
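
For reference, enabling it on a volume would look something like this (the option name is taken from the subject above; the exact spelling should be confirmed with 'gluster volume set help' on the installed version):

    # enable lock-contention notifications on a hypothetical volume "myvol"
    gluster volume set myvol features.locks-notify.contention on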

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-06-06 Thread Xavi Hernandez
On Thu, May 2, 2019 at 5:45 PM Atin Mukherjee wrote: > > > On Thu, 2 May 2019 at 20:38, Xavi Hernandez wrote: > >> On Thu, May 2, 2019 at 4:06 PM Atin Mukherjee >> wrote: >> >>> >>> >>> On Thu, 2 May 2019 at 19:14, Xavi Hernandez >

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-06-06 Thread Xavi Hernandez
Missed the patch link: https://review.gluster.org/c/glusterfs/+/22828 On Thu, Jun 6, 2019 at 8:32 AM Xavi Hernandez wrote: > On Thu, May 2, 2019 at 5:45 PM Atin Mukherjee > wrote: > >> >> >> On Thu, 2 May 2019 at 20:38, Xavi Hernandez >> wrote: >>

Re: [Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-18 Thread Xavi Hernandez
Hi Kotresh, On Tue, Jun 18, 2019 at 8:33 AM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi Xavi, > > Reply inline. > > On Mon, Jun 17, 2019 at 5:38 PM Xavi Hernandez > wrote: > >> Hi Kotresh, >> >> On Mon, Jun 17, 2019 at 1:

Re: [Gluster-devel] Solving Ctime Issue with legacy files [BUG 1593542]

2019-06-17 Thread Xavi Hernandez
Hi Kotresh, On Mon, Jun 17, 2019 at 1:50 PM Kotresh Hiremath Ravishankar < khire...@redhat.com> wrote: > Hi All, > > The ctime feature is enabled by default from release gluster-6. But as > explained in bug [1] there is a known issue with legacy files i.e., the > files which are created before

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Xavi Hernandez
On Thu, 2 May 2019, 15:37 Milind Changire, wrote: > On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez > wrote: > >> Hi Ashish, >> >> On Thu, May 2, 2019 at 2:17 PM Ashish Pandey wrote: >> >>> Xavi, >>> >>> I would like to keep this opt

[Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Xavi Hernandez
Hi all, there's a feature in the locks xlator that sends a notification to the current owner of a lock when another client tries to acquire the same lock. This way the current owner is made aware of the contention and can release the lock as soon as possible to allow the other client to proceed.

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Xavi Hernandez
good enough. If there are many bricks, each brick could send a notification per lock. 1000 bricks would mean a client would receive 1000 notifications every 5 seconds. It doesn't seem too much, but in those cases 10, and considering we could have other locks, maybe a higher value could be better. Xavi

[Gluster-devel] Weird performance behavior

2019-05-02 Thread Xavi Hernandez
Hi, doing some tests to compare performance I've found some weird results. I've seen this in different tests, but probably the clearest and easiest to reproduce is using the smallfile tool to create files. The test command is: # python smallfile_cli.py --operation create --files-per-dir 100

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Xavi Hernandez
On Thu, May 2, 2019 at 4:06 PM Atin Mukherjee wrote: > > > On Thu, 2 May 2019 at 19:14, Xavi Hernandez wrote: > >> On Thu, 2 May 2019, 15:37 Milind Changire, wrote: >> >>> On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez >>> wrote: >>> >&

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Xavi Hernandez
Hi Atin, On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee wrote: > I'm a bit puzzled about the way coverity is reporting the open defects on the GD1 > component. As you can see from [1], technically we have 6 open defects and > all of the rest are being marked as dismissed. We tried to put some >

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-10 Thread Xavi Hernandez
On Wed, Apr 10, 2019 at 4:01 PM Atin Mukherjee wrote: > And now for last 15 days: > > https://fstat.gluster.org/summary?start_date=2019-03-25_date=2019-04-10 > > ./tests/bitrot/bug-1373520.t 18 ==> Fixed through > https://review.gluster.org/#/c/glusterfs/+/22481/, I don't see this > failing

Re: [Gluster-devel] [RFC] inode table locking contention reduction experiment

2019-10-30 Thread Xavi Hernandez
Hi Changwei, On Tue, Oct 29, 2019 at 7:56 AM Changwei Ge wrote: > Hi, > > I am currently working on reducing inode_[un]ref() locking contention by > getting rid of the inode table lock, using just the inode lock to protect the inode > REF. I have already discussed a couple of rounds with several Glusterfs >

Re: [Gluster-devel] Regards to taking lock in dictionary

2019-10-24 Thread Xavi Hernandez
Hi Mohit, On Thu, Oct 24, 2019 at 5:19 AM Mohit Agrawal wrote: > > I have a query: why do we take a lock at the time of doing an operation on > a dictionary? I have observed in testing that there seems to be no codepath where > we are using the dictionary in parallel. In theory, the dictionary flow is

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Xavi Hernandez
On Thu, Jan 9, 2020 at 10:22 AM Amar Tumballi wrote: > > > On Thu, Jan 9, 2020 at 2:33 PM Xavi Hernandez wrote: > >> On Thu, Jan 9, 2020 at 9:44 AM Amar Tumballi wrote: >> >>> >>> >>> On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez >>>

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Xavi Hernandez
On Thu, Jan 9, 2020 at 9:44 AM Amar Tumballi wrote: > > > On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez wrote: > >> On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul wrote: >> >>> I could not find a relevant use for them. Can anyone enlighten me? >>> >>

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Xavi Hernandez
On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul wrote: > I could not find a relevant use for them. Can anyone enlighten me? > I'm not sure why they are needed. They seem to be used to keep the unserialized version of a dict around until the dict is destroyed. I thought this could be because we were

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Xavi Hernandez
On Thu, Jan 9, 2020 at 11:11 AM Yaniv Kaul wrote: > > > On Thu, Jan 9, 2020 at 11:35 AM Xavi Hernandez > wrote: > >> On Thu, Jan 9, 2020 at 10:22 AM Amar Tumballi wrote: >> >>> >>> >>> On Thu, Jan 9, 2020 at 2:33 PM Xavi Hernandez &

Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-13 Thread Xavi Hernandez
if (snap_info_rsp.dict.dict_val) { > GF_FREE(snap_info_rsp.dict.dict_val); > } > This seems like a bug. Additionally, this memory should be released using free() instead of GF_FREE(). > > I think I should remove that and stick to freeing right after > unserialization? > Yes. I agree.

Re: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting [12th May 2020]

2020-05-18 Thread Xavi Hernandez
Hi Sankarshan, On Sat, May 16, 2020 at 9:15 AM sankarshan wrote: > On Fri, 15 May 2020 at 10:59, Hari Gowtham wrote: > > > ### User stories > > * [Hari] users are hesitant to upgrade. A good number of issues in > release-7 (crashes, flooding of logs, self heal) Need to look into this. > > *

[Gluster-devel] Pull Request review workflow

2020-10-15 Thread Xavi Hernandez
Hi all, after the recent switch to GitHub, I've seen that reviews that require multiple iterations are hard to follow using the old workflow we were using in Gerrit. Till now we basically amended the commit and pushed it again. Gerrit had a feature to calculate diffs between versions of the

Re: [Gluster-devel] Pull Request review workflow

2020-10-15 Thread Xavi Hernandez
Hi Ravi, On Thu, Oct 15, 2020 at 1:27 PM Ravishankar N wrote: > > On 15/10/20 4:36 pm, Sheetal Pamecha wrote: > > > +1 > Just a note to the maintainers who are merging PRs to have patience and > check the commit message when there are more than 1 commits in PR. > > Makes sense. > > >> >>

Re: [Gluster-devel] Pull Request review workflow

2020-10-15 Thread Xavi Hernandez
If everyone agrees, I'll prepare a PR with the changes in rfc.sh and documentation to implement this change. Xavi On Thu, Oct 15, 2020 at 1:27 PM Ravishankar N wrote: > > On 15/10/20 4:36 pm, Sheetal Pamecha wrote: > > > +1 > Just a note to the maintainers who are merging PRs to have patience

Re: [Gluster-devel] Weird full heal on Distributed-Disperse volume with sharding

2020-09-30 Thread Xavi Hernandez
Hi Dmitry, my comments below... On Tue, Sep 29, 2020 at 11:19 AM Dmitry Antipov wrote: > For testing purposes, I've set up a localhost-only setup with 6x16M > ramdisks (formatted as ext4) mounted (with '-o user_xattr') at > /tmp/ram/{0,1,2,3,4,5} and SHARD_MIN_BLOCK_SIZE lowered to 4K.

Re: [Gluster-devel] Weird full heal on Distributed-Disperse volume with sharding

2020-09-30 Thread Xavi Hernandez
Hi Dmitry, On Wed, Sep 30, 2020 at 9:21 AM Dmitry Antipov wrote: > On 9/30/20 8:58 AM, Xavi Hernandez wrote: > > > This is normal. A dispersed volume writes encoded fragments of each > block in each brick. In this case it's a 2+1 configuration, so each block > is divide
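
As a generic worked example of the space usage described above (assuming the 2+1 configuration from the thread, i.e. k = 2 data fragments plus r = 1 redundancy fragment per block):

    fragment size per brick = S / k           = S / 2     for a block (or shard) of size S
    raw space on all bricks = (k + r) * S / k = 3S / 2    i.e. 1.5x the logical size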

Re: [Gluster-devel] heal info output

2020-07-06 Thread Xavi Hernandez
Hi Emmanuel, On Thu, Jul 2, 2020 at 3:05 AM Emmanuel Dreyfus wrote: > Hello > > gluster volume heal info shows me questionable entries. I wonder if these > are bugs, or if I should handle them and how. > > bidon# gluster volume heal gfs info > Brick bidon:/export/wd0e_tmp > Status: Connected >

Re: [Gluster-devel] [Gluster-users] high load when copy directory with many files

2021-04-21 Thread Xavi Hernandez
Before each test: gluster volume profile info clear After the test: gluster volume profile info >/some/file Regards, Xavi On Mon, Apr 12, 2021 at 9:01 AM Xavi Hernandez wrote: > On Sun, Apr 11, 2021 at 10:29 AM Amar Tumballi wrote: > >> Hi Marco, this is really good te

[Gluster-devel] Automatic clang-format for GitHub PRs

2021-02-10 Thread Xavi Hernandez
Hi all, I'm wondering if enforcing clang-format for all patches is a good idea... I've recently seen patches where clang-format makes changes to parts of the code that have not been touched by the patch. Given that all files were already formatted by clang-format long ago, this shouldn't

Re: [Gluster-devel] Automatic clang-format for GitHub PRs

2021-02-11 Thread Xavi Hernandez
On Wed, Feb 10, 2021 at 1:33 PM Amar Tumballi wrote: > > > On Wed, Feb 10, 2021 at 3:29 PM Xavi Hernandez > wrote: > >> Hi all, >> >> I'm wondering if enforcing clang-format for all patches is a good idea... >> >> I've recently seen patches

Re: [Gluster-devel] [Gluster-users] high load when copy directory with many files

2021-04-12 Thread Xavi Hernandez
On Sun, Apr 11, 2021 at 10:29 AM Amar Tumballi wrote: > Hi Marco, this is really good test/info. Thanks. > > One more thing to observe when you are running such tests is 'gluster > profile info', so the bottleneck fop is listed. > > Mohit, Xavi, in this parallel operations, the load may be high

Re: [Gluster-devel] Automatic clang-format for GitHub PRs

2021-02-14 Thread Xavi Hernandez
On Thu, Feb 11, 2021 at 5:50 PM Yaniv Kaul wrote: > > > On Thu, Feb 11, 2021 at 5:54 PM Amar Tumballi wrote: > >> >> >> On Thu, 11 Feb, 2021, 9:19 pm Xavi Hernandez, >> wrote: >> >>> On Wed, Feb 10, 2021 at 1:33 PM Amar Tumballi wrote: >

Re: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 30/12/2021 Test Status: FAIL (-7.91%)

2021-12-29 Thread Xavi Hernandez
On Thu, Dec 30, 2021 at 5:50 AM Amar Tumballi wrote: > Any PR to suspect here? > The previous execution that passed was based on commit 12b44fe. This one is based on commit b8e32c3. The only commit between them is b8e32c3, but it seems unlikely that it may affect non SSL connections. It seems

Re: [Gluster-devel] [PATCH] timer: fix ctx->timer memleak

2021-07-19 Thread Xavi Hernandez
Thanks for the patch. Could you send it to GitHub so that it can be reviewed and merged using the regular procedure ? You can find more information about contributing to the project here: https://docs.gluster.org/en/latest/Developer-guide/Developers-Index/ Xavi On Fri, Jul 16, 2021 at 10:43 AM

Re: [Gluster-devel] New logging interface

2022-03-24 Thread Xavi Hernandez
ve a minimal performance benefit, but it's not the main reason. Best regards, Xavi > Best Regards, > Strahil Nikolov > > On Thu, Mar 24, 2022 at 20:33, Xavi Hernandez > wrote: > Hi all, > > I've just posted a proposal for a new logging interface here: > https:/

[Gluster-devel] New logging interface

2022-03-24 Thread Xavi Hernandez
Hi all, I've just posted a proposal for a new logging interface here: https://github.com/gluster/glusterfs/pull/3342 There are many comments and the documentation is updated in the PR itself, so I won't duplicate all the info here. Please check it if you are interested in the details. As a

Re: [Gluster-devel] [Gluster-users] Fw: Distributed-Disperse Shard Behavior

2022-02-09 Thread Xavi Hernandez
Hi, this problem is most likely caused by the XFS speculative preallocation ( https://linux-xfs.oss.sgi.narkive.com/jjjfnyI1/faq-xfs-speculative-preallocation ) Regards, Xavi On Sat, Feb 5, 2022 at 10:19 AM Strahil Nikolov wrote: > It seems quite odd. > I'm adding the devel list,as it looks

Re: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisting our proposed timeline and features

2022-10-16 Thread Xavi Hernandez
On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi wrote: > Here is my honest take on this one. > > On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya > wrote: > >> It is time to evaluate the fulfillment of our committed >> features/improvements and the feasibility of the proposed deadlines as per >>

Re: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisting our proposed timeline and features

2022-10-17 Thread Xavi Hernandez
On Mon, Oct 17, 2022 at 10:40 AM Yaniv Kaul wrote: > > > On Mon, Oct 17, 2022 at 8:41 AM Xavi Hernandez > wrote: > >> On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi wrote: >> >>> Here is my honest take on this one. >>> >>> On T

Re: [Gluster-devel] [Gluster-users] Error in gluster v11

2023-05-16 Thread Xavi Hernandez
before ? Xavi > > --- > Gilberto Nunes Ferreira > (47) 99676-7530 - Whatsapp / Telegram > > > > > > > Em ter., 16 de mai. de 2023 às 07:45, Xavi Hernandez > escreveu: > >> The referenced GitHub issue now has a potential patch that could fix the &

Re: [Gluster-devel] [Gluster-users] Error in gluster v11

2023-05-16 Thread Xavi Hernandez
The referenced GitHub issue now has a potential patch that could fix the problem, though it will need to be verified. Could you try to apply the patch and check if the problem persists ? On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira < gilberto.nune...@gmail.com> wrote: > Hi there, anyone in
