Hi,
On Wed, Nov 15, 2017 at 6:19 AM, jayakrishnan mm
wrote:
> Hi,
>
> Glusterfs ver 3.7.10
> Volume : disperse (4+2)
> Client on separate machine.
> 1 brick offline.
> The error happens about 60 seconds after the write starts. When checking the
> online brick's
>
I've uploaded a patch to fix this problem: https://review.gluster.org/19040
On Fri, Dec 15, 2017 at 11:33 AM, Xavi Hernandez <jaher...@redhat.com>
wrote:
> I've checked the size of 'gluster volume set help' on current master and
> it's 51176 bytes. Only 24 bytes below the size o
act it's a very trivial change), so I'm not sure why it may
or may not crash.
I'll analyze it. Anyway, that function needs a patch because there's no
space limit check before writing to the buffer.
Xavi
> On Fri, Dec 15, 2017 at 2:23 PM, Xavi Hernandez <jaher...@redhat.com>
> wrote:
I'll send a patch to fix the problem.
Xavi
On Fri, Dec 15, 2017 at 10:05 AM, Xavi Hernandez <jaher...@redhat.com>
wrote:
> On Fri, Dec 15, 2017 at 9:57 AM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
>
>> But why doesn't it crash every time if this is the RCA? N
Hi,
I've uploaded a patch [1] to change the way message ranges are reserved for
components and the way message IDs are defined inside a component.
The old method was error-prone because adding a new component required
defining some macros based on previous macros (in fact there was already an
Hi all,
I've seen that GF_ASSERT() macro is defined in different ways depending on
if we are building in debug mode or not.
In debug mode, it's an alias of assert(), but in non-debug mode it simply
logs an error message and continues.
I think that an assert should be a critical check that
Hi all,
I've created a new GitHub issue [1] to discuss an idea to optimize
self-heal and rebalance operations by not requiring to take a lock during
data operations.
Any thoughts will be welcome.
Regards,
Xavi
[1] https://github.com/gluster/glusterfs/issues/347
One thing we could do with some tests I know of is to remove some of them.
EC currently runs the same test on multiple volume configurations (2+1,
3+1, 4+1, 3+2, 4+2, 4+3 and 8+4). I think we could reduce it to two common
configurations (2+1 and 4+2) and one or two special configurations (3+1
and/or
Hi all,
Several times I've seen issues with the way strings are handled in many
parts of the code. Sometimes it's because of an incorrect use of some
functions, like strncat(). Other times it's because of missing error
condition checks, or a failure to allocate the right amount of
memory,
Hi,
I've opened a GitHub issue [1] to discuss the implementation of a
transaction framework that should provide a level of abstraction for
xlators that currently use inodelk/entrylk, simplifying their coding and
improving performance.
Feel free to provide your thoughts.
Thanks,
Xavi
[1]
Hi David,
adding again gluster-devel.
On Tue, Jan 9, 2018 at 4:15 PM, David Spisla <david.spi...@iternity.com>
wrote:
> Hello Xavi,
>
>
>
> From: Xavi Hernandez [mailto:jaher...@redhat.com]
> Sent: Tuesday, January 9, 2018 09:48
> To: David Spisla <
Hi David,
On Wed, Jan 10, 2018 at 3:24 PM, David Spisla <david.spi...@iternity.com>
wrote:
> Hello Amar, Xavi
>
>
>
> From: Amar Tumballi [mailto:atumb...@redhat.com]
> Sent: Wednesday, January 10, 2018 14:16
> To: Xavi Hernandez <jaher...@redhat.co
Hi David,
On Wed, Jan 10, 2018 at 1:42 PM, David Spisla
wrote:
>
> [David Spisla] I tried this:
>
> char *new_path = malloc(1+len_path-5);
>
> memcpy(new_path, loc->path, len_path-5);
>
> new_path[strlen(new_path)] = '\0';
>
> loc->name = new_path + (len_path
Hi David,
On Tue, Jan 9, 2018 at 9:09 AM, David Spisla
wrote:
> Dear Gluster Devels,
>
> at the moment I'm doing some Xlator stuff and I want to know if there is a way
> to simulate the existence of a file to the client. It should be a kind of
> "virtual file". Here are more
On Wed, Jan 24, 2018 at 3:11 PM, Jeff Darcy <j...@pl.atyp.us> wrote:
>
>
>
> On Tue, Jan 23, 2018, at 12:58 PM, Xavi Hernandez wrote:
>
> I've made some experiments [1] with the time that centos regression takes
> to complete. After some changes the time taken t
On Thu, Jan 25, 2018 at 3:03 PM, Jeff Darcy <j...@pl.atyp.us> wrote:
>
>
>
> On Wed, Jan 24, 2018, at 9:37 AM, Xavi Hernandez wrote:
>
> That happens when we use arbitrary delays. If we use an explicit check, it
> will work on all systems.
>
>
> You're argui
Hi all,
I've identified a race in RPC layer that caused some spurious
disconnections and CHILD_DOWN notifications.
The problem happens when protocol/client reconfigures a connection to move
from glusterd to glusterfsd. This is done by calling rpc_clnt_reconfig()
followed by
I've more time.
Xavi
On Mon, Jan 29, 2018 at 11:07 PM, Xavi Hernandez <jaher...@redhat.com>
wrote:
> Hi all,
>
> I've identified a race in RPC layer that caused some spurious
> disconnections and CHILD_DOWN notifications.
>
> The problem happens when protocol/client r
On Thu, Feb 1, 2018 at 2:48 PM, Shyam Ranganathan <srang...@redhat.com>
wrote:
> On 02/01/2018 08:25 AM, Xavi Hernandez wrote:
> > After having tried several things, it seems that it will be complex to
> > solve these races. All attempts to fix them have caused failures in
have to identify
another race (probably in RPC also) that is generating unexpected
disconnections (or incorrect reconnections).
Xavi
Regards,
Amar
On Thu, Jan 25, 2018 at 8:07 PM, Xavi Hernandez <jaher...@redhat.com> wrote:
> On Thu, Jan 25, 2018 at 3:03 PM, Jeff Darcy <j...@pl.atyp.us&
Hi all,
currently glusterd sends a SIGKILL to stop gNFS, while all other services
are stopped with a SIGTERM signal first (this can be seen in
glusterd_svc_stop() function of mgmt/glusterd xlator).
The question is why it cannot be stopped with SIGTERM as all other
services. Using SIGKILL blindly
On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa
wrote:
> Krutika,
>
> This patch doesn't seem to be getting counts per domain, like number of
> inodelks or entrylks acquired in a domain "xyz". Am I right? If per domain
> stats are not available, passing interested domains in xdata_req would
On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote:
>
>
> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee
> wrote:
>
>> I just went through the nightly regression report of brick mux runs and
>> here's what I can summarize.
>>
>>
>>
> wrote:
>>
>>>
>>>
>>> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez
>>> wrote:
>>>
>>>> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee
>>>> wrote:
>>>>
>>>>>
>>>>>
>>&
On Mon, Jul 9, 2018 at 11:14 AM Karthik Subrahmanya
wrote:
> Hi Deepshikha,
>
> Are you looking into this failure? I can still see this happening for all
> the regression runs.
>
I've executed the failing script on my laptop and all tests finish
relatively fast. What seems to take time is the
Hi,
I've made some experiments [1] with the time that centos regression takes
to complete. After some changes, the time taken to run a full regression has
dropped to between 2.5 and 3.5 hours (depending on the run time of 2 tests,
see below).
Basically the changes are related to delays manually
On Tue, Mar 13, 2018 at 2:37 AM, Shyam Ranganathan
wrote:
> Hi,
>
> As we wind down on 4.0 activities (waiting on docs to hit the site, and
> packages to be available in CentOS repositories before announcing the
> release), it is time to start preparing for the 4.1 release.
On Wed, Oct 10, 2018 at 10:03 PM Shyam Ranganathan
wrote:
> On 09/26/2018 10:21 AM, Shyam Ranganathan wrote:
> > 3. Upgrade testing
> > - Need *volunteers* to do the upgrade testing as stated in the 4.1
> > upgrade guide [3] to note any differences or changes to the same
> > - Explicit call
Hi,
this is an update containing some work done regarding performance and
consistency during the last few weeks. We'll try to build a complete list of all
known issues and track them through this email thread. Please let me know
of any performance issue not included in this email so that we can build
On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal
wrote:
> Hello folks,
>
> Distributed-regression job[1] is now a part of Gluster's
> nightly-master build pipeline. The following are the issues we have
> resolved since we started working on this:
>
> 1) Collecting gluster logs from servers.
On Thu, Oct 4, 2018 at 9:47 AM Amar Tumballi wrote:
>
>
> On Thu, Oct 4, 2018 at 12:54 PM Xavi Hernandez
> wrote:
>
>> On Wed, Oct 3, 2018 at 11:57 AM Deepshikha Khandelwal <
>> dkhan...@redhat.com> wrote:
>>
>>> Hello folks,
>>>
>
Hi,
we are starting to design the next cache implementation for gluster that
should provide much better latencies, increasing performance. The document
[1] with the high level approach will be used as a starting point to design
the final architecture. Any comments will be highly appreciated so
ormance reduction of 70% or more.
> > We need to determine what causes the fluctuations in brick side and
> avoid them.
> > This scenario is very similar to a smallfile/metadata workload, so this
> is probably one important cause of its bad performance.
>
> What kind of i
On Mon, Jan 14, 2019 at 11:08 AM Ashish Pandey wrote:
>
> I downloaded logs of regression runs 1077 and 1073 and tried to
> investigate it.
> In both regression ec/bug-1236065.t is hanging on TEST 70 which is trying
> to get the online brick count
>
> I can see that in mount/bricks and glusterd
Hi,
I've done some tracing of the latency that the network layer introduces in
gluster. I've made the analysis as part of the pgbench performance issue
(in particular the initialization and scaling phase), so I decided to look
at READV for this particular workload, but I think the results can be
On Fri, 25 Jan 2019, 08:53 Vijay Bellur wrote:
> Thank you for the detailed update, Xavi! This looks very interesting.
>
> On Thu, Jan 24, 2019 at 7:50 AM Xavi Hernandez
> wrote:
>
>> Hi all,
>>
>> I've just updated a patch [1] that implements a new thread pool based on
Hi Raghavendra,
On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa
wrote:
> All,
>
> Glusterfs cleans up POSIX locks held on an fd when the client/mount
> through which those locks are held disconnects from bricks/server. This
> helps Glusterfs to not run into a stale lock problem later (For
On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa
wrote:
>
>
> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez
> wrote:
>
>> Hi Raghavendra,
>>
>> On Wed, Mar 27, 2019 at 2:49 AM Raghavendra Gowdappa
>> wrote:
>>
>>> All,
>
On Wed, Mar 27, 2019 at 11:54 AM Raghavendra Gowdappa
wrote:
>
>
> On Wed, Mar 27, 2019 at 4:22 PM Raghavendra Gowdappa
> wrote:
>
>>
>>
>> On Wed, Mar 27, 2019 at 12:56 PM Xavi Hernandez
>> wrote:
>>
>>> Hi Raghavendra,
>>>
>&
On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri
wrote:
>
>
> On Wed, Mar 27, 2019 at 5:13 PM Xavi Hernandez
> wrote:
>
>> On Wed, Mar 27, 2019 at 11:52 AM Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>>
>>>
>&
On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri
wrote:
>
>
> On Wed, Mar 27, 2019 at 6:38 PM Xavi Hernandez
> wrote:
>
>> On Wed, Mar 27, 2019 at 1:13 PM Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>
On Wed, 27 Mar 2019, 18:26 Pranith Kumar Karampuri,
wrote:
>
>
> On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez
> wrote:
>
>> On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>
Hi all,
there is one potential problem with posix locks when used in a replicated
or dispersed volume.
Some background:
Posix locks allow any process to lock a region of a file multiple times,
but a single unlock on a given region will release all previous locks.
Locked regions can be different
On Thu, Mar 28, 2019 at 3:05 AM Raghavendra Gowdappa
wrote:
>
>
> On Wed, Mar 27, 2019 at 8:38 PM Xavi Hernandez
> wrote:
>
>> On Wed, Mar 27, 2019 at 2:20 PM Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>
On Mon, Apr 1, 2019 at 10:15 AM Soumya Koduri wrote:
>
>
> On 4/1/19 10:02 AM, Pranith Kumar Karampuri wrote:
> >
> >
> > On Sun, Mar 31, 2019 at 11:29 PM Soumya Koduri <skod...@redhat.com> wrote:
> >
> >
> >
> >
On Sun, Mar 31, 2019 at 7:59 PM Soumya Koduri wrote:
>
>
> On 3/29/19 11:55 PM, Xavi Hernandez wrote:
> > Hi all,
> >
> > there is one potential problem with posix locks when used in a
> > replicated or dispersed volume.
> >
> > Some background:
Hi,
On Mon, Apr 8, 2019 at 8:50 AM PSC <1173701...@qq.com> wrote:
> Hi, I am a storage software coder who is interested in Gluster. I am
> trying to improve the read/write performance of it.
>
>
> I noticed that gluster is using Vandermonde matrix in erasure code
> encoding and decoding process.
On Wed, Feb 13, 2019 at 11:34 AM Xavi Hernandez
wrote:
> On Tue, Feb 12, 2019 at 1:30 AM Vijay Bellur wrote:
>
>>
>>
>> On Tue, Feb 5, 2019 at 10:57 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Feb 6, 2019 at 7:00 AM Poornima Gurusiddaiah <
>
On Wed, Feb 6, 2019 at 7:00 AM Poornima Gurusiddaiah
wrote:
>
>
> On Tue, Feb 5, 2019, 10:53 PM Xavi Hernandez
>> On Fri, Feb 1, 2019 at 1:51 PM Xavi Hernandez
>> wrote:
>>
>>> On Fri, Feb 1, 2019 at 1:25 PM Poornima Gurusiddaiah <
>>> pgur
Hi all,
I've just updated a patch [1] that implements a new thread pool based on a
wait-free queue provided by userspace-rcu library. The patch also includes
an auto scaling mechanism that only keeps running the needed amount of
threads for the current workload.
This new approach has some
On Fri, Feb 1, 2019 at 7:54 AM Vijay Bellur wrote:
>
>
> On Thu, Jan 31, 2019 at 10:01 AM Xavi Hernandez
> wrote:
>
>> Hi,
>>
>> I've been doing some tests with the global thread pool [1], and I've
>> observed one important thing:
>>
>>
On Fri, Feb 1, 2019 at 1:25 PM Poornima Gurusiddaiah
wrote:
> Can the threads be categorised to do certain kinds of fops?
>
Could be, but creating multiple thread groups for different tasks is
generally bad because many times you end up with lots of idle threads which
waste resources and could
On Sun, Jan 27, 2019 at 8:03 AM Xavi Hernandez
wrote:
> On Fri, 25 Jan 2019, 08:53 Vijay Bellur
>> Thank you for the detailed update, Xavi! This looks very interesting.
>>
>> On Thu, Jan 24, 2019 at 7:50 AM Xavi Hernandez
>> wrote:
>>
>>> Hi
Hi,
I've been doing some tests with the global thread pool [1], and I've
observed one important thing:
Since this new thread pool has very low contention (apparently), it exposes
other problems when the number of threads grows. What I've seen is that
some workloads use all available threads on
On Fri, Feb 1, 2019 at 1:51 PM Xavi Hernandez wrote:
> On Fri, Feb 1, 2019 at 1:25 PM Poornima Gurusiddaiah
> wrote:
>
>> Can the threads be categorised to do certain kinds of fops?
>>
>
> Could be, but creating multiple thread groups for different tasks is
> ge
On Thu, Apr 11, 2019 at 11:28 AM Xavi Hernandez wrote:
> On Wed, Apr 10, 2019 at 7:25 PM Xavi Hernandez
> wrote:
>
>> On Wed, Apr 10, 2019 at 4:01 PM Atin Mukherjee
>> wrote:
>>
>>> And now for last 15 days:
>>>
>>>
>>> https://
On Wed, Apr 10, 2019 at 7:25 PM Xavi Hernandez wrote:
> On Wed, Apr 10, 2019 at 4:01 PM Atin Mukherjee
> wrote:
>
>> And now for last 15 days:
>>
>>
>> https://fstat.gluster.org/summary?start_date=2019-03-25&end_date=2019-04-10
>>
>> ./tests/bitrot/bug
On Mon, Apr 15, 2019 at 11:08 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Thu, Apr 11, 2019 at 2:59 PM Xavi Hernandez
> wrote:
>
>> On Wed, Apr 10, 2019 at 7:25 PM Xavi Hernandez
>> wrote:
>>
>>> On Wed, Apr 10, 2019 at 4:0
Hi,
I've found some issues with memory accounting and I've written a patch [1]
to fix them. However during the tests I've found another problem:
In a brick-multiplexed environment, posix tries to start a single janitor
thread shared by all posix xlator instances, however there are two issues:
omething ?
> ---
> Ashish
>
> --
> *From: *"Amar Tumballi Suryanarayan"
> *To: *"Xavi Hernandez"
> *Cc: *"gluster-devel"
> *Sent: *Thursday, May 30, 2019 12:04:43 PM
> *Subject: *Re: [Gluster-devel] Should we enable
>
Hi all,
a patch [1] was added some time ago to send upcall notifications from the
locks xlator to the current owner of a granted lock when another client
tries to acquire the same lock (inodelk or entrylk). This makes it possible
to use eager-locking on the client side, which improves performance
On Thu, May 2, 2019 at 5:45 PM Atin Mukherjee
wrote:
>
>
> On Thu, 2 May 2019 at 20:38, Xavi Hernandez wrote:
>
>> On Thu, May 2, 2019 at 4:06 PM Atin Mukherjee
>> wrote:
>>
>>>
>>>
>>> On Thu, 2 May 2019 at 19:14, Xavi Hernandez
>
Missed the patch link: https://review.gluster.org/c/glusterfs/+/22828
On Thu, Jun 6, 2019 at 8:32 AM Xavi Hernandez wrote:
> On Thu, May 2, 2019 at 5:45 PM Atin Mukherjee
> wrote:
>
>>
>>
>> On Thu, 2 May 2019 at 20:38, Xavi Hernandez
>> wrote:
>>
Hi Kotresh,
On Tue, Jun 18, 2019 at 8:33 AM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi Xavi,
>
> Reply inline.
>
> On Mon, Jun 17, 2019 at 5:38 PM Xavi Hernandez
> wrote:
>
>> Hi Kotresh,
>>
>> On Mon, Jun 17, 2019 at 1:
Hi Kotresh,
On Mon, Jun 17, 2019 at 1:50 PM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Hi All,
>
> The ctime feature is enabled by default from release gluster-6. But as
> explained in bug [1] there is a known issue with legacy files i.e., the
> files which are created before
On Thu, 2 May 2019, 15:37 Milind Changire, wrote:
> On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez
> wrote:
>
>> Hi Ashish,
>>
>> On Thu, May 2, 2019 at 2:17 PM Ashish Pandey wrote:
>>
>>> Xavi,
>>>
>>> I would like to keep this opt
Hi all,
there's a feature in the locks xlator that sends a notification to current
owner of a lock when another client tries to acquire the same lock. This
way the current owner is made aware of the contention and can release the
lock as soon as possible to allow the other client to proceed.
good enough. If there
are many bricks, each brick could send a notification per lock. 1000 bricks
would mean a client would receive 1000 notifications every 5 seconds. It
doesn't seem too much, but in those cases 10, and considering we could have
other locks, maybe a higher value could be better.
Xavi
Hi,
doing some tests to compare performance I've found some weird results. I've
seen this in different tests, but probably the clearest and easiest to
reproduce is to use the smallfile tool to create files.
The test command is:
# python smallfile_cli.py --operation create --files-per-dir 100
On Thu, May 2, 2019 at 4:06 PM Atin Mukherjee
wrote:
>
>
> On Thu, 2 May 2019 at 19:14, Xavi Hernandez wrote:
>
>> On Thu, 2 May 2019, 15:37 Milind Changire, wrote:
>>
>>> On Thu, May 2, 2019 at 6:44 PM Xavi Hernandez
>>> wrote:
>>>
>&
Hi Atin,
On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee wrote:
> I'm a bit puzzled by the way coverity is reporting the open defects on GD1
> component. As you can see from [1], technically we have 6 open defects and
> all of the rest are being marked as dismissed. We tried to put some
>
On Wed, Apr 10, 2019 at 4:01 PM Atin Mukherjee wrote:
> And now for last 15 days:
>
> https://fstat.gluster.org/summary?start_date=2019-03-25&end_date=2019-04-10
>
> ./tests/bitrot/bug-1373520.t 18 ==> Fixed through
> https://review.gluster.org/#/c/glusterfs/+/22481/, I don't see this
> failing
Hi Changwei,
On Tue, Oct 29, 2019 at 7:56 AM Changwei Ge wrote:
> Hi,
>
> I am recently working on reducing inode_[un]ref() locking contention by
> getting rid of inode table lock. Just use inode lock to protect inode
> REF. I have already discussed a couple rounds with several Glusterfs
>
Hi Mohit,
On Thu, Oct 24, 2019 at 5:19 AM Mohit Agrawal wrote:
>
> I have a query: why do we take a lock at the time of doing an operation in
> a dictionary? I have observed in testing that there seems to be no codepath where
> we are using the dictionary in parallel. In theory, the dictionary flow is
On Thu, Jan 9, 2020 at 10:22 AM Amar Tumballi wrote:
>
>
> On Thu, Jan 9, 2020 at 2:33 PM Xavi Hernandez wrote:
>
>> On Thu, Jan 9, 2020 at 9:44 AM Amar Tumballi wrote:
>>
>>>
>>>
>>> On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez
>>>
On Thu, Jan 9, 2020 at 9:44 AM Amar Tumballi wrote:
>
>
> On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez wrote:
>
>> On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul wrote:
>>
>>> I could not find a relevant use for them. Can anyone enlighten me?
>>>
>>
On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul wrote:
> I could not find a relevant use for them. Can anyone enlighten me?
>
I'm not sure why they are needed. They seem to be used to keep the
unserialized version of a dict around until the dict is destroyed. I
thought this could be because we were
On Thu, Jan 9, 2020 at 11:11 AM Yaniv Kaul wrote:
>
>
> On Thu, Jan 9, 2020 at 11:35 AM Xavi Hernandez
> wrote:
>
>> On Thu, Jan 9, 2020 at 10:22 AM Amar Tumballi wrote:
>>
>>>
>>>
>>> On Thu, Jan 9, 2020 at 2:33 PM Xavi Hernandez
&
> if (snap_info_rsp.dict.dict_val) {
>     GF_FREE(snap_info_rsp.dict.dict_val);
> }
>
This seems like a bug. Additionally, this memory should be released using
free() instead of GF_FREE().
>
> I think I should remove that and stick to freeing right after
> unserialization?
>
Yes. I agree.
Hi Sankarshan,
On Sat, May 16, 2020 at 9:15 AM sankarshan wrote:
> On Fri, 15 May 2020 at 10:59, Hari Gowtham wrote:
>
> > ### User stories
> > * [Hari] users are hesitant to upgrade. A good number of issues in
> release-7 (crashes, flooding of logs, self heal) Need to look into this.
> > *
Hi all,
after the recent switch to GitHub, I've seen that reviews that require
multiple iterations are hard to follow using the old workflow we were using
in Gerrit.
Till now we basically amended the commit and pushed it again. Gerrit had a
feature to calculate diffs between versions of the
Hi Ravi,
On Thu, Oct 15, 2020 at 1:27 PM Ravishankar N
wrote:
>
> On 15/10/20 4:36 pm, Sheetal Pamecha wrote:
>
>
> +1
> Just a note to the maintainers who are merging PRs to have patience and
> check the commit message when there are more than 1 commits in PR.
>
> Makes sense.
>
>
>>
>>
If everyone agrees, I'll prepare a PR with the changes in rfc.sh and
documentation to implement this change.
Xavi
On Thu, Oct 15, 2020 at 1:27 PM Ravishankar N
wrote:
>
> On 15/10/20 4:36 pm, Sheetal Pamecha wrote:
>
>
> +1
> Just a note to the maintainers who are merging PRs to have patience
Hi Dmitry,
my comments below...
On Tue, Sep 29, 2020 at 11:19 AM Dmitry Antipov wrote:
> For the testing purposes, I've set up a localhost-only setup with 6x16M
> ramdisks (formatted as ext4) mounted (with '-o user_xattr') at
> /tmp/ram/{0,1,2,3,4,5} and SHARD_MIN_BLOCK_SIZE lowered to 4K.
Hi Dmitry,
On Wed, Sep 30, 2020 at 9:21 AM Dmitry Antipov wrote:
> On 9/30/20 8:58 AM, Xavi Hernandez wrote:
>
> > This is normal. A dispersed volume writes encoded fragments of each
> block in each brick. In this case it's a 2+1 configuration, so each block
> is divide
Hi Emmanuel,
On Thu, Jul 2, 2020 at 3:05 AM Emmanuel Dreyfus wrote:
> Hello
>
> gluster volume heal info show me questionable entries. I wonder if these
> are bugs, or if I shoud handle them and how.
>
> bidon# gluster volume heal gfs info
> Brick bidon:/export/wd0e_tmp
> Status: Connected
>
Before each test: gluster volume profile info clear
After the test: gluster volume profile info >/some/file
Regards,
Xavi
On Mon, Apr 12, 2021 at 9:01 AM Xavi Hernandez wrote:
> On Sun, Apr 11, 2021 at 10:29 AM Amar Tumballi wrote:
>
>> Hi Marco, this is really good te
Hi all,
I'm wondering if enforcing clang-format for all patches is a good idea...
I've recently seen patches where clang-format is doing changes on parts of
the code that have not been touched by the patch. Given that all files were
already formatted by clang-format long ago, this shouldn't
On Wed, Feb 10, 2021 at 1:33 PM Amar Tumballi wrote:
>
>
> On Wed, Feb 10, 2021 at 3:29 PM Xavi Hernandez
> wrote:
>
>> Hi all,
>>
>> I'm wondering if enforcing clang-format for all patches is a good idea...
>>
>> I've recently seen patches
On Sun, Apr 11, 2021 at 10:29 AM Amar Tumballi wrote:
> Hi Marco, this is really good test/info. Thanks.
>
> One more thing to observe is you are running such tests is 'gluster
> profile info', so the bottleneck fop is listed.
>
> Mohit, Xavi, in this parallel operations, the load may be high
On Thu, Feb 11, 2021 at 5:50 PM Yaniv Kaul wrote:
>
>
> On Thu, Feb 11, 2021 at 5:54 PM Amar Tumballi wrote:
>
>>
>>
>> On Thu, 11 Feb, 2021, 9:19 pm Xavi Hernandez,
>> wrote:
>>
>>> On Wed, Feb 10, 2021 at 1:33 PM Amar Tumballi wrote:
>
On Thu, Dec 30, 2021 at 5:50 AM Amar Tumballi wrote:
> Any PR to suspect here?
>
The previous execution that passed was based on commit 12b44fe. This one is
based on commit b8e32c3. The only commit between them is b8e32c3, but it
seems unlikely that it may affect non SSL connections.
It seems
Thanks for the patch. Could you send it to GitHub so that it can be
reviewed and merged using the regular procedure ?
You can find more information about contributing to the project here:
https://docs.gluster.org/en/latest/Developer-guide/Developers-Index/
Xavi
On Fri, Jul 16, 2021 at 10:43 AM
ve a minimal performance
benefit, but it's not the main reason.
Best regards,
Xavi
> Best Regards,
> Strahil Nikolov
>
> On Thu, Mar 24, 2022 at 20:33, Xavi Hernandez
> wrote:
> Hi all,
>
> I've just posted a proposal for a new logging interface here:
> https:/
Hi all,
I've just posted a proposal for a new logging interface here:
https://github.com/gluster/glusterfs/pull/3342
There are many comments and the documentation is updated in the PR itself,
so I won't duplicate all the info here. Please check it if you are
interested in the details.
As a
Hi,
this problem is most likely caused by the XFS speculative preallocation (
https://linux-xfs.oss.sgi.narkive.com/jjjfnyI1/faq-xfs-speculative-preallocation
)
Regards,
Xavi
On Sat, Feb 5, 2022 at 10:19 AM Strahil Nikolov
wrote:
> It seems quite odd.
> I'm adding the devel list,as it looks
On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi wrote:
> Here is my honest take on this one.
>
> On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya
> wrote:
>
>> It is time to evaluate the fulfillment of our committed
>> features/improvements and the feasibility of the proposed deadlines as per
>>
On Mon, Oct 17, 2022 at 10:40 AM Yaniv Kaul wrote:
>
>
> On Mon, Oct 17, 2022 at 8:41 AM Xavi Hernandez
> wrote:
>
>> On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi wrote:
>>
>>> Here is my honest take on this one.
>>>
>>> On T
before ?
Xavi
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Tue, May 16, 2023 at 07:45, Xavi Hernandez
> wrote:
>
>> The referenced GitHub issue now has a potential patch that could fix the
&
The referenced GitHub issue now has a potential patch that could fix the
problem, though it will need to be verified. Could you try to apply the
patch and check if the problem persists ?
On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:
> Hi there, anyone in