Re: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features

2022-11-06 Thread Amar Tumballi
Other than two large PRs (the CDC and zlib changes), we don't have any major
pending tasks. I would like to propose that we keep to the proposed dates
and go ahead with the branching. If we merge these PRs later, we can rebase
and send them to the branch again.

Shwetha, can you please go ahead with the branching-related activities?

-Amar

On Mon, Oct 17, 2022 at 3:24 PM Xavi Hernandez  wrote:

> On Mon, Oct 17, 2022 at 10:40 AM Yaniv Kaul  wrote:
>
>>
>>
>> On Mon, Oct 17, 2022 at 8:41 AM Xavi Hernandez 
>> wrote:
>>
>>> On Mon, Oct 17, 2022 at 4:03 AM Amar Tumballi  wrote:
>>>
>>>> Here is my honest take on this one.
>>>>
>>>> On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya 
>>>> wrote:
>>>>
>>>>> It is time to evaluate the fulfillment of our committed
>>>>> features/improvements and the feasibility of the proposed deadlines as 
>>>>> per Release
>>>>> 11 tracker <https://github.com/gluster/glusterfs/issues/3023>.
>>>>>
>>>>>
>>>>> Currently our timeline is as follows:
>>>>>
>>>>> Code Freeze: 31-Oct-2022
>>>>> RC : 30-Nov-2022
>>>>> GA : 10-JAN-2023
>>>>>
>>>>> *Please evaluate the following and reply to this thread if you want to
>>>>> convey anything important:*
>>>>>
>>>>> - Can we ensure to fulfill all the proposed requirements by the Code
>>>>> Freeze?
>>>>> - Do we need to add any more changes to accommodate any shortcomings
>>>>> or improvements?
>>>>> - Are we all good to go with the proposed timeline?
>>>>>
>>>>>
>>>> We have already delayed the release by more than a year, and that is a
>>>> significant delay for any project. If the changes we work on are not released
>>>> frequently, the feedback loop for the project is delayed, and so are further
>>>> improvements. So, regardless of any pending promised items, we should go ahead
>>>> with the code freeze and release on these dates.
>>>>
>>>> It is crucial for any projects / companies dependent on the project to be
>>>> able to plan accordingly. There may already be a few others who have planned
>>>> their product releases around these dates. Let's keep the same dates, and try
>>>> to achieve the tasks we have planned within these dates.
>>>>
>>>
>>> I agree. Pending changes will need to be added to the next release. Doing it
>>> at the last minute is not safe for stability.
>>>
>>
>> Generally, +1.
>>
>> - Some info on my in-flight PRs:
>>
>> I have multiple independent patches for the flexible array member
>> conversion of different variables that are pending:
>> https://github.com/gluster/glusterfs/pull/3873
>> https://github.com/gluster/glusterfs/pull/3872
>> https://github.com/gluster/glusterfs/pull/3868  (this one is
>> particularly interesting, I hope it works!)
>> https://github.com/gluster/glusterfs/pull/3861
>> https://github.com/gluster/glusterfs/pull/3870 (already in review,
>> perhaps it can get in soon?)
>>
>
> I'm already looking at these and I expect they can be merged before the
> current code-freeze date.
>
>
>> I have this for one for inode related code, which got some attention
>> recently:
>> https://github.com/gluster/glusterfs/pull/3226
>>
>
> I'll try to review this one before code-freeze, but it requires much more
> care. Any help will be appreciated.
>
>
>>
>> I think this one is worthwhile looking at:
>> https://github.com/gluster/glusterfs/pull/3854
>>
>
> I'll try to take a look at this one also.
>
>
>> I wish we could get rid of old, unsupported versions:
>> https://github.com/gluster/glusterfs/pull/3544
>> (there's more to do, in different patches, but it's a start)
>>
>
> This one is mostly ok, but I think we can't release a new version without
> an explicit check for unsupported versions at least at the beginning, to
> avoid problems when users upgrade directly from 3.x to 11.x.
>
>
>> None of them is critical for release 11, though I'm unsure if I'll have
>> the ability to complete them later.
>>
>>
>> - The lack of EL9 official support (inc. testing infra.) is regrettable,
>> and I think something worth fixing *before* release 11 - adding sanity
>> on newer OS releases, which will use io_uring for example, is something we
>> should definitely consider.
>>

Re: [Gluster-devel] [Gluster-Maintainers] Release 11: Revisiting our proposed timeline and features

2022-10-16 Thread Amar Tumballi
Here is my honest take on this one.

On Tue, Oct 11, 2022 at 3:06 PM Shwetha Acharya  wrote:

> It is time to evaluate the fulfillment of our committed
> features/improvements and the feasibility of the proposed deadlines as per 
> Release
> 11 tracker .
>
>
> Currently our timeline is as follows:
>
> Code Freeze: 31-Oct-2022
> RC : 30-Nov-2022
> GA : 10-JAN-2023
>
> *Please evaluate the following and reply to this thread if you want to
> convey anything important:*
>
> - Can we ensure to fulfill all the proposed requirements by the Code
> Freeze?
> - Do we need to add any more changes to accommodate any shortcomings or
> improvements?
> - Are we all good to go with the proposed timeline?
>
>
We have already delayed the release by more than a year, and that is a
significant delay for any project. If the changes we work on are not released
frequently, the feedback loop for the project is delayed, and so are further
improvements. So, regardless of any pending promised items, we should go ahead
with the code freeze and release on these dates.

It is crucial for any projects / companies dependent on the project to be
able to plan accordingly. There may already be a few others who have planned
their product releases around these dates. Let's keep the same dates, and try
to achieve the tasks we have planned within these dates.

Regards,
Amar

> Regards,
> Shwetha
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] GlusterFS storage driver deprecation in Kubernetes.

2022-08-11 Thread Amar Tumballi
Thanks for the heads-up, Humble. This will help many of the gluster
community users, who may not be following the k8s threads actively, to start
planning their migration.

For all users who are currently running heketi + glusterfs: starting
from k8s v1.26, you CANNOT use heketi + glusterfs based storage in k8s.

Below are my personal suggestions for those users. Please treat these options
as my personal opinion, and not an official stance of the gluster community.

0. Use an older (< 1.25) version of k8s, and keep using the setup :-)

1. Keep the current storage nodes as storage, but manage them separately;
expose the volumes over NFS and use an NFS CSI driver to get the data into the
pods. (Note the changeover to a new PV through a CSI-based PVC, which means
applications need a migration.) I haven't tested this setup, hence can't vouch
for it.

2. Use the kadalu [6] operator to manage the currently deployed glusterfs nodes
as an 'External' storage class, and use the kadalu CSI (which uses a native
glusterfs mount in its CSI node plugin) to get PVs for your application pods.
NOTE: here too, an application migration is needed to move to kadalu CSI
based PVCs. Suggested for users who already have bigger PVs in their k8s setup.
There is a team to help with this migration if you wish.

3. Use kadalu (or any 'other' CSI provider), provision new storage, and
copy the data set over to the new storage: this would be an option if the
storage is smaller in size. In this case, there is extra time needed to copy
the data; start a pod with both the existing PV and the new PV added as
mounts, so you can copy the data off quickly.

In any case, considering you do not have a lot of time before Kubernetes
v1.26 comes out, please do start your migration planning soon.

For the developers in the glusterfs community: what are your thoughts on
this? I know some effort has started on keeping the glusterfs-containers repo
relevant, and I see PRs coming in. Happy to open up a discussion on the same.

Regards,
Amar (@amarts)

[6] - https://github.com/kadalu/kadalu


On Thu, Aug 11, 2022 at 5:47 PM Humble Chirammal 
wrote:

> Hey Gluster Community,
>
> As you might be aware, there is an effort in the kubernetes community to
> remove in-tree storage plugins to reduce external dependencies and security
> concerns in the core Kubernetes. Thus, we are in a process to gradually
> deprecate all the in-tree external storage plugins and eventually remove
> them from the core Kubernetes codebase.  GlusterFS is one of the very first
> dynamic provisioners which was made into kubernetes v1.4 ( 2016 ) release
> via https://github.com/kubernetes/kubernetes/pull/30888 . From then on
> many deployments were/are making use of this driver to consume GlusterFS
> volumes in Kubernetes/Openshift clusters.
>
> As part of this effort, we are planning to deprecate GlusterFS intree
> plugin in 1.25 release and planning to take out Heketi code from
> Kubernetes Code base in subsequent release. This code removal will not be
> following kubernetes' normal deprecation policy [1] and will be treated as
> an exception [2]. The main reason for this exception is that, Heketi is in
> "Deep Maintenance" [3], also please see [4] for the latest push back from
> the Heketi team on changes we would need to keep vendoring heketi into
> kubernetes/kubernetes. We cannot keep heketi in the kubernetes code base as
> heketi itself is literally going away. The current plan is to start
> declaring the deprecation in kubernetes v1.25 and code removal in v1.26.
>
> If you are using GlusterFS driver in your cluster setup, please reply
> with  below info before 16-Aug-2022 to d...@kubernetes.io ML on thread ( 
> Deprecation
> of intree GlusterFS driver in 1.25) or to this thread which can help us
> to make a decision on when to completely remove this code base from the
> repo.
>
> - what version of kubernetes are you running in your setup ?
>
> - how often do you upgrade your cluster?
>
> - what vendor or distro you are using ? Is it any (downstream) product
> offering or upstream GlusterFS driver directly used in your setup?
>
> Awaiting your feedback.
>
> Thanks,
>
> Humble
>
> [1] https://kubernetes.io/docs/reference/using-api/deprecation-policy/
>
> [2]
> https://kubernetes.io/docs/reference/using-api/deprecation-policy/#exceptions
>
> [3] https://github.com/heketi/heketi#maintenance-status
>
> [4] https://github.com/heketi/heketi/pull/1904#issuecomment-1197100513
> [5] https://github.com/kubernetes/kubernetes/issues/100897
>
> --
>
>
>
>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 30/12/2021 Test Status: FAIL (-7.91%)

2021-12-29 Thread Amar Tumballi
Any PR to suspect here?

On Thu, Dec 30, 2021 at 6:25 AM Gluster-jenkins 
wrote:

> *Test details:*
> RPM Location: Upstream
> OS Version: Red-Hat-Enterprise-Linux 8.4-(Ootpa)
> Baseline Gluster version: glusterfs-10.0-1
> Current Gluster version: glusterfs-20211228.b8e32c3-0.0
> Intermediate Gluster version: No intermediate baseline
> Test type: Smallfile
> Tool: smallfile
> Volume type: Replica-3
> Volume Option: No volume options configured
> FOPs            Baseline   DailyBuild   Baseline vs DailyBuild (%)
> create          15586      15791        1
> ls-l            229506     228038       0
> chmod           24545      20445        -16
> stat            35376      25572        -27
> read            29000      23453        -19
> append          13850      10562        -23
> rename          958        980          2
> delete-renamed  22512      21956        -2
> mkdir           3212       3204         0
> rmdir           2691       2676         0
> cleanup         9564       9181         -3
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] mem-pool.c magic trailer causes SIGBUS, fix or remove?

2021-07-29 Thread Amar Tumballi
Thanks for the initiative Paul. Let me answer the major question from the
thread.

> > Remove the trailer? Or fix it?

The ideal option is to fix it, as we do use mem-pool info to identify leaks
etc. through statedumps of the process. Removal can be an option on SPARC to
start with, if fixing it is time consuming. I recommend removing the trailer
within an #ifdef code block, so it may continue to work in places where it's
already working.
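
To illustrate the #ifdef idea, here is a minimal sketch. The macro and function
names (GF_DISABLE_MEMPOOL_TRAILER, gf_alloc_with_trailer) are made up for
illustration and are not the actual mem-pool.c identifiers; it only shows the
shape of "pad the allocation so the trailer is aligned, and allow building
without the trailer where it breaks":

/* Sketch only: illustrative names, not the real mem-pool.c code. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define GF_MEM_TRAILER_MAGIC 0xBAADF00DU
/* round a size up to the trailer's natural alignment */
#define GF_MEM_ALIGN(sz) (((sz) + sizeof(uint32_t) - 1) & ~(sizeof(uint32_t) - 1))

static void *
gf_alloc_with_trailer(size_t req_size)
{
#ifdef GF_DISABLE_MEMPOOL_TRAILER
    /* Platforms where the unaligned trailer raises SIGBUS (e.g. SPARC64)
     * can build without the trailer until a proper alignment fix lands. */
    return malloc(req_size);
#else
    /* Pad the payload so the 32-bit magic trailer lands on an aligned
     * address, then write the magic just past the padded payload. */
    size_t padded = GF_MEM_ALIGN(req_size);
    uint32_t magic = GF_MEM_TRAILER_MAGIC;
    char *ptr = malloc(padded + sizeof(magic));

    if (!ptr)
        return NULL;
    memcpy(ptr + padded, &magic, sizeof(magic));
    return ptr;
#endif
}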

-Amar

On Tue, Jul 27, 2021 at 7:45 PM Paul Jakma  wrote:

> Hi,
>
> The magic trailer added at the end of memory allocations in mem-pool.c
> doesn't have its alignment ensured. This can lead to SIGBUS and
> abnormal exits on some platforms (e.g., SPARC64).
>
> I have a patch to round-up the header and allocation size to get the
> right alignment for the trailer (I think).
>
> It does complicate the memory allocation a little further.
>
> Question is whether it would just be simpler to remove the trailer, and
> simplify the code?
>
> There are a number of external tools that exist to debug memory allocs,
> from runtime loadable debug malloc libraries, to compiler features
> (ASAN, etc.), to valgrind.
>
> Glusterfs could just rely on those, and so simplify and (perhaps)
> speed-up its own, general-case memory code.
>
> Remove the trailer? Or fix it?
>
> regards,
> --
> Paul Jakma | p...@jakma.org | @pjakma | Key ID: 0xD86BF79464A2FF6A
> Fortune:
> Many people are secretly interested in life.
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Freenode takeover and GlusterFS IRC channels

2021-06-07 Thread Amar Tumballi
We (at least many developers and some users) actively use Slack at
https://gluster.slack.com

While I agree that it's not a free/open alternative to IRC, it does get
many questions answered, and a lot of project-related communication happens
there.

Regards,
Amar


On Mon, 7 Jun, 2021, 9:27 pm Jordan Erickson, <
jerick...@logicalnetworking.net> wrote:

> I'm relatively new to the community but I would vote for having a point
> of presence on libera.chat, or OFTC as some other F/OSS projects are
> moving there as an alternative. I use IRC daily for supporting my own
> projects as well as related projects such as GlusterFS. Personally I
> hadn't heard of Matrix until the whole Freenode fiasco happened, so I
> would imagine others may be in the same boat. Anyway, just my $0.02 :)
>
>
> Cheers,
> Jordan Erickson
>
>
> On 6/7/21 5:51 AM, Anoop C S wrote:
> > Hi all,
> >
> > I hope many of us are aware of the recent changes that happened at
> > Freenode IRC network(in case you are not, feel free to look into
> > details based on various resignation letters from long-time then
> > Freenode staff starting with [1]). In the light of this take over
> > situation, many open source communities have moved over to its
> > replacement i.e, libera.chat[2].
> >
> > Now I would like to open this up to GlusterFS community to think about
> > moving forward with our current IRC channels(#gluster, #gluster-dev and
> > #gluster-meeting) on Freenode. How important are those channels for
> > GlusterFS project? How about moving over to libera.chat in case we
> > stick to IRC communication?
> >
> > Let's discuss and conclude on the way forward..
> >
> > Note:- Matrix[3] platform is also an option nowadays and we do have a
> > Gluster room(#gluster:matrix.org) there ! welcome..welcome :-)
> >
> > Regards,
> > Anoop C S
> >
> >
> > [1] https://fuchsnet.ch/freenode-resign-letter.txt
> > [2] https://libera.chat/
> > [3] https://matrix.org/
> >
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
>
> --
> Jordan Erickson (PGP: 0x78DD41CB)
> Logical Networking Solutions, 707-636-5678
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] high load when copy directory with many files

2021-04-11 Thread Amar Tumballi
Hi Marco, this is really good test/info. Thanks.

One more thing to observe while you are running such tests is 'gluster profile
info', so the bottleneck fop is listed.

Mohit, Xavi, in these parallel operations, could the load be high due to the
inodelk used in the mds xattr update in DHT? Or do you suspect something else?

Regards
Amar

On Sat, 10 Apr, 2021, 11:45 pm Marco Lerda - FOREACH S.R.L., <
marco.le...@foreach.it> wrote:

> hi,
> we have isolated the problem (meanwhile some hardware upgrade and code
> optimization helped to limit the problem).
> it happens when many request (HTTP over apache) comes to a non existent
> file.
> With 30 concurrent request to the same non existing file cause the load
> go high without limit.
> Same requests on existing files works fine.
> I have tried to simulate che apache access to file excluding apache with
> repeated command on files with the same parallelism (30):
> - with ls works fine, file exists or not
> - with stat works fine, file exists or not
> - with xattr load go up, file exists or not
>
> thank you
>
>
> Il 05/10/2020 19.45, Marco Lerda - FOREACH S.R.L. ha scritto:
> > hi,
> > we use glusterfs on a php application that have many small php files
> > images etc...
> > We use glusterfs in replication mode.
> > We have 2 nodes connected in fiber with 100MBps and less than 1 ms
> > latency.
> > We have also an arbiter on slower network (but the issue is there also
> > without the arbiter).
> > When we copy a directory (cp command) with many files, cpu usage and
> > load explode raplidly,
> > our application become inaccessible until the copy ends.
> >
> > I wonder if is that normal or we have done something wrong.
> > I know that glusterfs is not indicated with many small files, and I
> > know that it slow down,
> > but I want to avoid that a simple copy of a directory will put down
> > out application.
> >
> > Any suggestion?
> >
> > Thanks a lot
> >
> >
> >
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://bluejeans.com/441850968
> >
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
>
> --
> Marco Lerda
> FOREACH S.R.L.
> Via Laghi di Avigliana 115, 12022 - Busca (CN)
> Telefono: 0171-1984102
> Centralino/Fax: 0171-1984100
> Email:  marco.le...@foreach.it
> Web: http://www.foreach.it
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Bugs summary jenkins job

2021-03-10 Thread Amar Tumballi
I personally haven't checked it after migrating to GitHub.

I haven't seen any PRs coming in with a bug reference either. IMO, it is OK to
stop the job and clean up the python2 reference.

On Wed, 10 Mar, 2021, 7:21 pm Michael Scherer,  wrote:

> Hi,
>
> are we still using the bugs summary on
> https://bugs.gluster.org/gluster-bugs.html ?
>
> As we moved out of bugzilla, I think the script wasn't adapted to
> github, and it is still running on python 2 (so we need to keep a
> Fedora 30 around for that)
>
>
> --
> Michael Scherer / He/Il/Er/Él
> Sysadmin, Community Infrastructure
>
>
>
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Automatic clang-format for GitHub PRs

2021-02-11 Thread Amar Tumballi
On Thu, 11 Feb, 2021, 9:19 pm Xavi Hernandez,  wrote:

> On Wed, Feb 10, 2021 at 1:33 PM Amar Tumballi  wrote:
>
>>
>>
>> On Wed, Feb 10, 2021 at 3:29 PM Xavi Hernandez 
>> wrote:
>>
>>> Hi all,
>>>
>>> I'm wondering if enforcing clang-format for all patches is a good idea...
>>>
>>> I've recently seen patches where clang-format is doing changes on parts
>>> of the code that have not been touched by the patch. Given that all files
>>> were already formatted by clang-format long ago, this shouldn't happen.
>>>
>>> This means that as the clang-format version evolves, the formatting with
>>> the same configuration is not the same. This introduces unnecessary noise
>>> to the file history that I think it should be avoided.
>>>
>>> Additionally, I've also seen some cases where some constructs are
>>> reformatted in an uglier or less clear way. I think it's very hard to come
>>> up with a set of rules that formats everything in the best possible way.
>>>
>>> For all these reasons, I would say we shouldn't enforce clang-format to
>>> accept a PR. I think it's a good test to run to catch some clear formatting
>>> issues, but it shouldn't vote for patch acceptance.
>>>
>>> What do you think ?
>>>
>>>
>> One thing I have noticed is that, as long as some test is 'skipped', no one
>> bothers to check it. It would be great if the whole diff (in case of failure)
>> were posted as a comment, so we can consider it while merging. I would
>> request someone to invest time in posting the failure message as a comment back
>> to the issue from Jenkins if possible, and only later implement the skip behavior.
>> Otherwise, considering we have >10 people with the ability to merge patches,
>> many people may miss taking a look at clang-format issues.
>>
>
> I agree that it could be hard to enforce some rules, but what I'm seeing
> lately is that the clang-format version from Fedora 33 doesn't format the
> code the same way as a previous version with the same configuration in some
> cases (this also seems to happen with much older versions). This causes
> failures in the clang check that need manual modifications to update the
> patches.
>

OK, let's get moving with actual work rather than syntax. OK with skipping!


> Xavi
>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Automatic clang-format for GitHub PRs

2021-02-10 Thread Amar Tumballi
On Wed, Feb 10, 2021 at 3:29 PM Xavi Hernandez 
wrote:

> Hi all,
>
> I'm wondering if enforcing clang-format for all patches is a good idea...
>
> I've recently seen patches where clang-format is doing changes on parts of
> the code that have not been touched by the patch. Given that all files were
> already formatted by clang-format long ago, this shouldn't happen.
>
> This means that as the clang-format version evolves, the formatting with
> the same configuration is not the same. This introduces unnecessary noise
> to the file history that I think it should be avoided.
>
> Additionally, I've also seen some cases where some constructs are
> reformatted in an uglier or less clear way. I think it's very hard to come
> up with a set of rules that formats everything in the best possible way.
>
> For all these reasons, I would say we shouldn't enforce clang-format to
> accept a PR. I think it's a good test to run to catch some clear formatting
> issues, but it shouldn't vote for patch acceptance.
>
> What do you think ?
>
>
One thing I have noticed is that, as long as some test is 'skipped', no one
bothers to check it. It would be great if the whole diff (in case of failure)
were posted as a comment, so we can consider it while merging. I would
request someone to invest time in posting the failure message as a comment back
to the issue from Jenkins if possible, and only later implement the skip behavior.
Otherwise, considering we have >10 people with the ability to merge patches,
many people may miss taking a look at clang-format issues.

-Amar


> Regards,
>
> Xavi
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] New Testing Framework For Gluster

2021-02-09 Thread Amar Tumballi
While I am going through the document, I would like people to take a look
at what we had done -
https://kadalu.io/rfcs/0002-test-framework-binnacle.html

At least the goals look similar, i.e., simplify testing, make it language
agnostic, and also provide distributed-infra awareness. Our goal was to
keep the learning curve small for existing developers, with
compatibility with the existing test framework.

I'd be happy if we as a community can make this a collaborative effort. In
my experience, none of these efforts will be concluded in a few months; they
will need a year or more. And what we do now should stand the test of time,
and should be usable for the next 10+ years at least. So, take time, discuss
the pros and cons, and then get to implementation.

If everyone thinks the binnacle  idea is
good, we can work together on it. If there are concerns in the community
about kadalu owning the repo, I'm happy to consider hosting it under the
gluster/ org once contributions start coming in.

Regards,
Amar

On Tue, Feb 9, 2021 at 9:32 PM Barak Sason Rofman 
wrote:

> Greetings Gluster community,
>
> Following recent discussions on the effectiveness of the current
> functional test framework (glusto_test), I've prepared a doc specifying why
> in my opinion the glusto_test needs to be abandoned and a new testing
> framework should be created:
>
> https://docs.google.com/document/d/1g0jb04IN0ENic3CovCGJMILLQXMhnD930QeF1bzTlh4/edit?usp=sharing
>
> In addition, I've also prepared a doc detailing a design for a new testing
> framework:
>
> https://docs.google.com/document/d/1D8zUSmg-00ey711gsqvS6G9i_fGN2cE0EbG4u1TOsaQ/edit?usp=sharing
>
> Please share your thoughts and views on the matter.
> Regards.
> --
> *Barak Sason Rofman*
>
> Gluster Storage Development
>
> Red Hat Israel 
>
> 34 Jerusalem rd. Ra'anana, 43501
>
> bsaso...@redhat.com T: *+972-9-7692304*
> M: *+972-52-4326355*
> 
> 
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Repository access and review policy

2021-01-12 Thread Amar Tumballi
Add the reviewers in a comment for now!

(@amarts is me on GitHub.) Let's see how we can handle this better.

-Amar

On Tue, Jan 12, 2021 at 10:46 PM Dmitry Antipov  wrote:

> Is it possible to request reviews for pull requests from the persons
> without write access?
> Considering myself as an example, this may be
> https://github.com/gluster/glusterfs/pull/1953.
>
> Dmitry
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Toggle storage.linux-aio and volume restart

2020-12-08 Thread Amar Tumballi
It depends on whether posix_reconfigure() handles this option or not. If it
handles it, then we can use it without a volume restart.
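
As a rough illustration only (this is not the actual posix code; the private
struct and its field below are made up, and I am assuming the usual
GF_OPTION_RECONF convention used by xlators), the reconfigure path would need
something along these lines for a 'gluster volume set' to take effect without
a restart:

/* Sketch: assumes glusterfs' xlator reconfigure convention; the private
 * struct and field name are illustrative stand-ins, not the real posix ones. */
#include <glusterfs/xlator.h>
#include <glusterfs/options.h>

struct sketch_priv {
    gf_boolean_t aio_enabled;
};

int
reconfigure(xlator_t *this, dict_t *options)
{
    struct sketch_priv *priv = this->private;
    gf_boolean_t aio = _gf_false;

    /* Re-read the option from the new dict on every 'gluster volume set'. */
    GF_OPTION_RECONF("linux-aio", aio, options, bool, out);

    if (aio != priv->aio_enabled) {
        priv->aio_enabled = aio;
        /* The xlator would also have to switch its read/write fops here for
         * the toggle to really take effect without a volume restart. */
    }
    return 0;
out:
    return -1;
}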

-Amar

On Tue, Dec 8, 2020 at 6:59 PM Dmitry Antipov  wrote:

> Is it expected that toggling storage.linux-aio should take an effect
> immediately without
> doing the full volume restart (with 'gluster volume stop' and 'gluster
> volume start')?
>
> It seems that if the volume was started with AIO enabled, it's possible to
> disable it
> with 'gluster volume set XXX storage.linux-aio off', but reverting the
> latter back to
> 'on' makes no effect until volume restart.
>
> Dmitry
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Pull Request review workflow

2020-10-15 Thread Amar Tumballi
Thanks for taking time on this, and sending this note Xavi!

Some comments inline!

On Thu, Oct 15, 2020 at 4:03 PM Xavi Hernandez 
wrote:

> Hi all,
>
> after the recent switch to GitHub, I've seen that reviews that require
> multiple iterations are hard to follow using the old workflow we were using
> in Gerrit.
>
> Till now we basically amended the commit and pushed it again. Gerrit had a
> feature to calculate diffs between versions of the patch, so it was
> relatively easy to follow the changes between iterations (unless there was
> a big change in the base branch and the patch was rebased).
>
> In GitHub we don't have this feature (at least I haven't seen it). So I'm
> proposing to change this workflow.
>
> The idea is to create a PR with the initial commit. When a modification
> needs to be done as a result of the review, instead of amending the
> existing commit, we should create a new commit. From the review tool in
> GitHub it's very easy to check individual commits.
>
>
+1


> Once the review is finished, the patch will be merged with the "Squash and
> Merge" option, that will combine all the commits into a single one before
> merging, so the end result will be exactly the same we had with Gerrit.
>
>
+1
Just a note to the maintainers who are merging PRs: have patience and
check the commit message when there is more than one commit in the PR.


> Another thing to consider is that rfc.sh script always does a rebase before
> pushing changes. This rewrites history and changes all commits of a PR. I
> think we shouldn't do a rebase in rfc.sh. Only if there are conflicts, I
> would do a manual rebase and push the changes.
>
>
With the GitHub workflow, we don't need './rfc.sh' in my personal opinion. I
ported it to the new branch and GitHub only considering the number of
developers who are used to it. If you make the changes the GitHub way, then you
would have a separate branch per PR (i.e., per feature/bug), so you are on your
own to decide when to rebase.

> What do you think ?
>
>
I agree; we can remove the -f option of ./rfc.sh and also the rebase part in
./rfc.sh!

Regards,
Amar
--
https://kadalu.io


> Regards,
>
> Xavi
>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-infra] [Gluster-Maintainers] ACTION REQUESTED: Migrate your glusterfs patches from Gerrit to GitHub

2020-10-12 Thread Amar Tumballi
On Mon, 12 Oct, 2020, 8:08 pm sankarshan, 
wrote:

> It is perhaps on Amar to send the PR with the changes - but that would
> kind of make the approval/merge process a bit muddled? How about a PR
> being sent for review and then merged in?
>
> On Mon, 12 Oct 2020 at 19:22, Kaleb Keithley  wrote:
> >
> >
> >
> > On Thu, Oct 8, 2020 at 8:10 AM Kaleb Keithley 
> wrote:
> >>
> >> On Wed, Oct 7, 2020 at 7:33 AM Sunil Kumar Heggodu Gopala Acharya <
> shegg...@redhat.com> wrote:
> >>>
> >>>
> >>> Regards,
> >>>
> >>> Sunil kumar Acharya
> >>>
> >>>
> >>>
> >>>
> >>> On Wed, Oct 7, 2020 at 4:54 PM Kaleb Keithley 
> wrote:
> 
> 
> 
>  On Wed, Oct 7, 2020 at 5:46 AM Deepshikha Khandelwal <
> dkhan...@redhat.com> wrote:
> >
> >
> > - The "regression" tests would be triggered by a comment "/run
> regression" from anyone in the gluster-maintainers[4] github group. To run
> full regression, maintainers need to comment "/run full regression"
> >
> > [4] https://github.com/orgs/gluster/teams/gluster-maintainers
> 
> 
>  There are a lot of people in that group that haven't been involved
> with Gluster for a long time.
> >>>
> >>> Also there are new contributors, time to update!
> >>
> >>
> >> Who is going to do this? I don't have the necessary privs.
> >
> >
> > Anyone?
>

I will volunteer to do it, matching the content with the Maintainers file. IMO,
we should also fix non-existent emails in the Maintainers file.

Will take a look sometime tomorrow!

Regards
Amar


> >
> > --
> >
> > Kaleb
> > ___
> > maintainers mailing list
> > maintain...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
> --
> sankarshan mukhopadhyay
> 
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-infra
>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] ACTION REQUESTED: Migrate your glusterfs patches from Gerrit to GitHub

2020-10-07 Thread Amar Tumballi
Thanks for getting this done, Deepshika and Michael, and Infra team members.

Thanks everyone for valuable feedback during this process!

I personally hope this change will help the glusterfs project attract more
developers, and that we can engage more closely with them!

Note that this may result in weeks of confusion, questions, and workflow bugs!
We are trying to tune the workflow accordingly. Please try it
out and give feedback! Once we start using it we may figure out new things,
so jump in and give it a try!

In case of any issues, raise a GitHub issue, or find some of us on
gluster.slack.com

Regards,
Amar



On Wed, Oct 7, 2020 at 3:16 PM Deepshikha Khandelwal 
wrote:

> Hi folks,
>
> We have initiated the migration process today. All the patch owners are
> requested to move their existing patches from Gerrit[1] to Github[2].
>
> The changes we brought in with this migration:
>
> - The 'devel' branch[3] is the new default branch on GitHub to get away
> from master/slave language.
>
> - This 'devel' branch is the result of the merge of the current branch and
> the historic repository, thus requiring a new clone. It helps in getting
> the complete idea of tracing any changes properly to its origin to
> understand the intentions behind the code.
>
> - We have switched the glusterfs repo on gerrit to readonly state. So you
> will not be able to merge the patches on Gerrit from now onwards. Though we
> are not deprecating gerrit right now, we will work with the remaining
> users/projects to move to github as well.
>
> - Changes in the development workflow:
> - All the required smoke tests would be auto-triggered on submitting a
> PR.
> - Developers can retrigger the smoke tests using "/recheck smoke" as
> comment.
> - The "regression" tests would be triggered by a comment "/run
> regression" from anyone in the gluster-maintainers[4] github group. To run
> full regression, maintainers need to comment "/run full regression"
>
> For more information you can go through the contribution guidelines listed
> in CONTRIBUTING.md[5]
>
> [1] https://review.gluster.org/#/q/status:open+project:glusterfs
> [2] https://github.com/gluster/glusterfs
> [3] https://github.com/gluster/glusterfs/tree/devel
> [4] https://github.com/orgs/gluster/teams/gluster-maintainers
> [5] https://github.com/gluster/glusterfs/blob/master/CONTRIBUTING.md
>
> Please reach out to us if you have any queries.
>
> Thanks,
> Gluster-infra team
>


-- 
--
https://kadalu.io
Container Storage made easy!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 7.7

2020-08-03 Thread Amar Tumballi
Hi Hu Bert,

Thanks for letting us know about improvements.

Now, to check on the possible reason, the question is: from which version did
you upgrade to 7.7?

Thanks


On Mon, Aug 3, 2020 at 12:16 PM Hu Bert  wrote:

> Hi there,
>
> just wanted to say thanks to all the developers, maintainers etc. This
> release (7) has brought us a small but nice performance improvement.
> Utilization and IOs per disk decreased, latency dropped. See attached
> images.
>
> I read the release notes but couldn't identify the specific
> changes/features for this improvement. Maybe someone could point to
> them - but no hurry... :-)
>
>
> Best regards,
> Hubert
>
> Am Mi., 22. Juli 2020 um 18:27 Uhr schrieb Rinku Kothiya <
> rkoth...@redhat.com>:
> >
> > Hi,
> >
> > The Gluster community is pleased to announce the release of Gluster7.7
> (packages available at [1]).
> > Release notes for the release can be found at [2].
> >
> > Major changes, features and limitations addressed in this release:
> > None
> >
> > Please Note: Some of the packages are unavailable and we are working on
> it. We will release them soon.
> >
> > Thanks,
> > Gluster community
> >
> > References:
> >
> > [1] Packages for 7.7:
> > https://download.gluster.org/pub/gluster/glusterfs/7/7.7/
> >
> > [2] Release notes for 7.7:
> > https://docs.gluster.org/en/latest/release-notes/7.7/
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://bluejeans.com/441850968
> >
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>


-- 
--
https://kadalu.io
Container Storage made easy!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-infra] [gluster-infra] build-jobs repo is migrated to Github

2020-06-22 Thread Amar Tumballi
Thanks Deepshika and @misc,

This is a great step toward getting the glusterfs project migrated.

Regards,
Amar




On Mon, 22 Jun, 2020, 4:18 pm Deepshikha Khandelwal, 
wrote:

> Hi All,
>
> We have migrated the build-jobs repo from Gerrit[1] to Github[2]. It is a
> repository for automatically configuring Jenkins jobs of Gluster (and other
> Gluster related projects). The automation to update Jenkins and the review
> process is in place.
>
> The build-job repo on Gerrit is in 'Read Only' state now. So I request all
> the open and active PR[3] owners to switch it to Github pull requests. You
> just need to fork and send a PR.
>
> [1] https://review.gluster.org/#/admin/projects/build-jobs
> [2] https://github.com/gluster/build-jobs
> [3] https://review.gluster.org/#/q/project:build-jobs+status:pending
>
> Thanks,
> Deepshikha
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-infra
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Multithreaded Iterative Dir Tree Scan

2020-04-23 Thread Amar Tumballi
This looks like a good effort to pick up, Barak. A needed one indeed.

-Amar

On Mon, Mar 23, 2020 at 3:18 PM Barak Sason Rofman 
wrote:

> Hello everyone!
> Following a discussion I had with @Susant Palai some time ago, we have
> decided to look into an option to improve the rebalance process in the DHT
> layer by modifying the underlying mechanism. Currently, dir-tree crawling
> is done recursively, by a single thread, which is likely slow and also
> poses the risk of stack overflow. An iterative multithreaded solution might
> improve performance and also stability (by eliminating the risk of stack
> overflow). I have prepared a POC doc on the matter, including a sample
> implementation of the iterative multithreaded solution. The doc can be
> found at:
>
> https://docs.google.com/document/d/1JCl0T9zeagOcFFpgVQF8zNyhlR54VqkNAZ7TJb42egE/edit
>
> Apart
> from the rebalance process, maybe this approach can be useful for other
> use-cases where dir-tree crawl is being performed? Any comments on the
> concept, the design of the solution and the implementation are welcome.
>
> --
> *Barak Sason Rofman*
>
> Gluster Storage Development
>
> Red Hat Israel 
>
> 34 Jerusalem rd. Ra'anana, 43501
>
> bsaso...@redhat.com T: *+972-9-7692304*
> M: *+972-52-4326355*
> 
> ___
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
>
>
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Updating the repository's actual 'ACTIVE'ness status

2020-03-25 Thread Amar Tumballi
Hi all,

We have 101 repositories in the gluster org on GitHub. Only a handful of them
are being actively managed and progressing.

After seeing https://github.com/gluster/gluster-kubernetes/issues/644, I
feel we should at least keep the status of each project up to date in its
repository, so that users can move on to other repos if one is not maintained.
It saves time for them, and they wouldn't form a wrong opinion of the gluster
project. If they spend time setting one up, and later find that it's not
working and is not maintained, they would feel bad about the overall
project itself.

So my request to all repository maintainers is to mark such repositories as
'Archived', and update the README (or description) to reflect the same.

In any case, in the first week of April, we should actively mark repositories
as inactive if no activity is found in the last 15+ months. For other repos,
maintainers can take appropriate action.

Regards,
Amar
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] could you help to check about a glusterfs issue seems to be related to ctime

2020-03-16 Thread Amar Tumballi
On Tue, Mar 17, 2020 at 10:18 AM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:

> Hi glusterfs expert,
>
> Our product need to tolerate change date to future and then change back.
>
> How about change like this ?
>
>
> https://review.gluster.org/#/c/glusterfs/+/24229/1/xlators/storage/posix/src/posix-metadata.c
>
>
>
> when time change to future and change back , should still be able to
> update mdata, so the following changes to file can be populated to other
> clients.
>
>
>

We do like to have people integrating with GlusterFS. But this change is
not in line with the 'assumptions' we had about the feature.

If you have verified that this change works for you, please add it as an
'option' in posix, which can be changed through volume set, and keep this
option disabled/off by default. That should be an easier way to get the
patch reviewed and taken further. Please make sure to provide a detailed
'description' for the option.
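
For illustration, such an option would be declared in the posix xlator's
options table roughly as in the sketch below; the key name, default value and
description here are made up, only the overall volume_options shape follows
what existing options use:

/* Illustrative only: the key and description are hypothetical. */
#include <glusterfs/xlator.h>

struct volume_options sketch_options[] = {
    {
        .key = {"ctime-update-from-past"}, /* hypothetical option name */
        .type = GF_OPTION_TYPE_BOOL,
        .default_value = "off",            /* keep it disabled by default */
        .description = "Allow mdata/ctime updates even when the stored "
                       "timestamp is ahead of the current time (e.g. after "
                       "the system clock was moved to the future and back).",
    },
    {.key = {NULL}},
};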

Regards,
Amar



> cynthia
>
>
>
> *From:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Sent:* 12 March 2020 17:31
> *To:* 'Kotresh Hiremath Ravishankar' 
> *Cc:* 'Gluster Devel' 
> *Subject:* RE: could you help to check about a glusterfs issue seems to
> be related to ctime
>
>
>
> Hi,
>
> One more question, I find each client has the same future time stamp where
> are those time stamps from, since Since it is different from any brick
> stored time stamp. And after I modify files  from clients, it remains the
> same.
>
> [root@mn-0:/home/robot]
>
> # stat /mnt/export/testfile
>
>   File: /mnt/export/testfile
>
>   Size: 193 Blocks: 1  IO Block: 131072 regular file
>
> Device: 28h/40d Inode: 10383279039841136109  Links: 1
>
> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (
> 615/_nokfsuifileshare)
>
> Access: 2020-04-11 12:20:22.114365172 +0300
>
> Modify: 2020-04-11 12:20:22.121552573 +0300
>
> Change: 2020-04-11 12:20:22.121552573 +0300
>
>
>
> [root@mn-0:/home/robot]
>
> # date
>
> Thu Mar 12 11:27:33 EET 2020
>
> [root@mn-0:/home/robot]
>
>
>
> [root@mn-0:/home/robot]
>
> # stat /mnt/bricks/export/brick/testfile
>
>   File: /mnt/bricks/export/brick/testfile
>
>   Size: 193 Blocks: 16 IO Block: 4096   regular file
>
> Device: fc02h/64514dInode: 512015  Links: 2
>
> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (
> 615/_nokfsuifileshare)
>
> Access: 2020-04-11 12:20:22.100395536 +0300
>
> Modify: 2020-03-12 11:25:04.095981276 +0200
>
> Change: 2020-03-12 11:25:04.095981276 +0200
>
> Birth: 2020-04-11 08:53:26.805163816 +0300
>
>
>
>
>
> [root@mn-1:/root]
>
> # stat /mnt/bricks/export/brick/testfile
>
>   File: /mnt/bricks/export/brick/testfile
>
>   Size: 193 Blocks: 16 IO Block: 4096   regular file
>
> Device: fc02h/64514dInode: 512015  Links: 2
>
> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (
> 615/_nokfsuifileshare)
>
> Access: 2020-04-11 12:20:22.100395536 +0300
>
> Modify: 2020-03-12 11:25:04.094913452 +0200
>
> Change: 2020-03-12 11:25:04.095913453 +0200
>
> Birth: 2020-03-12 07:53:26.803783053 +0200
>
>
>
>
>
>
>
> *From:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Sent:* 12 March 2020 16:09
> *To:* 'Kotresh Hiremath Ravishankar' 
> *Cc:* Gluster Devel 
> *Subject:* RE: could you help to check about a glusterfs issue seems to
> be related to ctime
>
>
>
> Hi,
>
> This is abnormal test case, however, when this happened it will have big
> impact on the apps using those files. And this can not be restored
> automatically unless disable some xlator, I think it is unacceptable for
> the user apps.
>
>
>
>
>
> cynthia
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* 12 March 2020 14:37
> *To:* Zhou, Cynthia (NSB - CN/Hangzhou) 
> *Cc:* Gluster Devel 
> *Subject:* Re: could you help to check about a glusterfs issue seems to
> be related to ctime
>
>
>
> All the perf xlators depend on time (mostly mtime I guess). In my setup,
> only quick read was enabled and hence disabling it worked for me.
> All perf xlators needs to be disabled to make it work correctly. But I
> still failed to understand how normal this kind of workload ?
>
>
>
> Thanks,
> Kotresh
>
>
>
> On Thu, Mar 12, 2020 at 11:20 AM Zhou, Cynthia (NSB - CN/Hangzhou) <
> cynthia.z...@nokia-sbell.com> wrote:
>
> When disable both quick-read and performance.io-cache off everything is
> back to normal
>
> I attached the log when only enable quick-read and performance.io-cache is
> still on glusterfs trace log
>
> When execute command “cat /mnt/export/testfile”
>
> Can you help to find why this still to fail to show correct content?
>
> The file size showed is 141, but actually in brick it is longer than that.
>
>
>
>
>
> cynthia
>
>
>
>
>
> *From:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Sent:* 12 March 2020 12:53
> *To:* 'Kotresh Hiremath Ravishankar' 
> *Cc:* 'Gluster Devel' 
> *Subject:* RE: could you help to check about a glusterfs issue seems to
> be related to ctime
>
>
>
> From my local test only when 

Re: [Gluster-devel] ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-12 Thread Amar Tumballi
Hi Nikolov,



On Fri, Feb 7, 2020 at 12:29 AM Strahil Nikolov 
wrote:

> Hello List,
>
> Recently I had upgraded  my oVirt + Gluster  (v6.5 -> v6.6) and I hit an
> ACL bug , which forced me to upgrade to v7.0
>
> Another oVirt user also hit the bug when upgrading to v6.7  and he had to
> rebuild his gluster cluster.
>
> Sadly, the fun doesn't stop here. Last week I have tried  upgrading to
> v7.2  and again the ACL bug hit me. Downgrading to v7.1 doesn't help -  so
> I downgraded to 7.0 and everything is operational.
>
> The bug report for the last issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1797099
>
> I have 2 questions :
> 1. What is gluster's ACL evaluating/checking ? Is there an option to force
> gluster  not to support ACL at all ?
>

Most of the tests are from gluster's regression tests. Happy to discuss how
we can extend these further. I am not aware of an option to disable ACL, but I
will check, and if I find something, I will respond again.


>
> 2. Are you aware of the issue? That bug was supposed to be fixed a long
> time ago, yet the facts speak different.
>
I got to know about this issue because you brought it up in two more
emails. We will make sure we have something in Gluster 7.3.

Also, we are in the process of discussing a migration toward one place to
manage bugs and code, and thinking of migrating to GitHub completely.
Hopefully after that, we as members of the gluster community will be able
to triage issues properly.



> Thanks for reading this long post.
>
>
Thanks for writing the long post; it summarizes the pain properly, and I hope
developers learn from this to build better tests when they implement a
feature or bug fix.

Regards,


> Best Regards,
> Strahil Nikolov
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
--
https://kadalu.io
Container Storage made easy!
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Release-8] Thin-Arbiter: Unique-ID requirement

2020-02-04 Thread Amar Tumballi
On Tue, Jan 14, 2020 at 2:37 PM Atin Mukherjee 
wrote:

> From a design perspective 2 is a better choice. However I'd like to see a
> design on how cluster id will be generated and maintained (with peer
> addition/deletion scenarios, node replacement etc).
>
>
Thanks for the feedback Atin.


> On Tue, Jan 14, 2020 at 1:42 PM Amar Tumballi  wrote:
>
>> Hello,
>>
>> As we are gearing up for Release-8, and its planning, I wanted to bring
>> up one of my favorite topics, 'Thin-Arbiter' (or Tie-Breaker/Metro-Cluster
>> etc etc).
>>
>> We have made thin-arbiter release in v7.0 itself, which works great, when
>> we have just 1 cluster of gluster. I am talking about a situation which
>> involves multiple gluster clusters, and easier management of thin-arbiter
>> nodes. (Ref: https://github.com/gluster/glusterfs/issues/763)
>>
>> I am working with a goal of hosting a thin-arbiter node service (free of
>> cost), for which any gluster deployment can connect, and save their cost of
>> an additional replica, which is required today to not get into split-brain
>> situation. Tie-breaker storage and process needs are so less that we can
>> easily handle all gluster deployments till date in just one machine. When I
>> looked at the code with this goal, I found that current implementation
>> doesn't support it, mainly because it uses 'volumename' in the file it
>> creates. This is good for 1 cluster, as we don't allow duplicate volume
>> names in a single cluster, or OK for multiple clusters, as long as volume
>> names are not colliding.
>>
>> To resolve this properly we have 2 options (as per my thinking now) to
>> make it truly global service.
>>
>> 1. Add 'volume-id' option in afr volume itself, so, each instance picks
>> the volume-id and uses it in thin-arbiter name. A variant of this is
>> submitted for review - https://review.gluster.org/23723 but as it uses
>> volume-id from io-stats, this particular patch fails in case of brick-mux
>> and shd-mux scenarios.  A proper enhancement of this patch is, providing
>> 'volume-id' option in AFR itself, so glusterd (while generating volfiles)
>> sends the proper vol-id to instance.
>>
>> Pros: Minimal code changes to the above patch.
>> Cons: One more option to AFR (not exposed to users).
>>
>> 2. Add* cluster-id *to glusterd, and pass it to all processes. Let
>> replicate use this in thin-arbiter file. This too will solve the issue.
>>
>> Pros: A cluster-id is good to have in any distributed system, specially
>> when there are deployments which will be 3 node each in different clusters.
>> Identifying bricks, services as part of a cluster is better.
>>
>> Cons: Code changes are more, and in glusterd component.
>>
>> On another note, 1 above is purely for Thin-Arbiter feature only, where
>> as 2nd option would be useful in debugging, and other solutions which
>> involves multiple clusters.
>>
>> Let me know what you all think about this. This is good to be discussed
>> in next week's meeting, and taken to completion.
>>
>
After some more code reading, and thinking about possible solutions, I
found that there is another, simpler solution to get this resolved for
multiple clusters.

Currently, the thin-arbiter file name for a replica set is picked from the
3rd (i.e., index=2) entry of the 'pending-xattr' key in the volume file. If
we make that key unique (say, volume-id + index-of-replica-set), the problem
is solved. It needs minimal code change in glusterfs (actually, no code
change in the filesystem part at all, only in glusterd-volgen.c).
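
To make the idea concrete, here is a rough sketch of how the replicate
section of a generated client volfile could look under this scheme. The
volume/option names and the exact set of options are illustrative only (the
thin-arbiter specific options are omitted), not the exact glusterd-volgen
output:

    volume myvol-replicate-0
        type cluster/replicate
        # The 3rd (index=2) entry of this list is what thin-arbiter uses as
        # its file name on the tie-breaker node. Making it
        # "<volume-id>-replica-0" instead of the plain volume name keeps it
        # unique across clusters that share one thin-arbiter node.
        option afr-pending-xattr myvol-client-0,myvol-client-1,<volume-id>-replica-0
        subvolumes myvol-client-0 myvol-client-1
    end-volume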

I tried this approach while providing replica2 option
<https://kadalu.io/rfcs/0003-kadalu-thin-arbiter-support> of kadalu.io
project. The tests are running fine, and I got the expected goal met.



>  I am working with a goal of hosting a thin-arbiter node service (free of
> cost), for which any gluster deployment can connect, and save their cost of
> an additional replica, which is required today to not get into split-brain
> situation.



I am happy to report that this goal is achieved. We now have
`tie-breaker.kadalu.io:/mnt`, an instance in the cloud, for anyone trying
out a thin-arbiter. If you are not keen to deploy your own instance, you can
use this one as the thin-arbiter instance. Note that if you are using
glusterfs releases, you may want to wait for patch
https://review.gluster.org/24096 to make it into a release (probably
7.3/7.4) before using this in production; until then, the volume files
generated by glusterd volgen still use the volume name itself in
pending-xattr, so file collisions are possible.
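
For anyone wanting to try it, volume creation against the hosted node would
look roughly like this (host names and brick paths other than the
tie-breaker address are placeholders; verify the exact thin-arbiter syntax
against the docs of the release you run):

    gluster volume create myvol replica 2 thin-arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick1 \
        tie-breaker.kadalu.io:/mnt
    gluster volume start myvol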

Regards,


>> Regards,
>> Amar
>> ---
>> https://kadalu.io
>> Stora

Re: [Gluster-devel] WORM-Xlator: How to get filepath in worm_create_cbk?

2020-02-04 Thread Amar Tumballi
On Tue, Feb 4, 2020 at 7:16 PM David Spisla  wrote:

> Dear Gluster Community,
> in worm_create_cbk a file gets the xattr "trusted.worm_file" and
> "trusted.start_time" if worm-file-level is enabled. Now I want to exclude
> some files in a special folder from the WORM function. Therefore I want to
> check in worm_create_cbk if the file is in this folder or not. But I don't
> find a parameter where the filepath is stored. So my alternative solution
> was, to check it in worm_create (via loc->path) and store a boolean value
> in frame->local. This boolean value will be used in worm_create_cbk later.
> But its not my favourite solution.
>
>
Do you know how to get the filepath in the cbk function?
>
>
As per FS guidelines, inside the filesystem we need to handle inodes or
parent-inode + basename. If you are looking at building 'path' info in
create_cbk, then I recommend using inode_path() to build the path as per the
latest inode table information.
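
A minimal sketch of that suggestion (assuming the usual create cbk signature
and the standard worm xlator headers; error handling trimmed, and note that
if the inode is not yet linked in the inode table at this point,
inode_path() may only return a gfid-style path or fail):

    int32_t
    worm_create_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                    int32_t op_ret, int32_t op_errno, fd_t *fd, inode_t *inode,
                    struct iatt *buf, struct iatt *preparent,
                    struct iatt *postparent, dict_t *xdata)
    {
        char *path = NULL;

        if (op_ret >= 0 && inode_path(inode, NULL, &path) > 0) {
            /* 'path' is built from the latest inode table info; decide here
               whether the file falls under the excluded folder. */
            gf_log(this->name, GF_LOG_DEBUG, "created %s", path);
        }

        GF_FREE(path);
        /* ... continue with the existing WORM xattr handling and
           STACK_UNWIND ... */
        return 0;
    }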

-Amar


-- 
https://kadalu.io
Container Storage made easy!



Re: [Gluster-devel] Do we still support the '"GlusterFS 3.3" RPC client?

2020-01-22 Thread Amar Tumballi
On Thu, Jan 23, 2020 at 12:02 AM Yaniv Kaul  wrote:

> Or can we remove this code?
>

In my opinion this should stay for at least another year! Especially
because we still have products and user deployments which use 3.x versions.
While we as a community say we don't 'support' any old version, giving users
the option (or more time) to shift their clients to a higher version after
an upgrade is crucial for the project.

Also, unlike much of the other code we removed (for example, RDMA / Tiering),
this is not adding any complexity or dependency issues to the codebase.

I am *not* in favour of removing the old xdr format codebase yet. Happy to
hear what others think.

Regards,
Amar


> TIA,
> Y.
>
>

-- 
--
https://kadalu.io
Container Storage made easy!



[Gluster-devel] [Release-8] Planning Meeting update

2020-01-21 Thread Amar Tumballi
Hi All,

We successfully had our Release-8 planning meeting in APAC timezone.
Meeting minutes available here - https://hackmd.io/YB60uRCMQRC90xhNt4r6gA

It was well represented by many component maintainers and many developers
looked enthusiastic to make sure we have a good Release-8 out.

I agreed to setup another Planning meeting for NA/EMEA friendly hours in
coming week, so we can continue with development.

As the above hackmd page would see modification with more content for next
week, I am posting the meeting minutes as part of this email itself.

TL;DR;
-

Goal

A release of Gluster, which is stable, and provides the functionality to
users, as promised in the documentation. This release also should be a
stepping stone for GlusterX goal
<https://lists.gluster.org/pipermail/gluster-devel/2019-September/056606.html>
References

   -
   https://lists.gluster.org/pipermail/gluster-devel/2019-November/056709.html
   - https://github.com/gluster/glusterfs/milestone/10

Timeline

GA - April 15th, 2020.
RC - 31st March, 2020.
Beta - 2nd March, 2020.
Branching - 10th March, 2020.

What is targeted?

All targeted major issues/enhancements should have the tag of ‘Release-8’
<https://github.com/gluster/glusterfs/milestone/10> in github
<https://github.com/gluster/glusterfs/issues>
Features

The features which need to get into the release must have a Github issue
opened, and have the relevant milestone attached. This helps to track the
issue more closely. It is necessary to have an owner assigned for the
issue, so we are sure whom to ask for progress.

   - Non-Root GeoReplication (Aravinda)
     - Enabling root ssh is not favoured, so better to project non-root
       georep (through mount broker)
   - Debugging:
     - Error codes - Issue #280
       <https://github.com/gluster/glusterfs/issues/280> (Amar Tumballi)
     - Instrumentation
   - Logging:
     - Structured logging - Issue #657
       <https://github.com/gluster/glusterfs/issues/657> (Yati P)
   - Observability:
     - More documentation on how to use prometheus or other tools for
       monitoring.
     - gstatus <https://github.com/gluster/gstatus> Release 8 compatibility
   - Maintainability:
     - Code cleanup (more static functions etc.)
     - Function scope, extern function reduction etc.
     - Locked section reduction
   - New Idea Proposals:
     - vscan xlator <https://github.com/gluster/glusterfs/issues/265>
     - Add more [if you have any]

Bugs / Issues

A good goal is to have less than 200 bugs (<10% high / urgent severity),
and total issues open reduced to less than 200 too.

Current Status:

| Total Bugs <https://bugzilla.redhat.com/report.cgi?x_axis_field=bug_status_axis_field=component_axis_field=_redirect=1_format=report-table_status=__open__=table=wrap=GlusterFS> | 354 |
| Github issues <https://github.com/gluster/glusterfs/issues> | 434 |

   - Bugs to target for Release-8
  - https://bugzilla.redhat.com/show_bug.cgi?id=1793042 -
  WORM/write-behind.

How can we reduce these?

   - Issues sorted by least active
     <https://github.com/gluster/glusterfs/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-asc>:
     visit the top of that list, and if you can close an issue with a proper
     comment and the label WONTFIX, do so.
   - Bugs: if the bug is not critical and can be resolved with a proper
     comment, go ahead!

Notes from the meeting

Introduction by Amar; general setting of expectation from the Gluster 8
release in light of Gluster X discussion from the last meetup. Stability is
the focus. However, keeping documentation and codebase in sync helps our
users and is important to keep in mind.

There are a lot of steps to undertake in order to push the project towards
the intent of Gluster X: not breaking things, but making features and
capabilities progressive.

Network layer optimization is an improvement that would require intensive
effort to make possible. It should be discussed and planned but not
realistic for release 8.

[Yaniv] Mohit has an experimental patch for performance enhancement (ls -l
etc; anyone has link to the patch?)

Performance improvements are desirable as they have been present for quite
a while in Gluster.

[Aravinda] Path-based GeoReplication - the design is available. It has to
be made to work with sharding. The implementation is not ready yet to make
it a target for release 8; likely a release 9

Multiple set of activities around the topic of Global Threadpool. Amar
hasn’t observed many patches in recent times. Not a release 8

[Yaniv] 3k-4k support has some incremental work available. But a complete
set of work required to get it in release 8 is not yet available. [Amar]
best to think about using some label based tracking mechanism to see how to
test it as entire capability.

RIO has a lot of work at this point and there is not enough full time work
being undertaken on this. Amar has a new approach
<https://docs.google.co

[Gluster-devel] [Release-8] Checking on the work done for reflink (#349)

2020-01-21 Thread Amar Tumballi
Hi Raghavendra, Pranith,

I have ambitiously tagged Issue #349
<https://github.com/gluster/glusterfs/issues/349> for Release-8. This was
mainly done because there was some work done in that regard last year. I
wanted to understand if it is possible to take it to a basic level of
completion, so it can make it in as an experimental new feature in Release-8.

It would be a nice feature to have in the project, so we can experiment
further on it this year and have it ready by GlusterX.

Regards,
Amar



[Gluster-devel] [Release-8] Thin-Arbiter: Unique-ID requirement

2020-01-14 Thread Amar Tumballi
Hello,

As we are gearing up for Release-8, and its planning, I wanted to bring up
one of my favorite topics, 'Thin-Arbiter' (or Tie-Breaker/Metro-Cluster etc
etc).

We shipped thin-arbiter in the v7.0 release itself, and it works great when
we have just one gluster cluster. I am talking about a situation which
involves multiple gluster clusters, and easier management of thin-arbiter
nodes. (Ref: https://github.com/gluster/glusterfs/issues/763)

I am working with the goal of hosting a thin-arbiter node service (free of
cost), to which any gluster deployment can connect, saving the cost of the
additional replica which is required today to avoid split-brain situations.
Tie-breaker storage and processing needs are so small that we could easily
handle all gluster deployments to date on just one machine. When I looked at
the code with this goal, I found that the current implementation doesn't
support it, mainly because it uses the 'volumename' in the file it creates.
This is fine for one cluster, as we don't allow duplicate volume names in a
single cluster, and OK for multiple clusters as long as volume names don't
collide.

To resolve this properly, we have 2 options (as per my thinking now) to make
it a truly global service.

1. Add a 'volume-id' option in the afr volume itself, so each instance picks
up the volume-id and uses it in the thin-arbiter file name. A variant of this
is submitted for review - https://review.gluster.org/23723 - but as it takes
the volume-id from io-stats, that particular patch fails in brick-mux and
shd-mux scenarios. A proper enhancement of the patch is to provide the
'volume-id' option in AFR itself, so glusterd (while generating volfiles)
sends the proper vol-id to each instance.

Pros: Minimal code changes to the above patch.
Cons: One more option to AFR (not exposed to users).

2. Add a *cluster-id* to glusterd, and pass it to all processes. Let
replicate use this in the thin-arbiter file. This too will solve the issue.

Pros: A cluster-id is good to have in any distributed system, especially
when there are deployments with 3 nodes each in different clusters.
Identifying bricks and services as part of a cluster is better.

Cons: Code changes are more, and in glusterd component.

On another note, option 1 above is purely for the Thin-Arbiter feature,
whereas the 2nd option would also be useful in debugging and in other
solutions which involve multiple clusters.

Let me know what you all think about this. It would be good to discuss this
in next week's meeting and take it to completion.

Regards,
Amar
---
https://kadalu.io
Storage made easy for k8s



Re: [Gluster-devel] Following up on the Gluster 8 conversation from the community meeting

2020-01-13 Thread Amar Tumballi
On Fri, Jan 10, 2020 at 10:55 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> I had mentioned that I'll reach out to Amar to urge about making
> progress on Gluster 8. We did have a conversation and upon reviewing
> the present state of the list of issues/features with the release
> label, Amar would be setting up a meeting to obtain a more pragmatic
> and reality based plan. It is somewhat obvious that a number of items
> listed are not feasible within the current release timeline. Please
> wait for the meeting request from Amar.
>

Sankarshan,

Thanks for reviving the thread. As promised in
https://lists.gluster.org/pipermail/gluster-devel/2019-November/056709.html I
created a milestone @ https://github.com/gluster/glusterfs/milestone/10 and
added few possible issues which people can pick up to it.

After the community meeting tomorrow, I will send another email on the same.

Regards,
Amar
---
https://kadalu.io





Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Amar Tumballi
On Thu, Jan 9, 2020 at 2:33 PM Xavi Hernandez  wrote:

> On Thu, Jan 9, 2020 at 9:44 AM Amar Tumballi  wrote:
>
>>
>>
>> On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez 
>> wrote:
>>
>>> On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul  wrote:
>>>
>>>> I could not find a relevant use for them. Can anyone enlighten me?
>>>>
>>>
>>> I'm not sure why they are needed. They seem to be used to keep the
>>> unserialized version of a dict around until the dict is destroyed. I
>>> thought this could be because we were using pointers to the unserialized
>>> data inside dict, but that's not the case currently. However, checking very
>>> old versions (pre 3.2), I see that dict values were not allocated, but a
>>> pointer to the unserialized data was used.
>>>
>>
>> Xavi,
>>
>> While you are right about the intent, it is used still, at least when I
>> grepped latest repo to keep a reference in protocol layer.
>>
>> This is done to reduce a copy after the dictionary's binary content is
>> received from RPC. The 'extra_free' flag is used when we use a
>> GF_*ALLOC()'d buffer in protocol to receive dictionary, and extra_stdfree
>> is used when RPC itself allocates the buffer and hence uses 'free()' to
>> free the buffer.
>>
>
> I don't see it. When dict_unserialize() is called, key and value are
> allocated and copied, so  why do we need to keep the raw data after that ?
>
> In 3.1 the value was simply a pointer to the unserialized data, but
> starting with 3.2, value is memdup'ed. Key is always copied. I don't see
> any other reference to the unserialized data right now. I think that
> instead of assigning the raw data to extra_(std)free, we should simply
> release that memory and remove those fields.
>
> Am I missing something else ?
>

I grepped for 'extra_stdfree' and 'extra_free' and saw that a lot of the
handshake and protocol code seems to use them. I haven't gone deeper to
check which parts.

[amar@kadalu glusterfs]$ git grep extra_stdfree | wc -l
40
[amar@kadalu glusterfs]$ git grep extra_free | wc -l
5


>
>
>>
>>> I think this is not needed anymore. Probably we could remove these
>>> fields if that's the only reason.
>>>
>>
>> If keeping them is hard to maintain, we can add few allocation to remove
>> those elements, that shouldn't matter much IMO too. We are not using
>> dictionary itself as protocol now (which we did in 1.x series though).
>>
>> Regards,
>> Amar
>> ---
>> https://kadalu.io
>>
>>
>>
>>> TIA,
>>>> Y.



Re: [Gluster-devel] What do extra_free and extrastd_free params do in the dictionary object?

2020-01-09 Thread Amar Tumballi
On Thu, Jan 9, 2020 at 1:38 PM Xavi Hernandez  wrote:

> On Sun, Dec 22, 2019 at 4:56 PM Yaniv Kaul  wrote:
>
>> I could not find a relevant use for them. Can anyone enlighten me?
>>
>
> I'm not sure why they are needed. They seem to be used to keep the
> unserialized version of a dict around until the dict is destroyed. I
> thought this could be because we were using pointers to the unserialized
> data inside dict, but that's not the case currently. However, checking very
> old versions (pre 3.2), I see that dict values were not allocated, but a
> pointer to the unserialized data was used.
>

Xavi,

While you are right about the intent, it is still used, at least in the
protocol layer to keep a reference, going by a grep of the latest repo.

This is done to avoid a copy after the dictionary's binary content is
received from RPC. The 'extra_free' field is used when we use a
GF_*ALLOC()'d buffer in the protocol layer to receive the dictionary, and
extra_stdfree is used when RPC itself allocates the buffer, which therefore
has to be released with 'free()'.
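
For reference, the pattern in question looks roughly like this (a sketch of
typical protocol-side usage; the rsp.xdata field names stand in for the XDR
response structs and are not copied from a specific fop):

    dict_t *xdata = NULL;
    int ret = -1;

    xdata = dict_new();
    if (xdata) {
        /* dict_unserialize() copies keys and values out of the raw buffer */
        ret = dict_unserialize(rsp.xdata.xdata_val, rsp.xdata.xdata_len,
                               &xdata);
        if (ret == 0) {
            /* The raw buffer was allocated by the XDR/RPC layer with
               malloc(), so it has to go through free(); parking it on
               extra_stdfree makes dict_unref() release it when the dict is
               destroyed, instead of the call site freeing it right after
               unserialize. */
            xdata->extra_stdfree = rsp.xdata.xdata_val;
        }
    }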


> I think this is not needed anymore. Probably we could remove these fields
> if that's the only reason.
>

If keeping them is hard to maintain, we can add a few allocations and remove
those fields; that shouldn't matter much IMO. We are not using the dictionary
itself as the protocol anymore (which we did in the 1.x series).

Regards,
Amar
---
https://kadalu.io



> TIA,
>> Y.



Re: [Gluster-devel] Infra & Tests: Few requests

2020-01-03 Thread Amar Tumballi
On Fri, Jan 3, 2020 at 10:10 PM Yaniv Kaul  wrote:

>
>
> On Fri, Jan 3, 2020 at 4:07 PM Amar Tumballi  wrote:
>
>> Hi Team,
>>
>> First thing first - Happy 2020 !! Hope this year will be great for all of
>> us :-)
>>
>> Few requests to begin the new year!
>>
>> 1. Lets please move all the fedora builders to F31.
>>- There can be some warnings with F31, so we can start with 'skip'
>> mode and once fixed enabled to vote.
>>
>
> I use F31 and things seem OK.
> Probably worth adding / moving to CentOS 8 (streams?!).
>

The reason we added the fedora smoke job was to use the latest gcc, so we
can prevent newer warnings/errors from getting in.

If CentOS 8 gets us newer gcc, I don't see an issue with changing the
fedora-smoke to centos8-smoke.


>
>
>> 2. Failures to smoke due to devrpm-el.
>>- I was not able to get the info just by looking at console logs, and
>> other things. It is not the previous glitches of Build root locked by
>> another process error.
>>- Would be great to get this resolved, so we can merge some good
>> patches.
>>
>
> It's being worked on.
>
>>
>> 3. Random failures in centos-regression.
>>- Again, I am not sure if someone looking into this.
>>- I have noticed tests like 
>> './tests/basic/distribute/rebal-all-nodes-migrate.t'
>> etc failing on few machines.
>>
>
> Same, I think.
> I know a patch of mine broke CI - but was reverted - do you still see
> regular failures, or just random? Per node?
>

Not regular failures. Random.


> Y.
>
>>
>> Thanks in advance!
>>
>> Regards,
>> Amar
>>
>>



[Gluster-devel] Infra & Tests: Few requests

2020-01-03 Thread Amar Tumballi
Hi Team,

First thing first - Happy 2020 !! Hope this year will be great for all of
us :-)

Few requests to begin the new year!

1. Let's please move all the fedora builders to F31.
   - There can be some warnings with F31, so we can start in 'skip' mode
and, once those are fixed, enable it to vote.

2. Failures in smoke due to devrpm-el.
   - I was not able to get the info just by looking at console logs and
other artifacts. It is not the previous glitch of the 'Build root locked by
another process' error.
   - It would be great to get this resolved, so we can merge some good patches.

3. Random failures in centos-regression.
   - Again, I am not sure if someone is looking into this.
   - I have noticed tests like
'./tests/basic/distribute/rebal-all-nodes-migrate.t'
failing on a few machines.

Thanks in advance!

Regards,
Amar



Re: [Gluster-devel] [Gluster-users] periodical warnings in brick-log after upgrading to gluster 7.1

2019-12-30 Thread Amar Tumballi
On Sun, Dec 29, 2019 at 9:50 PM Michael Böhm 
wrote:

> Hello,
>
> i've finally upgraded our gluster-server to 7.1 (3.8 debian stretch ->
> gluster-repo 3.12 -> gluster-repo 7.1) and it actually went real smooth.
> There is just that warning in all brick-logs and every server every 15min i
> can't really explain myself.
>
> [2019-12-29 05:51:17.807293] W [socket.c:774:__socket_rwv]
> 0-tcp.store4-server: readv on 192.168.31.60:49077 failed (no data
> available)
> [2019-12-29 06:00:47.886369] W [socket.c:774:__socket_rwv]
> 0-tcp.store4-server: readv on 192.168.31.61:49140 failed (no data
> available)
> [2019-12-29 06:06:18.204778] W [socket.c:774:__socket_rwv]
> 0-tcp.store4-server: readv on 192.168.31.60:49077 failed (no data
> available)
> [2019-12-29 06:15:47.317705] W [socket.c:774:__socket_rwv]
> 0-tcp.store4-server: readv on 192.168.31.61:49148 failed (no data
> available)
>
> .60 and .61 are two servers with bricks for gluster volume "store4". They
> are not only the gluster-server, they also are clients that mount the
> gluster volumes. (replicated gluster volume, that hosts kvm images that run
> locally on the server)
>
> Everything really looks good no other errors or warning - just the lines
> mentioned above. Is this something i should be worried about or can resolve
> somehow?
>
> Of course i looked through the web, unfortunately i only really could find
> one guy that had the same problem but no following messages [1]. And the
> best explanation i could find was in a bug report [2], so it seems it's not
> unusual i just don't like ignoring warnings.
>
>
Your concern is valid, and if the log level is WARNING, it shouldn't be
ignored :-)

I noticed that this happens when a client disconnects from the server
(especially after umount). Is there any periodic umount happening? Or a
monitoring system running which would do a temporary mount and check for
size etc.?

Regards,
Amar
--
https://kadalu.io


Anyone here that has similiar warning or could explain what i can do about
> it?
>
>


> Thanks Mika
>
> [1] https://lists.gluster.org/pipermail/gluster-users/2015-May/021971.html
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1385525#c3
> 
>



Re: [Gluster-devel] Expanding Volume problem in old release 3.12.2

2019-12-28 Thread Amar Tumballi
After expanding a volume, you need to run rebalance (at least fix-layout) to
get files distributed onto the newly added bricks.
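
Something along these lines (the volume name is a placeholder; fix-layout
only adjusts the directory layouts so new files land on the new bricks,
while a full rebalance also migrates existing data):

    # spread the layout over the new bricks (new files start using them)
    gluster volume rebalance <volname> fix-layout start

    # optionally, also migrate existing files onto the new bricks
    gluster volume rebalance <volname> start

    # check progress
    gluster volume rebalance <volname> status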

Regards,
Amar
--
https://kadalu.io


On Wed, 25 Dec, 2019, 9:34 AM PSC, <1173701...@qq.com> wrote:

> Hello,
> I have a glusterfs deployment, the version of which is 3.12.2.  When I
> expanded a volume, I encountered with a problem that new files wrote to
> volume will not appear on those bricks newly added to the volume. I mean
> new files will only wrote to original bricks of that volume (bricks before
> expand). Those new bricks will never used.
> I made some test on the latest version of glusterfs, this problem
> disappeared. However, for stability concerning, I cannot upgrade to new
> version. I plan to made a patch on the old 3.12.2 glusterfs, just fix this
> one bug. Would you please tell me some information about it?
>
> Thank you very much!



[Gluster-devel] Gluster Release 8.0: Call for proposals

2019-11-28 Thread Amar Tumballi
Hello everyone,

For a change, you are seeing me posting the release planning email, instead
of Shyam (and recently Rinku/Hari). Just today, I summed up my thoughts on
Gluster’s release planning in my blog [1]. In continuation of that, I am
asking you to provide your valuable feedback on what features each of you
would like to see, and what work you or your team are willing to accomplish
by release-8. It is OK to start proposing ideas which are more long term; we
can discuss how to break them down too.

Unlike previous planning, we are not restricting the planning to only
GlusterFS. As a project, we would be usable only when there are enough
support systems around, from the angle of deployment, testing, monitoring,
debugging and others. So, planning for a single repository (or a project)
is not sufficient in the real world. We need other things to help here too.
It would be great if you pitch in with your thoughts on these.

Also note that, earlier, we briefly talked about GlusterX planning here
[3][4], and it is a really good idea to work together in that direction as a
community, so we can build up enough confidence to address some of the
long-pending challenges, which may need incompatible changes too. I believe
that if there are plans for proper migration, the gains are going to be
significant, and our users will cooperate with those changes.

As part of this process, I have created github Milestone (Release 8) and
Project lane in our github [2]. Start proposing your enhancements.

[1] - https://www.gluster.org/blog-planning-ahead-for-gluster-releases/

[2] - https://github.com/gluster/glusterfs/projects/1

[3] -
https://lists.gluster.org/pipermail/gluster-devel/2019-September/056606.html

[4] -
http://lists.gluster.org/pipermail/gluster-devel/attachments/20190927/9f4f76b7/attachment-0001.pdf


PS: Currently the scheduled time for the release is *March 18th, 2020*.
Depending on the proposals and discussions, we can adjust the schedule a bit
(but not by a huge margin). Please submit your ideas by *Dec 10th*, so that
before the holidays start in most parts of the world, we can have the plan
for Release-8 ready.

Regards,
Amar



Re: [Gluster-devel] Modifying gluster's logging mechanism

2019-11-26 Thread Amar Tumballi
Hi Barak,

My replies inline.

On Thu, Nov 21, 2019 at 6:34 PM Barak Sason Rofman 
wrote:

> Hello Gluster community,
>
> My name is Barak and I’ve joined RH gluster development in August.
> Shortly after my arrival, I’ve identified a potential problem with
> gluster’s logging mechanism and I’d like to bring the matter up for
> discussion.
>
> The general concept of the current mechanism is that every worker thread
> that needs to log a message has to contend for a mutex which guards the log
> file, write the message and, flush the data and then release the mutex.
> I see two design / implementation problems with that mechanism:
>
>1.
>
>The mutex that guards the log file is likely under constant contention.
>2.
>
>The fact that each worker thread perform the IO by himself, thus
>slowing his "real" work.
>
>
Both the above points are true, and they can have an impact when there is a
lot of logging. While some of us would say we knew the impact, we had not
picked this up as a priority item to fix, for the reasons below.

* First of all, when we looked at logging very early in the project's life,
our idea was based mostly on kernel logs (/var/log/messages). We decided
that, as a filesystem which is very active with I/O and should run for years
without failing, there should be NO log messages when the system is healthy,
which should be 99%+ of the time.

* Now, if there are no logs when everything is healthy, and most things are
healthy 99% of the time, the focus naturally was not the 'performance' of
the logging infra but its correctness. This is where the strict ordering
through locks, to preserve log timestamps and keep them organized, came
from.
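
For readers new to that part of the code, the serialized write path being
discussed boils down to something like this (an illustrative sketch, not the
actual gf_log() implementation):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
    static FILE *log_file;                /* opened once at init time */

    static void
    log_msg(const char *msg)
    {
        pthread_mutex_lock(&log_lock);    /* every thread contends here */
        fprintf(log_file, "%s\n", msg);   /* the worker does the I/O ... */
        fflush(log_file);                 /* ... and the flush itself */
        pthread_mutex_unlock(&log_lock);
    }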


> Initial tests, done by *removing logging from the regression testing,
> shows an improvement of about 20% in run time*. This indicates we’re
> taking a pretty heavy performance hit just because of the logging activity.
>
>
That is an interesting observation. For this alone, can we have an option to
disable all logging during regression? That would speed things up for normal
runs immediately.


> In addition to these problems, the logging module is due for an upgrade:
>
>1.
>
>There are dozens of APIs in the logger, much of them are deprecated -
>this makes it very hard for new developers to keep evolving the project.
>2.
>
>One of the key points for Gluster-X, presented in October at
>Bangalore, is the switch to a structured logging all across gluster.
>
>
>
+1


> Given these points, I believe we’re in a position that allows us to
> upgrade the logging mechanism by both switching to structured logging
> across the project AND replacing the logging system itself, thus “killing
> two birds with one stone”.
>
> Moreover, if the upgrade is successful, the new logger mechanism might be
> adopted by other teams in Red Hat, which lead to uniform logging activity
> across different products.
>
>
This, in my opinion, is a good reason to undertake this activity, mainly
because our logging infra should be similar to that of other tools, and one
shouldn't need a learning curve just to understand gluster's logging.


> I’d like to propose a logging utility I’ve been working on for the past
> few weeks.
> This project is still a work in progress (and still much work needs to be
> done in it), but I’d like to bring this matter up now so if the community
> will want to advance on that front, we could collaborate and shape the
> logger to best suit the community’s needs.
>
> An overview of the system:
>
> The logger provides several (number and size are user-defined)
> pre-allocated buffers which threads can 'register' to and receive a private
> buffer. In addition, a single, shared buffer is also pre-allocated (size is
> user-defined). The number of buffers and their size is modifiable at
> runtime (not yet implemented).
>
> Worker threads write messages in one of 3 ways that will be described
> next, and an internal logger threads constantly iterates the existing
> buffers and drains the data to the log file.
>
> As all allocations are allocated at the initialization stage, no special
> treatment it needed for "out of memory" cases.
>
> The following writing levels exist:
>
>1.
>
>Level 1 - Lockless writing: Lockless writing is achieved by assigning
>each thread a private ring buffer. A worker threads write to that buffer
>and the logger thread drains that buffer into a log file.
>
> In case the private ring buffer is full and not yet drained, or in case
> the worker thread has not registered for a private buffer, we fall down to
> the following writing methods:
>
>1.
>
>Level 2 - Shared buffer writing: The worker thread will write it's
>data into a buffer that's shared across all threads. This is done in a
>synchronized manner.
>
> In case the private ring buffer is full and not yet drained AND the shared
> ring buffer is full and not yet drained, or in case the worker thread has
> not registered for a 

Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-25 Thread Amar Tumballi
Responses inline.

On Fri, Nov 22, 2019 at 6:04 PM Niels de Vos  wrote:

> On Thu, Nov 21, 2019 at 04:01:23PM +0530, Amar Tumballi wrote:
> > Hi All,
> >
> > As per the discussion on https://review.gluster.org/23645, recently we
> > changed the status of gNFS (gluster's native NFSv3 support) feature to
> > 'Depricated / Orphan' state. (ref:
> > https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189
> ).
> > With this email, I am proposing to change the status again to 'Odd Fixes'
> > (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> I'd recommend against re-surrecting gNFS. The server is not very
> extensible and adding new features is pretty tricky without breaking
> other (mostly undocumented) use-cases.


I too am against adding features/enhancements to gNFS. It doesn't make
sense: we are removing features from glusterfs itself, so adding features to
gNFS after 3 years wouldn't even be feasible.

I guess you missed the intention of my proposal. It was not about
'resurrecting' gNFS to 'Maintained' or 'Supported' status. It was about
taking it out of 'Orphan' status, because there are still users who are
'happy' with it. Hence I picked the status 'Odd Fixes'; as per the
MAINTAINERS file, there was nothing else which would convey *'this feature
is still shipped, but we are not adding any features or actively maintaining
it'*.



> Eventhough NFSv3 is stateless,
> the actual usage of NFSv3, mounting and locking is definitely not. The
> server keeps track of which clients have an export mounted, and which
> clients received grants for locks. These things are currently not very
> reliable in combination with high-availability. And there is also the by
> default disabled duplicate-reply-cache (DRC) that has always been very
> buggy (and neither cluster-aware).
>
> If we enable gNFS by default again, we're sending out an incorrect
> message to our users. gNFS works fine for certain workloads and
> environments, but it should not be advertised as 'clustered NFS'.
>
>
I wasn't talking about, or intending to go, this route. I am not even
talking about making gNFS enabled by default. That would take away our focus
from glusterfs and the different things we can solve with Gluster alone. Not
sure why my email was taken to mean there would be a focus on gNFS.


> Instead of going the gNFS route, I suggest to make it easier to deploy
> NFS-Ganesha as that is a more featured, well maintained and can be
> configured for much more reliable high-availability than gNFS.
>
>
I believe this is critical, and we surely need to work on it. But doesn't
come in the way of doing 1-2 bug fixes in gNFS (if any) in a release.


> If someone really wants to maintain gNFS, I won't object much, but they
> should know that previous maintainers have had many difficulties just
> keeping it working well while other components evolved. Addressing some
> of the bugs/limitations will be extremely difficult and may require
> large rewrites of parts of gNFS.
>

Yes, that awareness is critical, and it should exist.


> Until now, I have not read convincing arguments in this thread that gNFS
> is stable enough to be consumed by anyone in the community. Users should
> be aware of its limitations and be careful what workloads to run on it.
>

In this thread, Xie mentioned that he has been managing gNFS on 1000+
servers with 2000+ clients (more than 24 gluster clusters overall) for more
than 2 years now. If that doesn't count as 'stability', I am not sure what
does.

I agree that users should be careful about the proper use case for gNFS. I
am even open to adding a warning or console log in the gluster CLI when
'gluster volume set <volname> nfs.disable false' is performed, saying it is
advisable to move to the NFS-Ganesha based approach, and giving a URL in
that message. But the whole point is: when we make a release, we should
still ship gNFS, as there are some users who are very happy with gNFS and
whose use cases are properly handled by gNFS in its current form. Why make
them unhappy, or push them to other projects?

At the end of the day, as developers it is our duty to suggest the best
technologies to users, but the intention should always be to solve problems.
If problems are already solved, why resurface them in the name of better
technology?

So, again, my proposal is to keep gNFS in the codebase (not as Orphan) and to
continue shipping the gNFS binary when we make releases, not to make gNFS
enhancements a focus of the project.

Happy to answer if anyone has further queries.

I have sent a patch https://review.gluster.org/23738 for the same, and I
see people commenting already on that. I agree that Xie's contributions to
Gluster may need to increase (specifically in gNFS component) to be called
as MAINTAINER

[Gluster-devel] Fwd: Proposal to change gNFS status

2019-11-21 Thread Amar Tumballi
Hi Markus,

Looks like your email got bounced as you were not a member of the list. I
got the email as I had you in Cc earlier (through the github communications).

Forwarding it to the list so everyone gets your email.


Also, Yaniv, about the name *Odd Fixes*: I am not a big fan either, but out
of the existing options in the MAINTAINERS file, that was the best suited.
If we have to change the name, *'Maintained, but no enhancements'* would
give very clear messaging IMO. Happy to hear what people think.

-Amar

-- Forwarded message -
From: Markus Seywald 
Date: Thu, Nov 21, 2019 at 4:48 PM
Subject: RE: Proposal to change gNFS status

Hi Everyone,



Just some feedback from one user's end.



We have used gNFS with NFSv3 for some years now. 3.12.15 has some bugs, but
it runs the most stably in comparison to older versions.



We were never able to get NFS-Ganesha stable over the years. NFS-Ganesha is
very unstable once it is used with VMware (6.0/6.7) - unstable in a way that
makes it unusable, even 2.8.x. The idea of using NFS 4.1 with VMware was a
great thought, but it has never worked so far. VMware just supports 'session
trunking' and not pNFS in general; they don't support non-vSAN setups bound
to the open-source community. I am not sure how other hypervisors work with
it.



Also, I think there is a big complication when it comes to issues once two
open-source products are in use. When an issue appears and it is not clear
which end is the root cause, it can be an extensive process to get a fix.



So, looking forward, on my end I would really appreciate it if gNFS were
part of Gluster as a default package, as it is the simplest way of providing
people an easy NFS solution. The glusterfs client is not supported or
available for every OS. It would be good to provide fixes when there are
bugs with newer glusterfs versions with gNFS in use (like #764, #765).



Also, I would be more than happy if the devs would put a deeper focus on the
support matrix with hypervisors and make it stable for the most common ones
like VMware, and maybe Hyper-V/KVM as well (nfs-ganesha, gluster, ...).



Reason why I’am thinking this Way from User-End is following:



Markets are moving very, very fast. Everything needs to get more stable,
more efficient and more flexible with the smallest effort possible. This
also applies to costs, which is why open source is getting more and more
interesting overall. Changing products causes the opposite situation: the
biggest kickback comes when products get changed, leading to the risk that
use cases stop working or that lots of bugs/issues arise. That can happen
with fully paid products as well, but there is then a big support channel
behind them.



The top leaders in the virtualization sector are also focusing on Kubernetes
and on combining stacks for easier usage.



So I think it would be really good if gluster and NFS-Ganesha also got fully
adopted for virtualization solutions, as this world is moving more and more
into these layers, and we will see more activity in this direction in the
future.



I understand these may just be some words from one user, but maybe others
are thinking similarly.



Appreciated and Thx for Reading.



Max



*From:* Amar Tumballi 
*Sent:* Donnerstag, 21. November 2019 11:31
*To:* GlusterFS Maintainers 
*Cc:* Gluster Devel ; Gluster users maillist <
gluster-us...@gluster.org>; xiechanglong.d 
*Subject:* Proposal to change gNFS status



Hi All,



As per the discussion on https://review.gluster.org/23645, recently we
changed the status of gNFS (gluster's native NFSv3 support) feature to
'Depricated / Orphan' state. (ref:
https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
With this email, I am proposing to change the status again to 'Odd Fixes'
(ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)



TL;DR;



I understand the current maintainers are not able to focus on maintaining
it as the focus of the project, as earlier described, is keeping
NFS-Ganesha based integration with glusterfs. But, I am volunteering along
with Xie Changlong (currently working at Chinamobile), to keep the feature
running as it used to in previous versions. Hence the status of 'Odd
Fixes'.



Before sending the patch to make these changes, I am proposing it here now,
as gNFS is not even shipped with latest glusterfs-7.0 releases. I have
heard from some users that it was working great for them with earlier
releases, as all they wanted was NFS v3 support, and not much of features
from gNFS. Also note that, even though the packages are not built, none of
the regression tests using gNFS are stopped with latest master, so it is
working same from at least last 2 years.



I request the package maintainers to please add '--with gnfs' (or
--enable-gnfs) back to their release script through this email, so those
users wanting to use gNFS happily can continue to use it. Also points to
users/admins is that, the status is 'Odd Fixes', so don't expect any
'enhancements' on 

[Gluster-devel] Proposal to change gNFS status

2019-11-21 Thread Amar Tumballi
Hi All,

As per the discussion on https://review.gluster.org/23645, we recently
changed the status of gNFS (gluster's native NFSv3 support) to the
'Deprecated / Orphan' state (ref:
https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
With this email, I am proposing to change the status again, to 'Odd Fixes'
(ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22).

TL;DR;

I understand the current maintainers are not able to focus on maintaining
it, as the focus of the project, as described earlier, is the NFS-Ganesha
based integration with glusterfs. But I am volunteering, along with Xie
Changlong (currently working at Chinamobile), to keep the feature running as
it used to in previous versions. Hence the status of 'Odd Fixes'.

Before sending the patch to make these changes, I am proposing it here now,
as gNFS is not even shipped with the latest glusterfs-7.0 releases. I have
heard from some users that it was working great for them with earlier
releases, as all they wanted was NFSv3 support and not many features beyond
that. Also note that, even though the packages are not built, none of the
regression tests using gNFS have been dropped on latest master, so it has
been working the same for at least the last 2 years.

Through this email, I request the package maintainers to please add
'--with gnfs' (or --enable-gnfs) back to their release scripts, so those
users who are happy with gNFS can continue to use it. A further point for
users/admins: the status is 'Odd Fixes', so don't expect any 'enhancements'
to the features provided by gNFS.

Happy to hear feedback, if any.

Regards,
Amar



Re: [Gluster-devel] [RFC] inode table locking contention reduction experiment

2019-11-03 Thread Amar Tumballi
Thanks for this, github works for review right now :-)

I am occupied till Wednesday, but will review them this week. A glance at
the changes looks good to me.

A few tests which can be run for validation are:

tests/bugs/shard/bug-1696136-lru-limit-equals-deletion-rate.t

tests/features/fuse-lru-limit.t

tests/bugs/shard/shard-inode-refcount-test.t


Ideally, run the full regression with `./run-tests.sh -c`.


Regards,

Amar


On Mon, Nov 4, 2019 at 9:21 AM Changwei Ge  wrote:

> Hi Amar,
>
> On 2019/10/31 6:30 下午, Amar Tumballi wrote:
> >
> >
> > On Wed, Oct 30, 2019 at 4:32 PM Xavi Hernandez  > <mailto:jaher...@redhat.com>> wrote:
> >
> > Hi Changwei,
> >
> > On Tue, Oct 29, 2019 at 7:56 AM Changwei Ge  > <mailto:c...@linux.alibaba.com>> wrote:
> >
> > Hi,
> >
> > I am recently working on reducing inode_[un]ref() locking
> > contention by
> > getting rid of inode table lock. Just use inode lock to protect
> > inode
> > REF. I have already discussed a couple rounds with several
> > Glusterfs
> > developers via emails and Gerrit and basically get understood on
> > major
> > logic around.
> >
> > Currently, inode REF can be ZERO and be reused by increasing it
> > to ONE.
> > This is IMO why we have to burden so much work for inode table
> when
> > REF/UNREF. It makes inode [un]ref() and inode table and
> > dentries(alias)
> > searching hard to run concurrently.
> >
> > So my question is in what cases, how can we find a inode whose
> > REF is ZERO?
> >
> > As Glusterfs store its inode memory address into kernel/fuse,
> > can we
> > conclude that only fuse_ino_to_inode() can bring back a REF=0
> inode?
> >
> >
> > Xavi's answer below provides some insights. and same time, assuming that
> > only fuse_ino_to_inode() can bring back inode from ref=0 state (for
> > now), is a good start.
> >
> >
> > Yes, when an inode gets refs = 0, it means that gluster code is not
> > using it anywhere, so it cannot be referenced again unless kernel
> > sends new requests on the same inode. Once refs=0 and nlookup=0, the
> > inode can be destroyed.
> >
> > Inode code is quite complex right now and I haven't had time to
> > investigate this further, but I think we could simplify inode
> > management significantly (specially unref) if we add a reference
> > when nlookup becomes > 0, and remove a reference when
> > nlookup becomes 0 again. Maybe with this approach we could avoid
> > inode table lock in many cases. However we need to make sure we
> > correctly handle invalidation logic to keep inode table size under
> > control.
> >
> >
> > My suggestion is, don't wait for a complete solution for posting the
> > patch. Let us get a chance to have a look at WorkInProgress patches, so
> > we can have discussions on code itself. It would help to reach better
> > solutions sooner.
>
> Agree.
>
> I have almost implemented my draft design for this experiment.
> The immature code has been pushed to my personal Glusterfs repo[1].
>
> Now it's a single large patch, I will split it to patches when I decide
> to push it to Gerrit for review convenience. If you prefer to push it to
> Gerrit for a early review and discussion, I can do that :-). But I am
> still doing some debug stuff.
>
> My work includes:
>
> 1. Move inode refing and unrefing logic unrelated logic out from
> `__inode_[un]ref()` hence to reduce their arguments.
> 2. Add a specific ‘ref_lock’ to inode to keep ref/unref atomicity.
> 3. As `inode_table::active_size` is only used for debug purpose, convert
> it to atomic variable.
> 4. Factor out pruning inode.
> 5. In order to run inode search and grep run concurrently, firstly  use
> RDLOCK  and then convert it WRLOCK if necessary.
> 6. Inode table lock is not necessary for inode ref/unref unless we have
> to move it between table lists.
>
> etc...
>
> Any comments, ideas, suggestions are kindly welcomed.
>
> Thanks,
> Changwei
>
> [1]:
>
> https://github.com/changweige/glusterfs/commit/d7226d2458281212af19ec8c2ca3d8c8caae1330
>
> >
> > Regards,
> >
> > Xavi
> >
> >
> >
> > Thanks,
> > Changwei

Re: [Gluster-devel] [RFC] inode table locking contention reduction experiment

2019-10-31 Thread Amar Tumballi
On Wed, Oct 30, 2019 at 4:32 PM Xavi Hernandez  wrote:

> Hi Changwei,
>
> On Tue, Oct 29, 2019 at 7:56 AM Changwei Ge 
> wrote:
>
>> Hi,
>>
>> I am recently working on reducing inode_[un]ref() locking contention by
>> getting rid of inode table lock. Just use inode lock to protect inode
>> REF. I have already discussed a couple rounds with several Glusterfs
>> developers via emails and Gerrit and basically get understood on major
>> logic around.
>>
>> Currently, inode REF can be ZERO and be reused by increasing it to ONE.
>> This is IMO why we have to burden so much work for inode table when
>> REF/UNREF. It makes inode [un]ref() and inode table and dentries(alias)
>> searching hard to run concurrently.
>>
>> So my question is in what cases, how can we find a inode whose REF is
>> ZERO?
>>
>> As Glusterfs store its inode memory address into kernel/fuse, can we
>> conclude that only fuse_ino_to_inode() can bring back a REF=0 inode?
>>
>
Xavi's answer below provides some insights. At the same time, assuming that
only fuse_ino_to_inode() can bring back an inode from the ref=0 state (for
now) is a good start.


>
> Yes, when an inode gets refs = 0, it means that gluster code is not using
> it anywhere, so it cannot be referenced again unless kernel sends new
> requests on the same inode. Once refs=0 and nlookup=0, the inode can be
> destroyed.
>
> Inode code is quite complex right now and I haven't had time to
> investigate this further, but I think we could simplify inode management
> significantly (specially unref) if we add a reference when nlookup becomes
> > 0, and remove a reference when nlookup becomes 0 again. Maybe with this
> approach we could avoid inode table lock in many cases. However we need to
> make sure we correctly handle invalidation logic to keep inode table size
> under control.
>
>
My suggestion is: don't wait for a complete solution before posting the
patch. Let us get a chance to look at work-in-progress patches, so we can
have discussions on the code itself. That would help us reach better
solutions sooner.

Regards,
>
> Xavi
>
>
>>
>> Thanks,
>> Changwei



Re: [Gluster-devel] [RFC] various lists on inode table usage?

2019-10-21 Thread Amar Tumballi
On Mon, Oct 21, 2019 at 1:02 PM Changwei Ge  wrote:

> Hi Amar,
>
> Thank you very much for your quick reply :-)
>
> On 2019/10/21 2:44 下午, Amar Tumballi wrote:
> >
> >
> > On Mon, Oct 21, 2019 at 11:58 AM Changwei Ge  > <mailto:c...@linux.alibaba.com>> wrote:
> >
> > Hi,
> >
> > I am currently working on optimizing inode searching/getting/putting
> > concurrency. Before the experiment/trial begins, I would like to fully
> > understand the usage of the several lists in the inode table, especially
> > the 'invalidate list', since the major difficulty in making inode
> > searching run concurrently is that we have to move an inode from one
> > list to the other and modify some attributes of the inode table.
> > After reading the corresponding code, it seems that the inode table's
> > 'invalidate list' is only traversed when destroying the inode table
> > (inode_table_destroy).
> >
> > Can someone help explain the list usage/purpose of 'invalidate list'?
> >
> >
> > 'invalidate_list' is used only on the client side. The patch which
> > introduced it is below:
> >
> >
> https://github.com/gluster/glusterfs/commit/d49b41e817d592c1904b6f01716df6546dad3ebe
> >
> >
> > Hope this gives some idea.
>
> Cool, this helps me to understand the inode table more deeply.
> I will check out other related commits as well.
>
> >
> > Happy to help. If there is an interest, we can even have a video
> > conference for all interested developers to discuss inode table, and
> > detail out how it is done.
> >
>
> Ack.
> As I think the inode table (how we manage inodes/dentries) is one of the
> most critical parts of this type of FS, we should consider and design it
> carefully so that it runs efficiently.
> What I can see from the glusterfs HEAD code is that we might have a lot of
> locking contention and too much logic burdened onto inode_[un]ref().
>

This is true. I tried a couple of approaches, but couldn't take them to
completion earlier.

* https://review.gluster.org/22242
* https://review.gluster.org/22243
* https://review.gluster.org/22186
* https://review.gluster.org/22156

* https://github.com/gluster/glusterfs/issues/204



>
> So how can we arrange a video conference and have a further discussion?
>
>
Open to suggestions. We have a weekly Community meeting, which can be
extended to talk about this.


> Thanks,
> Changwei
>
> > For the complete history of changes to inode.c (starting from
> > https://github.com/amarts/glusterfs/commit/72db44413ce4686b465c29ea8383fa4f09f53a76),
> > you can clone github.com/amarts/glusterfs and see what changes got into
> > the file over time... 'git log libglusterfs/src/inode.c' gives an idea.
> >
> > Regards,
> > Amar
> >
> >
> >
> > Thanks,
> > Changwei
> >
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [RFC] various lists on inode table usage?

2019-10-21 Thread Amar Tumballi
On Mon, Oct 21, 2019 at 11:58 AM Changwei Ge  wrote:

> Hi,
>
> I am currently working on optimizing inode searching/getting/putting
> concurrency. Before the experiment/trial begins, I would like to fully
> understand the usage of the several lists in the inode table, especially
> the 'invalidate list', since the major difficulty in making inode
> searching run concurrently is that we have to move an inode from one list
> to the other and modify some attributes of the inode table.
> After reading the corresponding code, it seems that the inode table's
> 'invalidate list' is only traversed when destroying the inode table
> (inode_table_destroy).
>
> Can someone help explain the list usage/purpose of 'invalidate list'?
>

'invalidate_list' is used only on the client side. The patch which
introduced it is below:

https://github.com/gluster/glusterfs/commit/d49b41e817d592c1904b6f01716df6546dad3ebe


Hope this gives some idea.
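To make the concurrency problem raised in the question above concrete, here
is a deliberately simplified model of the table: an inode sits on exactly
one of the table's lists at a time, and moving it between them requires the
table-wide lock. The names are illustrative only, not the actual inode
table code:

#include <pthread.h>
#include <stddef.h>

struct demo_inode;

struct demo_list {
    struct demo_inode *head;
};

/* One lock guards every list: active (ref > 0), lru (ref == 0 but still
 * known to the kernel), purge (waiting to be destroyed) and invalidate
 * (queued so the client can ask the kernel to forget the inode). */
struct demo_table {
    pthread_mutex_t  lock;
    struct demo_list active, lru, purge, invalidate;
};

struct demo_inode {
    struct demo_inode *prev, *next;
    struct demo_list  *current;   /* which list this inode is on now */
    unsigned int       ref;
};

static void
demo_list_remove(struct demo_inode *i)
{
    if (i->prev)
        i->prev->next = i->next;
    else if (i->current)
        i->current->head = i->next;
    if (i->next)
        i->next->prev = i->prev;
    i->prev = i->next = NULL;
    i->current = NULL;
}

static void
demo_list_push(struct demo_list *l, struct demo_inode *i)
{
    i->prev = NULL;
    i->next = l->head;
    if (l->head)
        l->head->prev = i;
    l->head = i;
    i->current = l;
}

/* Even a simple unref has to take table->lock, because dropping to zero
 * moves the inode from 'active' to 'lru' -- which is exactly the
 * serialisation this thread is trying to reduce. */
static void
demo_unref(struct demo_table *t, struct demo_inode *i)
{
    pthread_mutex_lock(&t->lock);
    if (--i->ref == 0) {
        demo_list_remove(i);
        demo_list_push(&t->lru, i);
    }
    pthread_mutex_unlock(&t->lock);
}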

Happy to help. If there is interest, we can even have a video conference
for all interested developers to discuss the inode table and detail how it
is done.

For the complete history of changes to inode.c (starting from
https://github.com/amarts/glusterfs/commit/72db44413ce4686b465c29ea8383fa4f09f53a76),
you can clone github.com/amarts/glusterfs and see what changes got into the
file over time... 'git log libglusterfs/src/inode.c' gives an idea.

Regards,
Amar


>
> Thanks,
> Changwei
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] GlusterFS API to manipulate open file descriptor - glfs_fcntl?

2019-10-15 Thread Amar Tumballi
It would be great if you could point to the requirements, or the new
additions, you are talking about.


On Tue, Oct 15, 2019 at 12:26 PM Anoop C S  wrote:

> Hi all,
>
> This is to check and confirm whether we have an API (or an internal
> implementation which can be exposed as an API) to perform operations on an
> open file descriptor, as a wrapper around the existing fcntl() system
> call. We do have specific APIs for locks (glfs_posix_lock) and file
> descriptor duplication (glfs_dup), which are important among the
> operations listed in man fcntl(2).


> At present we have a (very recent) requirement from Samba to set file
> descriptor flags through its VFS layer, which would need a corresponding
> mechanism inside GlusterFS. In its absence, the VFS module for GlusterFS
> inside Samba will have to work around it with the hack of creating fake
> local file descriptors outside GlusterFS.
>
> Thoughts and suggestions are welcome.
>
>
If there is a need to have the feature, it makes sense to extend the fd_t
structure and provide it inside. If my memory serves right, we didn't
support fcntl() behavior in gluster as there was no fcntl() through fuse
when we started.

It would be good to understand what is needed, and then start working on
design discussions (if it makes sense).
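Purely to make the discussion concrete, here is one hypothetical shape such
an API could take. Note that libgfapi has no glfs_fcntl() today, and the
command set shown is only a guess at what Samba might need:

#include <fcntl.h>

/* Hypothetical prototype, mirroring the style of the existing glfs_dup()
 * and glfs_posix_lock() APIs. 'struct glfs_fd' is the usual opaque
 * libgfapi handle. */
struct glfs_fd;
int glfs_fcntl(struct glfs_fd *glfd, int cmd, ... /* optional arg */);

/* Imagined caller (e.g. Samba's VFS module), toggling a status flag: */
static int
demo_set_nonblock(struct glfs_fd *glfd)
{
    int flags = glfs_fcntl(glfd, F_GETFL);
    if (flags < 0)
        return -1;
    return glfs_fcntl(glfd, F_SETFL, flags | O_NONBLOCK);
}

Internally, something like this would map onto the fd_t extension mentioned
above, with only a whitelisted set of fcntl(2) commands forwarded.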

-Amar


> Anoop C S.
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Amar Tumballi
On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos,  wrote:

> On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
> > Any thoughts on this?
> >
> > I tried a basic .travis.yml for the unified glusterfs repo I am
> > maintaining, and it is good enough for getting most of the tests.
> > Considering we are very close to glusterfs-7.0 release, it is good to
> time
> > this after 7.0 release.
>
> Is there a reason to move to Travis? GitHub does offer integration with
> Jenkins, so we should be able to keep using our existing CI, I think?
>

Yes, that's true. I tried Travis because I don't have a complete picture of
the Jenkins infra, and trying Travis needed just basic permissions from me
on the repo (it was tried on my personal repo).

Happy to get some help here.

Regards,
Amar


> Niels
>
>
> >
> > -Amar
> >
> > On Thu, Sep 5, 2019 at 5:13 PM Amar Tumballi  wrote:
> >
> > > Going through the thread, I see in general positive responses for the
> > > same, with few points on review system, and not loosing information
> when
> > > merging the patches.
> > >
> > > While we are working on that, we need to see and understand how our
> CI/CD
> > > looks like with github migration. We surely need suggestion and
> volunteers
> > > here to get this going.
> > >
> > > Regards,
> > > Amar
> > >
> > >
> > > On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos 
> wrote:
> > >
> > >> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan
> > >> wrote:
> > >> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos 
> > >> wrote:
> > >> >
> > >> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
> > >> Krishna
> > >> > > Murthy wrote:
> > >> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian <
> j...@julianfamily.org>
> > >> wrote:
> > >> > > >
> > >> > > > > > Comparing the changes between revisions is something
> > >> > > > > that GitHub does not support...
> > >> > > > >
> > >> > > > > It does support that,
> > >> > > > > actually.___
> > >> > > > >
> > >> > > >
> > >> > > > Yes, it does support. We need to use Squash merge after all
> review
> > >> is
> > >> > > done.
> > >> > >
> > >> > > Squash merge would also combine multiple commits that are
> intended to
> > >> > > stay separate. This is really bad :-(
> > >> > >
> > >> > >
> > >> > We should treat 1 patch in gerrit as 1 PR in github, then squash
> merge
> > >> > works same as how reviews in gerrit are done.  Or we can come up
> with
> > >> > label, upon which we can actually do 'rebase and merge' option,
> which
> > >> can
> > >> > preserve the commits as is.
> > >>
> > >> Something like that would be good. For many things, including commit
> > >> message update squashing patches is just loosing details. We dont do
> > >> that with Gerrit now, and we should not do that when using GitHub PRs.
> > >> Proper documenting changes is still very important to me, the details
> of
> > >> patches should be explained in commit messages. This only works well
> > >> when developers 'force push' to the branch holding the PR.
> > >>
> > >> Niels
> > >> ___
> > >>
> > >> Community Meeting Calendar:
> > >>
> > >> APAC Schedule -
> > >> Every 2nd and 4th Tuesday at 11:30 AM IST
> > >> Bridge: https://bluejeans.com/836554017
> > >>
> > >> NA/EMEA Schedule -
> > >> Every 1st and 3rd Tuesday at 01:00 PM EDT
> > >> Bridge: https://bluejeans.com/486278655
> > >>
> > >> Gluster-devel mailing list
> > >> Gluster-devel@gluster.org
> > >> https://lists.gluster.org/mailman/listinfo/gluster-devel
> > >>
> > >>
>
> > ___
> >
> > Community Meeting Calendar:
> >
> > APAC Schedule -
> > Every 2nd and 4th Tuesday at 11:30 AM IST
> > Bridge: https://bluejeans.com/118564314
> >
> > NA/EMEA Schedule -
> > Every 1st and 3rd Tuesday at 01:00 PM EDT
> > Bridge: https://bluejeans.com/118564314
> >
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-10-14 Thread Amar Tumballi
Any thoughts on this?

I tried a basic .travis.yml for the unified glusterfs repo I am
maintaining, and it is good enough to run most of the tests. Considering we
are very close to the glusterfs-7.0 release, it is good to time this after
the 7.0 release.

-Amar

On Thu, Sep 5, 2019 at 5:13 PM Amar Tumballi  wrote:

> Going through the thread, I see generally positive responses for the same,
> with a few points on the review system, and on not losing information when
> merging the patches.
>
> While we are working on that, we need to see and understand what our CI/CD
> looks like with the github migration. We surely need suggestions and
> volunteers here to get this going.
>
> Regards,
> Amar
>
>
> On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos  wrote:
>
>> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan
>> wrote:
>> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos 
>> wrote:
>> >
>> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
>> Krishna
>> > > Murthy wrote:
>> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian 
>> wrote:
>> > > >
>> > > > > > Comparing the changes between revisions is something
>> > > > > that GitHub does not support...
>> > > > >
>> > > > > It does support that,
>> > > > > actually.___
>> > > > >
>> > > >
>> > > > Yes, it does support. We need to use Squash merge after all review
>> is
>> > > done.
>> > >
>> > > Squash merge would also combine multiple commits that are intended to
>> > > stay separate. This is really bad :-(
>> > >
>> > >
>> > We should treat 1 patch in gerrit as 1 PR in github, then squash merge
>> > works same as how reviews in gerrit are done.  Or we can come up with
>> > label, upon which we can actually do 'rebase and merge' option, which
>> can
>> > preserve the commits as is.
>>
>> Something like that would be good. For many things, including commit
>> message update squashing patches is just loosing details. We dont do
>> that with Gerrit now, and we should not do that when using GitHub PRs.
>> Proper documenting changes is still very important to me, the details of
>> patches should be explained in commit messages. This only works well
>> when developers 'force push' to the branch holding the PR.
>>
>> Niels
>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/836554017
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/486278655
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Query regards to expose client-pid to fuse process

2019-10-13 Thread Amar Tumballi
On Fri, Oct 11, 2019 at 5:05 PM Mohit Agrawal  wrote:

> Hi,
>
> Yes, you are right it is not a default value.
>
> We can assign the client_pid only when the volume has been mounted
> directly through the glusterfs binary, like:
> /usr/local/sbin/glusterfs --process-name fuse --volfile-server=192.168.1.3
> --client-pid=-3 --volfile-id=/test /mnt1
>
>
I agree that this is risky in general, and good to fix. But as the check
for this happens after the basic auth check in RPC (IP/user based), it
should be OK. It would be good to open a github issue with some possible
design options so we can have more discussions on this.
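For readers following along, the convention being discussed boils down to a
check like the one below. This is a simplified illustration only; the real
checks are spread across the lease, posix-locks, worm and trash xlators:

#include <sys/types.h>

/* Internal daemons (self-heal, rebalance, gsyncd, ...) mount with a
 * negative --client-pid, so brick-side xlators trust "pid < 0" to mean
 * "internal fop". A user passing --client-pid=-3 on an ordinary fuse
 * mount therefore defeats the check, which is the risk discussed here. */
static inline int
demo_is_internal_fop(pid_t client_pid)
{
    return client_pid < 0;
}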

-Amar



> Regards,
> Mohit Agrawal
>
>
> On Fri, Oct 11, 2019 at 4:52 PM Nithya Balachandran 
> wrote:
>
>>
>>
>> On Fri, 11 Oct 2019 at 14:56, Mohit Agrawal  wrote:
>>
>>> Hi,
>>>
>>>   I have a query specific to authenticating a client based on the PID
>>> (client-pid).
>>>   It can break the brick xlators' functionality. Usually, on the brick
>>> side we make a decision about the source of a fop request based on the
>>> PID: if the PID value is negative, the xlator considers the request to
>>> have come from an internal client, otherwise from an external client.
>>>
>>>   If a user has mounted the volume through fuse while providing a
>>> --client-pid command line argument similar to an internal client PID,
>>> the brick xlator will consider the external fop requests as internal
>>> too, and that will break functionality.
>>>
>>>   We check the pid in the (lease, posix-lock, worm, trash) xlators to
>>> know the source of the fops.
>>>   There are other brick xlators too where we check for specific PID
>>> values for all internal clients, and those can break if an external
>>> client has the same pid.
>>>
>>>   My query is: why do we need to expose client-pid as an argument to the
>>> fuse process?
>>>
>>
>>
>> I don't think this is a default value for the fuse mount. One place where
>> this helps us is with the script-based file migration and rebalance - we
>> can provide a negative pid to the special client mount to ensure these
>> fops are also treated as internal fops.
>>
>> In the meantime I do not see the harm in having this option available, as
>> it serves a specific purpose. Are there any other client processes that
>> use this?
>>
>>I think we need to resolve it. Please share your view on the same.
>>>
>>> Thanks,
>>> Mohit Agrawal
>>> ___
>>>
>>> Community Meeting Calendar:
>>>
>>> APAC Schedule -
>>> Every 2nd and 4th Tuesday at 11:30 AM IST
>>> Bridge: https://bluejeans.com/118564314
>>>
>>> NA/EMEA Schedule -
>>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>>> Bridge: https://bluejeans.com/118564314
>>>
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/118564314
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/118564314
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] RDMA in GlusterFs

2019-10-08 Thread Amar Tumballi
If it is for academic purposes, I suggest using glusterfs-3.12.x or
glusterfs-5.x series with RDMA... That would give an idea.

-Amar

On Tue, Oct 8, 2019 at 7:44 PM Liu, Changcheng 
wrote:

> Thanks Rafi-KC & Tumbali.
> I'm working on developing RDMA in Ceph and doing some investigation into
> RDMA usage in distributed storage systems.
> It's unfortunate that GlusterFs doesn't support RDMA now.
>
> For IPoIB, I'll try to evaluate its performance versus RDMA(RoCEv2 &
> iWARP).
>
> -Original Message-
> From: Rafi Kavungal Chundattu Parambil [mailto:rkavu...@redhat.com]
> Sent: Monday, October 7, 2019 5:07 PM
> To: Liu, Changcheng 
> Cc: yk...@redhat.com; gluster-devel@gluster.org
> Subject: Re: RDMA in GlusterFs
>
> Hi Liu,
>
> RDMA support in GlusterFS has been dropped from the latest release and
> removed from the codebase. You can find more details here in the mail sent
> as the proposal to drop the feature[1].
>
> 
> Gluster
> started supporting RDMA while ib-verbs was still new, and very high-end
> infra around that time were using Infiniband. Engineers did work with
> Mellanox, and got the technology into GlusterFS for better data migration,
> data copy. While current day kernels support very good speed with IPoIB
> module itself, and there is no more bandwidth for experts in this area to
> maintain the feature, we recommend migrating over to TCP (IP based) network
> for your volume. If you are successfully using RDMA transport, do get in
> touch with us to prioritize the migration plan for your volume. Plan is to
> work on this after the release, so by version 6.0, we will have a cleaner
> transport code, which just needs to support one type.
> 
>
> That said, officially GlusterFS doesn't support RDMA on the master branch.
>
> [1] :
> https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
>
>
> Regards
> Rafi KC
> >
> >
> > Has anyone used RDMA in Glusterfs before? What's RDMA usability in
> > Glusterfs now?
> >
> >
> >
> > *From:* Liu, Changcheng
> > *Sent:* Tuesday, October 1, 2019 11:24 PM
> > *To:* 'Yaniv Kaul' ; 'Amar Tumballi'
> > 
> > *Cc:* 'gluster-devel@gluster.org' 
> > *Subject:* RE: RDMA in GlusterFs
> >
> >
> >
> > Does anyone know whether Glusterfs support RDMA or not in master
> > branch now?
> >
> >
> >
> > *From:* Liu, Changcheng
> > *Sent:* Tuesday, October 1, 2019 12:16 AM
> > *To:* Yaniv Kaul ; Amar Tumballi 
> > *Cc:* gluster-devel@gluster.org
> > *Subject:* RDMA in GlusterFs
> >
> >
> >
> > Hi Yaniv Kaul & Amar Tumbali
> >
> > Upstream question:
> >
> >Does Glusterfs support RDMA after below PR is merged?
> >
> >  ibverbs/rdma: remove from build:
> > https://github.com/gluster/glusterfs/commit/8f37561d665b
> >
> >
> >
> > B.R.
> >
> > Changcheng
> >
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] kadalu - k8s storage with Gluster

2019-09-12 Thread Amar Tumballi
Hi Gluster users,

I am not sure how many of you use Gluster for your k8s storage (or are even
considering using it). I have some good news for you.

Last month, Aravinda and I spoke at DevConf India about project kadalu. The
code & README are available @ https://github.com/kadalu/kadalu. We are
awaiting the talk's video to be uploaded, and once that is done I will
share the link here.

I wanted to share a few highlights of the kadalu project with you all, and
also the future scope of work.

   - kadalu comes with *CSI driver*, so one can use this smoothly with k8s
   1.14+ versions.
   - Has an *operator* which starts CSI drivers, and Gluster storage pod
   when required.
   - 2 commands to setup and get k8s storage working.
  - kubectl create -f kadalu-operator.yml
  - kubectl create -f kadalu-config.yml
   - Native support for the single-disk use case (i.e., if your backend
   supports High Availability, there is no need to use Gluster's
   replication), which I believe is a good thing for people who already
   have a highly available storage array, and for companies which have
   their own storage products but don't have k8s exposure.
   - The above use case can be applied to a single AWS EBS volume, if you
   want to save the cost of Replica 3 (and trust it to provide your
   required SLA). Here, a single EBS volume would provide multiple k8s PVs.
   - GlusterFS is used in a very light mode, i.e., no 'glusterd', no LVM,
   and no other layers. We use glusterfs only for the filesystem, not for
   management.
   - Basic end-to-end testing is done using Travis CI/CD. [We need more
   help to enhance it further.]

More on this in our presentation @
https://github.com/kadalu/kadalu/blob/master/doc/rethinking-gluster-management-using-k8s.pdf

Please note that this is a project which we started as a prototype for our
talk. To take it further, feedback, feature requests, suggestions and
contributions are very important. Let me know if you are interested in
collaborating on this one.

Possible future work:

* Implement data backup features (possibly with geo-rep).
* Resize of Volume (Both backend Gluster volume, and PV volume).
* Consider implementing helm chart for operator.
* Scale testing, etc.


Limitations (for now)

* No 'migration'. One has to start fresh with kadalu.
* No Snapshot, No cloning.
* 

As of now, there are 2 deployment guides  available @
https://github.com/kadalu/kadalu-cookbook

Thanks & Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-09-05 Thread Amar Tumballi
Going through the thread, I see generally positive responses for the same,
with a few points on the review system, and on not losing information when
merging the patches.

While we are working on that, we need to see and understand what our CI/CD
looks like with the github migration. We surely need suggestions and
volunteers here to get this going.

Regards,
Amar


On Wed, Aug 28, 2019 at 12:38 PM Niels de Vos  wrote:

> On Tue, Aug 27, 2019 at 06:57:14AM +0530, Amar Tumballi Suryanarayan wrote:
> > On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos  wrote:
> >
> > > On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura
> Krishna
> > > Murthy wrote:
> > > > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian 
> wrote:
> > > >
> > > > > > Comparing the changes between revisions is something
> > > > > that GitHub does not support...
> > > > >
> > > > > It does support that,
> > > > > actually.___
> > > > >
> > > >
> > > > Yes, it does support. We need to use Squash merge after all review is
> > > done.
> > >
> > > Squash merge would also combine multiple commits that are intended to
> > > stay separate. This is really bad :-(
> > >
> > >
> > We should treat 1 patch in gerrit as 1 PR in github, then squash merge
> > works same as how reviews in gerrit are done.  Or we can come up with
> > label, upon which we can actually do 'rebase and merge' option, which can
> > preserve the commits as is.
>
> Something like that would be good. For many things, including commit
> message updates, squashing patches is just losing details. We don't do
> that with Gerrit now, and we should not do that when using GitHub PRs.
> Properly documenting changes is still very important to me; the details of
> patches should be explained in commit messages. This only works well
> when developers 'force push' to the branch holding the PR.
>
> Niels
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-26 Thread Amar Tumballi Suryanarayan
On Tue, Aug 27, 2019 at 12:10 AM Niels de Vos  wrote:

> On Mon, Aug 26, 2019 at 08:36:30PM +0530, Aravinda Vishwanathapura Krishna
> Murthy wrote:
> > On Mon, Aug 26, 2019 at 7:49 PM Joe Julian  wrote:
> >
> > > > Comparing the changes between revisions is something
> > > that GitHub does not support...
> > >
> > > It does support that,
> > > actually.___
> > >
> >
> > Yes, it does support. We need to use Squash merge after all review is
> done.
>
> Squash merge would also combine multiple commits that are intended to
> stay separate. This is really bad :-(
>
>
We should treat 1 patch in gerrit as 1 PR in github; then squash merge
works the same as how reviews are done in gerrit. Or we can come up with a
label upon which we can actually use the 'rebase and merge' option, which
can preserve the commits as they are.

-Amar


> Niels
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-24 Thread Amar Tumballi
On Sat, Aug 24, 2019 at 4:33 PM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Fri, Aug 23, 2019 at 6:42 PM Amar Tumballi  wrote:
> >
> > Hi developers,
> >
> > With this email, I want to understand what is the general feeling around
> this topic.
> >
> > We from gluster org (in github.com/gluster) have many projects which
> follow complete github workflow, where as there are few, specially the main
> one 'glusterfs', which uses 'Gerrit'.
> >
> > While this has worked all these years, currently, there is a huge set of
> brain-share on github workflow as many other top projects, and similar
> projects use only github as the place to develop, track and run tests etc.
> As it is possible to have all of the tools required for this project in
> github itself (code, PR, issues, CI/CD, docs), lets look at how we are
> structured today:
> >
> > Gerrit - glusterfs code + Review system
> > Bugzilla - For bugs
> > Github - For feature requests
> > Trello - (not very much used) for tracking project development.
> > CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo.
> > Docs - glusterdocs - different repo.
> > Metrics - Nothing (other than github itself tracking contributors).
> >
> > While it may cause a minor glitch for many long time developers who are
> used to the flow, moving to github would bring all these in single place,
> makes getting new users easy, and uniform development practices for all
> gluster org repositories.
> >
> > As it is just the proposal, I would like to hear people's thought on
> this, and conclude on this another month, so by glusterfs-8 development
> time, we are clear about this.
> >
>
> I'd want to propose that a decision be arrived at much earlier. Say,
> within a fortnight ie. mid-Sep. I do not see why this would need a
> whole month to consider. Such a timeline would also allow to manage
> changes after proper assessment of sub-tasks.
>
>
It would be great if we can decide sooner. I kept a month as the timeline
because, historically, I have not seen many responses to proposals like
this. It would be great if we have at least 20+ people participating in
this discussion.

I am happy to create a poll if everyone prefers that.

Regards,
Amar


> > Can we decide on this before September 30th? Please voice your concerns.
> >
> > Regards,
> > Amar
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-08-23 Thread Amar Tumballi
Hi developers,

With this email, I want to understand what is the general feeling around
this topic.

We from gluster org (in github.com/gluster) have many projects which follow
complete github workflow, where as there are few, specially the main one
'glusterfs', which uses 'Gerrit'.

While this has worked all these years, there is currently a huge amount of
mind-share around the github workflow, as many other top projects and
similar projects use only github as the place to develop, track and run
tests, etc. As it is possible to have all of the tools required for this
project in github itself (code, PRs, issues, CI/CD, docs), let's look at
how we are structured today:

Gerrit - glusterfs code + Review system
Bugzilla - For bugs
Github - For feature requests
Trello - (not very much used) for tracking project development.
CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo.
Docs - glusterdocs - different repo.
Metrics - Nothing (other than github itself tracking contributors).

While it may cause a minor glitch for many long-time developers who are
used to the current flow, moving to github would bring all of these into a
single place, make onboarding new users easy, and give us uniform
development practices for all gluster org repositories.

As this is just a proposal, I would like to hear people's thoughts on it,
and conclude on it in about a month, so that by glusterfs-8 development
time we are clear about this.

Can we decide on this before September 30th? Please voice your concerns.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [RFC] alter inode table lock from mutex to rwlock

2019-08-22 Thread Amar Tumballi Suryanarayan
Hi Changwei Ge,

On Thu, Aug 22, 2019 at 5:57 PM Changwei Ge  wrote:

> Hi,
>
> Now inode_table_t:lock is of type mutex; I think we can replace it with a
> 'pthread_rwlock' for better concurrency.
>
> That is because pthread_rwlock allows more than one thread to access the
> inode table at the same time.
> Moreover, after a quick glance at the glusterfs code, the critical section
> the lock is protecting won't take many CPU cycles, and no I/O or CPU
> fault/exception is involved.
> I hope I didn't miss something.
> If I get an ACK from a major glusterfs developer, I will try to do it.
>
>
You are right. I believe this is possible. No harm in trying this out.
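A minimal sketch of what the proposed change would look like; the structure
and names are illustrative only, and the real conversion would have to
touch every locking site in inode.c:

#include <pthread.h>

struct demo_inode_table {
    pthread_rwlock_t lock;            /* was: pthread_mutex_t */
    /* hash buckets, active/lru lists, ... */
};

/* Pure lookups could then run in parallel under the shared lock. */
static void *
demo_table_find(struct demo_inode_table *table /*, key */)
{
    void *found = NULL;

    pthread_rwlock_rdlock(&table->lock);
    /* found = search the hash table / dentry lists ... */
    pthread_rwlock_unlock(&table->lock);

    return found;
}

/* Anything that links/unlinks inodes or moves them between lists still
 * needs the exclusive lock. */
static void
demo_table_insert(struct demo_inode_table *table /*, inode */)
{
    pthread_rwlock_wrlock(&table->lock);
    /* add to the hash table and the active list ... */
    pthread_rwlock_unlock(&table->lock);
}

The gain depends on how many code paths really are read-only; paths that
also move inodes between lists would still serialise on the write lock.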

Xavier, Raghavendra, Pranith, Nithya, do you think this is possible?

Regards,



> Thanks.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] URGENT: Release 7 blocked due to patches failing centos regression

2019-08-20 Thread Amar Tumballi
Looks like these issues are now fixed in master. They need a port to the
release-7 branch, and the other patches have to be taken in.

On Tue, Aug 13, 2019 at 12:35 PM Rinku Kothiya  wrote:

> Hi Team,
>
> The following patches posted in release 7 are failing centos regression.
> For some of the patches I have run "recheck centos" multiple times and each
> time we see different failures. So I am not sure if this is related to the
> patch or is a spurious failure. But some of the patches are consistently
> failing on "brick-mux-validation.t". Please advise as release 7 is blocked
> due to this.
>
> ==
> https://review.gluster.org/#/c/glusterfs/+/23195/
> ==
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7348/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/glusterd/bug-1595320.t
>
> 0 test(s) generated core
> >>>
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7361/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/core/bug-1119582.t
>
> 0 test(s) generated core
> >>>
>
> ==
> https://review.gluster.org/#/c/glusterfs/+/23196/
> ==
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7349/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/glusterd/brick-mux-validation.t
>
> 0 test(s) generated core
> >>>
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7362/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/glusterd/brick-mux-validation.t
>
> 0 test(s) generated core
> >>>
>
> ==
> https://review.gluster.org/#/c/glusterfs/+/23189/
> ==
>
> Gluster Build System
> Aug 9 6:38 PM
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7341/ : FAILURE <<<
> 1 test(s) failed
> ./tests/basic/volume-snapshot.t
>
> 0 test(s) generated core
> >>>
>
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7364/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/glusterd/brick-mux-validation.t
>
> 0 test(s) generated core
> >>>
>
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7365/ : FAILURE <<< 1
> test(s) failed ./tests/bugs/glusterd/brick-mux-validation.t
>
> 0 test(s) generated core
> >>>
>
> ==
> https://review.gluster.org/#/c/glusterfs/+/23190/
> ==
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7342/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/cli/bug-1077682.t
>
> 0 test(s) generated core
> >>>
>
> ==
> https://review.gluster.org/#/c/glusterfs/+/23188/
> ==
>
> Patch Set 1:
>
> Build Failed
>
> https://build.gluster.org/job/centos7-regression/7340/ : FAILURE <<<
> 1 test(s) failed
> ./tests/bugs/glusterd/brick-mux-validation.t
>
> 0 test(s) generated core
> >>>
>
>
> Regards
> Rinku
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] directory filehandles

2019-07-15 Thread Amar Tumballi
On Sat, Jul 13, 2019 at 7:33 AM Emmanuel Dreyfus  wrote:

> Hello
>
> I have trouble figuring out the whole story about how to cope with FUSE
> directory filehandles in the NetBSD implementation.
>
> libfuse makes special use of the filehandles exposed to the filesystem for
> OPENDIR, READDIR, FSYNCDIR, and RELEASEDIR. For those four operations,
> the fh is a pointer to a struct fuse_dh, in which the fh field is
> exposed to the filesystem. All other filesystem operations pass the fh
> as-is between the kernel and the filesystem, back and forth.
>
> That means that a fh obtained by OPENDIR should never be passed to
> operations other than READDIR, FSYNCDIR and RELEASEDIR. For instance,
> when porting ltfs to NetBSD, I experienced that passing a fh obtained
> from OPENDIR to SETATTR would crash.
>
> The glusterfs implementation differs from libfuse because it seems the fh
> is always passed as-is: there is nothing like libfuse's struct fuse_dh. It
> will therefore happily accept a fh obtained by OPENDIR for any operation,
> something that I do not expect to work in libfuse-based filesystems.
>
>
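A rough model of the libfuse behaviour described above, with simplified
names (the real wrapper is struct fuse_dh in libfuse's high-level API):

#include <stdint.h>
#include <stdlib.h>

/* The directory-handle wrapper libfuse keeps for OPENDIR. */
struct demo_dh {
    uint64_t fs_fh;   /* whatever the filesystem stored in its opendir */
    /* readdir buffer, offset bookkeeping, ... */
};

/* OPENDIR: the value handed back to the kernel is the wrapper pointer,
 * not the filesystem's own fh. */
static uint64_t
demo_opendir_reply(uint64_t fs_fh)
{
    struct demo_dh *dh = calloc(1, sizeof(*dh));
    if (!dh)
        return 0;
    dh->fs_fh = fs_fh;
    return (uint64_t)(uintptr_t)dh;
}

/* READDIR / FSYNCDIR / RELEASEDIR: libfuse unwraps the handle before
 * calling the filesystem. */
static uint64_t
demo_dir_op_fh(uint64_t kernel_fh)
{
    struct demo_dh *dh = (struct demo_dh *)(uintptr_t)kernel_fh;
    return dh->fs_fh;
}

/* Every other operation passes the kernel's fh through untouched, so
 * handing an OPENDIR handle to SETATTR or SETLK gives the filesystem a
 * demo_dh pointer it never created -- hence the crash seen with ltfs.
 * GlusterFS never wraps, which is why it tolerates the mix-up. */

This matches the observation that glusterfs happily accepts an OPENDIR fh
everywhere, while a libfuse-based filesystem would misinterpret it.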
It would be great to add these comments as part of
https://github.com/gluster/glusterfs/issues/153. My take is to start
working in the direction of rebasing the gluster code to use libfuse in the
future, rather than maintaining our own changes. Would it help if we move
in that direction?


> My real concern is SETLK on directory. Here glusterfs really wants a fh
> or it will report an error. The NetBSD implementation passes the fh it
> got from OPENDIR, but I expect a libfuse based filesystem to crash in
> such a situation. For now I did not find any libfuse-based filesystem
> that implements locking, so I could not test that.
>
> Could someone clarify this? What are the FUSE operations that should be
> sent to filesystem on that kind of program?
>
> int fd;
>
> /* NetBSD  calls FUSE LOOKUP and OPENDIR */
> if ((fd = open("/gfs/tmp", O_RDONLY, 0)) == -1)
> err(1, "open failed");
>
> /* NetBSD calls FUSE SETLKW */
> if (flock(fd, LOCK_EX) == -1)
> err(1, "flock failed");
>
>
Csaba, Raghavendra, Any suggestions here?

-Amar


>
>
> --
> Emmanuel Dreyfus
> http://hcpnet.free.fr/pubz
> m...@netbsd.org
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [Announcement] Gluster Community Update

2019-07-09 Thread Amar Tumballi Suryanarayan
Hello Gluster community,



Today marks a new day in the 26-year history of Red Hat. IBM has finalized
its acquisition of Red Hat
<https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future>,
which will operate as a distinct unit within IBM moving forward.



What does this mean for Red Hat’s contributions to the Gluster project?



In short, nothing.



Red Hat always has and will continue to be a champion for open source and
projects like Gluster. IBM is committed to Red Hat’s independence and role
in open source software communities so that we can continue this work
without interruption or changes.



Our mission, governance, and objectives remain the same. We will continue
to execute the existing project roadmap. Red Hat associates will continue
to contribute to the upstream in the same ways they have been. And, as
always, we will continue to help upstream projects be successful and
contribute to welcoming new members and maintaining the project.



We will do this together, with the community, as we always have.



If you have questions or would like to learn more about today’s news, I
encourage you to review the list of materials below. Red Hat CTO Chris
Wright will host an online Q&A session in the coming days where you can ask
questions you may have about what the acquisition means for Red Hat and our
involvement in open source communities. Details will be announced on the Red
Hat blog <https://www.redhat.com/en/blog>.



   -

   Press release
   
<https://www.redhat.com/en/about/press-releases/ibm-closes-landmark-acquisition-red-hat-34-billion-defines-open-hybrid-cloud-future>
   -

   Chris Wright blog - Red Hat and IBM: Accelerating the adoption of open
   source
   
<https://www.redhat.com/en/blog/red-hat-and-ibm-accelerating-adoption-open-source>
   -

   FAQ on Red Hat Community Blog
   <https://community.redhat.com/blog/2019/07/faq-for-communities/>



Amar Tumballi,

Maintainer, Lead,

Gluster Community.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Migration of the builders to Fedora 30

2019-07-09 Thread Amar Tumballi Suryanarayan
On Thu, Jul 4, 2019 at 9:55 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

>
>
> On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer 
> wrote:
>
>> Hi,
>>
>> I have upgraded for testing some of the builder to F30 (because F28 is
>> EOL and people did request newer version of stuff), and I was a bit
>> surprised to see the result of the test of the jobs.
>>
>> So we have 10 jobs that run on those builders.
>>
>> 5 jobs run without trouble:
>> - python-lint
>> - clang-scan
>> - clang-format
>> - 32-bit-build-smoke
>> - bugs-summary
>>
>> 1 is disabled, tsan. I didn't try to run it.
>>
>> 4 fails:
>> - python-compliance
>>
>
> OK to run, but skip voting, so we can eventually (soonish) fix this.
>
>
>> - fedora-smoke
>>
>
> Ideally we should soon fix it. Effort is ON. We have a bug for this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5
>

Can we re-run this on the latest master? I think we are ready for
fedora-smoke on fedora30 on the latest master.


>
>> - gluster-csi-containers
>> - glusterd2-containers
>>
>>
> OK to drop for now.
>
>
>> The job python-compliance fail like this:
>> https://build.gluster.org/job/python-compliance/5813/
>>
>> The fedora-smoke job, which is building on newer fedora (so newer gcc),
>> is failing too:
>> https://build.gluster.org/job/fedora-smoke/6753/console
>>
>> Gluster-csi-containers is having trouble to run
>> https://build.gluster.org/job/gluster-csi-containers/304/console
>> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5
>> but before, it did fail with "out of space":
>> https://build.gluster.org/job/gluster-csi-containers/303/console
>>
>> and it also fail (well, should fail) with this:
>> 16:51:07 make: *** No targets specified and no makefile found.  Stop.
>>
>> which is indeed not present in the git repo, so this seems like the job
>> is unmaintained.
>>
>>
>> The last one to fail is glusterd2-containers:
>>
>> https://build.gluster.org/job/glusterd2-containers/323/console
>>
>> This one is fun, because it fail, but appear as ok on jenkins. It fail
>> because of some ansible issue, due to newer Fedora.
>>
>> So, since we need to switch, here is what I would recommend:
>> - switch the working job to F30
>> - wait 2 weeks, and switch fedora-smoke and python-compliance to F30.
>> This will force someone to fix the problem.
>> - drop the non fixed containers jobs, unless someone fix them, in 1 month.
>>
>
> Looks like a good plan.
>
>
>>
>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure
>>
>>
>>
>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/836554017
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/486278655
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>
> --
> Amar Tumballi (amarts)
>


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] GlusterFS 8+: Roadmap ahead - Open for discussion

2019-07-08 Thread Amar Tumballi
Hello everyone,

This email is long, and I request each one of you to participate and give
comments. We want your collaboration in this big step.

TL;DR;

We are at an interesting time in the Gluster project's development roadmap.
In the last year, we took some hard decisions to not focus on features and
to focus all our energies on stabilizing the project, and as a result of
that, we did really well in many regards. With most of the stabilization
work getting into the glusterfs-7 branch, we feel the time is right to
discuss the future.

Now it is time for us to start addressing the most common concern about the
project: performance and related improvements. While many of our users and
customers have faced problems with not-so-great performance, please note
that there is no single silver bullet which will solve all performance
problems in one step, especially with a distributed storage solution like
GlusterFS.

Over the years, we have noticed that there are a lot of factors which
contribute to the performance issues in Gluster, and it is not 'easy' to
tell which of the 'known' issues caused a particular problem. Sometimes,
even to debug where the bottleneck is, we face the challenge of a lack of
instrumentation in many parts of the codebase. Hence, one of the major
activities we want to pick up on the immediate roadmap is work in this
area.

Instead of discussing on the email thread, and soon losing context, I
prefer that this time we take our discussion to hackmd with comments. I
would like each of you to participate and let us know what your priorities
are, what you need, how you can help, etc.

Link to the hackmd document here: https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA.
After the meeting, I will share the updates as a blog post, and once it is
final, I will update the ML with an email.

Along with this, from the Gluster project, in the last couple of years, we
have noticed increased interest in 2 major use cases.

First is using Gluster in container use cases, and the second is using it
as a storage for VMs, especially with oVirt project, and also as
hyperconverged storage in some cases.

We see that more stability and performance improvements should help our use
cases with VMs. For container storage, Gluster's official solution involved
the 'Heketi' project as the frontend to handle k8s APIs and provide storage
from Gluster. We did try to come up with a new-age management solution with
GD2, but haven't got enough contributions on it to take it to completion.
There were a couple of different approaches attempted too, gluster-subvol
and piragua, but neither of them has seen major contributions. From the
activity on github and other places, we see that there is still a major
need for a proper solution.

We are happy to discuss on this too. Please suggest your ideas.





Another topic, while we are at the roadmap, is the discussion of github vs
gerrit. There are some opinions in the group saying that we are not getting
many new developers because our project is hosted on gerrit, and most of
the developer community is on github. We surely want your opinion on this.

Lets use Doc:
https://docs.google.com/document/d/16a-EyPRySPlJR3ioRgZRNohq7lM-2EmavulfDxlid_M/edit?usp=sharing
for discussing on this.



This email is to kick start a discussion focused on our roadmap, discuss
the priorities, look into what we can quickly do, and what we can achieve
long term. We can have discussions about this in our community meeting, so
we can cover most of the time-zones. If we need more time to finalize on
things, then we can schedule a few more slots based on people’s preference.
Maintainers, please send your preferences for the components you maintain
as part of this discussion too.

Again, we are planning to use collaborative tool hackmd (
https://hackmd.io/JtfYZr49QeGaNIlTvQNsaA) to capture the notes, and will
publish it in a blog form once the meetings conclude. The actionable tasks
will move to github issues from there.

Looking for your active participation.

Regards,

Amar  (@tumballi)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Migration of the builders to Fedora 30

2019-07-04 Thread Amar Tumballi Suryanarayan
On Thu, Jul 4, 2019 at 9:37 PM Michael Scherer  wrote:

> Hi,
>
> I have upgraded for testing some of the builder to F30 (because F28 is
> EOL and people did request newer version of stuff), and I was a bit
> surprised to see the result of the test of the jobs.
>
> So we have 10 jobs that run on those builders.
>
> 5 jobs run without trouble:
> - python-lint
> - clang-scan
> - clang-format
> - 32-bit-build-smoke
> - bugs-summary
>
> 1 is disabled, tsan. I didn't try to run it.
>
> 4 fails:
> - python-compliance
>

OK to run, but skip voting, so we can eventually (soonish) fix this.


> - fedora-smoke
>

Ideally we should soon fix it. Effort is ON. We have a bug for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5


> - gluster-csi-containers
> - glusterd2-containers
>
>
OK to drop for now.


> The job python-compliance fail like this:
> https://build.gluster.org/job/python-compliance/5813/
>
> The fedora-smoke job, which is building on newer fedora (so newer gcc),
> is failing too:
> https://build.gluster.org/job/fedora-smoke/6753/console
>
> Gluster-csi-containers is having trouble to run
> https://build.gluster.org/job/gluster-csi-containers/304/console
> https://bugzilla.redhat.com/show_bug.cgi?id=1693385#c5
> but before, it did fail with "out of space":
> https://build.gluster.org/job/gluster-csi-containers/303/console
>
> and it also fails (well, should fail) with this:
> 16:51:07 make: *** No targets specified and no makefile found.  Stop.
>
> which is indeed not present in the git repo, so this seems like the job is
> unmaintained.
>
>
> The last one to fail is glusterd2-containers:
>
> https://build.gluster.org/job/glusterd2-containers/323/console
>
> This one is fun, because it fails, but appears as OK on jenkins. It fails
> because of some ansible issue, due to newer Fedora.
>
> So, since we need to switch, here is what I would recommend:
> - switch the working job to F30
> - wait 2 weeks, and switch fedora-smoke and python-compliance to F30. This
> will force someone to fix the problem.
> - drop the non-fixed containers jobs, unless someone fixes them, in 1 month.
>

Looks like a good plan.


>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure
>
>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Amar Tumballi Suryanarayan
On Thu, Jun 20, 2019 at 2:35 PM Niels de Vos  wrote:

> On Thu, Jun 20, 2019 at 02:11:21PM +0530, Amar Tumballi Suryanarayan wrote:
> > On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos  wrote:
> >
> > > On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan
> wrote:
> > > > Considering python3 is anyways the future, I vote for taking the
> patch we
> > > > did in master for fixing regression tests with python3 into the
> release-6
> > > > and release-5 branch and getting over this deadlock.
> > > >
> > > > Patch in discussion here is
> > > > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> > > notices, it
> > > > changes only the files inside 'tests/' directory, which is not
> packaged
> > > in
> > > > a release anyways.
> > > >
> > > > Hari, can we get the backport of this patch to both the release
> branches?
> > >
> > > When going this route, you still need to make sure that the
> > > python3-devel package is available on the CentOS-7 builders. And I
> > > don't know if installing that package is already sufficient, maybe the
> > > backport is not even needed in that case.
> > >
> > >
> > I was thinking, having this patch makes it compatible with both python2
> and
> > python3, so technically, it allows us to move to Fedora30 if we need to
> run
> > regression there. (and CentOS7 with only python2).
> >
> > The above patch made it compatible, not mandatory to have python3. So,
> > treating it as a bug fix.
>
> Well, whatever Python is detected (python3 has preference over python2),
> needs to have the -devel package available too. Detection is done by
> probing the python executable. The Matching header files from -devel
> need to be present in order to be able to build glupy (and others?).
>
> I do not think compatibility for python3/2 is the problem while
> building the tarball.


Got it! True. Compatibility is not the problem when building the tarball.

I noticed that the smoke issue is coming only from the strfmt-errors job,
which checks the 'epel-6-i386' mock and fails right now.

> The backport might become relevant while running
> tests on environments where there is no python2.
>
>
The backport is very important if we are running on a system that has only
python3. Hence my proposal to include it in the releases.

But we are stuck with the strfmt-errors job right now. Looking at what it was
intended to catch in the first place, our
https://build.gluster.org/job/32-bit-build-smoke/ job would mostly be doing
the same. If that is the case, we can remove the strfmt-errors job
altogether. Also note that this job is known to fail many smoke runs with
'Build root is locked by another process' errors.
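
For anyone wondering what that job was presumably meant to catch (I am
inferring from its name and the i386 mock): format strings that only break on
32-bit targets, where 'long' is 4 bytes wide. A tiny, generic illustration,
not taken from the gluster tree:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t size = 5368709120ULL; /* 5 GiB, does not fit in a 32-bit long */

    /* Broken on i386: uint64_t is 'unsigned long long' there, so passing it
     * to %lu trips gcc's -Wformat, which such a job treats as an error:
     *
     *     printf("size: %lu\n", size);
     */

    /* Portable: PRIu64 expands to the right conversion specifier on both
     * 32-bit and 64-bit targets. */
    printf("size: %" PRIu64 "\n", size);
    return 0;
}

A regular x86_64 build never sees that warning, which is why a dedicated
32-bit build (such as the 32-bit-build-smoke job) is what flushes these out.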

It would be great if disabling strfmt-errors is an option.

Regards,

> Niels
>
>
> >
> >
> > > Niels
> > >
> > >
> > > >
> > > > Regards,
> > > > Amar
> > > >
> > > > On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer  >
> > > wrote:
> > > >
> > > > > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > > > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > > > > kkeit...@redhat.com> wrote:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi
> Suryanarayan <
> > > > > > > > > atumb...@redhat.com> wrote:
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > We recently noticed that in one of the package update on
> > > > > > > > > > builder (ie,
> > > > > > > > > > centos7.x machines), python3.6 got installed as a
> dependency.
> > > > > > > > > > So, yes, it
> > > > > > > > > > is possible to have python3 in centos7 now.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > EPEL updated from python34 to python36 recently, but C7
> doesn't
> > > > > > > > > have
> > > > > > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > > > > > packages 

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Amar Tumballi Suryanarayan
On Thu, Jun 20, 2019 at 1:13 PM Niels de Vos  wrote:

> On Thu, Jun 20, 2019 at 11:36:46AM +0530, Amar Tumballi Suryanarayan wrote:
> > Considering python3 is anyways the future, I vote for taking the patch we
> > did in master for fixing regression tests with python3 into the release-6
> > and release-5 branch and getting over this deadlock.
> >
> > Patch in discussion here is
> > https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone
> notices, it
> > changes only the files inside 'tests/' directory, which is not packaged
> in
> > a release anyways.
> >
> > Hari, can we get the backport of this patch to both the release branches?
>
> When going this route, you still need to make sure that the
> python3-devel package is available on the CentOS-7 builders. And I
> don't know if installing that package is already sufficient, maybe the
> backport is not even needed in that case.
>
>
I was thinking that having this patch makes the tests compatible with both
python2 and python3, so technically it allows us to move to Fedora 30 if we
need to run regression there (and to CentOS 7 with only python2).

The above patch made python3 supported, not mandatory. So, I am treating it
as a bug fix.


> Niels
>
>
> >
> > Regards,
> > Amar
> >
> > On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer 
> wrote:
> >
> > > Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > > > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > > > kkeit...@redhat.com> wrote:
> > > > > >
> > > > > > >
> > > > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > > > > > atumb...@redhat.com> wrote:
> > > > > > >
> > > > > > > >
> > > > > > > > We recently noticed that in one of the package update on
> > > > > > > > builder (ie,
> > > > > > > > centos7.x machines), python3.6 got installed as a dependency.
> > > > > > > > So, yes, it
> > > > > > > > is possible to have python3 in centos7 now.
> > > > > > > >
> > > > > > >
> > > > > > > EPEL updated from python34 to python36 recently, but C7 doesn't
> > > > > > > have
> > > > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > > > packages for
> > > > > > > building.
> > > > > > >
> > > > > > > And GlusterFS-5 isn't python3 ready.
> > > > > > >
> > > > > >
> > > > > > Correction: GlusterFS-5 is mostly or completely python3
> > > > > > ready.  FWIW,
> > > > > > python33 is available on both RHEL7 and CentOS7 from the Software
> > > > > > Collection Library (SCL), and python34 and now python36 are
> > > > > > available from
> > > > > > EPEL.
> > > > > >
> > > > > > But packages built for the CentOS Storage SIG have never used the
> > > > > > SCL or
> > > > > > EPEL (EPEL not allowed) and the shebangs in the .py files are
> > > > > > converted
> > > > > > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild
> > > > > > %prep stage.
> > > > > > All the python dependencies for the packages remain the python2
> > > > > > flavors.
> > > > > > AFAIK the centos-regression machines ought to be building the
> > > > > > same way.
> > > > >
> > > > > Indeed, there should not be a requirement on having EPEL enabled on
> > > > > the
> > > > > CentOS-7 builders. At least not for the building of the glusterfs
> > > > > tarball. We still need to do releases of glusterfs-4.1 and
> > > > > glusterfs-5,
> > > > > until then it is expected to have python2 as the (only?) version
> > > > > for the
> > > > > system. Is it possible to remove python3 from the CentOS-7 builders
> > > > > and
> > > > > run the jobs that require python3 on the Fedora builders instead?
> > > >
> > > > Actually, if the python-devel package for python3 is 

Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-20 Thread Amar Tumballi Suryanarayan
Considering python3 is anyways the future, I vote for taking the patch we
did in master for fixing regression tests with python3 into the release-6
and release-5 branch and getting over this deadlock.

Patch in discussion here is
https://review.gluster.org/#/c/glusterfs/+/22829/ and if anyone notices, it
changes only the files inside 'tests/' directory, which is not packaged in
a release anyways.

Hari, can we get the backport of this patch to both the release branches?

Regards,
Amar

On Thu, Jun 13, 2019 at 7:26 PM Michael Scherer  wrote:

> Le jeudi 13 juin 2019 à 14:28 +0200, Niels de Vos a écrit :
> > On Thu, Jun 13, 2019 at 11:08:25AM +0200, Niels de Vos wrote:
> > > On Wed, Jun 12, 2019 at 04:09:55PM -0700, Kaleb Keithley wrote:
> > > > On Wed, Jun 12, 2019 at 11:36 AM Kaleb Keithley <
> > > > kkeit...@redhat.com> wrote:
> > > >
> > > > >
> > > > > On Wed, Jun 12, 2019 at 10:43 AM Amar Tumballi Suryanarayan <
> > > > > atumb...@redhat.com> wrote:
> > > > >
> > > > > >
> > > > > > We recently noticed that in one of the package update on
> > > > > > builder (ie,
> > > > > > centos7.x machines), python3.6 got installed as a dependency.
> > > > > > So, yes, it
> > > > > > is possible to have python3 in centos7 now.
> > > > > >
> > > > >
> > > > > EPEL updated from python34 to python36 recently, but C7 doesn't
> > > > > have
> > > > > python3 in the base. I don't think we've ever used EPEL
> > > > > packages for
> > > > > building.
> > > > >
> > > > > And GlusterFS-5 isn't python3 ready.
> > > > >
> > > >
> > > > Correction: GlusterFS-5 is mostly or completely python3
> > > > ready.  FWIW,
> > > > python33 is available on both RHEL7 and CentOS7 from the Software
> > > > Collection Library (SCL), and python34 and now python36 are
> > > > available from
> > > > EPEL.
> > > >
> > > > But packages built for the CentOS Storage SIG have never used the
> > > > SCL or
> > > > EPEL (EPEL not allowed) and the shebangs in the .py files are
> > > > converted
> > > > from /usr/bin/python3 to /usr/bin/python2 during the rpmbuild
> > > > %prep stage.
> > > > All the python dependencies for the packages remain the python2
> > > > flavors.
> > > > AFAIK the centos-regression machines ought to be building the
> > > > same way.
> > >
> > > Indeed, there should not be a requirement on having EPEL enabled on
> > > the
> > > CentOS-7 builders. At least not for the building of the glusterfs
> > > tarball. We still need to do releases of glusterfs-4.1 and
> > > glusterfs-5,
> > > until then it is expected to have python2 as the (only?) version
> > > for the
> > > system. Is it possible to remove python3 from the CentOS-7 builders
> > > and
> > > run the jobs that require python3 on the Fedora builders instead?
> >
> > Actually, if the python-devel package for python3 is installed on the
> > CentOS-7 builders, things may work too. It still feels like some sort
> > of
> > Frankenstein deployment, and we don't expect to this see in
> > production
> > environments. But maybe this is a workaround in case something
> > really,
> > really, REALLY depends on python3 on the builders.
>
> To be honest, people would be surprised what happen in production
> around (sysadmins tend to discuss around, we all have horrors stories,
> stuff that were supposed to be cleaned and wasn't, etc)
>
> After all, "frankenstein deployment now" is better than "perfect
> later", especially since lots of IT departements are under constant
> pressure (so that's more "perfect never").
>
> I can understand that we want clean and simple code (who doesn't), but
> real life is much messier than we want to admit, so we need something
> robust.
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure
>
>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Release 7: Gentle Reminder, Regression health for release-6.next and release-7

2019-06-18 Thread Amar Tumballi Suryanarayan
On Tue, Jun 18, 2019 at 12:07 PM Rinku Kothiya  wrote:

> Hi Team,
>
> We need to branch for release-7, but nightly builds failures are blocking
> this activity. Please find test failures and respective test links below :
>
> The top tests that are failing are as below and need attention,
>
>
> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
> ./tests/bugs/gfapi/bug-1319374-THIS-crash.t
>

Still an issue with many tests.


> ./tests/basic/distribute/non-root-unlink-stale-linkto.t
>

Looks like this got fixed after https://review.gluster.org/22847


> ./tests/bugs/posix/bug-1040275-brick-uid-reset-on-volume-restart.t
> ./tests/features/subdir-mount.t
>

Got fixed with https://review.gluster.org/22877


> ./tests/basic/ec/self-heal.t
> ./tests/basic/afr/tarissue.t
>

I see random failures on this; not yet sure whether this is a setup issue or
an actual regression issue.


> ./tests/basic/all_squash.t
> ./tests/basic/ec/nfs.t
> ./tests/00-geo-rep/00-georep-verify-setup.t
>

Most of the time, it fails if the 'setup' required to run geo-rep is not complete.


> ./tests/basic/quota-rename.t
> ./tests/basic/volume-snapshot-clone.t
>
> Nightly build for this month :
> https://build.gluster.org/job/nightly-master/
>
> Gluster test failure tracker :
> https://fstat.gluster.org/summary?start_date=2019-06-15_date=2019-06-18
>
> Please file a bug if needed against the test case and report the same
> here, in case a problem is already addressed, then do send back the
> patch details that addresses this issue as a response to this mail.
>
>
Thanks!


> Regards
> Rinku
>
>
> On Fri, Jun 14, 2019 at 9:08 PM Rinku Kothiya  wrote:
>
>> Hi Team,
>>
>> As part of branching preparation next week for release-7, please find
>> test failures and respective test links here.
>>
>> The top tests that are failing are as below and need attention,
>>
>> ./tests/bugs/gfapi/bug-1319374-THIS-crash.t
>> ./tests/basic/uss.t
>> ./tests/basic/volfile-sanity.t
>> ./tests/basic/quick-read-with-upcall.t
>> ./tests/basic/afr/tarissue.t
>> ./tests/features/subdir-mount.t
>> ./tests/basic/ec/self-heal.t
>>
>> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
>> ./tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
>> ./tests/basic/afr/split-brain-favorite-child-policy.t
>> ./tests/basic/distribute/non-root-unlink-stale-linkto.t
>> ./tests/bugs/protocol/bug-1433815-auth-allow.t
>> ./tests/basic/afr/arbiter-mount.t
>> ./tests/basic/all_squash.t
>>
>> ./tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t
>> ./tests/basic/volume-snapshot-clone.t
>> ./tests/bugs/glusterd/serialize-shd-manager-glusterd-restart.t
>> ./tests/basic/gfapi/upcall-register-api.t
>>
>>
>> Nightly build for this month :
>> https://build.gluster.org/job/nightly-master/
>>
>> Gluster test failure tracker :
>>
>> https://fstat.gluster.org/summary?start_date=2019-05-15_date=2019-06-14
>>
>> Please file a bug if needed against the test case and report the same
>> here, in case a problem is already addressed, then do send back the
>> patch details that addresses this issue as a response to this mail.
>>
>> Regards
>> Rinku
>>
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Project Update: Containers-based distributed tests runner

2019-06-17 Thread Amar Tumballi Suryanarayan
This is a nice way for us to validate patches.

The question I have is: did we measure the time benefit of running them in
parallel with containers?

It would be great to see the results of testing this in a cloud environment,
with 5 parallel threads and with 10 parallel threads.

-Amar


On Fri, Jun 14, 2019 at 7:44 PM Aravinda  wrote:

> **gluster-tester** is a framework to run existing "*.t" test files in
> parallel using containers.
>
> Install and usage instructions are available in the following
> repository.
>
> https://github.com/aravindavk/gluster-tester
>
> ## Completed:
> - Create a base container image with all the dependencies installed.
> - Create a tester container image with requested refspec(or latest
> master) compiled and installed.
> - SSH setup in containers required to test Geo-replication
> - Take `--num-parallel` option and spawn the containers with ready
> infra for running tests
> - Split the tests based on the number of parallel jobs specified.
> - Execute the tests in parallel in each container and watch for the
> status.
> - Archive only failed tests(Optionally enable logs for successful tests
> using `--preserve-success-logs`)
>
> ## Pending:
> - NFS related tests are not running since the required changes are
> pending while creating the container image. (To know the failures run
> gluster-tester with `--include-nfs-tests` option)
> - Filter support while running the tests(To enable/disable tests on the
> run time)
> - Some Loop based tests are failing(I think due to shared `/dev/loop*`)
> - A few tests are timing out(Due to this overall test duration is more)
> - Once tests are started, showing real-time status is pending(Now
> status is checked in `/regression-.log` for example
> `/var/log/gluster-tester/regression-3.log`
> - If the base image is not built before running tests, it gives an
> error. Need to re-trigger the base container image step if not built.
> (Issue: https://github.com/aravindavk/gluster-tester/issues/11)
> - Creating an archive of core files
> - Creating a single archive from all jobs/containers
> - Testing `--ignore-from` feature to ignore the tests
> - Improvements to the status output
> - Cleanup(Stop test containers, and delete)
>
> I opened an issue to collect the details of failed tests. I will
> continue to update that issue as and when I capture failed tests in my
> setup.
> https://github.com/aravindavk/gluster-tester/issues/9
>
> Feel free to suggest any feature improvements. Contributions are
> welcome.
> https://github.com/aravindavk/gluster-tester/issues
>
> --
> Regards
> Aravinda
> http://aravindavk.in
>
>
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Quick Update: a cleanup patch and subsequent build failures

2019-06-14 Thread Amar Tumballi Suryanarayan
I just merged a larger cleanup patch, which I felt was good to get in, but
due to the order of its parents when it passed regression and smoke, and the
other patches that got merged at the same time, we hit a compile error about
'undefined functions'.

Below patch fixes it:

glfs: add syscall.h after header cleanup

in one of the recent patches, we cleaned up the unnecessary header
file includes. In the order of merging the patches, there cropped
up a compile error.

updates: bz#1193929
Change-Id: I2ad52aa918f9c698d5273bb293838de6dd50ac31
Signed-off-by: Amar Tumballi 

diff --git a/api/src/glfs.c b/api/src/glfs.c
index b0db866441..0771e074d6 100644
--- a/api/src/glfs.c
+++ b/api/src/glfs.c
@@ -45,6 +45,7 @@
 #include 
 #include "rpc-clnt.h"
 #include 
+#include 

 #include "gfapi-messages.h"
 #include "glfs.h"

-
The patch has been pushed to the repository, as it was causing a critical
compile error. If you have a build error, please fetch the latest
master to fix the issue.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Seems like Smoke job is not voting

2019-06-14 Thread Amar Tumballi Suryanarayan
OK, I have guessed the possible cause.

The same possible DNS issue with review.gluster.org could have prevented the
patch from being fetched in smoke, and hence the job would not have been
triggered.

Those of you whose patches are not getting a smoke run, please trigger
'recheck smoke' through a comment.

-Amar

On Fri, Jun 14, 2019 at 5:16 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> I see that patches starting from 10:45 AM IST (7 hours ago) are not getting
> smoke votes.
>
> For one of my patches, the smoke job was not triggered at all, IMO.
>
> https://review.gluster.org/#/c/22863/
>
> Would be good to check it.
>
> Regards,
> Amar
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Seems like Smoke job is not voting

2019-06-14 Thread Amar Tumballi Suryanarayan
I see that patches starting from 10:45 AM IST (7 hours ago) are not getting
smoke votes.

For one of my patches, the smoke job was not triggered at all, IMO.

https://review.gluster.org/#/c/22863/

Would be good to check it.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing glupy from release 5.7

2019-06-12 Thread Amar Tumballi Suryanarayan
On Wed, Jun 12, 2019 at 8:42 PM Niels de Vos  wrote:

> On Wed, Jun 12, 2019 at 07:54:17PM +0530, Hari Gowtham wrote:
> > We haven't sent any patch to fix it.
> > Waiting for the decision to be made.
> > The bz: https://bugzilla.redhat.com/show_bug.cgi?id=1719778
> > The link to the build log:
> >
> https://build.gluster.org/job/strfmt_errors/1/artifact/RPMS/el6/i686/build.log
> >
> > The last few messages in the log:
> >
> > config.status: creating xlators/features/changelog/lib/src/Makefile
> > config.status: creating xlators/features/changetimerecorder/Makefile
> > config.status: creating xlators/features/changetimerecorder/src/Makefile
> > BUILDSTDERR: config.status: error: cannot find input file:
> > xlators/features/glupy/Makefile.in
> > RPM build errors:
> > BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build)
> > BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.kGZI5V (%build)
> > Child return code was: 1
> > EXCEPTION: [Error()]
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py",
> > line 96, in trace
> > result = func(*args, **kw)
> >   File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line 736,
> > in do_with_status
> > raise exception.Error("Command failed: \n # %s\n%s" % (command,
> > output), child.returncode)
> > mockbuild.exception.Error: Command failed:
> >  # bash --login -c /usr/bin/rpmbuild -bb --target i686 --nodeps
> > /builddir/build/SPECS/glusterfs.spec
>
> Those messages are caused by missing files. The 'make dist' that
> generates the tarball in the previous step did not included the glupy
> files.
>
> https://build.gluster.org/job/strfmt_errors/1/console contains the
> following message:
>
> configure: WARNING:
>
> -
> cannot build glupy. python 3.6 and python-devel/python-dev
> package are required.
>
> -
>
> I am not sure if there have been any recent backports to release-5 that
> introduced this behaviour. Maybe it is related to the builder where the
> tarball is generated. The job seems to detect python-3.6.8, which is not
> included in CentOS-7 for all I know?
>
>
We recently noticed that in one of the package updates on the builders (i.e.,
the centos7.x machines), python3.6 got installed as a dependency. So, yes, it
is possible to have python3 on centos7 now.

-Amar


> Maybe someone else understands how this can happen?
>
> HTH,
> Niels
>
>
> >
> > On Wed, Jun 12, 2019 at 7:04 PM Niels de Vos  wrote:
> > >
> > > On Wed, Jun 12, 2019 at 02:44:04PM +0530, Hari Gowtham wrote:
> > > > Hi,
> > > >
> > > > Due to the recent changes we made. we have a build issue because of
> glupy.
> > > > As glupy is already removed from master, we are thinking of removing
> > > > it in 5.7 as well rather than fixing the issue.
> > > >
> > > > The release of 5.7 will be delayed as we have send a patch to fix
> this issue.
> > > > And if anyone has any concerns, do let us know.
> > >
> > > Could you link to the BZ with the build error and patches that attempt
> > > fixing it?
> > >
> > > We normally do not remove features with minor updates. Fixing the build
> > > error would be the preferred approach.
> > >
> > > Thanks,
> > > Niels
> >
> >
> >
> > --
> > Regards,
> > Hari Gowtham.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Linux 5.2-RC regression bisected, mounting glusterfs volumes fails after commit: fuse: require /dev/fuse reads to have enough buffer capacity

2019-06-11 Thread Amar Tumballi Suryanarayan
Thanks for the heads-up! We will see how to revert / fix the issue properly
for the 5.2 kernel.
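
For background on why the reads start returning EINVAL: judging by the commit
message, the kernel now rejects any read on /dev/fuse whose buffer cannot
hold a complete request, i.e. roughly the fixed request header plus the
negotiated max_write payload. Below is a minimal userspace sketch of a read
loop sized to satisfy that constraint; it only illustrates the constraint
(the exact lower bound and the 128 KiB max_write are assumptions on my part)
and is not the actual fuse-bridge.c fix:

#include <linux/fuse.h> /* fuse_in_header, fuse_write_in, FUSE_MIN_READ_BUFFER */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* max_write as negotiated during FUSE_INIT; 128 KiB is an assumed value. */
#define NEGOTIATED_MAX_WRITE (128 * 1024)

int
read_fuse_requests(int fuse_fd)
{
    /* Size the buffer for a full WRITE request: header + write_in + payload,
     * and never below FUSE_MIN_READ_BUFFER. A buffer smaller than what the
     * kernel expects now makes read() fail with EINVAL instead of being
     * retried, which is what the 'Invalid argument' lines below show. */
    size_t bufsize = sizeof(struct fuse_in_header) +
                     sizeof(struct fuse_write_in) + NEGOTIATED_MAX_WRITE;
    if (bufsize < FUSE_MIN_READ_BUFFER)
        bufsize = FUSE_MIN_READ_BUFFER;

    char *buf = malloc(bufsize);
    if (!buf)
        return -1;

    for (;;) {
        ssize_t len = read(fuse_fd, buf, bufsize);
        if (len < 0) {
            perror("read from /dev/fuse");
            break;
        }
        /* dispatch the request held in buf[0..len) to the filesystem here */
    }

    free(buf);
    return 0;
}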

-Amar

On Tue, Jun 11, 2019 at 4:34 PM Sander Eikelenboom 
wrote:

> L.S.,
>
> While testing a linux 5.2 kernel I noticed it fails to mount my glusterfs
> volumes.
>
> It repeatedly fails with:
>[2019-06-11 09:15:27.106946] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>[2019-06-11 09:15:27.106955] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>[2019-06-11 09:15:27.106963] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>[2019-06-11 09:15:27.106971] W [fuse-bridge.c:4993:fuse_thread_proc]
> 0-glusterfs-fuse: read from /dev/fuse returned -1 (Invalid argument)
>etc.
>etc.
>
> Bisecting turned up as culprit:
> commit d4b13963f217dd947da5c0cabd1569e914d21699: fuse: require
> /dev/fuse reads to have enough buffer capacity
>
> The glusterfs version i'm using is from Debian stable:
> ii  glusterfs-client3.8.8-1
> amd64clustered file-system (client package)
> ii  glusterfs-common3.8.8-1
> amd64GlusterFS common libraries and translator modules
>
>
> A 5.1.* kernel works fine, as does a 5.2-rc4 kernel with said commit
> reverted.
>
> --
> Sander
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Regression failure continues: 'tests/basic/afr/split-brain-favorite-child-policy.t`

2019-06-10 Thread Amar Tumballi Suryanarayan
Fails with:

20:56:58 ok 132 [  8/ 82] < 194> 'gluster --mode=script --wignore volume heal patchy'
20:56:58 not ok 133 [  8/  80260] < 195> '^0$ get_pending_heal_count patchy' -> 'Got "2" instead of "^0$"'
20:56:58 ok 134 [ 18/  2] < 197> '0 echo 0'


Looks like when the error occurred, it took 80 seconds.


I see 2 different patches fail on this; it would be good to analyze this
further.


Regards,

Amar


-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] CI failure - NameError: name 'unicode' is not defined (related to changelogparser.py)

2019-06-08 Thread Amar Tumballi Suryanarayan
Update:

The issue happened because python3 got installed on the centos7.x series of
builders due to other package dependencies. And since GlusterFS picks python3
as the priority even when python2 is the default, the tests started to fail.
We had completed the work of migrating the code to work smoothly with python3
by the glusterfs-6.0 release, but had not noticed issues with the regression
framework as it had been running only on centos7 (python2) earlier.

With this event, our regression tests are also now compatible with python3
(thanks to the below-mentioned patch from Kotresh). We were able to mark a
few spurious failures as BAD_TEST and fix all the python3-related issues in
regression by EOD Friday, and after watching the regression tests for one
more day, we can say that the issues are now resolved.

Please resubmit (or rebase in the Gerrit web UI) before triggering 'recheck
centos' on the submitted patch(es).

Thanks to everyone who responded quickly once the issue was noticed; we are
back to GREEN again.

Regards,
Amar



On Fri, Jun 7, 2019 at 10:26 AM Deepshikha Khandelwal 
wrote:

> Hi Yaniv,
>
> We are working on this. The builders are picking up python3.6 which is
> leading to modules missing and such undefined errors.
>
> Kotresh has sent a patch https://review.gluster.org/#/c/glusterfs/+/22829/
> to fix the issue.
>
>
>
> On Thu, Jun 6, 2019 at 11:49 AM Yaniv Kaul  wrote:
>
>> From [1].
>>
>> I think it's a Python2/3 thing, so perhaps a CI issue additionally
>> (though if our code is not Python 3 ready, let's ensure we use Python 2
>> explicitly until we fix this).
>>
>> *00:47:05.207* ok  14 [ 13/386] <  34> 'gluster --mode=script 
>> --wignore volume start patchy'*00:47:05.207* ok  15 [ 13/ 70] <  36> 
>> '_GFS --attribute-timeout=0 --entry-timeout=0 --volfile-id=patchy 
>> --volfile-server=builder208.int.aws.gluster.org 
>> /mnt/glusterfs/0'*00:47:05.207* Traceback (most recent call 
>> last):*00:47:05.207*   File 
>> "./tests/basic/changelog/../../utils/changelogparser.py", line 233, in 
>> *00:47:05.207* parse(sys.argv[1])*00:47:05.207*   File 
>> "./tests/basic/changelog/../../utils/changelogparser.py", line 221, in 
>> parse*00:47:05.207* process_record(data, tokens, changelog_ts, 
>> callback)*00:47:05.207*   File 
>> "./tests/basic/changelog/../../utils/changelogparser.py", line 178, in 
>> process_record*00:47:05.207* callback(record)*00:47:05.207*   File 
>> "./tests/basic/changelog/../../utils/changelogparser.py", line 182, in 
>> default_callback*00:47:05.207* 
>> sys.stdout.write(u"{0}\n".format(record))*00:47:05.207*   File 
>> "./tests/basic/changelog/../../utils/changelogparser.py", line 128, in 
>> __str__*00:47:05.207* return unicode(self).encode('utf-8')*00:47:05.207* 
>> NameError: name 'unicode' is not defined*00:47:05.207* not ok  16 [ 53/  
>>39] <  42> '2 check_changelog_op 
>> /d/backends/patchy0/.glusterfs/changelogs RENAME' -> 'Got "0" instead of "2"'
>>
>>
>> Y.
>>
>> [1] https://build.gluster.org/job/centos7-regression/6318/console
>>
>> ___
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/836554017
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/486278655
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Fwd: Build failed in Jenkins: regression-test-with-multiplex #1359

2019-06-06 Thread Amar Tumballi Suryanarayan
I got time to test subdir-mount.t failing in the brick-mux scenario.

I noticed some issues where I need further help from the glusterd team.

subdir-mount.t expects the 'hook' script to run after add-brick to make sure
the required subdirectories are healed and are present on the new bricks.
This is important, as a subdir mount expects the subdirs to exist for a
successful mount.

But in the case of a brick-mux setup, I see that in some cases (6/10) the
hook script (add-brick/post-hook/S13-create-subdir-mount.sh) started
executing only 20 seconds after the add-brick command finished. Due to this,
the mount which we execute after add-brick failed.

My question is: what is making the post hook script run so late?

I can recreate the issues locally on my laptop too.


On Sat, Jun 1, 2019 at 4:55 PM Atin Mukherjee  wrote:

> subdir-mount.t has started failing in brick mux regression nightly. This
> needs to be fixed.
>
> Raghavendra - did we manage to get any further clue on uss.t failure?
>
> -- Forwarded message -
> From: 
> Date: Fri, 31 May 2019 at 23:34
> Subject: [Gluster-Maintainers] Build failed in Jenkins:
> regression-test-with-multiplex #1359
> To: , , ,
> , 
>
>
> See <
> https://build.gluster.org/job/regression-test-with-multiplex/1359/display/redirect?page=changes
> >
>
> Changes:
>
> [atin] glusterd: add an op-version check
>
> [atin] glusterd/svc: glusterd_svcs_stop should call individual wrapper
> function
>
> [atin] glusterd/svc: Stop stale process using the glusterd_proc_stop
>
> [Amar Tumballi] lcov: more coverage to shard, old-protocol, sdfs
>
> [Kotresh H R] tests/geo-rep: Add EC volume test case
>
> [Amar Tumballi] glusterfsd/cleanup: Protect graph object under a lock
>
> [Mohammed Rafi KC] glusterd/shd: Optimize the glustershd manager to send
> reconfigure
>
> [Kotresh H R] tests/geo-rep: Add tests to cover glusterd geo-rep
>
> [atin] glusterd: Optimize code to copy dictionary in handshake code path
>
> --
> [...truncated 3.18 MB...]
> ./tests/basic/afr/stale-file-lookup.t  -  9 second
> ./tests/basic/afr/granular-esh/replace-brick.t  -  9 second
> ./tests/basic/afr/granular-esh/add-brick.t  -  9 second
> ./tests/basic/afr/gfid-mismatch.t  -  9 second
> ./tests/performance/open-behind.t  -  8 second
> ./tests/features/ssl-authz.t  -  8 second
> ./tests/features/readdir-ahead.t  -  8 second
> ./tests/bugs/upcall/bug-1458127.t  -  8 second
> ./tests/bugs/transport/bug-873367.t  -  8 second
> ./tests/bugs/replicate/bug-1498570-client-iot-graph-check.t  -  8 second
> ./tests/bugs/replicate/bug-1132102.t  -  8 second
> ./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
> -  8 second
> ./tests/bugs/quota/bug-1104692.t  -  8 second
> ./tests/bugs/posix/bug-1360679.t  -  8 second
> ./tests/bugs/posix/bug-1122028.t  -  8 second
> ./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  8 second
> ./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
> ./tests/bugs/glusterd/sync-post-glusterd-restart.t  -  8 second
> ./tests/bugs/glusterd/bug-1696046.t  -  8 second
> ./tests/bugs/fuse/bug-983477.t  -  8 second
> ./tests/bugs/ec/bug-1227869.t  -  8 second
> ./tests/bugs/distribute/bug-1088231.t  -  8 second
> ./tests/bugs/distribute/bug-1086228.t  -  8 second
> ./tests/bugs/cli/bug-1087487.t  -  8 second
> ./tests/bugs/cli/bug-1022905.t  -  8 second
> ./tests/bugs/bug-1258069.t  -  8 second
> ./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t
> -  8 second
> ./tests/basic/xlator-pass-through-sanity.t  -  8 second
> ./tests/basic/quota-nfs.t  -  8 second
> ./tests/basic/glusterd/arbiter-volume.t  -  8 second
> ./tests/basic/ctime/ctime-noatime.t  -  8 second
> ./tests/line-coverage/cli-peer-and-volume-operations.t  -  7 second
> ./tests/gfid2path/get-gfid-to-path.t  -  7 second
> ./tests/bugs/upcall/bug-1369430.t  -  7 second
> ./tests/bugs/snapshot/bug-1260848.t  -  7 second
> ./tests/bugs/shard/shard-inode-refcount-test.t  -  7 second
> ./tests/bugs/shard/bug-1258334.t  -  7 second
> ./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
> ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  7 second
> ./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
> ./tests/bugs/posix/bug-1175711.t  -  7 second
> ./tests/bugs/nfs/bug-915280.t  -  7 second
> ./tests/bugs/md-cache/setxattr-prepoststat.t  -  7 second
> ./tests/bugs/md-cache/bug-1211863_unlink.t  -  7 second
> ./tests/bugs/glusterfs/bug-848251.t  -  7 second
> ./tests/bugs/distribute/bug-1122443.t  -  7 second
> ./tests/bugs/changelog/bug-1208470.t  -  7 second
> ./tests/bugs/bug-1702299.t  -  7 second
> ./tests/bugs/bug-1371806_2.t  -  7 second
> ./tests/

[Gluster-devel] Update: GlusterFS code coverage

2019-06-05 Thread Amar Tumballi Suryanarayan
All,

I just wanted to update everyone about one of the initiatives we have
undertaken, i.e., increasing the overall code coverage of GlusterFS above 70%.
You can have a look at the current code coverage here:
https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/
(this always shows the latest report)

The daily job, and its details are captured @
https://build.gluster.org/job/line-coverage/

When we started focusing on code coverage 3 months back, our coverage was
around 60% overall. We set the ambitious goal of increasing the code coverage
by 10% before the glusterfs-7.0 release, and I am happy to announce that we
met this goal before the branching.

Before talking about the next goals, I want to thank and call out a few
developers who made this happen.

* Xavier Hernandez - Made EC cross 90% from < 70%.
* Glusterd Team (Sanju, Rishub, Mohit, Atin) - Increased CLI/glusterd
coverage
* Geo-Rep Team (Kotresh, Sunny, Shwetha, Aravinda).
* Sheetal (helped to increase the glfs-api test cases, which indirectly
helped cover more code across components).

Also note that some components, like AFR/replicate, were already at 80%+
before we started these efforts.

Now, our next goal is to make sure we have above 80% function coverage in all
of the top-level components shown. Once that is done, we will focus on 75%
code coverage across all components (i.e., no 'red' on the top-level page).

While it was possible to meet our goal of increasing the overall code
coverage from 60% to 70%, increasing it above 70% is not going to be easy,
mainly because it involves adding more tests for negative cases and adding
tests with different options (currently >300 of them across the codebase). We
also need to look at the details in the code coverage reports and reverse
engineer how to write a test that hits a particular line in the code.

I personally invite everyone who is interested in contributing to the Gluster
project to get involved in this effort. Help us write test cases and suggest
how to improve them. Help by assigning interns to write them for us (if your
team has some). This is a good way to understand the glusterfs code too. We
are happy to organize sessions on how to walk through the code, etc., if
required.

Happy to hear feedback and see more contribution in this area.

Regards,
Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] tests are timing out in master branch

2019-05-21 Thread Amar Tumballi Suryanarayan
It looks like after reverting a patch to the RPC layer reconnection logic
(https://review.gluster.org/22750), things are back to normal.

For those who submitted a patch in the last week, please resubmit (which
should take care of rebasing on top of this patch).

This event proves that there are very delicate races in our RPC layer which
can trigger random failures. While this was discussed briefly earlier, we
need to debug it further and come up with possible next actions. Volunteers
are welcome.

I recommend using https://github.com/gluster/glusterfs/issues/391 to capture
our observations, and continuing on GitHub from here.

-Amar


On Wed, May 15, 2019 at 11:46 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> On Wed, May 15, 2019 at 11:24 AM Atin Mukherjee 
> wrote:
> >
> > There're random tests which are timing out after 200 secs. My belief is
> this is a major regression introduced by some commit recently or the
> builders have become extremely slow which I highly doubt. I'd request that
> we first figure out the cause, get master back to it's proper health and
> then get back to the review/merge queue.
> >
>
> For such dire situations, we also need to consider a proposal to back
> out patches in order to keep the master healthy. The outcome we seek
> is a healthy master - the isolation of the cause allows us to not
> repeat the same offense.
>
> > Sanju has already started looking into
> /tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t to understand
> what test is specifically hanging and consuming more time.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Release 6.2: Expected tagging on May 15th

2019-05-17 Thread Amar Tumballi Suryanarayan
Which are the patches? I can merge them for now.

-Amar

On Fri, May 17, 2019 at 1:10 PM Hari Gowtham  wrote:

> Thanks Sunny.
> Have CCed Shyam.
>
> On Fri, May 17, 2019 at 1:06 PM Sunny Kumar  wrote:
> >
> > Hi Hari,
> >
> > For this to pass regression other 3 patches needs to merge first, I
> > tried to merge but do not have sufficient permissions to merge on 6.2
> > branch.
> > I know bug is already in place to grant additional permission for
> > us(Me, you and Rinku) so until then waiting on Shyam to merge it.
> >
> > -Sunny
> >
> > On Fri, May 17, 2019 at 12:54 PM Hari Gowtham 
> wrote:
> > >
> > > Hi Kotresh ans Sunny,
> > > The patch has been failing regression a few times.
> > > We need to look into why this is happening and take a decision
> > > as to take it in release 6.2 or drop it.
> > >
> > > On Wed, May 15, 2019 at 4:27 PM Hari Gowtham 
> wrote:
> > > >
> > > > Hi,
> > > >
> > > > The following patch is waiting for centos regression.
> > > > https://review.gluster.org/#/c/glusterfs/+/22725/
> > > >
> > > > Sunny or Kotresh, please do take a look so that we can go ahead with
> > > > the tagging.
> > > >
> > > > On Thu, May 9, 2019 at 4:45 PM Hari Gowtham 
> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > Expected tagging date for release-6.2 is on May, 15th, 2019.
> > > > >
> > > > > Please ensure required patches are backported and also are passing
> > > > > regressions and are appropriately reviewed for easy merging and
> tagging
> > > > > on the date.
> > > > >
> > > > > --
> > > > > Regards,
> > > > > Hari Gowtham.
> > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Hari Gowtham.
> > >
> > >
> > >
> > > --
> > > Regards,
> > > Hari Gowtham.
>
>
>
> --
> Regards,
> Hari Gowtham.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] nightly builds are available again, with slightly different versioning

2019-05-15 Thread Amar Tumballi Suryanarayan
Thanks for noticing and correcting the issue, Niels. Very helpful.


On Wed, May 15, 2019 at 12:48 PM Niels de Vos  wrote:

> This is sort of an RCA and notification to anyone interested in using
> nightly builds of GlusterFS. If you have any (automated) tests that
> consume the nightly builds for non-master branches, you did not run
> tests with updated packages since 2 May 2019. The nightly builds failed
> to run, but nobody was notified or reported this.
>
> Around two weeks ago the nightly builds for glusterfs of the non-master
> branches were broken due to a change in the CI script. This has been
> corrected now and a manual run of the job shows green balls again:
>   https://ci.centos.org/view/Gluster/job/gluster_build-rpms/
>
> The initial breakage was introduced by an optimization to not download
> the whole glusterfs git repository, but only the current HEAD. This did
> not take into account that 'git checkout' would not be able to switch to
> a branch that was not downloaded. With a few iterations of fixes, it
> became obvious that also tags were not fetched (duh), and 'git describe'
> would not work. Without tags it is not possible to mark builds with the
> most recent minor release that was made of a branch. Currently the date
> of the build + git-hash is part of the package version. That means that
> there is a new version of each branch every day, instead of only after
> commits have been merged. This might be changed in the future...
>
> As a reminder, the YUM .repo files for the nightly builds can be found
> at http://artifacts.ci.centos.org/gluster/nightly/
>
> Cheers,
> Niels
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-05-06 Thread Amar Tumballi Suryanarayan
On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  wrote:

> Hello Gluster folks,
>
> Gluster-block team is happy to announce the v0.4 release [1].
>
> This is the new stable version of gluster-block, lots of new and
> exciting features and interesting bug fixes are made available as part
> of this release.
> Please find the big list of release highlights and notable fixes at [2].
>
>
Good work, team (Prasanna and Xiubo Li, to be precise)!!

This was a much-needed release for the gluster-block project, mainly because
of the number of improvements made since the last release. Also,
gluster-block release 0.3 was not compatible with the glusterfs-6.x series.

Everyone, feel free to use it if your deployment has any use case for block
storage, and give us feedback. We are happy to make sure gluster-block is
stable for you.

Regards,
Amar


> Details about installation can be found in the easy install guide at
> [3]. Find the details about prerequisites and setup guide at [4].
> If you are a new user, checkout the demo video attached in the README
> doc [5], which will be a good source of intro to the project.
> There are good examples about how to use gluster-block both in the man
> pages [6] and test file [7] (also in the README).
>
> gluster-block is part of fedora package collection, an updated package
> with release version v0.4 will be soon made available. And the
> community provided packages will be soon made available at [8].
>
> Please spend a minute to report any kind of issue that comes to your
> notice with this handy link [9].
> We look forward to your feedback, which will help gluster-block get better!
>
> We would like to thank all our users, contributors for bug filing and
> fixes, also the whole team who involved in the huge effort with
> pre-release testing.
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/releases
> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
> [4] https://github.com/gluster/gluster-block#usage
> [5] https://github.com/gluster/gluster-block/blob/master/README.md
> [6] https://github.com/gluster/gluster-block/tree/master/docs
> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
> [8] https://download.gluster.org/pub/gluster/gluster-block/
> [9] https://github.com/gluster/gluster-block/issues/new
>
> Cheers,
> Team Gluster-Block!
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Coverity scan - how does it ignore dismissed defects & annotations?

2019-05-03 Thread Amar Tumballi Suryanarayan
On Fri, May 3, 2019 at 3:17 PM Atin Mukherjee  wrote:

>
>
> On Fri, 3 May 2019 at 14:59, Xavi Hernandez  wrote:
>
>> Hi Atin,
>>
>> On Fri, May 3, 2019 at 10:57 AM Atin Mukherjee 
>> wrote:
>>
>>> I'm bit puzzled on the way coverity is reporting the open defects on GD1
>>> component. As you can see from [1], technically we have 6 open defects and
>>> all of the rest are being marked as dismissed. We tried to put some
>>> additional annotations in the code through [2] to see if coverity starts
>>> feeling happy but the result doesn't change. I still see in the report it
>>> complaints about open defect of GD1 as 25 (7 as High, 18 as medium and 1 as
>>> Low). More interestingly yesterday's report claimed we fixed 8 defects,
>>> introduced 1, but the overall count remained as 102. I'm not able to
>>> connect the dots of this puzzle, can anyone?
>>>
>>
>> Maybe we need to modify all dismissed CID's so that Coverity considers
>> them again and, hopefully, mark them as solved with the newer updates. They
>> have been manually marked to be ignored, so they are still there...
>>
>
> After yesterday’s run I set the severity for all of them to see if
> modifications to these CIDs make any difference or not. So fingers crossed
> till the next report comes :-) .
>

If you look at the previous day's report, it was 101 'Open defects' and 65
'Dismissed' (which means they are not 'fixed in code', but dismissed as false
positives or ignored in the CID dashboard).

Now it is 57 'Dismissed', which means your patch has actually fixed 8
defects.


>
>
>> Just a thought, I'm not sure how this really works.
>>
>
> Same here, I don’t understand the exact workflow and hence seeking
> additional ideas.
>
>
Looks like we should consider overall open defects as Open + Dismissed.


>
>> Xavi
>>
>>
>>>
>>> [1] https://scan.coverity.com/projects/gluster-glusterfs/view_defects
>>> [2] https://review.gluster.org/#/c/22619/
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> --
> - Atin (atinm)
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-01 Thread Amar Tumballi Suryanarayan
Hi Cynthia Zhou,

Can you post the patch which fixes the issue of the missing free? We will
continue to investigate the leak further, but would really appreciate getting
the patch that has already been worked on landed into upstream master.
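
For reference, the kind of change being discussed is an explicit SSL_free()
when the transport connection is reset or finished, so that the SSL object
(and the BIOs it holds) is not leaked across reconnects. A rough sketch using
plain OpenSSL calls; the structure and function names below are illustrative
and are not the actual socket.c patch:

#include <openssl/ssl.h>

/* Hypothetical per-connection state, loosely modelled on what a transport
 * keeps around; the real private structure in socket.c differs. */
struct conn_ssl_state {
    SSL *ssl;  /* created with SSL_new() when the connection was set up */
    int sock;  /* underlying socket fd, still owned and closed by the transport */
};

void
conn_ssl_teardown(struct conn_ssl_state *st)
{
    if (!st || !st->ssl)
        return;

    SSL_shutdown(st->ssl); /* best-effort close_notify; return value ignored */

    /* SSL_free() releases the SSL object and drops its references on the
     * associated BIOs. Because the BIO was created with BIO_NOCLOSE, the
     * socket fd itself stays open for the transport to close later, but
     * skipping this call leaks the SSL object and its buffers on every
     * reconnect. */
    SSL_free(st->ssl);
    st->ssl = NULL;
}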

-Amar

On Mon, Apr 22, 2019 at 1:38 PM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:

> Ok, I am clear now.
>
> I’ve added ssl_free in socket reset and socket finish function, though
> glusterfsd memory leak is not that much, still it is leaking, from source
> code I can not find anything else,
>
> Could you help to check if this issue exists in your env? If not I may
> have a try to merge your patch .
>
> Step
>
> 1>   while true;do gluster v heal  info,
>
> 2>   check the vol-name glusterfsd memory usage, it is obviously
> increasing.
>
> cynthia
>
>
>
> *From:* Milind Changire 
> *Sent:* Monday, April 22, 2019 2:36 PM
> *To:* Zhou, Cynthia (NSB - CN/Hangzhou) 
> *Cc:* Atin Mukherjee ; gluster-devel@gluster.org
> *Subject:* Re: [Gluster-devel] glusterfsd memory leak issue found after
> enable ssl
>
>
>
> According to BIO_new_socket() man page ...
>
>
>
> *If the close flag is set then the socket is shut down and closed when the
> BIO is freed.*
>
>
>
> For Gluster to have more control over the socket shutdown, the BIO_NOCLOSE
> flag is set. Otherwise, SSL takes control of socket shutdown whenever BIO
> is freed.
>
>
> _______
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Query regarding dictionary logic

2019-04-30 Thread Amar Tumballi Suryanarayan
Shreyas/Kevin tried to address it some time back using
https://bugzilla.redhat.com/show_bug.cgi?id=1428049 (
https://review.gluster.org/16830)

I vaguely remember that the decision to keep the hash size at 1 was made back
when the dictionary itself was sent as the on-wire protocol, and in most
other places the number of entries in a dictionary was, on average, 3. So we
felt that saving a bit of memory was the better optimization at that time.
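
To make the trade-off concrete: with a hash size of 1, every key lands in the
same bucket, so a lookup degenerates into walking the whole collision chain.
A toy sketch of that degenerate case (illustrative only; the real dict.c
structures and hash function differ):

#include <stddef.h>
#include <string.h>

#define HASH_SIZE 1 /* the value in question: a single bucket for everything */

struct pair {
    char *key;
    char *value;
    struct pair *hash_next; /* collision chain inside one bucket */
};

struct toy_dict {
    struct pair *buckets[HASH_SIZE];
};

static unsigned
hash_key(const char *key)
{
    unsigned h = 5381; /* djb2-style toy hash, a stand-in for the real one */
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h;
}

/* With HASH_SIZE == 1 the modulo always yields bucket 0, so every lookup is
 * an O(n) scan of the chain -- effectively a linked list, not a hash table.
 * A larger HASH_SIZE spreads keys over more buckets and shortens the chains,
 * at the cost of a bigger allocation for every dictionary instance. */
struct pair *
toy_dict_lookup(struct toy_dict *d, const char *key)
{
    struct pair *p = d->buckets[hash_key(key) % HASH_SIZE];
    for (; p; p = p->hash_next)
        if (strcmp(p->key, key) == 0)
            return p;
    return NULL;
}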

-Amar

On Tue, Apr 30, 2019 at 12:02 PM Mohit Agrawal  wrote:

> sure Vijay, I will try and update.
>
> Regards,
> Mohit Agrawal
>
> On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur  wrote:
>
>> Hi Mohit,
>>
>> On Mon, Apr 29, 2019 at 7:15 AM Mohit Agrawal 
>> wrote:
>>
>>> Hi All,
>>>
>>>   I was just looking at the code of dict, I have one query current
>>> dictionary logic.
>>>   I am not able to understand why we use hash_size is 1 for a
>>> dictionary.IMO with the
>>>   hash_size of 1 dictionary always work like a list, not a hash, for
>>> every lookup
>>>   in dictionary complexity is O(n).
>>>
>>>   Before optimizing the code I just want to know what was the exact
>>> reason to define
>>>   hash_size is 1?
>>>
>>
>> This is a good question. I looked up the source in gluster's historic
>> repo [1] and hash_size is 1 even there. So, this could have been the case
>> since the first version of the dictionary code.
>>
>> Would you be able to run some tests with a larger hash_size and share
>> your observations?
>>
>> Thanks,
>> Vijay
>>
>> [1]
>> https://github.com/gluster/historic/blob/master/libglusterfs/src/dict.c
>>
>>
>>
>>>
>>>   Please share your view on the same.
>>>
>>> Thanks,
>>> Mohit Agrawal
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Amar Tumballi Suryanarayan
On Fri, Apr 26, 2019 at 6:27 PM Kaleb Keithley  wrote:

>
>
> On Fri, Apr 26, 2019 at 8:21 AM Harold Miller  wrote:
>
>> Has Red Hat security cleared the Slack systems for confidential /
>> customer information?
>>
>> If not, it will make it difficult for support to collect/answer questions.
>>
>
> I'm pretty sure Amar meant as a replacement for the freenode #gluster and
> #gluster-dev channels, given that he sent this to the public gluster
> mailing lists @gluster.org. Nobody should have even been posting
> confidential and/or customer information to any of those lists or channels.
> And AFAIK nobody ever has.
>
>
Yep, I am only talking about IRC (freenode: #gluster, #gluster-dev, etc.).
Also, I am not saying we are 'replacing IRC'. Gluster as a project started in
the pre-Slack era, and we have many users who prefer to stay on IRC. So, for
now, there is no pressure to make a statement calling the Slack channel a
'replacement' for IRC.


> Amar, would you like to clarify which IRC channels you meant?
>
>

Thanks Kaleb. I was a bit confused about why this concern came up in this
group.



>
>> On Fri, Apr 26, 2019 at 6:00 AM Scott Worthington <
>> scott.c.worthing...@gmail.com> wrote:
>>
>>> Hello, are you not _BOTH_ Red Hat FTEs or contractors?
>>>
>>>
Yes! But we come from very different internal teams.

Michael supports the Gluster (the project) team's infrastructure needs, and
has valid concerns from his perspective :-) I, on the other hand, care more
about code, users, and how to make sure we are up to date with other
technologies and communities, from the engineering viewpoint.


> On Fri, Apr 26, 2019, 3:16 AM Michael Scherer  wrote:
>>>
>>>> Le vendredi 26 avril 2019 à 13:24 +0530, Amar Tumballi Suryanarayan a
>>>> écrit :
>>>> > Hi All,
>>>> >
>>>> > We wanted to move to Slack from IRC for our official communication
>>>> > channel
>>>> > from sometime, but couldn't as we didn't had a proper URL for us to
>>>> > register. 'gluster' was taken and we didn't knew who had it
>>>> > registered.
>>>> > Thanks to constant ask from Satish, Slack team has now agreed to let
>>>> > us use
>>>> > https://gluster.slack.com and I am happy to invite you all there.
>>>> > (Use this
>>>> > link
>>>> > <
>>>> >
>>>> https://join.slack.com/t/gluster/shared_invite/enQtNjIxMTA1MTk3MDE1LWIzZWZjNzhkYWEwNDdiZWRiOTczMTc4ZjdiY2JiMTc3MDE5YmEyZTRkNzg0MWJiMWM3OGEyMDU2MmYzMTViYTA
>>>> > >
>>>> > to
>>>> > join)
>>>> >
>>>> > Please note that, it won't be a replacement for mailing list. But can
>>>> > be
>>>> > used by all developers and users for quick communication. Also note
>>>> > that,
>>>> > no information there would be 'stored' beyond 10k lines as we are
>>>> > using the
>>>> > free version of Slack.
>>>>
>>>> Aren't we concerned about the ToS of slack ? Last time I did read them,
>>>> they were quite scary (like, if you use your corporate email, you
>>>> engage your employer, and that wasn't the worst part).
>>>>
>>>> Also, to anticipate the question, my employer Legal department told me
>>>> to not setup a bridge between IRC and slack, due to the said ToS.
>>>>
>>>>
Again, reiterating here: we are not planning to use any bridges from IRC to
Slack. I re-read the Slack API Terms and Conditions, and they make sense.
They surely don't want us to build another Slack, or abuse Slack with too
many API requests made for collecting logs.

Currently, to start with, we are not adding any bots (other than the GitHub
bot). Hopefully, that will keep us within proper usage guidelines.

-Amar


> --
>>>> Michael Scherer
>>>> Sysadmin, Community Infrastructure
>>>>
>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> gluster-us...@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>>
>> HAROLD MILLER
>>
>> ASSOCIATE MANAGER, ENTERPRISE CLOUD SUPPORT
>>
>> Red Hat
>>
>> <https://www.redhat.com/>
>>
>> har...@redhat.comT: (650)-254-4346
>> <https://red.ht/sig>
>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Amar Tumballi Suryanarayan
Hi All,

We have wanted to move our official communication channel from IRC to Slack
for some time, but couldn't, as we didn't have a proper URL to register.
'gluster' was taken and we didn't know who had it registered. Thanks to a
constant ask from Satish, the Slack team has now agreed to let us use
https://gluster.slack.com and I am happy to invite you all there. (Use this
link to join.)

Please note that it won't be a replacement for the mailing list, but it can
be used by all developers and users for quick communication. Also note that
no information there will be 'stored' beyond 10k lines, as we are using the
free version of Slack.

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-11 Thread Amar Tumballi Suryanarayan
Hi All,

Below are the final details of our community meeting, and I will be sending
invites to the mailing list following this email. You can add the Gluster
Community Calendar so you get notifications for the meetings.

We are starting the meetings next week. For the first meeting, we need one
volunteer from users to discuss their use case (what went well, what went
badly, etc.), preferably from the APAC region; the NA/EMEA slot follows the
next week.

Draft Content: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Gluster Community Meeting
Previous Meeting minutes:

   - http://github.com/gluster/community

Date/Time: Check the community calendar
<https://calendar.google.com/calendar/b/1?cid=dmViajVibDBrbnNiOWQwY205ZWg5cGJsaTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ>
Bridge

   - APAC friendly hours
  - Bridge: https://bluejeans.com/836554017
   - NA/EMEA
  - Bridge: https://bluejeans.com/486278655

--
Attendance

   - Name, Company

Host

   - Who will host next meeting?
  - Host will need to send out the agenda 24hr - 12hrs in advance to
  mailing list, and also make sure to send the meeting minutes.
  - Host will need to reach out to one user at least who can talk about
  their usecase, their experience, and their needs.
  - Host needs to send meeting minutes as PR to
  http://github.com/gluster/community

User stories

   - Discuss 1 usecase from a user.
  - How was the architecture derived, what volume type used, options,
  etc?
  - What were the major issues faced ? How to improve them?
  - What worked good?
  - How can we all collaborate well, so it is win-win for the community
  and the user? How can we

Community

   -

   Any release updates?
   -

   Blocker issues across the project?
   -

   Metrics
   - Number of new bugs since previous meeting. How many are not triaged?
  - Number of emails, anything unanswered?

Conferences / Meetups

   - Any conference in next 1 month where gluster-developers are going?
   gluster-users are going? So we can meet and discuss.

Developer focus

   -

   Any design specs to discuss?
   -

   Metrics of the week?
   - Coverity
  - Clang-Scan
  - Number of patches from new developers.
  - Did we increase test coverage?
  - [Atin] Also talk about most frequent test failures in the CI and
  carve out an AI to get them fixed.

RoundTable

   - 



Regards,
Amar

On Mon, Mar 25, 2019 at 8:53 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Thanks for the feedback Darrell,
>
> The new proposal is to have one in North America 'morning' time. (10AM
> PST), And another in ASIA day time, which is evening 7pm/6pm in Australia,
> 9pm Newzealand, 5pm Tokyo, 4pm Beijing.
>
> For example, if we choose Every other Tuesday for meeting, and 1st of the
> month is Tuesday, we would have North America time for 1st, and on 15th it
> would be ASIA/Pacific time.
>
> Hopefully, this way, we can cover all the timezones, and meeting minutes
> would be committed to github repo, so that way, it will be easier for
> everyone to be aware of what is happening.
>
> Regards,
> Amar
>
> On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
> wrote:
>
>> As a user, I’d like to visit more of these, but the time slot is my 3AM.
>> Any possibility for a rolling schedule (move meeting +6 hours each week
>> with rolling attendance from maintainers?) or an occasional regional
>> meeting 12 hours opposed to the one you’re proposing?
>>
>>   -Darrell
>>
>> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>> All,
>>
>> We currently have 3 meetings which are public:
>>
>> 1. Maintainer's Meeting
>>
>> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
>> on an avg, and not much is discussed.
>> - Without majority attendance, we can't take any decisions too.
>>
>> 2. Community meeting
>>
>> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
>> meeting which is for 'Community/Users'. Others are for developers as of
>> now.
>

Re: [Gluster-devel] test failure reports for last 15 days

2019-04-10 Thread Amar Tumballi Suryanarayan
Thanks for the summary Atin.

On Wed, Apr 10, 2019 at 7:30 PM Atin Mukherjee  wrote:

> And now for last 15 days:
>
> https://fstat.gluster.org/summary?start_date=2019-03-25_date=2019-04-10
>
> ./tests/bitrot/bug-1373520.t 18  ==> Fixed through
> https://review.gluster.org/#/c/glusterfs/+/22481/, I don't see this
> failing in brick mux post 5th April
> ./tests/bugs/ec/bug-1236065.t 17  ==> happens only in brick mux, needs
> analysis.
> ./tests/basic/uss.t 15  ==> happens in both brick mux and non
> brick mux runs, test just simply times out. Needs urgent analysis.
> ./tests/basic/ec/ec-fix-openfd.t 13  ==> Fixed through
> https://review.gluster.org/#/c/22508/ , patch merged today.
> ./tests/basic/volfile-sanity.t  8  ==> Some race, though this succeeds
> in second attempt every time.
>
>
Could volfile-sanity.t be failing because of the 'hang' in uss.t? It is
possible, as volfile-sanity.t runs after uss.t in regressions. I checked
volfile-sanity.t, and it has 'cleanup' at the beginning, but I am not sure
whether there is anything lingering that caused these failures.
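
For reference, the pattern in question looks roughly like this (a generic
sketch of a regression .t file, not the actual volfile-sanity.t): every test
sources the harness and calls cleanup first, which is what normally isolates
it from whatever a previous hung or killed test left behind:

    #!/bin/bash
    . $(dirname $0)/../include.rc

    cleanup;

    TEST glusterd
    TEST pidof glusterd
    TEST $CLI volume create $V0 $H0:$B0/${V0}1
    TEST $CLI volume start $V0

    cleanup;

If uss.t times out and gets killed at an unlucky point, anything that cleanup
does not reclaim (hung mounts, leftover daemons) could still bleed into the
next test, which would be consistent with the second-attempt success.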


> There're plenty more with 5 instances of failure from many tests. We need
> all maintainers/owners to look through these failures and fix them, we
> certainly don't want to get into a stage where master is unstable and we
> have to lock down the merges till all these failures are resolved. So
> please help.
>
> (Please note fstat stats show up the retries as failures too which in a
> way is right)
>
>
> On Tue, Feb 26, 2019 at 5:27 PM Atin Mukherjee 
> wrote:
>
>> [1] captures the test failures report since last 30 days and we'd need
>> volunteers/component owners to see why the number of failures are so high
>> against few tests.
>>
>> [1]
>> https://fstat.gluster.org/summary?start_date=2019-01-26_date=2019-02-25=all
>>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
Thanks for the feedback Darrell,

The new proposal is to have one meeting in North America 'morning' time
(10 AM PST), and another during the ASIA daytime, which is evening: 7pm/6pm
in Australia, 9pm in New Zealand, 5pm in Tokyo, 4pm in Beijing.

For example, if we choose every other Tuesday for the meeting, and the 1st of
the month is a Tuesday, we would have the North America time on the 1st, and
on the 15th it would be the ASIA/Pacific time.

Hopefully, this way, we can cover all the timezones, and the meeting minutes
will be committed to the github repo, so it will be easier for everyone to be
aware of what is happening.

Regards,
Amar

On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
wrote:

> As a user, I’d like to visit more of these, but the time slot is my 3AM.
> Any possibility for a rolling schedule (move meeting +6 hours each week
> with rolling attendance from maintainers?) or an occasional regional
> meeting 12 hours opposed to the one you’re proposing?
>
>   -Darrell
>
> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
> All,
>
> We currently have 3 meetings which are public:
>
> 1. Maintainer's Meeting
>
> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
> on an avg, and not much is discussed.
> - Without majority attendance, we can't take any decisions too.
>
> 2. Community meeting
>
> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
> meeting which is for 'Community/Users'. Others are for developers as of
> now.
> Sadly attendance is getting closer to 0 in recent times.
>
> 3. GCS meeting
>
> - We started it as an effort inside Red Hat gluster team, and opened it up
> for community from Jan 2019, but the attendance was always from RHT
> members, and haven't seen any traction from wider group.
>
> So, I have a proposal to call out for cancelling all these meeting, and
> keeping just 1 weekly 'Community' meeting, where even topics related to
> maintainers and GCS and other projects can be discussed.
>
> I have a template of a draft template @
> https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g
>
> Please feel free to suggest improvements, both in agenda and in timings.
> So, we can have more participation from members of community, which allows
> more user - developer interactions, and hence quality of project.
>
> Waiting for feedbacks,
>
> Regards,
> Amar
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
All,

We currently have 3 meetings which are public:

1. Maintainer's Meeting

- Runs once every 2 weeks (on Mondays); current attendance is around 3-5 on
average, and not much is discussed.
- Without majority attendance, we can't take any decisions either.

2. Community meeting

- Supposed to happen on #gluster-meeting every 2 weeks, and is the only
meeting which is for 'Community/Users'; the others are for developers as of
now. Sadly, attendance has been getting close to 0 recently.

3. GCS meeting

- We started it as an effort inside the Red Hat gluster team, and opened it
up to the community from Jan 2019, but the attendance was always from RHT
members, and we haven't seen any traction from the wider group.

So, I have a proposal: cancel all these meetings and keep just one weekly
'Community' meeting, where even topics related to maintainers, GCS, and other
projects can be discussed.

I have a draft template at
https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Please feel free to suggest improvements, both in the agenda and in the
timings, so we can have more participation from members of the community,
which allows more user-developer interaction and hence improves the quality
of the project.

Waiting for feedbacks,

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS v7.0 (and v8.0) roadmap discussion

2019-03-25 Thread Amar Tumballi Suryanarayan
Hello Gluster Members,

We are now done with the glusterfs-6.0 release, and next up is glusterfs-7.0.
But considering that, for many 'initiatives', 3-4 months is not enough time
to complete the tasks, we would like to call for a road-map discussion
meeting for calendar year 2019 (covering both glusterfs-7.0 and 8.0).

It would be good to use the community meeting slot for this. While talking to
the team locally, I compiled a presentation here: <
https://docs.google.com/presentation/d/1rtn38S4YBe77KK5IjczWmoAR-ZSO-i3tNHg9pAH8Wt8/edit?usp=sharing>.
Please go through it and let me know what more can be added, or what can be
dropped.

We can start having discussions in https://hackmd.io/jlnWqzwCRvC9uoEU2h01Zw

Regards,
Amar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Jim,

On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney  wrote:

>
> Issues with glusterfs fuse mounts cause issues with python file open for
> write. We have to use nfs to avoid this.
>
> Really want to see better back-end tools to facilitate cleaning up of
> glusterfs failures. If system is going to use hard linked ID, need a
> mapping of id to file to fix things. That option is now on for all exports.
> It should be the default If a host is down and users delete files by the
> thousands, gluster _never_ catches up. Finding path names for ids across
> even a 40TB mount, much less the 200+TB one, is a slow process. A network
> outage of 2 minutes and one system didn't get the call to recursively
> delete several dozen directories each with several thousand files.
>
>
Are you talking about issues in the geo-replication module, or in some other
application using the native mount? Happy to take the discussion on these
issues forward.

Are there any bugs open on this?

Thanks,
Amar


>
>
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe  wrote:
>>
>> Hi,
>>
>> Looking into something else I fell over this proposal. Being a shop that
>> are going into "Leaving GlusterFS" mode, I thought I would give my two
>> cents.
>>
>> While being partially an HPC shop with a few Lustre filesystems,  we
>> chose GlusterFS for an archiving solution (2-3 PB), because we could find
>> files in the underlying ZFS filesystems if GlusterFS went sour.
>>
>> We have used the access to the underlying files plenty, because of the
>> continuous instability of GlusterFS'. Meanwhile, Lustre have been almost
>> effortless to run and mainly for that reason we are planning to move away
>> from GlusterFS.
>>
>> Reading this proposal kind of underlined that "Leaving GluserFS" is the
>> right thing to do. While I never understood why GlusterFS has been in
>> feature crazy mode instead of stabilizing mode, taking away crucial
>> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
>> very useful, even though the current implementation are not perfect.
>> Tiering also makes so much sense, but, for large files, not on a per-file
>> level.
>>
>> To be honest we only use quotas. We got scared of trying out new
>> performance features that potentially would open up a new back of issues.
>>
>> Sorry for being such a buzzkill. I really wanted it to be different.
>>
>> Cheers,
>> Hans Henrik
>> On 19/07/2018 08.56, Amar Tumballi wrote:
>>
>>
>> * Hi all, Over last 12 years of Gluster, we have developed many features,
>> and continue to support most of it till now. But along the way, we have
>> figured out better methods of doing things. Also we are not actively
>> maintaining some of these features. We are now thinking of cleaning up some
>> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
>> totally taken out of codebase in following releases) in next upcoming
>> release, v5.0. The release notes will provide options for smoothly
>> migrating to the supported configurations. If you are using any of these
>> features, do let us know, so that we can help you with ‘migration’.. Also,
>> we are happy to guide new developers to work on those components which are
>> not actively being maintained by current set of developers. List of
>> features hitting sunset: ‘cluster/stripe’ translator: This translator was
>> developed very early in the evolution of GlusterFS, and addressed one of
>> the very common question of Distributed FS, which is “What happens if one
>> of my file is bigger than the available brick. Say, I have 2 TB hard drive,
>> exported in glusterfs, my file is 3 TB”. While it solved the purpose, it
>> was very hard to handle failure scenarios, and give a real good experience
>> to our users with this feature. Over the time, Gluster solved the problem
>> with it’s ‘Shard’ feature, which solves the problem in much better way, and
>> provides much better solution with existing well supported stack. Hence the
>> proposal for Deprecation. If you are using this feature, then do write to
>> us, as it needs a proper migration from existing volume to a new full
>> supported volume type before you upgrade. ‘storage/bd’ translator: This
>> feature got into the code base 5 years back with this patch
>> <http://review.gluster.org/4809>[1]. Plan was to use a block device
>> directly as a brick, which would help to handle disk-image storage much
>> easily in glusterfs. As the feature is not getting more contribution, and
>> we are not seeing any user traction on 

Re: [Gluster-devel] [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Hans,

Thanks for the honest feedback. Appreciate this.

On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe  wrote:

> Hi,
>
> Looking into something else I fell over this proposal. Being a shop that
> are going into "Leaving GlusterFS" mode, I thought I would give my two
> cents.
>
> While being partially an HPC shop with a few Lustre filesystems,  we chose
> GlusterFS for an archiving solution (2-3 PB), because we could find files
> in the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used the access to the underlying files plenty, because of the
> continuous instability of GlusterFS'. Meanwhile, Lustre have been almost
> effortless to run and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GluserFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature crazy mode instead of stabilizing mode, taking away crucial
> features I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation are not perfect.
> Tiering also makes so much sense, but, for large files, not on a per-file
> level.
>
>
It is a valid concern to raise, and removing existing features is not a good
thing most of the time. But one thing we have noticed over the years is that
the features we develop and do not take to completion cause the major
heartburn. People think a feature is present, and it has already been a few
years since it was introduced, but if the developers are not working on it,
users will always feel that the product doesn't work, because that one
feature didn't work.

Other than Quota, for all the other features in the proposal email, even
though we have *some* users, we are inclined towards deprecating them,
considering the project's overall goal of stability in the longer run.


> To be honest we only use quotas. We got scared of trying out new
> performance features that potentially would open up a new back of issues.

About Quota, we heard enough voices, so we will make sure we keep it. The
original email was a 'Proposal', and hence these opinions matter for the
decision.

> Sorry for being such a buzzkill. I really wanted it to be different.

We hear you. Please let us know one thing: which versions did you try?

We hope that in the coming months, our recent focus on stability and
technical-debt reduction will help you take another look at Gluster.


> Cheers,
> Hans Henrik
> On 19/07/2018 08.56, Amar Tumballi wrote:
>
>
> * Hi all, Over last 12 years of Gluster, we have developed many features,
> and continue to support most of it till now. But along the way, we have
> figured out better methods of doing things. Also we are not actively
> maintaining some of these features. We are now thinking of cleaning up some
> of these ‘unsupported’ features, and mark them as ‘SunSet’ (i.e., would be
> totally taken out of codebase in following releases) in next upcoming
> release, v5.0. The release notes will provide options for smoothly
> migrating to the supported configurations. If you are using any of these
> features, do let us know, so that we can help you with ‘migration’.. Also,
> we are happy to guide new developers to work on those components which are
> not actively being maintained by current set of developers. List of
> features hitting sunset: ‘cluster/stripe’ translator: This translator was
> developed very early in the evolution of GlusterFS, and addressed one of
> the very common question of Distributed FS, which is “What happens if one
> of my file is bigger than the available brick. Say, I have 2 TB hard drive,
> exported in glusterfs, my file is 3 TB”. While it solved the purpose, it
> was very hard to handle failure scenarios, and give a real good experience
> to our users with this feature. Over the time, Gluster solved the problem
> with it’s ‘Shard’ feature, which solves the problem in much better way, and
> provides much better solution with existing well supported stack. Hence the
> proposal for Deprecation. If you are using this feature, then do write to
> us, as it needs a proper migration from existing volume to a new full
> supported volume type before you upgrade. ‘storage/bd’ translator: This
> feature got into the code base 5 years back with this patch
> <http://review.gluster.org/4809>[1]. Plan was to use a block device
> directly as a brick, which would help to handle disk-image storage much
> easily in glusterfs. As the feature is not getting more contribution, and
> we are not seeing any user traction on this, would like to propose for
> Deprecation. If you are using the feature, plan to move to a supported
> gluster volume configuration, and have your setup ‘supported’ before
> upgrading to y

Re: [Gluster-devel] Github#268 Compatibility with Alpine Linux

2019-03-13 Thread Amar Tumballi Suryanarayan
I tried this recently, and the rpcgen issue is real and, I felt, not
straightforward. I would like to pick this up after the glusterfs-6.0
release.

-Amar

On Tue, Mar 12, 2019 at 8:17 AM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:

> Saw some recent activity on
> <https://github.com/gluster/glusterfs/issues/268> - is there a plan to
> address this or, should the interested users be informed about other
> plans?
>
> /s
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-13 Thread Amar Tumballi Suryanarayan
We recommend using 'tirpc' in the later releases: use '--with-tirpc' while
running ./configure.
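
For anyone following along, the sequence would be roughly as below (verify
the exact flag name against ./configure --help in the source tree you are
building):

    ./autogen.sh
    ./configure --with-tirpc
    make -j$(nproc)
    make install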

On Wed, Mar 13, 2019 at 10:55 AM ABHISHEK PALIWAL 
wrote:

> Hi Amar,
>
> this problem seems to be configuration issue due to librpc.
>
> Could you please let me know what should be configuration I need to use?
>
> Regards,
> Abhishek
>
> On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL 
> wrote:
>
>> logs for libgfrpc.so
>>
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
>> not a dynamic executable
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
>> not a dynamic executable
>>
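
For comparison, on a build where those files are real ELF shared objects for
the host running the command, the same invocation prints a dependency list
instead, something like (illustrative output only):

    $ ldd /usr/lib64/libgfrpc.so.0.0.1
            libglusterfs.so.0 => /usr/lib64/libglusterfs.so.0 (0x00007f...)
            libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f...)
            libc.so.6 => /lib64/libc.so.6 (0x00007f...)

"not a dynamic executable" from ldd usually means the path is a symlink or a
linker script, or the library was built for a different architecture than the
machine running ldd, which seems likely here given that a powerpc64 target is
being inspected on a build server.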
>>
>> On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Here are the logs:
>>>
>>>
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
>>> not a dynamic executable
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
>>> not a dynamic executable
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
>>> not a dynamic executable
>>>
>>>
>>> For backtraces I have attached the core_logs.txt file.
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
>>> atumb...@redhat.com> wrote:
>>>
>>>> Hi Abhishek,
>>>>
>>>> Few more questions,
>>>>
>>>>
>>>>> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> Hi Amar,
>>>>>>
>>>>>> Below are the requested logs
>>>>>>
>>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>>>>>> not a dynamic executable
>>>>>>
>>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>>>>>> not a dynamic executable
>>>>>>
>>>>>>
>>>> Can you please add a * at the end, so it gets the linked library list
>>>> from the actual files (ideally this is a symlink, but I expected it to
>>>> resolve like in Fedora).
>>>>
>>>>
>>>>
>>>>> root@128:/# gdb /usr/sbin/glusterd core.1099
>>>>>> GNU gdb (GDB) 7.10.1
>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>> License GPLv3+: GNU GPL version 3 or later <
>>>>>> http://gnu.org/licenses/gpl.html>
>>>>>> This is free software: you are free to change and redistribute it.
>>>>>> There is NO WARRANTY, to the extent permitted by law.  Type "show
>>>>>> copying"
>>>>>> and "show warranty" for details.
>>>>>> This GDB was configured as "powerpc64-wrs-linux".
>>>>>> Type "show configuration" for configuration details.
>>>>>> For bug reporting instructions, please see:
>>>>>> <http://www.gnu.org/software/gdb/bugs/>.
>>>>>> Find the GDB manual and other documentation resources online at:
>>>>>> <http://www.gnu.org/software/gdb/documentation/>.
>>>>>> For help, type "help".
>>>>>> Type "apropos word" to search for commands related to "word"...
>>>>>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>>>>>> found)...done.
>>>>>> [New LWP 1109]
>>>>>> [New LWP 1101]
>>>>>> [New LWP 1105]
>>>>>> [New LWP 1110]
>>>>>> [New LWP 1099]
>>>>>> [New LWP 1107]
>>>>>> [New LWP 1119]
>>>>>> [New LWP 1103]
>>>>>> [New LWP 1112]
>>>>>> [New LWP 1116]
>>>>>> [New LWP 1104]
>>>>>> [New LWP 1239]
>>>>>> [New LWP 1106]
>>>>>> [New LWP ]
>>>>>> [New LWP 1108]
>>>>>> [New LWP 1117]
>>>>>> [New LWP 1102]
>>>>>> [New LWP 1118]
>>>>>> [New LWP 1100]
>>>>>> [New LWP 1114]
>>>>>> [New 

Re: [Gluster-devel] [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread Amar Tumballi Suryanarayan
bgfxdr.so.0
>> No symbol table info available.
>> #8  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109870, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109920 "\232\373\377\315\352\325\005\271"
>> stat = 
>> #9  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109870, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #10 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #11 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81096f0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa81097a0 "\241X\372!\216\256=\342"
>> stat = 
>> ---Type  to continue, or q  to quit---
>> #12 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81096f0, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #13 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #14 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109570, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109620 "\265\205\003Vu'\002L"
>> stat = 
>> #15 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109570, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #16 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #17 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81093f0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa81094a0 "\200L\027F'\177\366D"
>> stat = 
>> #18 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81093f0, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #19 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #20 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109270, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109320 "\217{dK(\001E\220"
>> stat = 
>> #21 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109270, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #22 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #23 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81090f0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa81091a0 "\217\275\067\336\232\300(\005"
>> stat = 
>> #24 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81090f0, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #25 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #26 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8108f70, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109020 "\260.\025\b\244\352IT"
>> stat = 
>> #27 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8108f70, obj_size=,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #28 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #29 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8108df0, size=, proc=) at
>> xdr_ref.c:84
>> loc = 0x3fffa8108ea0 "\212GS\203l\035\n\\"
>> ---Type  to continue, or q  to quit---
>>
>>
>> Regards,
>> Abhishek
>>
>> On Mon, Mar 11, 2019 at 7:10 PM Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>>> Hi Abhishek,
>>>
>

Re: [Gluster-devel] [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-11 Thread Amar Tumballi Suryanarayan
n __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131720, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #25 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #26 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c1315a0, size=, proc=) at
>> xdr_ref.c:84
>> #27 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c1315a0, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #28 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #29 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c131420, size=, proc=) at
>> xdr_ref.c:84
>> #30 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c131420, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> #31 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> #32 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>> pp=0x3fff7c1312a0, size=, proc=) at
>> xdr_ref.c:84
>> #33 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
>> objpp=0x3fff7c1312a0, obj_size=,
>> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>>
>> Frames are getting repeated, could any one please me.
>> --
>> Regards
>> Abhishek Paliwal
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks to those who participated.

Update at present:

We found 3 blocker bugs in upgrade scenarios, and hence have marked the
release as pending on them. We will keep these lists updated on progress.

-Amar

On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi all,
>
> We are calling out our users, and developers to contribute in validating
> ‘glusterfs-6.0rc’ build in their usecase. Specially for the cases of
> upgrade, stability, and performance.
>
> Some of the key highlights of the release are listed in release-notes
> draft
> <https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
> Please note that there are some of the features which are being dropped out
> of this release, and hence making sure your setup is not going to have an
> issue is critical. Also the default lru-limit option in fuse mount for
> Inodes should help to control the memory usage of client processes. All the
> good reason to give it a shot in your test setup.
>
> If you are developer using gfapi interface to integrate with other
> projects, you also have some signature changes, so please make sure your
> project would work with latest release. Or even if you are using a project
> which depends on gfapi, report the error with new RPMs (if any). We will
> help fix it.
>
> As part of test days, we want to focus on testing the latest upcoming
> release i.e. GlusterFS-6, and one or the other gluster volunteers would be
> there on #gluster channel on freenode to assist the people. Some of the key
> things we are looking as bug reports are:
>
>-
>
>See if upgrade from your current version to 6.0rc is smooth, and works
>as documented.
>- Report bugs in process, or in documentation if you find mismatch.
>-
>
>Functionality is all as expected for your usecase.
>- No issues with actual application you would run on production etc.
>-
>
>Performance has not degraded in your usecase.
>- While we have added some performance options to the code, not all of
>   them are turned on, as they have to be done based on usecases.
>   - Make sure the default setup is at least same as your current
>   version
>   - Try out few options mentioned in release notes (especially,
>   --auto-invalidation=no) and see if it helps performance.
>-
>
>While doing all the above, check below:
>- see if the log files are making sense, and not flooding with some
>   “for developer only” type of messages.
>   - get ‘profile info’ output from old and now, and see if there is
>   anything which is out of normal expectation. Check with us on the 
> numbers.
>   - get a ‘statedump’ when there are some issues. Try to make sense
>   of it, and raise a bug if you don’t understand it completely.
>
>
> <https://hackmd.io/YB60uRCMQRC90xhNt4r6gA?both#Process-expected-on-test-days>Process
> expected on test days.
>
>-
>
>We have a tracker bug
><https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0>[0]
>- We will attach all the ‘blocker’ bugs to this bug.
>-
>
>Use this link to report bugs, so that we have more metadata around
>given bugzilla.
>- Click Here
>   
> <https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818_severity=high=core=high=GlusterFS_whiteboard=gluster-test-day=6>
>   [1]
>-
>
>The test cases which are to be tested are listed here in this sheet
>
> <https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing>[2],
>please add, update, and keep it up-to-date to reduce duplicate efforts.
>
> Lets together make this release a success.
>
> Also check if we covered some of the open issues from Weekly untriaged
> bugs
> <https://lists.gluster.org/pipermail/gluster-devel/2019-February/055874.html>
> [3]
>
> For details on build and RPMs check this email
> <https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html>
> [4]
>
> Finally, the dates :-)
>
>- Wednesday - Feb 27th, and
>    - Thursday - Feb 28th
>
> Note that our goal is to identify as many issues as possible in upgrade
> and stability scenarios, and if any blockers are found, want to make sure
> we release with the fix for same. So each of you, Gluster users, feel
> comfortable to upgrade to 6.0 version.
>
> Regards,
> Gluster Ants.
>
> --
> Amar Tumballi (amarts)
>


-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Version uplift query

2019-02-27 Thread Amar Tumballi Suryanarayan
GlusterD2 is not yet recommended for standalone deployments.

You can happily update to glusterfs-5.x (we recommend you wait for
glusterfs-5.4, which is already tagged and waiting for packages to be
built).

Regards,
Amar

On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL 
wrote:

> Hi,
>
> Could  you please update on this and also let us know what is GlusterD2
> (as it is under development in 5.0 release), so it is ok to uplift to 5.0?
>
> Regards,
> Abhishek
>
> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi,
>>
>> Currently we are using Glusterfs 3.7.6 and thinking to switch on
>> Glusterfs 4.1 or 5.0, when I see there are too much code changes between
>> these version, could you please let us know, is there any compatibility
>> issue when we uplift any of the new mentioned version?
>>
>> Regards
>> Abhishek
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-02-25 Thread Amar Tumballi Suryanarayan
Hi all,

We are calling on our users and developers to contribute to validating the
‘glusterfs-6.0rc’ build in their use case, especially for upgrade, stability,
and performance.

Some of the key highlights of the release are listed in release-notes draft
<https://github.com/gluster/glusterfs/blob/release-6/doc/release-notes/6.0.md>.
Please note that there are some of the features which are being dropped out
of this release, and hence making sure your setup is not going to have an
issue is critical. Also the default lru-limit option in fuse mount for
Inodes should help to control the memory usage of client processes. All the
good reason to give it a shot in your test setup.

If you are developer using gfapi interface to integrate with other
projects, you also have some signature changes, so please make sure your
project would work with latest release. Or even if you are using a project
which depends on gfapi, report the error with new RPMs (if any). We will
help fix it.

As part of test days, we want to focus on testing the latest upcoming
release i.e. GlusterFS-6, and one or the other gluster volunteers would be
there on #gluster channel on freenode to assist the people. Some of the key
things we are looking as bug reports are:

   -

   See if upgrade from your current version to 6.0rc is smooth, and works
   as documented.
   - Report bugs in process, or in documentation if you find mismatch.
   -

   Functionality is all as expected for your usecase.
   - No issues with actual application you would run on production etc.
   -

   Performance has not degraded in your usecase.
   - While we have added some performance options to the code, not all of
  them are turned on, as they have to be done based on usecases.
  - Make sure the default setup is at least same as your current version
  - Try out few options mentioned in release notes (especially,
  --auto-invalidation=no) and see if it helps performance.
   -

   While doing all the above, check below:
   - see if the log files are making sense, and not flooding with some “for
  developer only” type of messages.
  - get ‘profile info’ output from old and now, and see if there is
  anything which is out of normal expectation. Check with us on the numbers.
  - get a ‘statedump’ when there are some issues. Try to make sense of
  it, and raise a bug if you don’t understand it completely.
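
For reference, collecting those two artifacts is just a couple of CLI
commands (the volume name is a placeholder; statedump files land under
/var/run/gluster by default):

    gluster volume profile <volname> start
    gluster volume profile <volname> info    # capture on the old and new versions
    gluster volume statedump <volname>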

Process expected on test days.

   -

   We have a tracker bug
   <https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.0>[0]
   - We will attach all the ‘blocker’ bugs to this bug.
   -

   Use this link to report bugs, so that we have more metadata around given
   bugzilla.
   - Click Here
  
<https://bugzilla.redhat.com/enter_bug.cgi?blocked=1672818_severity=high=core=high=GlusterFS_whiteboard=gluster-test-day=6>
  [1]
   -

   The test cases which are to be tested are listed here in this sheet
   
<https://docs.google.com/spreadsheets/d/1AS-tDiJmAr9skK535MbLJGe_RfqDQ3j1abX1wtjwpL4/edit?usp=sharing>[2],
   please add, update, and keep it up-to-date to reduce duplicate efforts.

Lets together make this release a success.

Also check if we covered some of the open issues from Weekly untriaged bugs
<https://lists.gluster.org/pipermail/gluster-devel/2019-February/055874.html>
[3]

For details on build and RPMs check this email
<https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html>
[4]

Finally, the dates :-)

   - Wednesday - Feb 27th, and
   - Thursday - Feb 28th

Note that our goal is to identify as many issues as possible in upgrade and
stability scenarios, and if any blockers are found, want to make sure we
release with the fix for same. So each of you, Gluster users, feel
comfortable to upgrade to 6.0 version.

Regards,
Gluster Ants.

-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterFs v4.1.5: Need help on bitrot detection

2019-02-20 Thread Amar Tumballi Suryanarayan
Hi Chandranana,

We are trying to find a big-endian platform to test this out at the moment,
and will get back to you on this.

Meanwhile, did you run the entire regression suite? Is this the only test
failing? To run the entire regression suite, please run `run-tests.sh -c`
from the glusterfs source repo.
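
As a concrete example, the two runs discussed here would look roughly like
this from the top of the source tree (`prove` is one common way to run a
single .t file; the -c flag is as mentioned above):

    # just the failing test
    prove -vf ./tests/bitrot/bug-1207627-bitrot-scrub-status.t

    # the whole regression suite
    ./run-tests.sh -c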

-Amar

On Tue, Feb 19, 2019 at 1:31 AM Chandranana Naik 
wrote:

> Hi Team,
>
> We are working with GlusterFS v4.1.5 on a big-endian platform (Ubuntu 16.04)
> and found that subtest 20 of the test
> ./tests/bitrot/bug-1207627-bitrot-scrub-status.t is failing.
>
> Subtest 20 is failing as below:
> *trusted.bit-rot.bad-file check_for_xattr trusted.bit-rot.bad-file
> //d/backends/patchy1/FILE1*
> *not ok 20 Got "" instead of "trusted.bit-rot.bad-file", LINENUM:50*
> *FAILED COMMAND: trusted.bit-rot.bad-file check_for_xattr
> trusted.bit-rot.bad-file //d/backends/patchy1/FILE1*
>
> The test is failing with error "*remote operation failed [Cannot allocate
> memory]"* logged in /var/log/glusterfs/scrub.log.
> Could you please let us know if anything is missing to make this test
> pass? PFA the logs for the test case.
>
> *(See attached file: bug-1207627-bitrot-scrub-status.7z)*
>
> Note: *Enough memory is available on the system*.
>
> Regards,
> Chandranana Naik
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] md-cache: May bug found in md-cache.c

2019-02-20 Thread Amar Tumballi Suryanarayan
Hi David,

https://docs.gluster.org/en/latest/Developer-guide/Backport-Guidelines/
gives more details about it.

But the easiest way is to go to your patch (https://review.gluster.org/22234)
and click the 'Cherry Pick' button. In the pop-up's 'branch:' field, enter
'release-6' and Submit. If you want it in the release-5 branch too, repeat
the same with the branch set to 'release-5'. Similarly, we need a 'clone-of'
bug for both branches (the original bug used in the patch is for the master
branch).

That should be it. The rest we can take care of.
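
For anyone who prefers the command line over the Gerrit button, the
equivalent backport flow is roughly as follows (the local branch name is just
an example; keeping the original Change-Id in the commit message is what lets
Gerrit track it as the same change on the release branch):

    git fetch origin
    git checkout -b backport-to-release-6 origin/release-6
    git cherry-pick -x <sha-of-the-master-commit>
    git push origin HEAD:refs/for/release-6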

Thanks a lot!

Regards,
Amar

On Wed, Feb 20, 2019 at 6:58 PM David Spisla 
wrote:

> Hello Amar,
>
>
>
> no problem. How can I do that? Can you please tell me the procedure?
>
>
>
> Regards
>
> David
>
>
>
> *Von:* Amar Tumballi Suryanarayan 
> *Gesendet:* Mittwoch, 20. Februar 2019 14:18
> *An:* David Spisla 
> *Cc:* Gluster Devel 
> *Betreff:* Re: [Gluster-devel] md-cache: May bug found in md-cache.c
>
>
>
> Hi David,
>
>
>
> Thanks for the patch; it is merged in master now. Can you please post it
> to the release branches, so we can take it into the release-6 and release-5
> branches and the next releases can have it?
>
>
>
> Regards,
>
> Amar
>
>
>
> On Tue, Feb 19, 2019 at 8:49 PM David Spisla  wrote:
>
> Hello,
>
>
>
> I already open a bug:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1678726
>
>
>
> There is also a link to a bug fix patch
>
>
>
> Regards
>
> David Spisla
>
>
>
> Am Di., 19. Feb. 2019 um 13:07 Uhr schrieb David Spisla <
> spisl...@gmail.com>:
>
> Hi folks,
>
>
>
> The 'struct md_cache' in md-cache.c uses integer data types which do not
> match the data types used in 'struct iatt' in iatt.h. If one takes a closer
> look at the implementations, one can see that the struct in md-cache.c
> still uses the integer types of the old 'struct old_iatt'. This can lead to
> unexpected side effects, and some iatt values may not be mapped correctly.
> I would suggest opening a bug report. What do you think?
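
A tiny illustration of the kind of mismatch being described (hypothetical
values; the real field names are in the structs quoted below):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t ia_mtime = (int64_t)1 << 33;     /* a value wider than 32 bits */
        uint32_t md_mtime = (uint32_t)ia_mtime;  /* what a 32-bit cache field keeps */
        printf("iatt: %lld  md-cache copy: %u\n", (long long)ia_mtime, md_mtime);
        return 0;                                /* the high bits are silently lost */
    }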
>
> Additional info:
>
> struct md_cache {
> ia_prot_t md_prot;
> uint32_t md_nlink;
> uint32_t md_uid;
> uint32_t md_gid;
> uint32_t md_atime;
> uint32_t md_atime_nsec;
> uint32_t md_mtime;
> uint32_t md_mtime_nsec;
> uint32_t md_ctime;
> uint32_t md_ctime_nsec;
> uint64_t md_rdev;
> uint64_t md_size;
> uint64_t md_blocks;
> uint64_t invalidation_time;
> uint64_t generation;
> dict_t *xattr;
> char *linkname;
> time_t ia_time;
> time_t xa_time;
> gf_boolean_t need_lookup;
> gf_boolean_t valid;
> gf_boolean_t gen_rollover;
> gf_boolean_t invalidation_rollover;
> gf_lock_t lock;
> };
>
> struct iatt {
> uint64_t ia_flags;
> uint64_t ia_ino; /* inode number */
> uint64_t ia_dev; /* backing device ID */
> uint64_t ia_rdev;/* device ID (if special file) */
> uint64_t ia_size;/* file size in bytes */
> uint32_t ia_nlink;   /* Link count */
> uint32_t ia_uid; /* user ID of owner */
> uint32_t ia_gid; /* group ID of owner */
> uint32_t ia_blksize; /* blocksize for filesystem I/O */
> uint64_t ia_blocks;  /* number of 512B blocks allocated */
> int64_t ia_atime;/* last access time */
> int64_t ia_mtime;/* last modification time */
> int64_t ia_ctime;/* last status change time */
> int64_t ia_btime;/* creation time. Fill using statx */
> uint32_t ia_atime_nsec;
> uint32_t ia_mtime_nsec;
> uint32_t ia_ctime_nsec;
> uint32_t ia_btime_nsec;
> uint64_t ia_attributes;  /* chattr related:compressed, immutable,
>   * append only, encrypted etc.*/
> uint64_t ia_attributes_mask; /* Mask for the attributes */
>
> uuid_t ia_gfid;
> ia_type_t ia_type; /* type of file */
> ia_prot_t ia_prot; /* protection */
> };
>
> struct old_iatt {
> uint64_t ia_ino; /* inode number */
> uuid_t ia_gfid;
> uint64_t ia_dev; /* backing device ID */
> ia_type_t ia_type;   /* type of file */
> ia_prot_t ia_prot;   /* protection */
> uint32_t ia_nlink;   /* Link count */
> uint32_t ia_uid; /* user ID of owner */
> uint32_t ia_gid; /* group ID of owner */
> uint64_t ia_rdev;/* device ID (if special file) */
> uint64_t ia_size;/* file size in bytes */
> uint32_t ia_blksize; /* blocksize for filesystem I/O */
> uint64_t ia_blocks;  /* number of 512B blocks allocated */
> uint32_t ia_atime;   /* last access time 
