Re: [Gluster-devel] Gluster Code Metrics Weekly Report

2021-06-13 Thread sankarshan
The Coverity trend line seems to have fallen off a cliff in recent months -
what would be causing this?

On Mon, 14 Jun 2021 at 08:03,  wrote:

> Gluster Code Metrics
> Metrics | Values
> Clang Scan | 88
> Coverity | 32
> Line Cov | 70.8 %
> Func Cov | 85.2 %
> Trend Graph | Check the latest run: Coverity
> <https://scan.coverity.com/projects/gluster-glusterfs>, Clang
> <https://build.gluster.org/job/clang-scan/lastBuild>, Code Coverage
> <https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/index.html>
>



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Meeting minutes for the Gluster community meeting held on 25-05-2021

2021-05-25 Thread sankarshan
Ayush - thank you for hosting what is your first Gluster community
meeting! It was an excellent effort at keeping the conversation moving
along.

Some additional comments in-line.

On Tue, 25 May 2021 at 17:52, Ayush Ujjwal  wrote:
>
> # Gluster Community Meeting -  25/05/2021

[snip]

> * Project metrics:
>
> |Metrics|   Value  |
> | - |  |
> |[Coverity](https://scan.coverity.com/projects/gluster-glusterfs)  | 38  |
> |[Clang Scan](https://build.gluster.org/job/clang-scan/lastBuild/) |   89  |
> |[Test coverage](https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/)| 70.9 |
> |[Gluster User Queries in last 14 days](https://lists.gluster.org/pipermail/gluster-users/2021-May/thread.html#start) | 27 |
> |[Total Github issues](https://github.com/gluster/glusterfs/issues)   |  315   |
>

As brought up at the meeting - it might be useful to discuss the trend
of these values and from there deduce whether they are moving in the
right direction. The values in isolation do not communicate enough data
to determine whether there are opportunities to improve. At a certain
point in early 2020 there was an intense focus on test coverage.
I am not sure if that has resulted in genuinely better coverage or just
in spreading butter on toast.

>
> * Any release updates?
> * None
>
> * Blocker issues across the project?
> * It looks like the lock-recovery changes introduced with 
> https://review.gluster.org/#/c/glusterfs/+/22712/ has issues. We already 
> fixed https://github.com/gluster/glusterfs/pull/2456 and 
> https://github.com/gluster/glusterfs/issues/2337 but looks like the code is 
> buggy. Need someone to take a look at the difference between posix-locks and 
> client xlators in how the locks are maintained to fix the issue completely.
>

As a project it is necessary to do right by our community - this means
that the impact of the issue and remedy/workaround should be
immediately shared widely enough to ensure that this is not missed.
Since the issues are tagged 'blocker' I am guessing that these meet
the somewhat established criteria of a blocker issue and would need an
enhanced level of attention. Have the issues been triaged and
developers assigned?

>
> * Notable thread from mailing list
> * Not exactly from mailing list. A Slack user pinged me and asked me if it 
> is possible to let the users know of any known issues in the latest releases 
> so that they can make a decision about which version to use. For example: 9.0 
> and 9.1 had a protocol issue.
> * Along the same lines, I wanted to ask one more question. Should we 
> release beta-releases for major releases so that we get feedback about any 
> issues that happen in their particular environment to address the issues even 
> before the stable releases are made?
>
>

I've offered to look at the criteria which defines a 'beta' and check
how it aligns with a release schedule. The history of 'beta' releases
of storage software (as compared to say a browser) is that we have
often received no uptake. There are many reasons for this - but one
key aspect is that it is additional work being asked from the
community. If the 'beta' is reasonably well described perhaps the
accrued value from this testing cycle would be better understood.
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Archiving old projects

2021-02-02 Thread sankarshan
On Tue, 2 Feb 2021 at 21:04, Michael Scherer  wrote:
>
> > On Monday, 1 February 2021 at 18:40 +0530, sankarshan wrote:
> > Seems reasonable to do this spring cleaning. I'd like to suggest that
> > we add this as an agenda topic to an upcoming meeting and have it on
> > record prior to moving ahead? Do you mind terribly to track this via
> > an issue which can be referenced in the meeting?
>
> I do not mind, but which tracker would be best for that?
>

Sorry about muddling this - a GitHub issue under project
infrastructure perhaps? The key thing is that I intended to propose
that this activity be tracked at a place other than a mailing list
archive.

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Archiving old projects

2021-02-01 Thread sankarshan
Seems reasonable to do this spring cleaning. I'd like to suggest that
we add this as an agenda topic to an upcoming meeting and have it on
record prior to moving ahead? Do you mind terribly to track this via
an issue which can be referenced in the meeting?

On Mon, 1 Feb 2021 at 18:26, Michael Scherer  wrote:
>
> Hi,
>
> So, it is this time of the year again, do we want to archive the older
> projects on github ?
>
> I already archived:
>
> - gluster/cockpit-gluster (listed as unmaintained)
> - gluster/gluster-plus-one-scale (just a readme)
> - gluster/samba-glusterfs (listed as unmaintained)
> - gluster/anteater (empty, except 1 bug to say to remove it)
>
>
>
> Unless people say "no" this week, I propose those:
> https://github.com/gluster/nagios-server-addons
> https://github.com/gluster/gluster-nagios-common
> https://github.com/gluster/gluster-nagios-addons
> https://github.com/gluster/mod_proxy_gluster
> https://github.com/gluster/xglfs
> https://github.com/gluster/glusterfs-java-filesystem
> https://github.com/gluster/glusterfs-kubernetes-openshift
> https://github.com/gluster/libgfapi-jni
> https://github.com/gluster/gluster-debug-tools
> https://github.com/gluster/gdeploy_config_generator
> https://github.com/gluster/glustertool
> https://github.com/gluster/gluster-zeroconf
> https://github.com/gluster/Gfapi-sys
> https://github.com/gluster/gluster-swift
>
> I can't find my last mail, but that's everything that wasn't changed in
> 2019 or 2020. Repos that are used just for bug tracking are OK.
>
>
>
> --
> Michael Scherer / He/Il/Er/Él
> Sysadmin, Community Infrastructure
>
>
>
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294 (voice/messaging)
kadalu.io : Making it easy to provision storage in k8s!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Removing problematic language in geo-replication

2020-12-30 Thread sankarshan
Thank you for the update. Looking forward to the completion of this
activity in early 2021!

On Wed, 30 Dec 2020 at 18:00, Ravishankar N  wrote:
>
> Hello,
>
> Just a quick update:  all geo-rep related offensive words (as a matter
> of fact, even the non geo-rep ones) that can be removed from the code
> have been done so from the devel branch of the glusterfs repo. I thank
> everyone for their suggestions, debugging/testing help and code reviews.
>
> Since we have some soak time before the changes make it to the
> release-10 branch, I would encourage you to test the changes and report
> any issues that you might find. Please try out both new geo-rep set ups
> as well as upgrade scenarios (say from a supported release version to
> the latest devel branch).
>
> Also, for any new PRs that we are sending/ reviewing/merging, we need to
> keep in mind not to re-introduce any offensive words.
>
> Wishing you all a happy new year!
> Ravi
>
> On 22/07/20 5:06 pm, Aravinda VK wrote:
> > +1
> >
> >> On 22-Jul-2020, at 2:34 PM, Ravishankar N  wrote:
> >>
> >> Hi,
> >>
> >> The gluster code base has some words and terminology (blacklist, 
> >> whitelist, master, slave etc.) that can be considered hurtful/offensive to 
> >> people in a global open source setting. Some of words can be fixed 
> >> trivially but the Geo-replication code seems to be something that needs 
> >> extensive rework. More so because we have these words being used in the 
> >> CLI itself. Two questions that I had were:
> >>
> >> 1. Can I replace master:slave with primary:secondary everywhere in the 
> >> code and the CLI? Are there any suggestions for more appropriate 
> >> terminology?
> > Primary -> Secondary looks good.
> >
> >> 2. Is it okay to target the changes to a major release (release-9) and 
> >> *not* provide backward compatibility for the CLI?
> > Functionality is not affected and CLI commands are compatible since all are 
> > positional arguments. Need changes in
> >
> > - Geo-rep status xml output
> > - Documentation
> > - CLI help
> > - Variables and other references in Code.
> >
> >> Thanks,
> >>
> >> Ravi
> >>
> >>
> >> ___
> >>
> >> Community Meeting Calendar:
> >>
> >> Schedule -
> >> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >> Bridge: https://bluejeans.com/441850968
> >>
> >>
> >>
> >>
> >> Gluster-devel mailing list
> >> Gluster-devel@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-devel
> >>
> > Aravinda Vishwanathapura
> > https://kadalu.io
> >
> >
> >
>
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Holiday vacation (from tonight to the 4th of January)

2020-12-18 Thread sankarshan
Season's greetings and happy holidays. I wish you a restful vacation
and let's get to meet virtually in 2021!

On Fri, 18 Dec 2020 at 15:59, Michael Scherer  wrote:
>
> Hi folks,
>
> just a quick note, I will be on vacation starting tonight until
> Monday the 4th of January 2021.
> My Out of office message will, as usual, contain ways to contact me.
>
> --
> Michael Scherer / He/Il/Er/Él
> Sysadmin, Community Infrastructure
>
>
>
> ---
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] On the topic of being intentional and making conscious efforts around problematic language

2020-12-03 Thread sankarshan
This morning I was reading the post at
https://www.redhat.com/en/blog/update-red-hats-conscious-language-efforts
and while it inexplicably makes no reference to the efforts within
Gluster (hat tip to Ravishankar, Aravinda, Kotresh and others) there
is work being undertaken at
https://github.com/gluster/glusterfs/pull/1568

It might be useful to look at the dashboard at
https://stats.eng.ansible.com/apps/ConsciousLanguage/ and review the
other parts of the project in terms of completing this work.

/s
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-16 Thread sankarshan
On Mon, 16 Nov 2020 at 16:45, Ravishankar N  wrote:

> I am surprised too that it wasn't caught earlier.
>

There have been sporadic requests around maintaining v3 but of late I
haven't heard a lot around NFS-Ganesha. I am not surprised to learn that
there are tests lacking - this is unlikely to be a one-off.

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Meeting minutes for Gluster Community Meeting on 27-10-2020

2020-10-30 Thread sankarshan
Nikhil, thank you for hosting the meeting and for this follow-up
note to the lists. Some remarks in-line.

On Fri, 30 Oct 2020 at 13:07, Nikhil Ladha  wrote:
>
> # Gluster Community Meeting -  27/10/2020
>
>
> ### Previous Meeting minutes:
>
> - http://github.com/gluster/community
> - Details of this meeting
>
> ### Date/Time: Check the [community 
> calendar](https://calendar.google.com/event?action=TEMPLATE&tmeid=MDQ0YmRydTllMXYzdWFoMmpsbjdqNXJlYmNfMjAyMDEwMjdUMDkwMDAwWiBzYWptb2hhbUByZWRoYXQuY29t&tmsrc=sajmoham%40redhat.com&scp=ALL)
>
> ### Bridge
> * APAC/EMEA friendly hours
>   - Tuesday 13/10/2020, 02:30PM IST
>   - Bridge: https://bluejeans.com/441850968
> * NA
>   - Every 1st and 3rd Tuesday at 01:00 PM EDT
>   - Bridge: https://bluejeans.com/118564314
>

Are these meeting bridge details still correct, and do we continue to
have a meeting in the NA time slices?

>
> ---
>
> ### Attendance
> Name (#gluster-dev alias) - company
> * Nikhil Ladha - Red Hat
> * sankarshan - Kadalu.IO
> * Pranith Kumar Karampuri - PhonePe
> * Vinayak Hariharmath - RedHat
> * Ravi - Red Hat
> * Sunil Kumar - Red Hat
> * Amar Tumballi - Kadalu.IO
> * Saju Mohammed - RedHat
> * Shwetha Acharya - RedHat
> * Aravinda Vishwanathapura (aravindavk) - Kadalu.io
> * Csaba Henk - Red Hat
> * Rinku Kothiya - Red Hat
> * Barak Sason Rofman - Red Hat
> * Deepshikha - Red Hat
> * Srijan Sivakumar - Red Hat
>
> ### User stories
> * No new stories.
>

This section has not been seeing updates for a while now - do we want
to continue to have this? Is there a better alternative?

> ### Community
>
> * Project metrics:
>
> |Metrics|   Value  |
> | - |  |
> |[Coverity](https://scan.coverity.com/projects/gluster-glusterfs)  | 61  |
> |[Clang Scan](https://build.gluster.org/job/clang-scan/lastBuild/) |   87  |
> |[Test coverage](https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/)| 70.9 |
> |[Gluster User Queries in last 14 days](https://lists.gluster.org/pipermail/gluster-users/2020-October/thread.html) | 13 |
> |[Total Github issues](https://github.com/gluster/glusterfs/issues)   |  401   |
>
>
> * Any release updates?
> * [Rinku] We do not have enough patches tagged for Release7 and Release8 
> hence I propose to delay the release to next month, in case we get enough 
> patches tagged for release-8 and release-7 we can consider releasing a build.
>
>

I missed following up on this during the meeting - what is being done
to ensure that there is no repeat of the paucity of patches, i.e.
content for the release?

> * Blocker / Severe issues across the project?
> * [Amar] https://github.com/gluster/glusterfs/issues/1699
> * [Deepshika] 
> [Document](https://docs.google.com/document/d/1Vxf24dLPSCmVBfJ7PfO5bN28r9slfyoJufZX2GOzlTc/edit?usp=sharing)
>  mentioning the criterias to be a member of gluster org   [Under review]
>
>

The process document from Deepshikha would need a "by this date"
deadline for the reviews to come in.

> * Notable thread from mailing list
> * None
>

This section is usually empty of content - are there truly no threads
worthy of note, or are the lists simply not being read?

>
> ### Conferences / Meetups
>
> * [Hacktoberfest](https://hacktoberfest.digitalocean.com/)
> * Event dates: The whole month of October.
> * Venue: Online
>
> ### GlusterFS - v9.0 and beyond
> * Gluster release 9 roadmap 
> [tracker](https://github.com/gluster/glusterfs/issues/1465)
> * Everything is on track. Good progress on the issues.
>
>
> ### Developer focus
>
> * Any design specs to discuss?
> * Updated permissions for running regression tests
> * Stale bot
> * Topic raised by Pranith around 
> https://github.com/gluster/glusterfs/issues/910#issuecomment-706451616 with 
> discussion around the 
> [stalebot](https://github.com/gluster/glusterfs/blob/devel/.github/stale.yml) 
> workflow. Pranith to raise an issue to track proposed changes to kick in 
> stalebot earlier than the 7-month cycle
> * https://github.com/gluster/glusterfs/issues/910 (look at the 
> comment on stalebot)
> * We can choose to use a 'label'.
> * Quota - What next?
> * Check the RFC - 
> https://kadalu.io/rfcs/0006-optimized-quota-feature-with-namespace.html
> * There is a need for the feature.
>
> ### Component status
> * Arbiter - Stable / Maintained
> * AFR - Update on granular self-heal entry
> * DHT - [Barak] Will send out an invite for a session regarding Global Layout 
> and Rebalance enhancements

Re: [Gluster-devel] [Gluster-Maintainers] [Gluster-infra] ACTION REQUESTED: Migrate your glusterfs patches from Gerrit to GitHub

2020-10-12 Thread sankarshan
It is perhaps on Amar to send the PR with the changes - but that would
kind of make the approval/merge process a bit muddled? How about a PR
being sent for review and then merged in?

On Mon, 12 Oct 2020 at 19:22, Kaleb Keithley  wrote:
>
>
>
> On Thu, Oct 8, 2020 at 8:10 AM Kaleb Keithley  wrote:
>>
>> On Wed, Oct 7, 2020 at 7:33 AM Sunil Kumar Heggodu Gopala Acharya 
>>  wrote:
>>>
>>>
>>> Regards,
>>>
>>> Sunil kumar Acharya
>>>
>>>
>>>
>>>
>>> On Wed, Oct 7, 2020 at 4:54 PM Kaleb Keithley  wrote:
>>>>
>>>>
>>>>
>>>> On Wed, Oct 7, 2020 at 5:46 AM Deepshikha Khandelwal  
>>>> wrote:
>>>>>
>>>>>
>>>>> - The "regression" tests would be triggered by a comment "/run 
>>>>> regression" from anyone in the gluster-maintainers[4] github group. To 
>>>>> run full regression, maintainers need to comment "/run full regression"
>>>>>
>>>>> [4] https://github.com/orgs/gluster/teams/gluster-maintainers
>>>>
>>>>
>>>> There are a lot of people in that group that haven't been involved with 
>>>> Gluster for a long time.
>>>
>>> Also there are new contributors, time to update!
>>
>>
>> Who is going to do this? I don't have the necessary privs.
>
>
> Anyone?
>
> --
>
> Kaleb
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Rebalance improvement.

2020-08-03 Thread sankarshan
On Mon, 3 Aug 2020 at 12:47, Susant Palai  wrote:
>
> Centos Users can add the following repo and install the build from the master 
> branch to try out the feature. [Testing purpose only, not ready for 
> consumption in production env.]
>
> [gluster-nightly-master]
> baseurl=http://artifacts.ci.centos.org/gluster/nightly/master/7/x86_64/
> gpgcheck=0
> keepalive=1
> enabled=1
> repo_gpgcheck = 0
> name=Gluster Nightly builds (master branch)
>
> A summary of perf numbers from our test lab :
>

Are these numbers impacted by sizing of the machine instance/hardware?
What is the configuration on which these numbers were recorded?

> DirSize - 1Million | Old | New | %diff
> Depth - 100 (Run 1) | 353 | 74 | +377%
> Depth - 100 (Run 2) | 348 | 72 | +377~%
> Depth - 50 | 246 | 122 | +100%
> Depth - 3 | 174 | 114 | +52%
>
> Susant

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Introducing me, questions on general improvements in gluster re. latency and throughput

2020-06-11 Thread sankarshan
Federico - welcome again! It was fantastic to have you at the
Community Meeting. You bring up an interesting perspective - of how to
onboard a participant by providing specific and focused conversations.
I am hoping that this note is one where Xavi and Amar can respond in
order to provide some of the plans and approaches you seek. One of the
topics which we have gone back and forth about (in the past) is how to
ensure that we are closer to the mainline/upstream of FUSE than we are
at this moment. I wonder if that is something worth investigating at
this point.

/s

On Thu, 11 Jun 2020 at 13:21, Federico Strati  wrote:
>
> Dear All,
>
> I just started working for a company named A3Cube, who produces HPC
> supercomputers.
>
> I was assigned the task to investigate which improvements to gluster are
> viable
>
> in order to lead to overall better performance in latency and throughput.
>
> I'm quite new to SDS and so pardon me if some questions are naive.
>
>  From what I've understood so far, possible bottlenecks are
>
> in FUSE and transport.
>
> Generally speaking, if you have time to just drop me some pointers,
>
> 1] FUSE + splice has never been considered (issue closed without real
> discussions)
>
> (probably because it conflicts with the general architecture and in
> particular
>
> with the write-behind translator)
>
> Recently a new userspace fs kernel module, ZUFS, has been announced,
> whose aim
>
> is zero copy and vast improvement over FUSE: would you be interested
>
> in investigating it?
>
> (ZUFS: https://github.com/NetApp/zufs-zuf ;
> https://lwn.net/Articles/756625/)
>
> 2] Transport over RDMA (Infiniband) has been recently dropped:
>
> may I ask you what considerations have been made ?
>
> 3] I would love to hear what you consider real bottlenecks in gluster
>
> right now regarding latency and throughput.
>
> Thanks in advance
>
> Kind regards
>
> Federico
>
> ___
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
>
>
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-infra] Upgrade of the FreeBSD builder

2020-06-07 Thread sankarshan
On Fri, 5 Jun 2020 at 23:22, Michael Scherer  wrote:
>
> Hi,
>
> I did push this afternoon a role to automate the creation of FreeBSD
> instance in ec2. So, this would make it easier to 1) upgrade the
> builder 2) support more than 1 version.
>
> Unless someone has objection, I do intend to switch the builder used
> for freebsd-smoke from 10.0 to 12.1, once I am sure that it build
> correctly.
>
>

Thank you for taking care of this!

> I also heard that people asked for NetBSD, and unfortunately, I didn't
> find any recent image for NetBSD on EC2, and the few instructions I
> did find require rebuilding and uploading your own image, which is a bit
> too cumbersome for the moment. I am also unsure whether that works at
> all, since I haven't found anything related for more than 2 years due
> to EC2 no longer using xen, with people recommending GCP instead (not an
> option for us for now).

If you hear about this request again, please insist on a GitHub issue
being filed (I'm assuming that is what we are using these days). There
needs to be a recorded instance of the request and a response on why
this is a set of activities which the project finds itself unable to
undertake at this point.


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting [12th May 2020]

2020-05-18 Thread sankarshan
On Mon, 18 May 2020 at 12:39, Xavi Hernandez  wrote:

>> > ### User stories
>> > * [Hari] users are hesitant to upgrade. A good number of issues in 
>> > release-7 (crashes, flooding of logs, self heal) Need to look into this.
>> > * [Sunil] Increase in inode size 
>> > https://lists.gluster.org/pipermail/gluster-users/2020-May/038196.html 
>> > Looks like it can have perf benefit.
>> >
>>
>> Is there work underway to ascertain if there are indeed any
>> performance related benefits? What are the kind of tests which would
>> be appropriate?
>
>
> Rinku has done some tests downstream to validate that the change doesn't 
> cause any performance regression. Initial results don't show any regression 
> at all and it even provides a significant benefit for 'ls -l' and 'unlink' 
> workloads. I'm not sure yet why this happens as the xattrs for these tests 
> should already fit inside 512 bytes inodes, so no significant differences 
> were expected.

Can we not consider putting together an update or a blog post (as part of
release 8 content) which provides a summary of the environment,
workload and results for these tests? I understand that the test rig may
not be publicly available - however, given enough detail, others could
attempt to replicate the same.

>
> The real benefit would be with volumes that use at least geo-replication or 
> quotas. In this case the xattrs may not fit inside the 512 bytes inodes, so 
> 1024 bytes inodes will reduce the number of disk requests when xattr data is 
> not cached (and it's not always cached, even if the inode is in cache). This 
> testing is pending.
>
> From the functional point of view, we also need to test that bigger inodes 
> don't cause weird inode allocation problems when available space is small. 
> XFS allocates inodes in contiguous chunks in disk, so it could happen that 
> even though there's enough space in disk (apparently), an inode cannot be 
> allocated due to fragmentation. Given that the inode size is bigger, the 
> required chunk will also be bigger, which could make this problem worse. We 
> should try to fill a volume with small files (with fsync per file and without 
> it) and see if we get ENOSPC errors much before it's expected.
>
> Any help validating our results or doing the remaining tests would be 
> appreciated.
>

It seems to me that we need to have a broader conversation around
these tests and paths - perhaps on a separate thread.


-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Minutes of Gluster Community Meeting [12th May 2020]

2020-05-16 Thread sankarshan
On Fri, 15 May 2020 at 10:59, Hari Gowtham  wrote:

> ### User stories
> * [Hari] users are hesitant to upgrade. A good number of issues in release-7 
> (crashes, flooding of logs, self heal) Need to look into this.
> * [Sunil] Increase in inode size 
> https://lists.gluster.org/pipermail/gluster-users/2020-May/038196.html Looks 
> like it can have perf benefit.
>

Is there work underway to ascertain if there are indeed any
performance related benefits? What are the kind of tests which would
be appropriate?


> * Any release updates?
> * 6.9 is done and announced
> * [Sunil]can we take this in for release-8: 
> https://review.gluster.org/#/c/glusterfs/+/24396/
> * [Rinku]Yes, we need to ask the patch owners to port this to release8 
> post merging it to master. Till the time we tag release8 this is possible 
> post this it will be difficult, after which we can put it in release8.1
> * [Csaba] This is necessary as well 
> https://review.gluster.org/#/c/glusterfs/+/24415/
> * [Rinku] We need release notes to be reviewed and merged release8 is 
> blocked due to this. https://review.gluster.org/#/c/glusterfs/+/24372/

Have the small set of questions on the notes been addressed? Also, do
we have plans to move this workflow over to GitHub issues? In other
words, how long are we planning to continue to work with dual systems?


> ### RoundTable
> * [Sunil] Do we support cento8 and gluster?
> * [sankarshan] Please highlight the concerns on the mailing list. The 
> developers who do the manual testing can review and provide their assessment 
> on where the project stands
> * We do have packages, how are we testing it?
> * [Sunil] Centos8 regression is having issues and are not being used for 
> regression testing.
> * [Hari] For packages, Shwetha and Sheetal are manually testing the bits with 
> centos8. Basics works fine. But this testing isn't enough
> * send out a mail to sort this out

I am guessing that this was on Sunil to send out the note to the list.
Will be looking forward to that.

> * [Amar] Kadalu 0.7 release based on GlusterFS 7.5 has been recently released 
> (Release Blog: https://kadalu.io/blog/kadalu-storage-0.7)
> * [Rinku] How to test
> * [Aravinda] https://kadalu.io/docs/k8s-storage/latest/quick-start




-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Action required: stale bot

2020-04-30 Thread sankarshan
I do not think that "a comment" is a sufficient indicator of the
capacity and interest to work on the issue. While this will suffice
for the first pass of identifying issues which will not be worked
upon, it is also easy to create a different-than-desired outcome by
leaving comments.

Regardless of the intrinsic goodness of an issue, if a
developer/maintainer is unable to review and plan for it (not in a
"will do this some day when I get the time" manner) then we should
close/archive those issues. For future runs of the stale bot we'll
have to determine what the desired outcome is - to keep resuscitating
topics, or to organize around just the ones which can be worked upon.

On Thu, 30 Apr 2020 at 20:04, Sunny Kumar  wrote:
>
> Hi all,
>
> With the introduction of stale bot[1] in github workflow one can
> observe lots of github issues have been identified and marked as
> stale[WONTFIX].
>
> I will request every developer and maintainer to take some time and
> revisit these marked issues[2]. Identifying those which can be worked
> upon can prevent them from getting closed.
>
> So, please act soon as you can see the bot says:
> "It will be closed in 2 weeks if no one responds with a comment here."
>
>
>
> [1]. https://review.gluster.org/#/c/glusterfs/+/24386/
> [2].https://github.com/gluster/glusterfs/labels/WONTFIX
>
>
> /sunny



-- 
sankars...@kadalu.io | TZ: UTC+0530 | +91 99606 03294
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Multithreaded Iterative Dir Tree Scan

2020-04-23 Thread sankarshan
On Fri, 24 Apr 2020 at 08:05, Amar Tumballi  wrote:

> This looks like a good effort to pick up Barak. A needed one indeed.
>
>
Should this be tracked with a release label and planned? The content of the
document should probably transfer itself to the issue tracking the PR(s).



> On Mon, Mar 23, 2020 at 3:18 PM Barak Sason Rofman 
> wrote:
>
>> Hello everyone!
>> Following a discussion I had with @Susant Palai some time ago, we have
>> decided to look into an option to improve the rebalance process in the DHT
>> layer by modifying the underlying mechanism. Currently, dir-tree crawling
>> is done recursively, by a single thread, which is likely slow and also
>> poses the risk of stack overflow. An iterative multithreaded solution might
>> improve performance and also stability (by eliminating the risk of stack
>> overflow). I have prepared a POC doc on the matter, including a sample
>> implementation of the iterative multithreaded solution. The doc can be
>> found at:
>>
>> https://docs.google.com/document/d/1JCl0T9zeagOcFFpgVQF8zNyhlR54VqkNAZ7TJb42egE/edit
>>
>> <https://docs.google.com/document/d/1L0uHgFbrNWWxCQB6s4YcoymKrO7q0yVAbEIWWIiu_as/edit?usp=sharing>Apart
>> from the rebalance process, maybe this approach can be useful for other
>> use-cases where dir-tree crawl is being performed? Any comments on the
>> concept, the design of the solution and the implementation are welcome.
>>
>> --
>> *Barak Sason Rofman*
>>
>> Gluster Storage Development
>>
>> Red Hat Israel <https://www.redhat.com/>
>>
>> 34 Jerusalem rd. Ra'anana, 43501
>>
>> bsaso...@redhat.com T: *+972-9-7692304*
>> M: *+972-52-4326355*
>>
>
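To make the shape of the proposal concrete, here is a minimal sketch of an
iterative crawl that replaces recursion with an explicit FIFO queue of pending
directories; the multithreaded variant in the POC doc would have worker
threads consuming a shared, lock-protected queue, which is omitted here. This
is illustrative only and not the POC implementation linked above.

/*
 * Minimal sketch of an iterative directory crawl: an explicit FIFO queue of
 * pending directories replaces recursion, so stack depth stays constant no
 * matter how deep the tree is. The multithreaded variant would have worker
 * threads consuming a shared, lock-protected queue; that part is omitted.
 */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dir_node {
    char *path;
    struct dir_node *next;
};

static struct dir_node *head, *tail;    /* simple FIFO queue of directories */

static void
enqueue(const char *path)
{
    struct dir_node *n = calloc(1, sizeof(*n));

    if (!n || !(n->path = strdup(path)))
        exit(EXIT_FAILURE);             /* keep the sketch simple */
    if (tail)
        tail->next = n;
    else
        head = n;
    tail = n;
}

static char *
dequeue(void)
{
    struct dir_node *n = head;
    char *path;

    if (!n)
        return NULL;
    head = n->next;
    if (!head)
        tail = NULL;
    path = n->path;
    free(n);
    return path;
}

int
main(int argc, char **argv)
{
    char *path;

    enqueue(argc > 1 ? argv[1] : ".");

    while ((path = dequeue()) != NULL) {
        DIR *dp = opendir(path);

        if (dp) {
            struct dirent *entry;

            while ((entry = readdir(dp)) != NULL) {
                if (!strcmp(entry->d_name, ".") || !strcmp(entry->d_name, ".."))
                    continue;
                /* d_type may be DT_UNKNOWN on some filesystems; a real
                 * crawler would fall back to stat() in that case. */
                if (entry->d_type == DT_DIR) {
                    char child[4096];

                    snprintf(child, sizeof(child), "%s/%s", path, entry->d_name);
                    enqueue(child);     /* defer instead of recursing */
                } else {
                    printf("%s/%s\n", path, entry->d_name);
                }
            }
            closedir(dp);
        }
        free(path);
    }
    return 0;
}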

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Switching GlusterFS upstream bugs to Github issues

2020-03-12 Thread sankarshan
Thank you for making this happen. This is the first phase of adopting
a more GitHub-based development workflow, including Actions.

On Thu, 12 Mar 2020 at 21:29, Deepshikha Khandelwal  wrote:
>
> Hi everyone,
>
> We have migrated most of the current upstream bugs(attached below) from the 
> GlusterFS community Bugzilla product to Github issues.
>
> 1. All the issues created as a part of a migration will have Bugzilla URL, 
> description, comments history, Github labels ('Migrated', 'Type: Bug', Prio) 
> with a list of assignees.
> 2. All the component's bug except project-infrastructure will go to 
> gluster/glusterfs repo.
> 3. project-infrastructure component's bugs will migrate under 
> gluster/project-infrastructure repo.
> 4. We are freezing the GlusterFS community product on Bugzilla. It will be 
> closed for new bug entries. You have to create an issue on Github repo from 
> now onwards.
> 5. All the bugs have been closed on Bugzilla with the corresponding Github 
> issue URL.
> 6. The changes have been reflected in the developer contributing workflow [1].
> 7. 'Migrated' and 'Type: Bug' GitHub labels has been added on the issues.
>
> Discussions on this are happening on the mailing list, and few of the 
> references are below:
>
> https://lists.gluster.org/pipermail/gluster-infra/2020-February/006030.html
> https://lists.gluster.org/pipermail/gluster-infra/2020-February/006009.html
> https://lists.gluster.org/pipermail/gluster-infra/2020-February/006040.html
>
> [1] https://github.com/gluster/glusterfs/blob/master/.github/ISSUE_TEMPLATE
>
> Let us know if you see any issues.
>
> Thank you,
> Deepshikha


-- 
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-02-25 Thread sankarshan
What is the configuration/sizing on which these tests are conducted?
Do you need any additional help from others on the patches which you
have used for the tests?

On Tue, 25 Feb 2020 at 13:32, Mohit Agrawal  wrote:
>
> With these 2 changes, we are getting a good improvement in file creation and
> slight improvement in the "ls-l" operation.
>
> We are still working to improve the same.
>
> To validate the same we have executed below script from 6 different clients 
> on 24x3 distributed
> replicate environment after enabling performance related option
>
> mkdir /gluster-mount/`hostname`
> date;
> for i in {1..100}
> do
> echo "directory $i is created" `date`
> mkdir /gluster-mount/`hostname`/dir$i
> tar -xvf /root/kernel_src/linux-5.4-rc8.tar.xz -C 
> /gluster-mount/`hostname`/dir$i >/dev/null
> done
>
> With no Patch
> tar was taking almost 36-37 hours
>
> With Patch
> tar is taking almost 26 hours
>
> We were getting a similar kind of improvement in smallfile tool also.
>
> On Tue, Feb 25, 2020 at 1:29 PM Mohit Agrawal  wrote:
>>
>> Hi,
>> We observed that performance is mainly hurt while .glusterfs is holding huge 
>> amounts of data. As we know, before executing a fop the POSIX xlator builds an 
>> internal path based on the GFID. To validate the path it calls the (l)stat 
>> system call, and while .glusterfs is heavily loaded the kernel takes time to 
>> look up the inode, and due to that performance drops.
>> To improve this we tried two things with this 
>> patch (https://review.gluster.org/#/c/glusterfs/+/23783/):
>>
>> 1) To keep the first-level entries always in cache so that inode lookup will 
>> be faster, we keep the first-level fds (00 to ff, 256 in total) open per 
>> brick at the time of starting the brick process. Even in case of cache cleanup 
>> the kernel will not evict the first-level fds from the cache and performance 
>> will improve.
>>
>> 2) We tried using "at"-based calls (lstatat, fstatat, readlinkat etc.) instead 
>> of accessing the complete path - accessing the relative path instead. These 
>> calls were also helpful in improving performance.
>>
>> Regards,
>> Mohit Agrawal
>>
>>
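To make the description above concrete, here is a minimal sketch of the two
ideas - a cached set of first-level directory fds plus "at"-based relative
lookups. The names parent_fds[] and gfid_stat() are hypothetical; this is not
the code from the patch under review.

/*
 * Minimal sketch: open the 256 first-level .glusterfs/XX directories once when
 * the brick starts, then resolve GFID entries with fstatat() relative to those
 * cached fds instead of lstat() on a freshly built absolute path.
 */
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int parent_fds[256];     /* one fd per .glusterfs/00 .. .glusterfs/ff */

/* Open the first-level directories once, at brick start. */
static int
open_first_level(const char *brick_path)
{
    char dir[PATH_MAX];

    for (int i = 0; i < 256; i++) {
        snprintf(dir, sizeof(dir), "%s/.glusterfs/%02x", brick_path, i);
        parent_fds[i] = open(dir, O_RDONLY | O_DIRECTORY);
        if (parent_fds[i] < 0)
            return -1;
    }
    return 0;
}

/* Stat a GFID entry relative to its cached first-level fd. */
static int
gfid_stat(const char *gfid_str, struct stat *stbuf)
{
    unsigned int first;
    char relpath[64];

    /* The first two hex characters of the GFID pick the XX directory. */
    if (sscanf(gfid_str, "%2x", &first) != 1)
        return -1;
    /* Remaining relative path: "YY/<full-gfid>". */
    snprintf(relpath, sizeof(relpath), "%c%c/%s",
             gfid_str[2], gfid_str[3], gfid_str);
    return fstatat(parent_fds[first], relpath, stbuf, AT_SYMLINK_NOFOLLOW);
}

Because resolution starts from an already-open directory fd, the kernel only
has to walk the short relative path, which appears to be the saving described
above.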


-- 
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-02-24 Thread sankarshan
Following up on this - where are we in terms of readiness around
sharing observations and recorded data?

On Tue, 4 Feb 2020 at 12:13, sankarshan  wrote:
>
> Following up on this smaller list - who is working on the improvements
> and has the data to be shared?
>
> On Wed, 29 Jan 2020 at 10:19, Sankarshan Mukhopadhyay
>  wrote:
> >
> > We talked about a set of planned work underway which bring about
> > substantial improvements in ls-l and similar workloads.
> >
> > Do we have the (a) data from the test runs to be shared more widely
> > (b) the patchsets and issues to track this work?



-- 
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

Schedule -
Every Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Fwd: [netdata/netdata] Gluster monitoring (#4824)

2020-02-23 Thread sankarshan
Given Nishanth Thomas and his team members have previously worked on this
topic, they may be the subject matter experts the "sponsor" role requires.
Reading it, it would seem that the role would need a high-touch consultation.

On Sun, 23 Feb 2020 at 00:00, Yaniv Kaul  wrote:

> If anyone is interested in picking this up.
>
> -- Forwarded message -
> From: Chris Akritidis 
> Date: Sat, 22 Feb 2020, 19:30
> Subject: Re: [netdata/netdata] Gluster monitoring (#4824)
> To: netdata/netdata 
> Cc: Yaniv Kaul , Comment 
>
>
> A lot of interest here, but no sponsor. Can someone please assume the role
> so we can move it forward?
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub, or unsubscribe.
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/441850968
>
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/441850968
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>

-- 
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] gluster_gd2-nightly-rpms - do we need to continue to build for this?

2020-02-17 Thread Sankarshan Mukhopadhyay
There is no practical work being done on gd2; do we need to continue
to have a build job?

On Tue, 18 Feb 2020 at 05:46,  wrote:
>
> See <https://ci.centos.org/job/gluster_gd2-nightly-rpms/643/display/redirect>
>



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-02-14 Thread sankarshan
Following up from the discussion on this topic.

Here's a TZ based set (at an arbitrary date)
<https://www.timeanddate.com/worldclock/meetingtime.html?day=18&month=2&year=2020&p1=54&p2=48&p3=179&iv=1800>

And we were discussing whether
<https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=2&day=18&hour=9&min=0&sec=0&p1=54&p2=48&p3=179&iv=1800>
works for participants from EMEA.


On Sat, 8 Feb 2020 at 09:02, sankarshan  wrote:
>
>
>
> On Fri, Feb 7, 2020, 18:37 Sunny Kumar  wrote:
>>
>> On Thu, Feb 6, 2020 at 11:47 AM sankarshan  wrote:
>> >
>> > On Thu, 6 Feb 2020 at 16:24, Yati Padia  wrote:
>> > >
>> > > Hi,
>> > > In response to the discussion that we had about the timings of the 
>> > > community meeting, I would like to propose that we can have it at 
>> > > 3PM/4PM IST on 11th February to accommodate EMEA/NA zone and if it suits 
>> > > all, we can fix the timing for next meetings as well.
>> > > If anyone has any objections regarding this, it can be discussed in this 
>> > > thread so that we can come up with a fixed timing for the meeting.
>> > >
>> >
>> > I'm not remarkably inconvenienced by the proposed time. 1600
>> > (UTC+0530) is still 0530 EST (at present day) - so that's perhaps
>> > early for those in that TZ. I'll defer to the participants from there
>> > to have their feedback heard.
>> >
>> Agree with you Sankarshan; it will be too early for NA TZ.  We can
>> work for a balanced/overlapping time probably between 1800 to 2000
>> (UTC+530).
>> I think in the past it was being hosted at the same time.
>>
>> We can discuss about it in upcoming meeting.
>
>
> Now that we do have a regular participation at the meetings, we should think 
> about making them more interactive than just a read out and reporting. Those 
> parts can be handled through the meeting notes themselves. I'd like to see 
> more information sharing about the process of production of the consumables 
> for our users viz. health of tests, health of infrastructure, triage of 
> outstanding issues, new work that is coming up in some time, keeping the 
> documentation current and accurate etc
>
> If we look at the meetings from the perspective of making them useful for 
> those who couldn't join, our updates and discussions would be very different.



--
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-02-07 Thread sankarshan
On Fri, Feb 7, 2020, 18:37 Sunny Kumar  wrote:

> On Thu, Feb 6, 2020 at 11:47 AM sankarshan  wrote:
> >
> > On Thu, 6 Feb 2020 at 16:24, Yati Padia  wrote:
> > >
> > > Hi,
> > > In response to the discussion that we had about the timings of the
> community meeting, I would like to propose that we can have it at 3PM/4PM
> IST on 11th February to accommodate EMEA/NA zone and if it suits all, we
> can fix the timing for next meetings as well.
> > > If anyone has any objections regarding this, it can be discussed in
> this thread so that we can come up with a fixed timing for the meeting.
> > >
> >
> > I'm not remarkably inconvenienced by the proposed time. 1600
> > (UTC+0530) is still 0530 EST (at present day) - so that's perhaps
> > early for those in that TZ. I'll defer to the participants from there
> > to have their feedback heard.
> >
> Agree with you Sankarshan; it will be too early for NA TZ.  We can
> work for a balanced/overlapping time probably between 1800 to 2000
> (UTC+530).
> I think in the past it was being hosted at the same time.
>
> We can discuss about it in upcoming meeting.
>

Now that we do have a regular participation at the meetings, we should
think about making them more interactive than just a read out and
reporting. Those parts can be handled through the meeting notes themselves.
I'd like to see more information sharing about the process of production of
the consumables for our users viz. health of tests, health of
infrastructure, triage of outstanding issues, new work that is coming up in
some time, keeping the documentation current and accurate etc

If we look at the meetings from the perspective of making them useful for
those who couldn't join, our updates and discussions would be very
different.

>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-02-06 Thread sankarshan
On Thu, 6 Feb 2020 at 16:24, Yati Padia  wrote:
>
> Hi,
> In response to the discussion that we had about the timings of the community 
> meeting, I would like to propose that we can have it at 3PM/4PM IST on 11th 
> February to accommodate EMEA/NA zone and if it suits all, we can fix the 
> timing for next meetings as well.
> If anyone has any objections regarding this, it can be discussed in this 
> thread so that we can come up with a fixed timing for the meeting.
>

I'm not remarkably inconvenienced by the proposed time. 1600
(UTC+0530) is still 0530 EST (at present day) - so that's perhaps
early for those in that TZ. I'll defer to the participants from there
to have their feedback heard.

However, if we do want to switch over, we will have to:

+ modify the text on the website
+ modify the email footers

We already have had users pointing out that mixed/confusing messages
about the meeting time slices make it difficult to understand.
-- 
sankars...@kadalu.io | TZ: UTC+0530
kadalu.io : Making it easy to provision storage in k8s!
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Feedback on the community meetings

2020-01-29 Thread sankarshan
I wanted to take a moment to highlight a small set of things about the
community meetings. Sunny has already brought forward the issue of
time/TZs and participation. There are few additional aspects

+ content : the meeting is built on a template. This is very useful as
a guide and I am sure it helps the host (a shout out to Yati Padia for
keeping the meeting focused and running a tight ship). However, it
would be good to consider additional related and relevant conversations
to be driven in the meeting, tied to the quality of the project.
These could be the state/status of the untriaged bugs, or updates on tests
and the testing framework. I bring this up because, in light of the GitHub
migration, a 360 degree view is something we could draw benefit from.

+ updates : if the updates could be added to the note prior to the
meeting, then it provides an opportunity for everyone to read/annotate
with clarifying questions

/s
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-01-29 Thread Sankarshan Mukhopadhyay
On Wed, Jan 29, 2020, 17:14 Sunny Kumar  wrote:

> On Wed, Jan 29, 2020 at 9:50 AM Sankarshan Mukhopadhyay
>  wrote:
> >
> > On Wed, 29 Jan 2020 at 15:03, Sunny Kumar  wrote:
> > >
> > > Hello folks,
> > >
> > > I would like to propose moving community meeting to a time which is
> > > more suitable for EMEA/NA zone, that is merging both of the zone
> > > specific meetings into a single one.
> > >
> >
> > I am aware that there are 2 sets of meetings now - APAC and EMEA/NA.
> > This came about to ensure that users and community at these TZs have
> > the opportunity to participate in time slices that are more
> > comfortable. I have never managed to attend a NA/EMEA instance - is
> > that not seeing enough participation? There is only a very thin time
>
>> I usually join but no one turns up in the meeting; I guess there is some
>> sort of problem - most likely an awareness/hosting/communication gap.
>

There certainly seems to be. Thanks for highlighting this situation.


> > slice that overlaps APAC, EMEA and NA. If we want to do this, we would
> > need to have a doodle/whenisgood set of options to see how this pans
> > out.
>
> Yes, we have to come up with a time which can cover most of TZs.
>

Would you be sending out a possible set of time slices so that we can see how
it is received by the regulars?

In the meanwhile we can use the hackpad notes and the Slack channels to
ensure that the communication is improved.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Community Meeting: Make it more reachable

2020-01-29 Thread Sankarshan Mukhopadhyay
On Wed, 29 Jan 2020 at 15:03, Sunny Kumar  wrote:
>
> Hello folks,
>
> I would like to propose moving community meeting to a time which is
> more suitable for EMEA/NA zone, that is merging both of the zone
> specific meetings into a single one.
>

I am aware that there are 2 sets of meetings now - APAC and EMEA/NA.
This came about to ensure that users and community at these TZs have
the opportunity to participate in time slices that are more
comfortable. I have never managed to attend a NA/EMEA instance - is
that not seeing enough participation? There is only a very thin time
slice that overlaps APAC, EMEA and NA. If we want to do this, we would
need to have a doodle/whenisgood set of options to see how this pans
out.

> It will be really helpful for people who wish to join these
> meetings from other time zones and will help users to collaborate with
> developers in the APAC region.
>
> Please share your thoughts.
>
> /sunny
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-01-28 Thread Sankarshan Mukhopadhyay
We talked about a set of planned work underway which bring about
substantial improvements in ls-l and similar workloads.

Do we have the (a) data from the test runs to be shared more widely
(b) the patchsets and issues to track this work?
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] reminder: release 8 release notes tracker

2020-01-28 Thread Sankarshan Mukhopadhyay
This note is a reminder to add the topics which are being proposed to
be included in the release notes. As part of a previous meeting, there
is now an issue which tracks this
https://github.com/gluster/glusterfs/issues/813
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] [cross posted] Gluster Community Meeting; component updates and untriaged bugs

2020-01-13 Thread Sankarshan Mukhopadhyay
In a previous instance of the APAC meeting of the Gluster community I
had mentioned looking at the component update section and
considering providing a report on the untriaged bug load. The
rationale is to ensure that instead of a large mess of "unknown" bugs,
the maintainers (and thus the project) take a look at the weekly
report that is generated and begin to take steps to control the
growing number of untriaged ones.

In the meeting today/14Jan, Hari pointed out that the request should
be more widely circulated than an entry in the meeting minutes. And
hence this note. The triage activity would also help frame the
upcoming release(s) in the context of how many reported bugs were
addressed in the release and other meta-data around the same, e.g. how
long it took the project to get it into a release.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Following up on the Gluster 8 conversation from the community meeting

2020-01-09 Thread Sankarshan Mukhopadhyay
I had mentioned that I would reach out to Amar to urge progress on
Gluster 8. We did have a conversation, and upon reviewing the present
state of the list of issues/features with the release label, Amar will
be setting up a meeting to arrive at a more pragmatic, reality-based
plan. It is somewhat obvious that a number of the items listed are not
feasible within the current release timeline. Please wait for the
meeting request from Amar.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Modifying gluster's logging mechanism

2019-11-26 Thread Sankarshan Mukhopadhyay
On Wed, Nov 27, 2019 at 7:44 AM Amar Tumballi  wrote:

> Hi Barak,
>
> My replies inline.
>
> On Thu, Nov 21, 2019 at 6:34 PM Barak Sason Rofman 
> wrote:
>
>>
>>
[snip]


>
>> Initial tests, done by *removing logging from the regression testing,
>> shows an improvement of about 20% in run time*. This indicates we’re
>> taking a pretty heavy performance hit just because of the logging activity.
>>
>>
> That is an interesting observation. For this alone, can we have an option to
> disable all logging during regression? That would speed things up for
> normal runs immediately.
>

If having quicker regression runs is the objective, then perhaps we should
not look at turning off logging to accomplish that. Instead, there ought to
be additional aspects which can be reviewed, with turning off logging being
the last available option.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Modifying gluster's logging mechanism

2019-11-24 Thread Sankarshan Mukhopadhyay
The original proposal from Barak has merit for planning towards making an
alpha form available. Has the project moved away from writing
specifications and reviewing those proposals? Earlier, we used to see
those rather than discussing things in multi-threaded list emails. While
the recorded gains in performance are notable, it would be prudent to
cleanly assess the switch-over cost should the project want to move
over to the new patterns of logging. This seems like a good time to
plan well for a change that has both utility and value.

That said, I'd like to point out some relevant aspects. Logging is
just one (although an important one) part of what is being talked
about as o11y (observability). A log (or a structured log) is a
record of events. Debugging of distributed systems requires
understanding of a unit of work which flowed through a system, firing
off events which in turn were recorded. Logs thus are often portions
of events which are part of this unit of work (assume this is a
"transaction" if that helps grasp it better). Or, in other words, logs
are portions of the abstraction, i.e. events. The key aspect I'd like to
highlight is that (re)designing structured logging in isolation from
o11y principles will not work as we see more customers adopt o11y
patterns and tools within their SRE and other emerging sub-teams.
Focusing just on logs keeps us rooted to the visual display of
information via ELK/EFK models rather than considering the
behavior-centric diagnosis of the whole system.
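
To make the "logs as portions of events within a unit of work" point a
little more concrete, here is a deliberately minimal sketch - this is not
Gluster's logging code, just an assumed JSON-over-stdlib-logging format -
of a structured record that carries a transaction/correlation id so that
individual events can be tied back to the unit of work they belong to:

```python
# Minimal sketch only; the field names and format are assumptions, not Gluster's.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("sketch")

def emit(event, txn_id, **fields):
    # Each record is one event inside a larger unit of work, keyed by txn_id.
    record = {"ts": time.time(), "txn_id": txn_id, "event": event, **fields}
    log.info(json.dumps(record))

txn = str(uuid.uuid4())  # one id for the whole unit of work ("transaction")
emit("fop.start", txn, fop="READV", volume="vol0")
emit("fop.complete", txn, fop="READV", volume="vol0", latency_us=412)
```

With such records, an o11y pipeline can group every event sharing a txn_id
and reconstruct the flow of a single request, which is hard to do from
free-form log lines alone.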



On Fri, Nov 22, 2019 at 3:49 PM Ravishankar N  wrote:
>
>
> On 22/11/19 3:13 pm, Barak Sason Rofman wrote:
> > This is actually one of the main reasons I wanted to bring this up for
> > discussion - will it be fine with the community to run a dedicated
> > tool to reorder the logs offline?
>
> I think it is a bad idea to log without ordering and then rely on an
> external tool to sort it. This is definitely not something I would want
> to do while doing test and development or debugging field issues.
> Structured logging  is definitely useful for gathering statistics and
> post-processing to make reports and charts and what not,  but from a
> debugging point of view, maintaining causality of messages and working
> with command line text editors and tools on log files is important IMO.
>
> I had a similar concern when the brick multiplexing feature was developed,
> where a single log file was used for logging all multiplexed bricks'
> logs. It is so much extra work to weed out the messages of 99 processes to
> read the log of the 1 process you are interested in.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 5.10

2019-10-29 Thread Sankarshan Mukhopadhyay
Would it be necessary to update the 5.10 release notes with this being
a "known issue"?

On Tue, Oct 29, 2019 at 11:07 AM Hari Gowtham  wrote:
>
> Hi,
>
> Hubert, I can see the 5.9 folder contains 5.9 packages.
>
> Alan, we just made a release for 5.10, so we can take this in for 5.11.
> Can you please backport the patch to the release-5 branch so that we
> can review and take it in for 5.11?
>
> On Mon, Oct 28, 2019 at 4:45 PM Alan Orth  wrote:
> >
> > Dear list,
> >
> > I hope that this readdirp issue that causes sporadic "permission denied" 
> > errors can be backported to release-5, as it's already in master and 
> > backported to release-6. There's a perfect reproducer for this issue in the 
> > bugzilla that currently works on Gluster 5.10 (been having it for a few 
> > months and happy to find the cause!).
> >
> > See: https://github.com/gluster/glusterfs/issues/711
> > See: https://bugzilla.redhat.com/show_bug.cgi?id=1668286
> >
> > Thank you!
> >
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Proposal: move glusterfs development to github workflow, completely

2019-08-24 Thread Sankarshan Mukhopadhyay
On Fri, Aug 23, 2019 at 6:42 PM Amar Tumballi  wrote:
>
> Hi developers,
>
> With this email, I want to understand what is the general feeling around this 
> topic.
>
> We from gluster org (in github.com/gluster) have many projects which follow 
> complete github workflow, where as there are few, specially the main one 
> 'glusterfs', which uses 'Gerrit'.
>
> While this has worked all these years, there is currently a huge amount of 
> mind-share around the GitHub workflow, as many other top and similar 
> projects use only GitHub as the place to develop, track and run tests etc. As 
> it is possible to have all of the tools required for this project in github 
> itself (code, PR, issues, CI/CD, docs), lets look at how we are structured 
> today:
>
> Gerrit - glusterfs code + Review system
> Bugzilla - For bugs
> Github - For feature requests
> Trello - (not very much used) for tracking project development.
> CI/CD - CentOS-ci / Jenkins, etc but maintained from different repo.
> Docs - glusterdocs - different repo.
> Metrics - Nothing (other than github itself tracking contributors).
>
> While it may cause a minor glitch for many long-time developers who are used 
> to the flow, moving to GitHub would bring all of these into a single place, make 
> onboarding new users easy, and establish uniform development practices for all 
> gluster org repositories.
>
> As it is just the proposal, I would like to hear people's thought on this, 
> and conclude on this another month, so by glusterfs-8 development time, we 
> are clear about this.
>

I'd want to propose that a decision be arrived at much earlier - say,
within a fortnight, i.e. mid-Sep. I do not see why this would need a
whole month to consider. Such a timeline would also allow us to manage
changes after a proper assessment of sub-tasks.

> Can we decide on this before September 30th? Please voice your concerns.
>
> Regards,
> Amar
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] More intelligent file distribution across subvols of DHT when file size is known

2019-06-05 Thread Sankarshan Mukhopadhyay
On Wed, May 22, 2019 at 6:53 PM Krutika Dhananjay  wrote:
>
> Hi,
>
> I've proposed a solution to the problem of space running out in some children 
> of DHT even when its other children have free space available, here - 
> https://github.com/gluster/glusterfs/issues/675.
>
> The proposal aims to solve a very specific instance of this generic class of 
> problems where fortunately the size of the file that is getting created is 
> known beforehand.
>
> Requesting feedback on the proposal or even alternate solutions, if you have 
> any.

There has not been much commentary on this issue in the last 10-odd
days. What is the next step?
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] tests are timing out in master branch

2019-05-14 Thread Sankarshan Mukhopadhyay
On Wed, May 15, 2019 at 11:24 AM Atin Mukherjee  wrote:
>
> There're random tests which are timing out after 200 secs. My belief is this 
> is a major regression introduced by some commit recently or the builders have 
> become extremely slow which I highly doubt. I'd request that we first figure 
> out the cause, get master back to its proper health and then get back to the 
> review/merge queue.
>

For such dire situations, we also need to consider a proposal to back
out patches in order to keep the master healthy. The outcome we seek
is a healthy master - the isolation of the cause allows us to not
repeat the same offense.

> Sanju has already started looking into 
> /tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t to understand 
> what test is specifically hanging and consuming more time.
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-Maintainers] Release 6.1: Expected tagging on April 10th

2019-04-16 Thread Sankarshan Mukhopadhyay
On Tue, Apr 16, 2019 at 10:27 PM Atin Mukherjee  wrote:
> On Tue, Apr 16, 2019 at 9:19 PM Atin Mukherjee  wrote:
>> On Tue, Apr 16, 2019 at 7:24 PM Shyam Ranganathan  
>> wrote:
>>>
>>> Status: Tagging pending
>>>
>>> Waiting on patches:
>>> (Kotresh/Atin) - glusterd: fix loading ctime in client graph logic
>>>   https://review.gluster.org/c/glusterfs/+/22579
>>
>>
>> The regression doesn't pass for the mainline patch. I believe master is 
>> broken now. With latest master sdfs-sanity.t always fail. We either need to 
>> fix it or mark it as bad test.
>
>
> commit 3883887427a7f2dc458a9773e05f7c8ce8e62301 (HEAD)
> Author: Pranith Kumar K 
> Date:   Mon Apr 1 11:14:56 2019 +0530
>
>features/locks: error-out {inode,entry}lk fops with all-zero lk-owner
>
>Problem:
>Sometimes we find that developers forget to assign lk-owner for an
>inodelk/entrylk/lk before writing code to wind these fops. locks
>xlator at the moment allows this operation. This leads to multiple
>threads in the same client being able to get locks on the inode
>because lk-owner is same and transport is same. So isolation
>with locks can't be achieved.
>
>Fix:
>Disallow locks with lk-owner zero.
>
>fixes bz#1624701
>Change-Id: I1c816280cffd150ebb392e3dcd4d21007cdd767f
>Signed-off-by: Pranith Kumar K 
>
> With the above commit sdfs-sanity.t started failing. But when I looked at the 
> last regression vote at 
> https://build.gluster.org/job/centos7-regression/5568/consoleFull I saw it 
> voted back positive. The alarm bell rang when I saw the overall regression took 
> less than 2 hours, and when I opened the regression link I saw the test 
> actually failed, but this job still voted back +1 at Gerrit.
>
> Deepshika - This is a bad CI bug we have now and have to be addressed at 
> earliest. Please take a look at 
> https://build.gluster.org/job/centos7-regression/5568/consoleFull and 
> investigate why the regression vote wasn't negative.

Atin, we (Deepshikha and I) agree with your assessment.

This is the kind of situation that reduces trust in our
application build pipeline. It is a result of a minor change
introduced to fix the constant issue we have observed with non-voting.
This is something that should not have slipped through, and yet it did. We
will be observing a random sampling of the jobs to ensure that we catch
any such incidents that reduce the utility value of the pipeline. We
will also be reviewing the change to the scripts, which have since had
the fix for the issue that led to this situation in the first place.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-28 Thread Sankarshan Mukhopadhyay
On Fri 29 Mar, 2019, 01:04 Vijay Bellur,  wrote:

>
>
> On Thu, Mar 28, 2019 at 11:39 AM Sankarshan Mukhopadhyay <
> sankarshan.mukhopadh...@gmail.com> wrote:
>
>> On Thu, Mar 28, 2019 at 11:34 PM John Strunk  wrote:
>> >
>> > Thanks for bringing this to the list.
>> >
>> > I think this is a good set of guidelines, and we should publicly post
>> and enforce them once agreement is reached.
>> > The integrity of the gluster github org is important for the future of
>> the project.
>> >
>>
>> I agree. And so, I am looking forward to additional
>> individuals/maintainers agreeing to this so that we can codify it
>> under the Gluster.org Github org too.
>>
>
>
> Looks good to me.
>
> While at this, I would also like us to think about evolving some
> guidelines for creating a new repository in the gluster github
> organization. Right now, a bug report does get a new repository created and
> I feel that the process could be a bit more involved to ensure that we
> host projects with the right content in github.
>

The bug ensures that there is a recorded trail for the request. What might
be the additional detail required that would warrant a more involved
process? At this point, I don't see many fly-by-night projects - just
inactive ones, and those too for myriad reasons.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-28 Thread Sankarshan Mukhopadhyay
On Thu, Mar 28, 2019 at 11:34 PM John Strunk  wrote:
>
> Thanks for bringing this to the list.
>
> I think this is a good set of guidelines, and we should publicly post and 
> enforce them once agreement is reached.
> The integrity of the gluster github org is important for the future of the 
> project.
>

I agree. And so, I am looking forward to additional
individuals/maintainers agreeing to this so that we can codify it
under the Gluster.org Github org too.

>
> On Wed, Mar 27, 2019 at 10:21 PM Sankarshan Mukhopadhyay 
>  wrote:
>>
>> The one at 
>> <https://lists.gluster.org/pipermail/gluster-infra/2018-June/004589.html>
>> I am not sure if the proposal from Michael was agreed to separately
>> and it was done. Also, do we want to do this periodically?
>>
>> Feedback is appreciated.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Following up on the "Github teams/repo cleanup"

2019-03-27 Thread Sankarshan Mukhopadhyay
The one at
<https://lists.gluster.org/pipermail/gluster-infra/2018-June/004589.html>
I am not sure if the proposal from Michael was agreed to separately
and it was done. Also, do we want to do this periodically?

Feedback is appreciated.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-22 Thread Sankarshan Mukhopadhyay
On Wed, Mar 20, 2019 at 10:00 AM Atin Mukherjee  wrote:
>
> From glusterd perspective couple of enhancements I'd propose to be added (a) 
> to capture get-state dump and make it part of sosreport. Of late, we have 
> seen get-state dump has been very helpful in debugging few cases apart from 
> it's original purpose of providing source of cluster/volume information for 
> tendrl (b) capture glusterd statedump
>

How large can these payloads be? One of the challenges I've heard is
that users often struggle when attempting to push large (> 5 GB)
payloads, which makes the total size of the sos archive fairly big.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] GF_CALLOC to GF_MALLOC conversion - is it safe?

2019-03-21 Thread Sankarshan Mukhopadhyay
On Thu, Mar 21, 2019 at 9:24 PM Yaniv Kaul  wrote:

>> Smallfile is part of CI? I am happy to see it documented @ 
>> https://docs.gluster.org/en/latest/Administrator%20Guide/Performance%20Testing/#smallfile-distributed-io-benchmark
>>  , so at least one can know how to execute it manually.
>
>
> Following up the above link to the smallfile repo leads to 404 (I'm assuming 
> we don't have a link checker running on our documentation, so it can break 
> from time to time?)

Hmm... that needs to be addressed.

> I assume it's https://github.com/distributed-system-analysis/smallfile ?

Yes.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Coverity scan is back

2019-03-19 Thread Sankarshan Mukhopadhyay
On Tue, Mar 19, 2019 at 2:48 PM Yaniv Kaul  wrote:
>
> After a long shutdown, the upstream Coverity scan for Gluster is back[1].

There hasn't been much by way of explanation of why the service was
unavailable for a reasonably long period, has there?

Thanks for the update back to the list!

> We were last time measured (January) @ 61 issues and went up to 93 items.
> Overall, we are still at a very good 0.16 defect density, but we've regressed 
> a bit.

Probably something the maintainers can review based on the Impact
rating/score and turn into bite-sized tasks.

> [1] https://scan.coverity.com/projects/gluster-glusterfs?tab=overview
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] requesting review available gluster* plugins in sos

2019-03-19 Thread Sankarshan Mukhopadhyay
On Tue, Mar 19, 2019 at 8:30 PM Soumya Koduri  wrote:
> On 3/19/19 9:49 AM, Sankarshan Mukhopadhyay wrote:
> > <https://github.com/sosreport/sos> is (as might just be widely known)
> > an extensible, portable, support data collection tool primarily aimed
> > at Linux distributions and other UNIX-like operating systems.
> >
> > At present there are 2 plugins
> > <https://github.com/sosreport/sos/blob/master/sos/plugins/gluster.py>
> > and 
> > <https://github.com/sosreport/sos/blob/master/sos/plugins/gluster_block.py>
> > I'd like to request that the maintainers do a quick review to confirm that this
> > sufficiently covers the topics needed to help diagnose issues.
>
> There is one plugin available for nfs-ganesha as well -
> https://github.com/sosreport/sos/blob/master/sos/plugins/nfsganesha.py
>
> It needs a minor update. Sent a pull request for the same -
> https://github.com/sosreport/sos/pull/1593
>

Thanks Soumya!

Other Gluster maintainers - review and respond please.

> Kindly review.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] requesting review available gluster* plugins in sos

2019-03-18 Thread Sankarshan Mukhopadhyay
<https://github.com/sosreport/sos> is (as might just be widely known)
an extensible, portable, support data collection tool primarily aimed
at Linux distributions and other UNIX-like operating systems.

At present there are 2 plugins
<https://github.com/sosreport/sos/blob/master/sos/plugins/gluster.py>
and <https://github.com/sosreport/sos/blob/master/sos/plugins/gluster_block.py>
I'd like to request that the maintainers do a quick review to confirm that
these sufficiently cover the topics needed to help diagnose issues.

This is a lead-up to requesting more usage of the sos tool to diagnose
issues we see reported.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Github#268 Compatibility with Alpine Linux

2019-03-11 Thread Sankarshan Mukhopadhyay
Saw some recent activity on GitHub issue #268 ("Compatibility with
Alpine Linux") - is there a plan to address this or should the
interested users be informed about other plans?

/s
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] 8/10 AWS jenkins builders disconnected

2019-03-06 Thread Sankarshan Mukhopadhyay
On Wed, Mar 6, 2019 at 5:38 PM Deepshikha Khandelwal
 wrote:
>
> Hello,
>
> Today while debugging the centos7-regression failed builds I saw most of the 
> builders did not pass the instance status check on AWS and were unreachable.
>
> Misc investigated this and came to know about the patch[1] which seems to 
> break the builder one after the other. They all ran the regression test for 
> this specific change before going offline.
> We suspect that this change do result in infinite loop of processes as we did 
> not see any trace of error in the system logs.
>
> We did reboot all those builders and they all seem to be running fine now.
>

The question though is - what to do about the patch, if the patch
itself is the root cause? Is this assigned to anyone to look into?

> Please let us know if you see any such issues again.
>
> [1] https://review.gluster.org/#/c/glusterfs/+/22290/


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-31 Thread Sankarshan Mukhopadhyay
On Fri 28 Dec, 2018, 12:44 Raghavendra Gowdappa 
>
> On Mon, Dec 24, 2018 at 6:05 PM Raghavendra Gowdappa 
> wrote:
>
>>
>>
>> On Mon, Dec 24, 2018 at 3:40 PM Sankarshan Mukhopadhyay <
>> sankarshan.mukhopadh...@gmail.com> wrote:
>>
>>> [pulling the conclusions up to enable better in-line]
>>>
>>> > Conclusions:
>>> >
>>> > We should never have a volume with caching-related xlators disabled.
>>> The price we pay for it is too high. We need to make them work consistently
>>> and aggressively to avoid as many requests as we can.
>>>
>>> Are there current issues in terms of behavior which are known/observed
>>> when these are enabled?
>>>
>>
>> We did have issues with pgbench in past. But they've have been fixed.
>> Please refer to bz [1] for details. On 5.1, it runs successfully with all
>> caching related xlators enabled. Having said that the only performance
>> xlators which gave improved performance were open-behind and write-behind
>> [2] (write-behind had some issues, which will be fixed by [3] and we'll
>> have to measure performance again with fix to [3]).
>>
>
> One quick update. Enabling write-behind and md-cache with fix for [3]
> reduced the total time taken for pgbench init phase roughly by 20%-25%
> (from 12.5 min to 9.75 min for a scale of 100). Though this is still a huge
> time (around 12hrs for a db of scale 8000). I'll follow up with a detailed
> report once my experiments are complete. Currently trying to optimize the
> read path.
>
>
>> For some reason, read-side caching didn't improve transactions per
>> second. I am working on this problem currently. Note that these bugs
>> measure transaction phase of pgbench, but what xavi measured in his mail is
>> init phase. Nevertheless, evaluation of read caching (metadata/data) will
>> still be relevant for init phase too.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1512691
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1629589#c4
>> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1648781
>>
>
I think that what I am looking forward to is a well-defined set of next
steps and potential updates to this list that eventually result in a
formal and recorded procedure to ensure that Gluster performs best for
these application workloads.


>>
>>
>>> > We need to analyze client/server xlators deeper to see if we can avoid
>>> some delays. However optimizing something that is already at the
>>> microsecond level can be very hard.
>>>
>>> That is true - are there any significant gains which can be accrued by
>>> putting efforts here or, should this be a lower priority?
>>>
>>
>> The problem identified by xavi is also the one we (Manoj, Krutika, me and
>> Milind) had encountered in the past [4]. The solution we used was to have
>> multiple rpc connections between single brick and client. The solution
>> indeed fixed the bottleneck. So, there is definitely work involved here -
>> either to fix the single connection model or go with multiple connection
>> model. Its preferred to improve single connection and resort to multiple
>> connections only if bottlenecks in single connection are not fixable.
>> Personally I think this is high priority along with having appropriate
>> client side caching.
>>
>> [4] https://bugzilla.redhat.com/show_bug.cgi?id=1467614#c52
>>
>>
>>> > We need to determine what causes the fluctuations in brick side and
>>> avoid them.
>>> > This scenario is very similar to a smallfile/metadata workload, so
>>> this is probably one important cause of its bad performance.
>>>
>>> What kind of instrumentation is required to enable the determination?
>>>
>>> On Fri, Dec 21, 2018 at 1:48 PM Xavi Hernandez 
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I've done some tracing of the latency that network layer introduces in
>>> gluster. I've made the analysis as part of the pgbench performance issue
>>> (in particulat the initialization and scaling phase), so I decided to look
>>> at READV for this particular workload, but I think the results can be
>>> extrapolated to other operations that also have small latency (cached data
>>> from FS for example).
>>> >
>>> > Note that measuring latencies introduces some latency. It consists in
>>> a call to clock_get_time() for each probe point, so the real latency will
>>> be a bit lower, but still proportional to these numbers.
>>> >
>>>
>>> [snip]
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Implementing multiplexing for self heal client.

2018-12-24 Thread Sankarshan Mukhopadhyay
On Fri, Dec 21, 2018 at 6:30 PM RAFI KC  wrote:
>
> Hi All,
>
> What is the problem?
> As of now the self-heal client runs as one daemon per node; this means
> even if there are multiple volumes, there will only be one self-heal
> daemon. So for each configuration change in the cluster to take effect,
> the self-heal daemon has to be reconfigured, but it doesn't have the ability
> to reconfigure dynamically. This means that when you have a lot of volumes
> in the cluster, every management operation that involves configuration
> changes, like volume start/stop, add/remove brick etc., will result in a
> self-heal daemon restart. If such operations are executed often, this not
> only slows down self-heal for a volume but also increases the self-heal
> logs substantially.

What is the value of the number of volumes when you write "lot of
volumes" - 1,000 volumes, more, etc.?

>
>
> How to fix it?
>
> We are planning to follow a similar procedure as attach/detach graphs
> dynamically which is similar to brick multiplex. The detailed steps is
> as below,
>
>
>
>
> 1) First step is to make shd per volume daemon, to generate/reconfigure
> volfiles per volume basis .
>
>1.1) This will help to attach the volfiles easily to existing shd daemon
>
>1.2) This will help to send notification to shd daemon as each
> volinfo keeps the daemon object
>
>1.3) reconfiguring a particular subvolume is easier as we can check
> the topology better
>
>1.4) With this change the volfiles will be moved to workdir/vols/
> directory.
>
> 2) Writing new rpc requests like attach/detach_client_graph function to
> support clients attach/detach
>
>2.1) Also functions like graph reconfigure, mgmt_getspec_cbk has to
> be modified
>
> 3) Safely detaching a subvolume when there are pending frames to unwind.
>
>3.1) We can mark the client disconnected and make all the frames to
> unwind with ENOTCONN
>
>3.2) We can wait all the i/o to unwind until the new updated subvol
> attaches
>
> 4) Handle scenarios like glusterd restart, node reboot, etc
>
>
>
> At the moment we are not planning to limit the number of heal subvolumes
> per process, because with the current approach heal for every volume
> was also being done from a single process. We have not heard any major
> complaints about this.

Is the plan to never limit, or to have a throttle set to a default
high(er) value? How would system resources be impacted if the proposed
design is implemented?
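
To make the throttle question above concrete, here is a purely
illustrative sketch - hypothetical names and a made-up default cap, not
glusterfs/shd code - of how a per-process limit on attached heal
subvolumes could be enforced:

```python
# Illustrative only; HealAttachLimiter and its default cap are assumptions.
import threading

class HealAttachLimiter:
    def __init__(self, max_subvols=256):  # the default value is an assumption
        self._slots = threading.Semaphore(max_subvols)

    def attach(self, subvol):
        # Refuse new attachments once the per-process cap is reached.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("per-process cap reached; not attaching " + subvol)
        # ... attach the subvolume graph to this daemon here ...
        return subvol

    def detach(self, subvol):
        # ... detach the subvolume graph here ...
        self._slots.release()

limiter = HealAttachLimiter(max_subvols=2)
limiter.attach("vol0-replicate-0")
limiter.attach("vol1-replicate-0")
# A third attach would now be rejected until one of the two is detached.
```

Whether the right behaviour at the cap is to reject, to queue, or to spawn
another process is exactly the kind of resource question raised above.
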
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench

2018-12-24 Thread Sankarshan Mukhopadhyay
[pulling the conclusions up to enable better in-line]

> Conclusions:
>
> We should never have a volume with caching-related xlators disabled. The 
> price we pay for it is too high. We need to make them work consistently and 
> aggressively to avoid as many requests as we can.

Are there current issues in terms of behavior which are known/observed
when these are enabled?

> We need to analyze client/server xlators deeper to see if we can avoid some 
> delays. However optimizing something that is already at the microsecond level 
> can be very hard.

That is true - are there any significant gains which can be accrued by
putting efforts here or, should this be a lower priority?

> We need to determine what causes the fluctuations in brick side and avoid 
> them.
> This scenario is very similar to a smallfile/metadata workload, so this is 
> probably one important cause of its bad performance.

What kind of instrumentation is required to enable the determination?

On Fri, Dec 21, 2018 at 1:48 PM Xavi Hernandez  wrote:
>
> Hi,
>
> I've done some tracing of the latency that network layer introduces in 
> gluster. I've made the analysis as part of the pgbench performance issue (in 
> particulat the initialization and scaling phase), so I decided to look at 
> READV for this particular workload, but I think the results can be 
> extrapolated to other operations that also have small latency (cached data 
> from FS for example).
>
> Note that measuring latencies introduces some latency. It consists in a call 
> to clock_get_time() for each probe point, so the real latency will be a bit 
> lower, but still proportional to these numbers.
>

[snip]
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Whats latest on Glusto + GD2 integration?

2018-11-04 Thread Sankarshan Mukhopadhyay
On Mon, Nov 5, 2018 at 8:35 AM Atin Mukherjee  wrote:
>
> Thank you Rahul for the report. This does help to keep the community up to date 
> on the effort being put in here and understand where things stand. Some 
> comments inline.
>
> On Sun, Nov 4, 2018 at 8:01 PM Rahul Hinduja  wrote:
>>
>> Hello,
>>
>> Over past few weeks, few folks are engaged in integrating gd2 with existing 
>> glusto infrastructure/cases. This email is an attempt to provide the high 
>> level view of the work that's done so far and next.
>>
>> Whats Done.
>>
>> Libraries incorporated / under review:
>>
>> Gluster Base Class and setup.py file required to read config file and 
>> install all the packages
>> Exception and lib-utils file required for all basic test cases
>> Common rest methods(Post, Get,  Delete), to handle rest api’s
>> Peer management libraries
>> Basic Volume management libraries
>> Basic Snapshot libraries
>> Self-heal libraries
>> Glusterd init
>> Mount operations
>> Device operations
>>
>> Note: I request you all to provide review comments on the libraries that are 
>> submitted. Over this week, Akarsha and Vaibhavi will try to get the review 
>> comments incorporated and to get these libraries to closure.
>>
>> Where is the repo?
>>
>> [1] https://review.gluster.org/#/q/project:glusto-libs
>>
>> Are we able to consume gd1 cases into gd2?
>>
>> We tried POC to run glusterd and snapshot test cases (one-by-one) via 
>> modified automation and libraries. Following are the highlights:
>>
>> We were able to run 20 gd1 cases out of which 8 passed and 12 failed.
>> We were able to run 11 snapshot cases out of which 7 passed and 4 failed.
>>
>> Reason for failures:
>>
>> Because of different volume options with gd1/gd2
>
> Just to clarify here, we have an open GD2 issue  
> https://github.com/gluster/glusterd2/issues/739 which is being worked on and 
> that should help us to achieve this backward compatibility.
>>
>> Due to different error or output format between gd1/gd2
>
>
> We need to move towards parsing error codes than the error messages. I'm 
> aware that with GD1/CLI such infra was missing, but now that GD2 offers 
> specific error codes, all command failures need to be parsed through 
> error/ret codes in GD2. I believe the library/tests need to be modified 
> accordingly to cater to this need to handle both GD1/GD2 based failures.
>
>> For more detail which test cases is passed or failed and reasons for the 
>> failures [2]
>>
>> [2] 
>> https://docs.google.com/spreadsheets/d/1O9JXQ2IgRIg5uZjCacybk3BMIjMmMeZsiv3-x_RTHWg/edit?usp=sharing
>>

Do these failures require bugs or issues to track the resolution?

>> For more information/collaboration, please reach-out to:
>>
>> Shrivaibavi Raghaventhiran (sragh...@redhat.com)
>> Akarsha Rai (ak...@redhat.com)
>> Rahul Hinduja (rhind...@redhat.com)
>>

Should we not be using
 for
these conversations as well?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] POC- Distributed regression testing framework

2018-10-04 Thread Sankarshan Mukhopadhyay
On Thu, Oct 4, 2018 at 6:10 AM Sanju Rakonde  wrote:
> On Wed, Oct 3, 2018 at 3:26 PM Deepshikha Khandelwal  
> wrote:
>>
>> Hello folks,
>>
>> Distributed-regression job[1] is now a part of Gluster's
>> nightly-master build pipeline. The following are the issues we have
>> resolved since we started working on this:
>>
>> 1) Collecting gluster logs from servers.
>> 2) Tests failed due to infra-related issues have been fixed.
>> 3) Time taken to run regression testing reduced to ~50-60 minutes.
>>
>> To get time down to 40 minutes needs your help!
>>
>> Currently, there is a test that is failing:
>>
>> tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t
>>
>> This needs fixing first.
>
>
> Where can I get the logs of this test case? In 
> https://build.gluster.org/job/distributed-regression/264/console I see this 
> test case is failed and re-attempted. But I couldn't find logs.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Glusto Happenings

2018-09-17 Thread Sankarshan Mukhopadhyay
On Tue, Sep 18, 2018 at 6:07 AM Amye Scavarda  wrote:
>
> Adding Maintainers as that's the group that will be more interested in this.
> Our next maintainers meeting is October 1st, want to present on what the 
> current status is there?

I'd like to have some links to tracking mechanisms, e.g. GitHub issues
and such, for the topics that have already been mentioned below, as well
as any Gluster-specific changes that are coming up in versions. The
intent is to allow the maintainers to anticipate and plan for any
changes and discuss those.

> On Mon, Sep 17, 2018 at 12:29 AM Jonathan Holloway  
> wrote:
>>
>> Hi Gluster-devel,
>>
>> It's been awhile, since we updated gluster-devel on things related to Glusto.
>>
>> The big thing in the works for Glusto is Python3 compatibility.
>> A port is in progress, and the target is October to have a branch ready for 
>> testing. Look for another update here when that is available.
>>
>> Thanks to Vijay Avuthu for testing a change to the Python2 version of 
>> Carteplex (the cartesian product module in Glusto that drives the runs_on 
>> decorator used in Gluster tests). Tests inheriting from GlusterBaseClass 
>> have been using im_func to make calls against the base class setUp method. 
>> This change allows the use of super() as well as im_func.
>>
>> On a related note, the syntax for both im_func and super() changes in 
>> Python3. The "Developer Guide for Tests and Libraries" section of the 
>> glusterfs/glusto-tests docs currently shows 
>> "GlusterBaseClass.setUp.im_func(self)", but will be updated with the 
>> preferred call for Python3.
>>
>> And lastly, you might have seen an issue with tests under Python2 where a 
>> run kicked off via py.test or /usr/bin/glusto would immediately fail with a 
>> message indicating gcc needs to be installed. The problem was specific to a 
>> recent update of PyTest and scandir, and the original workaround was to 
>> install gcc or a previous version of pytest and scandir. The scandir 
>> maintainer fixed the issue upstream with scandir 1.9.0 (available in PyPI).
>>
>> That's all for now.
>>
>> Cheers,
>> Jonathan (loadtheacc)
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
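
On the im_func/super() note in the quoted mail above, a minimal sketch of
the two call styles - these are simplified stand-ins, not the real
glusto-tests classes. im_func exists only on Python 2, while the super()
form works on both interpreters (Python 3 additionally allows the bare
super() spelling):

```python
# Simplified stand-ins for illustration; not the actual glusto-tests classes.
class GlusterBaseClass(object):
    def setUp(self):
        print("base setUp")

class MyGlusterTest(GlusterBaseClass):
    def setUp(self):
        # Python 2 only: call the base method via the unbound function object.
        # GlusterBaseClass.setUp.im_func(self)

        # Portable across Python 2 and Python 3:
        super(MyGlusterTest, self).setUp()
        print("derived setUp")

MyGlusterTest().setUp()
```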



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python components and test coverage

2018-08-10 Thread Sankarshan Mukhopadhyay
On Fri, Aug 10, 2018 at 5:47 PM, Nigel Babu  wrote:
> Hello folks,
>
> We're currently in a transition to python3. Right now, there's a bug in one
> piece of this transition code. I saw Nithya run into this yesterday. The
> challenge here is, none of our testing for python2/python3 transition
> catches this bug. Both Pylint and the ast-based testing that Kaleb
> recommended does not catch this bug. The bug is trivial and would take 2
> mins to fix, the challenge is that until we exercise almost all of these
> code paths from both Python3 and Python2, we're not going to find out that
> there are subtle breakages like this.
>

Where is this great reveal - what is the above-mentioned bug?
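
(The thread never names the actual bug, so purely as an illustration of
the class of breakage that neither Pylint nor ast-based checks flag -
code that is syntactically valid on both interpreters but behaves
differently at runtime - consider something like this:)

```python
# Illustration only - not the actual bug referred to in the thread.
changed = {"vol0": "started"}

keys = changed.keys()        # Python 2: a list snapshot; Python 3: a live view
changed["vol1"] = "stopped"
print(len(keys))             # 1 on Python 2, 2 on Python 3

print(3 / 2)                 # 1 on Python 2, 1.5 on Python 3
```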

> As far as I know, the three pieces where we use Python are geo-rep,
> glusterfind, and libgfapi-python. My question:
> * Are there more places where we run python?
> * What sort of automated test coverage do we have for these components right
> now?
> * What can the CI team do to help identify problems? We have both Centos7
> and Fedora28 builders, so we can definitely help run tests specific to
> python.
>
> --
> nigelb
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster Documentation Hackathon - 7/19 through 7/23

2018-08-06 Thread Sankarshan Mukhopadhyay
Was a round-up/summary about this published to the lists?

On Wed, Jul 18, 2018 at 10:27 PM, Vijay Bellur  wrote:
> Hey All,
>
> We are organizing a hackathon to improve our upstream documentation. More
> details about the hackathon can be found at [1].
>
> Please feel free to let us know if you have any questions.
>
> Thanks,
> Amar & Vijay
>
> [1]
> https://docs.google.com/document/d/11LLGA-bwuamPOrKunxojzAEpHEGQxv8VJ68L3aKdPns/edit?usp=sharing
>



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-05 Thread Sankarshan Mukhopadhyay
On Mon, Aug 6, 2018 at 5:17 AM, Amye Scavarda  wrote:
>
>
> On Sun, Aug 5, 2018 at 3:24 PM Shyam Ranganathan 
> wrote:
>>
>> On 07/31/2018 07:16 AM, Shyam Ranganathan wrote:
>> > On 07/30/2018 03:21 PM, Shyam Ranganathan wrote:
>> >> On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
>> >>> 1) master branch health checks (weekly, till branching)
>> >>>   - Expect every Monday a status update on various tests runs
>> >> See https://build.gluster.org/job/nightly-master/ for a report on
>> >> various nightly and periodic jobs on master.
>> > Thinking aloud, we may have to stop merges to master to get these test
>> > failures addressed at the earliest and to continue maintaining them
>> > GREEN for the health of the branch.
>> >
>> > I would give the above a week, before we lockdown the branch to fix the
>> > failures.
>> >
>> > Let's try and get line-coverage and nightly regression tests addressed
>> > this week (leaving mux-regression open), and if addressed not lock the
>> > branch down.
>> >
>>
>> Health on master as of the last nightly run [4] is still the same.
>>
>> Potential patches that rectify the situation (as in [1]) are bunched in
>> a patch [2] that Atin and myself have put through several regressions
>> (mux, normal and line coverage) and these have also not passed.
>>
>> Till we rectify the situation we are locking down master branch commit
>> rights to the following people, Amar, Atin, Shyam, Vijay.
>>
>> The intention is to stabilize master and not add more patches that may
>> destabilize it.
>>
>> Test cases that are tracked as failures and need action are present here
>> [3].
>>
>> @Nigel, request you to apply the commit rights change as you see this
>> mail and let the list know regarding the same as well.
>>
>> Thanks,
>> Shyam
>>
>> [1] Patches that address regression failures:
>> https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
>>
>> [2] Bunched up patch against which regressions were run:
>> https://review.gluster.org/#/c/20637
>>
>> [3] Failing tests list:
>>
>> https://docs.google.com/spreadsheets/d/1IF9GhpKah4bto19RQLr0y_Kkw26E_-crKALHSaSjZMQ/edit?usp=sharing
>>
>> [4] Nightly run dashboard: https://build.gluster.org/job/nightly-master/

>
> Locking master is fine, this seems like there's been ample notice and
> conversation.
> Do we have test criteria to indicate when we're unlocking master? X amount
> of tests passing, Y amount of bugs?

The "till we rectify" might just include 3 days of the entire set of
tests passing - thinking out loud here.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-02 Thread Sankarshan Mukhopadhyay
On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar
 wrote:
> I am facing a different issue on the softserve machines. The fuse mount
> itself is failing.
> I tried the day before yesterday to debug geo-rep failures. I discussed it
> with Raghu, but could not root-cause it. So none of the tests were passing.
> It happened on both machine instances I tried.
>

Ugh! The -infra team should have an issue to work with in order to resolve this.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-08-01 Thread Sankarshan Mukhopadhyay
On Thu, Aug 2, 2018 at 12:19 AM, Shyam Ranganathan  wrote:
> On 07/31/2018 12:41 PM, Atin Mukherjee wrote:
>> tests/bugs/core/bug-1432542-mpx-restart-crash.t - Times out even after
>> 400 secs. Refer
>> https://fstat.gluster.org/failure/209?state=2&start_date=2018-06-30&end_date=2018-07-31&branch=all,
>> specifically the latest report
>> https://build.gluster.org/job/regression-test-burn-in/4051/consoleText .
>> Wasn't timing out as frequently as it was till 12 July. But since 27
>> July, it has timed out twice. Beginning to believe commit
>> 9400b6f2c8aa219a493961e0ab9770b7f12e80d2 has added the delay and now 400
>> secs isn't sufficient enough (Mohit?)
>
> The above test is the one that is causing line coverage to fail as well
> (mostly, say 50% of the time).
>
> I did have this patch up to increase timeouts and also ran a few rounds
> of tests, but results are mixed. It passes when run first, and later
> errors out in other places (although not timing out).
>
> See: https://review.gluster.org/#/c/20568/2 for the changes and test run
> details.
>

If I may ask - why are we always exploring the "increase timeout" part
of this? I understand that some tests may take longer - but 400s is
quite a non-trivial amount of time - what other, more efficient means
could we explore?

> The failure of this test in regression-test-burn-in run#4051 is strange
> again, it looks like the test completed within stipulated time, but
> restarted again post cleanup_func was invoked.
>
> Digging a little further the manner of cleanup_func and traps used in
> this test seem *interesting* and maybe needs a closer look to arrive at
> possible issues here.
>
> @Mohit, request you to take a look at the line coverage failures as
> well, as you handle the failures in this test.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 5: Master branch health report (Week of 30th July)

2018-07-31 Thread Sankarshan Mukhopadhyay
On Tue, Jul 31, 2018 at 4:46 PM, Shyam Ranganathan  wrote:
> On 07/30/2018 03:21 PM, Shyam Ranganathan wrote:
>> On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
>>> 1) master branch health checks (weekly, till branching)
>>>   - Expect every Monday a status update on various tests runs
>>
>> See https://build.gluster.org/job/nightly-master/ for a report on
>> various nightly and periodic jobs on master.
>

This doesn't look like how things are expected to be.

> Thinking aloud, we may have to stop merges to master to get these test
> failures addressed at the earliest and to continue maintaining them
> GREEN for the health of the branch.
>
> I would give the above a week, before we lockdown the branch to fix the
> failures.
>

Is 1 week a sufficient estimate to address the issues?

> Let's try and get line-coverage and nightly regression tests addressed
> this week (leaving mux-regression open), and if addressed not lock the
> branch down.
>
>>
>> RED:
>> 1. Nightly regression (3/6 failed)
>> - Tests that reported failure:
>> ./tests/00-geo-rep/georep-basic-dr-rsync.t
>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
>> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
>> ./tests/bugs/distribute/bug-1122443.t
>>
>> - Tests that needed a retry:
>> ./tests/00-geo-rep/georep-basic-dr-tarssh.t
>> ./tests/bugs/glusterd/quorum-validation.t
>>
>> 2. Regression with multiplex (cores and test failures)
>>
>> 3. line-coverage (cores and test failures)
>> - Tests that failed:
>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t (patch
>> https://review.gluster.org/20568 does not fix the timeout entirely, as
>> can be seen in this run,
>> https://build.gluster.org/job/line-coverage/401/consoleFull )
>>
>> Calling out to contributors to take a look at various failures, and post
>> the same as bugs AND to the lists (so that duplication is avoided) to
>> get this to a GREEN status.
>>
>> GREEN:
>> 1. cpp-check
>> 2. RPM builds
>>
>> IGNORE (for now):
>> 1. clang scan (@nigel, this job requires clang warnings to be fixed to
>> go green, right?)
>>
>> Shyam
-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-25 Thread Sankarshan Mukhopadhyay
On Wed, Jul 25, 2018 at 11:53 PM, Yaniv Kaul  wrote:
>
>
> On Tue, Jul 24, 2018, 7:20 PM Pranith Kumar Karampuri 
> wrote:
>>
>> hi,
>>   Quite a few commands to monitor gluster at the moment take almost a
>> second to give output.
>> Some categories of these commands:
>> 1) Any command that needs to do some sort of mount/glfs_init.
>>  Examples: 1) heal info family of commands 2) statfs to find
>> space-availability etc (On my laptop replica 3 volume with all local bricks,
>> glfs_init takes 0.3 seconds on average)
>> 2) glusterd commands that need to wait for the previous command to unlock.
>> If the previous command is something related to lvm snapshot which takes
>> quite a few seconds, it would be even more time consuming.
>>
>> Nowadays container workloads have hundreds of volumes if not thousands. If
>> we want to serve any monitoring solution at this scale (I have seen
>> customers use upto 600 volumes at a time, it will only get bigger) and lets
>> say collecting metrics per volume takes 2 seconds per volume(Let us take the
>> worst example which has all major features enabled like
>> snapshot/geo-rep/quota etc etc), that will mean that it will take 20 minutes
>> to collect metrics of the cluster with 600 volumes. What are the ways in
>> which we can make this number more manageable? I was initially thinking may
>> be it is possible to get gd2 to execute commands in parallel on different
>> volumes, so potentially we could get this done in ~2 seconds. But quite a
>> few of the metrics need a mount or equivalent of a mount(glfs_init) to
>> collect different information like statfs, number of pending heals, quota
>> usage etc. This may lead to high memory usage as the size of the mounts tend
>> to be high.
>>
>> I wanted to seek suggestions from others on how to come to a conclusion
>> about which path to take and what problems to solve.
>
>
> I would imagine that in gd2 world:
> 1. All stats would be in etcd.
> 2. There will be a single API call for GetALLVolumesStats or something and
> we won't be asking the client to loop, or there will be a similar efficient
> single API to query and deliver stats for some volumes in a batch ('all
> bricks in host X' for example).
>

Single end point for metrics/monitoring was a topic that was not
agreed upon at <https://github.com/gluster/glusterd2/issues/538>

> Worth looking how it's implemented elsewhere in K8S.
>
> In any case, when asking for metrics I assume the latest already available
> would be returned and we are not going to fetch them when queried. This is
> both fragile (imagine an entity that doesn't respond well) and adds latency
> and will be inaccurate anyway a split second later.
>
> Y.
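
Tying together the two suggestions above - parallelism across volumes on
the server side and a single batched query from the client side - here is
a minimal illustrative sketch; collect_volume_metrics() is a hypothetical
stand-in for the expensive per-volume work, not gd2's actual API:

```python
# Illustrative sketch only; collect_volume_metrics() is a hypothetical stand-in.
from concurrent.futures import ThreadPoolExecutor
import time

def collect_volume_metrics(volume):
    # Stand-in for per-volume work such as glfs_init, statfs, heal info, quota.
    time.sleep(0.1)
    return {"volume": volume, "used_pct": 42.0, "pending_heals": 0}

def collect_all(volumes, workers=32):
    # Bounded parallelism: one pool, many volumes, a single batched result.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(collect_volume_metrics, volumes))

# 600 volumes at ~0.1s each: ~60s serially, roughly 2s with 32 workers.
metrics = collect_all(["vol%d" % i for i in range(600)])
print(len(metrics))
```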



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How long should metrics collection on a cluster take?

2018-07-24 Thread Sankarshan Mukhopadhyay
On Tue, Jul 24, 2018 at 9:48 PM, Pranith Kumar Karampuri
 wrote:
> hi,
>   Quite a few commands to monitor gluster at the moment take almost a
> second to give output.

Is this at the minimum recommended cluster size?

> Some categories of these commands:
> 1) Any command that needs to do some sort of mount/glfs_init.
>  Examples: 1) heal info family of commands 2) statfs to find
> space-availability etc (On my laptop replica 3 volume with all local bricks,
> glfs_init takes 0.3 seconds on average)
> 2) glusterd commands that need to wait for the previous command to unlock.
> If the previous command is something related to lvm snapshot which takes
> quite a few seconds, it would be even more time consuming.
>
> Nowadays container workloads have hundreds of volumes if not thousands. If
> we want to serve any monitoring solution at this scale (I have seen
> customers use upto 600 volumes at a time, it will only get bigger) and lets
> say collecting metrics per volume takes 2 seconds per volume(Let us take the
> worst example which has all major features enabled like
> snapshot/geo-rep/quota etc etc), that will mean that it will take 20 minutes
> to collect metrics of the cluster with 600 volumes. What are the ways in
> which we can make this number more manageable? I was initially thinking may
> be it is possible to get gd2 to execute commands in parallel on different
> volumes, so potentially we could get this done in ~2 seconds. But quite a
> few of the metrics need a mount or equivalent of a mount(glfs_init) to
> collect different information like statfs, number of pending heals, quota
> usage etc. This may lead to high memory usage as the size of the mounts tend
> to be high.
>

I am not sure that the "worst example" (it certainly is not) is a good
place to start from. That said, for any environment with that number of
disposable volumes, what kinds of metrics actually make any sense or
have any impact?

> I wanted to seek suggestions from others on how to come to a conclusion
> about which path to take and what problems to solve.
>
> I will be happy to raise github issues based on our conclusions on this mail
> thread.
>
> --
> Pranith
>





-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: LTM release targeted for end of May

2018-03-13 Thread Sankarshan Mukhopadhyay
On Tue, Mar 13, 2018 at 1:05 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Tue, Mar 13, 2018 at 7:07 AM, Shyam Ranganathan 
> wrote:
>>
>> Hi,
>>
>> As we wind down on 4.0 activities (waiting on docs to hit the site, and
>> packages to be available in CentOS repositories before announcing the
>> release), it is time to start preparing for the 4.1 release.
>>
>> 4.1 is where we have GD2 fully functional and shipping with migration
>> tools to aid Glusterd to GlusterD2 migrations.
>>
>> Other than the above, this is a call out for features that are in the
>> works for 4.1. Please *post* the github issues to the *devel lists* that
>> you would like as a part of 4.1, and also mention the current state of
>> development.
>>
>> Further, as we hit end of March, we would make it mandatory for features
>> to have required spec and doc labels, before the code is merged, so
>> factor in efforts for the same if not already done.
>
>
> Could you explain the point above further? Is it just the label or the
> spec/doc
> that we need merged before the patch is merged?
>

I'll hazard a guess that the intent of the label is to indicate
availability of the doc. "Completeness" of code is being defined as
including specifications and documentation.

That said, I'll wait for Shyam to be more elaborate on this.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] IMP: Release 4.0: CentOS 6 packages will not be made available

2018-01-22 Thread Sankarshan Mukhopadhyay
On Fri, Jan 12, 2018 at 12:02 AM, Shyam Ranganathan  wrote:
> Gluster Users,
>
> This is to inform you that from the 4.0 release onward, packages for
> CentOS 6 will not be built by the gluster community. This also means
> that the CentOS SIG will not receive updates for 4.0 gluster packages.
>

It would be important to document guidance for any users considering a
Gluster 3.x/CentOS 6 --> Gluster 4.x/CentOS 7 move. It is implied that
the 3.x/CentOS 7 --> 4.x/CentOS 7 path would be tested.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: [CentOS-devel] ansible in CentOS Extras

2017-09-08 Thread Sankarshan Mukhopadhyay
I'm not sure if this has any material impact on Gluster. I would request the
Gluster project members who are also part of the CentOS Storage SIG to
confirm and/or respond to Karanbir.


-- Forwarded message --
From: Karanbir Singh 
Date: Fri, Sep 8, 2017 at 4:08 AM
Subject: [CentOS-devel] ansible in CentOS Extras
To: "The CentOS developers mailing list." 


hi,

https://git.centos.org/log/rpms!ansible.git/c7-extras

is now in CentOS-Extras/ - this is going to impact every SIG that uses
ansible in and from cbs.centos.org, or needs specific versions pin'd for
their roles ( paas / cloud sig etc )

is there anything the SIG's need to validate before this content hits
mirror.centos.org in a few days time ?

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 3.12: Glusto run status

2017-08-29 Thread Sankarshan Mukhopadhyay
On Wed, Aug 30, 2017 at 6:03 AM, Atin Mukherjee  wrote:
>
> On Wed, 30 Aug 2017 at 00:23, Shwetha Panduranga 
> wrote:
>>
>> Hi Shyam, we are already doing it. we wait for rebalance status to be
>> complete. We loop. we keep checking if the status is complete for '20'
>> minutes or so.
>
>
> Are you saying in this test rebalance status was executed multiple times
> till it succeed? If yes then the test shouldn't have failed. Can I get to
> access the complete set of logs?

Would you not prefer to look at the specific test under discussion as well?
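
(For reference, the bounded wait being described is roughly the sketch
below - illustrative only; rebalance_status() is a stand-in for parsing
the output of `gluster volume rebalance <volname> status`:)

import subprocess
import time

def rebalance_status(volname):
    # Hypothetical parsing: real test code would inspect the status
    # table (or the --xml output) rather than grep for a word.
    out = subprocess.run(["gluster", "volume", "rebalance", volname, "status"],
                         capture_output=True, text=True).stdout
    return "completed" if "completed" in out else "in progress"

def wait_for_rebalance(volname, timeout=20 * 60, interval=10):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if rebalance_status(volname) == "completed":
            return True
        time.sleep(interval)
    return False  # timed out; the caller decides whether that fails the test
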
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterd2 - Some anticipated changes to glusterfs source

2017-08-17 Thread Sankarshan Mukhopadhyay
On Thu, Aug 17, 2017 at 12:46 PM, Kaushal M  wrote:
> On Thu, Aug 3, 2017 at 2:12 PM, Milind Changire  wrote:
>>
>>
>> On Thu, Aug 3, 2017 at 12:56 PM, Kaushal M  wrote:
>>>
>>> On Thu, Aug 3, 2017 at 2:14 AM, Niels de Vos  wrote:
>>> > On Wed, Aug 02, 2017 at 05:03:35PM +0530, Prashanth Pai wrote:
>>> >> Hi all,
>>> >>
>>> >> The ongoing work on glusterd2 necessitates following non-breaking and
>>> >> non-exhaustive list of changes to glusterfs source code:
>>> >>
>>> >> Port management
>>> >> - Remove hard-coding of glusterd's port as 24007 in clients and
>>> >> elsewhere.
>>> >>   Glusterd2 can be configured to listen to clients on any port (still
>>> >> defaults to
>>> >>   24007 though)
>>> >> - Let the bricks and daemons choose any available port and if needed
>>> >> report
>>> >>   the port used to glusterd during the "sign in" process. Prasanna has
>>> >> a
>>> >> patch
>>> >>   to do this.
>>> >> - Glusterd <--> brick (or any other local daemon) communication should
>>> >>   always happen over Unix Domain Socket. Currently glusterd and brick
>>> >>   process communicates over UDS and also port 24007. This will allow us
>>> >>   to set better authentication and rules for port 24007 as it shall
>>> >> only be
>>> >> used
>>> >>   by clients.
>>> >
>>> > I prefer this last point to be configurable. At least for debugging we
>>> > should be able to capture network traces and display the communication
>>> > in Wireshark. Defaulting to UNIX Domain Sockets is fine though.
>>>
>>> This is the communication between GD2 and bricks, of which there is
>>> not a lot happening, and not much to capture.
>>> But I agree, it will be nice to have this configurable.
>>>
>>
>> Could glusterd start attempting port binding at 24007 and progress on to
>> higher port numbers until successful and register the bound port number with
>> rpcbind ? This way the setup will be auto-configurable and admins need not
>> scratch their heads to decide upon one port number. Gluster clients could
>> always talk to rpcbind on the nodes to get glusterd service port whenever a
>> reconnect is required.
>
> 24007 has always been used as the GlusterD port. There was a plan to
> have it registered with IANA as well.
> Having a well defined port is useful to allow proper firewall rules to be 
> setup.

I seem to recall asking about this in another thread - is anyone
planning to follow through with the registration?
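
For what it is worth, the bind-and-walk-upwards idea quoted above would
look roughly like the sketch below. This is purely illustrative - it is
not what glusterd does, and the reply above already notes why a fixed,
well-known port is preferable for firewall rules and IANA registration:

import socket

def bind_first_free_port(start=24007, attempts=100):
    for port in range(start, start + attempts):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind(("0.0.0.0", port))
            sock.listen(128)
            # The chosen port would then have to be advertised somewhere
            # (the mail suggests rpcbind) for clients to discover it.
            return sock, port
        except OSError:
            sock.close()  # port in use, try the next one
    raise RuntimeError("no free port found in range")

listener, port = bind_first_free_port()
print("listening on port", port)
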
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Fwd: tendrl-release v1.5.0 (Milestone1) is available

2017-08-05 Thread Sankarshan Mukhopadhyay
On Sat, Aug 5, 2017 at 8:50 AM, Vijay Bellur  wrote:
> Thank you Rohan for letting us know about this release!
>
> I looked for screenshots of the dashboard and found a few at [1]. Are there
> more screenshots that you can share with us?
>

The Metrics page (URL below) was intended to track the metrics
available in the build Rohan announced. This is good feedback and yes,
we can add screenshots to it.

The Tendrl project is going to improve on the install+configure
experience by switching over to an Ansible-based model; the current
set of steps will then be substantially reduced.

> [1] https://github.com/Tendrl/documentation/wiki/Metrics
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 8:50 PM, Niels de Vos  wrote:

[much snipping]

> A but related to this, but is Stack Overflow not *the* place to ask and
> answer questions? There even is a "glusterfs" tag, and questions and
> answers can be marked even better than with GitHub (imho):
>
> https://stackoverflow.com/questions/tagged/glusterfs
>

Alright. The intended outcome (as I understand this conversation
thread) is that developers/maintainers want to respond more quickly
and efficiently to queries from users in the community. That outcome
is achieved when the developers/maintainers/those-who-know respond
more often. The tooling or medium is not the crucial part here.
However, it becomes critical when there are far too many avenues and
too little attention on a regular basis. *That* is a far from ideal
situation.

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 8:21 PM, Pranith Kumar Karampuri
 wrote:
> People who respond on gluster-users do this already. Github issues is a
> better tool to do this (Again IMHO). It is a bit easier to know what
> questions are still open to be answered with github. Even after multiple
> responses on gluster-users mail you won't be sure if it is answered or not,
> so one has to go through the responses. Where as on github we can close it
> once answered. So these kinds of things are the reason for asking this.

Even with Github Issues, Gluster will not see a reduction of traffic
to -users/-devel asking questions about the releases. What will happen
for a (short?) while is that the project will have two inbound avenues
to look for questions and respond to them. Over a period of time, if
extreme focus and diligence are maintained, the traffic on the mailing
lists *might* move over to Github Issues. There are obvious advantages to
using Issues. But there are also a small number of must-do things to
manage this approach.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 7:21 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Tue, May 9, 2017 at 7:09 PM, Sankarshan Mukhopadhyay
>  wrote:
>>
>> On Tue, May 9, 2017 at 3:18 PM, Amar Tumballi  wrote:
>> > I personally prefer github questions than mailing list, because a valid
>> > question can later become a reason for a new feature. Also, as you said,
>> > we
>> > can 'assignee' a question and if we start with bug triage we can also
>> > make
>> > sure at least we respond to questions which is pending.
>>
>> Is the on-going discussion in this thread about using
>> <https://github.com/gluster/glusterfs/issues> as a method to have
>> questions and responses from the community?
>
>
> yeah, this is something users have started doing. It seems better than mail
> (at least to me).
>

There is a trend of projects who are moving to Github Issues as a
medium/platform for responding to queries. As Amar mentions earlier in
the thread and Shyam implies - this requires constant (ie. daily)
vigil and attention. If those are in place, it is a practical move.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Questions on github

2017-05-09 Thread Sankarshan Mukhopadhyay
On Tue, May 9, 2017 at 3:18 PM, Amar Tumballi  wrote:
> I personally prefer github questions than mailing list, because a valid
> question can later become a reason for a new feature. Also, as you said, we
> can 'assignee' a question and if we start with bug triage we can also make
> sure at least we respond to questions which is pending.

Is the on-going discussion in this thread about using
<https://github.com/gluster/glusterfs/issues> as a method to have
questions and responses from the community?



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] What is the risk to upgrade glusterfs server version?

2016-11-06 Thread Sankarshan Mukhopadhyay
On Sun, Nov 6, 2016 at 11:23 AM, Jin Li  wrote:
> I want to upgrade glusterfs server to the latest version. Current old
> glusterfs server works with several volumes for a long time, and it is
> a little risky to upgrade. Could you show me what will be the
> highlight points to care about upgrading glusterfs server? Thanks.

Would it be possible to have more detail about the current/deployed
environment and the expected upgrade targets?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Events API new Port requirement

2016-09-08 Thread Sankarshan Mukhopadhyay
On Fri, Sep 9, 2016 at 3:55 AM, Niels de Vos  wrote:
> On Thu, Sep 08, 2016 at 10:03:03PM +0530, Sankarshan Mukhopadhyay wrote:
>> On Sun, Aug 28, 2016 at 2:13 PM, Niels de Vos  wrote:
>> > This definitely has my preference too. I've always wanted to try to
>> > register port 24007/8, and maybe the time has come to look into it.
>>
>> Has someone within the project previously undertaken the process of
>> requesting the IANA for assignment of a new service name and port
>> number value?
>
> Not that I know. I think Amar had an interest several years ago, but
> I've never seen any requests or results.
>
> There are some Red Hat colleagues that might have experience with this.
> I can ask and see if they can provide any guidance. But if there is
> someone very interested and eager to request these ports, please go
> ahead (and CC me on the communications).
>

If you can reach out and obtain the necessary information, it would be
a good first step.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Events API new Port requirement

2016-09-08 Thread Sankarshan Mukhopadhyay
On Sun, Aug 28, 2016 at 2:13 PM, Niels de Vos  wrote:
> This definitely has my preference too. I've always wanted to try to
> register port 24007/8, and maybe the time has come to look into it.

Has someone within the project previously undertaken the process of
requesting the IANA for assignment of a new service name and port
number value?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Post mortem of features that didn't make it to 3.9.0

2016-09-06 Thread Sankarshan Mukhopadhyay
On Wed, Sep 7, 2016 at 5:10 AM, Pranith Kumar Karampuri
 wrote:

>  Do you think it makes sense to do post-mortem of features that didn't
> make it to 3.9.0? We have some features that missed deadlines twice as well,
> i.e. planned for 3.8.0 and didn't make it and planned for 3.9.0 and didn't
> make it. So may be we are adding features to roadmap without thinking things
> through? Basically it leads to frustration in the community who are waiting
> for these components and they keep moving to next releases.

Doing a post-mortem to understand the pieces which went well (so that
we can continue doing them); which didn't go well (so that we can
learn from those) and which were impediments (so that we can address
the topics and remove them) is a useful exercise.

> Please let me know your thoughts. Goal is to get better at planning and
> deliver the features as planned as much as possible. Native subdirectoy
> mounts is in same situation which I was supposed to deliver.
>
> I have the following questions we need to ask ourselves the following
> questions IMO:

Incident-based post-mortems require a timeline. While that may not be
necessary here, the questions are perhaps too specific. Also, it would
be good to set the expectation for the exercise - what would all the
inputs lead to?

> 1) Did we have approved design before we committed the feature upstream for
> 3.9?
> 2) Did we allocate time for execution of this feature upstream?
> 3) Was the execution derailed by any of the customer issues/important work
> in your organizatoin?
> 4) Did developers focus on something that is not of priority which could
> have derailed the feature's delivery?
> 5) Did others in the team suspect the developers are not focusing on things
> that are of priority but didn't communicate?
> 6) Were there any infra issues that delayed delivery of this
> feature(regression failures etc)?
> 7) Were there any big delays in reviews of patches?
>
> Do let us know if you think we should ask more questions here.
>
> --
> Aravinda & Pranith



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Regular Performance Regression Testing

2016-08-29 Thread Sankarshan Mukhopadhyay
On Mon, Aug 29, 2016 at 9:16 PM, Vijay Bellur  wrote:
> I would also recommend running perf-test.sh [1] for regression.
>

Would it be useful to have this script maintained as part of the
Gluster organization? Improvements/changes could perhaps be more
easily tracked.

> [1] https://github.com/avati/perf-test/blob/master/perf-test.sh




-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Backup support for GlusterFS

2016-08-19 Thread Sankarshan Mukhopadhyay
On Fri, Aug 19, 2016 at 2:53 PM, Alok Srivastava  wrote:
> the proposed approach is based on NDMP + Snapshots, Hence it's not a one
> size fits all approach. However, Making use of the snapshots will ensure
> that  a point in time copy is migrated and the in-flight directories are
> also accessible to the clients connected to the source.

Perhaps this is when the developers of the snapshot feature would need to
chime in on the possible sequence of things to be done.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [gluster-devel] Documentation Tooling Review

2016-08-12 Thread Sankarshan Mukhopadhyay
On Fri, Aug 12, 2016 at 1:53 AM, Amye Scavarda  wrote:

[snip]

> I'm happy to take comments on this proposal. Over the next week, I'll be
> reviewing the level of effort it would take to migrate to ASCIIdocs and
> ASCIIbinder, with the goal being to have this in place by end of September.

This is a good start.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Proposing a framework to leverage existing Python unit test standards for our testing

2016-08-02 Thread Sankarshan Mukhopadhyay
On Sat, Jul 30, 2016 at 5:26 AM, Jonathan Holloway  wrote:

[snip]

> I've posted several videos on YouTube at the following links.
> There are eight sections and then a combined full-length (really full)
> video.
> There is a little bit of Unit Test covered in "3. Using Glusto Overview",
> but the "8. Running Unit Tests" shows more depth (sample PyUnit format w/
> some PyTest, Gluster runs-on and reuse-setup example, filtering test runs,
> etc.).
> If you're looking to skip around, you might start with "1. Intro, 2. Using
> Glusto Overview, and "8. Running Unit Tests"--then pick and choose from
> there.

I finally had a bit of time to go through the videos in sequence. And
I'd like to put out a note of thanks for compiling them for everyone
in the project to take a look at.
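
For anyone who has not watched them yet, the "PyUnit format" mentioned
in the "8. Running Unit Tests" video is the standard unittest layout; a
minimal, self-contained sketch (run_cmd() here is a hypothetical
stand-in for the framework's remote-execution helper, running locally
so the example stays runnable):

import subprocess
import unittest

def run_cmd(cmd):
    # Stand-in for a remote-execution helper; runs locally for the sketch.
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

class TestVolumeBasics(unittest.TestCase):
    def setUp(self):
        # A real test would create/start a volume on the test cluster here.
        self.volname = "testvol"

    def test_volume_name_is_set(self):
        self.assertTrue(self.volname)

    def test_command_execution(self):
        rcode, out, _ = run_cmd("echo hello")
        self.assertEqual(rcode, 0)
        self.assertIn("hello", out)

if __name__ == "__main__":
    unittest.main()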

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] performance issues Manoj found in EC testing

2016-06-27 Thread Sankarshan Mukhopadhyay
On Mon, Jun 27, 2016 at 2:38 PM, Manoj Pillai  wrote:
> Thanks, folks! As a quick update, throughput on a single client test jumped
> from ~180 MB/s to 700+MB/s after enabling client-io-threads. Throughput is
> now more in line with what is expected for this workload based on
> back-of-the-envelope calculations.

Is it possible to provide additional detail about this exercise in
terms of setup; tests executed; data sets generated?

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release Management Process change - proposal

2016-05-29 Thread Sankarshan Mukhopadhyay
On Mon, May 30, 2016 at 7:07 AM, Vijay Bellur  wrote:
> Since we do not have any objections to this proposal, let us do the
> following for 3.7.12:
>
> 1. Treat June 1st as the cut-off for patch acceptance in release-3.7.
> 2. I will tag 3.7.12rc1 on June 2nd.
> 3. All maintainers to ack content and stability of components by June 9th.
> 4. Release 3.7.12 around June 9th after we have all acks in place.

Would 3 days be sufficient to get the content for the website and a
bit of PR written up?

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Idea: Alternate Release process

2016-05-29 Thread Sankarshan Mukhopadhyay
>>>>>> ... know what you guys think.
>>>>>>
>>>>> This reduces the number of versions that we need to maintain,
>>>>> which I like.
>>>>> Having official test (beta) releases should help get features
>>>>> out to
>>>>> testers hand faster,
>>>>> and get quicker feedback.
>>>>>
>>>>> One thing that's still not quite clear to is the issue of backwards
>>>>> compatibility.
>>>>> I'm still thinking it thorough and don't have a proper answer to
>>>>> this yet.
>>>>> Would a new release be backwards compatible with the previous
>>>>> release?
>>>>> Should we be maintaining compatibility with LTS releases with the
>>>>> latest release?
>>>>
>>>> Each LTS release will have seperate list of features to be
>>>> enabled. If we make any breaking changes(which are not backward
>>>> compatible) then it will affect LTS releases as you mentioned.
>>>> But we should not break compatibility unless it is major version
>>>> change like 4.0. I have to workout how we can handle backward
>>>> incompatible changes.
>>>>
>>>>> With our current strategy, we at least have a long term release
>>>>> branch,
>>>>> so we get some guarantees of compatibility with releases on the
>>>>> same branch.
>>>>>
>>>>> As I understand the proposed approach, we'd be replacing a stable
>>>>> branch with the beta branch.
>>>>> So we don't have a long-term release branch (apart from LTS).
>>>>
>>>> Stable branch is common for LTS releases also. Builds will be
>>>> different using different list of features.
>>>>
>>>> Below example shows stable release once in 6 weeks, and two LTS
>>>> releases in 6 months gap(3.8 and 3.12)
>>>>
>>>> LTS 1 :  3.8     3.8.1   3.8.2   3.8.3   3.8.4   3.8.5   ...
>>>> LTS 2 :                                  3.12    3.12.1  ...
>>>> Stable:  3.8     3.9     3.10    3.11    3.12    3.13    ...
>>>>>
>>>>> A user would be upgrading from one branch to another for every
>>>>> release.
>>>>> Can we sketch out how compatibility would work in this case?
>>>>
>>>> User will not upgrade from one branch to other branch, If user
>>>> interested in stable channel then upgrade once in 6 weeks. (Same
>>>> as minor update in current release style)
>>>>>
>>>>>
>>>>> This approach work well for projects like Chromium and Firefox,
>>>>> single
>>>>> system apps
>>>>>   which generally don't need to be compatible with the previous
>>>>> release.
>>>>> I don't understand how the Rust  project uses this (I am yet to
>>>>> read
>>>>> the linked blog post),
>>>>> as it requires some sort of backwards compatibility. But it too
>>>>> is a
>>>>> single system app,
>>>>> and doesn't have the compatibility problems we face.
>>>>>
>>>>> Gluster is a distributed system, that can involve multiple
>>>>> different
>>>>> versions interacting with each other.
>>>>> This is something we need to think about.
>>>>
>>>> I need to think about compatibility, What new problems about the
>>>> compatibility with this approach compared to our existing release
>>>> plan?
>>>>>
>>>>>
>>>>> We could work out some sort of a solution for this though.
>>>>> It might be something very obvious I'm missing right now.
>>>>>
>>>>> ~kaushal
>>>>>
>>>>>> --
>>>>>> regards
>>>>>> Aravinda

-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Suggest a feature redirect

2016-05-24 Thread Sankarshan Mukhopadhyay
On Wed, May 18, 2016 at 2:43 PM, Amye Scavarda  wrote:

> Said another way, if you wanted to be able to have someone contribute a
> feature idea, what would be the best way?
> Bugzilla? A google form? An email into the -devel list?
>

Whether it is a new BZ/RFE, a line item on a spreadsheet (via a form),
or an email to the list - the incoming bits would need to be reviewed
and decided upon. The best thing to do would be a path which enables
the Gluster community to showcase, in a public manner, the ideas
received and how they have enabled features in the product.

The individual contributing the feature idea needs to know that
something was done with it and that it did not simply end up in /dev/null.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reducing the size of the glusterdocs git repository

2016-05-12 Thread Sankarshan Mukhopadhyay
On Thu, May 12, 2016 at 3:55 PM, Kaushal M  wrote:
> If required we could just host the presentations on download.gluster.org.
> I've seen it being used to host resources for tutorials previously
> (like disk images),

I'd put forward the notion that download.gluster.org should ideally
remain for binaries (install-ready objects) rather than presentations. There
are specialized web properties for presentations and video which can
be better ways to do this.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting [was:Re: [Gluster-users] Show and Tell sessions for Gluster 4.0]

2016-05-10 Thread Sankarshan Mukhopadhyay
On Tue, May 10, 2016 at 3:22 PM, Niels de Vos  wrote:
> The weekly community meetins are visited well, if the discussions there
> are not productive/important enough to have every week, I would suggest
> to use that slot.

I changed the subject to avoid topics being enmeshed. The weekly
meetings on Wednesdays do have a good (as in, more than the bug triage
ones on Tuesdays) participation. I was wondering if the strength of
participation can be used to think over the topics/focus of the
meeting. The meetings on Wednesdays (the "community meeting") often
give higher weightage to action items and status updates thereof. To
an extent it is like a weekly standup of all involved in a
release. Given the recent set of emails from those helping us test new
features (sharding etc.) or from specific operators (I borrow the
phrase from OpenStack) - it could be a good idea to create space for
participation, whether in the form of thinking over the agenda or
the times which suit such participants.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Show and Tell sessions for Gluster 4.0

2016-05-08 Thread Sankarshan Mukhopadhyay
On Mon, May 9, 2016 at 11:55 AM, Atin Mukherjee  wrote:
> 1. We use the weekly community meeting slot once in a month
> or 2. Have a dedicated separate slot (probably @ 12:00 UTC, last
> Thursday of each month)

[2] would be nicer to set up a calendar for.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Google Summer of Code Application opens Feb 8th - Call For Mentors

2016-02-13 Thread Sankarshan Mukhopadhyay
On Thu, Feb 11, 2016 at 10:38 PM, Kaushal M  wrote:
> On Thu, Feb 11, 2016 at 6:41 PM, André Bauer  wrote:
>> Already asked myself why this not exists.
>>
>> so... +1 from me.
>>
>> Regards
>> André
>>
>> Am 04.02.2016 um 16:43 schrieb Niels de Vos:
>>>
>>> How about a project to write a fuse client based on libgfapi and
>>> libfuse instead? That would reduce the need for our fuse-bridge
>>> over time. libgfapi should be ready for this, Samba, NFS-Ganesha
>>> and others already use it heavily.
>>>
>
> +1 from me too. But we need a mentor for this. Neils would you be a
> willing mentor?

If this is the only project idea that exists, the organization's GSoC
application would be somewhat weak. Organizations are usually expected
to come up with a number of ideas (more than one), each attached to
(at least) one mentor. More at
<https://flossmanuals.net/GSoCMentoring/making-your-ideas-page/> (by way
of guidance)


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Gluster testing matrix

2015-11-25 Thread Sankarshan Mukhopadhyay
On Wed, Nov 25, 2015 at 7:37 PM, Atin Mukherjee
 wrote:
> We'd also need IPv6 support in our infra going forward.

Worth a discussion on the -infra@ list to check and verify readiness.


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] removing "Target Release" from GlusterFS bug reports.

2015-11-24 Thread Sankarshan Mukhopadhyay
On Tue, Nov 24, 2015 at 6:16 PM, Kaleb KEITHLEY  wrote:
> As discussed during today's Bug Triage meeting it is proposed to remove
> the Target Release from all GlusterFS bug reports.
>
> This field is apparently not used by anyone, and it's not described in
> the Bug Triage process at
> http://gluster.readthedocs.org/en/latest/Contributors-Guide/Bug-Triage/
>
> Any objections?

Has it ever been used to add context to a filed bug?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Troubleshooting and Diagnostic tools for Gluster

2015-10-27 Thread Sankarshan Mukhopadhyay
On Mon, Oct 26, 2015 at 7:04 PM, Shyam  wrote:
> Older idea on this was, to consume the logs and filter based on the message
> IDs for those situations that can be remedied. The logs are hence the point
> where the event for consumption is generated.
>
> Also, the higher level abstraction uses these logs, it can *watch* based on
> message ID filters that are of interest to it, than parse the log message
> entirely to gain insight on the issue.

Are all situations usually atomic? Is there expected to be a specific
mapping between an event recorded in a log from one part of an
installed system and a possible symptom? Or does a collection of events
lead up to an observed failure (which, in turn, is recorded as a
series of events in the logs)?
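
As a concrete illustration of the message-ID filtering idea, a sketch
only - it assumes the [MSGID: NNNNNN] token that structured gluster log
lines carry, and the IDs listed are placeholders rather than a curated
set:

import re
import time

MSGID_RE = re.compile(r"\[MSGID:\s*(\d+)\]")
INTERESTING = {"108006", "106005"}  # placeholder message IDs

def watch(logfile, handler):
    # Follow the log like `tail -f` and hand matching lines to a handler.
    with open(logfile) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            m = MSGID_RE.search(line)
            if m and m.group(1) in INTERESTING:
                handler(m.group(1), line.rstrip())

# watch("/var/log/glusterfs/glustershd.log",
#       lambda msgid, line: print(msgid, line))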


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Troubleshooting and Diagnostic tools for Gluster

2015-10-23 Thread Sankarshan Mukhopadhyay
On Fri, Oct 23, 2015 at 4:16 PM, Aravinda  wrote:
> Initial idea for Tools Framework:
> -
> A Shell/Python script which looks for the tool in plugins sub directory, if
> exists pass all the arguments and call that script.
>
> `glustertool help` triggers a python Script plugins/help.py which reads
> plugins.yml file to get the list of tools and help messages associated
> with it.
>
> No restrictions on the choice of programming language to create
> tool. It can be bash, Python, Go, Rust, awk, sed etc.
>
> Challenges:
> - Each plugin may have different dependencies, installing all tools
> may install all the dependencies.
> - Multiple programming languages, may be difficult to maintain/build.
> - Maintenance of Third party tools.
> - Creating Plugins registry to discover tools created by other developers.

Diagnostics and remediation become important when a higher level
abstraction (e.g. a management construct for Gluster deployments) is
involved. What are your thoughts on such frameworks being able to
consume the logs, identify the possible issues and recommend a
fix/solution?
Does this proposal anticipate such a progression?
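
To make the proposed layout concrete, the dispatcher itself could be as
small as the sketch below - illustrative only; the plugins/ directory
and plugins.yml naming follow the proposal quoted above, though this
sketch just lists the directory instead of parsing the YAML:

#!/usr/bin/env python3
import os
import subprocess
import sys

PLUGIN_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "plugins")

def main(argv):
    if len(argv) < 2 or argv[1] == "help":
        # The proposal reads names/descriptions from plugins.yml; a
        # directory listing is the bare-bones equivalent.
        print("available tools:", ", ".join(sorted(os.listdir(PLUGIN_DIR))))
        return 0
    tool = os.path.join(PLUGIN_DIR, argv[1])
    if not os.path.exists(tool):
        print("unknown tool:", argv[1], file=sys.stderr)
        return 1
    # Language-agnostic: the plugin can be bash, Python, Go, awk, ...
    return subprocess.call([tool] + argv[2:])

if __name__ == "__main__":
    sys.exit(main(sys.argv))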



-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster readdocs unaccessible

2015-10-14 Thread Sankarshan Mukhopadhyay
On Wed, Oct 14, 2015 at 2:03 PM, Avra Sengupta  wrote:
> I am unable to access gluster.readdocs.org . Is anyone else facing the same
> issue.

<https://gluster.readthedocs.org/en/latest/> ?


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] question about how to handle bugs filed against End-Of-Life versions of glusterfs

2015-09-16 Thread Sankarshan Mukhopadhyay
[for reference]

On Wed, Sep 16, 2015 at 6:35 PM, Kaleb S. KEITHLEY  wrote:
> As an example, Fedora simply closes any remaining open bugs when the
> version reaches EOL. It's incumbent on the person who filed the bug to
> reopen it if it still exists in newer versions.

<https://fedoraproject.org/wiki/BugZappers/HouseKeeping> - All bugs
for EOL releases are automatically closed on the EOL date after
providing a warning in the bug comments, 30 days prior to EOL.

<https://fedoraproject.org/wiki/BugZappers/StockBugzillaResponses#End_of_Life_.28EOL.29_product>
- The bug is reported against a version of Fedora that is no longer
maintained. Thank you for your bug report. We are sorry, but the Fedora
Project is no longer releasing bug fixes or any other updates for this
version of Fedora. This bug will be set to CLOSED:WONTFIX to reflect
this, but please reopen it if the problem persists after upgrading to
the latest version of Fedora, which is available from:
http://fedoraproject.org/get-fedora
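
For what it is worth, the sweep itself is mechanical. A rough sketch of
what it could look like against a Bugzilla 5.x REST endpoint (assuming
the requests library; the product, version, closing message and API key
below are placeholders, and the exact field names should be verified
against the running Bugzilla before use):

import requests

BZ = "https://bugzilla.redhat.com/rest"
API_KEY = "..."  # placeholder
EOL_MESSAGE = ("This version of GlusterFS is no longer maintained. "
               "Please reopen against a supported release if the "
               "problem persists.")

def open_bugs(product, version):
    params = {"product": product, "version": version,
              "status": ["NEW", "ASSIGNED"], "limit": 0}
    return requests.get(BZ + "/bug", params=params).json().get("bugs", [])

def close_wontfix(bug_id):
    payload = {"status": "CLOSED", "resolution": "WONTFIX",
               "comment": {"body": EOL_MESSAGE}, "api_key": API_KEY}
    requests.put("%s/bug/%d" % (BZ, bug_id), json=payload).raise_for_status()

for bug in open_bugs("GlusterFS", "3.5"):
    close_wontfix(bug["id"])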





-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Initial version of coreutils released

2015-08-13 Thread sankarshan
On Thu, 13 Aug 2015 22:19:54 +, Craig Cabrey wrote:

> I just pushed an initial version of the coreutils project that I have
> been working on during the course of this summer. There is still a lot
> to do, but I'm pretty excited about its start. Please check it out and
> if you want to help out, check out the TODO ;).

It would be nice to have a FAQ and also a bit more detail in the README itself.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [Cross-posted] Re: Gentle Reminder.. (Was: GlusterFS Documentation Improvements - An Update)

2015-06-23 Thread Sankarshan Mukhopadhyay
On Mon, Jun 22, 2015 at 5:24 PM, Shravan Chandrashekar
 wrote:
> We would like to finalize on the documentation contribution workflow by 26th 
> June 2015.
> As we have not yet received any comments/suggestion, we will confirm the 
> recommend workflow after 26th June.
>
>
> Kindly provide your suggestion on how we can improve this workflow.

There are a couple of aspects which need to be quickly looked through.

(a) a write-up of somewhat detail providing an overview of the new
workflow; how contributors can participate; reviewing mechanism for
patches against documentation; merge and release paths/cadence

(b) at <http://review.gluster.org/#/c/11129/> Niels has a comment
about "about design of structures used in the code" and how he thinks
that it is appropriate if "it stays part of the sources and does not
move out."

He also says "For example, I would like to document some of the memory
layout structures and functions, but this documentation will include
source-code comments and a .txt or .md file with additional details.
Spitting that makes it more difficult to keep in sync."

In this particular example, I'd probably say that it would be better
if such documentation were also part of the docs repo. It lends itself
to re-use as and when required (this particular example seems re-use
friendly).

I'd request that this switch-over to the new workflow and repositories
go ahead with the core "documentation" content. Examples/cases like
the one mentioned by Niels above can be resolved via discussion and
should probably not block the switch.

> Currently, mediawiki is read-only. We have ported most of the documents from 
> mediawiki to the new repository [1].
> If you find any document which is not ported, feel free to raise this by 
> opening an issue in [2] or if you would
> like to port your documents, send a pull request.
>
>
>
> [1] https://github.com/gluster/glusterdocs
> [2] https://github.com/gluster/glusterdocs/issues




-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterFS Documentation Improvements - An Update

2015-06-03 Thread Sankarshan Mukhopadhyay
On Wed, May 27, 2015 at 7:48 PM, Humble Devassy Chirammal
 wrote:
> Whats changing for community members?
>
> A very simplified contribution workflow.
>
> - How to Contribute?
>
> Contributing to the documentation requires a github account. To edit on
> github, fork the repository (see top-right of the screen, under your
> username). You will then be able to make changes easily. Once done, you can
> create a pull request and get the changes reviewed and merged into the
> official repository.
> With this simplified workflow, the documentation is no longer maintained in
> gluster/glusterfs/docs directory but it has a new elevated status in the
> form of a new project: gluster/glusterdocs
> (https://github.com/gluster/glusterdocs) and currently this project is being
> maintained by Anjana Sriram, Shravan and Humble.

Thanks for writing this up. Can this workflow/sequence also be reached
from the docs repo itself? It would perhaps be necessary to include
this as a "How to contribute?" section.

> - What to Contribute
>
> Really, anything that you think has value to the GlusterFS developer
> community. While reading the docs you might find something incorrect or
> outdated. Fix it! Or maybe you have an idea for a tutorial, or for a topic
> that isn’t covered to your satisfaction. Create a new page and write it up!

+ FAQs and Common Issues/Known Issues etc

> Whats Next?
>
> Since the GlusterFS documentation has a new face-lift, MediaWiki will no
> longer be editable but will only be READ ONLY view mode. Hence, all the
> work-in-progress design notes which were maintained on MediaWiki will be
> ported to the GitHub repository and placed in "Feature Plans" folder. So,
> when you want to upload your work in progess documents you must do a pull
> request after the changes are made. This outlines the change in workflow as
> compared to MediaWiki.

It would be beneficial to have the read-only mode turned on. This would
help avoid 'split-brains' and also create a single incoming path for
content that needs to be maintained as part of the documentation
community.

> A proposal:
>
> Another way to maintain work-in-progress documents in Google docs (or any
> other colloborative editing tool) and link them as an index entry in Feature
> Plans page on GitHub. This can be an excellent way to track a document
> through multiple rounds of collaborative editing in real time.

This is a sound idea as well :)


-- 
sankarshan mukhopadhyay
<https://about.me/sankarshan.mukhopadhyay>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel