[Gluster-devel] [REMINDER] Gluster RPC Internals - Lecture #2 - TODAY

2017-03-06 Thread Milind Changire

Blue Jeans Meeting ID: 1546612044
Start Time: 7:30pm India Time (UTC+0530)
Duration: 2 hours

https://www.bluejeans.com/

--
Milind
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Pluggable interface for erasure coding?

2017-03-06 Thread Per Simonsen
Hi,

I suggest that we set up an online meeting next week to discuss the erasure
coding features as well as possible implementations of a plugin
architecture. We also have some experience integrating with the liberasure
library mentioned by Prashant which we can share.

Does 10 am on Wednesday (8th of March) or Thursday (9th of March) next week
work for you guys?

Best,
Per Simonsen
CEO
MemoScale


On Thu, Mar 2, 2017 at 12:00 AM, Xavier Hernandez wrote:

> Hi Niels,
>
> On 02/03/17 07:58, Niels de Vos wrote:
>
>> Hi guys,
>>
>> I think this is a topic/question that has come up before, but I cannot
>> find any references or feature requests related to it. Because there are
>> different libraries for Erasure Coding, it would be interesting to be
>> able to select alternatives to the bundled implementation that Gluster
>> has.
>>
>
> I agree.
>
>> Are there any plans to make the current Erasure Coding
>> implementation more pluggable?
>>
>
> Yes. I've had this in my todo list for a long time. Once I even tried to
> implement the necessary infrastructure but didn't finish and now the code
> has changed too much to reuse it.
>
>> Would this be a possible feature request,
>> or would it require a major rewrite of the current interface?
>>
>
> At the time I tried it, it required major changes. Now that the code has
> been considerably restructured to incorporate the dynamic code generation
> feature, maybe it doesn't require so many changes, though I'm not sure.
>
>
>> Here at FAST [0] I have briefly spoken to Per Simonsen from MemoScale
>> [1]. This company offers a (proprietary) library for Erasure Coding,
>> optimized for different architectures, and with some unique(?) features
>> for recovering a failed fragment/disk. If Gluster allows alternative
>> implementations for the encoding, it would help organisations and
>> researchers to get results of their work in a distributed filesystem.
>> And with that, spread the word about how easy to adapt and extend
>> Gluster is :-)
>>
>
> That could be interesting. Is there any place where I can find additional
> information about the features of this library?
>
> Xavi
>
>
>
>> Thanks,
>> Niels
>>
>>
>> 0. https://www.usenix.org/conference/fast17
>> 1. https://memoscale.com/
>>
>>
>
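
To make the discussion more concrete, here is a rough sketch of the kind of
backend interface a pluggable erasure-coding layer could expose. All names
below are illustrative assumptions and do not reflect the existing ec xlator
API:

/* Hypothetical pluggable erasure-coding backend -- illustrative names only. */
#include <stdint.h>
#include <stddef.h>

typedef struct ec_backend_ops {
        /* Split 'size' bytes of 'data' into k + m encoded fragments. */
        int (*encode) (void *ctx, const uint8_t *data, size_t size,
                       uint8_t **fragments, size_t *fragment_size);

        /* Rebuild the original data from any k surviving fragments;
         * 'present' flags which entries of 'fragments' are available. */
        int (*decode) (void *ctx, uint8_t **fragments, const int *present,
                       uint8_t *data, size_t size);

        /* Recompute a single missing fragment (targeted repair). */
        int (*reconstruct) (void *ctx, uint8_t **fragments, int missing_idx);

        void *(*init) (int k, int m);   /* data/parity fragment counts */
        void  (*fini) (void *ctx);
} ec_backend_ops_t;

/* Each backend (the bundled implementation, a vendor library, ...) would
 * register its ops table under a name selectable via a volume option. */
int ec_backend_register (const char *name, const ec_backend_ops_t *ops);

Selecting the backend per volume would let the bundled encoder remain the
default while alternative libraries are loaded only when requested.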
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [RFC] Reducing maintenance burden and moving fuse support to an external project

2017-03-06 Thread Oleksandr Natalenko
Hi.

Keep me CCed, please, because for the last couple of months I have not been
following GlusterFS development…

On Friday 3 March 2017 21:50:07 CET Niels de Vos wrote:
> At the moment we have three top-level interfaces to maintain in Gluster,
> these are FUSE, Gluster/NFS and gfapi. If any work is needed to support
> new options, FOPs or other functionalities, we mostly have to do the
> work 3x. Often one of the interfaces gets forgotten, or does not need
> the new feature immediately (backlog++). This is bothering me every now
> and then, especially when bugs get introduced and need to get fixed in
> different ways for these three interfaces.
> 
> One of my main goals is to reduce the code duplication, and move
> everything to gfapi. We are well on the way to using NFS-Ganesha instead
> of Gluster/NFS already. Taking a similar approach, I would love to see us
> deprecate our xlators/mount sources [0] and have them replaced by
> xglfs [1] from Oleksandr.
> 
> Having the FUSE mount binaries provided by a separate project should
> make it easier to implement things like subdirectory mounts (Samba and
> NFS-Ganesha already do this in some form through gfapi).
> 
> xglfs is not packaged in any distribution yet; this allows us to change
> the current command-line interface to something we deem more suitable (if
> we want to).
> 
> I would like to get some opinions from others, and if there are no
> absolute objections, we can work out a plan to make xglfs an alternative
> to the fuse-bridge and eventually replace it.
> 
> Thanks,
> Niels
> 
> 
> 0. https://github.com/gluster/glusterfs/tree/master/xlators/mount
> 1. https://github.com/gluster/xglfs
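
For context, here is a minimal sketch of a gfapi consumer, the same library
xglfs is built on. The volume name "testvol", host "server1", file path and
build line are placeholder assumptions, and error handling is trimmed:

/* minimal_gfapi.c -- read a file through libgfapi instead of a FUSE mount.
 * Build (assumed): gcc minimal_gfapi.c -o minimal_gfapi \
 *                      $(pkg-config --cflags --libs glusterfs-api)
 */
#include <stdio.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        char       buf[128] = {0};
        glfs_t    *fs       = glfs_new ("testvol");   /* volume name */
        glfs_fd_t *fd       = NULL;

        if (!fs)
                return 1;

        /* Talk to a glusterd host directly; no kernel/FUSE involvement. */
        glfs_set_volfile_server (fs, "tcp", "server1", 24007);
        if (glfs_init (fs) != 0) {
                fprintf (stderr, "glfs_init failed\n");
                return 1;
        }

        fd = glfs_open (fs, "/test.txt", O_RDONLY);
        if (fd) {
                glfs_read (fd, buf, sizeof (buf) - 1, 0);
                printf ("%s", buf);
                glfs_close (fd);
        }

        glfs_fini (fs);
        return 0;
}

Anything implemented once at this layer (for example subdirectory mounts)
would then be available to xglfs, Samba and NFS-Ganesha alike.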


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Pluggable interface for erasure coding?

2017-03-06 Thread Per Simonsen
Hi,

I forgot to add the time zone: the suggested time is 10 am (GMT+1).

Best,
Per

On Thu, Mar 2, 2017 at 5:19 PM, Per Simonsen wrote:

> Hi,
>
> I suggest that we set up an online meeting next week to discuss the
> erasure coding features as well as possible implementations of a plugin
> architecture. We also have some experience integrating with the liberasure
> library mentioned by Prashant which we can share.
>
> Does 10 am on Wednesday (8th of March) or Thursday (9th of March) next week
> work for you guys?
>
> Best,
> Per Simonsen
> CEO
> MemoScale
>
>
> On Thu, Mar 2, 2017 at 12:00 AM, Xavier Hernandez wrote:
>
>> Hi Niels,
>>
>> On 02/03/17 07:58, Niels de Vos wrote:
>>
>>> Hi guys,
>>>
>>> I think this is a topic/question that has come up before, but I cannot
>>> find any references or feature requests related to it. Because there are
>>> different libraries for Erasure Coding, it would be interesting to be
>>> able to select alternatives to the bundled implementation that Gluster
>>> has.
>>>
>>
>> I agree.
>>
>>> Are there any plans to make the current Erasure Coding
>>> implementation more pluggable?
>>>
>>
>> Yes. I've had this in my todo list for a long time. Once I even tried to
>> implement the necessary infrastructure but didn't finish and now the code
>> has changed too much to reuse it.
>>
>>> Would this be a possible feature request,
>>> or would it require a major rewrite of the current interface?
>>>
>>
>> At the time I tried it, it required major changes. Now that the code has
>> been considerably restructured to incorporate the dynamic code generation
>> feature, maybe it doesn't require so many changes, though I'm not sure.
>>
>>
>>> Here at FAST [0] I have briefly spoken to Per Simonsen from MemoScale
>>> [1]. This company offers a (proprietary) library for Erasure Coding,
>>> optimized for different architectures, and with some unique(?) features
>>> for recovering a failed fragment/disk. If Gluster allows alternative
>>> implementations for the encoding, it would help organisations and
>>> researchers to get results of their work in a distributed filesystem.
>>> And with that, spread the word about how easy to adapt and extend
>>> Gluster is :-)
>>>
>>
>> That could be interesting. Is there any place where I can find additional
>> information about the features of this library?
>>
>> Xavi
>>
>>
>>
>>> Thanks,
>>> Niels
>>>
>>>
>>> 0. https://www.usenix.org/conference/fast17
>>> 1. https://memoscale.com/
>>>
>>>
>>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] will auth.ssl-allow override auth.allow?

2017-03-06 Thread Darren Zhang
Hi, 


I'm trying to configure SSL for GlusterFS. I have set auth.ssl-allow = '*' and
auth.allow = '10.10.0.*'. If I mount the volume from client 10.10.1.1, it still
mounts successfully; only when I disable SSL and keep just the option
auth.allow = '10.10.0.*' is client 10.10.1.1 unable to mount the volume. So can
anyone tell me: will volumes ignore auth.allow if auth.ssl-allow is set?


Thanks.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Writing new Xlator and manipulate data

2017-03-06 Thread David Spisla
Hello Gluster-Devels,

currently I am writing my first xlator, following this tutorial:
https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/translator-development.md

I took the rot-13 xlator as a starting point and wrote my own stuff. My idea
is to change the iovec content and put new content into it.
My readv_cbk looks like this:

int32_t
stub_iovec (xlator_t *this, struct iovec *vector, struct iobref *iobref,
            struct iatt *stbuf, int count)
{
        /* Replace the read reply with a fixed string. Note: only the
         * payload is changed here; the file size reported by lookup/stat
         * is untouched, so the kernel may still truncate reads to the
         * original file size. */
        const char   *msg   = "I am the manipulated content!!!\n";
        size_t        len   = strlen (msg);
        struct iobuf *iobuf = NULL;

        (void) count; /* the reply is collapsed into a single iovec */

        iobuf = iobuf_get (this->ctx->iobuf_pool);
        if (!iobuf)
                return -1;

        memcpy (iobuf->ptr, msg, len);

        /* Keep the new buffer referenced by the existing iobref so it
         * stays alive until the reply has been sent, then drop our ref. */
        iobref_add (iobref, iobuf);
        iobuf_unref (iobuf);

        vector[0].iov_base = iobuf->ptr;
        vector[0].iov_len  = len;

        stbuf->ia_size = len;

        return len; /* becomes the new op_ret */
}

int32_t
stub_readv_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                int32_t op_ret, int32_t op_errno,
                struct iovec *vector, int32_t count,
                struct iatt *stbuf, struct iobref *iobref, dict_t *xdata)
{
        gf_log (this->name, GF_LOG_DEBUG, "Executing stub_readv_cbk!!!");

        /* Only rewrite the payload on a successful read. */
        if (op_ret > 0) {
                op_ret = stub_iovec (this, vector, iobref, stbuf, count);
                count  = 1; /* the reply was collapsed into one iovec */
        }

        STACK_UNWIND_STRICT (readv, frame, op_ret, op_errno, vector, count,
                             stbuf, iobref, xdata);
        return 0;
}

It is not really working at all. If I have original content like a simple
"orig" in a test.txt and change it to "manipulate", the command
"cat /mnt/gluster/test.txt" shows me only "mani". The size of the content
did not change.
Any idea about that?

Regards
David
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Defining a "good build"

2017-03-06 Thread Nigel Babu
Hello folks,

At some point in the distant future, we want to be able to say definitively
that we have a good Gluster build. This conviction needs to be backed by tests
that we run on our builds to confirm that it is good. This conversation is
meant to tease out a definition of a good build. This definition will help us
define what tests we need to confirm that the build is indeed good. This is a
very important thing to know pre-release.

I started a conversation at the start of February with a few developers to
define a good build. Now is a good time to take this discussion public so we
can narrow this down and use this to focus on our testing efforts.

Most people, when they think about this conversation, think of performance. We
should test functionality before performance. It makes sense to test
performance when we can confirm that the setup we recommend works. Otherwise,
we're working with the assumption that it works unless proven otherwise.

A good build to me would be one that confirms that:
* The packages install and upgrade correctly (packaging bits).
* Mounts and volume types work.
* Integrations that we promise work do work.
* Upgrades work without causing data loss.
* The configurations we focus on work, and we can verify that they actually
  do.
* Performance for these configurations has not degraded from the last release.

Jeff recommended we start with configurations for these scenarios:
* many large files, sequential read (media service)
* many large files, sequential write (video/IoT/log archiving)
* few large files, random read/write (virtual machines)
* many small directories, read/write, snapshots (containers)

This isn't achievable in a single day. Here's what's good to focus on:
* Package installs we already test; we don't test that the packages upgrade
  yet. This is something we can do easily as part of our Glusto tests. I mean
  public-facing tests here. Our users should be able to verify our claims that
  it works.
* The mounts and volume types are tested with QE's verification tests. Shwetha
  has done some good work here and we have a decent number of tests that
  confirm everything works.
* I'd say we pick *one* use case and list down our recommended configurations
  for that type of workload. Then, write a test to set up Gluster in that
  configuration and test that everything works. Considering we're still
  figuring out Glusto, this is a good goal to begin with. Shyam and I are
  planning to tackle the video archive workload in this cycle.
* Integration testing is a conversation I'd like to start with the GEDI team.
  For projects that we support, we need to confirm that we haven't broken
  anything that they depend on. Projects I can think of off the top of my
  head: oVirt, container workloads, Tendrl integration.
* Upgrades are something we don't test. It'll be useful to write down how we
  recommend doing upgrades and how to write those tests. Perhaps this needs to
  be part of each scenario's testing: how it handles upgrades.
* Running real performance testing requires some specialized hardware we don't
  yet have. We can find and fix memory leaks that the Coverity scans report (86
  as of this email). We could also build gluster with ASAN and run our test
  suite to see if that catches any memory issues.

This email has several areas where conversations can begin. But please remember
the goal of this thread: we want to define a good build.


--
nigelb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Community Meeting 2017-03-01

2017-03-06 Thread Kaushal M
I'm late with the meeting notes again. But better late than never.
Here are the meeting notes for the community meeting on 2017-03-01.

We had lower attendance this week, with a lot of regular attendees out
at conferences. We had one topic of discussion, on backports and
Change-IDs. Other than that we had an informal discussion around
maintainers and maintainership. More details about the discussions can
be found in the logs.

The next meeting is on 15th March, 1500 UTC. The meeting pad is ready
for updates and topics for discussion at
https://bit.ly/gluster-community-meetings.

See you all next time.

~kaushal

- Logs:
  - Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2017-03-01/community_meeting_2017-03-01.2017-03-01-15.00.html
  - Minutes (text): https://meetbot.fedoraproject.org/gluster-meeting/2017-03-01/community_meeting_2017-03-01.2017-03-01-15.00.txt
  - Log: https://meetbot.fedoraproject.org/gluster-meeting/2017-03-01/community_meeting_2017-03-01.2017-03-01-15.00.log.html

## Topics of Discussion

The meeting is an open floor, open for discussion of any topic entered below.

- Discuss backport tracking via gerrit Change-ID [shyam]
  - Change-ID makes it easier to track backports.
  - [Backport guidelines](https://gluster.readthedocs.io/en/latest/Developer-guide/Backport-Guidelines/)
    mention the need to use the same Change-ID for backports.
  - But it isn't enforced.
  - How do we enforce it?
    - Jenkins job that checks if the Change-ID for new reviews on
      release branches exists on master [nigelb]
    - Yes [kshlm, shyam, vbellur]
    - shyam will inform the lists before we proceed

### Next edition's meeting host

- kshlm (again)

## Updates

> NOTE: Updates will not be discussed during meetings. Any important or
> noteworthy update will be announced at the end of the meeting.

### Action Items from the last meeting

- jdarcy will work with nigelb to make the infra for reverts easier.
  - Nothing happened here.
- nigelb will document kkeithley's build process for glusterfs packages.
  - Or here.

### Releases

#### GlusterFS 4.0

- Tracker bug : https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2
    - Targeting to provide a preview release with 3.11
    - Gave it the code name "Rogue One"
    - We started filling up tasks for Rogue One on GitHub
      - https://github.com/gluster/glusterd2/projects/1
    - We want to try [mgmt](https://github.com/purpleidea/mgmt) as the
      internal orchestration and management engine
      - Started a new PR to import mgmt into GD2, to allow @purpleidea to
        show how we could use mgmt.
      - https://github.com/gluster/glusterd2/pull/247
    - SunRPC bits were refactored
      - https://github.com/gluster/glusterd2/pull/242
      - https://github.com/gluster/glusterd2/pull/245
    - Portmap registry was implemented
      - https://github.com/gluster/glusterd2/pull/246

#### GlusterFS 3.11

- Maintainers: shyam, *TBD*
- Release: May 30th, 2017
- Branching: April 27th, 2017
- 3.11 main focus areas:
  - Testing improvements in Gluster
  - Merge all (or as many as possible) Facebook patches into master, and
    hence into release 3.11
  - We will still retain features that slipped 3.10 and hence were moved
    to 3.11
- Release Scope: https://github.com/gluster/glusterfs/projects/1

#### GlusterFS 3.10

- Maintainers : shyam, kkeithley, rtalur
- Current release : 3.10.0
- Next release : 3.10.1
  - Target date: March 30, 2017
- Bug tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.10.1
- Updates:
  - 3.10.0 has finally been released.
  - http://blog.gluster.org/2017/02/announcing-gluster-3-10/

#### GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.1
- Next release : EOL
- Updates:
  - EOLed. Announcement pending.
  - Bug cleanup pending.

#### GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.9
- Next release : 3.8.10
  - Release date : 10 March 2017
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.10
- Open bugs : https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2&id=glusterfs-3.8.10&hide_resolved=1
- Updates:
  - _Add updates here_

#### GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.20
- Next release : EOL
- Updates:
  - EOLed. Announcement pending.
  - Bug cleanup pending.

### Related projects and efforts

#### Community Infra

- Reminder: Community cage outage on 14th and 15th March
- All smoke jobs run on CentOS 7 in the community cage.
- We will slowly be moving jobs into the cage.

#### Samba

- _None_

#### Ganesha

- _None_

#### Containers

- _None_

#### Testing

- [nigelb] Glusto tests are completely green at the moment. They had
been broken for a while. We'll make sure their status is monitored
more carefully going