[Gluster-users] Gluster 6 Retrospective Open Until April 8

2019-03-25 Thread Amye Scavarda
Congrats to the team for getting 6 released!
We're doing another retrospective, please come give us your feedback!
This retrospective will be open until April 8.

https://www.gluster.org/gluster-6-0-retrospective/

Thanks!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Help: gluster-block

2019-03-25 Thread Prasanna Kalever
[ adding +gluster-users for archive purposes ]

On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin  wrote:
>
> Hello Mr. Kalever,

Hello Jeffrey,

>
> I am currently working on a project to utilize GlusterFS for VMWare VMs. In 
> our research, we found that utilizing block devices with GlusterFS would be 
> the best approach for our use case (correct me if I am wrong). I saw the 
> gluster utility that you are a contributor for called gluster-block 
> (https://github.com/gluster/gluster-block), and I had a question about the 
> configuration. From what I understand, gluster-block only works on the 
> servers that are serving the gluster volume. Would it be possible to run the 
> gluster-block utility on a client machine that has a gluster volume mounted 
> to it?

Yes, that is right. At the moment gluster-block is coupled with
glusterd for simplicity.
However, we have made some changes [1] to provide a way to specify a
server address (volfile-server) outside the gluster-blockd node; please
take a look.

Although it is not a complete solution, it should at least help with
some use cases. Feel free to raise an issue [2] with the details of your
use case, or submit a PR yourself :-)
We never prioritized this, as we never had a use case that required
separating gluster-blockd from glusterd.

>
> I also have another question: how do I make the iSCSI targets persist if all 
> of the gluster nodes were rebooted? It seems like once all of the nodes 
> reboot, I am unable to reconnect to the iSCSI targets created by the 
> gluster-block utility.

Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
target (server) nodes?

1. For the initiator to automatically reconnect to the block devices
after a reboot, we need to make the below change in /etc/iscsi/iscsid.conf:
node.startup = automatic

2. If you mean the case where all the gluster nodes go down: on the
initiator all the available HA paths will be down, but we still want the
I/O to be queued on the initiator until one of the paths (a gluster
node) becomes available again.

For this, in the gluster-block specific section of multipath.conf you
need to replace 'no_path_retry 120' with 'no_path_retry queue'.
Note: refer to the README for the current multipath.conf setting
recommendations.
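
For reference, a minimal sketch of what such a section could look like
(illustrative only; apart from the no_path_retry line, the attributes
below are my assumptions, so treat the gluster-block README as the
authoritative source):

devices {
        device {
                # assumption: gluster-block targets are LIO-backed
                vendor "LIO-ORG"
                path_grouping_policy "failover"
                path_checker "tur"
                # queue I/O instead of giving up when all paths are down
                no_path_retry queue
        }
}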

[1] https://github.com/gluster/gluster-block/pull/161
[2] https://github.com/gluster/gluster-block/issues/new

BRs,
--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
Thanks for the feedback Darrell,

The new proposal is to have one meeting at a North America 'morning'
time (10 AM PST), and another at an Asia/Pacific daytime slot, which is
7pm/6pm in the evening in Australia, 9pm in New Zealand, 5pm in Tokyo,
and 4pm in Beijing.

For example, if we choose every other Tuesday for the meeting, and the
1st of the month is a Tuesday, we would use the North America time on
the 1st, and on the 15th it would be the Asia/Pacific time.

Hopefully this way we can cover all the timezones. The meeting minutes
would be committed to a GitHub repo, which will make it easier for
everyone to be aware of what is happening.

Regards,
Amar

On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
wrote:

> As a user, I’d like to visit more of these, but the time slot is my 3AM.
> Any possibility for a rolling schedule (move meeting +6 hours each week
> with rolling attendance from maintainers?) or an occasional regional
> meeting 12 hours opposed to the one you’re proposing?
>
>   -Darrell
>
> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
> All,
>
> We currently have 3 meetings which are public:
>
> 1. Maintainer's Meeting
>
> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5
> on an avg, and not much is discussed.
> - Without majority attendance, we can't take any decisions too.
>
> 2. Community meeting
>
> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only
> meeting which is for 'Community/Users'. Others are for developers as of
> now.
> Sadly attendance is getting closer to 0 in recent times.
>
> 3. GCS meeting
>
> - We started it as an effort inside Red Hat gluster team, and opened it up
> for community from Jan 2019, but the attendance was always from RHT
> members, and haven't seen any traction from wider group.
>
> So, I have a proposal to call out for cancelling all these meeting, and
> keeping just 1 weekly 'Community' meeting, where even topics related to
> maintainers and GCS and other projects can be discussed.
>
> I have a template of a draft template @
> https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g
>
> Please feel free to suggest improvements, both in agenda and in timings.
> So, we can have more participation from members of community, which allows
> more user - developer interactions, and hence quality of project.
>
> Waiting for feedbacks,
>
> Regards,
> Amar
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Darrell Budic
As a user, I’d like to attend more of these, but the time slot is 3 AM for
me. Any possibility of a rolling schedule (move the meeting +6 hours each
week, with rolling attendance from maintainers?) or an occasional regional
meeting offset 12 hours from the one you’re proposing?

  -Darrell

> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan  
> wrote:
> 
> All,
> 
> We currently have 3 meetings which are public:
> 
> 1. Maintainer's Meeting
> 
> - Runs once in 2 weeks (on Mondays), and current attendance is around 3-5 on 
> an avg, and not much is discussed. 
> - Without majority attendance, we can't take any decisions too.
> 
> 2. Community meeting
> 
> - Supposed to happen on #gluster-meeting, every 2 weeks, and is the only 
> meeting which is for 'Community/Users'. Others are for developers as of now.
> Sadly attendance is getting closer to 0 in recent times.
> 
> 3. GCS meeting
> 
> - We started it as an effort inside Red Hat gluster team, and opened it up 
> for community from Jan 2019, but the attendance was always from RHT members, 
> and haven't seen any traction from wider group.
> 
> So, I have a proposal to call out for cancelling all these meeting, and 
> keeping just 1 weekly 'Community' meeting, where even topics related to 
> maintainers and GCS and other projects can be discussed.
> 
> I have a template of a draft template @ 
> https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g 
> 
> 
> Please feel free to suggest improvements, both in agenda and in timings. So, 
> we can have more participation from members of community, which allows more 
> user - developer interactions, and hence quality of project.
> 
> Waiting for feedbacks,
> 
> Regards,
> Amar
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster Release 6

2019-03-25 Thread Shyam Ranganathan
The Gluster community is pleased to announce Gluster 6.0, our latest
release.

This is a major release that includes a range of code improvements and
stability fixes, along with a few features, as noted below.

A selection of the key features and bugs addressed is documented on
this page [1].

Announcements:

1. The releases that will receive maintenance updates after release 6
are 4.1 and 5 [2].

2. Release 6 will receive maintenance updates around the 10th of every
month for the first 3 months after release (i.e. Apr '19, May '19,
Jun '19). After the initial 3 months, it will receive maintenance
updates every 2 months until EOL [3].

A number of features/xlators have been deprecated in release 6, as
listed below. For procedures to upgrade volumes that use these features
to release 6, refer to the release 6 upgrade guide [4].

Features deprecated:
- Block device (bd) xlator
- Decompounder feature
- Crypt xlator
- Symlink-cache xlator
- Stripe feature
- Tiering support (tier xlator and changetimerecorder)

Highlights of this release are:
- Several stability fixes addressing issues reported by Coverity,
clang-scan, AddressSanitizer, and Valgrind
- Removal of unused and hence deprecated code and features
- Client-side inode garbage collection
- This release addresses one of the major concerns regarding the FUSE
mount process memory footprint by introducing client-side inode garbage
collection
- Performance improvements
- "--auto-invalidation" on FUSE mounts to leverage the kernel page cache
more effectively

Bugs addressed are provided towards the end, in the release notes [1]

Thank you,
Gluster community

References:
[1] Release notes: https://docs.gluster.org/en/latest/release-notes/6.0/

[2] Release schedule: https://www.gluster.org/release-schedule/

[3] Gluster release cadence and version changes:
https://lists.gluster.org/pipermail/announce/2018-July/000103.html

[4] Upgrade guide to release-6:
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Maurya M
Some additional logs from gverify-mastermnt.log & gverify-slavemnt.log:

[2019-03-25 12:13:23.819665] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-vol_75a5fd373d88ba687f591f3353fa05cf-client-2: error returned while
attempting to connect to host:(null), port:0
[2019-03-25 12:13:23.819814] W [dict.c:923:str_to_data]
(-->/usr/lib64/glusterfs/4.1.7/xlator/protocol/client.so(+0x40c0a)
[0x7f3eb4d86c0a] -->/lib64/libglusterfs.so.0(dict_set_str+0x16)
[0x7f3ebc334266] -->/lib64/libglusterfs.so.0(str_to_data+0x91)
[0x7f3ebc330ea1] ) 0-dict: *value is NULL [Invalid argument]*


Any idea how to fix this? If there is a patch I can try, please share.

thanks,
Maurya


On Mon, Mar 25, 2019 at 3:37 PM Maurya M  wrote:

> ran this command :  ssh -p  -i
> /var/lib/glusterd/geo-replication/secret.pem root@gluster
> volume info --xml
>
> attaching the output.
>
>
>
> On Mon, Mar 25, 2019 at 2:13 PM Aravinda  wrote:
>
>> Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
>> root@ gluster volume info --xml` and parsing its output.
>> Please try to to run the command from the same node and let us know the
>> output.
>>
>>
>> On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
>> > Now the error is on the same line 860 : as highlighted below:
>> >
>> > [2019-03-25 06:11:52.376238] E
>> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
>> > Traceback (most recent call last):
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> > 311, in main
>> > func(args)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
>> > 50, in subcmd_monitor
>> > return monitor.monitor(local, remote)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> > 427, in monitor
>> > return Monitor().multiplex(*distribute(local, remote))
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> > 386, in distribute
>> > svol = Volinfo(slave.volume, "localhost", prelude)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
>> > 860, in __init__
>> > vi = XET.fromstring(vix)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in
>> > XML
>> > parser.feed(text)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in
>> > feed
>> > self._raiseerror(v)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
>> > _raiseerror
>> > raise err
>> > ParseError: syntax error: line 1, column 0
>> >
>> >
>> > On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:
>> > > Sorry my bad, had put the print line to debug, i am using gluster
>> > > 4.1.7, will remove the print line.
>> > >
>> > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
>> > > wrote:
>> > > > Below print statement looks wrong. Latest Glusterfs code doesn't
>> > > > have
>> > > > this print statement. Please let us know which version of
>> > > > glusterfs you
>> > > > are using.
>> > > >
>> > > >
>> > > > ```
>> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
>> > > > line
>> > > > 860, in __init__
>> > > > print "debug varible " %vix
>> > > > ```
>> > > >
>> > > > As a workaround, edit that file and comment the print line and
>> > > > test the
>> > > > geo-rep config command.
>> > > >
>> > > >
>> > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
>> > > > > hi Aravinda,
>> > > > >  had the session created using : create ssh-port  push-pem
>> > > > and
>> > > > > also the :
>> > > > >
>> > > > > gluster volume geo-replication
>> > > > vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-
>> > > > port
>> > > > > 
>> > > > >
>> > > > > hitting this message:
>> > > > > geo-replication config-set failed for
>> > > > > vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > > > > geo-replication command failed
>> > > > >
>> > > > > Below is snap of status:
>> > > > >
>> > > > > [root@k8s-agentpool1-24779565-1
>> > > > >
>> > > > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
>> > > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
>> > > > vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>> > > > >
>> > > > > MASTER NODE  MASTER VOL  MASTER
>> > > > > BRICK
>> > > >
>> > > > >SLAVE USERSLAVE
>> > > >
>> > > > >   SLAVE NODESTATUS CRAWL
>> > > > STATUS
>> > > > >   LAST_SYNCED
>> > > > > -
>> > > > --
>> > > > > -
>> > > > --
>> > > > > -
>> > > > --
>> > > > > -
>> > > > --
>> > > > > 
>> > > > > 172.16.189.4 

Re: [Gluster-users] Network Block device (NBD) on top of glusterfs

2019-03-25 Thread Xiubo Li

On 2019/3/25 14:36, Vijay Bellur wrote:


Hi Xiubo,

On Fri, Mar 22, 2019 at 5:48 PM Xiubo Li wrote:


On 2019/3/21 11:29, Xiubo Li wrote:


All,

I am one of the contributors to the gluster-block [1] project, and I
also contribute to the Linux kernel and the open-iscsi project [2].

NBD has been around for some time, but recently the Linux kernel's
Network Block Device (NBD) was enhanced to work with more devices, and
the option to integrate with netlink was added. So, I recently tried to
provide a glusterfs-client-based NBD driver. Please refer to github
issue #633 [3]; the good news is that I have working code, with the
most basic things in place, at the nbd-runner project [4].



This is nice. Thank you for your work!

As mentioned, nbd-runner (NBD protocol) will work in the same layer as
tcmu-runner (iSCSI protocol); it is not trying to replace the great
gluster-block/ceph-iscsi-gateway projects.

It just provides the common library to do the low-level stuff, like
the sysfs/netlink operations and the I/O from the NBD kernel socket,
much as the great tcmu-runner project does the sysfs/uio operations
and the I/O from the kernel SCSI/iSCSI side.

The nbd-cli tool will work like iscsi-initiator-utils, and the
nbd-runner daemon will work like the tcmu-runner daemon; that's all.


Do you have thoughts on how nbd-runner currently differs or would 
differ from tcmu-runner? It might be useful to document the 
differences in github (or elsewhere) so that users can make an 
informed choice between nbd-runner & tcmu-runner.


Yeah, this makes sense and I will write it up in the github repo. For
open-iscsi/tcmu-runner there are already many existing tools to support
it, while for NBD we may need to implement them ourselves; correct me
if I am wrong here :-)




In tcmu-runner, different backend storages have separate handlers: the
glfs.c handler for Gluster, the rbd.c handler for Ceph, etc. What the
handlers do is the actual I/O with the backend storage services, once
the I/O paths have been set up by ceph-iscsi-gateway/gluster-block.

Then we can support all kinds of backend storage, like
Gluster/Ceph/Azure..., each as a separate handler in nbd-runner, which
does not need to care about updates and changes in the low-level NBD
machinery.


Given that the charter for this project is to support multiple backend 
storage projects, would not it be better to host the project in the 
github repository associated with nbd [5]? Doing it that way could 
provide a more neutral (as perceived by users) venue for hosting 
nbd-runner and help you in getting more adoption for your work.



This is a good idea, I will try to push this forward.

Thanks very much Vijay.

BRs

Xiubo Li



Thanks,
Vijay

[5] https://github.com/NetworkBlockDevice/nbd


Thanks.



While this email is about announcing the project, and asking for
more collaboration, I would also like to discuss more about the
placement of the project itself. Currently nbd-runner project is
expected to be shared by our friends at Ceph project too, to
provide NBD driver for Ceph. I have personally worked with some
of them closely while contributing to open-iSCSI project, and we
would like to take this project to great success.

Now few questions:

 1. Can I continue to use http://github.com/gluster/nbd-runner as the
home for this project, even if it's shared by other filesystem
projects?

  * I personally am fine with this.

 2. Should there be a separate organization for this repo?

  * While it may make sense in the future, for now I am not planning
to start anything new.

It would be great if we have some consensus on this soon as
nbd-runner is a new repository. If there are no concerns, I will
continue to contribute to the existing repository.

Regards,
Xiubo Li (@lxbsz)

[1] -https://github.com/gluster/gluster-block
[2] -https://github.com/open-iscsi
[3] -https://github.com/gluster/glusterfs/issues/633
[4] -https://github.com/gluster/nbd-runner


___
Gluster-users mailing list
Gluster-users@gluster.org  
https://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Maurya M
Ran this command:  ssh -p  -i
/var/lib/glusterd/geo-replication/secret.pem root@ gluster
volume info --xml

attaching the output.



On Mon, Mar 25, 2019 at 2:13 PM Aravinda  wrote:

> Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
> root@ gluster volume info --xml` and parsing its output.
> Please try to to run the command from the same node and let us know the
> output.
>
>
> On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > Now the error is on the same line 860 : as highlighted below:
> >
> > [2019-03-25 06:11:52.376238] E
> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
> > Traceback (most recent call last):
> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> > 311, in main
> > func(args)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
> > 50, in subcmd_monitor
> > return monitor.monitor(local, remote)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> > 427, in monitor
> > return Monitor().multiplex(*distribute(local, remote))
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> > 386, in distribute
> > svol = Volinfo(slave.volume, "localhost", prelude)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> > 860, in __init__
> > vi = XET.fromstring(vix)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in
> > XML
> > parser.feed(text)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in
> > feed
> > self._raiseerror(v)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
> > _raiseerror
> > raise err
> > ParseError: syntax error: line 1, column 0
> >
> >
> > On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:
> > > Sorry my bad, had put the print line to debug, i am using gluster
> > > 4.1.7, will remove the print line.
> > >
> > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
> > > wrote:
> > > > Below print statement looks wrong. Latest Glusterfs code doesn't
> > > > have
> > > > this print statement. Please let us know which version of
> > > > glusterfs you
> > > > are using.
> > > >
> > > >
> > > > ```
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > > line
> > > > 860, in __init__
> > > > print "debug varible " %vix
> > > > ```
> > > >
> > > > As a workaround, edit that file and comment the print line and
> > > > test the
> > > > geo-rep config command.
> > > >
> > > >
> > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > > hi Aravinda,
> > > > >  had the session created using : create ssh-port  push-pem
> > > > and
> > > > > also the :
> > > > >
> > > > > gluster volume geo-replication
> > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-
> > > > port
> > > > > 
> > > > >
> > > > > hitting this message:
> > > > > geo-replication config-set failed for
> > > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > > geo-replication command failed
> > > > >
> > > > > Below is snap of status:
> > > > >
> > > > > [root@k8s-agentpool1-24779565-1
> > > > >
> > > > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
> > > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
> > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > >
> > > > > MASTER NODE  MASTER VOL  MASTER
> > > > > BRICK
> > > >
> > > > >SLAVE USERSLAVE
> > > >
> > > > >   SLAVE NODESTATUS CRAWL
> > > > STATUS
> > > > >   LAST_SYNCED
> > > > > -
> > > > --
> > > > > -
> > > > --
> > > > > -
> > > > --
> > > > > -
> > > > --
> > > > > 
> > > > > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
> > > > >
> > > > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_
> > > > 116f
> > > > > b9427fb26f752d9ba8e45e183cb1/brickroot
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> > > >
> > > > >  CreatedN/A N/A
> > > > > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
> > > > >
> > > > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_
> > > > 266b
> > > > > b08f0d466d346f8c0b19569736fb/brickroot
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> > > >
> > > > >  CreatedN/A N/A
> > > > > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
> > > > >
> > > > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_
> > > > dfa4
> > > > > 

[Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
All,

We currently have 3 meetings which are public:

1. Maintainer's Meeting

- Runs once every 2 weeks (on Mondays); current attendance is around 3-5
on average, and not much gets discussed.
- Without majority attendance, we can't take any decisions either.

2. Community meeting

- Supposed to happen on #gluster-meeting every 2 weeks, and is the only
meeting that is for 'Community/Users'; the others are for developers as
of now. Sadly, attendance has been getting closer to 0 in recent times.

3. GCS meeting

- We started it as an effort inside the Red Hat gluster team and opened
it up to the community in Jan 2019, but the attendance was always from
RHT members, and we haven't seen any traction from the wider group.

So, I propose cancelling all these meetings and keeping just one weekly
'Community' meeting, where topics related to maintainers, GCS, and other
projects can also be discussed.

I have a draft template at
https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g

Please feel free to suggest improvements, both in the agenda and in the
timings, so that we can have more participation from members of the
community, which allows more user-developer interaction and hence
improves the quality of the project.

Waiting for your feedback,

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS v7.0 (and v8.0) roadmap discussion

2019-03-25 Thread Amar Tumballi Suryanarayan
Hello Gluster Members,

We are now done with the glusterfs-6.0 release, and next up is
glusterfs-7.0. But considering that, for many 'initiatives', 3-4 months
are not enough time to complete the tasks, we would like to call for a
roadmap discussion meeting covering calendar year 2019 (that is, both
glusterfs-7.0 and 8.0).

It would be good to use the community meeting slot for this. While
talking to the team locally, I compiled a presentation here: <
https://docs.google.com/presentation/d/1rtn38S4YBe77KK5IjczWmoAR-ZSO-i3tNHg9pAH8Wt8/edit?usp=sharing>.
Please go through it and let me know what more can be added, or what can
be dropped.

We can start having discussions in https://hackmd.io/jlnWqzwCRvC9uoEU2h01Zw

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Aravinda
Geo-rep runs `ssh -i /var/lib/glusterd/geo-replication/secret.pem
root@ gluster volume info --xml` and parses its output.
Please try to run the command from the same node and let us know the
output.
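
In case it helps, here is a rough sketch (mine, not the actual
syncdaemon code) of what that amounts to in Python; the SLAVE_HOST
placeholder and the XPath below are illustrative:

```
import subprocess
import xml.etree.ElementTree as XET

# The same command geo-rep runs; replace SLAVE_HOST with your slave node.
cmd = ["ssh", "-i", "/var/lib/glusterd/geo-replication/secret.pem",
       "root@SLAVE_HOST", "gluster", "volume", "info", "--xml"]
out = subprocess.check_output(cmd)

# If 'out' is empty or starts with anything other than XML (an error
# message, a banner, ...), fromstring() raises the same
# "ParseError: ... line 1, column 0" seen in the traceback below.
print(repr(out[:200]))
vol_info = XET.fromstring(out)
print(vol_info.findtext("volInfo/volumes/volume/name"))  # illustrative XPath
```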


On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> Now the error is on the same line 860 : as highlighted below:
> 
> [2019-03-25 06:11:52.376238] E
> [syncdutils(monitor):332:log_raise_exception] : FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> 311, in main
> func(args)
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
> 50, in subcmd_monitor
> return monitor.monitor(local, remote)
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> 427, in monitor
> return Monitor().multiplex(*distribute(local, remote))
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> 386, in distribute
> svol = Volinfo(slave.volume, "localhost", prelude)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> 860, in __init__
> vi = XET.fromstring(vix)
>   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in
> XML
> parser.feed(text)
>   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in
> feed
> self._raiseerror(v)
>   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
> _raiseerror
> raise err
> ParseError: syntax error: line 1, column 0
> 
> 
> On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:
> > Sorry my bad, had put the print line to debug, i am using gluster
> > 4.1.7, will remove the print line.
> > 
> > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
> > wrote:
> > > Below print statement looks wrong. Latest Glusterfs code doesn't
> > > have
> > > this print statement. Please let us know which version of
> > > glusterfs you
> > > are using.
> > > 
> > > 
> > > ```
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > line
> > > 860, in __init__
> > > print "debug varible " %vix
> > > ```
> > > 
> > > As a workaround, edit that file and comment the print line and
> > > test the
> > > geo-rep config command.
> > > 
> > > 
> > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > hi Aravinda,
> > > >  had the session created using : create ssh-port  push-pem
> > > and
> > > > also the :
> > > > 
> > > > gluster volume geo-replication
> > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-
> > > port
> > > > 
> > > > 
> > > > hitting this message:
> > > > geo-replication config-set failed for
> > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > geo-replication command failed
> > > > 
> > > > Below is snap of status:
> > > > 
> > > > [root@k8s-agentpool1-24779565-1
> > > >
> > > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
> > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
> > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > 
> > > > MASTER NODE  MASTER VOL  MASTER
> > > > BRICK 
> > >  
> > > >SLAVE USERSLAVE 
> > >  
> > > >   SLAVE NODESTATUS CRAWL
> > > STATUS 
> > > >   LAST_SYNCED
> > > > -
> > > --
> > > > -
> > > --
> > > > -
> > > --
> > > > -
> > > --
> > > > 
> > > > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf   
> > > >
> > > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_
> > > 116f
> > > > b9427fb26f752d9ba8e45e183cb1/brickroot 
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
> > >
> > > >  CreatedN/A N/A
> > > > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf   
> > > >
> > > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_
> > > 266b
> > > > b08f0d466d346f8c0b19569736fb/brickroot 
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
> > >
> > > >  CreatedN/A N/A
> > > > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf   
> > > >
> > > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_
> > > dfa4
> > > > 4c9380cdedac708e27e2c2a443a0/brickroot 
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
> > >
> > > >  CreatedN/A N/A
> > > > 
> > > > any ideas ? where can find logs for the failed commands check
> > > in
> > > > gysncd.log , the trace is as below:
> > > > 
> > > > 

Re: [Gluster-users] Network Block device (NBD) on top of glusterfs

2019-03-25 Thread Vijay Bellur
Hi Xiubo,

On Fri, Mar 22, 2019 at 5:48 PM Xiubo Li  wrote:

> On 2019/3/21 11:29, Xiubo Li wrote:
>
> All,
>
> I am one of the contributor for gluster-block
> [1] project, and also I
> contribute to linux kernel and open-iscsi 
> project.[2]
>
> NBD was around for some time, but in recent time, linux kernel’s Network
> Block Device (NBD) is enhanced and made to work with more devices and also
> the option to integrate with netlink is added. So, I tried to provide a
> glusterfs client based NBD driver recently. Please refer github issue #633
> [3], and good news is I
> have a working code, with most basic things @ nbd-runner project
> [4].
>
>
This is nice. Thank you for your work!


> As mentioned the nbd-runner(NBD proto) will work in the same layer with
> tcmu-runner(iSCSI proto), this is not trying to replace the
> gluster-block/ceph-iscsi-gateway great projects.
>
> It just provides the common library to do the low level stuff, like the
> sysfs/netlink operations and the IOs from the nbd kernel socket, and the
> great tcmu-runner project is doing the sysfs/uio operations and IOs from
> the kernel SCSI/iSCSI.
>
> The nbd-cli tool will work like the iscsi-initiator-utils, and the
> nbd-runner daemon will work like the tcmu-runner daemon, that's all.
>

Do you have thoughts on how nbd-runner currently differs or would differ
from tcmu-runner? It might be useful to document the differences in github
(or elsewhere) so that users can make an informed choice between nbd-runner
& tcmu-runner.

In tcmu-runner for different backend storages, they have separate handlers,
> glfs.c handler for Gluster, rbd.c handler for Ceph, etc. And what the
> handlers here are doing the actual IOs with the backend storage services
> once the IO paths setup are done by ceph-iscsi-gateway/gluster-block
>
> Then we can support all the kind of backend storages, like the
> Gluster/Ceph/Azure... as one separate handler in nbd-runner, which no need
> to care about the NBD low level's stuff updates and changes.
>

Given that the charter for this project is to support multiple backend
storage projects, would it not be better to host the project in the github
repository associated with nbd [5]? Doing it that way could provide a more
neutral venue (as perceived by users) for hosting nbd-runner and help you
gain more adoption for your work.

Thanks,
Vijay

[5] https://github.com/NetworkBlockDevice/nbd




> Thanks.
>
>
> While this email is about announcing the project, and asking for more
> collaboration, I would also like to discuss more about the placement of the
> project itself. Currently nbd-runner project is expected to be shared by
> our friends at Ceph project too, to provide NBD driver for Ceph. I have
> personally worked with some of them closely while contributing to
> open-iSCSI project, and we would like to take this project to great success.
>
> Now few questions:
>
>1. Can I continue to use http://github.com/gluster/nbd-runner as home
>for this project, even if its shared by other filesystem projects?
>
>
>- I personally am fine with this.
>
>
>1. Should there be a separate organization for this repo?
>
>
>- While it may make sense in future, for now, I am not planning to
>start any new thing?
>
> It would be great if we have some consensus on this soon as nbd-runner is
> a new repository. If there are no concerns, I will continue to contribute
> to the existing repository.
>
> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Maurya M
Now the error is on the same line 860, as highlighted below:

[2019-03-25 06:11:52.376238] E
[syncdutils(monitor):332:log_raise_exception] : FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in
main
func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in
subcmd_monitor
return monitor.monitor(local, remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in
monitor
return Monitor().multiplex(*distribute(local, remote))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 386, in
distribute
svol = Volinfo(slave.volume, "localhost", prelude)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860,
in __init__
  *  vi = XET.fromstring(vix)*
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
_raiseerror
raise err
ParseError: syntax error: line 1, column 0
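
For what it is worth, this exact error is what ElementTree raises when
the text it is handed does not begin with XML at all. A quick
interactive check (my own sketch, not taken from the logs; tracebacks
trimmed):

```
>>> import xml.etree.ElementTree as XET
>>> XET.fromstring("not xml output")   # output did not start with '<'
ParseError: syntax error: line 1, column 0
>>> XET.fromstring("")                 # empty output
ParseError: no element found: line 1, column 0
```

So whatever the `gluster volume info --xml` command returned over ssh on
that node, it did not start with XML; it may have been empty, an error
message, or some banner text.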


On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:

> Sorry my bad, had put the print line to debug, i am using gluster 4.1.7,
> will remove the print line.
>
> On Mon, Mar 25, 2019 at 10:52 AM Aravinda  wrote:
>
>> Below print statement looks wrong. Latest Glusterfs code doesn't have
>> this print statement. Please let us know which version of glusterfs you
>> are using.
>>
>>
>> ```
>>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
>> 860, in __init__
>> print "debug varible " %vix
>> ```
>>
>> As a workaround, edit that file and comment the print line and test the
>> geo-rep config command.
>>
>>
>> On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
>> > hi Aravinda,
>> >  had the session created using : create ssh-port  push-pem and
>> > also the :
>> >
>> > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port
>> > 
>> >
>> > hitting this message:
>> > geo-replication config-set failed for
>> > vol_75a5fd373d88ba687f591f3353fa05cf
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > geo-replication command failed
>> >
>> > Below is snap of status:
>> >
>> > [root@k8s-agentpool1-24779565-1
>> >
>> vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
>> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
>> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>> >
>> > MASTER NODE  MASTER VOL  MASTER
>> > BRICK
>> >SLAVE USERSLAVE
>> >   SLAVE NODESTATUS CRAWL STATUS
>> >   LAST_SYNCED
>> > ---
>> > ---
>> > ---
>> > ---
>> > 
>> > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
>> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116f
>> > b9427fb26f752d9ba8e45e183cb1/brickroot
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
>> >  CreatedN/A N/A
>> > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
>> > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266b
>> > b08f0d466d346f8c0b19569736fb/brickroot
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
>> >  CreatedN/A N/A
>> > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
>> > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa4
>> > 4c9380cdedac708e27e2c2a443a0/brickroot
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
>> >  CreatedN/A N/A
>> >
>> > any ideas ? where can find logs for the failed commands check in
>> > gysncd.log , the trace is as below:
>> >
>> > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] :
>> > Using session config file  path=/var/lib/glusterd/geo-
>> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
>> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > [2019-03-25 04:04:42.387192] E
>> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
>> > Traceback (most recent call last):
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> > 311, in main
>> > func(args)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
>> > 50, in subcmd_monitor
>> > return monitor.monitor(local, remote)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> > 427, in monitor
>> > return Monitor().multiplex(*distribute(local, remote))
>> >   File