Re: [Gluster-devel] [Gluster-users] Network Block device (NBD) on top of glusterfs

2019-03-25 Thread Xiubo Li

On 2019/3/25 14:36, Vijay Bellur wrote:


Hi Xiubo,

On Fri, Mar 22, 2019 at 5:48 PM Xiubo Li <xiu...@redhat.com> wrote:


On 2019/3/21 11:29, Xiubo Li wrote:


All,

I am one of the contributors to the gluster-block
<https://github.com/gluster/gluster-block> [1] project, and I also
contribute to the Linux kernel and the open-iscsi
<https://github.com/open-iscsi> [2] project.

NBD has been around for some time, but recently the Linux kernel's
Network Block Device (NBD) was enhanced to work with more devices,
and the option to integrate with netlink was added. So I recently
tried to provide a glusterfs-client-based NBD driver. Please refer
to github issue #633 <https://github.com/gluster/glusterfs/issues/633>
[3]; the good news is that I have working code, with the most basic
things in place, in the nbd-runner project
<https://github.com/gluster/nbd-runner> [4].
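
As a rough illustration of the netlink path mentioned above, here is a
minimal sketch (not actual nbd-runner code) of how a userspace daemon can
hand an already-connected data socket to the kernel through the
generic-netlink "nbd" family. The command and attribute names come from
<linux/nbd-netlink.h>; libnl-genl is assumed, and error handling is omitted:

/* Sketch: map /dev/nbd<index> by passing a connected socket to the kernel
 * via the "nbd" genetlink family (the netlink interface added to the NBD
 * driver).  Assumes libnl-3/libnl-genl-3; error handling omitted. */
#include <linux/nbd-netlink.h>
#include <netlink/genl/ctrl.h>
#include <netlink/genl/genl.h>
#include <stdint.h>

static int nbd_device_connect(int data_fd, uint32_t index, uint64_t size_bytes)
{
    struct nl_sock *nl = nl_socket_alloc();
    genl_connect(nl);
    int family = genl_ctrl_resolve(nl, NBD_GENL_FAMILY_NAME);

    struct nl_msg *msg = nlmsg_alloc();
    genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
                NBD_CMD_CONNECT, 0);
    nla_put_u32(msg, NBD_ATTR_INDEX, index);           /* which /dev/nbdX */
    nla_put_u64(msg, NBD_ATTR_SIZE_BYTES, size_bytes); /* device size */

    /* The data socket(s) are passed in a nested NBD_ATTR_SOCKETS list. */
    struct nlattr *socks = nla_nest_start(msg, NBD_ATTR_SOCKETS);
    struct nlattr *item  = nla_nest_start(msg, NBD_SOCK_ITEM);
    nla_put_u32(msg, NBD_SOCK_FD, data_fd);
    nla_nest_end(msg, item);
    nla_nest_end(msg, socks);

    int ret = nl_send_sync(nl, msg); /* sends, waits for ACK, frees msg */
    nl_socket_free(nl);
    return ret;
}

In nbd-runner, setup work of this kind is what the common library is meant
to hide from the per-storage handlers.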



This is nice. Thank you for your work!

As mentioned, nbd-runner (NBD protocol) will work at the same layer
as tcmu-runner (iSCSI protocol); it is not trying to replace the
great gluster-block/ceph-iscsi-gateway projects.

It just provides the common library for the low-level stuff, such as
the sysfs/netlink operations and the IOs on the NBD kernel socket,
much as the great tcmu-runner project does the sysfs/uio operations
and the IOs from the kernel SCSI/iSCSI layer.
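
For the socket IO part, the core of what any NBD server daemon runs is the
fixed request/reply loop below. This is only a minimal sketch of the wire
protocol in <linux/nbd.h>; struct backend_handler is a hypothetical
stand-in for a per-storage handler, and error/short-read handling is
omitted:

/* Sketch: service NBD requests arriving on the kernel-facing socket.
 * The request/reply structures and magics are the NBD wire protocol
 * from <linux/nbd.h>. */
#include <linux/nbd.h>
#include <arpa/inet.h> /* ntohl(), htonl() */
#include <endian.h>    /* be64toh() */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

struct backend_handler {               /* hypothetical per-backend vtable */
    ssize_t (*pread)(void *buf, size_t count, off_t offset);
    ssize_t (*pwrite)(const void *buf, size_t count, off_t offset);
};

static void serve(int sock, struct backend_handler *h)
{
    struct nbd_request req;
    struct nbd_reply rep;

    rep.magic = htonl(NBD_REPLY_MAGIC);
    while (read(sock, &req, sizeof(req)) == sizeof(req)) {
        uint32_t type = ntohl(req.type);
        uint64_t from = be64toh(req.from);
        uint32_t len  = ntohl(req.len);

        memcpy(rep.handle, req.handle, sizeof(rep.handle));
        rep.error = 0;

        if (type == NBD_CMD_DISC)      /* kernel asks us to disconnect */
            break;

        void *buf = malloc(len);
        if (type == NBD_CMD_WRITE) {           /* payload follows request */
            read(sock, buf, len);
            h->pwrite(buf, len, from);
            write(sock, &rep, sizeof(rep));    /* bare reply */
        } else if (type == NBD_CMD_READ) {
            h->pread(buf, len, from);
            write(sock, &rep, sizeof(rep));    /* reply, then data */
            write(sock, buf, len);
        }
        free(buf);
    }
}

tcmu-runner's handlers sit behind the same kind of loop, except that the
requests come from the kernel SCSI target via uio rather than from a socket.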

The nbd-cli tool will work like iscsi-initiator-utils, and the
nbd-runner daemon will work like the tcmu-runner daemon; that's all.


Do you have thoughts on how nbd-runner currently differs or would 
differ from tcmu-runner? It might be useful to document the 
differences in github (or elsewhere) so that users can make an 
informed choice between nbd-runner & tcmu-runner.


Yeah, this makes sense, and I will document the differences on github.
Currently, for open-iscsi/tcmu-runner there are already many existing
tools that help productize them, while for NBD we may need to implement
such tools ourselves; correct me if I am wrong here :-)




In tcmu-runner, the different backend storages have separate
handlers: the glfs.c handler for Gluster, the rbd.c handler for
Ceph, etc. The handlers do the actual IOs against the backend
storage services once the IO paths have been set up by
ceph-iscsi-gateway/gluster-block.

nbd-runner can then support all kinds of backend storage, such as
Gluster/Ceph/Azure..., each as a separate handler that does not
need to care about updates and changes in the low-level NBD stuff.
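
For the Gluster case, such a handler is mostly a thin wrapper around
libgfapi. A minimal sketch of what its open/read/write entry points boil
down to (the glfs_handler_* names are hypothetical; the
glfs_pread()/glfs_pwrite() signatures here are the pre-6.0 gfapi ones,
newer releases add stat out-parameters):

/* Sketch: a Gluster backend handler built on libgfapi.  Setup connects
 * to the volume; NBD READ/WRITE requests then map directly onto
 * glfs_pread()/glfs_pwrite() on the file backing the block device. */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <sys/types.h>

static glfs_t *fs;
static glfs_fd_t *gfd;

int glfs_handler_open(const char *volume, const char *host, const char *path)
{
    fs = glfs_new(volume);                           /* per-volume context */
    glfs_set_volfile_server(fs, "tcp", host, 24007);
    if (glfs_init(fs) < 0)
        return -1;
    gfd = glfs_open(fs, path, O_RDWR);               /* file backing the device */
    return gfd ? 0 : -1;
}

ssize_t glfs_handler_pread(void *buf, size_t count, off_t offset)
{
    return glfs_pread(gfd, buf, count, offset, 0);
}

ssize_t glfs_handler_pwrite(const void *buf, size_t count, off_t offset)
{
    return glfs_pwrite(gfd, buf, count, offset, 0);
}

A Ceph handler would look much the same, with librbd calls in place of
the gfapi ones.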


Given that the charter for this project is to support multiple backend
storage projects, would it not be better to host the project in the
github repository associated with nbd [5]? Doing it that way could
provide a more neutral (as perceived by users) venue for hosting
nbd-runner and help you gain more adoption for your work.



This is a good idea; I will try to push this forward.

Thanks very much Vijay.

BRs

Xiubo Li



Thanks,
Vijay

[5] https://github.com/NetworkBlockDevice/nbd


Thanks.



While this email is about announcing the project and asking for
more collaboration, I would also like to discuss the placement of
the project itself. Currently, the nbd-runner project is expected
to be shared by our friends in the Ceph project too, to provide an
NBD driver for Ceph. I have personally worked closely with some of
them while contributing to the open-iscsi project, and we would
like to take this project to great success.

Now, a few questions:

 1. Can I continue to use http://github.com/gluster/nbd-runner as the
home for this project, even if it is shared by other filesystem
projects?

  * I personally am fine with this.

 2. Should there be a separate organization for this repo?

  * While it may make sense in the future, for now I am not planning
to start anything new.

It would be great to reach some consensus on this soon, as nbd-runner
is a new repository. If there are no concerns, I will continue to
contribute to the existing repository.

Regards,
Xiubo Li (@lxbsz)

[1] - https://github.com/gluster/gluster-block
[2] - https://github.com/open-iscsi
[3] - https://github.com/gluster/glusterfs/issues/633
[4] - https://github.com/gluster/nbd-runner



___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Network Block device (NBD) on top of glusterfs

2019-03-22 Thread Xiubo Li

On 2019/3/21 11:29, Xiubo Li wrote:


All,

I am one of the contributors to the gluster-block
<https://github.com/gluster/gluster-block> [1] project, and I also
contribute to the Linux kernel and the open-iscsi
<https://github.com/open-iscsi> [2] project.


NBD has been around for some time, but recently the Linux kernel's
Network Block Device (NBD) was enhanced to work with more devices,
and the option to integrate with netlink was added. So I recently
tried to provide a glusterfs-client-based NBD driver. Please refer
to github issue #633 <https://github.com/gluster/glusterfs/issues/633>
[3]; the good news is that I have working code, with the most basic
things in place, in the nbd-runner project
<https://github.com/gluster/nbd-runner> [4].


As mentioned, nbd-runner (NBD protocol) will work at the same layer as
tcmu-runner (iSCSI protocol); it is not trying to replace the great
gluster-block/ceph-iscsi-gateway projects.


It just provides the common library for the low-level stuff, such as the
sysfs/netlink operations and the IOs on the NBD kernel socket, much as
the great tcmu-runner project does the sysfs/uio operations and the IOs
from the kernel SCSI/iSCSI layer.


The nbd-cli tool will work like iscsi-initiator-utils, and the
nbd-runner daemon will work like the tcmu-runner daemon; that's all.


In tcmu-runner, the different backend storages have separate handlers:
the glfs.c handler for Gluster, the rbd.c handler for Ceph, etc. The
handlers do the actual IOs against the backend storage services once the
IO paths have been set up by ceph-iscsi-gateway/gluster-block.

nbd-runner can then support all kinds of backend storage, such as
Gluster/Ceph/Azure..., each as a separate handler that does not need to
care about updates and changes in the low-level NBD stuff.


Thanks.


While this email is about announcing the project and asking for more
collaboration, I would also like to discuss the placement of the
project itself. Currently, the nbd-runner project is expected to be
shared by our friends in the Ceph project too, to provide an NBD
driver for Ceph. I have personally worked closely with some of them
while contributing to the open-iscsi project, and we would like to
take this project to great success.


Now, a few questions:

 1. Can I continue to use http://github.com/gluster/nbd-runner as the
home for this project, even if it is shared by other filesystem
projects?

  * I personally am fine with this.

 2. Should there be a separate organization for this repo?

  * While it may make sense in the future, for now I am not planning
to start anything new.

It would be great to reach some consensus on this soon, as nbd-runner
is a new repository. If there are no concerns, I will continue to
contribute to the existing repository.


Regards,
Xiubo Li (@lxbsz)

[1] - https://github.com/gluster/gluster-block
[2] - https://github.com/open-iscsi
[3] - https://github.com/gluster/glusterfs/issues/633
[4] - https://github.com/gluster/nbd-runner



___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Network Block device (NBD) on top of glusterfs

2019-03-21 Thread Xiubo Li

On 2019/3/21 18:09, Prasanna Kalever wrote:



On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li <xiu...@redhat.com> wrote:


All,

I am one of the contributors to the gluster-block
<https://github.com/gluster/gluster-block> [1] project, and I also
contribute to the Linux kernel and the open-iscsi
<https://github.com/open-iscsi> [2] project.

NBD has been around for some time, but recently the Linux kernel's
Network Block Device (NBD) was enhanced to work with more devices,
and the option to integrate with netlink was added. So I recently
tried to provide a glusterfs-client-based NBD driver. Please refer
to github issue #633 <https://github.com/gluster/glusterfs/issues/633>
[3]; the good news is that I have working code, with the most basic
things in place, in the nbd-runner project
<https://github.com/gluster/nbd-runner> [4].

While this email is about announcing the project and asking for
more collaboration, I would also like to discuss the placement of
the project itself. Currently, the nbd-runner project is expected
to be shared by our friends in the Ceph project too, to provide an
NBD driver for Ceph. I have personally worked closely with some of
them while contributing to the open-iscsi project, and we would
like to take this project to great success.

Now, a few questions:

 1. Can I continue to use http://github.com/gluster/nbd-runner as the
home for this project, even if it is shared by other filesystem
projects?

  * I personally am fine with this.

 2. Should there be a separate organization for this repo?

  * While it may make sense in the future, for now I am not planning
to start anything new.

It would be great to reach some consensus on this soon, as nbd-runner
is a new repository. If there are no concerns, I will continue to
contribute to the existing repository.


Thanks, Xiubo Li, for finally sending this email out. Since this email
is out on the gluster mailing list, I would like to take a stand from
the gluster community's point of view *only* and share my views.


My honest answer is: "If we want to maintain this within the gluster
org, then 80% of the effort is common to/duplicates what we have done
all these days with gluster-block",


The great idea came from Mike Christie days ago, and the nbd-runner
project's framework was initially modeled on tcmu-runner. This is why
I named this project nbd-runner; it will work for all the other
distributed storages, such as Gluster/Ceph/Azure, as discussed with
Mike before.


nbd-runner (NBD protocol) and tcmu-runner (iSCSI protocol) are almost
the same: both work at the lower IO (READ/WRITE/...) level, not at the
management layer where ceph-iscsi-gateway and gluster-block currently
sit.


Currently, since I have only implemented the Gluster handler, and the
RPC layer is similar to that of glusterfs and gluster-block, most of
the other code (about 70%) in nbd-runner deals with the NBD protocol,
which is very different from the tcmu-runner/glusterfs/gluster-block
projects. There are also many new features in the NBD kernel module
that are not yet supported, so the projects will diverge even more in
the future.


The framework coding has been done, and the nbd-runner project is
already stable and works well for me now.




like:
* rpc/socket code
* cli/daemon parser/helper logics
* gfapi util functions
* logger framework
* inotify & dyn-config threads


Yeah, these features initially came from the tcmu-runner project,
which Mike and I coded two years ago. Currently nbd-runner has copied
them from tcmu-runner.


Your great ideas here are much appreciated, Prasanna, and I hope
nbd-runner can be used more generically and successfully in the
future.


BRs

Xiubo Li



* configure/Makefile/specfiles
* docs about Gluster, etc.

The gluster-block repository is actually the home for all the
block-related stuff within gluster, and it is designed to accommodate
similar functionality. If I were you, I would have simply copied
nbd-runner.c into
https://github.com/gluster/gluster-block/tree/master/daemon/, just
like ceph does it here:
https://github.com/ceph/ceph/blob/master/src/tools/rbd_nbd/rbd-nbd.cc,
and be done.


Advantages of keeping the nbd client within gluster-block:
-> No worry about the maintenance burden of more code
-> No worry about monitoring a new component
-> Shipping packages to fedora/centos/rhel is already handled
-> It helps improve and stabilize the current gluster-block framework
-> We can build a common CI
-> We can reuse the common test framework, etc.

If you have the impression that gluster-block is only for management,
then I would really like to correct you at this point.


Some of my near-future plans for gluster-block:
* Allow exporting blocks with FUSE access via a fileIO backstore to
improve large-file workloads; draft:
https://github.com/gluster/gluster-block/pull/58

* Accommodate kernel loopback handling for local-only applications

[Gluster-devel] Network Block device (NBD) on top of glusterfs

2019-03-20 Thread Xiubo Li

All,

I am one of the contributors to the gluster-block
<https://github.com/gluster/gluster-block> [1] project, and I also
contribute to the Linux kernel and the open-iscsi
<https://github.com/open-iscsi> [2] project.


NBD has been around for some time, but recently the Linux kernel's
Network Block Device (NBD) was enhanced to work with more devices,
and the option to integrate with netlink was added. So I recently
tried to provide a glusterfs-client-based NBD driver. Please refer
to github issue #633 <https://github.com/gluster/glusterfs/issues/633>
[3]; the good news is that I have working code, with the most basic
things in place, in the nbd-runner project
<https://github.com/gluster/nbd-runner> [4].


While this email is about announcing the project and asking for more
collaboration, I would also like to discuss the placement of the
project itself. Currently, the nbd-runner project is expected to be
shared by our friends in the Ceph project too, to provide an NBD
driver for Ceph. I have personally worked closely with some of them
while contributing to the open-iscsi project, and we would like to
take this project to great success.


Now, a few questions:

1. Can I continue to use http://github.com/gluster/nbd-runner as the
   home for this project, even if it is shared by other filesystem
   projects?

 * I personally am fine with this.

2. Should there be a separate organization for this repo?

 * While it may make sense in the future, for now I am not planning
   to start anything new.

It would be great to reach some consensus on this soon, as nbd-runner
is a new repository. If there are no concerns, I will continue to
contribute to the existing repository.


Regards,
Xiubo Li (@lxbsz)

[1] - https://github.com/gluster/gluster-block
[2] - https://github.com/open-iscsi
[3] - https://github.com/gluster/glusterfs/issues/633
[4] - https://github.com/gluster/nbd-runner

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel