[Gluster-users] gluster-block v0.5.1 is alive!

2020-09-30 Thread Prasanna Kalever
Hello Gluster folks,

The gluster-block team is happy to announce the v0.5.1 release [1].

This is a security and bug-fix release of gluster-block; the CVE fix and a
few bug fixes are included in this release. Please find the release notes
with the notable fixes at [2].

Details about prerequisites, installation, and setup can be found at [3].
If you are a new user, check out the demo video linked in the README [4],
which is a good introduction to the project. There are good examples of
how to use gluster-block in both the man pages [5] and the test file [6]
(also in the README), and a quick sketch follows below.
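
For a quick flavour of the CLI, here is a minimal sketch (the volume name,
block name and host addresses below are made up for illustration; treat the
man pages [5] and README as the authoritative syntax reference):

# gluster-block create block-test/sample-block ha 3 192.168.1.11,192.168.1.12,192.168.1.13 1GiB
# gluster-block list block-test
# gluster-block info block-test/sample-block
# gluster-block delete block-test/sample-block

The create command above carves a 1GiB block device out of the existing
gluster volume 'block-test' and exports it over iSCSI from the three listed
nodes, so the initiator gets three portals for multipath failover.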

If you want to quickly bring up a gluster-block environment on a local
cluster, please head to the Vagrant setup details at [7].

gluster-block is part of the Fedora package collection; an updated package
with release version v0.5.1 will be made available. The community-provided
packages will also soon be made available at [8].

Please spend a minute to report any issue that comes to your notice using
this handy link [9]. We look forward to your feedback, which will help
gluster-block get better!

We would like to thank all our users and contributors for filing bugs and
providing fixes, as well as the whole team involved in the huge effort of
pre-release testing.

[1] https://github.com/gluster/gluster-block/releases
[2] https://github.com/gluster/gluster-block/releases/tag/v0.5.1
[3] https://github.com/gluster/gluster-block#install
[4] https://github.com/gluster/gluster-block#demo
[5] https://github.com/gluster/gluster-block/tree/master/docs
[6] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
[7] https://github.com/gluster/gluster-block#how-to-quickly-bringup-gluster-block-environment-locally-
[8] https://download.gluster.org/pub/gluster/gluster-block/
[9] https://github.com/gluster/gluster-block/issues/new

Cheers,
--
Team Gluster-Block!





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] gluster-block v0.5 is alive!

2020-05-13 Thread Prasanna Kalever
Hello Gluster folks,

The gluster-block team is happy to announce the v0.5 release [1].

This is the new stable version of gluster-block; a good number of features
and interesting bug fixes are included in this release. Please find the
list of release highlights and notable fixes at [2].

Details about prerequisites, installation, and setup can be found at [3].
If you are a new user, check out the demo video linked in the README [4],
which is a good introduction to the project. There are good examples of
how to use gluster-block in both the man pages [5] and the test file [6]
(also in the README).

If you want to quickly bring up a gluster-block environment on a local
cluster, please head to the Vagrant setup details at [7].

gluster-block is part of the Fedora package collection; an updated package
with release version v0.5 will soon be made available. The community-provided
packages will also soon be made available at [8].

Please spend a minute to report any issue that comes to your notice using
this handy link [9]. We look forward to your feedback, which will help
gluster-block get better!

We would like to thank all our users and contributors for filing bugs and
providing fixes, as well as the whole team involved in the huge effort of
pre-release testing.

[1] https://github.com/gluster/gluster-block/releases
[2] https://github.com/gluster/gluster-block/releases/tag/v0.5
[3] https://github.com/gluster/gluster-block#install
[4] https://github.com/gluster/gluster-block#demo
[5] https://github.com/gluster/gluster-block/tree/master/docs
[6] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
[7] https://github.com/gluster/gluster-block#how-to-quickly-bringup-gluster-block-environment-locally-
[8] https://download.gluster.org/pub/gluster/gluster-block/
[9] https://github.com/gluster/gluster-block/issues/new

Cheers,
Team Gluster-Block!





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster-block v0.4 is alive!

2019-05-21 Thread Prasanna Kalever
On Mon, May 20, 2019 at 9:05 PM Vlad Kopylov  wrote:
>
> Thank you Prasanna.
>
> Do we have architecture somewhere?

Vlad,

Although the complete set of details might not be in one place right now,
some pointers to start with are available at
https://github.com/gluster/gluster-block#gluster-block and
https://pkalever.wordpress.com/2019/05/06/starting-with-gluster-block;
hopefully those give some clarity about the project. Also check out the
man pages.

> Does it bypass Fuse and go directly gfapi ?

Yes, we don't use FUSE access with gluster-block. The management as well
as the IO happens over gfapi.

Please go through the docs pointed to above; if you have any specific
queries, feel free to ask them here or on GitHub.

Best Regards,
--
Prasanna

>
> v
>
> On Mon, May 20, 2019, 8:36 AM Prasanna Kalever  wrote:
>>
>> Hey Vlad,
>>
>> Thanks for trying gluster-block. Appreciate your feedback.
>>
>> Here is the patch which should fix the issue you have noticed:
>> https://github.com/gluster/gluster-block/pull/233
>>
>> Thanks!
>> --
>> Prasanna
>>
>> On Sat, May 18, 2019 at 4:48 AM Vlad Kopylov  wrote:
>> >
>> >
>> > straight from
>> >
>> > ./autogen.sh && ./configure && make -j install
>> >
>> >
>> > CentOS Linux release 7.6.1810 (Core)
>> >
>> >
>> > May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such 
>> > file or directory
>> > May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr.
>> > May 17 19:13:18 vm2 gluster-blockd[24294]: [2019-05-17 23:13:18.966992] 
>> > CRIT: trying to change logDir from /var/log/gluster-block to 
>> > /var/log/gluster-block [at utils.c+495 :]
>> > May 17 19:13:19 vm2 gluster-blockd[24294]: No such path 
>> > /backstores/user:glfs
>> > May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service: main process 
>> > exited, code=exited, status=1/FAILURE
>> > May 17 19:13:19 vm2 systemd[1]: Unit gluster-blockd.service entered failed 
>> > state.
>> > May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service failed.
>> >
>> >
>> >
>> > On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  
>> > wrote:
>> >>
>> >> Hello Gluster folks,
>> >>
>> >> Gluster-block team is happy to announce the v0.4 release [1].
>> >>
>> >> This is the new stable version of gluster-block, lots of new and
>> >> exciting features and interesting bug fixes are made available as part
>> >> of this release.
>> >> Please find the big list of release highlights and notable fixes at [2].
>> >>
>> >> Details about installation can be found in the easy install guide at
>> >> [3]. Find the details about prerequisites and setup guide at [4].
>> >> If you are a new user, checkout the demo video attached in the README
>> >> doc [5], which will be a good source of intro to the project.
>> >> There are good examples about how to use gluster-block both in the man
>> >> pages [6] and test file [7] (also in the README).
>> >>
>> >> gluster-block is part of fedora package collection, an updated package
>> >> with release version v0.4 will be soon made available. And the
>> >> community provided packages will be soon made available at [8].
>> >>
>> >> Please spend a minute to report any kind of issue that comes to your
>> >> notice with this handy link [9].
>> >> We look forward to your feedback, which will help gluster-block get 
>> >> better!
>> >>
>> >> We would like to thank all our users, contributors for bug filing and
>> >> fixes, also the whole team who involved in the huge effort with
>> >> pre-release testing.
>> >>
>> >>
>> >> [1] https://github.com/gluster/gluster-block
>> >> [2] https://github.com/gluster/gluster-block/releases
>> >> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
>> >> [4] https://github.com/gluster/gluster-block#usage
>> >> [5] https://github.com/gluster/gluster-block/blob/master/README.md
>> >> [6] https://github.com/gluster/gluster-block/tree/master/docs
>> >> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
>> >> [8] https://download.gluster.org/pub/gluster/gluster-block/
>> >> [9] https://github.com/gluster/gluster-block/issues/new
>> >>
>> >> Cheers,
>> >> Team Gluster-Block!
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org
>> >> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster-block v0.4 is alive!

2019-05-20 Thread Prasanna Kalever
Hey Vlad,

Thanks for trying gluster-block. Appreciate your feedback.

Here is the patch which should fix the issue you have noticed:
https://github.com/gluster/gluster-block/pull/233

Thanks!
--
Prasanna

On Sat, May 18, 2019 at 4:48 AM Vlad Kopylov  wrote:
>
>
> straight from
>
> ./autogen.sh && ./configure && make -j install
>
>
> CentOS Linux release 7.6.1810 (Core)
>
>
> May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such 
> file or directory
> May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr.
> May 17 19:13:18 vm2 gluster-blockd[24294]: [2019-05-17 23:13:18.966992] CRIT: 
> trying to change logDir from /var/log/gluster-block to /var/log/gluster-block 
> [at utils.c+495 :]
> May 17 19:13:19 vm2 gluster-blockd[24294]: No such path /backstores/user:glfs
> May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service: main process exited, 
> code=exited, status=1/FAILURE
> May 17 19:13:19 vm2 systemd[1]: Unit gluster-blockd.service entered failed 
> state.
> May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service failed.
>
>
>
> On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  wrote:
>>
>> Hello Gluster folks,
>>
>> Gluster-block team is happy to announce the v0.4 release [1].
>>
>> This is the new stable version of gluster-block, lots of new and
>> exciting features and interesting bug fixes are made available as part
>> of this release.
>> Please find the big list of release highlights and notable fixes at [2].
>>
>> Details about installation can be found in the easy install guide at
>> [3]. Find the details about prerequisites and setup guide at [4].
>> If you are a new user, checkout the demo video attached in the README
>> doc [5], which will be a good source of intro to the project.
>> There are good examples about how to use gluster-block both in the man
>> pages [6] and test file [7] (also in the README).
>>
>> gluster-block is part of fedora package collection, an updated package
>> with release version v0.4 will be soon made available. And the
>> community provided packages will be soon made available at [8].
>>
>> Please spend a minute to report any kind of issue that comes to your
>> notice with this handy link [9].
>> We look forward to your feedback, which will help gluster-block get better!
>>
>> We would like to thank all our users, contributors for bug filing and
>> fixes, also the whole team who involved in the huge effort with
>> pre-release testing.
>>
>>
>> [1] https://github.com/gluster/gluster-block
>> [2] https://github.com/gluster/gluster-block/releases
>> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
>> [4] https://github.com/gluster/gluster-block#usage
>> [5] https://github.com/gluster/gluster-block/blob/master/README.md
>> [6] https://github.com/gluster/gluster-block/tree/master/docs
>> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
>> [8] https://download.gluster.org/pub/gluster/gluster-block/
>> [9] https://github.com/gluster/gluster-block/issues/new
>>
>> Cheers,
>> Team Gluster-Block!
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] gluster-block v0.4 is alive!

2019-05-02 Thread Prasanna Kalever
Hello Gluster folks,

The gluster-block team is happy to announce the v0.4 release [1].

This is the new stable version of gluster-block; lots of new and exciting
features and interesting bug fixes are included in this release.
Please find the big list of release highlights and notable fixes at [2].

Details about installation can be found in the easy install guide at [3];
a rough sketch follows after this paragraph. Find the details about
prerequisites and the setup guide at [4].
If you are a new user, check out the demo video linked in the README
doc [5], which is a good introduction to the project.
There are good examples of how to use gluster-block in both the man
pages [6] and the test file [7] (also in the README).
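
As a rough sketch of a source build on a Fedora/CentOS-like node (the
dependency package names below are my assumptions; treat the INSTALL guide
[3] as authoritative):

# yum install gcc autoconf automake libtool make glusterfs-api-devel \
      json-c-devel targetcli tcmu-runner
$ git clone https://github.com/gluster/gluster-block.git && cd gluster-block
$ ./autogen.sh && ./configure && make -j
# make install && systemctl start gluster-blockd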

gluster-block is part of the Fedora package collection; an updated package
with release version v0.4 will soon be made available. The community-provided
packages will also soon be made available at [8].

Please spend a minute to report any issue that comes to your notice using
this handy link [9].
We look forward to your feedback, which will help gluster-block get better!

We would like to thank all our users and contributors for filing bugs and
providing fixes, as well as the whole team involved in the huge effort of
pre-release testing.


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/releases
[3] https://github.com/gluster/gluster-block/blob/master/INSTALL
[4] https://github.com/gluster/gluster-block#usage
[5] https://github.com/gluster/gluster-block/blob/master/README.md
[6] https://github.com/gluster/gluster-block/tree/master/docs
[7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
[8] https://download.gluster.org/pub/gluster/gluster-block/
[9] https://github.com/gluster/gluster-block/issues/new

Cheers,
Team Gluster-Block!
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Help: gluster-block

2019-04-03 Thread Prasanna Kalever
On Tue, Apr 2, 2019 at 1:34 AM Karim Roumani 
wrote:

> Actually we have a question.
>
> We did two tests as follows.
>
> Test 1 - iSCSI target on the glusterFS server
> Test 2 - iSCSI target on a separate server with gluster client
>
> Test 2 performed a read speed of <1GB/second while Test 1 about
> 300MB/second
>
> Any reason you see to why this may be the case?
>

In the Test 1 case:

1. The ops between
* the iSCSI initiator <-> iSCSI target, and
* tcmu-runner <-> the gluster server

are all using the same NIC resource.

2. Also, it is possible that the node is facing high resource usage (high
CPU and/or low memory), as everything is on the same node.

You can also check the gluster profile info to narrow down some of these;
a sketch of how to collect it follows below.
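
For reference, a minimal sketch of collecting that profile data (the volume
name here is hypothetical):

# gluster volume profile blockvol start
  ... run the workload ...
# gluster volume profile blockvol info
# gluster volume profile blockvol stop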

Thanks!
--
Prasanna


>
> On Mon, Apr 1, 2019 at 1:00 PM Karim Roumani 
> wrote:
>
>> Thank you Prasanna for your quick response very much appreaciated we will
>> review and get back to you.
>>
>> On Mon, Mar 25, 2019 at 9:00 AM Prasanna Kalever 
>> wrote:
>>
>>> [ adding +gluster-users for archive purpose ]
>>>
>>> On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin 
>>> wrote:
>>> >
>>> > Hello Mr. Kalever,
>>>
>>> Hello Jeffrey,
>>>
>>> >
>>> > I am currently working on a project to utilize GlusterFS for VMWare
>>> VMs. In our research, we found that utilizing block devices with GlusterFS
>>> would be the best approach for our use case (correct me if I am wrong). I
>>> saw the gluster utility that you are a contributor for called gluster-block
>>> (https://github.com/gluster/gluster-block), and I had a question about
>>> the configuration. From what I understand, gluster-block only works on the
>>> servers that are serving the gluster volume. Would it be possible to run
>>> the gluster-block utility on a client machine that has a gluster volume
>>> mounted to it?
>>>
>>> Yes, that is right! At the moment gluster-block is coupled with
>>> glusterd for simplicity.
>>> But we have made some changes here [1] to provide a way to specify
>>> server address (volfile-server) which is outside the gluster-blockd
>>> node, please take a look.
>>>
>>> Although it is not complete solution, but it should at-least help for
>>> some usecases. Feel free to raise an issue [2] with the details about
>>> your usecase and etc or submit a PR by your self :-)
>>> We never picked it, as we never have a usecase needing separation of
>>> gluster-blockd and glusterd.
>>>
>>> >
>>> > I also have another question: how do I make the iSCSI targets persist
>>> if all of the gluster nodes were rebooted? It seems like once all of the
>>> nodes reboot, I am unable to reconnect to the iSCSI targets created by the
>>> gluster-block utility.
>>>
>>> do you mean rebooting iscsi initiator ? or gluster-block/gluster
>>> target/server nodes ?
>>>
>>> 1. for initiator to automatically connect to block devices post
>>> reboot, we need to make below changes in /etc/iscsi/iscsid.conf:
>>> node.startup = automatic
>>>
>>> 2. if you mean, just in case if all the gluster nodes goes down, on
>>> the initiator all the available HA path's will be down, but we still
>>> want the IO to be queued on the initiator, until one of the path
>>> (gluster node) is availabe:
>>>
>>> for this in gluster-block sepcific section of multipath.conf you need
>>> to replace 'no_path_retry 120' as 'no_path_retry queue'
>>> Note: refer README for current multipath.conf setting recommendations.
>>>
>>> [1] https://github.com/gluster/gluster-block/pull/161
>>> [2] https://github.com/gluster/gluster-block/issues/new
>>>
>>> BRs,
>>> --
>>> Prasanna
>>>
>>
>>
>> --
>>
>> Thank you,
>>
>> *Karim Roumani*
>> Director of Technology Solutions
>>
>> TekReach Solutions / Albatross Cloud
>> 714-916-5677
>> karim.roum...@tekreach.com
>> Albatross.cloud <https://albatross.cloud/> - One Stop Cloud Solutions
>> Portalfronthosting.com <http://portalfronthosting.com/> - Complete
>> SharePoint Solutions
>>
>
>
> --
>
> Thank you,
>
> *Karim Roumani*
> Director of Technology Solutions
>
> TekReach Solutions / Albatross Cloud
> 714-916-5677
> karim.roum...@tekreach.com
> Albatross.cloud <https://albatross.cloud/> - One Stop Cloud Solutions
> Portalfronthosting.com <http://portalfronthosting.com/> - Complete
> SharePoint Solutions
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Help: gluster-block

2019-03-25 Thread Prasanna Kalever
[ adding +gluster-users for archive purpose ]

On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin  wrote:
>
> Hello Mr. Kalever,

Hello Jeffrey,

>
> I am currently working on a project to utilize GlusterFS for VMWare VMs. In 
> our research, we found that utilizing block devices with GlusterFS would be 
> the best approach for our use case (correct me if I am wrong). I saw the 
> gluster utility that you are a contributor for called gluster-block 
> (https://github.com/gluster/gluster-block), and I had a question about the 
> configuration. From what I understand, gluster-block only works on the 
> servers that are serving the gluster volume. Would it be possible to run the 
> gluster-block utility on a client machine that has a gluster volume mounted 
> to it?

Yes, that is right! At the moment gluster-block is coupled with
glusterd for simplicity.
But we have made some changes here [1] to provide a way to specify a
server address (volfile-server) that is outside the gluster-blockd
node; please take a look.

Although it is not a complete solution, it should at least help for
some use cases. Feel free to raise an issue [2] with the details about
your use case, or submit a PR yourself :-)
We never picked this up, as we never had a use case needing separation
of gluster-blockd and glusterd.

>
> I also have another question: how do I make the iSCSI targets persist if all 
> of the gluster nodes were rebooted? It seems like once all of the nodes 
> reboot, I am unable to reconnect to the iSCSI targets created by the 
> gluster-block utility.

Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
target/server nodes?

1. For the initiator to automatically reconnect to block devices after a
reboot, we need to make the below change in /etc/iscsi/iscsid.conf:
node.startup = automatic

2. If you mean the case where all the gluster nodes go down: on the
initiator all the available HA paths will be down, but if we still want
the IO to be queued on the initiator until one of the paths (gluster
nodes) is available again, then in the gluster-block specific section of
multipath.conf you need to replace 'no_path_retry 120' with
'no_path_retry queue'. A rough sketch of both settings follows below.
Note: refer to the README for the current multipath.conf setting
recommendations.
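
To make that concrete, here is a rough initiator-side sketch (the portal
address is made up, and the multipath device section is only an outline;
the README's recommended settings remain the authoritative reference):

# iscsiadm -m discovery -t st -p 192.168.1.11
# iscsiadm -m node -l
# iscsiadm -m node -o update -n node.startup -v automatic

/etc/multipath.conf (gluster-block specific section, queueing variant):
devices {
        device {
                vendor "LIO-ORG"
                path_grouping_policy "failover"
                no_path_retry queue
        }
}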

[1] https://github.com/gluster/gluster-block/pull/161
[2] https://github.com/gluster/gluster-block/issues/new

BRs,
--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Network Block device (NBD) on top of glusterfs

2019-03-21 Thread Prasanna Kalever
On Thu, Mar 21, 2019 at 6:31 PM Xiubo Li  wrote:

> On 2019/3/21 18:09, Prasanna Kalever wrote:
>
>
>
> On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li  wrote:
>
>> All,
>>
>> I am one of the contributor for gluster-block
>> <https://github.com/gluster/gluster-block>[1] project, and also I
>> contribute to linux kernel and open-iscsi <https://github.com/open-iscsi>
>> project.[2]
>>
>> NBD was around for some time, but in recent time, linux kernel’s Network
>> Block Device (NBD) is enhanced and made to work with more devices and also
>> the option to integrate with netlink is added. So, I tried to provide a
>> glusterfs client based NBD driver recently. Please refer github issue
>> #633 <https://github.com/gluster/glusterfs/issues/633>[3], and good news
>> is I have a working code, with most basic things @ nbd-runner project
>> <https://github.com/gluster/nbd-runner>[4].
>>
>> While this email is about announcing the project, and asking for more
>> collaboration, I would also like to discuss more about the placement of the
>> project itself. Currently nbd-runner project is expected to be shared by
>> our friends at Ceph project too, to provide NBD driver for Ceph. I have
>> personally worked with some of them closely while contributing to
>> open-iSCSI project, and we would like to take this project to great success.
>>
>> Now few questions:
>>
>>1. Can I continue to use http://github.com/gluster/nbd-runner as home
>>for this project, even if its shared by other filesystem projects?
>>
>>
>>- I personally am fine with this.
>>
>>
>>1. Should there be a separate organization for this repo?
>>
>>
>>- While it may make sense in future, for now, I am not planning to
>>start any new thing?
>>
>> It would be great if we have some consensus on this soon as nbd-runner is
>> a new repository. If there are no concerns, I will continue to contribute
>> to the existing repository.
>>
>
> Thanks Xiubo Li, for finally sending this email out. Since this email is
> out on gluster mailing list, I would like to take a stand from gluster
> community point of view *only* and share my views.
>
> My honest answer is "If we want to maintain this within gluster org, then
> 80% of the effort is common/duplicate of what we did all these days with
> gluster-block",
>
> The great idea came from Mike Christie days ago and the nbd-runner
> project's framework is initially emulated from tcmu-runner. This is why I
> name this project as nbd-runner, which will work for all the other
> Distributed Storages, such as Gluster/Ceph/Azure, as discussed with Mike
> before.
>
> nbd-runner(NBD proto) and tcmu-runner(iSCSI proto) are almost the same and
> both are working as lower IO(READ/WRITE/...) stuff, not the management
> layer like ceph-iscsi-gateway and gluster-block currently do.
>
> Currently since I only implemented the Gluster handler and also using the
> RPC like glusterfs and gluster-block, most of the other code (about 70%) in
> nbd-runner are for the NBD proto and these are very different from
> tcmu-runner/glusterfs/gluster-block projects, and there are many new
> features in NBD module that not yet supported and then there will be more
> different in future.
>
> The framework coding has been done and the nbd-runner project is already
> stable and could already work well for me now.
>
> like:
> * rpc/socket code
> * cli/daemon parser/helper logics
> * gfapi util functions
> * logger framework
> * inotify & dyn-config threads
>
> Yeah, these features were initially from tcmu-runner project, Mike and I
> coded two years ago. Currently nbd-runner also has copied them from
> tcmu-runner.
>

I don't think tcmu-runner has any of:

-> cli/daemon approach routines
-> low-level rpc clnt/svc routines
-> gfapi-level file create/delete util functions
-> JSON parser support
-> socket bind/listener related functionality
-> the autoMake build framework, and
-> many other maintenance files

I could actually go into detail and furnish a long list of the references
made here, and you cannot deny the fact, but it is **all okay** to take
references from other, similar projects. My intention was not to point out
the copying made here, but rather to say that we are just wasting our
efforts rewriting, copy-pasting, maintaining and fixing the same
functionality framework.

Again, the point I'm trying to make is: if at all you want to maintain an
NBD client as part of gluster.org, why not use gluster-block itself, which
is well tested and stable enough?

Apart from all the examples I have mentioned in my previous thread, there
are

Re: [Gluster-users] [Gluster-devel] Network Block device (NBD) on top of glusterfs

2019-03-21 Thread Prasanna Kalever
On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li  wrote:

> All,
>
> I am one of the contributor for gluster-block
> [1] project, and also I
> contribute to linux kernel and open-iscsi 
> project.[2]
>
> NBD was around for some time, but in recent time, linux kernel’s Network
> Block Device (NBD) is enhanced and made to work with more devices and also
> the option to integrate with netlink is added. So, I tried to provide a
> glusterfs client based NBD driver recently. Please refer github issue #633
> [3], and good news is I
> have a working code, with most basic things @ nbd-runner project
> [4].
>
> While this email is about announcing the project, and asking for more
> collaboration, I would also like to discuss more about the placement of the
> project itself. Currently nbd-runner project is expected to be shared by
> our friends at Ceph project too, to provide NBD driver for Ceph. I have
> personally worked with some of them closely while contributing to
> open-iSCSI project, and we would like to take this project to great success.
>
> Now few questions:
>
>1. Can I continue to use http://github.com/gluster/nbd-runner as home
>for this project, even if its shared by other filesystem projects?
>
>
>- I personally am fine with this.
>
>
>1. Should there be a separate organization for this repo?
>
>
>- While it may make sense in future, for now, I am not planning to
>start any new thing?
>
> It would be great if we have some consensus on this soon as nbd-runner is
> a new repository. If there are no concerns, I will continue to contribute
> to the existing repository.
>

Thanks Xiubo Li, for finally sending this email out. Since this email is
out on the gluster mailing list, I would like to take a stand from the
gluster community point of view *only* and share my views.

My honest answer is: "If we want to maintain this within the gluster org,
then 80% of the effort is common with/duplicates what we have done all
these days with gluster-block",

like:
* rpc/socket code
* cli/daemon parser/helper logics
* gfapi util functions
* logger framework
* inotify & dyn-config threads
* configure/Makefile/specfiles
* docsAboutGluster and etc ..

The gluster-block repository is actually the home for all the block-related
stuff within gluster, and it is designed to accommodate similar
functionality. If I were you, I would simply have copied nbd-runner.c into
https://github.com/gluster/gluster-block/tree/master/daemon/ just like Ceph
does it at
https://github.com/ceph/ceph/blob/master/src/tools/rbd_nbd/rbd-nbd.cc and
been done with it.

Advantages of keeping the nbd client within gluster-block:
-> No worry about the code maintenance burden
-> No worry about monitoring a new component
-> Shipping packages to fedora/centos/rhel is already handled
-> It helps improve and stabilize the current gluster-block framework
-> We can build a common CI
-> We can reuse the common test framework, etc.

If you have the impression that gluster-block is only for management, then
I would really want to correct you at this point.

Some of my near-future plans for gluster-block:
* Allow exporting blocks with FUSE access via the fileIO backstore to
improve large-file workloads, draft:
https://github.com/gluster/gluster-block/pull/58
* Accommodate kernel loopback handling for local-only applications
* In the same way we can accommodate an nbd app/client, and IMHO this
effort shouldn't take more than 1 or 2 days to get merged within
gluster-block and be ready for a release.


Hope that clarifies it.


Best Regards,
--
Prasanna


> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster-block Dev Stream Update

2018-10-08 Thread Prasanna Kalever
Hello Community!

Starting this week, we will be sending a (bi)weekly update about the
gluster-block development activities.

For someone who is catching up with gluster's block storage project
for the first time, here is everything that you need to know:
- https://github.com/gluster/gluster-block/blob/master/README.md

Dev update for the week:

What is done:
-> Dynamic config reloading feature
 - https://github.com/gluster/gluster-block/pull/88
-> CLI audit log feature
 - https://github.com/gluster/gluster-block/pull/83
-> Defending on minimum kernel version at various distros
 - https://github.com/gluster/gluster-block/pull/119

What is in-progress:
-> Design about various locking api's support required by ALUA feature
 - https://github.com/gluster/glusterfs/issues/466
 - https://github.com/gluster/gluster-block/issues/53
-> Design about block configuration self heal feature
 - TODO: Will share the link in the next update
-> Get rid of huge buffer allocation for reading the configuration file
 - https://github.com/gluster/gluster-block/pull/123
-> Dump all failure msgs to stderr
 - https://github.com/gluster/gluster-block/pull/121
-> Support new glibc versions by adopting libtirpc for fedora >=28
 - https://github.com/gluster/gluster-block/pull/57

What is coming up-next:
-> v0.4 release of gluster-block, mainly waiting on
 - https://github.com/gluster/gluster-block/pull/57
-> Package update for fedora 28
 - currently waiting on dependent projects package updates

How can one be part of gluster-block:
-> Share your experience with gluster-block:
 - More information about how to use/test can be found via '# man
gluster-block' or by referring to basic.t
-> Report new issues or submit a pull request for existing issues:
 - https://github.com/gluster/gluster-block/issues


Cheers!
--
Gluster-block Team.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] gluster-block v0.3 is alive!

2017-10-16 Thread Prasanna Kalever
Hello Gluster folks,

We are happy to announce the release of gluster-block [1] v0.3. Please
find the highlights and notable fixes that went into this release at [2].

The packages are made available on COPR for Fedora users [3]. For
other distributions, one can easily compile it from source. Details
about installation can be found in the easy install guide at [4]. The
source tarball and community-provided packages will be available soon
at [5].

The next release, v0.4, is planned to have some exciting features and
improvements. Here is a potential list of features for v0.4:
* Replace-node feature to substitute a faulty node
* Support for snapshots
* Ability to resize existing block devices
* Performance improvements (IO and management)
* Containerized gluster-block, and more!

gluster-block is now part of the Fedora package collection (f26); an
updated package with release version v0.3 will soon be made available.

Please report any issues that you observe using [6].

We look forward to your feedback to help us get better!


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/blob/master/NEWS
[3] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/637643/
[4] https://github.com/gluster/gluster-block/blob/master/INSTALL
[5] https://download.gluster.org/pub/gluster/gluster-block/
[6] https://github.com/gluster/gluster-block/issues/new


Thanks!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs expose iSCSI

2017-09-13 Thread Prasanna Kalever
On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr  wrote:
> Hi all
>

Hi GiangCoi,

The good news is that we now have gluster-block [1], which makes
configuring block storage using gluster very easy.
gluster-block takes care of all the targetcli and tcmu-runner
configuration for you; all you need as a prerequisite is a gluster
volume (a sketch follows below).
The sad part is that we haven't tested gluster-block on CentOS, but
compiling it from source should just work, IMO.
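
For instance, the prerequisite volume can be something as simple as the
following (the volume name, hostnames and brick paths are made up here):

# gluster volume create block-test replica 3 node1:/bricks/b1 \
      node2:/bricks/b1 node3:/bricks/b1
# gluster volume start block-test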

> I want to configure glusterfs to expose iSCSI target. I followed this
> artical
> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
> but when I install tcmu-runner. It doesn't work.

What is your environment? Do you want to set up gluster block storage in
a container environment, or is it just a non-container CentOS
environment?

>
> I setup on CentOS7 and installed tcmu-runner by rpm. When I run targetcli,
> it not show user:glfs and user:gcow
>
> /> ls
> o- / .. [...]
>   o- backstores ... [...]
>   | o- block ... [Storage Objects: 0]
>   | o- fileio .. [Storage Objects: 0]
>   | o- pscsi ... [Storage Objects: 0]
>   | o- ramdisk . [Storage Objects: 0]
>   o- iscsi . [Targets: 0]
>   o- loopback .. [Targets: 0]
>

BTW, have you started tcmu-runner.service?
If your tcmu-runner service is running but you still cannot see the user
handlers listed in the 'targetcli ls' output, then it looks like your
handlers were not loaded properly.

In Fedora, the default handler location is /usr/lib64/tcmu-runner:

[0] ॐ 04:55:22@~ $ ls /usr/lib64/tcmu-runner/
handler_glfs.so

Just try using the --handler-path option:
[0] ॐ 04:56:05@~ $ tcmu-runner --handler-path /usr/lib64/tcmu-runner/ &

[0] ॐ 05:00:54@~ $ targetcli ls | grep glfs
  | o- user:glfs
..
[Storage Objects: 0]

If that works, then maybe you can tweak the systemd unit, in case you
want to run it as a service.
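
A quick way to sanity-check the packaged service as well (unit names below
are as shipped on Fedora and may differ on other distros):

# systemctl enable --now tcmu-runner
# systemctl status tcmu-runner
# journalctl -u tcmu-runner     (look for messages about handlers being loaded)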

> How I configure glusterfs to expose iSCSI. Please help me.

Feel free to go through the gluster-block README [2].


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/blob/master/README.md

Cheers!
--
Prasanna

>
> Regards,
>
> Giang
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster-block v0.2.1 is alive!

2017-06-07 Thread Prasanna Kalever
Hello Gluster folks,

gluster-block [1] release 0.2.1 is tagged; this release is more
focused on bug fixing. All the documents are updated and packages are
made available on COPR for Fedora users [2].

However, for other distros one can easily compile it from source; find
the install guide at [3].

The source tar file and community-provided packages will soon be made
available at [4].


Highlights:
-----------
* Implemented an LRU cache to hold glfs objects; this makes the CLI
commands run fast.
For example, on a single node the create command now takes ~1 sec,
while it previously took ~5 sec.

* The log severity level is now configurable.
Look for the --log-level option of the daemon and
'/etc/sysconfig/gluster-blockd'; see the sketch below.
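
For example, a sketch of running the daemon with more verbose logging (the
exact level names accepted are best checked with 'gluster-blockd --help'):

# gluster-blockd --log-level DEBUG

The same option can also be set in '/etc/sysconfig/gluster-blockd' so that
the systemd service picks it up; check that file's comments for the exact
variable name.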


Other Notable Fixes:
--------------------
* improvements in failure messages
* fix a heap buffer overflow
* prevent crashes when errMsg is not set
* print human-readable time-stamps in log files
* improved logging on the server side
* handle SIGPIPE in the daemon
* update journal-data/block meta-data synchronously
* reuse port 24006 (SO_REUSEADDR) on bind
* add a manual for gluster-blockd
* updated README
* and many more ...

Read more about gluster-block at [5].


Please report issues using [6].
Also do let us know your feedback and help us get better :-)

[1] https://github.com/gluster/gluster-block
[2] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/562504/
[3] https://github.com/gluster/gluster-block/blob/master/INSTALL
[4] https://download.gluster.org/pub/gluster/gluster-block/
[5] https://github.com/gluster/gluster-block/blob/master/README.md
[6] https://github.com/gluster/gluster-block/issues/new


Thanks!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Elasticsearch with gluster-block

2017-03-17 Thread Prasanna Kalever
Hi,

Here are a few performance numbers that were taken a few months ago [1].

Find the configuration details at [2].

I hope these give you some idea.

Note: these are performance numbers taken with the IOzone tool on the
mount points (FUSE vs iSCSI). At least for now I don't have plans to
measure performance on the Elasticsearch engine itself; in case you get a
chance to do so, please drop by with the results.

[1] http://htmlpreview.github.io/?https://github.com/pkalever/iozone_results_gluster/blob/master/block-store/iscsi-fuse-virt-mpath-3/html_out/index.html
[2] https://github.com/pkalever/iozone_results_gluster/commit/2392a76cfa4a08c94602ed5b9201760d828c0ba2

Cheers,
--
prasanna

On Thu, Mar 16, 2017 at 12:49 PM, vincent gromakowski
<vincent.gromakow...@gmail.com> wrote:
> Hi
> What is the order of magnitude in performance decrease compared to local
> storage ?
>
> Le 16 mars 2017 7:53 AM, "Prasanna Kalever" <pkale...@redhat.com> a écrit :
>>
>> If you have missed our post on "Elasticsearch with gluster-block" in
>> social media feeds, then here is the nexus [1]
>>
>> [1]
>> https://pkalever.wordpress.com/2017/03/14/elasticsearch-with-gluster-block/
>>
>>
>> Cheers!
>> --
>> prasanna
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Demo in community meetings

2017-03-14 Thread Prasanna Kalever
Thanks for the opportunity.

I will be happy to stream a demo on 'howto gluster-block' tomorrow.

--
Prasanna

On Mon, Mar 13, 2017 at 8:45 AM, Vijay Bellur  wrote:
> Hi All,
>
> In the last meeting of maintainers, we discussed about reserving 15-30
> minutes in the community meeting for demoing new functionalities on anything
> related to Gluster. If you are working on something new or possess
> specialized knowledge of some intricate functionality, then this slot would
> be a great opportunity for sharing that with the community and obtaining
> real time feedback from seasoned Gluster folks in the meeting.
>
> Given that the slot is for 15-30 minutes, we would be able to accommodate
> 1-2 demos per meeting. This demo will happen over bluejeans and the URL
> would be available in the agenda for the meeting. If you are interested in
> kickstarting the demo series this week, please respond on this thread and
> let us know.
>
> Thanks!
> Vijay
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing gluster-block v0.1

2017-03-02 Thread Prasanna Kalever
Heads up!

Packages are made available at download.gluster.org [1]

[1] https://download.gluster.org/pub/gluster/gluster-block/


Cheers.

On Thu, Mar 2, 2017 at 11:47 PM, Prasanna Kalever <pkale...@redhat.com> wrote:
> gluster-block [1] is a block device management framework which aims at
> making gluster backed block storage creation and maintenance as simple
> as possible. With this release, gluster-block provisions block devices
> and exports them using iSCSI. Read about usage, examples and more at
> [2]
>
> The initial gluster-block is ready with its v0.1 tagging and the
> packages are available at copr for fedora users [3]
>
> However one can compile it from source, find the install guide at [4]
>
> The source tar file and community provided packages will be soon made
> available at download.gluster.org.
>
> We will be iterating and improve gluster-block with a release every
> 2-3 weeks. Please let us know your feedback and help us get better :-)
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/blob/master/README.md
> [3] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/520204/
> [4] https://github.com/gluster/gluster-block/blob/master/INSTALL
>
>
> Thanks!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Announcing gluster-block v0.1

2017-03-02 Thread Prasanna Kalever
gluster-block [1] is a block device management framework which aims at
making gluster-backed block storage creation and maintenance as simple
as possible. With this release, gluster-block provisions block devices
and exports them using iSCSI. Read about usage, examples and more at
[2].

The initial gluster-block is ready with its v0.1 tagging, and the
packages are available on COPR for Fedora users [3].

However, one can also compile it from source; find the install guide at [4].

The source tar file and community-provided packages will soon be made
available at download.gluster.org.

We will be iterating and improving gluster-block with a release every
2-3 weeks. Please let us know your feedback and help us get better :-)


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/blob/master/README.md
[3] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/520204/
[4] https://github.com/gluster/gluster-block/blob/master/INSTALL


Thanks!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Elasticsearch with Gluster Block Storage

2016-11-17 Thread Prasanna Kalever
Hi,

Here is the blog about Elasticsearch using gluster block storage as the
persistent store for its backend search engine indexing.

https://pkalever.wordpress.com/2016/11/18/elasticsearch-with-gluster-block-storage


Thanks,
--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gfapi memleaks, all versions

2016-11-08 Thread Prasanna Kalever
Apologies for the delay in response; it took me a while to switch over here.

As someone rightly pointed out in the discussion above, the start and stop
of a VM via libvirt (virsh) will result in at least 2
glfs_new/glfs_init/glfs_fini call sequences.
In fact there are 3 such sequences involved: 2 (mostly for stat, reading
headers and chown) in the libvirt context and 1 (the actual read/write IO)
in the qemu context. Since qemu is forked out and executed in its own
process memory context, that one will not incur a leak in libvirt; also,
on stop of the VM the qemu process dies.
That's not all: in case we are using 4 extra attached disks, the total
number of glfs_* call sequences will be (4+1)*2 in libvirt and (4+1)*1 in
qemu space, i.e. 15.

What has been done so far in QEMU:
I have submitted a patch to qemu to cache the glfs object, so there will
be one glfs object per volume; the glfs_* calls are thus reduced from N
(in the above case 4+1=5) to 1 per volume.
This optimizes performance by reducing the number of calls, reduces the
memory consumption (as each instance occupies ~300MB VSZ), and reduces
the leak (~7-10 MB per call).
Note this patch is already in master [1].

What about libvirt then?
Much the same here: I am planning to cache the connections (the glfs
objects) until all the disks are initialized, finally followed by a
glfs_fini().
Thereby we reduce N * 2 calls (in the above case (4+1)*2 = 10) to 1.
Work on this change is in progress; expect it by the end of the week,
mostly.


[1] https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg07087.html


--
Prasanna



On Thu, Oct 27, 2016 at 12:23 PM, Pranith Kumar Karampuri
 wrote:
> +Prasanna
>
> Prasanna changed qemu code to reuse the glfs object for adding multiple
> disks from same volume using refcounting. So the memory usage went down from
> 2GB to 200MB in the case he targetted. Wondering if the same can be done for
> this case too.
>
> Prasanna could you let us know if we can use refcounting even in this case.
>
>
> On Wed, Sep 7, 2016 at 10:28 AM, Oleksandr Natalenko
>  wrote:
>>
>> Correct.
>>
>> On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri
>>  wrote:
>> >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko <
>> >oleksa...@natalenko.name> wrote:
>> >
>> >> Hello,
>> >>
>> >> thanks, but that is not what I want. I have no issues debugging gfapi
>> >apps,
>> >> but have an issue with GlusterFS FUSE client not being handled
>> >properly by
>> >> Massif tool.
>> >>
>> >> Valgrind+Massif does not handle all forked children properly, and I
>> >believe
>> >> that happens because of some memory corruption in GlusterFS FUSE
>> >client.
>> >>
>> >
>> >Is this the same libc issue that we debugged and provided with the
>> >option
>> >to avoid it?
>> >
>> >
>> >>
>> >> Regards,
>> >>   Oleksandr
>> >>
>> >> On субота, 3 вересня 2016 р. 18:21:59 EEST feihu...@sina.com wrote:
>> >> >  Hello,  Oleksandr
>> >> > You can compile that simple test code posted
>> >> > here(http://www.gluster.org/pipermail/gluster-users/2016-
>> >> August/028183.html
>> >> > ). Then, run the command
>> >> > $>valgrind cmd: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind
>> >> > --tool=massif  ./glfsxmp the cmd will produce a file like
>> >> massif.out.,
>> >> >  the file is the memory leak log file , you can use ms_print tool
>> >as
>> >> below
>> >> > command $>ms_print  massif.out.
>> >> > the cmd will output the memory alloc detail.
>> >> >
>> >> > the simple test code just call glfs_init and glfs_fini 100 times to
>> >found
>> >> > the memory leak,  by my test, all xlator init and fini is the main
>> >memory
>> >> > leak function. If you can locate the simple code memory leak code,
>> >maybe,
>> >> > you can locate the leak code in fuse client.
>> >> >
>> >> > please enjoy.
>> >>
>> >>
>> >> ___
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org
>> >> http://www.gluster.org/mailman/listinfo/gluster-users
>> >>
>>
>
>
>
> --
> Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Block storage with Qemu-Tcmu

2016-11-07 Thread Prasanna Kalever
[Top posting]

I am planning to write a short blog to answer a few similar questions
that I received after posting this blog.

Is the iSCSI stack obligatory for the block store?
The answer is no.

It basically depends on the use case and the choice. If we can run/manage
the target emulation on the client side, we don't have to bring the iSCSI
stack into the picture.
We simply export the LUN using a loopback device, i.e. after creating the
backend with the qemu-tcmu storage module, we can directly export the
target via loopback instead of iSCSI.
So in this case we don't see the overheads of the iSCSI layers, but IMO
the overhead with iSCSI can be very minimal; maybe I need the performance
numbers to prove it (will spin up a benchmark soon).

I have done some basic benchmarking, taking a FUSE mount as the baseline
and an iSCSI target exposed via tcmu-runner as the target; you can find
the results at [1].
You can find more benchmarks at [2]; the commit messages should explain
the configurations.

Hope that answers most of your questions :)

[1] https://htmlpreview.github.io/?https://github.com/pkalever/iozone_results_gluster/blob/master/block-store/iscsi-fuse-1/html_out/index.html
[2] https://github.com/pkalever/iozone_results_gluster/blob/master/block-store/

--
Prasanna



On Mon, Nov 7, 2016 at 2:23 PM, Gandalf Corvotempesta
 wrote:
> Il 07 nov 2016 09:23, "Lindsay Mathieson"  ha
> scritto:
>>
>> From a quick scan, there doesn't seem to be any particular advantage
>> over qemu using gfapi directly? Is this more aimed at apps that can't
>> use gfapi such as vmware or as a replacement for NFS?
>>
>
> Dump question:  why should i use a block storage replacing nfs?
> Nfs-ganesha makes use of libgfapi, block storage does the same but also need
> the whole iscsi stack so performance could be lower
>
> If i don't need direct access to a block device on the client (in example
> for creating custom FS or LVM and so on), the nfs ganesha should be a better
> approach, right?
>
> Anyone compared performances between:
>
> 1. Fuse mount
> 2. Nfs
> 3. Nfs ganesha
> 4. Qemu direct access via gfapi
> 5. Iscsi
>
> ?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Block storage with Qemu-Tcmu

2016-11-07 Thread Prasanna Kalever
On Mon, Nov 7, 2016 at 1:53 PM, Lindsay Mathieson
<lindsay.mathie...@gmail.com> wrote:
> On 7 November 2016 at 17:01, Prasanna Kalever <pkale...@redhat.com> wrote:
>> Yet another approach to achieve Gluster Block Storage is with Qemu-Tcmu.
>
>
> Thanks Prasanna, interesting reading.
>
> From a quick scan, there doesn't seem to be any particular advantage
> over qemu using gfapi directly? Is this more aimed at apps that can't
> use gfapi such as vmware or as a replacement for NFS?
>

As mentioned in the conclusion part of the blog, the advantage here is
easy snapshots.
Qemu-tcmu will come up with a '--snapshot' option (work still in
progress), much like qemu-img.
Supporting this within gluster would need additional maintenance of the
qemu-block xlator, which is a clone of the qcow2 spec implementation, and
that could be more work.

Also, the qemu gluster protocol driver (gfapi access) is more mature and
better tested.

--
Prasanna

> --
> Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Block storage with Qemu-Tcmu

2016-11-06 Thread Prasanna Kalever
Hi,

Yet another approach to achieve Gluster Block Storage is with Qemu-Tcmu.

More about Qemu-Tcmu, its progress, merits and procedure details can
be found at [1]

[1] https://pkalever.wordpress.com/2016/11/04/gluster-as-block-storage-with-qemu-tcmu/

--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access

2016-11-03 Thread Prasanna Kalever
Hi,

After our past two days of investigation, this is no longer a new/fresh bug :)

The cause is a double unref of the fd, introduced in 3.8.5 with [1].

We have investigated this thoroughly, and the fix [2] is likely to land
in the next gluster update.

[1] http://review.gluster.org/#/c/15585
[2] http://review.gluster.org/#/c/15768/

--
Prasanna


On Thu, Nov 3, 2016 at 4:34 PM, Radu Radutiu  wrote:
> Hi,
>
> After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the
> KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using
> libgfapi are no longer able to start. The libvirt log file shows:
>
> [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
> 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
> [2016-11-02 14:26:41.864075] I [MSGID: 114020] [client.c:2356:notify]
> 0-testvol-client-0: parent translators are ready, attempting connect on
> transport
> [2016-11-02 14:26:41.882975] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
> 0-testvol-client-0: changing port to 49152 (from 0)
> [2016-11-02 14:26:41.889362] I [MSGID: 114057]
> [client-handshake.c:1446:select_server_supported_programs]
> 0-testvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2016-11-02 14:26:41.890001] I [MSGID: 114046]
> [client-handshake.c:1222:client_setvolume_cbk] 0-testvol-client-0: Connected
> to testvol-client-0, attached to remote volume '/data/brick1/testvol'.
> [2016-11-02 14:26:41.890035] I [MSGID: 114047]
> [client-handshake.c:1233:client_setvolume_cbk] 0-testvol-client-0: Server
> and Client lk-version numbers are not same, reopening the fds
> [2016-11-02 14:26:41.917990] I [MSGID: 114035]
> [client-handshake.c:201:client_set_lk_version_cbk] 0-testvol-client-0:
> Server lk version = 1
> [2016-11-02 14:26:41.919289] I [MSGID: 104041]
> [glfs-resolve.c:885:__glfs_active_subvol] 0-testvol: switched to graph
> 73332d32-3937-3130-2d32-3031362d3131 (0)
> [2016-11-02 14:26:41.922174] I [MSGID: 114021] [client.c:2365:notify]
> 0-testvol-client-0: current graph is no longer active, destroying rpc_client
> [2016-11-02 14:26:41.922269] I [MSGID: 114018]
> [client.c:2280:client_rpc_notify] 0-testvol-client-0: disconnected from
> testvol-client-0. Client process will keep trying to connect to glusterd
> until brick's port is available
> [2016-11-02 14:26:41.922592] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
> [2016-11-02 14:26:41.923044] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-gfapi: size=188 max=2 total=2
> [2016-11-02 14:26:41.923419] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-gfapi: size=140 max=2 total=2
> [2016-11-02 14:26:41.923442] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-client-0: size=1324 max=2
> total=5
> [2016-11-02 14:26:41.923458] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-dht: size=1148 max=0 total=0
> [2016-11-02 14:26:41.923546] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-dht: size=3380 max=2 total=5
> [2016-11-02 14:26:41.923815] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-read-ahead: size=188 max=0
> total=0
> [2016-11-02 14:26:41.923832] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-readdir-ahead: size=60 max=0
> total=0
> [2016-11-02 14:26:41.923844] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-io-cache: size=68 max=0 total=0
> [2016-11-02 14:26:41.923856] I [MSGID: 101053]
> [mem-pool.c:617:mem_pool_destroy] 0-testvol-io-cache: size=252 max=1 total=3
> [2016-11-02 14:26:41.923877] I [io-stats.c:3747:fini] 0-testvol: io-stats
> translator unloaded
> [2016-11-02 14:26:41.924191] I [MSGID: 101191]
> [event-epoll.c:659:event_dispatch_epoll_worker] 0-epoll: Exited thread with
> index 2
> [2016-11-02 14:26:41.924232] I [MSGID: 101191]
> [event-epoll.c:659:event_dispatch_epoll_worker] 0-epoll: Exited thread with
> index 1
> 2016-11-02T14:26:42.825041Z qemu-kvm: -drive
> file=gluster://s3/testvol/c7.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none:
> Could not read L1 table: Bad file descriptor
>
> The brick is available , runs on the same host  and mounted in another
> directory using fuse (to confirm that it is indeed fine).
> If I downgrade the gluster server to 3.8.4 everything works fine. Anyone has
> seen this or has any idea how to debug?
>
> Regards,
> Radu
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Block storage

2016-10-03 Thread Prasanna Kalever
On Mon, Oct 3, 2016 at 12:40 PM, Gandalf Corvotempesta
 wrote:
> Il 03 ott 2016 08:48, "Pranith Kumar Karampuri"  ha
> scritto:
>>
>> It is in early development phase. If you don't want snapshot
>> functionality, then I think the work is reasonably ready. Do let us know if
>> you want to take it for a spin and give us feedback.
>
> Any guide on how to configure it?

Here are some POC guides for using the block store in containers:

Docker: 
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
Kubernetes: 
https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
Openshift: 
https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/

Performance numbers measured can be found at:
htmlpreview.github.io?https://github.com/pkalever/iozone_results_gluster/blob/master/block-store/iscsi-fuse-virt-mpath-shard-4/html_out/index.html

Let me know how this goes :)

Cheers,
--
Prasanna

>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Prasanna Kalever
On Fri, Sep 30, 2016 at 3:16 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Wed, Sep 28, 2016 at 10:09:34PM +0530, Prasanna Kalever wrote:
>> On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
>> <mvign...@redhat.com> wrote:
>> >
>> > Hi,
>> >
>> > This is an update to the previous mail about Fine graining of the
>> > GlusterFS upstream bugzilla components.
>> >
>> > Finally we have come out with a new structure that would help with easy
>> > access to the bug for both the reporter and the assignee.
>> >
>> > In the new structure we have decided to remove components that are
>> > listed as below -
>> >
>> > - BDB
>> > - HDFS
>> > - booster
>> > - coreutils
>> > - gluster-hdoop
>> > - gluster-hadoop-install
>> > - libglusterfsclient
>> > - map
>> > - path-converter
>> > - protect
>> > - qemu-block
>>
>> Well, we are working on bringing the qemu-block xlator back to life.
>> This is needed to achieve qcow2-based internal snapshots in the
>> gluster block store.
>
> We can keep this as a subcomponent for now.

What should be the main component in this case?

>
>> Take a look at  http://review.gluster.org/#/c/15588/  and dependent patches.
>
> Although we can take qemu-block back, we need a plan to address the
> copied qemu sources to handle the qcow2 format. Reducing the bundled
> sources (in contrib/) is important. Do you have a feature page in the
> glusterfs-specs repository that explains the usability of qemu-block? I
> have not seen a discussion on gluster-devel about this yet either,
> otherwise I would have replied there...

Yeah, I have already refreshed some part of the code (locally). The
current code is quite old (2013) and misses the compat 1.1 (qcow2v3)
features and much more. We are cross-checking the merits of using this
in the block store. Once we are in a state to say yes and continue with
this approach, I'm glad to take the initiative in refreshing the complete
source and flushing out the unused bundled code.

Well, I do not know of any qcow libraries other than [1], and I don't
think we have the option of keeping this outside the repo tree.

I don't currently have a feature page; I will update it after the summit
time frame, and I will also make a note to post the complete details to the
devel mailing list.

>
> Nobody used this before, and I wonder if we should not design and
> develop a standard file-snapshot functionality that is not dependent on
> qcow2 format.

IMO, that will take another year or more to bring into block-store use.


[1] https://github.com/libyal/libqcow

--
Prasanna

>
> Niels
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Prasanna Kalever
On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
 wrote:
>
> Hi,
>
> This is an update to the previous mail about Fine graining of the
> GlusterFS upstream bugzilla components.
>
> Finally we have come out with a new structure that would help with easy
> access to the bug for both the reporter and the assignee.
>
> In the new structure we have decided to remove components that are
> listed as below -
>
> - BDB
> - HDFS
> - booster
> - coreutils
> - gluster-hdoop
> - gluster-hadoop-install
> - libglusterfsclient
> - map
> - path-converter
> - protect
> - qemu-block

Well, we are working on bringing the qemu-block xlator back to life.
This is needed to achieve qcow2-based internal snapshots in the
gluster block store.

Take a look at  http://review.gluster.org/#/c/15588/  and dependent patches.

--
Prasanna

[...]
> Thanks and regards,
>
> Muthu Vigneshwaran & Niels de vos
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] CFP for Gluster Developer Summit

2016-08-16 Thread Prasanna Kalever
Hey All,

Here is my topic to present at the gluster summit:

Abstract:

Title: GLUSTER AS BLOCK STORE IN CONTAINERS

As we all know, containers are stateless entities which are used to
deploy applications and hence need persistent storage to keep
application data available across container incarnations.

Persistent storage in containers is of two types: shared and non-shared.

Shared storage:
Consider this as a volume/store where multiple Containers perform both
read and write operations on the same data. Useful for applications
like web servers that need to serve the same data from multiple
container instances.

Non-shared storage:
Only a single container can perform write operations to this store at
a given time.

This presentation intends to show/discuss how gluster plays the role of a
non-shared block store in containers.
It introduces the background terminology (LIO, iSCSI,
tcmu-runner, targetcli) and explains the solution for achieving 'Block
store in Containers using gluster', followed by a demo.

The demo will showcase a basic gluster setup (which could be elaborated,
based on the audience), then show nodes initiating the iSCSI session,
attaching the iSCSI target as a block device, and serving it to containers
where the application is running and requires persistent storage.
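
For context, the initiator-side flow shown in the demo looks roughly like
the following (a minimal sketch only; the portal address 192.168.122.10, the
device name /dev/sdb and the mount point are placeholders for whatever your
setup uses):

  # discover the target exported by the gluster/tcmu-runner node
  iscsiadm -m discovery -t sendtargets -p 192.168.122.10
  # log in; the LUN appears as a local block device, say /dev/sdb
  iscsiadm -m node --login
  # format it once and mount it where the container expects its data
  mkfs.xfs /dev/sdb
  mount /dev/sdb /mnt/block-store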

I will show working demos of its integration with:
* Docker
* Kubernetes
* OpenShift

The intention of this presentation is to get more feedback from people who
use similar solutions and also to learn about potential risks for better
defence.
While discussing the TODOs (access locking, encryption, snapshots,
etc.) we could gather some education around them.


Cheers,
--
Prasanna


On Tue, Aug 16, 2016 at 7:23 PM, Kaushal M  wrote:
> Okay. Here's another proposal from me.
>
> # GlusterFS Release process
> An overview of the GlusterFS release process
>
> The GlusterFS release process has been recently updated and been
> documented for the first time. In this presentation, I'll be giving an
> overview the whole release process including release types, release
> schedules, patch acceptance criteria and the release procedure.
>
> Kaushal
> kshlms...@gmail.com
> Process & Infrastructure
>
> On Mon, Aug 15, 2016 at 5:30 AM, Amye Scavarda  wrote:
>> Kaushal,
>>
>> That's probably best. We'll be able to track similar proposals here.
>> - amye
>>
>> On Sat, Aug 13, 2016 at 6:30 PM, Kaushal M  wrote:
>>>
>>> How do we submit proposals now? Do we just reply here?
>>>
>>>
>>> On 13 Aug 2016 03:49, "Amye Scavarda"  wrote:
>>>
>>> GlusterFS for Users
>>> "GlusterFS for users" introduces you with GlusterFS, it's terminologies,
>>> it's features and how to manage y GlusterFS cluster.
>>>
>>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>>> can create large, distributed storage solutions for media streaming, data
>>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>>> and open source software.
>>>
>>> This session is more intended for users/admins.
>>> Scope of this session :
>>>
>>> * What is Glusterfs
>>> * Glusterfs terminologies
>>> * Easy steps to get started with glusterfs
>>> * Volume topologies
>>> * Access protocols
>>> * Various features from user perspective :
>>> Replication, Data distribution, Geo-replication, Bit rot detection,
>>> data tiering,  Snapshot, Encryption, containerized glusterfs
>>> * Various configuration files
>>> * Various logs and their locations
>>> * Various custom profiles for specific use-cases
>>> * Collecting statedump and its usage
>>> * A few common problems like:
>>>1) replacing a faulty brick
>>>2) resolving split-brain
>>>3) peer disconnect issue
>>>
>>> Bipin Kunal
>>> bku...@redhat.com
>>> User Perspectives
>>>
>>> On Fri, Aug 12, 2016 at 3:18 PM, Amye Scavarda  wrote:

 Demo: Quickly set up a GlusterFS cluster
 This demo will let you understand how to set up a GlusterFS cluster and how
 to exploit its features.

 GlusterFS is a scalable network filesystem. Using commodity hardware, you
 can create large, distributed storage solutions for media streaming, data
 analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
 and open source software.

 This demo is intended for new users who are willing to set up a GlusterFS
 cluster.

 This demo will let you understand how to set up a GlusterFS cluster and how
 to exploit its features.

 Scope of this session :

 1) Install GlusterFS packages
 2) Create a trusted storage pool
 3) Create a GlusterFS volume
 4) Access GlusterFS volume using various protocols
a) FUSE b) NFS c) CIFS d) NFS-ganesha
 5) Using Snapshot
 6) Creating geo-rep session
 7) Adding/removing/replacing bricks
 8) Bit-rot detection and correction

 Bipin Kunal
 bku...@redhat.com
 User Perspectives

 On Fri, Aug 

Re: [Gluster-users] Using LIO with Gluster

2016-08-11 Thread Prasanna Kalever
On Fri, Aug 5, 2016 at 2:53 PM, luca giacomelli
 wrote:
> Hi, I'm trying to implement something similar to
> http://blog.gluster.org/2016/04/using-lio-with-gluster/ and
> https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
>
> Gluster 3.8.1 and Fedora 24
>
> gluster is up and running. The initiator discovers the target but is not able
> to find the disk. I successfully tested targetcli fileio with the same
> target and initiator.

Hey Luca,


Can you attach your /etc/target/saveconfig.json here?
I suspect the targetcli configuration.


Before that, some quick checks you can do:
* mount the gluster volume and see that the target file exists
* targetcli /backstores/user:glfs/glfsLUN info

If you notice the correct signs above, try restarting the
tcmu-runner.service and target.service.
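
Putting those checks together, the sequence would look roughly like this (a
minimal sketch; the volume name 'blockvol', the mount point and the backstore
name 'glfsLUN' are placeholders, substitute your own):

  # mount the gluster volume and confirm the backing file is there
  mount -t glusterfs localhost:/blockvol /mnt/blockvol
  ls -l /mnt/blockvol
  # inspect the user:glfs backstore object
  targetcli /backstores/user:glfs/glfsLUN info
  # if both look sane, restart the services driving the target
  systemctl restart tcmu-runner.service target.service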

I remember a similar situation on my side where nothing above worked;
I rebooted the target nodes and flushed the iptables, and then it
worked.


Cheers,
--
Prasanna

>
> I tried also with tcmu-runner 1.1.0
>
> Any help would be appreciated.
>
> Thanks, Luca
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Block storage

2016-07-27 Thread Prasanna Kalever
On Tue, Jul 19, 2016 at 12:52 AM, Gandalf Corvotempesta
 wrote:
> Is block storage xlator stable and usable in production?
> Any docs about this?

We are planning to get the block story done by gelling gluster with the
tcmu (user-space LIO) framework.
You can also read more about the same at [1] & [2].

[1] 
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
[2] 
https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/

--
Prasanna

>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread Prasanna Kalever
On Wed, Jun 15, 2016 at 2:41 PM, André Bauer  wrote:
>
> Hi Lists,
>
> I just updated one of my Ubuntu KVM servers from 14.04 (Trusty) to 16.04
> (Xenial).
>
> I use the GlusterFS packages from the official Ubuntu PPA and my own
> Qemu packages (
> https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 )
> which have libgfapi enabled.
>
> On Ubuntu 14.04 everything is working fine. I only had to add the
> following lines to the Apparmor config in
> /etc/apparmor.d/abstractions/libvirt-qemu to get it to work:
>
> # for glusterfs
> /proc/sys/net/ipv4/ip_local_reserved_ports r,
> /usr/lib/@{multiarch}/glusterfs/**.so mr,
> /tmp/** rw,
>
> In Ubuntu 16.04 I'm not able to start my VMs via libvirt or to
> create new images via qemu-img using libgfapi.
>
> Mounting the volume via fuse does work without problems.
>
> Examples:
>
> qemu-img create gluster://storage.mydomain/vmimages/kvm2test.img 1G
> Formatting 'gluster://storage.intdmz.h1.mdd/vmimages/kvm2test.img',
> fmt=raw size=1073741824
> [2016-06-15 08:15:26.710665] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-vmimages-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
> [2016-06-15 08:15:26.710736] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-vmimages-replicate-1: All subvolumes
> are down. Going offline until atleast one of them comes back up.
>
> Libvirtd log:
>
> [2016-06-13 16:53:57.055113] E [MSGID: 104007]
> [glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch
> volume file (key:vmimages) [Invalid argument]
> [2016-06-13 16:53:57.055196] E [MSGID: 104024]
> [glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with
> remote-host: storage.intdmz.h1.mdd (Permission denied) [Permission denied]
> 2016-06-13T16:53:58.049945Z qemu-system-x86_64: -drive
> file=gluster://storage.intdmz.h1.mdd/vmimages/checkbox.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback:
> Gluster connection failed for server=storage.intdmz.h1.mdd port=0
> volume=vmimages image=checkbox.qcow2 transport=tcp: Permission denied

I think you have missed enabling insecure binds, which are needed for
libgfapi access; please try again after following the steps below:

=> edit /etc/glusterfs/glusterd.vol and add "option
rpc-auth-allow-insecure on"  # (on all nodes)
=> gluster vol set $volume server.allow-insecure on
=> systemctl restart glusterd  # (on all nodes)
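
Once done, a quick way to double-check that the option really took effect
(just an illustrative check; 'vmimages' is the volume name taken from your
logs):

  # the volume-level option
  gluster volume get vmimages server.allow-insecure
  # the glusterd-level option
  grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol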

In case this does not work, please help us with the output of the commands
below, along with the log files:
# gluster vol info
# gluster vol status
# gluster peer status

--
Prasanna

>
> I don't see anything in the apparmor logs when setting everything to
> complain or audit.
>
> It also seems GlusterFS servers don't get any request because brick logs
> are not complaining anything.
>
> Any hints?
>
>
> --
> Regards
> André Bauer
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS Mounts at startup consuming needed local ports for host services

2016-05-10 Thread Prasanna Kalever
Hi Ryan,

1. I need to accept this: currently
/proc/sys/net/ipv4/ip_local_port_range is not taken care of in
glusterfs. I shall soon patch it with the required changes, which will
then make it start respecting 'ip_local_port_range'.

2. Chen Chen's observations were right! We moved to insecure binds
around the 3.7.2 time frame, with which we try to fetch a port from 65535
downwards, which is why he observed the mount only using port 6+;
but in your case you are on 3.6.0, hence client ports start from 1023,
if I am not wrong!

3. For now, a quick workaround could be:

3.1. Specify your ports in
'/proc/sys/net/ipv4/ip_local_reserved_ports'; glusterfs ensures it does not
bind to the ports mentioned in 'ip_local_reserved_ports', so you can keep
them for other services.

The below command helps in doing so:

# sysctl -w net.ipv4.ip_local_reserved_ports=1002
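
To make the reservation survive reboots, the same setting can also go into a
sysctl config file (an illustrative sketch; the file name and the port list
are placeholders, use the ports your own services need):

  # reserve the ports persistently
  echo 'net.ipv4.ip_local_reserved_ports = 1023,2049,8080' > /etc/sysctl.d/99-reserved-ports.conf
  # reload all sysctl settings
  sysctl --system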

3.2. As you rightly said, start all the non-gluster services first and
only then start mounting glusterfs.

Thanks,
--
Prasanna


On Sun, Apr 10, 2016 at 7:03 PM, Chen Chen  wrote:
> Hi Ryan,
>
> In my case, glusterfs didn't use any port below 1. The output is
> attached.
> Maybe you could deploy a new test client with Gluster 3.6.9 and see if it
> helps.
>
> (Resend into list.)
>
> Best wishes,
> Chen
>
>
> On 4/10/2016 2:13 AM, ryan.j.wy...@wellsfargo.com wrote:
>>
>> Chen,
>>
>> Can you check and see what lsof outputs for glusterfs?
>>
>> # lsof -i | grep ^glusterfs
>> glusterfs  2893root9u  IPv4  15373  0t0  TCP
>> client.domain.com:1023->server1.domain.com:24007 (ESTABLISHED)
>>
>> see how my client uses local port 1023?
>>
>> Ryan
>>
>> -Original Message-
>> From: gluster-users-boun...@gluster.org
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Chen Chen
>> Sent: Saturday, April 09, 2016 9:20 AM
>> To: gluster-users@gluster.org
>> Subject: Re: [Gluster-users] GlusterFS Mounts at startup consuming needed
>> local ports for host services
>>
>> My gluster mount is only using port 6+. I was using 3.7.6 and now
>> using 3.7.9.
>>
>> Chen
>>
>> On 4/9/2016 11:00 PM, ryan.j.wy...@wellsfargo.com wrote:
>>>
>>> Is there any way to configure glusterfs to avoid using specific local
>>> ports when mounting filesystems?
>>>
>>> Ryan Wyler
>>
>> --
>> Chen Chen
>> Shanghai SmartQuerier Biotechnology Co., Ltd.
>> Add: Room 410, 781 Cai Lun Road, China (Shanghai) Pilot Free Trade Zone
>>  Shanghai 201203, P. R. China
>> Mob: +86 15221885893
>> Email: chenc...@smartquerier.com
>> Web: www.smartquerier.com
>>
>
> --
> Chen Chen
> 上海慧算生物技术有限公司
>
> Shanghai SmartQuerier Biotechnology Co., Ltd.
> Add: Room 410, 781 Cai Lun Road, China (Shanghai) Pilot Free Trade Zone
> Shanghai 201203, P. R. China
> Mob: +86 15221885893
> Email: chenc...@smartquerier.com
> Web: www.smartquerier.com
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] changes to client port range in release 3.1.3

2016-05-03 Thread Prasanna Kalever
Hi all,

The various port ranges in glusterfs as of now:  (very high level view)


client:
  In case of bind secure:
    will start from 1023 down to 1; in case all these
    ports are exhausted, bind to a random port (a connect() without a bind() call)
  In case of bind insecure:
    will start from 65535 all the way down to 1

bricks/server:
  any port starting from 49152 to 65535

glusterd:
  24007
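
If you want to see which ports your own clients and bricks have actually
picked, a quick (purely illustrative) check on any node is:

  # list gluster processes and the TCP ports they are using
  lsof -i -P | grep gluster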


There was a recent bug: in the bind-secure case, the client saw all ports
as exhausted and connected to a random port which was unfortunately in the
brick port-map range. So the client successfully got connected on a
given port. Now, since glusterd has no information about this (pmap
allocation is done only at start), it passes the same port on to a brick,
and the brick then fails to bind to it (also consider the race situation).


To solve this issue we decided to split the client and brick port ranges. [1]

As usual, the brick port-map range will be the IANA ephemeral port range,
i.e. 49152-65535.
For clients, only in case the secure ports are exhausted (which is a rare
case), we decided to fall back to the registered ports, i.e. 49151 - 1024.


If we look at the ephemeral port standards:
1.  The Internet Assigned Numbers Authority (IANA) suggests the range
49152 to 65535
2.  Many Linux kernels use the port range 32768 to 61000
More at [2].
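
For reference, you can check what your own kernel currently uses (just an
illustration; the values differ per distribution and tuning):

  # show the kernel's ephemeral port range
  cat /proc/sys/net/ipv4/ip_local_port_range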

Some of our thoughts include splitting the current brick port range (~16K)
into two (maybe ~8K each, or some other ratio) and using them for
clients and bricks, which could solve the problem but would also introduce
a limitation on scalability.

The patch [1] goes into 3.1.3; we wanted to know if there are any impacts
caused by these changes.


[1] http://review.gluster.org/#/c/13998/
[2] https://en.wikipedia.org/wiki/Ephemeral_port


Thanks,
--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Plans for Gluster 3.8

2015-08-17 Thread Prasanna Kalever
Hi Atin :)

I shall take Bug 1245380
[RFE] Render all mounts of a volume defunct upon access revocation 
https://bugzilla.redhat.com/show_bug.cgi?id=1245380 

Thanks & Regards,
Prasanna Kumar K.


- Original Message -
From: Atin Mukherjee atin.mukherje...@gmail.com
To: Kaushal M kshlms...@gmail.com
Cc: Csaba Henk ch...@redhat.com, gluster-users@gluster.org, Gluster Devel 
gluster-de...@gluster.org
Sent: Thursday, August 13, 2015 8:58:20 PM
Subject: Re: [Gluster-users] [Gluster-devel] Plans for Gluster 3.8



Can we have some volunteers for these BZs? 

-Atin 
Sent from one plus one 
On Aug 12, 2015 12:34 PM, Kaushal M  kshlms...@gmail.com  wrote: 


Hi Csaba, 

These are the updates regarding the requirements, after our meeting 
last week. The specific updates on the requirements are inline. 

In general, we feel that the requirements for selective read-only mode 
and immediate disconnection of clients on access revocation are doable 
for GlusterFS-3.8. The only problem right now is that we do not have 
any volunteers for it. 

 1. Bug 829042 - [FEAT] selective read-only mode 
 https://bugzilla.redhat.com/show_bug.cgi?id=829042 
 
 absolutely necessary for not getting tarred & feathered in Tokyo ;) 
 either resurrect http://review.gluster.org/3526 
 and _find out integration with auth mechanism for special 
 mounts_, or come up with a completely different concept 
 

With the availability of client_t, implementing this should become 
easier. The server xlator would store the incoming connection's common 
name or address in the client_t associated with the connection. The 
read-only xlator could then make use of this information to 
selectively allow read-only clients. The read-only xlator would need 
to implement a new option for selective read-only, which would be 
populated with lists of common-names and addresses of clients which 
would get read-only access. 

 2. Bug 1245380 - [RFE] Render all mounts of a volume defunct upon access 
 revocation 
 https://bugzilla.redhat.com/show_bug.cgi?id=1245380 
 
 necessary to let us enable a watershed scalability 
 enhancement 
 

Currently, when auth.allow/reject and auth.ssl-allow options are 
changed, the server xlator does a reconfigure to reload its access 
list. It just does a reload, and doesn't affect any existing 
connections. To bring this feature in, the server xlator would need to 
iterate through its xprt_list and check every connection for 
authorization again on a reconfigure. Those connections which have 
lost authorization would be disconnected. 

 3. Bug 1226776 – [RFE] volume capability query 
 https://bugzilla.redhat.com/show_bug.cgi?id=1226776 
 
 eventually we'll be choking in spaghetti if we don't get 
 this feature. The ugly version checks we need to do against 
 GlusterFS as in 
 
 https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3
  
 
 will proliferate and eat the guts of the code out of its 
 living body if this is not addressed. 
 

This requires some more thought to figure out the correct solution. 
One possible way to get the capabilities of the cluster would be to 
look at the clusters running op-version. This can be obtained using 
`gluster volume get all cluster.op-version` (the volume get command is 
available in glusterfs-3.6 and above). But this doesn't provide much 
improvement over the existing checks being done in the driver. 
___ 
Gluster-devel mailing list 
gluster-de...@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users