Re: [Gluster-users] Help: gluster-block

2019-04-10 Thread Karim Roumani
Actually, we have a question.

We did two tests as follows.

Test 1 - iSCSI target on the glusterFS server
Test 2 - iSCSI target on a separate server with gluster client

Test 2 achieved a read speed of just under 1 GB/second, while Test 1 managed only about 300 MB/second.

Do you see any reason why this might be the case?

On Mon, Apr 1, 2019 at 1:00 PM Karim Roumani 
wrote:

> Thank you, Prasanna, for your quick response; it is very much appreciated.
> We will review and get back to you.
>
> On Mon, Mar 25, 2019 at 9:00 AM Prasanna Kalever 
> wrote:
>
>> [ adding +gluster-users for archive purpose ]
>>
>> On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin 
>> wrote:
>> >
>> > Hello Mr. Kalever,
>>
>> Hello Jeffrey,
>>
>> >
>> > I am currently working on a project to utilize GlusterFS for VMware
>> VMs. In our research, we found that utilizing block devices with GlusterFS
>> would be the best approach for our use case (correct me if I am wrong). I
>> saw the gluster utility that you are a contributor for called gluster-block
>> (https://github.com/gluster/gluster-block), and I had a question about
>> the configuration. From what I understand, gluster-block only works on the
>> servers that are serving the gluster volume. Would it be possible to run
>> the gluster-block utility on a client machine that has a gluster volume
>> mounted to it?
>>
>> Yes, that is right! At the moment gluster-block is coupled with
>> glusterd for simplicity.
>> But we have made some changes here [1] to provide a way to specify a
>> server address (volfile-server) outside the gluster-blockd node;
>> please take a look.
>>
>> Although it is not a complete solution, it should at least help for
>> some use cases. Feel free to raise an issue [2] with the details of
>> your use case, or submit a PR yourself :-)
>> We never picked it up, as we never had a use case that required
>> separating gluster-blockd from glusterd.
>>
>> >
>> > I also have another question: how do I make the iSCSI targets persist
>> if all of the gluster nodes were rebooted? It seems like once all of the
>> nodes reboot, I am unable to reconnect to the iSCSI targets created by the
>> gluster-block utility.
>>
>> Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
>> target/server nodes?
>>
>> 1. For the initiator to automatically reconnect to block devices after a
>> reboot, make the following change in /etc/iscsi/iscsid.conf:
>> node.startup = automatic
>>
>> 2. If all the gluster nodes go down, all the available HA paths on the
>> initiator will be down as well. If you still want I/O to be queued on
>> the initiator until one of the paths (gluster nodes) becomes available,
>> replace 'no_path_retry 120' with 'no_path_retry queue' in the
>> gluster-block specific section of multipath.conf.
>> Note: refer to the README for the current multipath.conf setting
>> recommendations.
>>
>> [1] https://github.com/gluster/gluster-block/pull/161
>> [2] https://github.com/gluster/gluster-block/issues/new
>>
>> BRs,
>> --
>> Prasanna
>>
>
>
> --
>
> Thank you,
>
> *Karim Roumani*
> Director of Technology Solutions
>
> TekReach Solutions / Albatross Cloud
> 714-916-5677
> karim.roum...@tekreach.com
> Albatross.cloud  - One Stop Cloud Solutions
> Portalfronthosting.com  - Complete
> SharePoint Solutions
>


-- 

Thank you,

*Karim Roumani*
Director of Technology Solutions

TekReach Solutions / Albatross Cloud
714-916-5677
karim.roum...@tekreach.com
Albatross.cloud  - One Stop Cloud Solutions
Portalfronthosting.com  - Complete
SharePoint Solutions
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Help: gluster-block

2019-04-03 Thread Prasanna Kalever
On Tue, Apr 2, 2019 at 1:34 AM Karim Roumani 
wrote:

> Actually, we have a question.
>
> We did two tests as follows.
>
> Test 1 - iSCSI target on the glusterFS server
> Test 2 - iSCSI target on a separate server with gluster client
>
> Test 2 achieved a read speed of just under 1 GB/second, while Test 1
> managed only about 300 MB/second.
>
> Do you see any reason why this might be the case?
>

In the Test 1 case:

1. The operations between
* the iSCSI initiator <-> the iSCSI target, and
* tcmu-runner <-> the gluster server

all share the same NIC, so the target traffic and the gluster back-end
traffic compete for the same bandwidth.

2. The node might also be facing high resource usage (high CPU and/or
low memory), since everything runs on the same node.

You can also check the gluster profile info to narrow down some of these.
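
For example, something like this on one of the gluster nodes (the volume
name below is a placeholder):

# enable profiling, re-run the read test, then dump per-brick stats
gluster volume profile <volname> start
gluster volume profile <volname> info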

Thanks!
--
Prasanna


>
> On Mon, Apr 1, 2019 at 1:00 PM Karim Roumani 
> wrote:
>
>> Thank you, Prasanna, for your quick response; it is very much appreciated.
>> We will review and get back to you.
>>
>> On Mon, Mar 25, 2019 at 9:00 AM Prasanna Kalever 
>> wrote:
>>
>>> [ adding +gluster-users for archive purpose ]
>>>
>>> On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin 
>>> wrote:
>>> >
>>> > Hello Mr. Kalever,
>>>
>>> Hello Jeffrey,
>>>
>>> >
>>> > I am currently working on a project to utilize GlusterFS for VMware
>>> VMs. In our research, we found that utilizing block devices with GlusterFS
>>> would be the best approach for our use case (correct me if I am wrong). I
>>> saw the gluster utility that you are a contributor for called gluster-block
>>> (https://github.com/gluster/gluster-block), and I had a question about
>>> the configuration. From what I understand, gluster-block only works on the
>>> servers that are serving the gluster volume. Would it be possible to run
>>> the gluster-block utility on a client machine that has a gluster volume
>>> mounted to it?
>>>
>>> Yes, that is right! At the moment gluster-block is coupled with
>>> glusterd for simplicity.
>>> But we have made some changes here [1] to provide a way to specify a
>>> server address (volfile-server) outside the gluster-blockd node;
>>> please take a look.
>>>
>>> Although it is not a complete solution, it should at least help for
>>> some use cases. Feel free to raise an issue [2] with the details of
>>> your use case, or submit a PR yourself :-)
>>> We never picked it up, as we never had a use case that required
>>> separating gluster-blockd from glusterd.
>>>
>>> >
>>> > I also have another question: how do I make the iSCSI targets persist
>>> if all of the gluster nodes were rebooted? It seems like once all of the
>>> nodes reboot, I am unable to reconnect to the iSCSI targets created by the
>>> gluster-block utility.
>>>
>>> Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
>>> target/server nodes?
>>>
>>> 1. For the initiator to automatically reconnect to block devices after a
>>> reboot, make the following change in /etc/iscsi/iscsid.conf:
>>> node.startup = automatic
>>>
>>> 2. If all the gluster nodes go down, all the available HA paths on the
>>> initiator will be down as well. If you still want I/O to be queued on
>>> the initiator until one of the paths (gluster nodes) becomes available,
>>> replace 'no_path_retry 120' with 'no_path_retry queue' in the
>>> gluster-block specific section of multipath.conf.
>>> Note: refer to the README for the current multipath.conf setting
>>> recommendations.
>>>
>>> [1] https://github.com/gluster/gluster-block/pull/161
>>> [2] https://github.com/gluster/gluster-block/issues/new
>>>
>>> BRs,
>>> --
>>> Prasanna
>>>
>>
>>
>> --
>>
>> Thank you,
>>
>> *Karim Roumani*
>> Director of Technology Solutions
>>
>> TekReach Solutions / Albatross Cloud
>> 714-916-5677
>> karim.roum...@tekreach.com
>> Albatross.cloud  - One Stop Cloud Solutions
>> Portalfronthosting.com  - Complete
>> SharePoint Solutions
>>
>
>
> --
>
> Thank you,
>
> *Karim Roumani*
> Director of Technology Solutions
>
> TekReach Solutions / Albatross Cloud
> 714-916-5677
> karim.roum...@tekreach.com
> Albatross.cloud  - One Stop Cloud Solutions
> Portalfronthosting.com  - Complete
> SharePoint Solutions
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Help: gluster-block

2019-03-25 Thread Prasanna Kalever
[ adding +gluster-users for archive purpose ]

On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin  wrote:
>
> Hello Mr. Kalever,

Hello Jeffrey,

>
> I am currently working on a project to utilize GlusterFS for VMware VMs. In 
> our research, we found that utilizing block devices with GlusterFS would be 
> the best approach for our use case (correct me if I am wrong). I saw the 
> gluster utility that you are a contributor for called gluster-block 
> (https://github.com/gluster/gluster-block), and I had a question about the 
> configuration. From what I understand, gluster-block only works on the 
> servers that are serving the gluster volume. Would it be possible to run the 
> gluster-block utility on a client machine that has a gluster volume mounted 
> to it?

Yes, that is right! At the moment gluster-block is coupled with
glusterd for simplicity.
But we have made some changes here [1] to provide a way to specify a
server address (volfile-server) outside the gluster-blockd node;
please take a look.

Although it is not a complete solution, it should at least help for
some use cases. Feel free to raise an issue [2] with the details of
your use case, or submit a PR yourself :-)
We never picked it up, as we never had a use case that required
separating gluster-blockd from glusterd.
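
For reference, a create call on one of the gluster nodes looks roughly
like this (the volume name, block name, and host addresses below are
placeholders; see the README for the exact syntax):

# export a 1GiB block device 'sample-block', backed by the gluster
# volume 'block-test', through three target portals for multipath HA
gluster-block create block-test/sample-block ha 3 \
192.168.1.11,192.168.1.12,192.168.1.13 1GiB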

>
> I also have another question: how do I make the iSCSI targets persist if all 
> of the gluster nodes were rebooted? It seems like once all of the nodes 
> reboot, I am unable to reconnect to the iSCSI targets created by the 
> gluster-block utility.

Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
target/server nodes?

1. For the initiator to automatically reconnect to block devices after a
reboot, make the following change in /etc/iscsi/iscsid.conf:
node.startup = automatic
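
For example, with the standard open-iscsi tooling on the initiator (the
portal address and IQN below are placeholders; use the IQN reported by
gluster-block when the block device is created):

# discover the targets and log in
iscsiadm -m discovery -t sendtargets -p 192.168.1.11
iscsiadm -m node -T <target-iqn> -p 192.168.1.11 --login
# alternatively, set automatic startup on the node record itself
# instead of globally in iscsid.conf
iscsiadm -m node -T <target-iqn> -o update -n node.startup -v automatic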

2. If all the gluster nodes go down, all the available HA paths on the
initiator will be down as well. If you still want I/O to be queued on
the initiator until one of the paths (gluster nodes) becomes available,
replace 'no_path_retry 120' with 'no_path_retry queue' in the
gluster-block specific section of multipath.conf.
Note: refer to the README for the current multipath.conf setting
recommendations.
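
As a sketch, the gluster-block device section in /etc/multipath.conf
would then look something like this (adapted from the README
recommendation at the time; double-check the README for current values):

devices {
        device {
                vendor "LIO-ORG"                # gluster-block exports via tcmu-runner/LIO
                path_grouping_policy "failover" # one path per priority group
                path_checker "tur"
                prio "const"
                no_path_retry queue             # was: no_path_retry 120
        }
}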

[1] https://github.com/gluster/gluster-block/pull/161
[2] https://github.com/gluster/gluster-block/issues/new

BRs,
--
Prasanna
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users