Hello,
I'm trying to run the Gluster deployment. I get this error message:
failed: [llrovirttest02.in2p3.fr] (item={u'path': u'/gluster_bricks/engine',
u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'}) =>
{"ansible_loop_var": "item", "changed": false, "item": {"lvname":
"gluster_lv_engine", "pat
Hello,
I'm having a problem with the hyperconverged "Configure Gluster storage and
oVirt hosted engine" deployment. I get this error message:
failed: [node2.x.fr] (item={u'key': u'gluster_vg_sdb', u'value':
[{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]}) =>
{"ansible_loop_var": "item", "changed": fal
Hi,
How can I get the best performance when using GlusterFS as an oVirt storage
domain?
Thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
Hi community! Is it possible to use oVirt with GlusterFS over FCoE, following the instructions at https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html-single/administration_guide/#How_to_Set_Up_RHVM_to_Use_FCoE?
On Tue, Jan 24, 2017 at 4:56 PM, Devin Acosta
wrote:
>
> I have created an oVirt 4.0.6 cluster; it has 2 compute nodes and 3
> dedicated Gluster nodes. The Gluster nodes are configured correctly and
> they have the replica set to 3. I'm trying to figure out, when I go to
> attach the Data (Master
I have created an oVirt 4.0.6 cluster; it has 2 compute nodes and 3
dedicated Gluster nodes. The Gluster nodes are configured correctly and
they have the replica set to 3. I'm trying to figure out, when I go to
attach the Data (Master) domain to the oVirt manager, what is the best
method to do so in
Hi
> It is not removed. Can you try `gluster volume set volname cluster.eager-lock
> enable`?
This works. BTW by default this setting is “on”. What’s the difference between
“on” and “enable”?
Thanks for the clarification.
Regards,
Roderick
> On 06 Apr 2016, at 10:56 AM, Ravishankar N wrote:
On Tue, Apr 12, 2016 at 11:11:54AM +0200, Roderick Mooi wrote:
> Hi
>
> > It is not removed. Can you try `gluster volume set volname
> > cluster.eager-lock enable`?
>
> This works. BTW by default this setting is “on”
Thanks for reporting back!
> What’s the difference between “on” and “enable”?
On 04/12/2016 02:41 PM, Roderick Mooi wrote:
Hi
It is not removed. Can you try `gluster volume set volname
cluster.eager-lock enable`?
This works. BTW by default this setting is “on”. What’s the difference
between “on” and “enable”?
Both are identical. You can use any of the booleans to ac
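The snippet cuts off, but the point survives: the two spellings are
interchangeable. Gluster's option parser accepts several boolean forms
(on/off, enable/disable, yes/no, true/false), so both of these should do the
same thing; the volume name is a placeholder:
# gluster volume set <volname> cluster.eager-lock on
# gluster volume set <volname> cluster.eager-lock enable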
Hi Ravi and colleagues
(apologies for hijacking this thread, but I’m not sure where else to report
this, and it is related)
With gluster 3.7.10, running
# gluster volume set <volname> group virt
fails with:
volume set: failed: option : eager-lock does not exist
Did you mean eager-lock?
I had to remove t
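The message truncates here, but the group definitions live in a plain-text
file on each server, so the options that 'group virt' applies can be
inspected directly (path as shipped with glusterfs; worth verifying on your
version):
# cat /var/lib/glusterd/groups/virt
Each line in that file is an option=value pair, and the group command applies
them all at once, which is why a single bad entry makes the whole command
fail.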
On 04/06/2016 02:08 PM, Roderick Mooi wrote:
Hi Ravi and colleagues
(apologies for hijacking this thread, but I’m not sure where else to
report this, and it is related)
With gluster 3.7.10, running
# gluster volume set <volname> group virt
fails with:
volume set: failed: option : eager-lock does not e
On 02/12/2016 09:11 PM, Bill James wrote:
wow, that made a whole lot of difference!
Thank you!
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile1 bs=1M
count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 20.2778 s, 51.7 MB/s
That's great. It was Vijay Bellur who noticed that it w
wow, that made a whole lot of difference!
Thank you!
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile1 bs=1M
count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 20.2778 s, 51.7 MB/s
These are the options now, for the record.
Options Reconfigured:
cluster.server-quorum-type: serv
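For reference, the benchmark used throughout this thread is:
# dd if=/dev/zero of=testfile bs=1M count=1000 oflag=direct
oflag=direct bypasses the client-side page cache, so the figures reflect the
gluster and network path rather than RAM speed; without it dd reports much
higher but misleading numbers.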
Hi Bill,
Can you enable the virt-profile setting for your volume and see if that
helps? You need to enable this optimization when you create the volume
using oVirt, or use the following command for an existing volume:
# gluster volume set <volname> group virt
-Ravi
On 02/12/2016 05:22 AM, Bill James wrote:
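To confirm what the virt profile changed on an existing volume, a hedged
sketch (<volname> is a placeholder):
# gluster volume info <volname>   # applied settings appear under 'Options Reconfigured'
In the oVirt UI the same profile can be applied by selecting 'Optimize for
Virt Store' on the volume.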
My apologies, I'm showing how much of a noob I am.
Ignore the last direct-to-gluster numbers, as that wasn't really GlusterFS.
[root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1
/mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M
count=1000 oflag=di
I don't know if it helps, but I ran a few more tests, all from the same
hardware node.
The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M
count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
Writing directly to gluster volume:
[root@ovirt2 test ~]#
XML attached.
On 02/11/2016 12:28 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 8:27 PM, Bill James wrote:
thank you for the reply.
We set up gluster using the names associated with NIC 2's IP.
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/glu
On Thu, Feb 11, 2016 at 8:27 PM, Bill James wrote:
> thank you for the reply.
>
> We set up gluster using the names associated with NIC 2's IP.
> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
> Brick3: ovirt3-ks.test.j2noc.
thank you for the reply.
We set up gluster using the names associated with NIC 2's IP.
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
That's NIC 2's IP.
Using 'iftop
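The snippet ends mid-command, but iftop is a handy way to confirm which link
actually carries the gluster traffic; the interface name below is a guess, so
substitute the NIC 2 device:
# iftop -i eth1
If the brick hostnames resolve to NIC 2's IP as intended, the replication
traffic should show up on that interface.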
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N wrote:
> +gluster-users
>
> Does disabling 'performance.write-behind' give a better throughput?
>
>
>
> On 02/10/2016 11:06 PM, Bill James wrote:
>>
>> I'm setting up an oVirt cluster using GlusterFS and noticing less-than-stellar
>> performance.
>> Maybe my
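For anyone who wants to test the write-behind suggestion above, the toggle is
as follows; the volume name is a placeholder and the option defaults to on:
# gluster volume set <volname> performance.write-behind off
# gluster volume reset <volname> performance.write-behind   # restore the default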
+gluster-users
Does disabling 'performance.write-behind' give a better throughput?
On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up an oVirt cluster using GlusterFS and noticing
less-than-stellar performance.
Maybe my setup could use some adjustments?
3 hardware nodes running CentOS 7.2, glu
I'm setting up an oVirt cluster using GlusterFS and noticing less-than-stellar
performance.
Maybe my setup could use some adjustments?
3 hardware nodes running CentOS 7.2, GlusterFS 3.7.6.1, oVirt 3.6.2.6-1.
Each node has 8 spindles configured in one array, which is split using LVM
with one logical volume