[ovirt-users] glusterfs

2020-02-14 Thread eevans
I currently have 3 nodes: one is the engine node and 2 are CentOS 7 hosts, and I 
plan to add another CentOS 7 KVM host once I get all the VMs migrated. I have SAN 
storage plus the RAID 5 internal disks. All OSes are installed on mirrored 
SAS RAID 1. I want to use the RAID 5 VDs for the export and ISO domains and use the 4 TB 
iSCSI for the VMs to run on. The iSCSI has hourly snapshots that are overwritten 
weekly.
So here is my question: I want to add GlusterFS, but from further reading it seems 
that should have been done in the initial setup. I am not new to Linux, but I am new 
to oVirt and need to know whether I can implement GlusterFS now or whether it's a 
start-from-scratch situation. I really don't want to start over, but I would like the 
redundancy. 
Any advice is appreciated. 
Eric
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7PN44O7U2FC4WGIXQAQF3MRKUDJBWZD/


[ovirt-users] Glusterfs storage

2015-03-02 Thread suporte
Hi, 

I have set up 2 hosts, host1 with the hosted engine, and on host2, since I have 
a spare disk, I want to set up a GlusterFS storage domain. 
I'm using CentOS 7 and oVirt 3.5.1. 
The 2 hosts and the hosted engine are up and running. 
How can I configure the GlusterFS storage? 
I installed glusterfs server on host2: 
glusterfs-api-3.6.2-1.el7.x86_64 
glusterfs-fuse-3.6.2-1.el7.x86_64 
glusterfs-libs-3.6.2-1.el7.x86_64 
glusterfs-3.6.2-1.el7.x86_64 
glusterfs-rdma-3.6.2-1.el7.x86_64 
glusterfs-server-3.6.2-1.el7.x86_64 
glusterfs-cli-3.6.2-1.el7.x86_64 


Do I need to first configure a volume with a brick? I tried that but had no success. 
I also tried to add a Data domain of type GlusterFS, but no success. 
What am I missing? 


Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Tiemen Ruiten
Hello,

I'm in the planning stage of setting up a new oVirt cluster, where I would
prefer to use the local disks of the hypervisors for a GlusterFS storage
domain. What is the current recommendation (for version 3.5.x)?

I found this presentation, which seems to suggest that it's possible with
some caveats:
http://www.ovirt.org/images/6/6c/2015-ovirt-glusterfs-hyperconvergence.pdf

And a few open bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1177791
https://bugzilla.redhat.com/show_bug.cgi?id=115
https://bugzilla.redhat.com/show_bug.cgi?id=113

What I can't find is current information on whether this setup is already
possible with 3.5.x or whether it's better to wait for 3.6. Could someone enlighten
me?

-- 
Tiemen Ruiten
Systems Engineer
R&D Media
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Glusterfs version

2020-08-17 Thread suporte
Hello, 

Which Gluster version is compatible with oVirt 4.4.1? 

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OVI4RDF5C7KP3QZICDN4QMSQ4LUBM72K/


[ovirt-users] GlusterFS 4.1

2018-07-04 Thread Chris Boot
All,

Now that GlusterFS 4.1 LTS has been released, and is the "default"
version of GlusterFS in CentOS (you get this from
"centos-release-gluster" now), what's the status with regards to oVirt?

How badly is oVirt 4.2.4 likely to break if one were to upgrade the
gluster* packages to 4.1?

Thanks,
Chris

-- 
Chris Boot
bo...@boo.tc
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECTRCG7EOZBTXQJNB4RKN6JYVHVOWV4S/


[ovirt-users] GlusterFS+Virt

2023-05-04 Thread antony . parkin
Hi,

Could somebody comment on storage options, i.e. Gluster or Ceph, and when to use 
each one?

I'm looking for an architecture which is fully redundant, both at the hypervisor 
level and at the storage level, so it needs to include data replication. 

thx
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YT6DWPS3RVZ66EPOMCF7OMOVFMIJM25S/


[ovirt-users] glusterfs questions/tips

2014-05-21 Thread Gabi C

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs tips/questions

2014-05-21 Thread Gabi C
Hello!

I have an oVirt setup, 3.4.1, up to date, with gluster package 3.5.0-3.fc19
on all 3 nodes. The GlusterFS setup is replicated on 3 bricks. On 2 nodes
'gluster peer status' reports 2 peers connected, each with its UUID. On the third node
'gluster peer status' reports 3 peers, out of which two refer to the same
node/IP but with different UUIDs.

What I have tried:
- stopped the gluster volumes, put the 3rd node in maintenance, rebooted -> no effect;
- stopped the volumes, removed the bricks belonging to the 3rd node, re-added it, started
the volumes, but still no effect.


Any ideas, hints?

TIA
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs storage

2015-03-02 Thread Aharon Canan
Hi 

First, you need to create the gluster volume. Do not forget to start the volume, 
then run "chown 36:36 <mount point>". 

Now you need to create a new GlusterFS storage domain and point it to 
<gluster server name>:/<volume name>. 
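
A minimal shell sketch of those steps, assuming a single brick on host2 at 
/gluster/brick1 and a volume named gv0 (all names here are illustrative, not 
taken from the original mail):

# gluster volume create gv0 host2.example.com:/gluster/brick1
# gluster volume start gv0
Mount the volume once and hand it to vdsm:kvm (uid/gid 36):
# mount -t glusterfs host2.example.com:/gv0 /mnt/gv0
# chown 36:36 /mnt/gv0
# umount /mnt/gv0
Then, in the oVirt web UI, add a new storage domain of type GlusterFS and point 
it at host2.example.com:/gv0.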

Regards, 
__ 
Aharon Canan 
int phone - 8272036 
ext phone - +97297692036 
email - aca...@redhat.com 

- Original Message -

> From: supo...@logicworks.pt
> To: Users@ovirt.org
> Sent: Monday, March 2, 2015 5:55:33 PM
> Subject: [ovirt-users] Glusterfs storage

> Hi,

> I have setup up 2 hosts, host1 with hosted engine, and on host 2, since I
> have a spare disk, I want to setup a glusterfs domain storage.
> I'm using Centos7 and ovirt 3.5.1.
> The 2 hosts and hosted engine are up and running.
> How can I configure the glusterfs storage?
> I installed glusterfs server on host2:
> glusterfs-api-3.6.2-1.el7.x86_64
> glusterfs-fuse-3.6.2-1.el7.x86_64
> glusterfs-libs-3.6.2-1.el7.x86_64
> glusterfs-3.6.2-1.el7.x86_64
> glusterfs-rdma-3.6.2-1.el7.x86_64
> glusterfs-server-3.6.2-1.el7.x86_64
> glusterfs-cli-3.6.2-1.el7.x86_64

> Do I need first configure a Volume with a brick? I tried that but no success.
> Also tried to ad a Data domains of type glusterfs but no success.
> What am I missing?

> Thanks

> --

> Jose Ferradeira
> http://www.logicworks.pt

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs storage

2015-03-02 Thread Aharon Canan
Forgot to mention: 

you need the glusterfs packages on all hosts that should access the GlusterFS 
domain. 

Also, you can check the link below. 
http://www.ovirt.org/Features/GlusterFS_Storage_Domain 

Regards, 
__ 
Aharon Canan 
int phone - 8272036 
ext phone - +97297692036 
email - aca...@redhat.com 

- Original Message -

> From: "Aharon Canan" 
> To: supo...@logicworks.pt
> Cc: Users@ovirt.org
> Sent: Monday, March 2, 2015 6:00:40 PM
> Subject: Re: [ovirt-users] Glusterfs storage

> Hi

> First, You need to create gluster volume. Do not forget to start the volume.
> then the "chown 36:36 <mount point>"

> now you need to create new glusterfs domain, and to point to the <gluster server name>:/<volume name>

> Regards,
> __
> Aharon Canan
> int phone - 8272036
> ext phone - +97297692036
> email - aca...@redhat.com

> - Original Message -

> > From: supo...@logicworks.pt
> 
> > To: Users@ovirt.org
> 
> > Sent: Monday, March 2, 2015 5:55:33 PM
> 
> > Subject: [ovirt-users] Glusterfs storage
> 

> > Hi,
> 

> > I have setup up 2 hosts, host1 with hosted engine, and on host 2, since I
> > have a spare disk, I want to setup a glusterfs domain storage.
> 
> > I'm using Centos7 and ovirt 3.5.1.
> 
> > The 2 hosts and hosted engine are up and running.
> 
> > How can I configure the glusterfs storage?
> 
> > I installed glusterfs server on host2:
> 
> > glusterfs-api-3.6.2-1.el7.x86_64
> 
> > glusterfs-fuse-3.6.2-1.el7.x86_64
> 
> > glusterfs-libs-3.6.2-1.el7.x86_64
> 
> > glusterfs-3.6.2-1.el7.x86_64
> 
> > glusterfs-rdma-3.6.2-1.el7.x86_64
> 
> > glusterfs-server-3.6.2-1.el7.x86_64
> 
> > glusterfs-cli-3.6.2-1.el7.x86_64
> 

> > Do I need first configure a Volume with a brick? I tried that but no
> > success.
> > Also tried to ad a Data domains of type glusterfs but no success.
> 
> > What am I missing?
> 

> > Thanks
> 

> > --
> 

> > Jose Ferradeira
> 
> > http://www.logicworks.pt
> 

> > ___
> 
> > Users mailing list
> 
> > Users@ovirt.org
> 
> > http://lists.ovirt.org/mailman/listinfo/users
> 

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs storage

2015-03-02 Thread suporte
Thanks Aharon, 

After a couple of hours, the chown 36:36 did the trick, now I have the domain 
up. 
Thanks a lot 

Jose 


- Mensagem original -

De: "Aharon Canan"  
Para: supo...@logicworks.pt 
Cc: Users@ovirt.org 
Enviadas: Segunda-feira, 2 De Março de 2015 16:00:40 
Assunto: Re: [ovirt-users] Glusterfs storage 

Hi 

First, You need to create gluster volume. Do not forget to start the volume. 
then the "chown 36:36 <mount point>" 

now you need to create new glusterfs domain, and to point to the <gluster server name>:/<volume name> 




Regards, 
__ 
Aharon Canan 
int phone - 8272036 
ext phone - +97297692036 
email - aca...@redhat.com 

- Mensagem original -


From: supo...@logicworks.pt 
To: Users@ovirt.org 
Sent: Monday, March 2, 2015 5:55:33 PM 
Subject: [ovirt-users] Glusterfs storage 

Hi, 

I have setup up 2 hosts, host1 with hosted engine, and on host 2, since I have 
a spare disk, I want to setup a glusterfs domain storage. 
I'm using Centos7 and ovirt 3.5.1. 
The 2 hosts and hosted engine are up and running. 
How can I configure the glusterfs storage? 
I installed glusterfs server on host2: 
glusterfs-api-3.6.2-1.el7.x86_64 
glusterfs-fuse-3.6.2-1.el7.x86_64 
glusterfs-libs-3.6.2-1.el7.x86_64 
glusterfs-3.6.2-1.el7.x86_64 
glusterfs-rdma-3.6.2-1.el7.x86_64 
glusterfs-server-3.6.2-1.el7.x86_64 
glusterfs-cli-3.6.2-1.el7.x86_64 


Do I need first configure a Volume with a brick? I tried that but no success. 
Also tried to ad a Data domains of type glusterfs but no success. 
What am I missing? 


Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs storage

2015-03-02 Thread Jason Brooks


- Original Message -
> From: supo...@logicworks.pt
> To: "Aharon Canan" 
> Cc: Users@ovirt.org
> Sent: Monday, March 2, 2015 8:25:13 AM
> Subject: Re: [ovirt-users] Glusterfs storage
> 
> Thanks Aharon,
> 
> After a couple of hours, the chown 36:36 did the trick, now I have the domain
> up.

This is a good way to set those perms and make them stick:

gluster volume set $yourvol storage.owner-uid 36
gluster volume set $yourvol storage.owner-gid 36


Jason


> Thanks a lot
> 
> Jose
> 
> 
> - Mensagem original -
> 
> De: "Aharon Canan" 
> Para: supo...@logicworks.pt
> Cc: Users@ovirt.org
> Enviadas: Segunda-feira, 2 De Março de 2015 16:00:40
> Assunto: Re: [ovirt-users] Glusterfs storage
> 
> Hi
> 
> First, You need to create gluster volume. Do not forget to start the volume.
> then the "chown 36:36 <mount point>"
> 
> now you need to create new glusterfs domain, and to point to the <gluster server name>:/<volume name>
> 
> 
> 
> 
> Regards,
> __
> Aharon Canan
> int phone - 8272036
> ext phone - +97297692036
> email - aca...@redhat.com
> 
> - Mensagem original -
> 
> 
> From: supo...@logicworks.pt
> To: Users@ovirt.org
> Sent: Monday, March 2, 2015 5:55:33 PM
> Subject: [ovirt-users] Glusterfs storage
> 
> Hi,
> 
> I have setup up 2 hosts, host1 with hosted engine, and on host 2, since I
> have a spare disk, I want to setup a glusterfs domain storage.
> I'm using Centos7 and ovirt 3.5.1.
> The 2 hosts and hosted engine are up and running.
> How can I configure the glusterfs storage?
> I installed glusterfs server on host2:
> glusterfs-api-3.6.2-1.el7.x86_64
> glusterfs-fuse-3.6.2-1.el7.x86_64
> glusterfs-libs-3.6.2-1.el7.x86_64
> glusterfs-3.6.2-1.el7.x86_64
> glusterfs-rdma-3.6.2-1.el7.x86_64
> glusterfs-server-3.6.2-1.el7.x86_64
> glusterfs-cli-3.6.2-1.el7.x86_64
> 
> 
> Do I need first configure a Volume with a brick? I tried that but no success.
> Also tried to ad a Data domains of type glusterfs but no success.
> What am I missing?
> 
> 
> Thanks
> 
> --
> 
> Jose Ferradeira
> http://www.logicworks.pt
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs storage

2015-03-02 Thread Kanagaraj
If you are managing the gluster volume through oVirt, you could just 
select the volume in the UI and click "Optimize for virt store". It does 
the job of setting the required parameters for the volume.

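A hedged sketch of the command-line equivalent (the "virt" option group ships with 
Gluster in /var/lib/glusterd/groups/virt; whether it matches exactly what "Optimize 
for virt store" applies in a given oVirt version is an assumption, and <volname> is a 
placeholder):

# gluster volume set <volname> group virt
# gluster volume set <volname> storage.owner-uid 36
# gluster volume set <volname> storage.owner-gid 36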

Thanks,
Kanagaraj

On 03/02/2015 10:44 PM, Jason Brooks wrote:


- Original Message -

From: supo...@logicworks.pt
To: "Aharon Canan" 
Cc: Users@ovirt.org
Sent: Monday, March 2, 2015 8:25:13 AM
Subject: Re: [ovirt-users] Glusterfs storage

Thanks Aharon,

After a couple of hours, the chown 36:36 did the trick, now I have the domain
up.

This is a good way to set those perms and make them stick:

gluster volume set $yourvol storage.owner-uid 36
gluster volume set $yourvol storage.owner-gid 36


Jason



Thanks a lot

Jose


- Mensagem original -

De: "Aharon Canan" 
Para: supo...@logicworks.pt
Cc: Users@ovirt.org
Enviadas: Segunda-feira, 2 De Março de 2015 16:00:40
Assunto: Re: [ovirt-users] Glusterfs storage

Hi

First, You need to create gluster volume. Do not forget to start the volume.
then the "chown 36:36 <mount point>"

now you need to create new glusterfs domain, and to point to the <gluster server name>:/<volume name>




Regards,
__
Aharon Canan
int phone - 8272036
ext phone - +97297692036
email - aca...@redhat.com

- Mensagem original -


From: supo...@logicworks.pt
To: Users@ovirt.org
Sent: Monday, March 2, 2015 5:55:33 PM
Subject: [ovirt-users] Glusterfs storage

Hi,

I have setup up 2 hosts, host1 with hosted engine, and on host 2, since I
have a spare disk, I want to setup a glusterfs domain storage.
I'm using Centos7 and ovirt 3.5.1.
The 2 hosts and hosted engine are up and running.
How can I configure the glusterfs storage?
I installed glusterfs server on host2:
glusterfs-api-3.6.2-1.el7.x86_64
glusterfs-fuse-3.6.2-1.el7.x86_64
glusterfs-libs-3.6.2-1.el7.x86_64
glusterfs-3.6.2-1.el7.x86_64
glusterfs-rdma-3.6.2-1.el7.x86_64
glusterfs-server-3.6.2-1.el7.x86_64
glusterfs-cli-3.6.2-1.el7.x86_64


Do I need first configure a Volume with a brick? I tried that but no success.
Also tried to ad a Data domains of type glusterfs but no success.
What am I missing?


Thanks

--

Jose Ferradeira
http://www.logicworks.pt


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs storage

2015-03-03 Thread suporte
I have 2 hosts, with the hosted engine on the first host. The second host also has 
the storage domain for the 2 hosts (GlusterFS). 

The storage domain is in active status. 
On the first host I can see: /usr/sbin/glusterfs 
--volfile-server=host2.domain.tld --volfile-id=/gv0 
/rhev/data-center/mnt/glusterSD/host2.domain.tld:_gv0 
What I did for Gluster on the second host: 
# yum -y install glusterfs-server 


# systemctl start glusterd 

# systemctl enable glusterd 

# gluster volume create gv0 transport tcp host2.domain.tld:/home2/brick1 

# gluster volume start gv0 

# gluster volume info 
Volume Name: gv0 
Type: Distribute 
Volume ID: 6ccd1831-6c4c-41c3-a695-8c7b57cf1261 
Status: Started 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: host2.domain.tld:/home2/brick1 



# chown 36:36 /home2/brick1 

Now I cannot attach the ISO Domain 

The error in the engine.log: 

2015-03-03 09:48:37,376 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-31) [4ee81984] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: Failed to connect Host host2 to the 
Storage Domains ISO_DOMAIN. 
2015-03-03 09:48:37,378 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-26) [67e35194] Correlation ID: null, Call 
Stack: null, Custom Event ID: -1, Message: Failed to connect Host host11 to the 
Storage Domains ISO_DOMAIN. 
2015-03-03 09:48:37,776 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-32) [7b817504] Correlation ID: 7b817504, Job 
ID: 1da178db-c072-42f8-92bc-551722c2c656, Call Stack: null, Custom Event ID: 
-1, Message: Failed to attach Storage Domain ISO_DOMAIN to Data Center Default. 
(User: admin@internal) 

The ISO domain is on the hosted engine: 
# cat /etc/exports 
/home/iso 0.0.0.0/0.0.0.0(rw) 
# ls -l /home/iso 
total 0 
drwxr-xr-x 4 vdsm kvm 32 Feb 26 15:11 240929ec-62e3-47ff-88b0-39cecc53ab8f 

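A short troubleshooting sketch one might run here (engine.example.com is a placeholder 
for the hosted-engine host; whether the export options or the permissions are the 
culprit is only a guess):

On the hosted engine, re-export and confirm the share is advertised:
# exportfs -ra
# showmount -e localhost
On host2, try the mount by hand and check that the vdsm user (uid/gid 36) can write to it:
# mount -t nfs engine.example.com:/home/iso /mnt/iso-test
# sudo -u vdsm touch /mnt/iso-test/probe && rm /mnt/iso-test/probe
# umount /mnt/iso-test
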
Any idea? 

Thanks 

Jose 

- Mensagem original -

De: "Kanagaraj"  
Para: "Jason Brooks" , supo...@logicworks.pt 
Cc: Users@ovirt.org 
Enviadas: Terça-feira, 3 De Março de 2015 4:43:50 
Assunto: Re: [ovirt-users] Glusterfs storage 

If you are managing the gluster volume through ovirt, you could just 
select the volume in the ui and click "Optimize for virt store". It does 
the job of setting the required parameters for the volume. 

Thanks, 
Kanagaraj 

On 03/02/2015 10:44 PM, Jason Brooks wrote: 
> 
> - Original Message - 
>> From: supo...@logicworks.pt 
>> To: "Aharon Canan"  
>> Cc: Users@ovirt.org 
>> Sent: Monday, March 2, 2015 8:25:13 AM 
>> Subject: Re: [ovirt-users] Glusterfs storage 
>> 
>> Thanks Aharon, 
>> 
>> After a couple of hours, the chown 36:36 did the trick, now I have the 
>> domain 
>> up. 
> This is a good way to set those perms and make them stick: 
> 
> gluster volume set $yourvol storage.owner-uid 36 
> gluster volume set $yourvol storage.owner-gid 36 
> 
> 
> Jason 
> 
> 
>> Thanks a lot 
>> 
>> Jose 
>> 
>> 
>> - Mensagem original ----- 
>> 
>> De: "Aharon Canan"  
>> Para: supo...@logicworks.pt 
>> Cc: Users@ovirt.org 
>> Enviadas: Segunda-feira, 2 De Março de 2015 16:00:40 
>> Assunto: Re: [ovirt-users] Glusterfs storage 
>> 
>> Hi 
>> 
>> First, You need to create gluster volume. Do not forget to start the volume. 
>> then the "chown 36:36 <mount point>" 
>> 
>> now you need to create new glusterfs domain, and to point to the <gluster server name>:/<volume name> 
>> 
>> 
>> 
>> 
>> Regards, 
>> ______ 
>> Aharon Canan 
>> int phone - 8272036 
>> ext phone - +97297692036 
>> email - aca...@redhat.com 
>> 
>> - Mensagem original - 
>> 
>> 
>> From: supo...@logicworks.pt 
>> To: Users@ovirt.org 
>> Sent: Monday, March 2, 2015 5:55:33 PM 
>> Subject: [ovirt-users] Glusterfs storage 
>> 
>> Hi, 
>> 
>> I have setup up 2 hosts, host1 with hosted engine, and on host 2, since I 
>> have a spare disk, I want to setup a glusterfs domain storage. 
>> I'm using Centos7 and ovirt 3.5.1. 
>> The 2 hosts and hosted engine are up and running. 
>> How can I configure the glusterfs storage? 
>> I installed glusterfs server on host2: 
>> glusterfs-api-3.6.2-1.el7.x86_64 
>> glusterfs-fuse-3.6.2-1.el7.x86_64 
>> glusterfs-libs-3.6.2-1.el7.x86_64 
>> glusterfs-3.6.2-1.el7.x86_64 
>> glusterfs-rdma-3.6.2-1.el7.x86_64 
>> glusterfs-server-3.6.2-1.el7.x86_64

Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Tiemen Ruiten
Please note I will -not- be using Hosted Engine.

On 6 July 2015 at 13:54, Tiemen Ruiten  wrote:

> Hello,
>
> I'm in the planning stage of setting up a new oVirt cluster, where I would
> prefer to use the local disks of the hypervisors for a GlusterFS storage
> domain. What is the current recommendation (for version 3.5.x)?
>
> I found this presentation which seems to suggest that it's possbile with
> some caveats:
> http://www.ovirt.org/images/6/6c/2015-ovirt-glusterfs-hyperconvergence.pdf
>
> And a few open bugs:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1177791
> https://bugzilla.redhat.com/show_bug.cgi?id=115
> https://bugzilla.redhat.com/show_bug.cgi?id=113
>
> What I can't find is current information on if this setup is already
> possible with 3.5.x or it's better to wait for 3.6. Could someone enlighten
> me?
>
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
>



-- 
Tiemen Ruiten
Systems Engineer
R&D Media
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Doron Fediuck
On 06/07/15 15:03, Tiemen Ruiten wrote:
> Please note I will -not- be using Hosted Engine.
>
> On 6 July 2015 at 13:54, Tiemen Ruiten  > wrote:
>
> Hello,
>
> I'm in the planning stage of setting up a new oVirt cluster, where I
> would prefer to use the local disks of the hypervisors for a
> GlusterFS storage domain. What is the current recommendation (for
> version 3.5.x)?
>
> I found this presentation which seems to suggest that it's possbile
> with some
> caveats: 
> http://www.ovirt.org/images/6/6c/2015-ovirt-glusterfs-hyperconvergence.pdf
>
> And a few open bugs:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1177791
> https://bugzilla.redhat.com/show_bug.cgi?id=115
> https://bugzilla.redhat.com/show_bug.cgi?id=113
>
> What I can't find is current information on if this setup is already
> possible with 3.5.x or it's better to wait for 3.6. Could someone
> enlighten me?
>
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
>
>
>
>
> -- 
> Tiemen Ruiten
> Systems Engineer
> R&D Media
>
>

Hi,
Most of the current work is focused on hyperconverged setups using hosted engine, and
this includes the installation and a few other features, such as not fencing hypervisors
to avoid killing running bricks.
If you do not want to use hosted engine, then you should be able to create bricks
and manage them as a gluster volume, which your setup should be able to work with.
If you do not use HA and/or fencing, you should be mostly OK with this use case.

Anything specific you're looking for?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Tiemen Ruiten
Thanks Doron,

I would initially start with a 3 host cluster, possibly adding more hosts
later on. Is this enough to ensure integrity of the Gluster storage domain,
eg. in case of host failure?

What are the recommendations (if they exist) for the type of Gluster volume for
a 3-5 host cluster (replicated or replicated-distributed, number of
replicas)?

Would this setup already leverage libgfapi instead of FUSE?


On 6 July 2015 at 16:33, Doron Fediuck  wrote:

> On 06/07/15 15:03, Tiemen Ruiten wrote:
> > Please note I will -not- be using Hosted Engine.
> >
> > On 6 July 2015 at 13:54, Tiemen Ruiten  > > wrote:
> >
> > Hello,
> >
> > I'm in the planning stage of setting up a new oVirt cluster, where I
> > would prefer to use the local disks of the hypervisors for a
> > GlusterFS storage domain. What is the current recommendation (for
> > version 3.5.x)?
> >
> > I found this presentation which seems to suggest that it's possbile
> > with some
> > caveats:
> http://www.ovirt.org/images/6/6c/2015-ovirt-glusterfs-hyperconvergence.pdf
> >
> > And a few open bugs:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1177791
> > https://bugzilla.redhat.com/show_bug.cgi?id=115
> > https://bugzilla.redhat.com/show_bug.cgi?id=113
> >
> > What I can't find is current information on if this setup is already
> > possible with 3.5.x or it's better to wait for 3.6. Could someone
> > enlighten me?
> >
> > --
> > Tiemen Ruiten
> > Systems Engineer
> > R&D Media
> >
> >
> >
> >
> > --
> > Tiemen Ruiten
> > Systems Engineer
> > R&D Media
> >
> >
>
> Hi,
> most of the current work is focused in hyperconverged using hosted engine
> and
> this includes the installation and a few other features such as not
> fencing hypervisors
> to avoid killing running bricks.
> If you do not want to use hosted engine, then you should be able to create
> bricks
> and manage them as a gluster volume which your setup should be able to
> work with.
> If you do not use HA and/or fencing you should be mostly ok with this use
> case.
>
> Anything specific you're looking for?
>
>


-- 
Tiemen Ruiten
Systems Engineer
R&D Media
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Tiemen Ruiten
Also, while I can imagine why fencing might be a problem, what would be the
issue with HA?

On 6 July 2015 at 16:52, Tiemen Ruiten  wrote:

> Thanks Doron,
>
> I would initially start with a 3 host cluster, possibly adding more hosts
> later on. Is this enough to ensure integrity of the Gluster storage domain,
> eg. in case of host failure?
>
> What are recommendations (if they exist) for the type of Gluster volume
> for a 3-5 host cluster (replicated or replicated distributed, number of
> replica's)?
>
> Would this setup already leverage libgfapi instead of FUSE?
>
>
> On 6 July 2015 at 16:33, Doron Fediuck  wrote:
>
>> On 06/07/15 15:03, Tiemen Ruiten wrote:
>> > Please note I will -not- be using Hosted Engine.
>> >
>> > On 6 July 2015 at 13:54, Tiemen Ruiten > > > wrote:
>> >
>> > Hello,
>> >
>> > I'm in the planning stage of setting up a new oVirt cluster, where I
>> > would prefer to use the local disks of the hypervisors for a
>> > GlusterFS storage domain. What is the current recommendation (for
>> > version 3.5.x)?
>> >
>> > I found this presentation which seems to suggest that it's possbile
>> > with some
>> > caveats:
>> http://www.ovirt.org/images/6/6c/2015-ovirt-glusterfs-hyperconvergence.pdf
>> >
>> > And a few open bugs:
>> >
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1177791
>> > https://bugzilla.redhat.com/show_bug.cgi?id=115
>> > https://bugzilla.redhat.com/show_bug.cgi?id=113
>> >
>> > What I can't find is current information on if this setup is already
>> > possible with 3.5.x or it's better to wait for 3.6. Could someone
>> > enlighten me?
>> >
>> > --
>> > Tiemen Ruiten
>> > Systems Engineer
>> > R&D Media
>> >
>> >
>> >
>> >
>> > --
>> > Tiemen Ruiten
>> > Systems Engineer
>> > R&D Media
>> >
>> >
>>
>> Hi,
>> most of the current work is focused in hyperconverged using hosted engine
>> and
>> this includes the installation and a few other features such as not
>> fencing hypervisors
>> to avoid killing running bricks.
>> If you do not want to use hosted engine, then you should be able to
>> create bricks
>> and manage them as a gluster volume which your setup should be able to
>> work with.
>> If you do not use HA and/or fencing you should be mostly ok with this use
>> case.
>>
>> Anything specific you're looking for?
>>
>>
>
>
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
>



-- 
Tiemen Ruiten
Systems Engineer
R&D Media
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Alex Crow



On 06/07/15 16:01, Tiemen Ruiten wrote:
Also, while I can imagine why fencing might be a problem, what would 
be the issue with HA?



Hi,

Fencing is required for HA. If a box hosting HA VMs seems to have gone 
away, it *has* to be guaranteed that those VMs are not running before they 
are restarted elsewhere. Otherwise there could be more than one VM 
accessing the same storage, which will corrupt the VM's disk and leave 
you in a far worse situation.


Alex

--
This message is intended only for the addressee and may contain
confidential information. Unless you are that person, you may not
disclose its contents or use it in any way and are requested to delete
the message along with any attachments and notify us immediately.
"Transact" is operated by Integrated Financial Arrangements plc. 29
Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608
5300. (Registered office: as above; Registered in England and Wales
under number: 3727592). Authorised and regulated by the Financial
Conduct Authority (entered on the Financial Services Register; no. 190856).

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Hyperconvergence

2015-07-06 Thread Doron Fediuck
Hey,

As for libgfapi, there are several uses for it in oVirt, not all fully
implemented at this point. However, you need to remember that this is an
optimization, so whenever it's ready it will be released (as part of the
relevant version), but not having it does not mean lacking any functionality.

Per your question, we usually recommend working with replica 3 in the hosted
engine scenario. I'd stick to this mode even without HE to ensure you
have replicas as well as quorum in case of a failure. In this mode you
can lose the first host and keep running with a replica. Losing the 2nd
host will cause the FS to become read-only. Surviving a double failure
like that is pretty impressive for a community project.

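For context, a hedged sketch of the quorum options this behaviour corresponds to 
(these are standard Gluster volume options; the exact values oVirt applies to a 
given volume are not quoted here, and <vol> is a placeholder):

# gluster volume set <vol> cluster.quorum-type auto
Client-side quorum: once fewer than a majority of the replicas are reachable, writes 
are refused, which is the read-only behaviour described above.
# gluster volume set <vol> cluster.server-quorum-type server
Server-side quorum: glusterd takes its bricks down when it loses quorum with its peers.
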
Finally, HA leverages fencing to ensure we do not start an HA VM on
a host while it's still running on the old host and cause a split brain.
If we do not wish to fence bricks yet still want to use HA, you need to manually
confirm, each time something happens, that the host has been rebooted, which
kind of makes this a manual mode and less highly available.

Doron

On 06/07/15 18:01, Tiemen Ruiten wrote:
> Also, while I can imagine why fencing might be a problem, what would be
> the issue with HA?
> 
> On 6 July 2015 at 16:52, Tiemen Ruiten  > wrote:
> 
> Thanks Doron,
> 
> I would initially start with a 3 host cluster, possibly adding more
> hosts later on. Is this enough to ensure integrity of the Gluster
> storage domain, eg. in case of host failure?
> 
> What are recommendations (if they exist) for the type of Gluster
> volume for a 3-5 host cluster (replicated or replicated distributed,
> number of replica's)? 
> 
> Would this setup already leverage libgfapi instead of FUSE?
> 
> 
> On 6 July 2015 at 16:33, Doron Fediuck  > wrote:
> 
> On 06/07/15 15:03, Tiemen Ruiten wrote:
> > Please note I will -not- be using Hosted Engine.
> >
> > On 6 July 2015 at 13:54, Tiemen Ruiten  
> > >>
> wrote:
> >
> > Hello,
> >
> > I'm in the planning stage of setting up a new oVirt
> cluster, where I
> > would prefer to use the local disks of the hypervisors for a
> > GlusterFS storage domain. What is the current
> recommendation (for
> > version 3.5.x)?
> >
> > I found this presentation which seems to suggest that it's
> possbile
> > with some
> > caveats:
> 
> http://www.ovirt.org/images/6/6c/2015-ovirt-glusterfs-hyperconvergence.pdf
> >
> > And a few open bugs:
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1177791
> > https://bugzilla.redhat.com/show_bug.cgi?id=115
> > https://bugzilla.redhat.com/show_bug.cgi?id=113
> >
> > What I can't find is current information on if this setup
> is already
> > possible with 3.5.x or it's better to wait for 3.6. Could
> someone
> > enlighten me?
> >
> > --
> > Tiemen Ruiten
> > Systems Engineer
> > R&D Media
> >
> >
> >
> >
> > --
> > Tiemen Ruiten
> > Systems Engineer
> > R&D Media
> >
> >
> 
> Hi,
> most of the current work is focused in hyperconverged using
> hosted engine and
> this includes the installation and a few other features such as
> not fencing hypervisors
> to avoid killing running bricks.
> If you do not want to use hosted engine, then you should be able
> to create bricks
> and manage them as a gluster volume which your setup should be
> able to work with.
> If you do not use HA and/or fencing you should be mostly ok with
> this use case.
> 
> Anything specific you're looking for?
> 
> 
> 
> 
> -- 
> Tiemen Ruiten
> Systems Engineer
> R&D Media
> 
> 
> 
> 
> -- 
> Tiemen Ruiten
> Systems Engineer
> R&D Media

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs heal issues

2017-01-16 Thread Gary Pedretty
This is a self hosted Glusterized setup, with 3 hosts.  I have had some 
glusterfs data storage domains have some disk issues where healing was 
required.  The self heal seemed to startup and the Ovirt Management portal 
showed healing taking place in the Volumes/Brick tab.  Later it showed 
everything OK. This is a replica 3 volume. I noticed, however, that the brick 
tab was not showing even usage across the 3 bricks, and on the actual hosts a 
df command also shows uneven usage of the bricks. However, gluster volume heal 
(vol) info shows zero entries for all bricks. There are no errors reported in 
the Data Center or Cluster, yet I see this uneven usage of the bricks across the 
3 hosts.  

Doing a gluster volume status (vol) detail indeed shows different free disk 
space across the different bricks.  However the Inode Count and Free Inodes are 
identical across all bricks.  

I thought replica 3 was supposed to be mirrored across all nodes.  Any idea why 
I am seeing the uneven use, or is this just something about glusterfs that is 
different when it comes to free space vs Inode Count?

Gary



Gary Pedretty                      g...@ravnalaska.net
Systems Manager                    www.flyravn.com
Ravn Alaska                        907-450-7251
5245 Airport Industrial Road       907-450-7238 fax
Fairbanks, Alaska 99709            Second greatest commandment:
Serving All of Alaska              "Love your neighbor as yourself" Matt 22:39
Really loving the record green up date! Summer!!


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS performance questions

2017-10-16 Thread Jayme
I am currently in the process of researching converting an existing SMB
infra to virtual.  Ovirt/RHEV is a strong  contender and checks off a lot
of boxes on our list.  GlusterFS is appealing but I am finding it very
difficult to find any answers or stats/numbers regarding how well it can
perform as backend storage for VMs, especially VMs with medium to high
workloads, and possibly some or many in a dev environment where many small
files are interacted with very frequently.

From my research on GlusterFS in general, some people tend to say that it is
not the best performer and that it's not the best choice for handling small
files.  How true is this?

I understand that there are many variables such as hardware, drives, proper
configuration, internal network, etc., but what I'm trying to avoid here is
going through the whole process of spending a lot of money building out an
oVirt/Gluster setup with a proper SSD GlusterFS configuration and a 10 GbE internal
network, only to find out some day down the road that it cannot meet our I/O
needs.

Can anyone comment on how I can best determine the performance level to
expect from a typical oVirt with GlusterFS replica 3 arbiter 1 configuration?
Or perhaps share any use cases of people/companies running a similar setup in
production with high VM workloads?

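One hedged way to put numbers on this before committing is to run the same 
small-block, synchronous random-write test on each candidate backend from inside a 
test VM (all parameters and paths below are illustrative, not a benchmark anyone in 
this thread ran):

# mkdir -p /var/tmp/fiotest
# fio --name=smallfile --directory=/var/tmp/fiotest --rw=randwrite --bs=4k \
    --iodepth=32 --direct=1 --numjobs=4 --size=1G --time_based --runtime=120 \
    --group_reporting
Comparing the resulting IOPS and latency between a Gluster-backed VM and a SAN-backed 
VM on the same hosts is a more direct answer than any general benchmark.
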
Right now I'm leaning toward using a SAN due to the simplicity and
predictable performance.  Although this will definitely cost a lot more
money for something that glusterFS *may* be able to handle.

Thanks!

James
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Glusterfs and vm's

2021-05-07 Thread eevans
I have researched and applied several tweaks to Gluster to improve performance, 
but on the VM side, depending on the distribution, you can "tweak" the VM for 
better performance on Gluster as well:

yum install tuned.noarch tuned-utils-systemtap.noarch tuned-utils.noarch 
tuned-gtk.noarch -y
(GTK is for a gui if you have one.)

tuned-adm profile virtual-guest

Also, you can increase or decrease cache or swappiness on the host to fit your 
VMs, for workstations or servers. My cache is set to 2048 MB and 
swappiness to 10.
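
For reference, a hedged sketch of how those host-side knobs are typically set (the 
values are the ones mentioned above; whether "cache" here means the kernel dirty-page 
limit or Gluster's client-side read cache is an assumption, and <volname> is a 
placeholder):

Swappiness on the host, applied now and persisted:
# sysctl -w vm.swappiness=10
# echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
If the 2048 MB cache refers to Gluster's read cache (assumption), per volume:
# gluster volume set <volname> performance.cache-size 2GB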
Thanks for the help you have given me. 
Eric


This helps RHEL and CentOS machines utilize GlusterFS and actually speeds the 
VM up.
I hope this will help someone. If you want the URL for the article, just ask. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4A47XQJV6HXCKHSPRSEEG2LO2TWHYHC/


[ovirt-users] GlusterFS and oVirt

2019-01-20 Thread Magnus Isaksson
Hello


I have quite some trouble getting Gluster to work.


I have 4 nodes running CentOS and oVirt. These 4 nodes are split up into 2 
clusters.

I do not run Gluster via oVirt; I run it standalone to be able to use all 4 
nodes in one gluster volume.


I can add all peers successfully, and I can create a volume and start it 
successfully, but after that it starts getting troublesome.


If I run gluster volume status after starting the volume, it times out. I have 
read that the ping-timeout needs to be more than 0, so I set it to 30. Still the same 
problem.


From now on, I cannot stop a volume nor remove it; I have to stop glusterd 
and remove it from /var/lib/glusterd/vols/* on all nodes to be able to do 
anything with gluster.


From time to time, when I do a gluster peer status it shows "disconnected", and 
when I run it again directly after, it shows "connected".


I get a lot of these errors in glusterd.log

[2019-01-20 12:53:46.087848] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 
0-management: socket disconnected
[2019-01-20 12:53:46.087858] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:55.091598] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.094846] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.097482] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 
0-management: RPC_CLNT_PING notify failed


What am I doing wrong?


//Magnus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6XHSOBCDZ6AG4AVC4MLQNRJUJ2NKYTI2/


[ovirt-users] glusterfs not starting

2021-08-20 Thread eevans
I have oVirt 4.3 on 3 CentOS 7 hosts with Gluster 9.  Another CentOS 7 server is 
the oVirt node, but only for management and file sharing.
Everything was fine until a power outage. Now, on node2, glusterfs will not spawn 
the listening port for the brick. 
All nodes are connected, but there is just no brick port for the brick on this node, 
because glusterfsd is not starting.

I pointed to the volume file: glusterfs 
--volfile=dds_cluster.kvm02.digitaldatatechs.com.gluster_bricks-kvm02_bricks-brick1-brick1.vol
 
and even the entire path:
 glusterfs 
--volfile=/var/lib/glusterd/vols/dds_cluster/dds_cluster.kvm02.digitaldatatechs.com.gluster_bricks-kvm02_bricks-brick1-brick1.vol
 

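For what it's worth, the usual way to respawn a missing brick process is through 
glusterd rather than by invoking the glusterfs binary against a volfile by hand; a 
hedged sketch (the volume name is taken from the paths above, the rest is generic):

# systemctl restart glusterd
# gluster volume status dds_cluster
The status output shows which brick is offline / has no port. To respawn missing 
brick processes without touching the ones already running:
# gluster volume start dds_cluster force
If the brick still refuses to start, the brick log under /var/log/glusterfs/bricks/ 
usually says why.
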
I really need some help with this and it will be greatly appreciated.

Please direct replies to eevans6...@gmail.com. 
My mail server is down. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ESUURRRDQ3BVT6EK2JDKDPPMYZBDN726/


[ovirt-users] GlusterFS Monitoring/Alerting

2021-09-07 Thread simon
Hi All,

Does anyone have recommendations for GlusterFS monitoring/alerting software 
and/or plugins?

Kind regards

Simon...  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M6AKHWOD7GLBHJRSCWNZRMM7OOXMKFOY/


[ovirt-users] GlusterFS poor performance

2022-03-03 Thread francesco--- via Users
Hi all,

I'm running a GlusterFS setup, v8.6, with two nodes and one arbiter. Both nodes 
and the arbiter are CentOS 8 Stream with oVirt 4.4. Under gluster I have an LVM thin 
partition.

VMs running in this cluster have really poor write performance, while a test 
performed directly on the disk scores about 300 MB/s.

dd test on host1:

[root@ovirt-host1 tmp]# dd if=/dev/zero of=./foo.dat bs=256M count=1 oflag=dsync
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.839861 s, 320 MB/s

dd test on host1 on gluster:

[root@ovirt-host1 tmp]# dd if=/dev/zero 
of=/rhev/data-center/mnt/glusterSD/ovirt-host1:_data/foo.dat bs=256M count=1 
oflag=dsync
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 50.6889 s, 5.3 MB/s

Nonetheless, the write result in a VM inside the cluster is a little bit faster 
(dd results vary from 15 MB/s to 60 MB/s), which is very strange to me:

root@vm1-ha:/tmp# dd if=/dev/zero of=./foo.dat bs=256M count=1 oflag=dsync; rm 
-f ./foo.dat
1+0 records in
1+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 5.58727 s, 48.0 MB/s


Here's the actual gluster configuration; I also applied some parameters from 
/var/lib/glusterd/groups/virt, as mentioned in other related oVirt threads I 
found.


gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 09b532eb-57de-4c29-862d-93993c990e32
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt-host1:/gluster_bricks/data/data
Brick2: ovirt-host2:/gluster_bricks/data/data
Brick3: ovirt-arbiter:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
server.event-threads: 4
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.server-quorum-type: server
cluster.lookup-optimize: off
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.choose-local: off
client.event-threads: 4
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable

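Two things worth noting here. First, with a replica volume the FUSE client writes 
synchronously to both data bricks, so the ~1 Gbit/s link already caps a dd on the 
gluster mount well below the local 320 MB/s. Second, a hedged diagnostic (not a 
recommended permanent change, since these options are part of the virt group for 
data safety) is to relax the O_DIRECT-related settings, re-run the same dd, and 
then revert:

# gluster volume set data network.remote-dio on
# gluster volume set data performance.strict-o-direct off
Re-test on the gluster mount, then revert:
# gluster volume set data network.remote-dio off
# gluster volume set data performance.strict-o-direct on
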

The speed between two hosts is about 1Gb/s:

[root@ovirt-host1 ~]# iperf3 -c ovirt-host2 -p 5002
Connecting to host ovirt-host2 port 5002
[  5] local x.x.x.x port 58072 connected to y.y.y.y port 5002
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   938 Mbits/sec  117375 KBytes
[  5]   1.00-2.00   sec   112 MBytes   937 Mbits/sec0397 KBytes
[  5]   2.00-3.00   sec   110 MBytes   924 Mbits/sec   18344 KBytes
[  5]   3.00-4.00   sec   112 MBytes   936 Mbits/sec0369 KBytes
[  5]   4.00-5.00   sec   111 MBytes   927 Mbits/sec   12386 KBytes
[  5]   5.00-6.00   sec   112 MBytes   938 Mbits/sec0471 KBytes
[  5]   6.00-7.00   sec   108 MBytes   909 Mbits/sec   34382 KBytes
[  5]   7.00-8.00   sec   112 MBytes   942 Mbits/sec0438 KBytes
[  5]   8.00-9.00   sec   111 MBytes   928 Mbits/sec   38372 KBytes
[  5]   9.00-10.00  sec   111 MBytes   934 Mbits/sec0481 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec  1.08 GBytes   931 Mbits/sec  219 sender
[  5]   0.00-10.04  sec  1.08 GBytes   926 Mbits/sec  receiver

iperf Done.

Between the nodes and the arbiter it's about 230 Mbit/s:

[  5] local ovirt-arbiter port 45220 connected to ovirt-host1 port 5002
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  30.6 MBytes   257 Mbits/sec  1177281 KBytes
[  5]   1.00-2.00   sec  26.2 MBytes   220 Mbits/sec0344 KBytes
[  5]   2.00-3.00   sec  28.8 MBytes   241 Mbits/sec   15288 KBytes
[  5]   3.00-4.00   sec  26.2 MBytes   220 Mbits/sec0352 KBytes
[  5]   4.00-5.00   sec  30.0 MBytes   252 Mbits/sec   32293 KBytes
[  5]   5.00-6.00   sec  26.2 MBytes   220 Mbits/sec0354 KBytes
[  5]   6.00-7.00   sec  30.0 MBytes   252 Mbits/sec   32293 KBytes
[  5]   7.00-8.00   sec  27.5 MBytes   231 Mbits/sec0355 KBytes
[  5]   8.00-9.00   sec  28.8 MBytes   241 Mbits/sec   30294 KBytes
[  5]   9.00-10.00  sec  26.2 MBytes   220 Mbits/sec3250 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec   281 MBytes   235 Mbits/sec  1289 sender
[  5]   0.00-10.03  sec   277 MBytes   232 Mbits/sec  receiver

iperf Done.



I'm definitely missing something obvious, and I'm not a gluster/oVirt black 
belt... Can anyone point me in the right direction?

Thank you for your time.

Regards,
Francesco
___
Users mailing list -- users@ovirt.org

[ovirt-users] GlusterFS Network issue

2022-08-18 Thread Facundo Badaracco
Hi everyone.
 I have deployed a 3x replica GlusterFS successfully.
 I have 4 NICs in each server. I will be using 2 bonds, one for Gluster and the
other for the VMs. My question is:

Actually, my network is 192.168.2.0/23.

Should the IPs of bond0 and bond1 be in different networks? Can I give, for
example, 192.168.2.3 to bond0 and 192.168.2.4 to bond1?
If the above can be done, how?

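A minimal sketch of the conventional layout (addresses, interface names and hostnames 
below are illustrative): keep bond1 for VM/management traffic on 192.168.2.0/23, give 
bond0 its own storage subnet, and peer-probe the storage-side hostnames so Gluster 
traffic stays on that bond:

# nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.168.10.3/24
# nmcli con up bond0
Add storage-side names, e.g. in /etc/hosts: 192.168.10.3 node1-storage, 
192.168.10.4 node2-storage, and so on, then:
# gluster peer probe node2-storage
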
Thx in advance
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TJZSZDORACVDYCP7EYF2ESSZFTWHLHNC/


Re: [ovirt-users] glusterfs tips/questions

2014-05-21 Thread Kanagaraj


On 05/21/2014 02:04 PM, Gabi C wrote:

Hello!

I have an ovirt setup, 3.4.1, up-to date, with gluster package 
3.5.0-3.fc19 on all 3 nodes. Glusterfs setup is replicated on 3 
bricks. On 2 nodes 'gluster peeer status' raise 2 peer connected with 
it's UUID. On third node 'gluster peer status' raise 3 peers, out of 
which, two reffer to same node/IP but different UUID.


in every node you can find the peers in /var/lib/glusterd/peers/

you can get the uuid of the current node using the command "gluster 
system:: uuid get"


From this you can find which file is wrong in the above location.
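
A hedged sketch of that check (the path is the one above; UUIDs and hostnames are 
whatever your nodes report):

Run on every node and note each node's own UUID:
# gluster system:: uuid get
On the affected node, inspect the stored peer entries (each file contains uuid= and 
hostname1= lines):
# cat /var/lib/glusterd/peers/*
The peer file whose uuid matches no real node's UUID is the stale entry.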

[Adding gluster-us...@ovirt.org]



What I have tried:
- stopped gluster volumes, put 3rd node in maintenace, reboor -> no 
effect;
- stopped  volumes, removed bricks belonging to 3rd node, readded it, 
start volumes but still no effect.



Any ideas, hints?

TIA


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-21 Thread Gabi C
On afected node:

gluster peer status

gluster peer status
Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)





ls -la /var/lib/gluster



ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw-------. 1 root root   73 May 21 11:10 85c2a08c-a955-47cc-a924-cf66c6814654
-rw-------. 1 root root   73 May 21 10:52 c22e41b8-2818-4a96-a6df-a237517836d6
-rw-------. 1 root root   73 May 21 11:10 d95558a0-a306-4812-aec2-a361a9ddde3e


Should I delete d95558a0-a306-4812-aec2-a361a9ddde3e?





On Wed, May 21, 2014 at 12:00 PM, Kanagaraj  wrote:

>
> On 05/21/2014 02:04 PM, Gabi C wrote:
>
>   Hello!
>
>  I have an ovirt setup, 3.4.1, up-to date, with gluster package
> 3.5.0-3.fc19 on all 3 nodes. Glusterfs setup is replicated on 3 bricks. On
> 2 nodes 'gluster peeer status' raise 2 peer connected with it's UUID. On
> third node 'gluster peer status' raise 3 peers, out of which, two reffer to
> same node/IP but different UUID.
>
>
> in every node you can find the peers in /var/lib/glusterd/peers/
>
> you can get the uuid of the current node using the command "gluster
> system:: uuid get"
>
> From this you can find which file is wrong in the above location.
>
> [Adding gluster-us...@ovirt.org]
>
>
>  What I have tried:
>  - stopped gluster volumes, put 3rd node in maintenace, reboor -> no
> effect;
>  - stopped  volumes, removed bricks belonging to 3rd node, readded it,
> start volumes but still no effect.
>
>
>  Any ideas, hints?
>
>  TIA
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-21 Thread Gabi C
...or should I:

- stop the volumes
- remove the brick belonging to the affected node
- remove the affected node/peer
- add the node and brick again, then start the volumes?

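For reference, a hedged sketch of what that sequence would look like with the gluster 
CLI (volume, node and brick names are placeholders; whether this is the right fix is 
exactly the open question here, so treat it as illustrative only):

# gluster volume stop <vol>
# gluster volume remove-brick <vol> replica 2 node3:/path/to/brick force
# gluster peer detach node3
# gluster peer probe node3
# gluster volume add-brick <vol> replica 3 node3:/path/to/brick
# gluster volume start <vol>
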


On Wed, May 21, 2014 at 1:13 PM, Gabi C  wrote:

> On afected node:
>
> gluster peer status
>
> gluster peer status
> Number of Peers: 3
>
> Hostname: 10.125.1.194
> Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
> State: Peer in Cluster (Connected)
>
> Hostname: 10.125.1.196
> Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
> State: Peer in Cluster (Connected)
>
> Hostname: 10.125.1.194
> Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
> State: Peer in Cluster (Connected)
>
>
>
>
>
> ls -la /var/lib/gluster
>
>
>
> ls -la /var/lib/glusterd/peers/
> total 20
> drwxr-xr-x. 2 root root 4096 May 21 11:10 .
> drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
> -rw---. 1 root root   73 May 21 11:10
> 85c2a08c-a955-47cc-a924-cf66c6814654
> -rw---. 1 root root   73 May 21 10:52
> c22e41b8-2818-4a96-a6df-a237517836d6
> -rw---. 1 root root   73 May 21 11:10
> d95558a0-a306-4812-aec2-a361a9ddde3e
>
>
> Shoul I delete d95558a0-a306-4812-aec2-a361a9ddde3e??
>
>
>
>
>
> On Wed, May 21, 2014 at 12:00 PM, Kanagaraj  wrote:
>
>>
>> On 05/21/2014 02:04 PM, Gabi C wrote:
>>
>>   Hello!
>>
>>  I have an ovirt setup, 3.4.1, up-to date, with gluster package
>> 3.5.0-3.fc19 on all 3 nodes. Glusterfs setup is replicated on 3 bricks. On
>> 2 nodes 'gluster peeer status' raise 2 peer connected with it's UUID. On
>> third node 'gluster peer status' raise 3 peers, out of which, two reffer to
>> same node/IP but different UUID.
>>
>>
>> in every node you can find the peers in /var/lib/glusterd/peers/
>>
>> you can get the uuid of the current node using the command "gluster
>> system:: uuid get"
>>
>> From this you can find which file is wrong in the above location.
>>
>> [Adding gluster-us...@ovirt.org]
>>
>>
>>  What I have tried:
>>  - stopped gluster volumes, put 3rd node in maintenace, reboor -> no
>> effect;
>>  - stopped  volumes, removed bricks belonging to 3rd node, readded it,
>> start volumes but still no effect.
>>
>>
>>  Any ideas, hints?
>>
>>  TIA
>>
>>
>> ___
>> Users mailing 
>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-21 Thread Kanagaraj

What are the steps which led to this situation?

Did you re-install one of the nodes after forming the cluster, or reboot it 
in a way which could have changed the IP?



On 05/21/2014 03:43 PM, Gabi C wrote:

On afected node:

gluster peer status

gluster peer status
Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)





ls -la /var/lib/gluster



ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw---. 1 root root   73 May 21 11:10 
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 10:52 
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 11:10 
d95558a0-a306-4812-aec2-a361a9ddde3e



Shoul I delete d95558a0-a306-4812-aec2-a361a9ddde3e??





On Wed, May 21, 2014 at 12:00 PM, Kanagaraj > wrote:



On 05/21/2014 02:04 PM, Gabi C wrote:

Hello!

I have an ovirt setup, 3.4.1, up-to date, with gluster package
3.5.0-3.fc19 on all 3 nodes. Glusterfs setup is replicated on 3
bricks. On 2 nodes 'gluster peeer status' raise 2 peer connected
with it's UUID. On third node 'gluster peer status' raise 3
peers, out of which, two reffer to same node/IP but different UUID.


in every node you can find the peers in /var/lib/glusterd/peers/

you can get the uuid of the current node using the command
"gluster system:: uuid get"

From this you can find which file is wrong in the above location.

[Adding gluster-us...@ovirt.org ]



What I have tried:
- stopped gluster volumes, put 3rd node in maintenace, reboor ->
no effect;
- stopped  volumes, removed bricks belonging to 3rd node, readded
it, start volumes but still no effect.


Any ideas, hints?

TIA


___
Users mailing list
Users@ovirt.org  
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-21 Thread Gabi C
Hello!


I haven't changed the IP, nor reinstalled nodes. All nodes are updated via
yum. All I can think of is that, after having some issue with gluster, from the
WebGUI I deleted the VM, deactivated and detached the storage domains (I have 2),
then, *manually*, from one of the nodes, removed the bricks, then detached the peers,
probed them, added the bricks again, brought the volume up, and re-added the storage
domains from the WebGUI.


On Wed, May 21, 2014 at 4:26 PM, Kanagaraj  wrote:

>  What are the steps which led this situation?
>
> Did you re-install one of the nodes after forming the cluster or reboot
> which could have changed the ip?
>
>
>
> On 05/21/2014 03:43 PM, Gabi C wrote:
>
>  On afected node:
>
> gluster peer status
>
> gluster peer status
> Number of Peers: 3
>
> Hostname: 10.125.1.194
> Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
> State: Peer in Cluster (Connected)
>
> Hostname: 10.125.1.196
> Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
> State: Peer in Cluster (Connected)
>
> Hostname: 10.125.1.194
> Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
> State: Peer in Cluster (Connected)
>
>
>
>
>
>  ls -la /var/lib/gluster
>
>
>
> ls -la /var/lib/glusterd/peers/
> total 20
> drwxr-xr-x. 2 root root 4096 May 21 11:10 .
> drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
> -rw---. 1 root root   73 May 21 11:10
> 85c2a08c-a955-47cc-a924-cf66c6814654
> -rw---. 1 root root   73 May 21 10:52
> c22e41b8-2818-4a96-a6df-a237517836d6
> -rw---. 1 root root   73 May 21 11:10
> d95558a0-a306-4812-aec2-a361a9ddde3e
>
>
>  Shoul I delete d95558a0-a306-4812-aec2-a361a9ddde3e??
>
>
>
>
>
> On Wed, May 21, 2014 at 12:00 PM, Kanagaraj  wrote:
>
>>
>> On 05/21/2014 02:04 PM, Gabi C wrote:
>>
>>   Hello!
>>
>>  I have an ovirt setup, 3.4.1, up-to date, with gluster package
>> 3.5.0-3.fc19 on all 3 nodes. Glusterfs setup is replicated on 3 bricks. On
>> 2 nodes 'gluster peeer status' raise 2 peer connected with it's UUID. On
>> third node 'gluster peer status' raise 3 peers, out of which, two reffer to
>> same node/IP but different UUID.
>>
>>
>>  in every node you can find the peers in /var/lib/glusterd/peers/
>>
>> you can get the uuid of the current node using the command "gluster
>> system:: uuid get"
>>
>> From this you can find which file is wrong in the above location.
>>
>> [Adding gluster-us...@ovirt.org]
>>
>>
>>  What I have tried:
>>  - stopped gluster volumes, put 3rd node in maintenace, reboor -> no
>> effect;
>>  - stopped  volumes, removed bricks belonging to 3rd node, readded it,
>> start volumes but still no effect.
>>
>>
>>  Any ideas, hints?
>>
>>  TIA
>>
>>
>>  ___
>> Users mailing 
>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-21 Thread Kanagaraj

Ok.

I am not sure whether deleting the file or re-doing the peer probe would be
the right way to go.


Gluster-users can help you here.


On 05/21/2014 07:08 PM, Gabi C wrote:

Hello!


I haven't change the IP, nor reinstall nodes. All nodes are updated 
via yum. All I can think of was that after having some issue with 
gluster,from WebGUI I deleted VM, deactivate and detach storage 
domains ( I have 2) , than, _manually_, from one of the nodes , remove 
bricks, then detach peers, probe them, add bricks again, bring the 
volume up, and readd storage domains from the webGUI.



On Wed, May 21, 2014 at 4:26 PM, Kanagaraj wrote:


What are the steps which led this situation?

Did you re-install one of the nodes after forming the cluster or
reboot which could have changed the ip?



On 05/21/2014 03:43 PM, Gabi C wrote:

On afected node:

gluster peer status

gluster peer status
Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)





ls -la /var/lib/gluster



ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw---. 1 root root   73 May 21 11:10
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 10:52
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 11:10
d95558a0-a306-4812-aec2-a361a9ddde3e


Shoul I delete d95558a0-a306-4812-aec2-a361a9ddde3e??





On Wed, May 21, 2014 at 12:00 PM, Kanagaraj <kmayi...@redhat.com> wrote:


On 05/21/2014 02:04 PM, Gabi C wrote:

Hello!

I have an ovirt setup, 3.4.1, up-to date, with gluster
package 3.5.0-3.fc19 on all 3 nodes. Glusterfs setup is
replicated on 3 bricks. On 2 nodes 'gluster peeer status'
raise 2 peer connected with it's UUID. On third node
'gluster peer status' raise 3 peers, out of which, two
reffer to same node/IP but different UUID.


in every node you can find the peers in /var/lib/glusterd/peers/

you can get the uuid of the current node using the command
"gluster system:: uuid get"

From this you can find which file is wrong in the above location.

[Adding gluster-us...@ovirt.org ]



What I have tried:
- stopped gluster volumes, put 3rd node in maintenace,
reboor -> no effect;
- stopped  volumes, removed bricks belonging to 3rd node,
readded it, start volumes but still no effect.


Any ideas, hints?

TIA


___
Users mailing list
Users@ovirt.org  
http://lists.ovirt.org/mailman/listinfo/users








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Vijay Bellur

On 05/21/2014 07:22 PM, Kanagaraj wrote:

Ok.

I am not sure deleting the file or re-peer probe would be the right way
to go.

Gluster-users can help you here.


On 05/21/2014 07:08 PM, Gabi C wrote:

Hello!


I haven't change the IP, nor reinstall nodes. All nodes are updated
via yum. All I can think of was that after having some issue with
gluster,from WebGUI I deleted VM, deactivate and detach storage
domains ( I have 2) , than, _manually_, from one of the nodes , remove
bricks, then detach peers, probe them, add bricks again, bring the
volume up, and readd storage domains from the webGUI.


On Wed, May 21, 2014 at 4:26 PM, Kanagaraj <kmayi...@redhat.com> wrote:

What are the steps which led this situation?

Did you re-install one of the nodes after forming the cluster or
reboot which could have changed the ip?



On 05/21/2014 03:43 PM, Gabi C wrote:

On afected node:

gluster peer status

gluster peer status
Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)





ls -la /var/lib/gluster



ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw---. 1 root root   73 May 21 11:10
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 10:52
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 11:10
d95558a0-a306-4812-aec2-a361a9ddde3e





Can you please check the output of cat 
/var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e ?


If it does contain information about the duplicated peer and neither of the 
other 2 nodes has this file in /var/lib/glusterd/peers/, the file 
can be moved out of /var/lib/glusterd or deleted.
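
For instance, a quick verification pass (a sketch; run the same two commands
on each of the 3 nodes and compare what comes back):

ls /var/lib/glusterd/peers/
cat /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e 2>/dev/null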


Regards,
Vijay


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Gabi C
On problematic node:

[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw---. 1 root root   73 May 21 16:33
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 16:33
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 16:33
d95558a0-a306-4812-aec2-a361a9ddde3e
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194







on other 2 nodes


[root@virtual4 ~]# ls -la /var/lib/glusterd/peers/
total 16
drwxr-xr-x. 2 root root 4096 May 21 16:34 .
drwxr-xr-x. 9 root root 4096 May 21 16:34 ..
-rw---. 1 root root   73 May 21 16:34
bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
-rw---. 1 root root   73 May 21 11:09
c22e41b8-2818-4a96-a6df-a237517836d6
[root@virtual4 ~]# cat
/var/lib/glusterd/peers/bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
uuid=bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
state=3
hostname1=10.125.1.195
[root@virtual4 ~]# cat
/var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196





[root@virtual6 ~]# ls -la /var/lib/glusterd/peers/
total 16
drwxr-xr-x. 2 root root 4096 May 21 16:34 .
drwxr-xr-x. 9 root root 4096 May 21 16:34 ..
-rw---. 1 root root   73 May 21 11:10
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 16:34
bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
[root@virtual6 ~]# cat
/var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual6 ~]# cat
/var/lib/glusterd/peers/bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
uuid=bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
state=3
hostname1=10.125.1.195
[root@virtual6 ~]#



On Fri, May 23, 2014 at 2:05 PM, Vijay Bellur  wrote:

> On 05/21/2014 07:22 PM, Kanagaraj wrote:
>
>> Ok.
>>
>> I am not sure deleting the file or re-peer probe would be the right way
>> to go.
>>
>> Gluster-users can help you here.
>>
>>
>> On 05/21/2014 07:08 PM, Gabi C wrote:
>>
>>> Hello!
>>>
>>>
>>> I haven't change the IP, nor reinstall nodes. All nodes are updated
>>> via yum. All I can think of was that after having some issue with
>>> gluster,from WebGUI I deleted VM, deactivate and detach storage
>>> domains ( I have 2) , than, _manually_, from one of the nodes , remove
>>>
>>> bricks, then detach peers, probe them, add bricks again, bring the
>>> volume up, and readd storage domains from the webGUI.
>>>
>>>
>>> On Wed, May 21, 2014 at 4:26 PM, Kanagaraj >> > wrote:
>>>
>>> What are the steps which led this situation?
>>>
>>> Did you re-install one of the nodes after forming the cluster or
>>> reboot which could have changed the ip?
>>>
>>>
>>>
>>> On 05/21/2014 03:43 PM, Gabi C wrote:
>>>
 On afected node:

 gluster peer status

 gluster peer status
 Number of Peers: 3

 Hostname: 10.125.1.194
 Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
 State: Peer in Cluster (Connected)

 Hostname: 10.125.1.196
 Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
 State: Peer in Cluster (Connected)

 Hostname: 10.125.1.194
 Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
 State: Peer in Cluster (Connected)





 ls -la /var/lib/gluster



 ls -la /var/lib/glusterd/peers/
 total 20
 drwxr-xr-x. 2 root root 4096 May 21 11:10 .
 drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
 -rw---. 1 root root   73 May 21 11:10
 85c2a08c-a955-47cc-a924-cf66c6814654
 -rw---. 1 root root   73 May 21 10:52
 c22e41b8-2818-4a96-a6df-a237517836d6
 -rw---. 1 root root   73 May 21 11:10
 d95558a0-a306-4812-aec2-a361a9ddde3e



>
> Can you please check the output of cat /var/lib/glusterd/peers/
> d95558a0-a306-4812-aec2-a361a9ddde3e ?
>
> If it does contain information about the duplicated peer and none of the
> other 2 nodes do have this file in /var/lib/glusterd/peers/, the file can
> be moved out of /var/lib/glusterd or deleted.
>
> Regards,
> Vijay
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Vijay Bellur

On 05/23/2014 05:25 PM, Gabi C wrote:

On problematic node:

[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw---. 1 root root   73 May 21 16:33
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 16:33
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 16:33
d95558a0-a306-4812-aec2-a361a9ddde3e
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194



Looks like this is stale information for 10.125.1.194 that has somehow 
persisted. Deleting this file and then restarting glusterd on this node 
should lead to a consistent state for the peers.
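
Roughly like this on the affected node (a sketch; /root is just a parking spot
for the stale file, and on an EL6/Fedora-era setup glusterd is restarted via
its service):

mv /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e /root/
service glusterd restart
gluster peer status   # should now report only 2 peers on every node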


Regards,
Vijay

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Gabi C
Just did it and it seems to be OK!

Many thanks!



On Fri, May 23, 2014 at 3:11 PM, Vijay Bellur  wrote:

> On 05/23/2014 05:25 PM, Gabi C wrote:
>
>> On problematic node:
>>
>> [root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
>> total 20
>> drwxr-xr-x. 2 root root 4096 May 21 16:33 .
>> drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
>> -rw---. 1 root root   73 May 21 16:33
>> 85c2a08c-a955-47cc-a924-cf66c6814654
>> -rw---. 1 root root   73 May 21 16:33
>> c22e41b8-2818-4a96-a6df-a237517836d6
>> -rw---. 1 root root   73 May 21 16:33
>> d95558a0-a306-4812-aec2-a361a9ddde3e
>> [root@virtual5 ~]# cat
>> /var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
>> uuid=85c2a08c-a955-47cc-a924-cf66c6814654
>> state=3
>> hostname1=10.125.1.194
>> [root@virtual5 ~]# cat
>> /var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
>> uuid=c22e41b8-2818-4a96-a6df-a237517836d6
>> state=3
>> hostname1=10.125.1.196
>> [root@virtual5 ~]# cat
>> /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
>> uuid=85c2a08c-a955-47cc-a924-cf66c6814654
>> state=3
>> hostname1=10.125.1.194
>>
>>
> Looks like this is stale information for 10.125.1.194 that has somehow
> persisted. Deleting this file and then restarting glusterd on this node
> should lead to a consistent state for the peers.
>
> Regards,
> Vijay
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs quetions/tips

2014-05-24 Thread Gilad Chaplik
Hi,

did you miss the email's body :-) ?

Thanks, 
Gilad.


- Original Message -
> From: "Gabi C" 
> To: users@ovirt.org
> Sent: Wednesday, May 21, 2014 11:28:26 AM
> Subject: [ovirt-users] glusterfs quetions/tips
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs quetions/tips

2014-05-24 Thread Gilad Chaplik
ignore :-) saw the later thread...

Thanks, 
Gilad.


- Original Message -
> From: "Gilad Chaplik" 
> To: "Gabi C" 
> Cc: users@ovirt.org
> Sent: Saturday, May 24, 2014 11:32:14 AM
> Subject: Re: [ovirt-users] glusterfs quetions/tips
> 
> Hi,
> 
> did you miss the email's body :-) ?
> 
> Thanks,
> Gilad.
> 
> 
> - Original Message -
> > From: "Gabi C" 
> > To: users@ovirt.org
> > Sent: Wednesday, May 21, 2014 11:28:26 AM
> > Subject: [ovirt-users] glusterfs quetions/tips
> > 
> > 
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Glusterfs HA with Ovirt

2014-07-02 Thread Punit Dambiwal
Hi,

I have some HA-related concerns about glusterfs with oVirt... let's say I have
4 storage nodes with gluster bricks as below:

1. 10.10.10.1 to 10.10.10.4 with 2 bricks each, in a distributed
replicated architecture...
2. I attached this gluster storage to ovirt-engine with the following
mount point 10.10.10.2/vol1
3. In my cluster I have 3 hypervisor hosts (10.10.10.5 to 10.10.10.7); SPM
is on 10.10.10.5...
4. What happens if 10.10.10.2 goes down? Can the hypervisor hosts
still access the storage?
5. What happens if the SPM goes down?

Note: what happens for points 4 & 5 if storage and compute are both running on
the same server?

Thanks,
Punit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs domains and libgfapi?

2014-10-01 Thread Jason Brooks
Is glusterfs w/ libgfapi working for anyone?

Is this still a feature of oVirt?

Thanks, Jason
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS host installation failed

2015-07-08 Thread Stefano Stagnaro
Hi,

host installation in a glusterfs cluster is failing due to dependency errors:

Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
   Requires: pyxattr
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
   Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
   Requires: liburcu-bp.so.1()(64bit)

oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6

I've installed the following repo: 
http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

Thank you,
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterFS is not independent?

2017-11-09 Thread Jon bae
Hello,
I'm very new to oVirt and GlusterFS, so maybe I got something wrong...

I have the oVirt engine installed on a separate server, and I also have two
physical nodes. On every node I configured GlusterFS; each volume is in
distribute mode and has only one brick, coming from that node. I also added
each volume to its own storage domain.

The idea was that both storage domains are independent from each other, so
that I can turn off one node and only turn it on when I need it.

But now I have the problem that when I turn off one node, both storage
domains go down, and the volume shows that the brick is not available.

Is there a way to fix this?

Regards
Jonathan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs resume vm paused state

2014-06-01 Thread Andrew Lau
Hi,

Has anyone had any luck with resuming a VM from a paused state on top
of a gluster share? Even when the VMs are marked as HA, if the gluster
storage goes down for a few seconds the VMs go into a paused state and
can never be resumed. They require a hard reset.

I recall when using NFS to not have this issue.

Thanks,
Andrew
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Darrell Budic
You need to setup a virtual IP to use as the mount point, most people use 
keepalived to provide a virtual ip via vrrp for this. Setup something like 
10.10.10.10 and use that for your mounts.
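
A minimal keepalived/VRRP sketch for such a floating address, run on two of the
storage nodes (interface name, router id, priorities and the 10.10.10.10/24
address are all assumptions to adapt):

cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance gluster_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100          # give each node a different priority
    advert_int 1
    virtual_ipaddress {
        10.10.10.10/24
    }
}
EOF
service keepalived start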

Right now, if 10.10.10.2 goes away, all your gluster mounts go away and your 
VMs get paused because the hypervisors can’t access the storage. Your gluster 
storage is still fine, but ovirt can’t talk to it because 10.10.10.2 isn’t 
there.

If the SPM goes down, the other hypervisor hosts will elect a new one (under 
control of the ovirt engine).

Same scenarios if storage & compute are on the same server, you still need a 
vip address for the storage portion to serve as the mount point so it’s not 
dependent on any one server.

-Darrell

On Jul 3, 2014, at 1:14 AM, Punit Dambiwal  wrote:

> Hi,
> 
> I have some HA related concern about glusterfs with Ovirt...let say i have 4 
> storage node with gluster bricks as below :- 
> 
> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have distributed 
> replicated architecture...
> 2. Now attached this gluster storge to ovrit-engine with the following mount 
> point 10.10.10.2/vol1
> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to 10.10.10.7) SPM is 
> on 10.10.10.5...
> 4. What happen if 10.10.10.2 will goes down.can hypervisior host can 
> still access the storage ??
> 5. What happen if SPM goes down ???
> 
> Note :- What happen for point 4 &5 ,If storage and Compute both working on 
> the same server.
> 
> Thanks,
> Punit 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Punit Dambiwal
Hi,

Thanks... can you suggest any good how-to/article for glusterfs with
ovirt?

One strange thing: if I run both (compute & storage) on the same
node, the situation quoted below does not happen:

-
Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
your VMs get paused because the hypervisors can’t access the storage. Your
gluster storage is still fine, but ovirt can’t talk to it because
10.10.10.2 isn’t there.
-

Even when 10.10.10.2 goes down, I can still access the gluster mounts and
no VM pauses. I can access the VMs via ssh with no connection failure. The
connection drops only when the SPM goes down and another node is
elected as SPM (all the running VMs pause in this condition).



On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic 
wrote:

> You need to setup a virtual IP to use as the mount point, most people use
> keepalived to provide a virtual ip via vrrp for this. Setup something like
> 10.10.10.10 and use that for your mounts.
>
> Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
> your VMs get paused because the hypervisors can’t access the storage. Your
> gluster storage is still fine, but ovirt can’t talk to it because
> 10.10.10.2 isn’t there.
>
> If the SPM goes down, it the other hypervisor hosts will elect a new one
> (under control of the ovirt engine).
>
> Same scenarios if storage & compute are on the same server, you still need
> a vip address for the storage portion to serve as the mount point so it’s
> not dependent on any one server.
>
> -Darrell
>
> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal  wrote:
>
> Hi,
>
> I have some HA related concern about glusterfs with Ovirt...let say i have
> 4 storage node with gluster bricks as below :-
>
> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have distributed
> replicated architecture...
> 2. Now attached this gluster storge to ovrit-engine with the following
> mount point 10.10.10.2/vol1
> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to 10.10.10.7) SPM
> is on 10.10.10.5...
> 4. What happen if 10.10.10.2 will goes down.can hypervisior host can
> still access the storage ??
> 5. What happen if SPM goes down ???
>
> Note :- What happen for point 4 &5 ,If storage and Compute both working on
> the same server.
>
> Thanks,
> Punit
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Andrew Lau
Don't forget to take quorum into consideration, that's something
people often forget.

The reason you're seeing the current behaviour is that gluster only uses the
initial IP address to get the volume details. After that it'll connect
directly to ONE of the servers, so with your 2 storage server case,
there's a 50% chance it won't go into a paused state.

For the VIP, you could consider CTDB or keepalived, or even just using
localhost (as your storage and compute are all on the same machine).
For CTDB, checkout
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
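
Another option, if a VIP feels like overkill, is to give the fuse mount one or
more fallback volfile servers (a sketch; the option is backup-volfile-servers
on newer gluster builds, backupvolfile-server on older ones, the addresses are
examples, and in oVirt the same string would go into the storage domain's
mount options field where available):

mount -t glusterfs -o backup-volfile-servers=10.10.10.2:10.10.10.3 10.10.10.1:/vol1 /mnt/vol1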

I have a BZ open regarding gluster VMs going into paused state and not
being resumable, so it's something you should also consider. In my case, a
switch dies, the gluster volume goes away, VMs go into a paused state but
can't be resumed. If you lose one server out of a cluster, it's a
different story though.
https://bugzilla.redhat.com/show_bug.cgi?id=1058300

HTH

On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal  wrote:
> Hi,
>
> Thanks...can you suggest me any good how to/article for the glusterfs with
> ovirt...
>
> One strange thing is if i will try both (compute & storage) on the same
> node...the below quote not happen
>
> -
>
> Right now, if 10.10.10.2 goes away, all your gluster mounts go away and your
> VMs get paused because the hypervisors can’t access the storage. Your
> gluster storage is still fine, but ovirt can’t talk to it because 10.10.10.2
> isn’t there.
> -
>
> Even the 10.10.10.2 goes down...i can still access the gluster mounts and no
> VM pausei can access the VM via ssh...no connection failure.the
> connection drop only in case of SPM goes down and the another node will
> elect as SPM(All the running VM's pause in this condition).
>
>
>
> On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic 
> wrote:
>>
>> You need to setup a virtual IP to use as the mount point, most people use
>> keepalived to provide a virtual ip via vrrp for this. Setup something like
>> 10.10.10.10 and use that for your mounts.
>>
>> Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
>> your VMs get paused because the hypervisors can’t access the storage. Your
>> gluster storage is still fine, but ovirt can’t talk to it because 10.10.10.2
>> isn’t there.
>>
>> If the SPM goes down, it the other hypervisor hosts will elect a new one
>> (under control of the ovirt engine).
>>
>> Same scenarios if storage & compute are on the same server, you still need
>> a vip address for the storage portion to serve as the mount point so it’s
>> not dependent on any one server.
>>
>> -Darrell
>>
>> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal  wrote:
>>
>> Hi,
>>
>> I have some HA related concern about glusterfs with Ovirt...let say i have
>> 4 storage node with gluster bricks as below :-
>>
>> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have distributed
>> replicated architecture...
>> 2. Now attached this gluster storge to ovrit-engine with the following
>> mount point 10.10.10.2/vol1
>> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to 10.10.10.7) SPM
>> is on 10.10.10.5...
>> 4. What happen if 10.10.10.2 will goes down.can hypervisior host can
>> still access the storage ??
>> 5. What happen if SPM goes down ???
>>
>> Note :- What happen for point 4 &5 ,If storage and Compute both working on
>> the same server.
>>
>> Thanks,
>> Punit
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Punit Dambiwal
Hi Andrew,

Thanks for the update... that means HA cannot work without a VIP in
gluster, so it's better to use glusterfs with a VIP to take over the
IP in case of any storage node failure...


On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau  wrote:

> Don't forget to take into consideration quroum, that's something
> people often forget
>
> The reason you're having the current happen, is gluster only uses the
> initial IP address to get the volume details. After that it'll connect
> directly to ONE of the servers, so with your 2 storage server case,
> 50% chance it won't go to paused state.
>
> For the VIP, you could consider CTDB or keepelived, or even just using
> localhost (as your storage and compute are all on the same machine).
> For CTDB, checkout
> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
>
> I have a BZ open regarding gluster VMs going into paused state and not
> being resumable, so it's something you should also consider. My case,
> switch dies, gluster volume goes away, VMs go into paused state but
> can't be resumed. If you lose one server out of a cluster is a
> different story though.
> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
>
> HTH
>
> On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal  wrote:
> > Hi,
> >
> > Thanks...can you suggest me any good how to/article for the glusterfs
> with
> > ovirt...
> >
> > One strange thing is if i will try both (compute & storage) on the same
> > node...the below quote not happen
> >
> > -
> >
> > Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
> your
> > VMs get paused because the hypervisors can’t access the storage. Your
> > gluster storage is still fine, but ovirt can’t talk to it because
> 10.10.10.2
> > isn’t there.
> > -
> >
> > Even the 10.10.10.2 goes down...i can still access the gluster mounts
> and no
> > VM pausei can access the VM via ssh...no connection failure.the
> > connection drop only in case of SPM goes down and the another node will
> > elect as SPM(All the running VM's pause in this condition).
> >
> >
> >
> > On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic  >
> > wrote:
> >>
> >> You need to setup a virtual IP to use as the mount point, most people
> use
> >> keepalived to provide a virtual ip via vrrp for this. Setup something
> like
> >> 10.10.10.10 and use that for your mounts.
> >>
> >> Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
> >> your VMs get paused because the hypervisors can’t access the storage.
> Your
> >> gluster storage is still fine, but ovirt can’t talk to it because
> 10.10.10.2
> >> isn’t there.
> >>
> >> If the SPM goes down, it the other hypervisor hosts will elect a new one
> >> (under control of the ovirt engine).
> >>
> >> Same scenarios if storage & compute are on the same server, you still
> need
> >> a vip address for the storage portion to serve as the mount point so
> it’s
> >> not dependent on any one server.
> >>
> >> -Darrell
> >>
> >> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal  wrote:
> >>
> >> Hi,
> >>
> >> I have some HA related concern about glusterfs with Ovirt...let say i
> have
> >> 4 storage node with gluster bricks as below :-
> >>
> >> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have distributed
> >> replicated architecture...
> >> 2. Now attached this gluster storge to ovrit-engine with the following
> >> mount point 10.10.10.2/vol1
> >> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to 10.10.10.7)
> SPM
> >> is on 10.10.10.5...
> >> 4. What happen if 10.10.10.2 will goes down.can hypervisior host can
> >> still access the storage ??
> >> 5. What happen if SPM goes down ???
> >>
> >> Note :- What happen for point 4 &5 ,If storage and Compute both working
> on
> >> the same server.
> >>
> >> Thanks,
> >> Punit
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >>
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Andrew Lau
Or just use localhost, as your compute and storage are on the same box.


On Fri, Jul 4, 2014 at 2:48 PM, Punit Dambiwal  wrote:
> Hi Andrew,
>
> Thanks for the updatethat means HA can not work without VIP in the
> gluster,so better to use the glusterfs with the VIP to take over the ip...in
> case of any storage node failure...
>
>
> On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau  wrote:
>>
>> Don't forget to take into consideration quroum, that's something
>> people often forget
>>
>> The reason you're having the current happen, is gluster only uses the
>> initial IP address to get the volume details. After that it'll connect
>> directly to ONE of the servers, so with your 2 storage server case,
>> 50% chance it won't go to paused state.
>>
>> For the VIP, you could consider CTDB or keepelived, or even just using
>> localhost (as your storage and compute are all on the same machine).
>> For CTDB, checkout
>> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
>>
>> I have a BZ open regarding gluster VMs going into paused state and not
>> being resumable, so it's something you should also consider. My case,
>> switch dies, gluster volume goes away, VMs go into paused state but
>> can't be resumed. If you lose one server out of a cluster is a
>> different story though.
>> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
>>
>> HTH
>>
>> On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal  wrote:
>> > Hi,
>> >
>> > Thanks...can you suggest me any good how to/article for the glusterfs
>> > with
>> > ovirt...
>> >
>> > One strange thing is if i will try both (compute & storage) on the same
>> > node...the below quote not happen
>> >
>> > -
>> >
>> > Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
>> > your
>> > VMs get paused because the hypervisors can’t access the storage. Your
>> > gluster storage is still fine, but ovirt can’t talk to it because
>> > 10.10.10.2
>> > isn’t there.
>> > -
>> >
>> > Even the 10.10.10.2 goes down...i can still access the gluster mounts
>> > and no
>> > VM pausei can access the VM via ssh...no connection failure.the
>> > connection drop only in case of SPM goes down and the another node will
>> > elect as SPM(All the running VM's pause in this condition).
>> >
>> >
>> >
>> > On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic
>> > 
>> > wrote:
>> >>
>> >> You need to setup a virtual IP to use as the mount point, most people
>> >> use
>> >> keepalived to provide a virtual ip via vrrp for this. Setup something
>> >> like
>> >> 10.10.10.10 and use that for your mounts.
>> >>
>> >> Right now, if 10.10.10.2 goes away, all your gluster mounts go away and
>> >> your VMs get paused because the hypervisors can’t access the storage.
>> >> Your
>> >> gluster storage is still fine, but ovirt can’t talk to it because
>> >> 10.10.10.2
>> >> isn’t there.
>> >>
>> >> If the SPM goes down, it the other hypervisor hosts will elect a new
>> >> one
>> >> (under control of the ovirt engine).
>> >>
>> >> Same scenarios if storage & compute are on the same server, you still
>> >> need
>> >> a vip address for the storage portion to serve as the mount point so
>> >> it’s
>> >> not dependent on any one server.
>> >>
>> >> -Darrell
>> >>
>> >> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal  wrote:
>> >>
>> >> Hi,
>> >>
>> >> I have some HA related concern about glusterfs with Ovirt...let say i
>> >> have
>> >> 4 storage node with gluster bricks as below :-
>> >>
>> >> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have distributed
>> >> replicated architecture...
>> >> 2. Now attached this gluster storge to ovrit-engine with the following
>> >> mount point 10.10.10.2/vol1
>> >> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to 10.10.10.7)
>> >> SPM
>> >> is on 10.10.10.5...
>> >> 4. What happen if 10.10.10.2 will goes down.can hypervisior host
>> >> can
>> >> still access the storage ??
>> >> 5. What happen if SPM goes down ???
>> >>
>> >> Note :- What happen for point 4 &5 ,If storage and Compute both working
>> >> on
>> >> the same server.
>> >>
>> >> Thanks,
>> >> Punit
>> >> ___
>> >> Users mailing list
>> >> Users@ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/users
>> >>
>> >>
>> >
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Punit Dambiwal
Hi Andrew,

Yes, both on the same node... but I have 4 nodes of this type in the same
cluster. So should it work or not?

1. 4 physical nodes with 12 bricks each (distributed replicated)...
2. The same 4 nodes are also used for compute...

Do I still require the VIP or not? Because I tested it: even when the mount
point node goes down, the VMs do not pause and are not affected...


On Fri, Jul 4, 2014 at 1:18 PM, Andrew Lau  wrote:

> Or just localhost as your computer and storage are on the same box.
>
>
> On Fri, Jul 4, 2014 at 2:48 PM, Punit Dambiwal  wrote:
> > Hi Andrew,
> >
> > Thanks for the updatethat means HA can not work without VIP in the
> > gluster,so better to use the glusterfs with the VIP to take over the
> ip...in
> > case of any storage node failure...
> >
> >
> > On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau 
> wrote:
> >>
> >> Don't forget to take into consideration quroum, that's something
> >> people often forget
> >>
> >> The reason you're having the current happen, is gluster only uses the
> >> initial IP address to get the volume details. After that it'll connect
> >> directly to ONE of the servers, so with your 2 storage server case,
> >> 50% chance it won't go to paused state.
> >>
> >> For the VIP, you could consider CTDB or keepelived, or even just using
> >> localhost (as your storage and compute are all on the same machine).
> >> For CTDB, checkout
> >> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
> >>
> >> I have a BZ open regarding gluster VMs going into paused state and not
> >> being resumable, so it's something you should also consider. My case,
> >> switch dies, gluster volume goes away, VMs go into paused state but
> >> can't be resumed. If you lose one server out of a cluster is a
> >> different story though.
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
> >>
> >> HTH
> >>
> >> On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal 
> wrote:
> >> > Hi,
> >> >
> >> > Thanks...can you suggest me any good how to/article for the glusterfs
> >> > with
> >> > ovirt...
> >> >
> >> > One strange thing is if i will try both (compute & storage) on the
> same
> >> > node...the below quote not happen
> >> >
> >> > -
> >> >
> >> > Right now, if 10.10.10.2 goes away, all your gluster mounts go away
> and
> >> > your
> >> > VMs get paused because the hypervisors can’t access the storage. Your
> >> > gluster storage is still fine, but ovirt can’t talk to it because
> >> > 10.10.10.2
> >> > isn’t there.
> >> > -
> >> >
> >> > Even the 10.10.10.2 goes down...i can still access the gluster mounts
> >> > and no
> >> > VM pausei can access the VM via ssh...no connection
> failure.the
> >> > connection drop only in case of SPM goes down and the another node
> will
> >> > elect as SPM(All the running VM's pause in this condition).
> >> >
> >> >
> >> >
> >> > On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic
> >> > 
> >> > wrote:
> >> >>
> >> >> You need to setup a virtual IP to use as the mount point, most people
> >> >> use
> >> >> keepalived to provide a virtual ip via vrrp for this. Setup something
> >> >> like
> >> >> 10.10.10.10 and use that for your mounts.
> >> >>
> >> >> Right now, if 10.10.10.2 goes away, all your gluster mounts go away
> and
> >> >> your VMs get paused because the hypervisors can’t access the storage.
> >> >> Your
> >> >> gluster storage is still fine, but ovirt can’t talk to it because
> >> >> 10.10.10.2
> >> >> isn’t there.
> >> >>
> >> >> If the SPM goes down, it the other hypervisor hosts will elect a new
> >> >> one
> >> >> (under control of the ovirt engine).
> >> >>
> >> >> Same scenarios if storage & compute are on the same server, you still
> >> >> need
> >> >> a vip address for the storage portion to serve as the mount point so
> >> >> it’s
> >> >> not dependent on any one server.
> >> >>
> >> >> -Darrell
> >> >>
> >> >> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal 
> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> I have some HA related concern about glusterfs with Ovirt...let say i
> >> >> have
> >> >> 4 storage node with gluster bricks as below :-
> >> >>
> >> >> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have distributed
> >> >> replicated architecture...
> >> >> 2. Now attached this gluster storge to ovrit-engine with the
> following
> >> >> mount point 10.10.10.2/vol1
> >> >> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to
> 10.10.10.7)
> >> >> SPM
> >> >> is on 10.10.10.5...
> >> >> 4. What happen if 10.10.10.2 will goes down.can hypervisior host
> >> >> can
> >> >> still access the storage ??
> >> >> 5. What happen if SPM goes down ???
> >> >>
> >> >> Note :- What happen for point 4 &5 ,If storage and Compute both
> working
> >> >> on
> >> >> the same server.
> >> >>
> >> >> Thanks,
> >> >> Punit
> >> >> ___
> >> >> Users mailing list
> >> >> Users@ovirt.org
> >> >> http://lists.ovirt.org/mailman/listinfo/user

Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-03 Thread Andrew Lau
As long as all your compute nodes are part of the gluster peer,
localhost will work.
Just remember, gluster will connect to any server, so even if you
mount as localhost:/ it could be accessing the storage from another
host in the gluster peer group.


On Fri, Jul 4, 2014 at 3:26 PM, Punit Dambiwal  wrote:
> Hi Andrew,
>
> Yes..both on the same node...but i have 4 nodes of this type in the same
> clusterSo it should work or not ??
>
> 1. 4 physical nodes with 12 bricks each(distributed replicated)...
> 2. The same all 4 nodes use for the compute purpose also...
>
> Do i still require the VIP...or not ?? because i tested even the mount point
> node goes down...the VM will not pause and not affect...
>
>
> On Fri, Jul 4, 2014 at 1:18 PM, Andrew Lau  wrote:
>>
>> Or just localhost as your computer and storage are on the same box.
>>
>>
>> On Fri, Jul 4, 2014 at 2:48 PM, Punit Dambiwal  wrote:
>> > Hi Andrew,
>> >
>> > Thanks for the updatethat means HA can not work without VIP in the
>> > gluster,so better to use the glusterfs with the VIP to take over the
>> > ip...in
>> > case of any storage node failure...
>> >
>> >
>> > On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau 
>> > wrote:
>> >>
>> >> Don't forget to take into consideration quroum, that's something
>> >> people often forget
>> >>
>> >> The reason you're having the current happen, is gluster only uses the
>> >> initial IP address to get the volume details. After that it'll connect
>> >> directly to ONE of the servers, so with your 2 storage server case,
>> >> 50% chance it won't go to paused state.
>> >>
>> >> For the VIP, you could consider CTDB or keepelived, or even just using
>> >> localhost (as your storage and compute are all on the same machine).
>> >> For CTDB, checkout
>> >> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
>> >>
>> >> I have a BZ open regarding gluster VMs going into paused state and not
>> >> being resumable, so it's something you should also consider. My case,
>> >> switch dies, gluster volume goes away, VMs go into paused state but
>> >> can't be resumed. If you lose one server out of a cluster is a
>> >> different story though.
>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
>> >>
>> >> HTH
>> >>
>> >> On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal 
>> >> wrote:
>> >> > Hi,
>> >> >
>> >> > Thanks...can you suggest me any good how to/article for the glusterfs
>> >> > with
>> >> > ovirt...
>> >> >
>> >> > One strange thing is if i will try both (compute & storage) on the
>> >> > same
>> >> > node...the below quote not happen
>> >> >
>> >> > -
>> >> >
>> >> > Right now, if 10.10.10.2 goes away, all your gluster mounts go away
>> >> > and
>> >> > your
>> >> > VMs get paused because the hypervisors can’t access the storage. Your
>> >> > gluster storage is still fine, but ovirt can’t talk to it because
>> >> > 10.10.10.2
>> >> > isn’t there.
>> >> > -
>> >> >
>> >> > Even the 10.10.10.2 goes down...i can still access the gluster mounts
>> >> > and no
>> >> > VM pausei can access the VM via ssh...no connection
>> >> > failure.the
>> >> > connection drop only in case of SPM goes down and the another node
>> >> > will
>> >> > elect as SPM(All the running VM's pause in this condition).
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic
>> >> > 
>> >> > wrote:
>> >> >>
>> >> >> You need to setup a virtual IP to use as the mount point, most
>> >> >> people
>> >> >> use
>> >> >> keepalived to provide a virtual ip via vrrp for this. Setup
>> >> >> something
>> >> >> like
>> >> >> 10.10.10.10 and use that for your mounts.
>> >> >>
>> >> >> Right now, if 10.10.10.2 goes away, all your gluster mounts go away
>> >> >> and
>> >> >> your VMs get paused because the hypervisors can’t access the
>> >> >> storage.
>> >> >> Your
>> >> >> gluster storage is still fine, but ovirt can’t talk to it because
>> >> >> 10.10.10.2
>> >> >> isn’t there.
>> >> >>
>> >> >> If the SPM goes down, it the other hypervisor hosts will elect a new
>> >> >> one
>> >> >> (under control of the ovirt engine).
>> >> >>
>> >> >> Same scenarios if storage & compute are on the same server, you
>> >> >> still
>> >> >> need
>> >> >> a vip address for the storage portion to serve as the mount point so
>> >> >> it’s
>> >> >> not dependent on any one server.
>> >> >>
>> >> >> -Darrell
>> >> >>
>> >> >> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal 
>> >> >> wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> I have some HA related concern about glusterfs with Ovirt...let say
>> >> >> i
>> >> >> have
>> >> >> 4 storage node with gluster bricks as below :-
>> >> >>
>> >> >> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each and i have
>> >> >> distributed
>> >> >> replicated architecture...
>> >> >> 2. Now attached this gluster storge to ovrit-engine with the
>> >> >> following
>> >> >> mount point 10.10.10.2/vol1
>> >> >> 3. In my cluster i have 3 hypervisior hosts (10.10.10.5 to
>

Re: [ovirt-users] Glusterfs HA with Ovirt

2014-07-04 Thread Brad House

I was never able to achieve a stable system that could survive the loss of a
single node with glusterfs.  I attempted to use replica 2 across 3 nodes (which
required 2 bricks per node as the number of bricks must be a multiple of the
replica, and you have to order them so the brick pairs span servers).  I enabled
server-side quorum, but found out later that client side quorum is based on
'sub volumes', which means that with a single node failure on replica 2, even
though there were 3 nodes, it would go into a readonly state.

After disabling client-side quorum (but keeping server-side quorum) I thought the
issue was fixed, but every once in a while, rebooting one of the nodes (after
ensuring gluster was healed) would lead to i/o errors on the VM guest and
essentially make it so it needed to be rebooted (which was successful, and
everything worked afterwards, even before bringing the downed node back up).
My nodes were all combined glusterfs and ovirt nodes. I tried using both
'localhost' on the nodes as well as using a keepalived VIP.

It's possible my issues were all due to client-side quorum not being enabled,
but that would require replica 3 to be able to survive a single node failure,
and I never pursued testing that theory.  Also, heal times seemed a bit long:
healing a single idle VM would consume 2 full cores of CPU for about 5 minutes
(granted, I was testing on a 1Gbps network, but that doesn't explain the CPU
usage).
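
For reference, the replica 3 layout with both quorum types enabled (the theory I
never tested) would look roughly like this; the volume name, hostnames and brick
paths here are made up:

gluster volume create vmstore replica 3 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
gluster volume set vmstore cluster.server-quorum-type server
gluster volume set vmstore cluster.quorum-type auto
gluster volume start vmstore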

-Brad

On 7/4/14 1:29 AM, Andrew Lau wrote:

As long as all your compute nodes are part of the gluster peer,
localhost will work.
Just remember, gluster will connect to any server, so even if you
mount as localhost:/ it could be accessing the storage from another
host in the gluster peer group.


On Fri, Jul 4, 2014 at 3:26 PM, Punit Dambiwal  wrote:

Hi Andrew,

Yes..both on the same node...but i have 4 nodes of this type in the same
clusterSo it should work or not ??

1. 4 physical nodes with 12 bricks each(distributed replicated)...
2. The same all 4 nodes use for the compute purpose also...

Do i still require the VIP...or not ?? because i tested even the mount point
node goes down...the VM will not pause and not affect...


On Fri, Jul 4, 2014 at 1:18 PM, Andrew Lau  wrote:


Or just localhost as your computer and storage are on the same box.


On Fri, Jul 4, 2014 at 2:48 PM, Punit Dambiwal  wrote:

Hi Andrew,

Thanks for the updatethat means HA can not work without VIP in the
gluster,so better to use the glusterfs with the VIP to take over the
ip...in
case of any storage node failure...


On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau 
wrote:


Don't forget to take into consideration quroum, that's something
people often forget

The reason you're having the current happen, is gluster only uses the
initial IP address to get the volume details. After that it'll connect
directly to ONE of the servers, so with your 2 storage server case,
50% chance it won't go to paused state.

For the VIP, you could consider CTDB or keepelived, or even just using
localhost (as your storage and compute are all on the same machine).
For CTDB, checkout
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/

I have a BZ open regarding gluster VMs going into paused state and not
being resumable, so it's something you should also consider. My case,
switch dies, gluster volume goes away, VMs go into paused state but
can't be resumed. If you lose one server out of a cluster is a
different story though.
https://bugzilla.redhat.com/show_bug.cgi?id=1058300

HTH

On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal 
wrote:

Hi,

Thanks...can you suggest me any good how to/article for the glusterfs
with
ovirt...

One strange thing is if i will try both (compute & storage) on the
same
node...the below quote not happen

-

Right now, if 10.10.10.2 goes away, all your gluster mounts go away
and
your
VMs get paused because the hypervisors can’t access the storage. Your
gluster storage is still fine, but ovirt can’t talk to it because
10.10.10.2
isn’t there.
-

Even the 10.10.10.2 goes down...i can still access the gluster mounts
and no
VM pausei can access the VM via ssh...no connection
failure.the
connection drop only in case of SPM goes down and the another node
will
elect as SPM(All the running VM's pause in this condition).



On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic

wrote:


You need to setup a virtual IP to use as the mount point, most
people
use
keepalived to provide a virtual ip via vrrp for this. Setup
something
like
10.10.10.10 and use that for your mounts.

Right now, if 10.10.10.2 goes away, all your gluster mounts go away
and
your VMs get paused because the hypervisors can’t access the
storage.
Your
gluster storage is still fine, but ovirt can’t talk to it because
10.10.10.2
isn’t there.

If the SPM goes down, it the other hypervisor hosts will elect a new
one
(under control of the ovirt engine).

Same scenar

Re: [ovirt-users] glusterfs domains and libgfapi?

2014-10-01 Thread Itamar Heim

On 10/01/2014 06:23 PM, Jason Brooks wrote:

Is glusterfs w/ libgfapi working for anyone?

Is this still a feature of oVirt?

Thanks, Jason
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



It was disabled due to gaps in libvirt which are now fixed.
A patch to enable it should be written for folks to test.
It should "work out of the box" with a patched vdsm for gluster storage 
domains.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-08 Thread Darshan Narayana Murthy
Hi,

   Can you please try enabling the EPEL 6 repo? It should have the needed
dependencies.

-Regards
Darshan N

- Original Message -
> From: "Stefano Stagnaro" 
> To: users@ovirt.org
> Sent: Wednesday, July 8, 2015 4:09:42 PM
> Subject: [ovirt-users] GlusterFS host installation failed
> 
> Hi,
> 
> host installation in a glusterfs cluster is failing due to dependecies
> errors:
> 
> Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
> (ovirt-3.5-glusterfs-epel)
>Requires: pyxattr
> Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
> (ovirt-3.5-glusterfs-epel)
>Requires: liburcu-cds.so.1()(64bit)
> Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
> (ovirt-3.5-glusterfs-epel)
>Requires: liburcu-bp.so.1()(64bit)
> 
> oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6
> 
> I've installed the following repo:
> http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
> 
> Thank you,
> --
> Stefano Stagnaro
> 
> Prisma Telecom Testing S.r.l.
> Via Petrocchi, 4
> 20127 Milano – Italy
> 
> Tel. 02 26113507 int 339
> e-mail: stefa...@prismatelecomtesting.com
> skype: stefano.stagnaro
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Stefano Stagnaro
Hi,

adding epel-6 gives another error:

Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
   Requires: vdsm = 4.16.20-0.el6
   Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
   vdsm = 4.16.20-0.el6
   Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
   vdsm = 4.16.20-1.git3a90f62.el6
   Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.7-1.gitdb83943.el6
   Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.10-0.el6
   Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.10-8.gitc937927.el6
   Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
   vdsm = 4.16.14-0.el6

ovirt-3.5-dependencies.repo already provides epel packages, but pyxattr and 
userspace-rcu are missing from the whitelist.

Appending pyxattr,userspace-rcu to the includepkgs line seems to have resolved it.
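
Concretely that meant editing the epel stanza of
/etc/yum.repos.d/ovirt-3.5-dependencies.repo along these lines (a sketch; the
stanza name is from memory and the existing whitelist is abbreviated, keep
whatever packages are already listed):

[ovirt-3.5-epel]
...
includepkgs = <existing whitelist> pyxattr userspace-rcu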

Can anyone fix it?

Thanks,
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro

On gio, 2015-07-09 at 02:45 -0400, Darshan Narayana Murthy wrote:
> Hi,
> 
>Can you please try enabling epel 6 repo. It should have the needed
> dependencies.
> 
> -Regards
> Darshan N
> 
> - Original Message -
> > From: "Stefano Stagnaro" 
> > To: users@ovirt.org
> > Sent: Wednesday, July 8, 2015 4:09:42 PM
> > Subject: [ovirt-users] GlusterFS host installation failed
> > 
> > Hi,
> > 
> > host installation in a glusterfs cluster is failing due to dependecies
> > errors:
> > 
> > Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
> > (ovirt-3.5-glusterfs-epel)
> >Requires: pyxattr
> > Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
> > (ovirt-3.5-glusterfs-epel)
> >Requires: liburcu-cds.so.1()(64bit)
> > Error: Package: glusterfs-server-3.7.2-3.el6.x86_64
> > (ovirt-3.5-glusterfs-epel)
> >Requires: liburcu-bp.so.1()(64bit)
> > 
> > oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6
> > 
> > I've installed the following repo:
> > http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
> > 
> > Thank you,
> > --
> > Stefano Stagnaro
> > 
> > Prisma Telecom Testing S.r.l.
> > Via Petrocchi, 4
> > 20127 Milano – Italy
> > 
> > Tel. 02 26113507 int 339
> > e-mail: stefa...@prismatelecomtesting.com
> > skype: stefano.stagnaro
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Shubhendu Tripathi

I installed liburcu (userspace-rcu) using the below command -

yum install 
ftp://ftp.muug.mb.ca/mirror/fedora/epel/7/x86_64/u/userspace-rcu-0.7.9-1.el7.x86_64.rpm


Regards,
Shubhendu

On 07/08/2015 04:09 PM, Stefano Stagnaro wrote:

Hi,

host installation in a glusterfs cluster is failing due to dependecies errors:

Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
Requires: pyxattr
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 (ovirt-3.5-glusterfs-epel)
Requires: liburcu-bp.so.1()(64bit)

oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6

I've installed the following repo: 
http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

Thank you,


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Shubhendu Tripathi

The libs are available for epel6 as well, as far as I remember.
Worth giving it a try.

Regards,
Shubhendu

On 07/09/2015 03:50 PM, Shubhendu Tripathi wrote:

I installed librcu using the below command -

yum install 
ftp://ftp.muug.mb.ca/mirror/fedora/epel/7/x86_64/u/userspace-rcu-0.7.9-1.el7.x86_64.rpm


Regards,
Shubhendu

On 07/08/2015 04:09 PM, Stefano Stagnaro wrote:

Hi,

host installation in a glusterfs cluster is failing due to 
dependecies errors:


Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 
(ovirt-3.5-glusterfs-epel)

Requires: pyxattr
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 
(ovirt-3.5-glusterfs-epel)

Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.2-3.el6.x86_64 
(ovirt-3.5-glusterfs-epel)

Requires: liburcu-bp.so.1()(64bit)

oVirt Engine Version: 3.5.3.1-1.el6 on CentOS 6.6

I've installed the following repo: 
http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm


Thank you,




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Sandro Bonazzola
Il 09/07/2015 11:48, Stefano Stagnaro ha scritto:
> Hi,
> 
> adding epel-6 gives another error:
> 
> Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
>Requires: vdsm = 4.16.20-0.el6
>Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
>vdsm = 4.16.20-0.el6
>Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
>vdsm = 4.16.20-1.git3a90f62.el6
>Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.7-1.gitdb83943.el6
>Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-0.el6
>Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-8.gitc937927.el6
>Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.14-0.el6
> 
> ovirt-3.5-dependencies.repo already provides epel packages but pyxattr and 
> userspace-rcu are missing in the whitelist.
> 
> Appending pyxattr,userspace-rcu in the includepkgs seems to have resolved it.

I'm on it.


> 
> Can anyone fix it?
> 
> Thanks,
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Sandro Bonazzola
On 09/07/2015 11:48, Stefano Stagnaro wrote:
> Hi,
> 
> adding epel-6 gives another error:
> 
> Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
>Requires: vdsm = 4.16.20-0.el6
>Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
>vdsm = 4.16.20-0.el6
>Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
>vdsm = 4.16.20-1.git3a90f62.el6
>Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.7-1.gitdb83943.el6
>Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-0.el6
>Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-8.gitc937927.el6
>Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.14-0.el6

Since when do we release vdsm in EPEL???


> 
> ovirt-3.5-dependencies.repo already provides epel packages but pyxattr and 
> userspace-rcu are missing in the whitelist.
> 
> Appending pyxattr,userspace-rcu in the includepkgs seems to have resolved it.
> 
> Can anyone fix it?
> 
> Thanks,
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Sandro Bonazzola
On 09/07/2015 11:48, Stefano Stagnaro wrote:
> Hi,
> 
> adding epel-6 gives another error:
> 
> Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
>Requires: vdsm = 4.16.20-0.el6
>Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
>vdsm = 4.16.20-0.el6
>Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
>vdsm = 4.16.20-1.git3a90f62.el6
>Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.7-1.gitdb83943.el6
>Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-0.el6
>Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-8.gitc937927.el6
>Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.14-0.el6
> 
> ovirt-3.5-dependencies.repo already provides epel packages but pyxattr and 
> userspace-rcu are missing in the whitelist.
> 
> Appending pyxattr,userspace-rcu in the includepkgs seems to have resolved it.
> 
> Can anyone fix it?

Fix: https://gerrit.ovirt.org/43397

test builds are available here: 
http://jenkins.ovirt.org/job/ovirt-release_master_gerrit/164/

Stefano, can you help verifying?
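
For anyone applying the workaround by hand in the meantime, the change boils down to extending the EPEL whitelist in /etc/yum.repos.d/ovirt-3.5-dependencies.repo (a sketch; the section name and the existing package list on your system may differ):

[ovirt-3.5-epel]
...
includepkgs=<existing list>,pyxattr,userspace-rcu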

> 
> Thanks,
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Sandro Bonazzola
On 09/07/2015 11:48, Stefano Stagnaro wrote:
> Hi,
> 
> adding epel-6 gives another error:
> 
> Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
>Requires: vdsm = 4.16.20-0.el6
>Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
>vdsm = 4.16.20-0.el6
>Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
>vdsm = 4.16.20-1.git3a90f62.el6
>Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.7-1.gitdb83943.el6
>Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-0.el6
>Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.10-8.gitc937927.el6
>Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
>vdsm = 4.16.14-0.el6


Looks like the EPEL build is missing vdsm-gluster: 
http://koji.fedoraproject.org/koji/buildinfo?buildID=640700

Please orphan vdsm in EPEL and retire the package.
We're packaging VDSM in CentOS Virt SIG and I'm not really sure vdsm packages 
match EPEL policy.
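
Until that happens, a local workaround is to keep yum from ever considering EPEL's vdsm build (a sketch; add the line to the epel stanza on the affected host):

[epel]
...
exclude=vdsm*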



> 
> ovirt-3.5-dependencies.repo already provides epel packages but pyxattr and 
> userspace-rcu are missing in the whitelist.
> 
> Appending pyxattr,userspace-rcu in the includepkgs seems to have resolved it.
> 
> Can anyone fix it?
> 
> Thanks,
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-09 Thread Stefano Stagnaro
The installation completed correctly with the ovirt-release35-005-1.noarch.rpm you 
provided.

Thanks.
-- 
Stefano Stagnaro

Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy

Tel. 02 26113507 int 339
e-mail: stefa...@prismatelecomtesting.com
skype: stefano.stagnaro


On Thu, 2015-07-09 at 17:18 +0200, Sandro Bonazzola wrote:
> On 09/07/2015 11:48, Stefano Stagnaro wrote:
> > Hi,
> > 
> > adding epel-6 gives another error:
> > 
> > Error: Package: vdsm-gluster-4.16.20-0.el6.noarch (@ovirt-3.5)
> >Requires: vdsm = 4.16.20-0.el6
> >Removing: vdsm-4.16.20-0.el6.x86_64 (@ovirt-3.5)
> >vdsm = 4.16.20-0.el6
> >Updated By: vdsm-4.16.20-1.git3a90f62.el6.x86_64 (epel)
> >vdsm = 4.16.20-1.git3a90f62.el6
> >Available: vdsm-4.16.7-1.gitdb83943.el6.x86_64 (ovirt-3.5)
> >vdsm = 4.16.7-1.gitdb83943.el6
> >Available: vdsm-4.16.10-0.el6.x86_64 (ovirt-3.5)
> >vdsm = 4.16.10-0.el6
> >Available: vdsm-4.16.10-8.gitc937927.el6.x86_64 (ovirt-3.5)
> >vdsm = 4.16.10-8.gitc937927.el6
> >Available: vdsm-4.16.14-0.el6.x86_64 (ovirt-3.5)
> >vdsm = 4.16.14-0.el6
> > 
> > ovirt-3.5-dependencies.repo already provides epel packages but pyxattr and 
> > userspace-rcu are missing in the whitelist.
> > 
> > Appending pyxattr,userspace-rcu in the includepkgs seems to have resolved 
> > it.
> > 
> > Can anyone fix it?
> 
> Fix: https://gerrit.ovirt.org/43397
> 
> test builds are available here: 
> http://jenkins.ovirt.org/job/ovirt-release_master_gerrit/164/
> 
> Stefano, can you help verifying?
> 
> > 
> > Thanks,
> > 
> 
> 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS host installation failed

2015-07-10 Thread Sandro Bonazzola
On 09/07/2015 18:21, Stefano Stagnaro wrote:
> The installation completed correctly with the ovirt-release35-005-1.noarch.rpm you 
> provided.

Thanks for the feedback, releasing it.

> 
> Thanks.
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs heal issues [Solved]

2017-01-16 Thread Gary Pedretty
I figured it out. I forgot that these gluster volumes, as recommended by the 
oVirt Glusterized setup, were created as replica 3 arbiter 1 volumes, so now I 
understand what that truly means: one brick only contains the directory and 
metadata, and so will always show lower actual disk use. Sorry for the 
confusion.
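
For reference, a replica 3 arbiter 1 volume is created roughly like this (a sketch; volume name, hosts and brick paths are placeholders), and the third (arbiter) brick stores only file names and metadata, which is why its disk usage is always lower:

gluster volume create data replica 3 arbiter 1 host1:/gluster/data/brick host2:/gluster/data/brick host3:/gluster/data/arbiter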


Gary



Gary Pedretty                                   g...@ravnalaska.net
Systems Manager                                 www.flyravn.com
Ravn Alaska                                     907-450-7251
5245 Airport Industrial Road                    907-450-7238 fax
Fairbanks, Alaska  99709

> On Jan 16, 2017, at 10:31 AM, Gary Pedretty  wrote:
> 
> This is a self hosted Glusterized setup, with 3 hosts.  I have had some 
> glusterfs data storage domains have some disk issues where healing was 
> required.  The self heal seemed to startup and the Ovirt Management portal 
> showed healing taking place in the Volumes/Brick tab.  Later it showed 
> everything ok.  This is a replica 3 volume.  I noticed however that the brick 
> tab was not showing even use of the 3 bricks and looking on the actual hosts 
> a df command also shows uneven use of the bricks.  However gluster volume 
> heal (vol) info shows zero entries for all bricks.  There are no errors 
> reported in the Data Center or Cluster, yet I see this uneven use of the 
> bricks across the 3 hosts.  
> 
> Doing a gluster volume status (vol) detail indeed shows different free disk 
> space across the different bricks.  However the Inode Count and Free Inodes 
> are identical across all bricks.  
> 
> I thought replica 3 was supposed to be mirrored across all nodes.  Any idea 
> why I am seeing the uneven use, or is this just something about glusterfs 
> that is different when it comes to free space vs Inode Count?
> 
> Gary
> 
> 
> 
> Gary Pedretty                                   g...@ravnalaska.net
> Systems Manager                                 www.flyravn.com
> Ravn Alaska                                     907-450-7251
> 5245 Airport Industrial Road                    907-450-7238 fax
> Fairbanks, Alaska  99709
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterFS is not independent?

2017-11-09 Thread Johan Bernhardsson
For it to work you need to have the bricks in replicate, one brick on
each server. 
If you only have two nodes, the quorum will be too low, so it will put the
gluster volume into failsafe mode until the other brick comes online. 
For it to work properly you need three nodes with one brick each, or two
nodes and a third node acting as an arbiter.
/Johan

On Thu, 2017-11-09 at 11:35 +0100, Jon bae wrote:
> Hello,
> I'm very new to oVirt and glusterFS, so maybe I got something
> wrong...
> 
> I have the oVirt engine installed on a separate server and I have
> also two physical nodes. On every node I configured glusterFS; each
> volume is in distribute mode and has only one brick, from that one
> node. I also added each volume to its own storage domain.
> 
> The idea was that both storage domains are independent from each
> other, so that I can turn off one node and only turn it on when I need
> it.
> 
> But now I have the problem that when I turn off one node, both storage
> domains go down, and the volume shows that the brick is not
> available.
> 
> Is there a way to fix this?
> 
> Regards
> Jonathan
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterFS is not independent?

2017-11-09 Thread Johan Bernhardsson
Gluster would overcomplicate this if you only want local storage
on both nodes to be reachable over the network.
The simplest way is to set up an NFS server on both nodes and create a
storage domain for each.
Gluster is a way to secure your data and replicate it over several
nodes, so that if one node goes down or explodes you always have the
data replicated to other nodes. Running a single-brick gluster volume
is not recommended. 
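
A minimal sketch of that per-node NFS approach (path, network and options are assumptions; oVirt expects the export to be owned by vdsm:kvm, i.e. 36:36):

mkdir -p /srv/ovirt-local
chown 36:36 /srv/ovirt-local
echo '/srv/ovirt-local 192.168.1.0/24(rw)' >> /etc/exports
exportfs -r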
/Johan

On Thu, 2017-11-09 at 11:57 +0100, Jon bae wrote:
> Thank you for your answer!
> 
> I don't understand why I have to put them in replicate mode. As I
> understand it, replicate means that the files get copied to both nodes,
> but I would like to have them independent, and I move the VM disks to
> the node where I want them.
> 
> Theoretically I only need a solution where I can use local storage from
> the nodes, but have it reachable over the network. 
> 
> Jonathan
> 
> 2017-11-09 11:39 GMT+01:00 Johan Bernhardsson :
> > For it to work you need to have the bricks in replicate, one brick
> > on each server. 
> > 
> > If you only have two nodes, the quorum will be too low, so it will put
> > the gluster volume into failsafe mode until the other brick comes online. 
> > 
> > For it to work properly you need three nodes with one brick each, or two
> > nodes and a third node acting as an arbiter.
> > 
> > /Johan
> > 
> > On Thu, 2017-11-09 at 11:35 +0100, Jon bae wrote:
> > > Hello,
> > > I'm very new to oVirt and glusterFS, so maybe I got something
> > > wrong...
> > > 
> > > I have the oVirt engine installed on a separate server and I have
> > > also two physical nodes. On every node I configure glusterFS, the
> > > volume is in distribution mode and have only one brick, from is
> > > one node. Both volumes I also add to its own storage domain.
> > > 
> > > The idea was, that both storage domains are independent from each
> > > other, that I can turn of one node and only turn it on, when I
> > > need it.
> > > 
> > > But now I have the problem, that when I turn of on node, both
> > > storage domains goes down. and the volume shows the the brick is
> > > not available.
> > > 
> > > Is there a way to fix this?
> > > 
> > > Regards
> > > Jonathan
> > > 
> > > 
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs backup-volfile-servers lost

2018-06-18 Thread g . vasilopoulos
I have a 4 node setup (CentOS 7) with hosted engine on glusterfs (replica 3 
arbiter 1). The gluster layout is like this: 
ohost01 104G brick (real data)
ohost02 104g brick (real data)
ohost04 104g brick (arbiter) 
ohost05 104g partition used as nfs-storage. 


The hosted engine is on gluster. I also have an FC domain of 3.6 TB. 
The gluster mount is configured like this: 

storage=172.16.224.10:/engine
mnt_options=backup-volfile-servers=172.16.224.11:172.16.224.13

172.16.224.10 is ohost01 storage network
172.16.224.12 is ohost02 storage network
172.16.224.13 is ohost04 storage network

Today I upgraded all nodes. I did it like this:
hosted-engine was running on ohost05 at the time.
Put ohost04 (arbiter) on maintenance and did the upgrade (ok).
Same with ohost02. 
ohost01 was SPM, so I made ohost05 SPM, then put ohost01 on maintenance and then 
upgraded it. I noticed that the engine VM paused during the process (which usually 
does not happen), as I have the backup-volfile-servers mount option. But today I 
noticed that this option is ignored. On the hosts I also noted that the mount is 
like this: 
172.16.224.10:/engine on /rhev/data-center/mnt/glusterSD/172.16.224.10:_engine 
type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
so redundancy is gone from gluster and I cannot figure out why. 
If I restart ohost01 (after maintenance) the hosted engine gets paused until ohost01 
comes back up.
How can I solve this issue?
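
A quick way to check what each host is actually using is the hosted-engine configuration (a sketch, assuming the default install paths); if mnt_options has lost its backup-volfile-servers entry, re-add it there and restart the HA services:

grep -E '^(storage|mnt_options)' /etc/ovirt-hosted-engine/hosted-engine.conf
systemctl restart ovirt-ha-broker ovirt-ha-agent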
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2M3WNSDFUJELH354AMZRJZ7VRJFD4GC7/


[ovirt-users] GlusterFS LibgfApiSupported and High Availability

2018-11-18 Thread Shawn Weeks
Currently when LibgfApiSupported is enabled it looks like the startup command 
for the VM has the Gluster hostname always set to the same host. How does that 
work if that host is down? In my case GlusterFS has 3x replication and 
distribution enabled but if the first host is down the VMs don't work. I'm on 
GlusterFS 4.1 and oVirt 4.2 latest.
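
For reference, the current value of that option can be read on the engine host with (a sketch):

engine-config -g LibgfApiSupported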

Thanks
Shawn
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TNGMFVWRKCAYKQ4PE7RDQUI6RLHXJFJH/


Re: [ovirt-users] glusterfs resume vm paused state

2014-06-02 Thread Humble Devassy Chirammal
Hi Andrew,

Afaict, there should be manual intervention to resume a 'paused vm' in any
storage domain even if VM is marked as "HA"..

Also, I failed to understand the setup you have, that said, you mentioned:

" resuming a VM from a paused state on top
of NFS share? Even when the VMs are marked as HA, if the gluster
storage goes down for a few seconds the VMs go to a paused state and
can never be resumed"

Do you have NFS storage domain configured by specifying "gluster server ip"
and "volume name " in place of "server" and "export" path ?

can you please detail the setup (wrt storage domain configuration and
gluster volumes) and version of ovirt and gluster in use ?

--Humble


On Mon, Jun 2, 2014 at 11:17 AM, Andrew Lau  wrote:

> Hi,
>
> Has anyone had any luck with resuming a VM from a paused state on top
> of NFS share? Even when the VMs are marked as HA, if the gluster
> storage goes down for a few seconds the VMs go to a paused state and
> can never be resumed. They require a hard reset.
>
> I recall when using NFS to not have this issue.
>
> Thanks,
> Andrew
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs resume vm paused state

2014-06-02 Thread Andrew Lau
Hi Humble,

On Mon, Jun 2, 2014 at 8:10 PM, Humble Devassy Chirammal
 wrote:
> Hi Andrew,
>
> Afaict, there should be manual intervention to resume a 'paused vm' in any
> storage domain even if VM is marked as "HA"..
>
I had a BZ open about this with some traction, but i forgot to keep up
with the requests and it's fallen behind
https://bugzilla.redhat.com/show_bug.cgi?id=1058300

Even manually, they won't resume. virsh resume host also has the same
end result.

> Also, I failed to understand the setup you have, that said, you mentioned:
>
>
> " resuming a VM from a paused state on top
> of NFS share? Even when the VMs are marked as HA, if the gluster
> storage goes down for a few seconds the VMs go to a paused state and
> can never be resumed"
>
> Do you have NFS storage domain configured by specifying "gluster server ip"
> and "volume name " in place of "server" and "export" path ?
>
> can you please detail the setup (wrt storage domain configuration and
> gluster volumes) and version of ovirt and gluster in use ?

We're testing a two host setup with oVirt and gluster on the same
boxes. CentOS 6.5, hosted-engine. Storage domain type as glusterfs,
although when I try a storage domain type of nfs (using the gluster
nfs server) the above issue doesn't seem to occur.

>
> --Humble
>
>
> On Mon, Jun 2, 2014 at 11:17 AM, Andrew Lau  wrote:
>>
>> Hi,
>>
>> Has anyone had any luck with resuming a VM from a paused state on top
>> of NFS share? Even when the VMs are marked as HA, if the gluster
>> storage goes down for a few seconds the VMs go to a paused state and
>> can never be resumed. They require a hard reset.
>>
>> I recall when using NFS to not have this issue.
>>
>> Thanks,
>> Andrew
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs resume vm paused state

2014-06-03 Thread Itamar Heim

On 06/02/2014 01:10 PM, Humble Devassy Chirammal wrote:

Hi Andrew,

Afaict, there should be manual intervention to resume a 'paused vm' in
any storage domain even if VM is marked as "HA"..


that has been fixed since 3.3 with auto-resume paused vm's after EIO



Also, I failed to understand the setup you have, that said, you mentioned:

" resuming a VM from a paused state on top
of NFS share? Even when the VMs are marked as HA, if the gluster
storage goes down for a few seconds the VMs go to a paused state and
can never be resumed"

Do you have NFS storage domain configured by specifying "gluster server
ip" and "volume name " in place of "server" and "export" path ?

can you please detail the setup (wrt storage domain configuration and
gluster volumes) and version of ovirt and gluster in use ?

--Humble


On Mon, Jun 2, 2014 at 11:17 AM, Andrew Lau mailto:and...@andrewklau.com>> wrote:

Hi,

Has anyone had any luck with resuming a VM from a paused state on top
of NFS share? Even when the VMs are marked as HA, if the gluster
storage goes down for a few seconds the VMs go to a paused state and
can never be resumed. They require a hard reset.

I recall when using NFS to not have this issue.

Thanks,
Andrew
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs resume vm paused state

2014-06-16 Thread Humble Devassy Chirammal
Hi,


>
that has been fixed since 3.3 with auto-resume paused vm's after EIO
>

Thanks Itamar .

>
I had a BZ open about this with some traction, but i forgot to keep up
with the requests and it's fallen behind
https://bugzilla.redhat.com/show_bug.cgi?id=1058300

Even manually, they won't resume. virsh resume host also has the same
end result.
>

@Andrew, the log files have to be analysed further. It's better to follow
up in the bugzilla.

--Humble



On Tue, Jun 3, 2014 at 5:02 PM, Itamar Heim  wrote:

> On 06/02/2014 01:10 PM, Humble Devassy Chirammal wrote:
>
>> Hi Andrew,
>>
>> Afaict, there should be manual intervention to resume a 'paused vm' in
>> any storage domain even if VM is marked as "HA"..
>>
>
> that has been fixed since 3.3 with auto-resume paused vm's after EIO
>
>
>> Also, I failed to understand the setup you have, that said, you mentioned:
>>
>> " resuming a VM from a paused state on top
>> of NFS share? Even when the VMs are marked as HA, if the gluster
>> storage goes down for a few seconds the VMs go to a paused state and
>> can never be resumed"
>>
>> Do you have NFS storage domain configured by specifying "gluster server
>> ip" and "volume name " in place of "server" and "export" path ?
>>
>> can you please detail the setup (wrt storage domain configuration and
>> gluster volumes) and version of ovirt and gluster in use ?
>>
>> --Humble
>>
>>
>> On Mon, Jun 2, 2014 at 11:17 AM, Andrew Lau > > wrote:
>>
>> Hi,
>>
>> Has anyone had any luck with resuming a VM from a paused state on top
>> of NFS share? Even when the VMs are marked as HA, if the gluster
>> storage goes down for a few seconds the VMs go to a paused state and
>> can never be resumed. They require a hard reset.
>>
>> I recall when using NFS to not have this issue.
>>
>> Thanks,
>> Andrew
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs resume vm paused state

2014-06-16 Thread Itamar Heim

On 06/16/2014 04:20 PM, Humble Devassy Chirammal wrote:

Hi,


 >
that has been fixed since 3.3 with auto-resume paused vm's after EIO
 >

Thanks Itamar .

 >
I had a BZ open about this with some traction, but i forgot to keep up
with the requests and it's fallen behind
https://bugzilla.redhat.com/show_bug.cgi?id=1058300


i assume this is around gluster deployment/split-brains/etc., since we 
try to resume and fail




Even manually, they won't resume. virsh resume host also has the same
end result.
 >

@Andrew , the log files have to be analysed further. Its better to
follow up in the bugzilla.

--Humble



On Tue, Jun 3, 2014 at 5:02 PM, Itamar Heim mailto:ih...@redhat.com>> wrote:

On 06/02/2014 01:10 PM, Humble Devassy Chirammal wrote:

Hi Andrew,

Afaict, there should be manual intervention to resume a 'paused
vm' in
any storage domain even if VM is marked as "HA"..


that has been fixed since 3.3 with auto-resume paused vm's after EIO


Also, I failed to understand the setup you have, that said, you
mentioned:

" resuming a VM from a paused state on top
of NFS share? Even when the VMs are marked as HA, if the gluster
storage goes down for a few seconds the VMs go to a paused state and
can never be resumed"

Do you have NFS storage domain configured by specifying "gluster
server
ip" and "volume name " in place of "server" and "export" path ?

can you please detail the setup (wrt storage domain
configuration and
gluster volumes) and version of ovirt and gluster in use ?

--Humble


On Mon, Jun 2, 2014 at 11:17 AM, Andrew Lau
mailto:and...@andrewklau.com>
>__>
wrote:

 Hi,

 Has anyone had any luck with resuming a VM from a paused
state on top
 of NFS share? Even when the VMs are marked as HA, if the
gluster
 storage goes down for a few seconds the VMs go to a paused
state and
 can never be resumed. They require a hard reset.

 I recall when using NFS to not have this issue.

 Thanks,
 Andrew
 _
 Users mailing list
Users@ovirt.org  >
http://lists.ovirt.org/__mailman/listinfo/users






_
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/__mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs resume vm paused state

2014-06-18 Thread Humble Devassy Chirammal
Hi Itamar,

>
i assume this is around gluster deployment/split-brains/etc., since we try
to resume and fail
>

We are progressing on the BZ.
Looks like the quorum configuration is playing a role here. I will follow up on
the same bug report.

--Humble



On Mon, Jun 16, 2014 at 7:10 PM, Itamar Heim  wrote:

> On 06/16/2014 04:20 PM, Humble Devassy Chirammal wrote:
>
>> Hi,
>>
>>
>>  >
>> that has been fixed since 3.3 with auto-resume paused vm's after EIO
>>  >
>>
>> Thanks Itamar .
>>
>>  >
>> I had a BZ open about this with some traction, but i forgot to keep up
>> with the requests and it's fallen behind
>> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
>>
>
> i assume this is around gluster deployment/split-brains/etc., since we try
> to resume and fail
>
>
>> Even manually, they won't resume. virsh resume host also has the same
>> end result.
>>  >
>>
>> @Andrew , the log files have to be analysed further. Its better to
>> follow up in the bugzilla.
>>
>> --Humble
>>
>>
>>
>> On Tue, Jun 3, 2014 at 5:02 PM, Itamar Heim > > wrote:
>>
>> On 06/02/2014 01:10 PM, Humble Devassy Chirammal wrote:
>>
>> Hi Andrew,
>>
>> Afaict, there should be manual intervention to resume a 'paused
>> vm' in
>> any storage domain even if VM is marked as "HA"..
>>
>>
>> that has been fixed since 3.3 with auto-resume paused vm's after EIO
>>
>>
>> Also, I failed to understand the setup you have, that said, you
>> mentioned:
>>
>> " resuming a VM from a paused state on top
>> of NFS share? Even when the VMs are marked as HA, if the gluster
>> storage goes down for a few seconds the VMs go to a paused state
>> and
>> can never be resumed"
>>
>> Do you have NFS storage domain configured by specifying "gluster
>> server
>> ip" and "volume name " in place of "server" and "export" path ?
>>
>> can you please detail the setup (wrt storage domain
>> configuration and
>> gluster volumes) and version of ovirt and gluster in use ?
>>
>> --Humble
>>
>>
>> On Mon, Jun 2, 2014 at 11:17 AM, Andrew Lau
>> mailto:and...@andrewklau.com>
>> >__>
>>
>> wrote:
>>
>>  Hi,
>>
>>  Has anyone had any luck with resuming a VM from a paused
>> state on top
>>  of NFS share? Even when the VMs are marked as HA, if the
>> gluster
>>  storage goes down for a few seconds the VMs go to a paused
>> state and
>>  can never be resumed. They require a hard reset.
>>
>>  I recall when using NFS to not have this issue.
>>
>>  Thanks,
>>  Andrew
>>  _
>>  Users mailing list
>> Users@ovirt.org  > >
>> http://lists.ovirt.org/__mailman/listinfo/users
>> 
>>
>>
>>
>>
>>
>> _
>>
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/__mailman/listinfo/users
>> 
>>
>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-08-14 Thread Punit Dambiwal
Hi,

I have a 4 node GlusterFS Distributed Replicate volume... the same 4 host nodes
I am using for compute purposes... now I want to make it HA... so if any
host goes down, the VM will not pause and it will migrate to another
available node...

1. Does anyone have any document or reference on how to do this with keepalived?
2. I have one more node as spare... so if any host goes down and cannot
come up again because of any HW failure, I can add it... but I didn't find
any way to add these bricks to the volume...??
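
For what it's worth, a minimal keepalived sketch for floating a single mount IP across the gluster servers looks roughly like this (interface, router id and VIP are placeholders; whether this is the right approach for a gluster storage domain is exactly what the replies below discuss):

vrrp_instance gluster_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100
    }
}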

Thanks,
Punit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs public key failure for rpm

2016-03-22 Thread Fabrice Bacchella
I tried to add a new host on a RHEL7, but it fails.

In the ovirt-host-deploy-20160322171347-XXX-6ba9d4a3.log file, I found:

warning: 
/var/cache/yum/x86_64/7/ovirt-3.6-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm:
 Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY
Retrieving key from 
https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
2016-03-22 17:13:41 ERROR otopi.plugins.otopi.packagers.yumpackager 
yumpackager.error:100 Yum GPG key retrieval failed: [Errno 14] HTTPS Error 404 
- Not Found
2016-03-22 17:13:41 DEBUG otopi.context context._executeMethod:156 method 
exception
Traceback (most recent call last):
  File "/tmp/ovirt-6ocubrsLfP/pythonlib/otopi/context.py", line 146, in 
_executeMethod
method['method']()
  File "/tmp/ovirt-6ocubrsLfP/otopi-plugins/otopi/packagers/yumpackager.py", 
line 274, in _packages
self._miniyum.processTransaction()
  File "/tmp/ovirt-6ocubrsLfP/pythonlib/otopi/miniyum.py", line 1054, in 
processTransaction
rpmDisplay=self._RPMCallback(sink=self._sink)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6500, in 
processTransaction
self._checkSignatures(pkgs,callback)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6543, in 
_checkSignatures
self.getKeyForPackage(po, self._askForGPGKeyImport)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6194, in 
getKeyForPackage
keys = self._retrievePublicKey(keyurl, repo)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6091, in 
_retrievePublicKey
exception2msg(e))
YumBaseError: GPG key retrieval failed: [Errno 14] HTTPS Error 404 - Not Found


In /etc/yum.repos.d/ovirt-3.6-dependencies.repo, I found :

[ovirt-3.6-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several 
petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key

This file is up to date I think :
$ rpm -qf /etc/yum.repos.d/ovirt-3.6-dependencies.repo
ovirt-release36-005-1.noarch

$ yum update
...
No packages marked for update


If I try to download it :
$ curl -ORLv https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key 

...
< HTTP/1.1 404 Not Found

I think the explanation are here :
https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_PUBLIC_KEY.README 


Any thing I can do ?

I don't even use glusterfs, I will be happy to disable it if I knew how.
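
If you just want the gluster repo out of the way on that host, one option is to disable it (a sketch, using the repo id shown above); note that host deploy may still pull glusterfs client packages from the base repos:

yum-config-manager --disable ovirt-3.6-glusterfs-epel

or simply set enabled=0 in the [ovirt-3.6-glusterfs-epel] section of /etc/yum.repos.d/ovirt-3.6-dependencies.repo.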





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS for production use with Ovirt

2015-03-23 Thread Punit Dambiwal
Hi,

I want to use GlusterFS with oVirt 3.5... please help me make the
architecture stable for production use:

I have 4 servers... every server can host 24 SSD disks (as bricks). I want to
deploy distributed replicated storage with replica = 2. I don't want to
use hardware RAID... as I think it will badly impact the performance...

1. GlusterFS 3.5 or 3.6? (which one will be stable for production use)
2. Do I use hardware RAID or not?
3. If HW RAID, then which RAID level, and does it impact the performance?
4. I want to make it rock solid... so it can be used for production purposes...
5. How much RAM should be sufficient on each server? On each server I
have two E5 CPUs...
6. For network connectivity I have 2*10G NICs with bonding on each server...

Thanks,
Punit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GlusterFS native client use with oVirt

2015-04-22 Thread Will Dennis
Hi all,

Can someone tell me if it's possible or not to utilize GlusterFS mounted as 
native (i.e. FUSE) for a storage domain with oVirt 3.5.x?  I have two nodes 
(with a third I'm thinking of using as well) that are running Gluster, and I've 
created the two volumes needed for hosted engine setup ("engine", "data") on 
them, and mounted them native (not via NFS.) Can this be used with oVirt 3.5.x?

Or is this (from what I now understand) a new feature coming in oVirt 3.6?
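
For what it's worth, a manual native (FUSE) mount of such a volume from a host looks like this (a sketch; hostnames and the mount point are placeholders):

mount -t glusterfs gluster1:/engine /mnt/test -o backup-volfile-servers=gluster2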

Thanks,
Will
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs Error message constantly being reported

2017-08-16 Thread Vadim
Hi, All

oVirt 4.1.4, fresh install.
I am constantly seeing these messages in the logs; how do I fix this?


VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
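
Those errors usually mean the host side is missing the gluster verbs in VDSM; a quick check and a possible fix (a sketch, assuming the standard package split):

rpm -q vdsm-gluster || yum install vdsm-gluster
systemctl restart vdsmd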

--
Thanks,
Vadim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-08-18 Thread Humble Devassy Chirammal
Hi Punit,


If you are using a gluster volume as a virt store, it is always recommended to
enable/consider the virt store use case mentioned here:
http://gluster.org/documentation/use_cases/Virt-store-usecase/

Regarding how to add bricks to the volume, you can refer to:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html
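
For the spare-node case specifically (replacing a brick that lived on a dead host), the CLI equivalent is roughly (a sketch; volume name, hosts and brick paths are placeholders):

gluster peer probe spare-node
gluster volume replace-brick myvol failed-node:/gluster/brick spare-node:/gluster/brick commit force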

The virt store use case enables quorum which ideally resolves the
possibilities of split brain.

Please feel free to revert if you come across any issues.

--Humble


On Fri, Aug 15, 2014 at 12:05 PM, Punit Dambiwal  wrote:

> Hi,
>
> I have 4 node GlusterFS Distributed Replicate volume...the same 4 host
> node i am using for compute purposenow i want to make it HAso if
> any host goes down .VM will not pause and it will migrate to another
> available node...
>
> 1. Can any one have any document or reference to do this with keepalived...
> 2. I have one more node as spare...so if any host goes down and can not
> come up again because of any HW failure...i can add it...but i didn't find
> any way to add these bricks to volume...??
>
> Thanks,
> Punit
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-08-18 Thread Punit Dambiwal
Hi Humble,

Thanks for the updates... but I want a way to do it from the oVirt portal, if
possible??

Also, in my case, if the mount point HV node goes down... all the VMs will
pause... and once the node comes up I need to manually power off the VMs and
manually start them again...

Someone from the community suggested I use localhost as the mount point if
you are using compute & storage on the same node... but I have doubts about the
failover... that means if any node goes down... how will the VM migration to
another node happen??

Thanks,
Punit


On Mon, Aug 18, 2014 at 9:37 PM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> Hi Punit,
>
>
> If you are using gluster volume for virt store, it is always recommend to
> enable /considering virt store use case mentioned here :
> http://gluster.org/documentation/use_cases/Virt-store-usecase/
>
> Regarding how to add bricks to the volume , you can refer #
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html
>
> The virt store use case enables quorum which ideally resolves the
> possibilities of split brain.
>
> Please feel free to revert if you come across any issues.
>
> --Humble
>
>
> On Fri, Aug 15, 2014 at 12:05 PM, Punit Dambiwal 
> wrote:
>
>> Hi,
>>
>> I have 4 node GlusterFS Distributed Replicate volume...the same 4 host
>> node i am using for compute purposenow i want to make it HAso if
>> any host goes down .VM will not pause and it will migrate to another
>> available node...
>>
>> 1. Can any one have any document or reference to do this with
>> keepalived...
>> 2. I have one more node as spare...so if any host goes down and can not
>> come up again because of any HW failure...i can add it...but i didn't find
>> any way to add these bricks to volume...??
>>
>> Thanks,
>> Punit
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-08-19 Thread Humble Devassy Chirammal
Hi,

>>
but i want the way from the ovirt portal...if possible ??
>>


Yes, its possible.

>>
Also in my case if the mount point HV node goes down...all the VM's will
pause...and once the node come up..i need to manually poweroff the VM's and
manually start again...
>>

I failed to interpret  'mount point' here. If you are talking about
'gluster' volume server , the VM behaviour depends on the gluster
configuration which indirectly depend on the quorum configuration and
volume configuration. If there is an error from underlying storage ( here
gluster volume) VM can pause , however latest versions of ovirt support
auto resume.

>>
Someone from community suggest me to use the localhost as mount point if
you are using compute & storage on the same node...but i doubt about the
failover...that means if any node goes downhow the VM migration on
another node will happen ??
>>

If you are using local data center the VM migration can't happen. You have
to configure a storage domain which can be accessed from your Ovirt
Cluster  Hosts .

Please have a look at http://www.ovirt.org/Gluster_Storage_Domain_Reference
which talks about the recommended way of configuring it.

--Humble


On Tue, Aug 19, 2014 at 7:26 AM, Punit Dambiwal  wrote:

> Hi Humble,
>
> Thanks for the updatesbut i want the way from the ovirt portal...if
> possible ??
>
> Also in my case if the mount point HV node goes down...all the VM's will
> pause...and once the node come up..i need to manually poweroff the VM's and
> manually start again...
>
> Someone from community suggest me to use the localhost as mount point if
> you are using compute & storage on the same node...but i doubt about the
> failover...that means if any node goes downhow the VM migration on
> another node will happen ??
>
> Thanks,
> Punit
>
>
> On Mon, Aug 18, 2014 at 9:37 PM, Humble Devassy Chirammal <
> humble.deva...@gmail.com> wrote:
>
>> Hi Punit,
>>
>>
>> If you are using gluster volume for virt store, it is always recommend to
>> enable /considering virt store use case mentioned here :
>> http://gluster.org/documentation/use_cases/Virt-store-usecase/
>>
>> Regarding how to add bricks to the volume , you can refer #
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html
>>
>> The virt store use case enables quorum which ideally resolves the
>> possibilities of split brain.
>>
>> Please feel free to revert if you come across any issues.
>>
>> --Humble
>>
>>
>> On Fri, Aug 15, 2014 at 12:05 PM, Punit Dambiwal 
>> wrote:
>>
>>> Hi,
>>>
>>> I have 4 node GlusterFS Distributed Replicate volume...the same 4 host
>>> node i am using for compute purposenow i want to make it HAso if
>>> any host goes down .VM will not pause and it will migrate to another
>>> available node...
>>>
>>> 1. Can any one have any document or reference to do this with
>>> keepalived...
>>> 2. I have one more node as spare...so if any host goes down and can not
>>> come up again because of any HW failure...i can add it...but i didn't find
>>> any way to add these bricks to volume...??
>>>
>>> Thanks,
>>> Punit
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-08-19 Thread Punit Dambiwal
Hi Humble,

Thanks for the update... the mount point is the IP address used to mount the
gluster volume, so if this server goes down... all the VMs will pause :-

[image: Inline image 1]

Someone from the community suggested I use localhost instead of the IP
address for HA... but I have a doubt: if I use localhost... then whenever any
node goes down, the VMs of the failed node will not migrate to another
node... ??

http://lists.ovirt.org/pipermail/users/2014-July/025728.html

Thanks,
Punit Dambiwal




On Tue, Aug 19, 2014 at 8:23 PM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> Hi,
>
>
> >>
> but i want the way from the ovirt portal...if possible ??
> >>
>
>
> Yes, its possible.
>
> >>
> Also in my case if the mount point HV node goes down...all the VM's will
> pause...and once the node come up..i need to manually poweroff the VM's and
> manually start again...
> >>
>
> I failed to interpret  'mount point' here. If you are talking about
> 'gluster' volume server , the VM behaviour depends on the gluster
> configuration which indirectly depend on the quorum configuration and
> volume configuration. If there is an error from underlying storage ( here
> gluster volume) VM can pause , however latest versions of ovirt support
> auto resume.
>
> >>
> Someone from community suggest me to use the localhost as mount point if
> you are using compute & storage on the same node...but i doubt about the
> failover...that means if any node goes downhow the VM migration on
> another node will happen ??
> >>
>
> If you are using local data center the VM migration can't happen. You have
> to configure a storage domain which can be accessed from your Ovirt
> Cluster  Hosts .
>
> Please have a look at
> http://www.ovirt.org/Gluster_Storage_Domain_Reference which talks about
> the recommended way of configuring it.
>
> --Humble
>
>
> On Tue, Aug 19, 2014 at 7:26 AM, Punit Dambiwal  wrote:
>
>> Hi Humble,
>>
>> Thanks for the updatesbut i want the way from the ovirt portal...if
>> possible ??
>>
>> Also in my case if the mount point HV node goes down...all the VM's will
>> pause...and once the node come up..i need to manually poweroff the VM's and
>> manually start again...
>>
>> Someone from community suggest me to use the localhost as mount point if
>> you are using compute & storage on the same node...but i doubt about the
>> failover...that means if any node goes downhow the VM migration on
>> another node will happen ??
>>
>> Thanks,
>> Punit
>>
>>
>> On Mon, Aug 18, 2014 at 9:37 PM, Humble Devassy Chirammal <
>> humble.deva...@gmail.com> wrote:
>>
>>> Hi Punit,
>>>
>>>
>>> If you are using gluster volume for virt store, it is always recommend
>>> to enable /considering virt store use case mentioned here :
>>> http://gluster.org/documentation/use_cases/Virt-store-usecase/
>>>
>>> Regarding how to add bricks to the volume , you can refer #
>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html
>>>
>>> The virt store use case enables quorum which ideally resolves the
>>> possibilities of split brain.
>>>
>>> Please feel free to revert if you come across any issues.
>>>
>>> --Humble
>>>
>>>
>>> On Fri, Aug 15, 2014 at 12:05 PM, Punit Dambiwal 
>>> wrote:
>>>
 Hi,

 I have 4 node GlusterFS Distributed Replicate volume...the same 4 host
 node i am using for compute purposenow i want to make it HAso if
 any host goes down .VM will not pause and it will migrate to another
 available node...

 1. Can any one have any document or reference to do this with
 keepalived...
 2. I have one more node as spare...so if any host goes down and can not
 come up again because of any HW failure...i can add it...but i didn't find
 any way to add these bricks to volume...??

 Thanks,
 Punit

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-08-22 Thread Humble Devassy Chirammal
The 'referred' mount point is the 'gluster server IP', and the VM behaviour
depends on the Gluster volume availability, as discussed before.

Regarding localhost instead of the server IP, I haven't tried this, but logically it
should work. You may test it. Also, there is an option called
'backup-volfile-servers' [1], which should serve (if I am not wrong in
interpreting the mentioned mail thread) the same purpose.
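
For reference, in the oVirt storage domain dialog this typically goes into the Mount Options field, e.g. (a sketch; hostnames are placeholders):

backup-volfile-servers=gluster2:gluster3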

--Humble


On Wed, Aug 20, 2014 at 9:23 AM, Punit Dambiwal  wrote:

> Hi Humble,
>
> Thanks for the update...the mount point is the ip address to mount the
> gluster volume,so if this server goes down...all the VM's will pause :-
>
> [image: Inline image 1]
>
> Some one from community suggest me to use the localhost instead of the ip
> address for HAbut i have doubt if i use localhost...then whenever any
> node goes down the VM's of the failed node will not migrate to another
> node... ??
>
> http://lists.ovirt.org/pipermail/users/2014-July/025728.html
>
> Thanks,
> Punit Dambiwal
>
>
>
>
> On Tue, Aug 19, 2014 at 8:23 PM, Humble Devassy Chirammal <
> humble.deva...@gmail.com> wrote:
>
>> Hi,
>>
>>
>> >>
>> but i want the way from the ovirt portal...if possible ??
>> >>
>>
>>
>> Yes, its possible.
>>
>> >>
>> Also in my case if the mount point HV node goes down...all the VM's will
>> pause...and once the node come up..i need to manually poweroff the VM's and
>> manually start again...
>> >>
>>
>> I failed to interpret  'mount point' here. If you are talking about
>> 'gluster' volume server , the VM behaviour depends on the gluster
>> configuration which indirectly depend on the quorum configuration and
>> volume configuration. If there is an error from underlying storage ( here
>> gluster volume) VM can pause , however latest versions of ovirt support
>> auto resume.
>>
>> >>
>> Someone from community suggest me to use the localhost as mount point if
>> you are using compute & storage on the same node...but i doubt about the
>> failover...that means if any node goes downhow the VM migration on
>> another node will happen ??
>> >>
>>
>> If you are using local data center the VM migration can't happen. You
>> have to configure a storage domain which can be accessed from your Ovirt
>> Cluster  Hosts .
>>
>> Please have a look at
>> http://www.ovirt.org/Gluster_Storage_Domain_Reference which talks about
>> the recommended way of configuring it.
>>
>> --Humble
>>
>>
>> On Tue, Aug 19, 2014 at 7:26 AM, Punit Dambiwal 
>> wrote:
>>
>>> Hi Humble,
>>>
>>> Thanks for the updatesbut i want the way from the ovirt portal...if
>>> possible ??
>>>
>>> Also in my case if the mount point HV node goes down...all the VM's will
>>> pause...and once the node come up..i need to manually poweroff the VM's and
>>> manually start again...
>>>
>>> Someone from community suggest me to use the localhost as mount point if
>>> you are using compute & storage on the same node...but i doubt about the
>>> failover...that means if any node goes downhow the VM migration on
>>> another node will happen ??
>>>
>>> Thanks,
>>> Punit
>>>
>>>
>>> On Mon, Aug 18, 2014 at 9:37 PM, Humble Devassy Chirammal <
>>> humble.deva...@gmail.com> wrote:
>>>
 Hi Punit,


 If you are using gluster volume for virt store, it is always recommend
 to enable /considering virt store use case mentioned here :
 http://gluster.org/documentation/use_cases/Virt-store-usecase/

 Regarding how to add bricks to the volume , you can refer #
 https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html

 The virt store use case enables quorum which ideally resolves the
 possibilities of split brain.

 Please feel free to revert if you come across any issues.

 --Humble


 On Fri, Aug 15, 2014 at 12:05 PM, Punit Dambiwal 
 wrote:

> Hi,
>
> I have 4 node GlusterFS Distributed Replicate volume...the same 4 host
> node i am using for compute purposenow i want to make it HAso if
> any host goes down .VM will not pause and it will migrate to another
> available node...
>
> 1. Can any one have any document or reference to do this with
> keepalived...
> 2. I have one more node as spare...so if any host goes down and can
> not come up again because of any HW failure...i can add it...but i didn't
> find any way to add these bricks to volume...??
>
> Thanks,
> Punit
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS Distributed Replicate HA with KeepAlived

2014-11-06 Thread Punit Dambiwal
Hi Humble,

It seems this option 'backup-volfile-servers' works at the beginning of
the setup... can this option automatically try to mount another server
if the first one fails, for example three months after the setup??

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-Administration_Guide-GlusterFS_Client-GlusterFS_Client-Mounting_Volumes.html

On Fri, Aug 22, 2014 at 4:55 PM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:

> The 'referred' mount point is the 'gluster server IP' and the VM behaviour
> depends on the Gluster Volume availability as discussed before.
>
> Reg#localhost instead of server IP, I havent tried this, but logically it
> should work. You may test it. Also there is an option called
> 'backup-vol-file-servers' [1] , which should serve(if I am not wrong in
> interpreting the mentioned mail thread )the same purpose.
>
> --Humble
>
>
> On Wed, Aug 20, 2014 at 9:23 AM, Punit Dambiwal  wrote:
>
>> Hi Humble,
>>
>> Thanks for the update...the mount point is the ip address to mount the
>> gluster volume,so if this server goes down...all the VM's will pause :-
>>
>> [image: Inline image 1]
>>
>> Some one from community suggest me to use the localhost instead of the ip
>> address for HAbut i have doubt if i use localhost...then whenever any
>> node goes down the VM's of the failed node will not migrate to another
>> node... ??
>>
>> http://lists.ovirt.org/pipermail/users/2014-July/025728.html
>>
>> Thanks,
>> Punit Dambiwal
>>
>>
>>
>>
>> On Tue, Aug 19, 2014 at 8:23 PM, Humble Devassy Chirammal <
>> humble.deva...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>
>>> >>
>>> but i want the way from the ovirt portal...if possible ??
>>> >>
>>>
>>>
>>> Yes, its possible.
>>>
>>> >>
>>> Also in my case if the mount point HV node goes down...all the VM's will
>>> pause...and once the node come up..i need to manually poweroff the VM's and
>>> manually start again...
>>> >>
>>>
>>> I failed to interpret  'mount point' here. If you are talking about
>>> 'gluster' volume server , the VM behaviour depends on the gluster
>>> configuration which indirectly depend on the quorum configuration and
>>> volume configuration. If there is an error from underlying storage ( here
>>> gluster volume) VM can pause , however latest versions of ovirt support
>>> auto resume.
>>>
>>> >>
>>> Someone from community suggest me to use the localhost as mount point if
>>> you are using compute & storage on the same node...but i doubt about the
>>> failover...that means if any node goes downhow the VM migration on
>>> another node will happen ??
>>> >>
>>>
>>> If you are using local data center the VM migration can't happen. You
>>> have to configure a storage domain which can be accessed from your Ovirt
>>> Cluster  Hosts .
>>>
>>> Please have a look at
>>> http://www.ovirt.org/Gluster_Storage_Domain_Reference which talks about
>>> the recommended way of configuring it.
>>>
>>> --Humble
>>>
>>>
>>> On Tue, Aug 19, 2014 at 7:26 AM, Punit Dambiwal 
>>> wrote:
>>>
 Hi Humble,

 Thanks for the updatesbut i want the way from the ovirt portal...if
 possible ??

 Also in my case if the mount point HV node goes down...all the VM's
 will pause...and once the node come up..i need to manually poweroff the
 VM's and manually start again...

 Someone from community suggest me to use the localhost as mount point
 if you are using compute & storage on the same node...but i doubt about the
 failover...that means if any node goes downhow the VM migration on
 another node will happen ??

 Thanks,
 Punit


 On Mon, Aug 18, 2014 at 9:37 PM, Humble Devassy Chirammal <
 humble.deva...@gmail.com> wrote:

> Hi Punit,
>
>
> If you are using gluster volume for virt store, it is always recommend
> to enable /considering virt store use case mentioned here :
> http://gluster.org/documentation/use_cases/Virt-store-usecase/
>
> Regarding how to add bricks to the volume , you can refer #
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/sect-User_Guide-Managing_Volumes-Expanding.html
>
> The virt store use case enables quorum which ideally resolves the
> possibilities of split brain.
>
> Please feel free to revert if you come across any issues.
>
> --Humble
>
>
> On Fri, Aug 15, 2014 at 12:05 PM, Punit Dambiwal 
> wrote:
>
>> Hi,
>>
>> I have 4 node GlusterFS Distributed Replicate volume...the same 4
>> host node i am using for compute purposenow i want to make it 
>> HAso
>> if any host goes down .VM will not pause and it will migrate to 
>> another
>> available node...
>>
>> 1. Can any one have any document or reference to do this with
>> keepalived...
>> 2. I have one more node as spare...so if any host goes down and can

Re: [ovirt-users] glusterfs public key failure for rpm

2016-03-22 Thread Taste-Of-IT

Hello Fabrice,

I saw the same for a fresh installation of CentOS 7 (1511) and the current 
oVirt version. I wrote up a solution here: 
http://taste-of-it.de/ovirt-centos-installation-glusterfs-gpg-key-not-found/

I think it's a bug.


Am 2016-03-22 17:28, schrieb Fabrice Bacchella:

I tried to add a new host on a RHEL7, but it fails.

In the ovirt-host-deploy-20160322171347-XXX-6ba9d4a3.log file, I
found:

warning:
/var/cache/yum/x86_64/7/ovirt-3.6-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm:
Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY
Retrieving key from
https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key [1]
2016-03-22 17:13:41 ERROR otopi.plugins.otopi.packagers.yumpackager
yumpackager.error:100 Yum GPG key retrieval failed: [Errno 14] HTTPS
Error 404 - Not Found
2016-03-22 17:13:41 DEBUG otopi.context context._executeMethod:156
method exception
Traceback (most recent call last):
 File "/tmp/ovirt-6ocubrsLfP/pythonlib/otopi/context.py", line 146, in
_executeMethod
 method['method']()
 File
"/tmp/ovirt-6ocubrsLfP/otopi-plugins/otopi/packagers/yumpackager.py",
line 274, in _packages
 self._miniyum.processTransaction()
 File "/tmp/ovirt-6ocubrsLfP/pythonlib/otopi/miniyum.py", line 1054,
in processTransaction
 rpmDisplay=self._RPMCallback(sink=self._sink)
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6500,
in processTransaction
 self._checkSignatures(pkgs,callback)
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6543,
in _checkSignatures
 self.getKeyForPackage(po, self._askForGPGKeyImport)
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6194,
in getKeyForPackage
 keys = self._retrievePublicKey(keyurl, repo)
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 6091,
in _retrievePublicKey
 exception2msg(e))
YumBaseError: GPG key retrieval failed: [Errno 14] HTTPS Error 404 -
Not Found

In /etc/yum.repos.d/ovirt-3.6-dependencies.repo, I found :

[ovirt-3.6-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to
several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
[2]
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
[1]

This file is up to date I think :

$ rpm -qf /etc/yum.repos.d/ovirt-3.6-dependencies.repo
ovirt-release36-005-1.noarch

$ yum update
...

No packages marked for update

If I try to download it :

$ curl -ORLv
https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key [1]
...

< HTTP/1.1 404 Not Found

I think the explanation are here :
https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_PUBLIC_KEY.README
[3]

Any thing I can do ?

I don't even use glusterfs, I will be happy to disable it if I knew
how.



Links:
--
[1] https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
[2]
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
[3]
https://download.gluster.org/pub/gluster/glusterfs/LATEST/NEW_PUBLIC_KEY.README
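
Regarding disabling the glusterfs dependency repository when it is not needed,
a hedged sketch (assuming yum-utils is installed; the repo id is the one shown
above):

# disable the repo on this host
yum-config-manager --disable ovirt-3.6-glusterfs-epel
# or edit /etc/yum.repos.d/ovirt-3.6-dependencies.repo and set enabled=0
# in the [ovirt-3.6-glusterfs-epel] section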

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs public key failure for rpm

2016-03-22 Thread Robert Story
On Tue, 22 Mar 2016 17:28:20 +0100 Fabrice wrote:
FB> I tried to add a new host on a RHEL7, but it fails.
FB> 
FB> In the ovirt-host-deploy-20160322171347-XXX-6ba9d4a3.log file, I found:
FB> 
FB> warning: 
/var/cache/yum/x86_64/7/ovirt-3.6-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm:
FB> Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY Retrieving key
FB> from https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key

Try this patch, it worked for me:

diff --git a/yum.repos.d/ovirt-3.5-dependencies.repo 
b/yum.repos.d/ovirt-3.5-dependencies.repo
index c1914bb..3ef8a28 100644
--- a/yum.repos.d/ovirt-3.5-dependencies.repo
+++ b/yum.repos.d/ovirt-3.5-dependencies.repo
@@ -14,7 +14,7 @@ 
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-
 enabled=1
 skip_if_unavailable=1
 gpgcheck=1
-gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
+gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub

 [ovirt-3.5-glusterfs-noarch-epel]
 name=GlusterFS is a clustered file-system capable of scaling to several 
petabytes.
@@ -22,7 +22,7 @@ 
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-
 enabled=1
 skip_if_unavailable=1
 gpgcheck=1
-gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
+gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub

 [ovirt-3.5-patternfly1-noarch-epel]
 name=Copr repo for patternfly1 owned by patternfly


Robert

-- 
Senior Software Engineer @ Parsons


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs public key failure for rpm

2016-03-22 Thread Fabrice Bacchella
The command
rpm --import https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
worked too.
> 

> Le 22 mars 2016 à 17:46, Robert Story  a écrit :
> 
> On Tue, 22 Mar 2016 17:28:20 +0100 Fabrice wrote:
> FB> I tried to add a new host on a RHEL7, but it fails.
> FB> 
> FB> In the ovirt-host-deploy-20160322171347-XXX-6ba9d4a3.log file, I found:
> FB> 
> FB> warning: 
> /var/cache/yum/x86_64/7/ovirt-3.6-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm:
> FB> Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY Retrieving key
> FB> from https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
> 
> Try this patch, it worked for me:
> 
> diff --git a/yum.repos.d/ovirt-3.5-dependencies.repo 
> b/yum.repos.d/ovirt-3.5-dependencies.repo
> index c1914bb..3ef8a28 100644
> --- a/yum.repos.d/ovirt-3.5-dependencies.repo
> +++ b/yum.repos.d/ovirt-3.5-dependencies.repo
> @@ -14,7 +14,7 @@ 
> baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-
> enabled=1
> skip_if_unavailable=1
> gpgcheck=1
> -gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
> +gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
> 
> [ovirt-3.5-glusterfs-noarch-epel]
> name=GlusterFS is a clustered file-system capable of scaling to several 
> petabytes.
> @@ -22,7 +22,7 @@ 
> baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-
> enabled=1
> skip_if_unavailable=1
> gpgcheck=1
> -gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
> +gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
> 
> [ovirt-3.5-patternfly1-noarch-epel]
> name=Copr repo for patternfly1 owned by patternfly
> 
> 
> Robert
> 
> -- 
> Senior Software Engineer @ Parsons

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs public key failure for rpm

2016-03-23 Thread Sandro Bonazzola
ovirt-release36 RPMs have been updated with the new glusterfs key url.
Thanks,

On Tue, Mar 22, 2016 at 5:47 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> The command
> rpm --import
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
> worked too.
> >
>
> > Le 22 mars 2016 à 17:46, Robert Story  a écrit :
> >
> > On Tue, 22 Mar 2016 17:28:20 +0100 Fabrice wrote:
> > FB> I tried to add a new host on a RHEL7, but it fails.
> > FB>
> > FB> In the ovirt-host-deploy-20160322171347-XXX-6ba9d4a3.log file, I
> found:
> > FB>
> > FB> warning:
> /var/cache/yum/x86_64/7/ovirt-3.6-glusterfs-epel/packages/glusterfs-libs-3.7.9-1.el7.x86_64.rpm:
> > FB> Header V4 RSA/SHA256 Signature, key ID d5dc52dc: NOKEY Retrieving key
> > FB> from
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
> >
> > Try this patch, it worked for me:
> >
> > diff --git a/yum.repos.d/ovirt-3.5-dependencies.repo
> b/yum.repos.d/ovirt-3.5-dependencies.repo
> > index c1914bb..3ef8a28 100644
> > --- a/yum.repos.d/ovirt-3.5-dependencies.repo
> > +++ b/yum.repos.d/ovirt-3.5-dependencies.repo
> > @@ -14,7 +14,7 @@ baseurl=
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-
> > enabled=1
> > skip_if_unavailable=1
> > gpgcheck=1
> > -gpgkey=
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
> > +gpgkey=
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
> >
> > [ovirt-3.5-glusterfs-noarch-epel]
> > name=GlusterFS is a clustered file-system capable of scaling to several
> petabytes.
> > @@ -22,7 +22,7 @@ baseurl=
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-
> > enabled=1
> > skip_if_unavailable=1
> > gpgcheck=1
> > -gpgkey=
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
> > +gpgkey=
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
> >
> > [ovirt-3.5-patternfly1-noarch-epel]
> > name=Copr repo for patternfly1 owned by patternfly
> >
> >
> > Robert
> >
> > --
> > Senior Software Engineer @ Parsons
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS for production use with Ovirt

2015-03-24 Thread Sven Kieske


On 24/03/15 02:43, Punit Dambiwal wrote:
> I have 4 servers.

AFAIK this is a real problem regarding
split-brain situations: as in every cluster,
you should avoid an even number of servers
so that you always have a quorum.
Use 3 or 5 servers, not 4.
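
For reference, a hedged sketch of the quorum-related volume options this
touches on (the volume name "data" is an assumption):

# client-side quorum: allow writes only while a majority of replicas are up
gluster volume set data cluster.quorum-type auto
# server-side quorum: stop bricks when the trusted pool loses quorum
gluster volume set data cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%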

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GlusterFS native client use with oVirt

2015-04-30 Thread Doron Fediuck


On 23/04/15 00:47, Will Dennis wrote:
> Hi all,
> 
>  
> 
> Can someone tell me if it’s possible or not to utilize GlusterFS mounted
> as native (i.e. FUSE) for a storage domain with oVirt 3.5.x?  I have two
> nodes (with a third I’m thinking of using as well) that are running
> Gluster, and I’ve created the two volumes needed for hosted engine setup
> (“engine”, “data”) on them, and mounted them native (not via NFS.) Can
> this be used with oVirt 3.5.x?
> 
>  
> 
> Or is this (from what I now understand) a new feature coming in oVirt 3.6?
> 
>  
> 
> Thanks,
> 
> Will
> 
> 
Hi Will,
note that Hosted engine requires replica-3 when using Gluster.
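
For reference, a minimal sketch of creating such a replica-3 volume; the host
names and brick paths are assumptions:

gluster volume create engine replica 3 \
  hostA:/gluster_bricks/engine/engine \
  hostB:/gluster_bricks/engine/engine \
  hostC:/gluster_bricks/engine/engine
gluster volume set engine group virt
gluster volume set engine storage.owner-uid 36
gluster volume set engine storage.owner-gid 36
gluster volume start engine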

If all goes well, we may see a tighter integration coming in the next
version (will require gluster updates as well).

Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [GlusterFS] Regarding gluster backup-volfile-servers option

2017-05-17 Thread TranceWorldLogic .
Hi,

Before trying it out, I want to understand how glusterfs will react in the
scenario below.
Please help me.

Let's consider I have two hosts, hostA and hostB.
I have set up a replica volume on hostA and hostB (consider it the storage domain
for DATA in oVirt).
I have configured the data domain mount command with the backup server option
(backup-volfile-server) as hostB (I mean the main server is hostA and the backup is
hostB).

1> As I understand it, VDSM executes the mount command on both hostA and hostB (for
creating the data domain).
2> That means the glusterFS CLIENT on HostB will communicate with the main server
(hostA).
(Please correct me if I am wrong here.)
3> Let's say HostA goes down (shutdown / power-off scenario).
4> Due to the backup option I will still have the data domain available on HostB.
(Now the glusterFS CLIENT on HostB will start communicating with the HostB
GlusterFS SERVER.)
5> Now let's say HostA comes back up.
6> Will it sync all data from HostB to the HostA glusterFS server?
(As per the docs, yes; I have not tried it yet and want to confirm my understanding.)
7> Will the glusterFS CLIENT on HostB start communicating with the main server
(HostA)?

Please let me know, I am new to glusterFS.

Thanks,
~Rohit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs E­rror message constantly b­eing reported

2017-08-16 Thread Vadim
In vdsm.log

2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in 
_handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in 
_dynamicMethod
result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 117, 
in status
return self._gluster.volumeStatus(volumeName, brick, statusOption)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in 
wrapper
rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411, in 
volumeStatus
data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in 

getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 
'glusterVolumeStatvfs'


2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in 
_handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in 
_dynamicMethod
result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 109, 
in list
return self._gluster.tasksList(taskIds)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in 
wrapper
rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507, in 
tasksList
status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in 

getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'


Срд 16 Авг 2017 16:08:24 +0300, Vadim  написал:
> Hi, All
> 
> ovirt 4.1.4 fresh install
> Constantly seeing this message in the logs, how to fix this:
> 
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> 
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Thanks,
Vadim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs E­rror message constantly b­eing reported

2017-08-16 Thread Sahina Bose
Can you check if you have vdsm-gluster rpm installed on the hosts?
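
A hedged sketch of how to check and, if missing, install it; restarting the
VDSM services afterwards is an assumption on my part:

rpm -q vdsm-gluster || yum install -y vdsm-gluster
systemctl restart supervdsmd vdsmd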

On Wed, Aug 16, 2017 at 7:08 PM, Vadim  wrote:

> In vdsm.log
>
> 2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 117, in status
> return self._gluster.volumeStatus(volumeName, brick, statusOption)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411,
> in volumeStatus
> data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
> __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
> 
> getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterVolumeStatvfs'
>
>
> 2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 109, in list
> return self._gluster.tasksList(taskIds)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507,
> in tasksList
> status = self.svdsmProxy.glusterTasksList(taskIds)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
> __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
> 
> getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterTasksList'
>
>
> Срд 16 Авг 2017 16:08:24 +0300, Vadim  написал:
> > Hi, All
> >
> > ovirt 4.1.4 fresh install
> > Constantly seeing this message in the logs, how to fix this:
> >
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> >
> > --
> > Thanks,
> > Vadim
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs Error message constantly being reported

2017-08-18 Thread Kasturi Narra
Hi,

   Can you please check if you have the vdsm-gluster package installed on the
system?

Thanks
kasturi

On Wed, Aug 16, 2017 at 6:12 PM, Vadim  wrote:

> Hi, All
>
> ovirt 4.1.4 fresh install
> Constantly seeing this message in the logs, how to fix this:
>
>
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
>
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs-fuse consuming large amounts of ram

2018-06-18 Thread Edward Clay

It looks like we are experiencing a bug in the version of glusterfs
included with ovirt 4.2.3.  It looks like glusterfs 3.12.x has an issue
where it consumes large amounts of RAM, which has caused our HVs to report
storage errors and pause VMs.

https://bugzilla.redhat.com/show_bug.cgi?id=1496379

Is there a safe way to get glusterfs-fuse v3.13.x installed with ovirt
4.2.3 or do we have to live with this issue until future updates to
ovirt/glusterfs are released?

$ ssh hv1.domain.com "sudo grep glusterfs
/var/log/messages-* | grep -i kill "
/var/log/messages-20180610:Jun  6 13:49:54 hv1 kernel: Out of memory:
Kill process 15353 (glusterfs) score 630 or sacrifice child
/var/log/messages-20180610:Jun  6 13:49:54 hv1 kernel: Killed process
15353 (glusterfs) total-vm:33800604kB, anon-rss:31896632kB,
file-rss:840kB, shmem-rss:0kB
/var/log/messages-20180617:Jun 17 00:24:16 hv1 kernel: Out of memory:
Kill process 4072 (glusterfs) score 678 or sacrifice child
/var/log/messages-20180617:Jun 17 00:24:16 hv1 kernel: Killed process
4072 (glusterfs) total-vm:36159900kB, anon-rss:34338508kB,
file-rss:888kB, shmem-rss:0kB


We see this same issue occur on all HVs in our cluster.
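
For reference, a hedged sketch of watching the FUSE client's memory growth
before the OOM killer steps in (nothing oVirt-specific is assumed here):

# resident set size (RSS, in KB) of the gluster FUSE client processes
ps -eo pid,rss,vsz,args | grep '[g]lusterfs'
# OOM-killer events already logged
grep -i 'out of memory' /var/log/messages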

Edward Clay
Systems Administrator
The Hut Group

Tel:
Email: edward.c...@uk2group.com

For the purposes of this email, the "company" means The Hut Group Limited, a 
company registered in England and Wales (company number 6539496) whose registered office 
is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any 
of its respective subsidiaries.

Confidentiality Notice
This e-mail is confidential and intended for the use of the named recipient 
only. If you are not the intended recipient please notify us by telephone 
immediately on +44(0)1606 811888 or return it to us by e-mail. Please then 
delete it from your system and note that any use, dissemination, forwarding, 
printing or copying is strictly prohibited. Any views or opinions are solely 
those of the author and do not necessarily represent those of the company.

Encryptions and Viruses
Please note that this e-mail and any attachments have not been encrypted. They 
may therefore be liable to be compromised. Please also note that it is your 
responsibility to scan this e-mail and any attachments for viruses. We do not, 
to the extent permitted by law, accept any liability (whether in contract, 
negligence or otherwise) for any virus infection and/or external compromise of 
security and/or confidentiality in relation to transmissions sent by e-mail.

Monitoring
Activity and use of the company's systems is monitored to secure its effective 
use and operation and for other lawful business purposes. Communications using 
these systems will also be monitored and may be recorded to secure effective 
use and operation and for other lawful business purposes.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WRYEBOLNHJZGKKJUNF77TJ7WMBS66ZYK/


[ovirt-users] glusterfs health-check failed, (brick) going down

2021-07-07 Thread Jiří Sléžka

Hello,

I have 3 node HCI cluster with oVirt 4.4.6 and CentOS8.

From time to time (I believe) a random brick on a random host goes down
because of the health check. It looks like this:


[root@ovirt-hci02 ~]# grep "posix_health_check" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 
07:13:37.408184] M [MSGID: 113075] 
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix: 
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 
07:13:37.408407] M [MSGID: 113075] 
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still 
alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 
16:11:14.518971] M [MSGID: 113075] 
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix: 
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 
16:11:14.519200] M [MSGID: 113075] 
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still 
alive! -> SIGTERM


on other host

[root@ovirt-hci01 ~]# grep "posix_health_check" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05 
13:15:51.983327] M [MSGID: 113075] 
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-engine-posix: 
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05 
13:15:51.983728] M [MSGID: 113075] 
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-engine-posix: 
still alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05 
01:53:35.769129] M [MSGID: 113075] 
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix: 
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05 
01:53:35.769819] M [MSGID: 113075] 
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still 
alive! -> SIGTERM


I cannot link these errors to any storage/fs issue (in dmesg or
/var/log/messages), and the brick devices look healthy (smartd).


I can force-start the brick with

gluster volume start vms|engine force

and after some healing everything works fine for a few days.
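
For reference, a hedged sketch of checking brick and heal state after such a
force start, using the volume names above:

gluster volume status vms
gluster volume heal vms info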

Did anybody observe this behavior?

The vms volume has this structure (two bricks per host, each a separate
JBOD SSD disk); the engine volume has one brick on each host...


gluster volume info vms

Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.stat-prefetch: off
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Cheers,

Jiri
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BPXG53NG34QKCABYJ35UYIWPNNWTKXW4/


[ovirt-users] GlusterFS Storage - Size limited at 2 GB

2018-12-17 Thread v . levet
Hello everyone,

I've installed oVirt on 8 nodes of a MacroServer (SuperMicro Microcloud): 7
nodes with oVirt Node installed and 1 node with CentOS 7 and oVirt installed.
The last one acts as both hypervisor and node.

I would like to use the storage of all nodes as one Gluster storage, so I've
created a new DataCenter with a Cluster containing all the nodes. Then I
created the Gluster volume (via the GUI): Distributed Replicated, with 4
replicas across all 8 nodes, then "Optimize for oVirt Storage".
Finally I created a storage Domain based on the volume created before.
All works fine, but the total storage available from the Domain is 1.9 GB.
I've already tried to force it using these commands:


gluster volume quota GlusterVol limit-usage / 3200GB
gluster volume quota GlusterVol limit-usage /data 3200GB

At first it seemed to work, but when I was installing a VM I got a Full
Storage error. It seems it can only use the storage of the node where the
volume was instantiated.

I would like to see all the storage distributed, and I wish I could use the
storage from all nodes.
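
For reference, a hedged sketch of a few checks that may help narrow this down;
the oVirt gluster mount path below is an assumption:

gluster volume info GlusterVol
gluster volume quota GlusterVol list
df -h /rhev/data-center/mnt/glusterSD/*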

Could someone help me?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IGJQBMSGWGU47FBZ6S5EIRH6UBT3T3VS/


Re: [ovirt-users] [GlusterFS] Regarding gluster backup-volfile-servers option

2017-05-17 Thread knarra

Hi,

backup-volfile-servers is mainly used to avoid a SPOF. For example,
take a scenario where you have Host A and Host B: when you try to
mount a glusterfs volume using Host A with backup-volfile-servers
specified and Host A is not accessible, the mount will happen via Host B,
which is specified in backup-volfile-servers. backup-volfile-servers is
only used to fetch the volfile from gluster and has nothing to do
with data sync.


Data syncing comes as part of the replicate feature in glusterfs: say,
for example, you have two hosts, Host A and Host B, with a replica
volume configured. If Host A goes down for some time, all the writes
happen on Host B, and when Host A comes back up the data gets synced to Host A.
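
For reference, a hedged sketch of the resulting mount command (host names,
volume name and mount point are taken from the scenario above or assumed); in
oVirt this option is typically supplied via the storage domain's mount options
field:

mount -t glusterfs -o backup-volfile-servers=hostB hostA:/data /mnt/data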


Hope this helps 

Thanks
kasturi

On 05/17/2017 11:31 PM, TranceWorldLogic . wrote:

Hi,

Before trying out, I want to understand how glusterfs will react for 
below scenario.

Please help me.

Let consider I have two host hostA and hostB
I have setup replica volume on hostA and hostB. (consider as storage 
domain for DATA in ovirt).
I have configure data domain mount command with backup server option 
(backup-volfile-server) as hostB (I mean main server as hostA and 
backup as hostB)


1> As I understood, VDSM execute mount command on both hostA and 
hostB.(for creating data domain)
2> That mean, HostB glusterFS CLIENT will communicate with main server 
(hostA).

(Please correct me if I am wrong here.)
3> Let say HostA got down (say shutdown, power off scenario)
4> Due to backup option I will have data domain available on HostB.
(Now glusterFS CLIENT on HostB will start communicating with HostB 
GlusterFS SERVER).

5> Now let say HostA comes up.
6> Will it sync all data from HostB to HostA glusterFS server ?
(as per doc, yes, i not tried yet, want to confirm my understanding)
7> Will glusterFS CLIENT on HostB start communicate with main server 
(HostA) ?


Please let me know, I am new to glusterFS.

Thanks,
~Rohit


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


  1   2   >