Re: [ovirt-users] Replicated Glusterfs on top of ZFS

2017-03-05 Thread Ramesh Nachimuthu

+gluster-users 


Regards,
Ramesh

- Original Message -
> From: "Arman Khalatyan" 
> To: "Juan Pablo" 
> Cc: "users" , "FERNANDO FREDIANI" 
> Sent: Friday, March 3, 2017 8:32:31 PM
> Subject: Re: [ovirt-users] Replicated Glusterfs on top of ZFS
> 
> The problem itself is not streaming-data performance, and dd'ing zeros
> does not tell us much either, since the production zfs runs with compression.
> The main problem comes when gluster starts to do something with the data:
> it uses xattrs, and accessing extended attributes inside zfs is probably
> slower than on XFS.
> Even a primitive find or ls -l in the .glusterfs folders takes ages:
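> For example, a rough check (the brick path is an assumption based on the volume
> layout in this thread): timing a pure metadata walk shows the xattr/stat cost:
> time find /zclei22/01/glu/.glusterfs -type f | wc -l
> time getfattr -d -m . -e hex /zclei22/01/glu/.glusterfs/indices/xattrop/* >/dev/null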
> 
> Now I can see that the arbiter host has an almost 100% cache miss rate during the
> rebuild, which is natural since it is always reading new datasets:
> [root@clei26 ~]# arcstat.py 1
>     time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
> 15:57:31    29    29    100    29  100     0    0    29  100   685M    31G
> 15:57:32   530   476     89   476   89     0    0   457   89   685M    31G
> 15:57:33   480   467     97   467   97     0    0   463   97   685M    31G
> 15:57:34   452   443     98   443   98     0    0   435   97   685M    31G
> 15:57:35   582   547     93   547   93     0    0   536   94   685M    31G
> 15:57:36   439   417     94   417   94     0    0   393   94   685M    31G
> 15:57:38   435   392     90   392   90     0    0   374   89   685M    31G
> 15:57:39   364   352     96   352   96     0    0   352   96   685M    31G
> 15:57:40   408   375     91   375   91     0    0   360   91   685M    31G
> 15:57:41   552   539     97   539   97     0    0   539   97   685M    31G
> 
> It looks like we cannot have both performance and reliability in the same system
> :(
> The simple final conclusion is that with a single disk + ssd, even zfs does not
> help to speed up the glusterfs healing.
> I will stop here :)
> 
> 
> 
> 
> On Fri, Mar 3, 2017 at 3:35 PM, Juan Pablo < pablo.localh...@gmail.com >
> wrote:
> 
> 
> 
> cd into the pool path,
> then: dd if=/dev/zero of=test.tt bs=1M
> leave it running 5/10 minutes,
> then ctrl+c and paste the result here.
> etc.
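> Note: on a dataset with compression enabled, zeros from /dev/zero compress away
> and inflate the result. A sketch of a more honest run (the count is a placeholder):
> dd if=/dev/urandom of=test.tt bs=1M count=4096 conv=fsync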
> 
> 2017-03-03 11:30 GMT-03:00 Arman Khalatyan < arm2...@gmail.com > :
> 
> 
> 
> No, I have one pool made of one disk, with the ssd as a cache and log device.
> I have 3 glusterfs bricks on 3 separate hosts: volume type Replicate (Arbiter) =
> replica 2+1!
> That is as much as you can push into the compute nodes (they have only 3 disk slots).
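> For illustration, a sketch of that layout; the device paths are assumptions
> based on the pvs/lvs and zpool status output below:
> zpool create zclei22 /dev/disk/by-id/HGST_HUS724040ALA640_PN2334PBJ4SV6T1
> zpool add zclei22 log /dev/vg_cache/lv_slog
> zpool add zclei22 cache /dev/vg_cache/lv_cache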
> 
> 
> On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo < pablo.localh...@gmail.com >
> wrote:
> 
> 
> 
> OK, you have 3 pools: zclei22, logs and cache. That's wrong. You should have 1
> pool, with zlog+cache, if you are looking for performance.
> Also, don't mix drives.
> What's the performance issue you are facing?
> 
> 
> regards,
> 
> 2017-03-03 11:00 GMT-03:00 Arman Khalatyan < arm2...@gmail.com > :
> 
> 
> 
> This is CentOS 7.3 ZoL version 0.6.5.9-1
> 
> 
> 
> 
> 
> [root@clei22 ~]# lsscsi
> [2:0:0:0] disk ATA INTEL SSDSC2CW24 400i /dev/sda
> [3:0:0:0] disk ATA HGST HUS724040AL AA70 /dev/sdb
> [4:0:0:0] disk ATA WDC WD2002FYPS-0 1G01 /dev/sdc
> 
> 
> 
> 
> [root@clei22 ~]# pvs; vgs; lvs
>   PV                                                 VG            Fmt  Attr PSize   PFree
>   /dev/mapper/INTEL_SSDSC2CW240A3_CVCV306302RP240CGN vg_cache      lvm2 a--  223.57g      0
>   /dev/sdc2                                          centos_clei22 lvm2 a--    1.82t 64.00m
> 
>   VG            #PV #LV #SN Attr   VSize   VFree
>   centos_clei22   1   3   0 wz--n-   1.82t 64.00m
>   vg_cache        1   2   0 wz--n- 223.57g      0
> 
>   LV       VG            Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   home     centos_clei22 -wi-ao   1.74t
>   root     centos_clei22 -wi-ao  50.00g
>   swap     centos_clei22 -wi-ao  31.44g
>   lv_cache vg_cache      -wi-ao 213.57g
>   lv_slog  vg_cache      -wi-ao  10.00g
> 
> 
> 
> [root@clei22 ~]# zpool status -v
>   pool: zclei22
>  state: ONLINE
>   scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
> config:
> 
>         NAME                                    STATE   READ WRITE CKSUM
>         zclei22                                 ONLINE     0     0     0
>           HGST_HUS724040ALA640_PN2334PBJ4SV6T1  ONLINE     0     0     0
>         logs
>           lv_slog                               ONLINE     0     0     0
>         cache
>           lv_cache                              ONLINE     0     0     0
> 
> errors: No known data errors
> 
> 
> ZFS config:
> 
> [root@clei22 ~]# zfs get all zclei22/01
> NAME        PROPERTY       VALUE                  SOURCE
> zclei22/01  type           filesystem             -
> zclei22/01  creation       Tue Feb 28 14:06 2017  -
> zclei22/01  used           389G                   -
> zclei22/01  available      3.13T                  -
> zclei22/01  referenced     389G                   -
> zclei22/01  compressratio  1.01x                  -
> zclei22/01  mounted        yes                    -
> zclei22/01  quota          none                   default
> zclei22/01  reservation    none                   default
> zclei22/01  recordsize     128K                   local
> zclei22/01  mountpoint     /zclei22/01            default
> zclei22/01  sharenfs       off                    default
> zclei22/01  checksum       on                     default
> zclei22/01  compression    off                    local
> zclei22/01  atime          on                     default
> zclei22/01  devices        on                     default
> zclei22/01  exec           on                     default
> zclei22/01  setuid         on                     default
> zclei22/01  readonly       off                    default
> zclei22/01  zoned          off                    default
> zclei22/01  snapdir        hidden                 default
> zclei22/01  aclinherit     restricted             default
> zclei22/01  canmount       on                     default
> zclei22/01  xattr          sa                     local
> zclei22/01  copies         1                      default
> zclei22/01  version        5                      -
> zclei22/01  utf8only       off                    -
> zclei22/01  normalization  none                   -
> zclei22/01

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Ramesh Nachimuthu




- Original Message -
> From: "Arman Khalatyan" 
> To: "Ramesh Nachimuthu" 
> Cc: "users" , "Sahina Bose" 
> Sent: Wednesday, March 1, 2017 11:22:32 PM
> Subject: Re: [ovirt-users] Gluster setup disappears any chance to recover?
> 
> OK, I will answer myself:
> Yes, the gluster daemon is managed by vdsm :)
> And to recover the lost config, one should simply add the "force" keyword:
> gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
> 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu
> 10.10.10.41:/zclei26/01/glu
> force
> 
> Now everything is up and running!
> One annoying thing is the epel dependency of zfs conflicting with ovirt...
> every time one needs to enable and then disable epel.
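> A possible workaround, assuming the repo id is 'epel': keep the repo disabled in
> /etc/yum.repos.d/epel.repo and enable it only per transaction, e.g.
> yum install --enablerepo=epel zfs
> so nothing else ever resolves against epel.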
> 
> 

The glusterd service will be started when you add/activate the host in oVirt. It
will be configured to start after every reboot.
Volumes disappearing seems to be a serious issue; we have never seen such an
issue with the XFS file system. Are you able to reproduce this issue consistently?
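For reference, this can be checked on a host with the standard systemd commands:

  systemctl is-enabled glusterd   # should report 'enabled' once the host is managed by oVirt
  systemctl status glusterd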

Regards,
Ramesh

> 
> On Wed, Mar 1, 2017 at 5:33 PM, Arman Khalatyan  wrote:
> 
> > OK, finally I got a single brick up and running, so I can access the data.
> > Now the question is: do we need to run the glusterd daemon on startup? Or is it
> > managed by vdsmd?
> >
> >
> > On Wed, Mar 1, 2017 at 2:36 PM, Arman Khalatyan  wrote:
> >
> >> All folders under /var/lib/glusterd/vols/ are empty.
> >> In the shell history of one of the servers I found the command used to
> >> create it:
> >> gluster volume create GluReplica replica 3 arbiter 1 transport TCP,RDMA
> >> 10.10.10.44:/zclei22/01/glu 10.10.10.42:/zclei21/01/glu 10.10.10.41:
> >> /zclei26/01/glu
> >>
> >> But executing this command, it claims:
> >> volume create: GluReplica: failed: /zclei22/01/glu is already part of a
> >> volume
> >>
> >> Any chance to force it?
> >>
> >>
> >>
> >> On Wed, Mar 1, 2017 at 12:13 PM, Ramesh Nachimuthu 
> >> wrote:
> >>
> >>>
> >>>
> >>>
> >>>
> >>> - Original Message -
> >>> > From: "Arman Khalatyan" 
> >>> > To: "users" 
> >>> > Sent: Wednesday, March 1, 2017 3:10:38 PM
> >>> > Subject: Re: [ovirt-users] Gluster setup disappears any chance to
> >>> recover?
> >>> >
> >>> > engine throws following errors:
> >>> > 2017-03-01 10:39:59,608+01 WARN
> >>> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>> > (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
> >>> > GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null, Call
> >>> Stack:
> >>> > null, Custom Event ID: -1, Message: Detected deletion of volume
> >>> GluReplica
> >>> > on cluster HaGLU, and deleted it from engine DB.
> >>> > 2017-03-01 10:39:59,610+01 ERROR
> >>> > [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
> >>> (DefaultQuartzScheduler6)
> >>> > [d7f7d83] Error while removing volumes from database!:
> >>> > org.springframework.dao.DataIntegrityViolationException:
> >>> > CallableStatementCallback; SQL [{call deleteglustervolumesbyguids(?)
> >>> }];
> >>> > ERROR: update or delete on table "gluster_volumes" violates foreign key
> >>> > constraint "fk_storage_connection_to_glustervolume" on table
> >>> > "storage_server_connections"
> >>> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still
> >>> referenced
> >>> > from table "storage_server_connections".
> >>> > Where: SQL statement "DELETE
> >>> > FROM gluster_volumes
> >>> > WHERE id IN (
> >>> > SELECT *
> >>> > FROM fnSplitterUuid(v_volume_ids)
> >>> > )"
> >>> > PL/pgSQL function deleteglustervolumesbyguids(character varying) line
> >>> 3 at
> >>> > SQL statement; nested exception is org.postgresql.util.PSQLException:
> >>> ERROR:
> >>> > update or delete on table "gluster_volumes" violates foreign key
> >>> constraint
> >>> > "fk_storage_connection_to_glustervolume" on table
> >>> > "storage_server_connections"
> >>> > Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-

Re: [ovirt-users] Gluster setup disappears any chance to recover?

2017-03-01 Thread Ramesh Nachimuthu




- Original Message -
> From: "Arman Khalatyan" 
> To: "users" 
> Sent: Wednesday, March 1, 2017 3:10:38 PM
> Subject: Re: [ovirt-users] Gluster setup disappears any chance to recover?
> 
> engine throws following errors:
> 2017-03-01 10:39:59,608+01 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler6) [d7f7d83] EVENT_ID:
> GLUSTER_VOLUME_DELETED_FROM_CLI(4,027), Correlation ID: null, Call Stack:
> null, Custom Event ID: -1, Message: Detected deletion of volume GluReplica
> on cluster HaGLU, and deleted it from engine DB.
> 2017-03-01 10:39:59,610+01 ERROR
> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler6)
> [d7f7d83] Error while removing volumes from database!:
> org.springframework.dao.DataIntegrityViolationException:
> CallableStatementCallback; SQL [{call deleteglustervolumesbyguids(?)}];
> ERROR: update or delete on table "gluster_volumes" violates foreign key
> constraint "fk_storage_connection_to_glustervolume" on table
> "storage_server_connections"
> Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still referenced
> from table "storage_server_connections".
> Where: SQL statement "DELETE
> FROM gluster_volumes
> WHERE id IN (
> SELECT *
> FROM fnSplitterUuid(v_volume_ids)
> )"
> PL/pgSQL function deleteglustervolumesbyguids(character varying) line 3 at
> SQL statement; nested exception is org.postgresql.util.PSQLException: ERROR:
> update or delete on table "gluster_volumes" violates foreign key constraint
> "fk_storage_connection_to_glustervolume" on table
> "storage_server_connections"
> Detail: Key (id)=(3d8bfa9d-1c83-46ac-b4e9-bd317623ed2d) is still referenced
> from table "storage_server_connections".
> Where: SQL statement "DELETE
> FROM gluster_volumes
> WHERE id IN (
> SELECT *
> FROM fnSplitterUuid(v_volume_ids)
> )"
> PL/pgSQL function deleteglustervolumesbyguids(character varying) line 3 at
> SQL statement
> at
> org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:243)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at
> org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1094)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1130)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at
> org.springframework.jdbc.core.simple.AbstractJdbcCall.executeCallInternal(AbstractJdbcCall.java:405)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at
> org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:365)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at
> org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198)
> [spring-jdbc.jar:4.2.4.RELEASE]
> at
> org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135)
> [dal.jar:]
> at
> org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:130)
> [dal.jar:]
> at
> org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeModification(SimpleJdbcCallsHandler.java:76)
> [dal.jar:]
> at
> org.ovirt.engine.core.dao.gluster.GlusterVolumeDaoImpl.removeAll(GlusterVolumeDaoImpl.java:233)
> [dal.jar:]
> at
> org.ovirt.engine.core.bll.gluster.GlusterSyncJob.removeDeletedVolumes(GlusterSyncJob.java:521)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshVolumeData(GlusterSyncJob.java:465)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshClusterData(GlusterSyncJob.java:133)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshLightWeightData(GlusterSyncJob.java:111)
> [bll.jar:]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [rt.jar:1.8.0_121]
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [rt.jar:1.8.0_121]
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_121]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_121]
> at
> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:77)
> [scheduler.jar:]
> at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:51)
> [scheduler.jar:]
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [rt.jar:1.8.0_121]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [rt.jar:1.8.0_121]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [rt.jar:1.8.0_121]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [rt.jar:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_121]
> Caused by: org

Re: [ovirt-users] gdeploy error

2017-02-15 Thread Ramesh Nachimuthu

+ Sac,


- Original Message -
> From: "Sandro Bonazzola" 
> To: "Ishmael Tsoaela" , "Ramesh Nachimuthu" 
> 
> Cc: "users" 
> Sent: Wednesday, February 15, 2017 1:52:26 PM
> Subject: Re: [ovirt-users] gdeploy error
> 
> On Tue, Feb 14, 2017 at 3:52 PM, Ishmael Tsoaela 
> wrote:
> 
> > Hi,
> >
> >
> > I am a new sys admin and trying to install glusterfs using gdeploy, I
> > started with a simple script to enable a service(ntpd).
> >
> >
> >  gdeploy --version
> > gdeploy 2.0.1
> >
> > [root@ovirt1 gdeploy]# cat ntp.conf
> > [hosts]
> > ovirt1
> >
> > [service1]
> > action=enable
> > service=ntpd
> >
> > [service2]
> > action=start
> > service=ntpd
> >
> >
> >
> > The issue is that gdeploy is returning an error:
> > fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "module (setup) is
> > missing interpreter line"}
> >
> >
> > Is there a simple way to debug or figure out how to fix this error?
> >
> >
> I guess it's missing the ansible rpm. Adding Ramesh.

Ansible is installed as a dependency of gdeploy 2.0.1, so a missing ansible rpm
should not be the issue.

Ishmael, do you see any related issue in /var/log/messages? gdeploy also logs to
a local file, so can you check anything under .gdeploy/*?
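One independent sanity check (standard commands, not specific to gdeploy) is to
confirm that the ansible installation itself works on the node:

  rpm -q gdeploy ansible
  ansible localhost -m ping    # should return "pong" if ansible can run modules locally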

Regards,
Ramesh

> 
> 
> 
> 
> >
> >
> >
> >
> >
> >
> 
> 
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> 


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ramesh Nachimuthu




- Original Message -
> From: "Ralf Schenk" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org
> Sent: Friday, February 3, 2017 4:19:02 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to 
> 4.1.0?
> 
> Hello,
> 
> in reality my cluster is a hyper-converged cluster. But how do I tell this to
> oVirt Engine? Of course I activated the "Gluster" checkbox (already some
> versions ago, around 4.0.x), but that didn't change anything.
> 

Do you see any error/warning in the engine.log?

Regards,
Ramesh

> Bye
> Am 03.02.2017 um 11:18 schrieb Ramesh Nachimuthu:
> >> 2. I'm missing any gluster-specific management features, as my gluster is
> >> not manageable in any way from the GUI. I expected to see my gluster now in
> >> the dashboard and be able to add volumes etc. What do I need to do to
> >> "import" my existing gluster (only one volume so far) to make it manageable?
> >>
> >>
> > If it is a hyperconverged cluster, then all your hosts are already managed
> > by ovirt. So you just need to enable 'Gluster Service' in the Cluster,
> > gluster volume will be imported automatically when you enable gluster
> > service.
> >
> > If it is not a hyperconverged cluster, then you have to create a new
> > cluster and enable only 'Gluster Service'. Then you can import or add the
> > gluster hosts to this Gluster cluster.
> >
> > You may also need to define a gluster network if you are using a separate
> > network for gluster data traffic. More at
> > http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
> >
> >
> >
> 
> --
> 
> 
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail r...@databay.de
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> www.databay.de
> 
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> 
> 
> 


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-03 Thread Ramesh Nachimuthu




- Original Message -
> From: "Ralf Schenk" 
> To: users@ovirt.org
> Sent: Friday, February 3, 2017 3:24:55 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to 
> 4.1.0?
> 
> 
> 
> Hello,
> 
> I upgraded my cluster of 8 hosts with gluster storage and hosted-engine-ha.
> They were already CentOS 7.3, using oVirt 4.0.6 and gluster 3.7.x packages
> from storage-sig testing.
> 
> 
> I'm missing the storage listed under the storage tab, but this is already filed
> as a bug. Increasing the Cluster and Storage Compatibility level and also "reset
> emulated machine" after having upgraded one host after another, without the need
> to shut down VMs, works well. (VMs get a sign that there will be changes after
> reboot.)
> 
> Important: you also have to issue a yum update on the host to upgrade additional
> components, e.g. gluster to 3.8.x. I was frightened of this step, but it worked
> well except for a configuration issue I was responsible for in gluster.vol (I
> had "transport socket, rdma").
> 
> 
> Bugs/Quirks so far:
> 
> 
> 1. After restarting a single VM that used an RNG device, I got an error (it was
> German), like "RNG device not supported by cluster". I had to disable the RNG
> device and save the settings, then open the settings again and re-enable the
> RNG device. Then the machine boots up.
> I think there is a migration step missing from /dev/random to /dev/urandom for
> existing VMs.
> 
> 2. I'm missing any gluster-specific management features, as my gluster is not
> manageable in any way from the GUI. I expected to see my gluster now in the
> dashboard and be able to add volumes etc. What do I need to do to "import"
> my existing gluster (only one volume so far) to make it manageable?
> 
> 

If it is a hyperconverged cluster, then all your hosts are already managed by
oVirt. So you just need to enable 'Gluster Service' in the cluster; the gluster
volume will be imported automatically when you enable the gluster service.

If it is not a hyperconverged cluster, then you have to create a new cluster and
enable only 'Gluster Service'. Then you can import or add the gluster hosts to
this gluster cluster.

You may also need to define a gluster network if you are using a separate
network for gluster data traffic. More at
http://www.ovirt.org/develop/release-management/features/network/select-network-for-gluster/
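For reference, enabling the service over the REST API looks roughly like this
(a sketch; the engine URL, credentials and cluster id are placeholders):

  curl -k -u 'admin@internal:password' -X PUT \
       -H 'Content-Type: application/xml' \
       -d '<cluster><gluster_service>true</gluster_service></cluster>' \
       https://engine.example.com/ovirt-engine/api/clusters/<cluster-id>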



> 3. Three of my hosts have the hosted engine deployed for HA. At first, all
> three were marked by a crown (the running one was gold and the others were
> silver). After upgrading the third host with the deployed hosted engine, HA is
> not active anymore.
> 
> I can't get this host back with a working ovirt-ha-agent/broker. I already
> rebooted and manually restarted the services, but it isn't able to get the
> cluster state according to "hosted-engine --vm-status". The other hosts report
> this host's status as "unknown stale-data".
> 
> I already shut down all agents on all hosts and issued a "hosted-engine
> --reinitialize-lockspace", but that didn't help.
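> (For reference, the full sequence for a lockspace reset, using the standard
> service names; this is what was attempted above:
> systemctl stop ovirt-ha-agent ovirt-ha-broker   # on every hosted-engine host
> hosted-engine --reinitialize-lockspace
> systemctl start ovirt-ha-broker ovirt-ha-agent
> hosted-engine --vm-status
> )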
> 
> 
> The agent stops working after a timeout error, according to the log:
> 
> MainThread::INFO::2017-02-02
> 19:24:52,040::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:24:59,185::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:06,333::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:13,554::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:20,710::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::INFO::2017-02-02
> 19:25:27,865::hosted_engine::841::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_domain_monitor_status)
> VDSM domain monitor status: PENDING
> MainThread::ERROR::2017-02-02
> 19:25:27,866::hosted_engine::815::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_domain_monitor)
> Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
> domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::469::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Error while monitoring engine: Failed to start monitoring domain
> (sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
> domain acquisition
> MainThread::WARNING::2017-02-02
> 19:25:27,866::hosted_engine::472::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Unexpected error
> Traceback (most recent call last):
> File
> "/usr/

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-20 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org, gluster-us...@gluster.org, "Ravishankar Narayanankutty" 
> 
> Sent: Tuesday, December 20, 2016 4:15:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> > - Original Message -
> > > From: "Giuseppe Ragusa" 
> > > To: "Ramesh Nachimuthu" 
> > > Cc: users@ovirt.org
> > > Sent: Friday, December 16, 2016 2:42:18 AM
> > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > > GlusterFS 3.7.17
> > > 
> > > Giuseppe Ragusa ha condiviso un file di OneDrive. Per visualizzarlo, fare
> > > clic sul collegamento seguente.
> > > 
> > > 
> > > <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > [https://r1.res.office365.com/owa/prem/images/dc-generic_20.png]<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > vols.tar.gz<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > 
> > > 
> > > Da: Ramesh Nachimuthu 
> > > Inviato: lunedì 12 dicembre 2016 09.32
> > > A: Giuseppe Ragusa
> > > Cc: users@ovirt.org
> > > Oggetto: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > > 
> > > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > > Hi all,
> > > >
> > > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all
> > > > on
> > > > CentOS 7.2):
> > > >
> > > >  From /var/log/messages:
> > > >
> > > > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in #012
> > > > **kwargs)#012
> > > > File "", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting
> > > > Engine
> > > > VM OVF from the OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > > path:
> > > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > > an OVF for HE VM, trying to convert
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > > vm.conf from OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current
> > > > state
> > > > En

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org
> Sent: Friday, December 16, 2016 2:42:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> Giuseppe Ragusa ha condiviso un file di OneDrive. Per visualizzarlo, fare
> clic sul collegamento seguente.
> 
> 
> <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> [https://r1.res.office365.com/owa/prem/images/dc-generic_20.png]<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> 
> vols.tar.gz<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> 
> 
> 
> Da: Ramesh Nachimuthu 
> Inviato: lunedì 12 dicembre 2016 09.32
> A: Giuseppe Ragusa
> Cc: users@ovirt.org
> Oggetto: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> 
> On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > Hi all,
> >
> > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on
> > CentOS 7.2):
> >
> >  From /var/log/messages:
> >
> > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal
> > server error#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > _serveRequest#012res = method(**params)#012  File
> > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > wrapper#012rv = func(*args, **kwargs)#012  File
> > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > __call__#012return callMethod()#012  File
> > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > File "", line 2, in glusterVolumeStatus#012  File
> > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> >   'device'
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine
> > VM OVF from the OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > path:
> > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > an OVF for HE VM, trying to convert
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > vm.conf from OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state
> > EngineUp (score: 3400)
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote
> > host read.mgmt.private (id: 2, score: 3400)
> > Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal
> > server error#012Traceback (most recent call last):#012  File
> > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > _serveRequest#012res = method(**params)#012  File
> > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > wrapper#012rv = func(*args, **kwargs)#012  File
> > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > __call__#012return callMethod()#012  File
> > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > File "", line 

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-12 Thread Ramesh Nachimuthu



On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:

Hi all,

I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7 
GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on 
CentOS 7.2):

 From /var/log/messages:

Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest#012res = method(**params)#012  File "/usr/share/vdsm/rpc/Bridge.py", 
line 275, in _dynamicMethod#012result = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status#012return 
self._gluster.volumeStatus(volumeName, brick, statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper#012rv = func(*args, **kwargs)#012  File 
"/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__#012return 
callMethod()#012  File "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012  File "", line 2, in glusterVolumeStatus#012 
 File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _

ca

  llmethod#012raise convert_to_error(kind, result)#012KeyError: 'device'
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine VM OVF 
from the OVF_STORE
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume path: 
/rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found an 
OVF for HE VM, trying to convert
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got vm.conf 
from OVF_STORE
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state 
EngineUp (score: 3400)
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote host 
read.mgmt.private (id: 2, score: 3400)
Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest#012res = method(**params)#012  File "/usr/share/vdsm/rpc/Bridge.py", 
line 275, in _dynamicMethod#012result = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status#012return 
self._gluster.volumeStatus(volumeName, brick, statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper#012rv = func(*args, **kwargs)#012  File 
"/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__#012return 
callMethod()#012  File "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012  File "", line 2, in glusterVolumeStatus#012 
 File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _

ca

  llmethod#012raise convert_to_error(kind, result)#012KeyError: 'device'
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: INFO:mem_free.MemFree:memFree: 7392
Dec  9 15:27:50 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest#012res = method(**params)#012  File "/usr/share/vdsm/rpc/Bridge.py", 
line 275, in _dynamicMethod#012result = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status#012return 
self._gluster.volumeStatus(volumeName, brick, statusOpti

Re: [ovirt-users] remove gluster storage domain and resize gluster storage domain

2016-12-01 Thread Ramesh Nachimuthu




- Original Message -
> From: "Bill James" 
> To: "Sahina Bose" 
> Cc: users@ovirt.org
> Sent: Thursday, December 1, 2016 8:15:03 PM
> Subject: Re: [ovirt-users] remove gluster storage domain and resize gluster 
> storage domain
> 
> thank you for the reply.
> 
> [root@ovirt1 prod ~]# lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda 8:0 0 1.1T 0 disk
> ├─sda1 8:1 0 500M 0 part /boot
> ├─sda2 8:2 0 4G 0 part [SWAP]
> └─sda3 8:3 0 1.1T 0 part
> ├─rootvg01-lv01 253:0 0 50G 0 lvm /
> └─rootvg01-lv02 253:1 0 1T 0 lvm /ovirt-store
> 
> ovirt2 same.
> ovirt3:
> 
> [root@ovirt3 prod ~]# lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda 8:0 0 279.4G 0 disk
> ├─sda1 8:1 0 500M 0 part /boot
> ├─sda2 8:2 0 4G 0 part [SWAP]
> └─sda3 8:3 0 274.9G 0 part
> ├─rootvg01-lv01 253:0 0 50G 0 lvm /
> └─rootvg01-lv02 253:1 0 224.9G 0 lvm /ovirt-store
> 

See the difference between ovirt3 and the other two nodes: LV 'rootvg01-lv02' on
ovirt3 has only 224GB of capacity. In a replicated Gluster volume, the storage
capacity of the volume is limited by the smallest replica brick. If you want a
1TB gluster volume, please make sure that all the bricks in the replicated
volume have at least 1TB of capacity.

Regards,
Ramesh

> Ah ha! I missed that. Thank you!!
> I can fix that.
> 
> 
> Once I detached the storage domain it is no longer listed.
> Is there some option to make it show detached volumes?
> 
> ovirt-engine-3.6.4.1-1.el7.centos.noarch
> 
> 
> On 12/1/16 3:58 AM, Sahina Bose wrote:
> 
> 
> 
> 
> 
> On Thu, Dec 1, 2016 at 10:54 AM, Bill James < bill.ja...@j2.com > wrote:
> 
> 
> I have a 3 node cluster with replica 3 gluster volume.
> But for some reason the volume is not using the full size available.
> I thought maybe it was because I had created a second gluster volume on the same
> partition, so I tried to remove it.
> 
> I was able to put it in maintenance mode and detach it, but in no window was
> I able to get the "remove" option enabled.
> Now if I select "attach data" I see ovirt thinks the volume is still there,
> although it is not.
> 
> 2 questions.
> 
> 1. how do I clear out the old removed volume from ovirt?
> 
> To remove the storage domain, you need to detach the domain from the Data
> Center sub tab of Storage Domain. Once detached, the remove and format
> domain option should be available to you.
> Once you detach - what is the status of the storage domain? Does it show as
> Detached?
> 
> 
> 
> 
> 2. how do I get gluster to use the full disk space available?
> 
> 
> 
> It's a 1T partition, but it only created a 225G gluster volume. Why? How do I
> get the space back?
> 
> What's the output of "lsblk"? Is it consistent across all 3 nodes?
> 
> 
> 
> 
> All three nodes look the same:
> /dev/mapper/rootvg01-lv02  1.1T  135G  929G  13%  /ovirt-store
> ovirt1-gl.j2noc.com:/gv1   225G  135G   91G  60%  /rhev/data-center/mnt/glusterSD/ovirt1-gl.j2noc.com:_gv1
> 
> 
> [root@ovirt1 prod ovirt1-gl.j2noc.com:_gv1]# gluster volume status
> Status of volume: gv1
> Gluster process                                    TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------------
> Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5218
> Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       5678
> Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1  49152     0          Y       61386
> NFS Server on localhost                            2049      0          Y       31312
> Self-heal Daemon on localhost                      N/A       N/A        Y       31320
> NFS Server on ovirt3-gl.j2noc.com                  2049      0          Y       38109
> Self-heal Daemon on ovirt3-gl.j2noc.com            N/A       N/A        Y       38119
> NFS Server on ovirt2-gl.j2noc.com                  2049      0          Y       5387
> Self-heal Daemon on ovirt2-gl.j2noc.com            N/A       N/A        Y       5402
> 
> Task Status of Volume gv1
> ------------------------------------------------------------------------------------
> There are no active volume tasks
> 
> 
> Thanks.


Re: [ovirt-users] Uncaught exception occurred. Cannot read property 'f' of null

2016-10-27 Thread Ramesh Nachimuthu



On 10/27/2016 02:56 PM, Jorick Astrego wrote:




On 10/20/2016 03:14 PM, Alexander Wels wrote:

On Thursday, October 20, 2016 9:20:23 AM EDT Jorick Astrego wrote:

On 10/18/2016 03:59 PM, Michal Skrivanek wrote:

On 18 Oct 2016, at 15:56, Alexander Wels  wrote:

On Tuesday, October 18, 2016 3:44:31 PM EDT Jorick Astrego wrote:

Hi,

> We have ovirt connected to our freeipa domain. Things work fine generally,
> but once in a while I get the following error popping up in the UI:

 Uncaught exception occurred. Please try reloading the page. Details:
 Exception caught: Exception caught: (TypeError) __gwt$exception:
 : Cannot read property 'f' of null
 Please have your administrator check the UI logs

Could you install the symbol maps associated with the obfuscated code so we
can get a readable stack trace? To install the symbol maps, please run the
following command on the machine running the engine (or the VM if it is HE).
following command on the machine running the engine (or VM if it is HE).

yum install ovirt-engine-webadmin-portal-debuginfo

Please restart the ovirt-engine process with

systemctl restart ovirt-engine

after you have installed the symbol maps. Then next time you see the
message in the UI, the stack trace in the log should be readable and we
can help you better determine what is causing the problem.

right, that’s likely even easier than what I just wrote;-)


The log that goes with it:

2016-10-17 16:31:32,578 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-21) [] Permutation name: 430985F23DFC1C8BE1C7FDD91EDAA785
2016-10-17 16:31:32,578 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-21) [] Uncaught exception: :
com.google.gwt.event.shared.UmbrellaException: Exception caught:
Exception caught: (TypeError)

   __gwt$exception: : Cannot read property 'f' of null
   
at Unknown.ps(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@3837)
at Unknown.xs(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@41)
at Unknown.C3(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@19)
at Unknown.F3(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@19)
at Unknown.P2(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@117)
at Unknown.hwf(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@41)
at Unknown.twf(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@162)
at Unknown.xwf(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@14293)
at Unknown.KVe(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@1172)
at Unknown.yUe(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@33)
at Unknown.viy(@53)
at Unknown.Piy(@18587)
at Unknown.zOr(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@189)
at Unknown.$to(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@311)
at Unknown.VBo(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@2599)
at Unknown.mCo(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@8942)
at Unknown.qRn(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@116)
at Unknown.tRn(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@568)
at Unknown.kVn(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@74)
at Unknown.nVn(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@25943)
at Unknown.cUn(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@150)
at Unknown.fUn(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@24587)
at Unknown.KJe(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@21125)
at Unknown.Izk(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@10384)
at Unknown.P3(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@137)
at Unknown.g4(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@8271)
at Unknown.(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@65)
at Unknown._t(https://ovirttest.netbulae.test/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@29)

Re: [ovirt-users] host gaga and ovirt cannt control it.

2016-10-26 Thread Ramesh Nachimuthu
Can you explain the state of your setup now? Maybe a screenshot of the 'Hosts'
tab and logs from /var/log/ovirt-engine/engine.log would help us to understand
the situation there.

Regards,
Ramesh




- Original Message -
> From: "Thing" 
> To: "users" 
> Sent: Thursday, October 27, 2016 9:03:22 AM
> Subject: [ovirt-users] host gaga and ovirt cannt control it.
> 
> OK, I have struggled with this for 2 hours now; glusterp2 and the ovirt
> server are basically not talking at all. I have rebooted both, I don't know
> how many times. Reading via google, there seems to be no fix for this bar a
> manual hack of the ovirt server's database to delete the host glusterp2, or
> is it re-install-from-scratch time?
> 
> If I have to re-install from scratch, is it best to go back a version of
> ovirt, say to 3.6?
> 


Re: [ovirt-users] Detected conflict in hook delete-POST

2016-10-24 Thread Ramesh Nachimuthu


- Original Message -
> From: "Nathanaël Blanchet" 
> To: users@ovirt.org
> Sent: Monday, October 24, 2016 1:21:42 PM
> Subject: [ovirt-users] Detected conflict in hook delete-POST
> 
> Hello,
> 
> What can I do to solve this kind of messages in ovirt 4.0.4?
> 
> Detected conflict in hook delete-POST-57glusterfind-delete-post.py of
> Cluster Test.
> Detected conflict in hook start-POST-31ganesha-start.sh of Cluster Test.
> 
> 

You can resolve the conflicts in the gluster hooks. Please use the steps in the
'Resolving the Conflicts' section of the gluster hook management feature page:
https://www.ovirt.org/develop/release-management/features/gluster/gluster-hooks-management/
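In practice, the conflict means the hook file differs (or is missing) on some of
the peers. Comparing checksums across the nodes narrows it down; the paths below
are the standard glusterd hook locations for the two hooks named above:

  md5sum /var/lib/glusterd/hooks/1/delete/post/57glusterfind-delete-post.py
  md5sum /var/lib/glusterd/hooks/1/start/post/31ganesha-start.sh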

Regards,
Ramesh

> --
> Nathanaël Blanchet
> 
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
> 


Re: [ovirt-users] about hosted engine gluster support

2016-10-23 Thread Ramesh Nachimuthu




- Original Message -
> From: "张余歌" 
> To: users@ovirt.org
> Sent: Monday, October 24, 2016 10:21:15 AM
> Subject: [ovirt-users] about hosted engine gluster support
> 
> Hey, friends!
> Referring to
> https://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-gluster-support/
> I meet some problems: when I run 'hosted-engine --deploy',
> it shows support for iscsi, nfs3 and nfs4, but not gluster.
> 
> It should be: Please specify the storage you would like to use (glusterfs,
> iscsi, nfs3, nfs4)[nfs3]: glusterfs
> 
> I followed the steps in the link, but I failed to get hosted-engine to support
> gluster. Maybe there is something else I should configure? I am very confused!
> Why? Please help me!
> 
> Thanks.
> 
> My ovirt version is 3.5.6.

Gluster is not supported as storage for the hosted engine in oVirt 3.5. I would
strongly suggest you use the latest oVirt 4.0 unless you have a specific reason
not to.

Regards,
Ramesh

> 
> 


[ovirt-users] [Review-request] feature page for gdeploy cockpit integration

2016-09-21 Thread Ramesh Nachimuthu


Hi,

We are planning to integrate gdeploy with the cockpit-ovirt plugin to improve
the deployment of the hyper-converged oVirt-Gluster solution. I have sent a pull
request to add a feature page for this. It would be helpful if you could review
it and provide your valuable feedback.


Pull request : https://github.com/oVirt/ovirt-site/pull/480 
Regards, 
Ramesh 


Re: [ovirt-users] Enable Gluster service during hosted-engine deploy

2016-08-25 Thread Ramesh Nachimuthu



On 08/25/2016 03:15 PM, knarra wrote:

On 08/25/2016 01:20 PM, Renout Gerrits wrote:

Hi all,

Is there a way to enable the Gluster Service for a cluster in the 
hosted engine deploy?
In the config append you make the gluster service available with: 
OVESETUP_CONFIG/applicationMode=str:both. But how would one enable it?



Currently it is missing. You have to run "engine-config -s
AllowClusterWithVirtGlusterEnabled=true" first to enable both gluster and virt
in the same cluster, and then edit the cluster using the API or UI.

Please raise an RFE so that we can enable this in the next release.
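To verify the value and make the change take effect, the usual follow-up is:

  engine-config -g AllowClusterWithVirtGlusterEnabled   # read back the current value
  systemctl restart ovirt-engine                        # -s changes need an engine restart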

Regards,
Ramesh


I would like to enable it automatically. Now I change it afterwards with the
API,
There is no automatic way to do this as of now. It can either be done via the
API or the UI.
but after the change I have to put the hosts into maintenance and activate
them again. It would seem there must be a better way to do this.
This is an issue as of now. There is a bug logged to track this 
https://bugzilla.redhat.com/show_bug.cgi?id=1313497


Hope this helps !!


Thanks,
Renout




Re: [ovirt-users] Dedicated NICs for gluster network

2016-08-21 Thread Ramesh Nachimuthu



On 08/22/2016 11:24 AM, Sahina Bose wrote:



On Fri, Aug 19, 2016 at 6:20 PM, Nicolas Ecarnot > wrote:


On 19/08/2016 at 13:43, Sahina Bose wrote:



Or are you adding the 3 nodes to your existing cluster? If
so, I suggest you try adding this to a new cluster

OK, I tried and succeeded in creating a new cluster.
In this new cluster, I was ABLE to add the first new host,
using its mgmt DNS name.
This first host still has to have its NICs configured, and
(using Chrome or FF) access to the network settings window
stalls the browser (I even tried restarting the engine, to
no avail). Thus, I cannot set up this first node's NICs.

Thus, I cannot add any further hosts, because oVirt relies
on a first host to validate the further ones.



Network team should be able to help you here.



OK, there was no way I could continue like this (browser crash),
so I tried the following and succeeded:
- remove the newly created host and cluster
- create a new DATACENTER
- create a new cluster in this DC
- add the first new host: OK
- add the 2 other new hosts: OK

Now, I can smoothly configure their NICs.

Doing all this, I saw that oVirt detected there was already an
existing gluster cluster and volume, and integrated it in oVirt.

Then, I was able to create a new storage domain in this new DC and
cluster, using one of the *gluster* FQDN's host. It went nicely.

BUT, when viewing the volume tab and brick details, the displayed
brick names are the hosts' DNS names, and NOT the hosts' GLUSTER DNS names.

I'm worrying about this, confirmed by what I read in the logs :

2016-08-19 14:46:30,484 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate
brick 'serv-vm-al04-data.sdis.isere.fr:/gluster/data/brick04
' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct
network as no gluster network found in cluster
'1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'
2016-08-19 14:46:30,492 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate
brick 'serv-vm-al05-data.sdis.isere.fr:/gluster/data/brick04
' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct
network as no gluster network found in cluster
'1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'
2016-08-19 14:46:30,500 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-100) [107dc2e3] Could not associate
brick 'serv-vm-al06-data.sdis.isere.fr:/gluster/data/brick04
' of volume '35026521-e76e-4774-8ddf-0a701b9eb40c' with correct
network as no gluster network found in cluster
'1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30'

[oVirt shell (connected)]# list clusters

id : 0001-0001-0001-0001-0045
name   : cluster51
description: Cluster d'alerte de test

id : 1c8e75a0-af3f-4e97-a8fb-2f7ef3ed9f30
name   : cluster52
description: Cluster d'alerte de test

[oVirt shell (connected)]#

"cluster52" is the recent cluster, and I do have a dedicated
gluster network, marked as gluster network, in the correct DC and
cluster.
The only point is that:
- Each host has its name ("serv-vm-al04") and a second name for
gluster ("serv-vm-al04-data").
- Using blahblahblah-data is correct from a gluster point of view.
- Maybe oVirt is disturbed at not being able to ping the gluster FQDN
(not routed) and is then throwing this error?


We currently have a limitation: if you use multiple FQDNs, oVirt cannot
associate them with the gluster brick correctly. This will be a problem only
when you try brick management from oVirt, i.e. try to remove or replace a
brick from oVirt. For monitoring brick status and detecting bricks, this is
not an issue, and you can ignore the error in the logs.


Adding Ramesh who has a patch to fix this .


Patch https://gerrit.ovirt.org/#/c/60083/ is posted to address this issue. But
it will work only if the oVirt Engine can resolve the FQDN
'serv-vm-al04-data.xx' to an IP address which is mapped to the gluster NIC
(the NIC with the gluster network) on the host.
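A quick way to test that resolution from the engine machine (standard NSS
lookup; the FQDN is taken from the engine logs above):

  getent hosts serv-vm-al04-data.sdis.isere.fr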


Sahina: Can you review the patch :-)

Regards,
Ramesh

-- 
Nicolas ECARNOT







Re: [ovirt-users] ovirt 4.0 installation

2016-08-17 Thread Ramesh Nachimuthu




- Original Message -
> From: "Piotr Kliczewski" 
> To: "knarra" 
> Cc: users@ovirt.org
> Sent: Wednesday, August 17, 2016 5:12:52 PM
> Subject: Re: [ovirt-users] ovirt 4.0 installation
> 
> 
> 
> On Wed, Aug 17, 2016 at 1:35 PM, knarra < kna...@redhat.com > wrote:
> 
> 
> 
> On 08/17/2016 04:57 PM, Nir Soffer wrote:
> 
> 
> On Wed, Aug 17, 2016 at 10:16 AM, knarra < kna...@redhat.com > wrote:
> 
> 
> Hi,
> 
> I see the below error logged in the vdsm.log file. Can someone help me
> understand what this error is, and do we have any bug for this error?
> 
> 
> This is not a failure. This line is logged when a client closes the connection.
> It can happen at any time, and from vdsm's perspective the closure occurred
> during reading data.
> 
> Please check the engine logs to understand why the connection was closed.
> 
> 
> 
> 
> 
> 
> 
> JsonRpc (StompReactor)::ERROR::2016-08-17
> 12:32:05,348::betterAsyncore::113::vds.dispatcher::(recv) SSL error during
> reading data: unexpected eof
> This means the client disconnected in an unclean way. Is this a hosted engine
> setup?
> 
> Nir
> 
> yes, this is a hosted engine setup.

I saw a similar error in vdsm-4.17.33-1.0 in a gluster-only setup when gluster 
nodes are managed by oVirt 4.0 as well as oVirt 3.5.

Kasturi, can you tell me the vdsm and oVirt engine versions used in your setup?


Regards,
Ramesh

> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fail to setup Hyperconverged Infrastructure using oVirt and Gluster

2016-07-07 Thread Ramesh Nachimuthu
Hi Dewey,

Looks like SSH login from the Engine VM to 'Host-01' is failing. Can you confirm 
that "PermitRootLogin without-password" is enabled on 'Host-01', where you are 
running "hosted-engine --deploy"?


Regards,
Ramesh

- Original Message -
> From: "Dewey Du" 
> To: "Scott" 
> Cc: "users" 
> Sent: Thursday, July 7, 2016 1:11:14 PM
> Subject: Re: [ovirt-users] Fail to setup Hyperconverged Infrastructure using 
> oVirt and Gluster
> 
> On the Engine VM, the command "engine-setup" executed successfully.
> 
> On Host-01, which is running the command "hosted-engine --deploy",
> the following message is always displayed:
> 
> Make a selection from the options below:
> (1) Continue setup - oVirt-Engine installation is ready and ovirt-engine
> service is up
> (2) Abort setup
> (3) Power off and restart the VM
> (4) Destroy VM and abort setup
> (1, 2, 3, 4)[1]: 1
> 
> My engine.log on Engine VM is attached below
> 
> 2016-07-05 15:55:54,086 ERROR
> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-17)
> [77e52cbc] Failed to authenticate session with host 'hosted_engine_1': SSH
> authentication to 'root@localhost.localdomain' failed. Please verify
> provided credentials. Make sure key is authorized at host
> 2016-07-05 15:55:54,086 WARN
> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-17)
> [77e52cbc] CanDoAction of action 'AddVds' failed for user admin@internal.
> Reasons: VAR__ACTION__ADD,VAR__TYPE__HOST,$server
> localhost.localdomain,VDS_CANNOT_AUTHENTICATE_TO_SERVER
> 2016-07-05 15:55:54,098 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
> task-17) [] Operation Failed: [Cannot add Host. SSH authentication failed,
> verify authentication parameters are correct (Username/Password, public-key
> etc.) You may refer to the engine.log file for further details.]
> 
> On Sun, Jul 3, 2016 at 11:27 PM, Scott < romra...@gmail.com > wrote:
> 
> 
> 
> Do you have root logins disabled in SSH? If I remember right, oVirt will use
> SSH keys once configured so you need "PermitRootLogin without-password" at a
> minimum.
> 
> The engine log and your auth/secure log on the host should probably give you
> some idea of what happened.
> 
> Scott
> 
> On Sun, Jul 3, 2016 at 10:13 AM Dewey Du < dewe...@gmail.com > wrote:
> 
> 
> 
> oVirt 3.6
> 
> # hosted-engine --deploy
> 
> [ ERROR ] Cannot automatically add the host to cluster Default: Cannot add
> Host. SSH authentication failed, verify authentication parameters are
> correct (Username/Password, public-key etc.) You may refer to the engine.log
> file for further details.
> 
> Has anyone encountered this issue before? Thx.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not associate gluster brick with correct network warning

2016-06-02 Thread Ramesh Nachimuthu



On 05/30/2016 02:00 PM, Roderick Mooi wrote:

Hi

Yes, I created the volume using "gluster volume create ..." prior to 
installing ovirt. Something I noticed is that there is no "gluster" 
bridge on top of the interface I selected for the "Gluster Management" 
network - could this be the problem?




oVirt is not able to associate the FQDN "glustermount.host1" with any 
of the network interfaces on the host. This is not a major problem. 
Everything will work except brick management from oVirt. You won't be 
able to do any brick-specific actions using oVirt.


Note: We are planning to remove the repeated warning message seen in the 
engine.log.



Regards,
Ramesh


Thanks,

Roderick

Roderick Mooi

Senior Engineer: South African National Research Network (SANReN)
Meraka Institute, CSIR

roder...@sanren.ac.za | +27 12 841 4111 | www.sanren.ac.za


On Fri, May 27, 2016 at 11:35 AM, Ramesh Nachimuthu 
<rnach...@redhat.com> wrote:


How did you create the volume? It looks like the volume was created
using FQDNs in the Gluster CLI.
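For comparison, a replica-3 volume created from the Gluster CLI with
dedicated storage FQDNs would look roughly like this (hostnames and brick
paths are illustrative, based on the warning quoted below):

gluster volume create data replica 3 \
    glustermount.host1:/gluster/data/brick \
    glustermount.host2:/gluster/data/brick \
    glustermount.host3:/gluster/data/brick
gluster volume start data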


Regards,
Ramesh

- Original Message -
> From: "Roderick Mooi" mailto:roder...@sanren.ac.za>>
> To: "users" mailto:users@ovirt.org>>
> Sent: Friday, May 27, 2016 2:34:51 PM
> Subject: [ovirt-users] Could not associate gluster brick with
correct network warning
>
> Good day
>
> I've setup a "Gluster Management" network in DC, cluster and all
hosts. It is
> appearing as "operational" in the cluster and all host networks look
> correct. But I'm seeing this warning continually in the engine.log:
>
> 2016-05-27 08:56:58,988 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-80) [] Could not associate brick
> 'glustermount.host1:/gluster/data/brick' of volume
> '7a25d2fb-1048-48d8-a26d-f288ff0e28cb' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-02b8'
>
> This is on ovirt 3.6.5.
>
> Can anyone assist?
>
> Thanks,
>
> Roderick Mooi
>
> Senior Engineer: South African National Research Network (SANReN)
> Meraka Institute, CSIR
>
> roder...@sanren.ac.za | +27 12 841 4111 | www.sanren.ac.za
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Could not associate gluster brick with correct network warning

2016-05-27 Thread Ramesh Nachimuthu
How did you create the volume? It looks like the volume was created using FQDNs in 
the Gluster CLI.


Regards,
Ramesh

- Original Message -
> From: "Roderick Mooi" 
> To: "users" 
> Sent: Friday, May 27, 2016 2:34:51 PM
> Subject: [ovirt-users] Could not associate gluster brick with correct network 
> warning
> 
> Good day
> 
> I've setup a "Gluster Management" network in DC, cluster and all hosts. It is
> appearing as "operational" in the cluster and all host networks look
> correct. But I'm seeing this warning continually in the engine.log:
> 
> 2016-05-27 08:56:58,988 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-80) [] Could not associate brick
> 'glustermount.host1:/gluster/data/brick' of volume
> '7a25d2fb-1048-48d8-a26d-f288ff0e28cb' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-02b8'
> 
> This is on ovirt 3.6.5.
> 
> Can anyone assist?
> 
> Thanks,
> 
> Roderick Mooi
> 
> Senior Engineer: South African National Research Network (SANReN)
> Meraka Institute, CSIR
> 
> roder...@sanren.ac.za | +27 12 841 4111 | www.sanren.ac.za
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] have to run gluster peer detach to remove a host form a gluster cluster.

2016-01-17 Thread Ramesh Nachimuthu

Hi Nathanaël Blanchet,

This could be because of a recent change we made for Gluster clusters:
we now stop all gluster processes when a gluster host is moved to Maintenance.
Can you attach the engine log for further analysis?


Regards,
Ramesh

On 01/15/2016 09:28 PM, Nathanaël Blanchet wrote:



Hi all,

When I want to remove a host from a gluster cluster, the engine tells me
that it fails to remove the host.
Once I run a manual gluster peer detach, the host is
successfully removed.
It seems to be a bug, doesn't it?
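For reference, the manual workaround described above amounts to something
like the following, run on one of the remaining gluster nodes (the hostname
is a placeholder):

# Confirm the peer is still part of the trusted pool
gluster peer status

# Detach the host that oVirt failed to remove
gluster peer detach <hostname>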



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Blog on Hyperconverged Infrastructure using oVirt and Gluster

2016-01-13 Thread Ramesh Nachimuthu
Sorry, there is a mistake in the link. Use 
http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html


Regards,
Ramesh

On 01/12/2016 05:10 PM, Ramesh Nachimuthu wrote:

Hi Folks,

  Have you ever wondered about a hyperconverged oVirt and Gluster setup? 
Here is an answer [1]. I wrote a blog explaining how to set up oVirt in 
a hyper-converged mode with Gluster.


[1] 
http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.htm

Regards,
Ramesh


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Blog on Hyperconverged Infrastructure using oVirt and Gluster

2016-01-12 Thread Ramesh Nachimuthu

Hi Folks,

  Have you ever wondered about a hyperconverged oVirt and Gluster setup? 
Here is an answer [1]. I wrote a blog explaining how to set up oVirt in a 
hyper-converged mode with Gluster.


[1] 
http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.htm 


Regards,
Ramesh
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-23 Thread Ramesh Nachimuthu



On 09/24/2015 11:28 AM, Nir Soffer wrote:
On Thu, Sep 24, 2015 at 7:37 AM, Ramesh Nachimuthu 
<rnach...@redhat.com> wrote:




On 09/24/2015 02:38 AM, Darrell Budic wrote:

This is a known issue in oVirt 3.5.x and below. It's been solved
in the upcoming oVirt 3.6.

Related to https://bugzilla.redhat.com/show_bug.cgi?id=1172905,
the fix involved setting up a special cgroup for the mount, but I
can't find the exact details atm.



I have vdsm 4.17.6-0.el7.centos already installed on the hosts. So
I am not sure the above bug 1172905 fixes this
correctly.


I think the root cause is the same - qemu cannot recover from 
glusterfs unmount, and the only way to resume the vm is to restart it 
with a fresh mount.


The mentioned bug handle the case where stopping vdsm kills the 
glusterfs mount helper. This issue is fixed in 3.6.


The issue here seems different. I suggest you open a bug so gluster 
guys can investigate this.




Seems like I am hitting the issue reported in bz 
https://bugzilla.redhat.com/show_bug.cgi?id=1171261.


Regards,
Ramesh


Nir



Regards,
Ramesh





On Sep 23, 2015, at 7:38 AM, Ramesh Nachimuthu
<rnach...@redhat.com> wrote:



On 09/22/2015 05:57 PM, Alastair Neil wrote:

You need to set the gluster.server-quorum-ratio to 51%



I did that. But I am still facing the same issue. The VM gets paused
when I do some I/O using fio on some disks backed by gluster. I
am not able to resume the VM after this. Now the only way is to
bring down the VM and run it again. It runs successfully on the
same host without any issue.

Regards,
Ramesh


On 22 September 2015 at 08:25, Ramesh Nachimuthu
<rnach...@redhat.com> wrote:



On 09/22/2015 05:43 PM, Alastair Neil wrote:

what are the gluster-quorum-type
and gluster.server-quorum-ratio  settings on the volume?



*cluster.server-quorum-type*:server
*cluster.quorum-type*:auto
*gluster.server-quorum-ratio is not set.*

One brick process is purposefully killed, but the remaining two
bricks are up and running.

Regards,
Ramesh


On 22 September 2015 at 06:24, Ramesh Nachimuthu
<rnach...@redhat.com> wrote:

Hi,

   I am not able to resume a VM which was paused
because of gluster client quorum issue. Here is what
happened in my setup.

1. Created a gluster storage domain which is backed by
gluster volume with replica 3.
2. Killed one brick process. So only two bricks are
running in replica 3 setup.
3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time got the following error in gluster
mount and VMs moved to paused state.
 " server 10.70.45.17:49217
<http://10.70.45.17:49217/> has not responded in the
last 42 seconds, disconnecting."
  "vmstore-replicate-0:
e16d1e40-2b6e-4f19-977d-e099f465dfc6: Failing WRITE as
quorum is not met"
  more gluster mount logs at
http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active and I am
able to write to the gluster file system.
7. When I try to resume the VM it doesn't work and I
got following error in vdsm log.
http://pastebin.com/aXiamY15


Regards,
Ramesh


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-23 Thread Ramesh Nachimuthu



On 09/24/2015 02:38 AM, Darrell Budic wrote:
This is a known issue in oVirt 3.5.x and below. It's been solved in 
the upcoming oVirt 3.6.


Related to https://bugzilla.redhat.com/show_bug.cgi?id=1172905, the 
fix involved setting up a special cgroup for the mount, but I can't 
find the exact details atm.




I have vdsm 4.17.6-0.el7.centos already installed on the hosts. So I am 
not sure the above bug 1172905 fixes this correctly.


Regards,
Ramesh



On Sep 23, 2015, at 7:38 AM, Ramesh Nachimuthu <rnach...@redhat.com> wrote:




On 09/22/2015 05:57 PM, Alastair Neil wrote:

You need to set the gluster.server-quorum-ratio to 51%



I did that. But I am still facing the same issue. The VM gets paused when 
I do some I/O using fio on some disks backed by gluster. I am not 
able to resume the VM after this. Now the only way is to bring down the 
VM and run it again. It runs successfully on the same host without any 
issue.


Regards,
Ramesh

On 22 September 2015 at 08:25, Ramesh Nachimuthu 
<rnach...@redhat.com> wrote:




On 09/22/2015 05:43 PM, Alastair Neil wrote:

what are the gluster-quorum-type
and gluster.server-quorum-ratio  settings on the volume?



*cluster.server-quorum-type*:server
*cluster.quorum-type*:auto
*gluster.server-quorum-ratio is not set.*

One brick process is purposefully killed, but the remaining two
bricks are up and running.

Regards,
Ramesh


On 22 September 2015 at 06:24, Ramesh Nachimuthu
 wrote:

Hi,

   I am not able to resume a VM which was paused because of
gluster client quorum issue. Here is what happened in my
setup.

1. Created a gluster storage domain which is backed by
gluster volume with replica 3.
2. Killed one brick process. So only two bricks are running
in replica 3 setup.
3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time got the following error in gluster mount
and VMs moved to paused state.
 " server 10.70.45.17:49217
<http://10.70.45.17:49217/> has not responded in the last
42 seconds, disconnecting."
  "vmstore-replicate-0:
e16d1e40-2b6e-4f19-977d-e099f465dfc6: Failing WRITE as
quorum is not met"
  more gluster mount logs at http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active and I am able
to write to the gluster file system.
7. When I try to resume the VM it doesn't work and I got
following error in vdsm log.
http://pastebin.com/aXiamY15


Regards,
Ramesh


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-23 Thread Ramesh Nachimuthu



On 09/22/2015 05:57 PM, Alastair Neil wrote:

You need to set the gluster.server-quorum-ratio to 51%



I did that. But I am still facing the same issue. The VM gets paused when I 
do some I/O using fio on some disks backed by gluster. I am not able to 
resume the VM after this. Now the only way is to bring down the VM and run 
it again. It runs successfully on the same host without any issue.
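For reference, the ratio was applied along these lines.
cluster.server-quorum-ratio is a glusterd-wide option, so it is set on the
special volume name "all"; the volume name in the second command is assumed
from the mount logs:

gluster volume set all cluster.server-quorum-ratio 51%

# Verify: reconfigured options show up in the volume info output
gluster volume info vmstore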


Regards,
Ramesh

On 22 September 2015 at 08:25, Ramesh Nachimuthu <rnach...@redhat.com> wrote:




On 09/22/2015 05:43 PM, Alastair Neil wrote:

what are the gluster-quorum-type and gluster.server-quorum-ratio
 settings on the volume?



*cluster.server-quorum-type*:server
*cluster.quorum-type*:auto
*gluster.server-quorum-ratio is not set.*

One brick process is purposefully killed, but the remaining two bricks
are up and running.

Regards,
Ramesh


On 22 September 2015 at 06:24, Ramesh Nachimuthu
<rnach...@redhat.com> wrote:

Hi,

   I am not able to resume a VM which was paused because of
gluster client quorum issue. Here is what happened in my setup.

1. Created a gluster storage domain which is backed by
gluster volume with replica 3.
2. Killed one brick process. So only two bricks are running
in replica 3 setup.
3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time got the following error in gluster mount
and VMs moved to paused state.
 " server 10.70.45.17:49217
<http://10.70.45.17:49217> has not responded in the last 42
seconds, disconnecting."
  "vmstore-replicate-0:
e16d1e40-2b6e-4f19-977d-e099f465dfc6: Failing WRITE as quorum
is not met"
  more gluster mount logs at http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active and I am able to
write to the gluster file system.
7. When I try to resume the VM it doesn't work and I got
following error in vdsm log.
http://pastebin.com/aXiamY15


Regards,
Ramesh


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-22 Thread Ramesh Nachimuthu



On 09/22/2015 05:43 PM, Alastair Neil wrote:
what are the gluster-quorum-type and gluster.server-quorum-ratio 
 settings on the volume?




*cluster.server-quorum-type*:server
*cluster.quorum-type*:auto
*gluster.server-quorum-ratio is not set.*

One brick process is purposefully killed, but the remaining two bricks are 
up and running.
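Killing a single brick for this kind of test is typically done by looking
up the brick PID and signalling it; a sketch, with the volume name assumed
from the mount logs:

# List brick processes and their PIDs
gluster volume status vmstore

# Kill one brick's glusterfsd process (PID taken from the output above)
kill -9 <brick-pid>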


Regards,
Ramesh

On 22 September 2015 at 06:24, Ramesh Nachimuthu <rnach...@redhat.com> wrote:


Hi,

   I am not able to resume a VM which was paused because of
gluster client quorum issue. Here is what happened in my setup.

1. Created a gluster storage domain which is backed by gluster
volume with replica 3.
2. Killed one brick process. So only two bricks are running in
replica 3 setup.
3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time got the following error in gluster mount and
VMs moved to paused state.
 " server 10.70.45.17:49217 <http://10.70.45.17:49217> has
not responded in the last 42 seconds, disconnecting."
  "vmstore-replicate-0: e16d1e40-2b6e-4f19-977d-e099f465dfc6:
Failing WRITE as quorum is not met"
  more gluster mount logs at http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active and I am able to write
to the gluster file system.
7. When I try to resume the VM it doesn't work and I got following
error in vdsm log.
http://pastebin.com/aXiamY15


Regards,
Ramesh


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Not able to resume a VM which was paused because of gluster quorum issue

2015-09-22 Thread Ramesh Nachimuthu

Hi,

   I am not able to resume a VM which was paused because of a gluster 
client quorum issue. Here is what happened in my setup.


1. Created a gluster storage domain which is backed by gluster volume 
with replica 3.
2. Killed one brick process. So only two bricks are running in replica 3 
setup.

3. Created two VMs
4. Started some IO using fio on both of the VMs
5. After some time got the following error in gluster mount and VMs 
moved to paused state.
 " server 10.70.45.17:49217 has not responded in the last 42 
seconds, disconnecting."
  "vmstore-replicate-0: e16d1e40-2b6e-4f19-977d-e099f465dfc6: 
Failing WRITE as quorum is not met"

  more gluster mount logs at http://pastebin.com/UmiUQq0F
6. After some time gluster quorum is active and I am able to write to 
the gluster file system.
7. When I try to resume the VM it doesn't work and I got following error 
in vdsm log.

  http://pastebin.com/aXiamY15


Regards,
Ramesh

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network statistics shown in ovirt doesn't look correct.

2015-09-08 Thread Ramesh Nachimuthu



On 09/07/2015 05:47 PM, Dan Kenigsberg wrote:

On Mon, Sep 07, 2015 at 04:51:34PM +0530, Ramesh Nachimuthu wrote:

Hi,

   I have a strange issue with the network traffic shown in the oVirt webadmin
portal. I have 2 10 Gb network cards and I have created a bond out of them. I
have used the 'iperf' command to generate 10 Gbps traffic and I can see that
getting reflected in my nagios monitoring. Also, the iperf command confirms that
it transferred the data at a rate of 9.38 Gbits/sec. But the oVirt UI shows
only 1500 Mbps. I am not sure why oVirt shows such low traffic. Is anyone
experiencing a similar problem?

VDSM Version: vdsm-4.16.20-1.3.el6rhs.x86_64
Ovirt release : both 3.5 and 3.6

I have attached the relevant screen shots here.

Can you provide the output of

   vdsClient -s 0 getVdsStats

when iperf is running?


Here is the output of vdsClient
http://ur1.ca/nporl



Could you see if a similar issue happens with a new 3.6 vdsm?


I have to re-install the machine with 3.6. I will update you after doing 
the same.


Regards,
Ramesh


Regards,
Dan.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network statistics shown in ovirt doesn't look correct.

2015-09-07 Thread Ramesh Nachimuthu

Hi,

  I have a strange issue with the network traffic shown in the oVirt 
webadmin portal. I have 2 10 Gb network cards and I have created a bond out 
of them. I have used the 'iperf' command to generate 10 Gbps traffic and I 
can see that getting reflected in my nagios monitoring. Also, the iperf 
command confirms that it transferred the data at a rate of 9.38 
Gbits/sec. But the oVirt UI shows only 1500 Mbps. I am not sure why oVirt 
shows such low traffic. Is anyone experiencing a similar problem?
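The traffic was generated with something along these lines (addresses and
options are illustrative, not the exact invocation used):

# On the receiving host
iperf -s

# On the sending host: 4 parallel streams over the bond for 60 seconds
iperf -c <receiver-bond-ip> -P 4 -t 60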


VDSM Version: vdsm-4.16.20-1.3.el6rhs.x86_64
Ovirt release : both 3.5 and 3.6

I have attached the relevant screen shots here.

Regards,
Ramesh
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on OFTC issues? #ovirt :Cannot, send to channel

2015-09-03 Thread Ramesh Nachimuthu
I too faced this problem suddenly. I solved it by registering my nick 
again using

"/msg NickServ REGISTER  "

Regards,
Ramesh

On 08/23/2015 01:43 AM, Greg Sheremeta wrote:

Happening to me too. Started out of nowhere -- I didn't change
anything with IRC.

On Fri, Aug 21, 2015 at 2:03 AM, Sahina Bose  wrote:

Hi all

When I send a message to #ovirt on OFTC , I get a response -  #ovirt :Cannot
send to channel

Anyone else facing this?

thanks
sahina
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster command [] failed on server

2015-09-03 Thread Ramesh Nachimuthu



On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote:

On the gluster node (server):
It is not a replicated setup; there is only one gluster node.

# gluster peer status
Number of Peers: 0



Strange.


Thanks

José


*De: *"Ramesh Nachimuthu" 
*Para: *supo...@logicworks.pt, Users@ovirt.org
*Enviadas: *Quinta-feira, 3 De Setembro de 2015 12:55:31
*Assunto: *Re: [ovirt-users] Gluster command [] failed on server

Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

I just installed Version 3.5.3.1-1.el7.centos, on centos 7.1, no HE.

for storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
   ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
   └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi
Hint: Some lines were ellipsized, use -l to show in full.


Everything was running until I needed to restart the node (host);
after that I was not able to make the host active. This is the
error message:
Gluster command [] failed on server


I also disabled the JSON protocol, but no success.

vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper)
return getHardwareInfo with {'status': {'message': 'Done', 'code': 0},
'info': {'systemProductName': 'PRIMERGY RX2520 M1',
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER',
'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper)
vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line
773, in _callmethod
raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is
operational.
return code: 1


supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1',
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER',
'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapp

Re: [ovirt-users] Gluster command [] failed on server

2015-09-03 Thread Ramesh Nachimuthu

Can you post the output of 'gluster peer status' on the gluster node?
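If that command itself fails the way the vdsm log below shows, a first
check is whether glusterd is actually running and reachable; a rough sketch:

systemctl status glusterd
/usr/sbin/gluster --mode=script peer status --xml   # the exact command vdsm runs
journalctl -u glusterd --since today                # look for startup errors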

Regards,
Ramesh

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

I just installed Version 3.5.3.1-1.el7.centos, on centos 7.1, no HE.

for storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
   ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
   └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi

Hint: Some lines were ellipsized, use -l to show in full.


Everything was running until I needed to restart the node (host); after 
that I was not able to make the host active. This is the error message:

Gluster command [] failed on server


I also disabled the JSON protocol, but no success.

vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper)
return getHardwareInfo with {'status': {'message': 'Done', 'code': 0},
'info': {'systemProductName': 'PRIMERGY RX2520 M1',
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER',
'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper)
vdsm exception occured

Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, 
in _callmethod

raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1


supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520
M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER',
'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd)
/usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd)
FAILED:  = '';  = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper

Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1



Any idea?

Thanks

José



Re: [ovirt-users] Ovirt/Gluster

2015-08-18 Thread Ramesh Nachimuthu
+ Ravi from gluster.

Regards,
Ramesh

- Original Message -
From: "Sander Hoentjen" 
To: users@ovirt.org
Sent: Tuesday, August 18, 2015 3:30:35 PM
Subject: [ovirt-users] Ovirt/Gluster

Hi,

We are looking for some easy-to-manage, self-contained VM hosting. oVirt 
with GlusterFS seems to fit that bill perfectly. I installed it and then 
started kicking the tires. First results looked promising, but now I 
can get a VM to pause indefinitely fairly easily:

My setup is 3 hosts that are in a Virt and Gluster cluster. Gluster is 
set up as replica-3. The gluster export is used as the storage domain for 
the VMs.

Now when I start the VM all is good, and performance is good enough, so we 
are happy. I then start bonnie++ to generate some load. I have a VM 
running on host 1, host 2 is SPM, and all 3 hosts are seeing some network 
traffic courtesy of gluster.

Now, for fun, suddenly the network on host3 goes bad (iptables -I OUTPUT 
-m statistic --mode random --probability 0.75 -j REJECT).
Some time later I see the guest has a small "hiccup"; I'm guessing that 
is when gluster decides host 3 is not allowed to play anymore. No big 
deal anyway.
After a while, letting only 25% of packets through just isn't good enough 
for oVirt anymore, so the host will be fenced. After a reboot *sometimes* 
the VM will be paused, and even after the gluster self-heal is complete it 
cannot be unpaused; it has to be restarted.

Is there anything I can do to prevent the VM from being paused?

Regards,
Sander

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt and Gluster

2015-07-29 Thread Ramesh Nachimuthu



On 07/29/2015 05:54 PM, Joop wrote:

On 29-7-2015 13:59, Jorick Astrego wrote:

For RHEV I don't have an answer.

But for anyone that cares, we've been running oVirt 3.5 and GlusterFS 
3.6 for about three months in production now. We run a Gluster 
cluster and a separate virtualization host cluster. The Gluster 
cluster is installed, configured and managed separately from oVirt, as 
there is no way to have a separate storage network in oVirt 3.5.


I run something comparable, but using split DNS to get a separate 
storage network, and also a storage cluster and a virt cluster. The 
storage is managed through oVirt, though, because of the split DNS.



This is being addressed in oVirt 3.6, which will support a gluster 
storage network. oVirt 3.6 also comes with a lot of gluster-specific 
features like 'Storage provisioning for Gluster', 'Geo-Replication 
Management', 'Gluster Volume Snapshot Management and Scheduling', 
'Storage network for gluster', etc.

Stay tuned for the oVirt 3.6 release :)


Regards,
Ramesh



There have been no specific incompatibility issues, so we are very 
happy. But as we don't use every feature, mileage may vary.

Question: do you use fuse mounts or libgfapi?

I use libgfapi in test, using special vdsm packages, and need to test 
whether rebooting gluster nodes has an impact on running VMs. Selecting a 
GlusterFS storage domain in oVirt isn't guaranteed to give you libgfapi!


Joop



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [ANN] oVirt 3.4.0 Test Day - Tomorrow Jan 23th

2014-01-22 Thread Ramesh Nachimuthu
Hi,

   After adding the node to the engine, it goes to a non-operational state with the 
error "Host 10.70.43.160 is compatible with versions (3.0,3.1,3.2,3.3) and 
cannot join Cluster Default which is set to version 3.4.". 

I have enabled the repo 
http://resources.ovirt.org/releases/3.4.0_pre/rpm/Fedora/$releasever/ on the host. 

  The following is the vdsm version installed on the F19 node:

[root@localhost ~]# rpm -qa | grep vdsm
vdsm-cli-4.14.1-2.fc19.noarch
vdsm-4.14.1-2.fc19.x86_64
vdsm-python-4.14.1-2.fc19.x86_64
vdsm-python-zombiereaper-4.14.1-2.fc19.noarch
vdsm-xmlrpc-4.14.1-2.fc19.noarch
vdsm-gluster-4.14.1-2.fc19.noarch


[root@localhost ~]# vdsClient -s 0 getVdsCaps
clusterLevels = ['3.0', '3.1', '3.2', '3.3']

Anything I am missing here?


Regards,
Ramesh
- Original Message -
From: "Sandro Bonazzola" 
To: "arch" , "engine-devel" , 
Users@ovirt.org
Sent: Wednesday, January 22, 2014 6:37:23 PM
Subject: [ANN] oVirt 3.4.0 Test Day - Tomorrow Jan 23th

Hi all,
tomorrow, Jan 23rd, we'll have the oVirt 3.4.0 test day.

On this day all relevant engineers will be online ready to support
any issues you find during install / operating this new release.

Just make sure you have 1 host or more to test drive the new release.
If you're curious to see how it works, this is your chance.

Thanks again for everyone who will join us tomorrow!

Location
#ovirt irc channel
Please communicate here to allow others to see any issues

What
In this test day you have a license to kill ;)
Follow the documentation to set up your environment, and test drive the new 
features.
Please remember we expect to see some issues, and anything you come up with 
will save you when you install the final release.
Remember to try daily tasks you'd usually do in the engine, to see that there 
are no regressions.
Write down the configuration you used (HW, console, etc.) in the report 
etherpad [1].

Documentation
Release notes: http://www.ovirt.org/OVirt_3.4.0_release_notes
Features pages links: http://bit.ly/17qBn6F
If you find errors in the wiki, please annotate them as well in the report 
etherpad [1].

Prerequisites / recommendations
Use CentOS or RHEL 6.5 only. 6.4 is unsupported due to various issues 
(sanlock, libvirt, etc).
Use Fedora 19 only. Fedora 20 is unsupported due to various issues (sos, 
jboss).

Latest RPMs
repository to be enabled for testing the release are listed in the release 
notes page [2].

NEW issues / reports
For any new issue, please update the reports etherpad [1]

Feature owners, please make sure:
your feature is updated and referenced on the release page [2].
you have testing instructions for your feature either on the test day page [3] 
or on your feature page.
your team's regression testing section is organized and up to date on the test 
day page [3].


[1] http://etherpad.ovirt.org/p/3.4-testday-1
[2] http://www.ovirt.org/OVirt_3.4.0_release_notes
[3] http://www.ovirt.org/OVirt_3.4_Test_Day


Thanks.

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Arch mailing list
a...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/arch
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users