[ovirt-devel] [vdsm] VmDevices rework

2015-04-28 Thread Martin Polednik
Hello everyone,

I have started working on a line of patches that deal with the current state
of VM devices in VDSM. There is a wiki page at [1] describing the issues,
phases, and final goals of this work. Additionally, there is a formal naming
system for code related to devices. I would love to hear your opinions and
comments about this effort!
(and please note that this is very long-term work)

[1] http://www.ovirt.org/Feature/VmDevices_rework

mpolednik


[ovirt-devel] [ANN] oVirt 3.5.2 Final Release is now available

2015-04-28 Thread Sandro Bonazzola

The oVirt team is pleased to announce that the oVirt 3.5.2 Final Release is now 
available as of April 28th 2015.

oVirt is an open source alternative to VMware vSphere, and provides an 
excellent KVM management interface for multi-node virtualization.
oVirt is available now for Fedora 20,
Red Hat Enterprise Linux 6.6, CentOS 6.6 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS 7.1 (or similar).

This release of oVirt includes numerous bug fixes. See the release notes [1] 
for a list of the new features and bugs fixed.

Please refer to the release notes [1] for installation and upgrade instructions.

A new oVirt Live ISO and oVirt Node ISO will be available soon as well [2].

Please note that mirrors [3] usually need about one day to synchronize.

Please refer to the release notes for known issues in this release.

[1] http://www.ovirt.org/OVirt_3.5.2_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


[ovirt-devel] Engine Broken - The column name gluster_tuned_profile was not found in this ResultSet

2015-04-28 Thread Christopher Pereira

Hi, something broke Engine's Database in master:

2015-04-28 05:53:15,959 ERROR 
[org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC 
service thread 1-4) [] Failed to initialize backend: 
org.jboss.weld.exceptions.WeldException: WELD-49 Unable to invoke 
[method] @PostConstruct private 
org.ovirt.engine.core.vdsbroker.ResourceManager.init() on 
org.ovirt.engine.core.vdsbroker.ResourceManager@38e3648c
at 
org.jboss.weld.bean.AbstractClassBean.defaultPostConstruct(AbstractClassBean.java:518) 
[weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
at 
org.jboss.weld.bean.ManagedBean$ManagedBeanInjectionTarget.postConstruct(ManagedBean.java:174) 
[weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:291) 
[weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
at 
org.jboss.weld.context.AbstractContext.get(AbstractContext.java:107) 
[weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
at 
org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:616) 
[weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
at 
org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:643) 
[weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
at 
org.ovirt.engine.core.di.Injector.instanceOf(Injector.java:73) 
[vdsbroker.jar:]
at org.ovirt.engine.core.di.Injector.get(Injector.java:58) 
[vdsbroker.jar:]
at 
org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean.create(InitBackendServicesOnStartupBean.java:75) 
[bll.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
[rt.jar:1.7.0_79]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
[rt.jar:1.7.0_79]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
[rt.jar:1.7.0_79]
at java.lang.reflect.Method.invoke(Method.java:606) 
[rt.jar:1.7.0_79]
at 
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptorFactory$ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptorFactory.java:130) 
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:73) 
[jboss-as-weld-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.as.ee.component.ManagedReferenceInterceptorFactory$ManagedReferenceInterceptor.processInvocation(ManagedReferenceInterceptorFactory.java:95) 
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) 
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:228) 
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.as.ejb3.tx.CMTTxInterceptor.requiresNew(CMTTxInterceptor.java:333) 
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.as.ejb3.tx.SingletonLifecycleCMTTxInterceptor.processInvocation(SingletonLifecycleCMTTxInterceptor.java:56) 
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) 
[jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) 
[jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
at 
org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45) 
[jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorConte

Re: [ovirt-devel] Engine Broken - The column name gluster_tuned_profile was not found in this ResultSet

2015-04-28 Thread Roy Golan

On 04/28/2015 11:56 AM, Christopher Pereira wrote:

Hi, something broke Engine's Database in master:

[... full stack trace snipped; identical to the original report above ...]

Re: [ovirt-devel] Engine Broken - The column name gluster_tuned_profile was not found in this ResultSet

2015-04-28 Thread Christopher Pereira

On 28-04-2015 6:08, Roy Golan wrote:

On 04/28/2015 11:56 AM, Christopher Pereira wrote:

Hi, something broke Engine's Database in master:

2015-04-28 05:53:15,959 ERROR 
[org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC 
service thread 1-4) [] Failed to initialize backend: 
org.jboss.weld.exceptions.WeldException: WELD-49 Unable to invoke 
[method] @PostConstruct private 
org.ovirt.engine.core.vdsbroker.ResourceManager.init() on 
org.ovirt.engine.core.vdsbroker.ResourceManager@38e3648c

[...]
Caused by: org.postgresql.util.PSQLException: The column name 
gluster_tuned_profile was not found in this ResultSet.
at 
org.postgresql.jdbc2.AbstractJdbc2ResultSet.findColumn(AbstractJdbc2ResultSet.java:2542)
at 
org.postgresql.jdbc2.AbstractJdbc2ResultSet.getString(AbstractJdbc2ResultSet.java:2385)
at 
org.jboss.jca.adapters.jdbc.WrappedResultSet.getString(WrappedResultSet.java:1381)
at 
org.ovirt.engine.core.dao.VdsGroupDAODbFacadeImpl$VdsGroupRowMapper.mapRow(VdsGroupDAODbFacadeImpl.java:306) 
[dal.jar:]
at 
org.ovirt.engine.core.dao.VdsGroupDAODbFacadeImpl$VdsGroupRowMapper.mapRow(VdsGroupDAODbFacadeImpl.java:256) 
[dal.jar:]
at 
org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92) 
[spring-jdbc.jar:3.1.1.RELEASE]
at 
org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:1) 
[spring-jdbc.jar:3.1.1.RELEASE]
at 
org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:649) 
[spring-jdbc.jar:3.1.1.RELEASE]
at 
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:587) 
[spring-jdbc.jar:3.1.1.RELEASE]

... 68 more

Probably one of the packaging/dbscripts/upgrade scripts failed or didn't run.

Try to run this:

packaging/dbscripts/upgrade/03_06_1260_add_tuned_profile_column_to_vds_groups.sql

and vds_groups_sp.sql must be run again.
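
For reference, a minimal sketch of applying those two scripts by hand with
psql — the database name, user, and password handling are assumptions, and
the paths assume the installed layout under /usr/share/ovirt-engine/dbscripts:

    # sketch only: apply the missing upgrade script, then re-create the
    # stored procedures (adjust db name, user and paths to your setup)
    cd /usr/share/ovirt-engine/dbscripts
    export PGPASSWORD=...
    psql -U engine -d engine -f upgrade/03_06_1260_add_tuned_profile_column_to_vds_groups.sql
    psql -U engine -d engine -f vds_groups_sp.sql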


1) Adding the field 'gluster_tuned_profile' to 'vds_groups' didn't fix 
the problem.


2) FYI: "select * from schema_version order by id desc limit 1" was
reporting:

version  | 03061240
script   | upgrade/03_06_1240_revert_03061210_add_vm_host_device_commands.sql


3) I manually executed the remaining upgrade scripts, but that didn't fix
the problem.


4) Then I tried:

   cd /usr/share/ovirt-engine/dbscripts
   export PGPASSWORD=...
   ./schema.sh -u engine -d engine -c refresh -v

but it failed with:

   Creating views...
   psql:./create_views.sql:445: ERROR: column
   vm_device.is_using_scsi_reservation does not exist
   LINE 9: vm_device.is_using_scsi_reservation
   FATAL: Cannot execute sql command: --file=./create_views.sql

5) This field is defined in '03_06_1270_add_uses_scsi_reservation.sql',
which I supposedly executed before (but probably really did not).
I manually added "alter table vm_device add column
is_using_scsi_reservation BOOLEAN NOT NULL DEFAULT FALSE;"
('fn_db_add_column' was not defined at that point).
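
(For comparison, the shipped upgrade scripts normally go through the
idempotent helper defined in common_sp.sql rather than a raw ALTER TABLE;
a sketch of that call, assuming the usual fn_db_add_column(table, column,
definition) signature and that common_sp.sql has been loaded first:)

    -- sketch: add the column only if it is missing, via the common_sp.sql helper
    SELECT fn_db_add_column('vm_device', 'is_using_scsi_reservation',
                            'BOOLEAN NOT NULL DEFAULT FALSE');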


6) Then I executed "./schema.sh -u engine -d engine -c refresh -v" again,
and this time it ran to the end and fixed the problem.

[ovirt-devel] "Please activate the master Storage Domain first"

2015-04-28 Thread Christopher Pereira
The DC storage master domain is on an (unrecoverable) storage on a remote
dead host.

Engine is automatically setting another storage as the "Data (Master)".
Seconds later, the unrecoverable storage is marked as "Data (Master)" again.
There is no way to start the Datacenter.

Both storages are gluster. The old (unrecoverable) one worked fine as a 
master.


Any hint?

Logs:

   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::592::Storage.TaskManager.Task::(_updateState)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::moving from state init ->
 state preparing
   Thread-32620::INFO::2015-04-28
   16:34:02,508::logUtils::48::dispatcher::(wrapper) Run and protect:
   getAllTasksStatuses(spUUID=None, options=None)
   Thread-32620::ERROR::2015-04-28
   16:34:02,508::task::863::Storage.TaskManager.Task::(_setError)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::Unexpected error
   Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 870, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2202, in
   getAllTasksStatuses
raise se.SpmStatusError()
   SpmStatusError: Not SPM: ()
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::882::Storage.TaskManager.Task::(_run)
Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::Task._run:
   bf487090-8d62-4b42-bfde-93574a8e1486 () {} failed - stopping task
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::1214::Storage.TaskManager.Task::(stop)
Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::stopping in state
   preparing (force False)
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::990::Storage.TaskManager.Task::(_decref)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::ref 1 aborting True
   Thread-32620::INFO::2015-04-28
   16:34:02,508::task::1168::Storage.TaskManager.Task::(prepare)
Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::aborting: Task is
   aborted: 'Not SPM' - code 654
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::1173::Storage.TaskManager.Task::(prepare)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::Prepare: aborted: Not SPM
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::990::Storage.TaskManager.Task::(_decref)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::ref 0 aborting True
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::925::Storage.TaskManager.Task::(_doAbort)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::Task._doAbort: force False
   Thread-32620::DEBUG::2015-04-28
   
16:34:02,508::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
   Owner.cancelAll requests {}
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::592::Storage.TaskManager.Task::(_updateState)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::moving from state
   preparing -> state aborting
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::547::Storage.TaskManager.Task::(__state_aborting) 
Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::_aborting:
   recover policy none
   Thread-32620::DEBUG::2015-04-28
   16:34:02,508::task::592::Storage.TaskManager.Task::(_updateState)
   Task=`bf487090-8d62-4b42-bfde-93574a8e1486`::moving from state
   aborting -> state failed
   Thread-32620::DEBUG::2015-04-28
   
16:34:02,508::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
   Owner.releaseAll requests {} resources {}
   Thread-32620::DEBUG::2015-04-28
   
16:34:02,508::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
   Owner.cancelAll requests {}
   Thread-32620::ERROR::2015-04-28
   16:34:02,509::dispatcher::76::Storage.Dispatcher::(wrapper)
   {'status': {'message': 'Not SPM: ()', 'code': 654}}
   Thread-32620::DEBUG::2015-04-28
   16:34:02,509::stompReactor::158::yajsonrpc.StompServer::(send)
   Sending response
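
For what it's worth, the host-side view of the SPM state can also be checked
directly with vdsClient (a sketch; the storage pool UUID is a placeholder for
your DC's pool):

    # sketch: list the pools this host is connected to, then query SPM status
    vdsClient -s 0 getConnectedStoragePoolsList
    vdsClient -s 0 getSpmStatus <spUUID>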


[ovirt-devel] VDSM - sampling.py - remove() called without previous add()

2015-04-28 Thread Christopher Pereira

Hi,

In sampling.py, remove() is being called without calling add() before, 
which throws:


   JsonRpc (StompReactor)::DEBUG::2015-04-28
   17:35:55,061::stompReactor::94::Broker.StompAdapter::(handle_frame)
   Handling message 
   Thread-37401::DEBUG::2015-04-28
   17:35:55,062::__init__::445::jsonrpc.JsonRpcServer::(_serveRequest)
   Calling 'VM.destroy' in bridge with {u'vmID':
   u'6ec9c0a0-2879-4bfe-9a79-92471881ebfe'}
   JsonRpcServer::DEBUG::2015-04-28
   17:35:55,062::__init__::482::jsonrpc.JsonRpcServer::(serve_requests)
   Waiting for request
   Thread-37401::INFO::2015-04-28
   17:35:55,062::API::334::vds::(destroy) vmContainerLock acquired by
   vm 6ec9c0a0-2879-4bfe-9a79-92471881ebfe
   Thread-37401::DEBUG::2015-04-28
   17:35:55,062::vm::3513::vm.Vm::(destroy)
   vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::destroy Called
   Thread-37401::INFO::2015-04-28
   17:35:55,062::vm::3444::vm.Vm::(releaseVm)
   vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::Release VM resources
   Thread-37401::WARNING::2015-04-28
   17:35:55,062::vm::375::vm.Vm::(_set_lastStatus)
   vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::trying to set state to
   Powering down when already Down
   Thread-37401::ERROR::2015-04-28
   17:35:55,063::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest)
   Internal server error
   Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
   line 464, in _serveRequest
res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 273, in _dynamicMethod
result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 339, in destroy
res = v.destroy()
  File "/usr/share/vdsm/virt/vm.py", line 3515, in destroy
result = self.doDestroy()
  File "/usr/share/vdsm/virt/vm.py", line 3533, in doDestroy
return self.releaseVm()
  File "/usr/share/vdsm/virt/vm.py", line 3448, in releaseVm
sampling.stats_cache.remove(self.id)
  File "/usr/share/vdsm/virt/sampling.py", line 428, in remove
if vmid in self._vm_last_timestamp.keys():
   KeyError: u'6ec9c0a0-2879-4bfe-9a79-92471881ebfe'
   Thread-37401::DEBUG::2015-04-28
   17:35:55,063::stompReactor::158::yajsonrpc.StompServer::(send)
   Sending response

In file '/usr/share/vdsm/virt/sampling.py':

    def add(self, vmid):
        """
        Warm up the cache for the given VM.
        This is to avoid races during the first sampling and the first
        reporting, which may result in a VM wrongly reported as
        unresponsive.
        """
        with self._lock:
            self._vm_last_timestamp[vmid] = self._clock()

    def remove(self, vmid):
        """
        Remove any data from the cache related to the given VM.
        """
        with self._lock:
            if vmid in self._vm_last_timestamp.keys():   # <- I patched here as a workaround
                del self._vm_last_timestamp[vmid]
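
A slightly tidier variant of that workaround is dict.pop() with a default,
which drops the entry without ever raising KeyError (a sketch of the same
idea only; not necessarily the approach taken by the upstream fix):

    def remove(self, vmid):
        """
        Remove any data from the cache related to the given VM.
        Tolerates a vmid that was never add()ed, e.g. destroy() of a VM
        whose creation never completed.
        """
        with self._lock:
            # pop() with a default is a no-op for an unknown vmid
            self._vm_last_timestamp.pop(vmid, None)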


Re: [ovirt-devel] VDSM - sampling.py - remove() called without previous add()

2015-04-28 Thread Nir Soffer
> In sampling.py, remove() is being called without calling add() before, which
> throws:

Fixed in master, please update:
https://gerrit.ovirt.org/40223
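
(If you need the fix before it reaches a build, a sketch of pulling the
change straight from gerrit — the patchset number and the anonymous URL form
are guesses, so adjust as needed:)

    # gerrit change refs follow refs/changes/<last-2-digits>/<change>/<patchset>
    git fetch https://gerrit.ovirt.org/vdsm refs/changes/23/40223/1
    git cherry-pick FETCH_HEAD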

Nir


Re: [ovirt-devel] "Please activate the master Storage Domain first"

2015-04-28 Thread Nir Soffer
> The DC storage master domain is on a (unrecoverable) storage on a remote dead
> host.
> Engine is automatically setting another storage as the "Data (Master)".
> Seconds later, the unrecoverable storage is marked as "Data (Master)" again.
> There is no way to start the Datacenter.
> 
> Both storages are gluster. The old (unrecoverable) one worked fine as a
> master.

This may be related to this bug:
https://bugzilla.redhat.com/1183977.
Are you using latest engine?
 
> Any hint?

If one gluster node dying brings down your data center, your gluster is
probably not set up correctly. With proper replication, everything should
keep working after a storage node dies.
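
(For illustration only — the reference page below is authoritative — a
replica 3 gluster volume for VM storage is typically created along these
lines; the volume name, hosts, and brick paths are placeholders:)

    # sketch: three-way replicated volume so a single node failure is tolerated
    gluster volume create vmstore replica 3 \
        host1:/bricks/vmstore host2:/bricks/vmstore host3:/bricks/vmstore
    # apply the virt option group commonly recommended for VM image workloads
    gluster volume set vmstore group virt
    gluster volume start vmstore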

Please check this for the recommended configuration:
http://www.ovirt.org/Gluster_Storage_Domain_Reference

Nir


Re: [ovirt-devel] "Please activate the master Storage Domain first"

2015-04-28 Thread Christopher Pereira


On 28-04-2015 18:14, Nir Soffer wrote:

The DC storage master domain is on a (unrecoverable) storage on a remote dead
host.
Engine is automatically setting another storage as the "Data (Master)".
Seconds later, the unrecoverable storage is marked as "Data (Master)" again.
There is no way to start the Datacenter.

Both storages are gluster. The old (unrecoverable) one worked fine as a
master.

This may be related to this bug:
https://bugzilla.redhat.com/1183977.


Ok. I added a comment and explained the issue in more detail on BZ.


Are you using latest engine?


Yes, 
ovirt-engine-3.6.0-0.0.master.20150427175110.git61dec8c.el7.centos.noarch



Any hint?

If one gluster node dying brings down your data center, your gluster is
probably not set up correctly. With proper replication, everything should
keep working after a storage node dies.


Right, in theory vdsm, ovirt-engine and gluster should all be stable
enough that the Master Storage Domain is always alive.
Besides, oVirt DC admins should know that a Master Storage Domain cannot
be removed or firewalled out of the DC without losing the whole DC.
From another point of view, oVirt should be rock solid even when the
Master Storage Domain goes down.
It should not rely on a single SD but choose another available SD as the
new master SD, and that seems to be the way it is implemented (though it
does not always work).
Expected result: the alive SD should become the new MSD to reactivate
the DC.
Issue: Engine tries to set the alive SD as the new MSD but fails
without giving a reason.



Please check this for the recommended configuration:
http://www.ovirt.org/Gluster_Storage_Domain_Reference

Thanks. Yes, we are applying replica 3 in "production".
In our lab, funny things happen all the time with the master nightly
builds and the latest gluster builds, but this helps us test and fix
issues on the run and generate extreme test cases, making oVirt more robust.


Regards,
Chris


Re: [ovirt-devel] Maintainer rights on vdsm - ovirt-3.5-gluster

2015-04-28 Thread Sahina Bose


On 04/20/2015 04:36 PM, Dan Kenigsberg wrote:

On Mon, Apr 20, 2015 at 03:20:18PM +0530, Sahina Bose wrote:

Hi!

On the vdsm branch "ovirt-3.5-gluster", could you provide merge rights to
Bala (barum...@redhat.com) ?

+1 from me.

ovirt-3.5-gluster needs a rebase on top of the current ovirt-3.5


Hi!

Is there a 'Push' right that needs to be enabled on this branch as well?

Currently, rebasing and pushing is failing with the following:

remote: Branch refs/heads/ovirt-3.5-gluster:
remote: You are not allowed to perform this operation.
remote: To push into this reference you need 'Push' rights.
remote: User: bala
remote: Please read the documentation and contact an administrator
remote: if you feel the configuration is incorrect
remote: Processing changes: refs: 1, done
To ssh://b...@gerrit.ovirt.org:29418/vdsm.git
 ! [remote rejected] ovirt-3.5-gluster -> ovirt-3.5-gluster (prohibited 
by Gerrit)
error: failed to push some refs to 
'ssh://b...@gerrit.ovirt.org:29418/vdsm.git'
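
(For whoever grants this: in Gerrit the branch permission usually
corresponds to a stanza like the sketch below in the project's
refs/meta/config project.config; the group name is just a placeholder, and
an admin can equally grant it through the web UI:)

    [access "refs/heads/ovirt-3.5-gluster"]
        push = group vdsm-gluster-maintainers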

