Re: [ovirt-devel] Test day: help testing hosted engine on ovirt node

2014-07-01 Thread Fabian Deutsch
- Original Message -
> Hi,
> I was assigned to test this topic, but I don't see any info on how to start;
> looking at the wiki: http://www.ovirt.org/Node_Hosted_Engine there is no
> info,
> nor on the HE how-to wiki: http://www.ovirt.org/Hosted_Engine_Howto
> 
> Should I build the node myself on Fedora, and then run the hosted engine
> setup as described in the how-to?
> What is the expected flow for this, for a user that wants to start using
> oVirt with hosted engine and oVirt Node?

Hey Omer,

sadly we just noted today that the rpms for the HE plugin are missing, and that 
the current oVirt Node iso does not contain the necessary bits.

We hope to have a testable ISO available soon - as in tomorrow or Thursday.

Please watch the devel@ and users@ lists for updates.

Thanks
fabian
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] oVirt 3.5 test day 1 results

2014-07-01 Thread Douglas Schilling Landgraf

Hi,

This time I have tested the below RFE:

[RFE] Change the "Slot" field to "Service Profile" when cisco_ucs is 
selected as the fencing type.

https://bugzilla.redhat.com/show_bug.cgi?id=1090803

Test Data
===
Running oVirt 3.5 with Power Management enabled on hosts, when selecting
type cisco_ucs the Slot field gets replaced by Service Profile, as the RFE
requested. In the same test under 3.4 the field is not replaced.

I would say this RFE is 100% accomplished.



--
Cheers
Douglas
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] oVirt 3.5 test day 1 results

2014-07-01 Thread Nir Soffer
Hi all,

I tested today [RFE] replace XML-RPC communication (engine-vdsm) with json-rpc 
based on bidirectional transport

First I upgraded ovirt-3.4 stable engine to ovirt-3.5 - ok
Then I upgraded 4 hosts to latest vdsm - ok
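(Roughly, with the 3.5 release rpm already installed on each machine, the
upgrade boils down to:)

  engine-setup        # on the engine, performs the 3.4 -> 3.5 upgrade
  yum update "vdsm*"  # on each host, then reactivate the host from webadmin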

I upgraded 2 data centers to cluster version 3.5:
- 2 Fedora 19 hosts with 30 iSCSI storage domains - ok
- 2 RHEL 6.5 hosts with 45 NFS storage domains - failed
  I had to remove the hosts and the virtual machines to complete
  the upgrade [1]

Then I removed the hosts and added them back (to configure jsonrpc), and
set up one host using jsonrpc and the other using xmlrpc - ok

After moving the hosts to maintenance mode and starting them back, I found
that the host using jsonrpc was stuck in "Unassigned" state [2],[3].

The errors in the vdsm log were not clear enough. After improving this [4],
I could fix it with a one-line patch [5].
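(While debugging, a quick way to watch for these failures on the host,
assuming the default vdsm log location:)

  tail -f /var/log/vdsm/vdsm.log | grep -iE 'jsonrpc|Traceback|ERROR'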

Finally, when I had a working system, I ran some sanity tests:
- start/stop vm - ok
- create vm from template - ok
- migrate vms between two hosts concurrently (one host using xmlrpc, the
  other using jsonrpc) - ok

Then I tried to test creating a template from a vm, but I had low disk space
on that storage domain. So I tried to extend the domain, which would be a
useful test as well.

But it turns out that you cannot create or edit a block domain when using
jsonrpc [6].

Looking at the logs, I also found that shutting down the protocol detector fails [7]

Summary:

- upgrade is broken in some cases - critical
- jsonrpc is not ready yet
- jsonrpc needs a lot of additional testing - for the next test day I suggest
  one tester from each team (virt, storage, networking, sla?) test jsonrpc
  with the relevant flows.

[1] https://bugzilla.redhat.com/1114994
Cannot edit cluster after upgrade from version 3.4 to 3.5 because cpu type 
(Intel Haswell) does not match

[2] https://bugzilla.redhat.com/1115033
StoragePool_disconnect: disconnect() takes exactly 4 arguments

[3] https://bugzilla.redhat.com/1115044
Host stuck in "Unassigned" state when using jsonrpc and disconnection from
pool failed

[4] http://gerrit.ovirt.org/29457 
bridge: Show more info when method call fail

[5] http://gerrit.ovirt.org/29465
api: Make remove optional

[6] https://bugzilla.redhat.com/show_bug.cgi?id=1115152
Cannot edit or create block storage domain when using jsonrpc

[7] https://bugzilla.redhat.com/1115104
Shutting down protocol detector fails

Nir
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] oVirt 3.5 test day results

2014-07-01 Thread Tal Nisan

Hi,
Today I have tested the following features:

1090798 - [RFE] Admin GUI - Add host uptime information to the "General" tab

1108861 - [RFE] Support logging of commands parameters

1090808 - [RFE] Ability to dismiss alerts and events from web-admin portal


*Results:*


1090798 - [RFE] Admin GUI - Add host uptime information to the "General" tab


The host boot time appears on the host General subtab (see attached
screenshot). Note that the boot time shown is in the time zone of the host;
*perhaps it would be wise to add documentation about this, as the host and
the engine might be in two different time zones* (FYI Dima)
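(To illustrate the possible confusion, shell illustration only:)

  date          # host-local time - what the General subtab shows
  TZ=UTC date   # the same instant as an engine running in UTC would show it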




1108861 - [RFE] Support logging of commands parameters

When changing the oVirt log threshold to DEBUG, all the commands tested
included a dump of their parameters, for instance:


2014-07-01 14:45:59,908 INFO 
[org.ovirt.engine.core.bll.AddImageFromScratchCommand] 
(http--0.0.0.0-8080-1) [766feacc] Running command: 
AddImageFromScratchCommand(MasterVmId = 
d2e5e41f-241b-4832-8f85-85382216bfa1, DiskInfo = 
org.ovirt.engine.core.common.businessentities.DiskImage@169bd050, 
ShouldRemainIllegalOnFailedExecution = false, ImageId = 
----, VmSnapshotId = 
27ea8f58-6742-4075-b44f-349b7556177c, DiskAlias = mlip_Disk3, 
DestinationImageId = ----, 
OldLastModifiedValue = null, ImageGroupID = 
----, ImportEntity = false, LeaveLocked 
= false, Description = null, StorageDomainId = 
0355997e-5b39-48ff-92aa-6ffb2d91e526, QuotaId = null, IsInternal = 
false, VdsId = null, StoragePoolId = 
9ada25ba-5156-48a9-a995-08ac9882abc6, ForceDelete = false) internal: 
true. Entities affected :  ID: 0355997e-5b39-48ff-92aa-6ffb2d91e526 
Type: Storage
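(For anyone reproducing this: one way to flip the engine to DEBUG in a
3.5-era setup is through the JBoss logging config; the file path below is an
assumption based on the default layout:)

  # set the root logger to <level name="DEBUG"/> in the logging subsystem
  vi /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.xml.in
  service ovirt-engine restart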



1090808 - [RFE] Ability to dismiss alerts and events from web-admin portal


The Alerts tab included an X icon that, upon click, made the alert
disappear; the right-mouse-button context menu included a Dismiss menu item
that did the same, and a Clear All button that restored all dismissed
alerts.


*Two notes* (FYI Ravi):
1. The original bug description refers to alerts & events, but the dismiss
option exists in webadmin only for alerts and not for events. Was this on
purpose?
2. Although it was not explained in the bug, the Clear All button is, to my
understanding, supposed to dismiss all alerts; instead it restores all the
dismissed alerts and makes them reappear. Is this the wanted behavior?
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Piotr Kliczewski
On Tue, Jul 1, 2014 at 4:24 PM, Kanagaraj Mayilsamy  wrote:
> Can you try moving SELinux to Permissive or Disabled and see?
>

I set SELinux to permissive, restarted the glusterd service, and got:

[root@f20 ~]# gluster peer status
Number of Peers: 0

I opened BZ for it: https://bugzilla.redhat.com/show_bug.cgi?id=1115091
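For reference, the exact sequence was roughly:

  setenforce 0               # Permissive until the next reboot
  getenforce                 # should now print "Permissive"
  service glusterd restart
  gluster peer status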

>
> - Original Message -
>> From: "Piotr Kliczewski" 
>> To: "Kanagaraj Mayilsamy" 
>> Cc: devel@ovirt.org
>> Sent: Tuesday, July 1, 2014 6:56:45 PM
>> Subject: Re: [ovirt-devel] Test day: gluster install
>>
>> [root@f20 ~]# service glusterd status
>> Redirecting to /bin/systemctl status  glusterd.service
>> glusterd.service - GlusterFS, a clustered file-system server
>>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
>>Active: active (running) since Tue 2014-07-01 11:12:29 CEST; 4h 9min ago
>>  Main PID: 31056 (glusterd)
>>CGroup: /system.slice/glusterd.service
>>└─31056 /usr/sbin/glusterd -p /run/glusterd.pid
>>
>> Jul 01 11:12:29 f20.example.com systemd[1]: Started GlusterFS, a
>> clustered file-system server.
>> Jul 01 11:12:29 f20.example.com python[31062]: SELinux is preventing
>> /usr/sbin/glusterfsd from write access on the sock_file .
>>
>>*  Plugin catchall
>> (100. confidence) suggests   **...
>> Hint: Some lines were ellipsized, use -l to show in full.
>> [root@f20 ~]# gluster peer status
>> Connection failed. Please check if gluster daemon is operational.
>>
>> On Tue, Jul 1, 2014 at 3:07 PM, Kanagaraj Mayilsamy 
>> wrote:
>> >
>> >
>> > - Original Message -
>> >> From: "Piotr Kliczewski" 
>> >> To: "Kanagaraj Mayilsamy" 
>> >> Cc: devel@ovirt.org
>> >> Sent: Tuesday, July 1, 2014 3:52:59 PM
>> >> Subject: Re: [ovirt-devel] Test day: gluster install
>> >>
>> >> On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
>> >>  wrote:
>> >> > This can happen if glusterd service is down.
>> >> >
>> >> > What does "service glusterd status" say?
>> >> >
>> >> > If you find this down, start it by "service glusterd start"
>> >> >
>> >>
>> >> I checked status of this service and it was active.
>> >
>> > What's the output of "gluster peer status"?
>> >
>> >
>> >>
>> >> >
>> >> > Thanks,
>> >> > Kanagaraj
>> >> >
>> >> > - Original Message -
>> >> >> From: "Piotr Kliczewski" 
>> >> >> To: devel@ovirt.org
>> >> >> Sent: Tuesday, July 1, 2014 3:00:29 PM
>> >> >> Subject: [ovirt-devel]  Test day: gluster install
>> >> >>
>> >> >> I started to test gluster-related features and noticed an issue after
>> >> >> installation.
>> >> >> I performed the following steps on my f20 using xmlrpc:
>> >> >> 1. Installed ovirt 3.5 repo.
>> >> >> 2. Installed engine
>> >> >> 3. Installed vdsm on the same host - status UP
>> >> >> 4. Removed vdsm
>> >> >> 5. Enabled gluster service
>> >> >> 6. Installed vdsm again (tried several times with the same result)
>> >> >>
>> >> >> Here is the output that I get:
>> >> >> I can see glusterd and glusterfsd services being active.
>> >> >>
>> >> >> Engine:
>> >> >> 2014-07-01 10:38:53,722 WARN
>> >> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> >> >> (org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
>> >> >> Call Stack: null, Custom Event ID: -1, Message: Host fedora's
>> >> >> following network(s) are not synchronized with their Logical Network
>> >> >> configuration: ovirtmgmt.
>> >> >>
>> >> >> vdsm:
>> >> >>
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift-object',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift-plugin',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift-account',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift-proxy',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift-doc',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('gluster-swift-container',) not found
>> >> >> Thread-13::DEBUG::2014-07-01
>> >> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
>> >> >> ('glusterfs-geo-replication',) not found
>> >> >>
>> >> >> Thread-13::ERROR::2014-07-01
>> >> >> 10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
>> >> >> occured
>> >> >> Traceback (most recent call last):
>> >> >>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
>> >> >> res = f(*args, *

Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Kanagaraj Mayilsamy
Can you try moving SELinux to Permissive or Disabled and see?
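For example:

  setenforce 0   # Permissive for the running system only
  # persistently: set SELINUX=permissive (or =disabled) in /etc/selinux/config
  # and reboot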


- Original Message -
> From: "Piotr Kliczewski" 
> To: "Kanagaraj Mayilsamy" 
> Cc: devel@ovirt.org
> Sent: Tuesday, July 1, 2014 6:56:45 PM
> Subject: Re: [ovirt-devel] Test day: gluster install
> 
> [root@f20 ~]# service glusterd status
> Redirecting to /bin/systemctl status  glusterd.service
> glusterd.service - GlusterFS, a clustered file-system server
>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
>Active: active (running) since Tue 2014-07-01 11:12:29 CEST; 4h 9min ago
>  Main PID: 31056 (glusterd)
>CGroup: /system.slice/glusterd.service
>└─31056 /usr/sbin/glusterd -p /run/glusterd.pid
> 
> Jul 01 11:12:29 f20.example.com systemd[1]: Started GlusterFS, a
> clustered file-system server.
> Jul 01 11:12:29 f20.example.com python[31062]: SELinux is preventing
> /usr/sbin/glusterfsd from write access on the sock_file .
> 
>*  Plugin catchall
> (100. confidence) suggests   **...
> Hint: Some lines were ellipsized, use -l to show in full.
> [root@f20 ~]# gluster peer status
> Connection failed. Please check if gluster daemon is operational.
> 
> On Tue, Jul 1, 2014 at 3:07 PM, Kanagaraj Mayilsamy 
> wrote:
> >
> >
> > - Original Message -
> >> From: "Piotr Kliczewski" 
> >> To: "Kanagaraj Mayilsamy" 
> >> Cc: devel@ovirt.org
> >> Sent: Tuesday, July 1, 2014 3:52:59 PM
> >> Subject: Re: [ovirt-devel] Test day: gluster install
> >>
> >> On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
> >>  wrote:
> >> > This can happen if glusterd service is down.
> >> >
> >> > What does "service glusterd status" say?
> >> >
> >> > If you find this down, start it by "service glusterd start"
> >> >
> >>
> >> I checked status of this service and it was active.
> >
> > What's the output of "gluster peer status"?
> >
> >
> >>
> >> >
> >> > Thanks,
> >> > Kanagaraj
> >> >
> >> > - Original Message -
> >> >> From: "Piotr Kliczewski" 
> >> >> To: devel@ovirt.org
> >> >> Sent: Tuesday, July 1, 2014 3:00:29 PM
> >> >> Subject: [ovirt-devel]  Test day: gluster install
> >> >>
> >> >> I started to test gluster-related features and noticed an issue after
> >> >> installation.
> >> >> I performed the following steps on my f20 using xmlrpc:
> >> >> 1. Installed ovirt 3.5 repo.
> >> >> 2. Installed engine
> >> >> 3. Installed vdsm on the same host - status UP
> >> >> 4. Removed vdsm
> >> >> 5. Enabled gluster service
> >> >> 6. Installed vdsm again (tried several times with the same result)
> >> >>
> >> >> Here is the output that I get:
> >> >> I can see glusterd and glusterfsd services being active.
> >> >>
> >> >> Engine:
> >> >> 2014-07-01 10:38:53,722 WARN
> >> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> >> (org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
> >> >> Call Stack: null, Custom Event ID: -1, Message: Host fedora's
> >> >> following network(s) are not synchronized with their Logical Network
> >> >> configuration: ovirtmgmt.
> >> >>
> >> >> vdsm:
> >> >>
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift-object',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift-plugin',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift-account',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift-proxy',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift-doc',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('gluster-swift-container',) not found
> >> >> Thread-13::DEBUG::2014-07-01
> >> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
> >> >> ('glusterfs-geo-replication',) not found
> >> >>
> >> >> Thread-13::ERROR::2014-07-01
> >> >> 10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
> >> >> occured
> >> >> Traceback (most recent call last):
> >> >>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
> >> >> res = f(*args, **kwargs)
> >> >>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> >> >> rv = func(*args, **kwargs)
> >> >>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> >> >> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
> >> >>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> >> >> return callMethod()
> >> >>   File "/usr

[ovirt-devel] Test day: help testing hosted engine on ovirt node

2014-07-01 Thread Omer Frenkel
Hi,
I was assigned to test this topic, but I don't see any info on how to start;
looking at the wiki: http://www.ovirt.org/Node_Hosted_Engine there is no info,
nor on the HE how-to wiki: http://www.ovirt.org/Hosted_Engine_Howto

Should I build the node myself on Fedora, and then run the hosted engine
setup as described in the how-to?
What is the expected flow for this, for a user that wants to start using
oVirt with hosted engine and oVirt Node?

Thanks,
Omer.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt 3.5.0 Beta is now available for testing -- Node update

2014-07-01 Thread Fabian Deutsch
- Original Message -
> The oVirt team is pleased to announce that the 3.5.0 Beta is now
> available for testing.
> 
> Feel free to join us testing it!
> 
> You'll find all needed info for installing it on the release notes page,
> already available on the wiki [1].
> 
> A new oVirt Live iso is already available for testing[2] including all
> available updates from CentOS.
> An oVirt Guest Tools iso is now available too[3].
> 
> A new oVirt Node build will be available soon as well.

Hey,

a fresh oVirt Node build is also available now:

http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-node-iso-3.5.0.ovirt35.20140630.el6.iso

To circumvent some SELinux issues, please append enforcing=0 to the kernel
command line when booting the ISO.

The ISO is missing the plugin for Hosted Engine, but we hope to deliver an
ISO which includes this plugin shortly.

Greetings
fabian
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] ovirt 3.5 Test day 1 - vdsm-tool configure libvirt with python code

2014-07-01 Thread Yedidyah Bar David
Hi all,

I was assigned to test [1], which was fixed by [2], which pointed
at [3].

Most things worked as expected.

Issues I noticed:

* the matrix says that vdsClient, with or without '-s', should work against
vdsm with ssl=true or ssl=false. In my tests '-s' worked with ssl=true and
omitting '-s' worked with ssl=false, but the other two combinations didn't
work (see the commands below).

* the vdsm-tool package does not depend on vdsm, but
'vdsm-tool configure --force' fails without it.

I didn't open bugs on them because they seem insignificant.
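For reference, the checks behind both notes were roughly (getVdsCaps is just
a convenient verb to exercise the connection):

  vdsClient -s 0 getVdsCaps     # SSL client - worked with ssl=true
  vdsClient 0 getVdsCaps        # plain client - worked with ssl=false
  vdsm-tool configure --force   # fails when the vdsm package is not installed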

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1069636
[2] http://gerrit.ovirt.org/27298
[3] http://www.ovirt.org/Configure_libvirt_testing_matrix
-- 
Didi
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Update on UI column sort infra issues

2014-07-01 Thread Lior Vernia


On 27/06/14 15:07, Vojtech Szocs wrote:
> Hi guys,
> 
> just a quick update on issues related to UI column sort infra.
> 
> The client-side sorting fix [1] is now merged in master branch.
> 
> The server-side sorting fix [2] is pending review.
> 
> Also note that Lior merged a patch [3] which greatly simplifies
> code when making text-based columns *client-side* sortable, for
> example:
> 
>   TextColumnWithTooltip nameColumn = ...
> 
> instead of this:
> 
>   // for text-based columns, need to provide separate Comparator
>   // that (typically) uses LexoNumericComparator for comparison
>   nameColumn.makeSortable(someEntityPropertyComparator);
> 
> you can do this:
> 
>   // uses LexoNumericComparator to compare column's text values
>   nameColumn.makeSortable();
> 

Similar infrastructural patches are pending review:
* Checkbox columns - http://gerrit.ovirt.org/#/c/28751/
* Simple status (up/down/none) columns - http://gerrit.ovirt.org/#/c/28753/
* "Identifiable" (interface used by many enums) columns -
http://gerrit.ovirt.org/#/c/28755/
* Rx/Tx rate columns (networking statistics in various tabs) -
http://gerrit.ovirt.org/#/c/28757/

> --
> 
> [1] http://gerrit.ovirt.org/#/c/28392/ ... where items would
> disappear from grid when activating client-side column sorting
> 
> [2] http://gerrit.ovirt.org/#/c/28557/ ... where triggering
> server-side sorting on a given column might corrupt the actual
> search query dispatched by the model
> 
> [3] http://gerrit.ovirt.org/#/c/28670/
> 
> --
> 
> Regards,
> Vojtech
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
> 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] oVirt Node Weekly Meeting Minutes - July 1 2014

2014-07-01 Thread Fabian Deutsch
Minutes:http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.txt
Log:
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.log.html



=
#ovirt: oVirt Node Weekly Meeting
=


Meeting started by fabiand at 13:02:53 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2014/ovirt.2014-07-01-13.02.log.html
.



Meeting summary
---
* Agenda  (fabiand, 13:04:48)
  * Stable Release (3.0.6)  (fabiand, 13:05:01)
  * Next Release (3.1)  (fabiand, 13:05:06)
  * Hosted Engine Plugin  (fabiand, 13:05:11)
  * Other Items  (fabiand, 13:05:15)

* Action Item Review  (fabiand, 13:05:33)
  * Node team once again to do a review sprint  (fabiand, 13:05:49)
  * ~20 patches merged. ISO quite stable  (fabiand, 13:06:51)

* Stable Release (3.0.6)  (fabiand, 13:07:14)
  * 3.0.6 unlikely, rather focusing on 3.1  (fabiand, 13:08:12)

* Next release (3.1)  (fabiand, 13:08:17)
  * 3.1 snapshot in the pipe to be published in 3.5-pre repo
packages+iso  (fabiand, 13:09:27)

* Hosted Engine Plugin  (fabiand, 13:12:00)
  * rpms are missing in 3.5-pre repo  (fabiand, 13:17:21)
  * prevents testing of this feature  (fabiand, 13:17:26)
  * ACTION: rbarry to create a job to build
ovirt-node-plugin-hosted-engine  (fabiand, 13:22:05)

* Other Items  (fabiand, 13:23:34)
  * oVirt Virtual Appliance is not available for download.  (fabiand,
13:24:02)
  * LINK: https://fedorahosted.org/ovirt/ticket/188   (fabiand,
13:24:21)
  * apuimedo's persistence patches  (fabiand, 13:29:41)

Meeting ended at 13:38:43 UTC.




Action Items

* rbarry to create a job to build ovirt-node-plugin-hosted-engine




Action Items, by person
---
* rbarry
  * rbarry to create a job to build ovirt-node-plugin-hosted-engine
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* fabiand (98)
* eedri (10)
* apuimedo (8)
* rbarry (6)
* dcaro (3)
* ovirtbot (2)
* yzaslavs (1)
* Netbulae (1)
* danken (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Piotr Kliczewski
[root@f20 ~]# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Tue 2014-07-01 11:12:29 CEST; 4h 9min ago
 Main PID: 31056 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─31056 /usr/sbin/glusterd -p /run/glusterd.pid

Jul 01 11:12:29 f20.example.com systemd[1]: Started GlusterFS, a
clustered file-system server.
Jul 01 11:12:29 f20.example.com python[31062]: SELinux is preventing
/usr/sbin/glusterfsd from write access on the sock_file .

   *  Plugin catchall
(100. confidence) suggests   **...
Hint: Some lines were ellipsized, use -l to show in full.
[root@f20 ~]# gluster peer status
Connection failed. Please check if gluster daemon is operational.
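(To see the full denial behind that ellipsized catchall hint, something like
the following should work; sealert comes from the setroubleshoot-server
package:)

  ausearch -m avc -ts recent
  sealert -a /var/log/audit/audit.log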

On Tue, Jul 1, 2014 at 3:07 PM, Kanagaraj Mayilsamy  wrote:
>
>
> - Original Message -
>> From: "Piotr Kliczewski" 
>> To: "Kanagaraj Mayilsamy" 
>> Cc: devel@ovirt.org
>> Sent: Tuesday, July 1, 2014 3:52:59 PM
>> Subject: Re: [ovirt-devel] Test day: gluster install
>>
>> On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
>>  wrote:
>> > This can happen if glusterd service is down.
>> >
>> > What does "service glusterd status" say?
>> >
>> > If you find this down, start it by "service glusterd start"
>> >
>>
>> I checked status of this service and it was active.
>
> What's the output of "gluster peer status"?
>
>
>>
>> >
>> > Thanks,
>> > Kanagaraj
>> >
>> > - Original Message -
>> >> From: "Piotr Kliczewski" 
>> >> To: devel@ovirt.org
>> >> Sent: Tuesday, July 1, 2014 3:00:29 PM
>> >> Subject: [ovirt-devel]  Test day: gluster install
>> >>
>> >> I started to test gluster-related features and noticed an issue after
>> >> installation.
>> >> I performed the following steps on my f20 using xmlrpc:
>> >> 1. Installed ovirt 3.5 repo.
>> >> 2. Installed engine
>> >> 3. Installed vdsm on the same host - status UP
>> >> 4. Removed vdsm
>> >> 5. Enabled gluster service
>> >> 6. Installed vdsm again (tried several times with the same result)
>> >>
>> >> Here is the output that I get:
>> >> I can see glusterd and glusterfsd services being active.
>> >>
>> >> Engine:
>> >> 2014-07-01 10:38:53,722 WARN
>> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> >> (org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
>> >> Call Stack: null, Custom Event ID: -1, Message: Host fedora's
>> >> following network(s) are not synchronized with their Logical Network
>> >> configuration: ovirtmgmt.
>> >>
>> >> vdsm:
>> >>
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift-object',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift-plugin',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift-account',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift-proxy',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift-doc',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
>> >> ('gluster-swift-container',) not found
>> >> Thread-13::DEBUG::2014-07-01
>> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
>> >> ('glusterfs-geo-replication',) not found
>> >>
>> >> Thread-13::ERROR::2014-07-01
>> >> 10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
>> >> occured
>> >> Traceback (most recent call last):
>> >>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
>> >> res = f(*args, **kwargs)
>> >>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
>> >> rv = func(*args, **kwargs)
>> >>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
>> >> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>> >>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>> >> return callMethod()
>> >>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
>> >> **kwargs)
>> >>   File "", line 2, in glusterPeerStatus
>> >>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
>> >> in _callmethod
>> >> raise convert_to_error(kind, result)
>> >> GlusterCmdExecFailedException: Command execution failed
>> >> error: Connection failed. Please check if gluster daemon is operational.
>> >>
>> >> Can someone help me understand what am I missing or confirm to open a BZ?
>> >>
>> >> Thanks,
>> >> Pio

Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Kanagaraj Mayilsamy


- Original Message -
> From: "Piotr Kliczewski" 
> To: "Kanagaraj Mayilsamy" 
> Cc: devel@ovirt.org
> Sent: Tuesday, July 1, 2014 3:52:59 PM
> Subject: Re: [ovirt-devel] Test day: gluster install
> 
> On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
>  wrote:
> > This can happen if glusterd service is down.
> >
> > What does "service glusterd status" say?
> >
> > If you find this down, start it by "service glusterd start"
> >
> 
> I checked status of this service and it was active.

What's the output of "gluster peer status"?


> 
> >
> > Thanks,
> > Kanagaraj
> >
> > - Original Message -
> >> From: "Piotr Kliczewski" 
> >> To: devel@ovirt.org
> >> Sent: Tuesday, July 1, 2014 3:00:29 PM
> >> Subject: [ovirt-devel]  Test day: gluster install
> >>
> >> I started to test gluster-related features and noticed an issue after
> >> installation.
> >> I performed the following steps on my f20 using xmlrpc:
> >> 1. Installed ovirt 3.5 repo.
> >> 2. Installed engine
> >> 3. Installed vdsm on the same host - status UP
> >> 4. Removed vdsm
> >> 5. Enabled gluster service
> >> 6. Installed vdsm again (tried several times with the same result)
> >>
> >> Here is the output that I get:
> >> I can see glusterd and glusterfsd services being active.
> >>
> >> Engine:
> >> 2014-07-01 10:38:53,722 WARN
> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> (org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
> >> Call Stack: null, Custom Event ID: -1, Message: Host fedora's
> >> following network(s) are not synchronized with their Logical Network
> >> configuration: ovirtmgmt.
> >>
> >> vdsm:
> >>
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift-object',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift-plugin',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift-account',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift-proxy',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift-doc',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
> >> ('gluster-swift-container',) not found
> >> Thread-13::DEBUG::2014-07-01
> >> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
> >> ('glusterfs-geo-replication',) not found
> >>
> >> Thread-13::ERROR::2014-07-01
> >> 10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
> >> occured
> >> Traceback (most recent call last):
> >>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
> >> res = f(*args, **kwargs)
> >>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> >> rv = func(*args, **kwargs)
> >>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> >> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
> >>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> >> return callMethod()
> >>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> >> **kwargs)
> >>   File "", line 2, in glusterPeerStatus
> >>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> >> in _callmethod
> >> raise convert_to_error(kind, result)
> >> GlusterCmdExecFailedException: Command execution failed
> >> error: Connection failed. Please check if gluster daemon is operational.
> >>
> >> Can someone help me understand what am I missing or confirm to open a BZ?
> >>
> >> Thanks,
> >> Piotr
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> >>
> 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] ovirt-node-plugin-hosted-engine is missing in 3.5-pre

2014-07-01 Thread Fabian Deutsch
Hey,

I just noted that the packages for ovirt-node-plugin-hosted-engine are missing
in the 3.5 repos. I'm now on it to get them in shape.
This also means that the current (to be released?) ovirt-node-iso rpm is
missing this plugin as well :-/


- fabian
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] oVirt 3.5 Test Day 1 Results

2014-07-01 Thread Martin Perina
Hi,

I tested these features:

  1073453 - OVIRT35 - [RFE] add Debian 7 to the list of operating systems when 
creating a new vm
Info: Debian 7 is listed in the OS list in the New VM dialog
Result: success

  1047624 - OVIRT35 - [RFE] support BIOS boot device menu
Info: Boot menu has to be enabled in the Edit VM dialog, Boot Options tab,
  "Enable boot menu". Once enabled, the user can press F12 and select a
  boot device in the same way as in a standard BIOS
Result: success


During test I found these issues:

  1) Engine installation problem on CentOS 6.5
     Package
ovirt-engine-userportal-3.5.0-0.0.master.20140629172257.git0b16ed7.el6.noarch.rpm
     is not signed.
     After disabling the GPG signature check in /etc/yum.repos.d/ovirt-3.5.repo,
     installation continues fine (see the commands after this list).

  2) Engine installation problem on CentOS 6.5
     The engine indirectly depends on the batik package, but xmlgraphics-batik
     is installed instead.
     I created a bug [1]

  3) Packages ioprocess and python-ioprocess are not available in the oVirt
     3.5 beta repository (even though they are available in the
     master-snapshot-static repository).
     Created a ticket for infra: https://fedorahosted.org/ovirt/ticket/205
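For reference, the GPG workaround in (1) is plain yum, nothing oVirt-specific:

  yum --nogpgcheck install ovirt-engine
  # or persistently: set gpgcheck=0 for the repo in /etc/yum.repos.d/ovirt-3.5.repo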
 


Martin

[1] https://bugzilla.redhat.com/1114921
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] ovirt-engine 3.5 branched

2014-07-01 Thread Yedidyah Bar David
Hi all,

ovirt-engine-3.5 was branched from master.

The commit used was 0b16ed7a76d3fbe106e15263211f1a64f075df0c :
core: validation error on edit instance type

This is the same commit used to build the beta build that is used in the test 
day
that we are having today.

Developers: note that since this commit, new changes were committed to master.
Please cherry-pick/push to 3.5 any changes that should be there (the usual
flow is sketched below).
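A minimal sketch of that flow, assuming a gerrit remote named "origin":

  git fetch origin
  git checkout -b backport-3.5 origin/ovirt-engine-3.5
  git cherry-pick -x <commit-from-master>
  git push origin HEAD:refs/for/ovirt-engine-3.5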

Best regards,
-- 
Didi
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] ovirt-engine-3.5 branch is way too old

2014-07-01 Thread Alon Bar-Lev
Hi,

The following backlog is post-branching, as branching was done at a random
point in the effort.
As far as I can see all of these should go into 3.5 anyway; if someone can do
us the service and just re-branch on top of master, it will reduce the effort
of each individual developer.
Next time the branch should be created after at least one bug day is over and
the major issues found are in.
Beta is a tag in time, not a branch in time.

Thanks,
Alon

549d9e6 engine: NetworkValidator uses new validation syntax
9f0310b engine: Clear syntax for writing validations
52c6b35 host-deploy: appropriate message for kdump detection
375c554 core: Use force detach only on Data SD
8f02a74 engine: no need to save vm_static on run once
c6851e4 ui: remove Escape characters for TextBoxLabel
5e37215 ui: improve hot plug cpu wording
028c175 engine: Rename providerId to networkProviderId in add/update host 
actions
5b4d20c engine: Configure unique host name on neutron.conf
90eb1d2 extapi: aaa: add auth result to credential change
994996b backend: Add richer formatting of migration duration
98e293b core: handle fence agent power wait param on stop
bb9ecfb engine: Clear eclipse warning in AddVdsCommand
36dd138 aaa: always use engine context for queries
24f0cf8 restapi: rsdl_metadata - quota.id in add disk
7161ac0 tools: Expose VmGracefulShutdownTimeout option to engine-config
8255f44 aaa: more fixes to command context propgation
b8feb57 restapi: missing vms link under affinity groups
f056835 core, engine: Fix HotPlugCpuSupported config value
4492ef7 core, engine: Avoid migration in ppc64
2710b07 ui: avoid casting warnings on findbugs
bcb156c core: adding missing command constructor
92c1522 core: Changing Host free space threshold
a0d000b webadmin: column sorting support for Disks sub-tabs
5a0c76f webadmin: column sorting support for Storage sub-tabs
14a625e webadmin: column sorting support for Disks tabs
a32d199 core: DiskConditionField - extract verbs to constants
48cc09d core: fixed searching disks by creation date
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Piotr Kliczewski
On Tue, Jul 1, 2014 at 11:56 AM, Kanagaraj Mayilsamy
 wrote:
> This can happen if glusterd service is down.
>
> What does "service glusterd status" say?
>
> If you find this down, start it by "service glusterd start"
>

I checked status of this service and it was active.

>
> Thanks,
> Kanagaraj
>
> - Original Message -
>> From: "Piotr Kliczewski" 
>> To: devel@ovirt.org
>> Sent: Tuesday, July 1, 2014 3:00:29 PM
>> Subject: [ovirt-devel]  Test day: gluster install
>>
>> I started to test gluster-related features and noticed an issue after
>> installation.
>> I performed the following steps on my f20 using xmlrpc:
>> 1. Installed ovirt 3.5 repo.
>> 2. Installed engine
>> 3. Installed vdsm on the same host - status UP
>> 4. Removed vdsm
>> 5. Enabled gluster service
>> 6. Installed vdsm again (tried several times with the same result)
>>
>> Here is the output that I get:
>> I can see glusterd and glusterfsd services being active.
>>
>> Engine:
>> 2014-07-01 10:38:53,722 WARN
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
>> Call Stack: null, Custom Event ID: -1, Message: Host fedora's
>> following network(s) are not synchronized with their Logical Network
>> configuration: ovirtmgmt.
>>
>> vdsm:
>>
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift-object',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift-plugin',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift-account',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift-proxy',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift-doc',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
>> ('gluster-swift-container',) not found
>> Thread-13::DEBUG::2014-07-01
>> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
>> ('glusterfs-geo-replication',) not found
>>
>> Thread-13::ERROR::2014-07-01
>> 10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
>> occured
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
>> res = f(*args, **kwargs)
>>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
>> rv = func(*args, **kwargs)
>>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
>> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>> return callMethod()
>>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
>> **kwargs)
>>   File "", line 2, in glusterPeerStatus
>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
>> in _callmethod
>> raise convert_to_error(kind, result)
>> GlusterCmdExecFailedException: Command execution failed
>> error: Connection failed. Please check if gluster daemon is operational.
>>
>> Can someone help me understand what am I missing or confirm to open a BZ?
>>
>> Thanks,
>> Piotr
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Test day: gluster install

2014-07-01 Thread Kanagaraj Mayilsamy
This can happen if glusterd service is down.

What does "service glusterd status" say?

If you find this down, start it by "service glusterd start"

Thanks,
Kanagaraj

- Original Message -
> From: "Piotr Kliczewski" 
> To: devel@ovirt.org
> Sent: Tuesday, July 1, 2014 3:00:29 PM
> Subject: [ovirt-devel]  Test day: gluster install
> 
> I started to test gluster-related features and noticed an issue after
> installation.
> I performed the following steps on my f20 using xmlrpc:
> 1. Installed ovirt 3.5 repo.
> 2. Installed engine
> 3. Installed vdsm on the same host - status UP
> 4. Removed vdsm
> 5. Enabled gluster service
> 6. Installed vdsm again (tried several times with the same result)
> 
> Here is the output that I get:
> I can see glusterd and glusterfsd services being active.
> 
> Engine:
> 2014-07-01 10:38:53,722 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: Host fedora's
> following network(s) are not synchronized with their Logical Network
> configuration: ovirtmgmt.
> 
> vdsm:
> 
> Thread-13::DEBUG::2014-07-01
> 10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift-object',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift-plugin',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift-account',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift-proxy',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift-doc',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
> ('gluster-swift-container',) not found
> Thread-13::DEBUG::2014-07-01
> 10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
> ('glusterfs-geo-replication',) not found
> 
> Thread-13::ERROR::2014-07-01
> 10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
> occured
> Traceback (most recent call last):
>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in 
> **kwargs)
>   File "", line 2, in glusterPeerStatus
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> in _callmethod
> raise convert_to_error(kind, result)
> GlusterCmdExecFailedException: Command execution failed
> error: Connection failed. Please check if gluster daemon is operational.
> 
> Can someone help me understand what am I missing or confirm to open a BZ?
> 
> Thanks,
> Piotr
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
> 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] Test day: gluster install

2014-07-01 Thread Piotr Kliczewski
I started to test gluster-related features and noticed an issue after
installation.
I performed the following steps on my f20 using xmlrpc:
1. Installed ovirt 3.5 repo.
2. Installed engine
3. Installed vdsm on the same host - status UP
4. Removed vdsm
5. Enabled gluster service
6. Installed vdsm again (tried several times with the same result)
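For reference, steps 1-2 were roughly the following (the release-rpm URL is
an assumption based on the usual 3.5 repo layout); steps 3-6 were driven from
the webadmin UI:

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
  yum install ovirt-engine && engine-setup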

Here is the output that I get:
I can see glusterd and glusterfsd services being active.

Engine:
2014-07-01 10:38:53,722 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-12) [3987041c] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: Host fedora's
following network(s) are not synchronized with their Logical Network
configuration: ovirtmgmt.

vdsm:

Thread-13::DEBUG::2014-07-01
10:49:32,670::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,671::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,672::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,673::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-13::DEBUG::2014-07-01
10:49:32,674::caps::682::root::(_getKeyPackages) rpm package
('glusterfs-geo-replication',) not found

Thread-13::ERROR::2014-07-01
10:49:38,021::BindingXMLRPC::1123::vds::(wrapper) vdsm exception
occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1110, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.

Can someone help me understand what am I missing or confirm to open a BZ?

Thanks,
Piotr
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel