[ovirt-devel] Testday: json rpc

2014-07-30 Thread Antoni Segura Puimedon
Hi fellow oVirters,

On this test day I picked JSON RPC for testing:

My initial environment consisted of a hosted engine setup with three hosts.

Upgrading
===
The first task, then, was to upgrade everything to 3.5.

My guide was http://www.ovirt.org/Hosted_Engine_Howto#Upgrade_Hosted_Engine

One issue I encountered: after setting up the 3.5 pre-release yum repo on the
engine VM and running yum update, running
engine-setup
to handle the upgrade failed, complaining about a missing package called
patternfly1. I remembered that Greg Sheremeta had mentioned a copr repository
for it on the list, so I went ahead, installed it from there and re-ran
engine-setup

This time it succeeded; however, I feel that patternfly1 should probably be an
ovirt-engine-3.5 dependency and should be in the oVirt 3.5 repository.

I also noticed that after upgrading the hosts, the amount of free system memory
is much lower, although the VMs continue to run fine. I opened:
https://bugzilla.redhat.com/1124451

Another thing that happened was that restarting ovirt-ha-agent and
ovirt-ha-broker through systemd fails: the units silently report that they were
killed. I only got them working again by invoking the wrapper scripts directly:

/lib/systemd/systemd-ovirt-ha-broker restart
/lib/systemd/systemd-ovirt-ha-agent restart

Obviously, though, doing it like that escapes the advisable supervision of
systemd. Another bad thing is that after all this, the current host (the one
running the engine) displays:
--== Host 2 status ==--

Status up-to-date  : False
Hostname   : 10.34.63.180
Host ID: 2
Engine status  : unknown stale-data
Score  : 2000
Local maintenance  : False
Host timestamp : 1406640293
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1406640293 (Tue Jul 29 15:24:53 2014)
 host-id=2
 score=2000
 maintenance=False
 bridge=True
 cpu-load=0.0856
 engine-health={"health": "good", "vm": "up", "detail": "up"}
 gateway=True
 mem-free=3192
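For reference, the "Extra metadata" block above is plain key=value text. A
minimal sketch of how one could parse it (a hypothetical helper, not the actual
ovirt-hosted-engine-ha parser):

```python
import json

def parse_extra_metadata(text):
    """Parse the key=value lines shown by hosted-engine --vm-status.

    Values that happen to be valid JSON (numbers, or the engine-health
    dict) are decoded; anything else is kept as a plain string.
    """
    data = {}
    for line in text.strip().splitlines():
        key, _, value = line.strip().partition("=")
        try:
            data[key] = json.loads(value)
        except ValueError:
            data[key] = value  # e.g. "False", or timestamps with a suffix
    return data

sample = """
metadata_parse_version=1
score=2000
maintenance=False
engine-health={"health": "good", "vm": "up", "detail": "up"}
mem-free=3192
"""
meta = parse_extra_metadata(sample)
```

With the metadata above, meta["engine-health"] comes back as a dict, which is
handy when checking the reported VM state programmatically.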

Why did the engine status go to 'unknown stale-data'? (It also happened for the
other two hosts in the setup.)

Changing hosts to use JSON RPC
===

After talking with Piotr and trying it in the webadmin UI, I found out that
there is no direct way to update a host's settings to use json rpc as its
connectivity mechanism with the engine. This constitutes a usability bug,
which I filed:

https://bugzilla.redhat.com/1124442

Presumably, users will want to move to the newer and better RPC mechanism, and
they should be able to do so by merely putting the host in maintenance and
ticking a checkbox in the 'edit host' dialog.

As a workaround, I removed the host and added it again; that worked, and the
host went up.

Network operations via JSON RPC
===

After the host went up, I decided to send a setupNetworks command. The
operation worked out fine, but unfortunately we have a very serious gap that
makes it impossible for me to use jsonrpc for network operations/development.

**Logging**

When doing a network operation with xmlrpc, we'd get the following in vdsm.log:
Thread-21::DEBUG::2014-07-30 13:38:11,414::BindingXMLRPC::1127::vds::(wrapper) client [10.34.61.242]::call setupNetworks with ({'10': {'nic': 'em2', 'vlan': '10', 'STP': 'no', 'bridged': 'true', 'mtu': '1500'}}, {}, {'connectivityCheck': 'true', 'connectivityTimeout': 120}) {} flowID [686033d4]
Thread-21::DEBUG::2014-07-30 13:38:32,689::BindingXMLRPC::1134::vds::(wrapper) return setupNetworks with {'status': {'message': 'Done', 'code': 0}}

As you can see, we get the bare minimum logging one could ask for, an entry
with the command called and the data it received and another entry with the
return result data.
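The two entries above amount to a classic call/return wrapper. A minimal sketch
of that logging pattern (hypothetical code, not vdsm's actual BindingXMLRPC
wrapper; the verb body is a stand-in):

```python
import functools
import logging

log = logging.getLogger("vds")

def logged(func):
    """Log the call arguments on entry and the result on return,
    mirroring the two vdsm.log entries shown above (simplified)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("call %s with %r %r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        log.debug("return %s with %r", func.__name__, result)
        return result
    return wrapper

@logged
def setupNetworks(networks, bondings, options):
    # Stand-in for the real verb; returns the status dict from the log.
    return {'status': {'message': 'Done', 'code': 0}}
```

The point is that both the input data and the result pass through one choke
point, so both ends of the operation are always recorded.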

Doing the same with jsonrpc (and ignoring the excessive IOProcess logging), if
I search for "setupNetworks" the only thing I get is:
Thread-23057::DEBUG::2014-07-30 13:32:44,126::__init__::462::jsonrpc.JsonRpcServer::(_serveRequest) Looking for method 'Host_setupNetworks' in bridge

And if I search for the received data, e.g. 'STP', there is nothing whatsoever.
As I said, unless this is fixed and we get entries carrying the same amount of
data as before, jsonrpc can't be used in production or for development.
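All the data needed for equivalent logging already travels in the JSON-RPC
envelope itself. A hypothetical Host.setupNetworks request/response pair
(illustrative only; the parameter layout is an assumption, not taken from the
vdsm API schema):

```python
import json

# A hypothetical JSON-RPC 2.0 exchange for the same operation; logging
# these envelopes verbatim would restore parity with the xmlrpc entries.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "Host.setupNetworks",
    "params": {
        "networks": {"10": {"nic": "em2", "vlan": "10", "STP": "no",
                            "bridged": "true", "mtu": "1500"}},
        "bondings": {},
        "options": {"connectivityCheck": "true",
                    "connectivityTimeout": 120},
    },
}
response = {"jsonrpc": "2.0", "id": 1,
            "result": {"status": {"message": "Done", "code": 0}}}

wire = json.dumps(request)  # what the server receives and could log
```

A grep for 'STP' in a log that recorded the wire payload would find the call
immediately, which is exactly what is missing today.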

https://bugzilla.redhat.com/1124813


TL;DR:
- Lack of usability when upgrading an environment to use jsonrpc:
  https://bugzilla.redhat.com/1124442
- Failure on step 7 of the upgrade steps:
  https://bugzilla.redhat.com/1124826
- JSON RPC logging is excessive yet insufficient for debugging network calls:
  https://bugzilla.redhat.com/1124813
___
Devel mailing list
Devel@ovirt.org

Re: [ovirt-devel] Testday: json rpc

2014-07-31 Thread Antoni Segura Puimedon


- Original Message -
> From: "Antoni Segura Puimedon" 
> To: devel@ovirt.org, "users" 
> Sent: Wednesday, 30 July, 2014 2:15:36 PM
> Subject: Testday: json rpc
> [...]
> Why did the engine status go to 'unknown stale-data'? (It also happened for
> the other two hosts in the setup.)

After discussing with Jiři: the 'stale data' issue resulted in
https://bugzilla.redhat.com/1125244
> [...]