Re: [one-users] After upgrade to 4.10.2 unable to login using any users including oneadmin

2015-01-23 Thread Bill Cole

On 21 Jan 2015, at 11:13, Madko wrote:


Hi,

Had the same problem with conf files renamed to .rpmsave, and
therefore opennebula was unable to start. Luckily I saw this
thread ;)

Is it possible to change this behavior and have .rpmnew instead, to
prevent breaking everything after an upgrade? %config(noreplace) in
the spec file should do the trick, and it's good practice.


It seems to me that this has to be looked at on a case-by-case basis. In 
THIS case, the installation of new default config files is inconsistent 
with the release notes that say there are no changes, so replacing 
derivatives of the 4.10.[01] files with default 4.10.2 files is wrong. 
However, most past updates have been documented as having incompatible 
changes in the config files, and in such cases it can be better to put 
the new default files in place and move the existing files to .rpmsave 
files, especially if the old config files are likely to break the new 
software at runtime. Ideally, a package's config files are structured 
and maintained to minimize the need for manual merging of local settings 
after updates, but OpenNebula is not near that ideal.
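
For reference, the difference is a single directive in the spec
file's %files section; a minimal sketch (the path is illustrative):

  # with plain %config, a locally modified file is moved to .rpmsave
  # and replaced by the new default on update:
  %config /etc/one/oned.conf

  # with %config(noreplace), the modified file is kept and the new
  # default is written alongside it as .rpmnew:
  %config(noreplace) /etc/one/oned.conf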


Of course, any competent upgrade process for RPM platforms *ALWAYS* 
includes checking for .rpm{save,new,orig} files afterwards and cleaning 
them up with the application of human intelligence not available to the 
RPM tools.
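
For example, something like this after every update (the paths
searched are illustrative; adjust to taste):

  find /etc /usr /var \( -name '*.rpmsave' -o -name '*.rpmnew' \
      -o -name '*.rpmorig' \) 2>/dev/null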



[one-users] Nomadic CentOS repository: WHY?!?

2014-08-25 Thread Bill Cole
Past OpenNebula 4.x releases have been packaged as RPMs for CentOS 
available in a yum repository with a baseurl of 
'http://downloads.opennebula.org/repo/CentOS/6/stable/$basearch'


For some inscrutable reason, 4.8 RPMs are not there. Instead, the 4.8 
docs on the website now direct us to a repo at the baseurl 
'http://downloads.opennebula.org/repo/4.8/CentOS/6/x86_64/'


Is there a rational explanation for this? One of the main points of 
using a package management system like yum/RPM is to simplify updates, 
but this change makes the update invisible to existing yum 
configurations and seems to indicate a plan to hide future releases 
the same way.
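
To be concrete: with a fixed baseurl, a single .repo file keeps
working across releases; something like the following sketch (the
repo name and gpgcheck setting are illustrative):

  [opennebula]
  name=OpenNebula stable packages for CentOS 6
  baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/$basearch
  enabled=1
  gpgcheck=0

With '4.8' embedded in the baseurl, every admin must instead edit
that file for every release before 'yum update' can even see the new
packages.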


Also, the new repo lacks an RPM for opennebula-context and the release 
notes point to a github repo which includes a script to build packages 
from the source. Does this mean there will no longer be a prebuilt RPM 
of opennebula-context?




Re: [one-users] Nomadic CentOS repository: WHY?!?

2014-08-25 Thread Bill Cole

On 25 Aug 2014, at 19:40, Damon (Albino Geek) wrote:


Hello,

I found that there is actually a prebuilt 4.8 context RPM in one of 
their source trees (not GitHub).


That's not exactly helpful. The fact that there's an RPM in some unnamed 
place built from who-knows-what revision is an interesting and maybe 
indicative factoid, but it does not clarify whether or not there will be 
a canonical RPM for the package in the same repository as those for the 
other packages (or anywhere documented).


As for the repo change, the new layout actually makes more sense and 
follows the standard repo format;
http://mirrors.kernel.org/centos/6/os/x86_64/ is an example of a 
repository that follows it.


The 'stable' path part didn't make any sense considering how versions 
work in OpenNebula.


The '4.8' part doesn't make any sense in any context, and *changing* the 
baseurl for the repository with each release does away with a key reason 
for using a package repository.





Re: [one-users] VM monitoring information do not get deleted

2014-05-14 Thread Bill Cole

On 14 May 2014, at 9:33, Wilma Hermann wrote:


Hi,

I observed a problem with two OpenNebula setups that I set up with
version 4.4 and upgraded to 4.6 some weeks ago: the VM monitoring
information does not seem to be deleted from the database (MySQL)
after VM_MONITORING_EXPIRATION_TIME has elapsed.

I have a sandbox for testing issues: a single machine (both frontend
and host) with a single virtual machine that runs 24/7. When I
upgraded OpenNebula 4.4 to 4.6, the SQL dump created by onedb upgrade
was 3.6 MB (perfectly okay for such a small setup). Today, when I
dumped the DB, the backup file is 176 MB. Wondering about the size, I
inspected the database and found ~77k rows in the vm_monitoring
table. Obviously, OpenNebula writes rows into this table every few
seconds without ever deleting anything.

I didn't change VM_MONITORING_EXPIRATION_TIME in oned.conf (it was
commented out), so it should delete old values after 4h. I manually
set VM_MONITORING_EXPIRATION_TIME to 14400 as well as other values:
no effect, the DB continues to inflate.

Meanwhile, Sunstone becomes unresponsive when I open the details of a
VM. I believe this is due to generating the CPU and memory graphs,
which has to process several tens of thousands of rows.

Did I miss some setting, or is this a bug?


It looks to me like a bug. I have the same problem using a sqlite3 DB. 
With 3 hosts and a few dozen running VMs, it became a highly visible 
problem in a fairly short time.
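
If you need to keep the DB usable while this gets sorted out, the
table can be pruned by hand. A sketch, assuming the 4.x schema where
vm_monitoring carries a last_poll column of epoch seconds (back up
first; on MySQL use UNIX_TIMESTAMP() instead of strftime):

  -- how many monitoring rows have piled up?
  SELECT COUNT(*) FROM vm_monitoring;

  -- drop everything older than 4 hours (sqlite3 syntax)
  DELETE FROM vm_monitoring
   WHERE last_poll < strftime('%s','now') - 14400;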



Re: [one-users] Sunstone endpoint for VDC

2013-09-19 Thread Bill Cole

On 17 Sep 2013, at 10:03, Tino Vazquez wrote:


Hi Stefan,

comments inline,

On Tue, Sep 17, 2013 at 2:51 PM, Stefan Kooman ste...@bit.nl wrote:

Hi list,

I'm in the process of testing the OpenNebula oZones / VDC
functionality. My setup is as follows:

- 2 hosts, one of them in cluster TESTCLUST01, the other in the
  default cluster.
- a zone TESTZONE01 using resources from TESTCLUST01
- a VDC TESTVDC01 within zone TESTZONE01.

I have several apache vhosts (listening on *:443) acting as reverse
(SSL) proxies:

- ozones (reverse proxying to http://127.0.0.1:6121)
- sunstone (reverse proxying to http://172.16.0.183:9869)
- occi (reverse proxying to http://127.0.0.1:4567)

These end points all work correctly (occi API works, sunstone works,
ozones web GUI works).


This is a wise choice. Given that Sunstone and oZones accept and emit 
authentication credentials in plaintext, it is insane to have those 
listeners talking to the Internet without an SSL proxy.
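
For reference, a minimal sketch of such a vhost (certificate paths
and addresses are placeholders; mod_ssl and mod_proxy_http assumed):

  <VirtualHost *:443>
      ServerName sunstone.domain.tld
      SSLEngine on
      SSLCertificateFile    /etc/pki/tls/certs/sunstone.crt
      SSLCertificateKeyFile /etc/pki/tls/private/sunstone.key
      # hand everything to the plaintext Sunstone listener
      ProxyPass        / http://172.16.0.183:9869/
      ProxyPassReverse / http://172.16.0.183:9869/
  </VirtualHost>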




Creating the zone, cluster and vdc goes fine. However, the Sunstone
Endpoint for the VDC TESTVDC01 is not working. If I understand the
documentation correctly [1] I should point my browser to the sunstone
endpoint, in my case
http(s)://sunstone.domain.tld/sunstone_TESTVDC01/

The response I get is:

Sinatra doesn’t know this ditty.
Try this:

get '/sunstone_TESTVDC01/' do
  "Hello World"
end

VDC cli interface does work (export
ONE_XMLRPC="http://localhost:2633/RPC2" and export
ONE_AUTH=~/.one/one_vdc).


In this case you are not using the oZones reverse proxy functionality,
but rather accessing OpenNebula directly. You should use an endpoint
of the form

ONE_XMLRPC="http://ozones-server/TESTVDC01"


It seems like a very bad idea to do anything that would make a URL like 
that functional, as it implies an insecure proxy of the XMLRPC interface 
to an external address.



sunstone.log:
"GET /sunstone_TESTVDC01/ HTTP/1.1" 404 456 0.0024
ozones-server.log:
"GET /zone/user?timeout=true HTTP/1.1" 200 19672 0.0189

I haven't set up the Apache server like [2] but made per-vhost
reverse proxies. What port and URI should be hit by the proxy on the
backend side when hitting "http://ozones.server/sunstone_MyVDC/" on
the frontend?


I don't think your use case is supported.


That's an understatement.

The docs for configuring the oZones server are laughably poor (e.g. 
commands and file paths that exist only on Ubuntu and its sibling 
distros, whose Apache configuration is unlike any other's), and the 
only useful way to understand how it is intended to work is to watch 
its startlingly wrong behavior when given seemingly correct config 
parameters and then search the code for how that behavior happens.


The oZones GUI is not designed to support secure (i.e. proxied) 
communications with OpenNebula instances, and that is made obvious at 
lines 349 and 406 of /usr/lib/one/ozones/public/js/plugins/vdcs-tab.js, 
both of which use literal "http://" strings to build the links to 
OpenNebula and use the hostname extracted from the zone's ENDPOINT 
parameter (which is likely to be 'localhost' for instances that are 
only used to build VDCs in a zone managed on the same host). To make 
the oZones GUI provide URLs that work, one must fix the code.
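
A sketch of the sort of change involved (the original code is
paraphrased, not quoted; the variable names are placeholders):

  // before (paraphrased): scheme and host hardcoded from the zone's
  // ENDPOINT, which breaks behind an SSL proxy:
  //   var link = "http://" + endpoint_host + ":" + sunstone_port + "/";

  // after: reuse the scheme and host the browser already used to
  // reach the proxy in front of the oZones GUI:
  var link = window.location.protocol + "//" +
             window.location.host + "/sunstone_" + vdc_name + "/";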


From a broader perspective, the problem is in the design. It makes no 
sense for the one tool that manages multiple OpenNebula instances as 
a collection of Zones to also be the only tool for defining a VDC: a 
subset of a cluster, which is itself a subset of a zone. VDC 
definition should be moved into OpenNebula proper (oned/one.db) and 
Sunstone. The oZones server should only be required or useful for 
handling multiple zones; it should assume that it needs to be able to 
communicate with OpenNebula instances that are behind SSL proxies, 
and it should be documented to make that assumption clear.



You see, the ozones server
talks with the apache server via a .htaccess file, where it is
dynamically setting reverse proxy rules for each created VDC. So,
after creating TESTVDC01, a couple of lines are written in the
.htaccess file so apache knows that when a request comes for

http://ozones-server/sunstone_TESTVDC01

it should redirect to

http://sunstone-server:9869

where sunstone-server is the Sunstone endpoint defined by the zone
that contains TESTVDC01. 9869 is the default Sunstone port, which can
be changed as well upon VDC creation.
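
(Concretely, the dynamically written lines are mod_rewrite proxy
rules; illustratively, something like the following, though the exact
rules oZones emits may differ:

  RewriteEngine On
  RewriteRule ^sunstone_TESTVDC01/(.*)$ http://sunstone-server:9869/$1 [P,L]
)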


This explanation doesn't much improve on the muddled documentation of 
the oZones server. Maybe, as someone who has come at oZones as an 
admin rather than as a developer, I can offer a different angle:


It is important for people deploying the oZones server to understand 
that it performs two weakly related functions:


I. Aggregate access to subsets (VDCs) of multiple OpenNebula 
instances (Zones) through a single host running a web proxy.
II. Create a templated pattern of users, groups, and ACLs in a zone 
that allows delegated administration & use of VDCs.


It does this using two distinct components:

1. A Ruby