Re: [one-users] sunstone novnc console on separate window/tab in 4.4RC (one-4.4)

2013-11-28 Thread Daniel Molina
It should be fixed with this commit:
http://dev.opennebula.org/projects/opennebula/repository/revisions/ab1473b221a5b16603f3caeb65f4d168a7fcebdf

Cheers


On 26 November 2013 14:57, Rolandas Naujikas rolandas.nauji...@mif.vu.lt wrote:

 After http://dev.opennebula.org/projects/opennebula/repository/revisions/277a862a7aac0026e299f0305e329bd9c3a8cb04
 I see that the VM title bar text overlaps the top of the console output.

 Regards, Rolandas
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Sunstone image upload not working - images not in tmpdir

2013-11-28 Thread Daniel Molina
On 27 November 2013 21:27, Stefan Kooman ste...@bit.nl wrote:

 Quoting Daniel Molina (dmol...@opennebula.org):

 
  Could you try exporting the $TMPDIR var before starting the passenger
  processes?
 
  This is the code that generates the temp file (sunstone-server.rb):
  tmpfile = Tempfile.open('sunstone-upload')
 
  by default, it uses Dir.tmpdir as the temp dir, and this method checks the env
  var TMPDIR. This variable is defined in the sunstone-server script, but
  Apache does not use this script to start new server instances.
 
  You can also specify it as a parameter in the code:
  tmpfile = Tempfile.open('sunstone-upload',
 '/mnt/sunstone_upload')

 Hmm, in opennebula 4.3.90 this isn't working anymore. I have the tmpfile
 hardcoded in sunstone-server.rb.

 I also tried exporting TMPDIR in /etc/apache/envvars, /etc/bash.bashrc and in
 the Ruby script itself:

 ENV['TMPDIR'] = '/mnt/sunstone_upload' (and in config.ru)

 but without any effect. What changes have been made that defeat the above
 settings?


We didn't change anything, just:
https://github.com/OpenNebula/one/commit/f8e2e65b0170268e9c72d52c4fe9f0e13fa05acd

So, it should work as before.
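
For reference, the directory selection works roughly like this; a minimal
sketch (not the exact Sunstone code) showing why TMPDIR has to be set in the
environment of the Ruby process that Passenger/Apache spawns, and how the
directory can be forced explicitly:

    require 'tempfile'
    require 'tmpdir'

    # Dir.tmpdir honours ENV['TMPDIR'] (falling back to /tmp), so the variable
    # must be visible to the server process itself, not only to the shell that
    # starts Apache.
    ENV['TMPDIR'] = '/mnt/sunstone_upload'
    puts Dir.tmpdir   # => /mnt/sunstone_upload (assuming it exists and is writable)

    # Option A: rely on Dir.tmpdir, as sunstone-server.rb does by default
    tmp_a = Tempfile.open('sunstone-upload')

    # Option B: pass the directory explicitly, bypassing TMPDIR altogether
    tmp_b = Tempfile.open('sunstone-upload', '/mnt/sunstone_upload')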




 Thanks,

 Stefan





 --
 | BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
 | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl





-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Member of oneadmin group unable to see user/groups

2013-11-28 Thread Daniel Molina
Could you check if the following attribute is defined in
sunstone-server.conf?

# Default table order
:table_order: desc


On 27 November 2013 17:15, Stefan Kooman ste...@bit.nl wrote:

 Quoting Daniel Molina (dmol...@opennebula.org):
  Any error in the browser console after logging in as that user?

 Nope, I can click whatever I want but no errors are given, only empty
 result sets, i.e. no resources.

 Gr. Stefan

 --
 | BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
 | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl





-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM description/comment

2013-11-28 Thread Daniel Molina
Hi Nikolay,


On 27 November 2013 15:02, kna...@gmail.com wrote:


 Daniel Molina wrote on 24/10/12 13:58:

 On 24 October 2012 11:08, kna...@gmail.com wrote:

 Daniel Molina wrote on 23/10/12 20:23:

 On 23 October 2012 14:31, kna...@gmail.com wrote:

 Dear Ruben,

 first of all, sorry for delay with reply!
 Please, see my comments inline.

 Ruben S. Montero wrote on 19/10/12 00:34:

 Hi Ricardo + Nikolay

 You are right, one thing we have in our short roadmap is to add a generic
 metadata section for VMs. This metadata could be updated using the
 *update* functionality currently present for other commands.

 Sounds encouraging! Is there any information on when such a feature is
 planned to be implemented?


 Just to give you the rationale behind not having this yet. As you
 probably know, the VM template is extended once the VM is created with
 control data (e.g. DISK_ID's, specific LEASES, SOURCE for DISK...); for
 obvious reasons we do not want a user to modify this.

 seems reasonable

 So we will split this in two: one for the control data and another to be
 used/modified by the user.

 For now, as Nikolay suggests, this somehow limits part of the
 out-of-the-box functionality (e.g. adding DESCRIPTION in a bulk
 submission); this functionality will need a custom program using OCA.
 About parsing the output of onevm show, note that you can always get the
 full pool information with onevm list -x (TEMPLATE included); the onevm
 list command just parses and picks some of this info and presents it in a
 tabular form...

 Thanks a lot for detailed reply and explanations!


 JFYI, you can easily add new columns to the onevm list command. The
 following patch adds a new DESCRIPTION column to the onevm list output:

 https://gist.github.com/8f8499704cbee0e5db84

 The onevm.yaml file can be defined per user in $HOME/.one/onevm.yaml or
 globally in /etc/one/cli/onevm.yaml.

  Dear OpenNebula developers,

 It has been very convenient for me to have a DESCRIPTION column in the VM
 list, but it seems that the patch is absent from the 4.2 release. I wonder
 if it is planned to include such a feature in the mainstream code in future
 releases, or will it be necessary to apply that patch for every new release?


OpenNebula 4.4 is almost ready, but we can consider including it for
one-4.6. Could you please open a feature request on our dev page so we can
schedule it for the next release?

http://dev.opennebula.org/

Cheers



 Best regards,

 Nikolay.
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Member of oneadmin group unable to see user/groups

2013-11-28 Thread Stefan Kooman
Quoting Daniel Molina (dmol...@opennebula.org):
 Could you check if the following attribute is defined in
 sunstone-server.conf?
 
 # Default table order
 :table_order: desc
It wasn't. I've added this attribute, restarted Sunstone (Apache),
created a user, and the user is able to see resources without needing to
update its settings.

Thanks,

Stefan

-- 
| BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Upgrade warnings! OpenNebula 4.4 release

2013-11-28 Thread Jaime Melis
Dear community,

OpenNebula 4.4 is around the corner waiting to be released. This email is a
heads up for those that are using OpenNebula's repos.

Since upgrading from OpenNebula 4.2 to OpenNebula 4.4 **requires manual
intervention**, please remember to disable automatic updates either for
your system or for the OpenNebula repo if you aren't willing to upgrade
just yet.

If you do want to upgrade, it's always important to first read the upgrade
notes and the compatibility guide, and this applies to everyone!

http://opennebula.org/documentation:rel4.4:upgrade
http://opennebula.org/documentation:rel4.4:compatibility

Enjoy the Retina!

cheers,
Jaime

-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] VM description/comment

2013-11-28 Thread knawnd

Hi Daniel,

Daniel Molina wrote on 28/11/13 14:13:

Hi Nikolay,


OpenNebula 4.4 is almost ready, but we can consider including it for one-4.6. Could you please 
open a feature request on our dev page so we can schedule it for the next release?


http://dev.opennebula.org/

http://dev.opennebula.org/issues/2509

Thanks!

Nikolay.




___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Ceph w/ CephX Authentication

2013-11-28 Thread Jaime Melis
Hi Bill,

thanks again for all the feedback. We've just updated the Ceph guide to
reflect your comments:
http://opennebula.org/documentation:rel4.4:ceph_ds

With regard to the automated creation of secret files, we'll look into that
for the next release. We'll probably need to consider adding restricted
areas to templates.
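
For anyone who wants to script the secret setup in the meantime, here is a
rough Ruby sketch (a hypothetical helper, not part of OpenNebula) that defines
the same libvirt secret on a list of hypervisors, assuming passwordless SSH
and the client.libvirt Ceph user discussed below:

    #!/usr/bin/env ruby
    # Hypothetical helper: define the same libvirt secret on every hypervisor,
    # so its UUID can then be used as CEPH_SECRET in the datastore template.
    require 'securerandom'
    require 'open3'

    hosts = %w[hyp01 hyp02 hyp03]                     # example hypervisor names
    uuid  = SecureRandom.uuid
    key   = `ceph auth get-key client.libvirt`.strip  # cephx key for the libvirt user

    secret_xml = <<~XML
      <secret ephemeral='no' private='no'>
        <uuid>#{uuid}</uuid>
        <usage type='ceph'><name>client.libvirt secret</name></usage>
      </secret>
    XML

    hosts.each do |host|
      # Ship the secret definition and load the key into it on each node
      Open3.capture2("ssh #{host} 'cat > /tmp/secret.xml'", stdin_data: secret_xml)
      system('ssh', host, 'virsh secret-define /tmp/secret.xml') or abort "define failed on #{host}"
      system('ssh', host, "virsh secret-set-value --secret #{uuid} --base64 #{key}") or abort "set-value failed on #{host}"
    end

    puts "Use CEPH_SECRET=#{uuid} in the Ceph datastore template"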

thanks,
Jaime


On Thu, Nov 21, 2013 at 10:39 PM, Campbell, Bill 
bcampb...@axcess-financial.com wrote:

 Thanks for the response.

 I'm looking at this from an automation-level perspective. If we had the
 key defined within OpenNebula, that would provide one location where the
 related data needs to be stored (vs. multiple, etc.). Trying to adjust for my
 use case systemically.

 As far as the key being public, can particular attributes only be seen or
 designated based on group ACLs? Or do ACLs only control access at the
 datastore level? I'd agree that storing the key would defeat the purpose,
 depending on the use case (ours is internal with no 'users' per se, just
 administrative access).

 --
 *From: *Ruben S. Montero rsmont...@opennebula.org
 *To: *Bill Campbell bcampb...@axcess-financial.com
 *Cc: *Users OpenNebula users@lists.opennebula.org
 *Sent: *Thursday, November 21, 2013 4:23:03 PM
 *Subject: *Re: [one-users] Ceph w/ CephX Authentication


 Hi Bill

 Thanks for giving 4.4beta a try and for your great feedback on integrating
 cephx. So there are a couple of things to clear up:

 1.- About the optional use of cephx.

 Currently, if CEPH_HOST is defined we generate the elements:

 <host name='$FIRST CEPHMON ENTRY' port='$CEPHPORT'/>

 But the auth element is included only if CEPH_SECRET (identified by a UUID)
 is defined.

 Is this a good way to enable/disable cephx?

 2.- About the cephx user

 Yes, now it is hardcoded to libvirt but you should be able to change that.
 We will add a CEPH_USER attribute to the Datastore configuration. So

 <auth username='$CEPHUSER'>

 is generated as in 1796

 3.- About secret definition

 There are two options then:

 a) create and define secret.xml in all hypervisors, and add the UUID used
 to identify the secret to the datastore template.

 b) Let the admin define the key (and optionally a UUID for it) and then
 create and define the secret before deploying the VM.

 Currently we have a)

 My concern with b) is that any user able to register an image in the
 datastore, or to deploy a VM using an image from the ceph datastore, will
 have access to the KEY. I am not sure if that defeats the whole purpose of
 using cephx. If you are going to make the key public, why bother?

 What do you think?

 On the other hand, with a) you need to set up the secret on the hypervisors,
 but we're assuming this can be automated as part of the installation process.

 Again, we really appreciate your feedback and contributions on this


 Cheers

 Ruben








 On Thu, Nov 21, 2013 at 5:29 PM, Campbell, Bill 
 bcampb...@axcess-financial.com wrote:

 So I would like to share my experiences with the new CephX authentication
 support in 4.4, and I have a few questions for the development team.

 In short, it works, but the documentation is not very clear on what needs
 to be done to make it work, so here's what I did (and I'm certain this is
 the intent of development currently):


  - A secret needs to be defined within Libvirt on ALL hypervisor nodes that
    you utilize with Ceph. This needs to be done manually prior to deploying
    any virtual instances within the cluster.
  - It looks like development used the guideline set forth by the Ceph
    documentation. In this, it is required to create a *client.libvirt* user,
    as shown in the example in the Ceph documentation. Otherwise, that user is
    not validated via Ceph and will be unable to access RBDs in the cluster.
  - The UUID can be specified or generated; however, the secret.xml file must
    be defined. See my example here: http://dev.opennebula.org/issues/1796 and
    reference the Ceph documentation for defining the secret.xml within
    Libvirt.
  - You must copy the key for the client.libvirt user and define the secret:
    - 'ceph auth list' will list the users and their appropriate keys. Copy
      the key for placement in the definition string. See the link to the
      opennebula issue 1796 above for an example.


 Now my questions for the OpenNebula devs:


  - Since the Ceph_Secret UUID can be defined within a secret.xml file, can
    that UUID be automatically generated as part of the Ceph Datastore
    definition?
    - I see why it's not based on the source, checking to see if the UUID
      variable exists to ensure the deployment file has the appropriate
      information.
    - Can there be an option for Ceph Datastore creation for whether or not
      you utilize CephX authentication on the Datastore (like a checkbox or
      YES/NO)? If so, then a UUID can be generated and the CEPH_SECRET
      attribute can be defined 

[one-users] User couldn't be authenticated, aborting call

2013-11-28 Thread Eduardo Roloff
Hello all,

I'm new to OpenNebula and my company is doing a large POC with several
Cloud Managers.

I installed ONE 4.2 using yum on a CentOS front-end.

When I try to use some of the one commands with the oneadmin user, I
get the message "User couldn't be authenticated, aborting call".

What do I need to verify to check what is wrong with my installation?

Thanks
Eduardo
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] opennebula 4.4.0

2013-11-28 Thread Rolandas Naujikas

Different free space is shown for local datastores in Sunstone and from the CLI.

For example, we have 1.5GB / 130.6GB (1%) in Sunstone, while the CLI shows:

LOCAL SYSTEM DATASTORE #102 CAPACITY
TOTAL: 130.6G
USED:  1M
FREE:  129.1G

The difference is that a different method is used to calculate the used
space. The CLI uses the monitored value (from du -sLm), while Sunstone uses:


var total = parseInt(ds.TOTAL_MB);
var used = total - parseInt(ds.FREE_MB);

The difference arises because some space is used by the file system itself, and
the space could also be shared with local OS installation files.


Regards, Rolandas
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Shared datastore only to cluster

2013-11-28 Thread Gary S. Cuozzo
Hi, 
I believe you should be able to do this by creating a custom datastore driver. 

I have a use case which is not too different from yours. I created a 'localfs' 
datastore transfer driver which is able to create images on a specific host, 
utilizing local disk storage (for performance reasons). While your storage is 
actually shared across hosts, it is similar to mine in that the filesystems 
only exist on host machines and not in the front-end. 

All you should have to do is use a shared SYSTEM datastore and create a 
transfer driver which creates the proper symlinks in the system datastore over 
to the image in your ocfs2. I believe the iscsi drivers do something similar. 
You may want to take a look at them. 
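
As a rough illustration of the symlinking idea (the real TM drivers ship as 
shell scripts; this is only a hypothetical Ruby sketch of what such an "ln" 
step would do, driven from the front-end over SSH): 

    # Hypothetical sketch of a TM "ln" step: instead of copying the image into
    # the system datastore, create a symlink on the destination host pointing
    # at the image that already lives on the shared (ocfs2) filesystem.
    def tm_ln(host, src_image, dst_path)
      dst_dir = File.dirname(dst_path)
      cmd = "mkdir -p #{dst_dir} && ln -sf #{src_image} #{dst_path}"
      system('ssh', host, cmd) or raise "ln failed on #{host}"
    end

    # e.g. link a registered image into the VM directory of the system DS
    tm_ln('node01',
          '/ocfs2/images/one-42',                   # image on the shared FS (example path)
          '/var/lib/one/datastores/0/35/disk.0')    # VM 35, disk 0 in the system DS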

Hope that helps, 
gary 

- Original Message -

From: Ruben S. Montero rsmont...@opennebula.org 

Hi Mário 

I am not really sure I fully understand your question. So let me briefly 
summarize the use of clusters and datastores: 
* There are 2 datastore types: Images (to store VM disks) and System (to run 
the VMs from there). Stopped VMs (disks and checkpoint) are stored in the 
System DS. 
* DS can be either shared among all the clusters (assigned to CLUSTER_NONE) or 
assigned to a cluster. 
So if you have a stopped guest that needs to be allocated to a different 
cluster you should be able to do it as long as all the files are accessible to 
both hosts. Otherwise it may not work (if the VM is using persistent images for 
example). 


Hope it is clearer now; if not, come back. 

Ruben 


On Wed, Aug 28, 2013 at 7:09 PM, Mário Reis  mario.r...@ivirgula.com  wrote: 



Hi all, 

I'm trying to build an OpenNebula infrastructure, but I guess I'm missing 
something basic about datastores. 

Currently I have 3 Ubuntu hosts virtualizing several KVM guests. I created 
another guest and installed OpenNebula 4.2 on Gentoo in this new guest. I 
removed all VMs from the third host, created a cluster in the Sunstone 
interface, and added this host to it. So far so good, the host is monitored 
and shows available CPU/RAM. 

The next step was to import a qcow2 image from a stopped guest and use it on 
this host. These files are shared across all hosts using ocfs2 over FC LUNs 
(the OpenNebula guest doesn't actually have access to them). 

So, my problem here is: 

Can I add these image files to a datastore that only exists on the cluster 
hosts? It seems that I can't do this without placing them in the front-end 
guest. 




___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 







-- 
-- 
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013 
-- 
Ruben S. Montero, PhD 
Project co-Lead and Chief Architect 
OpenNebula - The Open Source Solution for Data Center Virtualization 
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula 

___ 
Users mailing list 
Users@lists.opennebula.org 
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org 

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] opennebula 4.4.0

2013-11-28 Thread Carlos Martín Sánchez
Hi,

You are right, there is an inconsistency there.

This one is tricky... The problem comes from the fact that TOTAL != FREE +
USED.
USED is the amount of storage taken by OpenNebula, but FREE is TOTAL minus
the storage taken by ONE or by anything else (e.g. by the OS).

In the onedatastore list column and in the Sunstone table we use TOTAL and
FREE: in the CLI to show the available percentage, and in Sunstone to show
the consumed percentage.
We could also compute the percentages with TOTAL and USED, or with USED and
FREE, and all three options would be valid.

We will have to rethink this when we tackle issue 2402 [1]. DS will have a
logical ALLOCATED usage, and the code to determine scheduling and storage
capacity will be more similar to what we do in the hosts for cpu and memory.
Probably TOTAL will need to be reported as the total available for
opennebula, and not the total disk; discarding the storage taken outside
the datastore dir.
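
To make the arithmetic concrete, a small sketch of the three possible
percentages, using illustrative numbers close to the report above:

    # Illustrative values in MB; note TOTAL != FREE + USED because part of the
    # disk is taken by the filesystem / the OS outside the datastore dir.
    total_mb = 133_734   # ~130.6G
    free_mb  = 132_198   # ~129.1G
    used_mb  = 1         # 1M, as monitored by OpenNebula (du -sLm)

    cli_free_pct      = 100.0 * free_mb / total_mb               # available %, as the CLI shows
    sunstone_used_pct = 100.0 * (total_mb - free_mb) / total_mb  # consumed %, as Sunstone shows
    one_used_pct      = 100.0 * used_mb / total_mb               # usage attributed to OpenNebula only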

Regards

[1] http://dev.opennebula.org/issues/2402
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula http://twitter.com/opennebula


On Thu, Nov 28, 2013 at 3:33 PM, Rolandas Naujikas 
rolandas.nauji...@mif.vu.lt wrote:

 Different free space is shown for local datastores in sunstone and from
 cli.

 For e.g. we have 1.5GB / 130.6GB (1%) in sunstone, when in cli it shows:
 LOCAL SYSTEM DATASTORE #102 CAPACITY
 TOTAL:: 130.6G
 USED: : 1M
 FREE: : 129.1G

 The difference is because different method is used to calculate used
 space. In cli it uses monitored value (from du -sLm), when sunstone uses

 var total = parseInt(ds.TOTAL_MB);
 var used = total - parseInt(ds.FREE_MB);

 Difference is because some space is used by file system itself and also it
 could be shared with some local OS installation files.

 Regards, Rolandas
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] base path of vmfs system datastore

2013-11-28 Thread Daems Dirk
Hi list,

I am quite new to OpenNebula.
I installed the OpenNebula 4.2 client and configured my VMware hypervisor. I 
can create an OpenNebula host which connects to the VMware hypervisor without 
problems.

However, I can't seem to get the pre-defined system and image datastores to 
work with the vmfs datastore type. I set the DATASTORE_LOCATION in the 
oned.conf file to /vmfs/volumes. SSH access for the oneadmin account is working 
fine. I also updated the pre-defined system (0) and image (1) datastores using 
the correct attribute values for DS_MAD, TM_MAD and BRIDGE_LIST. In the vSphere 
client, I created VMFS datastores reflecting the datastore id's 0 and 1.

Now the base path of these datastores still points to /var/lib/one/datastores/0 
and /var/lib/one/datastores/1, which is not correct.
I would think they need to point to /vmfs/volumes/0 and /vmfs/volumes/1?

Is there a way to change this? I could also create additional system and image 
datastores, but I thought the system datastore should have ID 0. In the vmfs 
documentation there's a note that the DATASTORE_LOCATION should be configured 
before creating the datastore. However, as the system datastore is auto-created, 
how can one change the DATASTORE_LOCATION before creation?

Regards,
Dirk
VITO Disclaimer: http://www.vito.be/e-maildisclaimer
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] User couldn't be authenticated, aborting call

2013-11-28 Thread Carlos Martín Sánchez
Hi,

The authentication is made using the one_auth file, located in
~/.one/one_auth. It should contain the text oneadmin:password.
This file is created during the package installation with a random password.

The first time OpenNebula is started, it will read this file and create the
oneadmin user with that password.

Maybe something went wrong during the installation... you could look for
errors in /var/log/one/oned.log.
Anyway, if you didn't create any resources yet, you can simply stop
opennebula and delete the database (/var/lib/one/one.db if you are using
sqlite). Then make sure the one_auth file exists and start OpenNebula again.
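
A quick way to sanity-check or recreate the file (a minimal sketch; the
password below is only a placeholder):

    # Minimal sketch: make sure ~/.one/one_auth exists for oneadmin and has the
    # expected "user:password" format before (re)starting OpenNebula.
    require 'fileutils'

    auth_file = File.expand_path('~/.one/one_auth')

    unless File.exist?(auth_file)
      FileUtils.mkdir_p(File.dirname(auth_file))
      File.write(auth_file, 'oneadmin:some_secret_password')  # placeholder password
      File.chmod(0600, auth_file)
    end

    user, password = File.read(auth_file).strip.split(':', 2)
    abort 'one_auth is not in the user:password format' if password.nil? || password.empty?
    puts "one_auth found for user #{user}"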

Hope that helps.
Carlos
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula http://twitter.com/opennebula


On Thu, Nov 28, 2013 at 1:58 PM, Eduardo Roloff rol...@gmail.com wrote:

 Hello all,

 I'm new to OpenNebula and my company is doing a large POC with several
 Cloud Managers.

 I installed ONE 4.2 using yum in a CenOS front-end.

 When I try to use some of the one commands with the oneadmin user, I
 got this message User couldn't be authenticated, aborting call

 What I need to verify to check waht is wrong with my installation?

 Thanks
 Eduardo
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] base path of vmfs system datastore

2013-11-28 Thread Tino Vazquez
Hi,

You are right, there is a chicken-and-egg problem with VMware and DS 0
and 1 in OpenNebula 4.2 that prevents the use of the default
datastores in VMware. This is fixed in the imminent OpenNebula 4.4
Retina.

The workaround is to simply use two new datastores (say, 100 and 101),
configure them with the appropriate BRIDGE_LIST and 'vmfs' drivers and
DATASTORE_LOCATION, rename the DS in the ESX hosts and you should be
good to go.
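
In case it helps, a sketch of what such a datastore definition could look like
through the Ruby OCA bindings (the attribute values are examples only; the same
template text can also be fed to onedatastore create):

    # Sketch (OpenNebula 4.x Ruby OCA): create a new vmfs system datastore
    # instead of reusing the pre-defined datastores 0/1.
    require 'opennebula'
    include OpenNebula

    client = Client.new            # uses the ONE_AUTH / ONE_XMLRPC defaults

    template = <<~EOT
      NAME        = "vmfs_system"
      TYPE        = "SYSTEM_DS"
      TM_MAD      = "vmfs"
      BRIDGE_LIST = "esx-host1 esx-host2"
    EOT

    ds = Datastore.new(Datastore.build_xml, client)
    rc = ds.allocate(template)
    puts OpenNebula.is_error?(rc) ? rc.message : "Created datastore #{ds.id}"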

Regards,

-Tino

--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
To and cc box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at ab...@c12g.com and delete the e-mail and
attachments and any copy from your system. C12G thanks you for your
cooperation.


On Thu, Nov 28, 2013 at 5:56 PM, Daems Dirk dirk.da...@vito.be wrote:
 Hi list,



 I am quite new to OpenNebula.

 I installed the OpenNebula 4.2 client and configured my VMWare hypervisor. I
 can create an OpenNebula host which connects to the VMWare hypervisor
 without problems.



 However, I can’t seem to get the pre-defined system and image datastores to
 work with the vmfs datastore type. I set the DATASTORE_LOCATION in the
 oned.conf file to /vmfs/volumes. SSH access for the oneadmin account is
 working fine. I also updated the pre-defined system (0) and image (1)
 datastores using the correct attribute values for DS_MAD, TM_MAD and
 BRIDGE_LIST. In the vSphere client, I created VMFS datastores reflecting the
 datastore id’s 0 and 1.



 Now the base path of these datastores still points to
 /var/lib/one/datastores/0 and /var/lib/one/datastores/0 which is not
 correct.

 I would think they need to point to /vmfs/volumes/0 and /vmfs/volumes/1?



 Is there a way to change this? I could also create additional system and
 image datastores but I though the system datastore should have id 0. In the
 vmfs documentation there’s a note that the DATASTORE_LOCATION should be
 configured before creating the datastore. However, as the system datastore
 is auto-created how can one change the DATASTORE_LOCATION before creation?



 Regards,

 Dirk

 VITO Disclaimer: http://www.vito.be/e-maildisclaimer

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Sunstone image upload not working - images not in tmpdir

2013-11-28 Thread Stefan Kooman
Quoting Daniel Molina (dmol...@opennebula.org):
 On 27 November 2013 21:27, Stefan Kooman ste...@bit.nl wrote:
 
 
 We didn't change anything, just:
 https://github.com/OpenNebula/one/commit/f8e2e65b0170268e9c72d52c4fe9f0e13fa05acd
 
 So, it should work as before.

Passenger 4.0.26 is giving me this error (500):

[ 2013-11-28 17:34:07.3852 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr] NameError - uninitialized constant 
PhusionPassenger::Utils::RewindableInput:
[ 2013-11-28 17:34:07.3858 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/one/sunstone/sunstone-server.rb:412:in `block 
in top (required)'
[ 2013-11-28 17:34:07.3860 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:1212:in 
`call'
[ 2013-11-28 17:34:07.3862 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:1212:in 
`block in compile!'
[ 2013-11-28 17:34:07.3864 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:785:in `[]'
[ 2013-11-28 17:34:07.3865 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:785:in 
`block (3 levels) in route!'
[ 2013-11-28 17:34:07.3867 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:801:in 
`route_eval'
[ 2013-11-28 17:34:07.3868 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:785:in 
`block (2 levels) in route!'
[ 2013-11-28 17:34:07.3870 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:822:in 
`block in process_route'
[ 2013-11-28 17:34:07.3872 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:820:in 
`catch'
[ 2013-11-28 17:34:07.3874 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:820:in 
`process_route'
[ 2013-11-28 17:34:07.3875 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:784:in 
`block in route!'
[ 2013-11-28 17:34:07.3877 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:783:in `each'
[ 2013-11-28 17:34:07.3879 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:783:in 
`route!'
[ 2013-11-28 17:34:07.3881 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:886:in 
`dispatch!'
[ 2013-11-28 17:34:07.3883 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:719:in 
`block in call!'
[ 2013-11-28 17:34:07.3884 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:871:in 
`block in invoke'
[ 2013-11-28 17:34:07.3886 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:871:in 
`catch'
[ 2013-11-28 17:34:07.3888 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:871:in 
`invoke'
[ 2013-11-28 17:34:07.3890 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:719:in 
`call!'
[ 2013-11-28 17:34:07.3891 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/sinatra/base.rb:705:in `call'
[ 2013-11-28 17:34:07.3893 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   /usr/lib/ruby/vendor_ruby/rack/commonlogger.rb:33:in 
`call'
[ 2013-11-28 17:34:07.3896 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   
/usr/lib/ruby/vendor_ruby/rack/session/abstract/id.rb:225:in `context'
[ 2013-11-28 17:34:07.3898 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   
/usr/lib/ruby/vendor_ruby/rack/session/abstract/id.rb:220:in `call'
[ 2013-11-28 17:34:07.3900 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   
/usr/lib/ruby/vendor_ruby/rack/protection/xss_header.rb:18:in `call'
[ 2013-11-28 17:34:07.3901 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   
/usr/lib/ruby/vendor_ruby/rack/protection/path_traversal.rb:16:in `call'
[ 2013-11-28 17:34:07.3903 19823/7f177e5a6700 Pool2/Implementation.cpp:1291 ]: 
[App 19906 stderr]   
/usr/lib/ruby/vendor_ruby/rack/protection/json_csrf.rb:18:in `call'
[ 2013-11-28 17:34:07.3905 19823/7f177e5a6700 

Re: [one-users] oneflow service names

2013-11-28 Thread Carlos Martín Sánchez
Hi,

It is not supported yet, but this is something we'll add eventually [1].

Thanks for your feedback.

[1] http://dev.opennebula.org/issues/2123

--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org |
@OpenNebula http://twitter.com/opennebula


On Thu, Nov 28, 2013 at 1:07 AM, Shankhadeep Shome shank15...@gmail.com wrote:

 Is it possible to change the name of a OneFlow service once it's
 deployed? If not, I think it would be a nice feature, or at least have a
 way to differentiate one instantiation from another. Right now the service
 name stays the same, just the two IDs are different.



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] reg oneflow - service template

2013-11-28 Thread Rajendar K
Hi All,
  I am using OpenNebula 4.2 and I have the following queries
related to the auto-scaling features:

(i) Even when the conditions are met, scaling up takes a long time to trigger.
For my trial, I have used #periods 5, #period 30, cooldown 30.

- I can see the values 5/5 in the log, and then it crosses
16/5 over time.
- I am unable to trace when it actually triggers;
I would expect it to trigger when it reaches 5/5?

(ii) Scheduled policies
 - For my trial I have specified
 start time - 2013-11-29 10:30:30
  Is my time format correct? I am unable to scale up/down
using this time format.

(iii) Is it an AND of the conditions if I specify both (i) and (ii)? I have
tried (i) and (ii) separately, but still in vain.

Kindly help me clarify this.


with regards
Raj





Raj,

Believe Yourself...


On Fri, Oct 25, 2013 at 5:51 PM, Rajendar K k.rajen...@gmail.com wrote:

 Dear All,
   I am using OpenNebula 4.2 and oneapps 3.9.
 I would like to know how oneflow templates are stored and retrieved.
 Are the oneflow template values stored in the database?

 Kindly provide me suggestions...

 with regards
 Raj,



___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Sunstone image upload not working - images not in tmpdir

2013-11-28 Thread Daniel Molina
On 28 November 2013 18:15, Stefan Kooman ste...@bit.nl wrote:

 Quoting Daniel Molina (dmol...@opennebula.org):
  On 27 November 2013 21:27, Stefan Kooman ste...@bit.nl wrote:
 
 
  We didn't change anything, just:
 
 https://github.com/OpenNebula/one/commit/f8e2e65b0170268e9c72d52c4fe9f0e13fa05acd
 
  So, it should work as before.

 Passenger 4.0.26 is giving me this error (500):

 [ 2013-11-28 17:34:07.3852 19823/7f177e5a6700
 Pool2/Implementation.cpp:1291 ]: [App 19906 stderr] NameError -
 uninitialized constant PhusionPassenger::Utils::RewindableInput:
 [ 2013-11-28 17:34:07.3858 19823/7f177e5a6700
 Pool2/Implementation.cpp:1291 ]: [App 19906 stderr]
 /usr/lib/one/sunstone/sunstone-server.rb:412:in `block in top (required)'

 Image upload _is_ working with Apache Passenger 3.0.13debian-1.2.
 Apparently Passenger 4.x needs this class to be handled differently.


Could you check what the value of rackinput.class is in the post '/upload' do
block of sunstone-server.rb? You can add a
logger.error(rackinput.class) and it will be reported in the log.
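
Something along these lines (a sketch of the debugging change only, assuming
rackinput comes from the Rack environment):

    # sunstone-server.rb -- sketch of the suggested debug line inside the
    # upload route, to see which Rack input class Passenger 4 hands over.
    post '/upload' do
        rackinput = request.env['rack.input']   # assumption about where rackinput comes from
        logger.error(rackinput.class)           # will show up in the Sunstone log
        # ... the existing upload handling (Tempfile creation, etc.) continues here
    end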

Cheers



 Gr. Stefan

 --
 | BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
 | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl





-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] resize VM dialog is better, but there is a problem with VCPU

2013-11-28 Thread Daniel Molina
Hi Rolandas,

Could you please open a ticket on our dev page so we can consider it for the
next releases?

Thanks

On 26 November 2013 15:04, Rolandas Naujikas rolandas.nauji...@mif.vu.lt wrote:

 Hi,

 After http://dev.opennebula.org/projects/opennebula/repository/revisions/ba2bd837018a4d606e554b919359b74ecc888d9f
 the resize dialog works better, but for VCPU there is a logic error. In the
 leftmost position the VCPU value should be at least 1; 0 is not a valid
 value. The maximum value should also be bigger, at least 16.

 Also, a value of 0 is not meaningful for CPU and MEMORY.

 Regards, Rolandas

 P.S. Of course all those values could be entered manually.
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org