[one-users] Problem with Datablock image template

2012-07-27 Thread Humberto N. Castejon Martinez
Hi!

When I create a Datablock image using Sunstone (ONE v3.0), OpenNebula
generates an image template with bus=IDE (i.e. in uppercase). Then, when I
try to create a VM, libvirt complains that it does not recognize the IDE
bus type. The template should therefore be generated with bus=ide.
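
As a workaround until the template generation is fixed, it seems the bus can
be forced back to lowercase from the DISK section of the VM template (just a
sketch; the image ID and the exact attribute-merging behaviour are
assumptions on my side):

DISK = [ IMAGE_ID = 5, BUS = "ide" ]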

Cheers,
Humberto


[one-users] Econe and OCCI servers

2012-07-25 Thread Humberto N. Castejon Martinez
Hi!

After trying to access OpenNebula through the OCCI API for some time, I
realized that the OCCI server was in fact not running, even though
everything had been installed and configured according to the
documentation. After some debugging, I found out that it was actually the
Econe server that was running on port 4567 (the default port for the OCCI
server). It seems that port 4567 is used by default for both the Econe and
OCCI servers, so only one of them can run at a time. This is true for
version 3.0 of OpenNebula, and, according to the documentation on the web,
I suspect it is also true for version 3.4. I hope you can change the
default port for one of the servers, so other people will not run into the
same problem. (The real problem was that I was receiving a non-authorized
error message from what I thought was the OCCI server, but it was the Econe
server responding; the ONE_AUTH variable was properly set and I was using
the right credentials, so I could not understand what the problem was.)
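
For anyone hitting the same thing: if I read the configuration files right,
both /etc/one/occi-server.conf and /etc/one/econe.conf carry a :port: entry,
so moving one of the two servers to a free port should avoid the clash (the
value below is only an example):

# in /etc/one/occi-server.conf (or in /etc/one/econe.conf)
:port: 4568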

Cheers,
Humberto


[one-users] Exception in oneacct

2011-12-21 Thread Humberto N. Castejon Martinez
Hi,

I am running OpenNebula 3.0 and have enabled the Accounting and Statistics
module, using MySQL (and have applied the patch indicated in the
documentation). I can see the graphs with monitoring information in
Sunstone, but from time to time oneacctd crashes. This is the content of
/var/log/one/oneacctd.log:

Tue Dec 20 17:14:57 +0100 2011 OneWatch::VmMonitoring
Tue Dec 20 17:19:57 +0100 2011 OneWatch::VmMonitoring
Tue Dec 20 17:24:57 +0100 2011 OneWatch::VmMonitoring
Tue Dec 20 17:24:57 +0100 2011 OneWatch::HostMonitoring
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/mysql.rb:167:in
`query': Mysql::Error: Duplicate entry '1-0' for key 'PRIMARY'
(Sequel::DatabaseError)
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/mysql.rb:167:in
`_execute'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/logging.rb:28:in
`log_yield'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/mysql.rb:167:in
`_execute'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/shared/mysql_prepared_statements.rb:23:in
`execute'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:71:in
`hold'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/connecting.rb:225:in
`synchronize'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/shared/mysql_prepared_statements.rb:23:in
`execute'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/query.rb:74:in
`execute_dui'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/dataset/actions.rb:657:in
`execute_dui'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/mysql.rb:346:in
`execute_dui'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/mysql.rb:334:in
`update'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/model/associations.rb:1279:in
`_remove_all_samples'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/model/associations.rb:1465:in
`send'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/model/associations.rb:1465:in
`remove_all_associated_objects'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/model/associations.rb:1294:in
`remove_all_samples'
from /usr/lib/one/ruby/acct/watch_helper.rb:279:in `fix_size'
from /usr/lib/one/ruby/acct/watch_helper.rb:449:in `flush'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/query.rb:259:in
`_transaction'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/query.rb:228:in
`transaction'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:84:in
`hold'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/connecting.rb:225:in
`synchronize'
from
/usr/lib/ruby/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/query.rb:226:in
`transaction'
from /usr/lib/one/ruby/acct/watch_helper.rb:446:in `flush'
from /usr/lib/one/ruby/acct/monitoring.rb:31:in `insert'
from /usr/lib/one/ruby/acct/acctd.rb:77:in `update'
from /usr/lib/one/ruby/acct/acctd.rb:59:in `each'
from /usr/lib/one/ruby/acct/acctd.rb:59:in `update'
from /usr/lib/one/ruby/acct/acctd.rb:57:in `each'
from /usr/lib/one/ruby/acct/acctd.rb:57:in `update'
from /usr/lib/one/ruby/acct/acctd.rb:129
from /usr/lib/one/ruby/acct/acctd.rb:124:in `loop'
from /usr/lib/one/ruby/acct/acctd.rb:124

I guess there must be some bug hidden somewhere, or I did something wrong
during the installation.

Cheers,
Humberto


[one-users] Adding a host via Sunstone with non-standard manager(s)

2011-12-21 Thread Humberto N. Castejon Martinez
Hi,

Adding a host via Sunstone does not allow specifying managers other than
the standard ones (e.g. if a new image transfer manager is created, it
cannot be selected in Sunstone). This would be desirable. Or is there a way
to configure the list of available managers to select from in Sunstone?

Cheers,
Humberto




[one-users] Problem with persistent image

2011-11-29 Thread Humberto N. Castejon Martinez
Hi,

Given a VM created from a persistent image, if I execute Cancel on the VM,
everything works fine. I can see in the logs that, first, the shared TM's mv
is executed, which moves /var/lib/one/vid/images/disk.0 to
/var/lib/one/vid/disk.0. Then, the IM's mv is executed, which moves
/var/lib/one/vid/disk.0 to the repository (well, it does not really do it,
since the file in the repo is the one linked by disk.0).
When I execute Shutdown and then Delete on the VM, the persistent image
enters the Error state. In the logs I can see that only the IM's mv is
executed, which tries to move /var/lib/one/vid/disk.0 to the repository.
However, /var/lib/one/vid/disk.0 does not exist, and therefore the move
command exits with an error.

Any clue about what is wrong?

Also, in the case of persistent images and a shared repository + VM_DIR, the
move commands are executed but they always do nothing, right? That is, they
are not really needed, since the VM uses the image in the repo directly,
right? Move commands are then only needed in the case of non-persistent
images in combination with Save As, right?
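
For what it is worth, a defensive move in the driver script along these
lines would at least avoid the hard failure when disk.0 is already gone (a
rough sketch, not the stock driver; arg_path, exec_and_log and log are the
helpers from the common TM scripts):

SRC_PATH=`arg_path $1`   # e.g. worker:/var/lib/one/vid/disk.0
DST_PATH=`arg_path $2`   # destination path in the image repository
if [ -e "$SRC_PATH" ]; then
    exec_and_log "mv $SRC_PATH $DST_PATH"
else
    log "$SRC_PATH does not exist, nothing to move"
fi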

Cheers,
Humberto


Re: [one-users] Database problem with oZones server

2011-11-18 Thread Humberto N. Castejon Martinez
That's right, it was a permissions problem. I had tried to run the ozones
server as oneadmin before asking for help, but it did not work. The actual
problem was that I had run the server the very first time as root, so root
became the owner of the log file, which caused problems when running the
server as oneadmin later on. I removed the log file, started the server as
oneadmin, and everything worked. Thanks!
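
In case someone else hits this, the fix boiled down to something like the
following (the exact log file name depends on the installation, so take it
as an assumption):

# as root: give the log back to oneadmin (or simply delete it)
chown oneadmin:oneadmin /var/log/one/ozones-server.log
# then start the server as the unprivileged user
su - oneadmin -c 'ozones-server start'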

Cheers,
Humberto

 FYI


 On Tue, Nov 15, 2011 at 4:34 PM, Tino Vazquez tin...@opennebula.org
wrote:

 Hi,

 This is probably a permissions issue

 You should run the server as an unprivileged user (like oneadmin). It
 should have write permissions in VAR_LOCATION, which is:

  * If ONE_LOCATION is defined, $ONE_LOCATION/var
  * otherwise, /var/lib/one

 Regards,

 -Tino

 --
 Constantino Vázquez Blanco, MSc

 Project Engineer
 OpenNebula - The Open Source Toolkit for Data Center Virtualization
 www.OpenNebula.org http://www.opennebula.org/ | @tinova79 | @OpenNebula



 On Tue, Nov 15, 2011 at 2:21 PM, Humberto N. Castejon Martinez 
 humca...@gmail.com wrote:

 Hi,

 I am trying to run the oZones sever, but have problems accessing the
 database. With both, MySQL and SQLite, when I run ozones-server start
as
 root, I get the following error message:


/usr/lib/ruby/gems/1.8/gems/data_objects-0.10.7/lib/data_objects/connection.rb:79:in
 `initialize': unable to open database file (DataObjects::Connection$
 ...
 from /usr/lib/ruby/gems/1.8/gems/rack-1.3.5/lib/rack/builder.rb:51:in
 `initialize'
 from /usr/lib/one/ozones/config.ru:1:in `new'
 from /usr/lib/one/ozones/config.ru:1
 ~ unable to open database file (code: 14, sql state: , query: , uri: )

 Would like to know if I have to configure the database in some specific
 way, or if I am doing something wrong. Thanks!

 Cheers,
 Humberto






Date: Thu, 17 Nov 2011 18:20:22 +0100
From: Jaime Melis jme...@opennebula.org
To: Erico Augusto Cavalcanti Guedes e...@cin.ufpe.br
Cc: users@lists.opennebula.org
Subject: Re: [one-users] Doubt about VNet operation

Hello Erico,

take a look at this thread which is currently under discussion:
http://www.mail-archive.com/users@lists.opennebula.org/msg04582.html

regards,
Jaime

On Wed, Nov 16, 2011 at 2:15 AM, Erico Augusto Cavalcanti Guedes
e...@cin.ufpe.br wrote:
 Dears,

 when two VMs are running on the same physical machine under one single
 VNet, is communication between them performed without an external device,
 like a switch, or is it unpredictable whether the traffic passes through
 the switch?

 I read [1], but it was inconclusive.

 Thanks in advance,

 Erico.

 [1]

http://www.opennebula.org/_detail/documentation:rel1.4:network-02.png?id=documentation%3Arel3.0%3Avgg








--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org


Date: Thu, 17 Nov 2011 18:07:48 -
From: João Soares joaosoa...@ua.pt
To: users@lists.opennebula.org
Subject: [one-users] OpenNebula 3.0 & OpenVSwitch

Hello,

There are two ways of installing OpenVSwitch: with kernel modification, and
without it (userspace mode). I was thinking about installing the userspace
mode; is there any problem? Is there any difference in terms of OpenNebula's
interaction with OVS?

Cheers,

João





Re: [one-users] error create VM KVM

2011-11-16 Thread Humberto N. Castejon Martinez
Hi,

I have also experienced this problem, and if I am not wrong it can be due
to several reasons. In your case I think the following error message holds
the clue:

[VMM][I]: error: unknown OS type hvm

Have you properly defined the OS attribute of the image you are trying to
deploy?
Also, check /var/log/libvirt/qemu/one-9.log in cluster2 for more info about
the problem.
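
In case it helps, a few quick checks on the host (cluster2 here) usually
show whether KVM/libvirt can actually provide hvm guests:

egrep -c '(vmx|svm)' /proc/cpuinfo                   # >0: the CPU supports hardware virtualization
lsmod | grep kvm                                     # kvm plus kvm_intel or kvm_amd should be loaded
virsh -c qemu:///system capabilities | grep -i hvm   # libvirt should report hvm guest support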

Cheers,
Humberto

Date: Wed, 16 Nov 2011 02:54:01 +0700
From: Dian Djaelani djael...@maxindo.net.id
To: users@lists.opennebula.org
Subject: [one-users] error create VM KVM

Can anyone help me? I am trying to build OpenNebula but get an error when
creating a virtual machine with the KVM hypervisor.

Wed Nov 16 02:29:57 2011 [DiM][I]: New VM state is ACTIVE.
Wed Nov 16 02:29:57 2011 [LCM][I]: New VM state is PROLOG.
Wed Nov 16 02:29:57 2011 [VM][I]: Virtual Machine has no context
Wed Nov 16 02:30:57 2011 [TM][D]: tm_clone.sh:
frontend:/var/lib/one/images/f29c4332b31b94eeed731f0dbd7cebbf
cluster2:/var/lib/one//9/images/disk.0
Wed Nov 16 02:30:57 2011 [TM][D]: tm_clone.sh: DST:
/var/lib/one//9/images/disk.0
Wed Nov 16 02:30:57 2011 [TM][I]: tm_clone.sh: Creating directory
/var/lib/one//9/images
Wed Nov 16 02:30:57 2011 [TM][I]: tm_clone.sh: Executed ssh cluster2
mkdir -p /var/lib/one//9/images.
Wed Nov 16 02:30:57 2011 [TM][I]: tm_clone.sh: Cloning
frontend:/var/lib/one/images/f29c4332b31b94eeed731f0dbd7cebbf
Wed Nov 16 02:30:57 2011 [TM][I]: tm_clone.sh: Executed scp
frontend:/var/lib/one/images/f29c4332b31b94eeed731f0dbd7cebbf
cluster2:/var/lib/one//9/images/disk.0.
Wed Nov 16 02:30:57 2011 [TM][I]: tm_clone.sh: Executed ssh cluster2
chmod a+rw /var/lib/one//9/images/disk.0.
Wed Nov 16 02:30:57 2011 [TM][I]: ExitCode: 0
Wed Nov 16 02:30:57 2011 [LCM][I]: New VM state is BOOT
Wed Nov 16 02:30:57 2011 [VMM][I]: Generating deployment file:
/var/lib/one/9/deployment.0
Wed Nov 16 02:31:07 2011 [VMM][I]: Command execution fail: 'if [ -x
/var/tmp/one/vmm/kvm/deploy ]; then /var/tmp/one/vmm/kvm/deploy
/var/lib/one//9/images/deployment.0 cluster2 9 cluster2;
else  exit 42; fi'
Wed Nov 16 02:31:07 2011 [VMM][I]: error: Failed to create domain from
/var/lib/one//9/images/deployment.0
Wed Nov 16 02:31:07 2011 [VMM][I]: error: unknown OS type hvm
Wed Nov 16 02:31:07 2011 [VMM][E]: Could not create domain from
/var/lib/one//9/images/deployment.0
Wed Nov 16 02:31:07 2011 [VMM][I]: ExitCode: 255
Wed Nov 16 02:31:07 2011 [VMM][E]: Error deploying virtual machine:
Could not create domain from /var/lib/one//9/images/deployment.0
Wed Nov 16 02:31:08 2011 [DiM][I]: New VM state is FAILED


Re: [one-users] Image Manager and Transfer Manager drivers

2011-11-14 Thread Humberto N. Castejon Martinez
Thanks, Ruben, for your answer (you answered while I was typing my last
email, so please forget it).

With the Image Driver everything is clear now, but not so with the Transfer
Driver, so I hope you can help me a little more. Keep in mind that I am
developing drivers for a ZFS-based shared storage solution.
In the documentation I read that the mkimage and mkswap commands create
empty and swap images, respectively, in the destination. I assume the
destination will be a worker, so it will be a path of the form
VM_DIR/VM_ID/images/filename. Is that right? And in which situations will
these commands actually be used? Is tm_clone not used to copy empty images
from the repository to the VM_DIR?
You say that tm_mv is used to move the images from the host to the VM_DIR
at the front-end. What is this needed for (i.e. in which situations is it
executed)? Is this command needed if the VM_DIR is shared between the
front-end and the workers? (I would say no.)
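
For what it is worth, if I read the stock 3.0 drivers right, mkimage
receives the size in MB, the filesystem type and the destination host:path,
and does little more than the following (a rough sketch; the helper-free
form and the ext-only mkfs flag are my own simplifications):

SIZE=$1                      # image size in MB
FSTYPE=$2                    # filesystem requested for the empty datablock
DST=$3                       # e.g. worker:VM_DIR/VM_ID/images/disk.1
DST_PATH=${DST#*:}           # strip the host part
dd if=/dev/zero of=$DST_PATH bs=1M count=$SIZE
mkfs -t $FSTYPE -F $DST_PATH # -F assumes an ext* filesystem
chmod a+rw $DST_PATH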

Thanks again.

Cheers,
Humberto





On Mon, Nov 14, 2011 at 10:23 AM, Ruben S. Montero rsmont...@opennebula.org
 wrote:

 Hi,

 Basically you have two storage areas:

 * The image repository, where your base images are kept. Any operation
 to copy or move an image from there goes through the Image driver. This
 is for: 1) Registering a new image (CP), 2) Creating an empty image
 (MKFS), 3) Copying back a persistent image or a saved_as one (MV), and 4)
 Deleting an image (RM).

 * The storage at the hosts and the VM_DIR (can be local or on a shared
 FS). Transfer operations to/from those directories are handled by the
 TM. TM mv is then used to move the images from the host to the
 VM_DIR at the front-end.

 Cheers

 Ruben

 On Wed, Nov 9, 2011 at 3:08 PM, Humberto N. Castejon Martinez
 humca...@gmail.com wrote:
  Hi,
 
  I have two questions regarding some of the commands in these two sets of
  drivers.
  When is the IM's mkfs command used and when the TM's mkimage command ?
  When (for what) is the   IM's mv command used? (I understand the TM's mv
  command is used to move the image used by a running VM back to the
  repository, right?)
  Thanks.
  Cheers,
  Humberto
 
 

 --
 Ruben S. Montero, PhD
 Project co-Lead and Chief Architect
 OpenNebula - The Open Source Toolkit for Data Center Virtualization
 www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula



[one-users] oZones

2011-11-14 Thread Humberto N. Castejon Martinez
Hi,

Is it possible to migrate VMs between oZones? And to have VMs deployed
among different oZones in a way that is transparent and/or controlled for
the user? What I am thinking of is having a federated cloud consisting of
other individual clouds. A user would use the federated cloud as he would
use a normal cloud (i.e. without knowing that his cloud consists of
different smaller clouds). More advanced users may set constraints on which
oZone(s) the VMs can be deployed in.

Cheers,
Humberto


[one-users] Wrong persistent image state

2011-11-10 Thread Humberto N. Castejon Martinez
Hi,

When ONE receives a request to deploy a VM using a persistent image, the
image is marked as used. If the VM deployment fails, the state of the
image is not set back to ready and remains as used. I guess this is a
bug, or is there a reason for this behavior?

Cheers,
Humberto


[one-users] Image Manager and Transfer Manager drivers

2011-11-09 Thread Humberto N. Castejon Martinez
Hi,

I have two questions regarding some of the commands in these two sets of
drivers.

When is the IM's mkfs command used, and when the TM's mkimage command?
When (and for what) is the IM's mv command used? (I understand the TM's mv
command is used to move the image used by a running VM back to the
repository, right?)

Thanks.

Cheers,
Humberto


Re: [one-users] Sharing workers between two front-ends

2011-11-08 Thread Humberto N. Castejon Martinez
Thanks, I will try that
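
For reference, on a MySQL backend I understand the workaround below amounts
to something like this (database, table and column names as I recall them
from the 3.x schema, so please double-check):

mysql -u oneadmin -p opennebula \
  -e "UPDATE pool_control SET last_oid = 1000 WHERE tablename = 'vm_pool';"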

2011/11/8 Carlos Martín Sánchez cmar...@opennebula.org

 Hi,

 OpenNebula names the VMs, meaning the hypervisor instances, as one-id.
 You will probably have collisions.

 You can use this workaround: modify the pool_control table in the DB and
 set the last_oid for VMs to 1000 in one of the two front-ends.
 That will make one OpenNebula create VMs starting with one-1001,
 one-1002...

 Regards.
 --
 Carlos Martín, MSc
 Project Engineer
 OpenNebula - The Open Source Toolkit for Data Center Virtualization
 www.OpenNebula.org | cmar...@opennebula.org | 
 @OpenNebula http://twitter.com/opennebula


 On Mon, Nov 7, 2011 at 9:59 AM, Humberto N. Castejon Martinez 
 humca...@gmail.com wrote:

 Hi,

 I know that sharing a worker node between two front-ends (i.e. having a
 worker as part of two clouds) sounds strange, and I am not thinking on
 doing it for production. I am just performing some experiments and would
 like to use the same worker from different front-ends. Provided that I use
 different VM_DIR and SCRIPTS_REMOTE_DIR for each front-end, would I
 experiment any problem.

 Cheers,
 Humberto






[one-users] Sharing workers between two front-ends

2011-11-07 Thread Humberto N. Castejon Martinez
Hi,

I know that sharing a worker node between two front-ends (i.e. having a
worker as part of two clouds) sounds strange, and I am not thinking of doing
it for production. I am just performing some experiments and would like to
use the same worker from different front-ends. Provided that I use different
VM_DIR and SCRIPTS_REMOTE_DIR for each front-end, would I experience any
problems?

Cheers,
Humberto


Re: [one-users] Opennebula Watch and MySQL

2011-11-03 Thread Humberto N. Castejon Martinez
Hi again,

I solved the problem. After all, it seems it was a problem of missing
libraries/gems. I had manually installed the ruby gems named in the
documentation for OpenNebula Watch, but it seems something was still
missing. I then decided to run sudo /usr/share/one/install_gems to install
all the gems needed by all (optional) components. I got an error running
that script: it was complaining about the version of rubygems. Since I am
running Ubuntu 10.04.3 LTS, I had to run the following commands to update
rubygems:

gem install rubygems-update
cd /var/lib/gems/1.9.1/bin
sudo ./update_rubygems

I ran sudo /usr/share/one/install_gems again and everything was OK.
Thereafter I could start both the Watch service and Sunstone without
problems.

Cheers,
Humberto


[one-users] Opennebula Watch and MySQL

2011-11-02 Thread Humberto N. Castejon Martinez
Hi,

I am trying to use MySQL with OpenNebula Watch, but I have problems
connecting to the database.
I created a database oneacct in my MySQL server (the documentation says
neither whether the database has to be created nor whether a specific name
has to be used) and have given all privileges on that database to the
oneadmin user. I edited the /etc/one/acctd.conf file and added a line like:

:DB: mysql://oneadmin:password@localhost/oneacct

When I run oneacctd start I get the following error message (note that I
installed the gems sqlite3 and ruby-mysql):

option not implemented: 5
/var/lib/gems/1.8/gems/ruby-mysql-2.9.4/lib/mysql/protocol.rb:212:in
`initialize': Errno::ENOENT: No such file or directory - /tmp/mysql.sock
(Sequel::DatabaseConnectionError)
from /var/lib/gems/1.8/gems/ruby-mysql-2.9.4/lib/mysql/protocol.rb:212:in
`new'
from /var/lib/gems/1.8/gems/ruby-mysql-2.9.4/lib/mysql/protocol.rb:212:in
`initialize'
from /usr/lib/ruby/1.8/timeout.rb:53:in `timeout'
from /var/lib/gems/1.8/gems/ruby-mysql-2.9.4/lib/mysql/protocol.rb:209:in
`initialize'
from /var/lib/gems/1.8/gems/ruby-mysql-2.9.4/lib/mysql.rb:110:in `new'
from /var/lib/gems/1.8/gems/ruby-mysql-2.9.4/lib/mysql.rb:110:in
`real_connect'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/mysql.rb:103:in
`connect'
from /var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/misc.rb:48
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool.rb:92:in
`call'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool.rb:92:in
`make_new'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:127:in
`make_new'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:113:in
`available'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:103:in
`acquire'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:147:in
`sync'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:147:in
`synchronize'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:147:in
`sync'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:102:in
`acquire'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/connection_pool/threaded.rb:74:in
`hold'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/connecting.rb:225:in
`synchronize'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/adapters/shared/mysql_prepared_statements.rb:23:in
`execute'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/query.rb:74:in
`execute_dui'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/query.rb:67:in
`execute_ddl'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/schema_methods.rb:377:in
`create_table_from_generator'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/schema_methods.rb:97:in
`create_table'
from
/var/lib/gems/1.8/gems/sequel-3.29.0/lib/sequel/database/schema_methods.rb:120:in
`create_table?'
from /usr/lib/one/ruby/acct/watch_helper.rb:137:in `bootstrap'
from /usr/lib/one/ruby/acct/watch_helper.rb:221
from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in
`gem_original_require'
from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
from /usr/lib/one/ruby/acct/acctd.rb:35

Running the following command from the shell gives no error:

mysql -h localhost -u oneadmin -p -D oneacct

I would appreciate if someone could tell me what I am doing wrong. Thanks.
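
One detail from the trace above: the driver is looking for the MySQL socket
at /tmp/mysql.sock, which is not where Ubuntu puts it. If that is the actual
cause, forcing a TCP connection in acctd.conf might be worth a try (just a
guess on my side):

:DB: mysql://oneadmin:password@127.0.0.1:3306/oneacct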

Cheers,
Humberto


Re: [one-users] Storage subsystem: which one?

2011-10-18 Thread Humberto N. Castejon Martinez
Hi,

Thank you very much, Fabian and Carlos, for your help. Things are much
clearer now, I think.

*Sharing the image repository.
If I understood right, the aim of sharing the image repository between the
front-end and the workers is to increase performance by reducing (or
eliminating) the time needed to transfer an image from the repository to the
worker that will run an instance of that image. I have, however, a
question/remark here. To really reduce or eliminate the transfer time, the
image should already reside on the worker node or close to it. If the image
resides on a central server (the case with NFS, if I am not wrong) or on an
external shared distributed storage space (the case with MooseFS, GlusterFS,
Lustre, and the like), there is still a need to transfer the image to the
worker, right? In the case of a distributed storage solution like MooseFS,
the worker could itself be part of the distributed storage space. In that
case, the image may already reside on the worker, although not necessarily,
right? But using the worker as both a storage server and a storage client
may actually compromise performance, from what I have read.
Am I totally wrong with my thoughts here? If not, do we really increase
transfer performance by sharing the image repository using, e.g., NFS? Are
there any performance numbers for the different cases that could be shared?

* Sharing the VM_dir.
Sharing the VM_dir between the front-end and the workers is not really
needed; it is more of a convenience, right?
Sharing the VM_dir between the workers themselves might be needed for live
migration. I say might because I have just seen that, for example, with
KVM one can perform live migrations without shared storage [2]. Has anyone
experimented with this?
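
(For reference, the KVM feature discussed in [2] is the one exposed through
virsh roughly as below; the domain and target host names are placeholders:)

virsh migrate --live --copy-storage-all one-9 qemu+ssh://otherworker/system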

Regarding the documentation, Carlos, it looks fine. I would only suggest
also documenting the third case, where the image repository is not shared
but the VM_dir is.

Thanks!

Cheers,
Humberto

[1]
http://opennebula.org/documentation:rel3.0:sfs#considerations_limitations
[2] http://www.mail-archive.com/libvir-list@redhat.com/msg23674.html


Date: Mon, 17 Oct 2011 16:30:13 +0200
From: Fabian Wenk fab...@wenks.ch
To: users@lists.opennebula.org
Subject: Re: [one-users] Storage subsystem: which one?

Hello Carlos

On 17.10.2011 11:34, Carlos Martín Sánchez wrote:
 Thank you for your great contributions to the list!

You're welcome.

 I'd like to add that we tried to summarize the implications of the shared
 [1] and non-shared [2] approaches in the documentation, let us know if
 there are any big gaps we forgot about.

Thank you for documenting it on the website. I think it is
complete and mentions all important facts about the two storage
possibilities.


bye
Fabian


[one-users] Storage subsystem: which one?

2011-10-13 Thread Humberto N. Castejon Martinez
Hi,

I have a setup with a shared-NFS solution for managing VM images. In my
current setup, I share the whole /srv/cloud directory between the front-end
and the worker nodes. One thing I do not like about this solution is that VM
images take space from all workers, even if a worker is not executing a
certain VM.

I wonder if more efficient solutions exist. Some potential alternatives I
have looked at are using ZFS/NexentaStor as shared centralized storage or
MooseFS as shared distributed storage. However, I am quite new to both
OpenNebula and storage solutions, so I would appreciate it if you could help
me understand the benefits and disadvantages of these alternatives (as well
as other potential ones). I am also interested in where the VM images are
stored (i.e. whether they are stored on the workers themselves or not).

Reading the OpenNebula documentation, I believe there are two things I have
to deal with:

1) The image repository, and whether or not it is shared between the
front-end and the workers
2) The VM_DIR, which contains deployment files, etc., for the VMs running on
a worker. This directory may or may not be shared between the front-end and
the workers, but it should always be shared between the workers if we want
live migration, right?

Some of the questions I have are these (sorry if some of them seem stupid
:-)):

- What are the implications of sharing or not sharing the image repository
between the front-end and the workers (apart from the need to transfer
images to the worker nodes in the latter case)?
- What are the implications of sharing or not sharing the VM_DIR between the
front-end and the workers?
- Can I use ZFS or MooseFS and still be able to live-migrate VMs?
- Will the VM_DIR always hold a (copy of a) VM image for every VM running
on a worker? Must the VM_DIR be on the local hard drive of a worker, or may
it reside on external shared storage?

I guess two factors I should also consider when choosing a solution are the
following, right?

- The speed of transferring VM images
- The speed of cloning VM images


Thanks for your help!

Cheers,
Humberto


Re: [one-users] Error monitoring host

2011-09-22 Thread Humberto N. Castejon Martinez
Thanks for your help. As I said in my original email, I thought it was a
funny problem, since everything used to work before and suddenly stopped
working without me doing anything. Well, as usual, funny problems are
normally the result of stupid mistakes. I actually did do something that
prevented the monitoring from working as it should: I restarted OpenNebula,
but I did it as root. Now I have restarted it once more, this time as
oneadmin, and everything works as it used to. Thanks again for the help,
and sorry for bothering you with stupid problems.
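
In other words, the fix was simply along these lines (assuming the one
command is in oneadmin's path):

su - oneadmin
one stop
one start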

Cheers,
Humberto


Date: Wed, 21 Sep 2011 22:19:32 +0400
From: kna...@gmail.com
To: users@lists.opennebula.org
Subject: Re: [one-users] Error monitoring host

Hi Humberto,

Just a guess.

What is the output of the command 'grep nodeinfo_text
$ONE_LOCATION/lib/remotes/im/kvm.d/kvm.rb|grep virsh' on the front-end node?
I have
nodeinfo_text = `virsh -c qemu:///system nodeinfo`

But it looks like you should change it to
virsh -c qemu+ssh:///system

HTH,
Nikolay.

Humberto N. Castejon Martinez wrote on 21/09/11 19:21:
 Hi,

 I know the problem I am going to report has already been discussed, but
 what I have read from previous discussions has not helped.
 My Opennebula front-end has problems to monitor one of my hosts. I get
 the following log messages:

 Wed Sep 21 17:03:44 2011 [InM][I]: Command execution fail: 'if [ -x
 /var/tmp/one/im/run_probes ]; then /var/tmp/one/im/run_probes kvm
 joker; else $
 Wed Sep 21 17:03:44 2011 [InM][I]: STDERR follows.
 Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied, please try again.
 Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied, please try again.
 Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied (publickey,password).
 Wed Sep 21 17:03:44 2011 [InM][I]: ExitCode: 255
 Wed Sep 21 17:03:44 2011 [InM][E]: Error monitoring host 0 : MONITOR
 FAILURE 0 Could not monitor host joker.

 The first thing that I have noticed is that run_probes is not in
 /var/tmp/one/im/, but I have it in /srv/cloud/one/lib/remotes/im/. Can
 that be the source of the problem?
 If I executed from my front end machine the following command as root,
 everything seems to work ok:

   /srv/cloud/one/lib/remotes/im/run_probes kvm joker

 However, if i execute it as oneadmin, i get the following error:

 Connecting to uri: qemu:///system
 error: unable to connect to '/var/run/libvirt/libvirt-sock': Permission
 denied
 error: failed to connect to the hypervisor
 Error executing kvm.rb
 ARCH=x86_64 MODELNAME=Intel(R) Xeon(R) CPU E5540 @ 2.53GHz

 I have checked, and oneadmin is member of the cloud group in joker, I
 have specified unix_sock_group = cloud in /etc/libvirt/libvirtd.conf
 and   libvirt-sock  seems to have the right permissions: srwxrwx--- 1
 root cloud 0 2011-09-21 17:02 /var/run/libvirt/libvirt-sock
 So i do not know why I get the permission denied message.
 However, if I run virsh -c qemu+ssh://oneadmin@joker/system, I manage to
 connect to virsh, w/o entering a password.

 So i do not really know what the problem is, and hope you can help me.
 The funny thing is that it used to work perfectly, but it stopped
 working suddenly, without me changing anything (or at least I think so).

 Cheers,
 Humberto






[one-users] Error monitoring host

2011-09-21 Thread Humberto N. Castejon Martinez
Hi,

I know the problem I am going to report has already been discussed, but what
I have read from previous discussions has not helped.
My OpenNebula front-end has problems monitoring one of my hosts. I get the
following log messages:

Wed Sep 21 17:03:44 2011 [InM][I]: Command execution fail: 'if [ -x
/var/tmp/one/im/run_probes ]; then /var/tmp/one/im/run_probes kvm joker;
else $
Wed Sep 21 17:03:44 2011 [InM][I]: STDERR follows.
Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied, please try again.
Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied, please try again.
Wed Sep 21 17:03:44 2011 [InM][I]: Permission denied (publickey,password).
Wed Sep 21 17:03:44 2011 [InM][I]: ExitCode: 255
Wed Sep 21 17:03:44 2011 [InM][E]: Error monitoring host 0 : MONITOR FAILURE
0 Could not monitor host joker.

The first thing that I have noticed is that run_probes is not in
/var/tmp/one/im/, but in /srv/cloud/one/lib/remotes/im/. Can that be the
source of the problem?
If I execute the following command from my front-end machine as root,
everything seems to work OK:

 /srv/cloud/one/lib/remotes/im/run_probes kvm joker

However, if I execute it as oneadmin, I get the following error:

Connecting to uri: qemu:///system
error: unable to connect to '/var/run/libvirt/libvirt-sock': Permission
denied
error: failed to connect to the hypervisor
Error executing kvm.rb
ARCH=x86_64 MODELNAME=Intel(R) Xeon(R) CPU E5540 @ 2.53GHz

I have checked that oneadmin is a member of the cloud group on joker, I have
specified unix_sock_group = cloud in /etc/libvirt/libvirtd.conf, and
libvirt-sock seems to have the right permissions: srwxrwx--- 1 root cloud 0
2011-09-21 17:02 /var/run/libvirt/libvirt-sock
So I do not know why I get the permission denied message.
However, if I run virsh -c qemu+ssh://oneadmin@joker/system, I manage to
connect to virsh without entering a password.
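
In case it is relevant, checks like these should show whether the oneadmin
session on joker has actually picked up the new group (group changes only
take effect on a fresh login) and whether the local socket is reachable:

ssh oneadmin@joker id                                  # 'cloud' must appear in the groups list
ssh oneadmin@joker virsh -c qemu:///system nodeinfo    # should work without sudo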

So I do not really know what the problem is, and I hope you can help me. The
funny thing is that it used to work perfectly, but it stopped working
suddenly, without me changing anything (or at least I think so).

Cheers,
Humberto


[one-users] (no subject)

2011-09-20 Thread Humberto N. Castejon Martinez
Hi,

I am running OpenNebula 2.1.8. When specifying the context for a VM, if I
specify a URL in the files attribute, the contextualization fails. Consider,
for example, the following context:

CONTEXT=[ files="http://domain.com/test.xml" ]

When I try to create the VM with such a context, I get the following error
message:

tm_context.sh: Executed mkdir -p /srv/cloud/one/var//106/images/isofiles.
tm_context.sh: Executed cp -R /srv/cloud/one/var/106/context.sh
/srv/cloud/one/var//106/images/isofiles.
tm_context.sh: ERROR: Command /usr/bin/wget -O
/srv/cloud/one/var//106/images/isofiles http://domain.com/test.xml; failed.
tm_context.sh: ERROR: /srv/cloud/one/var//106/images/isofiles: Is a
directory

Error excuting image transfer script:
/srv/cloud/one/var//106/images/isofiles: Is a directory

I presume that specifying a URL in the files attribute is allowed, since
OpenNebula uses wget to fetch the file (otherwise, if a local file is
specified, OpenNebula uses cp to copy the file to the isofiles directory),
but the command that OpenNebula runs is not correct. Is this a bug? Or am I
wrong in thinking that a URL can be specified?
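
For what it is worth, I would have expected the script to download into the
isofiles directory rather than onto it, i.e. something along these lines
(paths copied from the log above):

/usr/bin/wget -P /srv/cloud/one/var//106/images/isofiles "http://domain.com/test.xml"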

Cheers,
Humberto