Hi,
On Thu, Dec 20, 2012 at 10:35 AM, Hyun Woo Kim wrote:
>
>
> 1. apply the patch in the first place
>
The patch applies cleanly to the latest 3.8.1.
Also don't forget to set VM_SUBMIT_ON_HOLD = "YES" in your oned.conf.
> 2. define a new VM_HOOK in oned.conf that looks like
>VM_HOOK= [
Thanks Jaime,
That worked for me; I figured it out last week by running an scp command
directly from the terminal. I didn't understand why I needed to add the
local key to authorized_keys. I opted for shared storage in the end; it was too
much of a wait for ssh to transfer the images (probably because
I wrote these two very simple scripts to start and shut down a few VMs that need
to be up and/or shut down in our cluster.
They work for us. Please forgive me if they don't work off the bat (my shop is
Italian-based, so I had to translate them).
Here we go.
To start the VMs:
# start-core-vm
onet
Hi Oriol
Not really. If your image datastore can actually be accessed by the two
clusters, just leave the image datastore associated to none. This means
it can be used in any cluster. The same applies to networks.
Just create the clusters with the hosts and system DS for each set, and
that's it.
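For what it's worth, a sketch of that setup with the 3.4+ onecluster CLI. The cluster, host, and datastore names below are made-up examples:

```
# one cluster per hypervisor set
$ onecluster create kvm-cluster
$ onecluster create xen-cluster

# add the hosts and the per-cluster system datastore
$ onecluster addhost kvm-cluster kvm-host1
$ onecluster adddatastore kvm-cluster kvm-system-ds

# leave the shared image datastore in cluster "none"
# so that both clusters can use it
```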
Hi Simon,
You are right, probably the best way to handle this is to go for the
synchronous delete operation.
THANKS for your feedback :)
Ruben
On Mon, Dec 17, 2012 at 5:50 AM, Simon Boulet wrote:
> One note, however, it seems the only way for a user to cancel /delete a
> Powered off VM is to
Hi,
I don't remember the reason why we took out the clusters, but they were
included again in 3.4 [1]
Regards
[1] http://opennebula.org/documentation:rel3.8:hostsubsystem
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org
Hi Simon,
Thanks for the patch.
I think I understand how to use this new feature from reading your
instructions. Let me confirm that my understanding is right.
1. apply the patch in the first place
2. define a new VM_HOOK in oned.conf that looks like
VM_HOOK= [
name = "special_hold"
on
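For reference, a complete hook definition of that shape might look like the following in oned.conf. The script name and argument list here are assumptions for illustration; the patch from issue 1103 and its instructions are authoritative:

```
# submit VMs on hold when the patched option is enabled
VM_SUBMIT_ON_HOLD = "YES"

# run a script when a VM is created
VM_HOOK = [
    name      = "special_hold",
    on        = "CREATE",
    command   = "special_hold.sh",        # hypothetical script name
    arguments = "$ID $TEMPLATE" ]
```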
In OpenNebula 2.0 there was a "onecluster" command which
allowed me to divide my cloud up into two logical clusters,
one with KVM hypervisors and one with Xen.
In the OpenNebula 3.x series this appears to have gone away.
What is the replacement for this functionality? Should we
be using the Zone
Hi,
I have just submitted a patch that solves this.
http://dev.opennebula.org/issues/1103
You could use that patch along with a custom CREATE VM HOOK that checks for
a custom VM attribute you specify in your templates.
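A minimal sketch of such a hook script, assuming the hook passes it the VM ID and the base64-encoded template as arguments. The attribute name SPECIAL_HOLD, the script name, and the release logic are made up for illustration:

```
#!/bin/bash
# special_hold.sh (hypothetical): release held VMs unless the
# template asks for them to stay on hold
VMID="$1"
TEMPLATE_B64="$2"

# look for the custom attribute the user set in the VM template
if echo "$TEMPLATE_B64" | base64 -d | grep -q "SPECIAL_HOLD"; then
    exit 0              # keep the VM on hold
fi
onevm release "$VMID"   # otherwise let the scheduler deploy it
```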
Feedback more than welcome!
Thanks
Simon
On Thu, Dec 20, 2012 at 9:14 AM
Hi, Carlos,
Thanks for the quick and precise information.
- onevm hold will not be helpful to us because it also requires users to be
"quick" just like deploy.
- For the time being, using some impossible requirement will be our solution
- We will look forward to the new option that you plan to add
On Tue, Dec 18, 2012 at 4:47 PM, Gary S. Cuozzo wrote:
> [...]
> I'm happy to share them if you let me know how best to do so. They are quick
> & dirty but you can easily clean them up to make them acceptable to include
> with ONE.
There are several methods by which you can contribute patches. One is f
Hello Marc,
You should be able to ssh passwordlessly from the frontend to the frontend
(I mean that, no typo here!).
To check that, can you run this from the frontend (opennebula-host):
$ ssh opennebula-host
After getting that to work, try again.
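If that first ssh fails, the usual fix is to add oneadmin's own public key to its authorized_keys. The paths below assume a default oneadmin home; adjust as needed:

```
# as oneadmin on the frontend
$ ssh-keygen -t rsa            # accept defaults, empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh opennebula-host true     # should now succeed without a password
```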
cheers,
Jaime
On Sun, Dec 9, 2012 at 2:00 AM, M
Hi Alberto,
This may come from "status" not being implemented for the OpenNebula
init script in the particular Linux flavour you are using. What
distribution and version are you on?
Also, did you install ONE from the distro packages or did you download
them from opennebula.org?
Best,
-Tino
--
Hello Michael,
there seems to be a problem with the ssh passwordless authentication. Can
you do this from the frontend, as oneadmin?
$ ssh opennebula-host
Does that work?
cheers,
Jaime
On Mon, Dec 10, 2012 at 10:28 PM, michael o cearra <
michael290...@hotmail.com> wrote:
> How's it going,
>
>
Hi,
1: Yes, onehost flush [1]
2: If the images are persistent, the VM will be using a link to the source
image (unless you are using the ssh transfer manager driver), and the
redeployed VM will be up to date. Volatile and non-persistent disks are
lost on resubmit, but this can be changed following
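As a sketch of that workflow (the host name and VM ID below are examples):

```
# disable the host and reschedule its VMs elsewhere
$ onehost flush host01

# for a VM whose disks allow it, redeploy by hand
$ onevm resubmit 42
```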
Hi,
That is what the command 'onevm hold' [1] does. But that command requires
you to instantiate the template, and then quickly hold the new VM.
For the next version we will add the option to create the new VMs directly
on hold instead of pending [2].
Meanwhile, if you don't want to instantiate &
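The current workaround described above would look roughly like this (the template name and VM ID are examples):

```
# instantiate, then hold the new VM before the scheduler deploys it
$ onetemplate instantiate my-template   # prints the new VM ID, e.g. 42
$ onevm hold 42

# later, when you are ready to let it deploy
$ onevm release 42
```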
Hi *;
Every time I ask OpenNebula for its status, I receive "dead but subsys
locked". However, it seems to be working fine.
[root@opennebula one]# /etc/init.d/opennebula status
opennebula dead but subsys locked
[root@opennebula one]# ll /var/run/open*.pid
-rw-r--r-- 1 root root 9 Dec 20 09:44 /