Hi,
I created a new vnet based on the following template:
--- template start ---
NAME = v-test01
TYPE = ranged
BRIDGE = uplink
CLUSTER = kvm_cluster
NETWORK_ADDRESS = 172.17.226.0
NETWORK_MASK = 255.255.255.224
IP_START = 172.17.226.4
IP_END = 172.17.226.30
GATEWAY
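As a quick sanity check of the template above (a sketch using Python's ipaddress module, not part of the original thread): the 255.255.255.224 mask is a /27, so the usable hosts are .1 through .30, and the IP_START/IP_END pair fits inside that range.

```python
import ipaddress

# 255.255.255.224 == /27 -> 30 usable host addresses
net = ipaddress.ip_network("172.17.226.0/27")
hosts = list(net.hosts())  # 172.17.226.1 .. 172.17.226.30

first, last = hosts[0], hosts[-1]
print(first, last)  # 172.17.226.1 172.17.226.30

# IP_START / IP_END from the template must fall inside the usable range
ip_start = ipaddress.ip_address("172.17.226.4")
ip_end = ipaddress.ip_address("172.17.226.30")
assert first <= ip_start <= ip_end <= last
```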
Quoting Jaime Melis (jme...@opennebula.org):
We have a bit more information with regard to this. The hackathon will
start shortly after lunch, at 14:15, and we will take it upon ourselves to
improve the following points in OpenNebula / CentOS:
- CloudInit 0.7.5 support
- Systemd scripts for CentOS
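For the systemd point, a minimal unit sketch for oned might start from something like this (the binary path, user, and service type here are assumptions for illustration, not the packaged unit):

```ini
# opennebula.service -- rough sketch, not the shipped unit
[Unit]
Description=OpenNebula Cloud Controller Daemon (oned)
After=network.target

[Service]
# User/group and ExecStart are assumptions; adjust to the package layout
Type=simple
User=oneadmin
Group=oneadmin
ExecStart=/usr/bin/oned

[Install]
WantedBy=multi-user.target
```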
Hi Stefan,
I agree - I'm actually writing this email from Arch Linux :-).
I don't have much systemd expertise, but from looking at some examples it
looks like something we could easily do. Let's see how systemd's
documentation is; as long as it's clear enough...
Robert Schweikert made
Hi Steven,
I don't think there are any news with regard to that as of yet, but as soon
as there is, we'll definitely share it with the list.
Regards,
Jaime
On Wed, Jan 29, 2014 at 3:24 PM, Steven Timm t...@fnal.gov wrote:
For those of us who can't make it to Brussels, it would be
Hi Kiran,
That sounds very interesting, at some point it would be great if you could
elaborate a bit more on exactly how you have done it, but it looks really
nice.
The thing about self-contained mode is that it's designed for development
or for custom solutions, such as yours. It's not intended
Hi Daniel,
thanks for the update. We've seen the *bump* to the associated issue, and
we agree :)
http://dev.opennebula.org/issues/2381#change-7555
cheers,
Jaime
On Wed, Jan 29, 2014 at 4:36 PM, Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
Daniel Dehennin daniel.dehen...@baby-gnu.org
Unfortunately the current version of cloud-init does not load new
network parameters after they are configured in some distributions.
There is a ticket to track that problem [1]
The documentation gives some ideas on how to overcome this [2]:
--8<--
The current version of cloud-init configures
Hi all,
Stefan Kooman recently requested this functionality and it's included in
the code. Apologies for not understanding the issue better before:
http://dev.opennebula.org/issues/2345
cheers,
Jaime
On Tue, Jan 28, 2014 at 9:42 PM, Dmitri Chebotarov dcheb...@gmu.edu wrote:
It seems like
Hi Igor,
Just to let you know, this has been fixed in the main repo, and will
be available for upcoming releases (4.4.1 and 4.6).
Regards,
-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple
--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
Hi,
OpenNebula does not manage kvm external snapshots. You could modify the
snapshot scripts [1] and then schedule onevm snapshot-create actions. But
this may have side effects, since OpenNebula won't be aware of any other
new VM files.
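To make the scheduling part concrete, one option (hedged - check the SCHED_ACTION documentation for your OpenNebula version) is a VM-template fragment along these lines; the TIME value is just a placeholder epoch:

```
SCHED_ACTION = [
  ACTION = "snapshot-create",
  TIME   = 1391126400   # placeholder epoch time
]
```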
If you modify the drivers in the front-end's
I've been digging a bit more in the shareable issue and it seems that
setting shareable to a disk just disables cache in qemu/kvm[1]:
--8<--
4158         virBufferAsprintf(&opt, ",cache=%s", mode);
4159     } else if (disk->shared && !disk->readonly) {
4160         virBufferAddLit(&opt, ",cache=off");
Mmm, I'm not entirely sure which cache is turned off here, but the version of
libvirt I seem to be running refuses to migrate an attached block device unless
it's
marked as sharable. It's not that it fails physically, libvirt just refuses to
do it ...
(I struggled for quite some time with
One step ahead of you, request already submitted. ;-)
http://dev.opennebula.org/issues/2687
Thanks
-Original Message-
From: Tino Vazquez [mailto:cvazq...@c12g.com]
Sent: Thursday, January 30, 2014 6:03 AM
To: Campbell, Bill
Cc: users
Subject: Re: [one-users] Sunstone Cloud View --
Quoting Javier Fontan (jfon...@opennebula.org):
I've been digging a bit more in the shareable issue and it seems that
setting shareable to a disk just disables cache in qemu/kvm[1]:
--8<--
4158         virBufferAsprintf(&opt, ",cache=%s", mode);
4159     } else if (disk->shared
Mmm, there has been some discussion around this ..
Technically using cache=writeback should be safe for migration.
(which is the method I use)
So .. I need cache=writeback and sharable for migration to happen.
(the performance hit with cache=off is unacceptable, at least for
I'll investigate a bit more on this issue but from the qemu driver
code it seems that cache is disabled with shareable flag.
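For reference, the kind of disk definition being discussed looks roughly like this on the libvirt side (a hand-written sketch; the device path and target are placeholders):

```xml
<disk type='block' device='disk'>
  <!-- per the qemu driver code quoted above, <shareable/> forces cache=off -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/vg0/vm-disk'/>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>
```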
On Thu, Jan 30, 2014 at 4:07 PM, Gareth Bult gar...@linux.co.uk wrote:
Mmm, there has been some discussion around this ..
Technically using cache=writeback should be
Mmm, I can't disagree if that's what the code says .. however turning
off the cache completely in the GUI does seem to make quite a *dramatic*
difference to the VM performance, whereas setting the sharable flag seems
to have no performance impact at all .. (when using writethrough ...) ???
--
Quoting Gareth Bult (gar...@linux.co.uk):
Mmm, there has been some discussion around this ..
Technically using cache=writeback should be safe for migration.
(which is the method I use)
So .. I need cache=writeback and sharable for migration to happen.
(the performance hit with cache=off
Sorry, my fault, attention is elsewhere ..
I actually use writethrough rather than writeback .. so with the exception
of the quote from the mailing list, where I said writeback, please read
writethrough ... apologies.
I used to use writeback, but then read a few horror stories and tried
Hi Ondrej,
Right after clicking on the VNC link, is anything showing in the
Firefox dev tools console [1]?
Best,
-Tino
[1] https://developer.mozilla.org/en/docs/Tools
--
OpenNebula - Flexible Enterprise Cloud Made Simple
--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect
Hi Stefan,
On Thu, Jan 30, 2014 at 7:52 AM, Stefan Kooman ste...@bit.nl wrote:
Hi,
I was reading through the Amazon EC2 prerequisites [1], which imply that
there can be only one set of AWS credentials per OpenNebula cloud. Is
that correct? This might not be a problem for a private cloud
Hi Stefan,
On Thu, Jan 30, 2014 at 9:18 AM, Stefan Kooman ste...@bit.nl wrote:
Hi,
I created a new vnet based on the following template:
--- template start ---
NAME = v-test01
TYPE = ranged
BRIDGE = uplink
CLUSTER = kvm_cluster
NETWORK_ADDRESS = 172.17.226.0
NETWORK_MASK =
Hi Tino,
Thank you for reply. Here's the output:
17:45:06.596 POST https://opennebulaaddr/vm/272/startvnc [HTTP/1.1 200 OK 59ms]
17:45:06.621 SecurityError: The operation is insecure. websock.js:333
17:45:06.619 New state 'loaded', was 'disconnected'. Msg: noVNC ready: native WebSockets, canvas
Hi Stefan,
On Tue, Jan 28, 2014 at 5:57 PM, Stefan Kooman ste...@bit.nl wrote:
Hi list,
Running oneacct as the oneadmin user I get the following error:
oneacct
[VirtualMachinePoolAccounting] Internal Error.
oned.log shows:
Tue Jan 28 17:21:19 2014 [ReM][D]: Req:6624 UID:0
Hello,
I would like to contextualize a Fedora 20 desktop VM. For that purpose I used
the official one-context package from OpenNebula
(http://dev.opennebula.org/attachments/download/747/one-context_4.4.0.rpm) but
it simply does not work: my eth0 interface does not get any IP address
Quoting ML mail (mlnos...@yahoo.com):
Hello,
I would like to contextualize a Fedora 20 desktop VM. For that purpose
I used the official one-context package from OpenNebula
(http://dev.opennebula.org/attachments/download/747/one-context_4.4.0.rpm)
but it simply does not work: my eth0
Hi Jaime,
The setup is quite simple. The tools needed are ucarp, rsync, and MySQL
master-master replication.
We need to use a virtual IP, which is configured using ucarp, and that
virtual IP can be used to access Sunstone. I have set this up with 2 servers
but can configure ucarp with 3 or 4 servers. We
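To make the VIP part concrete, the ucarp invocation on each Sunstone host would look roughly like this (a sketch, not taken from the thread; all addresses, the vhid, the password, and the script paths are placeholders - the up/down scripts would add or remove the virtual address):

```shell
# Run on each Sunstone front-end; the host holding the VIP serves Sunstone.
ucarp --interface=eth0 --srcip=192.168.0.11 --vhid=1 \
      --pass=secret --addr=192.168.0.10 \
      --upscript=/usr/local/sbin/vip-up.sh \
      --downscript=/usr/local/sbin/vip-down.sh
```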
Is it possible to get the guest memory usage via the ONE command line or via Xen?
Thanks.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
Hi Stefan,
Ah that makes sense, thanks for clarifying! Have a productive and fun day today
at the dojo ;)
Cheers
M.L.
On Thursday, January 30, 2014 9:43 PM, Stefan Kooman ste...@bit.nl wrote:
Quoting ML mail (mlnos...@yahoo.com):
Hello,
I would like to contextualize a Fedora 20
Quoting kiran ranjane (kiran.ranj...@gmail.com):
I have tested this and it works well. I get only 3 to 4 timed-out requests
when fail-over is triggered, so it is quite instant and simple to
troubleshoot in case of issues. There is no split-brain, as M/M replication
is used and both the database and