[one-users] blktap xen 4.2 and opennebula 4.2

2014-03-26 Thread kenny . kenny
Hello, I need to use blktap instead of the default disk driver.
 
I changed /var/lib/one/remotes/vmm/xen4/attach_disk and /etc/one/vmm_exec/vmm_exec_xen4.conf, but when I take a look at deployment.0 it always shows "file:".
What do I need to do to change that?
 
I want to change this because with "file:" I can only run 8 VMs per node.
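For reference, this is the kind of DISK section I have been trying in the VM template (a sketch: the image ID and target are just placeholders, and I am assuming the DRIVER attribute is what ends up as the prefix in the deployment file):

```
DISK = [
  DRIVER   = "tap:aio:",  # hoping this replaces the default "file:" prefix
  IMAGE_ID = "27",        # placeholder image ID
  TARGET   = "xvda1" ]
```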
 
 
Thanks
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Architecture advice

2014-03-26 Thread Stuart Longland
On 18/03/14 02:35, Gandalf Corvotempesta wrote:
> Hi to all
> i'm planning a brand new cloud infrastructure with opennebula.
> I'll have many KVM nodes and 3 "management nodes" where I would like
> to place OpenNebula, Sunstone and something else used to orchestrate
> the whole infrastructure
> 
> Simple question: can I use these 3 nodes to power on OpenNebula (in
> HA-Configuration) and also host some virtual machines managed by
> OpenNebula ?

I could be wrong (I'm new to OpenNebula myself), but from what I've seen
the management node isn't a particularly heavyweight process.

I had it running semi-successfully on an old server here (pre
virtualisation-technology).  I say semi-successfully; the machine had a
SCSI RAID card that took a dislike to Ubuntu 12.04, so the machine I had
as the master would die after 8 hours.

I was using SSH based transfers (so no shared storage) at the time.

Despite this, the VMs held up; they just couldn't be managed.  This
won't be the case if your VM hosts mount any space off the frontend
node, in which case a true HA set-up is needed.  (And let's face it, I
wouldn't recommend running the master node as a VM if you're going to be
mounting storage directly off it for the hosts.)

Based on this it would seem you could do an HA setup with some shared
storage between the nodes, either a common SAN or DRBD, to handle the
OpenNebula frontend.

Seeing as OpenNebula will want to control libvirt on the hosts (given
that you're also suggesting these nodes run the OpenNebula-managed VMs
too), the OpenNebula master might have to be a KVM process managed
outside of libvirt.  Not overly difficult, just fiddly.

But, as I say, I could be wrong, so take the above advice with a grain
of salt.

Regards,
-- 
Stuart Longland
Systems Engineer
 _ ___
\  /|_) |   T: +61 7 3535 9619
 \/ | \ | 38b Douglas StreetF: +61 7 3535 9699
   SYSTEMSMilton QLD 4064   http://www.vrt.com.au




Re: [one-users] More ipv6 network questions

2014-03-26 Thread Steven Timm



A follow-up: I did find an example in the documentation, but it is only for a 
"RANGED" IPv6 network.  I need a FIXED IPv6 network.


I saw that when I set an IP6_GLOBAL PREFIX in the network
file it would then append the ipv6-ified mac address of the
machine and construct an IP6_GLOBAL for me.  But that's not what I want.
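For what it's worth, the address it appends is the standard EUI-64 interface identifier: split the MAC, flip the universal/local bit of the first octet, and insert ff:fe in the middle. A rough sketch of that derivation (the helper name is my own, not anything OpenNebula ships):

```shell
#!/bin/sh
# Sketch: derive the EUI-64 suffix that gets appended to an IPv6 prefix.
mac_to_eui64() {
    # split the colon-separated MAC into six octets
    oldIFS=$IFS; IFS=:
    set -- $1
    IFS=$oldIFS
    # flip the universal/local bit of the first octet
    first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))
    # insert ff:fe between the third and fourth octets, print as 4 groups
    printf '%x:%x:%x:%x\n' "0x$first$2" "0x${3}ff" "0xfe$4" "0x$5$6"
}
```

With MAC 00:16:3e:06:01:01 this prints 216:3eff:fe06:101, which prepended with fe80:: gives exactly the IP6_LINK fe80::216:3eff:fe06:101 that shows up in the lease output.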

Would like to do something like this:
LEASES = [ IP="131.225.41.182", MAC="54:52:00:02:0B:01", 
IP6_GLOBAL="2001:400:2410:29::182" ]
LEASES = [ IP="131.225.41.183", MAC="54:52:00:02:0B:02", 
IP6_GLOBAL="2001:400:2410:29::183"]
LEASES = [ IP="131.225.41.184", MAC="54:52:00:02:0B:03", 
IP6_GLOBAL="2001:400:2410:29::184" ]
LEASES = [ IP="131.225.41.185", MAC="54:52:00:02:0B:04", 
IP6_GLOBAL="2001:400:2410:29::185" ]
LEASES = [ IP="131.225.41.186", MAC="54:52:00:02:0B:05", 
IP6_GLOBAL="2001:400:2410:29::186" ]


But this doesn't work: the IP6_GLOBAL in the LEASES field is ignored.

Is there any IPv6-related field that is accepted in the LEASES
field of a fixed-network network template?  This is of some urgency:
I promised my users who depend on IPv6 cloud VMs that I would have them
up this morning local time, and it is now quitting time today.

Steve


On Wed, 26 Mar 2014, Steven Timm wrote:



Below is the network template that I used to
successfully create the IPv4 side of a dual-stack IPv4/IPv6 network
in ONE 4.4.


-bash-4.1$ cat static-ipv6-net
NAME = "Static_IPV6_Public"
TYPE = FIXED

#Now we'll use the cluster private network (physical)
BRIDGE = br0
DNS = 131.225.0.254
GATEWAY = 131.225.41.200
NETWORK_MASK = 255.255.255.128
LEASES = [ IP="131.225.41.132", MAC="00:16:3E:06:01:01" ]

--

and here's what I get back:

-bash-4.1$ onevnet show 1
VIRTUAL NETWORK 1 INFORMATION
ID : 1
NAME   : Static_IPV6_Public
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: -
TYPE   : FIXED
BRIDGE : br0
VLAN   : No
USED LEASES: 0

PERMISSIONS
OWNER  : um-
GROUP  : ---
OTHER  : ---

VIRTUAL NETWORK TEMPLATE
DNS="131.225.0.254"
GATEWAY="131.225.41.200"
NETWORK_MASK="255.255.255.128"

FREE LEASES
LEASE=[ MAC="00:16:3e:06:01:01", IP="131.225.41.132", 
IP6_LINK="fe80::216:3eff:fe06:101", USED="0", VID="-1" ]


VIRTUAL MACHINES

   ID USER GROUPNAMESTAT UCPUUMEM HOST TIME


I have several questions:

1) Does the OpenNebula head node also have to have access to the
IPv6 network, or just the VM hosts?

2) Is there any way to specify, on a host-by-host basis,
the IPv6 address as well as the IPv4 address?  I.e. for
this host fgt1x1.fnal.gov the IPv4 address is 131.225.41.132 and the
IPv6 global address is 2001:400:2410:29::132.

It's puzzling because in the "leases" database table there
is only room for id, IPv4 address, and body;
the "IP6_LINK" field above is not stored anywhere in the database,
and yet it shows.

3) Do the contextualization RPMs as distributed with ONE 4.0 and greater
handle the process of setting up IPv6 addresses as part of contextualization?
If so, what fields should we put in the CONTEXT section
to set these values?

Any help is appreciated, as getting the IPv6 VMs up and running
is of some urgency.  Right now I am forced to configure
the IPv6 addresses by hand once the VM is up.

Steve Timm

--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Scientific Computing Division, Scientific Computing Services Quad.
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing





Re: [one-users] Architecture advice

2014-03-26 Thread Gandalf Corvotempesta
No advice?

2014-03-17 17:35 GMT+01:00 Gandalf Corvotempesta
:
> Hi to all
> i'm planning a brand new cloud infrastructure with opennebula.
> I'll have many KVM nodes and 3 "management nodes" where I would like
> to place OpenNebula, Sunstone and something else used to orchestrate
> the whole infrastructure
>
> Simple question: can I use these 3 nodes to power on OpenNebula (in
> HA-Configuration) and also host some virtual machines managed by
> OpenNebula ?
>
> For example:
>
> node1 (the same for all 3):
>  - Debian Wheezy with KVM
>  - One manually-created KVM virtual machine with OpenNebula on it (and
> keepalived/ucarp for redundancy)
>  - Many KVM virtual machines managed by OpenNebula
>
> The first KVM virtual machine will obviously be manually managed, because
> OpenNebula can't manage itself.
>
> In this case I'll have a fully virtualized infrastructure, with just 3
> virtual machines outside of OpenNebula (the OpenNebula machines themselves).
> Everything else will be managed by OpenNebula.
>
> Any suggestions? Probably I'll use 2 distinct LVM volume groups to
> separate the OpenNebula VMs from the storage used by VMs managed by OpenNebula.


Re: [one-users] Fault Tolerance and shared storage

2014-03-26 Thread Tino Vazquez
Hi,

Thanks for the info.

The hook for host error in OpenNebula 4.4 allows you to define one, and only
one, of the "-r" and "-d" flags:

   * -r will "delete --recreate" the VM on the failed host. This goes
through the epilog_delete phase, which should erase the symlinks and
launch the VM again. This is probably what you want; please come back if
the problem does not go away.

   * -d will "delete" the VM on the failed host, but won't launch the VM
again.

These two are mutually exclusive.
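If the stale symlink still gets in the way after switching to -r, one local workaround (my own sketch, not an official driver change) is to make the TM link step idempotent, replacing an existing link instead of failing:

```shell
#!/bin/sh
# Sketch: force-replace a possibly stale symlink left behind on shared
# storage, so recreation does not abort with "File exists".
relink() {
    src="$1"
    dst="$2"
    # -f removes an existing destination first; -n avoids descending into
    # an existing symlink that points at a directory
    ln -sfn "$src" "$dst"
}
```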

Regards,

-Tino


--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this e-mail and any
accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person and/or
entity to whom it is addressed (i.e. those identified in the "To" and "cc"
box). They are the property of C12G Labs S.L.. Unauthorized distribution,
review, use, disclosure, or copying of this communication, or any part
thereof, is strictly prohibited and may be unlawful. If you have received
this e-mail in error, please notify us immediately by e-mail at
ab...@c12g.com and delete the e-mail and attachments and any copy from your
system. C12G thanks you for your cooperation.


On 26 March 2014 17:48, Nuno Serro  wrote:

>  Hello Tino,
>
> We are using version 4.4.1. If you need any details on the configuration I
> can provide them.
>
>
>
>   Nuno Serro
> Coordenador
> Núcleo de Infraestruturas e Telecomunicações
> Departamento de Informática
>
> Alameda da Universidade  -  Cidade Universitária
> 1649-004 LisboaPORTUGAL
> T. +351 210 443 566 - Ext. 19816
> E. nse...@reitoria.ulisboa.pt
> www.ulisboa.pt
>
>
>
>
>  On 26-03-2014 16:44, Tino Vazquez wrote:
>
> Hi Nuno,
>
>  What version of OpenNebula are you using?
>
>  Best,
>
>  -Tino
>
>
> --
> OpenNebula - Flexible Enterprise Cloud Made Simple
>
> --
> Constantino Vázquez Blanco, PhD, MSc
> Senior Infrastructure Architect at C12G Labs
> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>
>
>
> On 26 March 2014 17:29, Nuno Serro  wrote:
>
>>  Hello,
>>
>> We've started using a system datastore with shared storage on a
>> clustered fs, so we could start testing the live migrate functionality. The
>> live migrate is working as expected, but when testing the fault tolerance
>> using the host_hook, we noticed the following error:
>>
>> [TM][I]:  ln -s "/dev/vg-nebula/lv-one-144"
>> "/var/lib/one/datastores/109/393/disk.0"" failed: ln: failed to create
>> symbolic link `/var/lib/one/datastores/109/393/disk.0': File exists
>> [TM][E]: Error linking /dev/vg-nebula/lv-one-144
>>
>> I understand the error. When I kill one node and host_hook kicks in,
>> the storage being shared between nodes, the symlinks to the images are
>> already there.
>>
>> My question is regarding the hook definition:
>>
>> HOST_HOOK = [
>> name  = "error",
>> on= "ERROR",
>> command   = "ft/host_error.rb",
>> arguments = "$ID -r",
>> remote= "no" ]
>>
>> Is it possible to configure the hook to delete the VM and afterwards
>> recreate it? We tried combining the "-d" and "-r" flags, but only one
>> seems to be used.
>>
>> Thanks in advance.
>>
>> Regards,
>>
>>  --
>>
>>   Nuno Serro
>> Coordenador
>> Núcleo de Infraestruturas e Telecomunicações
>> Departamento de Informática
>>
>> Alameda da Universidade  -  Cidade Universitária
>> 1649-004 LisboaPORTUGAL
>> T. +351 210 443 566 <%2B351%20210%20443%20566> - Ext. 19816
>> E. nse...@reitoria.ulisboa.pt
>> www.ulisboa.pt
>>
>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


Re: [one-users] Fault Tolerance and shared storage

2014-03-26 Thread Nuno Serro

  
  
Hello Tino,

We are using version 4.4.1. If you need any details on the
configuration I can provide them.

Nuno Serro
Coordenador
Núcleo de Infraestruturas e Telecomunicações
Departamento de Informática

Alameda da Universidade - Cidade Universitária
1649-004 Lisboa, PORTUGAL
T. +351 210 443 566 - Ext. 19816
E. nse...@reitoria.ulisboa.pt
www.ulisboa.pt


On 26-03-2014 16:44, Tino Vazquez wrote:

> Hi Nuno,
>
> What version of OpenNebula are you using?
>
> Best,
>
> -Tino
>
> On 26 March 2014 17:29, Nuno Serro wrote:
>
>> Hello,
>>
>> We've started using a system datastore with shared storage on a
>> clustered fs, so we could start testing the live migrate functionality.
>> Live migrate is working as expected, but when testing the fault
>> tolerance using the host_hook, we noticed the following error:
>>
>> [TM][I]:  ln -s "/dev/vg-nebula/lv-one-144"
>> "/var/lib/one/datastores/109/393/disk.0"" failed: ln: failed to create
>> symbolic link `/var/lib/one/datastores/109/393/disk.0': File exists
>> [TM][E]: Error linking /dev/vg-nebula/lv-one-144
>>
>> I understand the error. When I kill one node and host_hook kicks in,
>> the storage being shared between nodes, the symlinks to the images are
>> already there.
>>
>> My question is regarding the hook definition:
>>
>> HOST_HOOK = [
>>     name      = "error",
>>     on        = "ERROR",
>>     command   = "ft/host_error.rb",
>>     arguments = "$ID -r",
>>     remote    = "no" ]
>>
>> Is it possible to configure the hook to delete the VM and afterwards
>> recreate it? We tried combining the "-d" and "-r" flags, but only one
>> seems to be used.
>>
>> Thanks in advance.
>>
>> Regards,
>>
>> Nuno Serro

Re: [one-users] Fault Tolerance and shared storage

2014-03-26 Thread Tino Vazquez
Hi Nuno,

What version of OpenNebula are you using?

Best,

-Tino


--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova



On 26 March 2014 17:29, Nuno Serro  wrote:

>  Hello,
>
> We've started using a system datastore with shared storage on a
> clustered fs, so we could start testing the live migrate functionality. The
> live migrate is working as expected, but when testing the fault tolerance
> using the host_hook, we noticed the following error:
>
> [TM][I]:  ln -s "/dev/vg-nebula/lv-one-144"
> "/var/lib/one/datastores/109/393/disk.0"" failed: ln: failed to create
> symbolic link `/var/lib/one/datastores/109/393/disk.0': File exists
> [TM][E]: Error linking /dev/vg-nebula/lv-one-144
>
> I understand the error. When I kill one node and host_hook kicks in, the
> storage being shared between nodes, the symlinks to the images are already
> there.
>
> My question is regarding the hook definition:
>
> HOST_HOOK = [
> name  = "error",
> on= "ERROR",
> command   = "ft/host_error.rb",
> arguments = "$ID -r",
> remote= "no" ]
>
> Is it possible to configure the hook to delete the VM and afterwards
> recreate it? We tried combining the "-d" and "-r" flags, but only one
> seems to be used.
>
> Thanks in advance.
>
> Regards,
>
>  --
>
>   Nuno Serro
> Coordenador
> Núcleo de Infraestruturas e Telecomunicações
> Departamento de Informática
>
> Alameda da Universidade  -  Cidade Universitária
> 1649-004 LisboaPORTUGAL
> T. +351 210 443 566 - Ext. 19816
> E. nse...@reitoria.ulisboa.pt
> www.ulisboa.pt
>
>
>
>
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


[one-users] Fault Tolerance and shared storage

2014-03-26 Thread Nuno Serro

  
  
Hello,

We've started using a system datastore with shared storage on a
clustered fs, so we could start testing the live migrate functionality.
Live migrate is working as expected, but when testing the fault
tolerance using the host_hook, we noticed the following error:

[TM][I]:  ln -s "/dev/vg-nebula/lv-one-144"
"/var/lib/one/datastores/109/393/disk.0"" failed: ln: failed to create
symbolic link `/var/lib/one/datastores/109/393/disk.0': File exists
[TM][E]: Error linking /dev/vg-nebula/lv-one-144

I understand the error. When I kill one node and host_hook kicks in,
the storage being shared between nodes, the symlinks to the images are
already there.

My question is regarding the hook definition:

HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    arguments = "$ID -r",
    remote    = "no" ]

Is it possible to configure the hook to delete the VM and afterwards
recreate it? We tried combining the "-d" and "-r" flags, but only one
seems to be used.

Thanks in advance.

Regards,

-- 
Nuno Serro
Coordenador
Núcleo de Infraestruturas e Telecomunicações
Departamento de Informática

Alameda da Universidade - Cidade Universitária
1649-004 Lisboa, PORTUGAL
T. +351 210 443 566 - Ext. 19816
E. nse...@reitoria.ulisboa.pt
www.ulisboa.pt




[one-users] More ipv6 network questions

2014-03-26 Thread Steven Timm


Below is the network template that I used to
successfully create the IPv4 side of a dual-stack IPv4/IPv6 network
in ONE 4.4.


-bash-4.1$ cat static-ipv6-net
NAME = "Static_IPV6_Public"
TYPE = FIXED

#Now we'll use the cluster private network (physical)
BRIDGE = br0
DNS = 131.225.0.254
GATEWAY = 131.225.41.200
NETWORK_MASK = 255.255.255.128
LEASES = [ IP="131.225.41.132", MAC="00:16:3E:06:01:01" ]

--

and here's what I get back:

-bash-4.1$ onevnet show 1
VIRTUAL NETWORK 1 INFORMATION
ID : 1
NAME   : Static_IPV6_Public
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: -
TYPE   : FIXED
BRIDGE : br0
VLAN   : No
USED LEASES: 0

PERMISSIONS
OWNER  : um-
GROUP  : ---
OTHER  : ---

VIRTUAL NETWORK TEMPLATE
DNS="131.225.0.254"
GATEWAY="131.225.41.200"
NETWORK_MASK="255.255.255.128"

FREE LEASES
LEASE=[ MAC="00:16:3e:06:01:01", IP="131.225.41.132", 
IP6_LINK="fe80::216:3eff:fe06:101", USED="0", VID="-1" ]


VIRTUAL MACHINES

ID USER GROUPNAMESTAT UCPUUMEM HOST 
TIME



I have several questions:

1) Does the OpenNebula head node also have to have access to the
IPv6 network, or just the VM hosts?

2) Is there any way to specify, on a host-by-host basis,
the IPv6 address as well as the IPv4 address?  I.e. for
this host fgt1x1.fnal.gov the IPv4 address is 131.225.41.132 and the
IPv6 global address is 2001:400:2410:29::132.

It's puzzling because in the "leases" database table there
is only room for id, IPv4 address, and body;
the "IP6_LINK" field above is not stored anywhere in the database,
and yet it shows.

3) Do the contextualization RPMs as distributed with ONE 4.0 and greater
handle the process of setting up IPv6 addresses as part of contextualization?
If so, what fields should we put in the CONTEXT section
to set these values?

Any help is appreciated, as getting the IPv6 VMs up and running
is of some urgency.  Right now I am forced to configure
the IPv6 addresses by hand once the VM is up.

Steve Timm

--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Scientific Computing Division, Scientific Computing Services Quad.
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing


Re: [one-users] Problem boot VM template with opennebula

2014-03-26 Thread Javier Fontan
Let's try taking out a couple of parameters. KERNEL_CMD, swap disk,
device_model and VNC. I'm using the template you provided in a
previous email:

NAME   = Encourage7
MEMORY = 4096
CPU= 1
OS=[ KERNEL = "/boot/vmlinuz-2.6.32-279.el6.x86_64", INITRD =
"/boot/initramfs-2.6.32-279.el6.x86_64.img", root = "xvda1" ]
DISK =[ DRIVER="file:", IMAGE_ID="27", TARGET="xvda1" ]
NIC = [ NETWORK_ID = 3 ]

If this works, you can start adding the other parameters back one by
one; I'd start with the swap disk.

On Wed, Mar 26, 2014 at 3:55 PM, Cuquerella Sanchez, Javier
 wrote:
> Hi,
>
> The deployment file generated by OpenNebula is:
>
> [root@VTSS003 114]# cat deployment.0
> name = 'one-114'
> #O CPU_CREDITS = 256
> memory  = '4096'
> kernel = '/boot/vmlinuz-2.6.32-279.el6.x86_64'
> ramdisk = '/boot/initramfs-2.6.32-279.el6.x86_64.img'
> root = '/dev/xvda1'
> extra = 'ro xencons=tty console=tty1'
> disk = [
> 'file:/var/lib/one//datastores/0/114/disk.0,xvda1,w',
> 'tap:aio:/var/lib/one//datastores/0/114/disk.1,sdb,w',
> ]
> vif = [
> ' mac=02:00:5f:d3:e8:04,ip=95.211.232.4,bridge=br0',
> ]
> vfb = ['type=vnc,vnclisten=0.0.0.0,vncdisplay=2,keymap=es']
> device_model='/usr/lib64/xen/bin/qemu-dm'
>
>
> Thanks
>
> Regards
>
> ---
> Javier Cuquerella Sánchez
>
> javier.cuquere...@atos.net
> Atos Research & Innovation
> Systems Administrator
> Albarracin 25
> 28037-Madrid (SPAIN)
> Tfno: +34.91.214.8080
> www.atosresearch.eu
> es.atos.net
>
>
> -Original Message-
> From: Javier Fontan [mailto:jfon...@opennebula.org]
> Sent: Monday, March 24, 2014 6:43 PM
> To: Cuquerella Sanchez, Javier
> Cc: users@lists.opennebula.org
> Subject: Re: Problem boot VM template with opennebula
>
> Can you send us the deployment file generated by OpenNebula? Just to check if
> there is something too different from your working deployment file. It should be
> located on your frontend at
> /var/lib/one/vms//deployment.0
>
> On Mon, Mar 24, 2014 at 10:29 AM, Cuquerella Sanchez, Javier 
>  wrote:
>> Hi,
>>
>> have been able to look at something ?
>>
>> I do not understand, because if I start the same image with Xen alone, it works:
>>
>> xm create encourage.cfg
>>
>> name = " encourage"
>>
>> memory = 4096
>> maxmem = 4096
>> vcpus = 1
>>
>> # bootloader = "/ usr / bin / pygrub "
>> # bootloader = "/ usr/lib64/xen/bin/pygrub "
>> on_poweroff = "destroy"
>> on_reboot = " restart"
>> on_crash = " restart"
>>
>> disk = 
>> ['file:/home/jcuque/encourage/encourage2.img,xvda1,w','file:/home/jcuque/encourage/encourage2.swap,xvda2,w']
>> vif = [' bridge = br0 ']
>> kernel = " / boot/vmlinuz-2.6.32-279.el6.x86_64 "
>> ramdisk = "/ boot/initramfs-2.6.32-279.el6.x86_64.img "
>> root = " / dev/xvda1 ro"
>>
>> [root @ VTSS003 ~ ] # xm list | grep Encourage Encourage 135 4096 1-b
>>  20.8
>>
>> the machine is started well but also can login via console:
>>
>> [root @ VTSS003 ~ ] # xm console 135
>>
>> CentOS release 6.4 ( Final)
>> Kernel 2.6.32 - 279.el6.x86_64 on an x86_64
>>
>> encourage.es.atos.net login :
>>
>>
>>
>>
>> ---
>> Javier Cuquerella Sánchez
>>
>> javier.cuquere...@atos.net
>> Atos Research & Innovation
>> Systems Administrator
>> Albarracin 25
>> 28037-Madrid (SPAIN)
>> Tfno: +34.91.214.8080
>> www.atosresearch.eu
>> es.atos.net
>>
>>
>> -Original Message-
>> From: Cuquerella Sanchez, Javier
>> Sent: Friday, March 21, 2014 9:30 AM
>> To: 'Javier Fontan'
>> Cc: 'users@lists.opennebula.org'
>> Subject: RE: Problem boot VM template with opennebula
>>
>> Hi,
>>
>> It doesn't output anything; after I enter the command it just stays there:
>>
>> [root@VTSS003 ~]# xm list
>> NameID   Mem VCPUs  State   
>> Time(s)
>> Domain-0 0  1024 1 r-  
>> 19487.0
>> ciudad20201 67  2048 1 -b   
>> 2512.8
>> ciudad20202 68  2048 1 -b
>> 559.8
>> ciudad20203 69  2048 1 -b   
>> 8516.7
>> one-106133  4096 1 -b
>> 489.9
>> [root@VTSS003 ~]# xm console 133
>>
>> Maybe it would be useful to add this to the template: KERNEL_CMD = "ro xencons=tty 
>> console=tty1", no?
>>
>>
>>
>> Thanks.
>>
>> Regards
>>
>>
>> ---
>> Javier Cuquerella Sánchez
>>
>> javier.cuquere...@atos.net
>> Atos Research & Innovation
>> Systems Administrator
>> Albarracin 25
>> 28037-Madrid (SPAIN)
>> Tfno: +34.91.214.8080
>> www.atosresearch.eu
>> es.atos.net
>>
>>
>> -Original Message-
>> From: Javier Fontan [mailto:jfon...@opennebula.org]
>> Sent: Thursday, March 20, 2014 6:56 PM
>> To: Cuquerella Sanchez, Javier
>> Cc: users@lists.openne

Re: [one-users] Problem boot VM template with opennebula

2014-03-26 Thread Cuquerella Sanchez, Javier
Hi,

The deployment file generated by OpenNebula is:

[root@VTSS003 114]# cat deployment.0
name = 'one-114'
#O CPU_CREDITS = 256
memory  = '4096'
kernel = '/boot/vmlinuz-2.6.32-279.el6.x86_64'
ramdisk = '/boot/initramfs-2.6.32-279.el6.x86_64.img'
root = '/dev/xvda1'
extra = 'ro xencons=tty console=tty1'
disk = [
'file:/var/lib/one//datastores/0/114/disk.0,xvda1,w',
'tap:aio:/var/lib/one//datastores/0/114/disk.1,sdb,w',
]
vif = [
' mac=02:00:5f:d3:e8:04,ip=95.211.232.4,bridge=br0',
]
vfb = ['type=vnc,vnclisten=0.0.0.0,vncdisplay=2,keymap=es']
device_model='/usr/lib64/xen/bin/qemu-dm'


Thanks

Regards

---
Javier Cuquerella Sánchez

javier.cuquere...@atos.net
Atos Research & Innovation
Systems Administrator
Albarracin 25
28037-Madrid (SPAIN)
Tfno: +34.91.214.8080
www.atosresearch.eu
es.atos.net 
 

-Original Message-
From: Javier Fontan [mailto:jfon...@opennebula.org] 
Sent: Monday, March 24, 2014 6:43 PM
To: Cuquerella Sanchez, Javier
Cc: users@lists.opennebula.org
Subject: Re: Problem boot VM template with opennebula

Can you send us the deployment file generated by OpenNebula? Just to check if 
there is something too different from your working deployment file. It should be located 
on your frontend at
/var/lib/one/vms//deployment.0

On Mon, Mar 24, 2014 at 10:29 AM, Cuquerella Sanchez, Javier 
 wrote:
> Hi,
>
> have been able to look at something ?
>
> I do not understand, because if I start the same image with Xen alone, it works:
>
> xm create encourage.cfg
>
> name = " encourage"
>
> memory = 4096
> maxmem = 4096
> vcpus = 1
>
> # bootloader = "/ usr / bin / pygrub "
> # bootloader = "/ usr/lib64/xen/bin/pygrub "
> on_poweroff = "destroy"
> on_reboot = " restart"
> on_crash = " restart"
>
> disk = 
> ['file:/home/jcuque/encourage/encourage2.img,xvda1,w','file:/home/jcuque/encourage/encourage2.swap,xvda2,w']
> vif = [' bridge = br0 ']
> kernel = " / boot/vmlinuz-2.6.32-279.el6.x86_64 "
> ramdisk = "/ boot/initramfs-2.6.32-279.el6.x86_64.img "
> root = " / dev/xvda1 ro"
>
> [root @ VTSS003 ~ ] # xm list | grep Encourage Encourage 135 4096 1-b 
>  20.8
>
> the machine is started well but also can login via console:
>
> [root @ VTSS003 ~ ] # xm console 135
>
> CentOS release 6.4 ( Final)
> Kernel 2.6.32 - 279.el6.x86_64 on an x86_64
>
> encourage.es.atos.net login :
>
>
>
>
> ---
> Javier Cuquerella Sánchez
>
> javier.cuquere...@atos.net
> Atos Research & Innovation
> Systems Administrator
> Albarracin 25
> 28037-Madrid (SPAIN)
> Tfno: +34.91.214.8080
> www.atosresearch.eu
> es.atos.net
>
>
> -Original Message-
> From: Cuquerella Sanchez, Javier
> Sent: Friday, March 21, 2014 9:30 AM
> To: 'Javier Fontan'
> Cc: 'users@lists.opennebula.org'
> Subject: RE: Problem boot VM template with opennebula
>
> Hi,
>
> It doesn't output anything; after I enter the command it just stays there:
>
> [root@VTSS003 ~]# xm list
> NameID   Mem VCPUs  State   
> Time(s)
> Domain-0 0  1024 1 r-  19487.0
> ciudad20201 67  2048 1 -b   2512.8
> ciudad20202 68  2048 1 -b559.8
> ciudad20203 69  2048 1 -b   8516.7
> one-106133  4096 1 -b489.9
> [root@VTSS003 ~]# xm console 133
>
> Maybe it would be useful to add this to the template: KERNEL_CMD = "ro xencons=tty 
> console=tty1", no?
>
>
>
> Thanks.
>
> Regards
>
>
> ---
> Javier Cuquerella Sánchez
>
> javier.cuquere...@atos.net
> Atos Research & Innovation
> Systems Administrator
> Albarracin 25
> 28037-Madrid (SPAIN)
> Tfno: +34.91.214.8080
> www.atosresearch.eu
> es.atos.net
>
>
> -Original Message-
> From: Javier Fontan [mailto:jfon...@opennebula.org]
> Sent: Thursday, March 20, 2014 6:56 PM
> To: Cuquerella Sanchez, Javier
> Cc: users@lists.opennebula.org
> Subject: Re: Problem boot VM template with opennebula
>
> Are you getting an error with xm console or it just doesn't output anything?
>
> On Thu, Mar 20, 2014 at 5:50 PM, Cuquerella Sanchez, Javier 
>  wrote:
>> HI,
>>
>> The virtual machine runs on both xen and opennebula:
>>
>> [root@VTSS003 106]# xm list | grep one
>> one-106133  4096 1 -b 
>> 11.3
>>
>> [oneadmin@VTSS003 CUQUE]$ onevm list
>> ID USER GROUPNAMESTAT UCPUUMEM HOST 
>> TIME
>>106 oneadmin oneadmin one-106 runn0  4G ARICLOUD10d 
>> 01h10
>>
>> Template virtual machine:
>>
>> [oneadmin@VTSS003 CUQUE]$ cat VMencourage7.tmpl
>> NAME   = Encourage7
>> MEMORY = 4096
>> CPU= 1
>>

Re: [one-users] How to remove opennebula and its dependencies completely?

2014-03-26 Thread Stuart Longland
On 26/03/14 20:52, Meduri Jagadeesh wrote:
> I am trying to remove OpenNebula completely with all its dependencies by
> 
> apt-get remove 

Doesn't `apt-get purge ` remove the package *and* its config files?

That followed by an `apt-get autoremove` since the deps will have
nothing requiring them to stick around.
-- 
Stuart Longland
Contractor
 _ ___
\  /|_) |   T: +61 7 3535 9619
 \/ | \ | 38b Douglas StreetF: +61 7 3535 9699
   SYSTEMSMilton QLD 4064   http://www.vrt.com.au




Re: [one-users] ceph+flashcache datastore driver

2014-03-26 Thread Stuart Longland
Hi Shankhadeep,
On 26/03/14 12:35, Shankhadeep Shome wrote:
> Try bcache as a flash backend, I feel it's more flexible as a caching
> tier and it's well integrated into the kernel. The 3.10.x kernel series
> is now quite mature, so an epel6 long-term kernel would work great. We
> are using it in a Linux-based production SAN as a cache tier with PCIe
> SSDs, a very flexible subsystem and rock solid.

Cheers for the heads up, I will have a look.  What are you using to
implement the SAN and what sort of VMs are you using with it?

One thing I'm finding: when I tried using this, I had a stack of RBD
images created by OpenNebula that were in RBD v1 format.  I converted
them to v2 format by means of a simple script: basically renaming the
old images then doing a pipe from 'rbd export' to 'rbd import'.
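From memory, the script was essentially the following (the pool and image names are illustrative, so treat this as a sketch rather than the exact script):

```shell
#!/bin/sh
# Convert format-1 RBD images to format 2 by streaming each one
# through 'rbd export' into 'rbd import'.
# POOL and the image names below are placeholders.
POOL=one

for IMG in one-0 one-1 one-2; do
    # Move the old format-1 image aside
    rbd -p "$POOL" rename "$IMG" "$IMG.v1"

    # Stream it into a fresh format-2 image under the original name
    rbd -p "$POOL" export "$IMG.v1" - \
        | rbd -p "$POOL" import --image-format 2 - "$IMG"
done
```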

I had a few images in there, most originally for other hypervisors:
- Windows 2000 Pro image
- Windows XP Pro image (VMWare ESXi image)
- Windows 2012 Standard Evaluation image (CloudBase OpenStack image)
- Windows 2008 R2 Enterprise Evaluation (HyperV image)
- Windows 2012 R2 Data Centre Evaluation (HyperV image)

The latter two were downloaded from Microsoft's site and are actually
supposed to run on HyperV, however they ran fine with IDE storage under
KVM under the out-of-the-box Ceph support in OpenNebula 4.4.

I'm finding that after conversion of the RBDs to RBDv2 format, and
re-creating the image in OpenNebula to clear out the DISK_TYPE attribute
(DISK_TYPE=RBD kept creeping in), the image would deploy but then the OS
would crash.

Win2008r2 would crash after changing the Administrator password (hang
with black screen), Win2012r2 would crash with a CRITICAL_PROCESS_DIED
blue-screen-of-death when attempting to set the Administrator password.

The other images run fine.  The only two that were actually intended for
KVM are the Windows 2012 evaluation image produced by CloudBase (for
OpenStack), and the Windows 2000 image that I personally created.  The
others were all built on other hypervisors, then converted.

I'm not sure if it's something funny with the conversion of the RBDs or
whether it's an oddity with FlashCache+RBD that's causing this.  These
images were fine before I got FlashCache involved (if a little slow).
Either there's a bug in my script, in FlashCache, or I buggered up the
RBD conversion.

But I will have a look at bcache and see how it performs in comparison.
One thing we are looking for is the ability to throttle or control
cache write-backs for non-production work-loads ... that is, we wish to
prioritise Ceph traffic for production VMs during work hours.
FlashCache doesn't offer this feature at this time.

Do you know if bcache offers any such controls?
-- 
Stuart Longland
Contractor
 _ ___
\  /|_) |   T: +61 7 3535 9619
 \/ | \ | 38b Douglas StreetF: +61 7 3535 9699
   SYSTEMSMilton QLD 4064   http://www.vrt.com.au


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] use of image CACHE attribute in images

2014-03-26 Thread Ruben S. Montero
Hi Olivier,

You need to update the VM Template, not the IMAGE. You can find the CACHE
attribute in the Storage (DISK) section, under Advanced options.
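For example, the DISK section of the VM template would end up looking something like this (the image ID is just a placeholder):

```
DISK = [
  IMAGE_ID = "0",
  CACHE    = "writeback" ]
```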

Cheers

Ruben


On Wed, Mar 26, 2014 at 12:40 PM, Olivier Sallou wrote:

> Hi,
> I have opennebula 4.2 over KVM and was trying to use the CACHE attribute
> to set it to  'writeback' (according to
>
> http://archives.opennebula.org/documentation:archives:rel4.2:template#i_o_devices_section
> ).
> I am indeed seeing very slow disk I/O, even though I am using an ext3
> filesystem and virtio.
>
> I have tagged my image with a CACHE attribute, but I do not see it in the
> generated template. I do see DEV_PREFIX (used for virtio), but no CACHE.
>
> Any idea what is going wrong? There is no error in the logs.
>
> Thanks
>
> Olivier
>
> --
> Olivier Sallou
> IRISA / University of Rennes 1
> Campus de Beaulieu, 35000 RENNES - FRANCE
> Tel: 02.99.84.71.95
>
> gpg key id: 4096R/326D8438  (keyring.debian.org)
> Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438
>
> ___
> Users mailing list
> Users@lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>



-- 
-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] use of image CACHE attribute in images

2014-03-26 Thread Olivier Sallou
Hi,
I have opennebula 4.2 over KVM and was trying to use the CACHE attribute
to set it to  'writeback' (according to
http://archives.opennebula.org/documentation:archives:rel4.2:template#i_o_devices_section).
I am indeed seeing very slow disk I/O, even though I am using an ext3
filesystem and virtio.

I have tagged my image with a CACHE attribute, but I do not see it in the
generated template. I do see DEV_PREFIX (used for virtio), but no CACHE.

Any idea what is going wrong? There is no error in the logs.

Thanks

Olivier

-- 
Olivier Sallou
IRISA / University of Rennes 1
Campus de Beaulieu, 35000 RENNES - FRANCE
Tel: 02.99.84.71.95

gpg key id: 4096R/326D8438  (keyring.debian.org)
Key fingerprint = 5FB4 6F83 D3B9 5204 6335  D26D 78DC 68DB 326D 8438

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] How to remove opennebula and its dependencies completely?

2014-03-26 Thread Meduri Jagadeesh
I am trying to remove OpenNebula completely, with all its dependencies, using

apt-get remove 

but when I try to re-install, it picks up the previous files. I want all the
files off my PC so I can install fresh. Any solution?
  ___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] TechDay streaming info

2014-03-26 Thread Jaime Melis
Dear all,

you can watch the techday event [1] from 14:00 CET at this link:
http://one.bit.nl/

If you have any questions, you can drop by #opennebula-techday in freenode.

[1] http://opennebula.org/community/techdays/ede2014/

Regards,
Jaime


-- 
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Request for comments: Sunstone dashboard

2014-03-26 Thread ML mail
Hi Carlos,

I think it would be nice to see some more "white labeling" features/options
where one does not need to modify the Sunstone code directly. For example,
the "OpenNebula App Market" title on the app market page does not make sense
when someone runs their own AppMarket server. Just some small details...
functionality, ease of use, and speed are of course more important.

Regards
M.L.





On Thursday, March 20, 2014 11:11 AM, Carlos Martín Sánchez 
 wrote:
 
Dear community,


As you may know if you tried the beta release, we are giving Sunstone a
facelift. We are now reworking the refresh mechanism to improve performance
and alleviate the load Sunstone puts on OpenNebula.

To remove as many refresh calls as possible, we are going to simplify the
dashboard. Right now it shows aggregated historic graphs for VM network
speed and for host CPU and memory.

We will change it to show only the instantaneous number of VMs in each state,
and the instantaneous CPU and memory usage, plus the existing storage, user,
and network stats. Possibly, we will also add the current quota usage.

The information shown by the graphs we are going to remove will still be
accessible for each VM/host in the individual info view.


We'd appreciate your comments on what information you would like to see in the 
dashboard, for both admins and regular users.

Best regards,
Carlos
--

Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple

www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org