Re: [one-users] OpenNebula Hosted-VLAN support

2011-10-27 Thread Jaime Melis
Hello Patrice,

You make very valid points, and we're aware of those limitations. They are
actually reflected in the documentation:
http://opennebula.org/documentation:rel3.0:nm#considerations_limitations

The problem is that network management is currently based on the hook
subsystem, but we have realized there should be a specific driver for
networking. We have created a feature ticket to develop this functionality:
http://dev.opennebula.org/issues/863
The target release for this functionality is OpenNebula 3.2.

For the moment, in OpenNebula 3.0, the only solution to the aforementioned
problems is to create a static network configuration.
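
For reference, a static configuration just means pre-creating the VLAN
interfaces and bridges on every node at boot time, outside of OpenNebula.
A minimal sketch (the VLAN ID, physical interface and bridge names are
only placeholders):

  # run on every node, e.g. from an init script (VLAN 50 on eth0)
  modprobe 8021q
  vconfig add eth0 50        # creates the tagged interface eth0.50
  brctl addbr br50
  brctl addif br50 eth0.50
  ip link set eth0.50 up
  ip link set br50 up

With the bridges always present, migrations and resumes no longer depend
on the hook having run on that particular node.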

Regards,
Jaime

On Mon, Oct 17, 2011 at 5:59 PM, Patrice LACHANCE patlacha...@gmail.com wrote:

 Hello

 In previous versions there was a cluster feature, which was replaced in
 OpenNebula 3.0 by ozones.
 Shouldn't OpenNebula make sure that all the nodes in a zone are able to run
 a VM, and thus handle network creation on all nodes, before starting a new VM?

 Patrice

 2011/9/27 Alberto Picón Couselo alpic...@gmail.com

 Hello,

 We are testing hosted VLAN support in OpenNebula to implement network
 isolation. This feature seems to work correctly when a new instance is
 deployed: as stated in oned.conf, the hm-vlan hook is executed in the
 PROLOG state.

 However, there are other states where VLANs and bridges should be
 created (or their existence checked) before executing a given operation:

 * Migration/live migration of an instance to a hypervisor where the VLAN
 and bridge of the instance have never been created.
 VLAN and bridge existence should be checked, and they should be created if
 necessary, before the migration is executed. OpenNebula 3.0 RC1 performs
 the migration without doing these checks and fails to migrate/live-migrate
 the instance, leaving it in a FAILED state.

 * A failed instance cannot be redeployed to a hypervisor where the VLAN
 and bridge of the instance have never been created.
 VLAN and bridge existence should be checked, and they should be created if
 necessary, to redeploy the image to the selected hypervisor.

 * A stopped instance cannot be resumed if the VLAN and bridge of the
 instance do not exist.
 If we stop all instances of a given hypervisor and reboot the hypervisor
 for maintenance purposes, all bridges and VLANs will be deleted. Stopped
 instances won't resume, because the VLAN and bridge requirements are not
 satisfied, and they will enter a FAILED state (deleting their
 non-persistent disks; by the way, we have removed the deletion lines in
 the tm_delete script for the moment, :D).

 So, VLAN and bridge existence should be checked, and they should be
 created if necessary, to resume/migrate/live-migrate/recover-from-FAILED
 the instance on the selected hypervisor. As stated in oned.conf, the
 hm-vlan hook can be executed on:

 # Virtual Machine Hooks (VM_HOOK) defined by:
 #   name  : for the hook, useful to track the hook (OPTIONAL)
 #   on: when the hook should be executed,
 #   - CREATE, when the VM is created (onevm create)
 #   - PROLOG, when the VM is in the prolog state
 #   - RUNNING, after the VM is successfully booted
 #   - SHUTDOWN, after the VM is shutdown
 #   - STOP, after the VM is stopped (including VM image transfers)
 #   - DONE, after the VM is deleted or shutdown
 #   - FAILED, when the VM enters the failed state
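
 For reference, this is roughly how the hook is registered for PROLOG in
 our oned.conf (the command path and arguments are just our setup and may
 differ in other installations):

 VM_HOOK = [
     name      = "hm-vlan",
     on        = "PROLOG",
     command   = "hm-vlan",
     arguments = "$TEMPLATE",
     remote    = "yes" ]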

 But I'm not able to find a procedure to implement these functionalities in
 oned.conf for the states I mentioned.

 Please, can you give me any clues?

 Best Regards,
 Alberto Picón



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




 --
 Patrice LACHANCE
 Manager IT Consulting, Logica : http://www.logica.com

 Réseau Viaduc
 Consultez mon profil:
 http://www.viaduc.com/public/profil/?memberId=00226pj42r07h9f3
 Vous inscrire sur le réseau:
 http://www.viaduc.com/invitation/00226pj42r07h9f3

 LinkedIn Network:
 See my profile:  http://www.linkedin.com/in/plachance
 Join the network: http://www.linkedin.com

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula Hosted-VLAN support

2011-10-27 Thread Jaime Melis
I apologize: the message was meant for both Patrice and Alberto Picón, who
raised the initial questions.


-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] issue with LVM storage

2011-10-27 Thread Jaime Melis
Hello Prakhar,

Are you still experiencing this issue? It looks like a system setup problem.
Do you have any further information?
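
As a first check: the "/dev/dm-0 does not exist" error usually means the
logical volume (or its device-mapper node) is not present or active on the
node when Xen tries to attach it. Something like this, run from the
front-end, should confirm it (host, VG and LV names are taken from your
log below):

  ssh 192.168.145.105 'sudo /sbin/lvs one-data'
  ssh 192.168.145.105 'ls -l /dev/one-data/lv-one-211-0 /dev/mapper/'
  ssh 192.168.145.105 'sudo /sbin/lvchange -ay one-data/lv-one-211-0'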

Regards,
Jaime

On Tue, Jun 21, 2011 at 5:28 PM, Prakhar Srivastava prakhar@gmail.com wrote:

 Hi,
 When I try to create a VM on a cluster node using tm_lvm mode, I get the
 following error.

 Tue Jun 21 19:50:24 2011 [DiM][I]: New VM state is ACTIVE.
 Tue Jun 21 19:50:24 2011 [LCM][I]: New VM state is PROLOG.
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh:
 iCloud:/srv/cloud/one/var//images/56cee3989f620fc82e2af24ed4aeffcdc84b125b
 192.168.145.105:/srv/cloud/one/var//211/images/disk.0
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: DST:
 /srv/cloud/one/var//211/images/disk.0
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Creating directory
 /srv/cloud/one/var//211/images
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Executed /usr/bin/ssh
 192.168.145.105 mkdir -p /srv/cloud/one/var//211/images.
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Creating LV lv-one-211-0
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Executed /usr/bin/ssh
 192.168.145.105 /usr/bin/sudo /sbin/lvcreate -L1G -n lv-one-211-0 one-data.
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Executed /usr/bin/ssh
 192.168.145.105 ln -s /dev/one-data/lv-one-211-0
 /srv/cloud/one/var//211/images/disk.0.
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Dumping Image
 Tue Jun 21 20:03:46 2011 [TM][I]: tm_clone.sh: Executed eval cat
 /srv/cloud/one/var//images/56cee3989f620fc82e2af24ed4aeffcdc84b125b |
 /usr/bin/ssh 192.168.145.105 /usr/bin/sudo /bin/dd
 of=/dev/one-data/lv-one-211-0 bs=64k.
 Tue Jun 21 20:04:00 2011 [TM][I]: tm_mkswap.sh: Creating 1024Mb image in
 /srv/cloud/one/var//211/images/disk.1
 Tue Jun 21 20:04:00 2011 [TM][I]: tm_mkswap.sh: Executed /usr/bin/ssh
 192.168.145.105 mkdir -p /srv/cloud/one/var//211/images.
 Tue Jun 21 20:04:00 2011 [TM][I]: tm_mkswap.sh: Executed /usr/bin/ssh
 192.168.145.105 /bin/dd if=/dev/zero
 of=/srv/cloud/one/var//211/images/disk.1 bs=1 count=1 seek=1024M.
 Tue Jun 21 20:04:00 2011 [TM][I]: tm_mkswap.sh: Initializing swap space
 Tue Jun 21 20:04:00 2011 [TM][I]: tm_mkswap.sh: Executed /usr/bin/ssh
 192.168.145.105 /sbin/mkswap /srv/cloud/one/var//211/images/disk.1.
 Tue Jun 21 20:04:00 2011 [TM][I]: tm_mkswap.sh: Executed /usr/bin/ssh
 192.168.145.105 chmod a+w /srv/cloud/one/var//211/images/disk.1.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed mkdir -p
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/isofiles.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed cp -R
 /srv/cloud/one/var/211/context.sh
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/isofiles.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed cp -R
 /home/cloud/opennebula/images/init.sh
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/isofiles.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed cp -R
 /root/.ssh/id_dsa.pub
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/isofiles.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed /usr/bin/mkisofs
 -o /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/disk.2 -J -R
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/isofiles.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed /usr/bin/scp
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71/disk.2 192.168.145.105:
 /srv/cloud/one/var//211/images/disk.2.
 Tue Jun 21 20:04:06 2011 [TM][I]: tm_context.sh: Executed rm -rf
 /srv/cloud/one/var/c1799edd687e76f8d944d8aaa4f4ff71.
 Tue Jun 21 20:04:06 2011 [LCM][I]: New VM state is BOOT
 Tue Jun 21 20:04:06 2011 [VMM][I]: Generating deployment file:
 /srv/cloud/one/var/211/deployment.0
 Tue Jun 21 20:04:19 2011 [VMM][I]: Command execution fail: 'if [ -x
 /var/tmp/one/vmm/xen/deploy ]; then /var/tmp/one/vmm/xen/deploy
 /srv/cloud/one/var//211/images/deployment.0; else
exit 42; fi'
 Tue Jun 21 20:04:19 2011 [VMM][I]: STDERR follows.
 Tue Jun 21 20:04:19 2011 [VMM][I]: Error: Device 51712 (vbd) could not be
 connected. /dev/dm-0 does not exist.
 Tue Jun 21 20:04:19 2011 [VMM][I]: ExitCode: 1
 *Tue Jun 21 20:04:19 2011 [VMM][E]: Error deploying virtual machine:
 Error: Device 51712 (vbd) could not be connected. /dev/dm-0 does not exist.
 *
 Tue Jun 21 20:04:19 2011 [DiM][I]: New VM state is FAILED
 Tue Jun 21 20:04:31 2011 [TM][W]: Ignored: LOG - 211 tm_delete.sh: Deleting
 remote LVs

 Tue Jun 21 20:04:31 2011 [TM][W]: Ignored: LOG - 211 tm_delete.sh: Executed
 /usr/bin/ssh 192.168.145.105 /usr/bin/sudo /sbin/lvremove -f $(echo
 one-data/$(/usr/bin/sudo /sbin/lvs --noheadings one-data|awk '{print
 $1}'|grep lv-one-211)).

 Tue Jun 21 20:04:31 2011 [TM][W]: Ignored: LOG - 211 tm_delete.sh: Deleting
 /srv/cloud/one/var//211/images

 Tue Jun 21 20:04:31 2011 [TM][W]: Ignored: LOG - 211 tm_delete.sh: Executed
 /usr/bin/ssh 192.168.145.105 rm -rf /srv/cloud/one/var//211/images.

 Tue Jun 21 20:04:31 2011 [TM][W]: Ignored: TRANSFER SUCCESS 211 -

 When I check the cluster node 

Re: [one-users] ebtables does not give anything

2011-10-27 Thread Jaime Melis
Hello Fanttazio,

I'm sorry we couldn't find a solution for this problem. Have you by any
chance tried with OpenNebula 3.0?

Regards,
Jaime

On Mon, Jun 27, 2011 at 2:25 PM, fanttazio fantta...@gmail.com wrote:

 OK, here are the things that you wanted.

 oned.log:

 ##
 Mon Jun 27 13:02:32 2011 [ONE][I]: Init OpenNebula Log system
 Mon Jun 27 13:02:32 2011 [ONE][I]: Log Level: 3
 [0=ERROR,1=WARNING,2=INFO,3=DEBUG]
 Mon Jun 27 13:02:32 2011 [ONE][I]: 
 Mon Jun 27 13:02:32 2011 [ONE][I]:  OpenNebula Configuration File
 Mon Jun 27 13:02:32 2011 [ONE][I]: 
 Mon Jun 27 13:02:32 2011 [ONE][I]:
 --

 DB=BACKEND=mysql,DB_NAME=opennebula,PASSWD=oneadmin,PORT=0,SERVER=localhost,USER=oneadmin
 DEBUG_LEVEL=3
 DEFAULT_DEVICE_PREFIX=hd
 DEFAULT_IMAGE_TYPE=OS
 HM_MAD=EXECUTABLE=one_hm
 HOST_MONITORING_INTERVAL=600
 IMAGE_REPOSITORY_PATH=/srv/cloud/one/var//images
 IM_MAD=ARGUMENTS=-r 0 -t 15 kvm,EXECUTABLE=one_im_ssh,NAME=im_kvm
 MAC_PREFIX=02:00
 MANAGER_TIMER=15
 NETWORK_SIZE=254
 PORT=2633
 SCRIPTS_REMOTE_DIR=/var/tmp/one
 TM_MAD=ARGUMENTS=tm_nfs/tm_nfs.conf,EXECUTABLE=one_tm,NAME=tm_nfs
 VM_DIR=/srv/cloud/one/var/
 VM_HOOK=ARGUMENTS=$VMID,COMMAND=image.rb,NAME=image,ON=DONE

 VM_HOOK=ARGUMENTS=one-$VMID,COMMAND=/srv/cloud/one/share/hooks/ebtables-kvm,NAME=ebtables-start,ON=running,REMOTE=yes

 VM_HOOK=ARGUMENTS=,COMMAND=/srv/cloud/one/share/hooks/ebtables-flush,NAME=ebtables-flush,ON=done,REMOTE=yes
 VM_MAD=ARGUMENTS=-t 15 -r 0
 kvm,DEFAULT=vmm_ssh/vmm_ssh_kvm.conf,EXECUTABLE=one_vmm_ssh,NAME=vmm_kvm,TYPE=kvm
 VM_POLLING_INTERVAL=600
 VNC_BASE_PORT=5900
 --
 Mon Jun 27 13:02:32 2011 [ONE][I]: Bootstraping OpenNebula database.
 Mon Jun 27 13:02:32 2011 [VMM][I]: Starting Virtual Machine Manager...
 Mon Jun 27 13:02:32 2011 [LCM][I]: Starting Life-cycle Manager...
 Mon Jun 27 13:02:32 2011 [VMM][I]: Virtual Machine Manager started.
 Mon Jun 27 13:02:32 2011 [LCM][I]: Life-cycle Manager started.
 Mon Jun 27 13:02:32 2011 [InM][I]: Starting Information Manager...
 Mon Jun 27 13:02:32 2011 [InM][I]: Information Manager started.
 Mon Jun 27 13:02:32 2011 [TrM][I]: Starting Transfer Manager...
 Mon Jun 27 13:02:32 2011 [TrM][I]: Transfer Manager started.
 Mon Jun 27 13:02:32 2011 [DiM][I]: Starting Dispatch Manager...
 Mon Jun 27 13:02:32 2011 [DiM][I]: Dispatch Manager started.
 Mon Jun 27 13:02:32 2011 [ReM][I]: Starting Request Manager...
 Mon Jun 27 13:02:32 2011 [ReM][I]: Starting XML-RPC server, port 2633 ...
 Mon Jun 27 13:02:32 2011 [ReM][I]: Request Manager started.
 Mon Jun 27 13:02:32 2011 [HKM][I]: Starting Hook Manager...
 Mon Jun 27 13:02:32 2011 [HKM][I]: Hook Manager started.
 Mon Jun 27 13:02:34 2011 [VMM][I]: Loading Virtual Machine Manager drivers.
 Mon Jun 27 13:02:34 2011 [VMM][I]: Loading driver: vmm_kvm (KVM)
 Mon Jun 27 13:02:34 2011 [VMM][I]: Driver vmm_kvm loaded.
 Mon Jun 27 13:02:34 2011 [InM][I]: Loading Information Manager drivers.
 Mon Jun 27 13:02:34 2011 [InM][I]: Loading driver: im_kvm
 Mon Jun 27 13:02:34 2011 [InM][I]: Driver im_kvm loaded
 Mon Jun 27 13:02:34 2011 [TM][I]: Loading Transfer Manager drivers.
 Mon Jun 27 13:02:34 2011 [VMM][I]: Loading driver: tm_nfs
 Mon Jun 27 13:02:34 2011 [TM][I]: Driver tm_nfs loaded.
 Mon Jun 27 13:02:34 2011 [HKM][I]: Loading Hook Manager driver.
 Mon Jun 27 13:02:34 2011 [HKM][I]: Hook Manager loaded
 Mon Jun 27 13:02:49 2011 [ReM][D]: HostPoolInfo method invoked
 Mon Jun 27 13:02:49 2011 [ReM][D]: ClusterPoolInfo method invoked
 Mon Jun 27 13:03:02 2011 [ReM][D]: HostPoolInfo method invoked
 Mon Jun 27 13:03:02 2011 [VMM][I]: Monitoring VM 6.
 Mon Jun 27 13:03:02 2011 [ReM][D]: VirtualMachinePoolInfo method invoked
 Mon Jun 27 13:03:02 2011 [VMM][D]: Message received: POLL SUCCESS 6 STATE=a
 NETTX=300 USEDCPU=2.6 USEDMEMORY=262144 NETRX=2236446

 Mon Jun 27 13:03:17 2011 [InM][I]: Monitoring host node02 (2)
 Mon Jun 27 13:03:20 2011 [InM][D]: Host 2 successfully monitored.
 Mon Jun 27 13:03:32 2011 [ReM][D]: HostPoolInfo method invoked
 Mon Jun 27 13:03:32 2011 [ReM][D]: VirtualMachinePoolInfo method invoked
 Mon Jun 27 13:03:37 2011 [ReM][D]: VirtualMachinePoolInfo method invoked
 Mon Jun 27 13:04:01 2011 [ReM][D]: UserPoolInfo method invoked
 Mon Jun 27 13:04:02 2011 [ReM][D]: HostPoolInfo method invoked
 Mon Jun 27 13:04:02 2011 [ReM][D]: VirtualMachinePoolInfo method invoked
 Mon Jun 27 13:04:09 2011 [ReM][D]: VirtualNetworkPoolInfo method invoked
 Mon Jun 27 13:04:17 2011 [ReM][D]: ImagePoolInfo method invoked
 Mon Jun 27 13:04:32 2011 [ReM][D]: HostPoolInfo method invoked
 Mon Jun 27 13:04:32 2011 [ReM][D]: VirtualMachinePoolInfo method invoked
 Mon Jun 27 13:04:33 2011 [ReM][D]: VirtualMachineInfo method invoked
 Mon Jun 27 13:04:33 2011 [ReM][D]: VirtualMachineMigrate invoked
 Mon Jun 27 13:04:33 2011 [DiM][D]: Live-migrating VM 2
 Mon Jun 27 13:04:33 2011 [ReM][D]: 

Re: [one-users] [Beta 3.0] Problem creating VM: Failed to create domain

2011-10-27 Thread Jaime Melis
Hello Florin,

Are you still experiencing this issue? It seems like you were using tm_nfs,
but you should be using tm_ssh. A few extra tips for debugging:
- set ONE_MAD_DEBUG=1 in etc/defaultrc
- issue the commands from oned.log manually
Also, TM_* scripts are always run from the front-end host.
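
For example (paths assume a self-contained installation under
$ONE_LOCATION; adjust for a system-wide install):

  # $ONE_LOCATION/etc/defaultrc
  ONE_MAD_DEBUG=1

  # restart oned, reproduce the failure, then re-run the failing command
  # exactly as it appears in oned.log, e.g.:
  ssh zrhs 'ls -l /srv/cloud/one/var/11/images/disk.0'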

Regards,
Jaime


On Mon, Aug 8, 2011 at 12:13 PM, Florin Antonescu florinantone...@gmail.com wrote:

 I tend to believe that the disk.0 image file is not correctly copied to
 the destination host. How can I debug this process in order to see what is
 really happening? The logs are not very detailed regarding where these
 actions are executed on the remote host.
 Any help is greatly appreciated.

 Best regards

 On Thu, Aug 4, 2011 at 6:18 PM, Florin Antonescu 
 florinantone...@gmail.com wrote:

 Hi,

 I modified tm_delete.sh and added ls -l $SRC_PATH just before the rm
 command, and this was the output it generated:
  -rw-rw-rw- 1 oneadmin cloud 41943040 Jul 28 20:42 disk.0
 It seems that disk.0 is in place, but I think there is an authorization
 problem, related to: error: unable to set user and group to '117:126' on
 '/srv/cloud/one/var//11/images/disk.0'

 Any new ideas?
 Any new ideas?

 On Thu, Aug 4, 2011 at 5:54 PM, Vladimir Vuksan vli...@veus.hr wrote:


 I would explore this

  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: LOG I 11 error:
 unable
  to set user and group to '117:126' on
 '/srv/cloud/one/var//11/images/disk.0': No such file or directory

 Edit tm_delete.sh in $ONE_HOME/lib/tm_commands (?). Comment out the line
 where it deletes the images. Then, once things fail, see what's in the
 /srv/cloud/one/var/11/images/ directory.

 On Thu, 4 Aug 2011 17:40:27 +0200, Florin Antonescu
 florinantone...@gmail.com wrote:
  Hello,
 
  I am running into some problems when I try to create a VM with onevm
  command. These are the logged messages on oned.log and var/11/vm.log.
  Can anyone give me a suggestion on how to fix this?
 
  Best regards,
  Florian
 
  Thu Jul 28 19:55:52 2011 [DiM][D]: Deploying VM 11
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG D 11
 tm_clone.sh:
  zrhv:/srv/cloud/one/var/images/aca498c2dc295b68a9311246f0745526
  zrhs:/srv/cloud/one/var//11/images/disk.0
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG D 11
 tm_clone.sh:
  DST: /srv/cloud/one/var//11/images/disk.0
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11
 tm_clone.sh:
  Creating directory /srv/cloud/one/var//11/images
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11
 tm_clone.sh:
  Executed mkdir -p /srv/cloud/one/var//11/images.
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11
 tm_clone.sh:
  Executed chmod a+w /srv/cloud/one/var//11/images.
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11
 tm_clone.sh:
  Cloning /srv/cloud/one/var/images/aca498c2dc295b68a9311246f0745526
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11
 tm_clone.sh:
  Executed cp -r
 /srv/cloud/one/var/images/aca498c2dc295b68a9311246f0745526
  /srv/cloud/one/var//11/images/disk.0.
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11
 tm_clone.sh:
  Executed chmod a+rw /srv/cloud/one/var//11/images/disk.0.
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: LOG I 11 ExitCode:
 0
  Thu Jul 28 19:56:03 2011 [TM][D]: Message received: TRANSFER SUCCESS 11
 -
  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: LOG I 11 Command
  execution fail: 'if [ -x /var/tmp/one/vmm/kvm/deploy ]; then
  /var/tmp/one/vmm/kvm/deploy /srv/cloud/one/var//11/images/deployment.0
 zrhs
  11 zrhs; else  exit 42; fi'
  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: LOG I 11 error:
 Failed
  to create domain from /srv/cloud/one/var//11/images/deployment.0
  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: LOG I 11 error:
 unable
  to set user and group to '117:126' on
  '/srv/cloud/one/var//11/images/disk.0': No such file or directory
  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: LOG E 11 Could not
  create domain from /srv/cloud/one/var//11/images/deployment.0
  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: LOG I 11 ExitCode:
 255
  Thu Jul 28 19:56:05 2011 [VMM][D]: Message received: DEPLOY FAILURE 11
  Could
  not create domain from /srv/cloud/one/var//11/images/deployment.0
  Thu Jul 28 19:56:05 2011 [TM][D]: Message received: LOG I 11
 tm_delete.sh:
  Deleting /srv/cloud/one/var//11/images
  Thu Jul 28 19:56:05 2011 [TM][D]: Message received: LOG I 11
 tm_delete.sh:
  Executed rm -rf /srv/cloud/one/var//11/images.
  Thu Jul 28 19:56:05 2011 [TM][D]: Message received: LOG I 11 ExitCode:
 0
  Thu Jul 28 19:56:05 2011 [TM][D]: Message received: TRANSFER SUCCESS 11
 -




 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | 

Re: [one-users] Hi, I have some questions about Open vSwitch management in OpenNebula 3 (feature 476).

2011-10-27 Thread Jaime Melis
Hello,

When we released OpenNebula 3.0, we finalized the network configuration
documentation; you can see how to activate the Open vSwitch integration
here:
http://opennebula.org/documentation:rel3.0:openvswitch
When you originally wrote this email, this guide hadn't been written yet
(the network drivers themselves were still under development). I hope it
answers your questions.

It addresses the 'brctl' issue by indicating that the Open vSwitch
compatibility layer for Linux bridging must be installed:
http://openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=INSTALL.bridge;hb=HEAD
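
Without that compatibility layer, bridges must be created and managed with
the Open vSwitch tools rather than brctl. For example, the experiment you
describe below would become:

  ovs-vsctl add-br br0
  ovs-vsctl add-port br0 eth0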

Cheers,
Jaime

On Tue, Jul 26, 2011 at 2:58 AM, wen.y...@cs2c.com.cn wrote:

 Hello developers:

 I am interested in feature 476, Open vSwitch management in OpenNebula 3.
 I downloaded the source code via git. I have some questions:

 (1) How can I use the scripts in /src/vnm_mad? I can't find the C++ code
 that calls the vnm_mad scripts in OpenNebula 3. If I want to use these
 scripts to test the Open vSwitch management function, what should I do?

 (2) In the vnm_mad scripts, the 'brctl' command is used to control bridges.
 The 'brctl' command can't work normally if the bridge module is not loaded,
 but to start Open vSwitch it is necessary to remove the bridge module
 (rmmod bridge). Moreover, Open vSwitch has a database named 'ovsdb'; all
 information (such as bridges, ports, and so on) is stored in ovsdb. A
 bridge created with the 'brctl' command can't be used by Open vSwitch
 because its information isn't stored in ovsdb. So I think the 'brctl'
 command can't be used in the vnm_mad scripts.

 I executed the following experiment:

 brctl addbr br0
 rmmod bridge
 insmod openvswitch_mod.ko
 (start Open vSwitch)
 ovs-vsctl add-port br0 eth0

 An error message is printed saying that bridge br0 does not exist.

 Looking forward to your reply.
 Thanks & best regards!

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Bare-metal

2011-10-27 Thread Jaime Melis
Hi Alex,

Thanks for your feedback. Could you please create a ticket in
dev.opennebula.org requesting this feature? We will append the information
you have provided and take it into account when developing this feature.

Regards,
Jaime

On Tue, Oct 25, 2011 at 1:09 AM, Alexandre Emeriau cloudbe...@gmail.com wrote:

  Hello Jaime,

 As a basis, Proxmox and OpenNode are the best examples I can provide for
 easy and fast setup of a hypervisor.
 Proxmox is quite well established, but at this point OpenNode seems more
 promising, which is why I invite you to have a look at its features, its
 roadmap and its feature requests (
 http://opennode.activesys.org/forum/index.php?board=5.0).

 Assuming you already know both of them quite well, and since you're still
 asking for details about a bare-metal approach, I suppose you want a more
 advanced view of bare-metal compared to where most of these projects are
 now.

 Some interesting points:

 - Hot deployment of templated bare-metal: based on pre-established
 bare-metal configurations (or templates, if you prefer), automation from
 setup to deployment of as many bare-metal machines as wanted, ready to
 boot from the network. These configurations concern the bare-metal
 machines themselves but also the configured image appliances they are
 supposed to host for a given goal.

 - Central management of these deployed bare-metal machines and the
 mentioned configurations with Sunstone, with all Sunstone capabilities
 also available through the API.

 - App-store-like: taking advantage of third-party software to deploy new
 functionality either on the platform (the bare-metal itself) or through
 image appliances. The load-balancing OpenNode feature request (
 http://opennode.activesys.org/forum/index.php/topic,95.0.html) is an
 example of an image appliance serving the capabilities the bare-metal has
 to offer.

 I'm not sure if this is the kind of statement you were waiting for, or
 whether it answers your question, so ask for details if needed.

 Cheers,
 Alex



 On 24/10/2011 23:18, Jaime Melis wrote:

 Hi Alex,

  Actually, what we're asking is how you think this feature should be
 implemented. What features should OpenNebula offer to integrate with
 bare-metal?

  Besides technical details, such as specific distributions, could you
 describe what you'd like to see OpenNebula doing with bare-metal?

  Thanks again,
 Jaime

 On Fri, Oct 14, 2011 at 9:36 PM, Alexandre Emeriau cloudbe...@gmail.com wrote:

  Hi Jaime,

 Do you mean dependency upon the chosen OS? Or a particular project among
 others of mine?
 Just trying to follow and give the answer you're waiting for.

 Regards,
 Alex

 On 14/10/2011 09:40, Jaime Melis wrote:

 Hello,

  That's something we're certainly considering. Could you please
 elaborate a bit more on your use case?

  This also applies to anyone interested in this feature: how would you
 use it?

  cheers,
 Jaime

 On Wed, Oct 12, 2011 at 9:41 PM, Alexandre Emeriau 
 alexandre.emer...@ainsidonc.com wrote:

  Hi,

 Is a bare-metal OpenNebula planned? Something like Proxmox or
 OpenNode.

 Regards

 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




  --
 Jaime Melis
 Project Engineer
 OpenNebula - The Open Source Toolkit for Cloud Computing
 www.OpenNebula.org | jme...@opennebula.org




  --
 Jaime Melis
 Project Engineer
 OpenNebula - The Open Source Toolkit for Cloud Computing
 www.OpenNebula.org | jme...@opennebula.org




-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jme...@opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Tiny Local Business scenario for openNebula

2011-10-27 Thread Carlos Martín Sánchez
Hi,

OpenNebula can be used for the scenario you describe, even if you are not
going to take advantage of its on-demand cloud features.
It will provide a centralized view and management of your Images and VMs,
which will surely help you administer and monitor your virtualized
workstations.

OpenNebula can use the same computer as both the front-end and a host; the
only thing to keep in mind is that you need to use the shared storage
transfer manager [1] (the front-end and the hosts share the same storage).
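
For example, registering the front-end itself as the (only) host could
look like this (the KVM driver names are an assumption; use the ones
matching your hypervisor):

  onehost create localhost im_kvm vmm_kvm tm_shared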

Knowing that all the VMs will be Windows, you may want to configure remote
desktop access to the guest OS instead of VNC.

Regards.

[1] http://opennebula.org/documentation:rel3.0:sfs
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | cmar...@opennebula.org


On Wed, Oct 26, 2011 at 3:55 PM, Diego Jacobi jacobidi...@gmail.com wrote:

 Hi Ben.
 I appreciate your answer.

 I was expecting to be able to install KVM, sshd, and OpenNebula on the
 same hardware, as I would not need to provide many different
 technologies.
 I think I would have maybe 4 VMs running at the same time, but the
 virtual processors will be sleeping most of the time.

 Will this cause some software-related conflict? Or is your
 recommendation due to the load?

 It sounds like the method you describe involves the same procedures
 as installing OpenNebula.

 Kind regards,
 Diego



 2011/10/26 Ben Tullis b...@tiger-computing.co.uk:
  Hi Diego,
 
  I don't think that OpenNebula is likely to be the best tool for the job
  in this case, as it is more geared towards on-demand cloud computing.
 
  However, it does sound like you could really benefit from virtualization
  in the office. The way I would approach your situation is as follows.
 
  Make sure that the machine you're going to use as a server has hardware
  virtualization support built in.
  http://en.wikipedia.org/wiki/Intel_VT#Processor
 
  Use disks in pairs of equal sizes, then install Linux and configure
  software RAID1 so that the system will be able to withstand a failure in
  any disk.
  http://en.wikipedia.org/wiki/Mdadm
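 
  For example, a minimal mirror over one pair of disks (device names are
  placeholders):
 
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1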
 
  Install a hypervisor to enable you to run many concurrent virtual
  machines. You might like to consider KVM, Xen and Virtualbox.
  http://www.linux-kvm.org
  http://wiki.xensource.com/xenwiki/
  http://virtualbox.org
 
  You can then define virtual machines and install Windows onto them, in
  order to make them available to your colleagues. You can use normal
  Windows system management techniques (such as sysprep) to deploy
  pre-configured Windows system images, thereby saving you time. You could
  then use VNC to make these virtual machines available to your staff, in
  the manner that you suggest.
 
  I'm currently looking at building an OpenNebula cluster to support a
  small-business requirement, but I can't really see that there is any way
  of ensuring high-availability in any system with fewer than four
  physical servers in it. I think you'd be making things unnecessarily
  hard for yourself if you tried to do it all on one server.
 
  I hope that helps.
 
  Kind regards,
  Ben
 
  --
  |Ben Tullis
  |Tiger Computing Ltd
  |Linux for Business
  |
  |Tel: 033 0088 1511
  |Web: http://www.tiger-computing.co.uk
  |
  |Registered in England. Company number: 3389961
  |Registered address: Wyastone Business Park,
  |Wyastone Leys, Monmouth, NP25 3SR
 
 
 
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] [SECURITY FIX] X509 proxy permissions

2011-10-27 Thread Javier Fontan
Hello,

There is a security problem related to x509 proxy generation. The
generated proxies have permissions that let any other user read them;
that is, anyone can log in as any other user who has a valid x509 proxy.
To fix this issue you can download this file:

http://dev.opennebula.org/attachments/download/491/x509_permissions-3.0.patch

and follow these steps:

1.- Go to /usr/lib/one/ruby or $ONE_LOCATION/lib/ruby
2.- Apply the patch (files to be patched: ssh_auth.rb and x509_auth.rb):
  $ patch < x509_permissions-3.0.patch
3.- After that (no need to restart anything), please have your users
remove their login files and renew them.

Cheers

-- 
Javier Fontán Muiños
Project Engineer
OpenNebula Toolkit | opennebula.org
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Cluster node connection error

2011-10-27 Thread Carlos Martín Sánchez
Hi,

When you mount the NFS export on /srv/cloud, you hide the previous
/srv/cloud/one directory, including oneadmin's .ssh/authorized_keys, which
is why the passwordless SSH login stops working.

You have a step by step explanation in the installation guide:
http://opennebula.org/documentation:rel3.0:ignc#secure_shell_access_front-end
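
In other words, oneadmin's home (and its .ssh/authorized_keys) must be the
same files on both machines. A quick sanity check on the cluster node after
mounting (the front-end hostname is a placeholder):

  mount frontend:/srv/cloud /srv/cloud
  ls -ld /srv/cloud/one /srv/cloud/one/.ssh/authorized_keys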

Regards.

--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | cmar...@opennebula.org


On Fri, Oct 21, 2011 at 4:22 PM, bala suru balaq...@gmail.com wrote:


 Hi,
 I have set up the user oneadmin with uid 1001 and home path /srv/cloud/one
 on both the front-end and the cluster node.

 First I copied the id_dsa key to the cluster node, then checked the SSH
 passwordless login; it worked fine.

 Then I NFS-mounted the front-end's /srv/cloud on the cluster node.

 And then I tried the SSH passwordless login again, but got the following error:

 Received disconnect from 59.96.107.56: 2: Too many authentication failures
 for oneadmin


 If I unmount the shared NFS files, then SSH login works fine.

 How can I overcome this problem? Please help.

 I checked the uid and gid of both users (front-end and cluster node):
 uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud).
 Please help.

 Regards

 Bala



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Looking at moving to OpenNebula with a few questions

2011-10-27 Thread Donny Brooks

Hello all,

 I have been running Xen on our virtual servers here for a while 
now. Recently we have added some machines and it has come up that we 
need to redo our virtual servers to better utilize them. The main thing 
the higher-ups are requesting is a GUI to manage the environment. So I 
asked on the xen-users list and they pointed me to OpenNebula. So far I 
like what I see. Support for Xen, KVM, and VMWare is a plus for future 
expansion if needed.


I have a few questions though. Below are our current machine and SAN 
specs. I want to fully utilize all of them and have some level of 
failover, either manual or automatic, for when I need to bring a machine 
down for maintenance. Here are the specs:


xen-test: Testbed machine: Dell 2900; 8x 300GB in raid6, 8GB RAM, 1x 
Intel 5405 quad core 2.0GHz, 2x gigabit NICs
xen1: Production unit: Dell T710; 8x 2TB in raid6, 24GB RAM, 2x Intel 
5504 quad core 2.0GHz, 4x gigabit NICs
xen2: Production unit: Dell R610; 3x 160GB in raid5, 32GB RAM, 2x Intel 
5506 quad core 2.13GHz, 4x gigabit NICs

SAN: Dell MD3200i; 12x 2TB in raid6, dual controllers.

Currently all machines have their network cards bonded and VLANs passed 
over the trunked interface, as we have approximately 20 VLANs in use. 
This should be fairly simple to do with OpenNebula, correct?
I have a mix of local and network storage. Should OpenNebula be able to 
handle both local and SAN storage?
All RAID is hardware based. With the current setup, what is the best way 
to set it up for the best fault tolerance/speed/space?
What is the best OS to start with? We currently use CentOS 5.5 on all 3 
nodes but would prefer Fedora or similar. Debian would be doable also.
Would I be able to import the existing running virtual machines into 
OpenNebula?


I know that one major performance hit is the raid6 and it would be 
better to do a raid10. At the time xen1 and the SAN were purchased, they 
were all we had, so fault tolerance with maximum space took priority 
over speed. We may look at redoing this soon.


We are a small state government agency with little to no IT budget, so I 
have to work with what I have. Please keep that in mind before 
suggesting "why not buy such and such". Thanks in advance for the input.


--
Donny B.

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] OpenNebula 3.0 Documentation - Downloadable PDF File

2011-10-27 Thread Vickreman Chettiar
Hello.

  I have been reading through the OpenNebula 3.0 documentation, and was
wondering whether the documentation from
http://www.opennebula.org/documentation:rel3.0 is available anywhere as a
downloadable PDF file. It would be much easier to jump between pages in
Acrobat Reader than it is to click back and forth between web pages. Thanks.

  Regards to everyone,

Vickreman Chettiar http://sg.linkedin.com/in/vickremanchettiar
I'm quite particular about particular fields of quiet physics such as
particular physics.
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Using Requirements and Rank to create only one guest of type X per host

2011-10-27 Thread Karl Katzke
Howdy! We’re currently implementing OpenNebula as a test to see if it fits our 
infrastructure needs.

We have a specific requirement for some guests to have direct access to a raid0 
array on a host. We obviously only want one of these guests to be running on 
each host. (Yes, raid0. The work they’re doing needs fast disk access with 
bulletproof locking, but the results get shipped elsewhere when it’s done. It’s 
an ideal fit for a cloud since the ) I can’t tell from the requirements/rank 
manual page how to set a requirement that this disk space is not currently used 
by another VM.

Worse, some of our dom0/hosts have two of these resources, of a slightly 
different type, to allocate.

Note that the machines will all be named a particular way, so I could put in an 
exclusion that only selects a host named like “dd” where a “gdd” machine is not 
running, for example, but I’m not sure how to do the second part of that.

We also want those hosts to start first when a host or the cluster comes back 
up. If I’m reading the documentation correctly, that means that we would use 
the rank algorithm somehow — but by my read of the documentation, that was only 
for host selection, not boot priority.

Manually coupling a particular VM to a host would also work, but I can’t figure 
out how to do that yet within the scope of the “VM Template” scheme either.

Does anyone have any suggestions on how to implement this resource constraint 
in “the OpenNebula way”?

Last but not least, I’d like to have the SSH transfer manager, the LVM transfer 
manager, and the Shared transfer manager all enabled. Do I simply uncomment 
all three in /etc/one/oned.conf, or do I need to use some other syntax? The 
manual is not clear on this.

Thanks,
Karl Katzke
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] OpenNebula Hosted-VLAN support

2011-10-27 Thread Alberto Picón Couselo
As you mention, we have implemented a static VLAN configuration for the 
moment.


We are glad to know that these issues could be addressed in OpenNebula 
3.2.


Please count on our help for testing whatever you may need.

Thank you very much for your help and such a great product,
Best Regards,
Alberto Picón


Re: [one-users] Using Requirements and Rank to create only one guest of type X per host

2011-10-27 Thread Karl Katzke
Thanks! That helps with the VM pinning aspect greatly. I didn't realize that 
custom variables could be used in constraints. 

My other questions concerned a cold start of the cluster and the cluster 
drivers. 

There is very little information about what happens during a cold start. In our 
case, some dependencies need to start before other cloud/cluster services can 
fully start. Is there a way to script this startup process so that we can be 
sure that these depended-upon VMs are started first?

Second, what is the proper configuration in /etc/one/oned.conf to use the 
TM_LVM transfer manager driver simultaneously with the TM_SHARED transfer 
manager? The manual and configuration files aren't clear on this point: they 
emphasize that multiple transfer managers can be in use (unlike multiple image 
manager drivers), but don't give an example of that configuration. 

Thanks,
Karl Katzke

Sent from my shoePhone.

On Oct 27, 2011, at 17:25, Ruben S. Montero rsmont...@opennebula.org wrote:

 Hi
 
 A couple of hints...
 
 We also want those hosts to start first when a host or the cluster comes 
 back up. If I’m reading the documentation correctly, that means that we 
 would use the rank algorithm somehow — but by my read of the documentation, 
 that was only for host selection, not boot priority.
 
 RANK/REQUIREMENTS is for host selection, but I am not sure that I
 really understand this. If you want to first use those hosts that have
 been recently rebooted, you could simply add a probe with uptime and
 RANK them based on that.
 
 Manually coupling a particular VM to a host would also work, but I can’t
 figure out how to do that yet within the scope of the “VM Template” scheme
 either.
 
 You can add the names of the VMs you want to pin to a particular
 host in a special variable. Something like:
 
 onehost update host0
 
 # At the editor add
 RAID0_VMS = "db_vm web_vm"
 
 Then a template for a VM pinned to that host would look like:
 
 NAME=db_vm
 ...
 REQUIREMENTS = "RAID0_VMS = \"*$NAME*\""
 
 Instead of names you could pin them by MAC, for example: add
 RAID0_MACS = "11:22:33:44:55:66 11:22:33:44:55:66" to the host, and the
 requirement would look like REQUIREMENTS = "RAID0_MACS = \"*$NIC[MAC]*\"".
 
 
 Last but not least, I’d like to have the SSH transfer manager, the LVM
 transfer manager, and the Shared transfer manager all enabled.
 
 Yes, simply remove the comments for the TMs.
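 
 For example, enabling several TMs at once is just a matter of having
 several TM_MAD sections, one per driver (a sketch based on the sample
 entries in oned.conf; names and .conf paths may differ in your version):
 
 TM_MAD = [ name = "tm_shared", executable = "one_tm", arguments = "tm_shared/tm_shared.conf" ]
 TM_MAD = [ name = "tm_ssh", executable = "one_tm", arguments = "tm_ssh/tm_ssh.conf" ]
 TM_MAD = [ name = "tm_lvm", executable = "one_tm", arguments = "tm_lvm/tm_lvm.conf" ]
 
 Each host is then registered with the TM it should use (the last
 argument of onehost create).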
 
 Thanks,
 Karl Katzke
 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
 
 
 
 
 
 -- 
 Ruben S. Montero, PhD
 Project co-Lead and Chief Architect
 OpenNebula - The Open Source Toolkit for Cloud Computing
 www.OpenNebula.org | rsmont...@opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org