[one-users] ceph+flashcache datastore driver

2014-03-24 Thread Stuart Longland
Hi all,

Well, I'm starting to have a stab at creating a driver to do local
caching on hosts for RBD storage.

For those who want to follow progress, I've thrown a repository up here:
http://git.longlandclan.yi.org/?p=opennebula-ceph-flashcache.git;a=summary

At the time of writing, none of this has been tested.  I'm enquiring with
the Ceph people about how to convert my existing images to the RBD v2
format; then I'll give this a shot.
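
If it turns out to be the usual export/import dance, something along these
lines should do it (untested on my side; the pool and image names below are
just placeholders):

  # stream the old format-1 image into a new format-2 image
  rbd export one/myimage - | rbd import --image-format 2 - one/myimage-v2
  # once the new image checks out, swap the names over
  rbd rm one/myimage
  rbd mv one/myimage-v2 one/myimage

I'll confirm once I've actually tried it against my own images.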

Regards,
-- 
Stuart Longland
Systems Engineer
 _ ___
\  /|_) |   T: +61 7 3535 9619
 \/ | \ | 38b Douglas StreetF: +61 7 3535 9699
   SYSTEMSMilton QLD 4064   http://www.vrt.com.au


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] DataStorage full

2014-03-24 Thread Christophe Duez
Hello,
I set up an infrastructure with 3 servers and one controlling server
running OpenNebula.
They are all connected with NFS (/var/lib/one/).
Now my data storage is always full... How can I remove all the stuff and
start fresh?
I tried to delete the folder, but the system folder always remains at 100%
used...

Can anyone explain to me why this is?
-- 
Kind regards,
Duez Christophe
Student at University of Antwerp :
Master of Industrial Sciences: Electronics-ICT

E christophe.d...@student.uantwperen.be
L: http://www.linkedin.com/pub/duez-christophe/74/7/39
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Can sunstone 4.0 be deployed on apache2 + passenger

2014-03-24 Thread Daniel Molina
Then you'll have to limit the number of processes that are spawned by
Passenger (PassengerMaxProcesses), or upgrade OpenNebula to the latest version
and configure memcache sessions.
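
As a rough sketch, on top of the configuration from [1] below (directive names
vary a bit between Passenger versions and the DocumentRoot depends on where
Sunstone is installed, so please adapt):

  # global Passenger setting: keep a single application process so the
  # in-memory sessions are shared between requests
  PassengerMaxPoolSize 1

  <VirtualHost *:80>
      ServerName sunstone.yourdomain
      PassengerUser oneadmin
      DocumentRoot /usr/lib/one/sunstone/public
      <Directory /usr/lib/one/sunstone/public>
          Options -MultiViews
          AllowOverride all
      </Directory>
  </VirtualHost>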


On 22 March 2014 01:45, sam song samsong8...@gmail.com wrote:

  Hi Daniel,

 I am using Passenger with apache2, configured according to [1].

 [1]
 http://docs.opennebula.org/4.4/advanced_administration/scalability/suns_advance.html#running-sunstone-with-passenger-in-apache


 On 2014-03-21 17:33, Daniel Molina wrote:




 On 21 March 2014 03:39, sam song samsong8...@gmail.com wrote:

  Hi all,

 Thanks for your reply.

 I have configured one 4.0 to run in apache2 as a vhost, but there is
 another issue.
 After I log in to Sunstone successfully and the dashboard is shown, the
 browser is redirected to the login page again immediately.
 I think it may be a misconfiguration issue.
 I only changed the Sunstone listening IP; sessions are managed in memory.


  Are you using Apache as a proxy or are you using Passenger?




 Any advice?

 Sam


 On 2014-03-21 01:01, Daniel Molina wrote:


 On 20 March 2014 17:54, Daniel Molina dmol...@opennebula.org wrote:

 In OpenNebula you can run Sunstone on top of Apache, but memcache is not
 supported, so you have to force only one server instance in your Apache
 configuration; otherwise authentication won't work.


 This applies to one-4.0; in one-4.4 memcache is supported.


  --
  --
  Daniel Molina
 Project Engineer
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula



 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




  --
  --
  Daniel Molina
 Project Engineer
 OpenNebula - Flexible Enterprise Cloud Made Simple
 www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula





-- 
--
Daniel Molina
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Datastores takes half of space

2014-03-24 Thread Carlos Martín Sánchez
Hi,

On Fri, Mar 21, 2014 at 3:13 PM, Christophe Duez 
christophe.d...@student.uantwerpen.be wrote:

 Hello there,
 in my OpenNebula GUI I can see that my datastores take 28.5 GB.
 There are the 3 default datastores: files, default and system.
 I removed all VMs and ISOs and still no space has come free...
 How can I clean up this data, or is it normal that a clean OpenNebula
 takes 28 GB?


The datastore monitoring reports the used/free disk space of the whole
filesystem, not just the space taken by the datastore directory. You will see
the same 28 GB used with the df command.
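
You can check the difference yourself, for example (assuming the default
datastore path; adjust if yours differs):

  # usage of the whole filesystem backing the datastore (what is reported)
  df -h /var/lib/one/datastores/1
  # size of the datastore directory contents only
  du -sh /var/lib/one/datastores/1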

Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmar...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Problem boot VM template with opennebula

2014-03-24 Thread Cuquerella Sanchez, Javier
Hi,

Have you been able to look into anything?

I do not understand it, because if I start the same image with Xen only, it works:

xm create encourage.cfg

name = "encourage"

memory = 4096
maxmem = 4096
vcpus = 1

# bootloader = "/usr/bin/pygrub"
# bootloader = "/usr/lib64/xen/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"

disk = ['file:/home/jcuque/encourage/encourage2.img,xvda1,w','file:/home/jcuque/encourage/encourage2.swap,xvda2,w']
vif = ['bridge=br0']
kernel = "/boot/vmlinuz-2.6.32-279.el6.x86_64"
ramdisk = "/boot/initramfs-2.6.32-279.el6.x86_64.img"
root = "/dev/xvda1 ro"

[root@VTSS003 ~]# xm list | grep Encourage
Encourage                      135  4096     1     -b      20.8

the machine starts fine and I can also log in via the console:

[root@VTSS003 ~]# xm console 135

CentOS release 6.4 (Final)
Kernel 2.6.32-279.el6.x86_64 on an x86_64

encourage.es.atos.net login:




---
Javier Cuquerella Sánchez

javier.cuquere...@atos.net
Atos Research  Innovation
Systems Administrator
Albarracin 25
28037-Madrid (SPAIN)
Tfno: +34.91.214.8080
www.atosresearch.eu
es.atos.net 
 

-Original Message-
From: Cuquerella Sanchez, Javier 
Sent: Friday, March 21, 2014 9:30 AM
To: 'Javier Fontan'
Cc: 'users@lists.opennebula.org'
Subject: RE: Problem boot VM template with opennebula

Hi,

It doesn't output anything; after I run the command it just stays there:

[root@VTSS003 ~]# xm list
NameID   Mem VCPUs  State   Time(s)
Domain-0 0  1024 1 r-  19487.0
ciudad20201 67  2048 1 -b   2512.8
ciudad20202 68  2048 1 -b559.8
ciudad20203 69  2048 1 -b   8516.7
one-106133  4096 1 -b489.9
[root@VTSS003 ~]# xm console 133

maybe it would be useful to add to the template: KERNEL_CMD = ro xencons=tty
console=tty1, no?



Thanks.

Regards


---
Javier Cuquerella Sánchez

javier.cuquere...@atos.net
Atos Research  Innovation
Systems Administrator
Albarracin 25
28037-Madrid (SPAIN)
Tfno: +34.91.214.8080
www.atosresearch.eu
es.atos.net 
 

-Original Message-
From: Javier Fontan [mailto:jfon...@opennebula.org]
Sent: Thursday, March 20, 2014 6:56 PM
To: Cuquerella Sanchez, Javier
Cc: users@lists.opennebula.org
Subject: Re: Problem boot VM template with opennebula

Are you getting an error with xm console or it just doesn't output anything?

On Thu, Mar 20, 2014 at 5:50 PM, Cuquerella Sanchez, Javier 
javier.cuquere...@atos.net wrote:
 HI,

 The virtual machine runs on both xen and opennebula:

 [root@VTSS003 106]# xm list | grep one
 one-106133  4096 1 -b 11.3

 [oneadmin@VTSS003 CUQUE]$ onevm list
 ID USER GROUPNAMESTAT UCPUUMEM HOST 
 TIME
106 oneadmin oneadmin one-106 runn0  4G ARICLOUD10d 
 01h10

 Template virtual machine:

 [oneadmin@VTSS003 CUQUE]$ cat VMencourage7.tmpl
 NAME   = Encourage7
 MEMORY = 4096
 CPU    = 1
 OS     = [ KERNEL = /boot/vmlinuz-2.6.32-279.el6.x86_64,
            INITRD = /boot/initramfs-2.6.32-279.el6.x86_64.img,
            KERNEL_CMD = ro, root = xvda1 ]
 DISK   = [ DRIVER = file:, IMAGE_ID = 27, TARGET = xvda1 ]
 DISK   = [ TYPE = swap, SIZE = 1024, TARGET = sdb ]
 NIC    = [ NETWORK_ID = 3 ]
 GRAPHICS = [ KEYMAP = es, LISTEN = 0.0.0.0, PORT = 5902, TYPE = vnc ]
 RAW    = [ DATA = device_model='/usr/lib64/xen/bin/qemu-dm', TYPE = xen ]



 Xen log is correct:

 [root@VTSS003 CUQUE]# tail -f /var/log/xen/xend.log
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices vbd.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:144) Waiting for 51713.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:628) hotplugStatusCallback 
 /local/domain/0/backend/vbd/133/51713/hotplug-status.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:642) hotplugStatusCallback 1.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices irq.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices vfb.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices pci.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices vusb.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices vtpm.
 [2014-03-20 16:15:56 4335] INFO (XendDomain:1225) Domain one-106 (133) 
 unpaused.

 but I cannot connect to the machine, either via VNC or through Xen with the
 command: xm console IDmachine

 Opennebula log :

 [root@VTSS003 CUQUE]# tail -f /var/log/one/106.log
 Thu Mar 20 16:15:52 2014 [LCM][I]: New VM state is BOOT
 Thu Mar 20 16:15:52 2014 [VMM][I]: 

Re: [one-users] Hosts in error state.

2014-03-24 Thread Jaime Melis
Hi,

You are getting: sh: 1: /var/lib/one/remotes/datastore/fs/monitor:
Permission denied

Have you modified that file? What are the permissions / owner of that file?
It should belong to oneadmin; the output of ls -l on that file would help.
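
If the owner or the execute bit turns out to be wrong, something like this
(run as root on the frontend; adjust the path if your layout differs) should
fix it:

  ls -l /var/lib/one/remotes/datastore/fs/monitor
  chown oneadmin:oneadmin /var/lib/one/remotes/datastore/fs/monitor
  chmod u+x /var/lib/one/remotes/datastore/fs/monitor
  # if other files under remotes/ were touched, oneadmin can push them
  # to the hosts again with: onehost sync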

cheers,
Jaime


On Tue, Mar 18, 2014 at 8:00 PM, Meduri Jagadeesh
meduri.jagade...@msn.com wrote:

 Wed Mar 19 00:16:04 2014 [ReM][D]: Req:3136 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:04 2014 [ReM][D]: Req:6816 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:04 2014 [ReM][D]: Req:6816 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:05 2014 [ReM][D]: Req:2352 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:05 2014 [ReM][D]: Req:2352 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:05 2014 [InM][D]: Monitoring host localhost (32)
 Wed Mar 19 00:16:05 2014 [ReM][D]: Req:2352 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:05 2014 [InM][D]: Monitoring datastore default (1)
 Wed Mar 19 00:16:06 2014 [ImM][I]: Command execution fail:
 /var/lib/one/remotes/datastore/fs/monitor
 PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xPC9JRD48VUlEPjA8L1VJRD48R0lEPjA8L0dJRD48VU5BTUU+b25lYWRtaW48L1VOQU1FPjxHTkFNRT5vbmVhZG1pbjwvR05BTUU+PE5BTUU+ZGVmYXVsdDwvTkFNRT48UEVSTUlTU0lPTlM+PE9XTkVSX1U+MTwvT1dORVJfVT48T1dORVJfTT4xPC9PV05FUl9NPjxPV05FUl9BPjA8L09XTkVSX0E+PEdST1VQX1U+MTwvR1JPVVBfVT48R1JPVVBfTT4wPC9HUk9VUF9NPjxHUk9VUF9BPjA8L0dST1VQX0E+PE9USEVSX1U+MTwvT1RIRVJfVT48T1RIRVJfTT4wPC9PVEhFUl9NPjxPVEhFUl9BPjA8L09USEVSX0E+PC9QRVJNSVNTSU9OUz48RFNfTUFEPmZzPC9EU19NQUQ+PFRNX01BRD5zaGFyZWQ8L1RNX01BRD48QkFTRV9QQVRIPi92YXIvbGliL29uZS8vZGF0YXN0b3Jlcy8xPC9CQVNFX1BBVEg+PFRZUEU+MDwvVFlQRT48RElTS19UWVBFPjA8L0RJU0tfVFlQRT48Q0xVU1RFUl9JRD4tMTwvQ0xVU1RFUl9JRD48Q0xVU1RFUj48L0NMVVNURVI+PFRPVEFMX01CPjE3MTg1PC9UT1RBTF9NQj48RlJFRV9NQj4xMzI4ODwvRlJFRV9NQj48VVNFRF9NQj4yMDc8L1VTRURfTUI+PElNQUdFUz48SUQ+NTwvSUQ+PElEPjg8L0lEPjwvSU1BR0VTPjxURU1QTEFURT48Q0xPTkVfVEFSR0VUPjwhW0NEQVRBW1NZU1RFTV1dPjwvQ0xPTkVfVEFSR0VUPjxESVNLX1RZUEU+PCFbQ0RBVEFbRklMRV1dPjwvRElTS19UWVBFPjxEU19NQUQ+PCFbQ0RBVEFbZnNdXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW3NoYXJlZF1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+
 1
 Wed Mar 19 00:16:06 2014 [ImM][I]: sh: 1:
 /var/lib/one/remotes/datastore/fs/monitor: Permission denied
 Wed Mar 19 00:16:06 2014 [ImM][I]: ExitCode: 126
 Wed Mar 19 00:16:06 2014 [ImM][E]: Error monitoring datastore 1: -
 Wed Mar 19 00:16:07 2014 [ReM][D]: Req:2352 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:07 2014 [InM][D]: Monitoring datastore files (2)
 Wed Mar 19 00:16:07 2014 [ImM][I]: Command execution fail:
 /var/lib/one/remotes/datastore/fs/monitor
 PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4yPC9JRD48VUlEPjA8L1VJRD48R0lEPjA8L0dJRD48VU5BTUU+b25lYWRtaW48L1VOQU1FPjxHTkFNRT5vbmVhZG1pbjwvR05BTUU+PE5BTUU+ZmlsZXM8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4xPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjE8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5mczwvRFNfTUFEPjxUTV9NQUQ+c3NoPC9UTV9NQUQ+PEJBU0VfUEFUSD4vdmFyL2xpYi9vbmUvL2RhdGFzdG9yZXMvMjwvQkFTRV9QQVRIPjxUWVBFPjI8L1RZUEU+PERJU0tfVFlQRT4wPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xNzE4NTwvVE9UQUxfTUI+PEZSRUVfTUI+MTMyODg8L0ZSRUVfTUI+PFVTRURfTUI+MTwvVVNFRF9NQj48SU1BR0VTPjwvSU1BR0VTPjxURU1QTEFURT48Q0xPTkVfVEFSR0VUPjwhW0NEQVRBW1NZU1RFTV1dPjwvQ0xPTkVfVEFSR0VUPjxEU19NQUQ+PCFbQ0RBVEFbZnNdXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW1NZU1RFTV1dPjwvTE5fVEFSR0VUPjxUTV9NQUQ+PCFbQ0RBVEFbc3NoXV0+PC9UTV9NQUQ+PFRZUEU+PCFbQ0RBVEFbRklMRV9EU11dPjwvVFlQRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg==
 2
 Wed Mar 19 00:16:07 2014 [ImM][I]: sh: 1:
 /var/lib/one/remotes/datastore/fs/monitor: Permission denied
 Wed Mar 19 00:16:07 2014 [ImM][I]: ExitCode: 126
 Wed Mar 19 00:16:07 2014 [ImM][E]: Error monitoring datastore 2: -
 Wed Mar 19 00:16:08 2014 [ReM][D]: Req:1488 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:08 2014 [ReM][D]: Req:1488 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:08 2014 [ReM][D]: Req:1488 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:08 2014 [ReM][D]: Req:1488 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:09 2014 [ReM][D]: Req:7488 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:09 2014 [ReM][D]: Req:7488 UID:0 VirtualMachinePoolInfo
 result SUCCESS, VM_POOL/VM_POOL
 Wed Mar 19 00:16:09 2014 [ReM][D]: Req:7488 UID:0 VirtualMachinePoolInfo
 invoked, -2, -1, -1, -1
 Wed Mar 19 00:16:09 2014 [ReM][D]: 

[one-users] One 4.6 and ozones

2014-03-24 Thread Ondrej Hamada

Hi,
I was briefly looking at the roadmap for ONE 4.6 and discovered feature
#2691: Remove ozones code.
What kind of change is that? Just some refactoring, or are you removing
ozones completely? If so, what will replace it?

Thank you in advance

Ondra

This e-mail and any attachment is for authorised use by the intended 
recipient(s) only. It may contain proprietary material, confidential 
information and/or be subject to legal privilege. It should not be copied, 
disclosed to, retained or used by, any other party. If you are not an intended 
recipient then please promptly delete this e-mail and any attachment and all 
copies and inform the sender. Thank you for understanding.


___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] One 4.6 and ozones

2014-03-24 Thread Ruben S. Montero
Hi Ondrej,

oZones has been redesigned and integrated into the OpenNebula core; you can
find more information about the VDC part here [1]. We need to find some time to
make a screencast of the federation part; meanwhile, take a look at the new
documentation [2,3]. Note that we are still working on the docs.

Cheers

[1]
http://opennebula.org/partitioning-clouds-with-virtual-data-centers-vdcs/
[2]
http://docs.opennebula.org/4.6/advanced_administration/data_center_federation/federationconfig.html
[3]
http://docs.opennebula.org/4.6/advanced_administration/data_center_federation/federationmng.html


On Mon, Mar 24, 2014 at 4:51 PM, Ondrej Hamada ondrej.ham...@acision.com wrote:

 Hi,
 I was briefly looking at roadmap for one 4.6 and discovered the feature
 #2691: Remove ozones code.
 What kind of change is that? Just some refactoring or are you remove
 ozones completely? If yes, what will replace them?

 Thank you in advance

 Ondra

 This e-mail and any attachment is for authorised use by the intended
 recipient(s) only. It may contain proprietary material, confidential
 information and/or be subject to legal privilege. It should not be copied,
 disclosed to, retained or used by, any other party. If you are not an
 intended recipient then please promptly delete this e-mail and any
 attachment and all copies and inform the sender. Thank you for
 understanding.


 ___
 Users mailing list
 Users@lists.opennebula.org
 http://lists.opennebula.org/listinfo.cgi/users-opennebula.org




-- 
-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmont...@opennebula.org | @OpenNebula
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Problem boot VM template with opennebula

2014-03-24 Thread Javier Fontan
Can you send us the deployment file generated by OpenNebula? Just to
check if there is something too different from your working Xen
configuration file. It should be located on your frontend at
/var/lib/one/vms/<vmid>/deployment.0
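
For instance, for the VM in this thread (ID 106):

  cat /var/lib/one/vms/106/deployment.0
  # compare the kernel, ramdisk, root and extra lines with the working
  # encourage.cfg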

On Mon, Mar 24, 2014 at 10:29 AM, Cuquerella Sanchez, Javier
javier.cuquere...@atos.net wrote:
 Hi,

 Have you been able to look into anything?

 I do not understand it, because if I start the same image with Xen only, it works:

 xm create encourage.cfg

 name = "encourage"

 memory = 4096
 maxmem = 4096
 vcpus = 1

 # bootloader = "/usr/bin/pygrub"
 # bootloader = "/usr/lib64/xen/bin/pygrub"
 on_poweroff = "destroy"
 on_reboot = "restart"
 on_crash = "restart"

 disk = ['file:/home/jcuque/encourage/encourage2.img,xvda1,w','file:/home/jcuque/encourage/encourage2.swap,xvda2,w']
 vif = ['bridge=br0']
 kernel = "/boot/vmlinuz-2.6.32-279.el6.x86_64"
 ramdisk = "/boot/initramfs-2.6.32-279.el6.x86_64.img"
 root = "/dev/xvda1 ro"

 [root@VTSS003 ~]# xm list | grep Encourage
 Encourage                      135  4096     1     -b      20.8

 the machine starts fine and I can also log in via the console:

 [root@VTSS003 ~]# xm console 135

 CentOS release 6.4 (Final)
 Kernel 2.6.32-279.el6.x86_64 on an x86_64

 encourage.es.atos.net login:




 ---
 Javier Cuquerella Sánchez

 javier.cuquere...@atos.net
 Atos Research  Innovation
 Systems Administrator
 Albarracin 25
 28037-Madrid (SPAIN)
 Tfno: +34.91.214.8080
 www.atosresearch.eu
 es.atos.net


 -Original Message-
 From: Cuquerella Sanchez, Javier
 Sent: Friday, March 21, 2014 9:30 AM
 To: 'Javier Fontan'
 Cc: 'users@lists.opennebula.org'
 Subject: RE: Problem boot VM template with opennebula

 Hi,

 It doesn't output anything; after I run the command it just stays there:

 [root@VTSS003 ~]# xm list
 NameID   Mem VCPUs  State   
 Time(s)
 Domain-0 0  1024 1 r-  19487.0
 ciudad20201 67  2048 1 -b   2512.8
 ciudad20202 68  2048 1 -b559.8
 ciudad20203 69  2048 1 -b   8516.7
 one-106133  4096 1 -b489.9
 [root@VTSS003 ~]# xm console 133

 maybe it would be useful to add to the template: KERNEL_CMD = ro xencons=tty
 console=tty1, no?



 Thanks.

 Regards


 ---
 Javier Cuquerella Sánchez

 javier.cuquere...@atos.net
 Atos Research  Innovation
 Systems Administrator
 Albarracin 25
 28037-Madrid (SPAIN)
 Tfno: +34.91.214.8080
 www.atosresearch.eu
 es.atos.net


 -Original Message-
 From: Javier Fontan [mailto:jfon...@opennebula.org]
 Sent: Thursday, March 20, 2014 6:56 PM
 To: Cuquerella Sanchez, Javier
 Cc: users@lists.opennebula.org
 Subject: Re: Problem boot VM template with opennebula

 Are you getting an error with xm console or it just doesn't output anything?

 On Thu, Mar 20, 2014 at 5:50 PM, Cuquerella Sanchez, Javier 
 javier.cuquere...@atos.net wrote:
 HI,

 The virtual machine runs on both xen and opennebula:

 [root@VTSS003 106]# xm list | grep one
 one-106133  4096 1 -b 
 11.3

 [oneadmin@VTSS003 CUQUE]$ onevm list
 ID USER GROUPNAMESTAT UCPUUMEM HOST 
 TIME
106 oneadmin oneadmin one-106 runn0  4G ARICLOUD10d 
 01h10

 Template virtual machine:

 [oneadmin@VTSS003 CUQUE]$ cat VMencourage7.tmpl
 NAME   = Encourage7
 MEMORY = 4096
 CPU    = 1
 OS     = [ KERNEL = /boot/vmlinuz-2.6.32-279.el6.x86_64,
            INITRD = /boot/initramfs-2.6.32-279.el6.x86_64.img,
            KERNEL_CMD = ro, root = xvda1 ]
 DISK   = [ DRIVER = file:, IMAGE_ID = 27, TARGET = xvda1 ]
 DISK   = [ TYPE = swap, SIZE = 1024, TARGET = sdb ]
 NIC    = [ NETWORK_ID = 3 ]
 GRAPHICS = [ KEYMAP = es, LISTEN = 0.0.0.0, PORT = 5902, TYPE = vnc ]
 RAW    = [ DATA = device_model='/usr/lib64/xen/bin/qemu-dm', TYPE = xen ]



 Xen log is correct:

 [root@VTSS003 CUQUE]# tail -f /var/log/xen/xend.log
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices vbd.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:144) Waiting for 51713.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:628) hotplugStatusCallback 
 /local/domain/0/backend/vbd/133/51713/hotplug-status.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:642) hotplugStatusCallback 1.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices irq.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices vfb.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices pci.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for devices 
 vusb.
 [2014-03-20 16:15:56 4335] DEBUG (DevController:139) Waiting for 

Re: [one-users] Native GlusterFS support

2014-03-24 Thread Stefan Kooman
Quoting Javier Fontan (jfon...@opennebula.org):
 Now that the packages for OpenNebula 4.6 beta are ready is anyone
 willing to give a shot to the gluster integration? Any feedback is
 welcome.
 
 Post: http://opennebula.org/native-glusterfs-image-access-for-kvm-drivers/
 Packages: http://opennebula.org/software/
 Documentation: 
 http://docs.opennebula.org/4.6/administration/storage/gluster_ds.html

I'm trying GlusterFS on Ubuntu Saucy (frontend) and Ubuntu Trusty
(nodes). I've followed the documentation, but something is not working.
It might be that I'm missing something here.

DATASTORE 101 INFORMATION   
ID : 101 
NAME   : gluster_gv0_ds  
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: -   
TYPE   : IMAGE   
DS_MAD : shared  
TM_MAD : shared  
BASE PATH  : /var/lib/one//datastores/101
DISK_TYPE  : 

DATASTORE CAPACITY  
TOTAL: : 0M  
FREE:  : 0M  
USED:  : 0M  
LIMIT: : -   

PERMISSIONS 
OWNER  : um- 
GROUP  : u-- 
OTHER  : --- 

DATASTORE TEMPLATE  
BASE_PATH=/var/lib/one//datastores/
CLONE_TARGET=SYSTEM
DISK_TYPE=GLUSTER
DS_MAD=shared
GLUSTER_HOST=gluster1:24007
GLUSTER_VOLUME=gv0
LN_TARGET=NONE
TM_MAD=shared
TYPE=IMAGE_DS

oneadmin@oned1:~$ onedatastore list
  ID NAMESIZE AVAIL CLUSTER  IMAGES TYPE DS   TM  
   0 system215.9G 92%   - 0 sys  -shared
   1 default   215.9G 92%   - 1 img  fs   shared
   2 files  17.6G 38%   - 0 fil  fs   ssh
 100 ceph_one_ds   1T 91%   - 2 img  ceph ceph
 101 gluster_gv0_d 0M - - 0 img  shared   shared

It reports 0M and "-" for AVAIL. The system and default datastores are on a
mounted GlusterFS volume (so GlusterFS does work on the frontend/nodes).

If I try to import an image from marketplace I get the following error:

[ImageAllocate] Cannot determine Image SIZE. Datastore driver 'shared'
not available.
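
Looking at the onedatastore list output above, the fs-backed datastores show
"fs" in the DS column while this one shows "shared", which seems to match the
error message. If I read the gluster_ds documentation right, the datastore
should probably have been registered with something like this instead (just my
guess, reusing the values from my template above):

  NAME           = gluster_gv0_ds
  DS_MAD         = fs
  TM_MAD         = shared
  DISK_TYPE      = GLUSTER
  GLUSTER_HOST   = gluster1:24007
  GLUSTER_VOLUME = gv0

Can anyone confirm whether that is the intended setup?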

Gr. Stefan


-- 
| BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org